WorldWideScience

Sample records for interactome database service

  1. Construction and application of a protein and genetic interaction network (yeast interactome).

    Science.gov (United States)

    Stuart, Gregory R; Copeland, William C; Strand, Micheline K

    2009-04-01

    Cytoscape is a bioinformatic data analysis and visualization platform that is well-suited to the analysis of gene expression data. To facilitate the analysis of yeast microarray data using Cytoscape, we constructed an interaction network (interactome) using the curated interaction data available from the Saccharomyces Genome Database (www.yeastgenome.org) and the database of yeast transcription factors at YEASTRACT (www.yeastract.com). These data were formatted and imported into Cytoscape using semi-automated methods, including Linux-based scripts, that simplified the process while minimizing the introduction of processing errors. The methods described for the construction of this yeast interactome are generally applicable to the construction of any interactome. Using Cytoscape, we illustrate the use of this interactome through the analysis of expression data from a recent yeast diauxic shift experiment. We also report and briefly describe the complex associations among transcription factors that result in the regulation of thousands of genes through coordinated changes in expression of dozens of transcription factors. These cells are thus able to sensitively regulate cellular metabolism in response to changes in genetic or environmental conditions through relatively small changes in the expression of large numbers of genes, affecting the entire yeast metabolome.
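
    A minimal sketch of the formatting step described above, assuming a hypothetical tab-delimited interaction export with columns gene_a, gene_b and interaction_type (real SGD/YEASTRACT dumps use different layouts); it writes a Cytoscape-readable SIF file. This illustrates the general approach, not the authors' actual scripts.

    ```python
    # Hypothetical input columns: gene_a, gene_b, interaction_type.
    import csv

    def tsv_to_sif(in_path, out_path):
        """Convert a simple interaction table into Cytoscape SIF format."""
        seen = set()
        with open(in_path, newline="") as fin, open(out_path, "w") as fout:
            for row in csv.DictReader(fin, delimiter="\t"):
                a, b, kind = row["gene_a"], row["gene_b"], row["interaction_type"]
                key = (min(a, b), max(a, b), kind)  # drop duplicate undirected edges
                if key not in seen:
                    seen.add(key)
                    fout.write(f"{a}\t{kind}\t{b}\n")

    # tsv_to_sif("sgd_interactions.tsv", "yeast_interactome.sif")
    ```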

  2. Interactome of the hepatitis C virus: Literature mining with ANDSystem.

    Science.gov (United States)

    Saik, Olga V; Ivanisenko, Timofey V; Demenkov, Pavel S; Ivanisenko, Vladimir A

    2016-06-15

    A study of the molecular genetic mechanisms of host-pathogen interactions is of paramount importance in developing drugs against viral diseases. Currently, the literature contains a huge amount of information describing interactions between HCV and human proteins. In addition, there are many factual databases that contain experimentally verified data on HCV-host interactions. The sources of such data are original experimental data along with data manually extracted from the literature. However, the manual analysis of scientific publications is time consuming and, because of this, databases created with such an approach often do not have complete information. One of the most promising methods to keep information up to date and complete is text mining. Here, using a method previously developed by the authors based on ANDSystem, an automated extraction of information on the interactions between HCV and human proteins was conducted. As a data source for the text mining approach, PubMed abstracts and full-text articles were used. Additionally, external factual databases were analyzed. On the basis of this analysis, a special version of ANDSystem, extended with the HCV interactome, was created. The HCV interactome contains information about the interactions between 969 human and 11 HCV proteins. Among the 969 proteins, 153 'new' proteins were found that were not previously referred to in any external databases of protein-protein interactions for HCV-host interactions. Thus, the extended ANDSystem provides a more comprehensive account of HCV-host interactions than other existing databases. Interestingly, HCV proteins preferentially interact with human proteins that are already involved in a large number of protein-protein interactions, as well as with those associated with many diseases. Among the human proteins of the HCV interactome, there were a large number of proteins regulated by microRNAs. It turned out that the results obtained for protein

  3. Cell Interactomics and Carcinogenetic Mechanisms

    CERN Document Server

    Baianu, IC; Report to the Institute of Genomics

    2004-01-01

    Single cell interactomics in simpler organisms, as well as somatic cell interactomics in multicellular organisms, involve biomolecular interactions in complex signalling pathways that were recently represented in modular terms by quantum automata with ‘reversible behavior’ representing normal cell cycling and division. Other implications of such quantum automata, modular modeling of signaling pathways and cell differentiation during development are in the fields of neural plasticity and brain development leading to quantum-weave dynamic patterns and specific molecular processes underlying extensive memory, learning, anticipation mechanisms and the emergence of human consciousness during the early brain development in children. Cell interactomics is here represented for the first time as a mixture of ‘classical’ states that determine molecular dynamics subject to Boltzmann statistics and ‘steady-state’, metabolic (multi-stable) manifolds, together with ‘configuration’ spaces of metastable quant...

  4. Information flow analysis of interactome networks.

    Directory of Open Access Journals (Sweden)

    Patrycja Vasilyev Missiuro

    2009-04-01

    Full Text Available Recent studies of cellular networks have revealed modular organizations of genes and proteins. For example, in interactome networks, a module refers to a group of interacting proteins that form molecular complexes and/or biochemical pathways and together mediate a biological process. However, it is still poorly understood how biological information is transmitted between different modules. We have developed information flow analysis, a new computational approach that identifies proteins central to the transmission of biological information throughout the network. In the information flow analysis, we represent an interactome network as an electrical circuit, where interactions are modeled as resistors and proteins as interconnecting junctions. Construing the propagation of biological signals as flow of electrical current, our method calculates an information flow score for every protein. Unlike previous metrics of network centrality such as degree or betweenness that only consider topological features, our approach incorporates confidence scores of protein-protein interactions and automatically considers all possible paths in a network when evaluating the importance of each protein. We apply our method to the interactome networks of Saccharomyces cerevisiae and Caenorhabditis elegans. We find that the likelihood of observing lethality and pleiotropy when a protein is eliminated is positively correlated with the protein's information flow score. Even among proteins of low degree or low betweenness, high information scores serve as a strong predictor of loss-of-function lethality or pleiotropy. The correlation between information flow scores and phenotypes supports our hypothesis that the proteins of high information flow reside in central positions in interactome networks. We also show that the ranks of information flow scores are more consistent than that of betweenness when a large amount of noisy data is added to an interactome. Finally, we
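
    The circuit analogy above can be illustrated with a current-flow centrality on a small weighted graph. The protein names and confidence weights below are invented, and networkx's current_flow_betweenness_centrality is used as a stand-in for the authors' information flow score.

    ```python
    # Sketch of a current-flow ("electrical circuit") centrality on a weighted
    # toy interactome; edge weights stand in for interaction confidence scores.
    import networkx as nx

    G = nx.Graph()
    G.add_weighted_edges_from([
        ("YFG1", "YFG2", 0.9),
        ("YFG2", "YFG3", 0.4),
        ("YFG1", "YFG3", 0.7),
        ("YFG3", "YFG4", 0.8),
    ])

    # networkx solves the circuit via the weighted graph Laplacian; higher scores
    # mark proteins through which more "current" (information) passes.
    scores = nx.current_flow_betweenness_centrality(G, weight="weight")
    for protein, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(protein, round(score, 3))
    ```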

  5. The Topology of the Growing Human Interactome Data

    Directory of Open Access Journals (Sweden)

    Janjić Vuk

    2014-06-01

    Full Text Available We have long moved past the one-gene-one-function concept originally proposed by Beadle and Tatum back in 1941; but the full understanding of genotype-phenotype relations still largely relies on the analysis of static, snapshot-like, interaction data sets. Here, we look at what global patterns can be uncovered if we simply trace back the human interactome network over the last decade of protein-protein interaction (PPI) screening. We take a purely topological approach and find that as the human interactome is getting denser, it is not only gaining in structure (in terms of now being better fit by structured network models than before), but also there are patterns in the way in which it is growing: (a) newly added proteins tend to get linked to existing proteins in the interactome that are not known to interact; and (b) new proteins tend to link to already well connected proteins. Moreover, the alignment between human and yeast interactomes spanning over 40% of yeast’s proteins - that are involved in regulation of transcription, RNA splicing and other cell-cycle-related processes - suggests the existence of a part of the interactome which remains topologically and functionally unaffected through evolution. Furthermore, we find a small sub-network, specific to the “core” of the human interactome and involved in regulation of transcription and cancer development, whose wiring has not changed within the human interactome over the last 10 years of interactome data acquisition. Finally, we introduce a generalisation of the clustering coefficient of a network as a new measure called the cycle coefficient, and use it to show that PPI networks of human and model organisms are wired in a tight way which forbids the occurrence of large cycles.

  6. "Fuzziness" in the celular interactome: a historical perspective.

    Science.gov (United States)

    Welch, G Rickey

    2012-01-01

    Some historical background is given for appreciating the impact of the empirical construct known as the cellular protein-protein interactome, which is a seemingly de novo entity that has arisen of late within the context of postgenomic systems biology. The approach here builds on a generalized principle of "fuzziness" in protein behavior, proposed by Tompa and Fuxreiter.(1) Recent controversies in the analysis and interpretation of the interactome studies are rationalized historically under the auspices of this concept. There is an extensive literature on protein-protein interactions, dating to the mid-1900s, which may help clarify the "fuzziness" in the interactome picture and, also, provide a basis for understanding the physiological importance of protein-protein interactions in vivo.

  7. A critical and Integrated View of the Yeast Interactome

    Directory of Open Access Journals (Sweden)

    Stephen G. Oliver

    2006-04-01

    Full Text Available Global studies of protein–protein interactions are crucial to both elucidating gene function and producing an integrated view of the workings of living cells. High-throughput studies of the yeast interactome have been performed using both genetic and biochemical screens. Despite their size, the overlap between these experimental datasets is very limited. This could be due to each approach sampling only a small fraction of the total interactome. Alternatively, a large proportion of the data from these screens may represent false-positive interactions. We have used the Genome Information Management System (GIMS to integrate interactome datasets with transcriptome and protein annotation data and have found significant evidence that the proportion of false-positive results is high. Not all high-throughput datasets are similarly contaminated, and the tandem affinity purification (TAP approach appears to yield a high proportion of reliable interactions for which corroborating evidence is available. From our integrative analyses, we have generated a set of verified interactome data for yeast.

  8. Grouping annotations on the subcellular layered interactome demonstrates enhanced autophagy activity in a recurrent experimental autoimmune uveitis T cell line.

    Directory of Open Access Journals (Sweden)

    Xiuzhi Jia

    Full Text Available Human uveitis is a type of T cell-mediated autoimmune disease that often shows relapse-remitting courses affecting multiple biological processes. As a cytoplasmic process, autophagy has been seen as an adaptive response to cell death and survival, yet the link between autophagy and T cell-mediated autoimmunity is not certain. In this study, based on the differentially expressed genes (GSE19652 between the recurrent versus monophasic T cell lines, whose adoptive transfer to susceptible animals may result in respective recurrent or monophasic uveitis, we proposed grouping annotations on a subcellular layered interactome framework to analyze the specific bioprocesses that are linked to the recurrence of T cell autoimmunity. That is, the subcellular layered interactome was established by the Cytoscape and Cerebral plugin based on differential expression, global interactome, and subcellular localization information. Then, the layered interactomes were grouping annotated by the ClueGO plugin based on Gene Ontology and Kyoto Encyclopedia of Genes and Genomes databases. The analysis showed that significant bioprocesses with autophagy were orchestrated in the cytoplasmic layered interactome and that mTOR may have a regulatory role in it. Furthermore, by setting up recurrent and monophasic uveitis in Lewis rats, we confirmed by transmission electron microscopy that, in comparison to the monophasic disease, recurrent uveitis in vivo showed significantly increased autophagy activity and extended lymphocyte infiltration to the affected retina. In summary, our framework methodology is a useful tool to disclose specific bioprocesses and molecular targets that can be attributed to a certain disease. Our results indicated that targeted inhibition of autophagy pathways may perturb the recurrence of uveitis.

  9. Quantum Interactomics and Cancer Molecular Mechanisms: I. Report Outline

    CERN Document Server

    Baianu, I C

    2004-01-01

    Single cell interactomics in simpler organisms, as well as somatic cell interactomics in multicellular organisms, involve biomolecular interactions in complex signalling pathways that were recently represented in modular terms by quantum automata with ‘reversible behavior’ representing normal cell cycling and division. Other implications of such quantum automata, modular modeling of signaling pathways and cell differentiation during development are in the fields of neural plasticity and brain development leading to quantum-weave dynamic patterns and specific molecular processes underlying extensive memory, learning, anticipation mechanisms and the emergence of human consciousness during the early brain development in children. Cell interactomics is here represented for the first time as a mixture of ‘classical’ states that determine molecular dynamics subject to Boltzmann statistics and ‘steady-state’, metabolic (multi-stable) manifolds, together with ‘configuration’ spaces of metastable quant...

  10. Inferring modules from human protein interactome classes

    Directory of Open Access Journals (Sweden)

    Chaurasia Gautam

    2010-07-01

    Full Text Available Background: The integration of protein-protein interaction networks derived from high-throughput screening approaches and complementary sources is a key topic in systems biology. Although integration of protein interaction data is conventionally performed, the effects of this procedure on the results of network analyses have not been examined yet. In particular, in order to optimize the fusion of heterogeneous interaction datasets, it is crucial to consider not only their degree of coverage and accuracy, but also their mutual dependencies and additional salient features. Results: We examined this issue based on the analysis of modules detected by network clustering methods applied to both integrated and individual (disaggregated) data sources, which we call interactome classes. Due to class diversity, we deal with variable dependencies of data features arising from structural specificities and biases, but also from possible overlaps. Since highly connected regions of the human interactome may point to potential protein complexes, we focused on the concept of modularity, and elucidated the detection power of module extraction algorithms by independent validations based on GO, MIPS and KEGG. From the combination of protein interactions with gene expression data, a confidence scoring scheme was proposed before proceeding, via GO, to a further classification into permanent and transient modules. Conclusions: Disaggregated interactomes are shown to be informative for inferring modularity, thus contributing to an effective integrative analysis. Validation of the extracted modules by multiple annotations allows for the assessment of confidence measures assigned to the modules in a protein pathway context. Notably, the proposed multilayer confidence scheme can be used for network calibration by enabling a transition from unweighted to weighted interactomes based on biological evidence.
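
    As a rough illustration of the module-extraction step described above (not the authors' pipeline), the sketch below clusters a tiny, invented confidence-weighted PPI graph with networkx's greedy modularity algorithm; in the study, the resulting module candidates would then be validated against GO, MIPS and KEGG.

    ```python
    # Module extraction by network clustering on an invented weighted PPI graph.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    G.add_weighted_edges_from([
        ("TP53", "MDM2", 0.95), ("TP53", "EP300", 0.8), ("MDM2", "MDM4", 0.9),
        ("ACTB", "ACTN1", 0.7), ("ACTN1", "VCL", 0.6), ("VCL", "TLN1", 0.8),
    ])

    # Modules = densely connected groups of proteins (candidate complexes).
    modules = greedy_modularity_communities(G, weight="weight")
    for i, module in enumerate(modules, 1):
        print(f"module {i}: {sorted(module)}")
    ```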

  11. Federated Database Services for Wind Tunnel Experiment Workflows

    Directory of Open Access Journals (Sweden)

    A. Paventhan

    2006-01-01

    Full Text Available Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support effective data management, near-realtime data movement and custom data processing. Many existing solutions exploit the database as a passive metadata catalog. In this paper, we present an approach that makes use of federation of databases to host data-centric wind tunnel application workflows. The user is able to compose customized application workflows based on database services. We provide a reference implementation that leverages typical business tools and technologies: Microsoft SQL Server for database services and Windows Workflow Foundation for workflow services. The application data and user's code are both hosted in federated databases. With the growing interest in XML Web Services in scientific Grids, and with databases beginning to support native XML types and XML Web services, we can expect the role of databases in scientific computation to grow in importance.

  12. CERN database services for the LHC computing grid

    International Nuclear Information System (INIS)

    Girone, M

    2008-01-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed

  13. CERN database services for the LHC computing grid

    Energy Technology Data Exchange (ETDEWEB)

    Girone, M [CERN IT Department, CH-1211 Geneva 23 (Switzerland)], E-mail: maria.girone@cern.ch

    2008-07-15

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  14. Dynamic zebrafish interactome reveals transcriptional mechanisms of dioxin toxicity.

    Directory of Open Access Journals (Sweden)

    Andrey Alexeyenko

    2010-05-01

    Full Text Available In order to generate hypotheses regarding the mechanisms by which 2,3,7,8-tetrachlorodibenzo-p-dioxin (dioxin) causes toxicity, we analyzed global gene expression changes in developing zebrafish embryos exposed to this potent toxicant in the context of a dynamic gene network. For this purpose, we also computationally inferred a zebrafish (Danio rerio) interactome based on orthologs and interaction data from other eukaryotes. Using novel computational tools to analyze this interactome, we distinguished between dioxin-dependent and dioxin-independent interactions between proteins, and tracked the temporal propagation of dioxin-dependent transcriptional changes from a few genes that were altered initially, to large groups of biologically coherent genes at later times. The most notable processes altered at later developmental stages were calcium and iron metabolism, embryonic morphogenesis including neuronal and retinal development, a variety of mitochondria-related functions, and generalized stress response (not including induction of antioxidant genes). Within the interactome, many of these responses were connected to cytochrome P4501A (cyp1a) as well as other genes that were dioxin-regulated one day after exposure. This suggests that cyp1a may play a key role in initiating the toxic dysregulation of those processes, rather than serving simply as a passive marker of dioxin exposure, as suggested by earlier research. Thus, a powerful microarray experiment coupled with a flexible interactome and multi-pronged interactome tools (which are now made publicly available for microarray analysis and related work) suggests the hypothesis that dioxin, best known in fish as a potent cardioteratogen, has many other targets. Many of these types of toxicity have been observed in mammalian species and are potentially caused by alterations to cyp1a.
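
    The ortholog-based inference mentioned above can be sketched as a simple "interolog" transfer; the gene identifiers and one-to-one ortholog map below are invented for illustration.

    ```python
    # Minimal interolog-transfer sketch (hypothetical data): project interactions
    # known in a source species onto zebrafish via an ortholog map.
    source_interactions = [("geneA_hs", "geneB_hs"), ("geneB_hs", "geneC_hs")]
    orthologs = {            # source gene -> zebrafish ortholog (assumed 1:1 here)
        "geneA_hs": "genea_dr",
        "geneB_hs": "geneb_dr",
        "geneC_hs": "genec_dr",
    }

    zebrafish_edges = {
        (orthologs[a], orthologs[b])
        for a, b in source_interactions
        if a in orthologs and b in orthologs
    }
    print(zebrafish_edges)
    ```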

  15. Smart Location Database - Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  16. RNA-Binding Proteins Revisited – The Emerging Arabidopsis mRNA Interactome

    KAUST Repository

    Köster, Tino

    2017-04-13

    RNA–protein interaction is an important checkpoint to tune gene expression at the RNA level. Global identification of proteins binding in vivo to mRNA has been possible through interactome capture – where proteins are fixed to target RNAs by UV crosslinking and purified through affinity capture of polyadenylated RNA. In Arabidopsis over 500 RNA-binding proteins (RBPs) enriched in UV-crosslinked samples have been identified. As in mammals and yeast, the mRNA interactomes came with a few surprises. For example, a plethora of the proteins caught on RNA had not previously been linked to RNA-mediated processes, for example proteins of intermediary metabolism. Thus, the studies provide unprecedented insights into the composition of the mRNA interactome, highlighting the complexity of RNA-mediated processes.

  17. RNA-Binding Proteins Revisited – The Emerging Arabidopsis mRNA Interactome

    KAUST Repository

    Köster, Tino; Marondedze, Claudius; Meyer, Katja; Staiger, Dorothee

    2017-01-01

    RNA–protein interaction is an important checkpoint to tune gene expression at the RNA level. Global identification of proteins binding in vivo to mRNA has been possible through interactome capture – where proteins are fixed to target RNAs by UV crosslinking and purified through affinity capture of polyadenylated RNA. In Arabidopsis over 500 RNA-binding proteins (RBPs) enriched in UV-crosslinked samples have been identified. As in mammals and yeast, the mRNA interactomes came with a few surprises. For example, a plethora of the proteins caught on RNA had not previously been linked to RNA-mediated processes, for example proteins of intermediary metabolism. Thus, the studies provide unprecedented insights into the composition of the mRNA interactome, highlighting the complexity of RNA-mediated processes.

  18. Mapping the Small Molecule Interactome by Mass Spectrometry.

    Science.gov (United States)

    Flaxman, Hope A; Woo, Christina M

    2018-01-16

    Mapping small molecule interactions throughout the proteome provides the critical structural basis for functional analysis of their impact on biochemistry. However, translation of mass spectrometry-based proteomics methods to directly profile the interaction between a small molecule and the whole proteome is challenging because of the substoichiometric nature of many interactions, the diversity of covalent and noncovalent interactions involved, and the subsequent computational complexity associated with their spectral assignment. Recent advances in chemical proteomics have begun to fill this gap, providing a structural basis for the breadth of small molecule-protein interactions in the whole proteome. Innovations enabling direct characterization of the small molecule interactome include faster, more sensitive instrumentation coupled to chemical conjugation, enrichment, and labeling methods that facilitate detection and assignment. These methods have started to measure molecular interaction hotspots due to inherent differences in local amino acid reactivity and binding affinity throughout the proteome. Measurement of the small molecule interactome is producing structural insights and methods for probing and engineering protein biochemistry. Direct structural characterization of the small molecule interactome is a rapidly emerging area pushing new frontiers in biochemistry at the interface of small molecules and the proteome.

  19. Database Aspects of Location-Based Services

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard

    2004-01-01

    Adopting a data management perspective on location-based services, this chapter explores central challenges to data management posed by location-based services. Because service users typically travel in, and are constrained to, transportation infrastructures, such structures must be represented in the databases underlying high-quality services. Several integrated representations - which capture different aspects of the same infrastructure - are needed. Further, all other content that can be related to geographical space must be integrated with the infrastructure representations. The chapter describes the general concepts underlying one approach to data modeling for location-based services. The chapter also covers techniques that are needed to keep a database for location-based services up to date with the reality it models. As part of this, caching is touched upon briefly. The notion of linear referencing...

  20. Organization of physical interactomes as uncovered by network schemas.

    Science.gov (United States)

    Banks, Eric; Nabieva, Elena; Chazelle, Bernard; Singh, Mona

    2008-10-01

    Large-scale protein-protein interaction networks provide new opportunities for understanding cellular organization and functioning. We introduce network schemas to elucidate shared mechanisms within interactomes. Network schemas specify descriptions of proteins and the topology of interactions among them. We develop algorithms for systematically uncovering recurring, over-represented schemas in physical interaction networks. We apply our methods to the S. cerevisiae interactome, focusing on schemas consisting of proteins described via sequence motifs and molecular function annotations and interacting with one another in one of four basic network topologies. We identify hundreds of recurring and over-represented network schemas of various complexity, and demonstrate via graph-theoretic representations how more complex schemas are organized in terms of their lower-order constituents. The uncovered schemas span a wide range of cellular activities, with many signaling and transport related higher-order schemas. We establish the functional importance of the schemas by showing that they correspond to functionally cohesive sets of proteins, are enriched in the frequency with which they have instances in the H. sapiens interactome, and are useful for predicting protein function. Our findings suggest that network schemas are a powerful paradigm for organizing, interrogating, and annotating cellular networks.
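
    A toy sketch of schema counting in the spirit described above, using invented node labels: it counts instances of a "kinase - shared partner - phosphatase" path schema. In the paper, schemas are built from sequence motifs and molecular function annotations, and over-representation is then assessed against randomized networks.

    ```python
    # Count instances of a labeled two-step path schema in an invented toy network.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("K1", "S1"), ("S1", "P1"), ("K2", "S1"),
                      ("S2", "P1"), ("K1", "S2")])
    label = {"K1": "kinase", "K2": "kinase",
             "S1": "substrate", "S2": "substrate", "P1": "phosphatase"}

    def count_path_schema(graph, mid_label, end_a, end_b):
        """Count paths a-m-b where m carries mid_label and a, b carry the (distinct) end labels."""
        hits = 0
        for m in graph:
            if label[m] != mid_label:
                continue
            nbrs = list(graph[m])
            hits += sum(label[u] == end_a for u in nbrs) * \
                    sum(label[v] == end_b for v in nbrs)
        return hits

    print(count_path_schema(G, "substrate", "kinase", "phosphatase"))
    ```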

  1. Embryonic stem cell interactomics: the beginning of a long road to biological function.

    Science.gov (United States)

    Yousefi, Maram; Hajihoseini, Vahid; Jung, Woojin; Hosseinpour, Batol; Rassouli, Hassan; Lee, Bonghee; Baharvand, Hossein; Lee, KiYoung; Salekdeh, Ghasem Hosseini

    2012-12-01

    Embryonic stem cells (ESCs) are capable of unlimited self-renewal while maintaining pluripotency. They are of great interest in regenerative medicine due to their ability to differentiate into all cell types of the three embryonic germ layers. Recently, induced pluripotent stem cells (iPSCs) have shown similarities to ESCs and thus promise great therapeutic potential in regenerative medicine. Despite progress in stem cell biology, our understanding of the exact mechanisms by which pluripotency and self-renewal are established and maintained is largely unknown. A better understanding of these processes may lead to discovery of alternative ways for reprogramming, differentiation and more reliable applications of stem cells in therapies. It has become evident that proteins generally function as members of large complexes that are part of a more complex network. Therefore, the identification of protein-protein interactions (PPI) is an efficient strategy for understanding protein function and regulation. Systematic genome-wide and pathway-specific PPI analysis of ESCs has generated a network of ESC proteins, including major transcription factors. These PPI networks of ESCs may contribute to a mechanistic understanding of self-renewal and pluripotency. In this review we describe different experimental approaches for the identification of PPIs along with various databases. We discuss biological findings and technical challenges encountered with interactome studies of pluripotent stem cells, and provide insight into how interactomics is likely to develop.

  2. A "candidate-interactome" aggregate analysis of genome-wide association data in multiple sclerosis

    DEFF Research Database (Denmark)

    Mechelli, Rosella; Umeton, Renato; Policano, Claudia

    2013-01-01

    We performed a candidate-interactome (that is, a list of genes whose products are known to physically interact with environmental factors that may be relevant for disease pathogenesis) analysis of genome-wide association data in multiple sclerosis. We looked for statistical enrichment of associations among interactomes that, at the current state of knowledge, may be representative of gene-environment interactions of potential, uncertain or unlikely relevance for multiple sclerosis pathogenesis: Epstein-Barr virus, human immunodeficiency virus, hepatitis B virus, hepatitis C virus, cytomegalovirus, HHV8-Kaposi sarcoma, H1N1-influenza, JC virus, human innate immunity interactome for type I interferon, autoimmune regulator, vitamin D receptor, aryl hydrocarbon receptor and a panel of proteins targeted by 70 innate immune-modulating viral open reading frames from 30 viral species. Interactomes were either obtained from the literature or were manually curated...

  3. A highly efficient approach to protein interactome mapping based on collaborative filtering framework.

    Science.gov (United States)

    Luo, Xin; You, Zhuhong; Zhou, Mengchu; Li, Shuai; Leung, Hareton; Xia, Yunni; Zhu, Qingsheng

    2015-01-09

    The comprehensive mapping of protein-protein interactions (PPIs) is highly desired for one to gain deep insights into both fundamental cell biology processes and the pathology of diseases. Finely-set small-scale experiments are not only very expensive but also inefficient for identifying numerous interactions, despite their high accuracy. High-throughput screening techniques enable efficient identification of PPIs; yet the desire to further extract useful knowledge from these data leads to the problem of binary interactome mapping. Network topology-based approaches prove to be highly efficient in addressing this problem; however, their performance deteriorates significantly on sparse putative PPI networks. Motivated by the success of collaborative filtering (CF)-based approaches to the problem of personalized recommendation on large, sparse rating matrices, this work aims at implementing a highly efficient CF-based approach to binary interactome mapping. To achieve this, we first propose a CF framework for it. Under this framework, we model the given data into an interactome weight matrix, from which the feature vectors of the involved proteins are extracted. With them, we design the rescaled cosine coefficient to model the inter-neighborhood similarity among involved proteins and carry out the mapping process. Experimental results on three large, sparse datasets demonstrate that the proposed approach outperforms several sophisticated topology-based approaches significantly.
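
    A bare-bones illustration of the collaborative-filtering idea (not the authors' algorithm): unobserved pairs are scored by the plain cosine similarity of the proteins' rows in an invented interactome weight matrix; the paper's rescaled cosine coefficient refines this similarity measure.

    ```python
    # CF-style sketch: score unobserved protein pairs by neighbourhood similarity.
    import numpy as np

    proteins = ["P1", "P2", "P3", "P4"]
    # Hypothetical interactome weight matrix (rows/cols follow `proteins`);
    # entry (i, j) is the confidence of an observed interaction, 0 if unobserved.
    W = np.array([
        [0.0, 0.9, 0.0, 0.7],
        [0.9, 0.0, 0.8, 0.0],
        [0.0, 0.8, 0.0, 0.0],
        [0.7, 0.0, 0.0, 0.0],
    ])

    def cosine(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v / denom) if denom else 0.0

    # Score every unobserved pair by the similarity of their interaction profiles.
    for i in range(len(proteins)):
        for j in range(i + 1, len(proteins)):
            if W[i, j] == 0.0:
                print(proteins[i], proteins[j], round(cosine(W[i], W[j]), 3))
    ```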

  4. Serial interactome capture of the human cell nucleus.

    Science.gov (United States)

    Conrad, Thomas; Albrecht, Anne-Susann; de Melo Costa, Veronica Rodrigues; Sauer, Sascha; Meierhofer, David; Ørom, Ulf Andersson

    2016-04-04

    Novel RNA-guided cellular functions are paralleled by an increasing number of RNA-binding proteins (RBPs). Here we present 'serial RNA interactome capture' (serIC), a multiple purification procedure of ultraviolet-crosslinked poly(A)-RNA-protein complexes that enables global RBP detection with high specificity. We apply serIC to the nuclei of proliferating K562 cells to obtain the first human nuclear RNA interactome. The domain composition of the 382 identified nuclear RBPs markedly differs from previous IC experiments, including few factors without known RNA-binding domains that are in good agreement with computationally predicted RNA binding. serIC extends the number of DNA-RNA-binding proteins (DRBPs), and reveals a network of RBPs involved in p53 signalling and double-strand break repair. serIC is an effective tool to couple global RBP capture with additional selection or labelling steps for specific detection of highly purified RBPs.

  5. Next Generation Protein Interactomes for Plant Systems Biology and Biomass Feedstock Research

    Energy Technology Data Exchange (ETDEWEB)

    Ecker, Joseph Robert [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Trigg, Shelly [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Univ. of California, San Diego, CA (United States). Biological Sciences Dept.; Garza, Renee [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Song, Haili [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; MacWilliams, Andrew [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Nery, Joseph [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Reina, Joaquin [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Bartlett, Anna [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Castanon, Rosa [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Goubil, Adeline [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Feeney, Joseph [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; O' Malley, Ronan [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Huang, Shao-shan Carol [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Zhang, Zhuzhu [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Galli, Mary [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.

    2016-11-30

    Biofuel crop cultivation is a necessary step towards a sustainable future, making genomic studies of these crops a priority. While technology platforms that currently exist for studying non-model crop species, like switchgrass or sorghum, have yielded large quantities of genomic and expression data, a large gap still exists between molecular mechanism and phenotype. The study of molecular activity at the level of protein-protein interactions has recently begun to bridge this gap, providing a more global perspective. Interactome analysis has defined more specific functional roles of proteins based on their interaction partners, neighborhoods, and other network features, making it possible to distinguish unique modules of immune response to different plant pathogens (Jiang, Dong, and Zhang 2016). As we work towards cultivating hardier biofuel crops, interactome data will lead to uncovering crop-specific defense and development networks. However, the collection of protein interaction data has been limited to expensive, time-consuming, hard-to-scale assays that mostly require cloned ORF collections. For these reasons, we have successfully developed a highly scalable, economical, and sensitive yeast two-hybrid assay, ProCREate, that can be universally applied to generate proteome-wide primary interactome data. ProCREate enables en masse pooling and massively parallel sequencing for the identification of interacting proteins by exploiting Cre-lox recombination. ProCREate can be used to screen ORF/cDNA libraries from feedstock plant tissues. The interactome data generated will yield deeper insight into many molecular processes and pathways that can be used to guide improvement of feedstock productivity and sustainability.

  6. A study on relational ENSDF databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Song Xiangxiang; Ye Weiguo; Liu Wenlong; Feng Yuqing; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Guo Zhiyu; Huang Xiaolong; Liu Tingjin; China Inst. of Atomic Energy, Beijing

    2007-01-01

    A relational ENSDF library software is designed and released. Using relational databases, object-oriented programming and web-based technology, this software offers online data services of a centralized repository of data, including international ENSDF files for nuclear structure and decay data. The software can easily reconstruct nuclear data in original ENSDF format from the relational database. The computer programs providing support for database management and online data services via the Internet are based on the Linux implementation of PHP and the MySQL software, and platform independent in a wider sense. (authors)
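
    A much-simplified sketch of the reconstruction step described above: fields stored in a relational table are reassembled into fixed-width, card-style lines. The table, columns and column widths here are hypothetical and do not reproduce the actual ENSDF card layout.

    ```python
    # Reassemble fixed-width record lines from relational rows (toy schema only).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE levels (nucid TEXT, energy_kev REAL, jpi TEXT)")
    conn.executemany("INSERT INTO levels VALUES (?, ?, ?)",
                     [("60CO", 0.0, "5+"), ("60CO", 58.6, "2+")])

    for nucid, energy, jpi in conn.execute(
            "SELECT nucid, energy_kev, jpi FROM levels ORDER BY energy_kev"):
        # pad each field to a fixed width so the line resembles a card record
        print(f"{nucid:<5}{'L':<3}{energy:>10.1f}{jpi:>6}")
    ```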

  7. Managing Database Services: An Approach Based in Information Technology Services Availabilty and Continuity Management

    Directory of Open Access Journals (Sweden)

    Leonardo Bastos Pontes

    2017-01-01

    Full Text Available This paper is set in the context of information technology services management, drawing on ideas from information technology governance, and proposes a hybrid model to manage the database services of a supplementary health operator, based on the principles of information technology services management. The approach draws on core elements of services management guides such as CMMI for Services, COBIT, ISO 20000, ITIL and MPS.BR for Services and, as most of these guides do, gives particular attention to Availability and Continuity Management. The work is relevant because it maintains a good data flow in the database and improves the responsiveness of the systems used in the clinics accredited by the health plan.

  8. Interactome of Obesity: Obesidome : Genetic Obesity, Stress Induced Obesity, Pathogenic Obesity Interaction.

    Science.gov (United States)

    Geronikolou, Styliani A; Pavlopoulou, Athanasia; Cokkinos, Dennis; Chrousos, George

    2017-01-01

    Obesity is a chronic disease of increasing prevalence that is reaching epidemic proportions. Genetic defects as well as epigenetic effects contribute to the obesity phenotype. Investigating gene (e.g. MC4R defects)-environment (behavior, infectious agents, stress) interactions is a relatively new field of great research interest. In this study, we have made an effort to create an interactome (henceforth referred to as "obesidome") in which responses to extrinsic stressors, intrinsic predisposition, the immune response to inflammation and autonomic nervous system implications are integrated. These pathways are presented in one interactome network for the first time. In our study, obesity-related genes/gene products were found to form a complex interaction network.

  9. Large Science Databases – Are Cloud Services Ready for Them?

    Directory of Open Access Journals (Sweden)

    Ani Thakar

    2011-01-01

    Full Text Available We report on attempts to put an astronomical database – the Sloan Digital Sky Survey science archive – in the cloud. We find that it is very frustrating to impossible at this time to migrate a complex SQL Server database into current cloud service offerings such as Amazon (EC2) and Microsoft (SQL Azure). Certainly it is impossible to migrate a large database in excess of a TB, but even with (much) smaller databases, the limitations of cloud services make it very difficult to migrate the data to the cloud without making changes to the schema and settings that would degrade performance and/or make the data unusable. Preliminary performance comparisons show a large performance discrepancy with the Amazon cloud version of the SDSS database. These difficulties suggest that much work and coordination needs to occur between cloud service providers and their potential clients before science databases – not just large ones but even smaller databases that make extensive use of advanced database features for performance and usability – can successfully and effectively be deployed in the cloud. We describe a powerful new computational instrument that we are developing in the interim – the Data-Scope – that will enable fast and efficient analysis of the largest (petabyte scale) scientific datasets.

  10. Development of IAEA nuclear reaction databases and services

    Energy Technology Data Exchange (ETDEWEB)

    Zerkin, V.; Trkov, A. [International Atomic Energy Agency, Dept. of Nuclear Sciences and Applications, Vienna (Austria)

    2008-07-01

    From mid-2004 onwards, the major nuclear reaction databases (EXFOR, CINDA and ENDF) and services (Web and CD-ROM retrieval systems and specialized applications) have been functioning within a modern computing environment as multi-platform software, working under several operating systems with relational databases. Subsequent work at the IAEA has focused on three areas of development: revision and extension of the contents of the databases; extension and improvement of the functionality and integrity of the retrieval systems; development of software for database maintenance and system deployment. (authors)

  11. Study of relational nuclear databases and online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Guo Zhiyu; Liu Wenlong; Ye Weiguo; Feng Yuqing; Song Xiangxiang; Huang Gang; Hong Yingjue; Liu Tinjin; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Liu Chi; Chen Jiaer; Huang Xiaolong

    2004-01-01

    A relational nuclear database management and web-based services software system has been developed. Its objective is to allow users to access numerical and graphical representation of nuclear data and to easily reconstruct nuclear data in original standardized formats from the relational databases. It presents 9 relational nuclear libraries: 5 ENDF format neutron reaction databases (BROND, CENDL, ENDF, JEF and JENDL), the ENSDF database, the EXFOR database, the IAEA Photonuclear Data Library and the charged particle reaction data from the FENDL database. The computer programs providing support for database management and data retrievals are based on the Linux implementation of PHP and the MySQL software, and are platform-independent. The first version of this software was officially released in September 2001

  12. Expression of DISC1-interactome members correlates with cognitive phenotypes related to schizophrenia.

    Science.gov (United States)

    Rampino, Antonio; Walker, Rosie May; Torrance, Helen Scott; Anderson, Susan Maguire; Fazio, Leonardo; Di Giorgio, Annabella; Taurisano, Paolo; Gelao, Barbara; Romano, Raffaella; Masellis, Rita; Ursini, Gianluca; Caforio, Grazia; Blasi, Giuseppe; Millar, J Kirsty; Porteous, David John; Thomson, Pippa Ann; Bertolino, Alessandro; Evans, Kathryn Louise

    2014-01-01

    Cognitive dysfunction is central to the schizophrenia phenotype. Genetic and functional studies have implicated Disrupted-in-Schizophrenia 1 (DISC1), a leading candidate gene for schizophrenia and related psychiatric conditions, in cognitive function. Altered expression of DISC1 and DISC1-interactors has been identified in schizophrenia. Dysregulated expression of DISC1-interactome genes might, therefore, contribute to schizophrenia susceptibility via disruption of molecular systems required for normal cognitive function. Here, the blood RNA expression levels of DISC1 and DISC1-interacting proteins were measured in 63 control subjects. Cognitive function was assessed using neuropsychiatric tests and functional magnetic resonance imaging was used to assess the activity of prefrontal cortical regions during the N-back working memory task, which is abnormal in schizophrenia. Pairwise correlations between gene expression levels and the relationship between gene expression levels and cognitive function and N-back-elicited brain activity were assessed. Finally, the expression levels of DISC1, AKAP9, FEZ1, NDEL1 and PCM1 were compared between 63 controls and 69 schizophrenic subjects. We found that DISC1-interactome genes showed correlated expression in the blood of healthy individuals. The expression levels of several interactome members were correlated with cognitive performance and N-back-elicited activity in the prefrontal cortex. In addition, DISC1 and NDEL1 showed decreased expression in schizophrenic subjects compared to healthy controls. Our findings highlight the importance of the coordinated expression of DISC1-interactome genes for normal cognitive function and suggest that dysregulated DISC1 and NDEL1 expression might, in part, contribute to susceptibility for schizophrenia via disruption of prefrontal cortex-dependent cognitive functions.

  13. Expression of DISC1-interactome members correlates with cognitive phenotypes related to schizophrenia.

    Directory of Open Access Journals (Sweden)

    Antonio Rampino

    Full Text Available Cognitive dysfunction is central to the schizophrenia phenotype. Genetic and functional studies have implicated Disrupted-in-Schizophrenia 1 (DISC1), a leading candidate gene for schizophrenia and related psychiatric conditions, in cognitive function. Altered expression of DISC1 and DISC1-interactors has been identified in schizophrenia. Dysregulated expression of DISC1-interactome genes might, therefore, contribute to schizophrenia susceptibility via disruption of molecular systems required for normal cognitive function. Here, the blood RNA expression levels of DISC1 and DISC1-interacting proteins were measured in 63 control subjects. Cognitive function was assessed using neuropsychiatric tests and functional magnetic resonance imaging was used to assess the activity of prefrontal cortical regions during the N-back working memory task, which is abnormal in schizophrenia. Pairwise correlations between gene expression levels and the relationship between gene expression levels and cognitive function and N-back-elicited brain activity were assessed. Finally, the expression levels of DISC1, AKAP9, FEZ1, NDEL1 and PCM1 were compared between 63 controls and 69 schizophrenic subjects. We found that DISC1-interactome genes showed correlated expression in the blood of healthy individuals. The expression levels of several interactome members were correlated with cognitive performance and N-back-elicited activity in the prefrontal cortex. In addition, DISC1 and NDEL1 showed decreased expression in schizophrenic subjects compared to healthy controls. Our findings highlight the importance of the coordinated expression of DISC1-interactome genes for normal cognitive function and suggest that dysregulated DISC1 and NDEL1 expression might, in part, contribute to susceptibility for schizophrenia via disruption of prefrontal cortex-dependent cognitive functions.

  14. PostGIS-Based Heterogeneous Sensor Database Framework for the Sensor Observation Service

    Directory of Open Access Journals (Sweden)

    Ikechukwu Maduako

    2012-10-01

    Full Text Available Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote sensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as the Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This places a heavy and unnecessary communication and processing load on the service: large rasters are downloaded from flat-file raster data sources each time a request is made, and substantial integration and geo-processing work is carried out on the service middleware that could be better leveraged at the database level. In this paper, we propose and present a heterogeneous sensor database framework for the integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level, and show how this can be integrated into the Sensor Observation Service (SOS) to reduce communication and workload on the Geospatial Web Services, as well as to make query requests from the user end much more flexible.
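
    A sketch of what "integration at the database level" can look like from a client, assuming a PostGIS-enabled PostgreSQL database with a hypothetical insitu_observations table (station_id, obs_time, value, geom); the spatial filtering is done by PostGIS rather than by the service middleware.

    ```python
    # Push spatial filtering down to the database (table/column names hypothetical).
    import psycopg2

    conn = psycopg2.connect(dbname="sensors", user="sos", password="...", host="localhost")
    with conn, conn.cursor() as cur:
        # Select in-situ observations within 5 km of a point of interest, letting
        # PostGIS do the geometry work instead of the service middleware.
        cur.execute(
            """
            SELECT station_id, obs_time, value
            FROM insitu_observations
            WHERE ST_DWithin(
                geom::geography,
                ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography,
                5000)
            """,
            (-66.64, 45.96),
        )
        for row in cur.fetchall():
            print(row)
    ```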

  15. Mining protein interactomes to improve their reliability and support the advancement of network medicine

    KAUST Repository

    Alanis Lobato, Gregorio

    2015-09-23

    High-throughput detection of protein interactions has had a major impact in our understanding of the intricate molecular machinery underlying the living cell, and has permitted the construction of very large protein interactomes. The protein networks that are currently available are incomplete and a significant percentage of their interactions are false positives. Fortunately, the structural properties observed in good quality social or technological networks are also present in biological systems. This has encouraged the development of tools, to improve the reliability of protein networks and predict new interactions based merely on the topological characteristics of their components. Since diseases are rarely caused by the malfunction of a single protein, having a more complete and reliable interactome is crucial in order to identify groups of inter-related proteins involved in disease etiology. These system components can then be targeted with minimal collateral damage. In this article, an important number of network mining tools is reviewed, together with resources from which reliable protein interactomes can be constructed. In addition to the review, a few representative examples of how molecular and clinical data can be integrated to deepen our understanding of pathogenesis are discussed.

  16. Mining protein interactomes to improve their reliability and support the advancement of network medicine

    Directory of Open Access Journals (Sweden)

    Gregorio eAlanis-Lobato

    2015-09-01

    Full Text Available High-throughput detection of protein interactions has had a major impact in our understanding of the intricate molecular machinery underlying the living cell, and has permitted the construction of very large protein interactomes. The protein networks that are currently available are incomplete and a significant percentage of their interactions are false positives. Fortunately, the structural properties observed in good quality social or technological networks are also present in biological systems. This has encouraged the development of tools, to improve the reliability of protein networks and predict new interactions based merely on the topological characteristics of their components. Since diseases are rarely caused by the malfunction of a single protein, having a more complete and reliable interactome is crucial in order to identify groups of inter-related proteins involved in disease aetiology. These system components can then be targeted with minimal collateral damage. In this article, an important number of network mining tools is reviewed, together with resources from which reliable protein interactomes can be constructed. In addition to the review, a few representative examples of how molecular and clinical data can be integrated to deepen our understanding of pathogenesis are discussed.
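
    The topology-based prediction of new interactions mentioned in the two records above can be illustrated with standard neighbourhood indices (here networkx's Jaccard coefficient on an invented toy graph); the reviewed tools use more elaborate, but related, structural measures.

    ```python
    # Topology-based interaction prediction on an (incomplete) toy PPI graph.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")])

    candidates = list(nx.non_edges(G))
    for u, v, score in nx.jaccard_coefficient(G, candidates):
        if score > 0:
            print(f"predicted {u}-{v}  jaccard={score:.2f}")
    ```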

  17. A rapid and accurate approach for prediction of interactomes from co-elution data (PrInCE).

    Science.gov (United States)

    Stacey, R Greg; Skinnider, Michael A; Scott, Nichollas E; Foster, Leonard J

    2017-10-23

    An organism's protein interactome, or complete network of protein-protein interactions, defines the protein complexes that drive cellular processes. Techniques for studying protein complexes have traditionally applied targeted strategies such as yeast two-hybrid or affinity purification-mass spectrometry to assess protein interactions. However, given the vast number of protein complexes, more scalable methods are necessary to accelerate interaction discovery and to construct whole interactomes. We recently developed a complementary technique based on the use of protein correlation profiling (PCP) and stable isotope labeling in amino acids in cell culture (SILAC) to assess chromatographic co-elution as evidence of interacting proteins. Importantly, PCP-SILAC is also capable of measuring protein interactions simultaneously under multiple biological conditions, allowing the detection of treatment-specific changes to an interactome. Given the uniqueness and high dimensionality of co-elution data, new tools are needed to compare protein elution profiles, control false discovery rates, and construct an accurate interactome. Here we describe a freely available bioinformatics pipeline, PrInCE, for the analysis of co-elution data. PrInCE is a modular, open-source library that is computationally inexpensive, able to use label and label-free data, and capable of detecting tens of thousands of protein-protein interactions. Using a machine learning approach, PrInCE offers greatly reduced run time, more predicted interactions at the same stringency, prediction of protein complexes, and greater ease of use over previous bioinformatics tools for co-elution data. PrInCE is implemented in Matlab (version R2017a). Source code and standalone executable programs for Windows and Mac OSX are available at https://github.com/fosterlab/PrInCE , where usage instructions can be found. An example dataset and output are also provided for testing purposes. PrInCE is the first fast and easy
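
    The core intuition behind co-elution scoring can be shown with invented fractionation profiles: proteins whose profiles correlate strongly across fractions are candidate interactors. PrInCE itself (implemented in Matlab) combines many such profile features in a machine-learning classifier with false-discovery-rate control; this Python toy only illustrates the profile-similarity step.

    ```python
    # Pairwise correlation of invented co-elution (fractionation) profiles.
    import numpy as np

    profiles = {
        "prot1": np.array([0.0, 1.0, 5.0, 9.0, 4.0, 1.0, 0.0]),
        "prot2": np.array([0.1, 1.2, 4.8, 8.5, 4.2, 0.9, 0.0]),
        "prot3": np.array([6.0, 4.0, 1.0, 0.0, 0.0, 2.0, 7.0]),
    }

    names = list(profiles)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = np.corrcoef(profiles[names[i]], profiles[names[j]])[0, 1]
            print(names[i], names[j], round(r, 2))
    ```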

  18. Systematic differences in signal emitting and receiving revealed by PageRank analysis of a human protein interactome.

    Directory of Open Access Journals (Sweden)

    Donglei Du

    Full Text Available Most protein PageRank studies do not use signal flow direction information in protein interactions because this information was not readily available in large protein databases until recently. Therefore, four questions have yet to be answered: A) What is the general difference between signal emitting and receiving in a protein interactome? B) Which proteins are among the top ranked in directional ranking? C) Are high ranked proteins more evolutionarily conserved than low ranked ones? D) Do proteins with similar ranking tend to have similar subcellular locations? In this study, we address these questions using the forward, reverse, and non-directional PageRank approaches to rank an information-directional network of human proteins and study their evolutionary conservation. The forward ranking gives credit to information receivers, reverse ranking to information emitters, and non-directional ranking mainly to the number of interactions. The protein lists generated by the forward and non-directional rankings are highly correlated, but those by the reverse and non-directional rankings are not. The results suggest that the signal emitting/receiving system is characterized by key-emittings and relatively even receivings in the human protein interactome. Signaling pathway proteins are frequent in top ranked ones. Eight proteins are both informational top emitters and top receivers. Top ranked proteins, except a few species-related novel-function ones, are evolutionarily well conserved. Protein-subunit ranking position reflects subunit function. These results demonstrate the usefulness of different PageRank approaches in characterizing protein networks and provide insights to protein interaction in the cell.

  19. Systematic differences in signal emitting and receiving revealed by PageRank analysis of a human protein interactome.

    Science.gov (United States)

    Du, Donglei; Lee, Connie F; Li, Xiu-Qing

    2012-01-01

    Most protein PageRank studies do not use signal flow direction information in protein interactions because this information was not readily available in large protein databases until recently. Therefore, four questions have yet to be answered: A) What is the general difference between signal emitting and receiving in a protein interactome? B) Which proteins are among the top ranked in directional ranking? C) Are high ranked proteins more evolutionarily conserved than low ranked ones? D) Do proteins with similar ranking tend to have similar subcellular locations? In this study, we address these questions using the forward, reverse, and non-directional PageRank approaches to rank an information-directional network of human proteins and study their evolutionary conservation. The forward ranking gives credit to information receivers, reverse ranking to information emitters, and non-directional ranking mainly to the number of interactions. The protein lists generated by the forward and non-directional rankings are highly correlated, but those by the reverse and non-directional rankings are not. The results suggest that the signal emitting/receiving system is characterized by key-emittings and relatively even receivings in the human protein interactome. Signaling pathway proteins are frequent in top ranked ones. Eight proteins are both informational top emitters and top receivers. Top ranked proteins, except a few species-related novel-function ones, are evolutionarily well conserved. Protein-subunit ranking position reflects subunit function. These results demonstrate the usefulness of different PageRank approaches in characterizing protein networks and provide insights to protein interaction in the cell.
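
    The three ranking variants described in this record can be reproduced on any directed network with standard tools. Below is a minimal illustration using NetworkX on a toy signal-flow graph (the network and node names are invented, not the human interactome analysed in the paper): forward PageRank credits receivers, reverse PageRank credits emitters, and the non-directional variant mainly reflects the number of interactions.

      # Illustrative sketch of forward, reverse and non-directional PageRank on a
      # toy signal-flow network (edge A->B means A emits a signal received by B).
      import networkx as nx

      edges = [("Ligand", "Receptor"), ("Receptor", "Kinase1"),
               ("Receptor", "Kinase2"), ("Kinase1", "TF"), ("Kinase2", "TF")]
      G = nx.DiGraph(edges)

      forward = nx.pagerank(G)                  # credits information receivers
      reverse = nx.pagerank(G.reverse())        # credits information emitters
      nondir  = nx.pagerank(G.to_undirected())  # mainly reflects interaction count

      for node in G:
          print(f"{node:10s} fwd={forward[node]:.3f} "
                f"rev={reverse[node]:.3f} undir={nondir[node]:.3f}")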

  20. Experience with ATLAS MySQL PanDA database service

    International Nuclear Information System (INIS)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D; De, K; Ozturk, N

    2010-01-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  1. Experience with ATLAS MySQL PanDA database service

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, Y; Wlodek, T; Hover, J; Smith, J; Wenaus, T; Yu, D [Physics Department, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States); De, K; Ozturk, N [Department of Physics, University of Texas at Arlington, Arlington, TX, 76019 (United States)

    2010-04-01

    The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.

  2. Computer application for database management and networking of a radiophysics service

    International Nuclear Information System (INIS)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-01-01

    Databases used for quality control prove to be a powerful tool for recording, management and statistical process control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the centre's computer network. A computer acting as the server provides the database to the treatment units for the daily recording of quality control measurements and incidents. To avoid common problems such as shortcuts that stop working after data migration, the possible use of duplicate data, and erroneous data or data loss caused by errors in network connections, we proceeded to manage connections and database access centrally, easing maintenance and making the system usable by all service personnel.

  3. Database Optimizing Services

    Directory of Open Access Journals (Sweden)

    Adrian GHENCEA

    2010-12-01

    Full Text Available Almost every organization has at its centre a database. The database provides support for conducting different activities, whether it is production, sales and marketing or internal operations. Every day, a database is accessed for help in strategic decisions. Meeting such needs therefore requires high-quality security and availability. Those needs can be realised using a DBMS (Database Management System), which is, in fact, software for a database. Technically speaking, it is software which uses a standard method of cataloguing, retrieval, and running different data queries. A DBMS manages the input data, organizes it, and provides ways of modifying or extracting the data by its users or other programs. Managing the database is an operation that requires periodic updates, optimization and monitoring.

  4. Requirements for the next generation of nuclear databases and services

    Energy Technology Data Exchange (ETDEWEB)

    Pronyaev, Vladimir; Zerkin, Viktor; Muir, Douglas [International Atomic Energy Agency, Nuclear Data Section, Vienna (Austria); Winchell, David; Arcilla, Ramon [Brookhaven National Laboratory, National Nuclear Data Center, Upton, NY (United States)

    2002-08-01

    The use of relational database technology and general requirements for the next generation of nuclear databases and services are discussed. These requirements take into account an increased number of co-operating data centres working on diverse hardware and software platforms and users with different data-access capabilities. It is argued that the introduction of programming standards will allow the development of nuclear databases and data retrieval tools in a heterogeneous hardware and software environment. The functionality of this approach was tested with full-scale nuclear databases installed on different platforms having different operating and database management systems. User access through local network, internet, or CD-ROM has been investigated. (author)

  5. Field validation of food service listings: a comparison of commercial and online geographic information system databases.

    Science.gov (United States)

    Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian

    2012-08-01

    Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database; however, when matching criteria were more conservative, there were no observed differences in error between the databases.
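
    The positional-error percentiles quoted above (roughly 15 m, 25 m and 50 m at the 25th, 50th and 75th percentiles) amount to computing great-circle distances between database and GPS coordinates and summarising them. A minimal sketch with made-up coordinates:

      # Sketch: positional error between database and GPS coordinates (haversine),
      # summarised as percentiles. The coordinates below are illustrative only.
      import math
      import statistics

      def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
          """Great-circle distance in metres between two WGS84 points."""
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
          return 2 * r * math.asin(math.sqrt(a))

      # (database_lat, database_lon, gps_lat, gps_lon) per food service place
      records = [
          (44.2312, -76.4860, 44.2313, -76.4862),
          (44.2334, -76.4901, 44.2339, -76.4895),
          (44.2290, -76.4812, 44.2291, -76.4819),
      ]
      errors = sorted(haversine_m(a, b, c, d) for a, b, c, d in records)
      q25, q50, q75 = statistics.quantiles(errors, n=4)
      print(f"25%: {q25:.1f} m, median: {q50:.1f} m, 75%: {q75:.1f} m")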

  6. Integration of the DISDUKCAPIL Database and the KPU Database of Maros Regency Using Web Services

    Directory of Open Access Journals (Sweden)

    Frans N. Allokendek

    2013-01-01

    Abstract Many problems are encountered in the implementation of local elections, caused by both the committee and the participants. The problems that occur most frequently are the unavailability of an updated list of the potential voting population for the local election, an inflated number of voters due to duplicate data, and the limited time available to verify documents. Similar problems are encountered by the General Election Committee (KPU) of Maros Regency. A web service is a technology comprising a set of standards that allows two computer applications to communicate with each other and exchange data over the Internet. In this study, web services are used to connect two different applications: SIAK of the Demography and Civil Registration Office of Maros Regency and SIDP of the General Election Committee of Maros Regency. The study continues with system design and implementation, resulting in a prototype data integration system between the SIAK database of the Demography and Civil Registration Office of Maros Regency and the KPU database of Maros Regency, built using web service technology. The result is a valid Fixed Voter List.   Keywords—Web Services, Data Integration, Fixed Voter List

  7. The CMS ECAL database services for detector control and monitoring

    International Nuclear Information System (INIS)

    Arcidiacono, Roberta; Marone, Matteo; Badgett, William

    2010-01-01

    In this paper we give a description of the database services for the control and monitoring of the electromagnetic calorimeter of the CMS experiment at the LHC. After a general description of the software infrastructure, we present the organization of the tables in the database, which has been designed in order to simplify the development of software interfaces. This feature is achieved by including in the database the description of each relevant table. We also give some estimates of the final size and performance of the system.

  8. The Novice User and CD-ROM Database Services. ERIC Digest.

    Science.gov (United States)

    Schamber, Linda

    This digest answers the following questions that beginning or novice users may have about CD-ROM (a compact disk with read-only memory) database services: (1) What is CD-ROM? (2) What databases are available? (3) Is CD-ROM difficult to use? (4) How much does CD-ROM cost? and (5) What is the future of CD-ROM? (15 references) (MES)

  9. Protein Inference from the Integration of Tandem MS Data and Interactome Networks.

    Science.gov (United States)

    Zhong, Jiancheng; Wang, Jianxing; Ding, Xiaojun; Zhang, Zhen; Li, Min; Wu, Fang-Xiang; Pan, Yi

    2017-01-01

    Since proteins are digested into a mixture of peptides in the preprocessing step of tandem mass spectrometry (MS), it is difficult to determine which specific protein a shared peptide belongs to. In recent studies, besides tandem MS data and peptide identification information, some other information is exploited to infer proteins. Different from the methods which first use only tandem MS data to infer proteins and then use network information to refine them, this study proposes a protein inference method named TMSIN, which uses interactome networks directly. As two interacting proteins should co-exist, it is reasonable to assume that if one of the interacting proteins is confidently inferred in a sample, its interacting partners should have a high probability in the same sample, too. Therefore, we can use the neighborhood information of a protein in an interactome network to adjust the probability that the shared peptide belongs to the protein. In TMSIN, a multi-weighted graph is constructed by incorporating the bipartite graph with interactome network information, where the bipartite graph is built with the peptide identification information. Based on multi-weighted graphs, TMSIN adopts an iterative workflow to infer proteins. At each iterative step, the probability that a shared peptide belongs to a specific protein is calculated by using the Bayes' law based on the neighbor protein support scores of each protein which are mapped by the shared peptides. We carried out experiments on yeast data and human data to evaluate the performance of TMSIN in terms of ROC, q-value, and accuracy. The experimental results show that AUC scores yielded by TMSIN are 0.742 and 0.874 in yeast dataset and human dataset, respectively, and TMSIN yields the maximum number of true positives when q-value less than or equal to 0.05. The overlap analysis shows that TMSIN is an effective complementary approach for protein inference.
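
    The central idea of adjusting a shared peptide's protein assignment with interactome neighbourhood information can be illustrated with a much simplified Bayes-style update. This is only a sketch of the idea with invented priors and support scores, not the actual TMSIN algorithm:

      # Simplified sketch of neighbour-aware shared-peptide assignment (not the
      # actual TMSIN algorithm): a shared peptide is apportioned among candidate
      # proteins in proportion to prior evidence times interactome neighbour support.
      prior = {"P1": 0.5, "P2": 0.5}              # equal prior from peptide evidence alone
      neighbour_support = {"P1": 0.9, "P2": 0.2}  # support from confidently inferred neighbours

      unnormalised = {p: prior[p] * neighbour_support[p] for p in prior}
      total = sum(unnormalised.values())
      posterior = {p: v / total for p, v in unnormalised.items()}

      for p, v in posterior.items():
          print(f"P(shared peptide belongs to {p}) = {v:.2f}")
      # P1, whose interacting partners are already confidently present in the
      # sample, now absorbs most of the shared peptide's probability mass.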

  10. Serum Amyloid P Component (SAP) Interactome in Human Plasma Containing Physiological Calcium Levels.

    Science.gov (United States)

    Poulsen, Ebbe Toftgaard; Pedersen, Kata Wolff; Marzeda, Anna Maria; Enghild, Jan J

    2017-02-14

    The pentraxin serum amyloid P component (SAP) is secreted by the liver and found in plasma at a concentration of approximately 30 mg/L. SAP is a 25 kDa homopentamer known to bind both protein and nonprotein ligands, all in a calcium-dependent manner. The function of SAP is unclear but likely involves the humoral innate immune system spanning the complement system, inflammation, and coagulation. Also, SAP is known to bind to the generic structure of amyloid deposits and possibly to protect them against proteolysis. In this study, we have characterized the SAP interactome in human plasma containing the physiological Ca2+ concentration using SAP affinity pull-down and co-immunoprecipitation experiments followed by mass spectrometry analyses. The analyses resulted in the identification of 33 proteins, of which 24 were direct or indirect interaction partners not previously reported. The SAP interactome can be divided into categories that include apolipoproteins, the complement system, coagulation, and proteolytic regulation.

  11. Arabidopsis G-protein interactome reveals connections to cell wall carbohydrates and morphogenesis.

    Science.gov (United States)

    Klopffleisch, Karsten; Phan, Nguyen; Augustin, Kelsey; Bayne, Robert S; Booker, Katherine S; Botella, Jose R; Carpita, Nicholas C; Carr, Tyrell; Chen, Jin-Gui; Cooke, Thomas Ryan; Frick-Cheng, Arwen; Friedman, Erin J; Fulk, Brandon; Hahn, Michael G; Jiang, Kun; Jorda, Lucia; Kruppe, Lydia; Liu, Chenggang; Lorek, Justine; McCann, Maureen C; Molina, Antonio; Moriyama, Etsuko N; Mukhtar, M Shahid; Mudgil, Yashwanti; Pattathil, Sivakumar; Schwarz, John; Seta, Steven; Tan, Matthew; Temp, Ulrike; Trusov, Yuri; Urano, Daisuke; Welter, Bastian; Yang, Jing; Panstruga, Ralph; Uhrig, Joachim F; Jones, Alan M

    2011-09-27

    The heterotrimeric G-protein complex is minimally composed of Gα, Gβ, and Gγ subunits. In the classic scenario, the G-protein complex is the nexus in signaling from the plasma membrane, where the heterotrimeric G-protein associates with heptahelical G-protein-coupled receptors (GPCRs), to cytoplasmic target proteins called effectors. Although a number of effectors are known in metazoans and fungi, none of these are predicted to exist in their canonical forms in plants. To identify ab initio plant G-protein effectors and scaffold proteins, we screened a set of proteins from the G-protein complex using two-hybrid complementation in yeast. After deep and exhaustive interrogation, we detected 544 interactions between 434 proteins, of which 68 highly interconnected proteins form the core G-protein interactome. Within this core, over half of the interactions comprising two-thirds of the nodes were retested and validated as genuine in planta. Co-expression analysis in combination with phenotyping of loss-of-function mutations in a set of core interactome genes revealed a novel role for G-proteins in regulating cell wall modification.

  12. Database Description - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Database Description - SKIP Stemcell Database. General information: Database name: SKIP Stemcell Database. Contact address: http://www.skip.med.keio.ac.jp/en/contact/. Database classification: Human Genes and Diseases; Stemcell. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: Center for Medical Genetics, School of Medicine, Keio University. Web services: Not available. Need for user registration: Not available.

  13. Quality of Service: a bibliometric study in international databases

    Directory of Open Access Journals (Sweden)

    Deosir Flávio Lobo de Castro Junior

    2013-08-01

    Full Text Available The purpose of this article is to serve as a source of references on Quality of Service for future research. After surveying the international databases EBSCO and ProQuest, the results on the state of the art in this issue are presented. The method used was bibliometrics, and 132 items from a universe of 13,427 were investigated. The analyzed works cover the period from 1985 to 2011. Among the contributions, results and conclusions for future research are presented: i) most cited authors; ii) most used methodology, dimensions and questionnaire; iii) most referenced publications; iv) international journals with most publications on the subject; v) distribution of the number of publications per year; vi) authors networks; vii) educational institutions network; viii) terms used in the search in international databases; ix) the relationships studied in the 132 articles; x) criteria for choice of methodology in the research on quality of services; xi) most often used paradigm; and xii) 160 high impact references.

  14. Interactomes, manufacturomes and relational biology: analogies between systems biology and manufacturing systems

    Science.gov (United States)

    2011-01-01

    Background We review and extend the work of Rosen and Casti who discuss category theory with regards to systems biology and manufacturing systems, respectively. Results We describe anticipatory systems, or long-range feed-forward chemical reaction chains, and compare them to open-loop manufacturing processes. We then close the loop by discussing metabolism-repair systems and describe the rationality of the self-referential equation f = f (f). This relationship is derived from some boundary conditions that, in molecular systems biology, can be stated as the cardinality of the following molecular sets must be about equal: metabolome, genome, proteome. We show that this conjecture is not likely correct so the problem of self-referential mappings for describing the boundary between living and nonliving systems remains an open question. We calculate a lower and upper bound for the number of edges in the molecular interaction network (the interactome) for two cellular organisms and for two manufacturomes for CMOS integrated circuit manufacturing. Conclusions We show that the relevant mapping relations may not be Abelian, and that these problems cannot yet be resolved because the interactomes and manufacturomes are incomplete. PMID:21689427

  15. Interactomic approach for evaluating nucleophosmin-binding proteins as biomarkers for Ewing's sarcoma.

    Science.gov (United States)

    Haga, Ayako; Ogawara, Yoko; Kubota, Daisuke; Kitabayashi, Issay; Murakami, Yasufumi; Kondo, Tadashi

    2013-06-01

    Nucleophosmin (NPM) is a novel prognostic biomarker for Ewing's sarcoma. To evaluate the prognostic utility of NPM, we conducted an interactomic approach to characterize the NPM protein complex in Ewing's sarcoma cells. A gene suppression assay revealed that NPM promoted cell proliferation and the invasive properties of Ewing's sarcoma cells. FLAG-tag-based affinity purification coupled with liquid chromatography-tandem mass spectrometry identified 106 proteins in the NPM protein complex. The functional classification suggested that the NPM complex participates in critical biological events, including ribosome biogenesis, regulation of transcription and translation, and protein folding, that are mediated by these proteins. In addition to JAK1, a candidate prognostic biomarker for Ewing's sarcoma, the NPM complex includes 11 proteins known as prognostic biomarkers for other malignancies. Meta-analysis of gene expression profiles of 32 patients with Ewing's sarcoma revealed that 6 of the 106 proteins were significantly and independently associated with survival period. These observations suggest a functional role as well as prognostic value of these NPM complex proteins in Ewing's sarcoma. Further, our study suggests the potential applications of interactomics in conjunction with meta-analysis for biomarker discovery. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. When Location-Based Services Meet Databases

    Directory of Open Access Journals (Sweden)

    Dik Lun Lee

    2005-01-01

    Full Text Available As location-based services (LBSs) grow to support a larger and larger user community and to provide more and more intelligent services, they must face a few fundamental challenges, including the ability to not only accept coordinates as location data but also manipulate high-level semantics of the physical environment. They must also handle a large amount of location updates and client requests and be able to scale up as their coverage increases. This paper describes some of our research in location modeling and updates and techniques for enhancing system performance by caching and batch processing. It can be observed that the challenges facing LBSs share a lot of similarity with traditional database research (i.e., data modeling, indexing, caching, and query optimization), but the fact that LBSs are built into the physical space and the opportunity to exploit spatial locality in system design shed new light on LBS research.

  17. Integration of multiple biological features yields high confidence human protein interactome.

    Science.gov (United States)

    Karagoz, Kubra; Sevimoglu, Tuba; Arga, Kazim Yalcin

    2016-08-21

    The biological function of a protein is usually determined by its physical interaction with other proteins. Protein-protein interactions (PPIs) are identified through various experimental methods and are stored in curated databases. The noisiness of the existing PPI data is evident, and it is essential that a more reliable data is generated. Furthermore, the selection of a set of PPIs at different confidence levels might be necessary for many studies. Although different methodologies were introduced to evaluate the confidence scores for binary interactions, a highly reliable, almost complete PPI network of Homo sapiens is not proposed yet. The quality and coverage of human protein interactome need to be improved to be used in various disciplines, especially in biomedicine. In the present work, we propose an unsupervised statistical approach to assign confidence scores to PPIs of H. sapiens. To achieve this goal PPI data from six different databases were collected and a total of 295,288 non-redundant interactions between 15,950 proteins were acquired. The present scoring system included the context information that was assigned to PPIs derived from eight biological attributes. A high confidence network, which included 147,923 binary interactions between 13,213 proteins, had scores greater than the cutoff value of 0.80, for which sensitivity, specificity, and coverage were 94.5%, 80.9%, and 82.8%, respectively. We compared the present scoring method with others for evaluation. Reducing the noise inherent in experimental PPIs via our scoring scheme increased the accuracy significantly. As it was demonstrated through the assessment of process and cancer subnetworks, this study allows researchers to construct and analyze context-specific networks via valid PPI sets and one can easily achieve subnetworks around proteins of interest at a specified confidence level. Copyright © 2016 Elsevier Ltd. All rights reserved.
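
    The sensitivity and specificity reported for the 0.80 cutoff follow from comparing scored interactions against a gold-standard reference set. A small sketch of that bookkeeping with invented scores and labels:

      # Sketch: sensitivity/specificity of a confidence-scored PPI set at a cutoff,
      # given a gold-standard positive/negative reference (toy data below).
      scored = [  # (interaction id, confidence score, True if gold-standard positive)
          ("A-B", 0.95, True), ("A-C", 0.85, True), ("B-D", 0.70, True),
          ("C-E", 0.90, False), ("D-E", 0.40, False), ("E-F", 0.30, False),
      ]
      cutoff = 0.80
      tp = sum(1 for _, s, pos in scored if s >= cutoff and pos)
      fn = sum(1 for _, s, pos in scored if s < cutoff and pos)
      tn = sum(1 for _, s, pos in scored if s < cutoff and not pos)
      fp = sum(1 for _, s, pos in scored if s >= cutoff and not pos)

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")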

  18. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    International Nuclear Information System (INIS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-01-01

    We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it 'multi-tier'. The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, are discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing will be outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described. And application characteristics of GUMS and VOMS which enable effective clustering will be explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
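
    The F5/BIG-IP switch and MySQL replication details are specific to the BNL deployment, but the behaviour they provide to clients, transparent fail-over to a healthy database server, can be sketched generically. The hostnames and credentials below are placeholders, the PyMySQL client library is an assumed dependency, and the check here is client-side, unlike the switch-level solution described in the paper:

      # Generic sketch of client-side fail-over across replicated MySQL servers
      # (hypothetical hostnames; the paper's setup does this at the F5/BIG-IP layer).
      import pymysql

      DB_HOSTS = ["gums-db1.example.org", "gums-db2.example.org"]  # primary, replica

      def get_connection(user="gums", password="secret", db="gums"):
          """Try each server in order and return the first healthy connection."""
          last_error = None
          for host in DB_HOSTS:
              try:
                  conn = pymysql.connect(host=host, user=user,
                                         password=password, database=db,
                                         connect_timeout=3)
                  with conn.cursor() as cur:   # cheap health check
                      cur.execute("SELECT 1")
                  return conn
              except pymysql.MySQLError as exc:
                  last_error = exc
          raise RuntimeError(f"no database server available: {last_error}")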

  19. RNA/DNA Hybrid Interactome Identifies DXH9 as a Molecular Player in Transcriptional Termination and R-Loop-Associated DNA Damage.

    Science.gov (United States)

    Cristini, Agnese; Groh, Matthias; Kristiansen, Maiken S; Gromak, Natalia

    2018-05-08

    R-loops comprise an RNA/DNA hybrid and displaced single-stranded DNA. They play important biological roles and are implicated in pathology. Even so, proteins recognizing these structures are largely undefined. Using affinity purification with the S9.6 antibody coupled to mass spectrometry, we defined the RNA/DNA hybrid interactome in HeLa cells. This consists of known R-loop-associated factors SRSF1, FACT, and Top1, and yet uncharacterized interactors, including helicases, RNA processing, DNA repair, and chromatin factors. We validate specific examples of these interactors and characterize their involvement in R-loop biology. A top candidate DHX9 helicase promotes R-loop suppression and transcriptional termination. DHX9 interacts with PARP1, and both proteins prevent R-loop-associated DNA damage. DHX9 and other interactome helicases are overexpressed in cancer, linking R-loop-mediated DNA damage and disease. Our RNA/DNA hybrid interactome provides a powerful resource to study R-loop biology in health and disease. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  20. Investigation of PARP-1, PARP-2, and PARG interactomes by affinity-purification mass spectrometry

    Directory of Open Access Journals (Sweden)

    Isabelle Maxim

    2010-04-01

    Full Text Available Abstract Background Poly(ADP-ribose polymerases (PARPs catalyze the formation of poly(ADP-ribose (pADPr, a post-translational modification involved in several important biological processes, namely surveillance of genome integrity, cell cycle progression, initiation of the DNA damage response, apoptosis, and regulation of transcription. Poly(ADP-ribose glycohydrolase (PARG, on the other hand, catabolizes pADPr and thereby accounts for the transient nature of poly(ADP-ribosylation. Our investigation of the interactomes of PARP-1, PARP-2, and PARG by affinity-purification mass spectrometry (AP-MS aimed, on the one hand, to confirm current knowledge on these interactomes and, on the other hand, to discover new protein partners which could offer insights into PARPs and PARG functions. Results PARP-1, PARP-2, and PARG were immunoprecipitated from human cells, and pulled-down proteins were separated by gel electrophoresis prior to in-gel trypsin digestion. Peptides were identified by tandem mass spectrometry. Our AP-MS experiments resulted in the identifications of 179 interactions, 139 of which are novel interactions. Gene Ontology analysis of the identified protein interactors points to five biological processes in which PARP-1, PARP-2 and PARG may be involved: RNA metabolism for PARP-1, PARP-2 and PARG; DNA repair and apoptosis for PARP-1 and PARP-2; and glycolysis and cell cycle for PARP-1. Conclusions This study reveals several novel protein partners for PARP-1, PARP-2 and PARG. It provides a global view of the interactomes of these proteins as well as a roadmap to establish the systems biology of poly(ADP-ribose metabolism.

  1. Intelligent databases assist transparent and sound economic valuation of ecosystem services.

    Science.gov (United States)

    Villa, Ferdinando; Ceroni, Marta; Krivov, Sergey

    2007-06-01

    Assessment and economic valuation of services provided by ecosystems to humans has become a crucial phase in environmental management and policy-making. As primary valuation studies are out of the reach of many institutions, secondary valuation or benefit transfer, where the results of previous studies are transferred to the geographical, environmental, social, and economic context of interest, is becoming increasingly common. This has brought to light the importance of environmental valuation databases, which provide reliable valuation data to inform secondary valuation with enough detail to enable the transfer of values across contexts. This paper describes the role of next-generation, intelligent databases (IDBs) in assisting the activity of valuation. Such databases employ artificial intelligence to inform the transfer of values across contexts, enforcing comparability of values and allowing users to generate custom valuation portfolios that synthesize previous studies and provide aggregated value estimates to use as a base for secondary valuation. After a general introduction, we introduce the Ecosystem Services Database, the first IDB for environmental valuation to be made available to the public, describe its functionalities and the lessons learned from its usage, and outline the remaining needs and expected future developments in the field.

  2. Chapter 51: How to Build a Simple Cone Search Service Using a Local Database

    Science.gov (United States)

    Kent, B. R.; Greene, G. R.

    The cone search service protocol will be examined from the server side in this chapter. A simple cone search service will be setup and configured locally using MySQL. Data will be read into a table, and the Java JDBC will be used to connect to the database. Readers will understand the VO cone search specification and how to use it to query a database on their local systems and return an XML/VOTable file based on an input of RA/DEC coordinates and a search radius. The cone search in this example will be deployed as a Java servlet. The resulting cone search can be tested with a verification service. This basic setup can be used with other languages and relational databases.
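
    A cone search ultimately reduces to an angular-separation filter in SQL. The chapter's example uses Java servlets and JDBC; the sketch below expresses the equivalent query logic from Python against a local MySQL table, using the PyMySQL client as an assumed dependency and hypothetical table and column names (the servlet described in the chapter would then serialize such rows as a VOTable):

      # Sketch of the core cone-search query against a local MySQL table
      # (hypothetical table/columns: objects(name, ra_deg, dec_deg)).
      import pymysql

      def cone_search(conn, ra, dec, radius_deg):
          """Return objects within radius_deg of (ra, dec), all in degrees."""
          sql = """
              SELECT name, ra_deg, dec_deg FROM objects
              WHERE DEGREES(ACOS(LEAST(1.0,
                    SIN(RADIANS(dec_deg)) * SIN(RADIANS(%s)) +
                    COS(RADIANS(dec_deg)) * COS(RADIANS(%s)) *
                    COS(RADIANS(ra_deg - %s))))) <= %s
          """
          with conn.cursor() as cur:
              cur.execute(sql, (dec, dec, ra, radius_deg))
              return cur.fetchall()

      # Example use (connection parameters are placeholders):
      # conn = pymysql.connect(host="localhost", user="vo", password="...", database="catalog")
      # rows = cone_search(conn, ra=180.0, dec=2.5, radius_deg=0.1)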

  3. Bcl2-associated Athanogene 3 Interactome Analysis Reveals a New Role in Modulating Proteasome Activity*

    Science.gov (United States)

    Chen, Ying; Yang, Li-Na; Cheng, Li; Tu, Shun; Guo, Shu-Juan; Le, Huang-Ying; Xiong, Qian; Mo, Ran; Li, Chong-Yang; Jeong, Jun-Seop; Jiang, Lizhi; Blackshaw, Seth; Bi, Li-Jun; Zhu, Heng; Tao, Sheng-Ce; Ge, Feng

    2013-01-01

    Bcl2-associated athanogene 3 (BAG3), a member of the BAG family of co-chaperones, plays a critical role in regulating apoptosis, development, cell motility, autophagy, and tumor metastasis and in mediating cell adaptive responses to stressful stimuli. BAG3 carries a BAG domain, a WW domain, and a proline-rich repeat (PXXP), all of which mediate binding to different partners. To elucidate BAG3's interaction network at the molecular level, we employed quantitative immunoprecipitation combined with knockdown and human proteome microarrays to comprehensively profile the BAG3 interactome in humans. We identified a total of 382 BAG3-interacting proteins with diverse functions, including transferase activity, nucleic acid binding, transcription factors, proteases, and chaperones, suggesting that BAG3 is a critical regulator of diverse cellular functions. In addition, we characterized interactions between BAG3 and some of its newly identified partners in greater detail. In particular, bioinformatic analysis revealed that the BAG3 interactome is strongly enriched in proteins functioning within the proteasome-ubiquitination process and that compose the proteasome complex itself, suggesting that a critical biological function of BAG3 is associated with the proteasome. Functional studies demonstrated that BAG3 indeed interacts with the proteasome and modulates its activity, sustaining cell survival and underlying resistance to therapy through the down-modulation of apoptosis. Taken as a whole, this study expands our knowledge of the BAG3 interactome, provides a valuable resource for understanding how BAG3 affects different cellular functions, and demonstrates that biologically relevant data can be harvested using this kind of integrated approach. PMID:23824909

  4. The Footprint Database and Web Services of the Herschel Space Observatory

    Science.gov (United States)

    Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba

    2016-10-01

    Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data
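
    Because the footprint service is exposed as REST functions, it can be called from scripts. The sketch below shows the general pattern with the Python requests library; the endpoint path and parameter names are hypothetical placeholders and would need to be taken from the service documentation at http://herschel.vo.elte.hu.

      # Generic sketch of querying a REST footprint service from Python.
      # The endpoint path and parameter names below are hypothetical placeholders.
      import requests

      BASE = "http://herschel.vo.elte.hu"

      def observations_covering(ra_deg, dec_deg, radius_deg):
          """Ask the (hypothetical) REST endpoint for observations covering a cone."""
          resp = requests.get(f"{BASE}/api/footprints",          # placeholder path
                              params={"ra": ra_deg, "dec": dec_deg,
                                      "radius": radius_deg, "format": "json"},
                              timeout=30)
          resp.raise_for_status()
          return resp.json()

      # for obs in observations_covering(83.82, -5.39, 0.5):  # e.g. around the Orion Nebula
      #     print(obs)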

  5. Pre-Service Teachers' Use of Library Databases: Some Insights

    Science.gov (United States)

    Lamb, Janeen; Howard, Sarah; Easey, Michael

    2014-01-01

    The aim of this study is to investigate if providing mathematics education pre-service teachers with animated library tutorials on library and database searches changes their searching practices. This study involved the completion of a survey by 138 students and seven individual interviews before and after library search demonstration videos were…

  6. Accessing the SEED genome databases via Web services API: tools for programmers.

    Science.gov (United States)

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

    The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large volume downloads all at once. The API ensures timely access to the most current datasets available, including the new genomes as soon as they come online.

  7. Improved microarray-based decision support with graph encoded interactome data.

    Directory of Open Access Journals (Sweden)

    Anneleen Daemen

    Full Text Available In the past, microarray studies have been criticized due to noise and the limited overlap between gene signatures. Prior biological knowledge should therefore be incorporated as side information in models based on gene expression data to improve the accuracy of diagnosis and prognosis in cancer. As prior knowledge, we investigated interaction and pathway information from the human interactome on different aspects of biological systems. By exploiting the properties of kernel methods, relations between genes with similar functions but active in alternative pathways could be incorporated in a support vector machine classifier based on spectral graph theory. Using 10 microarray data sets, we first reduced the number of data sources relevant for multiple cancer types and outcomes. Three sources on metabolic pathway information (KEGG), protein-protein interactions (OPHID) and miRNA-gene targeting (microRNA.org) outperformed the other sources with regard to the considered class of models. Both fixed and adaptive approaches were subsequently considered to combine the three corresponding classifiers. Averaging the predictions of these classifiers performed best and was significantly better than the model based on microarray data only. These results were confirmed on 6 validation microarray sets, with a significantly improved performance in 4 of them. Integrating interactome data thus improves classification of cancer outcome for the investigated microarray technologies and cancer types. Moreover, this strategy can be incorporated in any kernel method or non-linear version of a non-kernel method.
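
    The best-performing combination reported above is a simple average of the predictions of the three single-source classifiers. A hedged sketch of that late-fusion step, with placeholder probabilities rather than the kernel classifiers used in the study:

      # Sketch of late fusion by averaging classifier outputs from three data
      # sources (placeholder values; the paper combines kernel-based classifiers
      # built on KEGG, OPHID and microRNA.org-derived information).
      import numpy as np

      # Per-source predicted probabilities of poor outcome for 5 patients.
      p_kegg  = np.array([0.80, 0.30, 0.55, 0.20, 0.90])
      p_ophid = np.array([0.70, 0.40, 0.60, 0.25, 0.85])
      p_mirna = np.array([0.75, 0.20, 0.50, 0.35, 0.95])

      p_avg = np.mean([p_kegg, p_ophid, p_mirna], axis=0)  # fixed, equal-weight fusion
      labels = (p_avg >= 0.5).astype(int)                   # final class call
      print(p_avg, labels)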

  8. Sequential Elution Interactome Analysis of the Mind Bomb 1 Ubiquitin Ligase Reveals a Novel Role in Dendritic Spine Outgrowth*

    Science.gov (United States)

    Mertz, Joseph; Tan, Haiyan; Pagala, Vishwajeeth; Bai, Bing; Chen, Ping-Chung; Li, Yuxin; Cho, Ji-Hoon; Shaw, Timothy; Wang, Xusheng; Peng, Junmin

    2015-01-01

    The mind bomb 1 (Mib1) ubiquitin ligase is essential for controlling metazoan development by Notch signaling and possibly the Wnt pathway. It is also expressed in postmitotic neurons and regulates neuronal morphogenesis and synaptic activity by mechanisms that are largely unknown. We sought to comprehensively characterize the Mib1 interactome and study its potential function in neuron development utilizing a novel sequential elution strategy for affinity purification, in which Mib1 binding proteins were eluted under different stringency and then quantified by the isobaric labeling method. The strategy identified the Mib1 interactome with both deep coverage and the ability to distinguish high-affinity partners from low-affinity partners. A total of 817 proteins were identified during the Mib1 affinity purification, including 56 high-affinity partners and 335 low-affinity partners, whereas the remaining 426 proteins are likely copurified contaminants or extremely weak binding proteins. The analysis detected all previously known Mib1-interacting proteins and revealed a large number of novel components involved in Notch and Wnt pathways, endocytosis and vesicle transport, the ubiquitin-proteasome system, cellular morphogenesis, and synaptic activities. Immunofluorescence studies further showed colocalization of Mib1 with five selected proteins: the Usp9x (FAM) deubiquitinating enzyme, alpha-, beta-, and delta-catenins, and CDKL5. Mutations of CDKL5 are associated with early infantile epileptic encephalopathy-2 (EIEE2), a severe form of mental retardation. We found that the expression of Mib1 down-regulated the protein level of CDKL5 by ubiquitination, and antagonized CDKL5 function during the formation of dendritic spines. Thus, the sequential elution strategy enables biochemical characterization of protein interactomes; and Mib1 analysis provides a comprehensive interactome for investigating its role in signaling networks and neuronal development. PMID:25931508

  9. Protecting Database Centric Web Services against SQL/XPath Injection Attacks

    Science.gov (United States)

    Laranjeiro, Nuno; Vieira, Marco; Madeira, Henrique

    Web services represent a powerful interface for back-end database systems and are increasingly being used in business critical applications. However, field studies show that a large number of web services are deployed with security flaws (e.g., having SQL Injection vulnerabilities). Although several techniques for the identification of security vulnerabilities have been proposed, developing non-vulnerable web services is still a difficult task. In fact, security-related concerns are hard to apply as they involve adding complexity to already complex code. This paper proposes an approach to secure web services against SQL and XPath Injection attacks, by transparently detecting and aborting service invocations that try to take advantage of potential vulnerabilities. Our mechanism was applied to secure several web services specified by the TPC-App benchmark, showing to be 100% effective in stopping attacks, non-intrusive and very easy to use.
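
    The mechanism described here learns the legitimate structure of the queries a service issues and aborts invocations whose queries deviate from it. A much simplified sketch of that idea follows; the regex-based normalisation is illustrative only and far cruder than the detection described in the paper:

      # Simplified sketch of structure-based SQL injection detection: learn the
      # "skeletons" of legitimate queries, then abort any query whose skeleton is new.
      import re

      def skeleton(sql):
          """Strip literals so only the structural shape of the query remains."""
          s = re.sub(r"'[^']*'", "?", sql)      # string literals -> ?
          s = re.sub(r"\b\d+\b", "?", s)        # numeric literals -> ?
          return re.sub(r"\s+", " ", s).strip().lower()

      # Learning phase: skeletons observed during attack-free operation.
      learned = {skeleton("SELECT * FROM users WHERE id = 42"),
                 skeleton("SELECT * FROM items WHERE name = 'book'")}

      def check(sql):
          """Abort the service invocation if the query structure was never learned."""
          if skeleton(sql) not in learned:
              raise PermissionError(f"aborting suspicious query: {sql}")

      check("SELECT * FROM users WHERE id = 7")                 # passes
      # check("SELECT * FROM users WHERE id = 7 OR '1'='1'")    # would be aborted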

  10. IIS--Integrated Interactome System: a web-based platform for the annotation, analysis and visualization of protein-metabolite-gene-drug interactions by integrating a variety of data sources and tools.

    Science.gov (United States)

    Carazzolle, Marcelo Falsarella; de Carvalho, Lucas Miguel; Slepicka, Hugo Henrique; Vidal, Ramon Oliveira; Pereira, Gonçalo Amarante Guimarães; Kobarg, Jörg; Meirelles, Gabriela Vaz

    2014-01-01

    High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives in the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and result in a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) Annotation module, which assigns annotations from several databases for the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather novel identified interactions, protein and metabolite expression/concentration levels, subcellular localization and computed topological metrics, GO biological processes and KEGG pathways enrichment. This module generates a XGMML file that can be imported into Cytoscape or be visualized directly on the web. We have developed IIS by the integration of diverse databases following the need of appropriate tools for a systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid data.

  11. Footprint Database and web services for the Herschel space observatory

    Science.gov (United States)

    Verebélyi, Erika; Dobos, László; Kiss, Csaba

    2015-08-01

    Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available for everyone but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database, sky coverage is stored in exact geometric form allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions which makes it usable from program clients like Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.
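
    Storing footprints in exact geometric form makes area calculations exact rather than limited by pixel size. As a simple, hedged illustration of what exact spherical area means (not the database's own algorithm), the area of a circular footprint of angular radius theta is the spherical-cap area 2*pi*(1 - cos(theta)) steradians, which a flat-sky approximation only matches for small radii:

      # Exact spherical-cap area of a circular footprint compared with the flat
      # (small-angle) approximation pi*theta^2 (illustration only).
      import math

      def cap_area_sr(theta_deg):
          """Exact area in steradians of a spherical cap of angular radius theta."""
          return 2.0 * math.pi * (1.0 - math.cos(math.radians(theta_deg)))

      def flat_area_sr(theta_deg):
          """Flat-sky approximation pi * theta^2 (theta converted to radians)."""
          return math.pi * math.radians(theta_deg) ** 2

      for theta in (0.1, 1.0, 10.0):   # degrees
          exact, approx = cap_area_sr(theta), flat_area_sr(theta)
          print(f"theta={theta:5.1f} deg  exact={exact:.6e} sr  "
                f"flat={approx:.6e} sr  rel.err={(approx - exact) / exact:.2e}")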

  12. Fedora Content Modelling for Improved Services for Research Databases

    DEFF Research Database (Denmark)

    Elbæk, Mikael Karstensen; Heller, Alfred; Pedersen, Gert Schmeltz

    A re-implementation of the research database of the Technical University of Denmark, DTU, is based on Fedora. The backbone consists of content models for primary and secondary entities and their relationships, giving flexible and powerful extraction capabilities for interoperability and reporting. By adopting such an abstract data model, the platform enables new and improved services for researchers, librarians and administrators.

  13. Atomic collision databases and data services -- A survey

    International Nuclear Information System (INIS)

    Schultz, D.R.

    1997-01-01

    Atomic collision databases and data services constitute an important resource for scientific and engineering applications such as astrophysics, lighting, materials processing, and fusion energy, as well as an important knowledge base for current developments in atomic collision physics. Data centers and research groups provide these resources through a chain of efforts that include producing and collecting primary data, performing evaluation of the existing data, deducing scaling laws and semiempirical formulas to compactly describe and extend the data, producing the recommended sets of data, and providing convenient means of maintaining, updating, and disseminating the results of this process. The latest efforts have utilized modern database, storage, and distribution technologies including the Internet and World Wide Web. Given here is an informal survey of how these resources have developed, how they are currently characterized, and what their likely evolution will lead them to become in the future

  14. The chicken B-cell line DT40 proteome, beadome and interactomes

    Directory of Open Access Journals (Sweden)

    Johanna S. Rees

    2015-06-01

    Full Text Available In developing a new quantitative AP-MS method for exploring interactomes in the chicken B-cell line DT40, we also surveyed the most abundant proteins in this organism and explored the likely contaminants that bind to a variety of affinity resins that would later be confirmed quantitatively [1]. We present the ‘Top 150 abundant DT40 proteins list’, the DT40 beadomes as well as protein interaction lists for the Phosphatidyl inositol 5-phosphate 4-kinase 2β and Fanconi anaemia protein complexes.

  15. The L1TD1 Protein Interactome Reveals the Importance of Post-transcriptional Regulation in Human Pluripotency

    Directory of Open Access Journals (Sweden)

    Maheswara Reddy Emani

    2015-03-01

    Full Text Available The RNA-binding protein L1TD1 is one of the most specific and abundant proteins in pluripotent stem cells and is essential for the maintenance of pluripotency in human cells. Here, we identify the protein interaction network of L1TD1 in human embryonic stem cells (hESCs and provide insights into the interactome network constructed in human pluripotent cells. Our data reveal that L1TD1 has an important role in RNA splicing, translation, protein traffic, and degradation. L1TD1 interacts with multiple stem-cell-specific proteins, many of which are still uncharacterized in the context of development. Further, we show that L1TD1 is a part of the pluripotency interactome network of OCT4, SOX2, and NANOG, bridging nuclear and cytoplasmic regulation and highlighting the importance of RNA biology in pluripotency.

  16. A Collisional Database and Web Service within the Virtual Atomic ...

    Indian Academy of Sciences (India)

    MOL-D database is a collection of cross-sections and rate coefficients for specific collisional processes and a web service within the Serbian Virtual Observatory ... Hydrogen and helium molecular ion data are important for calculation of solar and stellar atmosphere models and for radiative transport, as well as for kinetics of ...

  17. Database Description - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata]

    Lifescience Database Archive (English)

    Full Text Available Database Description - Trypanosomes Database. General information: Database name: Trypanosomes Database. Database maintenance site: National Institute of Genetics, Research Organization of Information and Systems, Yata 1111, Mishima, Shizuoka 411-8540, Japan. Organism: Trypanosoma (Taxonomy ID: 5690); Homo sapiens (Taxonomy ID: 9606). External links: PDB (Protein Data Bank), KEGG PATHWAY Database, DrugPort. Entry list: Available. Query search: Available.

  18. DataBase on Demand

    International Nuclear Information System (INIS)

    Aparicio, R Gaspar; Gomez, D; Wojcik, D; Coz, I Coterillo

    2012-01-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) service empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g. presently the open community version of MySQL and a single-instance Oracle database server. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  19. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    Science.gov (United States)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency in 1987. JICST modified the JETOC database system, added data and started the online service through JOIS-F (JICST Online Information Service - Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, data items, files and search commands. An example of an online session is presented.

  20. Identification of human disease genes from interactome network using graphlet interaction.

    Directory of Open Access Journals (Sweden)

    Xiao-Dong Wang

    Full Text Available Identifying genes related to human diseases, such as cancer and cardiovascular disease, is an important task in biomedical research because of its applications in disease diagnosis and treatment. Interactome networks, especially protein-protein interaction networks, have been used for disease gene identification, based on the hypothesis that strong candidate genes tend to be closely related to each other by some measure on the network. We proposed a new measure, called graphlet interaction, to analyze the relationship between network nodes. The graphlet interaction contains 28 different isomers. The results showed that the numbers of graphlet interaction isomers between disease genes in interactome networks were significantly larger than those between randomly picked genes, whereas graphlet signatures were not. We then designed a new type of score, based on the network properties, to identify disease genes using the graphlet interaction. Genes with higher scores were more likely to be disease genes, and all candidate genes were ranked according to their scores. The approach was evaluated by leave-one-out cross-validation. The precision of the current approach reached 90% at about 10% recall, which was clearly higher than that of the three previously predominant algorithms: random walk, Endeavour and the neighborhood-based method. Finally, the approach was applied to predict new disease genes related to 4 common diseases, most of which were confirmed by independent experimental studies. In conclusion, we demonstrate that the graphlet interaction is an effective tool for analyzing the network properties of disease genes, and that scores calculated using graphlet interactions are more precise in identifying disease genes.
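    The ranking and leave-one-out evaluation described above can be illustrated with a short sketch. The snippet below is not the authors' graphlet-interaction code; it substitutes a simple neighbour-count score as a placeholder and shows how each known disease gene can be held out in turn and its rank among candidates recorded, from which precision-recall figures such as the 90% precision at about 10% recall quoted above would be computed.

```python
# A minimal sketch (not the authors' code) of leave-one-out evaluation of a
# candidate-gene ranking; a placeholder neighbour-count score stands in for
# the graphlet-interaction score described in the abstract.
import networkx as nx

def score(gene, seeds, graph):
    """Placeholder score: number of known disease genes adjacent to `gene`."""
    return sum(1 for s in seeds if graph.has_edge(gene, s))

def leave_one_out_ranks(graph, disease_genes):
    ranks = []
    for held_out in disease_genes:
        seeds = set(disease_genes) - {held_out}
        candidates = [g for g in graph if g not in seeds]
        ordered = sorted(candidates, key=lambda g: score(g, seeds, graph), reverse=True)
        ranks.append(ordered.index(held_out) + 1)  # 1-based rank of the held-out gene
    return ranks

# Toy interactome and seed list purely for illustration.
G = nx.karate_club_graph()
known = [0, 1, 2, 33]
print(leave_one_out_ranks(G, known))
```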

  1. Identification of Human Disease Genes from Interactome Network Using Graphlet Interaction

    Science.gov (United States)

    Yang, Lun; Wei, Dong-Qing; Qi, Ying-Xin; Jiang, Zong-Lai

    2014-01-01

    Identifying genes related to human diseases, such as cancer and cardiovascular disease, is an important task in biomedical research because of its applications in disease diagnosis and treatment. Interactome networks, especially protein-protein interaction networks, have been used for disease gene identification, based on the hypothesis that strong candidate genes tend to be closely related to each other by some measure on the network. We proposed a new measure, called graphlet interaction, to analyze the relationship between network nodes. The graphlet interaction contains 28 different isomers. The results showed that the numbers of graphlet interaction isomers between disease genes in interactome networks were significantly larger than those between randomly picked genes, whereas graphlet signatures were not. We then designed a new type of score, based on the network properties, to identify disease genes using the graphlet interaction. Genes with higher scores were more likely to be disease genes, and all candidate genes were ranked according to their scores. The approach was evaluated by leave-one-out cross-validation. The precision of the current approach reached 90% at about 10% recall, which was clearly higher than that of the three previously predominant algorithms: random walk, Endeavour and the neighborhood-based method. Finally, the approach was applied to predict new disease genes related to 4 common diseases, most of which were confirmed by independent experimental studies. In conclusion, we demonstrate that the graphlet interaction is an effective tool for analyzing the network properties of disease genes, and that scores calculated using graphlet interactions are more precise in identifying disease genes. PMID:24465923

  2. Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: FANTOM5. Organism taxonomy (partial): Rattus norvegicus (Taxonomy ID: 10116); Macaca mulatta (Taxonomy ID: 9544). Database maintenance site: RIKEN Center for Life Science Technologies. Web services: not available. Need for user registration: not available.

  3. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    Science.gov (United States)

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
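    As a rough illustration of the kind of artefact BioSemantic produces, the sketch below builds a tiny RDF view by hand and runs a SPARQL query against it with rdflib. The namespace, property names and data are invented for the example and are not BioSemantic's actual output; they only show the shape of an RDF view plus an automatically generated SPARQL query.

```python
# A minimal, hypothetical sketch of the idea: expose rows of a relational
# table as an RDF view and query it with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/plantdb/")  # hypothetical namespace
g = Graph()

# Pretend these triples were generated from a relational `gene` table.
rows = [("g1", "Os01g0100100", "Oryza sativa"), ("g2", "AT1G01010", "Arabidopsis thaliana")]
for pk, locus, species in rows:
    subj = EX[pk]
    g.add((subj, RDF.type, EX.Gene))
    g.add((subj, EX.locus, Literal(locus)))
    g.add((subj, EX.species, Literal(species)))

# An automatically generated SPARQL query would look roughly like this.
query = """
PREFIX ex: <http://example.org/plantdb/>
SELECT ?locus WHERE { ?gene a ex:Gene ; ex:species "Oryza sativa" ; ex:locus ?locus . }
"""
for row in g.query(query):
    print(row.locus)
```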

  4. Development of a New Web Portal for the Database on Demand Service

    CERN Document Server

    Altinigne, Can Yilmaz

    2017-01-01

    The Database on Demand service allows members of CERN communities to provision and manage database instances of different flavours (MySQL, Oracle, PostgreSQL and InfluxDB). Users can create and edit these instances using the web interface of DB On Demand. This web front end is currently built on Java technologies and the ZK web framework, for which it is generally difficult to find experienced developers and which has fallen behind more modern web stacks in capabilities and usability.

  5. Crowd sourcing a new paradigm for interactome driven drug target identification in Mycobacterium tuberculosis.

    Directory of Open Access Journals (Sweden)

    Rohit Vashisht

    Full Text Available A decade since the availability of the Mycobacterium tuberculosis (Mtb) genome sequence, no promising drug has seen the light of the day. This not only indicates the challenges in discovering new drugs but also suggests a gap in our current understanding of Mtb biology. We attempt to bridge this gap by carrying out extensive re-annotation and constructing a systems-level protein interaction map of Mtb with the objective of finding novel drug target candidates. Towards this, we synergized crowd sourcing and social networking methods through an initiative 'Connect to Decode' (C2D) to generate the first and largest manually curated interactome of Mtb, termed the 'interactome pathway' (IPW), encompassing a total of 1434 proteins connected through 2575 functional relationships. Interactions leading to gene regulation, signal transduction, metabolism and structural complex formation have been catalogued. In the process, we have functionally annotated 87% of the Mtb genome in the context of gene products. We further combine IPW with a STRING-based network to report central proteins, which may be assessed as potential drug targets for the development of drugs with the least possible side effects. The fact that five of the 17 predicted drug targets are already experimentally validated, either genetically or biochemically, lends credence to our unique approach.
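    The notion of 'central proteins' in the combined IPW/STRING network can be sketched with standard graph centrality measures. The snippet below is an assumption-laden illustration, not the C2D pipeline: it ranks nodes of a toy interaction list by degree and betweenness centrality and keeps the overlap as a candidate short-list.

```python
# A rough sketch of short-listing central proteins from an interaction network.
import networkx as nx

def central_proteins(edges, top_n=17):
    g = nx.Graph()
    g.add_edges_from(edges)
    degree = nx.degree_centrality(g)
    betweenness = nx.betweenness_centrality(g)
    by_degree = sorted(degree, key=degree.get, reverse=True)[:top_n]
    by_betweenness = sorted(betweenness, key=betweenness.get, reverse=True)[:top_n]
    return sorted(set(by_degree) & set(by_betweenness))

# Toy functional-relationship list standing in for the 2575 IPW interactions.
toy_edges = [("rpoB", "sigA"), ("sigA", "katG"), ("katG", "ahpC"),
             ("sigA", "rpoC"), ("rpoB", "rpoC")]
print(central_proteins(toy_edges, top_n=3))
```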

  6. Crowd Sourcing a New Paradigm for Interactome Driven Drug Target Identification in Mycobacterium tuberculosis

    Science.gov (United States)

    Rohira, Harsha; Bhat, Ashwini G.; Passi, Anurag; Mukherjee, Keya; Choudhary, Kumari Sonal; Kumar, Vikas; Arora, Anshula; Munusamy, Prabhakaran; Subramanian, Ahalyaa; Venkatachalam, Aparna; S, Gayathri; Raj, Sweety; Chitra, Vijaya; Verma, Kaveri; Zaheer, Salman; J, Balaganesh; Gurusamy, Malarvizhi; Razeeth, Mohammed; Raja, Ilamathi; Thandapani, Madhumohan; Mevada, Vishal; Soni, Raviraj; Rana, Shruti; Ramanna, Girish Muthagadhalli; Raghavan, Swetha; Subramanya, Sunil N.; Kholia, Trupti; Patel, Rajesh; Bhavnani, Varsha; Chiranjeevi, Lakavath; Sengupta, Soumi; Singh, Pankaj Kumar; Atray, Naresh; Gandhi, Swati; Avasthi, Tiruvayipati Suma; Nisthar, Shefin; Anurag, Meenakshi; Sharma, Pratibha; Hasija, Yasha; Dash, Debasis; Sharma, Arun; Scaria, Vinod; Thomas, Zakir; Chandra, Nagasuma; Brahmachari, Samir K.; Bhardwaj, Anshu

    2012-01-01

    A decade since the availability of the Mycobacterium tuberculosis (Mtb) genome sequence, no promising drug has seen the light of the day. This not only indicates the challenges in discovering new drugs but also suggests a gap in our current understanding of Mtb biology. We attempt to bridge this gap by carrying out extensive re-annotation and constructing a systems-level protein interaction map of Mtb with the objective of finding novel drug target candidates. Towards this, we synergized crowd sourcing and social networking methods through an initiative ‘Connect to Decode’ (C2D) to generate the first and largest manually curated interactome of Mtb, termed the ‘interactome pathway’ (IPW), encompassing a total of 1434 proteins connected through 2575 functional relationships. Interactions leading to gene regulation, signal transduction, metabolism and structural complex formation have been catalogued. In the process, we have functionally annotated 87% of the Mtb genome in the context of gene products. We further combine IPW with a STRING-based network to report central proteins, which may be assessed as potential drug targets for the development of drugs with the least possible side effects. The fact that five of the 17 predicted drug targets are already experimentally validated, either genetically or biochemically, lends credence to our unique approach. PMID:22808064

  7. Functional interactome of Aquaporin 1 sub-family reveals new physiological functions in Arabidopsis thaliana

    Directory of Open Access Journals (Sweden)

    Mohamed Ragab Abdel Gawwad

    2013-09-01

    Full Text Available Aquaporins are channel proteins found in the plasma membrane and in the intracellular membranes of different cellular compartments that facilitate the flux of water, solutes and gases across cellular membranes. The present study focuses on the plasma membrane intrinsic protein (PIP) sub-family, predicting the 3-D structure and analyzing the functional interactome of its homologs. PIP1 homologs interact with many proteins with different physiological roles in Arabidopsis thaliana: PIP1A and PIP1B facilitate the transport of water and the diffusion of amino acids and/or peptides from the vacuolar compartment to the cytoplasm, play a role in the control of cell turgor and cell expansion, and are involved in root water uptake. In addition, we found that PIP1B plays a defensive role against Pseudomonas syringae infection through interaction with the plasma membrane Rps2 protein. Another substantial function of PIP1C, via its interaction with PIP2E, is the response to nematode infection. Overall, the PIP1 sub-family interactome controls many physiological processes in the plant cell, including osmoregulation under high osmotic stress (such as high salt), the response to nematodes, water transport across the cell membrane and the regulation of floral initiation in Arabidopsis thaliana.

  8. In vitro nuclear interactome of the HIV-1 Tat protein.

    LENUS (Irish Health Repository)

    Gautier, Virginie W

    2009-01-01

    BACKGROUND: One facet of the complexity underlying the biology of HIV-1 resides not only in its limited number of viral proteins, but in the extensive repertoire of cellular proteins they interact with and their higher-order assembly. HIV-1 encodes the regulatory protein Tat (86-101 aa), which is essential for HIV-1 replication and primarily orchestrates HIV-1 provirus transcriptional regulation. Previous studies have demonstrated that Tat function is highly dependent on specific interactions with a range of cellular proteins. However, they can only partially account for the intricate molecular mechanisms underlying the dynamics of proviral gene expression. To obtain a comprehensive nuclear interaction map of Tat in T-cells, we designed a proteomic strategy based on affinity chromatography coupled with mass spectrometry. RESULTS: Our approach resulted in the identification of a total of 183 candidates as Tat nuclear partners, 90% of which have not been previously characterised. Subsequently, we applied in silico analysis to validate and characterise our dataset, which revealed that the Tat nuclear interactome exhibits unique signature(s). First, motif composition analysis highlighted that our dataset is enriched for domains mediating protein, RNA and DNA interactions, and helicase and ATPase activities. Secondly, functional classification and network reconstruction clearly depicted Tat as a polyvalent protein adaptor and positioned Tat at the nexus of a densely interconnected interaction network involved in a range of biological processes including gene expression regulation, RNA biogenesis, chromatin structure, chromosome organisation, DNA replication and nuclear architecture. CONCLUSION: We have completed the in vitro Tat nuclear interactome and have highlighted its modular network properties, particularly those involved in the coordination of gene expression by Tat. Ultimately, the highly specialised set of molecular interactions identified will

  9. EXFOR-CINDA-ENDF: Migration of Databases to Give Higher-Quality Nuclear Data Services

    International Nuclear Information System (INIS)

    Zerkin, V.V.; McLane, V.; Herman, M.W.; Dunford, C.L.

    2005-01-01

    Extensive work began in 1999 to migrate the EXFOR, CINDA, and ENDF nuclear reaction databases, and to convert the available nuclear data services from VMS to a modern computing environment. This work has been performed through co-operative efforts between the IAEA Nuclear Data Section (IAEA-NDS) and the National Nuclear Data Center (NNDC), Brookhaven National Laboratory. The project also afforded the opportunity to make general revisions and improvements to the nuclear reaction data services by taking account of past experience with the old system and users' feedback. A main goal of the project was to implement the databases in a relational form that provides full functionality for maintenance by data centre staff and improved retrieval capability for external users. As a result, the quality of our nuclear data services has significantly improved, with better functionality of the system, better accessibility of data, and improved data retrieval functions for users involved in a wide range of applications.

  10. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    Science.gov (United States)

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  11. Monitoring of services with non-relational databases and map-reduce framework

    International Nuclear Information System (INIS)

    Babik, M; Souto, F

    2012-01-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
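    The map-reduce style of aggregation mentioned above can be sketched without any particular framework. The record layout and field names below are invented for illustration; the map phase emits (site, passed/total) pairs from raw SWAT-like job results and the reduce phase folds them into per-site availability.

```python
# A simplified, framework-free map-reduce sketch of availability aggregation.
from collections import defaultdict

records = [
    {"site": "CERN-PROD", "test": "cvmfs", "ok": True},
    {"site": "CERN-PROD", "test": "cvmfs", "ok": False},
    {"site": "FZK-LCG2", "test": "cvmfs", "ok": True},
]

def map_phase(record):
    # Emit (key, value) pairs: (site, (passed, total)) per raw test result.
    yield record["site"], (1 if record["ok"] else 0, 1)

def reduce_phase(pairs):
    totals = defaultdict(lambda: [0, 0])
    for site, (passed, total) in pairs:
        totals[site][0] += passed
        totals[site][1] += total
    return {site: passed / total for site, (passed, total) in totals.items()}

availability = reduce_phase(kv for rec in records for kv in map_phase(rec))
print(availability)  # e.g. {'CERN-PROD': 0.5, 'FZK-LCG2': 1.0}
```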

  12. THE ROLE OF DATABASE MARKETING IN THE OPERATIONALIZATION OF THE SERVICES RELATIONSHIP MARKETING

    OpenAIRE

    Luigi DUMITRESCU; Mircea FUCIU

    2010-01-01

    Relationship marketing aims at building a durable relationship between the enterprise and the final client, identified at an individual level. The distinctive part of relationship marketing rests on two main concepts: individuality and the relation. This paper presents the concepts of relationship marketing, database marketing and geomarketing. We present the importance of implementing a marketing database in a service-providing enterprise and its implications on one hand for the client...

  13. Allie: a database and a search service of abbreviations and long forms

    Science.gov (United States)

    Yamamoto, Yasunori; Yamaguchi, Atsuko; Bono, Hidemasa; Takagi, Toshihisa

    2011-01-01

    Many abbreviations are used in the literature especially in the life sciences, and polysemous abbreviations appear frequently, making it difficult to read and understand scientific papers that are outside of a reader’s expertise. Thus, we have developed Allie, a database and a search service of abbreviations and their long forms (a.k.a. full forms or definitions). Allie searches for abbreviations and their corresponding long forms in a database that we have generated based on all titles and abstracts in MEDLINE. When a user query matches an abbreviation, Allie returns all potential long forms of the query along with their bibliographic data (i.e. title and publication year). In addition, for each candidate, co-occurring abbreviations and a research field in which it frequently appears in the MEDLINE data are displayed. This function helps users learn about the context in which an abbreviation appears. To deal with synonymous long forms, we use a dictionary called GENA that contains domain-specific terms such as gene, protein or disease names along with their synonymic information. Conceptually identical domain-specific terms are regarded as one term, and then conceptually identical abbreviation-long form pairs are grouped taking into account their appearance in MEDLINE. To keep up with new abbreviations that are continuously introduced, Allie has an automatic update system. In addition, the database of abbreviations and their long forms with their corresponding PubMed IDs is constructed and updated weekly. Database URL: The Allie service is available at http://allie.dbcls.jp/. PMID:21498548
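    The grouping and lookup behaviour described above can be mimicked with a toy index. The sample abbreviation-long form pairs below are invented; the sketch only shows the idea of returning each distinct long form of a query abbreviation together with the number of supporting records, not Allie's actual implementation.

```python
# A toy abbreviation/long-form index in the spirit of the description above.
from collections import defaultdict

pairs = [
    ("ER", "endoplasmic reticulum", "Some title", 2009),
    ("ER", "estrogen receptor", "Another title", 2011),
    ("ER", "estrogen receptor", "Third title", 2012),
]

index = defaultdict(list)
for short, long_form, title, year in pairs:
    index[short].append({"long_form": long_form, "title": title, "year": year})

def lookup(abbreviation):
    """Return each distinct long form with the number of supporting records."""
    counts = defaultdict(int)
    for hit in index.get(abbreviation, []):
        counts[hit["long_form"]] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

print(lookup("ER"))  # [('estrogen receptor', 2), ('endoplasmic reticulum', 1)]
```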

  14. Interactome maps of mouse gene regulatory domains reveal basic principles of transcriptional regulation

    DEFF Research Database (Denmark)

    Kieffer-Kwon, Kyong-Rim; Tang, Zhonghui; Mathe, Ewy

    2013-01-01

    ChIA-PET technologies were used to map the promoter-enhancer interactomes of pluripotent ES cells and differentiated B lymphocytes. We confirm that enhancer usage varies widely across tissues. Unexpectedly, we find that this feature extends to broadly transcribed genes, including the Myc and Pim1 cell-cycle regulators, which associate with an entirely different set of enhancers in ES and B cells. By means of high-resolution CpG methylomes, genome editing, and digital footprinting, we show that these enhancers recruit lineage-determining factors. Furthermore, we demonstrate that the turning on and off of enhancers during...

  15. THE ROLE OF DATABASE MARKETING IN THE OPERATIONALIZATION OF THE SERVICES RELATIONSHIP MARKETING

    Directory of Open Access Journals (Sweden)

    Luigi DUMITRESCU

    2010-01-01

    Full Text Available Relationship marketing aims at building a durable relationship between the enterprise and the final client, identified at an individual level. The distinctive part of relationship marketing rests on two main concepts: individuality and the relation. This paper presents the concepts of relationship marketing, database marketing and geomarketing. We present the importance of implementing a marketing database in a service-providing enterprise and its implications on one hand for the client and on the other hand for the enterprise. The paper points out the marketing database instruments and the advantages for the elements of the marketing mix. The implementation of a marketing database will help the enterprise to better target and attract clients, to transform them into loyal consumers, and at the same time it can help refresh the image of the enterprise.

  16. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    International Nuclear Information System (INIS)

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-01-01

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. Support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. - Highlights: →We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. →The optimized model consisting of 9 probes had 99% sensitivity and 97% specificity.
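    As a generic illustration of the modelling strategy (a support vector machine combined with wrapper-type feature selection), the scikit-learn sketch below selects a small gene panel with forward sequential selection and evaluates the classifier by cross-validation on synthetic data. It is not the TG-GATEs pipeline, and the 9-feature setting merely echoes the probe count mentioned in the highlights.

```python
# A generic SVM + wrapper-style feature selection sketch on synthetic data.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 100))    # 60 dose groups x 100 probe intensities (synthetic)
y = rng.integers(0, 2, size=60)   # 1 = hepatocarcinogen, 0 = non-carcinogen (synthetic labels)

# Forward (wrapper-style) selection of a 9-probe panel feeding a linear SVM.
model = make_pipeline(
    SequentialFeatureSelector(SVC(kernel="linear"), n_features_to_select=9, cv=3),
    SVC(kernel="linear"),
)
print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
```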

  17. The drug-minded protein interaction database (DrumPID) for efficient target analysis and drug development.

    Science.gov (United States)

    Kunz, Meik; Liang, Chunguang; Nilla, Santosh; Cecil, Alexander; Dandekar, Thomas

    2016-01-01

    The drug-minded protein interaction database (DrumPID) has been designed to provide fast, tailored information on drugs and their protein networks including indications, protein targets and side-targets. Starting queries include compound, target and protein interactions and organism-specific protein families. Furthermore, drug name, chemical structures and their SMILES notation, affected proteins (potential drug targets), organisms as well as diseases can be queried including various combinations and refinement of searches. Drugs and protein interactions are analyzed in detail with reference to protein structures and catalytic domains, related compound structures as well as potential targets in other organisms. DrumPID considers drug functionality, compound similarity, target structure, interactome analysis and organismic range for a compound, useful for drug development, predicting drug side-effects and structure-activity relationships.Database URL:http://drumpid.bioapps.biozentrum.uni-wuerzburg.de. © The Author(s) 2016. Published by Oxford University Press.

  18. Manual of plant producers and services in environmental protection. Database in the field of environmental protection

    International Nuclear Information System (INIS)

    Serve, C.

    1992-01-01

    On the basis of an enquiry, the Stuttgart Chamber of Industry and Commerce produced a database of the services offered by regional and supraregional companies in the field of environmental protection. The data are presented in this manual, classified as follows: noise protection systems; sanitation systems and services; other systems and services. (orig.) [de

  19. The Protein Identifier Cross-Referencing (PICR) service: reconciling protein identifiers across multiple source databases

    Directory of Open Access Journals (Sweden)

    Leinonen Rasko

    2007-10-01

    Full Text Available Abstract Background Each major protein database uses its own conventions when assigning protein identifiers. Resolving the various, potentially unstable, identifiers that refer to identical proteins is a major challenge. This is a common problem when attempting to unify datasets that have been annotated with proteins from multiple data sources, or when querying data providers with one flavour of protein identifiers when the source database uses another. Partial solutions for protein identifier mapping exist but they are limited to specific species or techniques and to a very small number of databases. As a result, we have not found a solution that is generic enough and broad enough in mapping scope to suit our needs. Results We have created the Protein Identifier Cross-Reference (PICR) service, a web application that provides interactive and programmatic (SOAP and REST) access to a mapping algorithm that uses the UniProt Archive (UniParc) as a data warehouse to offer protein cross-references based on 100% sequence identity to proteins from over 70 distinct source databases loaded into UniParc. Mappings can be limited by source database, taxonomic ID and activity status in the source database. Users can copy/paste or upload files containing protein identifiers or sequences in FASTA format to obtain mappings using the interactive interface. Search results can be viewed in simple or detailed HTML tables or downloaded as comma-separated values (CSV) or Microsoft Excel (XLS) files suitable for use in a local database or a spreadsheet. Alternatively, a SOAP interface is available to integrate PICR functionality in other applications, as is a lightweight REST interface. Conclusion We offer a publicly available service that can interactively map protein identifiers and protein sequences to the majority of commonly used protein databases. Programmatic access is available through a standards-compliant SOAP interface or a lightweight REST interface. The PICR
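    Programmatic access of the kind described can be sketched with a small REST client. The endpoint URL, parameter names and response format below are hypothetical placeholders rather than the real PICR interface; a real client should follow the service's own documentation.

```python
# A hedged sketch of calling an identifier-mapping REST service.
import requests

def map_identifier(accession, target_databases):
    """Query a PICR-like identifier-mapping REST service (placeholder endpoint)."""
    url = "https://example.org/identifier-mapping/rest/map"  # hypothetical URL, not the real PICR endpoint
    params = {"accession": accession, "database": target_databases}  # assumed parameter names
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return response.json()  # assumed JSON list of cross-references

# Example call (would only work against a real endpoint):
# print(map_identifier("P12345", ["SWISSPROT", "ENSEMBL"]))
```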

  20. Database Description - ConfC | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: ConfC. Contact: Tamotsu Noguchi, Tel: 042-495-8736. Database classification: Structure Databases - Protein structure; Structure Databases - Small molecules; Structure Databases - Nucleic acid structure.

  1. System-level insights into the cellular interactome of a non-model organism: inferring, modelling and analysing functional gene network of soybean (Glycine max).

    Directory of Open Access Journals (Sweden)

    Yungang Xu

    Full Text Available The cellular interactome, in which genes and/or their products interact on several levels, forming transcriptional regulatory, protein interaction, metabolic and signal transduction networks, among others, has attracted decades of research focus. However, any single type of network alone can hardly explain the various interactive activities among genes. These networks characterize different interaction relationships, implying their unique intrinsic properties and limitations, and covering different slices of biological information. Functional gene networks (FGNs), consolidated interaction networks that model a fuzzy and more generalized notion of gene-gene relations, have been proposed to combine heterogeneous networks with the goal of identifying functional modules supported by multiple interaction types. There are as yet no successful precedents of FGNs for sparsely studied non-model organisms, such as soybean (Glycine max), due to the absence of sufficient heterogeneous interaction data. We present an alternative solution for inferring the FGNs of soybean (SoyFGNs), in a pioneering study on the soybean interactome, which is also applicable to other organisms. SoyFGNs exhibit the typical characteristics of biological networks: a scale-free, small-world architecture and modularization. Verified against co-expression and KEGG pathways, SoyFGNs are more extensive and accurate than an orthology network derived from Arabidopsis. As a case study, network-guided disease-resistance gene discovery indicates that SoyFGNs can support system-level studies of gene functions and interactions. This work suggests that inferring and modelling the interactome of a non-model plant are feasible. It will speed up the discovery and definition of the functions and interactions of other genes that control important functions, such as nitrogen fixation and protein or lipid synthesis. The efforts of the study are the basis of our further comprehensive studies on the soybean functional
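    The 'scale-free, small-world' characterization mentioned above can be checked on any inferred network with a few standard graph statistics. The sketch below uses a synthetic scale-free graph as a stand-in for SoyFGN and reports degree, clustering and path-length summaries with networkx.

```python
# A small sketch of summarizing the topology of an inferred functional gene network.
import networkx as nx

def topology_summary(g):
    degrees = [d for _, d in g.degree()]
    summary = {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "max_degree": max(degrees),
        "mean_degree": sum(degrees) / len(degrees),
        "clustering": nx.average_clustering(g),
    }
    if nx.is_connected(g):
        summary["avg_shortest_path"] = nx.average_shortest_path_length(g)
    return summary

# A scale-free-like toy graph stands in for SoyFGN here.
toy = nx.barabasi_albert_graph(500, 3, seed=1)
print(topology_summary(toy))
```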

  2. A Unified Peer-to-Peer Database Framework for XQueries over Dynamic Distributed Content and its Application for Scalable Service Discovery

    CERN Document Server

    Hoschek, Wolfgang

    In a large distributed system spanning administrative domains such as a Grid, it is desirable to maintain and query dynamic and timely information about active participants such as services, resources and user communities. The web services vision promises that programs are made more flexible and powerful by querying Internet databases (registries) at runtime in order to discover information and network attached third-party building blocks. Services can advertise themselves and related metadata via such databases, enabling the assembly of distributed higher-level components. In support of this vision, this thesis shows how to support expressive general-purpose queries over a view that integrates autonomous dynamic database nodes from a wide range of distributed system topologies. We motivate and justify the assertion that realistic ubiquitous service and resource discovery requires a rich general-purpose query language such as XQuery or SQL. Next, we introduce the Web Service Discovery Architecture (WSDA), wh...

  3. Database Description - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: PSCDB. Contact: Takayuki Amemiya, National Institute of Advanced Industrial Science and Technology (AIST). Database classification: Structure Databases - Protein structure. Reference: pages D554-D558. Database maintenance site: Graduate School of Informat... Web services: not available. Need for user registration: not available.

  4. MitProNet: A knowledgebase and analysis platform of proteome, interactome and diseases for mammalian mitochondria.

    Directory of Open Access Journals (Sweden)

    Jiabin Wang

    Full Text Available The mitochondrion plays a central role in diverse biological processes in most eukaryotes, and its dysfunctions are critically involved in a large number of diseases and the aging process. A systematic identification of mitochondrial proteomes and characterization of functional linkages among mitochondrial proteins are fundamental in understanding the mechanisms underlying biological functions and human diseases associated with mitochondria. Here we present a database, MitProNet, which provides a comprehensive knowledgebase for the mitochondrial proteome, interactome and human diseases. First, an inventory of mammalian mitochondrial proteins was compiled by widely collecting proteomic datasets, and the proteins were classified by machine learning to achieve a high-confidence list of mitochondrial proteins. The current version of MitProNet covers 1124 high-confidence proteins, and the remaining proteins were further classified as middle- or low-confidence. An organelle-specific network of functional linkages among mitochondrial proteins was then generated by integrating genomic features encoded by a wide range of datasets including genomic context, gene expression profiles, protein-protein interactions, functional similarity and metabolic pathways. The functional-linkage network should be a valuable resource for the study of biological functions of mitochondrial proteins and human mitochondrial diseases. Furthermore, we utilized the network to predict candidate genes for mitochondrial diseases using prioritization algorithms. All proteins, functional linkages and disease candidate genes in MitProNet were annotated according to the information collected from their original sources, including GO, GEO, OMIM, KEGG, MIPS, HPRD and so on. MitProNet features a user-friendly graphic visualization interface to present functional analysis of linkage networks. As an up-to-date database and analysis platform, MitProNet should be particularly helpful in comprehensive studies of
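    The prioritization idea, ranking candidates by their functional linkages to known disease genes, can be sketched with a simple neighbourhood score. The gene names and edge weights below are invented; MitProNet's actual prioritization algorithms are more elaborate.

```python
# A simplified neighbourhood-scoring sketch of disease-gene prioritization on a
# weighted functional-linkage network.
import networkx as nx

def prioritize(network, known_disease_genes):
    seeds = set(known_disease_genes)
    scores = {}
    for gene in network:
        if gene in seeds:
            continue
        scores[gene] = sum(network[gene][nbr].get("weight", 1.0)
                           for nbr in network.neighbors(gene) if nbr in seeds)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

g = nx.Graph()
g.add_weighted_edges_from([("NDUFS1", "NDUFS2", 0.9), ("NDUFS2", "GeneX", 0.7),
                           ("GeneX", "GeneY", 0.2), ("NDUFS1", "GeneX", 0.5)])
print(prioritize(g, ["NDUFS1", "NDUFS2"]))
```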

  5. Database Description - RMG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: RMG. Maintenance site: National Institute of Agrobiological Sciences, Ibaraki 305-8602, Japan. Database classification: Nucleotide Sequence Databases. Organism taxonomy: Oryza sativa Japonica Group (Taxonomy ID: 39947). Reference: Mol Genet Genomics (2002) 268: 434-445. Web services: not available. Need for user registration: not available.

  6. GSIMF: a web service based software and database management system for the next generation grids

    International Nuclear Information System (INIS)

    Wang, N; Ananthan, B; Gieraltowski, G; May, E; Vaniachine, A

    2008-01-01

    To process the vast amount of data from high energy physics experiments, physicists rely on Computational and Data Grids; yet, the distribution, installation, and updating of a myriad of different versions of different programs over the Grid environment is complicated, time-consuming, and error-prone. Our Grid Software Installation Management Framework (GSIMF) is a set of Grid Services that has been developed for managing versioned and interdependent software applications and file-based databases over the Grid infrastructure. This set of Grid services provides a mechanism to install software packages on distributed Grid computing elements, thus automating the software and database installation management process on behalf of the users. This enables users to remotely install programs and tap into the computing power provided by Grids

  7. Linkage between the Danish National Health Service Prescription Database, the Danish Fetal Medicine Database, and other Danish registries as a tool for the study of drug safety in pregnancy

    DEFF Research Database (Denmark)

    Pedersen, Lars Henning; Petersen, Olav Bjørn; Nørgaard, Mette

    2016-01-01

    A linked population-based database is being created in Denmark for research on drug safety during pregnancy. It combines information from the Danish National Health Service Prescription Database (with information on all prescriptions reimbursed in Denmark since 2004), the Danish Fetal Medicine...

  8. Database Description - KAIKOcDNA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information: Database name: KAIKOcDNA. Contact: Akiya Jouraku, National Institute of Agrobiological Sciences. Database classification: Nucleotide Sequence Databases. Organism taxonomy: Bombyx mori (Taxonomy ID: 7091). Reference journal: G3 (Bethesda), 2013 Sep, vol. 9. Web services: not available. Need for user registration: not available.

  9. Molecular characterization and interactome analysis of Trypanosoma cruzi tryparedoxin II.

    Science.gov (United States)

    Arias, Diego G; Piñeyro, María Dolores; Iglesias, Alberto A; Guerrero, Sergio A; Robello, Carlos

    2015-04-29

    Trypanosoma cruzi, the causative agent of Chagas disease, possesses two tryparedoxins (TcTXNI and TcTXNII), belonging to the thioredoxin superfamily. TXNs are oxidoreductases which mediate electron transfer between trypanothione and peroxiredoxins. This constitutes a difference with the host cells, in which these activities are mediated by thioredoxins. These differences make TXNs an attractive target for drug development. In a previous work we characterized TcTXNI, including the redox interactome. In this work we extend the study to TcTXNII. We demonstrate that TcTXNII is a transmembrane protein anchored to the surface of the mitochondria and endoplasmic reticulum, with a cytoplasmic orientation of the redox domain. It would be expressed during the metacyclogenesis process. In order to continue with the characterization of the redox interactome of T. cruzi, we designed an active site mutant TcTXNII lacking the resolving cysteine, and through the expression of this mutant protein and incubation with T. cruzi proteins, heterodisulfide complexes were isolated by affinity chromatography and identified by mass spectrometry. This allowed us to identify sixteen TcTXNII-interacting proteins, which are involved in a wide range of cellular processes, indicating the relevance of TcTXNII and contributing to our understanding of the redox interactome of T. cruzi. T. cruzi, the causative agent of Chagas disease, constitutes a major health problem in Latin America. The number of estimated infected persons is ca. 8 million, 28 million people are at risk of infection and ~20,000 deaths occur per year in endemic regions. No vaccines are available at present, and most drugs currently in use were developed decades ago and show variable efficacy with undesirable side effects. The parasite is able to live and proliferate inside macrophage phagosomes, where it is exposed to cytotoxic reactive oxygen and nitrogen species derived from macrophage activation. Therefore, T. cruzi

  10. PrimateLit Database

    Science.gov (United States)

    PrimateLit: a bibliographic database for primatology. The PrimateLit database is no longer being updated. Supported by the National Center for Research Resources (NCRR), National Institutes of Health, the database is a collaborative project of the Wisconsin Primate ...

  11. Lukasiewicz-Topos Models of Neural Networks, Cell Genome and Interactome Nonlinear Dynamic Models

    CERN Document Server

    Baianu, I C

    2004-01-01

    A categorical and Lukasiewicz-Topos framework for Lukasiewicz Algebraic Logic models of nonlinear dynamics in complex functional systems such as neural networks, genomes and cell interactomes is proposed. Lukasiewicz Algebraic Logic models of genetic networks and signaling pathways in cells are formulated in terms of nonlinear dynamic systems with n-state components that allow for the generalization of previous logical models of both genetic activities and neural networks. An algebraic formulation of variable 'next-state functions' is extended to a Lukasiewicz Topos with an n-valued Lukasiewicz Algebraic Logic subobject classifier description that represents non-random and nonlinear network activities as well as their transformations in developmental processes and carcinogenesis.

  12. Application of Google Maps API service for creating web map of information retrieved from CORINE land cover databases

    Directory of Open Access Journals (Sweden)

    Kilibarda Milan

    2010-01-01

    Full Text Available Today, the Google Maps API, an Ajax-based standard web service, enables users to publish interactive web maps, thus opening new possibilities compared with classical analogue maps. CORINE land cover databases are recognized as fundamental reference data sets for numerous spatial analyses. The theoretical and practical aspects of the Google Maps API cartographic service are considered for the case of creating a web map of changes in urban areas in Belgrade and its surroundings from 2000 to 2006, derived from CORINE databases.

  13. Data management of protein interaction networks

    CERN Document Server

    Cannataro, Mario

    2012-01-01

    Interactomics: a complete survey from data generation to knowledge extraction. With the increasing use of high-throughput experimental assays, more and more protein interaction databases are becoming available. As a result, computational analysis of protein-to-protein interaction (PPI) data and networks, now known as interactomics, has become an essential tool to determine functionally associated proteins. From wet lab technologies to data management to knowledge extraction, this timely book guides readers through the new science of interactomics, giving them the tools needed to: Generate

  14. Conceptual Model Formalization in a Semantic Interoperability Service Framework: Transforming Relational Database Schemas to OWL.

    Science.gov (United States)

    Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd

    2014-01-01

    Healthcare information is distributed through multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources are a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from Relational Databases to Web Ontology Language. The proposed service makes use of an algorithm that allows the transformation of several data models from different domains, mainly by deploying inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
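    The table-to-class mapping with inheritance rules can be illustrated with a toy rdflib sketch. The schema below is a made-up example, not the paper's algorithm: tables become OWL classes, columns become datatype properties, and a parent-table link becomes an rdfs:subClassOf relation.

```python
# A toy sketch of mapping a relational schema to OWL with rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

EX = Namespace("http://example.org/health/")
schema = {
    "Patient": {"columns": {"name": XSD.string, "birth_date": XSD.date}, "parent": None},
    "Inpatient": {"columns": {"ward": XSD.string}, "parent": "Patient"},
}

g = Graph()
g.bind("owl", OWL)
for table, spec in schema.items():
    cls = EX[table]
    g.add((cls, RDF.type, OWL.Class))               # table -> OWL class
    if spec["parent"]:
        g.add((cls, RDFS.subClassOf, EX[spec["parent"]]))  # "inheritance rule"
    for column, datatype in spec["columns"].items():
        prop = EX[column]
        g.add((prop, RDF.type, OWL.DatatypeProperty))      # column -> datatype property
        g.add((prop, RDFS.domain, cls))
        g.add((prop, RDFS.range, datatype))

print(g.serialize(format="turtle"))
```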

  15. Toxoplasmosis and Polygenic Disease Susceptibility Genes: Extensive Toxoplasma gondii Host/Pathogen Interactome Enrichment in Nine Psychiatric or Neurological Disorders

    Directory of Open Access Journals (Sweden)

    C. J. Carter

    2013-01-01

    Full Text Available Toxoplasma gondii is not only implicated in schizophrenia and related disorders, but also in Alzheimer's or Parkinson's disease, cancer, cardiac myopathies, and autoimmune disorders. During its life cycle, the pathogen interacts with ~3000 host genes or proteins. Susceptibility genes for multiple sclerosis, Alzheimer's disease, schizophrenia, bipolar disorder, depression, childhood obesity, Parkinson's disease, attention deficit hyperactivity disorder (ADHD), and autism (but not anorexia or chronic fatigue) are highly enriched in the human arm of this interactome, and 18% (ADHD) to 33% (multiple sclerosis) of the susceptibility genes relate to it. The signalling pathways involved in the susceptibility gene/interactome overlaps are relatively specific and relevant to each disease, suggesting a means whereby susceptibility genes could orient the attentions of a single pathogen towards disruption of the specific pathways that together contribute (positively or negatively) to the endophenotypes of different diseases. Conditional protein knockdown, orchestrated by T. gondii proteins or antibodies binding to those of the host (pathogen-derived autoimmunity), and metabolite exchange may contribute to this disruption. Susceptibility genes may thus be related to the causes and influencers of disease, rather than (and as well as) to the disease itself.
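    The enrichment claim lends itself to a quick back-of-envelope check with a hypergeometric test. The genome size, gene-set size and overlap below are placeholder numbers chosen only to show the calculation, not figures from the paper.

```python
# A back-of-envelope hypergeometric enrichment test for an interactome/gene-set overlap.
from scipy.stats import hypergeom

genome_size = 20000         # protein-coding genes considered (placeholder)
interactome_size = 3000     # host genes in the T. gondii interactome (per abstract)
susceptibility_genes = 600  # hypothetical susceptibility gene set for one disease
overlap = 180               # hypothetical genes in both sets (~30%)

# P(overlap >= observed) under random sampling without replacement.
p_value = hypergeom.sf(overlap - 1, genome_size, interactome_size, susceptibility_genes)
print(f"expected overlap: {interactome_size * susceptibility_genes / genome_size:.1f}")
print(f"enrichment p-value: {p_value:.2e}")
```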

  16. 24-Hour Forecast of Air Temperatures from the National Weather Service's National Digital Forecast Database (NDFD)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Digital Forecast Database (NDFD) contains a seamless mosaic of the National Weather Service's (NWS) digital forecasts of air temperature. In...

  17. 72-Hour Forecast of Air Temperatures from the National Weather Service's National Digital Forecast Database (NDFD)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Digital Forecast Database (NDFD) contains a seamless mosaic of the National Weather Service's (NWS) digital forecasts of air temperature. In...

  18. 48-Hour Forecast of Air Temperatures from the National Weather Service's National Digital Forecast Database (NDFD)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The National Digital Forecast Database (NDFD) contains a seamless mosaic of the National Weather Service's (NWS) digital forecasts of air temperature. In...

  19. Database security in the cloud

    OpenAIRE

    Sakhi, Imal

    2012-01-01

    The aim of the thesis is to get an overview of the database services available in the cloud computing environment, investigate the security risks associated with them and propose possible countermeasures to minimize the risks. The thesis also analyzes two cloud database service providers, namely Amazon RDS and Xeround. The reason behind choosing these two providers is that they are currently amongst the leading cloud database providers and both provide relational cloud databases which makes ...

  20. A predicted protein interactome identifies conserved global networks and disease resistance subnetworks in maize.

    Directory of Open Access Journals (Sweden)

    Matt Geisler

    2015-06-01

    Full Text Available Interactomes are genome-wide roadmaps of protein-protein interactions. They have been produced for humans, yeast, the fruit fly, and Arabidopsis thaliana, and have become invaluable tools for generating and testing hypotheses. A predicted interactome for Zea mays (PiZeaM) is presented here as an aid to the research community for this valuable crop species. PiZeaM was built using a proven method of interologs (interacting orthologs) that were identified using both one-to-one and many-to-many orthology between the genomes of maize and reference species. Where both maize orthologs occurred for an experimentally determined interaction in the reference species, we predicted a likely interaction in maize. A total of 49,026 unique interactions for 6,004 maize proteins were predicted. These interactions are enriched for processes that are evolutionarily conserved, but include many otherwise poorly annotated proteins in maize. The predicted maize interactions were further analyzed by comparing the annotation of interacting proteins, including different layers of ontology. A map of pairwise gene co-expression was also generated and compared to the predicted interactions. Two global subnetworks were constructed for highly conserved interactions. These subnetworks showed clear clustering of proteins by function. Another subnetwork was created for disease response using a bait and prey strategy to capture interacting partners for proteins that respond to other organisms. Closer examination of this subnetwork revealed the connectivity between biotic and abiotic hormone stress pathways. We believe PiZeaM will provide a useful tool for the prediction of protein function and analysis of pathways for Z. mays researchers and is presented in this paper as a reference tool for the exploration of protein interactions in maize.
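    The interolog transfer described above can be sketched in a few lines. The ortholog table and reference interactions below are invented toy data; the function simply projects a reference-species interaction onto maize whenever both partners have maize orthologs, allowing many-to-many mappings.

```python
# A minimal sketch of interolog-based interaction transfer to a target species.
from itertools import product

# reference protein -> set of maize orthologs (may be many-to-many)
orthologs = {
    "AT1G01010": {"GRMZM2G001"},
    "AT1G01020": {"GRMZM2G002", "GRMZM2G003"},
    "AT5G10140": set(),  # no maize ortholog -> interaction cannot be transferred
}
reference_interactions = [("AT1G01010", "AT1G01020"), ("AT1G01010", "AT5G10140")]

def predict_interologs(interactions, ortholog_map):
    predicted = set()
    for a, b in interactions:
        for ma, mb in product(ortholog_map.get(a, ()), ortholog_map.get(b, ())):
            if ma != mb:
                predicted.add(tuple(sorted((ma, mb))))
    return predicted

print(predict_interologs(reference_interactions, orthologs))
```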

  1. Interactome analysis of transcriptional coactivator multiprotein bridging factor 1 unveils a yeast AP-1-like transcription factor involved in oxidation tolerance of mycopathogen Beauveria bassiana.

    Science.gov (United States)

    Chu, Xin-Ling; Dong, Wei-Xia; Ding, Jin-Li; Feng, Ming-Guang; Ying, Sheng-Hua

    2018-02-01

    Oxidation tolerance is an important determinant for predicting the virulence and biocontrol potential of Beauveria bassiana, a well-known entomopathogenic fungus. As a transcriptional coactivator, multiprotein bridging factor 1 mediates the activity of transcription factors in diverse physiological processes, and its homolog in B. bassiana (BbMBF1) contributes to fungal oxidation tolerance. In this study, the BbMBF1 interactomes under oxidative stress and under normal growth conditions were deciphered by mass spectrometry integrated with immunoprecipitation. The BbMBF1p factor interacts broadly with proteins involved in various cellular processes, and this interaction is dynamically regulated by oxidative stress. Importantly, a B. bassiana homolog of the yeast AP-1-like transcription factor (BbAP-1) was specifically associated with the BbMBF1 interactome under oxidation and contributed significantly to fungal oxidation tolerance. In addition, qPCR analysis revealed that several antioxidant genes are jointly controlled by BbAP-1 and BbMBF1. In conclusion, it is proposed that the BbMBF1p protein mediates the BbAP-1p factor in transcribing the downstream antioxidant genes in B. bassiana under oxidative stress. This study demonstrates for the first time a proteomic view of the MBF1 interactome in fungi, and presents an initial framework to probe the transcriptional mechanism involved in the fungal response to oxidation, which will provide a new strategy to improve the biocontrol efficacy of B. bassiana.

  2. Maintenance services of nuclear power plant using 3D as-built database management system

    International Nuclear Information System (INIS)

    Okumura, Kazutaka; Nakashima, Kazuhito; Mori, Norimasa; Azuma, Takashi

    2017-01-01

    Three-dimensional As-built DAtabase Management System (NUSEC-ADAMS) is a system whose goal is to provide economical, speedy and accurate maintenance services for nuclear power plants by using 3D point group data. The system makes it possible to understand the plant situation remotely, without field measurements. 3D point group data are collected before and after plant equipment installations and stored in a database after being converted to data viewable on the web. The data can therefore be shared on a company's internal network and can be linked with system diagrams, equipment specifications and additional information (e.g. maintenance records) by registering key information connecting the 3D point group data and the equipment data. This reduces the workload of pre-job field surveys and improves work efficiency. In case of a problem at a plant, if the 3D as-built data can be viewed over the network, accurate information and the cause of the problem can be grasped remotely at an early stage. Collecting 3D point group data and updating the database continuously keep the as-built information up to date, which improves the accuracy of off-site studies and allows the plant situation to be understood in a timely manner. As a result, we can reduce the workload and improve the quality of maintenance services for nuclear power plants. (author)

  3. Intranuclear interactomic inhibition of NF-κB suppresses LPS-induced severe sepsis

    International Nuclear Information System (INIS)

    Park, Sung-Dong; Cheon, So Yeong; Park, Tae-Yoon; Shin, Bo-Young; Oh, Hyunju; Ghosh, Sankar; Koo, Bon-Nyeo; Lee, Sang-Kyou

    2015-01-01

    Suppression of nuclear factor-κB (NF-κB) activation, which is best known as a major regulator of innate and adaptive immune responses, is a potent strategy for the treatment of endotoxic sepsis. To inhibit NF-κB functions, we designed the intra-nuclear transducible form of the transcription modulation domain (TMD) of RelA (p65), called nt-p65-TMD, which can be delivered effectively into the nucleus without affecting cell viability and which works as an interactomic inhibitor via disruption of the endogenous p65-mediated transcription complex. nt-p65-TMD effectively inhibited the secretion of pro-inflammatory cytokines, including TNF-α, IL-1β, and IL-6, from BV2 microglia cells stimulated by lipopolysaccharide (LPS). nt-p65-TMD did not inhibit tyrosine phosphorylation of signaling mediators such as ZAP-70, p38, JNK, or ERK involved in T cell activation, but was capable of suppressing the transcriptional activity of NF-κB without a functional effect on that of NFAT upon T-cell receptor (TCR) stimulation. The transduced nt-p65-TMD did not affect the expression of CD69 in T cells, but significantly inhibited the secretion of T cell-specific cytokines such as IL-2, IFN-γ, IL-4, IL-17A, and IL-10. Systemic administration of nt-p65-TMD showed a significant therapeutic effect in an LPS-induced sepsis model by inhibiting pro-inflammatory cytokine secretion. Therefore, nt-p65-TMD can be a novel therapeutic for the treatment of various inflammatory diseases, including sepsis, in which a transcription factor has a key role in pathogenesis, and further allows us to discover new functions of p65 under normal physiological conditions without genetic alteration. - Highlights: • The nt-p65-TMD is an intra-nuclear interactomic inhibitor of endogenous p65. • The nt-p65-TMD effectively inhibited the secretion of pro-inflammatory cytokines. • The excellent therapeutic potential of nt-p65-TMD was confirmed in a sepsis model.

  4. Intranuclear interactomic inhibition of NF-κB suppresses LPS-induced severe sepsis

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sung-Dong [Translational Research Center for Protein Function Control, College of Life Science and Biotechnology, Yonsei University, Seoul 120-749 (Korea, Republic of); Department of Biotechnology, College of Life Science and Biotechnology, Yonsei University, Seoul 120-749 (Korea, Republic of); Cheon, So Yeong [Department of Anesthesiology and Pain Medicine, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, Seoul 120-752 (Korea, Republic of); Park, Tae-Yoon; Shin, Bo-Young [Translational Research Center for Protein Function Control, College of Life Science and Biotechnology, Yonsei University, Seoul 120-749 (Korea, Republic of); Department of Biotechnology, College of Life Science and Biotechnology, Yonsei University, Seoul 120-749 (Korea, Republic of); Oh, Hyunju; Ghosh, Sankar [Department of Microbiology and Immunology, College of Physicians and Surgeons, Columbia University, New York, NY 10032 (United States); Koo, Bon-Nyeo, E-mail: koobn@yuhs.ac [Department of Anesthesiology and Pain Medicine, Anesthesia and Pain Research Institute, Yonsei University College of Medicine, Seoul 120-752 (Korea, Republic of); Lee, Sang-Kyou, E-mail: sjrlee@yonsei.ac.kr [Translational Research Center for Protein Function Control, College of Life Science and Biotechnology, Yonsei University, Seoul 120-749 (Korea, Republic of); Department of Biotechnology, College of Life Science and Biotechnology, Yonsei University, Seoul 120-749 (Korea, Republic of)

    2015-08-28

    Suppression of nuclear factor-κB (NF-κB) activation, which is best known as a major regulator of innate and adaptive immune responses, is a potent strategy for the treatment of endotoxic sepsis. To inhibit NF-κB functions, we designed the intra-nuclear transducible form of the transcription modulation domain (TMD) of RelA (p65), called nt-p65-TMD, which can be delivered effectively into the nucleus without affecting cell viability and which works as an interactomic inhibitor via disruption of the endogenous p65-mediated transcription complex. nt-p65-TMD effectively inhibited the secretion of pro-inflammatory cytokines, including TNF-α, IL-1β, and IL-6, from BV2 microglia cells stimulated by lipopolysaccharide (LPS). nt-p65-TMD did not inhibit tyrosine phosphorylation of signaling mediators such as ZAP-70, p38, JNK, or ERK involved in T cell activation, but was capable of suppressing the transcriptional activity of NF-κB without a functional effect on that of NFAT upon T-cell receptor (TCR) stimulation. The transduced nt-p65-TMD did not affect the expression of CD69 in T cells, but significantly inhibited the secretion of T cell-specific cytokines such as IL-2, IFN-γ, IL-4, IL-17A, and IL-10. Systemic administration of nt-p65-TMD showed a significant therapeutic effect in an LPS-induced sepsis model by inhibiting pro-inflammatory cytokine secretion. Therefore, nt-p65-TMD can be a novel therapeutic for the treatment of various inflammatory diseases, including sepsis, in which a transcription factor has a key role in pathogenesis, and further allows us to discover new functions of p65 under normal physiological conditions without genetic alteration. - Highlights: • The nt-p65-TMD is an intra-nuclear interactomic inhibitor of endogenous p65. • The nt-p65-TMD effectively inhibited the secretion of pro-inflammatory cytokines. • The excellent therapeutic potential of nt-p65-TMD was confirmed in a sepsis model.

  5. Database security - how can developers and DBAs do it together and what can other Service Managers learn from it

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk gives an overview of security threats affecting databases, the preventive measures we are taking at CERN, and best practices in the industry. The presentation will describe how generic the threats are and how other service managers can profit from the database experience to protect other systems.

  6. Colil: a database and search service for citation contexts in the life sciences domain.

    Science.gov (United States)

    Fujiwara, Toyofumi; Yamamoto, Yasunori

    2015-01-01

    To promote research activities in a particular research area, it is important to efficiently identify current research trends, advances, and issues in that area. Although review papers in the research area can suffice for this purpose in general, review papers that cover the particular research aspects of interest are not always available at the time they are needed. Therefore, the utilization of the citation contexts of papers in a research area has been considered as another approach. However, there are few search services to retrieve citation contexts in the life sciences domain; furthermore, efficiently obtaining citation contexts is becoming difficult due to the large volume and rapid growth of life sciences papers. Here, we introduce the Colil (Comments on Literature in Literature) database to store citation contexts in the life sciences domain. By using the Resource Description Framework (RDF) and a newly compiled vocabulary, we built the Colil database and made it available through a SPARQL endpoint. In addition, we developed a web-based search service called Colil that searches for a cited paper in the Colil database and then returns a list of citation contexts for it along with papers relevant to it based on co-citations. The citation contexts in the Colil database were extracted from full-text papers of the PubMed Central Open Access Subset (PMC-OAS), which includes 545,147 papers indexed in PubMed. These papers are distributed across 3,171 journals and cite 5,136,741 unique papers that correspond to approximately 25% of all PubMed entries. By utilizing Colil, researchers can easily refer to a set of citation contexts and relevant papers based on co-citations for a target paper. Colil helps researchers to comprehend life sciences papers in a research area more efficiently and makes their biological research more efficient.
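
    Because the data are exposed over SPARQL, they can be queried programmatically. The sketch below shows the general shape of such a query from Python; the endpoint URL and the predicate used for citation contexts are assumptions made for illustration and should be checked against the Colil documentation before use.

        # Hedged sketch of querying a SPARQL endpoint such as the one Colil exposes.
        # The endpoint URL and the citation-context predicate are assumptions for
        # illustration only, not taken from the Colil documentation.

        from SPARQLWrapper import SPARQLWrapper, JSON

        ENDPOINT = "http://colil.dbcls.jp/sparql"   # assumed endpoint location

        query = """
        PREFIX bibo: <http://purl.org/ontology/bibo/>
        SELECT ?citing ?context
        WHERE {
          ?citing bibo:cites <http://example.org/paper/PMC123456> .          # hypothetical target paper
          ?citing <http://example.org/vocab/citationContext> ?context .       # assumed predicate
        }
        LIMIT 10
        """

        sparql = SPARQLWrapper(ENDPOINT)
        sparql.setQuery(query)
        sparql.setReturnFormat(JSON)
        results = sparql.query().convert()

        for row in results["results"]["bindings"]:
            print(row["citing"]["value"], "->", row["context"]["value"][:80])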

  7. Monitoring of services with non-relational databases and map-reduce framework

    CERN Document Server

    Babik, M; CERN. Geneva. IT Department

    2012-01-01

    Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their exi...
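
    The aggregation step described above (turning many per-job measurements into per-site availability figures) maps naturally onto a map-reduce style of processing. The toy sketch below runs the two phases in memory with a hypothetical record layout; a real deployment would run the same phases on a distributed framework over the raw monitoring data rather than on a small in-memory list.

        # Toy map-reduce style aggregation of per-job test results into per-site
        # availability. Record format and site names are hypothetical.

        from collections import defaultdict
        from functools import reduce

        jobs = [
            {"site": "CERN-PROD", "status": "ok"},
            {"site": "CERN-PROD", "status": "failed"},
            {"site": "FZK-LCG2", "status": "ok"},
            {"site": "FZK-LCG2", "status": "ok"},
        ]

        # map phase: one (site, (ok_count, total_count)) pair per job record
        mapped = [(j["site"], (1 if j["status"] == "ok" else 0, 1)) for j in jobs]

        # shuffle phase: group partial counts by site
        grouped = defaultdict(list)
        for site, value in mapped:
            grouped[site].append(value)

        # reduce phase: sum the partial counts and derive availability
        def combine(a, b):
            return (a[0] + b[0], a[1] + b[1])

        for site, values in grouped.items():
            ok, total = reduce(combine, values)
            print(f"{site}: availability {ok / total:.2f}")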

  8. Proteomic Analysis of the Mediator Complex Interactome in Saccharomyces cerevisiae.

    Science.gov (United States)

    Uthe, Henriette; Vanselow, Jens T; Schlosser, Andreas

    2017-02-27

    Here we present the most comprehensive analysis of the yeast Mediator complex interactome to date. Particularly gentle cell lysis and co-immunopurification conditions allowed us to preserve even transient protein-protein interactions and to comprehensively probe the molecular environment of the Mediator complex in the cell. Metabolic 15N-labeling thereby enabled stringent discrimination between bona fide interaction partners and nonspecifically captured proteins. Our data indicate a functional role for Mediator beyond transcription initiation. We identified a large number of Mediator-interacting proteins and protein complexes, such as RNA polymerase II, general transcription factors, a large number of transcriptional activators, the SAGA complex, chromatin remodeling complexes, histone chaperones, highly acetylated histones, as well as proteins playing a role in co-transcriptional processes, such as splicing, mRNA decapping and mRNA decay. Moreover, our data provide clear evidence that the Mediator complex interacts not only with RNA polymerase II, but also with RNA polymerases I and III, and indicate a functional role of the Mediator complex in rRNA processing and ribosome biogenesis.

  9. The Interactomic Analysis Reveals Pathogenic Protein Networks in Phomopsis longicolla Underlying Seed Decay of Soybean

    Directory of Open Access Journals (Sweden)

    Shuxian Li

    2018-04-01

    Full Text Available Phomopsis longicolla T. W. Hobbs (syn. Diaporthe longicolla) is the primary cause of Phomopsis seed decay (PSD) in soybean, Glycine max (L.) Merrill. This disease results in poor seed quality and is one of the most economically important seed diseases in soybean. The objectives of this study were to infer protein–protein interactions (PPI) and to identify conserved global networks and pathogenicity subnetworks in P. longicolla, including orthologous pathways for cell signaling and pathogenesis. The interolog method used in the study identified 215,255 unique PPIs among 3,868 proteins. There were 1,414 pathogenicity-related genes in P. longicolla identified using the pathogen-host interaction (PHI) database. Additionally, 149 plant cell wall degrading enzymes (PCWDE) were detected. The network captured five different classes of carbohydrate degrading enzymes, including the auxiliary activities, carbohydrate esterases, glycoside hydrolases, glycosyl transferases, and carbohydrate binding molecules. From the PPI analysis, novel interacting partners were determined for each of the PCWDE classes. The most predominant class of PCWDE was a group of 60 glycoside hydrolase proteins. The glycoside hydrolase subnetwork was found to be interacting with 1,442 proteins within the network and was among the largest clusters. The orthologous proteins FUS3, HOG, CYP1, SGE1, and the g5566t.1 gene identified in this study could play an important role in pathogenicity. Therefore, the P. longicolla protein interactome (PiPhom) generated in this study can lead to a better understanding of PPIs in soybean pathogens. Furthermore, the PPI data may aid in targeting genes and proteins for further studies of the pathogenicity mechanisms.

  10. Comprehensively Characterizing the Thioredoxin Interactome In Vivo Highlights the Central Role Played by This Ubiquitous Oxidoreductase in Redox Control*

    Science.gov (United States)

    Arts, Isabelle S.; Vertommen, Didier; Baldin, Francesca; Laloux, Géraldine; Collet, Jean-François

    2016-01-01

    Thioredoxin (Trx) is a ubiquitous oxidoreductase maintaining protein-bound cysteine residues in the reduced thiol state. Here, we combined a well-established method to trap Trx substrates with the power of bacterial genetics to comprehensively characterize the in vivo Trx redox interactome in the model bacterium Escherichia coli. Using strains engineered to optimize trapping, we report the identification of a total of 268 Trx substrates, including 201 that had never been reported to depend on Trx for reduction. The newly identified Trx substrates are involved in a variety of cellular processes, ranging from energy metabolism to amino acid synthesis and transcription. The interaction between Trx and two of its newly identified substrates, a protein required for the import of most carbohydrates, PtsI, and the bacterial actin homolog MreB, was studied in detail. We provide direct evidence that PtsI and MreB contain cysteine residues that are susceptible to oxidation and that participate in the formation of an intermolecular disulfide with Trx. By considerably expanding the number of Trx targets, our work highlights the role played by this major oxidoreductase in a variety of cellular processes. Moreover, as the dependence on Trx for reduction is often conserved across species, it also provides insightful information on the interactome of Trx in organisms other than E. coli. PMID:27081212

  11. TranscriptomeBrowser 3.0: introducing a new compendium of molecular interactions and a new visualization tool for the study of gene regulatory networks.

    Science.gov (United States)

    Lepoivre, Cyrille; Bergon, Aurélie; Lopez, Fabrice; Perumal, Narayanan B; Nguyen, Catherine; Imbert, Jean; Puthier, Denis

    2012-01-31

    Deciphering gene regulatory networks by in silico approaches is a crucial step in the study of the molecular perturbations that occur in diseases. The development of regulatory maps is a tedious process requiring the comprehensive integration of various evidences scattered over biological databases. Thus, the research community would greatly benefit from having a unified database storing known and predicted molecular interactions. Furthermore, given the intrinsic complexity of the data, the development of new tools offering integrated and meaningful visualizations of molecular interactions is necessary to help users drawing new hypotheses without being overwhelmed by the density of the subsequent graph. We extend the previously developed TranscriptomeBrowser database with a set of tables containing 1,594,978 human and mouse molecular interactions. The database includes: (i) predicted regulatory interactions (computed by scanning vertebrate alignments with a set of 1,213 position weight matrices), (ii) potential regulatory interactions inferred from systematic analysis of ChIP-seq experiments, (iii) regulatory interactions curated from the literature, (iv) predicted post-transcriptional regulation by micro-RNA, (v) protein kinase-substrate interactions and (vi) physical protein-protein interactions. In order to easily retrieve and efficiently analyze these interactions, we developed InteractomeBrowser, a graph-based knowledge browser that comes as a plug-in for TranscriptomeBrowser. The first objective of InteractomeBrowser is to provide a user-friendly tool to get new insight into any gene list by providing a context-specific display of putative regulatory and physical interactions. To achieve this, InteractomeBrowser relies on a "cell compartments-based layout" that makes use of a subset of the Gene Ontology to map gene products onto relevant cell compartments. This layout is particularly powerful for visual integration of heterogeneous biological information
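
    The "cell compartments-based layout" amounts to grouping gene products by a small set of Gene Ontology cellular-component terms before drawing the graph. The sketch below illustrates only that grouping step; the GO subset is a tiny hand-picked sample and the gene annotations are invented placeholders, not data from TranscriptomeBrowser.

        # Sketch of grouping gene products by an assumed small subset of GO
        # cellular component terms, as a stand-in for a compartments-based layout.
        # The annotations below are invented for illustration.

        from collections import defaultdict

        go_subset = {
            "GO:0005634": "nucleus",
            "GO:0005737": "cytoplasm",
            "GO:0005886": "plasma membrane",
        }

        # hypothetical gene -> GO cellular component annotations
        annotations = {
            "TP53":  ["GO:0005634"],
            "GAPDH": ["GO:0005737"],
            "EGFR":  ["GO:0005886", "GO:0005737"],
        }

        layout = defaultdict(list)
        for gene, terms in annotations.items():
            for term in terms:
                if term in go_subset:
                    layout[go_subset[term]].append(gene)

        for compartment, genes in sorted(layout.items()):
            print(compartment, "->", sorted(genes))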

  12. E3 Staff Database

    Data.gov (United States)

    US Agency for International Development — The E3 Staff database is maintained by the E3 PDMS (Professional Development & Management Services) office. The database is MySQL. It is manually updated by E3 staff as...

  13. Merging in-silico and in vitro salivary protein complex partners using the STRING database: A tutorial.

    Science.gov (United States)

    Crosara, Karla Tonelli Bicalho; Moffa, Eduardo Buozi; Xiao, Yizhi; Siqueira, Walter Luiz

    2018-01-16

    Protein-protein interaction is a common physiological mechanism for protection and actions of proteins in an organism. The identification and characterization of protein-protein interactions in different organisms is necessary to better understand their physiology and to determine their efficacy. In a previous in vitro study using mass spectrometry, we identified 43 proteins that interact with histatin 1. Six previously documented interactors were confirmed and 37 novel partners were identified. In this tutorial, we aimed to demonstrate the usefulness of the STRING database for studying protein-protein interactions. We used an in-silico approach along with the STRING database (http://string-db.org/) and successfully performed a fast simulation of a novel constructed histatin 1 protein-protein network, including both the previously known and the predicted interactors, along with our newly identified interactors. Our study highlights the advantages and importance of applying bioinformatics tools to merge in-silico tactics with experimental in vitro findings for rapid advancement of our knowledge about protein-protein interactions. Our findings also indicate that bioinformatics tools such as the STRING protein network database can help predict potential interactions between proteins and thus serve as a guide for future steps in our exploration of the Human Interactome. Our study highlights the usefulness of the STRING protein database for studying protein-protein interactions. The STRING database can collect and integrate data about known and predicted protein-protein associations from many organisms, including both direct (physical) and indirect (functional) interactions, in an easy-to-use interface. Copyright © 2017 Elsevier B.V. All rights reserved.
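
    Beyond the web interface, STRING also exposes a REST API that can be scripted for the kind of network lookups described in this tutorial. The sketch below follows the public API pattern as I understand it (base URL, "interaction_partners" method, "identifiers" and "species" parameters); treat the exact path and field names as assumptions and verify them against the current documentation at https://string-db.org.

        # Hedged sketch of querying the STRING REST API for interaction partners
        # of a protein (here histatin 1, gene symbol HTN1). Endpoint path and
        # response field names are assumptions to be checked against the docs.

        import requests

        BASE = "https://string-db.org/api/json/interaction_partners"  # assumed endpoint
        params = {
            "identifiers": "HTN1",          # query protein
            "species": 9606,                # NCBI taxon id for Homo sapiens
            "limit": 10,                    # number of partners to return
            "caller_identity": "example_script",
        }

        response = requests.get(BASE, params=params, timeout=30)
        response.raise_for_status()

        for record in response.json():
            print(record.get("preferredName_A"), "-", record.get("preferredName_B"),
                  "score:", record.get("score"))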

  14. Cell Centred Database (CCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Cell Centered Database (CCDB) is a web accessible database for high resolution 2D, 3D and 4D data from light and electron microscopy, including correlated imaging.

  15. The Neotoma Paleoecology Database

    Science.gov (United States)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community

  16. DenHunt - A Comprehensive Database of the Intricate Network of Dengue-Human Interactions.

    Directory of Open Access Journals (Sweden)

    Prashanthi Karyala

    2016-09-01

    Full Text Available Dengue virus (DENV) is a human pathogen and its etiology has been widely established. There are many interactions between DENV and human proteins that have been reported in literature. However, no publicly accessible resource for efficiently retrieving the information is yet available. In this study, we mined all publicly available dengue-human interactions that have been reported in the literature into a database called DenHunt. We retrieved 682 direct interactions of human proteins with dengue viral components, 382 indirect interactions and 4120 differentially expressed human genes in dengue infected cell lines and patients. We have illustrated the importance of DenHunt by mapping the dengue-human interactions on to the host interactome and observed that the virus targets multiple host functional complexes of important cellular processes such as metabolism, immune system and signaling pathways suggesting a potential role of these interactions in viral pathogenesis. We also observed that 7 percent of the dengue virus interacting human proteins are also associated with other infectious and non-infectious diseases. Finally, the understanding that comes from such analyses could be used to design better strategies to counteract the diseases caused by dengue virus. The whole dataset has been catalogued in a searchable database, called DenHunt (http://proline.biochem.iisc.ernet.in/DenHunt/).

  17. DenHunt - A Comprehensive Database of the Intricate Network of Dengue-Human Interactions.

    Science.gov (United States)

    Karyala, Prashanthi; Metri, Rahul; Bathula, Christopher; Yelamanchi, Syam K; Sahoo, Lipika; Arjunan, Selvam; Sastri, Narayan P; Chandra, Nagasuma

    2016-09-01

    Dengue virus (DENV) is a human pathogen and its etiology has been widely established. There are many interactions between DENV and human proteins that have been reported in literature. However, no publicly accessible resource for efficiently retrieving the information is yet available. In this study, we mined all publicly available dengue-human interactions that have been reported in the literature into a database called DenHunt. We retrieved 682 direct interactions of human proteins with dengue viral components, 382 indirect interactions and 4120 differentially expressed human genes in dengue infected cell lines and patients. We have illustrated the importance of DenHunt by mapping the dengue-human interactions on to the host interactome and observed that the virus targets multiple host functional complexes of important cellular processes such as metabolism, immune system and signaling pathways suggesting a potential role of these interactions in viral pathogenesis. We also observed that 7 percent of the dengue virus interacting human proteins are also associated with other infectious and non-infectious diseases. Finally, the understanding that comes from such analyses could be used to design better strategies to counteract the diseases caused by dengue virus. The whole dataset has been catalogued in a searchable database, called DenHunt (http://proline.biochem.iisc.ernet.in/DenHunt/).
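
    The overlay analysis described here, mapping virus-interacting host proteins onto the host interactome and asking which host complexes or pathways are most heavily targeted, can be sketched with a graph library. The network, target list and pathway sets below are toy placeholders, not DenHunt data; the sketch only shows the shape of the computation.

        # Sketch of overlaying viral targets on a host interaction network and
        # counting targeted members per pathway. All data below are hypothetical.

        import networkx as nx

        host_ppi = nx.Graph()
        host_ppi.add_edges_from([
            ("STAT1", "STAT2"), ("STAT2", "IRF9"),    # toy interferon-signalling module
            ("RPL3", "RPL4"), ("RPL4", "RPS6"),        # toy ribosome module
        ])

        denv_targets = {"STAT2", "RPL3"}               # hypothetical DENV-interacting proteins

        pathways = {
            "interferon signalling": {"STAT1", "STAT2", "IRF9"},
            "ribosome": {"RPL3", "RPL4", "RPS6"},
        }

        for name, members in pathways.items():
            hit = members & denv_targets
            print(f"{name}: {len(hit)}/{len(members)} members targeted "
                  f"({', '.join(sorted(hit)) or 'none'})")

        # local neighbourhood of a targeted protein in the host network
        print("Neighbours of STAT2:", sorted(host_ppi.neighbors("STAT2")))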

  18. Experience of introducing a new database for an approved coordination and record keeping service

    International Nuclear Information System (INIS)

    Garratt, N. J.

    2011-01-01

    The Health Protection Agency (and its predecessors) has many years' experience of running Approved Dosimetry Services, including coordination and record keeping. This paper describes the experience gained while introducing a new web-based system for coordination and record keeping to replace the ageing mainframe database. This includes the planning of the project, the migration of data between the two systems, the parallel running of all operational tasks, and lessons learned during the process. (authors)

  19. Data Analytic Process of a Nationwide Population-Based Study Using National Health Information Database Established by National Health Insurance Service

    Directory of Open Access Journals (Sweden)

    Yong-ho Lee

    2016-02-01

    Full Text Available In 2014, the National Health Insurance Service (NHIS) signed a memorandum of understanding with the Korean Diabetes Association to provide limited open access to its databases for investigating the past and current status of diabetes and its management. The NHIS databases cover the entire Korean population; therefore, they can be used for population-based nationwide studies of various diseases, including diabetes and its complications. This report presents how we established the analytic system for nationwide population-based studies using the NHIS database, covering the selection of the study population and its distribution, the operational definition of diabetes and its patients, and the currently ongoing collaboration projects.
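
    An "operational definition" in a claims database typically combines diagnosis codes with prescription records. The pandas sketch below shows one plausible form of such a rule (a diabetes diagnosis code plus an antidiabetic prescription); the codes, columns and threshold are assumptions for illustration, not the definition adopted by the NHIS collaboration projects.

        # Illustrative operational definition of diabetes from claims data:
        # at least one ICD-10 E10-E14 diagnosis plus at least one antidiabetic
        # prescription. Column names and the rule itself are assumptions.

        import pandas as pd

        claims = pd.DataFrame({
            "person_id": [1, 1, 2, 3],
            "icd10": ["E11", "I10", "E14", "I20"],
            "rx_antidiabetic": [1, 0, 0, 1],
        })

        is_dm_code = claims["icd10"].str.match(r"E1[0-4]")
        dx = set(claims.loc[is_dm_code, "person_id"])
        rx = set(claims.loc[claims["rx_antidiabetic"] == 1, "person_id"])

        diabetes_cases = sorted(dx & rx)
        print("operational diabetes cases:", diabetes_cases)   # -> [1]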

  20. Database reliability engineering designing and operating resilient database systems

    CERN Document Server

    Campbell, Laine

    2018-01-01

    The infrastructure-as-code revolution in IT is also affecting database administration. With this practical book, developers, system administrators, and junior to mid-level DBAs will learn how the modern practice of site reliability engineering applies to the craft of database architecture and operations. Authors Laine Campbell and Charity Majors provide a framework for professionals looking to join the ranks of today’s database reliability engineers (DBRE). You’ll begin by exploring core operational concepts that DBREs need to master. Then you’ll examine a wide range of database persistence options, including how to implement key technologies to provide resilient, scalable, and performant data storage and retrieval. With a firm foundation in database reliability engineering, you’ll be ready to dive into the architecture and operations of any modern database. This book covers: Service-level requirements and risk management Building and evolving an architecture for operational visibility ...

  1. OECD Structural Analysis Databases: Sectoral Principles in the Study of Markets for Goods and Services

    Directory of Open Access Journals (Sweden)

    Marina D. Simonova

    2015-01-01

    Full Text Available This study focuses on the characteristics of the information database of OECD structural business statistics for the analysis of markets for goods and services and of macroeconomic trends. The system of structural statistics indicators is presented in OECD publications and through online access for a wide range of users. The data collected by the OECD offices are based on the national statistical offices of member countries, Russia and the other BRICS countries. Data on the development of economic sectors are calculated according to the methodologies of individual countries and to regional and international standards: annual national accounts, annual industry and business surveys, the methodology of short-term indicators, and statistics of international trade in goods. Data are aggregated on the basis of composite indicators drawn from enterprise questionnaires and business surveys. The structural statistics information system, which is openly available and continuously updated, has certain features. It is composed of several subsystems: Structural Statistics on Industry and Services, EU entrepreneurship statistics, Indicators of Industry and Services, and International Trade in Commodities Statistics. The grouping of industries is based on the International Standard Industrial Classification of all economic activities (ISIC). The classification of foreign trade flows is made in accordance with the Harmonized System of description and coding of goods. The structural statistics databases group industries into four classes according to technology intensity. The paper discusses the main reasons for the non-comparability of data across the subsystems over certain time intervals.

  2. Efficient Prediction of Progesterone Receptor Interactome Using a Support Vector Machine Model

    Directory of Open Access Journals (Sweden)

    Ji-Long Liu

    2015-03-01

    Full Text Available Protein-protein interaction (PPI) is essential for almost all cellular processes and identification of PPI is a crucial task for biomedical researchers. So far, most computational studies of PPI are intended for pair-wise prediction. Theoretically, predicting protein partners for a single protein is likely a simpler problem. Given enough data for a particular protein, the results can be more accurate than general PPI predictors. In the present study, we assessed the potential of using the support vector machine (SVM) model with selected features centered on a particular protein for PPI prediction. As a proof-of-concept study, we applied this method to identify the interactome of progesterone receptor (PR), a protein which is essential for coordinating female reproduction in mammals by mediating the actions of ovarian progesterone. We achieved an accuracy of 91.9%, sensitivity of 92.8% and specificity of 91.2%. Our method is generally applicable to any other proteins and therefore may be of help in guiding biomedical experiments.
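
    The protein-centred SVM idea can be illustrated with scikit-learn in a few lines. The feature vectors and labels below are random placeholders standing in for curated partner/non-partner examples; the original study's features selected around the progesterone receptor are not reproduced here.

        # Minimal scikit-learn sketch of a single-protein (bait-centred) SVM
        # classifier. Features and labels are synthetic placeholders.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 20))        # 200 candidate partners, 20 features each
        y = rng.integers(0, 2, size=200)      # 1 = interacts with the bait, 0 = does not

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.25, random_state=0)

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        clf.fit(X_train, y_train)

        print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))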

  3. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    Science.gov (United States)

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.

  4. Exploitation of complex network topology for link prediction in biological interactomes

    KAUST Repository

    Alanis Lobato, Gregorio

    2014-06-01

    The network representation of the interactions between proteins and genes allows for a holistic perspective of the complex machinery underlying the living cell. However, the large number of interacting entities within the cell makes network construction a daunting and arduous task, prone to errors and missing information. Fortunately, the structure of biological networks is not different from that of other complex systems, such as social networks, the world-wide web or power grids, for which growth models have been proposed to better understand their structure and function. This means that we can design tools based on these models in order to exploit the topology of biological interactomes with the aim to construct more complete and reliable maps of the cell. In this work, we propose three novel and powerful approaches for the prediction of interactions in biological networks and conclude that it is possible to mine the topology of these complex system representations and produce reliable and biologically meaningful information that enriches the datasets to which we have access today.
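
    Topology-based link prediction of this kind can be illustrated with a simple neighbourhood score on a toy network. Common-neighbour counting, shown below, is only one generic score among many and is not the specific set of methods proposed in the work above.

        # Generic illustration of topology-based link prediction on a toy protein
        # network: rank non-adjacent pairs by their number of common neighbours.

        import networkx as nx
        from itertools import combinations

        g = nx.Graph()
        g.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"),
                          ("C", "D"), ("D", "E"), ("B", "D")])

        candidates = [(u, v) for u, v in combinations(g.nodes, 2)
                      if not g.has_edge(u, v)]

        scores = sorted(
            ((u, v, len(list(nx.common_neighbors(g, u, v)))) for u, v in candidates),
            key=lambda t: t[2], reverse=True)

        for u, v, s in scores:
            print(f"{u}-{v}: {s} common neighbours")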

  5. 數據資料庫 Numeric Databases

    Directory of Open Access Journals (Sweden)

    Mei-ling Wang Chen

    1989-03-01

    Full Text Available In 1979, the International Communication Bureau of the R.O.C. connected some U.S. information service centers through the international telecommunication network. Since then, Dialog, ORBIT and BRS have been introduced into this country. However, users are mainly interested in the bibliographic databases and seldom know of the non-bibliographic, or numeric, databases. This article mainly describes numeric databases: their definition and characteristics, comparison with bibliographic databases, their producers, service systems and users, data elements, a brief introduction by subject, their problems and future, the library's role, and the present status of their use in the R.O.C.

  6. Database on Demand: insight how to build your own DBaaS

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operations, give some insight into our software engineering redesign, and outline its near-future evolution.

  7. Database on Demand: insight how to build your own DBaaS

    Science.gov (United States)

    Gaspar Aparicio, Ruben; Coterillo Coz, Ignacio

    2015-12-01

    At CERN, a number of key database applications are running on user-managed MySQL, PostgreSQL and Oracle database services. The Database on Demand (DBoD) project was born out of an idea to provide the CERN user community with an environment to develop and run database services as a complement to the central Oracle-based database service. Database on Demand empowers users to perform certain actions that had traditionally been done by database administrators, providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines; presently, three major RDBMS (relational database management system) vendors are offered. In this article we show the current status of the service after almost three years of operations, give some insight into our software engineering redesign, and outline its near-future evolution.

  8. Automated Oracle database testing

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    Ensuring database stability and steady performance in the modern world of agile computing is a major challenge. Changes at any level of the computing infrastructure: OS parameters and packages, kernel versions, database parameters and patches, or even schema changes, can all potentially harm production services. This presentation shows how automatic and regular testing of Oracle databases can be achieved in such an agile environment.

  9. Towards cloud-centric distributed database evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service (DBaaS) systems.

  10. Towards Cloud-centric Distributed Database Evaluation

    OpenAIRE

    Seybold, Daniel

    2016-01-01

    The area of cloud computing has also pushed the evolution of distributed databases, resulting in a variety of distributed database systems, which can be classified into relational, NoSQL and NewSQL database systems. In general, all representatives of these database system classes claim to provide elasticity and "unlimited" horizontal scalability. As these characteristics comply with the cloud, distributed databases seem to be a perfect match for Database-as-a-Service (DBaaS) systems.

  11. Interactomes to Biological Phase Space: a call to begin thinking at a new level in computational biology.

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, George S.; Brown, William Michael

    2007-09-01

    Techniques for high-throughput determination of interactomes, together with high-resolution protein colocalization maps within organelles and through membranes, will soon create a vast resource. With these data, biological descriptions, akin to the high-dimensional phase spaces familiar to physicists, will become possible. These descriptions will capture sufficient information to make possible realistic, system-level models of cells. The descriptions and the computational models they enable will require powerful computing techniques. This report is offered as a call to the computational biology community to begin thinking at this scale and as a challenge to develop the required algorithms and codes to make use of the new data.

  12. Assessing availability of scientific journals, databases, and health library services in Canadian health ministries: a cross-sectional study.

    Science.gov (United States)

    Léon, Grégory; Ouimet, Mathieu; Lavis, John N; Grimshaw, Jeremy; Gagnon, Marie-Pierre

    2013-03-21

    Evidence-informed health policymaking logically depends on timely access to research evidence. To our knowledge, despite the substantial political and societal pressure to enhance the use of the best available research evidence in public health policy and program decision making, there is no study addressing availability of peer-reviewed research in Canadian health ministries. To assess availability of (1) a purposive sample of high-ranking scientific journals, (2) bibliographic databases, and (3) health library services in the fourteen Canadian health ministries. From May to October 2011, we conducted a cross-sectional survey among librarians employed by Canadian health ministries to collect information relative to availability of scientific journals, bibliographic databases, and health library services. Availability of scientific journals in each ministry was determined using a sample of 48 journals selected from the 2009 Journal Citation Reports (Sciences and Social Sciences Editions). Selection criteria were: relevance for health policy based on scope note information about subject categories and journal popularity based on impact factors. We found that the majority of Canadian health ministries did not have subscription access to key journals and relied heavily on interlibrary loans. Overall, based on a sample of high-ranking scientific journals, availability of journals through interlibrary loans, online and print-only subscriptions was estimated at 63%, 28% and 3%, respectively. Health Canada had a 2.3-fold higher number of journal subscriptions than that of the provincial ministries' average. Most of the organisations provided access to numerous discipline-specific and multidisciplinary databases. Many organisations provided access to the library resources described through library partnerships or consortia. No professionally led health library environment was found in four out of fourteen Canadian health ministries (i.e. Manitoba Health, Northwest

  13. Impact of Access to Online Databases on Document Delivery Services within Iranian Academic Libraries

    Directory of Open Access Journals (Sweden)

    Zohreh Zahedi

    2007-04-01

    Full Text Available The present study investigates the impact of access to online databases on document delivery services in Iranian academic libraries, within the framework of factors such as the number of orders lodged over the years studied and their trends, and the expenditures made by each university, especially those universities and groups that had the highest number of orders. The investigation was carried out through a survey, by visiting the library document supply units of the universities, and through in-person interviews with the librarians in charge. The study sample was confined to the universities of Shiraz, Tehran and Tarbiyat Modaress along with their faculties. Findings indicate that the rate of document requests in various universities depends on the target audience, capabilities and students' familiarity, as well as the mode of document delivery services.

  14. Fly-DPI: database of protein interactomes for D. melanogaster in the approach of systems biology

    Directory of Open Access Journals (Sweden)

    Lin Chieh-Hua

    2006-12-01

    Full Text Available Abstract Background Proteins control and mediate many biological activities of cells by interacting with other protein partners. This work presents a statistical model to predict protein interaction networks of Drosophila melanogaster based on insight into domain interactions. Results Three high-throughput yeast two-hybrid experiments and the collection in FlyBase were used as our starting datasets. The co-occurrences of domains in these interaction events are converted into a probability score of domain-domain interaction. These scores are used to infer putative interactions among all available open reading frames (ORFs) of the fruit fly. Additionally, the likelihood function is used to estimate all potential protein-protein interactions. All parameters are iterated successfully and a maximum likelihood estimate (MLE) is obtained for each pair of domains; the maximized likelihood reaches its convergence criteria and keeps the probabilities stable. The hybrid model achieves a high specificity with a loss of sensitivity, suggesting that the model may capture the major features of protein-protein interactions. Several putative interactions predicted by the proposed hybrid model are supported by the literature, while experimental data with a low probability score indicate an uncertain reliability and require further proof of interaction. Fly-DPI is the online database used to present this work. It is an integrated proteomics tool with comprehensive protein annotation information from major databases as well as an effective means of predicting protein-protein interactions. As a novel search strategy, the ping-pong search is a naïve path map between two chosen proteins based on pre-computed shortest paths. Adopting effective filtering strategies will help researchers depict a bird's-eye view of the network of interest. Fly-DPI can be accessed at http://flydpi.nhri.org.tw. Conclusion This work provides two reference systems, statistical and biological, to evaluate
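
    The core idea of scoring protein pairs via their domain content can be sketched with a simple association-style estimate: how often a domain pair appears in interacting versus all protein pairs, combined into a per-pair probability. This is an illustrative stand-in on toy data, not the exact likelihood model used by Fly-DPI.

        # Association-style sketch: estimate P(interaction | domain pair) from
        # co-occurrence counts, then score a protein pair as the probability
        # that at least one shared domain pair mediates binding. Toy data only.

        from itertools import product

        domains = {                  # hypothetical protein -> domain assignments
            "P1": {"SH3"}, "P2": {"PRM"}, "P3": {"SH3", "KIN"}, "P4": {"PRM"},
        }
        interactions = {("P1", "P2"), ("P3", "P4")}     # observed interacting pairs
        all_pairs = {("P1", "P2"), ("P1", "P3"), ("P1", "P4"),
                     ("P2", "P3"), ("P2", "P4"), ("P3", "P4")}

        def domain_pairs(a, b):
            return {tuple(sorted(dp)) for dp in product(domains[a], domains[b])}

        # count how often each domain pair is seen, and how often in interactions
        seen, positive = {}, {}
        for a, b in all_pairs:
            for dp in domain_pairs(a, b):
                seen[dp] = seen.get(dp, 0) + 1
                if (a, b) in interactions or (b, a) in interactions:
                    positive[dp] = positive.get(dp, 0) + 1
        p_dom = {dp: positive.get(dp, 0) / n for dp, n in seen.items()}

        def pair_score(a, b):
            """Probability that at least one shared domain pair mediates binding."""
            prob_none = 1.0
            for dp in domain_pairs(a, b):
                prob_none *= 1.0 - p_dom.get(dp, 0.0)
            return 1.0 - prob_none

        print(round(pair_score("P1", "P4"), 3))   # scores a pair not observed as interacting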

  15. TranscriptomeBrowser 3.0: introducing a new compendium of molecular interactions and a new visualization tool for the study of gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Lepoivre Cyrille

    2012-01-01

    Full Text Available Abstract Background Deciphering gene regulatory networks by in silico approaches is a crucial step in the study of the molecular perturbations that occur in diseases. The development of regulatory maps is a tedious process requiring the comprehensive integration of various evidences scattered over biological databases. Thus, the research community would greatly benefit from having a unified database storing known and predicted molecular interactions. Furthermore, given the intrinsic complexity of the data, the development of new tools offering integrated and meaningful visualizations of molecular interactions is necessary to help users drawing new hypotheses without being overwhelmed by the density of the subsequent graph. Results We extend the previously developed TranscriptomeBrowser database with a set of tables containing 1,594,978 human and mouse molecular interactions. The database includes: (i) predicted regulatory interactions (computed by scanning vertebrate alignments with a set of 1,213 position weight matrices), (ii) potential regulatory interactions inferred from systematic analysis of ChIP-seq experiments, (iii) regulatory interactions curated from the literature, (iv) predicted post-transcriptional regulation by micro-RNA, (v) protein kinase-substrate interactions and (vi) physical protein-protein interactions. In order to easily retrieve and efficiently analyze these interactions, we developed InteractomeBrowser, a graph-based knowledge browser that comes as a plug-in for TranscriptomeBrowser. The first objective of InteractomeBrowser is to provide a user-friendly tool to get new insight into any gene list by providing a context-specific display of putative regulatory and physical interactions. To achieve this, InteractomeBrowser relies on a "cell compartments-based layout" that makes use of a subset of the Gene Ontology to map gene products onto relevant cell compartments. This layout is particularly powerful for visual integration

  16. Reexamining Operating System Support for Database Management

    OpenAIRE

    Vasil, Tim

    2003-01-01

    In 1981, Michael Stonebraker [21] observed that database management systems written for commodity operating systems could not effectively take advantage of key operating system services, such as buffer pool management and process scheduling, due to expensive overhead and lack of customizability. The “not quite right” fit between these kernel services and the demands of database systems forced database designers to work around such limitations or re-implement some kernel functionality in user ...

  17. Veterans Administration Databases

    Science.gov (United States)

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  18. Nuclear technology databases and information network systems

    International Nuclear Information System (INIS)

    Iwata, Shuichi; Kikuchi, Yasuyuki; Minakuchi, Satoshi

    1993-01-01

    This paper describes databases related to nuclear (science) technology and information network systems. The following contents are covered: the databases developed by JAERI, ENERGY NET, ATOM NET, the NUCLEN nuclear information database, INIS, the NUclear Code Information Service (NUCLIS), the Social Application of Nuclear Technology Accumulation project (SANTA), the Nuclear Information Database/Communication System (NICS), the reactor materials database, the radiation effects database, the NucNet European nuclear information database, and the reactor dismantling database. (J.P.N.)

  19. Protein function prediction involved on radio-resistant bacteria

    International Nuclear Information System (INIS)

    Mezhoud, Karim; Mankai, Houda; Sghaier, Haitham; Barkallah, Insaf

    2009-01-01

    Previously, we identified 58 proteins under positive selection in ionizing-radiation-resistant bacteria (IRRB) but absent in all ionizing-radiation-sensitive bacteria (IRSB). There are good reasons to believe that these 58 proteins, together with their interactions with other proteins (interactomes), are part of the answer to the question of how IRRB resist radiation, because knowledge of the interactomes of positively selected orphan proteins in IRRB might allow us to define cellular pathways important for ionizing-radiation resistance. Using the Database of Interacting Proteins and PSIbase, we predicted interactions of orthologs of the 58 proteins under positive selection in IRRB but absent in all IRSB. We integrated experimental data sets with molecular interaction networks and protein structure predictions from databases. Among these, 18 proteins with their interactomes were identified in Deinococcus radiodurans R1. DNA checkpointing and repair, kinase pathways, and energy and nucleotide metabolism were the important biological processes found. We predicted the interactomes of the 58 proteins under positive selection in IRRB. It is hoped that our data will provide new clues to the cellular pathways that are important for ionizing-radiation resistance. We have identified new proteins involved in DNA management that were not previously mentioned, an important addition to the proteins already studied. Work is still ongoing to deepen our study of these new proteins.

  20. Dissection of protein interactomics highlights microRNA synergy.

    Science.gov (United States)

    Zhu, Wenliang; Zhao, Yilei; Xu, Yingqi; Sun, Yong; Wang, Zhe; Yuan, Wei; Du, Zhimin

    2013-01-01

    Although a large number of microRNAs (miRNAs) have been validated to play crucial roles in human biology and disease, there is little systematic insight into the nature and scale of the potential synergistic interactions executed by miRNAs themselves. Here we established an integrated parameter, the synergy score, to determine miRNA synergy, by combining the two mechanisms for miRNA-miRNA interactions, miRNA-mediated gene co-regulation and functional association between target gene products, into one single parameter. Receiver operating characteristic (ROC) analysis indicated that the synergy score accurately identified the gene ontology-defined miRNA synergy (AUC = 0.9415, psynergy, implying poor expectancy of widespread synergy. However, targeting more key genes made two miRNAs more likely to act synergistically. Compared to other miRNAs, miR-21 was a highly exceptional case due to its frequent appearance in the top synergistic miRNA pairs. This result highlighted its essential role in coordinating or strengthening the physiological and pathological functions of other miRNAs. The synergistic effects of miR-21 and miR-1 were functionally validated through their significant influences on myocardial apoptosis, cardiac hypertrophy and fibrosis. The novel approach established in this study enables easy and effective identification of condition-restricted potent miRNA synergy simply by concentrating the available protein interactomics and miRNA-target interaction data into a single parameter, the synergy score. Our results may be important for understanding synergistic gene regulation by miRNAs and may have significant implications for miRNA combination therapy of cardiovascular disease.
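
    One plausible way to fold the two ingredients named above (shared target genes and protein-level association between targets) into a single number is sketched below. The published synergy score formula is not reproduced here; this is an illustrative stand-in on toy data.

        # Hedged sketch of a combined miRNA-miRNA synergy measure: Jaccard overlap
        # of target sets multiplied by the density of protein-level associations
        # among the pooled targets. Target lists and network edges are toy data.

        import networkx as nx

        targets = {                           # hypothetical miRNA -> target genes
            "miR-21": {"PTEN", "PDCD4", "SPRY1"},
            "miR-1":  {"PTEN", "KCNJ2", "GJA1"},
        }
        ppi = nx.Graph()                      # toy functional association network
        ppi.add_edges_from([("PTEN", "PDCD4"), ("PTEN", "GJA1"), ("KCNJ2", "GJA1")])

        def synergy_score(m1, m2):
            t1, t2 = targets[m1], targets[m2]
            co_regulation = len(t1 & t2) / len(t1 | t2)       # Jaccard of target sets
            union = t1 | t2
            possible = len(union) * (len(union) - 1) / 2
            observed = sum(1 for u, v in ppi.edges() if u in union and v in union)
            association = observed / possible if possible else 0.0
            return co_regulation * association                 # single combined score

        print(round(synergy_score("miR-21", "miR-1"), 3))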

  1. Enabling On-Demand Database Computing with MIT SuperCloud Database Management System

    Science.gov (United States)

    2015-09-15

    arc.liv.ac.uk/trac/SGE) provides these services and is independent of programming language (C, Fortran, Java, Matlab, etc.) or parallel programming... a MySQL database to store DNS records. The DNS records are controlled via a simple web service interface that allows records to be created

  2. Characterization of hampin/MSL1 as a node in the nuclear interactome

    International Nuclear Information System (INIS)

    Dmitriev, Ruslan I.; Korneenko, Tatyana V.; Bessonov, Alexander A.; Shakhparonov, Mikhail I.; Modyanov, Nikolai N.; Pestov, Nikolay B.

    2007-01-01

    Hampin, a homolog of Drosophila MSL1, is a partner of the histone acetyltransferase MYST1/MOF. The functions of these proteins remain poorly understood beyond their participation in the chromatin remodeling complex MSL. In order to identify new proteins interacting with hampin, we screened a mouse cDNA library in a yeast two-hybrid system with mouse hampin as bait and found five high-confidence interactors: MYST1, the TPR proteins TTC4 and KIAA0103, NOP17 (a homolog of a yeast nucleolar protein), and the transcription factor GC BP. Subsequently, all these proteins were used as baits in library screenings and more new interactions were found: the tumor suppressor RASSF1C and the spliceosome component PRP3 for KIAA0103, the ring finger protein RNF10 for RASSF1C, and the RNA polymerase II regulator NELF-C for MYST1. The majority of the observed interactions were confirmed in vitro by pull-down of bacterially expressed proteins. Reconstruction of this fragment of the mammalian interactome suggests that hampin may be linked to diverse regulatory processes in the nucleus.

  3. Medicare Coverage Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicare Coverage Database (MCD) contains all National Coverage Determinations (NCDs) and Local Coverage Determinations (LCDs), local articles, and proposed NCD...

  4. Experience in running relational databases on clustered storage

    CERN Document Server

    Aparicio, Ruben Gaspar

    2015-01-01

    For the past eight years, the CERN IT Database group has based its backend storage on a NAS (Network-Attached Storage) architecture, providing database access via the NFS (Network File System) protocol. In the last two and a half years, our storage has evolved from a scale-up architecture to a scale-out one. This paper describes our setup and a set of functionalities providing key features to other services, like Database on Demand [1] or the CERN Oracle backup and recovery service. It also outlines a possible evolution path that storage for databases could follow.

  5. JICST Factual Database(2)

    Science.gov (United States)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from chemical nomenclature has been developed. Chemical substances are input together with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. The source data come from the laws and regulations of Japan, the US RTECS, and other sources. The database plays a central role within the integrated fact database service of JICST and makes interrelational retrieval possible.

  6. OECD/NEA thermochemical database

    Energy Technology Data Exchange (ETDEWEB)

    Byeon, Kee Hoh; Song, Dae Yong; Shin, Hyun Kyoo; Park, Seong Won; Ro, Seung Gy

    1998-03-01

    This state-of-the-art report introduces the contents of the OECD/NEA Chemical Data Service and the results of a survey by the OECD/NEA of the thermodynamic and kinetic databases currently in use. It also summarizes the results of the OECD/NEA Thermochemical Database Projects. The report is intended as a guide for researchers to easily obtain validated thermodynamic and kinetic data for all substances from the available OECD/NEA databases. (author). 75 refs.

  7. The Service Status and Development Strategy of the Mobile Application Service of Ancient Books Database

    Directory of Open Access Journals (Sweden)

    Yang Siluo

    2017-12-01

    Full Text Available [Purpose/significance] The mobile application of the ancient books database marks the shift of ancient books databases from online versions to mobile ones. At present, such mobile applications are at an early stage of development, so it is necessary to investigate the current situation and provide suggestions for their development. [Method/process] This paper selected two kinds of ancient books database mobile applications, a WeChat platform and a mobile phone client, and analyzed their operation modes and main functions. [Result/conclusion] We conclude that ancient books database mobile applications still have shortcomings: resources are small in scale, content and data forms are limited, the functions of individual platforms are incomplete, and users pay insufficient attention to these services. We put forward corresponding suggestions and point out that, to build better ancient books database mobile applications, it is necessary to improve platform construction, enrich the forms and quantity of data, optimize functions, and emphasize communication and interaction with users.

  8. The Zebrafish Model Organism Database (ZFIN)

    Data.gov (United States)

    U.S. Department of Health & Human Services — ZFIN serves as the zebrafish model organism database. It aims to: a) be the community database resource for the laboratory use of zebrafish, b) develop and support...

  9. Nuclear Reaction and Structure Databases of the National Nuclear Data Center

    International Nuclear Information System (INIS)

    Pritychenko, B.; Arcilla, R.; Herman, M. W.; Oblozinsky, P.; Rochman, D.; Sonzogni, A. A.; Tuli, J. K.; Winchell, D. F.

    2006-01-01

    The National Nuclear Data Center (NNDC) collects, evaluates, and disseminates nuclear physics data for basic research and applied nuclear technologies. In 2004, the NNDC migrated all databases into modern relational database software, installed a new generation of Linux servers, and developed a new Java-based Web service. These database developments mean a much faster, more flexible, and more convenient service for all users in the United States. The nuclear reaction and structure database developments, as well as the related Web services, are briefly described.

  10. Household Products Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — This database links over 4,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by the manufacturers and allows scientists and...

  11. Detection and Prevention of Insider Threats in Database Driven Web Services

    Science.gov (United States)

    Chumash, Tzvi; Yao, Danfeng

    In this paper, we take the first step to address the gap between the security needs in outsourced hosting services and the protection provided in the current practice. We consider both insider and outsider attacks in the third-party web hosting scenarios. We present SafeWS, a modular solution that is inserted between server side scripts and databases in order to prevent and detect website hijacking and unauthorized access to stored data. To achieve the required security, SafeWS utilizes a combination of lightweight cryptographic integrity and encryption tools, software engineering techniques, and security data management principles. We also describe our implementation of SafeWS and its evaluation. The performance analysis of our prototype shows the overhead introduced by security verification is small. SafeWS will allow business owners to significantly reduce the security risks and vulnerabilities of outsourcing their sensitive customer data to third-party providers.
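
    The SafeWS internals are not reproduced here, but the general idea of a thin layer between server-side scripts and the database that protects stored rows with lightweight cryptography can be sketched. The following Python sketch uses only the standard hmac/hashlib/json modules; the seal/verify functions, the key handling, and the record layout are illustrative assumptions, not SafeWS's actual API.

    ```python
    import hashlib
    import hmac
    import json

    SECRET_KEY = b"server-side secret, kept out of the database"  # assumption

    def seal(record: dict) -> dict:
        """Attach an HMAC tag so later tampering with the stored row is detectable."""
        payload = json.dumps(record, sort_keys=True).encode()
        tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return {"payload": payload.decode(), "tag": tag}

    def verify(stored: dict) -> dict:
        """Recompute the HMAC on read and refuse to serve a row that fails the check."""
        expected = hmac.new(SECRET_KEY, stored["payload"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, stored["tag"]):
            raise ValueError("integrity check failed: possible unauthorized change")
        return json.loads(stored["payload"])

    # The web layer would seal rows before INSERT and verify them after SELECT.
    row = seal({"customer": "alice", "order": 42})
    print(verify(row))
    ```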

  12. Storing XML Documents in Databases

    NARCIS (Netherlands)

    A.R. Schmidt; S. Manegold (Stefan); M.L. Kersten (Martin); L.C. Rivero; J.H. Doorn; V.E. Ferraggine

    2005-01-01

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible.

  13. Dissolution Methods Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — For a drug product that does not have a dissolution test method in the United States Pharmacopeia (USP), the FDA Dissolution Methods Database provides information on...

  14. Deciphering peculiar protein-protein interacting modules in Deinococcus radiodurans

    Directory of Open Access Journals (Sweden)

    Barkallah Insaf

    2009-04-01

    Full Text Available Abstract Interactomes of proteins under positive selection from ionizing-radiation-resistant bacteria (IRRB) might be a part of the answer to the question of how IRRB, particularly Deinococcus radiodurans R1 (Deira), resist ionizing radiation. Here, using the Database of Interacting Proteins (DIP) and the Protein Structural Interactome base (PSIbase) server for PSI maps, we have predicted novel interactions of orthologs of the 58 proteins under positive selection in Deira and other IRRB, but which are absent in ionizing-radiation-sensitive bacteria (IRSB). Among these, 18 domains and their interactomes were identified; DNA checkpoint and repair, kinase pathways, and energy and nucleotide metabolism were the important biological processes found to be involved. This finding provides new clues to the cellular pathways that can be important for ionizing-radiation resistance in Deira.

  15. A RESTful Web service interface to the ATLAS COOL database

    International Nuclear Information System (INIS)

    Roe, S A

    2010-01-01

    The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags which are uploaded summaries of information to indicate the detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing to and retrieving from the COOL database which has found use in many web applications. The software layer is designed to be RESTful, implementing the common CRUD (Create, Read, Update, Delete) database methods by means of interpreting the HTTP method (POST, GET, PUT, DELETE) on the server along with a URL identifying the database resource to be operated on. The format of the data (text, xml etc) is also determined by the HTTP protocol. The details of this layer are described along with a popular application demonstrating its use, the ATLAS run list web page.
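
    The CRUD-to-HTTP mapping described above can be made concrete with a minimal CherryPy sketch. It exposes an in-memory dictionary of per-run status flags through GET/PUT/DELETE; the resource path and the use of a plain dictionary in place of the COOL API are assumptions made for brevity.

    ```python
    import cherrypy

    class FlagResource:
        """Hypothetical stand-in for a COOL folder of per-run status flags."""
        exposed = True

        def __init__(self):
            self._store = {}

        def GET(self, run=None):
            # Read: one flag, or the whole folder if no run is given.
            return repr(self._store if run is None else self._store.get(run))

        def PUT(self, run, flag):
            # Create/update: store the flag for this run.
            self._store[run] = flag
            return "stored"

        def DELETE(self, run):
            self._store.pop(run, None)
            return "deleted"

    if __name__ == "__main__":
        conf = {"/": {"request.dispatch": cherrypy.dispatch.MethodDispatcher()}}
        cherrypy.quickstart(FlagResource(), "/flags", conf)
    ```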

  16. Exploring the potential offered by legacy soil databases for ecosystem services mapping of Central African soils

    Science.gov (United States)

    Verdoodt, Ann; Baert, Geert; Van Ranst, Eric

    2014-05-01

    Central African soil resources are characterised by a large variability, ranging from stony, shallow or sandy soils with poor life-sustaining capabilities to highly weathered soils that recycle and support large amounts of biomass. Socio-economic drivers within this largely rural region foster inappropriate land use and management, threaten soil quality and finally culminate in declining soil productivity and increasing food insecurity. For the development of sustainable land use strategies targeting development planning and natural hazard mitigation, decision makers often rely on legacy soil maps and soil profile databases. Recent projects financed through development cooperation led to the design of soil information systems for Rwanda, D.R. Congo, and (ongoing) Burundi. A major challenge is to exploit these existing soil databases and convert them into soil inference systems through an optimal combination of digital soil mapping techniques, land evaluation tools, and biogeochemical models. This presentation aims at (1) highlighting some key characteristics of typical Central African soils, (2) assessing the positional, geographic and semantic quality of the soil information systems, and (3) revealing the potential impacts on the use of these datasets for thematic mapping of soil ecosystem services (e.g. organic carbon storage, pH buffering capacity). Soil map quality is assessed considering positional and semantic quality, as well as geographic completeness. Descriptive statistics, decision tree classification and linear regression techniques are used to mine the soil profile databases. Geo-matching as well as class-matching approaches are considered when developing thematic maps. Variability in inherent as well as dynamic soil properties within the soil taxonomic units is highlighted. It is hypothesized that within-unit variation in soil properties highly affects the use and interpretation of thematic maps for ecosystem services mapping. Results will mainly be based

  17. IAEA nuclear databases for applications

    International Nuclear Information System (INIS)

    Schwerer, Otto

    2003-01-01

    The Nuclear Data Section (NDS) of the International Atomic Energy Agency (IAEA) provides nuclear data services to scientists on a worldwide scale with particular emphasis on developing countries. More than 100 data libraries are made available cost-free by Internet, CD-ROM and other media. These databases are used for practically all areas of nuclear applications as well as basic research. An overview is given of the most important nuclear reaction and nuclear structure databases, such as EXFOR, CINDA, ENDF, NSR, ENSDF, NUDAT, and of selected special purpose libraries such as FENDL, RIPL, RNAL, the IAEA Photonuclear Data Library, and the IAEA charged-particle cross section database for medical radioisotope production. The NDS also coordinates two international nuclear data centre networks and is involved in data development activities (to create new or improve existing data libraries when the available data are inadequate) and in technology transfer to developing countries, e.g. through the installation and support of the mirror web site of the IAEA Nuclear Data Services at IPEN (operational since March 2000) and by organizing nuclear-data related workshops. By encouraging their participation in IAEA Co-ordinated Research Projects and also by compiling their experimental results in databases such as EXFOR, the NDS helps to make developing countries' contributions to nuclear science visible and conveniently available. The web address of the IAEA Nuclear Data Services is http://www.nds.iaea.org and the NDS mirror service at IPEN (Brasil) can be accessed at http://www.nds.ipen.br/ (author)

  18. Who's Gonna Pay the Piper for Free Online Databases?

    Science.gov (United States)

    Jacso, Peter

    1996-01-01

    Discusses new pricing models for some online services and considers the possibilities for the traditional online database market. Topics include multimedia music databases, including copyright implications; other retail-oriented databases; and paying for free databases with advertising. (LRW)

  19. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  20. Smart Location Database - Download

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block...

  1. Exploration of a Vision for Actor Database Systems

    DEFF Research Database (Denmark)

    Shah, Vivek

    of these services. Existing popular approaches to building these services either use an in-memory database system or an actor runtime. We observe that these approaches have complementary strengths and weaknesses. In this dissertation, we propose the integration of actor programming models in database systems....... In doing so, we lay down a vision for a new class of systems called actor database systems. To explore this vision, this dissertation crystallizes the notion of an actor database system by defining its feature set in light of current application and hardware trends. In order to explore the viability...... of the outlined vision, a new programming model named Reactors has been designed to enrich classic relational database programming models with logical actor programming constructs. To support the reactor programming model, a high-performance in-memory multi-core OLTP database system named REACTDB has been built...

  2. Storing XML Documents in Databases

    OpenAIRE

    Schmidt, A.R.; Manegold, Stefan; Kersten, Martin; Rivero, L.C.; Doorn, J.H.; Ferraggine, V.E.

    2005-01-01

    The authors introduce concepts for loading large amounts of XML documents into databases where the documents are stored and maintained. The goal is to make XML databases as unobtrusive in multi-tier systems as possible and at the same time provide as many services defined by the XML standards as possible. The ubiquity of XML has sparked great interest in deploying concepts known from Relational Database Management Systems such as declarative query languages, transactions, indexes ...

  3. A comprehensive protein-protein interactome for yeast PAS kinase 1 reveals direct inhibition of respiration through the phosphorylation of Cbf1.

    Science.gov (United States)

    DeMille, Desiree; Bikman, Benjamin T; Mathis, Andrew D; Prince, John T; Mackay, Jordan T; Sowa, Steven W; Hall, Tacie D; Grose, Julianne H

    2014-07-15

    Per-Arnt-Sim (PAS) kinase is a sensory protein kinase required for glucose homeostasis in yeast, mice, and humans, yet little is known about the molecular mechanisms of its function. Using both yeast two-hybrid and copurification approaches, we identified the protein-protein interactome for yeast PAS kinase 1 (Psk1), revealing 93 novel putative protein binding partners. Several of the Psk1 binding partners expand the role of PAS kinase in glucose homeostasis, including new pathways involved in mitochondrial metabolism. In addition, the interactome suggests novel roles for PAS kinase in cell growth (gene/protein expression, replication/cell division, and protein modification and degradation), vacuole function, and stress tolerance. In vitro kinase studies using a subset of 25 of these binding partners identified Mot3, Zds1, Utr1, and Cbf1 as substrates. Further evidence is provided for the in vivo phosphorylation of Cbf1 at T211/T212 and for the subsequent inhibition of respiration. This respiratory role of PAS kinase is consistent with the reported hypermetabolism of PAS kinase-deficient mice, identifying a possible molecular mechanism and solidifying the evolutionary importance of PAS kinase in the regulation of glucose homeostasis. © 2014 DeMille et al. This article is distributed by The American Society for Cell Biology under license from the author(s). Two months after publication it is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  4. The database system for dosimetry service at THE Institute of Nuclear Physics Polish Academy of Sciences

    International Nuclear Information System (INIS)

    Kopec, R.; Puchalska, M.; Olko, P.; Budzanowski, M.

    2005-01-01

    Full text: The Laboratory of Individual and Environment Dosimetry (LADIS) at the Institute of Nuclear Physics (IFJ), Polish Academy of Sciences, in Krakow was formally established and accredited in 2001, building on 30 years of experience of the local dosimetric service. The service is based on the self-developed thermoluminescent detectors MTS-N (LiF:Mg,Ti) and MCP-N (LiF:Mg,Cu,P), three automatic readers (ACARD and DOSACUS) and two manual RA-94 readers. The rapid increase in the number of customers, from 200 in 2001 to 6000 in 2004, stimulated the development of the dedicated DosBaz database. The database was built on the MS Access platform. The content of the data structure was elaborated according to the EUR 14852 EN recommendations. In particular, customers are identified with unique ID numbers, as are the establishment (e.g. name, contact, street and number/PO box, town, country, employer code number and telephone number), the site (e.g. name, contact) and the individual (e.g. full name, a unique number), as recommended in that document. DosBaz allows for complete processing of the data, including preparation of the final certificate, reporting to the authorities, preparation of statistics, etc. The paper will discuss the structure of the database, show the dataflow and demonstrate the results of the statistical evaluation of results. (author)

  5. A Taxonomic Search Engine: federating taxonomic databases using web services.

    Science.gov (United States)

    Page, Roderic D M

    2005-03-09

    The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.
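
    The federation pattern the TSE implements in PHP can be sketched in Python: query each source, normalise the hits to one record shape, and merge with de-duplication. The two fetchers below are stubs standing in for calls to the real ITIS, Index Fungorum, IPNI, NCBI and uBIO services; their return values are invented for illustration.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def search_itis(name):   # stub: would call the ITIS web service
        return [{"source": "ITIS", "id": "itis:12345", "name": name}]

    def search_ncbi(name):   # stub: would call NCBI Taxonomy via E-utilities
        return [{"source": "NCBI", "id": "ncbi:9606", "name": name}]

    SOURCES = [search_itis, search_ncbi]

    def federated_search(name):
        """Query every source in parallel and return one consistently shaped list."""
        with ThreadPoolExecutor() as pool:
            per_source = pool.map(lambda fetch: fetch(name), SOURCES)
        merged, seen = [], set()
        for hits in per_source:
            for record in hits:
                key = (record["source"], record["id"])
                if key not in seen:          # drop duplicate hits
                    seen.add(key)
                    merged.append(record)
        return merged

    print(federated_search("Homo sapiens"))
    ```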

  6. Usability of some databases for information services in Czechoslovak nuclear programme

    International Nuclear Information System (INIS)

    Kakos, A.

    1988-01-01

    The contents of the databases Chemical Abstracts Search, World Patent Index, Excerpta Medica, Inspec and Compendex were compared with INIS, with regard to the possibility of supplementing INIS searches with searches in these other databases. On the basis of test searches made in all of these databases on selected topics falling within the INIS scope, concrete cases were identified in which INIS searches should be supplemented with data from some of the other databases. The content analysis method is described with regard to the concrete search topics, and the areas in which the databases overlap with INIS are given. Numerical results are given. (J.B.). 2 tabs

  7. Synaptic Interactome Mining Reveals p140Cap as a New Hub for PSD Proteins Involved in Psychiatric and Neurological Disorders

    Directory of Open Access Journals (Sweden)

    Annalisa Alfieri

    2017-06-01

    Full Text Available Altered synaptic function has been associated with neurological and psychiatric conditions including intellectual disability, schizophrenia and autism spectrum disorder (ASD). Amongst the recently discovered synaptic proteins is p140Cap, an adaptor that localizes at dendritic spines and regulates their maturation and physiology. We recently showed that p140Cap knockout mice have cognitive deficits, impaired long-term potentiation (LTP) and long-term depression (LTD), and immature, filopodia-like dendritic spines. Only a few p140Cap interacting proteins have been identified in the brain and the molecular complexes and pathways underlying p140Cap synaptic function are largely unknown. Here, we isolated and characterized the p140Cap synaptic interactome by co-immunoprecipitation from crude mouse synaptosomes, followed by mass spectrometry-based proteomics. We identified 351 p140Cap interactors and found that they cluster into subcomplexes mostly located in the postsynaptic density (PSD). p140Cap interactors converge on key synaptic processes, including transmission across chemical synapses, actin cytoskeleton remodeling and cell-cell junction organization. Gene co-expression data further support convergent functions: the p140Cap interactors are tightly co-expressed with each other and with p140Cap. Importantly, the p140Cap interactome and its co-expression network show strong enrichment in genes associated with schizophrenia, autism, bipolar disorder, intellectual disability and epilepsy, supporting synaptic dysfunction as a shared biological feature in brain diseases. Overall, our data provide novel insights into the molecular organization of the synapse and indicate that p140Cap acts as a hub for postsynaptic complexes relevant to psychiatric and neurological disorders.

  8. The World Bacterial Biogeography and Biodiversity through Databases: A Case Study of NCBI Nucleotide Database and GBIF Database

    Directory of Open Access Journals (Sweden)

    Okba Selama

    2013-01-01

    Full Text Available Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area has made to each database. The basis for the data analysis in this study was the metadata provided by both databases, mainly the taxonomy and the geographical area of origin of isolation of the microorganism (record). These were directly obtained from GBIF through the online interface, while E-utilities and Python were used in combination with programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global-scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.
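
    The record mentions that E-utilities and Python were used for programmatic access to the NCBI Nucleotide database. A minimal sketch of that kind of query is shown below using the public esearch endpoint; the search terms (a phylum plus a country keyword) are illustrative assumptions, not the study's actual queries.

    ```python
    import xml.etree.ElementTree as ET

    import requests

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def count_nucleotide_records(term):
        """Return the number of NCBI Nucleotide records matching a query term."""
        params = {"db": "nucleotide", "term": term, "retmax": 0}
        resp = requests.get(ESEARCH, params=params, timeout=30)
        resp.raise_for_status()
        return int(ET.fromstring(resp.text).findtext("Count"))

    # Rough per-country tally for one phylum (the country qualifier is an assumption).
    for country in ["USA", "Brazil", "Algeria"]:
        term = f'Proteobacteria[Organism] AND "{country}"[All Fields]'
        print(country, count_nucleotide_records(term))
    ```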

  9. Enforcing Privacy in Cloud Databases

    OpenAIRE

    Moghadam, Somayeh Sobati; Darmont, Jérôme; Gavin, Gérald

    2017-01-01

    Outsourcing databases, i.e., resorting to Database-as-a-Service (DBaaS), is nowadays a popular choice due to the elasticity, availability, scalability and pay-as-you-go features of cloud computing. However, most data are sensitive to some extent, and data privacy remains one of the top concerns of DBaaS users, for obvious legal and competitive reasons. In this paper, we survey the mechanisms that aim at making databases secure in a cloud environment, and discuss current...

  10. Large scale access tests and online interfaces to ATLAS conditions databases

    International Nuclear Information System (INIS)

    Amorim, A; Lopes, L; Pereira, P; Simoes, J; Soloviev, I; Burckhart, D; Schmitt, J V D; Caprini, M; Kolos, S

    2008-01-01

    The access of the ATLAS Trigger and Data Acquisition (TDAQ) system to the ATLAS Conditions Databases sets strong reliability and performance requirements on the database storage and access infrastructures. Several applications were developed to support the integration of Conditions database access with the online services in TDAQ, including the interface to the Information Services (IS) and to the TDAQ Configuration Databases. The information storage requirements were the motivation for the ONline ASynchronous Interface to COOL (ONASIC) from the Information Service (IS) to the LCG/COOL databases. ONASIC avoids possible backpressure from online database servers by managing a local cache. In parallel, OKS2COOL was developed to store Configuration Databases into an offline database with a history record. The DBStressor application was developed to test and stress access to the Conditions database using the LCG/COOL interface while operating in an integrated way as a TDAQ application. The performance scaling of simultaneous Conditions database read accesses was studied in the context of the ATLAS High Level Trigger large computing farms. A large set of tests was performed involving up to 1000 computing nodes that simultaneously accessed the LCG central database server infrastructure at CERN.
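
    The ONASIC idea of decoupling online publishers from the database through a local cache can be illustrated with a small write-behind buffer: producers enqueue updates and return immediately, while a background thread flushes batches to the (possibly slow) database. The class below is a conceptual Python sketch, not the ONASIC implementation.

    ```python
    import queue
    import threading
    import time

    class WriteBehindCache:
        """Buffer updates locally and flush them asynchronously, so a slow
        database server cannot apply backpressure to the online system."""

        def __init__(self, store_batch, flush_interval=1.0):
            self._queue = queue.Queue()
            self._store_batch = store_batch      # e.g. a bulk database insert
            self._interval = flush_interval
            threading.Thread(target=self._flusher, daemon=True).start()

        def publish(self, payload):
            self._queue.put(payload)             # returns immediately to the caller

        def _flusher(self):
            while True:
                try:
                    batch = [self._queue.get(timeout=self._interval)]
                except queue.Empty:
                    continue
                while not self._queue.empty():   # drain whatever else has arrived
                    batch.append(self._queue.get_nowait())
                self._store_batch(batch)         # one bulk write per flush cycle

    cache = WriteBehindCache(lambda batch: print("stored", len(batch), "records"))
    cache.publish({"run": 1234, "flag": "GOOD"})
    time.sleep(2)                                # let the background flusher run
    ```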

  11. The research of network database security technology based on web service

    Science.gov (United States)

    Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin

    2013-03-01

    Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and the security levels of network databases, studies security technology for network databases, analyzes the sub-key encryption algorithm in particular, and applies this algorithm successfully to a campus one-card system. The realization process of the encryption algorithm is discussed; this method is widely used as a reference in many fields, particularly in management information system security and e-commerce.

  12. Fine Arts Database (FAD)

    Data.gov (United States)

    General Services Administration — The Fine Arts Database records information on federally owned art in the control of the GSA; this includes the location, current condition and information on artists.

  13. Distributed Database Access in the LHC Computing Grid with CORAL

    CERN Document Server

    Molnár, Z; Düllmann, D; Giacomo, G; Kalkhof, A; Valassi, A; CERN. Geneva. IT Department

    2009-01-01

    The CORAL package is the LCG Persistency Framework foundation for accessing relational databases. From the start, CORAL has been designed to facilitate the deployment of the LHC experiment database applications in a distributed computing environment. In particular we cover: improvements to database service scalability by client connection management; platform-independent, multi-tier scalable database access by connection multiplexing and caching; and a secure authentication and authorisation scheme integrated with existing grid services. We will summarize the deployment experience from several experiment productions using the distributed database infrastructure, which is now available in LCG. Finally, we present perspectives for future developments in this area.

  14. A Taxonomic Search Engine: Federating taxonomic databases using web services

    Directory of Open Access Journals (Sweden)

    Page Roderic DM

    2005-03-01

    Full Text Available Abstract Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at http://darwin.zoology.gla.ac.uk/~rpage/portal/ and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names.

  15. Report on the present situation of the FY 1998 technical literature database; 1998 nendo gijutsu bunken database nado genjo chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    To study databases that can contribute to the future distribution of scientific and technical information, a survey and analysis of the present status of the service supply side were conducted. The survey of database trends investigated the relationships between DB producers and distributors. It found an increase in the number of DB producers and an expansion of Internet distribution and services, but no change in the U.S.-centered structure. Further, it was recognized that DB services in the Internet age are facing a period of change, as seen in existing producers' responses to the Internet, the online provision of primary information sources, and the creation of new online services. Owing to the impact of the Internet, the following are predicted for future DB services: a slump for producers without distinctive strengths and for gateway-type distributors, the appearance of new types of DB services, etc. (NEDO)

  16. Rat Genome Database (RGD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Rat Genome Database (RGD) is a collaborative effort between leading research institutions involved in rat genetic and genomic research to collect, consolidate,...

  17. Mouse Phenome Database (MPD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Mouse Phenome Database (MPD) has characterizations of hundreds of strains of laboratory mice to facilitate translational discoveries and to assist in selection...

  18. Radiation safety research information database

    International Nuclear Information System (INIS)

    Yukawa, Masae; Miyamoto, Kiriko; Takeda, Hiroshi; Kuroda, Noriko; Yamamoto, Kazuhiko

    2004-01-01

    The National Institute of Radiological Sciences in Japan began to construct the 'Radiation Safety Research Information Database' in 2001. The research information database is of great service for evaluating the effects of radiation on people, by estimating exposure doses from measurements of radiation and radioactive materials in the environment. The database consists of seven DBs: the NIRS Airborne Dust Survey DB, the NIRS Environmental Tritium Survey DB, the NIRS Environmental Carbon Survey DB, Environmental Radiation Levels (Abe), the Metabolic Database for Assessment of Internal Dose, Graphs of Predicted Monitoring Data, and the NIRS nuclear installation environment water tritium survey DB. An outline of the database and each of its component DBs is given. (S.Y.)

  19. Survey on utilization of database for research and development of global environmental industry technology; Chikyu kankyo sangyo gijutsu kenkyu kaihatsu no tame no database nado no riyo ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    1993-03-01

    To optimize networks and database systems for promoting the development of industrial technologies that contribute to solving the global environmental problem, studies were made of reusable information resources and methods for their utilization. Reusable information resources include external databases and network systems for researchers' information exchange and for computer use. The external databases include commercial and academic databases. As commercial databases, 6 agents and 13 service systems were selected. Academic databases include NACSIS-IR and the databases connected to the INTERNET in the U.S. These are used in connection with the UNIX academic research network called INTERNET. For connection with the INTERNET, a commercial UNIX network service called IIJ, which starts service in April 1993, can be used; however, personal computer communication networks are used for the time being. 6 figs., 4 tabs.

  20. Databases for highway inventories. Proposal for a new model

    Energy Technology Data Exchange (ETDEWEB)

    Perez Casan, J.A.

    2016-07-01

    Database models for highway inventories are based on classical schemes for relational databases: many related tables, in which the database designer establishes, a priori, every detail considered relevant for inventory management. This kind of database presents several problems. First, adapting the model and its applications when new database features appear is difficult. Second, the different needs of different sets of road inventory users are difficult to fulfil with these schemes; for example, maintenance management services, road authorities and emergency services have different needs. Moreover, this kind of database cannot be adapted to new scenarios, such as other countries and regions (that may classify roads or name certain elements differently). The problem is more complex if the language used in these scenarios is not the same as that used in the database design. Finally, technicians need a long time to learn to use the database efficiently. This paper proposes a flexible, multilanguage and multipurpose database model, which gives an effective and simple solution to the aforementioned problems. (Author)
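
    The paper's actual schema is not given in the abstract, but one common way to obtain the flexibility it calls for is an entity-attribute-value layout plus a label table keyed by language, so that new element types and new languages become rows rather than schema changes. The sqlite3 sketch below illustrates that general idea only; the table and column names are invented and should not be read as the proposed model.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE element   (id INTEGER PRIMARY KEY, kind TEXT);
    CREATE TABLE attribute (element_id INTEGER, name TEXT, value TEXT);
    CREATE TABLE label     (kind TEXT, lang TEXT, text TEXT);
    """)
    con.execute("INSERT INTO element VALUES (1, 'guardrail')")
    con.execute("INSERT INTO attribute VALUES (1, 'length_m', '120')")
    con.executemany("INSERT INTO label VALUES (?, ?, ?)",
                    [("guardrail", "en", "guardrail"),
                     ("guardrail", "es", "barrera de seguridad")])

    # The same inventory record, presented in the user's language of choice.
    row = con.execute("""
        SELECT l.text, a.name, a.value
        FROM element e
        JOIN attribute a ON a.element_id = e.id
        JOIN label l     ON l.kind = e.kind AND l.lang = 'es'
    """).fetchone()
    print(row)   # ('barrera de seguridad', 'length_m', '120')
    ```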

  1. Report on the present situation of the FY 1998 technical literature database; 1998 nendo gijutsu bunken database nado genjo chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-03-01

    To study databases that can contribute to the future distribution of scientific and technical information, a survey and analysis of the present status of the service supply side were conducted. The survey of database trends investigated the relationships between DB producers and distributors. It found an increase in the number of DB producers and an expansion of Internet distribution and services, but no change in the U.S.-centered structure. Further, it was recognized that DB services in the Internet age are facing a period of change, as seen in existing producers' responses to the Internet, the online provision of primary information sources, and the creation of new online services. Owing to the impact of the Internet, the following are predicted for future DB services: a slump for producers without distinctive strengths and for gateway-type distributors, the appearance of new types of DB services, etc. (NEDO)

  2. Medicaid CHIP ESPC Database

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Environmental Scanning and Program Characteristic (ESPC) Database is in a Microsoft (MS) Access format and contains Medicaid and CHIP data, for the 50 states and...

  3. INIST: databases reorientation

    International Nuclear Information System (INIS)

    Bidet, J.C.

    1995-01-01

    INIST is a CNRS (Centre National de la Recherche Scientifique) laboratory devoted to the processing of scientific and technical information and to the management of this information compiled in a database. A reorientation of the database content was proposed in 1994 to increase the transfer of research towards enterprises and services, to develop more automated access to the information, and to create a quality assurance plan. The catalog of publications comprises 5800 periodical titles (1300 for fundamental research and 4500 for applied research). A multi-thematic science and technology database will be created in 1995 for the retrieval of applied and technical information. 'Grey literature' (reports, theses, proceedings, etc.) and human and social sciences data will be added to the base through the use of information selected from the existing GRISELI and Francis databases. Strong modifications are also planned in the thematic coverage of Earth sciences and will considerably reduce the geological information content. (J.S.). 1 tab

  4. The Impact of Online Bibliographic Databases on Teaching and Research in Political Science.

    Science.gov (United States)

    Reichel, Mary

    The availability of online bibliographic databases greatly facilitates literature searching in political science. The advantages to searching databases online include combination of concepts, comprehensiveness, multiple database searching, free-text searching, currency, current awareness services, document delivery service, and convenience.…

  5. Computer application for database management and networking of service radio physics; Aplicacion informatica para la gestion de bases de datos y conexiones en red de un servicio de radiofisica

    Energy Technology Data Exchange (ETDEWEB)

    Ferrando Sanchez, A.; Cabello Murillo, E.; Diaz Fuentes, R.; Castro Novais, J.; Clemente Gutierrez, F.; Casa de Juan, M. A. de la; Adaimi Hernandez, P.

    2011-07-01

    Databases in quality control prove to be a powerful tool for recording, management and statistical process control. Developed in a Windows environment under Access (Microsoft Office), our service implements this philosophy on the center's computer network. A computer that acts as the server provides the database to the treatment units for the daily recording of quality control measurements and incidents. To avoid common problems such as shortcuts that stop working after data migrations, the possible use of duplicate or erroneous data, and data loss caused by errors in network connections, we proceeded to manage the connections and access to the databases, easing maintenance and use for all service personnel.

  6. Noise reduction in protein-protein interaction graphs by the implementation of a novel weighting scheme

    Directory of Open Access Journals (Sweden)

    Moschopoulos Charalampos

    2011-06-01

    Full Text Available Abstract Background Recent technological advances applied to biology such as yeast-two-hybrid, phage display and mass spectrometry have enabled us to create a detailed map of protein interaction networks. These interaction networks represent a rich, yet noisy, source of data that could be used to extract meaningful information, such as protein complexes. Several interaction network weighting schemes have been proposed so far in the literature in order to eliminate the noise inherent in interactome data. In this paper, we propose a novel weighting scheme and apply it to the S. cerevisiae interactome. Complex prediction rates are improved by up to 39%, depending on the clustering algorithm applied. Results We adopt a two step procedure. During the first step, by applying both novel and well established protein-protein interaction (PPI) weighting methods, weights are introduced to the original interactome graph based on the confidence level that a given interaction is a true-positive one. The second step applies clustering using established algorithms in the field of graph theory, as well as two variations of Spectral clustering. The clustered interactome networks are also cross-validated against the confirmed protein complexes present in the MIPS database. Conclusions The results of our experimental work demonstrate that interactome graph weighting methods clearly improve the clustering results of several clustering algorithms. Moreover, our proposed weighting scheme outperforms other approaches of PPI graph weighting.
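
    The two-step procedure described (weight the edges by confidence, then cluster the weighted graph) can be sketched with networkx. The toy confidence weights below simply normalise an invented evidence count and are not the paper's weighting scheme; greedy modularity clustering stands in for the algorithms compared in the study.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Step 1: build the interaction graph and attach a confidence weight per edge
    # (here: number of experiments reporting the interaction, normalised to (0, 1]).
    raw_edges = [("A", "B", 3), ("B", "C", 1), ("A", "C", 2), ("D", "E", 4)]
    max_evidence = max(count for _, _, count in raw_edges)

    G = nx.Graph()
    for u, v, count in raw_edges:
        G.add_edge(u, v, weight=count / max_evidence)

    # Optionally drop edges whose confidence is too low to trust.
    G.remove_edges_from([(u, v) for u, v, d in G.edges(data=True)
                         if d["weight"] < 0.3])

    # Step 2: cluster the weighted graph; each community is a candidate complex.
    complexes = greedy_modularity_communities(G, weight="weight")
    print([sorted(c) for c in complexes])
    ```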

  7. Making connections for life: an in vivo map of the yeast interactome.

    Science.gov (United States)

    Kast, Juergen

    2008-10-01

    Proteins are the true workhorses of any cell. To carry out specific tasks, they frequently bind other molecules in their surroundings. Due to their structural complexity and flexibility, the most diverse array of interactions is seen with other proteins. The different geometries and affinities available for such interactions typically bestow specific functions on proteins. Having available a map of protein-protein interactions is therefore of enormous importance for any researcher interested in gaining insight into biological systems at the level of cells and organisms. In a recent report, a novel approach has been employed that relies on the spontaneous folding of complementary enzyme fragments fused to two different proteins to test whether these interact in their actual cellular context [Tarassov et al., Science 320, 1465-1470 (2008)]. Genome-wide application of this protein-fragment complementation assay has resulted in the first map of the in vivo interactome of Saccharomyces cerevisiae. The current data show striking similarities but also significant differences to those obtained using other large-scale approaches for the same task. This warrants a general discussion of the current state of affairs of protein-protein interaction studies and foreseeable future trends, highlighting their significance for a variety of applications and their potential to revolutionize our understanding of the architecture and dynamics of biological systems.

  8. Mapping the Interactome of a Major Mammalian Endoplasmic Reticulum Heat Shock Protein 90.

    Directory of Open Access Journals (Sweden)

    Feng Hong

    Full Text Available Up to 10% of cytosolic proteins are dependent on the mammalian heat shock protein 90 (HSP90) for folding. However, the interactors of its endoplasmic reticulum (ER) paralogue (gp96, Grp94 and HSP90b1) have not been systematically identified. By combining genetic and biochemical approaches, we have comprehensively mapped the interactome of gp96 in macrophages and B cells. A total of 511 proteins were reduced in gp96 knockdown cells, compared to levels observed in wild type cells. By immunoprecipitation, we found that 201 proteins associated with gp96. Gene Ontology analysis indicated that these proteins are involved in metabolism, transport, translation, protein folding, development, localization, response to stress and cellular component biogenesis. While known gp96 clients such as integrins, Toll-like receptors (TLRs) and the Wnt co-receptor LRP6 were confirmed, the cell surface HSP receptor CD91, the TLR4 pathway protein CD180, WDR1, GANAB and CAPZB were identified as potentially novel substrates of gp96. Taken together, our study establishes gp96 as a critical chaperone to integrate innate immunity, Wnt signaling and organ development.

  9. The Danish Cardiac Rehabilitation Database

    DEFF Research Database (Denmark)

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne

    2016-01-01

    hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. CONCLUSION: The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD......AIM OF DATABASE: The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). STUDY POPULATION: Hospitalized patients with CHD with stenosis on coronary angiography treated...... with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising data at a patient level from the latter date...

  10. A new relational database structure and online interface for the HITRAN database

    International Nuclear Information System (INIS)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-01-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described. -- Highlights: • A new, interactive version of the HITRAN database is presented. • The data is stored in a structured fashion in a relational database. • The new HITRANonline interface offers increased functionality and easier error correction
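
    To make the idea of line-transition data spread over linked tables concrete, here is a much-simplified relational sketch using sqlite3. The table and column names are illustrative only and do not reproduce the actual HITRAN schema; the join at the end is the kind of query HITRANonline issues on the user's behalf.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE molecule     (id INTEGER PRIMARY KEY, formula TEXT);
    CREATE TABLE isotopologue (id INTEGER PRIMARY KEY, molecule_id INTEGER,
                               abundance REAL);
    CREATE TABLE transition   (id INTEGER PRIMARY KEY, isotopologue_id INTEGER,
                               nu REAL, intensity REAL);
    """)
    con.execute("INSERT INTO molecule VALUES (1, 'H2O')")
    con.execute("INSERT INTO isotopologue VALUES (1, 1, 0.997)")
    con.executemany("INSERT INTO transition VALUES (?, ?, ?, ?)",
                    [(1, 1, 1500.123, 1.2e-21), (2, 1, 1501.456, 3.4e-22)])

    # All H2O lines in a wavenumber window, resolved through the linked tables.
    rows = con.execute("""
        SELECT m.formula, t.nu, t.intensity
        FROM transition t
        JOIN isotopologue i ON t.isotopologue_id = i.id
        JOIN molecule m     ON i.molecule_id = m.id
        WHERE m.formula = 'H2O' AND t.nu BETWEEN 1500 AND 1502
    """).fetchall()
    print(rows)
    ```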

  11. A new relational database structure and online interface for the HITRAN database

    Science.gov (United States)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-11-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.

  12. A database application for wilderness character monitoring

    Science.gov (United States)

    Ashley Adams; Peter Landres; Simon Kingston

    2012-01-01

    The National Park Service (NPS) Wilderness Stewardship Division, in collaboration with the Aldo Leopold Wilderness Research Institute and the NPS Inventory and Monitoring Program, developed a database application to facilitate tracking and trend reporting in wilderness character. The Wilderness Character Monitoring Database allows consistent, scientifically based...

  13. Building and analyzing protein interactome networks by cross-species comparisons

    Directory of Open Access Journals (Sweden)

    Blackman Barron

    2010-03-01

    Full Text Available Abstract Background A genomic catalogue of protein-protein interactions is a rich source of information, particularly for exploring the relationships between proteins. Numerous systems-wide and small-scale experiments have been conducted to identify interactions; however, our knowledge of all interactions for any one species is incomplete, and alternative means to expand these network maps is needed. We therefore took a comparative biology approach to predict protein-protein interactions across five species (human, mouse, fly, worm, and yeast) and developed InterologFinder for research biologists to easily navigate this data. We also developed a confidence score for interactions based on available experimental evidence and conservation across species. Results The connectivity of the resultant networks was determined to have scale-free distribution, small-world properties, and increased local modularity, indicating that the added interactions do not disrupt our current understanding of protein network structures. We show examples of how these improved interactomes can be used to analyze a genome-scale dataset (RNAi screen) and to assign new function to proteins. Predicted interactions within this dataset were tested by co-immunoprecipitation, resulting in a high rate of validation, suggesting the high quality of networks produced. Conclusions Protein-protein interactions were predicted in five species, based on orthology. An InteroScore, a score accounting for homology, number of orthologues with evidence of interactions, and number of unique observations of interactions, is given to each known and predicted interaction. Our website http://www.interologfinder.org provides research biologists intuitive access to this data.
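
    The abstract names the three ingredients of the confidence score (homology, number of orthologous species with evidence, number of unique observations) without giving the formula. The toy function below combines those ingredients into a single score; the weights and the saturation of the observation count are assumptions for illustration, not the published InteroScore definition.

    ```python
    def toy_interolog_score(orthology_identity, species_with_evidence,
                            unique_observations, max_species=5):
        """Combine the three evidence types into one score in [0, 1].
        The 0.4/0.4/0.2 weighting is an illustrative choice only."""
        homology_term = orthology_identity                # assumed already in [0, 1]
        species_term = species_with_evidence / max_species
        observation_term = min(unique_observations, 10) / 10
        return 0.4 * homology_term + 0.4 * species_term + 0.2 * observation_term

    # e.g. an interaction conserved in 3 of 5 species and seen in 4 independent screens
    print(round(toy_interolog_score(0.85, 3, 4), 3))
    ```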

  14. Database of Interacting Proteins (DIP)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The DIP database catalogs experimentally determined interactions between proteins. It combines information from a variety of sources to create a single, consistent...

  15. C# Database Basics

    CERN Document Server

    Schmalz, Michael

    2012-01-01

    Working with data and databases in C# certainly can be daunting if you're coming from VB6, VBA, or Access. With this hands-on guide, you'll shorten the learning curve considerably as you master accessing, adding, updating, and deleting data with C#-basic skills you need if you intend to program with this language. No previous knowledge of C# is necessary. By following the examples in this book, you'll learn how to tackle several database tasks in C#, such as working with SQL Server, building data entry forms, and using data in a web service. The book's code samples will help you get started

  16. Linkage between the Danish National Health Service Prescription Database, the Danish Fetal Medicine Database, and other Danish registries as a tool for the study of drug safety in pregnancy

    Directory of Open Access Journals (Sweden)

    Pedersen LH

    2016-05-01

    Full Text Available Lars H Pedersen,1,2 Olav B Petersen,1,2 Mette Nørgaard,3 Charlotte Ekelund,4 Lars Pedersen,3 Ann Tabor,4 Henrik T Sørensen3 1Department of Clinical Medicine, Aarhus University, 2Department of Obstetrics and Gynecology, 3Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, 4Department of Fetal Medicine, Rigshospitalet, Copenhagen, Denmark Abstract: A linked population-based database is being created in Denmark for research on drug safety during pregnancy. It combines information from the Danish National Health Service Prescription Database (with information on all prescriptions reimbursed in Denmark since 2004), the Danish Fetal Medicine Database, the Danish National Registry of Patients, and the Medical Birth Registry. The new linked database will provide validated information on malformations diagnosed both prenatally and postnatally. The cohort from 2008 to 2014 will comprise 589,000 pregnancies with information on 424,000 pregnancies resulting in live-born children, ~420,000 pregnancies undergoing prenatal ultrasound scans, 65,000 miscarriages, and 92,000 terminations. It will be updated yearly with information on ~80,000 pregnancies. The cohort will enable identification of drug exposures associated with severe malformations, not only based on malformations diagnosed after birth but also including those having led to termination of pregnancy or miscarriage. Such combined data will provide a unique source of information for research on the safety of medications used during pregnancy. Keywords: malformations, teratology, therapeutic drug monitoring, epidemiological methods, registries
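
    The linkage itself amounts to joining the registries on a common person or pregnancy identifier and restricting prescriptions to the exposure window of interest. The pandas sketch below is purely illustrative: the column names, the 270-day window and the toy data are assumptions, and the real registries are linked through CPR numbers under their own field names.

    ```python
    import pandas as pd

    # Hypothetical extracts from two of the linked sources (invented columns).
    prescriptions = pd.DataFrame({
        "person_id": [1, 1, 2],
        "atc_code": ["N06AB03", "J01CA04", "N02BE01"],
        "dispense_date": pd.to_datetime(["2012-03-01", "2012-06-10", "2013-01-05"]),
    })
    pregnancies = pd.DataFrame({
        "person_id": [1, 2],
        "conception_date": pd.to_datetime(["2012-02-01", "2012-12-01"]),
        "outcome": ["live birth", "miscarriage"],
    })

    # Link on the person identifier, then keep only dispensings during pregnancy
    # (crudely approximated here as within 270 days after conception).
    linked = prescriptions.merge(pregnancies, on="person_id")
    exposed = linked[
        (linked.dispense_date >= linked.conception_date)
        & (linked.dispense_date <= linked.conception_date + pd.Timedelta(days=270))
    ]
    print(exposed[["person_id", "atc_code", "outcome"]])
    ```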

  17. JDD, Inc. Database

    Science.gov (United States)

    Miller, David A., Jr.

    2004-01-01

    JDD Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). This company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance to all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers' compensation, safety and health training) relating to job safety. My summer assignment was not considered "groundbreaking research" like that of many other summer interns in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using Microsoft Excel to classify and categorize data pertaining to numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data from the training certification courses (the training field index, the employees who were present at these training courses, and who was absent). Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day to day operations and been adding the

  18. Healthcare Cost and Utilization Project Nationwide Readmissions Database (NRD)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Nationwide Readmissions Database (NRD) is a unique and powerful database designed to support various types of analyses of national readmission rates for all...

  19. Comprehensive RNA Polymerase II Interactomes Reveal Distinct and Varied Roles for Each Phospho-CTD Residue

    Directory of Open Access Journals (Sweden)

    Kevin M. Harlen

    2016-06-01

    Full Text Available Transcription controls splicing and other gene regulatory processes, yet mechanisms remain obscure due to our fragmented knowledge of the molecular connections between the dynamically phosphorylated RNA polymerase II (Pol II) C-terminal domain (CTD) and regulatory factors. By systematically isolating phosphorylation states of the CTD heptapeptide repeat (Y1S2P3T4S5P6S7), we identify hundreds of protein factors that are differentially enriched, revealing unappreciated connections between the Pol II CTD and co-transcriptional processes. These data uncover a role for threonine-4 in 3′ end processing through control of the transition between cleavage and termination. Furthermore, serine-5 phosphorylation seeds spliceosomal assembly immediately downstream of 3′ splice sites through a direct interaction with spliceosomal subcomplex U1. Strikingly, threonine-4 phosphorylation also impacts splicing by serving as a mark of co-transcriptional spliceosome release and ensuring efficient post-transcriptional splicing genome-wide. Thus, comprehensive Pol II interactomes identify the complex and functional connections between transcription machinery and other gene regulatory complexes.

  20. Examination of Industry Payments to Radiation Oncologists in 2014 Using the Centers for Medicare and Medicaid Services Open Payments Database

    Energy Technology Data Exchange (ETDEWEB)

    Jairam, Vikram [Yale School of Medicine, New Haven, Connecticut (United States); Yu, James B., E-mail: james.b.yu@yale.edu [Department of Therapeutic Radiology, Yale School of Medicine, New Haven, Connecticut (United States)

    2016-01-01

    Purpose: To use the Centers for Medicare and Medicaid Services Open Payments database to characterize payments made to radiation oncologists and compare their payment profile with that of medical and surgical oncologists. Methods and Materials: The June 2015 release of the Open Payments database was accessed, containing all payments made to physicians in 2014. The general payments dataset was used for analysis. Data on payments made to medical, surgical, and radiation oncologists was obtained and compared. Within radiation oncology, data regarding payment category, sponsorship, and geographic distribution were identified. Basic statistics including mean, median, range, and sum were calculated by provider and by transaction. Results: Among the 3 oncologic specialties, radiation oncology had the smallest proportion (58%) of compensated physicians and the lowest mean ($1620) and median ($112) payment per provider. Surgical oncology had the highest proportion (84%) of compensated physicians, whereas medical oncology had the highest mean ($6371) and median ($448) payment per physician. Within radiation oncology, nonconsulting services accounted for the most money to physicians ($1,042,556), whereas the majority of the sponsors were medical device companies (52%). Radiation oncologists in the West accepted the most money ($2,041,603) of any US Census region. Conclusions: Radiation oncologists in 2014 received a large number of payments from industry, although less than their medical or surgical counterparts. As the Open Payments database continues to be improved, it remains to be seen whether this information will be used by patients to inform choice of providers or by lawmakers to enact policy regulating physician–industry relationships.
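
    The per-provider statistics reported here (counts, mean, median, range, sum) are straightforward to recompute from the public general-payments file with pandas. The file name and column names below follow the Open Payments release conventions but are assumptions that should be checked against the header of the file actually downloaded.

    ```python
    import pandas as pd

    # File and column names are assumptions; verify them against the downloaded CSV.
    payments = pd.read_csv("OP_DTL_GNRL_PGYR2014.csv", low_memory=False)

    oncology = payments[payments["Physician_Specialty"].str.contains(
        "Radiation Oncology|Medical Oncology|Surgical Oncology", na=False)]

    # Total payments per physician, then summary statistics per specialty.
    per_provider = (oncology
                    .groupby(["Physician_Specialty", "Physician_Profile_ID"])
                    ["Total_Amount_of_Payment_USDollars"].sum())
    summary = per_provider.groupby("Physician_Specialty").agg(
        ["count", "mean", "median", "min", "max", "sum"])
    print(summary)
    ```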

  1. DEPOT database: Reference manual and user's guide

    International Nuclear Information System (INIS)

    Clancey, P.; Logg, C.

    1991-03-01

    DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information

  2. Linking Ecosystem Services Benefit Transfer Databases and Ecosystem Services Production Function Libraries

    Science.gov (United States)

    The quantification or estimation of the economic and non-economic values of ecosystem services can be done from a number of distinct approaches. For example, practitioners may use ecosystem services production function models (ESPFMs) for a particular location, or alternatively, ...

  3. Configuration Database for BaBar On-line

    International Nuclear Information System (INIS)

    Salnikov, Andrei

    2003-01-01

    The configuration database is one of the vital systems in the BaBar on-line system. It provides services for the different parts of the data acquisition system and control system, which require run-time parameters. The original design and implementation of the configuration database played a significant role in the successful BaBar operations since the beginning of experiment. Recent additions to the design of the configuration database provide better means for the management of data and add new tools to simplify main configuration tasks. We describe the design of the configuration database, its implementation with the Objectivity/DB object-oriented database, and our experience collected during the years of operation

  4. Analysis of isotropic turbulence using a public database and the Web service model, and applications to study subgrid models

    Science.gov (United States)

    Meneveau, Charles; Yang, Yunke; Perlman, Eric; Wan, Minpin; Burns, Randal; Szalay, Alex; Chen, Shiyi; Eyink, Gregory

    2008-11-01

    A public database system archiving a direct numerical simulation (DNS) data set of isotropic, forced turbulence is used for studying basic turbulence dynamics. The data set consists of the DNS output on 1024-cubed spatial points and 1024 time-samples spanning about one large-scale turn-over timescale. This complete space-time history of turbulence is accessible to users remotely through an interface that is based on the Web-services model (see http://turbulence.pha.jhu.edu). Users may write and execute analysis programs on their host computers, while the programs make subroutine-like calls that request desired parts of the data over the network. The architecture of the database is briefly explained, as are some of the new functions such as Lagrangian particle tracking and spatial box-filtering. These tools are used to evaluate and compare subgrid stresses and models.
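    To make the "subroutine-like calls over the network" pattern concrete, here is a minimal Python sketch of a client requesting pointwise velocity data; the endpoint path, parameter names and dataset label are illustrative assumptions, not the service's actual interface.

```python
# Illustrative sketch only: the endpoint path, parameter names and dataset
# identifier below are hypothetical, not the real turbulence database API.
import requests

BASE_URL = "http://turbulence.pha.jhu.edu"  # public portal mentioned in the record

def get_velocity(time, points):
    """Request interpolated velocity vectors at a list of (x, y, z) points."""
    payload = {
        "dataset": "isotropic1024",   # assumed dataset name
        "time": time,
        "points": points,             # e.g. [[0.1, 0.2, 0.3], ...]
        "field": "velocity",
    }
    # A real client would use the documented service interface and an access token.
    response = requests.post(f"{BASE_URL}/api/getdata", json=payload, timeout=60)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    sample = get_velocity(time=0.0, points=[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
    print(sample)
```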

  5. A Global Interactome Map of the Dengue Virus NS1 Identifies Virus Restriction and Dependency Host Factors

    Directory of Open Access Journals (Sweden)

    Mohamed Lamine Hafirassou

    2017-12-01

    Full Text Available Dengue virus (DENV) infections cause the most prevalent mosquito-borne viral disease worldwide, for which no therapies are available. DENV encodes seven non-structural (NS) proteins that co-assemble and recruit poorly characterized host factors to form the DENV replication complex essential for viral infection. Here, we provide a global proteomic analysis of the human host factors that interact with the DENV NS1 protein. Combined with a functional RNAi screen, this study reveals a comprehensive network of host cellular processes involved in DENV infection and identifies DENV host restriction and dependency factors. We highlight an important role of RACK1 and the chaperonin TRiC (CCT) and oligosaccharyltransferase (OST) complexes during DENV replication. We further show that the OST complex mediates NS1 and NS4B glycosylation, and pharmacological inhibition of its N-glycosylation function strongly impairs DENV infection. In conclusion, our study provides a global interactome of the DENV NS1 and identifies host factors targetable for antiviral therapies.

  6. Reactome graph database: Efficient access to complex pathway data

    Science.gov (United States)

    Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902

  7. Reactome graph database: Efficient access to complex pathway data.

    Directory of Open Access Journals (Sweden)

    Antonio Fabregat

    2018-01-01

    Full Text Available Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.

  8. Reactome graph database: Efficient access to complex pathway data.

    Science.gov (United States)

    Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning

    2018-01-01

    Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
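    As a concrete illustration of querying such a graph database, the sketch below uses the official Neo4j Python driver with a small Cypher query; the connection details are placeholders, and the node label and property names are assumptions about the Reactome graph schema, used only to show the query pattern.

```python
# Minimal sketch of a Cypher query via the official Neo4j Python driver.
# The node label (Pathway) and property names (speciesName, displayName) are
# assumptions for illustration; consult the Reactome data model for the real schema.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CYPHER = """
MATCH (p:Pathway)
WHERE p.speciesName = $species
RETURN p.displayName AS pathway
LIMIT 10
"""

with driver.session() as session:
    for record in session.run(CYPHER, species="Homo sapiens"):
        print(record["pathway"])

driver.close()
```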

  9. Developmental and Reproductive Toxicology Database (DART)

    Data.gov (United States)

    U.S. Department of Health & Human Services — A bibliographic database on the National Library of Medicine's (NLM) Toxicology Data Network (TOXNET) with references to developmental and reproductive toxicology...

  10. Latest developments for the IAGOS database: Interoperability and metadata

    Science.gov (United States)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by open access policy based on the submission of research requests which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and the real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We also are implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface) which combines model outputs with in situ data for
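    As a small illustration of the CF-compliant NetCDF output format mentioned above, the sketch below writes a one-variable time series with the netCDF4 Python library; the variable names, units and attribute values are invented examples, not IAGOS product definitions.

```python
# Toy example of writing a CF-style NetCDF file; values and names are invented.
from netCDF4 import Dataset
import numpy as np

ds = Dataset("iagos_like_example.nc", "w")
ds.Conventions = "CF-1.6"                 # assumed CF version for illustration
ds.title = "Example aircraft ozone time series"

ds.createDimension("time", None)
time = ds.createVariable("time", "f8", ("time",))
time.standard_name = "time"
time.units = "seconds since 2014-01-01 00:00:00"

o3 = ds.createVariable("ozone_mole_fraction", "f4", ("time",))
o3.standard_name = "mole_fraction_of_ozone_in_air"   # a CF standard name
o3.units = "1e-9"                                     # i.e. parts per billion

time[:] = np.arange(0, 40, 4)
o3[:] = np.array([38.1, 40.2, 41.0, 39.7, 42.3, 44.8, 43.1, 41.9, 40.5, 39.8], dtype="f4")
ds.close()
```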

  11. Analysis of the robustness of network-based disease-gene prioritization methods reveals redundancy in the human interactome and functional diversity of disease-genes.

    Directory of Open Access Journals (Sweden)

    Emre Guney

    Full Text Available Complex biological systems usually pose a trade-off between robustness and fragility where a small number of perturbations can substantially disrupt the system. Although biological systems are robust against changes in many external and internal conditions, even a single mutation can perturb the system substantially, giving rise to a pathophenotype. Recent advances in identifying and analyzing the sequence variations beneath human disorders help to comprehend a systemic view of the mechanisms underlying various disease phenotypes. Network-based disease-gene prioritization methods rank the relevance of genes in a disease under the hypothesis that genes whose proteins interact with each other tend to exhibit similar phenotypes. In this study, we have tested the robustness of several network-based disease-gene prioritization methods with respect to the perturbations of the system using various disease phenotypes from the Online Mendelian Inheritance in Man database. These perturbations have been introduced either in the protein-protein interaction network or in the set of known disease-gene associations. As the network-based disease-gene prioritization methods are based on the connectivity between known disease-gene associations, we have further used these methods to categorize the pathophenotypes with respect to the recoverability of hidden disease-genes. Our results have suggested that, in general, disease-genes are connected through multiple paths in the human interactome. Moreover, even when these paths are disturbed, network-based prioritization can reveal hidden disease-gene associations in some pathophenotypes such as breast cancer, cardiomyopathy, diabetes, leukemia, Parkinson disease and obesity to a greater extent than the rest of the pathophenotypes tested in this study. Gene Ontology (GO) analysis highlighted the role of functional diversity for such diseases.
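    The perturbation-and-recovery idea described above can be sketched in a few lines; the code below uses a toy neighbourhood-scoring rule and a random edge-removal step, standing in for (not reproducing) the specific prioritization methods evaluated in the study.

```python
# Toy sketch: neighbourhood-based disease-gene scoring plus a robustness check
# by random edge removal. The scoring rule and example genes are illustrative.
import random
import networkx as nx

def prioritize(interactome, seed_genes):
    """Score each gene by the fraction of its neighbours that are known disease genes."""
    scores = {}
    for gene in interactome.nodes:
        neighbours = list(interactome.neighbors(gene))
        scores[gene] = (sum(n in seed_genes for n in neighbours) / len(neighbours)
                        if neighbours else 0.0)
    return scores

def perturb(interactome, fraction=0.1, seed=0):
    """Return a copy of the network with a fraction of edges removed at random."""
    g = interactome.copy()
    rng = random.Random(seed)
    to_drop = rng.sample(list(g.edges), int(fraction * g.number_of_edges()))
    g.remove_edges_from(to_drop)
    return g

# Toy interactome and a single seed disease gene.
g = nx.Graph([("TP53", "MDM2"), ("MDM2", "CDKN1A"), ("TP53", "BRCA1"), ("BRCA1", "BARD1")])
print(prioritize(perturb(g, 0.25), seed_genes={"TP53"}))
```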

  12. CAZymes Analysis Toolkit (CAT): web service for searching and analyzing carbohydrate-active enzymes in a newly sequenced organism using CAZy database.

    Science.gov (United States)

    Park, Byung H; Karpinets, Tatiana V; Syed, Mustafa H; Leuze, Michael R; Uberbacher, Edward C

    2010-12-01

    The Carbohydrate-Active Enzyme (CAZy) database provides a rich set of manually annotated enzymes that degrade, modify, or create glycosidic bonds. Despite rich and invaluable information stored in the database, software tools utilizing this information for annotation of newly sequenced genomes by CAZy families are limited. We have employed two annotation approaches to fill the gap between manually curated high-quality protein sequences collected in the CAZy database and the growing number of other protein sequences produced by genome or metagenome sequencing projects. The first approach is based on a similarity search against the entire nonredundant sequences of the CAZy database. The second approach performs annotation using links or correspondences between the CAZy families and protein family domains. The links were discovered using the association rule learning algorithm applied to sequences from the CAZy database. The approaches complement each other and in combination achieved high specificity and sensitivity when cross-evaluated with the manually curated genomes of Clostridium thermocellum ATCC 27405 and Saccharophagus degradans 2-40. The capability of the proposed framework to predict the function of unknown protein domains and of hypothetical proteins in the genome of Neurospora crassa is demonstrated. The framework is implemented as a Web service, the CAZymes Analysis Toolkit, and is available at http://cricket.ornl.gov/cgi-bin/cat.cgi.
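    The two complementary annotation routes can be illustrated with a short sketch; the family names, domain identifiers and E-value cut-off below are placeholders, not the actual CAT rules or thresholds.

```python
# Minimal sketch of the two annotation routes described above:
# (1) similarity hits against CAZy sequences and (2) rules linking protein
# family domains to CAZy families. All inputs are toy placeholders.
def annotate(query_id, similarity_hits, domain_hits, domain_to_family):
    """Return the set of CAZy families supported by either route."""
    families = set()
    # Route 1: transfer from a similarity search against CAZy sequences.
    for family, evalue in similarity_hits.get(query_id, []):
        if evalue < 1e-10:          # assumed cut-off, for illustration only
            families.add(family)
    # Route 2: association rules between Pfam-like domains and CAZy families.
    for domain in domain_hits.get(query_id, []):
        families.update(domain_to_family.get(domain, []))
    return families

rules = {"PF00150": ["GH5"], "PF01670": ["GH12"]}
sim = {"prot1": [("GH5", 1e-30), ("GT2", 1e-3)]}
doms = {"prot1": ["PF00150"]}
print(annotate("prot1", sim, doms, rules))   # {'GH5'}
```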

  13. The JANA calibrations and conditions database API

    International Nuclear Information System (INIS)

    Lawrence, David

    2010-01-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases, to web services to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented which is particularly interesting since it addresses many modern cybersecurity issues including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, implied by the run currently being analyzed and the environment, relieving developers of the need to implement such details.

  14. The JANA calibrations and conditions database API

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence, David, E-mail: davidl@jlab.or [12000 Jefferson Ave., Suite 8, Newport News, VA 23601 (United States)

    2010-04-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases, to web services to flat files for the backend. A Web Service backend using the gSOAP toolkit has been implemented which is particularly interesting since it addresses many modern cybersecurity issues including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, implied by the run currently being analyzed and the environment, relieving developers of the need to implement such details.

  15. SCOWLP: a web-based database for detailed characterization and visualization of protein interfaces

    Directory of Open Access Journals (Sweden)

    Schroeder Michael

    2006-03-01

    Full Text Available Abstract Background Currently there is a strong need for methods that help to obtain an accurate description of protein interfaces in order to be able to understand the principles that govern molecular recognition and protein function. Many of the recent efforts to computationally identify and characterize protein networks extract protein interaction information at atomic resolution from the PDB. However, they pay little or no attention to small protein ligands and solvent, which are key components and mediators of protein interactions and fundamental for a complete description of protein interfaces. Interactome profiling requires the development of computational tools to extract and analyze protein-protein, protein-ligand and detailed solvent interaction information from the PDB in an automatic and comparative fashion. Adding this information to the existing information on protein-protein interactions will allow us to better understand protein interaction networks and protein function. Description SCOWLP (Structural Characterization Of Water, Ligands and Proteins) is a user-friendly and publicly accessible web-based relational database for detailed characterization and visualization of the PDB protein interfaces. The SCOWLP database includes proteins, peptidic-ligands and interface water molecules as descriptors of protein interfaces. It currently contains 74,907 protein interfaces and 2,093,976 residue-residue interactions formed by 60,664 structural units (protein domains and peptidic-ligands) and their interacting solvent. The SCOWLP web-server allows detailed structural analysis and comparisons of protein interfaces at the atomic level by text query of PDB codes and/or by navigating a SCOP-based tree. It includes a visualization tool to interactively display the interfaces and label interacting residues and interface solvent by atomic physicochemical properties. SCOWLP is automatically updated with every SCOP release. Conclusion SCOWLP enriches

  16. National Database for Autism Research (NDAR)

    Data.gov (United States)

    U.S. Department of Health & Human Services — National Database for Autism Research (NDAR) is an extensible, scalable informatics platform for autism spectrum disorder-relevant data at all levels of biological...

  17. Providing Availability, Performance, and Scalability By Using Cloud Database

    OpenAIRE

    Prof. Dr. Alaa Hussein Al-Hamami; RafalAdeeb Al-Khashab

    2014-01-01

    With the development of the Internet, new technologies and concepts have attracted the attention of its users, especially in the field of information technology; one such concept is the cloud. Cloud computing includes different components, of which the cloud database has become an important one. A cloud database is a distributed database that delivers computing as a service, or in the form of a virtual machine image rather than a product, via the internet; its advantage is that the database can...

  18. The Danish Collaborative Bacteraemia Network (DACOBAN) database

    DEFF Research Database (Denmark)

    Gradel, Kim Oren; Schønheyder, Henrik Carl; Arpi, Magnus

    2014-01-01

    The Danish Collaborative Bacteraemia Network (DACOBAN) research database includes microbiological data obtained from positive blood cultures from a geographically and demographically well-defined population serviced by three clinical microbiology departments (1.7 million residents, 32% of the Danish population). The database also includes data on comorbidity from the Danish National Patient Registry, vital status from the Danish Civil Registration System, and clinical data on 31% of nonselected records in the database. Use of the unique civil registration number given to all Danish residents enables linkage to additional registries for specific research projects. The DACOBAN database is continuously updated, and it currently comprises 39,292 patients with 49,951 bacteremic episodes from 2000 through 2011. The database is part of an international network of population-based bacteremia...

  19. A Philosophy Research Database to Share Data Resources

    Directory of Open Access Journals (Sweden)

    Jili Cheng

    2007-12-01

    Full Text Available Philosophy research used to rely mainly on traditional published journals and newspapers for collecting and communicating data. However, because of financial limits or a lack of capacity to collect data, required published materials, and even restricted materials and emerging information from research projects, often could not be obtained. The rise of digital techniques and Internet opportunities has made data resource sharing possible in philosophy research. However, although there are several ICPs with large-scale comprehensive commercial databases in the field in China, no real non-profit professional database for philosophy researchers exists. Therefore, in 2002, the Philosophy Institute of the Chinese Academy of Social Sciences began a project to build "The Database of Philosophy Research." By March 2006 the number of subsets had reached 30, with more than 30,000 records; retrieval services had reached 6,000, and article readings 30,000. Because of intellectual property considerations, the service of the database is currently limited to the information held in CASS. Nevertheless, this is the first academic database for philosophy research, so its orientation is towards resource sharing, leading users to data, and serving a large number of demands from other provinces and departments.

  20. Reshaping Smart Businesses with Cloud Database Solutions

    Directory of Open Access Journals (Sweden)

    Bogdan NEDELCU

    2015-03-01

    Full Text Available The aim of this article is to show the importance of Big Data and its growing influence on companies. We can also see how much companies are willing to invest in big data and how much they are currently gaining from it. In this big data era, there is fierce competition between companies and the technologies they use when building their strategies. There are almost no boundaries when it comes to the possibilities and facilities some databases can offer. However, the most challenging part lies in the development of efficient solutions - where and when to take the right decision, which cloud service is the most appropriate for a given scenario, and which database suits the business, taking the data types into consideration. These are just a few of the aspects dealt with in the following chapters, together with examples of the most appropriate cloud services (e.g. NoSQL databases) used by business leaders nowadays.

  1. Discerning molecular interactions: A comprehensive review on biomolecular interaction databases and network analysis tools.

    Science.gov (United States)

    Miryala, Sravan Kumar; Anbarasu, Anand; Ramaiah, Sudha

    2018-02-05

    Computational analysis of biomolecular interaction networks is now gaining importance as a way to understand the functions of novel genes and proteins. Gene interaction (GI) network analysis and protein-protein interaction (PPI) network analysis play a major role in predicting the functionality of interacting genes or proteins and give insight into the functional relationships and evolutionary conservation of interactions among genes. An interaction network is a graphical representation of the gene/protein interactome, where each gene/protein is a node and each interaction between genes/proteins is an edge. In this review, we discuss the popular open source databases that serve as data repositories to search and collect protein/gene interaction data, as well as the tools available for generating, visualizing and analyzing interaction networks. We also discuss network analysis approaches, such as topological and clustering approaches, for studying network properties, and functional enrichment servers that illustrate the functions and pathways of genes and proteins. Hence, the distinctive aim of this review is not only to provide an overview of tools and web servers for gene and protein-protein interaction (PPI) network analysis, but also to show how to extract useful and meaningful information from interaction networks. Copyright © 2017 Elsevier B.V. All rights reserved.
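    The node-and-edge representation described in this review maps directly onto standard graph libraries; the sketch below builds a toy PPI network with networkx and computes a few of the topological measures mentioned above (degree, clustering, centrality). The edge list is illustrative only.

```python
# Toy PPI network as a graph: proteins are nodes, interactions are edges.
import networkx as nx

edges = [
    ("TP53", "MDM2"), ("TP53", "BRCA1"), ("BRCA1", "BARD1"),
    ("MDM2", "CDKN1A"), ("TP53", "CDKN1A"),
]
ppi = nx.Graph(edges)

# A few standard topological measures used in network analysis.
print("nodes:", ppi.number_of_nodes(), "edges:", ppi.number_of_edges())
print("degree:", dict(ppi.degree()))
print("clustering:", nx.clustering(ppi))
print("degree centrality:", nx.degree_centrality(ppi))
```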

  2. One approach to design of speech emotion database

    Science.gov (United States)

    Uhrin, Dominik; Chmelikova, Zdenka; Tovarek, Jaromir; Partila, Pavol; Voznak, Miroslav

    2016-05-01

    This article describes a system for evaluating the credibility of recordings with an emotional character. The sound recordings form a Czech-language database for training and testing speech emotion recognition systems. These systems are designed to detect human emotions in the voice. Information about the emotional state of a speaker is useful to the security forces and the emergency call service. Personnel in action (soldiers, police officers and firefighters) are often exposed to stress. Information about their emotional state, carried in the voice, helps the dispatcher adapt control commands during an intervention. Call agents of the emergency call service must recognize the mental state of the caller to adjust the mood of the conversation. In this case, evaluation of the psychological state is the key factor for a successful intervention. A quality database of sound recordings is essential for the creation of such systems. Quality databases exist, such as the Berlin Database of Emotional Speech or Humaine; however, actors created these databases in an audio studio, which means that the recordings contain simulated emotions, not real ones. Our research aims at creating a database of Czech emotional recordings of real human speech. Collecting sound samples for the database is only one of the tasks. Another one, no less important, is to evaluate the significance of the recordings from the perspective of emotional states. The design of a methodology for evaluating the credibility of emotional recordings is described in this article. The results describe the advantages and applicability of the developed method.

  3. Drugs and Lactation Database (LactMed)

    Data.gov (United States)

    U.S. Department of Health & Human Services — A peer-reviewed and fully referenced database of drugs to which breastfeeding mothers may be exposed. Among the data included are maternal and infant levels of...

  4. Documentation Service

    International Nuclear Information System (INIS)

    Charnay, J.; Chosson, L.; Croize, M.; Ducloux, A.; Flores, S.; Jarroux, D.; Melka, J.; Morgue, D.; Mottin, C.

    1998-01-01

    This service ensures the processing and dissemination of scientific information and the management of the institute's scientific output, as well as secretariat operations for the institute's groups and services. The report on the documentation-library section mentions: the management of the documentation holdings, searches in international databases (INIS, Current Contents, Inspec), and the Pret-Inter service which allows documents to be accessed through the DEMOCRITE network of IN2P3. Also mentioned as achievements are: the setup of a video and photo database, the Web home page of the institute's library, the continued digitization of the documentation holdings by integrating CD-ROMs and diskettes, the electronic archiving of the scientific output, etc.

  5. Characterization and interactome study of white spot syndrome virus envelope protein VP11.

    Directory of Open Access Journals (Sweden)

    Wang-Jing Liu

    Full Text Available White spot syndrome virus (WSSV) is a large enveloped virus. The WSSV viral particle consists of three structural layers that surround its core DNA: an outer envelope, a tegument and a nucleocapsid. Here we characterize the WSSV structural protein VP11 (WSSV394, GenBank accession number AF440570), and use an interactome approach to analyze the possible associations between this protein and an array of other WSSV and host proteins. Temporal transcription analysis showed that vp11 is an early gene. Western blot hybridization of the intact viral particles and fractionation of the viral components, and immunoelectron microscopy showed that VP11 is an envelope protein. Membrane topology software predicted VP11 to be a type of transmembrane protein with a highly hydrophobic transmembrane domain at its N-terminal. Based on an immunofluorescence assay performed on VP11-transfected Sf9 cells and a trypsin digestion analysis of the virion, we conclude that, contrary to topology software prediction, the C-terminal of this protein is in fact inside the virion. Yeast two-hybrid screening combined with co-immunoprecipitation assays found that VP11 directly interacted with at least 12 other WSSV structural proteins as well as itself. An oligomerization assay further showed that VP11 could form dimers. VP11 is also the first reported WSSV structural protein to interact with the major nucleocapsid protein VP664.

  6. Searching for cellular partners of hantaviral nonstructural protein NSs: Y2H screening of mouse cDNA library and analysis of cellular interactome.

    Science.gov (United States)

    Rönnberg, Tuomas; Jääskeläinen, Kirsi; Blot, Guillaume; Parviainen, Ville; Vaheri, Antti; Renkonen, Risto; Bouloy, Michele; Plyusnin, Alexander

    2012-01-01

    Hantaviruses (Bunyaviridae) are negative-strand RNA viruses with a tripartite genome. The small (S) segment encodes the nucleocapsid protein and, in some hantaviruses, also the nonstructural protein (NSs). The aim of this study was to find potential cellular partners for the hantaviral NSs protein. Toward this aim, yeast two-hybrid (Y2H) screening of mouse cDNA library was performed followed by a search for potential NSs protein counterparts via analyzing a cellular interactome. The resulting interaction network was shown to form logical, clustered structures. Furthermore, several potential binding partners for the NSs protein, for instance ACBD3, were identified and, to prove the principle, interaction between NSs and ACBD3 proteins was demonstrated biochemically.

  7. Striatal Transcriptome and Interactome Analysis of Shank3-overexpressing Mice Reveals the Connectivity between Shank3 and mTORC1 Signaling

    Directory of Open Access Journals (Sweden)

    Yeunkum Lee

    2017-06-01

    Full Text Available Mania causes symptoms of hyperactivity, impulsivity, elevated mood, reduced anxiety and decreased need for sleep, which suggests that the dysfunction of the striatum, a critical component of the brain motor and reward system, can be causally associated with mania. However, detailed molecular pathophysiology underlying the striatal dysfunction in mania remains largely unknown. In this study, we aimed to identify the molecular pathways showing alterations in the striatum of SH3 and multiple ankyrin repeat domains 3 (Shank3)-overexpressing transgenic (TG) mice that display manic-like behaviors. The results of transcriptome analysis suggested that mammalian target of rapamycin complex 1 (mTORC1) signaling may be the primary molecular signature altered in the Shank3 TG striatum. Indeed, we found that striatal mTORC1 activity, as measured by mTOR S2448 phosphorylation, was significantly decreased in the Shank3 TG mice compared to wild-type (WT) mice. To elucidate the potential underlying mechanism, we re-analyzed previously reported protein interactomes, and detected a high connectivity between Shank3 and several upstream regulators of mTORC1, such as tuberous sclerosis 1 (TSC1), TSC2 and Ras homolog enriched in striatum (Rhes), via 94 common interactors that we denominated “Shank3-mTORC1 interactome”. We noticed that, among the 94 common interactors, 11 proteins were related to actin filaments, the level of which was increased in the dorsal striatum of Shank3 TG mice. Furthermore, we could co-immunoprecipitate Shank3, Rhes and Wiskott-Aldrich syndrome protein family verprolin-homologous protein 1 (WAVE1) proteins from the striatal lysate of Shank3 TG mice. By comparing with the gene sets of psychiatric disorders, we also observed that the 94 proteins of the Shank3-mTORC1 interactome were significantly associated with bipolar disorder (BD). Altogether, our results suggest a protein interaction-mediated connectivity between Shank3 and certain upstream

  8. Project for a relational database for a radiotherapy service

    International Nuclear Information System (INIS)

    Esposito, R. D.; Planes Meseguer, D.; Dorado Rodriguez, M. P.

    2011-01-01

    The aim of this work is to be able to easily extract useful data in order to improve our working protocols and to evaluate the results of treatments quantitatively. To this end, we are implementing a relational database (DB) that allows practical use of the stored information.

  9. Development of a PSA information database system

    International Nuclear Information System (INIS)

    Kim, Seung Hwan

    2005-01-01

    The need for a PSA information database to support the performance of a PSA has been growing rapidly. For example, performing a PSA requires a lot of data to analyze, to evaluate the risk, to trace the process of results and to verify the results. A PSA information database is a system that stores all PSA-related information in a database and file system, with cross links to jump to the physical documents whenever they are needed. The Korea Atomic Energy Research Institute is developing a PSA information database system, AIMS (Advanced Information Management System for PSA). The objective is to integrate and computerize all the distributed information of a PSA into one system and to enhance the accessibility to PSA information for all PSA-related activities. This paper describes how we implemented such a database-centered application from the viewpoint of two areas: database design and data (document) services

  10. Private and Efficient Query Processing on Outsourced Genomic Databases.

    Science.gov (United States)

    Ghasemi, Reza; Al Aziz, Md Momin; Mohammed, Noman; Dehkordi, Massoud Hadian; Jiang, Xiaoqian

    2017-09-01

    Applications of genomic studies are spreading rapidly in many domains of science and technology, such as healthcare, biomedical research, direct-to-consumer services, and legal and forensic applications. However, there are a number of obstacles that make it hard to access and process a big genomic database for these applications. First, sequencing a genome is a time-consuming and expensive process. Second, it requires large-scale computation and storage systems to process genomic sequences. Third, genomic databases are often owned by different organizations, and thus are not available for public usage. The cloud computing paradigm can be leveraged to facilitate the creation and sharing of big genomic databases for these applications. Genomic data owners can outsource their databases to a centralized cloud server to ease access to their databases. However, data owners are reluctant to adopt this model, as it requires outsourcing the data to an untrusted cloud service provider that may cause data breaches. In this paper, we propose a privacy-preserving model for outsourcing genomic data to a cloud. The proposed model enables query processing while providing privacy protection for genomic databases. Privacy of the individuals is guaranteed by permuting and adding fake genomic records to the database. These techniques allow the cloud to evaluate count and top-k queries securely and efficiently. Experimental results demonstrate that a count and a top-k query over 40 Single Nucleotide Polymorphisms (SNPs) in a database of 20 000 records take around 100 and 150 s, respectively.
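    The core idea of hiding the database behind a permutation plus fake records before answering a count query can be conveyed with a toy sketch; this is a didactic illustration of the general approach, not the secure protocol evaluated in the paper.

```python
# Didactic sketch only: shows permutation + fake records + a count query.
# A real protocol would keep the fake-record bookkeeping and decryption on the
# client side; here we simply tag and skip fakes to keep the example short.
import random

def obfuscate(records, n_fake, rng):
    fake = [{"id": f"fake{i}", "rs123": rng.choice("ACGT"), "fake": True}
            for i in range(n_fake)]
    mixed = records + fake
    rng.shuffle(mixed)              # permutation hides the original order
    return mixed

def count_query(mixed, snp, allele):
    return sum(1 for r in mixed if not r.get("fake") and r.get(snp) == allele)

rng = random.Random(42)
db = [{"id": "p1", "rs123": "A"}, {"id": "p2", "rs123": "G"}, {"id": "p3", "rs123": "A"}]
print(count_query(obfuscate(db, n_fake=5, rng=rng), "rs123", "A"))   # 2
```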

  11. Distributed Pseudo-Random Number Generation and Its Application to Cloud Database

    OpenAIRE

    Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua

    2014-01-01

    Cloud databases are now a rapidly growing trend in the cloud computing market. They enable clients to run their computations on outsourced databases or to access distributed database services in the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...

  12. An XML-Based Networking Method for Connecting Distributed Anthropometric Databases

    Directory of Open Access Journals (Sweden)

    H Cheng

    2007-03-01

    Full Text Available Anthropometric data are used by numerous types of organizations for health evaluation, ergonomics, apparel sizing, fitness training, and many other applications. Data have been collected and stored in electronic databases since at least the 1940s. These databases are owned by many organizations around the world. In addition, the anthropometric studies stored in these databases often employ different standards, terminology, procedures, or measurement sets. To promote the use and sharing of these databases, the World Engineering Anthropometry Resources (WEAR) group was formed and tasked with the integration and publishing of member resources. It is easy to see that organizing worldwide anthropometric data into a single database architecture could be a daunting and expensive undertaking. The challenges of WEAR integration reflect mainly in the areas of distributed and disparate data, different standards and formats, independent memberships, and limited development resources. Fortunately, XML schema and web services provide an alternative method for networking databases, referred to as the Loosely Coupled WEAR Integration. A standard XML schema can be defined and used as a type of Rosetta stone to translate the anthropometric data into a universal format, and a web services system can be set up to link the databases to one another. In this way, the originators of the data can keep their data locally along with their own data management system and user interface, but their data can be searched and accessed as part of the larger data network, and even combined with the data of others. This paper will identify requirements for WEAR integration, review XML as the universal format, review different integration approaches, and propose a hybrid web services/data mart solution.
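    A minimal sketch of the "Rosetta stone" translation step is shown below; the element and attribute names are invented for illustration and will differ from the actual WEAR schema.

```python
# Sketch: translate a locally formatted record into a shared XML representation
# before exposing it through a web service. Element/attribute names are invented.
import xml.etree.ElementTree as ET

def to_common_xml(local_record):
    subject = ET.Element("subject", id=str(local_record["subject_id"]))
    for name, (value, unit) in local_record["measurements"].items():
        m = ET.SubElement(subject, "measurement", name=name, unit=unit)
        m.text = str(value)
    return ET.tostring(subject, encoding="unicode")

record = {"subject_id": 17,
          "measurements": {"stature": (1755, "mm"), "hip_breadth": (352, "mm")}}
print(to_common_xml(record))
```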

  13. Upgrade of laser and electron beam welding database

    CERN Document Server

    Furman, Magdalena

    2014-01-01

    The main purpose of this project was to fix existing issues and update the existing database holding the parameters of the laser-beam and electron-beam welding machines. Moreover, the database had to be extended to hold the data for the new machines that recently arrived at the workshop. As a solution, the database was migrated to the Oracle framework, and a new user interface (using APEX) was designed and implemented with integration with the CERN web services (EDMS, Phonebook, JMT, CDD and EDH).

  14. Curating the innate immunity interactome.

    LENUS (Irish Health Repository)

    Lynn, David J

    2010-01-01

    The innate immune response is the first line of defence against invading pathogens and is regulated by complex signalling and transcriptional networks. Systems biology approaches promise to shed new light on the regulation of innate immunity through the analysis and modelling of these networks. A key initial step in this process is the contextual cataloguing of the components of this system and the molecular interactions that comprise these networks. InnateDB (http://www.innatedb.com) is a molecular interaction and pathway database developed to facilitate systems-level analyses of innate immunity.

  15. The human interactome knowledge base (hint-kb): An integrative human protein interaction database enriched with predicted protein–protein interaction scores using a novel hybrid technique

    KAUST Repository

    Theofilatos, Konstantinos A.

    2013-07-12

    Proteins are the functional components of many cellular processes and the identification of their physical protein–protein interactions (PPIs) is an area of mature academic research. Various databases have been developed containing information about experimentally and computationally detected human PPIs as well as their corresponding annotation data. However, these databases contain many false positive interactions, are partial and only a few of them incorporate data from various sources. To overcome these limitations, we have developed HINT-KB (http://biotools.ceid.upatras.gr/hint-kb/), a knowledge base that integrates data from various sources, provides a user-friendly interface for their retrieval, calculates a set of features of interest and computes a confidence score for every candidate protein interaction. This confidence score is essential for filtering the false positive interactions which are present in existing databases, predicting new protein interactions and measuring the frequency of each true protein interaction. For this reason, a novel machine learning hybrid methodology, called Evolutionary Kalman Mathematical Modelling (EvoKalMaModel), was used to achieve an accurate and interpretable scoring methodology. The experimental results indicated that the proposed scoring scheme outperforms existing computational methods for the prediction of PPIs.
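    The role of the confidence score as a filter can be illustrated with a short sketch; the weighted-average scorer and feature names below are simple stand-ins for the EvoKalMaModel methodology, used only to show how a per-interaction score separates well-supported from poorly supported candidate PPIs.

```python
# Toy confidence scoring: a weighted average of normalised evidence features.
# Feature names, weights and the 0.5 threshold are invented for illustration.
def confidence(features, weights):
    """Weighted average of evidence features, each assumed to lie in [0, 1]."""
    return sum(weights[k] * features.get(k, 0.0) for k in weights) / sum(weights.values())

WEIGHTS = {"coexpression": 1.0, "shared_go_terms": 1.0, "n_detection_methods": 2.0}

candidates = [
    (("TP53", "MDM2"), {"coexpression": 0.9, "shared_go_terms": 0.8, "n_detection_methods": 1.0}),
    (("TP53", "ALB"),  {"coexpression": 0.1, "shared_go_terms": 0.0, "n_detection_methods": 0.2}),
]

filtered = [pair for pair, feats in candidates if confidence(feats, WEIGHTS) >= 0.5]
print(filtered)   # keeps only the well-supported pair
```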

  16. USING THE INTERNATIONAL SCIENTOMETRIC DATABASES OF OPEN ACCESS IN SCIENTIFIC RESEARCH

    Directory of Open Access Journals (Sweden)

    O. Galchevska

    2015-05-01

    Full Text Available In the article, the use of international scientometric databases in research activities, as web-oriented resources and services that serve as means of publication and dissemination of research results, is considered. Selection criteria for open-access scientometric platforms for conducting scientific research (coverage of Ukrainian scientific periodicals and publications, data accuracy, general characteristics of the international scientometric database, technical and functional characteristics, and their indexes) are emphasized. A review of the most popular open-access scientometric databases Google Scholar, Russian Scientific Citation Index (RSCI), Scholarometer, Index Copernicus (IC), and Microsoft Academic Search is made. The advantages of using the international scientometric database Google Scholar in scientific research are determined, as are prospects for research into the identification of the system's cloud-based information and analytical services.

  17. NAViGaTing the micronome--using multiple microRNA prediction databases to identify signalling pathway-associated microRNAs.

    Directory of Open Access Journals (Sweden)

    Elize A Shirdel

    2011-02-01

    Full Text Available MicroRNAs are a class of small RNAs known to regulate gene expression at the transcript level, the protein level, or both. Since microRNA binding is sequence-based but possibly structure-specific, work in this area has resulted in multiple databases storing predicted microRNA:target relationships computed using diverse algorithms. We integrate prediction databases, compare predictions to in vitro data, and use cross-database predictions to model the microRNA:transcript interactome--referred to as the micronome--to study microRNA involvement in well-known signalling pathways as well as associations with disease. We make this data freely available with a flexible user interface as our microRNA Data Integration Portal--mirDIP (http://ophid.utoronto.ca/mirDIP). mirDIP integrates prediction databases to elucidate accurate microRNA:target relationships. Using NAViGaTOR to produce interaction networks implicating microRNAs in literature-based, KEGG-based and Reactome-based pathways, we find these signalling pathway networks have significantly more microRNA involvement compared to chance (p<0.05), suggesting microRNAs co-target many genes in a given pathway. Further examination of the micronome shows two distinct classes of microRNAs: universe microRNAs, which are involved in many signalling pathways, and intra-pathway microRNAs, which target multiple genes within one signalling pathway. We find universe microRNAs to have more targets (p<0.0001), to be more studied (p<0.0002), and to have higher degree in the KEGG cancer pathway (p<0.0001), compared to intra-pathway microRNAs. Our pathway-based analysis of mirDIP data suggests microRNAs are involved in intra-pathway signalling. We identify two distinct classes of microRNAs, suggesting a hierarchical organization of microRNAs co-targeting genes both within and between pathways, and implying differential involvement of universe and intra-pathway microRNAs at the disease level.
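    The cross-database integration step can be sketched as a simple vote across prediction sources; the database names and microRNA:target pairs below are placeholders, not mirDIP content.

```python
# Toy cross-database integration: keep a microRNA:target pair only when it is
# predicted by at least a minimum number of independent sources.
from collections import defaultdict

predictions = {
    "TargetScan-like": {("hsa-miR-21", "PTEN"), ("hsa-miR-21", "PDCD4")},
    "miRanda-like":    {("hsa-miR-21", "PTEN"), ("hsa-miR-155", "SOCS1")},
    "PicTar-like":     {("hsa-miR-21", "PTEN")},
}

support = defaultdict(set)
for source, pairs in predictions.items():
    for pair in pairs:
        support[pair].add(source)

consensus = {pair: srcs for pair, srcs in support.items() if len(srcs) >= 2}
print(consensus)   # only hsa-miR-21 -> PTEN is supported by two or more sources
```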

  18. Development of an Aerodynamic Analysis Method and Database for the SLS Service Module Panel Jettison Event Utilizing Inviscid CFD and MATLAB

    Science.gov (United States)

    Applebaum, Michael P.; Hall, Leslie, H.; Eppard, William M.; Purinton, David C.; Campbell, John R.; Blevins, John A.

    2015-01-01

    This paper describes the development, testing, and utilization of an aerodynamic force and moment database for the Space Launch System (SLS) Service Module (SM) panel jettison event. The database is a combination of inviscid Computational Fluid Dynamic (CFD) data and MATLAB code written to query the data at input values of vehicle/SM panel parameters and return the aerodynamic force and moment coefficients of the panels as they are jettisoned from the vehicle. The database encompasses over 5000 CFD simulations with the panels either in the initial stages of separation where they are hinged to the vehicle, in close proximity to the vehicle, or far enough from the vehicle that body interference effects are neglected. A series of viscous CFD check cases were performed to assess the accuracy of the Euler solutions for this class of problem and good agreement was obtained. The ultimate goal of the panel jettison database was to create a tool that could be coupled with any 6-Degree-Of-Freedom (DOF) dynamics model to rapidly predict SM panel separation from the SLS vehicle in a quasi-unsteady manner. Results are presented for panel jettison simulations that utilize the database at various SLS flight conditions. These results compare favorably to an approach that directly couples a 6-DOF model with the Cart3D Euler flow solver and obtains solutions for the panels at exact locations. This paper demonstrates a method of using inviscid CFD simulations coupled with a 6-DOF model that provides adequate fidelity to capture the physics of this complex multiple moving-body panel separation event.
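    The record describes MATLAB code that interpolates a CFD-derived table at the requested panel state; the sketch below shows the same lookup pattern in Python with SciPy (a deliberate substitution for illustration). The grid values and coefficients are invented placeholders, not the SLS database.

```python
# Illustrative table lookup by interpolation; all numbers are invented.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

mach = np.array([0.8, 1.2, 1.6])            # assumed flight Mach numbers
hinge_angle = np.array([0.0, 30.0, 60.0])   # assumed panel hinge angles, deg

# Normal-force coefficient table, shape (len(mach), len(hinge_angle)).
cn_table = np.array([[0.10, 0.35, 0.55],
                     [0.12, 0.40, 0.62],
                     [0.15, 0.45, 0.70]])

cn_lookup = RegularGridInterpolator((mach, hinge_angle), cn_table)

# Query the table at an off-grid condition, as a 6-DOF model would at each step.
print(cn_lookup([[1.05, 42.0]]))
```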

  19. 77 FR 12234 - Changes in Hydric Soils Database Selection Criteria

    Science.gov (United States)

    2012-02-29

    ... Conservation Service [Docket No. NRCS-2011-0026] Changes in Hydric Soils Database Selection Criteria AGENCY... Changes to the National Soil Information System (NASIS) Database Selection Criteria for Hydric Soils of the United States. SUMMARY: The National Technical Committee for Hydric Soils (NTCHS) has updated the...

  20. HINT-KB: The human interactome knowledge base

    KAUST Repository

    Theofilatos, Konstantinos A.; Dimitrakopoulos, Christos M.; Kleftogiannis, Dimitrios A.; Moschopoulos, Charalampos N.; Papadimitriou, Stergios; Likothanassis, Spiridon D.; Mavroudi, Seferina P.

    2012-01-01

    Proteins and their interactions are considered to play a significant role in many cellular processes. The identification of Protein-Protein interactions (PPIs) in human is an open research area. Many Databases, which contain information about

  1. Draft secure medical database standard.

    Science.gov (United States)

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support: high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but instead are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the related problems and requirements of medical database security. It addresses the problems of medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detail. The current national and international efforts in the area are studied. It also gives an overview of the research work in the area. The document also presents in detail the most complete, to our knowledge, set of security guidelines for the development and operation of medical database systems.

  2. News from the Library: Scientific Information Service - service interruption

    CERN Multimedia

    2014-01-01

    Techniques de l'Ingénieur has been part of the selection of databases offered by the Scientific Information Service for the last five years.   Unfortunately, as a consequence of budget reductions, and after careful consideration of all available options, we have to end this subscription. It will still be possible to purchase access to individual chapters via the Library services.  Furthermore, we are considering ending our subscriptions to Web of Science and Springer Materials (the Landolt-Börnstein database) during the course of 2015. We thank you for your understanding and welcome your feedback to library.desk@cern.ch

  3. DOE technology information management system database study report

    Energy Technology Data Exchange (ETDEWEB)

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  4. Relative Impact of Print and Database Products on Database Producer Expenses and Income--A Follow-Up.

    Science.gov (United States)

    Williams, Martha E.

    1982-01-01

    Provides update to 13-year analysis of finances of major database producer noting actions taken to improve finances (decrease expenses, increase efficiency, develop new products, market strategies and services, change pricing scheme, omit print products, increase prices) and consequences of actions (revenue increase, connect hour increase). Five…

  5. HINT-KB: The human interactome knowledge base

    KAUST Repository

    Theofilatos, Konstantinos A.

    2012-01-01

    Proteins and their interactions are considered to play a significant role in many cellular processes. The identification of Protein-Protein interactions (PPIs) in human is an open research area. Many Databases, which contain information about experimentally and computationally detected human PPIs as well as their corresponding annotation data, have been developed. However, these databases contain many false positive interactions, are partial and only a few of them incorporate data from various sources. To overcome these limitations, we have developed HINT-KB (http://150.140.142.24:84/Default.aspx) which is a knowledge base that integrates data from various sources, provides a user-friendly interface for their retrieval, estimates a set of features of interest and computes a confidence score for every candidate protein interaction using a modern computational hybrid methodology. © 2012 IFIP International Federation for Information Processing.

  6. Study of nuclear data online services

    International Nuclear Information System (INIS)

    Fan Tieshuan; Guo Zhiyu; Liu Wenlong; Ye Weiguo; Feng Yuqing; Song Xiangxiang; Huang Gang; Hong Yingjue; Liu Chi; Liu Tingjin; Chen Jinxiang; Tang Guoyou; Shi Zhaoming; Chen Jia'er; Huang Xiaolong

    2003-01-01

    A web-based nuclear data service software system, NDOS (Nuclear Data Online Services), has been developed and released in Sep. 2001. Through the Internet, this system distributes free of charge 8 international nuclear databases: 5 evaluated neutron databases (BROND, CENDL, ENDF, JEF and JENDL), the Evaluated Nuclear Structure and Decay File ENSDF, the Experimental Nuclear Data Library EXFOR database and the IAEA Photonuclear Data Library. A software package, NDVS (Nuclear Data Viewing System), facilitates the visualization and manipulation of nuclear data. The computer programs providing support for database management and data retrievals are based on the Linux implementation of PHP and the MySQL software. (authors)

  7. Use of CD-Rom databases by staff and students in the University of ...

    African Journals Online (AJOL)

    The focus of this study is the use of CD-ROM databases by staff and students in the University of Jos library. This is of interest as CD-ROM database services are in consonance with the vision of providing excellent and effective information services to all staff and students of the University of Jos. The study was guided by six ...

  8. A Sandia telephone database system

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.; Tolendino, L.F.

    1991-08-01

    Sandia National Laboratories, Albuquerque, may soon have more responsibility for the operation of its own telephone system. The processes that constitute providing telephone service can all be improved through the use of a central data information system. We studied these processes, determined the requirements for a database system, then designed the first stages of a system that meets our needs for work order handling, trouble reporting, and ISDN hardware assignments. The design was based on an extensive set of applications that have been used for five years to manage the Sandia secure data network. The system utilizes an Ingres database management system and is programmed using the Application-By-Forms tools.

  9. EQUIP: A European Survey of Quality Criteria for the Evaluation of Databases.

    Science.gov (United States)

    Wilson, T. D.

    1998-01-01

    Reports on two stages of an investigation into the perceived quality of online databases. Presents data from 989 questionnaires from 600 database users in 12 European and Scandinavian countries and results of a test of the SERVQUAL methodology for identifying user expectations about database services. Lists statements used in the SERVQUAL survey.…

  10. Database citation in full text biomedical articles.

    Science.gov (United States)

    Kafkas, Şenay; Kim, Jee-Hyub; McEntyre, Johanna R

    2013-01-01

    Molecular biology and literature databases represent essential infrastructure for life science research. Effective integration of these data resources requires that there are structured cross-references at the level of individual articles and biological records. Here, we describe the current patterns of how database entries are cited in research articles, based on analysis of the full text Open Access articles available from Europe PMC. Focusing on citation of entries in the European Nucleotide Archive (ENA), UniProt and Protein Data Bank, Europe (PDBe), we demonstrate that text mining doubles the number of structured annotations of database record citations supplied in journal articles by publishers. Many thousands of new literature-database relationships are found by text mining, since these relationships are also not present in the set of articles cited by database records. We recommend that structured annotation of database records in articles is extended to other databases, such as ArrayExpress and Pfam, entries from which are also cited widely in the literature. The very high precision and high-throughput of this text-mining pipeline makes this activity possible both accurately and at low cost, which will allow the development of new integrated data services.
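    As a rough illustration of the kind of pattern matching such a pipeline builds on, the sketch below scans article text for candidate accessions using deliberately simplified regular expressions; the patterns are approximations for the example only, and a production pipeline validates every hit against the target database (note how the naive ENA pattern below also re-matches the UniProt accession, which is exactly why validation matters).

      import re

      # Simplified, illustrative accession patterns; real pipelines use stricter
      # rules and validate every hit against the target database.
      PATTERNS = {
          "UniProt": re.compile(r"\b[OPQ][0-9][A-Z0-9]{3}[0-9]\b"),
          "PDBe": re.compile(r"\bPDB[ :]?([1-9][A-Za-z0-9]{3})\b"),
          "ENA": re.compile(r"\b[A-Z]{1,2}\d{5,6}\b"),
      }

      def find_database_citations(text):
          """Return candidate (database, accession) pairs mentioned in the text."""
          hits = []
          for db, pattern in PATTERNS.items():
              for match in pattern.finditer(text):
                  accession = match.group(1) if match.groups() else match.group(0)
                  hits.append((db, accession))
          return hits

      sample = "The structure (PDB 1TUP) and the sequence P04637 were deposited under accession X02469."
      print(find_database_citations(sample))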

  11. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    Vaniachine, A. V.; von der Schmitt, J. G.

    2008-01-01

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services

  12. RA radiological characterization database application

    International Nuclear Information System (INIS)

    Steljic, M.M.; Ljubenov, V.Lj. (E-mail address of corresponding author: milijanas@vin.bg.ac.yu; Steljic, M.M.)

    2005-01-01

    Radiological characterization of the RA research reactor is one of the main activities in the first two years of the reactor decommissioning project. The raw characterization data from direct measurements or laboratory analyses (defined within the existing sampling and measurement programme) have to be interpreted, organized and summarized in order to prepare the final characterization survey report. This report should be made so that the radiological condition of the entire site is completely and accurately shown, with the radiological condition of the components clearly depicted. This paper presents an electronic database application, designed as a serviceable and efficient tool for characterization data storage, review and analysis, as well as for report generation. A relational database model was designed, and the application was built using Microsoft Access 2002 (SP1), a 32-bit RDBMS for desktop and client/server database applications that run under Windows XP. (author)

  13. CHID: a unique health information and education database.

    OpenAIRE

    Lunin, L F; Stein, R S

    1987-01-01

    The public's growing interest in health information and the health professions' increasing need to locate health education materials can be answered in part by the new Combined Health Information Database (CHID). This unique database focuses on materials and programs in professional and patient education, general health education, and community risk reduction. Accessible through BRS, CHID suggests sources for procuring brochures, pamphlets, articles, and films on community services, programs ...

  14. Federal Advisory Committee Act (FACA) Database-Complete-Raw

    Data.gov (United States)

    General Services Administration — The Federal Advisory Committee Act (FACA) database is used by Federal agencies to continuously manage an average of 1,000 advisory committees government-wide. This...

  15. Database Description - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database: Database name: Arabidopsis Phenome Database. Creator: Hiroshi Masuya, BioResource Center. Database classification: Plant databases - Arabidopsis thaliana. Organism taxonomy name: Arabidopsis thaliana; Taxonomy ID: 3702. Database description: The Arabidopsis thaliana phenome … We developed the new Arabidopsis Phenome Database integrating two novel databases … The other, the “Database of Curated Plant Phenome”, focusing …

  16. National Database for Autism Research (NDAR): Big Data Opportunities for Health Services Research and Health Technology Assessment.

    Science.gov (United States)

    Payakachat, Nalin; Tilford, J Mick; Ungar, Wendy J

    2016-02-01

    The National Database for Autism Research (NDAR) is a US National Institutes of Health (NIH)-funded research data repository created by integrating heterogeneous datasets through data sharing agreements between autism researchers and the NIH. To date, NDAR is considered the largest neuroscience and genomic data repository for autism research. In addition to biomedical data, NDAR contains a large collection of clinical and behavioral assessments and health outcomes from novel interventions. Importantly, NDAR has a global unique patient identifier that can be linked to aggregated individual-level data for hypothesis generation and testing, and for replicating research findings. As such, NDAR promotes collaboration and maximizes public investment in the original data collection. As screening and diagnostic technologies as well as interventions for children with autism are expensive, health services research (HSR) and health technology assessment (HTA) are needed to generate more evidence to facilitate implementation when warranted. This article describes NDAR and explains its value to health services researchers and decision scientists interested in autism and other mental health conditions. We provide a description of the scope and structure of NDAR and illustrate how data are likely to grow over time and become available for HSR and HTA.

  17. Service employees give as they get: internal service as a moderator of the service climate-service outcomes link.

    Science.gov (United States)

    Ehrhart, Karen Holcombe; Witt, L A; Schneider, Benjamin; Perry, Sara Jansen

    2011-03-01

    We lend theoretical insight to the service climate literature by exploring the joint effects of branch service climate and the internal service provided to the branch (the service received from corporate units to support external service delivery) on customer-rated service quality. We hypothesized that service climate is related to service quality most strongly when the internal service quality received is high, providing front-line employees with the capability to deliver what the service climate motivates them to do. We studied 619 employees and 1,973 customers in 36 retail branches of a bank. We aggregated employee perceptions of the internal service quality received from corporate units and the local service climate and external customer perceptions of service quality to the branch level of analysis. Findings were consistent with the hypothesis that high-quality internal service is necessary for branch service climate to yield superior external customer service quality. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  18. The human interactome knowledge base (hint-kb): An integrative human protein interaction database enriched with predicted protein–protein interaction scores using a novel hybrid technique

    KAUST Repository

    Theofilatos, Konstantinos A.; Dimitrakopoulos, Christos M.; Likothanassis, Spiridon D.; Kleftogiannis, Dimitrios A.; Moschopoulos, Charalampos N.; Alexakos, Christos; Papadimitriou, Stergios; Mavroudi, Seferina P.

    2013-01-01

    Proteins are the functional components of many cellular processes and the identification of their physical protein–protein interactions (PPIs) is an area of mature academic research. Various databases have been developed containing information about

  19. Development of a dementia assessment quality database

    DEFF Research Database (Denmark)

    Johannsen, P.; Jørgensen, Kasper; Korner, A.

    2011-01-01

    OBJECTIVE: Increased focus on the quality of health care requires tools and information to address and improve quality. One tool to evaluate and report the quality of clinical health services is quality indicators based on a clinical database. METHOD: The Capital Region of Denmark runs a quality database for dementia evaluation in the secondary health system. One volume and seven process quality indicators on dementia evaluations are monitored. Indicators include frequency of demented patients, percentage of patients evaluated within three months, whether the work-up included blood tests, Mini… for the data analyses. RESULTS: The database was constructed in 2005 and covers 30% of the Danish population. Data from all consecutive cases evaluated for dementia in the secondary health system in the Capital Region of Denmark are entered. The database has shown that the basic diagnostic work-up programme…

  20. Wisconsin Inventors' Network Database final report

    Energy Technology Data Exchange (ETDEWEB)

    1991-12-04

    The Wisconsin Innovation Service Center at UW-Whitewater received a DOE grant to create an Inventor's Network Database to assist independent inventors and entrepreneurs with new product development. Since 1980, the Wisconsin Innovation Service Center (WISC) at the University of Wisconsin-Whitewater has assisted independent and small business inventors in estimating the marketability of their new product ideas and inventions. The purpose of the WISC as an economic development entity is to encourage inventors who appear to have commercially viable inventions, based on preliminary market research, to invest in the next stages of development, perhaps investigating prototype development, legal protection, or more in-depth market research. To address inventors' information needs, WISC developed an electronic database with search capabilities by geographic region and by product category/industry. It targets both public and private resources capable of, and interested in, working with individual and small business inventors. At present, the project includes resources in Wisconsin only.

  1. Database Resources of the BIG Data Center in 2018.

    Science.gov (United States)

    2018-01-04

    The BIG Data Center at Beijing Institute of Genomics (BIG) of the Chinese Academy of Sciences provides freely open access to a suite of database resources in support of worldwide research activities in both academia and industry. With the vast amounts of omics data generated at ever-greater scales and rates, the BIG Data Center is continually expanding, updating and enriching its core database resources through big-data integration and value-added curation, including BioCode (a repository archiving bioinformatics tool codes), BioProject (a biological project library), BioSample (a biological sample library), Genome Sequence Archive (GSA, a data repository for archiving raw sequence reads), Genome Warehouse (GWH, a centralized resource housing genome-scale data), Genome Variation Map (GVM, a public repository of genome variations), Gene Expression Nebulas (GEN, a database of gene expression profiles based on RNA-Seq data), Methylation Bank (MethBank, an integrated databank of DNA methylomes), and Science Wikis (a series of biological knowledge wikis for community annotations). In addition, three featured web services are provided, viz., BIG Search (search as a service; a scalable inter-domain text search engine), BIG SSO (single sign-on as a service; a user access control system to gain access to multiple independent systems with a single ID and password) and Gsub (submission as a service; a unified submission service for all relevant resources). All of these resources are publicly accessible through the home page of the BIG Data Center at http://bigd.big.ac.cn. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Pharmacology Portal: An Open Database for Clinical Pharmacologic Laboratory Services.

    Science.gov (United States)

    Karlsen Bjånes, Tormod; Mjåset Hjertø, Espen; Lønne, Lars; Aronsen, Lena; Andsnes Berg, Jon; Bergan, Stein; Otto Berg-Hansen, Grim; Bernard, Jean-Paul; Larsen Burns, Margrete; Toralf Fosen, Jan; Frost, Joachim; Hilberg, Thor; Krabseth, Hege-Merete; Kvan, Elena; Narum, Sigrid; Austgulen Westin, Andreas

    2016-01-01

    More than 50 Norwegian public and private laboratories provide one or more analyses for therapeutic drug monitoring or testing for drugs of abuse. Practices differ among laboratories, and analytical repertoires can change rapidly as new substances become available for analysis. The Pharmacology Portal was developed to provide an overview of these activities and to standardize the practices and terminology among laboratories. The Pharmacology Portal is a modern dynamic web database comprising all available analyses within therapeutic drug monitoring and testing for drugs of abuse in Norway. Content can be retrieved by using the search engine or by scrolling through substance lists. The core content is a substance registry updated by a national editorial board of experts within the field of clinical pharmacology. This ensures quality and consistency regarding substance terminologies and classification. All laboratories publish their own repertoires in a user-friendly workflow, adding laboratory-specific details to the core information in the substance registry. The user management system ensures that laboratories are restricted from editing content in the database core or in repertoires within other laboratory subpages. The portal is for nonprofit use, and has been fully funded by the Norwegian Medical Association, the Norwegian Society of Clinical Pharmacology, and the 8 largest pharmacologic institutions in Norway. The database server runs an open-source content management system that ensures flexibility with respect to further development projects, including the potential expansion of the Pharmacology Portal to other countries. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.

  3. HCUP State Inpatient Databases (SID) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Inpatient Databases (SID) contain the universe of hospital inpatient discharge abstracts in States participating in HCUP that release their data through...

  4. An affinity pull-down approach to identify the plant cyclic nucleotide interactome

    KAUST Repository

    Donaldson, Lara Elizabeth; Meier, Stuart Kurt

    2013-01-01

    Cyclic nucleotides (CNs) are intracellular second messengers that play an important role in mediating physiological responses to environmental and developmental signals, in species ranging from bacteria to humans. In response to these signals, CNs are synthesized by nucleotidyl cyclases and then act by binding to and altering the activity of downstream target proteins known as cyclic nucleotide-binding proteins (CNBPs). A number of CNBPs have been identified across kingdoms including transcription factors, protein kinases, phosphodiesterases, and channels, all of which harbor conserved CN-binding domains. In plants however, few CNBPs have been identified as homology searches fail to return plant sequences with significant matches to known CNBPs. Recently, affinity pull-down techniques have been successfully used to identify CNBPs in animals and have provided new insights into CN signaling. The application of these techniques to plants has not yet been extensively explored and offers an alternative approach toward the unbiased discovery of novel CNBP candidates in plants. Here, an affinity pull-down technique for the identification of the plant CN interactome is presented. In summary, the method involves an extraction of plant proteins which is incubated with a CN-bait, followed by a series of increasingly stringent elutions that eliminates proteins in a sequential manner according to their affinity to the bait. The eluted and bait-bound proteins are separated by one-dimensional gel electrophoresis, excised, and digested with trypsin after which the resultant peptides are identified by mass spectrometry - techniques that are commonplace in proteomics experiments. The discovery of plant CNBPs promises to provide valuable insight into the mechanism of CN signal transduction in plants. © Springer Science+Business Media New York 2013.

  5. An affinity pull-down approach to identify the plant cyclic nucleotide interactome

    KAUST Repository

    Donaldson, Lara Elizabeth

    2013-09-03

    Cyclic nucleotides (CNs) are intracellular second messengers that play an important role in mediating physiological responses to environmental and developmental signals, in species ranging from bacteria to humans. In response to these signals, CNs are synthesized by nucleotidyl cyclases and then act by binding to and altering the activity of downstream target proteins known as cyclic nucleotide-binding proteins (CNBPs). A number of CNBPs have been identified across kingdoms including transcription factors, protein kinases, phosphodiesterases, and channels, all of which harbor conserved CN-binding domains. In plants however, few CNBPs have been identified as homology searches fail to return plant sequences with significant matches to known CNBPs. Recently, affinity pull-down techniques have been successfully used to identify CNBPs in animals and have provided new insights into CN signaling. The application of these techniques to plants has not yet been extensively explored and offers an alternative approach toward the unbiased discovery of novel CNBP candidates in plants. Here, an affinity pull-down technique for the identification of the plant CN interactome is presented. In summary, the method involves an extraction of plant proteins which is incubated with a CN-bait, followed by a series of increasingly stringent elutions that eliminates proteins in a sequential manner according to their affinity to the bait. The eluted and bait-bound proteins are separated by one-dimensional gel electrophoresis, excised, and digested with trypsin after which the resultant peptides are identified by mass spectrometry - techniques that are commonplace in proteomics experiments. The discovery of plant CNBPs promises to provide valuable insight into the mechanism of CN signal transduction in plants. © Springer Science+Business Media New York 2013.

  6. An Autonomic Framework for Integrating Security and Quality of Service Support in Databases

    Science.gov (United States)

    Alomari, Firas

    2013-01-01

    The back-end databases of multi-tiered applications are a major data security concern for enterprises. The abundance of these systems and the emergence of new and different threats require multiple and overlapping security mechanisms. Therefore, providing multiple and diverse database intrusion detection and prevention systems (IDPS) is a critical…

  7. CyanoBase: the cyanobacteria genome database update 2010

    OpenAIRE

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2009-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in var...

  8. Graph Query Portal

    OpenAIRE

    Dayal, Amit; Brock, David

    2018-01-01

    Prashant Chandrasekar, a lead developer for the Social Interactome project, has tasked the team with creating a graph representation of the data collected from the social networks involved in that project. The data is currently stored in a MySQL database. The client requested that the graph database be Cayley, but after a literature review, Neo4j was chosen. The reasons for this shift will be explained in the design section. Secondarily, the team was tasked with coming up with three scena...
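    As a sketch of the kind of relational-to-graph load such a project involves (not the team's actual code; the table, column names, credentials and Cypher data model below are assumptions), rows of a MySQL "friendship" table can be streamed into Neo4j with idempotent MERGE statements:

      import mysql.connector
      from neo4j import GraphDatabase

      # Hypothetical connection details and schema, for illustration only.
      mysql_conn = mysql.connector.connect(host="localhost", user="reader",
                                           password="secret", database="social")
      neo4j_driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

      cursor = mysql_conn.cursor()
      cursor.execute("SELECT user_id, friend_id FROM friendships")

      with neo4j_driver.session() as session:
          for user_id, friend_id in cursor:
              # MERGE keeps the load idempotent: nodes and edges are created only once.
              session.run(
                  "MERGE (a:User {id: $a}) MERGE (b:User {id: $b}) "
                  "MERGE (a)-[:FRIENDS_WITH]->(b)",
                  a=user_id, b=friend_id,
              )

      cursor.close()
      mysql_conn.close()
      neo4j_driver.close()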

  9. Web geoprocessing services on GML with a fast XML database ...

    African Journals Online (AJOL)

    Nowadays there exist quite a lot of Spatial Database Infrastructures (SDI) that facilitate the Geographic Information Systems (GIS) user community in getting access to distributed spatial data through web technology. However, sometimes the users first have to process available spatial data to obtain the needed information.

  10. Using Online Databases in Corporate Issues Management.

    Science.gov (United States)

    Thomsen, Steven R.

    1995-01-01

    Finds that corporate public relations practitioners felt they were able, using online database and information services, to intercept issues earlier in the "issue cycle" and thus enable their organizations to develop more "proactionary" or "catalytic" issues management response strategies. (SR)

  11. Persistent storage of non-event data in the CMS databases

    International Nuclear Information System (INIS)

    De Gruttola, M; Di Guida, S; Innocente, V; Schlatter, D; Futyan, D; Glege, F; Paolucci, P; Govi, G; Picca, P; Pierro, A; Xie, Z

    2010-01-01

    In the CMS experiment, the non-event data needed to set up the detector, or produced by it, and needed to calibrate its physical responses are stored in ORACLE databases. The large amount of data to be stored, the number of clients involved and the performance requirements make the database system an essential service for the experiment to run. This note describes the CMS condition database architecture, the data-flow and PopCon, the tool built in order to populate the offline databases. Finally, the first experience obtained during the 2008 and 2009 cosmic data taking is presented.

  12. Integr8: enhanced inter-operability of European molecular biology databases.

    Science.gov (United States)

    Kersey, P J; Morris, L; Hermjakob, H; Apweiler, R

    2003-01-01

    The increasing production of molecular biology data in the post-genomic era, and the proliferation of databases that store it, require the development of an integrative layer in database services to facilitate the synthesis of related information. The solution of this problem is made more difficult by the absence of universal identifiers for biological entities, and the breadth and variety of available data. Integr8 was modelled using UML (Unified Modelling Language). Integr8 is being implemented as an n-tier system using a modern object-oriented programming language (Java). An object-relational mapping tool, OJB, is being used to specify the interface between the upper layers and an underlying relational database. The European Bioinformatics Institute is launching the Integr8 project. Integr8 will be an automatically populated database in which we will maintain stable identifiers for biological entities, describe their relationships with each other (in accordance with the central dogma of biology), and store equivalences between identified entities in the source databases. Only core data will be stored in Integr8, with web links to the source databases providing further information. Integr8 will provide the integrative layer of the next generation of bioinformatics services from the EBI. Web-based interfaces will be developed to offer gene-centric views of the integrated data, presenting (where known) the links between genome, proteome and phenotype.

  13. Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale

    Science.gov (United States)

    Canali, L.; Baranowski, Z.; Kothuri, P.

    2017-10-01

    This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of CERN accelerator controls and logging databases. The tested solution allows to run reports on the controls data offloaded in Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.
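    The specific offloading tools are not detailed in this record; the following is only a minimal PySpark sketch of the underlying idea, with hypothetical connection details, table name and paths (it also assumes the Oracle JDBC driver is available to Spark): snapshot an Oracle table into Parquet on Hadoop, then run reports against the offline copy instead of the production database.

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("oracle-offload").getOrCreate()

      # Read a snapshot of an Oracle table over JDBC (connection details hypothetical).
      measurements = (spark.read.format("jdbc")
                      .option("url", "jdbc:oracle:thin:@//dbhost:1521/service_name")
                      .option("dbtable", "ACC_LOGGING.MEASUREMENTS")
                      .option("user", "offload_reader")
                      .option("password", "secret")
                      .load())

      # Persist the copy as Parquet on HDFS; reports then query this offline copy.
      measurements.write.mode("overwrite").parquet("hdfs:///offload/acc_logging/measurements")

      # Example report against the offloaded data, leaving the production database untouched.
      spark.read.parquet("hdfs:///offload/acc_logging/measurements") \
           .groupBy("VARIABLE_NAME").count().show()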

  14. Giving you the business - Competitive pricing of selected Predicasts' databases

    Science.gov (United States)

    Jack, Robert F.

    1987-01-01

    The pricing policies of different data-base services offering Predicast data bases are examined from a user perspective. The services carrying these data bases are listed; the problems introduced by varying exchange rates and seemingly idiosyncratic price structures are discussed; and numerous specific examples are given.

  15. The Danish Fetal Medicine Database

    Directory of Open Access Journals (Sweden)

    Ekelund CK

    2016-10-01

    Full Text Available Charlotte Kvist Ekelund,1 Tine Iskov Kopp,2 Ann Tabor,1 Olav Bjørn Petersen3 1Department of Obstetrics, Center of Fetal Medicine, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; 2Registry Support Centre (East – Epidemiology and Biostatistics, Research Centre for Prevention and Health, Glostrup, Denmark; 3Fetal Medicine Unit, Aarhus University Hospital, Aarhus Nord, Denmark Aim: The aim of this study is to set up a database in order to monitor the detection rates and false-positive rates of first-trimester screening for chromosomal abnormalities and prenatal detection rates of fetal malformations in Denmark. Study population: Pregnant women with a first or second trimester ultrasound scan performed at all public hospitals in Denmark are registered in the database. Main variables/descriptive data: Data on maternal characteristics, ultrasonic, and biochemical variables are continuously sent from the fetal medicine units' Astraia databases to the central database via web service. Information about outcome of pregnancy (miscarriage, termination, live birth, or stillbirth) is received from the National Patient Register and National Birth Register and linked via the Danish unique personal registration number. Furthermore, results of all pre- and postnatal chromosome analyses are sent to the database. Conclusion: It has been possible to establish a fetal medicine database, which monitors first-trimester screening for chromosomal abnormalities and second-trimester screening for major fetal malformations with the input from already collected data. The database is valuable to assess the performance at a regional level and to compare Danish performance with international results at a national level. Keywords: prenatal screening, nuchal translucency, fetal malformations, chromosomal abnormalities

  16. Deep sequencing of cardiac microRNA-mRNA interactomes in clinical and experimental cardiomyopathy.

    Science.gov (United States)

    Matkovich, Scot J; Dorn, Gerald W

    2015-01-01

    MicroRNAs are a family of short (~21 nucleotide) noncoding RNAs that serve key roles in cellular growth and differentiation and the response of the heart to stress stimuli. As the sequence-specific recognition element of RNA-induced silencing complexes (RISCs), microRNAs bind mRNAs and prevent their translation via mechanisms that may include transcript degradation and/or prevention of ribosome binding. Short microRNA sequences and the ability of microRNAs to bind to mRNA sites having only partial/imperfect sequence complementarity complicate purely computational analyses of microRNA-mRNA interactomes. Furthermore, computational microRNA target prediction programs typically ignore biological context, and therefore the principal determinants of microRNA-mRNA binding: the presence and quantity of each. To address these deficiencies we describe an empirical method, developed via studies of stressed and failing hearts, to determine disease-induced changes in microRNAs, mRNAs, and the mRNAs targeted to the RISC, without cross-linking mRNAs to RISC proteins. Deep sequencing methods are used to determine RNA abundances, delivering unbiased, quantitative RNA data limited only by their annotation in the genome of interest. We describe the laboratory bench steps required to perform these experiments, experimental design strategies to achieve an appropriate number of sequencing reads per biological replicate, and computer-based processing tools and procedures to convert large raw sequencing data files into gene expression measures useful for differential expression analyses.
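    As a hedged illustration of the final step described (turning per-gene read counts into expression measures for differential comparison), the pandas sketch below normalises a toy count matrix to counts per million and computes a simple log2 fold change; the gene names, sample labels and values are invented, and a real analysis would use a dedicated count-based statistical package.

      import numpy as np
      import pandas as pd

      # Toy read-count matrix: rows = genes, columns = biological replicates
      # (all values invented for illustration).
      counts = pd.DataFrame(
          {"ctrl_1": [500, 20, 3000], "ctrl_2": [480, 25, 2900],
           "stress_1": [950, 22, 1500], "stress_2": [1010, 18, 1450]},
          index=["gene_a", "gene_b", "gene_c"],
      )

      # Counts-per-million normalisation compensates for differences in sequencing depth.
      cpm = counts.div(counts.sum(axis=0), axis=1) * 1e6

      # Simple log2 fold change of mean stressed vs. control expression.
      log2fc = np.log2((cpm[["stress_1", "stress_2"]].mean(axis=1) + 1) /
                       (cpm[["ctrl_1", "ctrl_2"]].mean(axis=1) + 1))
      print(log2fc.sort_values())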

  17. CyanoBase: the cyanobacteria genome database update 2010.

    Science.gov (United States)

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.

  18. [Establishement for regional pelvic trauma database in Hunan Province].

    Science.gov (United States)

    Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua

    2017-04-28

    To establish a database for pelvic trauma in Hunan Province, and to start the work of multicenter pelvic trauma registry.
 Methods: To establish the database, the literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was reviewed, and the actual conditions of pelvic trauma rescue in Hunan Province were considered. The pelvic trauma database was built on PostgreSQL and the Java 1.6 programming language.
 Results: The complex procedure for pelvic trauma rescue was described structurally. The contents of the database include general patient information, injury condition, prehospital rescue, condition on admission, treatment in hospital, status on discharge, diagnosis, classification, complications, trauma scoring and therapeutic effect. The database can be accessed through the internet by browser/server. The functions of the database include patient information management, data export, history query, progress reporting, video-image management and personal information management.
 Conclusion: A whole-life-cycle pelvic trauma database has been successfully established for the first time in China. It is scientific, functional, practical, and user-friendly.

  19. BGDB: a database of bivalent genes.

    Science.gov (United States)

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

    A bivalent gene is a gene marked with both H3K4me3 and H3K27me3 epigenetic modifications in the same area, and is proposed to play a pivotal role related to pluripotency in embryonic stem (ES) cells. Identification of these bivalent genes and understanding their functions are important for further research on lineage specification and embryo development. So far, a large amount of genome-wide histone modification data has been generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repositories or analysis tools are currently available for bivalent genes. In this work, we develop BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which are manually collected from the scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of the BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/
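    At gene level, the definition above reduces to a set intersection over the genes carrying each mark; the toy sketch below illustrates this (gene names invented; a real pipeline intersects peak coordinates rather than gene lists).

      # Toy sets of genes called as marked in an ES-cell sample (illustrative only).
      h3k4me3_genes = {"GENE1", "GENE2", "GENE3", "GENE4"}
      h3k27me3_genes = {"GENE2", "GENE4", "GENE5"}

      # Bivalent genes carry both marks; with gene-level calls this is an intersection.
      bivalent = h3k4me3_genes & h3k27me3_genes
      print(sorted(bivalent))  # ['GENE2', 'GENE4']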

  20. A service-oriented data access control model

    Science.gov (United States)

    Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali

    2017-01-01

    The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. Faced with complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. After analyzing common data access control models, the paper proposes a service-oriented access control model built on the mandatory access control model. By regarding system services as subjects and database data as objects, the model defines access levels and access identifications for subjects and objects, and ensures that system services access databases securely.
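    A minimal sketch of the check this model implies (service names, levels and the whitelist below are illustrative assumptions, not the paper's formal model): a service subject may read a data object only if both its access level and its access identification satisfy the object's requirements.

      from dataclasses import dataclass

      @dataclass
      class ServiceSubject:
          service_id: str   # access identification of the subject (a system service)
          level: int        # access level assigned to the service

      @dataclass
      class DataObject:
          table: str
          required_level: int           # minimum level needed to access this data
          allowed_services: frozenset   # identifications explicitly granted access

      def may_access(subject: ServiceSubject, obj: DataObject) -> bool:
          """Mandatory-style check: both level and identification must be satisfied."""
          return (subject.level >= obj.required_level
                  and subject.service_id in obj.allowed_services)

      billing = ServiceSubject("billing-service", level=2)
      patients = DataObject("patients", required_level=3,
                            allowed_services=frozenset({"clinical-service"}))
      print(may_access(billing, patients))  # False: level too low and not whitelisted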

  1. Brazil: Increasing the Reach of the INIS Database by using Social Media

    International Nuclear Information System (INIS)

    Braga, Fabiane; Gama das Neves, Teodora Marly; Pereira, Diogo

    2015-01-01

    INIS began its activities in 1970 to collect and share scientific and technical information about the peaceful uses of nuclear science and technology, with participating nations collaborating to build a centralized database. In the same year, the Nuclear Information Center of the Brazilian Nuclear Energy Commission (CIN/CNEN) was created with the mission of representing Brazil in the INIS system. Since then it has played an important role in the context of Brazilian scientific and technological development as a database cooperative producer, a scientific and technical information service provider, and a knowledge generator, utilizing the INIS database. Since the 1970’s, CIN/CNEN has developed new information products and services based on the INIS database and produced in cooperation with other INIS Members, in order to support research in nuclear and related fields and to keep researchers updated on the newest publications in their areas of interest

  2. Information persistence using XML database technology

    Science.gov (United States)

    Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.

    2005-05-01

    The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network-centric core enterprise services and, when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way. This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and
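    As a small, hedged sketch of the storage styles being compared (schema and document contents invented), the snippet below persists an XML document as text in a relational store and then answers a query by re-parsing it with XPath-style navigation; a native XML database would instead index and query the document structure directly.

      import sqlite3
      import xml.etree.ElementTree as ET

      doc = '<track id="42"><unit>3BCT</unit><position lat="34.1" lon="-118.2"/></track>'

      # Relational-style persistence: store the document as text, keyed by id.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE xml_store (doc_id INTEGER PRIMARY KEY, body TEXT)")
      conn.execute("INSERT INTO xml_store (doc_id, body) VALUES (?, ?)", (42, doc))

      # Retrieval re-parses the text and navigates it with XPath-style lookups.
      (body,) = conn.execute("SELECT body FROM xml_store WHERE doc_id = ?", (42,)).fetchone()
      root = ET.fromstring(body)
      print(root.findtext("unit"), root.find("position").attrib["lat"])  # 3BCT 34.1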

  3. Global Reference Tables Services Architecture

    Data.gov (United States)

    Social Security Administration — This database stores the reference and transactional data used to provide a data-driven service access method to certain Global Reference Table (GRT) service tables.

  4. HCUP State Ambulatory Surgery Databases (SASD) - Restricted Access Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Ambulatory Surgery Databases (SASD) contain the universe of hospital-based ambulatory surgery encounters in participating States. Some States include...

  5. HCUP State Emergency Department Databases (SEDD) - Restricted Access File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The State Emergency Department Databases (SEDD) contain the universe of emergency department visits in participating States. Restricted access data files are...

  6. Lessons learned from hardware and software upgrades of IT-DB services

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    This talk gives an overview of recent changes in CERN database infrastructure. The presentation describes database service evolution, in particular new hardware & storage installation, integration with Agile infrastructure, complexity of validation strategy and finally the migration and upgrade process concerning the most critical database services.

  7. Wisconsin Inventors' Network Database final report

    Energy Technology Data Exchange (ETDEWEB)

    1991-12-04

    The Wisconsin Innovation Service Center at UW-Whitewater received a DOE grant to create an Inventor's Network Database to assist independent inventors and entrepreneurs with new product development. Since 1980, the Wisconsin Innovation Service Center (WISC) at the University of Wisconsin-Whitewater has assisted independent and small business inventors in estimating the marketability of their new product ideas and inventions. The purpose of the WISC as an economic development entity is to encourage inventors who appear to have commercially viable inventions, based on preliminary market research, to invest in the next stages of development, perhaps investigating prototype development, legal protection, or more in-depth market research. To address inventors' information needs, WISC developed an electronic database with search capabilities by geographic region and by product category/industry. It targets both public and private resources capable of, and interested in, working with individual and small business inventors. At present, the project includes resources in Wisconsin only.

  8. Ndos: nuclear data online services

    International Nuclear Information System (INIS)

    Fan, T.S.; Guo, Z.Y.; Ye, W.G.; Liu, W.L.; Feng, Y.Q.; Song, X.X.; Huang, G.; Liu, T.J.; Hong, Y.J.; Liu, C.; Chen, J.X.; Tang, G.Y.; Shi, Z.M.; Huang, X.L.; Chen, J.E.

    2004-01-01

    The Nuclear Data Online Services (NDOS) software has been developed to extract and present nuclear data from various available libraries. Using relational databases, this web-based software offers online data services from a centralized repository that includes eight major international nuclear data libraries covering nuclear reaction data and nuclear structure and decay data. A data plotting software package, NDVS (Nuclear Data Viewing Software), which is included in NDOS, facilitates the visualization and manipulation of nuclear data. NDOS is developed on the Linux implementation of PHP and the MySQL software. The relational nuclear databases and data services are platform independent in a wide sense.

  9. Ndos: nuclear data online services

    Energy Technology Data Exchange (ETDEWEB)

    Fan, T.S. E-mail: tsfan@nst.pku.edu.cn; Guo, Z.Y.; Ye, W.G.; Liu, W.L.; Feng, Y.Q.; Song, X.X.; Huang, G.; Liu, T.J.; Hong, Y.J.; Liu, C.; Chen, J.X.; Tang, G.Y.; Shi, Z.M.; Huang, X.L.; Chen, J.E

    2004-07-01

    The Nuclear Data Online Services (NDOS) software has been developed to extract and present nuclear data from various available libraries. Using relational databases, this web-based software offers online data services from a centralized repository that includes eight major international nuclear data libraries covering nuclear reaction data and nuclear structure and decay data. A data plotting software package, NDVS (Nuclear Data Viewing Software), which is included in NDOS, facilitates the visualization and manipulation of nuclear data. NDOS is developed on the Linux implementation of PHP and the MySQL software. The relational nuclear databases and data services are platform independent in a wide sense.

  10. Database Description - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of database: Database name: Yeast Interacting Proteins Database. Alternative name: -. DOI: 10.18908/lsdba.nbdc00742-000. Creator contact: …-ken 277-8561, Tel: +81-4-7136-3989, FAX: +81-4-7136-3979. Database classification: … Organism taxonomy: Saccharomyces cerevisiae; Taxonomy ID: 4932. Database description: Information on interactions and related information obtained … Acad Sci U S A. 2001 Apr 10;98(8):4569-74. Epub 2001 Mar 13. External Links: original website information.

  11. INTERRUPTION TO COMPUTING SERVICES, SATURDAY 9 FEBRUARY

    CERN Multimedia

    2002-01-01

    In order to allow the rerouting of electrical cables which power most of the B513 Computer Room, there will be a complete shutdown of central computing services on Saturday 9th February. This shutdown affects all Central Computing services, including all NICE services (for Windows 95, Windows NT and Windows 2000), Mail and Web services, sitewide printing services, all Unix interactive and batch services, the ACB service, all AIS services and databases (such as EDH, BHT, CFU and HR), dedicated Engineering services, and general purpose database services. Services will be run down progressively from early on Saturday morning and reestablished as soon as possible, starting in the afternoon. However, it is unlikely that full computing services will be available before the Saturday evening. For operational reasons, some services may be shut down on the evening of Friday 8th February and restarted on Monday 11th February. More detailed information about the stoppage and restart schedules will be given nearer...

  12. INTERRUPTION TO COMPUTING SERVICES, SATURDAY 9 FEBRUARY

    CERN Multimedia

    2002-01-01

    In order to allow the rerouting of electrical cables which power most of the B513 Computer Room, there will be a complete shutdown of central computing services on Saturday 9th February. This shutdown affects all Central Computing services, including all NICE services (for Windows 95, Windows NT and Windows 2000), Mail and Web services, sitewide printing services, all Unix interactive and batch services, the ACB service, all AIS services and databases (such as EDH, BHT, CFU and HR), dedicated Engineering services, and general purpose database services. Services will be run down progressively from early on Saturday morning and reestablished as soon as possible, starting in the afternoon. However, it is unlikely that full computing services will be available before the Saturday evening. For operational reasons, some services may be shut down on the evening of Friday 8th February and restarted on Monday 11th February. More detailed information about the stoppage and restart schedules will be given nearer ...

  13. RPA tree-level database users guide

    Science.gov (United States)

    Patrick D. Miles; Scott A. Pugh; Brad Smith; Sonja N. Oswalt

    2014-01-01

    The Forest and Rangeland Renewable Resources Planning Act (RPA) of 1974 calls for a periodic assessment of the Nation's renewable resources. The Forest Inventory and Analysis (FIA) program of the U.S. Forest Service supports the RPA effort by providing information on the forest resources of the United States. The RPA tree-level database (RPAtreeDB) was generated...

  14. Developing a stone database for clinical practice.

    Science.gov (United States)

    Turney, Benjamin W; Noble, Jeremy G; Reynard, John M

    2011-09-01

    Our objective was to design an intranet-based database to streamline stone patient management and data collection. The system developers used a rapid development approach that removed the need for laborious and unnecessary documentation, instead focusing on producing a rapid prototype that could then be altered iteratively. By using open source development software and website best practice, the development cost was kept very low in comparison with traditional clinical applications. Information about each patient episode can be entered via a user-friendly interface. The bespoke electronic stone database removes the need for handwritten notes, dictation, and typing. From the database, files may be automatically generated for clinic letters, operation notes, and letters to family doctors. These may be printed or e-mailed from the database. Data may be easily exported for audits, coding, and research. Data collection remains central to medical practice, to improve patient safety, to analyze medical and surgical outcomes, and to evaluate emerging treatments. Establishing prospective data collection is crucial to this process. In the current era, we have the opportunity to embrace available technology to facilitate this process. The database template could be modified for use in other clinics. The database that we have designed helps to provide a modern and efficient clinical stone service.
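    As a hedged illustration of how such letter generation can work (the fields and wording below are invented, not the system's actual templates), a stored episode record can be merged into a letter template in a few lines:

      from string import Template

      # Illustrative episode record and template; the real system's fields and
      # wording are not described in this record.
      episode = {"patient": "A. Example", "date": "2011-05-14",
                 "procedure": "flexible ureteroscopy and laser fragmentation",
                 "stone_site": "left renal pelvis", "stone_size_mm": 9}

      letter = Template(
          "Dear Doctor,\n\n$patient underwent $procedure on $date for a "
          "$stone_size_mm mm stone in the $stone_site.\n").substitute(episode)
      print(letter)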

  15. Update History of This Database - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Trypanosomes Database, Update History of This Database. Date / Update contents: 2014/05/07 - The contact information is corrected; the features and manner of utilization of the database are corrected. 2014/02/04 - Trypanosomes Database English archive site is opened. 2011/04/04 - Trypanosomes Database ( http://www.tanpaku.org/tdb/ ) is opened.

  16. A national cross-sectional analysis of dermatology away rotations using the Visiting Student Application Service database.

    Science.gov (United States)

    Cao, Severine Z; Nambudiri, Vinod E

    2017-12-15

    The highly competitive nature of the dermatology match requires applicants to undertake a variety of measures in the hopes of securing a residency position. Among the opportunities available to applicants is the chance to participate in away or "audition" rotations during their final year of undergraduate medical education. Away rotations are now performed by a majority of medical students applying into dermatology, but little research has been done to describe the nature of this opportunity for interested applicants. An analysis of all dermatology electives offered in the Visiting Student Application Service (VSAS) database was performed. Results indicate that students have the option to pursue electives in a variety of subjects offered by 100 sponsoring institutions spread across a wide geographic distribution. Although many opportunities exist, this analysis sheds light on several areas for improving the quality of this experience for interested applicants, including providing more electives in advanced subject matter, permitting more flexibility in scheduling, and promoting wider participation in VSAS.

  17. Conversion of National Health Insurance Service-National Sample Cohort (NHIS-NSC) Database into Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM).

    Science.gov (United States)

    You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong

    2017-01-01

    It is increasingly necessary to generate medical evidence applicable to Asian populations compared to those in Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative that aims to facilitate generating high-quality evidence by creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM). The data of 1.13 million subjects were converted to OMOP-CDM, resulting in an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize data-driven characterization and assess the quality of the data. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research by incorporation into the OHDSI research network.
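    A minimal sketch of the kind of field-level mapping such an ETL performs (the source field names and the use of concept ID 0 for unmapped values are assumptions for the example; 8507/8532 are the standard OMOP gender concepts for male/female, but this is not the published NHIS-NSC mapping specification):

      # Map one hypothetical source record into an OMOP-CDM PERSON row.
      GENDER_CONCEPT = {"M": 8507, "F": 8532}   # standard OMOP male/female concepts

      def to_cdm_person(source_row):
          return {
              "person_id": source_row["subject_key"],
              "gender_concept_id": GENDER_CONCEPT.get(source_row["sex"], 0),
              "year_of_birth": int(source_row["birth_year"]),
              "race_concept_id": 0,        # 0 used here for unknown/unmapped values
              "ethnicity_concept_id": 0,
          }

      print(to_cdm_person({"subject_key": 10001, "sex": "F", "birth_year": "1972"}))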

  18. Second-Tier Database for Ecosystem Focus, 2000-2001 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Van Holmes, Chris; Muongchanh, Christine; Anderson, James J. (University of Washington, School of Aquatic and Fishery Sciences, Seattle, WA)

    2001-11-01

    The Second-Tier Database for Ecosystem Focus (Contract 00004124) provides direct and timely public access to Columbia Basin environmental, operational, fishery and riverine data resources for federal, state, public and private entities. The Second-Tier Database known as Data Access in Realtime (DART) does not duplicate services provided by other government entities in the region. Rather, it integrates public data for effective access, consideration and application.

  19. Assessment of Residential History Generation Using a Public-Record Database

    Directory of Open Access Journals (Sweden)

    David C. Wheeler

    2015-09-01

    Full Text Available In studies of disease with potential environmental risk factors, residential location is often used as a surrogate for unknown environmental exposures or as a basis for assigning environmental exposures. These studies most typically use the residential location at the time of diagnosis due to ease of collection. However, previous residential locations may be more useful for risk analysis because of population mobility and disease latency. When residential histories have not been collected in a study, it may be possible to generate them through public-record databases. In this study, we evaluated the ability of a public-records database from LexisNexis to provide residential histories for subjects in a geographically diverse cohort study. We calculated 11 performance metrics comparing study-collected addresses and two address retrieval services from LexisNexis. We found 77% and 90% match rates for city and state and 72% and 87% detailed address match rates with the basic and enhanced services, respectively. The enhanced LexisNexis service covered 86% of the time at residential addresses recorded in the study. The mean match rate for detailed address matches varied spatially over states. The results suggest that public record databases can be useful for reconstructing residential histories for subjects in epidemiologic studies.
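    A toy sketch of how a detailed-address match rate can be computed (the normalisation and matching rules here are simplified assumptions, not the study's eleven performance metrics):

      def normalise(addr):
          return " ".join(addr.lower().replace(".", "").split())

      # Invented example addresses keyed by subject id.
      study = {"id1": "123 N. Main St, Richmond, VA", "id2": "9 Oak Ave, Durham, NC"}
      public_record = {"id1": "123 n main st, richmond, va", "id2": "14 Pine Rd, Durham, NC"}

      matches = sum(normalise(study[k]) == normalise(public_record.get(k, "")) for k in study)
      print(f"detailed address match rate: {matches / len(study):.0%}")  # 50%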

  20. BioServices

    DEFF Research Database (Denmark)

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M

    2013-01-01

    Web interfaces provide access to numerous biological databases. Many can be accessed in a programmatic way thanks to Web Services. Building applications that combine several of them would benefit from a single framework....

  1. MareyMap Online: A User-Friendly Web Application and Database Service for Estimating Recombination Rates Using Physical and Genetic Maps.

    Science.gov (United States)

    Siberchicot, Aurélie; Bessy, Adrien; Guéguen, Laurent; Marais, Gabriel A B

    2017-10-01

    Given the importance of meiotic recombination in biology, there is a need to develop robust methods to estimate meiotic recombination rates. A popular approach, called the Marey map approach, relies on comparing genetic and physical maps of a chromosome to estimate local recombination rates. In the past, we have implemented this approach in an R package called MareyMap, which includes many functionalities useful to get reliable recombination rate estimates in a semi-automated way. MareyMap has been used repeatedly in studies looking at the effect of recombination on genome evolution. Here, we propose a simpler user-friendly web service version of MareyMap, called MareyMap Online, which allows a user to get recombination rates from her/his own data or from a publicly available database that we offer in a few clicks. When the analysis is done, the user is asked whether her/his curated data can be placed in the database and shared with other users, which we hope will make meta-analysis on recombination rates including many species easy in the future. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
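    The underlying calculation can be sketched in a few lines: plot genetic position (cM) against physical position (Mb) for ordered markers and take the local slope as the recombination rate. The marker coordinates below are invented, and MareyMap itself fits interpolation curves to the Marey map rather than using raw finite differences as in this sketch.

      import numpy as np

      # Toy Marey map: physical (Mb) and genetic (cM) positions of markers
      # ordered along one chromosome (values invented for illustration).
      physical_mb = np.array([0.5, 5.0, 12.0, 20.0, 33.0, 47.0])
      genetic_cm = np.array([0.0, 4.5, 9.0, 11.0, 19.5, 30.0])

      # Local recombination rate = slope of the genetic map versus the physical map,
      # estimated here by finite differences on the marker coordinates.
      rate_cm_per_mb = np.gradient(genetic_cm, physical_mb)
      for pos, rate in zip(physical_mb, rate_cm_per_mb):
          print(f"{pos:5.1f} Mb : {rate:4.2f} cM/Mb")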

  2. Large-scale Health Information Database and Privacy Protection*1

    OpenAIRE

    YAMAMOTO, Ryuichi

    2016-01-01

    Japan was once progressive in the digitalization of healthcare fields but unfortunately has fallen behind in terms of the secondary use of data for public interest. There has recently been a trend to establish large-scale health databases in the nation, and a conflict between data use for public interest and privacy protection has surfaced as this trend has progressed. Databases for health insurance claims or for specific health checkups and guidance services were created according to the law...

  3. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    Directory of Open Access Journals (Sweden)

    Seok-Hyoung Lee

    2012-06-01

    Full Text Available While science and technology information service portals and heterogeneous databases produced in Korea and other countries are being integrated, methods of connecting the unique classification systems applied to each database have been studied. Results of technologists' research, such as journal articles, patent specifications, and research reports, are organically related to each other. In this case, if the most basic and meaningful classification systems are not connected, it is difficult to achieve interoperability of the information, and thus not easy to implement meaningful science and technology information services through information convergence. This study aims to address this issue by analyzing mapping systems between classification systems in order to design a structure that connects the variety of classification systems used in the academic information database of the Korea Institute of Science and Technology Information, which provides a science and technology information portal service. This study also aims to design a mapping system for the classification systems to be applied to actual science and technology information services and information management systems.
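
    The "multiple mapping structure" discussed above is, at its core, a many-to-many linkage between codes of different classification systems. The sketch below illustrates that general idea only; the classification codes and the mapping entries are invented, and the study's actual mapping design is not reproduced here.

```python
# Minimal sketch of a multiple (many-to-many) mapping structure between two
# classification systems. All codes and mapping entries are invented purely
# for illustration.
from collections import defaultdict

# source code -> set of target codes (a source class may map to several).
mapping = defaultdict(set)

def add_mapping(source_code: str, target_code: str) -> None:
    mapping[source_code].add(target_code)

# e.g. a journal-article classification code linked to patent classification codes
add_mapping("J-EE03", "P-H01L")
add_mapping("J-EE03", "P-H01S")
add_mapping("J-CH02", "P-C07D")

def translate(source_code: str) -> set:
    """All target-system codes linked to one source-system code."""
    return mapping.get(source_code, set())

print(translate("J-EE03"))   # {'P-H01L', 'P-H01S'}
```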

  4. Structures and short linear motif of disordered transcription factor regions provide clues to the interactome of the cellular hub radical-induced cell death1

    DEFF Research Database (Denmark)

    O'Shea, Charlotte; Staby, Lasse; Bendsen, Sidsel Krogh

    2017-01-01

    Intrinsically disordered protein regions (IDRs) lack a well-defined three-dimensional structure, but often facilitate key protein functions. Some interactions between IDRs and folded protein domains rely on short linear motifs (SLiMs). These motifs are challenging to identify, but once found can...... point to larger networks of interactions, such as with proteins that serve as hubs for essential cellular functions. The stress-associated plant protein Radical-Induced Cell Death1 (RCD1) is one such hub, interacting with many transcription factors via their flexible IDRs. To identify the SLiM bound......046 formed different structures or were fuzzy in the complexes. These findings allow us to present a model of the stress-associated RCD1-transcription factor interactome and to contribute to the emerging understanding of the interactions between folded hubs and their intrinsically disordered partners....

  5. The INFN-CNAF Tier-1 GEMSS Mass Storage System and database facility activity

    Science.gov (United States)

    Ricci, Pier Paolo; Cavalli, Alessandro; Dell'Agnello, Luca; Favaro, Matteo; Gregori, Daniele; Prosperini, Andrea; Pezzi, Michele; Sapunenko, Vladimir; Zizzi, Giovanni; Vagnoni, Vincenzo

    2015-05-01

    The consolidation of Mass Storage services at the INFN-CNAF Tier1 Storage department that has occurred during the last 5 years, resulted in a reliable, high performance and moderately easy-to-manage facility that provides data access, archive, backup and database services to several different use cases. At present, the GEMSS Mass Storage System, developed and installed at CNAF and based upon an integration between the IBM GPFS parallel filesystem and the Tivoli Storage Manager (TSM) tape management software, is one of the largest hierarchical storage sites in Europe. It provides storage resources for about 12% of LHC data, as well as for data of other non-LHC experiments. Files are accessed using standard SRM Grid services provided by the Storage Resource Manager (StoRM), also developed at CNAF. Data access is also provided by XRootD and HTTP/WebDaV endpoints. Besides these services, an Oracle database facility is in production characterized by an effective level of parallelism, redundancy and availability. This facility is running databases for storing and accessing relational data objects and for providing database services to the currently active use cases. It takes advantage of several Oracle technologies, like Real Application Cluster (RAC), Automatic Storage Manager (ASM) and Enterprise Manager centralized management tools, together with other technologies for performance optimization, ease of management and downtime reduction. The aim of the present paper is to illustrate the state-of-the-art of the INFN-CNAF Tier1 Storage department infrastructures and software services, and to give a brief outlook to forthcoming projects. A description of the administrative, monitoring and problem-tracking tools that play a primary role in managing the whole storage framework is also given.

  6. Update History of This Database - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of the Arabidopsis Phenome Database: 2017/02/27, the Arabidopsis Phenome Database English archive site is opened; the Arabidopsis Phenome Database (http://jphenome.info/?page_id=95) is opened.

  7. Key Techniques for Dynamic Updating of National Fundamental Geographic Information Database

    Directory of Open Access Journals (Sweden)

    WANG Donghua

    2015-07-01

    Full Text Available One of the most important missions of fundamental surveying and mapping work is to keep fundamental geographic information up to date. To this end, the National Administration of Surveying, Mapping and Geoinformation launched the project of dynamic updating of the national fundamental geographic information database in 2012, which aims to update the 1:50 000, 1:250 000 and 1:1 000 000 national fundamental geographic information databases continuously and quickly, with updating and publishing once a year. This paper introduces the general technical approach of dynamic updating, describes the main technical methods, such as dynamic updating of the fundamental database, linked updating of derived databases, and multi-temporal database management and services, and finally introduces the main technical characteristics and engineering applications.

  8. Refactoring databases evolutionary database design

    CERN Document Server

    Ambler, Scott W

    2006-01-01

    Refactoring has proven its value in a wide range of development projects–helping software professionals improve system designs, maintainability, extensibility, and performance. Now, for the first time, leading agile methodologist Scott Ambler and renowned consultant Pramodkumar Sadalage introduce powerful refactoring techniques specifically designed for database systems. Ambler and Sadalage demonstrate how small changes to table structures, data, stored procedures, and triggers can significantly enhance virtually any database design–without changing semantics. You’ll learn how to evolve database schemas in step with source code–and become far more effective in projects relying on iterative, agile methodologies. This comprehensive guide and reference helps you overcome the practical obstacles to refactoring real-world databases by covering every fundamental concept underlying database refactoring. Using start-to-finish examples, the authors walk you through refactoring simple standalone databas...

  9. National Land Cover Database (NLCD) Percent Tree Canopy Collection

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The National Land Cover Database (NLCD) Percent Tree Canopy Collection is a product of the U.S. Forest Service (USFS), and is produced through a cooperative project...

  10. Update History of This Database - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of the SKIP Stemcell Database: 2017/03/13, the SKIP Stemcell Database English archive site is opened; 2013/03/29, the SKIP Stemcell Database (https://www.skip.med.keio.ac.jp/SKIPSearch/top?lang=en) is opened.

  11. Analysis Tool Web Services from the EMBL-EBI.

    Science.gov (United States)

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-07-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.
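
    As a rough illustration of how such REST interfaces are typically consumed, the sketch below queries the dbfetch service mentioned above from Python. The endpoint URL and parameter names (db, id, format, style) are recalled here from the dbfetch documentation and should be treated as assumptions to verify against the current service description.

```python
# Hedged sketch: fetching one record through the EMBL-EBI dbfetch REST
# interface. Endpoint and parameter names (db, id, format, style) are
# assumptions recalled from the dbfetch documentation; verify them against
# the current service description before relying on this.
import requests

DBFETCH_URL = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"

def fetch_fasta(database: str, accession: str) -> str:
    """Return a FASTA-formatted record for one accession, raising on HTTP errors."""
    params = {"db": database, "id": accession, "format": "fasta", "style": "raw"}
    response = requests.get(DBFETCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # P05067 is used purely as an illustrative UniProtKB accession.
    print(fetch_fasta("uniprotkb", "P05067")[:200])
```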

  12. The Danish Cardiac Rehabilitation Database.

    Science.gov (United States)

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole

    2016-01-01

    The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). The study population comprises hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising data at a patient level from the latter date onward. Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected are related to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD. Registration of data at both the patient and the program level is mandatory. The DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to the benefit of patients.

  13. Abstract databases in nuclear medicine; New database for articles not indexed in PubMed

    International Nuclear Information System (INIS)

    Ugrinska, A.; Mustafa, B.

    2004-01-01

    large number of abstracts and servicing a larger user-community. The database is placed at the URL: http://www.nucmediex.net. We hope that nuclear medicine professionals will contribute building this database and that it will be valuable source of information. (author)

  14. Database design and database administration for a kindergarten

    OpenAIRE

    Vítek, Daniel

    2009-01-01

    The bachelor thesis deals with the creation of a database design for a standard kindergarten, the installation of the designed database into the database system Oracle Database 10g Express Edition, and a demonstration of administration tasks in this database system. The design was verified by developing an access application.

  15. Investigation on Oracle GoldenGate Veridata for Data Consistency in WLCG Distributed Database Environment

    OpenAIRE

    Asko, Anti; Lobato Pardavila, Lorena

    2014-01-01

    In a distributed database environment, data divergence can be an important problem: if it is not discovered and correctly identified, incorrect data can lead to poor decision making, service errors and operational errors. Oracle GoldenGate Veridata is a product that compares two sets of data and identifies and reports on data that is out of synchronization. IT DB is providing a replication service between databases at CERN and other computer centers worldwide as a par...

  16. Active in-database processing to support ambient assisted living systems.

    Science.gov (United States)

    de Morais, Wagner O; Lundström, Jens; Wickström, Nicholas

    2014-08-12

    As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.
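
    A minimal sketch of the "active database" idea described above is given below, using SQLite (chosen only because it ships with Python): a trigger inside the DBMS reacts to newly inserted sensor events and records an alert without any application code in the loop. The schema, sensor semantics and night-time rule are invented for illustration and do not reproduce the paper's implementation.

```python
# Minimal sketch of active in-database processing: a trigger inside the DBMS
# reacts to sensor events and writes an alert row, with no application logic
# in the loop. Schema, sensor names and the night-time rule are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sensor_events (
    sensor TEXT,
    value  INTEGER,   -- 1 = occupied, 0 = vacated
    ts     TEXT
);
CREATE TABLE alerts (
    message TEXT,
    ts      TEXT
);
-- The rule lives inside the database and fires as events arrive.
CREATE TRIGGER bed_exit_alert
AFTER INSERT ON sensor_events
WHEN NEW.sensor = 'bed' AND NEW.value = 0
     AND (time(NEW.ts) >= '22:00:00' OR time(NEW.ts) < '06:00:00')
BEGIN
    INSERT INTO alerts (message, ts)
    VALUES ('Possible night-time bed exit', NEW.ts);
END;
""")

conn.execute("INSERT INTO sensor_events VALUES ('bed', 0, '2014-08-12 23:15:00')")
print(conn.execute("SELECT * FROM alerts").fetchall())
```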

  17. NNDC Data Services

    International Nuclear Information System (INIS)

    Tuli, J.K.; Sonzogni, A.

    2010-01-01

    The National Nuclear Data Center (NNDC) has provided remote access to the nuclear physics databases it maintains and to other resources since 1986. With considerable innovation, access is now mostly through the Web, and the NNDC Web pages have been modernized to provide a consistent, state-of-the-art style. The major databases and other resources currently available through the NNDC Web site are summarized, and the improved database services and other resources available from the NNDC site at www.nndc.bnl.gov are described.

  18. Database Description - Open TG-GATEs Pathological Image Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database description: Open TG-GATEs Pathological Image Database (DOI 10.18908/lsdba.nbdc00954-0...). Contact: ... Biomedical Innovation, 7-6-8 Saito-asagi, Ibaraki-city, Osaka 567-0085, Japan (TEL: 81-72-641-9826). Database classification: Toxicogenomics Database. Organism: Rattus norvegicus.

  19. Testing in Service-Oriented Environments

    Science.gov (United States)

    2010-03-01

    software releases (versions, service packs, vulnerability patches) for one common ESB during the 13-month period from January 1, 2008 through ... impact on quality of service: Unlike traditional software components, a single instance of a web service can be used by multiple consumers. Since the ... distributed, with heterogeneous hardware and software (SOA infrastructure, services, operating systems, and databases). Because of cost and security, it

  20. Agricultural Research Service

    Science.gov (United States)

    United States Department of Agriculture, Agricultural Research Service website: Research Home, National Programs, Research Projects, Scientific Manuscripts, International Programs, Scientific Software/Models, Databases and Datasets, Office of Scientific Quality ...

  1. GMDD: a database of GMO detection methods.

    Science.gov (United States)

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans J P; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-06-04

    Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is needed. The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, grouped by different strategies (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of the exogenous integration, which will facilitate PCR primer and probe design. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of the developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to the database, and newly submitted information will be released soon after being checked. GMDD contains comprehensive information on GMO detection methods and will make GMO analysis much easier.

  2. Experience and Lessons learnt from running High Availability Databases on Network Attached Storage

    CERN Document Server

    Guijarro, Manuel

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over.

  3. Experience and lessons learnt from running high availability databases on network attached storage

    International Nuclear Information System (INIS)

    Guijarro, M; Gaspar, R

    2008-01-01

    The Database and Engineering Services Group of CERN's Information Technology Department supplies the Oracle Central Database services used in many activities at CERN. In order to provide High Availability and ease management for those services, a NAS (Network Attached Storage) based infrastructure has been setup. It runs several instances of the Oracle RAC (Real Application Cluster) using NFS (Network File System) as shared disk space for RAC purposes and Data hosting. It is composed of two private LANs (Local Area Network), one to provide access to the NAS filers and a second to implement the Oracle RAC private interconnect, both using Network Bonding. NAS filers are configured in partnership to prevent having single points of failure and to provide automatic NAS filer fail-over

  4. Data Mining on Distributed Medical Databases: Recent Trends and Future Directions

    Science.gov (United States)

    Atilgan, Yasemin; Dogan, Firat

    As computerization in healthcare services increase, the amount of available digital data is growing at an unprecedented rate and as a result healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates the data mining applications on centralized medical databases, and how they are used for diagnostic and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally the paper focuses on data mining studies on distributed medical databases.

  5. National Database for Clinical Trials Related to Mental Illness (NDCT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Database for Clinical Trials Related to Mental Illness (NDCT) is an extensible informatics platform for relevant data at all levels of biological and...

  6. Interactome Screening Identifies the ER Luminal Chaperone Hsp47 as a Regulator of the Unfolded Protein Response Transducer IRE1α.

    Science.gov (United States)

    Sepulveda, Denisse; Rojas-Rivera, Diego; Rodríguez, Diego A; Groenendyk, Jody; Köhler, Andres; Lebeaupin, Cynthia; Ito, Shinya; Urra, Hery; Carreras-Sureda, Amado; Hazari, Younis; Vasseur-Cognet, Mireille; Ali, Maruf M U; Chevet, Eric; Campos, Gisela; Godoy, Patricio; Vaisar, Tomas; Bailly-Maitre, Béatrice; Nagata, Kazuhiro; Michalak, Marek; Sierralta, Jimena; Hetz, Claudio

    2018-01-18

    Maintenance of endoplasmic reticulum (ER) proteostasis is controlled by a dynamic signaling network known as the unfolded protein response (UPR). IRE1α is a major UPR transducer, determining cell fate under ER stress. We used an interactome screening to unveil several regulators of the UPR, highlighting the ER chaperone Hsp47 as the major hit. Cellular and biochemical analysis indicated that Hsp47 instigates IRE1α signaling through a physical interaction. Hsp47 directly binds to the ER luminal domain of IRE1α with high affinity, displacing the negative regulator BiP from the complex to facilitate IRE1α oligomerization. The regulation of IRE1α signaling by Hsp47 is evolutionarily conserved as validated using fly and mouse models of ER stress. Hsp47 deficiency sensitized cells and animals to experimental ER stress, revealing the significance of Hsp47 to global proteostasis maintenance. We conclude that Hsp47 adjusts IRE1α signaling by fine-tuning the threshold to engage an adaptive UPR. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. The Establishment of the Chinese Full-text Electronic Periodical Database and Service System

    Directory of Open Access Journals (Sweden)

    Huei-Chu Chang

    2003-12-01

    Full Text Available A database that covers important journals up to a critical mass, offers a powerful search interface, and is easy to access remotely is the most suitable electronic resource for users. Starting from the project of digitizing biomedical journals in Taiwan and its development into the CEPS, this article discusses related issues such as the selection of journals, the digitization of back issues, the transfer of copyright from authors to the database producer, and the payment of revenue-based royalties back to authors. It also describes the workflow of journal publishing, marketing, functions, and the proposed cost-effectiveness of the CEPS. [Article content in Chinese]

  8. The relational clinical database: a possible solution to the star wars in registry systems.

    Science.gov (United States)

    Michels, D K; Zamieroski, M

    1990-12-01

    In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.

  9. Update History of This Database - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Update history of the Yeast Interacting Proteins Database: 2010/03/29, the Yeast Interacting Proteins Database English archive site is opened; 2000/12/4, the Yeast Interacting Proteins Database (http://itolab.cb.k.u-tokyo.ac.jp/Y2H/) is released.

  10. FY 1993 annual report. Survey and study on establishment of databases for body functions; 1993 nendo shintai kino database no kochiku ni kansuru chosa kenkyu hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-03-01

    As part of a project on the collection, analysis and provision of health- and welfare-related information, this survey and study addresses the establishment of databases on human life technology and on the body functions of the aged in an aging society. The survey and study on establishing human life technology databases for the aged covers the concept of human life technology, the structure of the human life technology databases, and techniques for the database systems. The case study on human life technology databases for the aged takes everyday life behaviors of the aged as models and analyzes human and life characteristics in everyday life, in order to clarify the human characteristics, human performance and human life technology design data to be stored in the databases. The validity of the method developed by this project is tested on behaviors such as bathing and going out. For the establishment of databases on the body functions of the aged, literature surveys and interviews are conducted on technological trends. (NEDO)

  11. Accuracy of Veterans Affairs Databases for Diagnoses of Chronic Diseases

    OpenAIRE

    Singh, Jasvinder A.

    2009-01-01

    Introduction Epidemiologic studies usually use database diagnoses or patient self-report to identify disease cohorts, but no previous research has examined the extent to which self-report of chronic disease agrees with database diagnoses in a Veterans Affairs (VA) health care setting. Methods All veterans who had a medical care visit from October 1, 1996, through May 31, 1998, at any of the Veterans Integrated Service Network 13 facilities were surveyed about physician diagnosis of chronic ob...

  12. JAMSTEC DARWIN Database Assimilates GANSEKI and COEDO

    Science.gov (United States)

    Tomiyama, T.; Toyoda, Y.; Horikawa, H.; Sasaki, T.; Fukuda, K.; Hase, H.; Saito, H.

    2017-12-01

    Introduction: Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives data and samples obtained by JAMSTEC research vessels and submersibles. As a common property of the human society, JAMSTEC archive is open for public users with scientific/educational purposes [1]. For publicizing its data and samples online, JAMSTEC is operating NUUNKUI data sites [2], a group of several databases for various data and sample types. For years, data and metadata of JAMSTEC rock samples, sediment core samples and cruise/dive observation were publicized through databases named GANSEKI, COEDO, and DARWIN, respectively. However, because they had different user interfaces and data structures, these services were somewhat confusing for unfamiliar users. Maintenance costs of multiple hardware and software were also problematic for performing sustainable services and continuous improvements. Database Integration: In 2017, GANSEKI, COEDO and DARWIN were integrated into DARWIN+ [3]. The update also included implementation of map-search function as a substitute of closed portal site. Major functions of previous systems were incorporated into the new system; users can perform the complex search, by thumbnail browsing, map area, keyword filtering, and metadata constraints. As for data handling, the new system is more flexible, allowing the entry of variety of additional data types. Data Management: After the DARWIN major update, JAMSTEC data & sample team has been dealing with minor issues of individual sample data/metadata which sometimes need manual modification to be transferred to the new system. Some new data sets, such as onboard sample photos and surface close-up photos of rock samples, are getting available online. Geochemical data of sediment core samples will supposedly be added in the near future. Reference: [1] http://www.jamstec.go.jp/e/database/data_policy.html [2] http://www.godac.jamstec.go.jp/jmedia/portal/e/ [3] http://www.godac.jamstec.go.jp/darwin/e/

  13. OGC Geographic Information Service Deductive Semantic Reasoning Based on Description Vocabularies Reduction

    Directory of Open Access Journals (Sweden)

    MIAO Lizhi

    2015-09-01

    Full Text Available As geographic information interoperability and sharing develop, more and more interoperable OGC (Open Geospatial Consortium) Web services (OWS) are generated and published through the internet. These services can facilitate the integration of different scientific applications by searching, finding, and utilizing the large number of scientific data sets and Web services. However, these services are widely dispersed and hard to find and utilize through effective semantic retrieval. This is especially true when considering the weak semantic description of geographic information service data. Focusing on semantic retrieval and reasoning over distributed OWS resources, a deductive semantic reasoning method is proposed to describe and search relevant OWS resources. Specifically, ① description words are extracted from OWS metadata files to generate a GISe ontology-database and instance-database based on geographic ontology, organized according to basic geographic element categories; ② a description-word reduction model is put forward to perform knowledge reduction on the GISe instance-database based on rough set theory and to generate an optimized instance-database; ③ the GISe ontology-database and the optimized instance-database are used to implement semantic inference and reasoning, with the search for geographic objects used as an example to demonstrate the efficiency, feasibility and recall ratio of the proposed description-word-based reduction model.

  14. Database Description - RMOS | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database description: RMOS. Contact: ... Research Unit, Shoshi Kikuchi. Database classification: Plant databases - Rice; Microarray Data and other Gene Expression Databases. Organism: Oryza sativa (Taxonomy ID: 4530). Referenced databases: Rice Expression Database (RED), Rice full-length cDNA Database (KOME), Rice Genome Integrated Map Database (INE), Rice Mutant Panel Database (Tos17), Rice Genome Annotation Database.

  15. Active In-Database Processing to Support Ambient Assisted Living Systems

    Directory of Open Access Journals (Sweden)

    Wagner O. de Morais

    2014-08-01

    Full Text Available As an alternative to the existing software architectures that underpin the development of smart homes and ambient assisted living (AAL) systems, this work presents a database-centric architecture that takes advantage of active databases and in-database processing. Current platforms supporting AAL systems use database management systems (DBMSs) exclusively for data storage. Active databases employ database triggers to detect and react to events taking place inside or outside of the database. DBMSs can be extended with stored procedures and functions that enable in-database processing. This means that the data processing is integrated and performed within the DBMS. The feasibility and flexibility of the proposed approach were demonstrated with the implementation of three distinct AAL services. The active database was used to detect bed-exits and to discover common room transitions and deviations during the night. In-database machine learning methods were used to model early night behaviors. Consequently, active in-database processing avoids transferring sensitive data outside the database, and this improves performance, security and privacy. Furthermore, centralizing the computation into the DBMS facilitates code reuse, adaptation and maintenance. These are important system properties that take into account the evolving heterogeneity of users, their needs and the devices that are characteristic of smart homes and AAL systems. Therefore, DBMSs can provide capabilities to address requirements for scalability, security, privacy, dependability and personalization in applications of smart environments in healthcare.

  16. PEP725 Pan European Phenological Database

    Science.gov (United States)

    Koch, E.; Adler, S.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2010-09-01

    Europe is in the fortunate situation that it has a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". In most European countries, phenological observations have now been carried out routinely for more than 50 years by different governmental and non-governmental organisations, following different observation guidelines, with the data stored at different places in different formats. This has seriously hampered pan-European studies, as one has to approach many network operators to get access to the data before one can even start to bring them into a uniform style. From 2004 to 2009 the COST action 725 established a Europe-wide data set of phenological observations. The deliverables of this COST action were not only the common phenological database and common observation guidelines: COST725 also helped to trigger a revival of some old networks and to establish new ones, as for instance in Sweden. At the end of 2009, at the close of the COST action, the database comprised about 8 million records in total from 15 European countries plus the data from the International Phenological Gardens (IPG). In January 2010 PEP725 began its work as a follow-up project with funding from EUMETNET, the network of European meteorological services, and from ZAMG, the Austrian national meteorological service. PEP725 will not only maintain and update the COST725 database, but will also bring in phenological data from the time before 1951, develop better quality-checking procedures and ensure open access to the database. An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling the monitoring of vegetation development.

  17. KALIMER database development (database configuration and design methodology)

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Kwon, Young Min; Lee, Young Bum; Chang, Won Pyo; Hahn, Do Hee

    2001-10-01

    The KALIMER Database is an advanced database for the integrated management of Liquid Metal Reactor design technology development using Web applications. The KALIMER design database consists of a Results Database, Inter-Office Communication (IOC), a 3D CAD database, a Team Cooperation system, and Reserved Documents. The Results Database holds the research results of phase II of Liquid Metal Reactor Design Technology Development under the mid- and long-term nuclear R and D program. IOC is a linkage control system between sub-projects to share and integrate the research results for KALIMER. The 3D CAD Database gives a schematic design overview of KALIMER. The Team Cooperation System informs team members of research cooperation and meetings. Finally, KALIMER Reserved Documents were developed to manage collected data and various documents accumulated since the project's accomplishment. This report describes the hardware and software features and the database design methodology for KALIMER

  18. BioServices: a common Python package to access biological Web Services programmatically.

    Science.gov (United States)

    Cokelaer, Thomas; Pultz, Dennis; Harder, Lea M; Serra-Musach, Jordi; Saez-Rodriguez, Julio

    2013-12-15

    Web interfaces provide access to numerous biological databases. Many can be accessed programmatically thanks to Web Services. Building applications that combine several of them would benefit from a single framework. BioServices is a comprehensive Python framework that provides programmatic access to major bioinformatics Web Services (e.g. KEGG, UniProt, BioModels, ChEMBLdb). Wrapping additional Web Services based either on Representational State Transfer or Simple Object Access Protocol/Web Services Description Language technologies is eased by the usage of object-oriented programming. BioServices releases and documentation are available at http://pypi.python.org/pypi/bioservices under a GPL-v3 license.
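
    A hedged usage sketch follows. The class and method names (UniProt.search, KEGG.get) follow the BioServices documentation as remembered here and may differ between releases, so treat them as assumptions to check against the installed version.

```python
# Hedged usage sketch for BioServices: querying UniProt and KEGG from one
# Python framework. Method names and arguments are assumptions based on the
# BioServices documentation as remembered here; check them against the
# installed release.
from bioservices import KEGG, UniProt

uniprot = UniProt()
# Search UniProt for a gene name; the query syntax and the 'frmt'/'limit'
# arguments may differ between BioServices versions.
hits = uniprot.search("ZAP70 AND organism:9606", frmt="tab", limit=5)
print(hits)

kegg = KEGG()
# Retrieve one KEGG entry by identifier (hsa:7535 is human ZAP70).
print(kegg.get("hsa:7535")[:200])
```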

  19. How to Find HIV Treatment Services

    Science.gov (United States)

    ... also find federally funded health centers through HRSA's mobile apps. HIV.gov Service Locator: this database from ... help with mental health or substance abuse and addiction. SAMHSA's Behavioral Health Treatment Services Locator allows visitors ...

  20. The Coral Trait Database, a curated database of trait information for coral species from the global oceans

    Science.gov (United States)

    Madin, Joshua S.; Anderson, Kristen D.; Andreasen, Magnus Heide; Bridge, Tom C. L.; Cairns, Stephen D.; Connolly, Sean R.; Darling, Emily S.; Diaz, Marcela; Falster, Daniel S.; Franklin, Erik C.; Gates, Ruth D.; Hoogenboom, Mia O.; Huang, Danwei; Keith, Sally A.; Kosnik, Matthew A.; Kuo, Chao-Yang; Lough, Janice M.; Lovelock, Catherine E.; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M.; Pochon, Xavier; Pratchett, Morgan S.; Putnam, Hollie M.; Roberts, T. Edward; Stat, Michael; Wallace, Carden C.; Widman, Elizabeth; Baird, Andrew H.

    2016-03-01

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism’s function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  1. The Coral Trait Database, a curated database of trait information for coral species from the global oceans.

    Science.gov (United States)

    Madin, Joshua S; Anderson, Kristen D; Andreasen, Magnus Heide; Bridge, Tom C L; Cairns, Stephen D; Connolly, Sean R; Darling, Emily S; Diaz, Marcela; Falster, Daniel S; Franklin, Erik C; Gates, Ruth D; Harmer, Aaron; Hoogenboom, Mia O; Huang, Danwei; Keith, Sally A; Kosnik, Matthew A; Kuo, Chao-Yang; Lough, Janice M; Lovelock, Catherine E; Luiz, Osmar; Martinelli, Julieta; Mizerek, Toni; Pandolfi, John M; Pochon, Xavier; Pratchett, Morgan S; Putnam, Hollie M; Roberts, T Edward; Stat, Michael; Wallace, Carden C; Widman, Elizabeth; Baird, Andrew H

    2016-03-29

    Trait-based approaches advance ecological and evolutionary research because traits provide a strong link to an organism's function and fitness. Trait-based research might lead to a deeper understanding of the functions of, and services provided by, ecosystems, thereby improving management, which is vital in the current era of rapid environmental change. Coral reef scientists have long collected trait data for corals; however, these are difficult to access and often under-utilized in addressing large-scale questions. We present the Coral Trait Database initiative that aims to bring together physiological, morphological, ecological, phylogenetic and biogeographic trait information into a single repository. The database houses species- and individual-level data from published field and experimental studies alongside contextual data that provide important framing for analyses. In this data descriptor, we release data for 56 traits for 1547 species, and present a collaborative platform on which other trait data are being actively federated. Our overall goal is for the Coral Trait Database to become an open-source, community-led data clearinghouse that accelerates coral reef research.

  2. Multi tenancy for cloud-based in-memory column databases workload management and data placement

    CERN Document Server

    Schaffner, Jan

    2014-01-01

    With the proliferation of Software-as-a-Service (SaaS) offerings, it is becoming increasingly important for individual SaaS providers to operate their services at a low cost. This book investigates SaaS from the perspective of the provider and shows how operational costs can be reduced by using "multi tenancy," a technique for consolidating a large number of customers onto a small number of servers. Specifically, the book addresses multi tenancy on the database level, focusing on in-memory column databases, which are the backbone of many important new enterprise applications. For efficiently
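
    As a minimal illustration of the consolidation idea behind multi tenancy, the sketch below keeps the rows of several customers in one shared table, isolated by a tenant_id discriminator column. This only shows the shared-schema pattern; the book's actual focus (in-memory column stores and workload-aware tenant placement) is not reproduced here, and all names and figures are invented.

```python
# Minimal sketch of shared-schema multi tenancy: rows of many tenants live in
# one table and every query is scoped by a tenant_id discriminator. Table and
# column names and the figures are invented; the workload-management and
# data-placement techniques of the book are not reproduced here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE orders (
    tenant_id INTEGER NOT NULL,   -- isolates customers sharing the table
    order_id  INTEGER NOT NULL,
    amount    REAL,
    PRIMARY KEY (tenant_id, order_id)
)""")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 1, 9.99), (1, 2, 20.00), (2, 1, 5.00)],
)

def total_for_tenant(tenant_id: int) -> float:
    # Every data-access path must carry the tenant filter.
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchone()
    return row[0]

print(total_for_tenant(1))   # 29.99
print(total_for_tenant(2))   # 5.0
```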

  3. Data-base tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    The authors use a commercial data-base software package to create several data-base products that enhance the ability of experimental physicists to analyze data from the TMX-U experiment. This software resides on a DEC-20 computer in M-Division's user service center (USC), where data can be analyzed separately from the main acquisition computers. When these data-base tools are combined with interactive data analysis programs, physicists can perform automated (batch-style) processing or interactive data analysis on the computers in the USC or on the supercomputers of the NMFECC, in addition to the normal processing done on the acquisition system. One data-base tool provides highly reduced data for searching and correlation analysis of several diagnostic signals for a single shot or many shots. A second data-base tool provides retrieval and storage of unreduced data for detailed analysis of one or more diagnostic signals. The authors report how these data-base tools form the core of an evolving off-line data-analysis environment on the USC computers

  4. Bioinformatics and moonlighting proteins

    Directory of Open Access Journals (Sweden)

    Sergio eHernández

    2015-06-01

    Full Text Available Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially because of the large amounts of sequences from genome projects. In the present work, we analyse and describe several approaches that use sequences, structures, interactomics and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are: (a) remote homology searches using Psi-Blast, (b) detection of functional motifs and domains, (c) analysis of data from protein-protein interaction databases (PPIs), (d) matching the query protein sequence to 3D databases (i.e., algorithms such as PISITE), (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail in the detection of the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics databases (PPIs) has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations (it requires the existence of multialigned family protein sequences) but can suggest how the evolutionary process of second function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses.
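
    For approach (a), remote homology searching with Psi-Blast, a hedged sketch of driving the search from Python is shown below. It assumes that NCBI BLAST+ is installed locally and that a formatted protein database (here called "swissprot") is available; the query file name is hypothetical, and the flag names are standard BLAST+ options that should be verified with psiblast -help.

```python
# Hedged sketch: running a PSI-BLAST remote-homology search from Python.
# Assumes NCBI BLAST+ is installed and a protein database (here 'swissprot')
# has been formatted locally; the query file name is hypothetical. Flag names
# are standard BLAST+ options but should be checked with `psiblast -help`.
import subprocess

def run_psiblast(query_fasta: str, db: str = "swissprot",
                 iterations: int = 3, evalue: float = 0.001) -> str:
    cmd = [
        "psiblast",
        "-query", query_fasta,
        "-db", db,
        "-num_iterations", str(iterations),
        "-evalue", str(evalue),
        "-outfmt", "6",          # tabular: query, hit, % identity, ...
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_psiblast("candidate_moonlighter.fasta"))
```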

  5. Database Description - SAHG | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database description: SAHG. Contact: Chie Motono (Tel: +81-3-3599-8067). Database classification: Structure Databases; Protein properties. Organism: Homo sapiens (Taxonomy ID: 9606). Database maintenance site: The Molecular Profiling Research Center for D... Need for user registration: not available.

  6. VnpPersonService

    Data.gov (United States)

    Department of Veterans Affairs — Common Web Service for VONAPP. Retrieves, creates, updates and deletes a veteranÆs VNP_PERSON record in the Corporate database for a claim selected in the GUI by the...

  7. Construction of nuclear special information service

    International Nuclear Information System (INIS)

    Oh, Jeong Hoon; Kim, Tae Whan; Yi, Ji Ho; Chun, Young Choon; Yoo, Jae Bok; Yoo, An Na; Choi, Heon Soo

    2012-01-01

    The domestic INIS project has carried out various activities: supporting decision-making by the INIS Secretariat, exchanging statistical information between INIS and the country, and providing technical assistance to domestic end-users of the INIS database. For the construction of the INIS database from input sent by member states, the data published in the country have been gathered, collected, and entered into the INIS database according to the INIS reference series. Using the INIS output data, domestic users have been provided with searching of the INIS CD-ROM DB, the INIS online database, the INIS SDI service, and non-conventional literature delivery services. The INIS2 DB host site in Korea has served users in Korea and in other INIS member countries, and a data update process has been performed in order to maintain the same data as the Vienna center. Publicity activities were also carried out in various ways. To build up the digital information infrastructure, we replaced the web server of KORNIS21 and reconstructed the digital library system following the separation of the KAERI network. We constructed a patent trend analysis system and a new SDI service system, collected Web DBs, digital journals and other resources, and worked on the operation of the knowledge management system and the management of research documents. We have entered over 4,000 records per year since 2009, and the input this year has reached 4,284 records. In order to cover the comprehensive domestic publications related to nuclear energy and to raise the standing of the national center, continued effort and budget support are necessary. We expect the INIS2 DB host site to contribute to improved productivity in nuclear energy research as well as to the diffusion of information about nuclear energy. By replacing the web server of KORNIS21 we provided users with stable services, and by constructing the patent trend analysis system and the new SDI service system we contributed to the improvement of researchers' productivity and provided users with services

  8. The SAMGrid database server component: its upgraded infrastructure and future development path

    International Nuclear Information System (INIS)

    Loebel-Carpenter, L.; White, S.; Baranovski, A.; Garzoglio, G.; Herber, R.; Illingworth, R.; Kennedy, R.; Kreymer, A.; Kumar, A.; Lueking, L.; Lyon, A.; Merritt, W.; Terekhov, I.; Trumbo, J.; Veseli, S.; Burgon-Lyon, M.; St Denis, R.; Belforte, S.; Kerzel, U.; Bartsch, V.; Leslie, M.

    2004-01-01

    The SAMGrid Database Server encapsulates several important services, such as accessing file metadata and replica catalog, keeping track of the processing information, as well as providing the runtime support for SAMGrid station services. Recent deployment of the SAMGrid system for CDF has resulted in unification of the database schema used by CDF and D0, and the complexity of changes required for the unified metadata catalog has warranted a complete redesign of the DB Server. We describe here the architecture and features of the new server. In particular, we discuss the new CORBA infrastructure that utilizes python wrapper classes around IDL structs and exceptions. Such infrastructure allows us to use the same code on both server and client sides, which in turn results in significantly improved code maintainability and easier development. We also discuss future integration of the new server with an SBIR II project which is directed toward allowing the DB Server to access distributed databases, implemented in different DB systems and possibly using different schema

  9. Database Description - ASTRA | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database description: ASTRA. Database classification: Nucleotide Sequence Databases - Gene structure. Organism: Taxonomy ID 3702; Oryza sativa (Taxonomy ID: 4530). Database maintenance site: National Institute of Ad... Need for user registration: not available.

  10. The SIB Swiss Institute of Bioinformatics' resources: focus on curated databases

    OpenAIRE

    Bultet, Lisandra Aguilar; Aguilar Rodriguez, Jose; Ahrens, Christian H; Ahrne, Erik Lennart; Ai, Ni; Aimo, Lucila; Akalin, Altuna; Aleksiev, Tyanko; Alocci, Davide; Altenhoff, Adrian; Alves, Isabel; Ambrosini, Giovanna; Pedone, Pascale Anderle; Angelina, Paolo; Anisimova, Maria

    2016-01-01

    The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB'...

  11. Database Description - RPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database description: RPD (alternative name: Rice Proteome Database). Contact: ... Institute of Crop Science, National Agriculture and Food Research Organization, Setsuko Komatsu. Database classification: Proteomics Resources; Plant databases - Rice. Organism: Oryza sativa (Taxonomy ID: 4530). The Rice Proteome Database contains information on proteins ... entered in the Rice Proteome Database; the database is searchable by keyword.

  12. Database Description - PLACE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database description: PLACE (alternative name: A Database ...). Contact: National Institute of Agrobiological Sciences, Kannondai, Tsukuba, Ibaraki 305-8602, Japan. Database classification: Plant databases. Organism: Tracheophyta (Taxonomy ID: 58023). Reference: ...99, Vol. 27, No. 1: 297-300. Database maintenance site: National In... Need for user registration: not available.

  13. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    Science.gov (United States)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.
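
    The "best tool for each kind of data" split described above can be illustrated with a minimal sketch: intrinsically relational metadata stays in an SQL catalogue while a bulky, schema-free payload is parked in a key-value style store. Below, SQLite stands in for the relational side and a plain dictionary for the structured-storage service; all table, key and field names are invented.

```python
# Minimal sketch of keeping intrinsically relational metadata in an SQL store
# while parking schema-free payload in a key-value style store. SQLite and a
# plain dict are stand-ins; all table, key and field names are invented.
import json
import sqlite3

catalogue = sqlite3.connect(":memory:")          # relational metadata side
catalogue.execute("""
CREATE TABLE runs (
    run_number  INTEGER PRIMARY KEY,
    start_time  TEXT,
    payload_key TEXT      -- pointer into the payload store
)""")

payload_store = {}                               # stand-in for structured storage

def record_run(run_number, start_time, conditions):
    key = f"run:{run_number}:conditions"
    payload_store[key] = json.dumps(conditions)  # schema-free payload
    catalogue.execute("INSERT INTO runs VALUES (?, ?, ?)",
                      (run_number, start_time, key))

record_run(200000, "2016-09-01T12:00:00",
           {"solenoid_on": True, "lumi_blocks": 1200})
row = catalogue.execute(
    "SELECT payload_key FROM runs WHERE run_number = 200000").fetchone()
print(json.loads(payload_store[row[0]]))
```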

  14. Internet-Based Indoor Navigation Services

    OpenAIRE

    Zeinalipour-Yazti, Demetrios; Laoudias, Christos; Georgiou, Kyriakos

    2017-01-01

    Smartphone advances are leading to a class of Internet-based Indoor Navigation services. IIN services rely on geolocation databases that store indoor models, comprising floor maps and points of interest, along with wireless, light, and magnetic signals for localizing users. Developing IIN services creates new information management challenges - such as crowdsourcing indoor models, acquiring and fusing big data velocity signals, localization algorithms, and custodians' location privacy. Here, ...

  15. Database Description - JSNP | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name JSNP Alternative nam...n Science and Technology Agency Creator Affiliation: Contact address E-mail : Database...sapiens Taxonomy ID: 9606 Database description A database of about 197,000 polymorphisms in Japanese populat...1):605-610 External Links: Original website information Database maintenance site Institute of Medical Scien...er registration Not available About This Database Database Description Download License Update History of This Database

  16. Database Description - RED | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ase Description General information of database Database name RED Alternative name Rice Expression Database...enome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice Database classifi...cation Microarray, Gene Expression Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database descripti... Article title: Rice Expression Database: the gateway to rice functional genomics...nt Science (2002) Dec 7 (12):563-564 External Links: Original website information Database maintenance site

  17. ODIN. Online Database Information Network: ODIN Policy & Procedure Manual.

    Science.gov (United States)

    Townley, Charles T.; And Others

    Policies and procedures are outlined for the Online Database Information Network (ODIN), a cooperative of libraries in south-central Pennsylvania, which was organized to improve library services through technology. The first section covers organization and goals, members, and responsibilities of the administrative council and libraries. Patrons…

  18. IT Services availability during CERN annual closure

    CERN Multimedia

    2003-01-01

    Mail, CERN Windows (NICE), Web services, LXPLUS, LXBATCH, automated tape devices, Castor, backups, software license servers, Sundev, CVS and print servers, CDS and Agenda-Maker, EDMS (in collaboration with EST Division), CMS disc servers, Campus Network, Remedy, Security and VPN services will be available during the CERN annual closure. The physics database cluster and the replication location service, as well as accdb, cerndb and admsdb, are the only databases available; CCDB will be closed for public access. Problems developing on scheduled services should be addressed within about half a day, except around Christmas Eve and Christmas Day (24 and 25 December) and New Year's Eve and New Year's Day (31 December and 1 January). All other services will be left running mostly unattended. No interruptions are scheduled, but restoration of the service in case of failure cannot be guaranteed. Please note that the Helpdesk will be closed, that no file restores from backups will be possible and damaged tapes wi...

  19. Simulation for IT Service Desk Improvement

    Directory of Open Access Journals (Sweden)

    Peter Bober

    2014-07-01

    Full Text Available IT service desk is a complex service that an IT service company provides to its customers. This article provides a methodology which uses discrete-event simulation to help IT service management make decisions and plan service strategy. The simulation model considers the learning ability of service desk agents and the growth of the company's knowledge database. The model shows how the learning curve influences the development over time of service desk quality and efficiency. The article promotes using simulation to define quantitative goals for service desk improvement.
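    To make the idea concrete, here is a toy discrete-event style sketch (not the paper's actual model) in which an agent's service time shrinks as experience and the knowledge base grow; all parameters and the learning-curve form are invented for illustration.

```python
import random

random.seed(1)

def handle_time(base_minutes, experience, knowledge_items, learning_rate=0.05):
    """Service time shrinks with agent experience and knowledge-base size (assumed form)."""
    speedup = 1.0 + learning_rate * experience + 0.01 * knowledge_items
    return base_minutes / speedup

experience = 0          # tickets resolved so far by the agent
knowledge_items = 0     # articles accumulated in the knowledge database
clock = 0.0

for ticket in range(1, 201):
    base = random.uniform(10, 40)                       # raw ticket difficulty in minutes
    clock += handle_time(base, experience, knowledge_items)
    experience += 1
    if random.random() < 0.3:                           # some tickets yield a new KB article
        knowledge_items += 1
    if ticket % 50 == 0:
        print(f"after {ticket} tickets: avg {clock / ticket:.1f} min/ticket, KB size {knowledge_items}")
```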

  20. Introducing the hypothome

    DEFF Research Database (Denmark)

    Madsen, Claus Desler; Zambach, Sine; Suravajhala, Prashanth

    2014-01-01

    An interactome is defined as a network of protein-protein interactions built from experimentally verified interactions. Basic science as well as application-based research of potential new drugs can be promoted by including proteins that are only predicted into interactomes. The disadvantage of doing so is the risk of devaluing the definition of interactomes. By adding proteins that have only been predicted, an interactome can no longer be classified as experimentally verified and the integrity of the interactome will be compromised. Therefore, we propose the term 'hypothome' (collection...

  1. PhEDEx Data Service

    International Nuclear Information System (INIS)

    Egeland, Ricky; Wildish, Tony; Huang, Chih-Hao

    2010-01-01

    The PhEDEx Data Service provides access to information from the central PhEDEx database, as well as certificate-authenticated managerial operations such as requesting the transfer or deletion of data. The Data Service is integrated with the 'SiteDB' service for fine-grained access control, providing a safe and secure environment for operations. A plug-in architecture allows server-side modules to be developed rapidly and easily by anyone familiar with the schema, and can automatically return the data in a variety of formats for use by different client technologies. Using HTTP access via the Data Service instead of direct database connections makes it possible to build monitoring web-pages with complex drill-down operations, suitable for debugging or presentation from many aspects. This will form the basis of the new PhEDEx website in the near future, as well as providing access to PhEDEx information and certificate-authenticated services for other CMS dataflow and workflow management tools such as CRAB, WMCore, DBS and the dashboard. A PhEDEx command-line client tool provides one-stop access to all the functions of the PhEDEx Data Service interactively, for use in simple scripts that do not access the service directly. The client tool provides certificate-authenticated access to managerial functions, so all the functions of the PhEDEx Data Service are available to it. The tool can be expanded by plug-ins which can combine or extend the client-side manipulation of data from the Data Service, providing a powerful environment for manipulating data within PhEDEx.
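    For illustration, a certificate-authenticated HTTP call of the kind described might look like the following Python sketch; the base URL, resource name and parameters are placeholders invented for the example, not the documented PhEDEx Data Service API.

```python
import requests

# Illustrative only: a generic grid-certificate-authenticated call to an HTTP data
# service in the style described above. URL, path and parameters are hypothetical.
BASE = "https://example.cern.ch/phedex/datasvc/json/prod"   # hypothetical endpoint
resp = requests.get(
    f"{BASE}/transferrequests",                             # hypothetical resource name
    params={"dataset": "/Sample/Dataset/RAW"},              # hypothetical query parameter
    cert=("usercert.pem", "userkey.pem"),                   # user certificate and key
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```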

  2. Database management systems understanding and applying database technology

    CERN Document Server

    Gorman, Michael M

    1991-01-01

    Database Management Systems: Understanding and Applying Database Technology focuses on the processes, methodologies, techniques, and approaches involved in database management systems (DBMSs).The book first takes a look at ANSI database standards and DBMS applications and components. Discussion focus on application components and DBMS components, implementing the dynamic relationship application, problems and benefits of dynamic relationship DBMSs, nature of a dynamic relationship application, ANSI/NDL, and DBMS standards. The manuscript then ponders on logical database, interrogation, and phy

  3. 78 FR 2363 - Notification of Deletion of a System of Records; Automated Trust Funds Database

    Science.gov (United States)

    2013-01-11

    ... [Docket No. APHIS-2012-0041] Notification of Deletion of a System of Records; Automated Trust Funds Database AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice of deletion of a system... establishing the Automated Trust Funds (ATF) database system of records. The Federal Information Security...

  4. Global Findex Database 2017 : Measuring Financial Inclusion and the Fintech Revolution

    OpenAIRE

    Demirguc-Kunt, Asli; Klapper, Leora; Singer, Dorothe; Ansar, Saniya; Hess, Jake

    2018-01-01

    The Global Findex database is the world's most comprehensive set of data on how people make payments, save money, borrow and manage risk. Launched in 2011, it includes more than 100 financial inclusion indicators in a format allowing users to compare access to financial services among adults worldwide -- including by gender, age and household income. This third edition of the database was compiled in 2017 using nationally representative surveys in more than 140 developing and high-income...

  5. Relational databases

    CERN Document Server

    Bell, D A

    1986-01-01

    Relational Databases explores the major advances in relational databases and provides a balanced analysis of the state of the art in relational databases. Topics covered include capture and analysis of data placement requirements; distributed relational database systems; data dependency manipulation in database schemata; and relational database support for computer graphics and computer aided design. This book is divided into three sections and begins with an overview of the theory and practice of distributed systems, using the example of INGRES from Relational Technology as illustration. The

  6. Database Description - DGBY | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name DGBY Alternative name Database...EL: +81-29-838-8066 E-mail: Database classification Microarray Data and other Gene Expression Databases Orga...nism Taxonomy Name: Saccharomyces cerevisiae Taxonomy ID: 4932 Database descripti...-called phenomics). We uploaded these data on this website which is designated DGBY(Database for Gene expres...ma J, Ando A, Takagi H. Journal: Yeast. 2008 Mar;25(3):179-90. External Links: Original website information Database

  7. Database Description - KOME | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name KOME Alternative nam... Sciences Plant Genome Research Unit Shoshi Kikuchi E-mail : Database classification Plant databases - Rice ...Organism Taxonomy Name: Oryza sativa Taxonomy ID: 4530 Database description Information about approximately ...Hayashizaki Y, Kikuchi S. Journal: PLoS One. 2007 Nov 28; 2(11):e1235. External Links: Original website information Database...OS) Rice mutant panel database (Tos17) A Database of Plant Cis-acting Regulatory

  8. Time-Critical Database Conditions Data-Handling for the CMS Experiment

    CERN Document Server

    De Gruttola, M; Innocente, V; Pierro, A

    2011-01-01

    Automatic, synchronous and of course reliable population of the condition database is critical for the correct operation of the online selection as well as of the offline reconstruction and data analysis. We will describe here the system put in place in the CMS experiment to automate the processes that populate the database centrally and make condition data promptly available, both online for the high-level trigger and offline for reconstruction. The data are "dropped" by the users in a dedicated service which synchronizes them and takes care of writing them into the online database. Then they are automatically streamed to the offline database, hence immediately accessible offline worldwide. This mechanism was intensively used during the 2008 and 2009 operation with cosmic ray challenges and first LHC collision data, and many improvements have been made since. The experience of these first years of operation will be discussed in detail.

  9. Intelligent Access to Sequence and Structure Databases (IASSD) - an interface for accessing information from major web databases.

    Science.gov (United States)

    Ganguli, Sayak; Gupta, Manoj Kumar; Basu, Protip; Banik, Rahul; Singh, Pankaj Kumar; Vishal, Vineet; Bera, Abhisek Ranjan; Chakraborty, Hirak Jyoti; Das, Sasti Gopal

    2014-01-01

    With the advent of the age of big data and advances in high-throughput technology, accessing data has become one of the most important steps in the entire knowledge discovery process. Most users are not able to decipher the query result that is obtained when non-specific keywords or a combination of keywords are used. Intelligent Access to Sequence and Structure Databases (IASSD) is a desktop application for the Windows operating system. It is written in Java and utilizes the Web Service Description Language (WSDL) files and JAR files of the E-utilities of various databases such as the National Centre for Biotechnology Information (NCBI) and the Protein Data Bank (PDB). Apart from that, IASSD allows the user to view protein structures using a Jmol application which supports conditional editing. The JAR file is freely available through e-mail from the corresponding author.
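    The NCBI E-utilities that such desktop tools build on are themselves plain HTTP endpoints, so the kind of query the application wraps can be sketched directly; the search term and parameter choices below are only an example, shown as a direct call rather than through IASSD.

```python
import requests

# Direct call to the NCBI E-utilities "esearch" endpoint; query term and options are arbitrary.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"
resp = requests.get(
    f"{EUTILS}/esearch.fcgi",
    params={"db": "protein", "term": "p53 AND human[orgn]", "retmode": "json", "retmax": 5},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["esearchresult"]["idlist"])   # list of matching record identifiers
```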

  10. SoyFN: a knowledge database of soybean functional networks.

    Science.gov (United States)

    Xu, Yungang; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang

    2014-01-01

    Many databases for soybean genomic analysis have been built and made publicly available, but few of them contain knowledge specifically targeting the omics-level gene-gene, gene-microRNA (miRNA) and miRNA-miRNA interactions. Here, we present SoyFN, a knowledge database of soybean functional gene networks and miRNA functional networks. SoyFN provides user-friendly interfaces to retrieve, visualize, analyze and download the functional networks of soybean genes and miRNAs. In addition, it incorporates much information about KEGG pathways, gene ontology annotations and 3'-UTR sequences as well as many useful tools including SoySearch, ID mapping, Genome Browser, eFP Browser and promoter motif scan. SoyFN is a schema-free database that can be accessed as a Web service from any modern programming language using a simple Hypertext Transfer Protocol call. The Web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. We anticipate that this database will be useful for members of research communities both in soybean experimental science and bioinformatics. Database URL: http://nclab.hit.edu.cn/SoyFN.
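    A sketch of the kind of plain HTTP call the abstract describes is shown below; the path and query parameters are made up for illustration and are not SoyFN's documented API.

```python
import requests

# Hypothetical endpoint under the SoyFN site given above; path and parameters are assumptions.
resp = requests.get(
    "http://nclab.hit.edu.cn/SoyFN/api/network",
    params={"gene": "Glyma01g01000", "type": "gene"},
    timeout=30,
)
resp.raise_for_status()
print(resp.text[:500])   # inspect the start of the returned document
```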

  11. Database tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed off line from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving off-line data analysis environment on the USC computers

  12. A comparison of database systems for XML-type data.

    Science.gov (United States)

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets, however when the full Medline dataset is queried a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's Lingpipe is a more lightweight, customizable and sufficiently fast solution. It does however require more initial configuration steps. For data with a changing XML structure Sedna and BaseX as native XML database systems or MySQL with an XML-type column are suitable.
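    The workload such systems optimise is path-style querying over XML records. The standard-library sketch below shows the kind of query a native XML database would evaluate over a Medline-like document; the tiny inline document is invented for the example.

```python
import xml.etree.ElementTree as ET

# Loosely Medline-like document, reduced to a few elements for illustration.
doc = ET.fromstring("""
<MedlineCitationSet>
  <MedlineCitation><PMID>111</PMID><Article><ArticleTitle>XML storage</ArticleTitle></Article></MedlineCitation>
  <MedlineCitation><PMID>222</PMID><Article><ArticleTitle>Native XML databases</ArticleTitle></Article></MedlineCitation>
</MedlineCitationSet>
""")

# Equivalent of a simple XPath query over the citation records.
for citation in doc.findall("./MedlineCitation"):
    pmid = citation.findtext("PMID")
    title = citation.findtext("./Article/ArticleTitle")
    print(pmid, title)
```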

  13. Database tools for enhanced analysis of TMX-U data

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed offline from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving offline data analysis environment on the USC computers

  14. Scanning an individual monitoring database for multiple occurrences using bi-gram analysis

    International Nuclear Information System (INIS)

    Van Dijk, J. W. E.

    2007-01-01

    Maintaining the integrity of the databases is one of the important aspects of quality assurance at individual monitoring services and national dose registers. This paper presents a method for finding and preventing duplicate entries in the databases, which can occur, e.g., because of variable spelling or misspelling of a name. The method is based on bi-gram text analysis techniques. It can also be used for retrieving dose data in historical databases in the framework of dose reconstruction efforts for persons of whom the spelling of the name, as originally entered, possibly decades ago, is uncertain. (authors)
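    A minimal version of the bi-gram technique referred to above can be sketched as follows: names are reduced to character bi-grams and compared with a Dice-style similarity, and a tunable threshold flags probable duplicates. The names and the threshold value are illustrative only.

```python
def bigrams(name: str) -> set[str]:
    """Character bi-grams of a normalised name, e.g. 'meyer' -> {'me','ey','ye','er'}."""
    s = "".join(ch for ch in name.lower() if ch.isalpha())
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice_similarity(a: str, b: str) -> float:
    """Dice coefficient on bi-gram sets; 1.0 = identical, 0.0 = no shared bi-grams."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

# Flag likely duplicate register entries despite spelling variants.
register = ["Meyer, Johann", "Meier, Johan", "Smith, Ann"]
for i, x in enumerate(register):
    for y in register[i + 1:]:
        score = dice_similarity(x, y)
        if score > 0.6:      # threshold is illustrative; a real service would tune it
            print(f"possible duplicate: {x!r} ~ {y!r} ({score:.2f})")
```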

  15. Databases

    Directory of Open Access Journals (Sweden)

    Nick Ryan

    2004-01-01

    Full Text Available Databases are deeply embedded in archaeology, underpinning and supporting many aspects of the subject. However, as well as providing a means for storing, retrieving and modifying data, databases themselves must be a result of a detailed analysis and design process. This article looks at this process, and shows how the characteristics of data models affect the process of database design and implementation. The impact of the Internet on the development of databases is examined, and the article concludes with a discussion of a range of issues associated with the recording and management of archaeological data.

  16. Towards Building a Uniform Cloud Database Representation for Data Interchange

    Directory of Open Access Journals (Sweden)

    Andreica Alina

    2016-12-01

    Full Text Available The paper proposes design principles for data representation and simplification in order to design cloud services for data exchange between various information systems. We use equivalence algorithms and canonical representation in the cloud database. The solution we describe brings important advantages in organizational / entity communication and cooperation, with important societal benefits and can be provided within cloud architectures. The generic design principles we apply bring important advantages in the design of the interchange services.

  17. NNDC Stand: Activities and Services of the National Nuclear Data Center

    International Nuclear Information System (INIS)

    Pritychenko, B.; Arcilla, R.; Burrows, T.W.; Dunford, C.L.; Herman, M.W.; McLane, V.; Oblozinsky, P.; Sonzogni, A.A.; Tuli, J.K.; Winchell, D.F.

    2005-01-01

    The National Nuclear Data Center (NNDC) collects, evaluates, and disseminates nuclear physics data for basic nuclear research and applied nuclear technologies, including energy, shielding, medical and homeland security applications. In 2004, to answer the needs of the nuclear data user community, NNDC completed a project to modernize data storage and management of its databases and began offering new nuclear data Web services. The principles of database and Web application development as well as the related nuclear reaction and structure database services are briefly described.

  18. A side-effect free method for identifying cancer drug targets.

    Science.gov (United States)

    Ashraf, Md Izhar; Ong, Seng-Kai; Mujawar, Shama; Pawar, Shrikant; More, Pallavi; Paul, Somnath; Lahiri, Chandrajit

    2018-04-27

    Identifying effective drug targets, with little or no side effects, remains an ever challenging task. A potential pitfall of failing to uncover the correct drug targets, due to the side effects of pleiotropic genes, might lead the potential drugs to be rendered illicit and withdrawn. Simplifying disease complexity, for the investigation of the mechanistic aspects and the identification of effective drug targets, has been done through several approaches of protein interactome analysis. Of these, centrality measures have always gained importance in identifying candidate drug targets. Here, we put forward an integrated method of analysing a complex network of cancer and depict the importance of k-core, functional connectivity and centrality (KFC) for identifying effective drug targets. Essentially, we have extracted the proteins involved in the pathways leading to cancer from the pathway databases which enlist real experimental datasets. The interactions between these proteins were mapped to build an interactome. Integrative analyses of the interactome enabled us to unearth plausible reasons for drugs being rendered withdrawn, thereby giving future scope to pharmaceutical industries to potentially avoid them (e.g. ESR1, HDAC2, F2, PLG, PPARA, RXRA, etc). Based upon our KFC criteria, we have shortlisted ten proteins (GRB2, FYN, PIK3R1, CBL, JAK2, LCK, LYN, SYK, JAK1 and SOCS3) as effective candidates for drug development.
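    A simplified sketch of this kind of screening, using the networkx library on a toy interaction network, is shown below; the edge list, the choice of k and the centrality measure are illustrative and do not reproduce the paper's data or exact KFC procedure.

```python
import networkx as nx

# Toy protein-interaction network; edges are invented for illustration.
edges = [("GRB2", "FYN"), ("GRB2", "PIK3R1"), ("FYN", "LCK"), ("LCK", "LYN"),
         ("LYN", "SYK"), ("SYK", "JAK2"), ("JAK2", "JAK1"), ("JAK1", "SOCS3"),
         ("GRB2", "CBL"), ("CBL", "SYK"), ("PIK3R1", "JAK2"), ("GRB2", "SYK")]
G = nx.Graph(edges)

core = nx.k_core(G, k=2)                      # keep densely connected proteins (2-core)
centrality = nx.betweenness_centrality(core)  # one of several possible centrality measures

# Rank the core proteins by centrality, mimicking a candidate-target shortlist.
for protein, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{protein}: betweenness {score:.3f}, degree {core.degree(protein)}")
```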

  19. The GLIMS Glacier Database

    Science.gov (United States)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
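    Because the map interface is a standard OGC WMS, glacier layers can be requested with an ordinary GetMap call. In the sketch below the server URL and layer name are placeholders, while the request parameters follow the standard WMS 1.1.1 convention.

```python
import requests

# Standard OGC WMS GetMap request; server URL and layer name are placeholders.
params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "glacier_outlines",                 # hypothetical layer name
    "SRS": "EPSG:4326",
    "BBOX": "86.0,27.5,87.5,28.5",                # lon/lat box, chosen arbitrarily
    "WIDTH": 512, "HEIGHT": 512,
    "FORMAT": "image/png",
}
resp = requests.get("https://www.glims.org/mapservice", params=params, timeout=60)  # placeholder URL
resp.raise_for_status()
with open("glaciers.png", "wb") as f:
    f.write(resp.content)
```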

  20. The Reactive Species Interactome: Evolutionary Emergence, Biological Significance, and Opportunities for Redox Metabolomics and Personalized Medicine.

    Science.gov (United States)

    Cortese-Krott, Miriam M; Koning, Anne; Kuhnle, Gunter G C; Nagy, Peter; Bianco, Christopher L; Pasch, Andreas; Wink, David A; Fukuto, Jon M; Jackson, Alan A; van Goor, Harry; Olson, Kenneth R; Feelisch, Martin

    2017-10-01

    Oxidative stress is thought to account for aberrant redox homeostasis and contribute to aging and disease. However, more often than not, administration of antioxidants is ineffective, suggesting that our current understanding of the underlying regulatory processes is incomplete. Recent Advances: Similar to reactive oxygen species and reactive nitrogen species, reactive sulfur species are now emerging as important signaling molecules, targeting regulatory cysteine redox switches in proteins, affecting gene regulation, ion transport, intermediary metabolism, and mitochondrial function. To rationalize the complexity of chemical interactions of reactive species with themselves and their targets and help define their role in systemic metabolic control, we here introduce a novel integrative concept defined as the reactive species interactome (RSI). The RSI is a primeval multilevel redox regulatory system whose architecture, together with the physicochemical characteristics of its constituents, allows efficient sensing and rapid adaptation to environmental changes and various other stressors to enhance fitness and resilience at the local and whole-organism level. To better characterize the RSI-related processes that determine fluxes through specific pathways and enable integration, it is necessary to disentangle the chemical biology and activity of reactive species (including precursors and reaction products), their targets, communication systems, and effects on cellular, organ, and whole-organism bioenergetics using system-level/network analyses. Understanding the mechanisms through which the RSI operates will enable a better appreciation of the possibilities to modulate the entire biological system; moreover, unveiling molecular signatures that characterize specific environmental challenges or other forms of stress will provide new prevention/intervention opportunities for personalized medicine. Antioxid. Redox Signal. 00, 000-000.

  1. Database Description - SSBD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name SSBD Alternative nam...ss 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe 650-0047, Japan, RIKEN Quantitative Biology Center Shuichi Onami E-mail: Database... classification Other Molecular Biology Databases Database classification Dynamic databa...elegans Taxonomy ID: 6239 Taxonomy Name: Escherichia coli Taxonomy ID: 562 Database description Systems Scie...i Onami Journal: Bioinformatics/April, 2015/Volume 31, Issue 7 External Links: Original website information Database

  2. Database Description - GETDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name GETDB Alternative n...ame Gal4 Enhancer Trap Insertion Database DOI 10.18908/lsdba.nbdc00236-000 Creator Creator Name: Shigeo Haya... Chuo-ku, Kobe 650-0047 Tel: +81-78-306-3185 FAX: +81-78-306-3183 E-mail: Database classification Expression... Invertebrate genome database Organism Taxonomy Name: Drosophila melanogaster Taxonomy ID: 7227 Database des...riginal website information Database maintenance site Drosophila Genetic Resource

  3. Comparative proteomic analysis of normal and collagen IX null mouse cartilage reveals altered extracellular matrix composition and novel components of the collagen IX interactome.

    Science.gov (United States)

    Brachvogel, Bent; Zaucke, Frank; Dave, Keyur; Norris, Emma L; Stermann, Jacek; Dayakli, Münire; Koch, Manuel; Gorman, Jeffrey J; Bateman, John F; Wilson, Richard

    2013-05-10

    Collagen IX is an integral cartilage extracellular matrix component important in skeletal development and joint function. Proteomic analysis and validation studies revealed novel alterations in collagen IX null cartilage. Matrilin-4, collagen XII, thrombospondin-4, fibronectin, βig-h3, and epiphycan are components of the in vivo collagen IX interactome. We applied a proteomics approach to advance our understanding of collagen IX ablation in cartilage. The cartilage extracellular matrix is essential for endochondral bone development and joint function. In addition to the major aggrecan/collagen II framework, the interacting complex of collagen IX, matrilin-3, and cartilage oligomeric matrix protein (COMP) is essential for cartilage matrix stability, as mutations in Col9a1, Col9a2, Col9a3, Comp, and Matn3 genes cause multiple epiphyseal dysplasia, in which patients develop early onset osteoarthritis. In mice, collagen IX ablation results in severely disturbed growth plate organization, hypocellular regions, and abnormal chondrocyte shape. This abnormal differentiation is likely to involve altered cell-matrix interactions but the mechanism is not known. To investigate the molecular basis of the collagen IX null phenotype we analyzed global differences in protein abundance between wild-type and knock-out femoral head cartilage by capillary HPLC tandem mass spectrometry. We identified 297 proteins in 3-day cartilage and 397 proteins in 21-day cartilage. Components that were differentially abundant between wild-type and collagen IX-deficient cartilage included 15 extracellular matrix proteins. Collagen IX ablation was associated with dramatically reduced COMP and matrilin-3, consistent with known interactions. Matrilin-1, matrilin-4, epiphycan, and thrombospondin-4 levels were reduced in collagen IX null cartilage, providing the first in vivo evidence for these proteins belonging to the collagen IX interactome. Thrombospondin-4 expression was reduced at the mRNA level

  4. The ATLAS conditions database architecture for the Muon spectrometer

    International Nuclear Information System (INIS)

    Verducci, Monica

    2010-01-01

    The Muon System, facing the challenging requirements of conditions data storage, has extensively started to use the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates, and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non event' data and detector quality flags storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with more emphasis given to the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
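    The interval-of-validity idea can be illustrated with a small sketch that is independent of COOL's actual schema and API: each payload carries a [since, until) range, and a lookup returns the payload whose range covers the requested time. Folder names, times and payloads below are invented.

```python
import sqlite3

# Toy interval-of-validity table; not COOL's schema or API.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE iov (folder TEXT, since INTEGER, until INTEGER, payload TEXT)")
db.executemany("INSERT INTO iov VALUES (?, ?, ?, ?)", [
    ("/MUON/Align", 0,   100,   "alignment-v1"),
    ("/MUON/Align", 100, 250,   "alignment-v2"),
    ("/MUON/Align", 250, 10**9, "alignment-v3"),
])

def lookup(folder: str, run_time: int) -> str:
    """Return the payload whose validity interval covers the requested time."""
    row = db.execute(
        "SELECT payload FROM iov WHERE folder = ? AND since <= ? AND ? < until",
        (folder, run_time, run_time),
    ).fetchone()
    return row[0] if row else "no valid payload"

print(lookup("/MUON/Align", 180))   # -> alignment-v2
```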

  5. The ATLAS conditions database architecture for the Muon spectrometer

    Science.gov (United States)

    Verducci, Monica; ATLAS Muon Collaboration

    2010-04-01

    The Muon System, facing the challenging requirements of conditions data storage, has extensively started to use the conditions database project 'COOL' as the basis for all its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates, and in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non event' data and detector quality flags storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with more emphasis given to the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.

  6. Public participation in genetic databases: crossing the boundaries between biobanks and forensic DNA databases through the principle of solidarity.

    Science.gov (United States)

    Machado, Helena; Silva, Susana

    2015-10-01

    The ethical aspects of biobanks and forensic DNA databases are often treated as separate issues. As a reflection of this, public participation, or the involvement of citizens in genetic databases, has been approached differently in the fields of forensics and medicine. This paper aims to cross the boundaries between medicine and forensics by exploring the flows between the ethical issues presented in the two domains and the subsequent conceptualisation of public trust and legitimisation. We propose to introduce the concept of 'solidarity', traditionally applied only to medical and research biobanks, into a consideration of public engagement in medicine and forensics. Inclusion of a solidarity-based framework, in both medical biobanks and forensic DNA databases, raises new questions that should be included in the ethical debate, in relation to both health services/medical research and activities associated with the criminal justice system.

  7. The CMS dataset bookkeeping service

    Science.gov (United States)

    Afaq, A.; Dolgert, A.; Guo, Y.; Jones, C.; Kosyakov, S.; Kuznetsov, V.; Lueking, L.; Riley, D.; Sekhri, V.

    2008-07-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line tool, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  8. The CMS dataset bookkeeping service

    Energy Technology Data Exchange (ETDEWEB)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V [Fermilab, Batavia, Illinois 60510 (United States); Dolgert, A; Jones, C; Kuznetsov, V; Riley, D [Cornell University, Ithaca, New York 14850 (United States)

    2008-07-15

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line tool, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  9. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, A; Guo, Y; Kosyakov, S; Lueking, L; Sekhri, V; Dolgert, A; Jones, C; Kuznetsov, V; Riley, D

    2008-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line tool, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  10. The CMS dataset bookkeeping service

    International Nuclear Information System (INIS)

    Afaq, Anzar; Dolgert, Andrew; Guo, Yuyi; Jones, Chris; Kosyakov, Sergey; Kuznetsov, Valentin; Lueking, Lee; Riley, Dan; Sekhri, Vijay

    2007-01-01

    The CMS Dataset Bookkeeping Service (DBS) has been developed to catalog all CMS event data from Monte Carlo and Detector sources. It provides the ability to identify MC or trigger source, track data provenance, construct datasets for analysis, and discover interesting data. CMS requires processing and analysis activities at various service levels and the DBS system provides support for localized processing or private analysis, as well as global access for CMS users at large. Catalog entries can be moved among the various service levels with a simple set of migration tools, thus forming a loose federation of databases. DBS is available to CMS users via a Python API, a command-line tool, and a Discovery web page interface. The system is built as a multi-tier web application with Java servlets running under Tomcat, with connections via JDBC to Oracle or MySQL database backends. Clients connect to the service through HTTP or HTTPS with authentication provided by GRID certificates and authorization through VOMS. DBS is an integral part of the overall CMS Data Management and Workflow Management systems.

  11. Change management for semantic web services

    CERN Document Server

    Liu, Xumin; Bouguettaya, Athman

    2011-01-01

    Change Management for Semantic Web Services provides a thorough analysis of change management in the lifecycle of services for databases and workflows, including changes that occur at the individual service level or at the aggregate composed service level. This book describes taxonomy of changes that are expected in semantic service oriented environments. The process of change management consists of detecting, propagating, and reacting to changes. Change Management for Semantic Web Services is one of the first books that discuss the development of a theoretical foundation for managing changes

  12. Selection of Minerals properties using service oriented architecture

    Directory of Open Access Journals (Sweden)

    Pavel Horovčák

    2009-09-01

    Full Text Available The continual and impressive growth of internet technologies, in both development and implementation, enables the creation of productive, efficient, useful and interactive web applications. The contribution briefly characterizes SOA (Service Oriented Architecture), WS (Web Service) and AJAX (Asynchronous JavaScript And XML) technology, and illustrates the advantages of AJAX and WS integration on an example application for the interactive selection of one or more minerals according to currently chosen selection criteria. The contribution presents three web services (a service for creating a web page's select list based on the content of a given database table, a service for selecting one or a group of minerals according to criteria specified across a group of database tables, and a service for the correct depiction of chemical formulas on a web page). The application makes use of two web services on the server side and one web service plus Ajax technology on the client side. The application's client side integrates these web services in a dynamic way by means of Ajax technology and is at the same time a mashup demonstration.
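    A minimal server-side sketch of the first of these services (returning a database table's contents as options for a select list, to be consumed by an AJAX call) might look as follows in Python with Flask; the route, table names and database file are assumptions made for the example, not the paper's implementation.

```python
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/select-list/<table>")
def select_list(table: str):
    # Whitelist the tables exposed by the service; names are hypothetical.
    if table not in {"minerals", "crystal_systems"}:
        return jsonify(error="unknown table"), 404
    db = sqlite3.connect("minerals.db")   # hypothetical database file with id/name columns
    rows = db.execute(f"SELECT id, name FROM {table} ORDER BY name").fetchall()
    # The client-side AJAX call turns these pairs into <option> elements of a select list.
    return jsonify([{"value": rid, "label": name} for rid, name in rows])

if __name__ == "__main__":
    app.run(port=8080)
```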

  13. Download - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Trypanosomes Database Download First of all, please read the license of this database. Data ...1.4 KB) Simple search and download Download via FTP FTP server is sometimes jammed. If it is, access [here]. About This Database Data...base Description Download License Update History of This Database Site Policy | Contact Us Download - Trypanosomes Database | LSDB Archive ...

  14. License - Arabidopsis Phenome Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Arabidopsis Phenome Database License License to Use This Database Last updated : 2017/02/27 You may use this database...cense specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative ...Commons Attribution-Share Alike 4.0 International . If you use data from this database, please be sure attribute this database...ative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database, you ar

  15. SWS: accessing SRS sites contents through Web Services.

    Science.gov (United States)

    Romano, Paolo; Marra, Domenico

    2008-03-26

    Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed by taking these technologies into account, but many existing databases do not allow programmatic access. Only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of WS that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL-compliant client. SWS enables interoperability between workflow systems and SRS implementations, also by managing access to alternative sites, in order to cope with network and maintenance problems, and by selecting the most up-to-date among the available systems. The development and implementation of Web Services allowing programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.
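    Accessing such a service from a WSDL-compliant client can be as short as the following sketch using the zeep library; the WSDL URL and operation name are placeholders, since SWS's actual interface is not reproduced here.

```python
from zeep import Client

# WSDL-compliant client call of the kind described; URL and operation are hypothetical.
client = Client("http://example.org/sws/sws.wsdl")            # hypothetical WSDL location
result = client.service.getEntry(db="uniprot", id="P04637")   # hypothetical operation and arguments
print(result)
```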

  16. Structure and needs of global loss databases about natural disaster

    Science.gov (United States)

    Steuer, Markus

    2010-05-01

    Global loss databases are used for trend analyses and statistics in scientific projects, in studies for governmental and non-governmental organizations, and by the insurance and finance industry as well. At the moment three global data sets are established: EM-DAT (CRED), Sigma (Swiss Re) and NatCatSERVICE (Munich Re). Together with the Asian Disaster Reduction Center (ADRC) and the United Nations Development Program (UNDP), the providers of these data sets started a collaborative initiative in 2007 with the aim of agreeing on and implementing a common "Disaster Category Classification and Peril Terminology for Operational Databases". This common classification has been established through several technical meetings and working groups and represents a first and important step in the development of a standardized international classification of disasters and terminology of perils. Concretely, this means setting up a common hierarchy and terminology for all global and regional databases on natural disasters and establishing a common and agreed definition of disaster groups, main types and sub-types of events. Georeferencing, temporal aspects, methodology and sourcing are further issues that have been identified and will be discussed. The implementation of the new, agreed structure is already in place for Munich Re NatCatSERVICE. In the following oral session we will show the structure of the global databases as defined and, in addition, provide more transparency about the data sets behind published statistics and analyses. The special focus will be on the catastrophe classification, from a moderate loss event up to a great natural catastrophe, and also on the quality of sources and insight into the assessment of overall and insured losses. Keywords: disaster category classification, peril terminology, overall and insured losses, definition

  17. Database Description - AcEST | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available abase Description General information of database Database name AcEST Alternative n...hi, Tokyo-to 192-0397 Tel: +81-42-677-1111(ext.3654) E-mail: Database classificat...eneris Taxonomy ID: 13818 Database description This is a database of EST sequences of Adiantum capillus-vene...(3): 223-227. External Links: Original website information Database maintenance site Plant Environmental Res...base Database Description Download License Update History of This Database Site Policy | Contact Us Database Description - AcEST | LSDB Archive ...

  18. License - SKIP Stemcell Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us SKIP Stemcell Database License License to Use This Database Last updated : 2017/03/13 You may use this database...specifies the license terms regarding the use of this database and the requirements you must follow in using this database.... The license for this database is specified in the Creative Common...s Attribution-Share Alike 4.0 International . If you use data from this database, please be sure attribute this database...al ... . The summary of the Creative Commons Attribution-Share Alike 4.0 International is found here . With regard to this database

  19. KALIMER database development

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system between sub-projects used to share and integrate the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the documents and reports produced since project accomplishment.

  20. KALIMER database development

    International Nuclear Information System (INIS)

    Jeong, Kwan Seong; Lee, Yong Bum; Jeong, Hae Yong; Ha, Kwi Seok

    2003-03-01

    The KALIMER database is an advanced database for the integrated management of liquid metal reactor design technology development using Web applications. The KALIMER design database is composed of a results database, Inter-Office Communication (IOC), a 3D CAD database, and a reserved documents database. The results database holds the research results from all phases of liquid metal reactor design technology development under mid-term and long-term nuclear R&D. IOC is a linkage control system between sub-projects used to share and integrate the research results for KALIMER. The 3D CAD database gives a schematic overview of the KALIMER design structure. The reserved documents database was developed to manage the documents and reports produced since project accomplishment.

  1. Database Description - RPSD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available base Description General information of database Database name RPSD Alternative nam...e Rice Protein Structure Database DOI 10.18908/lsdba.nbdc00749-000 Creator Creator Name: Toshimasa Yamazaki ... Ibaraki 305-8602, Japan National Institute of Agrobiological Sciences Toshimasa Yamazaki E-mail : Databas...e classification Structure Databases - Protein structure Organism Taxonomy Name: Or...or name(s): Journal: External Links: Original website information Database maintenance site National Institu

  2. Database architecture evolution: Mammals flourished long before dinosaurs became extinct

    NARCIS (Netherlands)

    S. Manegold (Stefan); M.L. Kersten (Martin); P.A. Boncz (Peter)

    2009-01-01

    The holy grail for database architecture research is to find a solution that is Scalable & Speedy, to run on anything from small ARM processors up to globally distributed compute clusters, Stable & Secure, to service a broad user community, Small & Simple, to be comprehensible to a small

  3. NoSQL databases

    OpenAIRE

    Mrozek, Jakub

    2012-01-01

    This thesis deals with database systems referred to as NoSQL databases. In the second chapter, I explain basic terms and the theory of database systems. A short explanation is dedicated to database systems based on the relational data model and the SQL standardized query language. Chapter Three explains the concept and history of the NoSQL databases, and also presents database models, major features and the use of NoSQL databases in comparison with traditional database systems. In the fourth ...

  4. A performance study on the synchronisation of heterogeneous Grid databases using CONStanza

    CERN Document Server

    Pucciani, G; Domenici, Andrea; Stockinger, Heinz

    2010-01-01

    In Grid environments, several heterogeneous database management systems are used in various administrative domains. However, data exchange and synchronisation need to be available across different sites and different database systems. In this article we present our data consistency service CONStanza and give details on how we achieve relaxed update synchronisation between different database implementations. The integration in existing Grid environments is one of the major goals of the system. Performance tests have been executed following a factorial approach. Detailed experimental results and a statistical analysis are presented to evaluate the system components and drive future developments.

  5. Solutions in radiology services management: a literature review.

    Science.gov (United States)

    Pereira, Aline Garcia; Vergara, Lizandra Garcia Lupi; Merino, Eugenio Andrés Díaz; Wagner, Adriano

    2015-01-01

    The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. A basic, qualitative, exploratory literature review was conducted in the Scopus and SciELO databases, using the Mendeley and Adobe Illustrator CC software. In the databases, 565 papers were identified, 120 of them available as free PDFs. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides such services with interesting solutions such as benchmarking, CRM, the Lean approach, service blueprinting, and continuing education, among others. Literature review is an important tool to identify problems and their respective solutions. However, considering the small number of studies on the management of radiology services, this remains a wide-open field for deeper research.

  6. Interactome analyses identify ties of PrP and its mammalian paralogs to oligomannosidic N-glycans and endoplasmic reticulum-derived chaperones.

    Directory of Open Access Journals (Sweden)

    Joel C Watts

    2009-10-01

    Full Text Available The physiological environment which hosts the conformational conversion of the cellular prion protein (PrP(C to disease-associated isoforms has remained enigmatic. A quantitative investigation of the PrP(C interactome was conducted in a cell culture model permissive to prion replication. To facilitate recognition of relevant interactors, the study was extended to Doppel (Prnd and Shadoo (Sprn, two mammalian PrP(C paralogs. Interestingly, this work not only established a similar physiological environment for the three prion protein family members in neuroblastoma cells, but also suggested direct interactions amongst them. Furthermore, multiple interactions between PrP(C and the neural cell adhesion molecule, the laminin receptor precursor, Na/K ATPases and protein disulfide isomerases (PDI were confirmed, thereby reconciling previously separate findings. Subsequent validation experiments established that interactions of PrP(C with PDIs may extend beyond the endoplasmic reticulum and may play a hitherto unrecognized role in the accumulation of PrP(Sc. A simple hypothesis is presented which accounts for the majority of interactions observed in uninfected cells and suggests that PrP(C organizes its molecular environment on account of its ability to bind to adhesion molecules harboring immunoglobulin-like domains, which in turn recognize oligomannose-bearing membrane proteins.

  7. Interactome analyses identify ties of PrP and its mammalian paralogs to oligomannosidic N-glycans and endoplasmic reticulum-derived chaperones.

    Science.gov (United States)

    Watts, Joel C; Huo, Hairu; Bai, Yu; Ehsani, Sepehr; Jeon, Amy Hye Won; Won, Amy Hye; Shi, Tujin; Daude, Nathalie; Lau, Agnes; Young, Rebecca; Xu, Lei; Carlson, George A; Williams, David; Westaway, David; Schmitt-Ulms, Gerold

    2009-10-01

    The physiological environment which hosts the conformational conversion of the cellular prion protein (PrP(C)) to disease-associated isoforms has remained enigmatic. A quantitative investigation of the PrP(C) interactome was conducted in a cell culture model permissive to prion replication. To facilitate recognition of relevant interactors, the study was extended to Doppel (Prnd) and Shadoo (Sprn), two mammalian PrP(C) paralogs. Interestingly, this work not only established a similar physiological environment for the three prion protein family members in neuroblastoma cells, but also suggested direct interactions amongst them. Furthermore, multiple interactions between PrP(C) and the neural cell adhesion molecule, the laminin receptor precursor, Na/K ATPases and protein disulfide isomerases (PDI) were confirmed, thereby reconciling previously separate findings. Subsequent validation experiments established that interactions of PrP(C) with PDIs may extend beyond the endoplasmic reticulum and may play a hitherto unrecognized role in the accumulation of PrP(Sc). A simple hypothesis is presented which accounts for the majority of interactions observed in uninfected cells and suggests that PrP(C) organizes its molecular environment on account of its ability to bind to adhesion molecules harboring immunoglobulin-like domains, which in turn recognize oligomannose-bearing membrane proteins.

  8. Use of a German longitudinal prescription database (LRx) in pharmacoepidemiology.

    Science.gov (United States)

    Richter, Hartmut; Dombrowski, Silvia; Hamer, Hajo; Hadji, Peyman; Kostev, Karel

    2015-01-01

    Large epidemiological databases are often used to examine matters pertaining to drug utilization, health services, and drug safety. The major strength of such databases is that they include large sample sizes, which allow precise estimates to be made. The IMS® LRx database has in recent years been used as a data source for epidemiological research. The aim of this paper is to review a number of recent studies published with the aid of this database and compare these with the results of similar studies using independent data published in the literature. In spite of being somewhat limited to studies for which comparative independent results were available, it was possible to include a wide range of possible uses of the LRx database in a variety of therapeutic fields: prevalence/incidence rate determination (diabetes, epilepsy), persistence analyses (diabetes, osteoporosis), use of comedication (diabetes), drug utilization (G-CSF market) and treatment costs (diabetes, G-CSF market). In general, the results of the LRx studies were found to be clearly in line with previously published reports. In some cases, noticeable discrepancies between the LRx results and the literature data were found (e.g. prevalence in epilepsy, persistence in osteoporosis) and these were discussed and possible reasons presented. Overall, it was concluded that the IMS® LRx database forms a suitable database for pharmacoepidemiological studies.

  9. SOA - Service Registry and Repository

    Data.gov (United States)

    Social Security Administration — SOA is an approach utilizing reusable components or services in order to build applications by plugging in the appropriate components. It utilizes COTS databases as...

  10. A COMPARISON BETWEEN “BIG” WEB SERVICES AND RESTFUL WEB SERVICES FOR THE INTEGRATION OF GML-FORMATTED DATA

    Directory of Open Access Journals (Sweden)

    Adi Nugroho

    2012-01-01

    Full Text Available Java web services, namely SOAP-based services (JAX-WS, the Java API for XML Web Services) and RESTful services (JAX-RS, the Java API for RESTful Web Services), are now technologies competing with each other for the integration of data residing in different systems. Both web service technologies, of course, have advantages and disadvantages. In this paper, we compare the two Java web service technologies in relation to the development of a GIS (Geographic Information System) application that integrates GML-formatted (Geography Markup Language) data stored in an XML (eXtensible Markup Language) database system.
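
    A minimal client-side sketch (written in Python purely for brevity, although the record discusses Java) contrasting the two interaction styles for retrieving GML-formatted data is given below: a RESTful GET against a resource URL versus a SOAP envelope POSTed to a service endpoint. The endpoint URLs, the operation name and the namespaces are hypothetical and only illustrate the two styles compared in the paper.

```python
# Hypothetical client calls contrasting a RESTful request with a "big" (SOAP)
# request for GML-formatted data. Endpoints, operation and namespaces are
# invented; only the interaction styles are the point.
import requests
import xml.etree.ElementTree as ET

REST_URL = "http://example.org/gis/features/rivers"          # hypothetical JAX-RS resource
SOAP_URL = "http://example.org/gis/services/FeatureService"  # hypothetical JAX-WS endpoint

def fetch_rest():
    # RESTful style: the resource is addressed by its URL and GML comes back directly.
    resp = requests.get(REST_URL,
                        params={"bbox": "110.0,-8.0,111.0,-7.0"},
                        headers={"Accept": "application/gml+xml"},
                        timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.content)          # root of the GML document

def fetch_soap():
    # "Big" web service style: the request is wrapped in a SOAP envelope and POSTed.
    envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:gis="http://example.org/gis">
  <soap:Body>
    <gis:getFeatures><gis:layer>rivers</gis:layer></gis:getFeatures>
  </soap:Body>
</soap:Envelope>"""
    resp = requests.post(SOAP_URL,
                         data=envelope.encode("utf-8"),
                         headers={"Content-Type": "text/xml; charset=utf-8",
                                  "SOAPAction": "getFeatures"},
                         timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.content)          # SOAP envelope wrapping the GML payload

# Usage (against real endpoints): gml_root = fetch_rest() or fetch_soap()
```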

  11. The CERN accelerator measurement database: on the road to federation

    International Nuclear Information System (INIS)

    Roderick, C.; Billen, R.; Gourber-Pace, M.; Hoibian, N.; Peryt, M.

    2012-01-01

    The Measurement database, acting as short-term central persistence and front-end of the CERN accelerator Logging Service, receives billions of time-series data per day for 200000+ signals. A variety of data acquisition systems on hundreds of front-end computers publish source data that eventually end up being logged in the Measurement database. As part of a federated approach to data management, information about source devices is defined in a Configuration database, whilst the signals to be logged are defined in the Measurement database. A mapping, which is often complex and subject to change/extension, is required in order to subscribe to the source devices and write the published data to the corresponding named signals. Since 2005, this mapping was done by means of dozens of XML files, which were manually maintained by multiple persons, resulting in a configuration that was error prone. In 2010 this configuration was fully centralized in the Measurement database itself, significantly reducing the complexity and the number of actors in the process. Furthermore, logging processes immediately pick up modified configurations via JMS-based notifications sent directly from the database. This paper will describe the architecture and the benefits of the current implementation, as well as the next steps on the road to a fully federated solution. (authors)
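
    The centralised device-to-signal mapping and the notification-driven configuration pick-up described above can be illustrated with a small toy sketch. The Python snippet below is only a stand-in for the idea (simple callbacks replace the JMS notifications, and there is no database behind it); the device, property and signal names are invented.

```python
# Toy sketch: one central registry holds the (device, property) -> signal
# mapping, and logging processes are notified whenever it changes. This mimics
# the centralised configuration + notification idea, not the CERN implementation.
class MappingRegistry:
    def __init__(self):
        self._mapping = {}          # (device, property) -> logged signal name
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def set_mapping(self, device, prop, signal):
        self._mapping[(device, prop)] = signal
        for cb in self._subscribers:            # push the new configuration out
            cb(dict(self._mapping))

class LoggingProcess:
    def __init__(self, registry):
        self.mapping = {}
        registry.subscribe(self.on_config_change)

    def on_config_change(self, mapping):
        self.mapping = mapping                  # pick up the modified configuration
        print(f"logger picked up {len(mapping)} signal mapping(s)")

    def log(self, device, prop, value):
        signal = self.mapping.get((device, prop))
        if signal is not None:
            print(f"write {value} to signal {signal}")

registry = MappingRegistry()
logger = LoggingProcess(registry)
registry.set_mapping("RF.CAVITY.1", "voltage", "RF:CAV1:VOLT")   # invented names
logger.log("RF.CAVITY.1", "voltage", 5.2)
```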

  12. Distributed Database Semantic Integration of Wireless Sensor Network to Access the Environmental Monitoring System

    Directory of Open Access Journals (Sweden)

    Ubaidillah Umar

    2018-06-01

    Full Text Available A wireless sensor network (WSN works continuously to gather information from sensors that generate large volumes of data to be handled and processed by applications. Current efforts in sensor networks focus more on networking and development services for a variety of applications and less on processing and integrating data from heterogeneous sensors. There is an increased need for information to become shareable across different sensors, database platforms, and applications that are not easily implemented in traditional database systems. To solve the issue of these large amounts of data from different servers and database platforms (including sensor data, a semantic sensor web service platform is needed to enable a machine to extract meaningful information from the sensor’s raw data. This additionally helps to minimize and simplify data processing and to deduce new information from existing data. This paper implements a semantic web data platform (SWDP to manage the distribution of data sensors based on the semantic database system. SWDP uses sensors for temperature, humidity, carbon monoxide, carbon dioxide, luminosity, and noise. The system uses the Sesame semantic web database for data processing and a WSN to distribute, minimize, and simplify information processing. The sensor nodes are distributed in different places to collect sensor data. The SWDP generates context information in the form of a resource description framework. The experiment results demonstrate that the SWDP is more efficient than the traditional database system in terms of memory usage and processing time.
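
    To make the triple-store idea concrete, the sketch below represents a few sensor readings as RDF triples and retrieves them with a SPARQL query. The SWDP described above uses the Sesame semantic web database; this sketch uses the Python rdflib library instead, and the namespace and property names are invented for illustration.

```python
# Illustrative sketch (not the SWDP itself): sensor readings stored as RDF
# triples and queried with SPARQL. Namespace and property names are invented.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

SSN = Namespace("http://example.org/ssn#")   # hypothetical sensor vocabulary

g = Graph()
g.bind("ssn", SSN)

def add_reading(sensor_id, kind, value, unit):
    obs = SSN[f"obs/{sensor_id}/{kind}/{value}"]
    g.add((obs, RDF.type, SSN.Observation))
    g.add((obs, SSN.sensor, SSN[sensor_id]))
    g.add((obs, SSN.observedProperty, Literal(kind)))
    g.add((obs, SSN.value, Literal(value, datatype=XSD.double)))
    g.add((obs, SSN.unit, Literal(unit)))

add_reading("node1", "temperature", 29.4, "Cel")
add_reading("node2", "humidity", 81.0, "%")
add_reading("node1", "co2", 412.0, "ppm")

# SPARQL: every observation reported by sensor node1
query = """
PREFIX ssn: <http://example.org/ssn#>
SELECT ?prop ?val ?unit WHERE {
  ?obs ssn:sensor ssn:node1 ;
       ssn:observedProperty ?prop ;
       ssn:value ?val ;
       ssn:unit ?unit .
}
"""
for prop, val, unit in g.query(query):
    print(prop, val, unit)
```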

  13. HIPdb: a database of experimentally validated HIV inhibiting peptides.

    Science.gov (United States)

    Qureshi, Abid; Thakur, Nishant; Kumar, Manoj

    2013-01-01

    Besides antiretroviral drugs, peptides have also demonstrated potential to inhibit the Human immunodeficiency virus (HIV). For example, T20 has been discovered to effectively block the HIV entry and was approved by the FDA as a novel anti-HIV peptide (AHP). We have collated all experimental information on AHPs at a single platform. HIPdb is a manually curated database of experimentally verified HIV inhibiting peptides targeting various steps or proteins involved in the life cycle of HIV e.g. fusion, integration, reverse transcription etc. This database provides experimental information of 981 peptides. These are of varying length obtained from natural as well as synthetic sources and tested on different cell lines. Important fields included are peptide sequence, length, source, target, cell line, inhibition/IC(50), assay and reference. The database provides user friendly browse, search, sort and filter options. It also contains useful services like BLAST and 'Map' for alignment with user provided sequences. In addition, predicted structure and physicochemical properties of the peptides are also included. HIPdb database is freely available at http://crdd.osdd.net/servers/hipdb. Comprehensive information of this database will be helpful in selecting/designing effective anti-HIV peptides. Thus it may prove a useful resource to researchers for peptide based therapeutics development.

  14. Provision of ICT services and user satisfaction in public libraries in ...

    African Journals Online (AJOL)

    The study revealed that the ICT-based services provided are: bar coded circulation services; bibliographic database services; electronic-mail services; online selective dissemination of information resources; e-library services; and full text journal articles. Users are not satisfied with the available ICT-based services provided ...

  15. Sketching web services backends with SERPE

    NARCIS (Netherlands)

    Aprile, W.A.

    2009-01-01

    In the face of current strong commercial interest in services that, from an implementation point of view, consist of databases provided with a web API and/or a front end, there is a scarcity of tools that allow quickly sketching the service backend in order to deliver an interactive prototype. SERPE

  16. The data and system Nikkei Telecom "Industry/Technology Information Service"

    Science.gov (United States)

    Kurata, Shizuya; Sueyoshi, Yukio

    Nihon Keizai Shimbun started supplying the "Industry/Technology Information Service" in July 1989 as part of the Nikkei Telecom package, an online information service that uses personal computers as its terminals. Previously, Nikkei's database services mainly covered areas such as the economy, corporations and markets. The new "Industry/Technology Information Service", whose main data covers industry-by-industry (semi-macro) information, is attracting a good deal of attention as the first service to supply a science and technology related database, an area not covered before. Moreover, it is attracting technical attention because it offers gateway access to JOIS, a first-class science and technology file in Japan. This report briefly introduces the data and system of the "Industry/Technology Information Service".

  17. High-integrity databases for helicopter operations

    Science.gov (United States)

    Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg

    2009-05-01

    Helicopter Emergency Medical Service missions (HEMS) impose a high workload on pilots due to short preparation time, operations in low level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, the Universities of Darmstadt and Munich, and funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system however are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high quality databases.

  18. Construction of In-house Databases in a Corporation

    Science.gov (United States)

    Tamura, Haruki; Mezaki, Koji

    This paper describes the fundamental idea of technical information management in Mitsubishi Heavy Industries, Ltd., and the present status of the related activities. It then introduces the background and history of the development of the Mitsubishi Heavy Industries Technical Information Retrieval System (called MARON), which started its service in May 1985, together with the problems encountered and the countermeasures taken against them. The system deals with databases which cover information common to the whole company (in-house research and technical reports, holdings information for books, journals and so on), and local information held in each business division or department. Anybody from any division can access these databases through the company-wide network. An in-house interlibrary loan subsystem called Orderentry is also available, which supports the acquisition of original materials.

  19. Database Description - DMPD | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information of database. Database name: DMPD. Alternative name: Dynamic Macrophage Pathway CSML Database. DOI: 10.18908/lsdba.nbdc00558-000. Creator: Masao Naga... (University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639; Tel: +81-3-5449-5615; Fax: +83-3-5449-5442; E-mail: ...). Database...606. Taxonomy Name: Mammalia; Taxonomy ID: 40674. Database description: DMPD collects... Article title: Author name(s): Journal: External Links: Original website information: Database maintenan...

  20. The ATLAS conditions database architecture for the Muon spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    Verducci, Monica, E-mail: monica.verducci@cern.c [University of Wuerzburg Am Hubland, 97074, Wuerzburg (Germany)

    2010-04-01

    The Muon System, facing the challenging requirements of conditions data storage, has extensively started to use the conditions database project 'COOL' as the basis for all of its conditions data storage, both at CERN and throughout the worldwide collaboration, as decided by the ATLAS Collaboration. The management of the Muon COOL conditions database will be one of the most challenging applications for the Muon System, both in terms of data volumes and rates, but also in terms of the variety of data stored. The Muon conditions database is responsible for almost all of the 'non event' data and detector quality flags storage needed for debugging of the detector operations and for performing reconstruction and analysis. The COOL database allows database applications to be written independently of the underlying database technology and ensures long-term compatibility with the entire ATLAS software. COOL implements an interval-of-validity database, i.e. objects stored or referenced in COOL have an associated start and end time between which they are valid; the data are stored in folders, which are themselves arranged in a hierarchical structure of folder sets. The structure is simple and mainly optimized to store and retrieve the object(s) associated with a particular time. In this work, an overview of the entire Muon conditions database architecture is given, including the different sources of the data and the storage model used. In addition, the software interfaces used to access the conditions data are described, with emphasis on the offline reconstruction framework ATHENA and the services developed to provide the conditions data to the reconstruction.
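
    The interval-of-validity idea mentioned above can be illustrated with a toy data structure: every stored object carries a [since, until) range, and a lookup returns the object valid at a given time. The Python sketch below illustrates only the concept; it is not the COOL API, and the folder content is invented.

```python
# Toy interval-of-validity (IOV) folder: store(since, until, payload) and
# retrieve(t) returns the payload valid at time t. Concept illustration only.
import bisect

class IOVFolder:
    def __init__(self):
        self._since = []     # sorted 'since' timestamps
        self._objects = []   # (since, until, payload) kept in the same order

    def store(self, since, until, payload):
        i = bisect.bisect_left(self._since, since)
        self._since.insert(i, since)
        self._objects.insert(i, (since, until, payload))

    def retrieve(self, t):
        # Last object whose 'since' <= t, then check that t is inside [since, until).
        i = bisect.bisect_right(self._since, t) - 1
        if i >= 0:
            since, until, payload = self._objects[i]
            if since <= t < until:
                return payload
        raise KeyError(f"no conditions object valid at t={t}")

folder = IOVFolder()
folder.store(0, 1000, {"hv": 3080.0, "status": "GOOD"})     # invented payloads
folder.store(1000, 2000, {"hv": 3050.0, "status": "NOISY"})
print(folder.retrieve(1500))   # -> {'hv': 3050.0, 'status': 'NOISY'}
```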

  1. Database Dump - fRNAdb | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available fRNAdb Database Dump. Data detail: Data name: Database Dump. DOI: 10.18908/lsdba.nbdc00452-002. De... data (tab-separated text). Data file: File name: Database_Dump; File URL: ftp://ftp....biosciencedbc.jp/archive/frnadb/LATEST/Database_Dump; File size: 673 MB. Simple search URL: -. Data acquisition...s. Data analysis method: -. Number of data entries: 4 files.

  2. OPERA-a human performance database under simulated emergencies of nuclear power plants

    International Nuclear Information System (INIS)

    Park, Jinkyun; Jung, Wondea

    2007-01-01

    In complex systems such as those in the nuclear and chemical industries, the importance of human performance related problems is well recognized. Thus a lot of effort has been spent in this area, and one of the main streams for unraveling human performance related problems is the execution of HRA. Unfortunately, a lack of prerequisite information has been pointed out as the most critical problem in conducting HRA. Out of this necessity, the OPERA database, which provides operators' performance data obtained under simulated emergencies, has been developed. In this study, the typical operators' performance data available from the OPERA database are briefly explained. After that, in order to ensure the appropriateness of the OPERA database, operators' performance data from the OPERA database are compared with those of other studies and real events. As a result, it is believed that the operators' performance data of the OPERA database are fairly comparable to those of other studies and real events. Therefore, it is reasonable to expect that the OPERA database can be used as a serviceable data source for scrutinizing human performance related problems, including HRA.

  3. The KTOI Ecosystem Project Relational Database : a Report Prepared by Statistical Consulting Services for KTOI Describing the Key Components and Specifications of the KTOI Relational Database.

    Energy Technology Data Exchange (ETDEWEB)

    Shafii, Bahman [Statistical Consulting Services

    2009-09-24

    Data are the central focus of any research project. Their collection and analysis are crucial to meeting project goals, testing scientific hypotheses, and drawing relevant conclusions. Typical research projects often devote the majority of their resources to the collection, storage and analysis of data. Therefore, issues related to data quality should be of foremost concern. Data quality issues are even more important when conducting multifaceted studies involving several teams of researchers. Without the use of a standardized protocol, for example, independent data collection carried out by separate research efforts can lead to inconsistencies, confusion and errors throughout the larger project. A database management system can be utilized to help avoid all of the aforementioned problems. The centralization of data into a common relational unit, i.e. a relational database, shifts the responsibility for data quality and maintenance from multiple individuals to a single database manager, thus allowing data quality issues to be assessed and corrected in a timely manner. The database system also provides an easy mechanism for standardizing data components, such as variable names and values uniformly across all segments of a project. This is particularly an important issue when data are collected on a number of biological/physical response and explanatory variables from various locations and times. The database system can integrate all segments of a large study into one unit, while providing oversight and accessibility to the data collection process. The quality of all data collected is uniformly maintained and compatibility between research efforts ensured. While the physical database would exist in a central location, access will not be physically limited. Advanced database interfaces are created to operate over the internet utilizing a Web-based relational database, allowing project members to access their data from virtually anywhere. These interfaces provide users

  4. 77 FR 66617 - HIT Policy and Standards Committees; Workgroup Application Database

    Science.gov (United States)

    2012-11-06

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES HIT Policy and Standards Committees; Workgroup Application... of New ONC HIT FACA Workgroup Application Database. The Office of the National Coordinator (ONC) has.... Name of Committees: HIT Standards Committee and HIT Policy Committee. General Function of the...

  5. Amazon Web Services- a Case Study

    OpenAIRE

    Narendula, Rammohan

    2012-01-01

    The Amazon Web Services (AWS) is a set (more than 25) of proprietary web-based services owned by Amazon.com. All these services, ranging from simple storage to sophisticated database services, constitute the cloud platform offered by Amazon. An extensive list of customers for AWS includes Dropbox, UniLever, Airbnb, Nasdaq and Netflix. As of 2007, there are more than 300K developers actively using AWS. It is one of the pioneers that brought cloud computing closer to the masses, helping a number of start...

  6. Medical databases in studies of drug teratogenicity: methodological issues

    Directory of Open Access Journals (Sweden)

    Vera Ehrenstein

    2010-03-01

    Full Text Available Vera Ehrenstein, Henrik T Sørensen, Leiv S Bakketeig, Lars Pedersen (Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark; Norwegian Institute of Public Health, Oslo, Norway). Abstract: More than half of all pregnant women take prescription medications, raising concerns about fetal safety. Medical databases routinely collecting data from large populations are potentially valuable resources for cohort studies addressing teratogenicity of drugs. These include electronic medical records, administrative databases, population health registries, and teratogenicity information services. Medical databases allow estimation of prevalences of birth defects with enhanced precision, but systematic error remains a potentially serious problem. In this review, we first provide a brief description of types of North American and European medical databases suitable for studying teratogenicity of drugs and then discuss manifestation of systematic errors in teratogenicity studies based on such databases. Selection bias stems primarily from the inability to ascertain all reproductive outcomes. Information bias (misclassification) may be caused by paucity of recorded clinical details or incomplete documentation of medication use. Confounding, particularly confounding by indication, can rarely be ruled out. Bias that either masks teratogenicity or creates a false appearance thereof may have adverse consequences for the health of the child and the mother. Biases should be quantified and their potential impact on the study results should be assessed. Both theory and software are available for such estimation. Provided that methodological problems are understood and effectively handled, computerized medical databases are a valuable source of data for studies of teratogenicity of drugs. Keywords: databases, birth defects, epidemiologic methods, pharmacoepidemiology

  7. Multi-dimensional database design and implementation of dam safety monitoring system

    Directory of Open Access Journals (Sweden)

    Zhao Erfeng

    2008-09-01

    Full Text Available To improve the effectiveness of dam safety monitoring database systems, the development process of a multi-dimensional conceptual data model was analyzed and a logical design was achieved in multi-dimensional database mode. The optimal data model was confirmed by identifying data objects, defining relations and reviewing entities. The conversion of relations among entities into external keys, and of entities and physical attributes into tables and fields, is interpreted completely. On this basis, a multi-dimensional database supporting the management and analysis of dam safety monitoring data has been established, for which fact tables and dimension tables have been designed. Finally, based on service design and user interface design, the dam safety monitoring system has been developed with Delphi as the development tool. This development project shows that the multi-dimensional database can simplify the development process and minimize hidden dangers in the database structure design. It is superior to other dam safety monitoring system development models and can provide a new research direction for system developers.
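
    A minimal star-schema sketch of the fact-table/dimension-table layout mentioned above is shown below, using an in-memory SQLite database. The table and column names (instrument and time dimensions, a measurement fact table) are invented for illustration and do not reproduce the schema of the system described in the paper.

```python
# Star schema sketch: one fact table of monitoring measurements referencing
# dimension tables for instrument and time. Names and values are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_instrument (
    instrument_id   INTEGER PRIMARY KEY,
    instrument_type TEXT,        -- e.g. piezometer, seepage weir
    dam_section     TEXT
);
CREATE TABLE dim_time (
    time_id      INTEGER PRIMARY KEY,
    obs_date     TEXT,
    flood_season INTEGER         -- 1 during flood season, else 0
);
CREATE TABLE fact_measurement (
    instrument_id INTEGER REFERENCES dim_instrument(instrument_id),
    time_id       INTEGER REFERENCES dim_time(time_id),
    value         REAL
);
""")
conn.executemany("INSERT INTO dim_instrument VALUES (?,?,?)",
                 [(1, "piezometer", "section-A"), (2, "seepage weir", "section-B")])
conn.executemany("INSERT INTO dim_time VALUES (?,?,?)",
                 [(1, "2008-07-01", 1), (2, "2008-12-01", 0)])
conn.executemany("INSERT INTO fact_measurement VALUES (?,?,?)",
                 [(1, 1, 12.7), (1, 2, 11.9), (2, 1, 0.8)])

# Slice the cube: average reading per instrument type during the flood season.
for row in conn.execute("""
        SELECT i.instrument_type, AVG(f.value)
        FROM fact_measurement f
        JOIN dim_instrument i USING (instrument_id)
        JOIN dim_time t USING (time_id)
        WHERE t.flood_season = 1
        GROUP BY i.instrument_type"""):
    print(row)
```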

  8. 75 FR 70128 - 2011 Changes for Domestic Mailing Services

    Science.gov (United States)

    2010-11-17

    ...LOT, RDI, and Five-Digit ZIP. The Postal Service certifies software meeting its standards until the... Delivery Point Validation (DPV) service in conjunction with CASS-Certified address matching software... interface between address-matching software and the LACSLink database service. 1.21.2 Interface...

  9. Evidence That a Psychopathology Interactome Has Diagnostic Value, Predicting Clinical Needs: An Experience Sampling Study

    Science.gov (United States)

    van Os, Jim; Lataster, Tineke; Delespaul, Philippe; Wichers, Marieke; Myin-Germeys, Inez

    2014-01-01

    measures of psychopathology, similarly moderated by momentary interactions with emotions and context. Conclusion The results suggest that psychopathology, represented as an interactome at the momentary level of temporal resolution, is informative in diagnosing clinical needs, over and above traditional symptom measures. PMID:24466189

  10. DOT Online Database

    Science.gov (United States)

    Online document database providing full-text and web search of Advisory Circulars (2092 records) and related data collection and distribution policies. Document Database Website provided by MicroSearch.

  11. Construction of nuclear special information service

    International Nuclear Information System (INIS)

    Oh, Jeong Hoon; Kim, Tae Whan; Lee, Kyu Jeong; Yi, Ji Ho; Chun, Young Choon; Yoo, Jae Bok; Yoo, An Na

    2009-02-01

    The domestic INIS project has carried out various activities supporting decision-making for the INIS Secretariat, exchanging statistical information between INIS and the country, and providing technical assistance to domestic end-users of the INIS database. As part of the construction of the INIS database from input sent by member states, the data published in the country have been gathered, collected, and input into the INIS database according to the INIS reference series. Using the INIS output data, the project has provided domestic users with INIS CD-ROM database searching, the INIS online database, the INIS SDI service, and non-conventional literature delivery services. The INIS2 DB host site in Korea has served users both domestically and in other INIS member countries. A data update process has been performed in order to maintain the same data as the Vienna centre, and publicity and information activities have been carried out in many ways. To build the digital information infrastructure, we changed the web server of NUCLIS21, upgraded the information management system, and constructed a full-text database of research and technical reports. We also collected web databases, digital journals and so on, and worked on the operation of the knowledge management system and the management of research documents. We have input over 3,000 records per year since 2002, and the input this year has reached 3,738 records. In order to cover comprehensively the domestic publications related to nuclear energy, and to raise the standing of the national centre, it is necessary to continue these efforts and to support the budgets. We expect the INIS2 DB host site will contribute to improved productivity in nuclear energy research as well as to the diffusion of information about nuclear energy. We provided users with stable services by changing the web server of NUCLIS21, and contributed to the improvement of librarians' productivity by upgrading the information management system and providing users with services such as web databases and digital journals.

  12. Web interfaces for bibliographic databases: the CERN scientific information system

    CERN Document Server

    Brugnolo, F

    1997-01-01

    Analysis of how to develop and organise a scientific information service based on the World Wide Web, the specific characteristics of word-oriented databases, and the problems linked to information retrieval on the WWW. The analysis is carried out from both a theoretical and a practical point of view. The case of the CERN scientific information service is taken into account. We study the reorganisation of the whole architecture and the development of the Web user interface. We conclude with a description of the Personal Virtual Library service, developed for the CERN Library Catalogue.

  13. Databases

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.

    Information on bibliographic as well as numeric/textual databases relevant to coastal geomorphology has been included in a tabular form. Databases cover a broad spectrum of related subjects like coastal environment and population aspects, coastline...

  14. Sequence- and interactome-based prediction of viral protein hotspots targeting host proteins: a case study for HIV Nef.

    Directory of Open Access Journals (Sweden)

    Mahdi Sarmady

    Full Text Available Virus proteins alter protein pathways of the host toward the synthesis of viral particles by breaking and making edges via binding to host proteins. In this study, we developed a computational approach to predict viral sequence hotspots for binding to host proteins based on sequences of viral and host proteins and literature-curated virus-host protein interactome data. We use a motif discovery algorithm repeatedly on collections of sequences of viral proteins and immediate binding partners of their host targets and choose only those motifs that are conserved on viral sequences and highly statistically enriched among binding partners of virus protein targeted host proteins. Our results match experimental data on binding sites of Nef to host proteins such as MAPK1, VAV1, LCK, HCK, HLA-A, CD4, FYN, and GNB2L1 with high statistical significance but is a poor predictor of Nef binding sites on highly flexible, hoop-like regions. Predicted hotspots recapture CD8 cell epitopes of HIV Nef highlighting their importance in modulating virus-host interactions. Host proteins potentially targeted or outcompeted by Nef appear crowding the T cell receptor, natural killer cell mediated cytotoxicity, and neurotrophin signaling pathways. Scanning of HIV Nef motifs on multiple alignments of hepatitis C protein NS5A produces results consistent with literature, indicating the potential value of the hotspot discovery in advancing our understanding of virus-host crosstalk.
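
    The enrichment step described above (testing whether a conserved viral motif is over-represented among the immediate binding partners of virus-targeted host proteins) can be sketched with a one-sided Fisher exact test. The motif and the sequence sets below are made up for illustration; this is not the authors' pipeline.

```python
# Sketch of motif-enrichment testing: count motif hits among binding partners of
# targeted host proteins versus background proteins and apply a one-sided Fisher
# exact test. The PxxP-like motif and all sequences are invented.
import re
from scipy.stats import fisher_exact

motif = re.compile(r"P..P")   # toy motif

partner_seqs    = ["MAPPLPKED", "QQPSAPTR", "KLLQEDSA", "PRNPTEPC"]
background_seqs = ["MKTAYIAK", "GDSSEEAL", "QPRLPNQP",
                   "TTSWLKNE", "AAAGGGSS", "MLVNRRKQ"]

def count_hits(seqs):
    hits = sum(1 for s in seqs if motif.search(s))
    return hits, len(seqs) - hits

hit_p, miss_p = count_hits(partner_seqs)
hit_b, miss_b = count_hits(background_seqs)

odds, p = fisher_exact([[hit_p, miss_p], [hit_b, miss_b]], alternative="greater")
print(f"hits in partners: {hit_p}/{hit_p + miss_p}, "
      f"hits in background: {hit_b}/{hit_b + miss_b}, one-sided p = {p:.3f}")
```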

  15. The Barcelona Hospital Clínic therapeutic apheresis database.

    Science.gov (United States)

    Cid, Joan; Carbassé, Gloria; Cid-Caballero, Marc; López-Púa, Yolanda; Alba, Cristina; Perea, Dolores; Lozano, Miguel

    2017-09-22

    A therapeutic apheresis (TA) database helps to increase knowledge about indications and type of apheresis procedures that are performed in clinical practice. The objective of the present report was to describe the type and number of TA procedures that were performed at our institution in a 10-year period, from 2007 to 2016. The TA electronic database was created by transferring patient data from electronic medical records and consultation forms into a Microsoft Access database developed exclusively for this purpose. Since 2007, prospective data from every TA procedure were entered in the database. A total of 5940 TA procedures were performed: 3762 (63.3%) plasma exchange (PE) procedures, 1096 (18.5%) hematopoietic progenitor cell (HPC) collections, and 1082 (18.2%) TA procedures other than PEs and HPC collections. The overall trend for the time-period was progressive increase in total number of TA procedures performed each year (from 483 TA procedures in 2007 to 822 in 2016). The tracking trend of each procedure during the 10-year period was different: the number of PE and other type of TA procedures increased 22% and 2818%, respectively, and the number of HPC collections decreased 28%. The TA database helped us to increase our knowledge about various indications and type of TA procedures that were performed in our current practice. We also believe that this database could serve as a model that other institutions can use to track service metrics. © 2017 Wiley Periodicals, Inc.

  16. Database Description - eSOL | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available Database Description - General information of database. Database name: eSOL. Alternative nam... Creator affiliation: The Research and Development of Biological Databases Project, National Institute of Genet...nology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, Kanagawa 226-8501, Japan (Tel.: +81-45-924-5785). Database classification: Protein sequence databases - Protein properties. Organism Taxonomy Name: Escherichia coli; Taxonomy ID: 562. Reference: ...i U S A. 2009 Mar 17;106(11):4201-6. External Links: Original website information: Database maintenance site...

  17. "Mr. Database" : Jim Gray and the History of Database Technologies.

    Science.gov (United States)

    Hanwahr, Nils C

    2017-12-01

    Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.

  18. Mathematics for Databases

    NARCIS (Netherlands)

    ir. Sander van Laar

    2007-01-01

    A formal description of a database consists of the description of the relations (tables) of the database together with the constraints that must hold on the database. Furthermore the contents of a database can be retrieved using queries. These constraints and queries for databases can very well be

  19. Online Reference Service--How to Begin: A Selected Bibliography.

    Science.gov (United States)

    Shroder, Emelie J., Ed.

    1982-01-01

    Materials in this bibliography were selected and recommended by members of the Use of Machine-Assisted Reference in Public Libraries Committee, Reference and Adult Services Division, American Library Association. Topics include: financial aspects, equipment and communications considerations, comparing databases and database systems, advertising…

  20. The ChArMEx database

    Science.gov (United States)

    Ferré, Helene; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2014-05-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChARMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, distribution system and services, such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between OMP and ICARE data centres and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Locals Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. At present, the ChArMEx database contains about 75 datasets, including 50 in situ datasets (2012 and 2013 campaigns, Ersa background monitoring station), 25 model outputs (dust model intercomparison, MEDCORDEX scenarios), and a high resolution emission inventory over the Mediterranean. Many in situ datasets have been inserted in a relational database, in order to enable more accurate data selection and download of different datasets in a shared format. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - A data catalogue that complies with metadata international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). - Metadata forms to document

  1. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    Directory of Open Access Journals (Sweden)

    Surasak Saokaew

    Full Text Available Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Databases' characteristics, e.g. name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases can potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Our findings have shown that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed.

  2. Healthcare Databases in Thailand and Japan: Potential Sources for Health Technology Assessment Research.

    Science.gov (United States)

    Saokaew, Surasak; Sugimoto, Takashi; Kamae, Isao; Pratoomsoot, Chayanin; Chaiyakunapruk, Nathorn

    2015-01-01

    Health technology assessment (HTA) has been continuously used for value-based healthcare decisions over the last decade. Healthcare databases represent an important source of information for HTA, which has seen a surge in use in Western countries. Although HTA agencies have been established in the Asia-Pacific region, application and understanding of healthcare databases for HTA is rather limited. Thus, we reviewed existing databases to assess their potential for HTA in Thailand, where HTA has been used officially, and Japan, where HTA is going to be officially introduced. Existing healthcare databases in Thailand and Japan were compiled and reviewed. Databases' characteristics, e.g. name of database, host, scope/objective, time/sample size, design, data collection method, population/sample, and variables, were described. Databases were assessed for their potential HTA use in terms of safety/efficacy/effectiveness, social/ethical, organization/professional, economic, and epidemiological domains. The request route for each database was also provided. Forty databases (20 from Thailand and 20 from Japan) were included. These comprised national censuses, surveys, registries, administrative data, and claims databases. All databases can potentially be used for epidemiological studies. In addition, data on mortality, morbidity, disability, adverse events, quality of life, service/technology utilization, length of stay, and economics were also found in some databases. However, access to patient-level data was limited since information about the databases was not available from public sources. Our findings have shown that existing databases provide valuable information for HTA research, with limitations on accessibility. Mutual dialogue on healthcare database development and usage for HTA within the Asia-Pacific region is needed.

  3. The ChArMEx database

    Science.gov (United States)

    Ferré, Hélène; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Cloché, Sophie; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mière, Arnaud; Ramage, Karim; Vermeulen, Anne; Boulanger, Damien

    2015-04-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChArMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, distribution system and services, such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between the ICARE, IPSL and OMP data centers and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Local Scales (MISTRALS) program data portal. ChArMEx data, either produced or used by the project, are documented and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The website offers the usual but user-friendly functionalities: data catalog, user registration procedure, search tool to select and access data... The metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). A Digital Object Identifier (DOI) assignment procedure allows the datasets to be registered automatically, in order to make them easier to access, cite, reuse and verify. At present, the ChArMEx database contains about 120 datasets, including more than 80 in situ datasets (2012, 2013 and 2014 summer campaigns, background monitoring station of Ersa...), 25 model output sets (dust model intercomparison, MEDCORDEX scenarios...), a high resolution emission inventory over the Mediterranean... Many in situ datasets

  4. The ChArMEx database

    Science.gov (United States)

    Ferré, Hélène; Descloitres, Jacques; Fleury, Laurence; Boichard, Jean-Luc; Brissebrat, Guillaume; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2013-04-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChARMEx scientists produce and need to access a wide diversity of data. In this context, the objective of the database task is to organize data management, distribution system and services such as facilitating the exchange of information and stimulating the collaboration between researchers within the ChArMEx community, and beyond. The database relies on a strong collaboration between OMP and ICARE data centres and falls within the scope of the Mediterranean Integrated Studies at Regional And Locals Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - Forms to document observations or products that will be provided to the database in compliance with metadata international standards (ISO 19115-19139; INSPIRE; Global Change Master Directory Thesaurus). - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - Sorted lists of the datasets by thematic keywords, by measured parameters, by instruments or by platform type. - A shopping-cart web interface to order in situ data files. At present datasets from the background monitoring station of Ersa, Cape Corsica and from the 2012 ChArMEx pre-campaign are available. - A user-friendly access to satellite products

  5. Creating and analyzing pathway and protein interaction compendia for modelling signal transduction networks

    Directory of Open Access Journals (Sweden)

    Kirouac Daniel C

    2012-05-01

    Full Text Available Abstract Background Understanding the information-processing capabilities of signal transduction networks, how those networks are disrupted in disease, and rationally designing therapies to manipulate diseased states require systematic and accurate reconstruction of network topology. Data on networks central to human physiology, such as the inflammatory signalling networks analyzed here, are found in a multiplicity of on-line resources of pathway and interactome databases (Cancer CellMap, GeneGo, KEGG, NCI-Pathway Interactome Database (NCI-PID, PANTHER, Reactome, I2D, and STRING. We sought to determine whether these databases contain overlapping information and whether they can be used to construct high reliability prior knowledge networks for subsequent modeling of experimental data. Results We have assembled an ensemble network from multiple on-line sources representing a significant portion of all machine-readable and reconcilable human knowledge on proteins and protein interactions involved in inflammation. This ensemble network has many features expected of complex signalling networks assembled from high-throughput data: a power law distribution of both node degree and edge annotations, and topological features of a “bow tie” architecture in which diverse pathways converge on a highly conserved set of enzymatic cascades focused around PI3K/AKT, MAPK/ERK, JAK/STAT, NFκB, and apoptotic signaling. Individual pathways exhibit “fuzzy” modularity that is statistically significant but still involving a majority of “cross-talk” interactions. However, we find that the most widely used pathway databases are highly inconsistent with respect to the actual constituents and interactions in this network. Using a set of growth factor signalling networks as examples (epidermal growth factor, transforming growth factor-beta, tumor necrosis factor, and wingless, we find a multiplicity of network topologies in which receptors couple to downstream
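
    The ensemble-network construction described above (merging edge lists from several pathway and interactome sources while keeping per-edge source annotations, then inspecting the node-degree distribution) can be sketched with the networkx library as follows. The source names and edges are placeholders, not the content of the actual databases.

```python
# Sketch: merge interaction edge lists from several sources into one ensemble
# network, record which sources support each edge, and look at node degrees.
import networkx as nx
from collections import Counter

sources = {   # placeholder edge lists
    "sourceA": [("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "KRAS")],
    "sourceB": [("EGFR", "GRB2"), ("KRAS", "RAF1"), ("RAF1", "MAP2K1")],
    "sourceC": [("MAP2K1", "MAPK1"), ("MAPK1", "ELK1"), ("EGFR", "SHC1")],
}

G = nx.Graph()
for name, edges in sources.items():
    for u, v in edges:
        if G.has_edge(u, v):
            G[u][v]["sources"].add(name)       # edge confirmed by another source
        else:
            G.add_edge(u, v, sources={name})

multi = sum(1 for _, _, d in G.edges(data=True) if len(d["sources"]) > 1)
print(f"{G.number_of_nodes()} nodes, {G.number_of_edges()} edges, "
      f"{multi} edge(s) supported by more than one source")

# Degree histogram (a real compendium would be checked for a power-law tail).
print(Counter(dict(G.degree()).values()))
```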

  6. Nuclear data services provided by the IAEA

    International Nuclear Information System (INIS)

    Schwerer, O.; Oblozinsky, P.

    2001-01-01

    This paper summarizes the various nuclear data types, libraries and services available free of charge from the IAEA Nuclear Data Section. The databases are collected, maintained and made available within the framework of an international network of nuclear data centres. Particular emphasis is given to online services via the Internet. The URL address of the IAEA nuclear data services is http://www-nds.iaea.or.at. (author)

  7. Database development and management

    CERN Document Server

    Chao, Lee

    2006-01-01

    Introduction to Database Systems; Functions of a Database; Database Management System; Database Components; Database Development Process; Conceptual Design and Data Modeling; Introduction to Database Design Process; Understanding Business Process; Entity-Relationship Data Model; Representing Business Process with Entity-Relationship Model; Table Structure and Normalization; Introduction to Tables; Table Normalization; Transforming Data Models to Relational Databases; DBMS Selection; Transforming Data Models to Relational Databases; Enforcing Constraints; Creating Database for Business Process; Physical Design and Database

  8. A Preliminary Study on the Multiple Mapping Structure of Classification Systems for Heterogeneous Databases

    OpenAIRE

    Seok-Hyoung Lee; Hwan-Min Kim; Ho-Seop Choe

    2012-01-01

    While science and technology information service portals and heterogeneous databases produced in Korea and other countries are being integrated, methods of connecting the unique classification systems applied to each database have been studied. The results of technologists' research, such as journal articles, patent specifications, and research reports, are organically related to each other. In this case, if the most basic and meaningful classification systems are not connected, it is difficult to ach...

  9. Control of Database Applications at the Defense Finance and Accounting Service Indianapolis Center

    National Research Council Canada - National Science Library

    1997-01-01

    The Defense Finance and Accounting Service Financial Systems Organization, under the control of the Deputy Director for Information Management, Defense Finance and Accounting Service, is responsible...

  10. Solving Relational Database Problems with ORDBMS in an Advanced Database Course

    Science.gov (United States)

    Wang, Ming

    2011-01-01

    This paper introduces how to use the object-relational database management system (ORDBMS) to solve relational database (RDB) problems in an advanced database course. The purpose of the paper is to provide a guideline for database instructors who desire to incorporate the ORDB technology in their traditional database courses. The paper presents…

  11. Generalized Database Management System Support for Numeric Database Environments.

    Science.gov (United States)

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  12. IT Services Availability during the CERN Annual Closure 2005

    CERN Multimedia

    2005-01-01

    Mail, CERN Windows (NICE), Web services, LXPLUS, LXBATCH, AFS, the automated tape devices, Castor, CDS Search, Submit and Agenda, InDiCo, EDMS (in collaboration with TS Department), Campus Network, Remedy, and Security services will be available during the CERN annual closure. All production databases will remain available (but not development databases such as devdb and devdb10). Problems occurring on scheduled services should be addressed within about half a day, except around Christmas Eve and Christmas Day (24 and 25 December) and New Year's Eve and New Year's Day (31 December and 1st January). As far as administrative computing services are concerned, ERT (Electronic Recruitment Tool) and PPT (Project Progress Tracking) for EGEE will be the only operational services; all other AIS applications such as EDH, CET, HRT, etc. will be unavailable from Wednesday 21/12 12:00 to Thursday 5/01/2005 8:00. The CERN LCG Production Services will be run on a 'best effort' basis. All other services (such as the ...

  13. Complex Systems Analysis of Cell Cycling Models in Carcinogenesis:II. Cell Genome and Interactome, Neoplastic Non-random Transformation Models in Topoi with Lukasiewicz-Logic and MV Algebras

    CERN Document Server

    Baianu, I C

    2004-01-01

    arXiv:q-bio.OT/0406045 (v1 submitted 24 Jun 2004; v2 revised 2 Jul 2004). Author: I.C. Baianu. Comments: 23 pages, 1 figure. Report no: CC04. Subject class: Other. Carcinogenesis is a complex process that involves dynamically inter-connected modular sub-networks that evolve under the influence of micro-environmentally induced perturbations, in non-random, pseudo-Markov chain processes. An appropriate n-stage model of carcinogenesis involves therefore n-valued Logic treatments of nonlinear dynamic transformations of complex functional genomes and cell interactomes. Lukasiewicz Algebraic Logic models of genetic networks and signaling pathways in cells are formulated in terms of nonlinear dynamic systems with n-state components that allow for the generalization of previous, Boolean or "fuzzy", logic models of genetic activities in vivo....

  14. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak-load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.
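
    A minimal sketch of that pilot-query idea in Python; the probe query, latency threshold, and backoff policy are illustrative assumptions, not the actual ATLAS utility library:

      import random
      import time

      PILOT_LATENCY_LIMIT_S = 2.0   # assumed threshold for an "unloaded" server
      MAX_RETRIES = 5

      def run_pilot_query(connection):
          """Issue a trivial probe query and return its wall-clock latency in seconds."""
          start = time.monotonic()
          connection.execute("SELECT 1")   # cheap probe, not the real payload
          return time.monotonic() - start

      def query_with_peak_load_avoidance(connection, real_sql):
          """Send the real query only once the pilot query indicates low server load."""
          for attempt in range(MAX_RETRIES):
              if run_pilot_query(connection) < PILOT_LATENCY_LIMIT_S:
                  return connection.execute(real_sql)
              # Exponential backoff with jitter keeps many Grid jobs from retrying in lockstep.
              time.sleep((2 ** attempt) + random.random())
          raise RuntimeError("database still overloaded after retries")

    The point is simply that the expensive conditions query is deferred until a cheap probe succeeds quickly, which is how peak loads are shaved off the server.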

  15. Solutions in radiology services management: a literature review

    Directory of Open Access Journals (Sweden)

    Aline Garcia Pereira

    2015-10-01

    Full Text Available Abstract Objective: The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. Materials and Methods: Basic, qualitative, exploratory literature review in the Scopus and SciELO databases, utilizing the Mendeley and Adobe Illustrator CC software. Results: In the databases, 565 papers – 120 of them available as free PDFs – were identified. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides such services with interesting solutions such as benchmarking, CRM, the Lean approach, service blueprinting, and continuing education, among others. Conclusion: Literature review is an important tool to identify problems and their respective solutions. However, considering the small number of studies approaching the management of radiology services, this remains a promising field for the development of deeper studies.

  16. Solutions in radiology services management: a literature review*

    Science.gov (United States)

    Pereira, Aline Garcia; Vergara, Lizandra Garcia Lupi; Merino, Eugenio Andrés Díaz; Wagner, Adriano

    2015-01-01

    Objective The present study was aimed at reviewing the literature to identify solutions for problems observed in radiology services. Materials and Methods Basic, qualitative, exploratory literature review in the Scopus and SciELO databases, utilizing the Mendeley and Adobe Illustrator CC software. Results In the databases, 565 papers – 120 of them available as free PDFs – were identified. Problems observed in the radiology sector are related to procedure scheduling, humanization, lack of training, poor knowledge and use of management techniques, and interaction with users. Design management provides such services with interesting solutions such as benchmarking, CRM, the Lean approach, service blueprinting, and continuing education, among others. Conclusion Literature review is an important tool to identify problems and their respective solutions. However, considering the small number of studies approaching the management of radiology services, this remains a promising field for the development of deeper studies. PMID:26543281

  17. License - Trypanosomes Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available The data in this database are licensed under the Creative Commons Attribution-Share Alike 2.1 Japan license. If you use data from this database, please be sure to attribute it as follows: Trypanosomes Database ...

  18. WEB-BASED DATABASE ON RENEWAL TECHNOLOGIES ...

    Science.gov (United States)

    As U.S. utilities continue to shore up their aging infrastructure, renewal needs now represent over 43% of annual expenditures compared to new construction for drinking water distribution and wastewater collection systems (Underground Construction [UC], 2016). An increased understanding of renewal options will ultimately assist drinking water utilities in reducing water loss and help wastewater utilities to address infiltration and inflow issues in a cost-effective manner. It will also help to extend the service lives of both drinking water and wastewater mains. This research effort involved collecting case studies on the use of various trenchless pipeline renewal methods and providing the information in an online searchable database. The overall objective was to further support technology transfer and information sharing regarding emerging and innovative renewal technologies for water and wastewater mains. The result of this research is a Web-based, searchable database that utility personnel can use to obtain technology performance and cost data, as well as case study references. The renewal case studies include: technologies used; the conditions under which the technology was implemented; costs; lessons learned; and utility contact information. The online database also features a data mining tool for automated review of the technologies selected and cost data. Based on a review of the case study results and industry data, several findings are presented on tren

  19. Development of Pre-Service and In-Service Information Management System (iSIMS)

    International Nuclear Information System (INIS)

    Yoo, H. J.; Choi, S. N.; Kim, H. N.; Kim, Y. H.; Yang, S. H.

    2004-01-01

    The iSIMS is a web-based integrated information system supporting the Pre-Service and In-Service Inspection (PSI/ISI) processes for the nuclear power plants of KHNP (Korea Hydro and Nuclear Power Co. Ltd.). The system provides full-spectrum coverage of the inspection processes, from the planning stage to the final examination report, in accordance with applicable codes, standards, and regulatory requirements. The major functions of the system include inspection planning, examination, reporting, project control and status reporting, and resource management, as well as object search and navigation. The system also provides a two-dimensional or three-dimensional visualization interface to identify the location and geometry of components and weld areas subject to examination, in collaboration with database applications. The iSIMS is implemented with commercial software packages, such as a database management system and 2-D and 3-D visualization tools, which provide open, updated and verified foundations. This paper describes the key functions and the technologies for the implementation of the iSIMS.

  20. Development of Pre-Service and In-Service Information Management System (iSIMS)

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, H. J.; Choi, S. N.; Kim, H. N.; Kim, Y. H.; Yang, S. H. [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2004-08-15

    The iSIMS is a web-based integrated information system supporting the Pre-Service and In-Service Inspection (PSI/ISI) processes for the nuclear power plants of KHNP (Korea Hydro and Nuclear Power Co. Ltd.). The system provides full-spectrum coverage of the inspection processes, from the planning stage to the final examination report, in accordance with applicable codes, standards, and regulatory requirements. The major functions of the system include inspection planning, examination, reporting, project control and status reporting, and resource management, as well as object search and navigation. The system also provides a two-dimensional or three-dimensional visualization interface to identify the location and geometry of components and weld areas subject to examination, in collaboration with database applications. The iSIMS is implemented with commercial software packages, such as a database management system and 2-D and 3-D visualization tools, which provide open, updated and verified foundations. This paper describes the key functions and the technologies for the implementation of the iSIMS.

  1. Reinstitutionalization by stealth: The Forensic Mental Health Service ...

    African Journals Online (AJOL)

    In 1997 the Forensic Mental Health Service (FMHS) in the Western Cape had a total of 205 state patients in its database, most of whom were inpatients at Valkenberg and Lenteguer Hospitals. Currently the database has just over 800 state patients. The number of state patients in the Western Cape has more than...

  2. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches have been proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
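
    A minimal sketch of the two options compared above, in Python; the provider endpoints, response format, and cache schema are illustrative assumptions, not the actual GISIN services:

      import json
      import sqlite3
      import urllib.parse
      import urllib.request

      SOURCE_URLS = [
          "https://example.org/invasives/api/search",   # placeholder provider endpoints
          "https://example.net/species/api/search",
      ]

      def federated_search(term):
          """Query every remote provider at request time; latency is set by the slowest provider."""
          results = []
          for url in SOURCE_URLS:
              query = f"{url}?q={urllib.parse.quote_plus(term)}"
              with urllib.request.urlopen(query, timeout=10) as resp:
                  results.extend(json.load(resp))
          return results

      def cached_search(term, db_path="species_cache.db"):
          """Query a locally harvested cache; fast and predictable, but only as fresh as the last harvest."""
          con = sqlite3.connect(db_path)
          rows = con.execute(
              "SELECT scientific_name, source FROM occurrences WHERE scientific_name LIKE ?",
              (f"%{term}%",),
          ).fetchall()
          con.close()
          return rows

    The trade-off shown here is the one the record describes: the federated version is always current but only as fast as its slowest provider, while the cached version answers from a single local index and requires periodic harvesting.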

  3. Federal databases

    International Nuclear Information System (INIS)

    Welch, M.J.; Welles, B.W.

    1988-01-01

    Accident statistics on all modes of transportation are available as risk assessment analytical tools through several federal agencies. This paper reports on the examination of the accident databases by personal contact with the federal staff responsible for administration of the database programs. This activity, sponsored by the Department of Energy through Sandia National Laboratories, is an overview of the national accident data on highway, rail, air, and marine shipping. For each mode, the definition or reporting requirements of an accident are determined and the method of entering the accident data into the database is established. Availability of the database to others, ease of access, costs, and who to contact were prime questions to each of the database program managers. Additionally, how the agency uses the accident data was of major interest

  4. The YH database: the first Asian diploid genome database

    DEFF Research Database (Denmark)

    Li, Guoqing; Ma, Lijia; Song, Chao

    2009-01-01

    genome consensus. The YH database is currently one of only three personal genome databases, organizing the original data and analysis results in a user-friendly interface; it is an endeavor toward achieving the fundamental goals of establishing personalized medicine. The database is available at http://yh.genomics.org.cn....

  5. Database Description - tRNADB-CE | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available General information of the database: database name, tRNADB-CE; license, CC BY-SA. Background and funding: MEXT Integrated Database Project. References: article "tRNAD..." 2009 Jan;37(Database issue):D163-8; article "tRNADB-CE 2011: tRNA gene database curat..."

  6. Development of a national, dynamic reservoir-sedimentation database

    Science.gov (United States)

    Gray, J.R.; Bernard, J.M.; Stewart, D.W.; McFaul, E.J.; Laurent, K.W.; Schwarz, G.E.; Stinson, J.T.; Jonas, M.M.; Randle, T.J.; Webb, J.W.

    2010-01-01

    The importance of dependable, long-term water supplies, coupled with the need to quantify rates of capacity loss of the Nation's reservoirs due to sediment deposition, was the most compelling reason for developing the REServoir-SEDimentation survey information (RESSED) database and website. Created under the auspices of the Advisory Committee on Water Information's Subcommittee on Sedimentation by the U.S. Geological Survey and the Natural Resources Conservation Service, the RESSED database is the most comprehensive compilation of data from reservoir bathymetric and dry-basin surveys in the United States. As of March 2010, the database, which contains data compiled on the 1950s-vintage Soil Conservation Service Form SCS-34 data sheets, contained results from 6,616 surveys on 1,823 reservoirs in the United States and two surveys on one reservoir in Puerto Rico. The data span the period 1755–1997, with 95 percent of the surveys performed from 1930–1990. The reservoir surface areas range from sub-hectare-scale farm ponds to the 658-km2 Lake Powell. The data in the RESSED database can be useful for a number of purposes, including calculating changes in reservoir-storage characteristics, quantifying sediment budgets, and estimating erosion rates in a reservoir's watershed. The March 2010 version of the RESSED database has a number of deficiencies, including a cryptic and out-of-date database architecture; some geospatial inaccuracies (although most have been corrected); other data errors; an inability to store all data in a readily retrievable manner; and an inability to store all data types that currently exist. Perhaps most importantly, the March 2010 version of the RESSED database provides no publicly available means to submit new data and corrections to existing data. To address these and other deficiencies, the Subcommittee on Sedimentation, through the U.S. Geological Survey and the U.S. Army Corps of Engineers, began a collaborative project in

  7. Calculation of Investments for the Distribution of GPON Technology in the village of Bishtazhin through database

    Directory of Open Access Journals (Sweden)

    MSc. Jusuf Qarkaxhija

    2013-12-01

    Full Text Available According to daily reports, the income from internet services is getting lower each year. Landline phone services are running at a loss, whereas mobile phone services have become commonplace, and the only bright spot keeping cable operators (ISPs) in positive balance is the income from broadband services (fast internet, IPTV). Broadband technology is a term that covers multiple methods of distributing information over the Internet at high speed. Some of the broadband technologies are optic fiber, coaxial cable, DSL, wireless, mobile broadband, and satellite connections. The ultimate goal of any broadband service provider is to be able to provide voice, data and video through a single network, the so-called triple-play service. Internet distribution remains an important issue in Kosovo, particularly in rural zones. Considering the immense development of the technologies and the different alternatives we may face, the goal of this paper is to emphasize the necessity of forecasting such an investment and to share experience in this respect. Because the investment involves many factors related to population, geography and several technologies, and because these factors change continuously, the best approach is to store all the data in a database and to use that database to derive the various results. This database allows us to replace the previous manual calculations with an automated calculation procedure, providing all the tools needed to take the right decision about an Internet investment while considering all aspects of that investment.
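
    A minimal sketch of such an automated calculation, in Python with SQLite; the table layout, cost items, and figures are illustrative assumptions, not data from the paper:

      import sqlite3

      # Keep the input factors (households, trench length, per-unit costs) in tables
      # and derive the investment estimate with a query instead of manual arithmetic.
      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE villages (name TEXT, households INTEGER, trench_km REAL);
      CREATE TABLE unit_costs (item TEXT, cost_eur REAL);
      INSERT INTO villages VALUES ('Bishtazhin', 600, 12.5);
      INSERT INTO unit_costs VALUES ('ONT_per_household', 40.0),
                                    ('fiber_per_km', 1500.0),
                                    ('OLT_port_per_64_households', 900.0);
      """)

      row = con.execute("""
      SELECT v.name,
             v.households * (SELECT cost_eur FROM unit_costs WHERE item='ONT_per_household')
           + v.trench_km  * (SELECT cost_eur FROM unit_costs WHERE item='fiber_per_km')
           + ((v.households + 63) / 64) *
             (SELECT cost_eur FROM unit_costs WHERE item='OLT_port_per_64_households')
             AS estimated_investment_eur
      FROM villages v
      """).fetchone()
      print(row)   # e.g. ('Bishtazhin', 51750.0) with the toy figures above

    Once the factors live in tables like these, updating a household count or a per-kilometre cost and re-running the query replaces a full round of manual recalculation.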

  8. Application of computational systems biology to explore environmental toxicity hazards

    DEFF Research Database (Denmark)

    Audouze, Karine Marie Laure; Grandjean, Philippe

    2011-01-01

    Background: Computer-based modeling is part of a new approach to predictive toxicology. Objectives: We investigated the usefulness of an integrated computational systems biology approach in a case study involving the isomers and metabolites of the pesticide dichlorodiphenyltrichloroethane (DDT) ... to ascertain their possible links to relevant adverse effects. Methods: We extracted chemical-protein association networks for each DDT isomer and its metabolites using ChemProt, a disease chemical biology database that includes both binding and gene expression data, and we explored protein-protein interactions ... using a human interactome network. To identify associated dysfunctions and diseases, we integrated protein-disease annotations into the protein complexes using the Online Mendelian Inheritance in Man database and the Comparative Toxicogenomics Database. Results: We found 175 human proteins linked to p,p´-DDT...
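
    A conceptual sketch of that workflow in Python with networkx; the identifiers, interactions, and disease annotations below are toy placeholders, not data from ChemProt, the human interactome network, OMIM, or CTD:

      import networkx as nx

      # Toy inputs: chemical -> target proteins, protein-protein edges, protein -> diseases.
      chem_protein = {"p,p'-DDT": ["ESR1", "AR"]}
      ppi = [("ESR1", "SP1"), ("AR", "HSP90AA1"), ("SP1", "TP53")]
      protein_disease = {"ESR1": ["breast neoplasms"], "TP53": ["Li-Fraumeni syndrome"]}

      # Build one graph holding both the chemical-protein and protein-protein edges.
      G = nx.Graph()
      for chem, targets in chem_protein.items():
          for t in targets:
              G.add_edge(chem, t, kind="chemical-protein")
      G.add_edges_from(ppi, kind="protein-protein")

      # Proteins within two interaction steps of the chemical, and their annotated diseases.
      reachable = nx.single_source_shortest_path_length(G, "p,p'-DDT", cutoff=2)
      linked_diseases = {p: protein_disease[p] for p in reachable if p in protein_disease}
      print(linked_diseases)

    The same pattern scales directly: replace the toy dictionaries with the chemical-protein associations, interactome edges, and disease annotations drawn from the databases named in the record.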

  9. Database Administrator

    Science.gov (United States)

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  10. Database tools for enhanced analysis of TMX-U data. Revision 1

    International Nuclear Information System (INIS)

    Stewart, M.E.; Carter, M.R.; Casper, T.A.; Meyer, W.H.; Perkins, D.E.; Whitney, D.M.

    1986-01-01

    A commercial database software package has been used to create several databases and tools that assist and enhance the ability of experimental physicists to analyze data from the Tandem Mirror Experiment-Upgrade (TMX-U) experiment. This software runs on a DEC-20 computer in M-Division's User Service Center at Lawrence Livermore National Laboratory (LLNL), where data can be analyzed offline from the main TMX-U acquisition computers. When combined with interactive data analysis programs, these tools provide the capability to do batch-style processing or interactive data analysis on the computers in the USC or the supercomputers of the National Magnetic Fusion Energy Computer Center (NMFECC) in addition to the normal processing done by the TMX-U acquisition system. One database tool provides highly reduced data for searching and correlation analysis of several diagnostic signals within a single shot or over many shots. A second database tool provides retrieval and storage of unreduced data for use in detailed analysis of one or more diagnostic signals. We will show how these database tools form the core of an evolving offline data analysis environment on the USC computers
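
    A small sketch of the "highly reduced data" idea in Python with SQLite; the schema, signal names, and values are invented for illustration (statistics.correlation requires Python 3.10 or later):

      import sqlite3
      import statistics

      # Each diagnostic signal is summarized per shot, so cross-shot correlation
      # searches run against small summary rows rather than full waveforms.
      con = sqlite3.connect(":memory:")
      con.execute("CREATE TABLE reduced (shot INTEGER, signal TEXT, mean REAL, peak REAL)")
      con.executemany("INSERT INTO reduced VALUES (?, ?, ?, ?)", [
          (101, "density", 1.2, 2.0), (101, "temperature", 0.8, 1.1),
          (102, "density", 1.5, 2.4), (102, "temperature", 1.0, 1.4),
          (103, "density", 0.9, 1.6), (103, "temperature", 0.7, 0.9),
      ])

      density = [r[0] for r in con.execute(
          "SELECT mean FROM reduced WHERE signal='density' ORDER BY shot")]
      temperature = [r[0] for r in con.execute(
          "SELECT mean FROM reduced WHERE signal='temperature' ORDER BY shot")]
      print(statistics.correlation(density, temperature))   # cross-shot correlation of two signals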

  11. Database Description - TMFunction | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available ...residue (or mutant) in a protein. The experimental data are collected from the literature both by searching th... the sequence database UniProt, the structural database PDB, and the literature database

  12. A precipitation database of station-based daily and monthly measurements for West Africa: Overview, quality control and harmonization

    Science.gov (United States)

    Bliefernicht, Jan; Waongo, Moussa; Annor, Thompson; Laux, Patrick; Lorenz, Manuel; Salack, Seyni; Kunstmann, Harald

    2017-04-01

    West Africa is a data-sparse region. High-quality, long-term precipitation data are often not readily available for applications in hydrology, agriculture, meteorology and other fields. To close this gap, we use multiple data sources to develop a precipitation database with long-term daily and monthly time series. This database was compiled from 16 archives, including global databases (e.g. from the Global Historical Climatology Network, GHCN), databases from research projects (e.g. the AMMA database) and databases of the national meteorological services of some West African countries. The collection consists of more than 2000 precipitation gauges with measurements dating from 1850 to 2015. Due to erroneous measurements (e.g. temporal offsets, unit conversion errors), missing values and inconsistent metadata, merging this precipitation dataset is not straightforward and requires thorough quality control and harmonization. To this end, we developed geostatistics-based algorithms for quality control of the individual databases and for harmonization into a joint database. The algorithms are based on a pairwise comparison of the correspondence of precipitation time series as a function of the distance between stations. They were tested on precipitation time series from gauges located in a rectangular domain covering Burkina Faso, Ghana, Benin and Togo. This harmonized and quality-controlled precipitation database was recently used for several applications, such as the validation of a high-resolution regional climate model and the bias correction of precipitation projections provided by the Coordinated Regional Climate Downscaling Experiment (CORDEX). In this presentation, we will give an overview of the novel daily and monthly precipitation database and the algorithms used for quality control and harmonization. We will also highlight the quality of the global and regional archives (e.g. GHCN, GSOD, the AMMA database) in comparison to the precipitation databases provided by the
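
    A minimal sketch of such a pairwise distance-correlation check in Python with NumPy; the distance radius and correlation threshold are invented placeholders, not the values used by the authors:

      import numpy as np

      def haversine_km(lat1, lon1, lat2, lon2):
          """Great-circle distance between two points in kilometres."""
          lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
          a = (np.sin((lat2 - lat1) / 2) ** 2
               + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
          return 2 * 6371.0 * np.arcsin(np.sqrt(a))

      def flag_suspect_stations(coords, series, max_dist_km=100.0, min_corr=0.6):
          """coords: {station: (lat, lon)}; series: {station: 1-D array of monthly totals}.

          A station is flagged when its series correlates poorly, on average, with the
          series of stations within the search radius, since nearby stations should agree.
          """
          suspects = []
          for s, (lat, lon) in coords.items():
              corrs = []
              for other, (lat2, lon2) in coords.items():
                  if other == s:
                      continue
                  if haversine_km(lat, lon, lat2, lon2) <= max_dist_km:
                      corrs.append(np.corrcoef(series[s], series[other])[0, 1])
              if corrs and np.nanmean(corrs) < min_corr:
                  suspects.append(s)
          return suspects

    Flagged stations would then be inspected for the kinds of errors the record lists, such as temporal offsets or unit conversion mistakes, before being merged into the joint database.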

  13. Improvements in the Protein Identifier Cross-Reference service.

    Science.gov (United States)

    Wein, Samuel P; Côté, Richard G; Dumousseau, Marine; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan A

    2012-07-01

    The Protein Identifier Cross-Reference (PICR) service is a tool that allows users to map protein identifiers, protein sequences and gene identifiers across over 100 different source databases. PICR takes input through an interactive website as well as through Representational State Transfer (REST) and Simple Object Access Protocol (SOAP) services. It returns the results as HTML pages, XLS and CSV files. It has been in production since 2007 and has recently been enhanced to add new functionality and to increase the number of databases it covers. Protein subsequences can be searched with the Basic Local Alignment Search Tool (BLAST) against the UniProt Knowledgebase (UniProtKB) to provide an entry point to the standard PICR mapping algorithm. In addition, gene identifiers from UniProtKB and Ensembl can now be submitted as input or mapped to as output from PICR. We have also implemented a 'best-guess' mapping algorithm for UniProt. In this article, we describe the usefulness of PICR, how these changes have been implemented, and the corresponding additions to the web services. Finally, we explain that the number of source databases covered by PICR has increased from the initial 73 to the current 102. New resources include several new species-specific Ensembl databases as well as the Ensembl Genome ones. PICR can be accessed at http://www.ebi.ac.uk/Tools/picr/.
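
    As a rough illustration of calling such a REST mapping service from Python: the host and tool path come from the URL in the record above, but the endpoint name, parameter names and response handling are assumptions for illustration only and should be checked against the PICR documentation:

      import urllib.parse
      import urllib.request

      # Assumed endpoint path; only the host and tool path are taken from the record above.
      BASE = "http://www.ebi.ac.uk/Tools/picr/rest/getUPIForAccession"

      def map_accession(accession, target_dbs=("SWISSPROT", "ENSEMBL_HUMAN")):
          """Ask the service to map one accession onto the given target databases."""
          params = [("accession", accession)] + [("database", db) for db in target_dbs]
          url = BASE + "?" + urllib.parse.urlencode(params)
          with urllib.request.urlopen(url, timeout=30) as resp:
              return resp.read().decode("utf-8")   # XML payload; parsing is omitted here

      # Example call (subject to the service still being available):
      # print(map_accession("P29375"))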

  14. License - Yeast Interacting Proteins Database | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available List Contact us Yeast Interacting Proteins Database License to Use This Database Last updated : 2010/02/15 You may use this database...nal License described below. The Standard License specifies the license terms regarding the use of this database... and the requirements you must follow in using this database. The Additional ...the Standard License. Standard License The Standard License for this database is the license specified in th...e Creative Commons Attribution-Share Alike 2.1 Japan . If you use data from this database

  15. NoSQL database scaling

    OpenAIRE

    Žardin, Norbert

    2017-01-01

    NoSQL database scaling is a decision in which system resources or financial expenses are traded for database performance or other benefits. By scaling a database, database performance and resource usage may increase or decrease, and such changes may have a negative impact on an application that uses the database. In this work it is analyzed how database scaling affects database resource usage and performance. As a result, calculations are obtained with which database scaling types and differe...

  16. 18 CFR 37.7 - Auditing Transmission Service Information.

    Science.gov (United States)

    2010-04-01

    Title 18, Conservation of Power and Water Resources (2010-04-01), FEDERAL ENERGY REGULATORY... SYSTEMS, § 37.7 Auditing Transmission Service Information. (a) All OASIS database transactions, except...

  17. The factographic database on atomic spectroscopy ''spectr-2'' for information service in the field of thermonuclear and quantum electronic investigations

    International Nuclear Information System (INIS)

    Bugaev, V.Yu.; Pal'chikov, V.G.; Skobelev, I.Yu.; Faenov, A.Ya.

    1990-01-01

    ''Spectr-2'' automated database in an extension of ''Spectr-1'' database developed in VNIIFTRI (USSR) for storage and rapid search of multicharged ions atomic characteristics. The information structure, the interaction of the terminal user and the information accumulation in this database are described. 4 figs

  18. Searching Harvard Business Review Online. . . Lessons in Searching a Full Text Database.

    Science.gov (United States)

    Tenopir, Carol

    1985-01-01

    This article examines the Harvard Business Review Online (HBRO) database (bibliographic description fields, abstracts, extracted information, full text, subject descriptors) and reports on 31 sample HBRO searches conducted in Bibliographic Retrieval Services to test the differences between searching the full text and searching the bibliographic record. Sample…

  19. A Review of Abstracting and Indexing Services for Biomedical Journals

    Directory of Open Access Journals (Sweden)

    Sarita Bhardwaj

    2017-10-01

    Full Text Available The days when researchers had to go to the library to look for the articles of their choice are gone. With the advent of the electronic era, searching for an article online has become much easier, thanks to the availability of various Abstracting and Indexing (A & I) services around the world. Of the more than 400 online A & I services available, only a few, such as Google and Thomson Reuters, cover all disciplines. Most A & I services cover just one discipline, allowing them to cover their area in more depth. There are many databases and indexing services for biomedical journals, the most important ones being PubMed/Medline, Scopus, and Web of Science (ISI). This article gives a review of the various databases and indexes available for dental journals in the world.

  20. Extracting Databases from Dark Data with DeepDive.

    Science.gov (United States)

    Zhang, Ce; Shin, Jaeho; Ré, Christopher; Cafarella, Michael; Niu, Feng

    2016-01-01

    DeepDive is a system for extracting relational databases from dark data: the mass of text, tables, and images that are widely collected and stored but which cannot be exploited by standard relational tools. If the information in dark data - scientific papers, Web classified ads, customer service notes, and so on - were instead in a relational database, it would give analysts a massive and valuable new set of "big data." DeepDive is distinctive when compared to previous information extraction systems in its ability to obtain very high precision and recall at reasonable engineering cost; in a number of applications, we have used DeepDive to create databases with accuracy that meets that of human annotators. To date we have successfully deployed DeepDive to create data-centric applications for insurance, materials science, genomics, paleontology, law enforcement, and others. The data unlocked by DeepDive represents a massive opportunity for industry, government, and scientific researchers. DeepDive is enabled by an unusual design that combines large-scale probabilistic inference with a novel developer interaction cycle. This design is enabled by several core innovations around probabilistic training and inference.