WorldWideScience

Sample records for annotation pipelines differences

  1. JGI Plant Genomics Gene Annotation Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Shengqiang; Rokhsar, Dan; Goodstein, David; Hayes, David; Mitros, Therese

    2014-07-14

    Plant genomes vary in size and are highly complex, with abundant repeats, genome duplications and tandem duplications. Genes encode a wealth of information useful in studying organisms, and it is critical to have high-quality, stable gene annotation. Thanks to advances in sequencing technology, the genomes and transcriptomes of many plant species have been sequenced. To turn these vast amounts of sequence data into gene annotations or re-annotations in a timely fashion, an automated pipeline is needed. The JGI plant genomics gene annotation pipeline, called Integrated Gene Call (IGC), is our effort toward this aim, with the aid of an RNA-seq transcriptome assembly pipeline. It utilizes several gene predictors based on homolog peptides and transcript ORFs; see Methods for details. Here we present genome annotations of JGI flagship green plants produced by this pipeline, plus Arabidopsis and rice, except for Chlamydomonas, which was annotated by a third party. The genome annotations of these species and others are used in our gene family build pipeline and are accessible via the JGI Phytozome portal, whose URL and front-page snapshot are shown below.
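    The record does not describe IGC's actual scoring, but the idea of combining several gene predictors weighted by homology and transcript support can be sketched as follows. Everything here is invented for illustration: the candidate representation, the weights, and the scoring rule are hypothetical stand-ins, not JGI's method.

```python
# Illustrative sketch only: choosing one gene model per locus from several
# predictors, weighted by homolog-peptide and transcript-ORF support.
# All names and weights are hypothetical.

def pick_best_model(candidates, w_homology=2.0, w_transcript=1.0):
    """candidates: list of dicts, one per predicted model at a locus."""
    def score(m):
        return w_homology * m["homolog_hits"] + w_transcript * m["transcript_orfs"]
    return max(candidates, key=score)

locus_candidates = [
    {"predictor": "A", "homolog_hits": 3, "transcript_orfs": 1},
    {"predictor": "B", "homolog_hits": 1, "transcript_orfs": 4},
]
best = pick_best_model(locus_candidates)
print(best["predictor"])  # predictor A wins: 2*3 + 1*1 = 7 vs 2*1 + 1*4 = 6
```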

  2. Pipeline to upgrade the genome annotations

    Directory of Open Access Journals (Sweden)

    Lijin K. Gopi

    2017-12-01

    The current era of functional genomics is enriched with good-quality draft genomes and annotations for many thousands of species and varieties, supported by advances in next-generation sequencing (NGS) technologies. Around 25,250 genomes of organisms from various kingdoms have been submitted to the NCBI genome resource to date. Each of these genomes was annotated using the tools and knowledge-bases available at the time of annotation, so these annotations can clearly be improved by re-annotating the same genome with better tools and knowledge-bases. Here we present a new genome annotation pipeline, strengthened with various tools and knowledge-bases, that produces better-quality annotations from the consensus of predictions from different tools. The resource also performs additional annotations beyond the usual gene predictions and functional annotations, covering SSRs, novel repeats, paralogs, proteins with transmembrane helices, signal peptides, etc. It is trained to evaluate and integrate all the predictions together, resolving overlaps and boundary ambiguities. One important highlight is its ability to predict the phylogenetic relations of repeats using evolutionary trace analysis and orthologous gene clusters. We also present a case study in which we upgrade the genome annotation of Nelumbo nucifera (sacred lotus), demonstrating that the resource can produce an improved annotation for a better understanding of the biology of various organisms.

  3. The use of semantic similarity measures for optimally integrating heterogeneous Gene Ontology data from large scale annotation pipelines

    Directory of Open Access Journals (Sweden)

    Gaston K Mazandu

    2014-08-01

    With the advancement of new high-throughput sequencing technologies, the number of genome sequencing projects worldwide has increased, yielding complete genome sequences of humans, animals and plants. Subsequently, several labs have focused on genome annotation: assigning functions to gene products, mostly using Gene Ontology (GO) terms. As a consequence, annotations are increasingly heterogeneous across genomes, both because different pipelines use different approaches to infer them and because of the nature of the GO structure itself. This makes a curator's task difficult, even when they adhere to the established guidelines for assessing protein annotations. Here we develop a genome-scale approach for integrating GO annotations from different pipelines using semantic similarity measures. We used this approach to identify inconsistencies and similarities in functional annotations between orthologs of human and Drosophila melanogaster, to assess the quality of GO annotations derived from InterPro2GO mappings against manually curated GO annotations for the D. melanogaster proteome (from a FlyBase dataset) and for human, and to filter GO annotation data for these proteomes. The results indicate that efficient integration of GO annotations eliminates up to 27.08% and 22.32% redundancy in the D. melanogaster and human GO annotation datasets, respectively. Furthermore, we identified missing annotations for some orthologs, and annotation mismatches between the InterPro2GO and manual pipelines in these two proteomes, both requiring further curation. This simplifies the task of curators in assessing protein annotations, reduces redundancy, and eliminates inconsistencies in large annotation datasets for ease of comparative functional genomics.
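    One simple source of the redundancy the abstract mentions is that annotating a protein with both a GO term and one of that term's ancestors adds no information, since the ancestor is implied by the GO structure. The sketch below removes implied ancestors from a merged annotation set; the mini-ontology is invented and this is not the paper's semantic-similarity method, only an illustration of why integration can shrink annotation sets.

```python
# Toy sketch: drop GO terms that are ancestors of other annotated terms.
# The three-term ontology fragment below is made up for illustration.

PARENTS = {  # child -> parents in a tiny invented GO fragment
    "kinase_activity": ["catalytic_activity"],
    "catalytic_activity": ["molecular_function"],
    "molecular_function": [],
}

def ancestors(term):
    seen, stack = set(), list(PARENTS.get(term, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(PARENTS.get(t, []))
    return seen

def remove_redundant(annotation):
    implied = set()
    for term in annotation:
        implied |= ancestors(term)
    return {t for t in annotation if t not in implied}

merged = {"kinase_activity", "catalytic_activity", "molecular_function"}
print(remove_redundant(merged))  # only the most specific term survives
```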

  4. RASTtk: A modular and extensible implementation of the RAST algorithm for building custom annotation pipelines and annotating batches of genomes

    Energy Technology Data Exchange (ETDEWEB)

    Brettin, Thomas; Davis, James J.; Disz, Terry; Edwards, Robert A.; Gerdes, Svetlana; Olsen, Gary J.; Olson, Robert; Overbeek, Ross; Parrello, Bruce; Pusch, Gordon D.; Shukla, Maulik; Thomason, James A.; Stevens, Rick; Vonstein, Veronika; Wattam, Alice R.; Xia, Fangfang

    2015-02-10

    The RAST (Rapid Annotation using Subsystem Technology) annotation engine was built in 2008 to annotate bacterial and archaeal genomes. It works by offering a standard software pipeline for identifying genomic features (i.e., protein-encoding genes and RNA) and annotating their functions. Recently, in order to make RAST a more useful research tool and to keep pace with advancements in bioinformatics, it has become desirable to build a version of RAST that is both customizable and extensible. In this paper, we describe the RAST tool kit (RASTtk), a modular version of RAST that enables researchers to build custom annotation pipelines. RASTtk offers a choice of software for identifying and annotating genomic features as well as the ability to add custom features to an annotation job. RASTtk also accommodates the batch submission of genomes and the ability to customize annotation protocols for batch submissions. This is the first major software restructuring of RAST since its inception.

  5. RASTtk: a modular and extensible implementation of the RAST algorithm for building custom annotation pipelines and annotating batches of genomes.

    Science.gov (United States)

    Brettin, Thomas; Davis, James J; Disz, Terry; Edwards, Robert A; Gerdes, Svetlana; Olsen, Gary J; Olson, Robert; Overbeek, Ross; Parrello, Bruce; Pusch, Gordon D; Shukla, Maulik; Thomason, James A; Stevens, Rick; Vonstein, Veronika; Wattam, Alice R; Xia, Fangfang

    2015-02-10

    The RAST (Rapid Annotation using Subsystem Technology) annotation engine was built in 2008 to annotate bacterial and archaeal genomes. It works by offering a standard software pipeline for identifying genomic features (i.e., protein-encoding genes and RNA) and annotating their functions. Recently, in order to make RAST a more useful research tool and to keep pace with advancements in bioinformatics, it has become desirable to build a version of RAST that is both customizable and extensible. In this paper, we describe the RAST tool kit (RASTtk), a modular version of RAST that enables researchers to build custom annotation pipelines. RASTtk offers a choice of software for identifying and annotating genomic features as well as the ability to add custom features to an annotation job. RASTtk also accommodates the batch submission of genomes and the ability to customize annotation protocols for batch submissions. This is the first major software restructuring of RAST since its inception.
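    The key design point in both RASTtk records above is a pipeline whose annotation stages can be chosen, reordered, and extended. The sketch below shows that general pattern only; the stage functions and genome representation are invented for illustration and are not RASTtk's actual interface.

```python
# Generic modular-pipeline pattern: a pipeline is just an ordered list of
# stage functions applied to a genome record. Stage names are hypothetical.

def call_rnas(genome):
    genome["features"].append({"type": "rRNA"})
    return genome

def call_cds(genome):
    genome["features"].append({"type": "CDS"})
    return genome

def assign_functions(genome):
    for f in genome["features"]:
        f.setdefault("function", "hypothetical protein")
    return genome

def run_pipeline(genome, stages):
    for stage in stages:  # stages can be added, removed, or reordered
        genome = stage(genome)
    return genome

genome = {"id": "toy", "features": []}
annotated = run_pipeline(genome, [call_rnas, call_cds, assign_functions])
print(len(annotated["features"]))
```

    Batch submission then falls out naturally: map the same stage list over many genome records.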

  6. The standard operating procedure of the DOE-JGI Microbial Genome Annotation Pipeline (MGAP v.4).

    Science.gov (United States)

    Huntemann, Marcel; Ivanova, Natalia N; Mavromatis, Konstantinos; Tripp, H James; Paez-Espino, David; Palaniappan, Krishnaveni; Szeto, Ernest; Pillay, Manoj; Chen, I-Min A; Pati, Amrita; Nielsen, Torben; Markowitz, Victor M; Kyrpides, Nikos C

    2015-01-01

    The DOE-JGI Microbial Genome Annotation Pipeline performs structural and functional annotation of microbial genomes, which are then included in the Integrated Microbial Genomes (IMG) comparative analysis system. MGAP is applied to assembled nucleotide sequence datasets provided via the IMG submission site. Dataset submission for annotation first requires a project and associated metadata description in GOLD. MGAP sequence data processing consists of feature prediction, including identification of protein-coding genes, non-coding RNAs and regulatory RNA features, as well as CRISPR elements. Structural annotation is followed by assignment of protein product names and functions.

  7. DFAST: a flexible prokaryotic genome annotation pipeline for faster genome publication.

    Science.gov (United States)

    Tanizawa, Yasuhiro; Fujisawa, Takatomo; Nakamura, Yasukazu

    2018-03-15

    We developed a prokaryotic genome annotation pipeline, DFAST, that also supports genome submission to public sequence databases. DFAST originally started as an online annotation server, and over 7,000 jobs have been processed since its first launch in 2016. Here, we present a newly implemented background annotation engine for DFAST, which is also available as a standalone command-line program. The new engine can annotate a typical-sized bacterial genome within 10 min, with rich information such as pseudogenes, translation exceptions and orthologous gene assignment between given reference genomes. In addition, the modular framework of DFAST allows users to customize the annotation workflow easily and will also facilitate extensions for new functions and incorporation of new tools in the future. The software is implemented in Python, runs under both Python 2.7 and 3.4 on Macintosh and Linux systems, and is freely available at https://github.com/nigyta/dfast_core/ under the GPLv3 license, with external binaries bundled in the software distribution. An online version is also available at https://dfast.nig.ac.jp/. Contact: yn@nig.ac.jp. Supplementary data are available at Bioinformatics online.

  8. Transcriptator: An Automated Computational Pipeline to Annotate Assembled Reads and Identify Non Coding RNA.

    Directory of Open Access Journals (Sweden)

    Kumar Parijat Tripathi

    RNA-seq is a new tool for measuring RNA transcript counts using high-throughput sequencing at extraordinary accuracy, providing a quantitative means to explore the transcriptome of an organism of interest. However, interpreting these extremely large data into biological knowledge is a problem, and biologist-friendly tools are lacking. In our lab, we developed Transcriptator, a web application based on a computational Python pipeline with a user-friendly Java interface. The pipeline uses the web services available for BLAST (Basic Local Alignment Search Tool), QuickGO and DAVID (Database for Annotation, Visualization and Integrated Discovery). It offers a report on the statistical analysis of functional and Gene Ontology (GO) annotation enrichment, helping users identify enriched biological themes, particularly GO terms, pathways, domains, gene/protein features and protein-protein interaction-related information. It clusters the transcripts based on functional annotations and generates a tabular report of functional and Gene Ontology annotations for each transcript submitted to the web server. The implementation of QuickGO web services in our pipeline enables users to carry out GO-Slim analysis, whereas the integration of PORTRAIT (prediction of transcriptomic non-coding RNA by ab initio methods) helps to identify non-coding RNAs and their regulatory role in the transcriptome. In summary, Transcriptator is useful software for both NGS and array data. It helps users characterize de novo assembled reads obtained from NGS experiments on non-referenced organisms, and it also performs functional enrichment analysis of differentially expressed transcripts/genes for both RNA-seq and microarray experiments. It generates easy-to-read tables and interactive charts for a better understanding of the data. The pipeline is modular in nature and provides an opportunity to add new plugins in the future.
Web application is

  9. Protannotator: a semiautomated pipeline for chromosome-wise functional annotation of the "missing" human proteome.

    Science.gov (United States)

    Islam, Mohammad T; Garg, Gagan; Hancock, William S; Risk, Brian A; Baker, Mark S; Ranganathan, Shoba

    2014-01-03

    The chromosome-centric human proteome project (C-HPP) aims to define the complete set of proteins encoded in each human chromosome. The neXtProt database (September 2013) lists 20,128 proteins for the human proteome, of which 3831 human proteins (∼19%) are considered "missing" according to the standard metrics table (released September 27, 2013). In support of the C-HPP initiative, we have extended the annotation strategy developed for human chromosome 7 "missing" proteins into a semiautomated pipeline to functionally annotate the "missing" human proteome. This pipeline integrates a suite of bioinformatics analysis and annotation software tools to identify homologues and map putative functional signatures, gene ontology, and biochemical pathways. From sequential BLAST searches, we have primarily identified homologues from reviewed nonhuman mammalian proteins with protein evidence for 1271 (33.2%) "missing" proteins, followed by 703 (18.4%) homologues from reviewed nonhuman mammalian proteins and subsequently 564 (14.7%) homologues from reviewed human proteins. Functional annotations for 1945 (50.8%) "missing" proteins were also determined. To accelerate the identification of "missing" proteins from proteomics studies, we generated proteotypic peptides in silico. Matching these proteotypic peptides to ENCODE proteogenomic data resulted in proteomic evidence for 107 (2.8%) of the 3831 "missing" proteins, while evidence from a recent membrane proteomic study supported the existence of another 15 "missing" proteins. The chromosome-wise functional annotation of all "missing" proteins is freely available to the scientific community through our web server (http://biolinfo.org/protannotator).
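    The sequential BLAST strategy described above (try the most reliable evidence tier first, fall back to the next only when it yields nothing) can be sketched as a simple tiered lookup. The tier names, hit data, and function below are invented for illustration; they mirror the ordering in the abstract but are not Protannotator's code.

```python
# Sketch of tiered homologue assignment: stop at the first evidence tier
# that returns a BLAST hit. Tier names and hits are hypothetical.

TIERS = [
    "reviewed_nonhuman_mammal_with_protein_evidence",
    "reviewed_nonhuman_mammal",
    "reviewed_human",
]

def assign_homologue(protein_id, hits_by_tier):
    """hits_by_tier: dict mapping tier name -> list of BLAST hit accessions."""
    for tier in TIERS:
        hits = hits_by_tier.get(tier, [])
        if hits:
            return tier, hits[0]
    return None, None  # no homologue found in any tier

tier, hit = assign_homologue(
    "MISSING_0001",
    {"reviewed_nonhuman_mammal": ["Q12345"]},
)
print(tier, hit)
```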

  10. Pairagon+N-SCAN_EST: a model-based gene annotation pipeline

    DEFF Research Database (Denmark)

    Arumugam, Manimozhiyan; Wei, Chaochun; Brown, Randall H

    2006-01-01

    This paper describes Pairagon+N-SCAN_EST, a gene annotation pipeline that uses only native alignments. For each expressed sequence it chooses the best genomic alignment. Systems like ENSEMBL and ExoGean rely on trans alignments, in which expressed sequences are aligned to the genomic loci...... with de novo gene prediction by using N-SCAN_EST. N-SCAN_EST is based on a generalized HMM probability model augmented with a phylogenetic conservation model and EST alignments. It can predict complete transcripts by extending or merging EST alignments, but it can also predict genes in regions without EST...

  11. GI-POP: a combinational annotation and genomic island prediction pipeline for ongoing microbial genome projects.

    Science.gov (United States)

    Lee, Chi-Ching; Chen, Yi-Ping Phoebe; Yao, Tzu-Jung; Ma, Cheng-Yu; Lo, Wei-Cheng; Lyu, Ping-Chiang; Tang, Chuan Yi

    2013-04-10

    Sequencing of microbial genomes is important because microbes carry genes for antibiotic resistance and pathogenicity. However, even with the help of new assembly software, finishing a whole genome is a time-consuming task, and in most bacteria, pathogenic or antibiotic-resistance genes are carried on genomic islands. A quick genomic island (GI) prediction method is therefore useful for genomes still being sequenced. In this work, we built a web server called GI-POP (http://gipop.life.nthu.edu.tw) which integrates a sequence assembly tool, a functional annotation pipeline, and a high-performance GI prediction module based on a support vector machine (SVM) method called genomic island genomic profile scanning (GI-GPS). The draft genomes of ongoing genome projects, in contigs or scaffolds, can be submitted to our web server, which returns functional annotation and high-confidence GI predictions. GI-POP is a comprehensive annotation web server designed for ongoing genome project analysis. Researchers can perform annotation and obtain pre-analytic information, including possible GIs, coding/non-coding sequences and functional analyses, from their draft genomes. This pre-analytic system can provide useful information for finishing a genome sequencing project.
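    GI-GPS scans genomic profiles with an SVM, but the record gives no feature details. As a much simpler stand-in for the idea of profile scanning, the sketch below flags windows whose GC content deviates strongly from the sequence-wide average, a classic (non-SVM) genomic-island signal; the window size, threshold, and sequence are all invented.

```python
# Toy genomic-profile scan: flag fixed-size windows whose GC content
# deviates from the whole-sequence GC average. Not GI-GPS's actual model.

def gc(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def flag_windows(seq, win=10, threshold=0.25):
    """Return start positions of windows with |GC - genome GC| > threshold."""
    base = gc(seq)
    flagged = []
    for start in range(0, len(seq) - win + 1, win):
        if abs(gc(seq[start:start + win]) - base) > threshold:
            flagged.append(start)
    return flagged

# AT-rich backbone with one GC-rich insert at position 30
seq = "ATATATATAT" * 3 + "GCGCGCGCGC" + "ATATATATAT" * 3
print(flag_windows(seq))  # the GC-rich window stands out
```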

  12. CLOTU: An online pipeline for processing and clustering of 454 amplicon reads into OTUs followed by taxonomic annotation

    Directory of Open Access Journals (Sweden)

    Shalchian-Tabrizi Kamran

    2011-05-01

    Background: The implementation of high-throughput sequencing for exploring biodiversity poses high demands on bioinformatics applications for automated data processing. Here we introduce CLOTU, an online, open-access pipeline for processing 454 amplicon reads. CLOTU has been constructed to be highly user-friendly and flexible, since different types of analyses are needed for different datasets. Results: In CLOTU, the user can filter out low-quality sequences; trim tags, primers, and adaptors; cluster sequence reads; and run BLAST against NCBInr or a customized database in a high-performance computing environment. The resulting data may be browsed in a user-friendly manner and easily forwarded to downstream analyses. Although CLOTU is specifically designed for analyzing 454 amplicon reads, other types of DNA sequence data can also be processed. A fungal ITS sequence dataset generated by 454 sequencing of environmental samples is used to demonstrate the utility of CLOTU. Conclusions: CLOTU is a flexible and easy-to-use bioinformatics pipeline that includes different options for filtering, trimming, clustering and taxonomic annotation of high-throughput sequence reads, some of which are not included in comparable pipelines. CLOTU is implemented on a Linux computer cluster and is freely accessible to academic users through the Bioportal web-based bioinformatics service (http://www.bioportal.uio.no).
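    The filter-then-cluster flow in the CLOTU abstract can be illustrated with a toy greedy OTU clusterer: each read joins the first existing cluster whose seed it matches above an identity threshold, otherwise it seeds a new cluster. CLOTU itself wraps external tools such as BLAST; the naive position-wise identity measure and the conventional 97% threshold below are simplified stand-ins.

```python
# Toy greedy OTU clustering by pairwise identity to each cluster's seed
# read. Illustrative only; not CLOTU's actual algorithm.

def identity(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

def greedy_otus(reads, threshold=0.97):
    otus = []  # each OTU keeps its first (seed) read and all members
    for r in reads:
        for otu in otus:
            if identity(r, otu["seed"]) >= threshold:
                otu["members"].append(r)
                break
        else:
            otus.append({"seed": r, "members": [r]})
    return otus

reads = ["ACGTACGTAC", "ACGTACGTAC", "TTTTGGGGCC"]
otus = greedy_otus(reads)
print(len(otus))  # two OTUs: identical pair clusters, divergent read does not
```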

  13. MetaStorm: A Public Resource for Customizable Metagenomics Annotation.

    Directory of Open Access Journals (Sweden)

    Gustavo Arango-Argoty

    Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution.

  14. MetaStorm: A Public Resource for Customizable Metagenomics Annotation.

    Science.gov (United States)

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution.

  15. MetaStorm: A Public Resource for Customizable Metagenomics Annotation

    Science.gov (United States)

    Arango-Argoty, Gustavo; Singh, Gargi; Heath, Lenwood S.; Pruden, Amy; Xiao, Weidong; Zhang, Liqing

    2016-01-01

    Metagenomics is a trending research area, calling for the need to analyze large quantities of data generated from next generation DNA sequencing technologies. The need to store, retrieve, analyze, share, and visualize such data challenges current online computational systems. Interpretation and annotation of specific information is especially a challenge for metagenomic data sets derived from environmental samples, because current annotation systems only offer broad classification of microbial diversity and function. Moreover, existing resources are not configured to readily address common questions relevant to environmental systems. Here we developed a new online user-friendly metagenomic analysis server called MetaStorm (http://bench.cs.vt.edu/MetaStorm/), which facilitates customization of computational analysis for metagenomic data sets. Users can upload their own reference databases to tailor the metagenomics annotation to focus on various taxonomic and functional gene markers of interest. MetaStorm offers two major analysis pipelines: an assembly-based annotation pipeline and the standard read annotation pipeline used by existing web servers. These pipelines can be selected individually or together. Overall, MetaStorm provides enhanced interactive visualization to allow researchers to explore and manipulate taxonomy and functional annotation at various levels of resolution. PMID:27632579

  16. FIGENIX: Intelligent automation of genomic annotation: expertise integration in a new software platform

    Directory of Open Access Journals (Sweden)

    Pontarotti Pierre

    2005-08-01

    Background: Two of the main objectives of the genomic and post-genomic era are to structurally and functionally annotate genomes, which consists of detecting genes' positions and structures and inferring their functions (as well as other genomic features). Structural and functional annotation both require the complex chaining of numerous software tools, algorithms and methods under the supervision of a biologist. Automating these pipelines is necessary to manage the huge amounts of data released by sequencing projects. Several pipelines already automate some of this complex chaining but still require significant input from biologists to supervise and check results at various steps. Results: Here we propose an innovative automated platform, FIGENIX, which includes an expert system capable of substituting for human expertise at several key steps. FIGENIX currently automates complex pipelines of structural and functional annotation under the supervision of the expert system (which can, for example, make key decisions, check intermediate results, or refine the dataset). The quality of the results produced by FIGENIX is comparable to that obtained by expert biologists, with a drastic gain in time and avoidance of errors due to human manipulation of data. Conclusion: The core engine and expert system of the FIGENIX platform currently handle complex annotation processes of broad interest to the genomic community. They could easily be adapted to new or more specialized pipelines, such as the annotation of miRNAs, the classification of complex multigenic families, or the annotation of regulatory elements and other genomic features of interest.

  17. AIGO: Towards a unified framework for the Analysis and the Inter-comparison of GO functional annotations

    Directory of Open Access Journals (Sweden)

    Defoin-Platel Michael

    2011-11-01

    Background: In response to the rapid growth of available genome sequences, efforts have been made to develop automatic inference methods to functionally characterize them. Pipelines that infer functional annotation are now routinely used to produce new annotations at genome scale for a broad variety of species. These pipelines differ widely in their inference algorithms, confidence thresholds and data sources for reasoning, and this heterogeneity makes comparing the relative merits of each approach extremely complex. Evaluating the quality of the resulting annotations is also challenging, given that there is often no existing gold standard against which to evaluate precision and recall. Results: In this paper, we present a pragmatic approach to the study of functional annotations. An ensemble of 12 metrics, describing various aspects of functional annotations, is defined and implemented in a unified framework, which facilitates their systematic analysis and inter-comparison. The use of this framework is demonstrated on three illustrative examples: analysing the outputs of state-of-the-art inference pipelines, comparing electronic versus manual annotation methods, and monitoring the evolution of publicly available functional annotations. The framework is part of the AIGO library (http://code.google.com/p/aigo) for the Analysis and Inter-comparison of the products of Gene Ontology (GO) annotation pipelines. The AIGO library also provides functionality to easily load, analyse, manipulate and compare functional annotations, and to plot and export the results of the analysis in various formats. Conclusions: This work is a step toward a unified framework for the systematic study of GO functional annotations, designed so that new metrics on GO functional annotations can be added in a very straightforward way.
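    To make the idea of annotation-set metrics concrete, here are two generic examples in the spirit of the framework above: coverage (the fraction of gene products carrying at least one GO term) and the mean number of terms per annotated product. The paper defines 12 metrics of its own; these two are illustrative stand-ins, and the data below are invented.

```python
# Two simple metrics over an annotation set, represented as a dict mapping
# gene-product IDs to sets of GO terms. Not necessarily AIGO's own metrics.

def coverage(annotations):
    """Fraction of products with at least one GO term."""
    return sum(1 for terms in annotations.values() if terms) / len(annotations)

def mean_terms(annotations):
    """Mean number of GO terms among annotated products."""
    annotated = [terms for terms in annotations.values() if terms]
    return sum(len(t) for t in annotated) / len(annotated)

pipeline_a = {"p1": {"GO:0001", "GO:0002"}, "p2": set(), "p3": {"GO:0003"}}
print(coverage(pipeline_a), mean_terms(pipeline_a))
```

    Computing the same metrics for two pipelines' outputs over the same proteome gives a first, crude inter-comparison.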

  18. Graph-based sequence annotation using a data integration approach

    Directory of Open Access Journals (Sweden)

    Pesch Robert

    2008-06-01

    The automated annotation of data from high-throughput sequencing and genomics experiments is a significant challenge for bioinformatics. Most current approaches rely on sequential pipelines of gene finding and gene function prediction methods that annotate a gene with information from different reference data sources. Each function prediction method contributes evidence supporting a functional assignment. Such approaches generally ignore the links between the information in the reference datasets. These links, however, are valuable for assessing the plausibility of a function assignment and can be used to evaluate the confidence in a prediction. We are working towards a novel annotation system that uses the network of information supporting a function assignment to enrich the annotation process, both for use by expert curators and for predicting the function of previously unannotated genes. In this paper we describe our success in the first stages of this development. We present the data integration steps needed to create the core database of integrated reference databases (UniProt, PFAM, PDB, GO and the pathway database AraCyc), which has been established in the ONDEX data integration system. We also present a comparison between different methods for integrating GO terms as part of the function assignment pipeline and discuss the consequences of this analysis for improving the accuracy of gene function annotation.

  19. Graph-based sequence annotation using a data integration approach.

    Science.gov (United States)

    Pesch, Robert; Lysenko, Artem; Hindle, Matthew; Hassani-Pak, Keywan; Thiele, Ralf; Rawlings, Christopher; Köhler, Jacob; Taubert, Jan

    2008-08-25

    The automated annotation of data from high throughput sequencing and genomics experiments is a significant challenge for bioinformatics. Most current approaches rely on sequential pipelines of gene finding and gene function prediction methods that annotate a gene with information from different reference data sources. Each function prediction method contributes evidence supporting a functional assignment. Such approaches generally ignore the links between the information in the reference datasets. These links, however, are valuable for assessing the plausibility of a function assignment and can be used to evaluate the confidence in a prediction. We are working towards a novel annotation system that uses the network of information supporting the function assignment to enrich the annotation process for use by expert curators and predicting the function of previously unannotated genes. In this paper we describe our success in the first stages of this development. We present the data integration steps that are needed to create the core database of integrated reference databases (UniProt, PFAM, PDB, GO and the pathway database Ara-Cyc) which has been established in the ONDEX data integration system. We also present a comparison between different methods for integration of GO terms as part of the function assignment pipeline and discuss the consequences of this analysis for improving the accuracy of gene function annotation. The methods and algorithms presented in this publication are an integral part of the ONDEX system which is freely available from http://ondex.sf.net/.
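    The central claim in both records above is that cross-links between reference datasets can score the plausibility of a function assignment. A minimal sketch of that idea, with an entirely invented evidence graph: records are nodes, cross-references are edges, and an assignment supported by more independent paths to a GO term scores higher. This is not ONDEX's data model, just an illustration of link-based confidence.

```python
# Toy evidence graph: count independent one-intermediate paths from a
# protein record to a GO term. All identifiers below are hypothetical.

EDGES = {  # record -> cross-referenced records
    "uniprot:P1": ["pfam:PF0001", "pdb:1ABC"],
    "pfam:PF0001": ["go:GO:0004672"],
    "pdb:1ABC": ["go:GO:0004672"],
}

def support_paths(start, go_term):
    """Number of intermediate records linking `start` to `go_term`."""
    count = 0
    for mid in EDGES.get(start, []):
        if go_term in EDGES.get(mid, []):
            count += 1
    return count

# Two independent sources (a domain and a structure) support the assignment
print(support_paths("uniprot:P1", "go:GO:0004672"))
```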

  20. A computational genomics pipeline for prokaryotic sequencing projects.

    Science.gov (United States)

    Kislyuk, Andrey O; Katz, Lee S; Agrawal, Sonia; Hagen, Matthew S; Conley, Andrew B; Jayaraman, Pushkala; Nelakuditi, Viswateja; Humphrey, Jay C; Sammons, Scott A; Govil, Dhwani; Mair, Raydel D; Tatti, Kathleen M; Tondella, Maria L; Harcourt, Brian H; Mayer, Leonard W; Jordan, I King

    2010-08-01

    New sequencing technologies have accelerated research on prokaryotic genomes and have made genome sequencing operations outside major genome sequencing centers routine. However, no off-the-shelf solution exists for the combined assembly, gene prediction, genome annotation and data presentation necessary to interpret sequencing data. The resulting requirement to invest significant resources into custom informatics support for genome sequencing projects remains a major impediment to the accessibility of high-throughput sequence data. We present a self-contained, automated high-throughput open source genome sequencing and computational genomics pipeline suitable for prokaryotic sequencing projects. The pipeline has been used at the Georgia Institute of Technology and the Centers for Disease Control and Prevention for the analysis of Neisseria meningitidis and Bordetella bronchiseptica genomes. The pipeline is capable of enhanced or manually assisted reference-based assembly using multiple assemblers and modes; gene predictor combining; and functional annotation of genes and gene products. Because every component of the pipeline is executed on a local machine with no need to access resources over the Internet, the pipeline is suitable for projects of a sensitive nature. Annotation of virulence-related features makes the pipeline particularly useful for projects working with pathogenic prokaryotes. The pipeline is licensed under the open-source GNU General Public License and available at the Georgia Tech Neisseria Base (http://nbase.biology.gatech.edu/). The pipeline is implemented with a combination of Perl, Bourne Shell and MySQL and is compatible with Linux and other Unix systems.
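
    The "gene predictor combining" step this record mentions can be illustrated with a minimal consensus vote across tools; the predictor names and interval coordinates below are hypothetical, and real combiners reconcile partially overlapping models rather than exact matches.

```python
from collections import Counter

def combine_predictions(predictor_calls, min_support=2):
    """Keep gene intervals reported by at least `min_support` predictors.
    Two calls are treated as the same gene only if their intervals and
    strands are identical (a deliberate simplification)."""
    counts = Counter()
    for calls in predictor_calls:          # one list of (start, end, strand) per predictor
        counts.update(set(calls))          # each predictor votes at most once per interval
    return sorted(iv for iv, n in counts.items() if n >= min_support)

# Three hypothetical predictors voting on the same contig
genemark = [(100, 400, '+'), (900, 1500, '-')]
glimmer  = [(100, 400, '+'), (2000, 2600, '+')]
prodigal = [(100, 400, '+'), (900, 1500, '-')]

consensus = combine_predictions([genemark, glimmer, prodigal])
# keeps the two calls supported by at least two predictors
```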

  1. Combined evidence annotation of transposable elements in genome sequences.

    Directory of Open Access Journals (Sweden)

    Hadi Quesneville

    2005-07-01

    Full Text Available Transposable elements (TEs) are mobile, repetitive sequences that make up significant fractions of metazoan genomes. Despite their near ubiquity and importance in genome and chromosome biology, most efforts to annotate TEs in genome sequences rely on the results of a single computational program, RepeatMasker. In contrast, recent advances in gene annotation indicate that high-quality gene models can be produced from combining multiple independent sources of computational evidence. To elevate the quality of TE annotations to a level comparable to that of gene models, we have developed a combined evidence-model TE annotation pipeline, analogous to systems used for gene annotation, by integrating results from multiple homology-based and de novo TE identification methods. As proof of principle, we have annotated "TE models" in Drosophila melanogaster Release 4 genomic sequences using the combined computational evidence derived from RepeatMasker, BLASTER, TBLASTX, all-by-all BLASTN, RECON, TE-HMM and the previous Release 3.1 annotation. Our system is designed for use with the Apollo genome annotation tool, allowing automatic results to be curated manually to produce reliable annotations. The euchromatic TE fraction of D. melanogaster is now estimated at 5.3% (cf. 3.86% in Release 3.1), and we found a substantially higher number of TEs (n = 6,013) than previously identified (n = 1,572). Most of the new TEs are small fragments a few hundred nucleotides long from highly abundant families not previously annotated (e.g., INE-1). We also estimated that 518 TE copies (8.6%) are inserted into at least one other TE, forming a nest of elements. The pipeline allows rapid and thorough annotation of even the most complex TE models, including highly deleted and/or nested elements such as those often found in heterochromatic sequences. Our pipeline can be easily adapted to other genome sequences, such as those of the D. melanogaster heterochromatin or other
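
    A combined-evidence TE annotation like the one described can be sketched as interval merging across evidence tracks, keeping only merged spans supported by more than one method. The method names echo the record; the coordinates and the ">=2 methods" rule are invented for illustration.

```python
def merge_evidence(tracks, min_methods=2):
    """Merge TE evidence intervals from several methods and keep merged
    spans supported by at least `min_methods` distinct methods."""
    events = []
    for method, intervals in tracks.items():
        for start, end in intervals:
            events.append((start, end, method))
    events.sort()
    merged = []
    for start, end, method in events:
        if merged and start <= merged[-1][1]:          # overlaps the current cluster
            merged[-1][1] = max(merged[-1][1], end)
            merged[-1][2].add(method)
        else:
            merged.append([start, end, {method}])
    return [(s, e, sorted(m)) for s, e, m in merged if len(m) >= min_methods]

# Hypothetical evidence tracks on one scaffold
tracks = {
    'RepeatMasker': [(100, 300), (1000, 1200)],
    'RECON':        [(250, 420)],
    'TE-HMM':       [(5000, 5200)],
}
models = merge_evidence(tracks)
# only the 100-420 region is corroborated by two methods
```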

  2. A document processing pipeline for annotating chemical entities in scientific documents.

    Science.gov (United States)

    Campos, David; Matos, Sérgio; Oliveira, José L

    2015-01-01

    The recognition of drugs and chemical entities in text is a very important task within the field of biomedical information extraction, given the rapid growth in the amount of published texts (scientific papers, patents, patient records) and the relevance of these and other related concepts. If done effectively, this could allow exploiting such textual resources to automatically extract or infer relevant information, such as drug profiles, relations and similarities between drugs, or associations between drugs and potential drug targets. The objective of this work was to develop and validate a document processing and information extraction pipeline for the identification of chemical entity mentions in text. We used the BioCreative IV CHEMDNER task data to train and evaluate a machine-learning based entity recognition system. Using a combination of two conditional random field models, a selected set of features, and a post-processing stage, we achieved F-measure results of 87.48% in the chemical entity mention recognition task and 87.75% in the chemical document indexing task. We present a machine learning-based solution for automatic recognition of chemical and drug names in scientific documents. The proposed approach applies a rich feature set, including linguistic, orthographic, morphological, dictionary matching and local context features. Post-processing modules are also integrated, performing parentheses correction, abbreviation resolution and filtering erroneous mentions using an exclusion list derived from the training data. The developed methods were implemented as a document annotation tool and web service, freely available at http://bioinformatics.ua.pt/becas-chemicals/.
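
    The parentheses-correction post-processing mentioned above can be sketched as a small span-repair rule applied after sequence labelling; the example text, span, and repair heuristic are invented, not the tool's actual logic.

```python
def fix_parentheses(text, start, end):
    """Extend or trim a predicted entity span so its parentheses balance,
    a common post-processing step for chemical-name taggers."""
    mention = text[start:end]
    if mention.count('(') > mention.count(')') and end < len(text) and text[end] == ')':
        end += 1                              # pull in the dangling closer
    elif mention.count(')') > mention.count('(') and mention.startswith(')'):
        start += 1                            # drop a stray leading closer
    return start, end

text = "treated with 2-(acetyloxy)benzoic acid"
# The tagger's span "2-(acetyloxy" (13..25) misses the closing parenthesis
start, end = fix_parentheses(text, 13, 25)
# the repaired span now covers "2-(acetyloxy)"
```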

  3. Plann: A command-line application for annotating plastome sequences.

    Science.gov (United States)

    Huang, Daisie I; Cronk, Quentin C B

    2015-08-01

    Plann automates the process of annotating a plastome sequence in GenBank format for either downstream processing or for GenBank submission by annotating a new plastome based on a similar, well-annotated plastome. Plann is a Perl script to be executed on the command line. Plann compares a new plastome sequence to the features annotated in a reference plastome and then shifts the intervals of any matching features to the locations in the new plastome. Plann's output can be used in the National Center for Biotechnology Information's tbl2asn to create a Sequin file for GenBank submission. Unlike Web-based annotation packages, Plann is a locally executable script that will accurately annotate a plastome sequence to a locally specified reference plastome. Because it executes from the command line, it is ready to use in other software pipelines and can be easily rerun as a draft plastome is improved.
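
    Plann's core operation of transferring annotated feature intervals from a reference plastome to a new sequence can be sketched as follows, assuming (unrealistically) a single constant alignment offset for all features; the feature names and coordinates are hypothetical, and the real script derives shifts from sequence comparison.

```python
def shift_features(features, offset):
    """Transfer annotated feature intervals from a reference plastome to a
    new sequence under a constant alignment offset (a real tool would
    compute per-feature shifts from pairwise alignment)."""
    return [(name, start + offset, end + offset) for name, start, end in features]

# Hypothetical features annotated in the reference plastome
reference_features = [('rbcL', 56788, 58221), ('matK', 1718, 3234)]

# The new plastome aligns 120 bp upstream of the reference
new_annotations = shift_features(reference_features, offset=-120)
```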

  4. Sequence-based feature prediction and annotation of proteins

    DEFF Research Database (Denmark)

    Juncker, Agnieszka; Jensen, Lars J.; Pierleoni, Andrea

    2009-01-01

    A recent trend in computational methods for annotation of protein function is that many prediction tools are combined in complex workflows and pipelines to facilitate the analysis of feature combinations, for example, the entire repertoire of kinase-binding motifs in the human proteome....

  5. Improved methods and resources for paramecium genomics: transcription units, gene annotation and gene expression.

    Science.gov (United States)

    Arnaiz, Olivier; Van Dijk, Erwin; Bétermier, Mireille; Lhuillier-Akakpo, Maoussi; de Vanssay, Augustin; Duharcourt, Sandra; Sallet, Erika; Gouzy, Jérôme; Sperling, Linda

    2017-06-26

    The 15 sibling species of the Paramecium aurelia cryptic species complex emerged after a whole genome duplication that occurred tens of millions of years ago. Given extensive knowledge of the genetics and epigenetics of Paramecium acquired over the last century, this species complex offers a uniquely powerful system to investigate the consequences of whole genome duplication in a unicellular eukaryote as well as the genetic and epigenetic mechanisms that drive speciation. High quality Paramecium gene models are important for research using this system. The major aim of the work reported here was to build an improved gene annotation pipeline for the Paramecium lineage. We generated oriented RNA-Seq transcriptome data across the sexual process of autogamy for the model species Paramecium tetraurelia. We determined, for the first time in a ciliate, candidate P. tetraurelia transcription start sites using an adapted Cap-Seq protocol. We developed TrUC, multi-threaded Perl software that in conjunction with TopHat mapping of RNA-Seq data to a reference genome, predicts transcription units for the annotation pipeline. We used EuGene software to combine annotation evidence. The high quality gene structural annotations obtained for P. tetraurelia were used as evidence to improve published annotations for 3 other Paramecium species. The RNA-Seq data were also used for differential gene expression analysis, providing a gene expression atlas that is more sensitive than the previously established microarray resource. We have developed a gene annotation pipeline tailored for the compact genomes and tiny introns of Paramecium species. A novel component of this pipeline, TrUC, predicts transcription units using Cap-Seq and oriented RNA-Seq data. TrUC could prove useful beyond Paramecium, especially in the case of high gene density. Accurate predictions of 3' and 5' UTR will be particularly valuable for studies of gene expression (e.g. nucleosome positioning, identification of cis

  6. A framework for annotating human genome in disease context.

    Science.gov (United States)

    Xu, Wei; Wang, Huisong; Cheng, Wenqing; Fu, Dong; Xia, Tian; Kibbe, Warren A; Lin, Simon M

    2012-01-01

    Identification of gene-disease associations is crucial to understanding disease mechanisms. A rapid increase in biomedical literature, driven by advances in genome-scale technologies, poses a challenge for manually curated annotation databases seeking to characterize gene-disease associations effectively and in a timely manner. We propose an automatic method, the Disease Ontology Annotation Framework (DOAF), to provide a comprehensive annotation of the human genome using the computable Disease Ontology (DO), the NCBO Annotator service and NCBI Gene Reference Into Function (GeneRIF). DOAF can keep the resulting knowledgebase current by periodically executing an automatic pipeline to re-annotate the human genome using the latest DO and GeneRIF releases at any frequency, such as daily or monthly. Further, DOAF provides a computable and programmable environment that enables large-scale and integrative analysis by working with external analytic software or online service platforms. A user-friendly web interface (doa.nubic.northwestern.edu) is implemented to allow users to efficiently query, download, and view disease annotations and the underlying evidence.

  7. MAKER2: an annotation pipeline and genome-database management tool for second-generation genome projects.

    Science.gov (United States)

    Holt, Carson; Yandell, Mark

    2011-12-22

    Second-generation sequencing technologies are precipitating major shifts with regards to what kinds of genomes are being sequenced and how they are annotated. While the first generation of genome projects focused on well-studied model organisms, many of today's projects involve exotic organisms whose genomes are largely terra incognita. This complicates their annotation, because unlike first-generation projects, there are no pre-existing 'gold-standard' gene-models with which to train gene-finders. Improvements in genome assembly and the wide availability of mRNA-seq data are also creating opportunities to update and re-annotate previously published genome annotations. Today's genome projects are thus in need of new genome annotation tools that can meet the challenges and opportunities presented by second-generation sequencing technologies. We present MAKER2, a genome annotation and data management tool designed for second-generation genome projects. MAKER2 is a multi-threaded, parallelized application that can process second-generation datasets of virtually any size. We show that MAKER2 can produce accurate annotations for novel genomes where training-data are limited, of low quality or even non-existent. MAKER2 also provides an easy means to use mRNA-seq data to improve annotation quality; and it can use these data to update legacy annotations, significantly improving their quality. We also show that MAKER2 can evaluate the quality of genome annotations, and identify and prioritize problematic annotations for manual review. MAKER2 is the first annotation engine specifically designed for second-generation genome projects. MAKER2 scales to datasets of any size, requires little in the way of training data, and can use mRNA-seq data to improve annotation quality. It can also update and manage legacy genome annotation datasets.

  8. Empirical Formulas for Calculation of Negative Pressure Difference in Vacuum Pipelines

    Directory of Open Access Journals (Sweden)

    Marek Kalenik

    2015-10-01

    Full Text Available The paper presents the analysis of results of empirical investigations of a negative pressure difference in vacuum pipelines with internal diameters of 57, 81, and 102 mm. The investigations were performed in an experimental installation of a vacuum sewage system, built in a laboratory hall on a scale of 1:1. The paper contains a review of the literature concerning two-phase flows (liquid-gas) in horizontal, vertical and diagonal pipelines. It presents the construction and working principles of the experimental installation of a vacuum sewage system in steady and unsteady conditions during a two-phase flow of water and air. It also presents a methodology for determining the formula for calculation of a negative pressure difference in vacuum pipelines. The results obtained from the measurements of the negative pressure difference Δpvr in the vacuum pipelines were analyzed and compared with the results of calculations of the negative pressure difference Δpvr obtained from the determined formula. The values of the negative pressure difference Δpvr calculated for the vacuum pipelines with internal diameters of 57, 81, and 102 mm with the use of Formula (19) coincide with the values of Δpvr measured in the experimental installation of a vacuum sewage system. The dependence of the negative pressure difference Δpvr along the length of the vacuum pipelines on the set negative pressure in the vacuum container pvzp is linear. The smaller the vacuum pipeline diameter, the greater the negative pressure difference Δpvr is along its length.
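
    The reported behaviour, a pressure difference Δpvr that grows linearly with pipeline length and more steeply for smaller diameters, can be illustrated with a toy linear model. The gradients below are hypothetical placeholders, not the coefficients of the paper's Formula (19).

```python
def pressure_difference(length_m, diameter_mm, grad_by_diameter):
    """Illustrative linear model: the negative pressure difference grows
    linearly with pipeline length, with a steeper gradient for smaller
    diameters. The gradients are invented for illustration only."""
    return grad_by_diameter[diameter_mm] * length_m

# Hypothetical gradients (kPa per metre), ordered so the 57 mm pipe
# shows the largest drop, as the study reports.
gradients = {57: 0.020, 81: 0.012, 102: 0.008}

drop_57 = pressure_difference(50, 57, gradients)    # drop over 50 m of 57 mm pipe
drop_102 = pressure_difference(50, 102, gradients)  # same length, 102 mm pipe
```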

  9. MutAid: Sanger and NGS Based Integrated Pipeline for Mutation Identification, Validation and Annotation in Human Molecular Genetics.

    Directory of Open Access Journals (Sweden)

    Ram Vinay Pandey

    Full Text Available Traditional Sanger sequencing as well as Next-Generation Sequencing have been used for the identification of disease causing mutations in human molecular research. The majority of currently available tools are developed for research and explorative purposes and often do not provide a complete, efficient, one-stop solution. As the focus of currently developed tools is mainly on NGS data analysis, no integrative solution for the analysis of Sanger data is provided and consequently a one-stop solution to analyze reads from both sequencing platforms is not available. We have therefore developed a new pipeline called MutAid to analyze and interpret raw sequencing data produced by Sanger or several NGS sequencing platforms. It performs format conversion, base calling, quality trimming, filtering, read mapping, variant calling, variant annotation and analysis of Sanger and NGS data under a single platform. It is capable of analyzing reads from multiple patients in a single run to create a list of potential disease causing base substitutions as well as insertions and deletions. MutAid has been developed for expert and non-expert users and supports four sequencing platforms including Sanger, Illumina, 454 and Ion Torrent. Furthermore, for NGS data analysis, five read mappers including BWA, TMAP, Bowtie, Bowtie2 and GSNAP and four variant callers including GATK-HaplotypeCaller, SAMTOOLS, Freebayes and VarScan2 pipelines are supported. MutAid is freely available at https://sourceforge.net/projects/mutaid.

  10. MutAid: Sanger and NGS Based Integrated Pipeline for Mutation Identification, Validation and Annotation in Human Molecular Genetics.

    Science.gov (United States)

    Pandey, Ram Vinay; Pabinger, Stephan; Kriegner, Albert; Weinhäusel, Andreas

    2016-01-01

    Traditional Sanger sequencing as well as Next-Generation Sequencing have been used for the identification of disease causing mutations in human molecular research. The majority of currently available tools are developed for research and explorative purposes and often do not provide a complete, efficient, one-stop solution. As the focus of currently developed tools is mainly on NGS data analysis, no integrative solution for the analysis of Sanger data is provided and consequently a one-stop solution to analyze reads from both sequencing platforms is not available. We have therefore developed a new pipeline called MutAid to analyze and interpret raw sequencing data produced by Sanger or several NGS sequencing platforms. It performs format conversion, base calling, quality trimming, filtering, read mapping, variant calling, variant annotation and analysis of Sanger and NGS data under a single platform. It is capable of analyzing reads from multiple patients in a single run to create a list of potential disease causing base substitutions as well as insertions and deletions. MutAid has been developed for expert and non-expert users and supports four sequencing platforms including Sanger, Illumina, 454 and Ion Torrent. Furthermore, for NGS data analysis, five read mappers including BWA, TMAP, Bowtie, Bowtie2 and GSNAP and four variant callers including GATK-HaplotypeCaller, SAMTOOLS, Freebayes and VarScan2 pipelines are supported. MutAid is freely available at https://sourceforge.net/projects/mutaid.
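
    The platform-dependent routing this abstract describes (Sanger reads skip the NGS mapping/calling stage; NGS runs pick one of the supported mappers and variant callers) can be sketched as a configuration check. The tool names come from the abstract; the step names and function are an illustrative sketch, not MutAid's actual interface.

```python
SUPPORTED = {
    'platforms': {'sanger', 'illumina', '454', 'iontorrent'},
    'mappers':   {'bwa', 'tmap', 'bowtie', 'bowtie2', 'gsnap'},
    'callers':   {'gatk-haplotypecaller', 'samtools', 'freebayes', 'varscan2'},
}

def validate_run(platform, mapper=None, caller=None):
    """Return the ordered pipeline stages for a run configuration:
    Sanger reads take a fixed path, NGS platforms need a supported
    mapper and variant caller."""
    if platform not in SUPPORTED['platforms']:
        raise ValueError(f'unknown platform: {platform}')
    if platform == 'sanger':
        return ('basecall', 'trim', 'align', 'call', 'annotate')
    if mapper not in SUPPORTED['mappers'] or caller not in SUPPORTED['callers']:
        raise ValueError('NGS runs need a supported mapper and caller')
    return ('convert', 'trim', mapper, caller, 'annotate')

steps = validate_run('illumina', mapper='bwa', caller='freebayes')
```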

  11. On the relevance of sophisticated structural annotations for disulfide connectivity pattern prediction.

    Directory of Open Access Journals (Sweden)

    Julien Becker

    Full Text Available Disulfide bridges strongly constrain the native structure of many proteins and predicting their formation is therefore a key sub-problem of protein structure and function inference. Most recently proposed approaches for this prediction problem adopt the following pipeline: first they enrich the primary sequence with structural annotations, second they apply a binary classifier to each candidate pair of cysteines to predict disulfide bonding probabilities and finally, they use a maximum weight graph matching algorithm to derive the predicted disulfide connectivity pattern of a protein. In this paper, we adopt this three step pipeline and propose an extensive study of the relevance of various structural annotations and feature encodings. In particular, we consider five kinds of structural annotations, among which three are novel in the context of disulfide bridge prediction. So as to be usable by machine learning algorithms, these annotations must be encoded into features. For this purpose, we propose four different feature encodings based on local windows and on different kinds of histograms. The combination of structural annotations with these possible encodings leads to a large number of possible feature functions. In order to identify a minimal subset of relevant feature functions among those, we propose an efficient and interpretable feature function selection scheme, designed so as to avoid any form of overfitting. We apply this scheme on top of three supervised learning algorithms: k-nearest neighbors, support vector machines and extremely randomized trees. Our results indicate that the use of only the PSSM (position-specific scoring matrix) together with the CSP (cysteine separation profile) are sufficient to construct a high performance disulfide pattern predictor and that extremely randomized trees reach a disulfide pattern prediction accuracy of [Formula: see text] on the benchmark dataset SPX[Formula: see text], which corresponds to
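
    The third stage of the pipeline described here, maximum-weight matching over candidate cysteine pairs, can be illustrated with a brute-force search, which is tractable for the handful of cysteines in a typical protein. The cysteine positions and bonding probabilities below are invented.

```python
def best_pattern(cysteines, bond_prob):
    """Exhaustively find the maximum-weight perfect matching over an even
    number of cysteines, scoring a connectivity pattern by the sum of
    pairwise bonding probabilities from a classifier (here a dict)."""
    def match(rest):
        if not rest:
            return 0.0, []
        first, others = rest[0], rest[1:]
        best = (float('-inf'), [])
        for j, partner in enumerate(others):
            w, pairs = match(others[:j] + others[j + 1:])
            w += bond_prob[frozenset((first, partner))]
            if w > best[0]:
                best = (w, [(first, partner)] + pairs)
        return best
    return match(list(cysteines))

# Hypothetical bonding probabilities for four cysteine positions
probs = {frozenset(p): w for p, w in [
    ((5, 23), 0.9), ((5, 41), 0.2), ((5, 60), 0.1),
    ((23, 41), 0.3), ((23, 60), 0.2), ((41, 60), 0.8),
]}
score, pattern = best_pattern((5, 23, 41, 60), probs)
# the 5-23 / 41-60 pattern wins with total weight 1.7
```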

  12. The GATO gene annotation tool for research laboratories

    Directory of Open Access Journals (Sweden)

    A. Fujita

    2005-11-01

    Full Text Available Large-scale genome projects have generated a rapidly increasing number of DNA sequences. Therefore, development of computational methods to rapidly analyze these sequences is essential for progress in genomic research. Here we present an automatic annotation system for preliminary analysis of DNA sequences. The gene annotation tool (GATO) is a bioinformatics pipeline designed to facilitate routine functional annotation and easy access to annotated genes. It was designed in view of the frequent need of genomic researchers to access data pertaining to a common set of genes. In the GATO system, annotation is generated by querying some of the Web-accessible resources and the information is stored in a local database, which keeps a record of all previous annotation results. GATO may be accessed from everywhere through the internet or may be run locally if a large number of sequences are going to be annotated. It is implemented in PHP and Perl and may be run on any suitable Web server. Usually, installation and application of annotation systems require experience and are time consuming, but GATO is simple and practical, allowing anyone with basic skills in informatics to access it without any special training. GATO can be downloaded at [http://mariwork.iq.usp.br/gato/]. Minimum computer free space required is 2 MB.

  13. Annotating Logical Forms for EHR Questions.

    Science.gov (United States)

    Roberts, Kirk; Demner-Fushman, Dina

    2016-05-01

    This paper discusses the creation of a semantically annotated corpus of questions about patient data in electronic health records (EHRs). The goal is to provide the training data necessary for semantic parsers to automatically convert EHR questions into a structured query. A layered annotation strategy is used which mirrors a typical natural language processing (NLP) pipeline. First, questions are syntactically analyzed to identify multi-part questions. Second, medical concepts are recognized and normalized to a clinical ontology. Finally, logical forms are created using a lambda calculus representation. We use a corpus of 446 questions asking for patient-specific information. From these, 468 specific questions are found containing 259 unique medical concepts and requiring 53 unique predicates to represent the logical forms. We further present detailed characteristics of the corpus, including inter-annotator agreement results, and describe the challenges automatic NLP systems will face on this task.
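
    The layered annotation the corpus uses, with concept spans normalized to a clinical ontology and then composed into a lambda-calculus logical form, might be represented as follows; the concept code, span, and predicate names are illustrative assumptions, not the corpus's actual scheme.

```python
# One EHR question with two annotation layers: a normalized medical
# concept and a lambda-calculus logical form built over it.
question = "What was the patient's last potassium level?"

# Layer 2: concept recognition and normalization (code is illustrative)
concepts = [{'span': (28, 43), 'text': 'potassium level',
             'code': 'LOINC:2823-3'}]

# Layer 3: logical form composing hypothetical predicates over the concept
logical_form = "latest(λx. lab_event(x) ∧ measures(x, LOINC:2823-3))"
```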

  14. Plann: A command-line application for annotating plastome sequences1

    Science.gov (United States)

    Huang, Daisie I.; Cronk, Quentin C. B.

    2015-01-01

    Premise of the study: Plann automates the process of annotating a plastome sequence in GenBank format for either downstream processing or for GenBank submission by annotating a new plastome based on a similar, well-annotated plastome. Methods and Results: Plann is a Perl script to be executed on the command line. Plann compares a new plastome sequence to the features annotated in a reference plastome and then shifts the intervals of any matching features to the locations in the new plastome. Plann’s output can be used in the National Center for Biotechnology Information’s tbl2asn to create a Sequin file for GenBank submission. Conclusions: Unlike Web-based annotation packages, Plann is a locally executable script that will accurately annotate a plastome sequence to a locally specified reference plastome. Because it executes from the command line, it is ready to use in other software pipelines and can be easily rerun as a draft plastome is improved. PMID:26312193

  15. Genomic variant annotation workflow for clinical applications [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Thomas Thurnherr

    2016-10-01

    Full Text Available Annotation and interpretation of DNA aberrations identified through next-generation sequencing is becoming an increasingly important task. Even more so in the context of data analysis pipelines for medical applications, where genomic aberrations are associated with phenotypic and clinical features. Here we describe a workflow to identify potential gene targets in aberrated genes or pathways and their corresponding drugs. To this end, we provide the R/Bioconductor package rDGIdb, an R wrapper to query the drug-gene interaction database (DGIdb). DGIdb accumulates drug-gene interaction data from 15 different resources and allows filtering on different levels. The rDGIdb package makes these resources and tools available to R users. Moreover, rDGIdb queries can be automated through incorporation of the rDGIdb package into NGS sequencing pipelines.

  16. The Development of PIPA: An Integrated and Automated Pipeline for Genome-Wide Protein Function Annotation

    National Research Council Canada - National Science Library

    Yu, Chenggang; Zavaljevski, Nela; Desai, Valmik; Johnson, Seth; Stevens, Fred J; Reifman, Jaques

    2008-01-01

    .... With the existence of many programs and databases for inferring different protein functions, a pipeline that properly integrates these resources will benefit from the advantages of each method...

  17. MGmapper: Reference based mapping and taxonomy annotation of metagenomics sequence reads

    DEFF Research Database (Denmark)

    Petersen, Thomas Nordahl; Lukjancenko, Oksana; Thomsen, Martin Christen Frølund

    2017-01-01

    number of false positive species annotations are a problem unless thresholds or post-processing are applied to differentiate between correct and false annotations. MGmapper is a package to process raw next-generation sequence data and perform reference-based sequence assignment, followed by a post...... pipeline is freely available as a Bitbucket package (https://bitbucket.org/genomicepidemiology/mgmapper). A web-version (https://cge.cbs.dtu.dk/services/MGmapper) provides the basic functionality for analysis of small fastq datasets.

  18. Tunnels: different construction methods and its use for pipelines installation

    Energy Technology Data Exchange (ETDEWEB)

    Mattos, Tales; Soares, Ana Cecilia; Assis, Slow de; Bolsonaro, Ralfo; Sanandres, Simon [Petroleo do Brasil S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    In a country of continental dimensions like Brazil, the pipeline modal faces the challenge of opening ROWs in widely differing soils and geomorphologies. To safely meet pipeline construction demand, ROW opening uses the full range of earthworks and route-definition techniques and, where necessary, trenchless techniques such as horizontal directional drilling and microtunneling, as well as full-size tunnels designed for pipeline installation in high-relief terrain to avoid geotechnical risks. PETROBRAS has already used the tunnel technique to cross higher terrain that is very difficult to build on and, above all, to make pipeline maintenance and operation easier. For the GASBOL project, in the Aparados da Serra region, and for GASYRG, in Bolivia, two tunnels of approximately 700 meters and 2,000 meters were opened. The GASBOL tunnel had the particularity of being a gallery with only one excavation face, ending under the hill; from this point a vertical shaft was drilled to the surface to install the pipeline section, while the GASYRG tunnel had two excavation faces. Currently, two projects with tunnels are under development: the Caraguatatuba-Taubate gas pipeline (GASTAU), with a 5 km tunnel following the same concept as the GASBOL tunnel (a gallery to be opened with a TBM, or tunnel boring machine, plus a shaft to the surface), and the Cabiunas-Reduc III gas pipeline (GASDUC III), under construction with a 3.7 km tunnel with two excavation faces, like the GASYRG tunnel. This paper presents the main tunnel excavation methods, conventional and mechanized, describes the most relevant characteristics of each and, in particular, discusses the use of tunnels for pipeline installation. (author)

  19. Identifying and annotating human bifunctional RNAs reveals their versatile functions.

    Science.gov (United States)

    Chen, Geng; Yang, Juan; Chen, Jiwei; Song, Yunjie; Cao, Ruifang; Shi, Tieliu; Shi, Leming

    2016-10-01

    Bifunctional RNAs that possess both protein-coding and noncoding functional properties have been little explored and are poorly understood. Here we systematically explored the characteristics and functions of such human bifunctional RNAs by integrating tandem mass spectrometry and RNA-seq data. We first constructed a pipeline to identify and annotate bifunctional RNAs, leading to the characterization of 132 high-confidence bifunctional RNAs. Our analyses indicate that bifunctional RNAs may be involved in human embryonic development and can be functional in diverse tissues. Moreover, bifunctional RNAs could interact with multiple miRNAs and RNA-binding proteins to exert their corresponding roles. Bifunctional RNAs may also function as competing endogenous RNAs to regulate the expression of many genes by competing for common targeting miRNAs. Finally, somatic mutations of diverse carcinomas may have harmful effects on the corresponding bifunctional RNAs. Collectively, our study not only provides the pipeline for identifying and annotating bifunctional RNAs but also reveals their important gene-regulatory functions.

  20. A homology-based pipeline for global prediction of post-translational modification sites

    Science.gov (United States)

    Chen, Xiang; Shi, Shao-Ping; Xu, Hao-Dong; Suo, Sheng-Bao; Qiu, Jian-Ding

    2016-05-01

    The pathways of protein post-translational modifications (PTMs) have been shown to play particularly important roles in almost any biological process. Identification of PTM substrates, along with information on the exact sites, is fundamental for fully understanding or controlling biological processes. Alternative computational strategies would help to annotate PTMs in a high-throughput manner. Traditional algorithms are suited to common organisms and tissues for which a complete PTM atlas or extensive experimental data exist; annotation of rare PTMs in most organisms, however, remains a clear challenge. To this end, we have developed a novel homology-based pipeline named PTMProber that allows identification of potential modification sites for most proteomes lacking PTM data. The cross-promotion E-value (CPE) is used in our pipeline as a stringent benchmark to evaluate homology to known modification sites. Independent validation tests show that PTMProber achieves over 58.8% recall with high precision by the CPE benchmark. Comparisons with other machine-learning tools show that the PTMProber pipeline performs better on general predictions. In addition, we developed a web-based tool integrating this pipeline at http://bioinfo.ncu.edu.cn/PTMProber/index.aspx. In addition to pre-constructed PTM prediction models, the website provides extended functionality that allows users to customize models.
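
    The homology-based transfer idea can be sketched as sliding known modification-site windows along a query sequence and reporting sufficiently similar positions. Simple window identity stands in here for the paper's cross-promotion E-value, and the sequences and threshold are invented.

```python
def transfer_sites(query, known_windows, min_identity=0.8):
    """Homology-based PTM site transfer: slide each known modification-site
    window along the query and report centre positions whose local
    sequence identity meets the threshold."""
    hits = []
    for window, ptm in known_windows:
        k = len(window)
        half = k // 2
        for i in range(len(query) - k + 1):
            same = sum(a == b for a, b in zip(query[i:i + k], window))
            if same / k >= min_identity:
                hits.append((i + half, ptm))   # centre residue carries the PTM
    return hits

# A known phosphoserine site window (centre residue S) and a query protein
known = [('RRASLPE', 'phosphoserine')]
query = 'MKRRASLPEQ'
sites = transfer_sites(query, known)
# the serine at position 5 of the query inherits the annotation
```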

  1. An Approach to Function Annotation for Proteins of Unknown Function (PUFs) in the Transcriptome of Indian Mulberry.

    Directory of Open Access Journals (Sweden)

    K H Dhanyalakshmi

    Full Text Available The modern sequencing technologies are generating large volumes of information at the transcriptome and genome level. Translation of this information into biological meaning lags far behind, leaving a significant portion of discovered proteins as proteins of unknown function (PUFs). Attempts to uncover the functional significance of PUFs are limited by the lack of easy and high-throughput functional annotation tools. Here, we report an approach to assign putative functions to PUFs identified in the transcriptome of mulberry, a perennial tree commonly cultivated as a host of the silkworm. We utilized the mulberry PUFs generated from leaf tissues exposed to drought stress at the whole-plant level. A sequence- and structure-based computational analysis predicted the probable function of the PUFs. For rapid and easy annotation of PUFs, we developed an automated pipeline integrating diverse bioinformatics tools, designated the PUFs Annotation Server (PUFAS), which also provides a web service API (Application Programming Interface) for large-scale analyses up to an entire genome. The expression analysis of three selected PUFs annotated by the pipeline revealed abiotic stress responsiveness of the genes, and hence their potential role in stress acclimation pathways. The automated pipeline developed here could be extended to assign functions to PUFs from any organism. The PUFAS web server is available at http://caps.ncbs.res.in/pufas/ and the web service is accessible at http://capservices.ncbs.res.in/help/pufas.

  2. VISPA2: a scalable pipeline for high-throughput identification and annotation of vector integration sites.

    Science.gov (United States)

    Spinozzi, Giulio; Calabria, Andrea; Brasca, Stefano; Beretta, Stefano; Merelli, Ivan; Milanesi, Luciano; Montini, Eugenio

    2017-11-25

    Bioinformatics tools designed to identify lentiviral or retroviral vector insertion sites in the genome of host cells are used to address the safety and long-term efficacy of hematopoietic stem cell gene therapy applications and to study the clonal dynamics of hematopoietic reconstitution. The increasing number of gene therapy clinical trials, combined with the increasing amount of Next Generation Sequencing data aimed at identifying integration sites, requires computational software that is both highly accurate and efficient enough to process "big data" in a reasonable time. Here we present VISPA2 (Vector Integration Site Parallel Analysis, version 2), the latest optimized computational pipeline for integration site identification and analysis, with the following features: (1) the sequence analysis for integration site processing is fully compliant with paired-end reads and includes a sequence quality filter before and after alignment to the target genome; (2) a heuristic algorithm that reduces false-positive integration sites at the nucleotide level, limiting the impact of Polymerase Chain Reaction or trimming/alignment artifacts; (3) a classification and annotation module for integration sites; (4) a user-friendly web interface as the researcher front-end, allowing integration site analyses without computational skills; (5) the speedup of all steps through parallelization (Hadoop-free). We tested VISPA2 performance using simulated and real datasets of lentiviral vector integration sites, previously obtained from patients enrolled in a hematopoietic stem cell gene therapy clinical trial, and compared the results with other preexisting tools for integration site analysis. On the computational side, VISPA2 showed a >6-fold speedup and improved precision and recall metrics (1 and 0.97, respectively) compared to previously developed computational pipelines. These performances indicate that VISPA2 is a fast, reliable and user-friendly tool for
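
    Feature (2), collapsing near-identical positions that arise from PCR or trimming/alignment artifacts, can be illustrated with a simple clustering heuristic. This is a hypothetical sketch in the spirit of such a filter, not VISPA2's actual algorithm: sites on the same chromosome and strand within a few base pairs are merged, keeping the best-supported position and pooling read counts.

    ```python
    from collections import defaultdict

    def collapse_sites(sites, window=3):
        """Merge integration sites closer than `window` bp on the same
        chromosome/strand, keeping the position with the highest read count
        and summing the cluster's reads.

        sites: iterable of (chrom, strand, pos, reads) tuples."""
        by_key = defaultdict(list)
        for chrom, strand, pos, reads in sites:
            by_key[(chrom, strand)].append((pos, reads))
        merged = []
        for (chrom, strand), entries in by_key.items():
            entries.sort()  # scan positions left to right
            cluster = [entries[0]]
            for pos, reads in entries[1:]:
                if pos - cluster[-1][0] <= window:
                    cluster.append((pos, reads))  # same artifact cluster
                else:
                    best = max(cluster, key=lambda e: e[1])
                    merged.append((chrom, strand, best[0],
                                   sum(r for _, r in cluster)))
                    cluster = [(pos, reads)]
            best = max(cluster, key=lambda e: e[1])
            merged.append((chrom, strand, best[0], sum(r for _, r in cluster)))
        return sorted(merged)
    ```

    A site at chr1:102 supported by 3 reads next to a site at chr1:100 with 50 reads would thus be absorbed into the 100 position with 53 pooled reads.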

  3. Viability of using different types of main oil pipelines pump drives

    Science.gov (United States)

    Zakirzakov, A. G.; Zemenkov, Yu D.; Akulov, K. A.

    2018-05-01

    The choice of drive for the pumping units of main oil pipelines is of great importance both in the design of new pipelines and in the modernization of existing ones. At the beginning of oil pipeline transport development, the choice was not difficult due to the limited number and types of energy sources: the combustion energy of the pumped product was often the only energy resource available for its transportation. In this regard, pipelines that had autonomous energy sources differed favorably from other energy consumers in the sector. Over time, with the development of the country's electricity supply system, the electric drive for the pumping equipment of oil pipelines became the dominant type of pumping-station drive. Nowadays, tradition remains an essential factor in the choice of drive type: for many years, oil companies have been using electric drives for pumps, while gas transport enterprises prefer self-contained gas turbines.

  4. AutoFACT: An Automatic Functional Annotation and Classification Tool

    Directory of Open Access Journals (Sweden)

    Lang B Franz

    2005-06-01

    Full Text Available Abstract Background Assignment of function to new molecular sequence data is an essential step in genomics projects. The usual process involves similarity searches of a given sequence against one or more databases, an arduous process for large datasets. Results We present AutoFACT, a fully automated and customizable annotation tool that assigns biologically informative functions to a sequence. Key features of this tool are that it (1) analyzes nucleotide and protein sequence data; (2) determines the most informative functional description by combining multiple BLAST reports from several user-selected databases; (3) assigns putative metabolic pathways, functional classes, enzyme classes, Gene Ontology terms and locus names; and (4) generates output in HTML, text and GFF formats for the user's convenience. We have compared AutoFACT to four well-established annotation pipelines. The error rate of functional annotation is estimated to be only 1–2%. Comparison of AutoFACT to the traditional top-BLAST-hit annotation method shows that our procedure increases the number of functionally informative annotations by approximately 50%. Conclusion AutoFACT will serve as a useful annotation tool for smaller sequencing groups lacking dedicated bioinformatics staff. It is implemented in PERL and runs on LINUX/UNIX platforms. AutoFACT is available at http://megasun.bch.umontreal.ca/Software/AutoFACT.htm.
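
    The core of step (2), choosing the most informative description across several BLAST reports, can be sketched as below. The database names, bit-score cutoff, and "uninformative" term list are assumptions for illustration, not AutoFACT's actual configuration (which also uses hit-length ratios and database-specific cutoffs).

    ```python
    UNINFORMATIVE = ("hypothetical", "unknown", "unnamed", "predicted protein")

    def is_informative(description):
        """Reject placeholder descriptions that carry no functional information."""
        d = description.lower()
        return not any(term in d for term in UNINFORMATIVE)

    def best_annotation(hits_by_db, db_priority, min_bitscore=40.0):
        """Pick the highest-scoring informative description, walking databases
        in decreasing order of trust.

        hits_by_db: {db_name: [(description, bitscore), ...]}"""
        for db in db_priority:
            for desc, score in sorted(hits_by_db.get(db, []),
                                      key=lambda h: -h[1]):
                if score >= min_bitscore and is_informative(desc):
                    return db, desc
        return None, "unclassified"
    ```

    Note how a top-scoring but uninformative hit ("hypothetical protein") is skipped in favor of a lower-scoring informative one, which is the behavior that distinguishes this strategy from naive top-BLAST-hit annotation.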

  5. Challenges in Whole-Genome Annotation of Pyrosequenced Eukaryotic Genomes

    Energy Technology Data Exchange (ETDEWEB)

    Kuo, Alan; Grigoriev, Igor

    2009-04-17

    Pyrosequencing technologies such as 454/Roche and Solexa/Illumina vastly lower the cost of nucleotide sequencing compared to the traditional Sanger method, and thus promise to greatly expand the number of sequenced eukaryotic genomes. However, the new technologies also bring new challenges such as shorter reads and new kinds and higher rates of sequencing errors, which complicate genome assembly and gene prediction. At JGI we are deploying 454 technology for the sequencing and assembly of ever-larger eukaryotic genomes. Here we describe our first whole-genome annotation of a purely 454-sequenced fungal genome that is larger than a yeast (>30 Mbp). The pezizomycotine (filamentous ascomycete) Aspergillus carbonarius belongs to the Aspergillus section Nigri species complex, members of which are significant as platforms for bioenergy and bioindustrial technology, as members of soil microbial communities and players in the global carbon cycle, and as agricultural toxigens. Application of a modified version of the standard JGI Annotation Pipeline has so far predicted ~10k genes. ~12% of these preliminary annotations suffer a potential frameshift error, which is somewhat higher than the ~9% rate in the Sanger-sequenced and conventionally assembled and annotated genome of the fellow Aspergillus section Nigri member A. niger. Also, >90% of A. niger genes have potential homologs in the A. carbonarius preliminary annotation. We conclude, and with further annotation and comparative analysis expect to confirm, that 454 sequencing strategies provide a promising substrate for annotation of modestly sized eukaryotic genomes. We will also present results of annotation of a number of other pyrosequenced fungal genomes of bioenergy interest.

  6. Canary: an atomic pipeline for clinical amplicon assays.

    Science.gov (United States)

    Doig, Kenneth D; Ellul, Jason; Fellowes, Andrew; Thompson, Ella R; Ryland, Georgina; Blombery, Piers; Papenfuss, Anthony T; Fox, Stephen B

    2017-12-15

    High throughput sequencing requires bioinformatics pipelines to process large volumes of data into meaningful variants that can be translated into a clinical report. These pipelines often suffer from a number of shortcomings: they lack robustness and have many components written in multiple languages, each with a variety of resource requirements. Pipeline components must be linked together with a workflow system to achieve the processing of FASTQ files through to a VCF file of variants. Crafting these pipelines requires considerable bioinformatics and IT skills beyond the reach of many clinical laboratories. Here we present Canary, a single program that can be run on a laptop, which takes FASTQ files from amplicon assays through to an annotated VCF file ready for clinical analysis. Canary can be installed and run with a single command using Docker containerization, or run as a single JAR file on a wide range of platforms. Although it is a single utility, Canary performs all the functions present in more complex and unwieldy pipelines. All variants identified by Canary are 3'-shifted and represented in their most parsimonious form to provide a consistent nomenclature, irrespective of sequencing variation. Further, proximate in-phase variants are represented as a single HGVS 'delins' variant. This allows correct nomenclature and consequences to be ascribed to complex multi-nucleotide polymorphisms (MNPs), which are otherwise difficult to represent and interpret. Variants can also be annotated with hundreds of attributes sourced from MyVariant.info to give up-to-date details on pathogenicity, population statistics and in silico predictors. Canary has been used at the Peter MacCallum Cancer Centre in Melbourne for the last 2 years for the processing of clinical sequencing data. By encapsulating clinical features in a single, easily installed executable, Canary makes sequencing more accessible to all pathology laboratories.
Canary is available for download as source
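
    The 3'-shifting mentioned above can be illustrated for the simplest case, a deletion inside a repeat run: HGVS convention places such a deletion at its rightmost (3'-most) equivalent position. This sketch handles only deletions; a real normalizer such as Canary's also covers insertions and multi-nucleotide variants.

    ```python
    def right_shift_deletion(ref, start, length):
        """3'-shift a deletion as far right as possible within `ref`.

        ref:    reference sequence (string)
        start:  0-based start of the deleted run
        length: number of deleted bases
        Returns the normalized (rightmost equivalent) 0-based start."""
        # Shifting by one base is allowed whenever the base entering the
        # deleted window from the right equals the base leaving it on the left.
        while start + length < len(ref) and ref[start] == ref[start + length]:
            start += 1
        return start
    ```

    Deleting any one T from the run in ACGTTTTA describes the same allele; normalization reports the deletion of the last T so that all callers agree on one name.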

  7. Steam condensation induced water hammer simulations for different pipelines

    International Nuclear Information System (INIS)

    Barna, I.F.; Ezsol, G.

    2011-01-01

    We investigate steam condensation induced water hammer (CIWH) phenomena and present theoretical results for different kind of pipelines. We analyze the process with the WAHA3 model based on two-phase flow six first-order partial differential equations that present one dimensional, surface averaged mass, momentum and energy balances. A second order accurate high-resolution shock-capturing numerical scheme was applied with different kind of limiters in the numerical calculations. At first, we present calculations for various pipelines in the VVER-440-312 type nuclear reactor. Our recent calculation clearly shows that the six conditions of Griffith are only necessary conditions for CIWH but not sufficient. As second results we performed calculations for various geometries and compare with the theory of Chun. (author)

  8. ADEPt, a semantically-enriched pipeline for extracting adverse drug events from free-text electronic health records.

    Directory of Open Access Journals (Sweden)

    Ehtesham Iqbal

    Full Text Available Adverse drug events (ADEs) are unintended responses to medical treatment. They can greatly affect a patient's quality of life and present a substantial burden on healthcare. Although electronic health records (EHRs) document a wealth of information relating to ADEs, they frequently store it in unstructured or semi-structured free-text narrative, requiring Natural Language Processing (NLP) techniques to mine the relevant information. Here we present a rule-based ADE detection and classification pipeline built and tested on a large psychiatric corpus comprising 264k patients, using the de-identified EHRs of four UK-based psychiatric hospitals. The pipeline uses characteristics specific to psychiatric EHRs to guide the annotation process, and distinguishes: (a) the temporal value associated with the ADE mention (whether it is historical or present), (b) the categorical value of the ADE (whether it is assertive, hypothetical, retrospective or a general discussion) and (c) the implicit contextual value, where the status of the ADE is deduced from surrounding indicators rather than explicitly stated. We manually created the rulebase in collaboration with clinicians and pharmacists by studying ADE mentions in various types of clinical notes. We evaluated the open-source Adverse Drug Event annotation Pipeline (ADEPt) using 19 ADEs specific to antipsychotic and antidepressant medication. The ADEs chosen vary in severity, regularity and persistence. The average F-measure and accuracy achieved by our tool across all tested ADEs were 0.83 and 0.83, respectively. In addition to annotation power, the ADEPt pipeline presents an improvement to the state-of-the-art context-discerning algorithm, ConText.
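
    A minimal, hypothetical version of the context discernment described above: trigger phrases to the left of an ADE mention decide whether it is hypothetical, historical, negated, or simply present. The trigger lists here are invented for illustration; ADEPt's clinician-curated rulebase (and the ConText algorithm it improves on) is far richer, with scope termination and right-context rules.

    ```python
    # Illustrative trigger phrases only; a real rulebase is much larger.
    TRIGGERS = {
        "hypothetical": ("if ", "should ", "in case of", "risk of"),
        "historical": ("history of", "previously", "in the past"),
        "negated": ("no evidence of", "denies", "without"),
    }

    def classify_mention(sentence, mention):
        """Assign a contextual status to an ADE mention by scanning the text
        to its left for trigger phrases; default to 'present'."""
        s = sentence.lower()
        idx = s.find(mention.lower())
        if idx == -1:
            return "not found"
        left = s[:idx]
        for status, phrases in TRIGGERS.items():
            if any(p in left for p in phrases):
                return status
        return "present"
    ```

    For instance, "history of akathisia" is tagged historical while a bare "reports tremor" stays present, which mirrors the temporal distinction (a) in the pipeline description.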

  9. Developing eThread Pipeline Using SAGA-Pilot Abstraction for Large-Scale Structural Bioinformatics

    Directory of Open Access Journals (Sweden)

    Anjani Ragothaman

    2014-01-01

    Full Text Available While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because the predicted structural information can uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that uses computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data- and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on these results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable to small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.
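
    The pilot pattern at the heart of this pipeline, acquiring a fixed pool of workers once and streaming many independent tasks through it, can be sketched with a thread pool standing in for SAGA-Pilot and a stub function standing in for one eThread run. The function name and mock score are assumptions for illustration only.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def thread_sequence(seq):
        """Stand-in for one compute-heavy threading task (modeling the
        structure of a single protein sequence); returns a mock score."""
        return seq, len(seq) / 100.0

    def run_pilot(sequences, workers=4):
        """Fan tasks out over a fixed pool of workers, pilot-job style: the
        pool is acquired once and tasks are pulled from a shared queue,
        decoupling the number of tasks from the number of resources."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return dict(pool.map(thread_sequence, sequences))
    ```

    The key design point is that adding sequences never requests new resources; the same worker pool absorbs the variation in workload, which is what makes the approach scale to genome-sized task counts.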

  10. Rnnotator: an automated de novo transcriptome assembly pipeline from stranded RNA-Seq reads

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Jeffrey; Bruno, Vincent M.; Fang, Zhide; Meng, Xiandong; Blow, Matthew; Zhang, Tao; Sherlock, Gavin; Snyder, Michael; Wang, Zhong

    2010-11-19

    Background: Comprehensive annotation and quantification of transcriptomes are outstanding problems in functional genomics. While high throughput mRNA sequencing (RNA-Seq) has emerged as a powerful tool for addressing these problems, its success is dependent upon the availability and quality of reference genome sequences, thus limiting the organisms to which it can be applied. Results: Here, we describe Rnnotator, an automated software pipeline that generates transcript models by de novo assembly of RNA-Seq data without the need for a reference genome. We have applied the Rnnotator assembly pipeline to two yeast transcriptomes and compared the results to the reference gene catalogs of these organisms. The contigs produced by Rnnotator are highly accurate (95%) and reconstruct full-length genes for the majority of the existing gene models (54.3%). Furthermore, our analyses revealed many novel transcribed regions that are absent from well annotated genomes, suggesting Rnnotator serves as a complementary approach to analysis based on a reference genome for comprehensive transcriptomics. Conclusions: These results demonstrate that the Rnnotator pipeline is able to reconstruct full-length transcripts in the absence of a complete reference genome.

  11. WGSSAT: A High-Throughput Computational Pipeline for Mining and Annotation of SSR Markers From Whole Genomes.

    Science.gov (United States)

    Pandey, Manmohan; Kumar, Ravindra; Srivastava, Prachi; Agarwal, Suyash; Srivastava, Shreya; Nagpure, Naresh S; Jena, Joy K; Kushwaha, Basdeo

    2018-03-16

    Mining and characterization of Simple Sequence Repeat (SSR) markers from whole genomes provide valuable information about the biological significance of SSR distribution and also facilitate development of markers for genetic analysis. The Whole Genome Sequencing (WGS)-SSR Annotation Tool (WGSSAT) is a graphical user interface pipeline, developed using Java NetBeans and Perl scripts, that simplifies the process of SSR mining and characterization. WGSSAT takes input in FASTA format and automates the prediction of genes, noncoding RNA (ncRNA), core genes, repeats and SSRs from whole genomes, followed by mapping of the predicted SSRs onto a genome (classified according to genes, ncRNA, repeats, exonic, intronic, and core gene regions) along with primer identification and mining of cross-species markers. The program also generates a detailed statistical report along with visualization of mapped SSRs, genes, core genes, and RNAs. The features of WGSSAT were demonstrated using Takifugu rubripes data. This yielded a total of 139 057 SSRs, of which 113 703 SSR primer pairs were uniquely amplified in silico on the T. rubripes (fugu) genome. Of the 113 703 mined SSRs, 81 463 were from the coding region (including 4286 exonic and 77 177 intronic), 7 from RNA, and 267 from core genes of fugu, whereas 105 641 SSRs and 601 SSR primer pairs were uniquely mapped onto the medaka genome. WGSSAT is tested under Ubuntu Linux. The source code, documentation, user manual, example dataset and scripts are available online at https://sourceforge.net/projects/wgssat-nbfgr.
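
    The mining step, finding perfect repeats of short motifs, reduces to a regular-expression scan, sketched minimally below. WGSSAT additionally classifies SSRs by genomic context and designs primers; also note that this naive scan reports a repeat redundantly under motifs that are multiples of a shorter motif.

    ```python
    import re

    def find_ssrs(seq, min_repeats=3, max_motif=6):
        """Scan for perfect SSRs: motifs of 1..max_motif bp repeated at least
        min_repeats times consecutively.  Returns (start, motif, copies)."""
        ssrs = []
        for k in range(1, max_motif + 1):
            # ([ACGT]{k}) captures the motif; \2{min_repeats-1,} demands the
            # remaining consecutive copies immediately after it.
            pattern = re.compile(r"(([ACGT]{%d})\2{%d,})" % (k, min_repeats - 1))
            for m in pattern.finditer(seq):
                motif = m.group(2)
                copies = len(m.group(1)) // k
                ssrs.append((m.start(), motif, copies))
        return ssrs
    ```

    For example, the run ACACACAC inside a longer sequence is reported as motif AC with four copies starting at its first base.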

  12. Ubiquitous Annotation Systems

    DEFF Research Database (Denmark)

    Hansen, Frank Allan

    2006-01-01

    Ubiquitous annotation systems allow users to annotate physical places, objects, and persons with digital information. Especially in the field of location-based information systems much work has been done to implement adaptive and context-aware systems, but few efforts have focused on the general requirements for linking information to objects in both physical and digital space. This paper surveys annotation techniques from open hypermedia systems, Web-based annotation systems, and mobile and augmented reality systems to illustrate different approaches to four central challenges ubiquitous annotation systems have to deal with: anchoring, structuring, presentation, and authoring. Through a number of examples each challenge is discussed and HyCon, a context-aware hypermedia framework developed at the University of Aarhus, Denmark, is used to illustrate an integrated approach to ubiquitous annotations.

  13. Evaluating Hierarchical Structure in Music Annotations.

    Science.gov (United States)

    McFee, Brian; Nieto, Oriol; Farbood, Morwaread M; Bello, Juan Pablo

    2017-01-01

    Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  14. Evaluating Hierarchical Structure in Music Annotations

    Directory of Open Access Journals (Sweden)

    Brian McFee

    2017-08-01

    Full Text Available Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.
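
    For contrast with the hierarchical metric derived in the paper, the standard "flat" boundary evaluation it generalizes can be sketched as precision/recall over boundary times matched within a tolerance window (the common MIR convention; this sketch does not implement the paper's multi-level extension).

    ```python
    def boundary_f_measure(reference, estimate, tol=0.5):
        """Precision, recall and F-measure for segment boundaries, where an
        estimated boundary counts as a hit if it lies within `tol` seconds of
        an unmatched reference boundary (greedy one-to-one matching)."""
        def hits(a, b):
            matched, used = 0, set()
            for t in a:
                for j, u in enumerate(b):
                    if j not in used and abs(t - u) <= tol:
                        matched += 1
                        used.add(j)
                        break
            return matched
        p = hits(estimate, reference) / len(estimate) if estimate else 0.0
        r = hits(reference, estimate) / len(reference) if reference else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f
    ```

    Because this metric only compares one annotation level at a time, two annotators describing the same piece at different temporal scales score poorly against each other, which is exactly the limitation the paper's holistic multi-level metric addresses.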

  15. MEETING: Chlamydomonas Annotation Jamboree - October 2003

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, Arthur R

    2007-04-13

    Shotgun sequencing of the nuclear genome of Chlamydomonas reinhardtii (Chlamydomonas throughout) was performed at an approximate 10X coverage by JGI. Roughly half of the genome is now contained on 26 scaffolds, all of which are at least 1.6 Mb, and the coverage of the genome is ~95%. There are now over 200,000 cDNA sequence reads that we have generated as part of the Chlamydomonas genome project (Grossman, 2003; Shrager et al., 2003; Grossman et al. 2007; Merchant et al., 2007); other sequences have also been generated by the Kazusa sequence group (Asamizu et al., 1999; Asamizu et al., 2000) or individual laboratories that have focused on specific genes. Shrager et al. (2003) placed the reads into distinct contigs (an assemblage of reads with overlapping nucleotide sequences), and contigs that group together as part of the same genes have been designated ACEs (assemblies of contigs generated from EST information). All of the reads have also been mapped to the Chlamydomonas nuclear genome, the cDNAs and their corresponding genomic sequences have been reassembled, and the resulting assemblage is called an ACEG (an Assembly of contiguous EST sequences supported by genomic sequence) (Jain et al., 2007). Most of the unique genes or ACEGs are also represented by gene models that have been generated by the Joint Genome Institute (JGI, Walnut Creek, CA). These gene models have been placed onto the DNA scaffolds and are presented as a track on the Chlamydomonas genome browser associated with the genome portal (http://genome.jgi-psf.org/Chlre3/Chlre3.home.html). Ultimately, the meeting grant awarded by DOE has helped enormously in the development of an annotation pipeline (a set of guidelines used in the annotation of genes) and resulted in high quality annotation of over 4,000 genes; the annotators were from both Europe and the USA. Some of the people who led the annotation initiative were Arthur Grossman, Olivier Vallon, and Sabeeha Merchant (with many individual

  16. Natural frequency analysis of fluid conveying pipeline with different boundary conditions

    International Nuclear Information System (INIS)

    Huang Yimin; Liu Yongshou; Li Baohui; Li Yanjiang; Yue Zhufeng

    2010-01-01

    In this study, the natural frequency of fluid-structure interaction in a pipeline conveying fluid is investigated by the eliminated element-Galerkin method, and the natural frequency equations for different boundary conditions are obtained. Furthermore, the expressions for the first natural frequency are simplified for the different boundary conditions. Taking the Coriolis force into account, the natural frequency of a straight pipe simply supported at both ends is studied as an example. For a given boundary condition, the four parameters (mass, stiffness, length and flow velocity) that determine the natural frequency of a pipeline conveying fluid are studied in detail, and the results indicate that the effect of the Coriolis force on the natural frequency is negligible. The relationship between the natural frequency of the pipeline conveying fluid and that of an Euler beam is then analyzed. Finally, a dimensionless flow velocity and its limit values are presented, which can be used to estimate the effect of the flow velocity on the natural frequency. The conclusions are well suited to nuclear plants and other industrial fields.
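
    For the simply supported case discussed above, a commonly quoted textbook closed form (with the Coriolis term neglected, consistent with the abstract's finding that its effect is negligible) shows directly how a dimensionless flow velocity enters the first natural frequency. The symbols below are standard notation and an assumption on our part, not taken from the paper:

    ```latex
    % EI: flexural rigidity, L: span, U: flow velocity,
    % m_p, m_f: pipe and fluid mass per unit length.
    \omega_1 \;=\; \left(\frac{\pi}{L}\right)^{2}
      \sqrt{\frac{EI}{m_p + m_f}}\;
      \sqrt{1 - \frac{u^{2}}{\pi^{2}}},
    \qquad
    u \;=\; U L \sqrt{\frac{m_f}{EI}} .
    ```

    The frequency vanishes at the critical dimensionless velocity u = pi (the static buckling limit), which is the kind of limit value the abstract proposes for estimating the effect of flow velocity on natural frequency.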

  17. LS-SNP/PDB: annotated non-synonymous SNPs mapped to Protein Data Bank structures.

    Science.gov (United States)

    Ryan, Michael; Diekhans, Mark; Lien, Stephanie; Liu, Yun; Karchin, Rachel

    2009-06-01

    LS-SNP/PDB is a new WWW resource for genome-wide annotation of human non-synonymous (amino acid changing) SNPs. It serves high-quality protein graphics rendered with UCSF Chimera molecular visualization software. The system is kept up-to-date by an automated, high-throughput build pipeline that systematically maps human nsSNPs onto Protein Data Bank structures and annotates several biologically relevant features. LS-SNP/PDB is available at http://ls-snp.icm.jhu.edu/ls-snp-pdb and via links from Protein Data Bank (PDB) biology and chemistry tabs, UCSC Genome Browser Gene Details and SNP Details pages, and PharmGKB Gene Variants Downloads/Cross-References pages.

  18. Using Nonexperts for Annotating Pharmacokinetic Drug-Drug Interaction Mentions in Product Labeling: A Feasibility Study.

    Science.gov (United States)

    Hochheiser, Harry; Ning, Yifan; Hernandez, Andres; Horn, John R; Jacobson, Rebecca; Boyce, Richard D

    2016-04-11

    Because vital details of potential pharmacokinetic drug-drug interactions are often described in free-text structured product labels, manual curation is a necessary but expensive step in the development of electronic drug-drug interaction information resources. The use of nonexperts to annotate potential drug-drug interaction (PDDI) mentions in drug product labels may be a means of lessening the burden of manual curation. Our goal was to explore the practicality of using nonexpert participants to annotate drug-drug interaction descriptions from structured product labels. By presenting annotation tasks to both pharmacy experts and relatively naïve participants, we hoped to demonstrate the feasibility of using nonexpert annotators for drug-drug information annotation. We were also interested in exploring whether and to what extent natural language processing (NLP) preannotation helped improve task completion time, accuracy, and subjective satisfaction. Two experts and 4 nonexperts were asked to annotate 208 structured product label sections under 4 conditions completed sequentially: (1) no NLP assistance, (2) preannotation of drug mentions, (3) preannotation of drug mentions and PDDIs, and (4) a repeat of the no-assistance condition. Results were evaluated within the 2 groups and relative to an existing gold standard. Participants were asked to report the time required to complete tasks and their perceptions of task difficulty. One of the experts and 3 of the nonexperts completed all tasks. Annotation results from the nonexpert group were relatively strong in every scenario and better than the performance of the NLP pipeline. The expert and 2 of the nonexperts were able to complete most tasks in less than 3 hours. Usability perceptions were generally positive (3.67 for the expert, mean of 3.33 for nonexperts). The results suggest that nonexpert annotation might be a feasible option for comprehensive labeling of annotated PDDIs across a broader

  19. [Comparison of gut microbiotal compositional analysis of patients with irritable bowel syndrome through different bioinformatics pipelines].

    Science.gov (United States)

    Zhu, S W; Liu, Z J; Li, M; Zhu, H Q; Duan, L P

    2018-04-18

    To assess whether the same biological conclusions, and thus the same diagnostic or curative implications, regarding the microbial composition of irritable bowel syndrome (IBS) patients could be reached through different bioinformatics pipelines, we used two common pipelines (Uparse V2.0 and Mothur V1.39.5) to analyze the same fecal microbial 16S rRNA high-throughput sequencing data. The two pipelines were used to analyze the diversity and richness of fecal microbial 16S rRNA high-throughput sequencing data of 27 samples, including 9 healthy controls (HC group) and 9 diarrhea IBS patients sampled before (IBS group) and after Rifaximin treatment (IBS-treatment, IBSt group). Analyses such as microbial diversity, principal co-ordinates analysis (PCoA), nonmetric multidimensional scaling (NMDS) and linear discriminant analysis effect size (LEfSe) were used to find the microbial differences between HC vs. IBS and IBS vs. IBSt. (1) Microbial composition comparison of the 27 samples in the two pipelines showed significant variations at both family and genus levels but no significant variation at phylum level; (2) There was no significant difference in the comparison of HC vs. IBS or IBS vs. IBSt (Uparse: HC vs. IBS, F=0.98, P=0.445; IBS vs. IBSt, F=0.47, P=0.926; Mothur: HC vs. IBS, F=0.82, P=0.646; IBS vs. IBSt, F=0.37, P=0.961). The Shannon index was significantly decreased in IBSt; (3) Both pipelines distinguished the significantly enriched genera between the HC and IBS groups. For example, Nitrosomonas and Paraprevotella increased while Pseudoalteromonadaceae and Anaerotruncus decreased in the HC group through the Uparse pipeline, whereas Roseburia 62 increased while Butyricicoccus and Moraxellaceae decreased in the HC group through the Mothur pipeline. Only the Uparse pipeline could pick out significant genera between IBS and IBSt, such as Pseudobutyricibrio, Clostridiaceae 1 and Clostridiumsensustricto 1. There were taxonomic and phylogenetic diversity differences between the two
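
    The Shannon index whose post-treatment decrease is reported above is computed from taxon counts as follows (natural-log convention assumed; some tools use log2):

    ```python
    from math import log

    def shannon_index(counts):
        """Shannon diversity H' = -sum(p_i * ln p_i) over taxon counts.
        Zero counts are ignored; a single-taxon community scores 0."""
        total = sum(counts)
        return -sum((c / total) * log(c / total) for c in counts if c > 0)
    ```

    Two equally abundant taxa give H' = ln 2 ≈ 0.693; a drop in the index after treatment therefore reflects a community becoming less even, fewer effective taxa, or both.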

  20. Re-annotation of the genome sequence of Helicobacter pylori 26695

    Directory of Open Access Journals (Sweden)

    Resende Tiago

    2013-12-01

    Full Text Available Helicobacter pylori is a pathogenic bacterium that colonizes the human epithelia, causing duodenal and gastric ulcers, and gastric cancer. The genome of H. pylori 26695 has been previously sequenced and annotated. In addition, two genome-scale metabolic models have been developed. In order to maintain accurate and relevant information on coding sequences (CDS) and to retrieve new information, the assignment of new functions to Helicobacter pylori 26695's genes was performed in this work. The use of software tools, on-line databases and an annotation pipeline for inspecting each gene allowed the attribution of validated EC numbers and TC numbers to metabolic genes encoding enzymes and transport proteins, respectively. 1212 protein-encoding genes were identified in this annotation, 712 being metabolic genes and 500 non-metabolic, while 191 new functions were assigned to the CDS of this bacterium. This information provides relevant biological information for the scientific community dealing with this organism and can be used as the basis for a new metabolic model reconstruction.

  1. AnnoLnc: a web server for systematically annotating novel human lncRNAs.

    Science.gov (United States)

    Hou, Mei; Tang, Xing; Tian, Feng; Shi, Fangyuan; Liu, Fenglin; Gao, Ge

    2016-11-16

    Long noncoding RNAs (lncRNAs) have been shown to play essential roles in almost every important biological process through multiple mechanisms. Although the repertoire of human lncRNAs has rapidly expanded, their biological function and regulation remain largely elusive, calling for a systematic and integrative annotation tool. Here we present AnnoLnc ( http://annolnc.cbi.pku.edu.cn ), a one-stop portal for systematically annotating novel human lncRNAs. Based on more than 700 data sources and various tool chains, AnnoLnc enables a systematic annotation covering genomic location, secondary structure, expression patterns, transcriptional regulation, miRNA interaction, protein interaction, genetic association and evolution. An intuitive web interface is available for interactive analysis through both desktops and mobile devices, and programmers can further integrate AnnoLnc into their pipeline through standard JSON-based Web Service APIs. To the best of our knowledge, AnnoLnc is the only web server to provide on-the-fly and systematic annotation for newly identified human lncRNAs. Compared with similar tools, the annotation generated by AnnoLnc covers a much wider spectrum with intuitive visualization. Case studies demonstrate the power of AnnoLnc in not only rediscovering known functions of human lncRNAs but also inspiring novel hypotheses.

  2. MicroScope: a platform for microbial genome annotation and comparative genomics.

    Science.gov (United States)

    Vallenet, D; Engelen, S; Mornico, D; Cruveiller, S; Fleury, L; Lajus, A; Rouy, Z; Roche, D; Salvignol, G; Scarpelli, C; Médigue, C

    2009-01-01

    The initial outcome of genome sequencing is the creation of long text strings written in a four letter alphabet. The role of in silico sequence analysis is to assist biologists in the act of associating biological knowledge with these sequences, allowing investigators to make inferences and predictions that can be tested experimentally. A wide variety of software is available to the scientific community, and can be used to identify genomic objects, before predicting their biological functions. However, only a limited number of biologically interesting features can be revealed from an isolated sequence. Comparative genomics tools, on the other hand, by bringing together the information contained in numerous genomes simultaneously, allow annotators to make inferences based on the idea that evolution and natural selection are central to the definition of all biological processes. We have developed the MicroScope platform in order to offer a web-based framework for the systematic and efficient revision of microbial genome annotation and comparative analysis (http://www.genoscope.cns.fr/agc/microscope). Starting with the description of the flow chart of the annotation processes implemented in the MicroScope pipeline, and the development of traditional and novel microbial annotation and comparative analysis tools, this article emphasizes the essential role of expert annotation as a complement of automatic annotation. Several examples illustrate the use of implemented tools for the review and curation of annotations of both new and publicly available microbial genomes within MicroScope's rich integrated genome framework. The platform is used as a viewer in order to browse updated annotation information of available microbial genomes (more than 440 organisms to date), and in the context of new annotation projects (117 bacterial genomes). The human expertise gathered in the MicroScope database (about 280,000 independent annotations) contributes to improve the quality of

  3. Pipeline system operability review

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, Kjell [Det Norske Veritas (Norway); Davies, Ray [CC Technologies, Dublin, OH (United States)

    2005-07-01

    Pipeline operators are continuously working to improve the safety of their systems and operations. In the US, both liquid and gas pipeline operators have worked with the regulators over many years to develop more systematic approaches to pipeline integrity management. To successfully manage pipeline integrity, vast amounts of data from different sources need to be collected, overlaid and analyzed in order to assess the current condition and predict future degradation. The efforts undertaken by the operators have had a significant impact on pipeline safety; nevertheless, during recent years we have seen a number of major high profile accidents. One can therefore ask how effective the pipeline integrity management systems and processes are. This paper will present one methodology, 'The Pipeline System Operability Review', that can evaluate and rate the effectiveness of both the management systems and procedures, as well as the technical condition of the hardware. The results from the review can be used to compare the performance of different pipelines within one operating company, as well as to benchmark against international best practices. (author)


  5. Studying Oogenesis in a Non-model Organism Using Transcriptomics: Assembling, Annotating, and Analyzing Your Data.

    Science.gov (United States)

    Carter, Jean-Michel; Gibbs, Melanie; Breuker, Casper J

    2016-01-01

    This chapter provides a guide to processing and analyzing RNA-Seq data in a non-model organism. This approach was implemented for studying oogenesis in the Speckled Wood Butterfly Pararge aegeria. We focus in particular on how to perform a more informative primary annotation of your non-model organism by implementing our multi-BLAST annotation strategy. We also provide a general guide to other essential steps in the next-generation sequencing analysis workflow. Before undertaking these methods, we recommend you familiarize yourself with command line usage and fundamental concepts of database handling. Most of the operations in the primary annotation pipeline can be performed in Galaxy (or equivalent standalone versions of the tools) and through the use of common database operations (e.g. to remove duplicates) but other equivalent programs and/or custom scripts can be implemented for further automation.
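
    The core of a multi-database BLAST annotation strategy of this kind can be sketched as keeping, for each transcript, the best-scoring hit across several databases along with its provenance. The database names and hit records below are hypothetical, not the chapter's actual data:

    ```python
    def best_annotation(hits_by_db):
        """Pick the best hit (lowest e-value) for one transcript across
        several BLAST databases, keeping the source database for provenance."""
        best = None
        for db, hits in hits_by_db.items():
            for hit in hits:
                if best is None or hit["evalue"] < best["evalue"]:
                    best = {"db": db, **hit}
        return best

    # Hypothetical tabular-BLAST-style hits for one transcript.
    hits = {
        "swissprot": [{"subject": "P12345", "evalue": 1e-30}],
        "nr":        [{"subject": "XP_0001", "evalue": 1e-45}],
        "flybase":   [{"subject": "FBpp0070", "evalue": 1e-20}],
    }
    print(best_annotation(hits)["db"])  # the database holding the lowest e-value
    ```

    In practice one would also record all informative hits per database, which is what makes the multi-BLAST annotation "more informative" than a single-database search.
    
    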

  6. Development and Applications of Pipeline Steel in Long-Distance Gas Pipeline of China

    Science.gov (United States)

    Chunyong, Huo; Yang, Li; Lingkang, Ji

    Over the past decades, with the wide use of microalloying and Thermal Mechanical Control Processing (TMCP) technology, a good match of strength, toughness, plasticity and weldability has been achieved in pipeline steel, so that oil and gas pipelines have been greatly developed in China to meet the strong domestic demand for energy. In this paper, the development history of pipeline steel and gas pipelines in China is briefly reviewed. The microstructure characteristics and mechanical performance of pipeline steels used in some representative Chinese gas pipelines built at different stages are summarized. Through an analysis of the evolution of the pipeline service environment, some prospective development trends in the application of pipeline steel in China are also presented.

  7. De novo assembly and annotation of the Asian tiger mosquito (Aedes albopictus) repeatome with dnaPipeTE from raw genomic reads and comparative analysis with the yellow fever mosquito (Aedes aegypti).

    Science.gov (United States)

    Goubert, Clément; Modolo, Laurent; Vieira, Cristina; ValienteMoro, Claire; Mavingui, Patrick; Boulesteix, Matthieu

    2015-03-11

    Repetitive DNA, including transposable elements (TEs), is found throughout eukaryotic genomes. Annotating and assembling the "repeatome" during genome-wide analysis often poses a challenge. To address this problem, we present dnaPipeTE, a new bioinformatics pipeline that uses a sample of raw genomic reads. It produces precise estimates of repeated DNA content and TE consensus sequences, as well as the relative ages of TE families. We show that dnaPipeTE performs well using very low coverage sequencing in different genomes, losing accuracy only with old TE families. We applied this pipeline to the genome of the Asian tiger mosquito Aedes albopictus, an invasive species of human health interest, for which the genome size is estimated to be over 1 Gbp. Using dnaPipeTE, we showed that this species harbors a large (50% of the genome) and potentially active repeatome with an overall TE class and order composition similar to that of Aedes aegypti, the yellow fever mosquito. However, intraorder dynamics show clear distinctions between the two species, with differences at the TE family level. Our pipeline's ability to manage the repeatome annotation problem will make it helpful for new or ongoing assembly projects, and our results will benefit future genomic studies of A. albopictus. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
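
    The intuition behind estimating repeat content from a low-coverage read sample is that sequences seen many times in a small sample must be repetitive in the genome. A toy simplification of that idea (not dnaPipeTE's actual algorithm), with invented cluster sizes:

    ```python
    def repeat_fraction(cluster_sizes, total_reads, min_cluster=5):
        """Crude repeat-content estimate: reads falling into large clusters
        (i.e. sequences sampled many times despite low coverage) are
        assumed to come from repetitive DNA."""
        repetitive = sum(s for s in cluster_sizes if s >= min_cluster)
        return repetitive / total_reads

    # Toy cluster sizes from a hypothetical sample of 85 raw reads.
    sizes = [50, 20, 8, 3, 2, 1, 1]
    print(round(repeat_fraction(sizes, total_reads=sum(sizes)), 3))
    ```

    dnaPipeTE additionally assembles the clustered reads into TE consensus sequences and classifies them, which this sketch omits.
    
    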

  8. SeqAnt: A web service to rapidly identify and annotate DNA sequence variations

    Directory of Open Access Journals (Sweden)

    Patel Viren

    2010-09-01

    Full Text Available Abstract Background The enormous throughput and low cost of second-generation sequencing platforms now allow research and clinical geneticists to routinely perform single experiments that identify tens of thousands to millions of variant sites. Existing methods to annotate variant sites using information from publicly available databases via web browsers are too slow to be useful for the large sequencing datasets being routinely generated by geneticists. Because sequence annotation of variant sites is required before functional characterization can proceed, the lack of a high-throughput pipeline to efficiently annotate variant sites can act as a significant bottleneck in genetics research. Results SeqAnt (Sequence Annotator) is an open source web service and software package that rapidly annotates DNA sequence variants and identifies recessive or compound heterozygous loci in human, mouse, fly, and worm genome sequencing experiments. Variants are characterized with respect to their functional type, frequency, and evolutionary conservation. Annotated variants can be viewed on a web browser, downloaded in a tab-delimited text file, or directly uploaded in a BED format to the UCSC genome browser. To demonstrate the speed of SeqAnt, we annotated a series of publicly available datasets that ranged in size from 37 to 3,439,107 variant sites. The total time to completely annotate these data ranged from 0.17 seconds to 28 minutes 49.8 seconds. Conclusion SeqAnt is an open source web service and software package that overcomes a critical bottleneck facing research and clinical geneticists using second-generation sequencing platforms. SeqAnt will prove especially useful for those investigators who lack dedicated bioinformatics personnel or infrastructure in their laboratories.
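
    The core of interval-based variant annotation of the kind SeqAnt performs can be sketched as looking up each variant position against known feature intervals and emitting a BED-style record. The intervals and coordinates below are invented, not SeqAnt's databases:

    ```python
    # Hypothetical exon intervals (0-based, half-open) on one chromosome.
    features = [(100, 200, "geneA_exon1"), (300, 420, "geneA_exon2")]

    def annotate_variant(chrom, pos):
        """Return a BED-style (chrom, start, end, name) record for a variant."""
        for start, end, name in features:
            if start <= pos < end:
                return (chrom, pos, pos + 1, name)
        return (chrom, pos, pos + 1, "intergenic")

    for pos in (150, 250, 400):
        print("\t".join(map(str, annotate_variant("chr1", pos))))
    ```

    A production annotator replaces the linear scan with an indexed interval structure so that millions of variants can be processed in seconds.
    
    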

  9. Supporting Student Differences in Listening Comprehension and Vocabulary Learning with Multimedia Annotations

    Science.gov (United States)

    Jones, Linda C.

    2009-01-01

    This article describes how effectively multimedia learning environments can assist second language (L2) students of different spatial and verbal abilities with listening comprehension and vocabulary learning. In particular, it explores how written and pictorial annotations interacted with high/low spatial and verbal ability learners and thus…

  10. Modelling and Simulation of Free Floating Pig for Different Pipeline Inclination Angles

    Directory of Open Access Journals (Sweden)

    Woldemichael Dereje Engida

    2016-01-01

    Full Text Available This paper presents the modelling and simulation of a free floating pig to determine the flow parameters needed to avoid pig stalling in pigging operations. A free floating spherical pig was designed and equipped with the necessary sensors to detect leaks along the pipeline. The free floating pig has no internal or external power supply to navigate through the pipeline; instead, it is driven by the flowing medium. In order to avoid stalling of the pig, it is essential to conduct simulations to determine the necessary flow parameters for different inclination angles. Accordingly, pipeline sections with inclinations of 0°, 15°, 30°, 45°, 60°, 75°, and 90° were modelled and simulated using ANSYS FLUENT 15.0 with water and oil as the working media. For each case, the minimum velocity required to propel the free floating pig through the inclined section was determined. In addition, the trajectory of the free floating pig was visualized in the simulation.
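
    A back-of-the-envelope estimate of the minimum propelling velocity at a given inclination can be made from a simplified force balance: drag on the sphere must at least match the along-pipe component of its submerged weight. This is not the paper's CFD model; the pig mass, diameter, and drag coefficient below are assumed values for illustration only:

    ```python
    import math

    def min_velocity(m_pig, d_pig, rho, theta_deg, cd=0.45, g=9.81):
        """Simplified balance for a denser-than-fluid sphere on an incline:
        0.5 * rho * cd * A * v^2 = (m - rho*V) * g * sin(theta)."""
        area = math.pi * d_pig**2 / 4        # frontal area of the sphere
        volume = math.pi * d_pig**3 / 6      # sphere volume (for buoyancy)
        buoyant_weight = (m_pig - rho * volume) * g
        along_pipe = max(buoyant_weight * math.sin(math.radians(theta_deg)), 0.0)
        return math.sqrt(2 * along_pipe / (rho * cd * area))

    # Assumed 0.6 kg, 0.1 m sphere in water, at the study's inclinations.
    for theta in (0, 15, 30, 45, 60, 75, 90):
        print(theta, round(min_velocity(0.6, 0.1, 1000.0, theta), 3))
    ```

    The estimate grows monotonically with inclination, matching the intuition that steeper sections need faster flow to avoid stalling; the CFD simulations in the paper capture the wall and wake effects this sketch ignores.
    
    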

  11. Fish the ChIPs: a pipeline for automated genomic annotation of ChIP-Seq data

    Directory of Open Access Journals (Sweden)

    Minucci Saverio

    2011-10-01

    Full Text Available Abstract Background High-throughput sequencing is generating massive amounts of data at a pace that largely exceeds the throughput of data analysis routines. Here we introduce Fish the ChIPs (FC), a computational pipeline aimed at a broad public of users and designed to perform complete ChIP-Seq data analysis of an unlimited number of samples, thus increasing throughput and reproducibility while saving time. Results Starting from short read sequences, FC performs the following steps: (1) quality controls, (2) alignment to a reference genome, (3) peak calling, (4) genomic annotation, (5) generation of raw signal tracks for visualization on the UCSC and IGV genome browsers. FC exploits some of the fastest and most effective tools available today. Installation on a Mac platform requires very basic computational skills, while configuration and usage are supported by a user-friendly graphical user interface. Alternatively, FC can be compiled from the source code on any Unix machine and then run with the possibility of customizing each parameter through a simple configuration text file that can be generated using a dedicated user-friendly web form. Considering the execution time, FC can be run on a desktop machine, even though the use of a computer cluster is recommended for analyses of large batches of data. FC is perfectly suited to work with data coming from Illumina Solexa Genome Analyzers or ABI SOLiD, and its usage can potentially be extended to any sequencing platform. Conclusions Compared to existing tools, FC has two main advantages that make it suitable for a broad range of users. First of all, it can be installed and run by wet biologists on a Mac machine. Besides, it can handle an unlimited number of samples, being convenient for large analyses. In this context, computational biologists can increase the reproducibility of their ChIP-Seq data analyses while saving time for downstream analyses. Reviewers This article was reviewed by Gavin Huttley, George
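
    A "simple configuration text file" of the kind mentioned above can be as little as key=value pairs read by the pipeline driver. The keys below are illustrative, not FC's actual parameter names:

    ```python
    # Hypothetical key=value configuration for a ChIP-Seq pipeline run.
    CONFIG_TEXT = """\
    genome=mm9
    aligner_threads=4
    peak_caller=macs
    make_tracks=true
    """

    def parse_config(text):
        """Parse key=value lines, skipping blanks and '#' comments."""
        cfg = {}
        for line in text.splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                cfg[key.strip()] = value.strip()
        return cfg

    cfg = parse_config(CONFIG_TEXT)
    print(cfg["genome"], cfg["peak_caller"])
    ```

    Keeping the whole run described in one plain-text file is what makes such pipelines reproducible: the same file re-runs the same analysis.
    
    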

  12. Exploiting proteomic data for genome annotation and gene model validation in Aspergillus niger

    Directory of Open Access Journals (Sweden)

    Grigoriev Igor V

    2009-02-01

    Full Text Available Abstract Background Proteomic data is a potentially rich, but arguably unexploited, data source for genome annotation. Peptide identifications from tandem mass spectrometry provide prima facie evidence for gene predictions and can discriminate over a set of candidate gene models. Here we apply this to the recently sequenced Aspergillus niger fungal genome from the Joint Genome Institute (JGI) and another predicted protein set from another A. niger sequence. Tandem mass spectra (MS/MS) were acquired from 1D gel electrophoresis bands and searched against all available gene models using Average Peptide Scoring (APS) and reverse database searching to produce confident identifications at an acceptable false discovery rate (FDR). Results 405 identified peptide sequences were mapped to 214 different A. niger genomic loci to which 4093 predicted gene models clustered, 2872 of which contained the mapped peptides. Interestingly, 13 (6%) of these loci either had no preferred predicted gene model or the genome annotators' chosen "best" model for that genomic locus was not found to be the most parsimonious match to the identified peptides. The peptides identified also boosted confidence in predicted gene structures spanning 54 introns from different gene models. Conclusion This work highlights the potential of integrating experimental proteomics data into genomic annotation pipelines much as expressed sequence tag (EST) data has been. A comparison of the published genome from another strain of A. niger sequenced by DSM showed that a number of the gene models or proteins with proteomics evidence did not occur in both genomes, further highlighting the utility of the method.

  13. Exploiting proteomic data for genome annotation and gene model validation in Aspergillus niger.

    Science.gov (United States)

    Wright, James C; Sugden, Deana; Francis-McIntyre, Sue; Riba-Garcia, Isabel; Gaskell, Simon J; Grigoriev, Igor V; Baker, Scott E; Beynon, Robert J; Hubbard, Simon J

    2009-02-04

    Proteomic data is a potentially rich, but arguably unexploited, data source for genome annotation. Peptide identifications from tandem mass spectrometry provide prima facie evidence for gene predictions and can discriminate over a set of candidate gene models. Here we apply this to the recently sequenced Aspergillus niger fungal genome from the Joint Genome Institute (JGI) and another predicted protein set from another A. niger sequence. Tandem mass spectra (MS/MS) were acquired from 1D gel electrophoresis bands and searched against all available gene models using Average Peptide Scoring (APS) and reverse database searching to produce confident identifications at an acceptable false discovery rate (FDR). 405 identified peptide sequences were mapped to 214 different A. niger genomic loci to which 4093 predicted gene models clustered, 2872 of which contained the mapped peptides. Interestingly, 13 (6%) of these loci either had no preferred predicted gene model or the genome annotators' chosen "best" model for that genomic locus was not found to be the most parsimonious match to the identified peptides. The peptides identified also boosted confidence in predicted gene structures spanning 54 introns from different gene models. This work highlights the potential of integrating experimental proteomics data into genomic annotation pipelines much as expressed sequence tag (EST) data has been. A comparison of the published genome from another strain of A. niger sequenced by DSM showed that a number of the gene models or proteins with proteomics evidence did not occur in both genomes, further highlighting the utility of the method.
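
    The "most parsimonious match" criterion can be sketched as scoring each candidate gene model by how many identified peptides its predicted protein contains, then keeping the best-supported model. The sequences below are toy examples, not the study's data:

    ```python
    def most_parsimonious_model(models, peptides):
        """Score each candidate gene model at a locus by the number of
        identified peptides contained in its predicted protein."""
        def score(protein):
            return sum(1 for p in peptides if p in protein)
        return max(models, key=lambda name: score(models[name]))

    # Toy locus with two competing predictions and three observed peptides.
    models = {
        "model_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
        "model_B": "MKTAYIAKQRQISFVK",
    }
    peptides = ["KQRQISFVK", "SHFSRQ", "EERLGL"]
    print(most_parsimonious_model(models, peptides))
    ```

    A real implementation would map peptides genomically (tolerating I/L ambiguity and missed cleavages) rather than doing naive substring matching, but the selection logic is the same.
    
    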

  14. Contributions to In Silico Genome Annotation

    KAUST Repository

    Kalkatawi, Manal M.

    2017-11-30

    Genome annotation is an important topic since it provides information for the foundation of downstream genomic and biological research. It is considered a way of summarizing part of the existing knowledge about the genomic characteristics of an organism. Annotating different regions of a genome sequence is known as structural annotation, while identifying the functions of these regions is considered functional annotation. In silico approaches can facilitate both tasks, which would otherwise be difficult and time-consuming. This study contributes to genome annotation by introducing several novel bioinformatics methods, some based on machine learning (ML) approaches. First, we present Dragon PolyA Spotter (DPS), a method for accurate identification of polyadenylation signals (PAS) within human genomic DNA sequences. For this, we derived a novel feature set able to characterize properties of the genomic region surrounding the PAS, enabling the development of highly accurate, optimized ML predictive models. DPS considerably outperformed the state-of-the-art results. The second contribution concerns developing generic models for structural annotation, i.e., the recognition of different genomic signals and regions (GSR) within eukaryotic DNA. We developed DeepGSR, a systematic framework that facilitates generating ML models to predict GSR with high accuracy. To the best of our knowledge, no generic and automated method exists for such a task that could facilitate the study of newly sequenced organisms. The prediction module of DeepGSR uses deep learning algorithms to derive highly abstract features that depend mainly on proper data representation and hyperparameter calibration. DeepGSR, which was evaluated on recognition of PAS and translation initiation sites (TIS) in different organisms, yields a simpler and more precise representation of the problem under study, compared to some other hand-tailored models, while producing high accuracy prediction results. Finally

  15. BEACON: automated tool for Bacterial GEnome Annotation ComparisON

    KAUST Repository

    Kalkatawi, Manal M.

    2015-08-18

    Background Genome annotation is one way of summarizing the existing knowledge about genomic characteristics of an organism. There has been an increased interest during the last several decades in computer-based structural and functional genome annotation. Many methods for this purpose have been developed for eukaryotes and prokaryotes. Our study focuses on comparison of functional annotations of prokaryotic genomes. To the best of our knowledge there is no fully automated system for detailed comparison of functional genome annotations generated by different annotation methods (AMs). Results The presence of many AMs and development of new ones introduce needs to: a/ compare different annotations for a single genome, and b/ generate annotation by combining individual ones. To address these issues we developed an Automated Tool for Bacterial GEnome Annotation ComparisON (BEACON) that benefits both AM developers and annotation analysers. BEACON provides detailed comparison of gene function annotations of prokaryotic genomes obtained by different AMs and generates extended annotations through combination of individual ones. For the illustration of BEACON’s utility, we provide a comparison analysis of multiple different annotations generated for four genomes and show on these examples that the extended annotation can increase the number of genes annotated by putative functions up to 27 %, while the number of genes without any function assignment is reduced. Conclusions We developed BEACON, a fast tool for an automated and a systematic comparison of different annotations of single genomes. The extended annotation assigns putative functions to many genes with unknown functions. BEACON is available under GNU General Public License version 3.0 and is accessible at: http://www.cbrc.kaust.edu.sa/BEACON/

  16. BEACON: automated tool for Bacterial GEnome Annotation ComparisON.

    Science.gov (United States)

    Kalkatawi, Manal; Alam, Intikhab; Bajic, Vladimir B

    2015-08-18

    Genome annotation is one way of summarizing the existing knowledge about genomic characteristics of an organism. There has been an increased interest during the last several decades in computer-based structural and functional genome annotation. Many methods for this purpose have been developed for eukaryotes and prokaryotes. Our study focuses on comparison of functional annotations of prokaryotic genomes. To the best of our knowledge there is no fully automated system for detailed comparison of functional genome annotations generated by different annotation methods (AMs). The presence of many AMs and development of new ones introduce needs to: a/ compare different annotations for a single genome, and b/ generate annotation by combining individual ones. To address these issues we developed an Automated Tool for Bacterial GEnome Annotation ComparisON (BEACON) that benefits both AM developers and annotation analysers. BEACON provides detailed comparison of gene function annotations of prokaryotic genomes obtained by different AMs and generates extended annotations through combination of individual ones. For the illustration of BEACON's utility, we provide a comparison analysis of multiple different annotations generated for four genomes and show on these examples that the extended annotation can increase the number of genes annotated by putative functions up to 27%, while the number of genes without any function assignment is reduced. We developed BEACON, a fast tool for an automated and a systematic comparison of different annotations of single genomes. The extended annotation assigns putative functions to many genes with unknown functions. BEACON is available under GNU General Public License version 3.0 and is accessible at: http://www.cbrc.kaust.edu.sa/BEACON/ .
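
    Generating an extended annotation by combining individual ones can be sketched as taking, for each gene, the informative function assignments across methods and falling back to "hypothetical protein" only when no method assigns one. The method names and functions below are hypothetical, and BEACON's actual merging logic is more involved:

    ```python
    UNKNOWN = "hypothetical protein"

    def extend_annotation(annotations):
        """annotations: {method: {gene: function}}. A gene's extended
        annotation is the sorted set of informative functions from any
        method, or UNKNOWN if no method assigns one."""
        genes = {g for per_gene in annotations.values() for g in per_gene}
        extended = {}
        for gene in genes:
            functions = {per_gene.get(gene, UNKNOWN)
                         for per_gene in annotations.values()}
            functions.discard(UNKNOWN)
            extended[gene] = sorted(functions) or [UNKNOWN]
        return extended

    # Two hypothetical annotation methods disagreeing on gene g2.
    ams = {
        "AM1": {"g1": "DNA gyrase subunit A", "g2": UNKNOWN},
        "AM2": {"g1": "DNA gyrase subunit A", "g2": "ABC transporter permease"},
    }
    ext = extend_annotation(ams)
    print(ext["g2"])  # function recovered from AM2
    ```

    This is how combining annotations can raise the number of genes with putative functions while shrinking the "unknown function" set, as the abstract reports.
    
    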

  17. A Novel Method to Enhance Pipeline Trajectory Determination Using Pipeline Junctions.

    Science.gov (United States)

    Sahli, Hussein; El-Sheimy, Naser

    2016-04-21

    Pipeline inspection gauges (pigs) have been used for many years to perform various maintenance operations in oil and gas pipelines. Different pipeline parameters can be inspected during the pig journey. Although pigs use many sensors to detect the required pipeline parameters, matching these data with the corresponding pipeline location is critically important. High-end, tactical-grade inertial measurement units (IMUs) are used in pigging applications to locate the problems detected in the pipeline by other sensors and to reconstruct the trajectories of the pig. These IMUs are accurate; however, their high cost and large size limit their use in small diameter pipelines (8″ or less). This paper describes a new methodology for the use of MEMS-based IMUs with an extended Kalman filter (EKF) and the pipeline junctions to increase the accuracy of the position parameters and to reduce the total RMS errors even during the unavailability of above ground markers (AGMs). The results of this new proposed method using a micro-electro-mechanical systems (MEMS)-based IMU revealed that the position RMS errors were reduced by approximately 85% compared to the standard EKF solution. Therefore, this approach will enable the mapping of small diameter pipelines, which was not possible before.
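
    The junction-aided correction can be illustrated with a scalar Kalman measurement update: when the pig passes a junction whose coordinate is known from survey data, the drifted dead-reckoned position estimate is pulled toward it in proportion to the relative uncertainties. The numbers below are illustrative, not from the paper:

    ```python
    def junction_update(pos_est, var_est, junction_pos, junction_var):
        """Scalar Kalman measurement update: fuse the dead-reckoned
        position with the surveyed junction coordinate."""
        k = var_est / (var_est + junction_var)      # Kalman gain
        new_pos = pos_est + k * (junction_pos - pos_est)
        new_var = (1 - k) * var_est                 # uncertainty shrinks
        return new_pos, new_var

    # A drifted estimate (large variance) corrected by an accurate junction fix.
    pos, var = junction_update(pos_est=1012.0, var_est=25.0,
                               junction_pos=1000.0, junction_var=1.0)
    print(round(pos, 2), round(var, 2))
    ```

    In the paper's full EKF the state also includes velocity and attitude errors, but the principle is the same: each junction acts as an absolute position fix that resets accumulated drift between above ground markers.
    
    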

  18. DFAST and DAGA: web-based integrated genome annotation tools and resources.

    Science.gov (United States)

    Tanizawa, Yasuhiro; Fujisawa, Takatomo; Kaminuma, Eli; Nakamura, Yasukazu; Arita, Masanori

    2016-01-01

    Quality assurance and correct taxonomic affiliation of data submitted to public sequence databases have been an everlasting problem. The DDBJ Fast Annotation and Submission Tool (DFAST) is a newly developed genome annotation pipeline with quality and taxonomy assessment tools. To enable annotation of ready-to-submit quality, we also constructed curated reference protein databases tailored for lactic acid bacteria. DFAST was developed so that all the procedures required for DDBJ submission could be done seamlessly online. The online workspace would be especially useful for users not familiar with bioinformatics skills. In addition, we have developed a genome repository, DFAST Archive of Genome Annotation (DAGA), which currently includes 1,421 genomes covering 179 species and 18 subspecies of two genera, Lactobacillus and Pediococcus, obtained from both DDBJ/ENA/GenBank and Sequence Read Archive (SRA). All the genomes deposited in DAGA were annotated consistently and assessed using DFAST. To assess the taxonomic position based on genomic sequence information, we used the average nucleotide identity (ANI), which showed high discriminative power to determine whether two given genomes belong to the same species. We corrected mislabeled or misidentified genomes in the public database and deposited the curated information in DAGA. The repository will improve the accessibility and reusability of genome resources for lactic acid bacteria. By exploiting the data deposited in DAGA, we found intraspecific subgroups in Lactobacillus gasseri and Lactobacillus jensenii, whose variation between subgroups is larger than the well-accepted ANI threshold of 95% to differentiate species. DFAST and DAGA are freely accessible at https://dfast.nig.ac.jp.
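
    The ANI-based species check can be sketched as the mean identity of aligned genome fragments compared against the 95% threshold mentioned above. Real ANI tools derive fragment identities from BLAST or MUMmer alignments; the values below are invented:

    ```python
    def average_nucleotide_identity(fragment_identities):
        """ANI as the mean percent identity of aligned genome fragments
        (a simplification of the usual alignment-based calculation)."""
        return sum(fragment_identities) / len(fragment_identities)

    def same_species(ani, threshold=95.0):
        """Apply the widely used ~95% ANI species boundary."""
        return ani >= threshold

    # Hypothetical per-fragment identities between two genomes.
    ani = average_nucleotide_identity([98.2, 97.5, 96.9, 98.8])
    print(round(ani, 2), same_species(ani))
    ```

    Two genomes of the same species typically exceed this threshold, which is why DFAST can flag mislabeled submissions whose ANI against the type genome falls below it.
    
    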

  19. CloVR-Comparative: automated, cloud-enabled comparative microbial genome sequence analysis pipeline

    OpenAIRE

    Agrawal, Sonia; Arze, Cesar; Adkins, Ricky S.; Crabtree, Jonathan; Riley, David; Vangala, Mahesh; Galens, Kevin; Fraser, Claire M.; Tettelin, Hervé; White, Owen; Angiuoli, Samuel V.; Mahurkar, Anup; Fricke, W. Florian

    2017-01-01

    Background The benefit of increasing genomic sequence data to the scientific community depends on easy-to-use, scalable bioinformatics support. CloVR-Comparative combines commonly used bioinformatics tools into an intuitive, automated, and cloud-enabled analysis pipeline for comparative microbial genomics. Results CloVR-Comparative runs on annotated complete or draft genome sequences that are uploaded by the user or selected via a taxonomic tree-based user interface and downloaded from NCBI. ...

  20. Corrosion Behavior of Pipeline Carbon Steel under Different Iron Oxide Deposits in the District Heating System

    Directory of Open Access Journals (Sweden)

    Yong-Sang Kim

    2017-05-01

    Full Text Available The corrosion behavior of pipeline steel covered by iron oxides (α-FeOOH, Fe3O4 and Fe2O3) was investigated in simulated district heating water. In potentiodynamic polarization tests, the corrosion rate of pipeline steel increased under the iron oxides, but the rate of increase differed with the chemical reactions of the covering oxide. Pitting corrosion was observed only on the α-FeOOH-covered specimen, caused by crevice corrosion under the α-FeOOH. Mott-Schottky and X-ray diffraction results showed that the surface reaction and oxide layer depended on the kind of iron oxide. Iron oxide deposits increase the failure risk of the pipeline, and localized corrosion can occur under the α-FeOOH-covered region. Thus, prevention methods against iron oxide deposits in the district heating pipeline system, such as filtering or periodic chemical cleaning, are needed.

  1. Analysis and comparison of very large metagenomes with fast clustering and functional annotation

    Directory of Open Access Journals (Sweden)

    Li Weizhong

    2009-10-01

    Full Text Available Abstract Background The remarkable advance of metagenomics presents significant new challenges in data analysis. Metagenomic datasets (metagenomes) are large collections of sequencing reads from anonymous species within particular environments. Computational analyses for very large metagenomes are extremely time-consuming, and there are often many novel sequences in these metagenomes that are not fully utilized. The number of available metagenomes is rapidly increasing, so fast and efficient metagenome comparison methods are in great demand. Results The new metagenomic data analysis method Rapid Analysis of Multiple Metagenomes with a Clustering and Annotation Pipeline (RAMMCAP) was developed using an ultra-fast sequence clustering algorithm, fast protein family annotation tools, and a novel statistical metagenome comparison method that employs a unique graphic interface. RAMMCAP processes extremely large datasets with only moderate computational effort. It identifies raw read clusters and protein clusters that may include novel gene families, and compares metagenomes using clusters or functional annotations calculated by RAMMCAP. In this study, RAMMCAP was applied to the two largest available metagenomic collections, the "Global Ocean Sampling" and the "Metagenomic Profiling of Nine Biomes". Conclusion RAMMCAP is a very fast method that can cluster and annotate one million metagenomic reads in only hundreds of CPU hours. It is available from http://tools.camera.calit2.net/camera/rammcap/.
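The greedy incremental clustering underlying RAMMCAP's ultra-fast algorithm (it builds on CD-HIT) can be sketched in miniature. This is a deliberately crude version: the positional-identity measure and cutoff below are illustrative stand-ins for CD-HIT's short-word filtering, and the reads are made up.

```python
# Toy sketch of greedy incremental clustering in the CD-HIT style used by
# RAMMCAP: sequences are sorted longest-first; each either joins the first
# cluster whose representative it matches above the identity cutoff, or
# founds a new cluster. The identity measure here is a crude positional
# match, far simpler than CD-HIT's short-word filters.

def identity(a: str, b: str) -> float:
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / n

def greedy_cluster(seqs, cutoff=0.9):
    reps, clusters = [], []
    for s in sorted(seqs, key=len, reverse=True):  # longest first
        for i, r in enumerate(reps):
            if identity(s, r) >= cutoff:
                clusters[i].append(s)
                break
        else:                      # no representative matched
            reps.append(s)
            clusters.append([s])
    return clusters

reads = ["ACGTACGTAC", "ACGTACGTAT", "TTTTGGGGCC", "ACGTACGTA"]
print(greedy_cluster(reads))
```

Because each read is compared only against cluster representatives, not all other reads, the scheme stays near-linear in practice, which is what makes million-read clustering feasible.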

  2. Water level detection pipeline

    International Nuclear Information System (INIS)

    Koshikawa, Yukinobu; Imanishi, Masatoshi; Niizato, Masaru; Takagi, Masahiro

    1998-01-01

    In the present invention, water levels of a feedwater heater and a drain tank in a nuclear power plant are detected at high accuracy. Detection pipeline headers connected to the upper and lower portions of a feedwater heater or a drain tank are connected with each other. The connection line is branched at appropriate two positions and an upper detection pipeline and a lower detection pipeline are connected thereto, and a gauge entrance valve is disposed to each of the detection pipelines. A diaphragm of a pressure difference generator is connected to a flange formed to the end portion. When detecting the change of water level in the feedwater heater or the drain tank as a change of pressure difference, gauge entrance valves on the exit side of the upper and lower detection pipelines are connected by a connection pipe. The gauge entrance valve is closed, a tube is connected to the lower detection pipe to inject water to the diaphragm of the pressure difference generator passing through the connection pipe thereby enabling to calibrate the pressure difference generator. The accuracy of the calibration of instruments is improved and workability thereof upon flange maintenance is also improved. (I.S.)

  3. A Protocol for Annotating Parser Differences. Research Report. ETS RR-16-02

    Science.gov (United States)

    Bruno, James V.; Cahill, Aoife; Gyawali, Binod

    2016-01-01

    We present an annotation scheme for classifying differences in the outputs of syntactic constituency parsers when a gold standard is unavailable or undesired, as in the case of texts written by nonnative speakers of English. We discuss its automated implementation and the results of a case study that uses the scheme to choose a parser best suited…
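One way to surface the raw material for such an annotation scheme, sketched here under assumptions: without a gold standard, differences between two parsers can be reduced to labeled spans present in one bracketed parse but not the other. The parse strings and the span-extraction logic are illustrative only, not the report's actual protocol.

```python
# Hedged sketch: extract labeled constituent spans from two bracketed
# parses and report spans unique to each parser, as a starting point for
# classifying parser differences when no gold standard exists.

def spans(tree: str):
    """Return {(label, start, end)} over leaf positions of a bracketed parse."""
    out, stack, pos = set(), [], 0
    for tok in tree.replace("(", " ( ").replace(")", " ) ").split():
        if tok == "(":
            stack.append(None)       # placeholder until the label arrives
        elif tok == ")":
            label, start = stack.pop()
            out.add((label, start, pos))
        elif stack and stack[-1] is None:
            stack[-1] = (tok, pos)   # this token is the constituent label
        else:
            pos += 1                 # leaf word
    return out

a = "(S (NP the cat) (VP sat))"
b = "(S (NP the) (VP cat sat))"
only_a = spans(a) - spans(b)
only_b = spans(b) - spans(a)
print(sorted(only_a), sorted(only_b))
```

Here the hypothetical parsers agree on the S span but draw the NP/VP boundary differently; classifying such disagreements by type is exactly what the annotation scheme then does by hand.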

  4. The Eimeria Transcript DB: an integrated resource for annotated transcripts of protozoan parasites of the genus Eimeria

    Science.gov (United States)

    Rangel, Luiz Thibério; Novaes, Jeniffer; Durham, Alan M.; Madeira, Alda Maria B. N.; Gruber, Arthur

    2013-01-01

    Parasites of the genus Eimeria infect a wide range of vertebrate hosts, including chickens. We have recently reported a comparative analysis of the transcriptomes of Eimeria acervulina, Eimeria maxima and Eimeria tenella, integrating ORESTES data produced by our group and publicly available Expressed Sequence Tags (ESTs). All cDNA reads have been assembled, and the reconstructed transcripts have been submitted to a comprehensive functional annotation pipeline. Additional studies included orthology assignment across apicomplexan parasites and clustering analyses of gene expression profiles among different developmental stages of the parasites. To make all this body of information publicly available, we constructed the Eimeria Transcript Database (EimeriaTDB), a web repository that provides access to sequence data, annotation and comparative analyses. Here, we describe the web interface, available sequence data sets and query tools implemented on the site. The main goal of this work is to offer a public repository of sequence and functional annotation data of reconstructed transcripts of parasites of the genus Eimeria. We believe that EimeriaTDB will represent a valuable and complementary resource for the Eimeria scientific community and for those researchers interested in comparative genomics of apicomplexan parasites. Database URL: http://www.coccidia.icb.usp.br/eimeriatdb/ PMID:23411718

  5. Pipelines 'R' us

    International Nuclear Information System (INIS)

    Thomas, P.

    1997-01-01

    The geopolitical background to the export of oil and gas from Kazakhstan by pipeline is explored with particular reference to the sensitivities of the USA. There are now a number of pipeline proposals which would enable Kazakhstan to get its hydrocarbons to world markets. The construction of two of these formed part of a major oil deal signed recently with China in the face of stiff competition from major US companies. The most convenient and cost effective route, connecting up with Iran's existing pipeline network to the Gulf, is unlikely to be developed given continuing US sanctions against Iran. Equally unlikely seems to be the Turkmenistan to Pakistan pipeline in the light of the political volatility of Afghanistan. US companies continue to face limits on export capacity via the existing Russian pipelines from Kazakhstan. A temporary solution could be to carry some oil in the existing pipeline from Azerbaijan to Georgia which has been upgraded and is due to become operational soon, and later in a second proposed pipeline on this route. The Caspian Pipeline Consortium, consisting of three countries and eleven international companies, is building a 1500 km pipeline from the Tergiz field to Novorossiysk on the Black Sea with a view to completion in 2000. An undersea pipeline crossing the Caspian from Azerbaijan is being promoted by Turkey. There is an international perception that within the next five years Kazakhstan could be in a position to export its oil via as many as half a dozen different routes. (UK)

  6. @TOME-2: a new pipeline for comparative modeling of protein–ligand complexes

    Science.gov (United States)

    Pons, Jean-Luc; Labesse, Gilles

    2009-01-01

    @TOME 2.0 is a new web pipeline dedicated to protein structure modeling and small ligand docking based on comparative analyses. @TOME 2.0 allows fold recognition, template selection, structural alignment editing, structure comparisons, 3D-model building and evaluation. These tasks are routinely used in sequence analyses for structure prediction. In our pipeline the necessary software is efficiently interconnected in an original manner to accelerate all the processes. Furthermore, we have also connected comparative docking of small ligands that is performed using protein–protein superposition. The input is a simple protein sequence in one-letter code with no comment. The resulting 3D model, protein–ligand complexes and structural alignments can be visualized through dedicated Web interfaces or can be downloaded for further studies. These original features will aid in the functional annotation of proteins and the selection of templates for molecular modeling and virtual screening. Several examples are described to highlight some of the new functionalities provided by this pipeline. The server and its documentation are freely available at http://abcis.cbs.cnrs.fr/AT2/ PMID:19443448

  8. Estimating the annotation error rate of curated GO database sequence annotations

    Directory of Open Access Journals (Sweden)

    Brown Alfred L

    2007-05-01

    Full Text Available Abstract Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information.
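The inject-and-regress idea above can be illustrated numerically. This is a toy interpretation, not the paper's actual model: errors are injected at known rates, the degradation of measured precision is fitted by least squares, and the line is extrapolated back to where precision would be perfect; the offset estimates the pre-existing error rate. All numbers below are synthetic.

```python
# Illustrative toy of the estimation idea: inject errors at known rates,
# observe how measured precision degrades, fit a regression line, and
# extrapolate to the (negative) injected rate at which precision would
# reach 1.0 -- its magnitude estimates the pre-existing error rate.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# synthetic data: precision drops linearly as errors are injected
injected = [0.0, 0.1, 0.2, 0.3]
precision = [0.80, 0.73, 0.66, 0.59]

a, b = fit_line(injected, precision)
r_star = (1.0 - a) / b          # injected rate where the line reaches 1.0
estimated_error = -r_star       # extrapolated pre-existing error rate
print(round(estimated_error, 3))
```

Under this linear toy the estimate lands near 29%, consistent with the 28–30% range the abstract reports, though the real study's regression was against BLAST-based precision rather than a synthetic line.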

  9. High-throughput proteogenomics of Ruegeria pomeroyi: seeding a better genomic annotation for the whole marine Roseobacter clade

    Directory of Open Access Journals (Sweden)

    Christie-Oleza Joseph A

    2012-02-01

    Full Text Available Abstract Background The structural and functional annotation of genomes is now heavily based on data obtained using automated pipeline systems. The key for an accurate structural annotation consists of blending similarities between closely related genomes with biochemical evidence of the genome interpretation. In this work we applied high-throughput proteogenomics to Ruegeria pomeroyi, a member of the Roseobacter clade, an abundant group of marine bacteria, as a seed for the annotation of the whole clade. Results A large dataset of peptides from R. pomeroyi was obtained after searching over 1.1 million MS/MS spectra against a six-frame translated genome database. We identified 2006 polypeptides, of which thirty-four were encoded by open reading frames (ORFs) that had not previously been annotated. From the pool of 'one-hit-wonders', i.e. those ORFs specified by only one peptide detected by tandem mass spectrometry, we could confirm the probable existence of five additional new genes after proving that the corresponding RNAs were transcribed. We also identified the most-N-terminal peptide of 486 polypeptides, of which sixty-four had originally been wrongly annotated. Conclusions By extending these re-annotations to the other thirty-six Roseobacter isolates sequenced to date (twenty different genera), we propose the correction of the assigned start codons of 1082 homologous genes in the clade. In addition, we also report the presence of novel genes within operons encoding determinants of the important tricarboxylic acid cycle, a feature that seems to be characteristic of some Roseobacter genomes. The detection of their corresponding products in large amounts raises the question of their function. Their discoveries point to a possible theory for protein evolution that will rely on high expression of orphans in bacteria: their putative poor efficiency could be counterbalanced by a higher level of expression. Our proteogenomic analysis will increase
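The six-frame translated database mentioned above comes from a standard construction: translate the genome in all three forward frames and all three frames of the reverse complement. A minimal sketch, with the codon table truncated to just what the example sequence needs (a full table has 64 entries):

```python
# Minimal six-frame translation sketch. Codon table deliberately truncated;
# unknown codons translate to "X".

CODONS = {"ATG": "M", "GCT": "A", "AAA": "K", "TAA": "*",
          "CAT": "H", "TTT": "F", "AGC": "S", "TTA": "L"}

def revcomp(dna: str) -> str:
    return dna.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(dna: str) -> str:
    return "".join(CODONS.get(dna[i:i+3], "X")
                   for i in range(0, len(dna) - 2, 3))

def six_frames(dna: str):
    frames = [translate(dna[i:]) for i in range(3)]   # forward frames
    rc = revcomp(dna)
    frames += [translate(rc[i:]) for i in range(3)]   # reverse frames
    return frames

print(six_frames("ATGGCTAAATAA"))
```

Searching MS/MS spectra against all six frames, rather than only annotated ORFs, is what lets proteogenomics recover peptides from genes the original annotation missed.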

  10. A software pipeline for processing and identification of fungal ITS sequences

    Directory of Open Access Journals (Sweden)

    Kristiansson Erik

    2009-01-01

    Full Text Available Abstract Background Fungi from environmental samples are typically identified to species level through DNA sequencing of the nuclear ribosomal internal transcribed spacer (ITS) region for use in BLAST-based similarity searches in the International Nucleotide Sequence Databases. These searches are time-consuming and regularly require a significant amount of manual intervention and complementary analyses. We here present software – in the form of an identification pipeline for large sets of fungal ITS sequences – developed to automate the BLAST process and several additional analysis steps. The performance of the pipeline was evaluated on a dataset of 350 ITS sequences from fungi growing as epiphytes on building material. Results The pipeline was written in Perl and uses a local installation of NCBI-BLAST for the similarity searches of the query sequences. The variable subregion ITS2 of the ITS region is extracted from the sequences and used for additional searches of higher sensitivity. Multiple alignments of each query sequence and its closest matches are computed, and query sequences sharing at least 50% of their best matches are clustered to facilitate the evaluation of hypothetically conspecific groups. The pipeline proved to speed up the processing, as well as enhance the resolution, of the evaluation dataset considerably, and the fungi were found to belong chiefly to the Ascomycota, with Penicillium and Aspergillus as the two most common genera. The ITS2 was found to indicate a different taxonomic affiliation than did the complete ITS region for 10% of the query sequences, though this figure is likely to vary with the taxonomic scope of the query sequences. Conclusion The present software readily assigns large sets of fungal query sequences to their respective best matches in the international sequence databases and places them in a larger biological context. The output is highly structured to be easy to process, although it still needs
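The "sharing at least 50% of their best matches" clustering step can be sketched as follows. The query IDs, accession names, and single-linkage merging below are illustrative assumptions, not the pipeline's Perl implementation:

```python
# Sketch: cluster query sequences whose BLAST best-match sets overlap by
# at least 50%, flagging them as hypothetically conspecific groups.

def share_half(hits_a, hits_b):
    """True if the smaller best-match set shares >= 50% of its members."""
    common = len(set(hits_a) & set(hits_b))
    return common >= 0.5 * min(len(hits_a), len(hits_b))

def cluster_queries(best_hits):
    """Single-linkage clustering of query IDs by shared best matches."""
    clusters = []
    for q, hits in best_hits.items():
        merged = [c for c in clusters
                  if any(share_half(hits, best_hits[o]) for o in c)]
        newc = {q}
        for c in merged:
            newc |= c
            clusters.remove(c)
        clusters.append(newc)
    return clusters

best_hits = {
    "q1": ["Penicillium_sp1", "Penicillium_sp2"],
    "q2": ["Penicillium_sp2", "Penicillium_sp3"],
    "q3": ["Aspergillus_sp1", "Aspergillus_sp2"],
}
print(cluster_queries(best_hits))
```

Grouping queries this way lets a curator evaluate one putative species at a time instead of inspecting every BLAST report individually.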

  11. Comparison of a semi-automatic annotation tool and a natural language processing application for the generation of clinical statement entries.

    Science.gov (United States)

    Lin, Ching-Heng; Wu, Nai-Yuan; Lai, Wei-Shao; Liou, Der-Ming

    2015-01-01

    Electronic medical records with encoded entries should enhance the semantic interoperability of document exchange. However, it remains a challenge to encode the narrative concept and to transform the coded concepts into a standard entry-level document. This study aimed to use a novel approach for the generation of entry-level interoperable clinical documents. Using HL7 clinical document architecture (CDA) as the example, we developed three pipelines to generate entry-level CDA documents. The first approach was a semi-automatic annotation pipeline (SAAP), the second was a natural language processing (NLP) pipeline, and the third merged the above two pipelines. We randomly selected 50 test documents from the i2b2 corpora to evaluate the performance of the three pipelines. The 50 randomly selected test documents contained 9365 words, including 588 Observation terms and 123 Procedure terms. For the Observation terms, the merged pipeline had a significantly higher F-measure than the NLP pipeline (0.89 vs 0.80), demonstrating its potential for generating entry-level interoperable clinical documents. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
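The comparison above rests on the F-measure, which is quick to compute from the sets of gold and extracted terms. The clinical terms below are made-up examples, not from the i2b2 corpora:

```python
# F-measure (F1): harmonic mean of precision and recall over term sets.

def f_measure(gold: set, predicted: set) -> float:
    tp = len(gold & predicted)           # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {"blood pressure", "heart rate", "temperature", "glucose"}
pred = {"blood pressure", "heart rate", "weight"}
print(round(f_measure(gold, pred), 3))
```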

  12. Pipeline engineering

    CERN Document Server

    Liu, Henry

    2003-01-01

    PART I: PIPE FLOWS. Introduction: Definition and Scope; Brief History of Pipelines; Existing Major Pipelines; Importance of Pipelines; Freight (Solids) Transport by Pipelines; Types of Pipelines; Components of Pipelines; Advantages of Pipelines; References. Single-Phase Incompressible Newtonian Fluid: Introduction; Flow Regimes; Local Mean Velocity and Its Distribution (Velocity Profile); Flow Equations for One-Dimensional Analysis; Hydraulic and Energy Grade Lines; Cavitation in Pipeline Systems; Pipe in Series and Parallel; Interconnected Reservoirs; Pipe Network; Unsteady Flow in Pipe. Single-Phase Compressible Flow in Pipe: Flow Ana

  13. PeakAnalyzer: Genome-wide annotation of chromatin binding and modification loci

    Directory of Open Access Journals (Sweden)

    Tammoja Kairi

    2010-08-01

    Full Text Available Abstract Background Functional genomic studies involving high-throughput sequencing and tiling array applications, such as ChIP-seq and ChIP-chip, generate large numbers of experimentally-derived signal peaks across the genome under study. In analyzing these loci to determine their potential regulatory functions, areas of signal enrichment must be considered relative to proximal genes and regulatory elements annotated throughout the target genome. Regions of chromatin association by transcriptional regulators should be distinguished as individual binding sites in order to enhance downstream analyses, such as the identification of known and novel consensus motifs. Results PeakAnalyzer is a set of high-performance utilities for the automated processing of experimentally-derived peak regions and annotation of genomic loci. The programs can accurately subdivide multimodal regions of signal enrichment into distinct subpeaks corresponding to binding sites or chromatin modifications, retrieve genomic sequences encompassing the computed subpeak summits, and identify positional features of interest such as intersection with exon/intron gene components, proximity to up- or downstream transcriptional start sites and cis-regulatory elements. The software can be configured to run either as a pipeline component for high-throughput analyses, or as a cross-platform desktop application with an intuitive user interface. Conclusions PeakAnalyzer comprises a number of utilities essential for ChIP-seq and ChIP-chip data analysis. High-performance implementations are provided for Unix pipeline integration along with a GUI version for interactive use. Source code in C++ and Java is provided, as are native binaries for Linux, Mac OS X and Windows systems.
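Splitting a multimodal enrichment region into subpeak summits, as PeakAnalyzer does, can be sketched as local-maxima detection with a valley-depth cutoff. The coverage values and thresholds below are invented, and real peak callers apply far more careful smoothing:

```python
# Sketch: find subpeak summits in a coverage signal -- local maxima above a
# minimum height, separated from the previous summit by a valley of at
# least min_valley depth.

def subpeak_summits(signal, min_height=5, min_valley=2):
    """Indices of local maxima separated by sufficiently deep valleys."""
    summits = []
    for i in range(1, len(signal) - 1):
        if signal[i] >= min_height and signal[i-1] < signal[i] >= signal[i+1]:
            # require a real dip between consecutive summits
            if not summits or min(signal[summits[-1]:i+1]) <= signal[i] - min_valley:
                summits.append(i)
    return summits

coverage = [0, 2, 6, 9, 6, 3, 4, 8, 11, 7, 2, 0]
print(subpeak_summits(coverage))
```

The two summits recovered here correspond to the two modes of the toy signal; each summit position could then be used to retrieve the surrounding genomic sequence for motif discovery.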

  14. sRNAnalyzer-a flexible and customizable small RNA sequencing data analysis pipeline.

    Science.gov (United States)

    Wu, Xiaogang; Kim, Taek-Kyun; Baxter, David; Scherler, Kelsey; Gordon, Aaron; Fong, Olivia; Etheridge, Alton; Galas, David J; Wang, Kai

    2017-12-01

    Although many tools have been developed to analyze small RNA sequencing (sRNA-Seq) data, it remains challenging to accurately analyze the small RNA population, mainly due to multiple sequence ID assignment caused by short read length. Additional issues in small RNA analysis include low consistency of microRNA (miRNA) measurement results across different platforms, miRNA mapping associated with miRNA sequence variation (isomiR) and RNA editing, and the origin of those unmapped reads after screening against all endogenous reference sequence databases. To address these issues, we built a comprehensive and customizable sRNA-Seq data analysis pipeline-sRNAnalyzer, which enables: (i) comprehensive miRNA profiling strategies to better handle isomiRs and summarization based on each nucleotide position to detect potential SNPs in miRNAs, (ii) different sequence mapping result assignment approaches to simulate results from microarray/qRT-PCR platforms and a local probabilistic model to assign mapping results to the most-likely IDs, (iii) comprehensive ribosomal RNA filtering for accurate mapping of exogenous RNAs and summarization based on taxonomy annotation. We evaluated our pipeline on both artificial samples (including synthetic miRNA and Escherichia coli cultures) and biological samples (human tissue and plasma). sRNAnalyzer is implemented in Perl and available at: http://srnanalyzer.systemsbiology.net/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
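The "local probabilistic model" for multi-mapped reads can be illustrated with a toy weighting scheme. This is an assumed simplification, not sRNAnalyzer's actual model; the miRNA names and counts are invented:

```python
# Sketch: assign a multi-mapped read to the most likely ID by weighting
# candidates with the abundance of uniquely mapped reads (+1 smoothing so
# unseen IDs keep nonzero probability).

def assign(read_candidates, unique_counts):
    """Return (best ID, its probability) for a multi-mapped read."""
    weights = {c: unique_counts.get(c, 0) + 1 for c in read_candidates}
    total = sum(weights.values())
    probs = {c: w / total for c, w in weights.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]

unique_counts = {"hsa-miR-21-5p": 120, "hsa-miR-21-3p": 4}
print(assign(["hsa-miR-21-5p", "hsa-miR-21-3p"], unique_counts))
```

The intuition is that a read matching both arms of a hairpin is far more likely to originate from the arm that dominates the uniquely mappable evidence.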

  15. A nuclear magnetic resonance based approach to accurate functional annotation of putative enzymes in the methanogen Methanosarcina acetivorans

    Directory of Open Access Journals (Sweden)

    Nikolau Basil J

    2011-06-01

    Full Text Available Abstract Background Correct annotation of function is essential if one is to take full advantage of the vast amounts of genomic sequence data. The accuracy of sequence-based functional annotations is often variable, particularly if the sequence homology to a known function is low. Indeed recent work has shown that even proteins with very high sequence identity can have different folds and functions, and therefore caution is needed in assigning functions by sequence homology in the absence of experimental validation. Experimental methods are therefore needed to efficiently evaluate annotations in a way that complements current high throughput technologies. Here, we describe the use of nuclear magnetic resonance (NMR)-based ligand screening as a tool for testing functional assignments of putative enzymes that may be of variable reliability. Results The target genes for this study are putative enzymes from the methanogenic archaeon Methanosarcina acetivorans (MA) that have been selected after manual genome re-annotation and demonstrate detectable in vivo expression at the level of the transcriptome. The experimental approach begins with heterologous E. coli expression and purification of individual MA gene products. An NMR-based ligand screen of the purified protein then identifies possible substrates or products from a library of candidate compounds chosen from the putative pathway and other related pathways. These data are used to determine if the current sequence-based annotation is likely to be correct. For a number of case studies, additional experiments (such as in vivo genetic complementation were performed to determine function so that the reliability of the NMR screen could be independently assessed. Conclusions In all examples studied, the NMR screen was indicative of whether the functional annotation was correct. Thus, the case studies described demonstrate that NMR-based ligand screening is an effective and rapid tool for confirming or

  16. annot8r: GO, EC and KEGG annotation of EST datasets

    Directory of Open Access Journals (Sweden)

    Schmid Ralf

    2008-04-01

    Full Text Available Abstract Background The expressed sequence tag (EST) methodology is an attractive option for the generation of sequence data for species for which no completely sequenced genome is available. The annotation and comparative analysis of such datasets poses a formidable challenge for research groups that do not have the bioinformatics infrastructure of major genome sequencing centres. Therefore, there is a need for user-friendly tools to facilitate the annotation of non-model species EST datasets with well-defined ontologies that enable meaningful cross-species comparisons. To address this, we have developed annot8r, a platform for the rapid annotation of EST datasets with GO-terms, EC-numbers and KEGG-pathways. Results annot8r automatically downloads all files relevant for the annotation process and generates a reference database that stores UniProt entries, their associated Gene Ontology (GO), Enzyme Commission (EC) and Kyoto Encyclopaedia of Genes and Genomes (KEGG) annotation and additional relevant data. For each of GO, EC and KEGG, annot8r extracts a specific sequence subset from the UniProt dataset based on the information stored in the reference database. These three subsets are then formatted for BLAST searches. The user provides the protein or nucleotide sequences to be annotated and annot8r runs BLAST searches against these three subsets. The BLAST results are parsed and the corresponding annotations retrieved from the reference database. The annotations are saved both as flat files and also in a relational postgreSQL results database to facilitate more advanced searches within the results. annot8r is integrated with the PartiGene suite of EST analysis tools. Conclusion annot8r is a tool that assigns GO, EC and KEGG annotations for data sets resulting from EST sequencing projects both rapidly and efficiently. The benefits of an underlying relational database, flexibility and the ease of use of the program make it ideally suited for non
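The parse-and-transfer step described above, where BLAST results are parsed and annotations retrieved from the reference database, can be sketched as follows. The BLAST tabular lines, the UniProt-to-GO mapping, and the E-value cutoff are all fabricated examples, not annot8r's actual data or defaults:

```python
# Sketch: parse simplified BLAST tabular output (query, subject, identity,
# e-value) and carry over GO terms from the best acceptable hit.

go_map = {"P12345": ["GO:0008152", "GO:0016787"],
          "Q67890": ["GO:0005524"]}

def annotate(blast_tabular, go_map, max_evalue=1e-10):
    """Map each query to the GO terms of its first hit passing the cutoff."""
    annotations = {}
    for line in blast_tabular.strip().splitlines():
        query, subject, _ident, evalue = line.split("\t")
        if float(evalue) <= max_evalue and query not in annotations:
            annotations[query] = go_map.get(subject, [])
    return annotations

blast_out = ("est_001\tP12345\t87.5\t1e-45\n"
             "est_001\tQ67890\t55.0\t1e-12\n"
             "est_002\tQ67890\t40.1\t1e-05\n")   # too weak: filtered out
print(annotate(blast_out, go_map))
```

Because BLAST reports hits best-first per query, taking the first hit that passes the cutoff is a common shorthand for "best hit" in pipelines of this kind.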

  17. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines.

    Science.gov (United States)

    Soysal, Ergin; Wang, Jingqi; Jiang, Min; Wu, Yonghui; Pakhomov, Serguei; Liu, Hongfang; Xu, Hua

    2017-11-24

    Existing general clinical natural language processing (NLP) systems such as MetaMap and Clinical Text Analysis and Knowledge Extraction System have been successfully applied to information extraction from clinical text. However, end users often have to customize existing systems for their individual tasks, which can require substantial NLP skills. Here we present CLAMP (Clinical Language Annotation, Modeling, and Processing), a newly developed clinical NLP toolkit that provides not only state-of-the-art NLP components, but also a user-friendly graphic user interface that can help users quickly build customized NLP pipelines for their individual applications. Our evaluation shows that the CLAMP default pipeline achieved good performance on named entity recognition and concept encoding. We also demonstrate the efficiency of the CLAMP graphic user interface in building customized, high-performance NLP pipelines with 2 use cases, extracting smoking status and lab test values. CLAMP is publicly available for research use, and we believe it is a unique asset for the clinical NLP community. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. Same yet different : a comparison of pipeline industries in Canada and Australia

    Energy Technology Data Exchange (ETDEWEB)

    Jaques, S. [Egis Consulting (Australia)

    2000-07-01

    A comparative evaluation of the pipeline industries in Canada and Australia was presented. The two countries are very similar in geography, politics, native land issues and populations. This paper focused on the similarities with respect to the pipeline industry and the long-distance, cross-country gas transmission pipelines. It addressed issues regarding pipeline industry organization, historical aspects and the infrastructure in both countries. The design practices of each country were compared and contrasted with attention to how construction techniques are affected by the terrain, climate and water crossings in each country. The author discussed issues such as grade, trench, string pipe, weld pipe, coat welds, hydrostatic testing, lowering, backfill and clean up. The issue of remote location factors was also presented. The future opportunities for Canada lie in Arctic gas development. For Australia, future opportunities lie in the expansion of the infrastructure to include gas from PNG to Queensland. Another possible opportunity is the development of an offshore distribution system to Tasmania. The author also noted that the aging pipeline system in both Canada and Australia will require careful consideration with respect to corrosion. 18 refs., 2 tabs., 2 figs.

  19. Rupture detection device for pipeline in reactor

    International Nuclear Information System (INIS)

    Murakoshi, Toshinori; Kanamori, Shigeru; Shirasawa, Hirofumi.

    1991-01-01

    A difference between each of the pressures in a plurality of pipelines disposed in a shroud in a reactor container and a pressure outside of the shroud is detected, thereby enabling safe and reliable detection even of simultaneous rupture and leakage of the pipelines. That is, the difference between the pressure of the steam phase outside of the shroud and the pressure in each of a plurality of low-pressure injection pipelines of an emergency core cooling system, opened to the inside of the shroud in the reactor container, is detected by a pressure-difference detector for each of them. An average value of the pressure differences is then determined, and the reading from each detector is compared with this average in a comparator. If openings should be caused by rupture, leakage or the like in any of the pipelines, the pressure in that pipeline drops to near atmospheric pressure, or to the vapor-phase pressure at the lowest. When that pressure is compared with the average value by the comparator, a negative difference results. Accordingly, an alarm unit generates an alarm based on the pressure-difference signal, making it possible to identify the failed pipeline and announce the failure. (I.S.)
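The compare-against-the-average logic described above is easy to sketch numerically. The pressure readings and the tolerance are illustrative, not values from the patent:

```python
# Sketch: flag pipelines whose differential-pressure reading falls
# markedly below the average across all lines -- the signature of an
# opening, since a ruptured line drops toward atmospheric pressure.

def failed_lines(dp_readings, tolerance=0.1):
    """Return pipeline IDs whose reading is below average by more than tolerance."""
    avg = sum(dp_readings.values()) / len(dp_readings)
    return [pid for pid, dp in dp_readings.items() if avg - dp > tolerance]

readings = {"line_A": 0.52, "line_B": 0.50, "line_C": 0.05, "line_D": 0.51}
print(failed_lines(readings))
```

Averaging over all lines makes the reference self-calibrating: a uniform pressure shift affects every reading equally and cancels out, while a single failed line stands out as a negative deviation.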

  20. Supplementary Material for: BEACON: automated tool for Bacterial GEnome Annotation ComparisON

    KAUST Repository

    Kalkatawi, Manal M.; Alam, Intikhab; Bajic, Vladimir B.

    2015-01-01

    Abstract Background Genome annotation is one way of summarizing the existing knowledge about genomic characteristics of an organism. There has been an increased interest during the last several decades in computer-based structural and functional genome annotation. Many methods for this purpose have been developed for eukaryotes and prokaryotes. Our study focuses on comparison of functional annotations of prokaryotic genomes. To the best of our knowledge there is no fully automated system for detailed comparison of functional genome annotations generated by different annotation methods (AMs). Results The presence of many AMs and development of new ones introduce needs to: a/ compare different annotations for a single genome, and b/ generate annotation by combining individual ones. To address these issues we developed an Automated Tool for Bacterial GEnome Annotation ComparisON (BEACON) that benefits both AM developers and annotation analysers. BEACON provides detailed comparison of gene function annotations of prokaryotic genomes obtained by different AMs and generates extended annotations through combination of individual ones. To illustrate BEACON's utility, we provide a comparison analysis of multiple different annotations generated for four genomes and show on these examples that the extended annotation can increase the number of genes annotated with putative functions by up to 27%, while the number of genes without any function assignment is reduced. Conclusions We developed BEACON, a fast tool for an automated and a systematic comparison of different annotations of single genomes. The extended annotation assigns putative functions to many genes with unknown functions. BEACON is available under GNU General Public License version 3.0 and is accessible at: http://www.cbrc.kaust.edu.sa/BEACON/ .
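
The extended-annotation idea, combining per-gene function calls from different annotation methods and flagging disagreements, can be sketched as follows. This is not BEACON's code; the gene IDs, function strings and merge policy are invented for illustration:

```python
# Merge two {gene_id: function} annotations into an "extended" annotation:
# a gene keeps an informative call if any method provides one, and genes
# whose informative calls disagree are flagged for review.
def combine_annotations(am_a, am_b, unknown="hypothetical protein"):
    extended, conflicts = {}, []
    for gene in set(am_a) | set(am_b):
        fa, fb = am_a.get(gene, unknown), am_b.get(gene, unknown)
        informative = [f for f in (fa, fb) if f != unknown]
        if not informative:
            extended[gene] = unknown
        elif len(set(informative)) == 1:
            extended[gene] = informative[0]
        else:
            extended[gene] = fa          # keep the first method's call...
            conflicts.append(gene)       # ...but record the disagreement
    return extended, conflicts

am1 = {"g1": "DNA polymerase", "g2": "hypothetical protein", "g3": "ATP synthase"}
am2 = {"g1": "DNA polymerase", "g2": "protease", "g3": "ATP hydrolase"}
extended, conflicts = combine_annotations(am1, am2)
```

Here `g2` gains a putative function from the second method, which is the mechanism by which an extended annotation reduces the count of unannotated genes.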

  1. Pipelines. Economy's veins; Pipelines. Adern der Wirtschaft

    Energy Technology Data Exchange (ETDEWEB)

    Feizlmayr, Adolf; Goestl, Stefan [ILF Beratende Ingenieure, Muenchen (Germany)

    2011-02-15

    According to existing forecasts, more than 1 million km of gas, oil and water pipelines will be built by the year 2030, the predominant portion being gas pipelines. The safe continued utilization of aging pipelines is a large challenge; in addition, diagnostic technology, evaluation and risk assessment have to be developed further. In the design of new oil and gas pipelines, aspects of environmental protection, the energy efficiency of transport and thus the reduction of carbon dioxide emissions, public acceptance and the market strategy of the exporters gain in importance. Offshore pipelines will soon exceed the present limit of 2,000 m water depth and penetrate into greater sea depths.

  2. 78 FR 70623 - Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory...

    Science.gov (United States)

    2013-11-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2009-0203] Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory Committee AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. [[Page...

  3. Semantic annotation of consumer health questions.

    Science.gov (United States)

    Kilicoglu, Halil; Ben Abacha, Asma; Mrabet, Yassine; Shooshan, Sonya E; Rodriguez, Laritza; Masterton, Kate; Demner-Fushman, Dina

    2018-02-06

    Consumers increasingly use online resources for their health information needs. While current search engines can address these needs to some extent, they generally do not take into account that most health information needs are complex and can only fully be expressed in natural language. Consumer health question answering (QA) systems aim to fill this gap. A major challenge in developing consumer health QA systems is extracting relevant semantic content from the natural language questions (question understanding). To develop effective question understanding tools, question corpora semantically annotated for relevant question elements are needed. In this paper, we present a two-part consumer health question corpus annotated with several semantic categories: named entities, question triggers/types, question frames, and question topic. The first part (CHQA-email) consists of relatively long email requests received by the U.S. National Library of Medicine (NLM) customer service, while the second part (CHQA-web) consists of shorter questions posed to MedlinePlus search engine as queries. Each question has been annotated by two annotators. The annotation methodology is largely the same between the two parts of the corpus; however, we also explain and justify the differences between them. Additionally, we provide information about corpus characteristics, inter-annotator agreement, and our attempts to measure annotation confidence in the absence of adjudication of annotations. The resulting corpus consists of 2614 questions (CHQA-email: 1740, CHQA-web: 874). Problems are the most frequent named entities, while treatment and general information questions are the most common question types. Inter-annotator agreement was generally modest: question types and topics yielded highest agreement, while the agreement for more complex frame annotations was lower. Agreement in CHQA-web was consistently higher than that in CHQA-email. Pairwise inter-annotator agreement proved most

  4. Comprehensive investigation into historical pipeline construction costs and engineering economic analysis of Alaska in-state gas pipeline

    Science.gov (United States)

    Rui, Zhenhua

    This study analyzes historical cost data of 412 pipelines and 220 compressor stations. On the basis of this analysis, the study also evaluates the feasibility of an Alaska in-state gas pipeline using Monte Carlo simulation techniques. Analysis of pipeline construction costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary by diameter, length, volume, year, and location. Overall average learning rates for pipeline material and labor costs are 6.1% and 12.4%, respectively. Overall average cost shares for pipeline material, labor, miscellaneous, and right of way (ROW) are 31%, 40%, 23%, and 7%, respectively. Regression models are developed to estimate pipeline component costs for different lengths, cross-sectional areas, and locations. An analysis of inaccuracy in pipeline cost estimation demonstrates that the cost estimation of pipeline cost components is biased except in the case of total costs. Overall overrun rates for pipeline material, labor, miscellaneous, ROW, and total costs are 4.9%, 22.4%, -0.9%, 9.1%, and 6.5%, respectively, and project size, capacity, diameter, location, and year of completion have different degrees of impact on cost overruns of pipeline cost components. Analysis of compressor station costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary in terms of capacity, year, and location. Average learning rates for compressor station material and labor costs are 12.1% and 7.48%, respectively. Overall average cost shares of material, labor, miscellaneous, and ROW are 50.6%, 27.2%, 21.5%, and 0.8%, respectively. Regression models are developed to estimate compressor station component costs in different capacities and locations. An investigation into inaccuracies in compressor station cost estimation demonstrates that the cost estimation for compressor stations is biased except in the case of material costs. 
Overall average
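
The learning rates reported above follow the standard experience-curve model, in which each doubling of cumulative output reduces unit cost by the learning rate. A small sketch of that relationship; the initial cost is invented, and only the 12.4% labor learning rate is taken from the study:

```python
import math

# Experience curve: cost of the n-th unit = C1 * n**b, where the progress
# exponent b is chosen so that each doubling cuts cost by the learning rate.
def unit_cost(initial_cost, cumulative_units, learning_rate):
    b = math.log2(1 - learning_rate)   # e.g. 12.4% rate -> b = log2(0.876)
    return initial_cost * cumulative_units ** b

c1 = unit_cost(100.0, 1, 0.124)  # first unit, invented base cost of 100
c2 = unit_cost(100.0, 2, 0.124)  # after one doubling: 12.4% cheaper, i.e. 87.6
```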

  5. DomSign: a top-down annotation pipeline to enlarge enzyme space in the protein universe.

    Science.gov (United States)

    Wang, Tianmin; Mori, Hiroshi; Zhang, Chong; Kurokawa, Ken; Xing, Xin-Hui; Yamada, Takuji

    2015-03-21

    Computational predictions of catalytic function are vital for in-depth understanding of enzymes. Because several novel approaches performing better than the common BLAST tool are rarely applied in research, we hypothesized that there is a large gap between the number of known annotated enzymes and the actual number in the protein universe, which significantly limits our ability to extract additional biologically relevant functional information from the available sequencing data. To reliably expand the enzyme space, we developed DomSign, a highly accurate domain signature-based enzyme functional prediction tool to assign Enzyme Commission (EC) digits. DomSign is a top-down prediction engine that yields results comparable, or superior, to those from many benchmark EC number prediction tools, including BLASTP, when a homolog with an identity >30% is not available in the database. Performance tests showed that DomSign is a highly reliable enzyme EC number annotation tool. After multiple tests, the accuracy is thought to be greater than 90%. Thus, DomSign can be applied to large-scale datasets, with the goal of expanding the enzyme space with high fidelity. Using DomSign, we successfully increased the percentage of EC-tagged enzymes from 12% to 30% in UniProt-TrEMBL. In the Kyoto Encyclopedia of Genes and Genomes bacterial database, the percentage of EC-tagged enzymes for each bacterial genome could be increased from 26.0% to 33.2% on average. Metagenomic mining was also efficient, as exemplified by the application of DomSign to the Human Microbiome Project dataset, recovering nearly one million new EC-labeled enzymes. Our results offer preliminary confirmation of the existence of the hypothesized huge number of "hidden enzymes" in the protein universe, the identification of which could substantially further our understanding of the metabolisms of diverse organisms and also facilitate bioengineering by providing a richer enzyme resource. Furthermore, our results
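
The domain-signature idea can be illustrated with a toy lookup: a protein's set of Pfam domains is matched against signatures that map domain combinations to EC numbers, some resolved only to three digits. This is not DomSign's algorithm or data; all signatures below are invented:

```python
# Invented signature table: domain combination -> EC number (or partial EC).
SIGNATURES = {
    frozenset({"PF00118", "PF00006"}): "3.6.3.14",  # hypothetical ATPase signature
    frozenset({"PF00069"}): "2.7.11.-",             # kinase, 3 digits only
}

def predict_ec(domains):
    """Return an EC string for the protein's domain set, or None."""
    key = frozenset(domains)
    if key in SIGNATURES:          # exact signature match first
        return SIGNATURES[key]
    for sig, ec in SIGNATURES.items():
        if sig <= key:             # fall back to a subset match
            return ec
    return None

ec = predict_ec(["PF00069", "PF12345"])  # extra domain still matches the kinase signature
```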

  6. Current and future trends in marine image annotation software

    Science.gov (United States)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing the user to log events in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability to integrate data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post processing, for stable platforms or still images
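
At its core, a human-input MIAS logs labeled events against a time code, optionally geo-referenced, and supports querying previous annotations. A minimal sketch of that data model; class names, fields and sample entries are invented:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Annotation:
    timecode_s: float              # position in the video, seconds
    label: str                     # e.g. observed taxon or event
    lat: Optional[float] = None    # optional geo-reference
    lon: Optional[float] = None

@dataclass
class DiveLog:
    annotations: List[Annotation] = field(default_factory=list)

    def log(self, timecode_s, label, **geo):
        self.annotations.append(Annotation(timecode_s, label, **geo))

    def between(self, t0, t1):
        """All annotations whose time code falls in [t0, t1]."""
        return [a for a in self.annotations if t0 <= a.timecode_s <= t1]

dive = DiveLog()
dive.log(12.5, "Lophelia pertusa", lat=38.5, lon=-28.6)
dive.log(90.0, "marine litter")
n_early = len(dive.between(0, 60))
```

A real package would add sensor streams, user management and database persistence on top of this event log.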

  7. Validation of pig operations through pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Tolmasquim, Sueli Tiomno [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil); Nieckele, Angela O. [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Mecanica

    2005-07-01

    In the oil industry, pigging operations in pipelines have been widely applied for different purposes: pipe cleaning, inspection, liquid removal and product separation, among others. An efficient and safe pigging operation requires that a number of operational parameters, such as maximum and minimum pressures in the pipeline and pig velocity, be well evaluated during the planning stage and maintained within stipulated limits while the operation is accomplished. With the objective of providing an efficient tool to assist in the control and design of pig operations through pipelines, a numerical code was developed, based on a finite difference scheme, which allows the simulation of two-fluid transient flow, such as liquid-liquid, gas-gas or liquid-gas products, in the pipeline. Modules to automatically control process variables were included to employ different strategies to reach an efficient operation. Different test cases were investigated to corroborate the robustness of the methodology. To validate the methodology, the results obtained with the code were compared with a real liquid displacement operation in a section of the OSPAR oil pipeline, belonging to PETROBRAS, with 30'' diameter and 60 km length, presenting good agreement. (author)
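
The pig velocity monitored by such a code can, in the simplest incompressible full-bore approximation, be taken as the mean flow velocity v = Q/A. A sketch of that limit check for a 30-inch line; the flow rate and the allowed velocity band are invented, and only the 30'' diameter comes from the text:

```python
import math

def pig_velocity(flow_m3_s, diameter_m):
    """Mean flow velocity v = Q/A for a full-bore, incompressible flow."""
    area = math.pi * diameter_m ** 2 / 4
    return flow_m3_s / area

D = 30 * 0.0254                  # 30 inches in metres
v = pig_velocity(0.8, D)         # 0.8 m3/s flow rate, invented
within_limits = 0.5 <= v <= 5.0  # invented operational velocity band, m/s
```

A transient two-fluid simulator refines this by tracking pressure waves and the moving interface, but the limit check against stipulated velocities has this same form.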

  8. Annotate-it: a Swiss-knife approach to annotation, analysis and interpretation of single nucleotide variation in human disease.

    Science.gov (United States)

    Sifrim, Alejandro; Van Houdt, Jeroen Kj; Tranchevent, Leon-Charles; Nowakowska, Beata; Sakai, Ryo; Pavlopoulos, Georgios A; Devriendt, Koen; Vermeesch, Joris R; Moreau, Yves; Aerts, Jan

    2012-01-01

    The increasing size and complexity of exome/genome sequencing data requires new tools for clinical geneticists to discover disease-causing variants. Bottlenecks in identifying the causative variation include poor cross-sample querying, constantly changing functional annotation and not considering existing knowledge concerning the phenotype. We describe a methodology that facilitates exploration of patient sequencing data towards identification of causal variants under different genetic hypotheses. Annotate-it facilitates handling, analysis and interpretation of high-throughput single nucleotide variant data. We demonstrate our strategy using three case studies. Annotate-it is freely available and test data are accessible to all users at http://www.annotate-it.org.
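
Exploring variants "under different genetic hypotheses" typically means applying inheritance-pattern filters to genotype calls. A toy sketch, not Annotate-it's API; the variant records and genotype encodings are invented, for de novo and recessive hypotheses:

```python
# De novo hypothesis: variant present in the child, absent in both parents.
def de_novo(variants):
    return [v for v in variants
            if v["child_gt"] != "0/0"
            and v["mother_gt"] == "0/0" and v["father_gt"] == "0/0"]

# Recessive hypothesis: homozygous in the child, heterozygous in both parents.
def recessive(variants):
    return [v for v in variants
            if v["child_gt"] == "1/1"
            and v["mother_gt"] == "0/1" and v["father_gt"] == "0/1"]

variants = [
    {"id": "rs1", "child_gt": "0/1", "mother_gt": "0/0", "father_gt": "0/0"},
    {"id": "rs2", "child_gt": "1/1", "mother_gt": "0/1", "father_gt": "0/1"},
    {"id": "rs3", "child_gt": "0/1", "mother_gt": "0/1", "father_gt": "0/0"},
]
dn = [v["id"] for v in de_novo(variants)]
rec = [v["id"] for v in recessive(variants)]
```

Each hypothesis selects a different candidate set from the same cross-sample data, which is the kind of query such a tool has to support interactively.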

  9. Pipelines : moving biomass and energy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, A. [Alberta Univ., Edmonton, AB (Canada). Dept. of Mechanical Engineering

    2006-07-01

    Moving biomass and energy through pipelines was presented. Field sourced biomass utilization for fuel was discussed in terms of competing cost factors; economies of scale; and differing fuel plant sizes. The cost versus scale in a bioenergy facility was illustrated in chart format. The transportation cost of biomass was presented, as it is a major component of total biomass processing cost, typically in the range of 25-45 per cent of total processing costs for truck transport of biomass. Issues in large scale biomass utilization, scale effects in transportation, and components of transport cost were identified. Other topics related to transportation issues included approaches to pipeline transport; cost of wood chips in pipeline transport; and distance variable cost of transporting wood chips by pipeline. Practical applications were also offered. In addition, the presentation provided and illustrated a model for an ethanol plant supplied by truck transport, as well as a sample configuration for 19 truck-based ethanol plants versus one large facility supplied by truck plus 18 pipelines. Last, pipeline transport of bio-oil and of syngas was discussed. It was concluded that pipeline transport can help in reducing congestion issues in large scale biomass utilization and that it can offer a means to achieve large plant size. Some current research at the University of Alberta on pipeline transport of raw biomass, bio-oil and hydrogen production from biomass for oil sands and pipeline transport was also presented. tabs., figs.
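
Truck versus pipeline transport is commonly compared with a fixed-plus-distance-variable cost model, cost = fixed + rate * distance, whose crossover gives the break-even distance. A sketch with entirely invented coefficients:

```python
def transport_cost(distance_km, fixed, variable_per_km):
    """Delivered cost per tonne: fixed handling plus distance-variable haul."""
    return fixed + variable_per_km * distance_km

def truck_cost(d_km):      # low fixed, high per-km cost ($/t, invented)
    return transport_cost(d_km, fixed=4.0, variable_per_km=0.12)

def pipeline_cost(d_km):   # high fixed, low per-km cost ($/t, invented)
    return transport_cost(d_km, fixed=10.0, variable_per_km=0.05)

# beyond this distance the pipeline becomes the cheaper mode
break_even_km = (10.0 - 4.0) / (0.12 - 0.05)
```

This is why pipeline transport favors large plant sizes: bigger plants draw biomass from longer distances, where the lower distance-variable cost dominates.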

  10. CGKB: an annotation knowledge base for cowpea (Vigna unguiculata L.) methylation filtered genomic genespace sequences

    Directory of Open Access Journals (Sweden)

    Spraggins Thomas A

    2007-04-01

    potential domains on annotated GSS were analyzed using the HMMER package against the Pfam database. The annotated GSS were also assigned Gene Ontology annotation terms and integrated with 228 curated plant metabolic pathways from the Arabidopsis Information Resource (TAIR) knowledge base. The UniProtKB-Swiss-Prot ENZYME database was used to assign putative enzymatic function to each GSS. Each GSS was also analyzed with the Tandem Repeat Finder (TRF) program in order to identify potential SSRs for molecular marker discovery. The raw sequence data, processed annotation, and SSR results were stored in relational tables designed in key-value pair fashion using a PostgreSQL relational database management system. The biological knowledge derived from the sequence data and processed results are represented as views or materialized views in the relational database management system. All materialized views are indexed for quick data access and retrieval. Data processing and analysis pipelines were implemented using the Perl programming language. The web interface was implemented in JavaScript and Perl CGI running on an Apache web server. The CPU intensive data processing and analysis pipelines were run on a computer cluster of more than 30 dual-processor Apple XServes. A job management system called Vela was created as a robust way to submit large numbers of jobs to the Portable Batch System (PBS). Conclusion CGKB is an integrated and annotated resource for cowpea GSS with features of homology-based and HMM-based annotations, enzyme and pathway annotations, GO term annotation, toolkits, and a large number of other facilities to perform complex queries. The cowpea GSS, chloroplast sequences, mitochondrial sequences, retroelements, and SSR sequences are available as FASTA formatted files and downloadable at CGKB. This database and web interface are publicly accessible at http://cowpeagenomics.med.virginia.edu/CGKB/.
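
The "key-value pair fashion" of the annotation tables can be sketched with SQLite standing in for PostgreSQL; the table, column names and sample rows below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# one row per (sequence, annotation key) pair, instead of one wide table
conn.execute("""CREATE TABLE gss_annotation (
                    gss_id TEXT, key TEXT, value TEXT,
                    PRIMARY KEY (gss_id, key))""")
rows = [
    ("GSS0001", "pfam_domain", "PF00069"),
    ("GSS0001", "go_term", "GO:0004672"),
    ("GSS0001", "ec_number", "2.7.11.1"),
]
conn.executemany("INSERT INTO gss_annotation VALUES (?, ?, ?)", rows)

# a per-sequence "view" of the annotation is a pivot over its key-value rows
annotation = dict(conn.execute(
    "SELECT key, value FROM gss_annotation WHERE gss_id = ?", ("GSS0001",)))
```

The appeal of this schema is that new annotation types (a new key) can be added without altering the table, at the cost of pivoting when a flat per-sequence view is needed.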

  11. MPEG-7 based video annotation and browsing

    Science.gov (United States)

    Hoeynck, Michael; Auweiler, Thorsten; Wellhausen, Jens

    2003-11-01

    The huge amount of multimedia data produced worldwide requires annotation in order to enable universal content access and to provide content-based search-and-retrieval functionalities. Since manual video annotation can be time consuming, automatic annotation systems are required. We review recent approaches to content-based indexing and annotation of videos for different kinds of sports and describe our approach to automatic annotation of equestrian sports videos. We especially concentrate on MPEG-7 based feature extraction and content description, where we apply different visual descriptors for cut detection. Further, we extract the temporal positions of single obstacles on the course by analyzing MPEG-7 edge information. Having determined single shot positions as well as the visual highlights, the information is jointly stored with meta-textual information in an MPEG-7 description scheme. Based on this information, we generate content summaries which can be utilized in a user interface in order to provide content-based access to the video stream, and also for media browsing on a streaming server.
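
Histogram-based cut detection, one of the simplest visual-descriptor approaches, flags a shot boundary wherever the histogram distance between consecutive frames exceeds a threshold. A library-free sketch on tiny synthetic grayscale "frames" (lists of pixel values); the threshold and data are invented:

```python
def histogram(frame, bins=4, maxval=256):
    """Coarse grayscale histogram of a frame."""
    h = [0] * bins
    for px in frame:
        h[px * bins // maxval] += 1
    return h

def detect_cuts(frames, threshold=0.5):
    """Indices i where a cut occurs between frame i-1 and frame i."""
    cuts = []
    for i in range(1, len(frames)):
        ha, hb = histogram(frames[i - 1]), histogram(frames[i])
        # L1 histogram distance, normalized by frame size
        dist = sum(abs(a - b) for a, b in zip(ha, hb)) / len(frames[i])
        if dist > threshold:
            cuts.append(i)
    return cuts

dark = [10] * 16      # frames of a dark shot
bright = [200] * 16   # frames of a bright shot
cuts = detect_cuts([dark, dark, bright, bright])
```

MPEG-7 descriptors such as color layout or edge histograms play the role of the plain histogram here, but the thresholded frame-to-frame distance is the same idea.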

  12. Snpdat: Easy and rapid annotation of results from de novo snp discovery projects for model and non-model organisms

    Directory of Open Access Journals (Sweden)

    Doran Anthony G

    2013-02-01

    Full Text Available Abstract Background Single nucleotide polymorphisms (SNPs) are the most abundant genetic variant found in vertebrates and invertebrates. SNP discovery has become a highly automated, robust and relatively inexpensive process, allowing the identification of many thousands of mutations for model and non-model organisms. Annotating large numbers of SNPs can be a difficult and complex process. Many available tools are optimised for use with organisms densely sampled for SNPs, such as humans. There are currently few tools available that are species non-specific or support non-model organism data. Results Here we present SNPdat, a high throughput analysis tool that can provide a comprehensive annotation of both novel and known SNPs for any organism with a draft sequence and annotation. Using a dataset of 4,566 SNPs identified in cattle using high-throughput DNA sequencing we demonstrate the annotations performed and the statistics that can be generated by SNPdat. Conclusions SNPdat provides users with a simple tool for annotation of genomes that are either not supported by other tools or have a small number of annotated SNPs available. SNPdat can also be used to analyse datasets from organisms which are densely sampled for SNPs. As a command line tool it can easily be incorporated into existing SNP discovery pipelines and fills a niche for analyses involving non-model organisms that are not supported by many available SNP annotation tools. SNPdat will be of great interest to scientists involved in SNP discovery and analysis projects, particularly those with limited bioinformatics experience.
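
The core of annotating a SNP against a draft genome annotation is an interval lookup: is the SNP inside a gene, and if so, inside an exon? An illustrative sketch, not SNPdat's implementation; the gene model and coordinates are invented:

```python
# gene -> (chromosome, gene start, gene end, list of (exon start, exon end))
GENES = {
    "geneX": ("chr1", 100, 500, [(100, 200), (400, 500)]),
}

def classify_snp(chrom, pos):
    """Classify a SNP position as exonic, intronic or intergenic."""
    for _, (g_chrom, g_start, g_end, exons) in GENES.items():
        if chrom == g_chrom and g_start <= pos <= g_end:
            if any(s <= pos <= e for s, e in exons):
                return "exonic"
            return "intronic"
    return "intergenic"

calls = [classify_snp("chr1", p) for p in (150, 300, 900)]
```

A real tool reads the gene models from a GTF/GFF file and adds codon-level annotation (synonymous versus non-synonymous) on top of this positional classification.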

  13. GRN2SBML: automated encoding and annotation of inferred gene regulatory networks complying with SBML.

    Science.gov (United States)

    Vlaic, Sebastian; Hoffmann, Bianca; Kupfer, Peter; Weber, Michael; Dräger, Andreas

    2013-09-01

    GRN2SBML automatically encodes gene regulatory networks derived from several inference tools in the Systems Biology Markup Language (SBML). Through a graphical user interface, the networks can be annotated via the simple object access protocol (SOAP)-based application programming interface of BioMart Central Portal and the Minimum Information Required in the Annotation of Models registry. Additionally, we provide an R package, which processes the output of supported inference algorithms and automatically passes all required parameters to GRN2SBML. GRN2SBML therefore closes a gap in the processing pipeline between the inference of gene regulatory networks and their subsequent analysis, visualization and storage. GRN2SBML is freely available under the GNU Public License version 3 and can be downloaded from http://www.hki-jena.de/index.php/0/2/490. General information on GRN2SBML, examples and tutorials are available at the tool's web page.
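
Serializing an inferred network into an SBML-like document can be sketched with the standard library alone. This emits only a minimal skeleton, not a complete schema-valid SBML file, and it is not GRN2SBML's code; the gene names and edge list are invented:

```python
import xml.etree.ElementTree as ET

edges = [("geneA", "geneB"), ("geneB", "geneC")]  # regulator -> target pairs

sbml = ET.Element("sbml", {"level": "3", "version": "1"})
model = ET.SubElement(sbml, "model", {"id": "inferred_grn"})

# every gene in the network becomes a species
species_list = ET.SubElement(model, "listOfSpecies")
for gene in sorted({g for edge in edges for g in edge}):
    ET.SubElement(species_list, "species", {"id": gene})

# every regulatory edge becomes a (placeholder) reaction
reactions = ET.SubElement(model, "listOfReactions")
for regulator, target in edges:
    ET.SubElement(reactions, "reaction",
                  {"id": f"{regulator}_regulates_{target}"})

xml_text = ET.tostring(sbml, encoding="unicode")
```

A real encoder would use an SBML library, declare namespaces, and attach the MIRIAM-style annotations the abstract describes; the structural mapping from edges to model elements is the part sketched here.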

  14. Pipeline operators training and certification using thermohydraulic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Barreto, Claudio V.; Plasencia C, Jose [Pontificia Universidade Catolica (PUC-Rio), Rio de Janeiro, RJ (Brazil). Nucleo de Simulacao Termohidraulica de Dutos (SIMDUT); Montalvao, Filipe; Costa, Luciano [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    The continuous training and certification of pipeline operators at TRANSPETRO's Pipeline National Operations Control Center (CNCO) is an essential task aimed at the efficiency and safety of oil and derivatives transport operations through the Brazilian pipeline network. For this objective, a hydraulic simulator is considered an excellent tool that allows the creation of different operational scenarios, both for training on pipeline hydraulic behavior and for testing the operator's responses to normal and abnormal real-time operational conditions. The hydraulic simulator is developed based on pipeline simulation software that supplies the hydraulic responses normally acquired from the pipeline remote units in the field. The pipeline simulation software has a communication interface system that sends and receives data to the SCADA supervisory system database, using the SCADA graphical interface to create and customize human machine interfaces (HMI) from which the operator/instructor has total control of the pipeline/system and instrumentation by sending commands. Therefore, it is possible to have realistic training outside of the real production systems, while acquiring experience during training hours with the operation of a real pipeline. A pilot project was initiated at TRANSPETRO - CNCO to evaluate the advantages of hydraulic simulators in pipeline operator training and certification programs. The first part of the project was the development of three simulators for different pipelines. The excellent results permitted expanding the project to a total of twenty different pipelines, implemented in training programs for pipelines presently operated by CNCO as well as for the new ones being migrated. The main objective of this paper is to present an overview of the implementation process and the development of a training environment using commercial pipe simulation software. This paper also presents

  15. Deformably registering and annotating whole CLARITY brains to an atlas via masked LDDMM

    Science.gov (United States)

    Kutten, Kwame S.; Vogelstein, Joshua T.; Charon, Nicolas; Ye, Li; Deisseroth, Karl; Miller, Michael I.

    2016-04-01

    The CLARITY method renders brains optically transparent to enable high-resolution imaging in the structurally intact brain. Anatomically annotating CLARITY brains is necessary for discovering which regions contain signals of interest. Manually annotating whole-brain, terabyte CLARITY images is difficult, time-consuming, subjective, and error-prone. Automatically registering CLARITY images to a pre-annotated brain atlas offers a solution, but is difficult for several reasons. Removal of the brain from the skull and subsequent storage and processing cause variable non-rigid deformations, thus compounding inter-subject anatomical variability. Additionally, the signal in CLARITY images arises from various biochemical contrast agents which only sparsely label brain structures. This sparse labeling challenges the most commonly used registration algorithms that need to match image histogram statistics to the more densely labeled histological brain atlases. The standard method is a multiscale Mutual Information B-spline algorithm that dynamically generates an average template as an intermediate registration target. We determined that this method performs poorly when registering CLARITY brains to the Allen Institute's Mouse Reference Atlas (ARA), because the image histogram statistics are poorly matched. Therefore, we developed a method (Mask-LDDMM) for registering CLARITY images, that automatically finds the brain boundary and learns the optimal deformation between the brain and atlas masks. Using Mask-LDDMM without an average template provided better results than the standard approach when registering CLARITY brains to the ARA. The LDDMM pipelines developed here provide a fast automated way to anatomically annotate CLARITY images; our code is available as open source software at http://NeuroData.io.
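
A common sanity check for any mask-based registration such as Mask-LDDMM is the Dice overlap between the deformed brain mask and the atlas mask. A tiny pure-Python sketch on invented binary masks, flattened to 1-D lists for brevity:

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A|+|B|) for binary masks of equal shape."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

atlas_mask = [1, 1, 1, 1, 0, 0, 0, 0]   # invented atlas brain mask
registered = [0, 1, 1, 1, 1, 0, 0, 0]   # invented deformed subject mask
overlap = dice(registered, atlas_mask)   # 1.0 would be perfect overlap
```

Because Mask-LDDMM matches masks rather than sparse intensity patterns, mask overlap scores like this are a natural way to compare it against intensity-based baselines.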

  16. 76 FR 53086 - Pipeline Safety: Safety of Gas Transmission Pipelines

    Science.gov (United States)

    2011-08-25

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket No. PHMSA-2011-0023] RIN 2137-AE72 Pipeline Safety: Safety of Gas Transmission Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), Department of Transportation (DOT...

  17. 76 FR 70953 - Pipeline Safety: Safety of Gas Transmission Pipelines

    Science.gov (United States)

    2011-11-16

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket ID PHMSA-2011-0023] RIN 2137-AE72 Pipeline Safety: Safety of Gas Transmission Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Advance notice of...

  18. A Parallel Software Pipeline for DMET Microarray Genotyping Data Analysis

    Directory of Open Access Journals (Sweden)

    Giuseppe Agapito

    2018-06-01

    Full Text Available Personalized medicine is an aspect of P4 medicine (predictive, preventive, personalized and participatory) based precisely on the customization of all medical characteristics of each subject. In personalized medicine, the development of medical treatments and drugs is tailored to the individual characteristics and needs of each subject, according to the study of diseases at different scales, from genotype to phenotype. To make the goal of personalized medicine concrete, it is necessary to employ high-throughput methodologies such as Next Generation Sequencing (NGS), Genome-Wide Association Studies (GWAS), Mass Spectrometry or Microarrays, which are able to investigate a single disease from a broader perspective. A side effect of high-throughput methodologies is the massive amount of data produced for each single experiment, which poses several challenges (e.g., high execution time and required memory) to bioinformatics software. Thus a main requirement of modern bioinformatics software is the use of good software engineering methods and efficient programming techniques able to face those challenges, including the use of parallel programming and efficient, compact data structures. This paper presents the design and the experimentation of a comprehensive software pipeline, named microPipe, for the preprocessing, annotation and analysis of microarray-based Single Nucleotide Polymorphism (SNP) genotyping data. A use case in pharmacogenomics is presented. The main advantages of using microPipe are: the reduction of errors that may happen when trying to make data compatible among different tools; the possibility to analyze huge datasets in parallel; and the easy annotation and integration of data. microPipe is available under a Creative Commons license and is freely downloadable for academic and not-for-profit institutions.
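
The chunk-and-process-in-parallel pattern underlying such a pipeline can be sketched with the standard library. This is a schematic, not microPipe's code; the probe IDs are invented and the annotation step is a placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

def annotate_chunk(chunk):
    """Placeholder for the real per-probe annotation work."""
    return [{"probe": probe, "annotated": True} for probe in chunk]

def chunked(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

probes = [f"AM_{i}" for i in range(10)]  # invented probe IDs
with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves chunk order, so results line up with the input
    results = [row for part in pool.map(annotate_chunk, chunked(probes, 3))
               for row in part]
```

For CPU-bound annotation a process pool (or a compiled extension) would replace the thread pool, but the decomposition into independently annotated chunks is the same.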

  19. Coreference annotation and resolution in the Colorado Richly Annotated Full Text (CRAFT) corpus of biomedical journal articles.

    Science.gov (United States)

    Cohen, K Bretonnel; Lanfranchi, Arrick; Choi, Miji Joo-Young; Bada, Michael; Baumgartner, William A; Panteleyeva, Natalya; Verspoor, Karin; Palmer, Martha; Hunter, Lawrence E

    2017-08-17

    Coreference resolution is the task of finding strings in text that have the same referent as other strings. Failures of coreference resolution are a common cause of false negatives in information extraction from the scientific literature. In order to better understand the nature of the phenomenon of coreference in biomedical publications and to increase performance on the task, we annotated the Colorado Richly Annotated Full Text (CRAFT) corpus with coreference relations. The corpus was manually annotated with coreference relations, including identity and appositives for all coreferring base noun phrases. The OntoNotes annotation guidelines, with minor adaptations, were used. Interannotator agreement ranges from 0.480 (entity-based CEAF) to 0.858 (Class-B3), depending on the metric that is used to assess it. The resulting corpus adds nearly 30,000 annotations to the previous release of the CRAFT corpus. Differences from related projects include a much broader definition of markables, connection to extensive annotation of several domain-relevant semantic classes, and connection to complete syntactic annotation. Tool performance was benchmarked on the data. A publicly available out-of-the-box, general-domain coreference resolution system achieved an F-measure of 0.14 (B3), while a simple domain-adapted rule-based system achieved an F-measure of 0.42. An ensemble of the two reached an F-measure of 0.46. Following the IDENTITY chains in the data would add 106,263 additional named entities in the full 97-paper corpus, for an increase of 76% in the semantic classes of the eight ontologies that have been annotated in earlier versions of the CRAFT corpus. The project produced a large data set for further investigation of coreference and coreference resolution in the scientific literature. The work raised issues in the phenomenon of reference in this domain and genre, and the paper proposes that many mentions that would be considered generic in the general domain are not
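
The B3 metric used above scores, for each mention, the overlap between its chain in one partition and its chain in the other, averaging per-mention precision and recall. A compact sketch; the mention IDs and chains are invented:

```python
def b_cubed(response, key):
    """B3 precision, recall and F over two coreference partitions.

    response, key: lists of sets of mention IDs (coreference chains)
    covering the same mentions.
    """
    def chain_of(partition):
        return {m: chain for chain in partition for m in chain}
    r_of, k_of = chain_of(response), chain_of(key)
    mentions = list(r_of)
    prec = sum(len(r_of[m] & k_of[m]) / len(r_of[m]) for m in mentions) / len(mentions)
    rec = sum(len(r_of[m] & k_of[m]) / len(k_of[m]) for m in mentions) / len(mentions)
    return prec, rec, 2 * prec * rec / (prec + rec)

key = [{"m1", "m2", "m3"}, {"m4"}]          # gold chains
response = [{"m1", "m2"}, {"m3", "m4"}]     # system (or second annotator) chains
precision, recall, f1 = b_cubed(response, key)
```

Unlike link-based metrics such as MUC, B3 gives partial credit per mention, which is why it is a common choice for reporting both system scores and inter-annotator agreement.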

  20. MIPS bacterial genomes functional annotation benchmark dataset.

    Science.gov (United States)

    Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner

    2005-05-15

    Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality benchmark data as well as tedious preparatory work to generate the sequence parameters required as input for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). The resource includes precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other features that could be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab

  1. Revegetation assessment on different areas of a methane pipeline in Tuscany

    Directory of Open Access Journals (Sweden)

    Staglianò N

    2012-12-01

    Full Text Available Revegetation assessment on different areas of a methane pipeline in Tuscany. Degraded areas caused by extra-agricultural activity (such as quarries, dumps, ski runs, methane pipeline tracks, etc.) or by natural events (such as landslides) are present in a large part of the Italian territory, so effective restoration is extremely important in order to reduce erosion risks and to allow their better integration into the surrounding landscape. Revegetation is usually performed using commercial mixtures of species with a forage aptitude. The aim of this work was to analyse the evolution of revegetation performed on different areas of a methane pipeline in Tuscany (central Italy), both in Mediterranean environments and in mountain areas. Knowledge of the mixture used during revegetation and of the time of intervention made it possible, on the one hand, to discriminate species introduced by revegetation from those arising from native recolonisation of the tracks and, on the other hand, to know the age of the canopies at the time of the botanical analysis. The following variables were assessed on the studied herbaceous resources: ground cover, floristic composition, biodiversity, level of recolonisation by native species, and similarity with natural areas. Data collection permitted the evaluation of the efficiency of the studied revegetation, an assessment of the roles played by sown and native species, an estimate of the presence of native species as a measure of how well the restoration integrates with the environment, and an analysis of the most important parameters affecting vegetation dynamics in these peculiar settings.

  2. Evaluation of three automated genome annotations for Halorhabdus utahensis.

    Directory of Open Access Journals (Sweden)

    Peter Bakke

    2009-07-01

    Full Text Available Genome annotations are accumulating rapidly and depend heavily on automated annotation systems. Many genome centers offer annotation systems but no one has compared their output in a systematic way to determine accuracy and inherent errors. Errors in the annotations are routinely deposited in databases such as NCBI and used to validate subsequent annotations. We submitted the genome sequence of the halophilic archaeon Halorhabdus utahensis to be analyzed by three genome annotation services. We have examined the output from each service in a variety of ways in order to compare the methodology and effectiveness of the annotations, as well as to explore the genes, pathways, and physiology of the previously unannotated genome. The annotation services differ considerably in gene calls, features, and ease of use. We had to manually identify the origin of replication and the species-specific consensus ribosome-binding site. Additionally, we conducted laboratory experiments to test H. utahensis growth and enzyme activity. Current annotation practices need to improve in order to more accurately reflect a genome's biological potential. We make specific recommendations that could improve the quality of microbial annotation projects.

  3. Towards Viral Genome Annotation Standards, Report from the 2010 NCBI Annotation Workshop.

    Science.gov (United States)

    Brister, James Rodney; Bao, Yiming; Kuiken, Carla; Lefkowitz, Elliot J; Le Mercier, Philippe; Leplae, Raphael; Madupu, Ramana; Scheuermann, Richard H; Schobel, Seth; Seto, Donald; Shrivastava, Susmita; Sterk, Peter; Zeng, Qiandong; Klimke, William; Tatusova, Tatiana

    2010-10-01

    Improvements in DNA sequencing technologies portend a new era in virology and could possibly lead to a giant leap in our understanding of viral evolution and ecology. Yet, as viral genome sequences begin to fill the world's biological databases, it is critically important to recognize that the scientific promise of this era is dependent on consistent and comprehensive genome annotation. With this in mind, the NCBI Genome Annotation Workshop recently hosted a study group tasked with developing sequence, function, and metadata annotation standards for viral genomes. This report describes the issues involved in viral genome annotation and reviews policy recommendations presented at the NCBI Annotation Workshop.

  4. Towards Viral Genome Annotation Standards, Report from the 2010 NCBI Annotation Workshop

    Directory of Open Access Journals (Sweden)

    Qiandong Zeng

    2010-10-01

    Full Text Available Improvements in DNA sequencing technologies portend a new era in virology and could possibly lead to a giant leap in our understanding of viral evolution and ecology. Yet, as viral genome sequences begin to fill the world’s biological databases, it is critically important to recognize that the scientific promise of this era is dependent on consistent and comprehensive genome annotation. With this in mind, the NCBI Genome Annotation Workshop recently hosted a study group tasked with developing sequence, function, and metadata annotation standards for viral genomes. This report describes the issues involved in viral genome annotation and reviews policy recommendations presented at the NCBI Annotation Workshop.

  5. Pipeline rehabilitation planning

    Energy Technology Data Exchange (ETDEWEB)

    Palmer-Jones, Roland; Hopkins, Phil; Eyre, David [PENSPEN (United Kingdom)

    2005-07-01

    An operator faced with an onshore pipeline that has extensive damage must consider the need for rehabilitation, the sort of rehabilitation to be used, and the rehabilitation schedule. This paper will consider pipeline rehabilitation based on the authors' experiences from recent projects, and recommend a simple strategy for planning pipeline rehabilitation. It will also consider rehabilitation options: external re-coating; internal lining; internal painting; programmed repairs. The main focus will be external re-coating. Consideration will be given to rehabilitation coating types, including tape wraps, epoxy, and polyurethane. Finally it will discuss different options for scheduling the rehabilitation of corrosion damage including: the statistical comparison of signals from inspection pigs; statistical comparison of selected measurements from inspection pigs and other inspections; the use of corrosion rates estimated for the mechanisms and conditions; expert judgement. (author)

  6. Experimental and Numerical Analysis of a Water Emptying Pipeline Using Different Air Valves

    Directory of Open Access Journals (Sweden)

    Oscar E. Coronado-Hernández

    2017-02-01

    Full Text Available The emptying procedure is a common operation that engineers have to face in pipelines. It generates subatmospheric pressure caused by the expansion of air pockets, which can produce the collapse of the system depending on the conditions of the installation. To avoid this problem, engineers install air valves in pipelines. However, if air valves are not adequately designed, then the risk in pipelines persists. In this research, a mathematical model is developed to simulate an emptying process in pipelines that can be used for planning this type of operation. The proposed one-dimensional model analyzes the water phase propagation with a new rigid model and the effect of air pockets using thermodynamic formulations. The proposed model is validated through measurements of the air pocket absolute pressure, the water velocity and the length of the emptying columns in an experimental facility. Results show that the proposed model can accurately predict the hydraulic characteristic variables.
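The thermodynamic treatment of the expanding air pocket is commonly based on a polytropic relation p·Vⁿ = const. The sketch below is an illustrative assumption of that generic relation, not the paper's own model; the function name, exponent choice, and volumes are invented for the example.

```python
def air_pocket_pressure(p1, v1, v2, n=1.2):
    """Absolute pressure of an air pocket after expanding from volume v1
    to v2 under a polytropic process p * V**n = const.
    n = 1.0 is isothermal, n = 1.4 adiabatic; intermediate values are
    often assumed for slow emptying (illustrative choice, not the paper's).
    """
    return p1 * (v1 / v2) ** n

# As the water column drains, the trapped air pocket grows; doubling its
# volume drops the absolute pressure well below atmospheric, which is the
# subatmospheric condition the air valves are meant to relieve.
p_atm = 101325.0                                    # Pa, standard atmosphere
p2 = air_pocket_pressure(p_atm, v1=0.5, v2=1.0)     # pocket volume doubles
```

For n = 1.2 the doubled pocket sits at roughly 44% of atmospheric pressure, illustrating why poorly sized air valves leave the pipe at risk of collapse.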

  7. Worldwide natural gas pipeline situation. Sekai no tennen gas pipeline jokyo

    Energy Technology Data Exchange (ETDEWEB)

    Arimoto, T [Osaka Gas Co. Ltd., Osaka (Japan)

    1993-03-01

    Constructing natural gas pipelines over wide areas requires huge investments. Many countries are building natural gas supply infrastructures with public support, as part of national policies to promote the use of natural gas. This paper describes the present state of pipeline construction in Western Europe, the U.S.A., Korea and Taiwan. In Western Europe, transporting companies established in line with national policy own trunk pipelines and storage facilities, and import and distribute natural gas. The U.S.A. has 2,300 small and large pipeline companies engaged in the transportation business. Pipelines extend about 1.9 million kilometers in total, with trunk pipelines accounting for about 440,000 kilometers. The companies are granted eminent domain for the right of way. Korea plans to build a pipeline network of 1,600 kilometers by around 2000. Taiwan has completed trunk pipelines extending 330 kilometers in two years. In Japan, the industry is preparing draft plans for wide-area pipeline construction. 5 figs., 1 tab.

  8. UniProtKB/Swiss-Prot, the Manually Annotated Section of the UniProt KnowledgeBase: How to Use the Entry View.

    Science.gov (United States)

    Boutet, Emmanuel; Lieberherr, Damien; Tognolli, Michael; Schneider, Michel; Bansal, Parit; Bridge, Alan J; Poux, Sylvain; Bougueleret, Lydie; Xenarios, Ioannis

    2016-01-01

    The Universal Protein Resource (UniProt, http://www.uniprot.org ) consortium is an initiative of the SIB Swiss Institute of Bioinformatics (SIB), the European Bioinformatics Institute (EBI) and the Protein Information Resource (PIR) to provide the scientific community with a central resource for protein sequences and functional information. The UniProt consortium maintains the UniProt KnowledgeBase (UniProtKB), updated every 4 weeks, and several supplementary databases including the UniProt Reference Clusters (UniRef) and the UniProt Archive (UniParc). The Swiss-Prot section of the UniProt KnowledgeBase (UniProtKB/Swiss-Prot) contains publicly available protein sequences that have been manually annotated by experts, obtained from a broad spectrum of organisms. Plant protein entries are produced in the frame of the Plant Proteome Annotation Program (PPAP), with an emphasis on characterized proteins of Arabidopsis thaliana and Oryza sativa. High-level annotations provided by UniProtKB/Swiss-Prot are widely used to predict annotation of newly available proteins through automatic pipelines. The purpose of this chapter is to present a guided tour of a UniProtKB/Swiss-Prot entry. We will also present some of the tools and databases that are linked to each entry.

  9. Structure Annotation and Quantification of Wheat Seed Oxidized Lipids by High-Resolution LC-MS/MS.

    Science.gov (United States)

    Riewe, David; Wiebach, Janine; Altmann, Thomas

    2017-10-01

    Lipid oxidation is a process ubiquitous in life, but the direct and comprehensive analysis of oxidized lipids has been limited by available analytical methods. We applied high-resolution liquid chromatography-mass spectrometry (LC-MS) and tandem mass spectrometry (MS/MS) to quantify oxidized lipids (glycerides, fatty acids, phospholipids, lysophospholipids, and galactolipids) and implemented a platform-independent high-throughput-amenable analysis pipeline for the high-confidence annotation and acyl composition analysis of oxidized lipids. Lipid contents of 90 different naturally aged wheat ( Triticum aestivum ) seed stocks were quantified in an untargeted high-resolution LC-MS experiment, resulting in 18,556 quantitative mass-to-charge ratio features. In a posthoc liquid chromatography-tandem mass spectrometry experiment, high-resolution MS/MS spectra (5 mD accuracy) were recorded for 8,957 out of 12,080 putatively monoisotopic features of the LC-MS data set. A total of 353 nonoxidized and 559 oxidized lipids with up to four additional oxygen atoms were annotated based on the accurate mass recordings (1.5 ppm tolerance) of the LC-MS data set and filtering procedures. MS/MS spectra available for 828 of these annotations were analyzed by translating experimentally known fragmentation rules of lipids into the fragmentation of oxidized lipids. This led to the identification of 259 nonoxidized and 365 oxidized lipids by both accurate mass and MS/MS spectra and to the determination of acyl compositions for 221 nonoxidized and 295 oxidized lipids. Analysis of 15-year aged wheat seeds revealed increased lipid oxidation and hydrolysis in seeds stored in ambient versus cold conditions. © 2017 The author(s). All Rights Reserved.
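The annotation step above filters candidate lipids by accurate mass within a 1.5 ppm tolerance. A minimal sketch of such a relative mass-tolerance check is shown below; the function name and the example masses are illustrative assumptions, not values from the study.

```python
def within_ppm(observed_mz, theoretical_mz, tol_ppm=1.5):
    """True if the observed m/z matches the theoretical m/z within a
    relative tolerance expressed in parts per million (ppm)."""
    return abs(observed_mz - theoretical_mz) / theoretical_mz * 1e6 <= tol_ppm

# A candidate oxidized-lipid feature is kept only if its accurate mass
# matches the theoretical monoisotopic mass of a formula with added
# oxygen atoms (masses here are made up for illustration):
keep = within_ppm(700.0005, 700.0)   # 0.71 ppm off -> within tolerance
drop = within_ppm(700.0020, 700.0)   # 2.86 ppm off -> rejected
```

At high masses an absolute-dalton cutoff would be either too loose or too strict, which is why high-resolution annotation pipelines express tolerance in ppm.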

  10. optimization for trenchless reconstruction of pipelines

    Directory of Open Access Journals (Sweden)

    Zhmakov Gennadiy Nikolaevich

    2015-01-01

    Full Text Available Today, technologies for trenchless reconstruction of pipelines are becoming more and more widely used in Russia and abroad. One of the most promising methods is shock-free destruction of the old pipeline being replaced, using hydraulic installations whose working mechanism is a cutting unit with knife disks and a conic expander. A design of the working mechanism that allows trenchless reconstruction of pipelines of different diameters has been optimized and patented, and a developmental prototype has been manufactured. The dependence of the pipeline cutting force on the bluntness of the knives of the working mechanism was determined: the force required to cut old steel pipelines with a blunt knife increases in proportion to the degree of blunting. Two stands for endurance testing of the knives in a laboratory environment are proposed and patented.

  11. Protein sequence annotation in the genome era: the annotation concept of SWISS-PROT+TREMBL.

    Science.gov (United States)

    Apweiler, R; Gateau, A; Contrino, S; Martin, M J; Junker, V; O'Donovan, C; Lang, F; Mitaritonna, N; Kappus, S; Bairoch, A

    1997-01-01

    SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation, a minimal level of redundancy and a high level of integration with other databases. Ongoing genome sequencing projects have dramatically increased the number of protein sequences to be incorporated into SWISS-PROT. Since we do not want to dilute the quality standards of SWISS-PROT by incorporating sequences without proper sequence analysis and annotation, we cannot speed up the incorporation of new incoming data indefinitely. However, as we also want to make the sequences available as fast as possible, we introduced TREMBL (TRanslation of EMBL nucleotide sequence database), a supplement to SWISS-PROT. TREMBL consists of computer-annotated entries in SWISS-PROT format derived from the translation of all coding sequences (CDS) in the EMBL nucleotide sequence database, except for CDS already included in SWISS-PROT. While TREMBL is already of immense value, its computer-generated annotation does not match the quality of SWISS-PROT's. The main difference is in the protein functional information attached to sequences. With this in mind, we are dedicating substantial effort to develop and apply computer methods to enhance the functional information attached to TREMBL entries.

  12. Leak detection systems as a central component of pipeline safety concepts; Leckueberwachungssysteme als zentrale Bestandteile von Pipeline-Sicherheitskonzepten

    Energy Technology Data Exchange (ETDEWEB)

    Vogt, Daniel [KROHNE Oil and Gas B.V., Breda (Netherlands)

    2013-03-15

    The transport of materials in pipelines is continuously increasing worldwide. Pipelines are among the most economic and safe transport systems in every respect. To ensure this remains the case, not only new pipelines but also existing pipelines have to be kept technically up to date. Leaks are a key safety risk, and their causes are manifold, ranging from earthquakes, corrosion and material fatigue to illegal tapping by thieves. Dedicated leak detection is often used to limit these risks. The motivation for detecting leaks is to minimize the consequences of accidents, downtimes and product losses, and to satisfy regulatory requirements. Leaks in pipelines can be detected in different ways, from simple visual inspection during line walks to computer-assisted systems that monitor operating conditions even in underground and submarine pipelines.

  13. Facilitating functional annotation of chicken microarray data

    Directory of Open Access Journals (Sweden)

    Gresham Cathy R

    2009-10-01

    Full Text Available Abstract Background Modeling results from chicken microarray studies is challenging for researchers due to the little functional annotation associated with these arrays. The Affymetrix GeneChip chicken genome array, one of the biggest arrays serving as a key research tool for the study of chicken functional genomics, is among the few arrays that link gene products to Gene Ontology (GO) terms. However, the GO annotation data presented by Affymetrix are incomplete; for example, they do not show references linked to manually annotated functions. In addition, there is no tool that allows microarray researchers to directly retrieve functional annotations for their datasets from the annotated arrays. This costs researchers a substantial amount of time searching multiple GO databases for functional information. Results We have improved the breadth of functional annotations of the gene products associated with probesets on the Affymetrix chicken genome array by 45% and the quality of annotation by 14%. We have also identified the most significant diseases and disorders, different types of genes, and known drug targets represented on the Affymetrix chicken genome array. To facilitate functional annotation of other arrays and microarray experimental datasets we developed an Array GO Mapper (AGOM) tool to help researchers quickly retrieve corresponding functional information for their datasets. Conclusion Results from this study will directly facilitate annotation of other chicken arrays and microarray experimental datasets. Researchers will be able to quickly model their microarray datasets into more reliable biological functional information by using the AGOM tool. The diseases, disorders, gene types and drug targets revealed in the study will allow researchers to learn more about how genes function in complex biological systems and may lead to new drug discovery and development of therapies.
The GO annotation data generated will be available for public use via AgBase website and

  14. Semi-Semantic Annotation: A guideline for the URDU.KON-TB treebank POS annotation

    Directory of Open Access Journals (Sweden)

    Qaiser ABBAS

    2016-12-01

    Full Text Available This work elaborates the semi-semantic part-of-speech annotation guidelines for the URDU.KON-TB treebank, an annotated corpus. A hierarchical annotation scheme was designed to label the parts of speech and then applied to the corpus. This raw corpus was collected from the Urdu Wikipedia and the Jang newspaper and then annotated with the proposed semi-semantic part-of-speech labels. The corpus contains text of local and international news, social stories, sports, culture, finance, religion, traveling, etc. This exercise finally contributed a part-of-speech annotation layer to the URDU.KON-TB treebank. Twenty-two main part-of-speech categories are divided into subcategories, which capture the morphological and semantic information encoded in them. This article mainly reports the annotation guidelines; however, it also briefly describes the development of the URDU.KON-TB treebank, including the raw corpus collection, the design and application of the annotation scheme and, finally, its statistical evaluation and results. The guidelines presented here will be useful for the linguistic community to annotate sentences not only in the national language Urdu but also in other indigenous languages like Punjabi, Sindhi, Pashto, etc.

  15. Diagnosing plant pipeline system performance using radiotracer techniques

    Energy Technology Data Exchange (ETDEWEB)

    Kasban, H.; Ali, Elsayed H.; Arafa, H. [Engineering Department, Nuclear Research Center, Atomic Energy Authority, Inshas (Egypt)

    2017-02-15

    This study presents an experimental work in a petrochemical company for scanning a buried pipeline using a Tc{sup 99m} radiotracer, based on measured velocity changes, in order to determine the flow reduction along the pipeline. In this work, the Tc{sup 99m} radiotracer was injected into the pipeline and monitored by sodium iodide scintillation detectors located at several positions along the pipeline. The flow velocity was calculated between every two consecutive detectors along the pipeline. In practice, six experiments were carried out using two different data acquisition systems, each connected to four detectors. During the fifth experiment, a bypass was discovered between the scanned pipeline and another buried parallel pipeline connected after the injection point. The results indicate that the bypass adversely affected the volumetric flow rate in the scanned pipeline.
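The velocity-between-detectors calculation described above reduces to distance over transit time for each consecutive detector pair. The sketch below shows that computation on hypothetical detector positions and tracer-peak arrival times; the function name and input values are invented, not the study's data.

```python
def segment_velocities(positions_m, arrival_times_s):
    """Mean flow velocity (m/s) between each pair of consecutive
    detectors, given detector positions along the pipeline and the
    arrival times of the radiotracer peak at each detector."""
    return [
        (x2 - x1) / (t2 - t1)
        for (x1, t1), (x2, t2) in zip(
            zip(positions_m, arrival_times_s),
            zip(positions_m[1:], arrival_times_s[1:]),
        )
    ]

# Hypothetical example: three detectors at 0, 10 and 30 m; the tracer
# peak arrives at 0, 5 and 15 s. A sudden drop in one segment's velocity
# relative to the others would flag a flow reduction, such as the bypass
# found in the fifth experiment.
v = segment_velocities([0.0, 10.0, 30.0], [0.0, 5.0, 15.0])
```

A segment whose velocity falls below its neighbours localizes the loss of flow to the pipe length between those two detectors.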

  16. Assembly, Annotation, and Analysis of Multiple Mycorrhizal Fungal Genomes

    Energy Technology Data Exchange (ETDEWEB)

    Initiative Consortium, Mycorrhizal Genomics; Kuo, Alan; Grigoriev, Igor; Kohler, Annegret; Martin, Francis

    2013-03-08

    Mycorrhizal fungi play critical roles in host plant health, soil community structure and chemistry, and carbon and nutrient cycling, all areas of intense interest to the US Dept. of Energy (DOE) Joint Genome Institute (JGI). To this end we are building on our earlier sequencing of the Laccaria bicolor genome by partnering with INRA-Nancy and the mycorrhizal research community in the MGI to sequence and analyze dozens of mycorrhizal genomes of all Basidiomycota and Ascomycota orders and multiple ecological types (ericoid, orchid, and ectomycorrhizal). JGI has developed and deployed high-throughput sequencing techniques, and Assembly, RNASeq, and Annotation Pipelines. In 2012 alone we sequenced, assembled, and annotated 12 draft or improved genomes of mycorrhizae, and predicted ~232,831 genes and ~15,011 multigene families. All of this data is publicly available on JGI MycoCosm (http://jgi.doe.gov/fungi/), which provides access to both the genome data and tools with which to analyze the data. Preliminary comparisons of the current total of 14 public mycorrhizal genomes suggest that 1) short secreted proteins potentially involved in symbiosis are more enriched in some orders than in others amongst the mycorrhizal Agaricomycetes, 2) there are wide ranges of numbers of genes involved in certain functional categories, such as signal transduction and post-translational modification, and 3) novel gene families are specific to some ecological types.

  17. Analysis of high-throughput sequencing and annotation strategies for phage genomes.

    Directory of Open Access Journals (Sweden)

    Matthew R Henn

    Full Text Available BACKGROUND: Bacterial viruses (phages play a critical role in shaping microbial populations as they influence both host mortality and horizontal gene transfer. As such, they have a significant impact on local and global ecosystem function and human health. Despite their importance, little is known about the genomic diversity harbored in phages, as methods to capture complete phage genomes have been hampered by the lack of knowledge about the target genomes, and difficulties in generating sufficient quantities of genomic DNA for sequencing. Of the approximately 550 phage genomes currently available in the public domain, fewer than 5% are marine phage. METHODOLOGY/PRINCIPAL FINDINGS: To advance the study of phage biology through comparative genomic approaches we used marine cyanophage as a model system. We compared DNA preparation methodologies (DNA extraction directly from either phage lysates or CsCl purified phage particles, and sequencing strategies that utilize either Sanger sequencing of a linker amplification shotgun library (LASL or of a whole genome shotgun library (WGSL, or 454 pyrosequencing methods. We demonstrate that genomic DNA sample preparation directly from a phage lysate, combined with 454 pyrosequencing, is best suited for phage genome sequencing at scale, as this method is capable of capturing complete continuous genomes with high accuracy. In addition, we describe an automated annotation informatics pipeline that delivers high-quality annotation and yields few false positives and negatives in ORF calling. CONCLUSIONS/SIGNIFICANCE: These DNA preparation, sequencing and annotation strategies enable a high-throughput approach to the burgeoning field of phage genomics.

  18. PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.

    Science.gov (United States)

    Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan

    2018-05-01

    Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error checking functions and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.

  19. Annotated chemical patent corpus: a gold standard for text mining.

    Directory of Open Access Journals (Sweden)

    Saber A Akhondi

    Full Text Available Exploring the chemical and biological space covered by patent applications is crucial in early-stage medicinal chemistry activities. Patent analysis can provide understanding of compound prior art, novelty checking, validation of biological assays, and identification of new starting points for chemical exploration. Extracting chemical and biological entities from patents through manual extraction by expert curators can take a substantial amount of time and resources. Text mining methods can help to ease this process. To validate the performance of such methods, a manually annotated patent corpus is essential. In this study we have produced a large gold standard chemical patent corpus. We developed annotation guidelines and selected 200 full patents from the World Intellectual Property Organization, United States Patent and Trademark Office, and European Patent Office. The patents were pre-annotated automatically and made available to four independent annotator groups each consisting of two to ten annotators. The annotators marked chemicals in different subclasses, diseases, targets, and modes of action. Spelling mistakes and spurious line breaks due to optical character recognition errors were also annotated. A subset of 47 patents was annotated by at least three annotator groups, from which harmonized annotations and inter-annotator agreement scores were derived. One group annotated the full set. The patent corpus includes 400,125 annotations for the full set and 36,537 annotations for the harmonized set. All patents and annotated entities are publicly available at www.biosemantics.org.
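Inter-annotator agreement on span annotations like those above is often summarized as a pairwise F-measure over exact-match spans. The sketch below shows that generic formulation; it is not the study's exact agreement protocol, and the span tuples in the example are invented.

```python
def pairwise_f1(spans_a, spans_b):
    """F1 agreement between two annotators' span sets, counting only
    exact matches. Spans are (start, end, label) tuples."""
    a, b = set(spans_a), set(spans_b)
    if not a or not b:
        return 0.0
    matched = len(a & b)           # spans both annotators marked identically
    precision = matched / len(b)   # treat annotator A as the reference
    recall = matched / len(a)
    return 2 * precision * recall / (precision + recall) if matched else 0.0

# Hypothetical example: annotator B missed one of A's two entity spans.
score = pairwise_f1(
    [(0, 5, 'CHEMICAL'), (10, 15, 'DISEASE')],
    [(0, 5, 'CHEMICAL')],
)
```

Averaging this score over all annotator pairs (and optionally relaxing exact-match to overlap) gives a corpus-level agreement figure comparable across entity subclasses.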

  20. Northern pipelines : backgrounder

    International Nuclear Information System (INIS)

    2002-04-01

    Most analysts agree that demand for natural gas in North America will continue to grow. Favourable market conditions created by rising demand and declining production have sparked renewed interest in northern natural gas development. The 2002 Annual Energy Outlook forecasted U.S. consumption to increase at an annual average rate of 2 per cent from 22.8 trillion cubic feet to 33.8 TCF by 2020, mostly due to rapid growth in demand for electric power generation. Natural gas prices are also expected to increase at an annual average rate of 1.6 per cent, reaching $3.26 per thousand cubic feet in 2020. There are currently 3 proposals for pipelines to move northern gas to US markets. They include a stand-alone Mackenzie Delta Project, the Alaska Highway Pipeline Project, and an offshore route that would combine Alaskan and Canadian gas in a pipeline across the floor of the Beaufort Sea. Current market conditions and demand suggest that the projects are not mutually exclusive, but complementary. The factors that differentiate northern pipeline proposals are reserves, preparedness for market, costs, engineering, and environmental differences. Canada has affirmed its role to provide the regulatory and fiscal certainty needed by industry to make investment decisions. The Government of the Yukon does not believe that the Alaska Highway Project will shut in Mackenzie Delta gas, but will instead pave the way for development of a new northern natural gas industry. The Alaska Highway Pipeline Project will bring significant benefits for the Yukon, the Northwest Territories and the rest of Canada. Unresolved land claims are among the challenges that have to be addressed in both Yukon and the Northwest Territories, as the proposed Alaska Highway Pipeline will travel through the traditional territories of several Yukon First Nations. 1 tab., 4 figs
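The consumption forecast quoted above is consistent with simple compound annual growth. The check below assumes a roughly 20-year horizon (the record does not state the base year), so the horizon is an assumption made only to illustrate the arithmetic.

```python
def project(value, annual_rate, years):
    """Compound annual growth: value * (1 + rate) ** years."""
    return value * (1 + annual_rate) ** years

# 22.8 TCF growing at 2%/yr over an assumed ~20-year horizon lands near
# the 33.8 TCF forecast for 2020 quoted in the record above.
consumption_2020 = project(22.8, 0.02, 20)   # ~33.9 TCF
```

The same relation applied to prices (1.6%/yr toward $3.26 per thousand cubic feet) follows identically, with the rate and endpoint swapped in.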

  1. BioAnnote: a software platform for annotating biomedical documents with application in medical learning environments.

    Science.gov (United States)

    López-Fernández, H; Reboiro-Jato, M; Glez-Peña, D; Aparicio, F; Gachet, D; Buenaga, M; Fdez-Riverola, F

    2013-07-01

    Automatic term annotation from biomedical documents and external information linking are becoming a necessary prerequisite in modern computer-aided medical learning systems. In this context, this paper presents BioAnnote, a flexible and extensible open-source platform for automatically annotating biomedical resources. Apart from other valuable features, the software platform includes (i) a rich client enabling users to annotate multiple documents in a user friendly environment, (ii) an extensible and embeddable annotation meta-server allowing for the annotation of documents with local or remote vocabularies and (iii) a simple client/server protocol which facilitates the use of our meta-server from any other third-party application. In addition, BioAnnote implements a powerful scripting engine able to perform advanced batch annotations. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Chado controller: advanced annotation management with a community annotation system.

    Science.gov (United States)

    Guignon, Valentin; Droc, Gaëtan; Alaux, Michael; Baurens, Franc-Christophe; Garsmeur, Olivier; Poiron, Claire; Carver, Tim; Rouard, Mathieu; Bocs, Stéphanie

    2012-04-01

    We developed a controller that is compliant with the Chado database schema, GBrowse and genome annotation-editing tools such as Artemis and Apollo. It enables the management of public and private data, monitors manual annotation (with controlled vocabularies, structural and functional annotation controls) and stores versions of annotation for all modified features. The Chado Controller uses PostgreSQL and Perl. The Chado Controller package is available for download at http://www.gnpannot.org/content/chado-controller and runs on any Unix-like operating system; documentation is available at http://www.gnpannot.org/content/chado-controller-doc. The system can be tested using the GNPAnnot Sandbox at http://www.gnpannot.org/content/gnpannot-sandbox-form. Contact: valentin.guignon@cirad.fr; stephanie.sidibe-bocs@cirad.fr. Supplementary data are available at Bioinformatics online.

  3. Annotation-Based Whole Genomic Prediction and Selection

    DEFF Research Database (Denmark)

    Kadarmideen, Haja; Do, Duy Ngoc; Janss, Luc

    Genomic selection is widely used in both animal and plant species; however, it is performed with no input from the known genomic or biological role of genetic variants and is therefore a black box approach in a genomic era. This study investigated the role of different genomic regions and detected QTLs...... in their contribution to estimated genomic variances and in prediction of genomic breeding values by applying SNP annotation approaches to feed efficiency. Ensembl Variant Predictor (EVP) and the Pig QTL database were used as the source of genomic annotation for the 60K chip. Genomic prediction was performed using the Bayes...... classes. Predictive accuracy was 0.531, 0.532, 0.302, and 0.344 for DFI, RFI, ADG and BF, respectively. The contribution per SNP to total genomic variance was similar among annotated classes across different traits. Predictive performance of SNP classes did not significantly differ from randomized SNP...

  4. United States petroleum pipelines: An empirical analysis of pipeline sizing

    Science.gov (United States)

    Coburn, L. L.

    1980-12-01

    The undersizing theory hypothesizes that integrated oil companies have a strong economic incentive to size the petroleum pipelines they own and ship over such that some of the demand must utilize higher-cost alternatives. The DOJ theory posits that excess or monopoly profits are earned due to the natural monopoly characteristics of petroleum pipelines and the existence of market power in some pipelines at either the upstream or downstream market. The theory holds that independent petroleum pipelines owned by companies not otherwise affiliated with the petroleum industry (independent pipelines) do not have these incentives, and all the efficiencies of pipeline transportation are passed on to the ultimate consumer. Integrated oil companies, on the other hand, keep these cost efficiencies for themselves in the form of excess profits.

  5. Underground pipeline corrosion

    CERN Document Server

    Orazem, Mark

    2014-01-01

    Underground pipelines transporting liquid petroleum products and natural gas are critical components of civil infrastructure, making corrosion prevention an essential part of asset-protection strategy. Underground Pipeline Corrosion provides a basic understanding of the problems associated with corrosion detection and mitigation, and of the state of the art in corrosion prevention. The topics covered in part one include: basic principles for corrosion in underground pipelines, AC-induced corrosion of underground pipelines, significance of corrosion in onshore oil and gas pipelines, n

  6. Interaction of Buried Pipeline with Soil Under Different Loading Cases

    Directory of Open Access Journals (Sweden)

    Magura Martin

    2016-09-01

    Full Text Available Gas pipelines pass through different topographies. Their stress level is influenced not only by gas pressure, but also by the adjacent soil, the thickness of any covering layers, and soil movements (sinking, landslides. The stress level may be unevenly spread over a pipe due to these causes. When evaluating experimental measurements, errors may occur. The value of the resistance reserve of steel can be adjusted by a detailed analysis of any loading. This reserve can be used in the assessment of a pipeline’s actual state or in reconstructions. A detailed analysis of such loading and its comparison with the simple theory of elasticity is shown in this article.

  7. Search Engine for Antimicrobial Resistance: A Cloud Compatible Pipeline and Web Interface for Rapidly Detecting Antimicrobial Resistance Genes Directly from Sequence Data.

    Science.gov (United States)

    Rowe, Will; Baker, Kate S; Verner-Jeffreys, David; Baker-Austin, Craig; Ryan, Jim J; Maskell, Duncan; Pearce, Gareth

    2015-01-01

    Antimicrobial resistance remains a growing and significant concern in human and veterinary medicine. Current laboratory methods for the detection and surveillance of antimicrobial resistant bacteria are limited in their effectiveness and scope. With the rapidly developing field of whole genome sequencing beginning to be utilised in clinical practice, the ability to interrogate sequencing data quickly and easily for the presence of antimicrobial resistance genes will become increasingly important and useful for informing clinical decisions. Additionally, use of such tools will provide insight into the dynamics of antimicrobial resistance genes in metagenomic samples such as those used in environmental monitoring. Here we present the Search Engine for Antimicrobial Resistance (SEAR), a pipeline and web interface for detection of horizontally acquired antimicrobial resistance genes in raw sequencing data. The pipeline provides gene information, abundance estimation and the reconstructed sequence of antimicrobial resistance genes; it also provides web links to additional information on each gene. The pipeline utilises clustering and read mapping to annotate full-length genes relative to a user-defined database. It also uses local alignment of annotated genes to a range of online databases to provide additional information. We demonstrate SEAR's application in the detection and abundance estimation of antimicrobial resistance genes in two novel environmental metagenomes, 32 human faecal microbiome datasets and 126 clinical isolates of Shigella sonnei. We have developed a pipeline that contributes to the improved capacity for antimicrobial resistance detection afforded by next generation sequencing technologies, allowing for rapid detection of antimicrobial resistance genes directly from sequencing data. SEAR uses raw sequencing data via an intuitive interface, so it can be run rapidly without requiring advanced bioinformatic skills or resources.
Finally, we show that SEAR
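
    The core mapping step SEAR describes (assigning raw reads to reference resistance genes and counting support as a crude abundance estimate) can be sketched with a toy k-mer index. The gene fragments, function names, and thresholds below are illustrative assumptions, not SEAR's actual implementation, which uses clustering and a real read mapper:

```python
from collections import defaultdict

def build_kmer_index(genes, k=8):
    """Index reference resistance genes by their k-mers
    (a toy stand-in for SEAR's clustering/read-mapping step)."""
    index = defaultdict(set)
    for name, seq in genes.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(name)
    return index

def assign_reads(reads, index, k=8, min_hits=3):
    """Count reads supporting each gene -- a crude abundance estimate."""
    counts = defaultdict(int)
    for read in reads:
        hits = defaultdict(int)
        for i in range(len(read) - k + 1):
            for gene in index.get(read[i:i + k], ()):
                hits[gene] += 1
        for gene, n in hits.items():
            if n >= min_hits:               # require several shared k-mers
                counts[gene] += 1
    return dict(counts)

# Hypothetical mini-database of two resistance gene fragments.
genes = {"blaTEM": "ATGAGTATTCAACATTTCCGTGTCGCCCTT",
         "tetA":   "ATGAAGCCCAACAGAATCAGGGAAGCGGAG"}
index = build_kmer_index(genes)
reads = ["AGTATTCAACATTTCCGT",   # from blaTEM
         "CCCAACAGAATCAGGGAA",   # from tetA
         "TTTTTTTTTTTTTTTTTT"]   # background, maps nowhere
print(assign_reads(reads, index))   # {'blaTEM': 1, 'tetA': 1}
```

    A production tool would additionally reconstruct full-length gene sequences and normalize counts by gene length and sequencing depth.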

  8. Emptying of large-scale pipeline by pressurized air

    NARCIS (Netherlands)

    Laanearu, J.; Annus, I.; Koppel, T.; Bergant, A.; Vuckovic, S.; Hou, Q.; Tijsseling, A.S.; Anderson, A.; Gale, J.; Westende, van 't J.M.C.

    2012-01-01

    Emptying of an initially water-filled horizontal PVC pipeline driven by different upstream compressed air pressures and with different outflow restriction conditions, with motion of an air-water front through the pressurized pipeline, is investigated experimentally. Simple numerical modeling is used

  9. Removable pipeline plug

    International Nuclear Information System (INIS)

    Vassalotti, M.; Anastasi, F.

    1984-01-01

    A removable plugging device for a pipeline, and particularly for pressure testing a steam pipeline in a boiling water reactor, wherein an inflatable annular sealing member seals off the pipeline and characterized by radially movable shoes for holding the plug in place, each shoe being pivotally mounted for self-adjusting engagement with even an out-of-round pipeline interior

  10. Pipeline integrity management

    Energy Technology Data Exchange (ETDEWEB)

    Guyt, J.; Macara, C.

    1997-12-31

    This paper focuses on some of the issues necessary for pipeline operators to consider when addressing the challenge of managing the integrity of their systems. Topics are: Definition; business justification; creation and safeguarding of technical integrity; control and deviation from technical integrity; pipelines; pipeline failure assessment; pipeline integrity assessment; leak detection; emergency response. 6 figs., 3 tabs.

  11. 75 FR 13342 - Pipeline Safety: Workshop on Distribution Pipeline Construction

    Science.gov (United States)

    2010-03-19

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... natural gas distribution construction. Natural gas distribution pipelines are subject to a unique subset... distribution pipeline construction practices. This workshop will focus solely on natural gas distribution...

  12. The BioC-BioGRID corpus: full text articles annotated for curation of protein–protein and genetic interactions

    Science.gov (United States)

    Kim, Sun; Chatr-aryamontri, Andrew; Chang, Christie S.; Oughtred, Rose; Rust, Jennifer; Wilbur, W. John; Comeau, Donald C.; Dolinski, Kara; Tyers, Mike

    2017-01-01

    A great deal of information on the molecular genetics and biochemistry of model organisms has been reported in the scientific literature. However, this data is typically described in free text form and is not readily amenable to computational analyses. To this end, the BioGRID database systematically curates the biomedical literature for genetic and protein interaction data. This data is provided in a standardized computationally tractable format and includes structured annotation of experimental evidence. BioGRID curation necessarily involves substantial human effort by expert curators who must read each publication to extract the relevant information. Computational text-mining methods offer the potential to augment and accelerate manual curation. To facilitate the development of practical text-mining strategies, a new challenge was organized in BioCreative V for the BioC task, the collaborative Biocurator Assistant Task. This was a non-competitive, cooperative task in which the participants worked together to build BioC-compatible modules into an integrated pipeline to assist BioGRID curators. As an integral part of this task, a test collection of full text articles was developed that contained both biological entity annotations (gene/protein and organism/species) and molecular interaction annotations (protein–protein and genetic interactions (PPIs and GIs)). This collection, which we call the BioC-BioGRID corpus, was annotated by four BioGRID curators over three rounds of annotation and contains 120 full text articles curated in a dataset representing two major model organisms, namely budding yeast and human. The BioC-BioGRID corpus contains annotations for 6409 mentions of genes and their Entrez Gene IDs, 186 mentions of organism names and their NCBI Taxonomy IDs, 1867 mentions of PPIs and 701 annotations of PPI experimental evidence statements, 856 mentions of GIs and 399 annotations of GI evidence statements. 
The purpose, characteristics and possible future

  13. Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods

    Energy Technology Data Exchange (ETDEWEB)

    Volentic, J [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)

    1998-12-31

    Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The length of the long-line pipelines reached 5 191 km; the distribution network was 12 296 km. The international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. The described scale of the transport and distribution system represents multibillion investments stored in the ground, which are exposed to environmental influences and to pipeline operational stresses. In spite of all the technical and maintenance arrangements performed on operating gas pipelines, gradual ageing takes place anyway, expressed as degradation processes both in the steel tube and in the anti-corrosion coating. Within a certain time horizon, a consistent and regular application of the methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds, postponing the need for funds for a complete or partial reconstruction or a new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of the regular maintenance of SPP s.p. Bratislava high pressure gas pipelines. (orig.) 6 refs.

  15. Diverse Image Annotation

    KAUST Repository

    Wu, Baoyuan

    2017-11-09

    In this work we study the task of image annotation, whose goal is to describe an image using a few tags. Instead of predicting the full list of tags, we aim to provide a short list of tags under a limited number (e.g., 3) that covers as much information of the image as possible. The tags in such a short list should be representative and diverse: they are required not only to correspond to the contents of the image, but also to be different from each other. To this end, we treat image annotation as a subset selection problem based on the conditional determinantal point process (DPP) model, which formulates representation and diversity jointly. We further explore the semantic hierarchy and synonyms among the candidate tags, and require that two tags in a semantic hierarchy or in a pair of synonyms should not be selected simultaneously. This requirement is then embedded into the sampling algorithm according to the learned conditional DPP model. Besides, we find that traditional metrics for image annotation (e.g., precision, recall and F1 score) only consider representation but ignore diversity. Thus we propose new metrics to evaluate the quality of the selected subset (i.e., the tag list), based on the semantic hierarchy and synonyms. A human study through Amazon Mechanical Turk verifies that the proposed metrics are closer to human judgment than traditional metrics. Experiments on two benchmark datasets show that the proposed method can produce more representative and diverse tags compared with existing image annotation methods.
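
    The representativeness-versus-diversity trade-off described in this record can be illustrated with a simple greedy selector. This is a hypothetical stand-in for the paper's conditional DPP sampler: the relevance scores, the synonym similarity table, and the linear penalty are all made-up assumptions:

```python
def select_diverse_tags(scores, similarity, k=3, penalty=1.0):
    """Greedily pick k tags with high relevance scores, penalizing
    similarity to tags already chosen (a stand-in for DPP sampling)."""
    chosen = []
    candidates = set(scores)
    while candidates and len(chosen) < k:
        best = max(candidates, key=lambda t: scores[t] - penalty * max(
            (similarity.get(frozenset((t, c)), 0.0) for c in chosen),
            default=0.0))
        chosen.append(best)
        candidates.remove(best)
    return chosen

# Hypothetical relevance scores and one synonym pair that must not co-occur.
scores = {"dog": 0.9, "puppy": 0.85, "grass": 0.6, "frisbee": 0.5}
sim = {frozenset(("dog", "puppy")): 0.95}
print(select_diverse_tags(scores, sim, k=3))   # ['dog', 'grass', 'frisbee']
```

    Note how "puppy", despite its high relevance, is displaced by less similar tags, which is exactly the behaviour the new diversity-aware metrics are meant to reward.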

  16. Diverse Image Annotation

    KAUST Repository

    Wu, Baoyuan; Jia, Fan; Liu, Wei; Ghanem, Bernard

    2017-01-01

    In this work we study the task of image annotation, whose goal is to describe an image using a few tags. Instead of predicting the full list of tags, we aim to provide a short list of tags under a limited number (e.g., 3) that covers as much information of the image as possible. The tags in such a short list should be representative and diverse: they are required not only to correspond to the contents of the image, but also to be different from each other. To this end, we treat image annotation as a subset selection problem based on the conditional determinantal point process (DPP) model, which formulates representation and diversity jointly. We further explore the semantic hierarchy and synonyms among the candidate tags, and require that two tags in a semantic hierarchy or in a pair of synonyms should not be selected simultaneously. This requirement is then embedded into the sampling algorithm according to the learned conditional DPP model. Besides, we find that traditional metrics for image annotation (e.g., precision, recall and F1 score) only consider representation but ignore diversity. Thus we propose new metrics to evaluate the quality of the selected subset (i.e., the tag list), based on the semantic hierarchy and synonyms. A human study through Amazon Mechanical Turk verifies that the proposed metrics are closer to human judgment than traditional metrics. Experiments on two benchmark datasets show that the proposed method can produce more representative and diverse tags compared with existing image annotation methods.

  17. Pollution from pipelines

    International Nuclear Information System (INIS)

    1991-01-01

    During the 1980s, over 3,900 spills from land-based pipelines released nearly 20 million gallons of oil into U.S. waters, almost twice as much as was released by the March 1989 Exxon Valdez oil spill. Although the Department of Transportation is responsible for preventing water pollution from petroleum pipelines, GAO found that it has not established a program to prevent such pollution. DOT has instead delegated this responsibility to the Coast Guard, which has a program to stop water pollution from ships, but not from pipelines. This paper reports that, in the absence of any federal program to prevent water pollution from pipelines, both the Coast Guard and the Environmental Protection Agency have taken steps to plan for and respond to oil spills, including those from pipelines, as required by the Clean Water Act. The Coast Guard cannot, however, adequately plan for or ensure a timely response to pipeline spills because it generally is unaware of the specific locations and operators of pipelines.

  18. 75 FR 63774 - Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines

    Science.gov (United States)

    2010-10-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), Department of... Gas Pipeline Safety Act of 1968, Public Law 90-481, delegated to DOT the authority to develop...

  19. 77 FR 61825 - Pipeline Safety: Notice of Public Meeting on Pipeline Data

    Science.gov (United States)

    2012-10-11

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... program performance measures for gas distribution, gas transmission, and hazardous liquids pipelines. The... distribution pipelines (49 CFR 192.1007(e)), gas transmission pipelines (49 CFR 192.945) and hazardous liquids...

  20. Optimal Energy Consumption Analysis of Natural Gas Pipeline

    Science.gov (United States)

    Liu, Enbin; Li, Changjun; Yang, Yi

    2014-01-01

    There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters. Moreover, different transport methods have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption, a practice inconsistent with energy-reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production unit consumption indicators to establish an objective function for achieving the goal of lowering energy consumption. By using a dynamic programming method for solving the model and preparing calculation software, we ensure that the solution process is quick and efficient. Using the established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. By comparison with the measured energy consumption, the pipeline has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410
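
    The dynamic-programming idea this abstract mentions can be sketched on a toy pipeline: pressures restricted to a discrete grid, a fixed pressure drop per pipe segment, and compression energy proportional to the pressure boost at each station. All of these are simplifying assumptions; the paper's real model also optimizes boot programs and temperature parameters:

```python
def min_energy(pressures, start, n_segments, drop, cost_per_boost=1.0):
    """Minimal energy to push gas through n_segments pipe segments,
    choosing a discharge pressure at each compressor station.
    best[p] holds the cheapest way to reach a station inlet at pressure p."""
    INF = float("inf")
    best = {p: INF for p in pressures}
    best[start] = 0.0
    for _ in range(n_segments):
        new = {p: INF for p in pressures}
        for p_in, c in best.items():
            if c == INF:
                continue
            for p_out in pressures:               # discharge choice here
                if p_out >= p_in and (p_out - drop) in new:
                    cand = c + cost_per_boost * (p_out - p_in)
                    if cand < new[p_out - drop]:  # pressure after segment drop
                        new[p_out - drop] = cand
        best = new
    return min(c for c in best.values() if c < INF)

# 3 segments; pressures constrained to a 30-60 bar grid; 20 bar lost per segment.
print(min_energy(pressures=range(30, 61, 10), start=50, n_segments=3, drop=20))
# prints 40.0 -- boost only what the total drop requires
```

    The stage-by-stage table of cheapest arrival states is exactly the structure a dispatcher-replacing optimizer would tabulate, just with a far finer grid and a realistic compressor energy model.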

  1. 76 FR 44985 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding

    Science.gov (United States)

    2011-07-27

    .... PHMSA-2011-0177] Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding AGENCY... liquid pipelines to communicate the potential for damage to pipeline facilities caused by severe flooding... pipelines in case of flooding. ADDRESSES: This document can be viewed on the Office of Pipeline Safety home...

  2. Diagnostics and reliability of pipeline systems

    CERN Document Server

    Timashev, Sviatoslav

    2016-01-01

    The book contains solutions to fundamental problems which arise due to the logic of development of specific branches of science related to pipeline safety, but which are mainly subordinate to the needs of pipeline transportation. The book addresses important but not yet solved aspects of reliability and safety assurance of pipeline systems, which are vital not only for the oil and gas industry and, in general, the fuel and energy industries, but also for virtually all contemporary industries and technologies. The volume will be useful to specialists and experts in the field of diagnostics/inspection, monitoring, reliability and safety of critical infrastructures. First and foremost, it will be useful to decision makers: operators of different types of pipelines, pipeline diagnostics/inspection vendors, designers of in-line-inspection (ILI) tools, and industrial and ecological safety specialists, as well as to researchers and graduate students.

  3. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Directory of Open Access Journals (Sweden)

    Cieślik Marcin

    2011-02-01

    Full Text Available Abstract Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and
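
    The dataflow idea at the heart of this record (components connected by data-pipes and evaluated lazily) can be mimicked in a few lines of plain Python generators. This toy chain is not PaPy's API; it merely illustrates the paradigm, and PaPy additionally supports pooled local or remote workers and non-linear graphs:

```python
def source(items):
    """Entry component of the dataflow: emits input items lazily."""
    yield from items

def pipe(stage, upstream):
    """Connect a data-transformation stage to an upstream iterator."""
    for item in upstream:
        yield stage(item)

def run_pipeline(items, *stages):
    """Compose stages into a linear dataflow and pull results through it."""
    flow = source(items)
    for stage in stages:
        flow = pipe(stage, flow)
    return list(flow)

# A tiny two-stage 'workflow': uppercase, then reverse each sequence.
print(run_pipeline(["acgt", "ggcc"], str.upper, lambda s: s[::-1]))
# prints ['TGCA', 'CCGG']
```

    Because each stage is a generator, items stream through one at a time; nothing is materialized until the final `list` pulls on the chain, which is the lazy-evaluation behaviour the abstract refers to.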

  4. Pipeline technology. Petroleum oil - long-distance pipelines. Pipelinetechnik. Mineraloelfernleitungen

    Energy Technology Data Exchange (ETDEWEB)

    Krass, W; Kittel, A; Uhde, A

    1979-01-01

    All aspects of pipeline technology are dealt with in detail. Some chapters apply only or partly to petroleum pipelines, for example those on the importance of petroleum pipelines, planning, calculation, and operation. The sections on pipes and formed parts, laying, rights of way, corrosion protection, accessories and remote-control technology, however, are of general interest, for example also for gas pipelines. The chapter on materials gives a very good summary of today's pipe materials, including the thermomechanically treated steels. Besides methods of improving toughness, the problems of stress corrosion cracking and ways of avoiding it are pointed out. The pipe production methods and, at the end of the chapter, the factory tests are explained. The section on laying deals with the laying methods applied for years in pipeline construction, a large part referring to welding methods and tests. Active and passive corrosion protection are explained in full detail. In addition to the strength calculation, presented with special regard to petroleum pipelines, theoretical fundamentals and calculation methods for pressure are dealt with. Beside general questions of pumps, accessories, and drives, there is a section dealing with measurement and control techniques. Furthermore, telecontrol, transmission and communication systems are explained in detail. Here, problems are addressed which are applicable not only to the operation of mineral oil pipelines. The book is completed by indications as to pipeline operation, emphasizing general operation control, maintenance, repair methods, and damage and its elimination. The last chapter contains a collection of the legal foundations and the technical rules.

  5. Gathering pipeline methane emissions in Fayetteville shale pipelines and scoping guidelines for future pipeline measurement campaigns

    Directory of Open Access Journals (Sweden)

    Daniel J. Zimmerle

    2017-11-01

    Full Text Available Gathering pipelines, which transport gas from well pads to downstream processing, are a sector of the natural gas supply chain for which little measured methane emissions data are available. This study performed leak detection and measurement on 96 km of gathering pipeline and the associated 56 pigging facilities and 39 block valves. The study found one underground leak accounting for 83% (4.0 kg CH4/hr of total measured emissions. Methane emissions for the 4684 km of gathering pipeline in the study area were estimated at 402 kg CH4/hr [95 to 1065 kg CH4/hr, 95% CI], or 1% [0.2% to 2.6%] of all methane emissions measured during a prior aircraft study of the same area. Emissions estimated by this study fall within the uncertainty range of emissions estimated using emission factors from EPA's 2015 Greenhouse Gas Inventory and study activity estimates. While EPA's current inventory is based upon emission factors from distribution mains measured in the 1990s, this study indicates that using emission factors from more recent distribution studies could significantly underestimate emissions from gathering pipelines. To guide broader studies of pipeline emissions, we also estimate the fraction of the pipeline length within a basin that must be measured to constrain uncertainty of pipeline emissions estimates to within 1% of total basin emissions. The study provides both substantial insight into the mix of emission sources and guidance for future gathering pipeline studies, but since measurements were made in a single basin, the results are not sufficiently representative to provide methane emission factors at the regional or national level.

  6. Improved cost models for optimizing CO2 pipeline configuration for point-to-point pipelines and simple networks

    NARCIS (Netherlands)

    Knoope, M. M. J.|info:eu-repo/dai/nl/364248149; Guijt, W.; Ramirez, A.|info:eu-repo/dai/nl/284852414; Faaij, A. P. C.

    In this study, a new cost model is developed for CO2 pipeline transport, which starts with the physical properties of CO2 transport and includes different kinds of steel grades and up-to-date material and construction costs. This pipeline cost model is used for a new developed tool to determine the

  7. A Set of Annotation Interfaces for Alignment of Parallel Corpora

    Directory of Open Access Journals (Sweden)

    Singh Anil Kumar

    2014-09-01

    Full Text Available Annotation interfaces for parallel corpora which fit in well with other tools can be very useful. We describe a set of annotation interfaces which fulfill this criterion. This set includes a sentence alignment interface, two different word or word group alignment interfaces and an initial version of a parallel syntactic annotation alignment interface. These tools can be used for manual alignment, or they can be used to correct automatic alignments. Manual alignment can be performed in combination with certain kinds of linguistic annotation. Most of these interfaces use a representation called the Shakti Standard Format that has been found to be very robust and has been used for large and successful projects. It ties together the different interfaces, so that the data created by them is portable across all tools which support this representation. The existence of a query language for data stored in this representation makes it possible to build tools that allow easy search and modification of annotated parallel data.
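
    A minimal version of the sentence-alignment task these interfaces support (before a human corrects the result) can be sketched as a length-based dynamic program in the spirit of classic length-based aligners such as Gale-Church. The constant skip penalty and the absolute-length cost are illustrative assumptions, not what any of the described tools actually compute:

```python
def align_sentences(src, tgt, skip=10):
    """Toy length-based sentence aligner: dynamic program over
    1-1 (align), 1-0 and 0-1 (skip) beads, cost = length difference."""
    INF = float("inf")
    n, m = len(src), len(tgt)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:                    # align src[i] with tgt[j]
                c = cost[i][j] + abs(len(src[i]) - len(tgt[j]))
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j)
            if i < n and cost[i][j] + skip < cost[i + 1][j]:
                cost[i + 1][j], back[i + 1][j] = cost[i][j] + skip, (i, j)
            if j < m and cost[i][j] + skip < cost[i][j + 1]:
                cost[i][j + 1], back[i][j + 1] = cost[i][j] + skip, (i, j)
    pairs, i, j = [], n, m                         # backtrace aligned pairs
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        if i - pi == 1 and j - pj == 1:
            pairs.append((pi, pj))
        i, j = pi, pj
    return pairs[::-1]

src = ["A short one.", "A much longer source sentence here."]
tgt = ["Une courte.", "Une phrase source beaucoup plus longue ici."]
print(align_sentences(src, tgt))   # [(0, 0), (1, 1)]
```

    An interface like the ones described would display such automatic index pairs and let the annotator add, delete, or re-link beads, saving the result in a portable representation such as the Shakti Standard Format.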

  8. 77 FR 34123 - Pipeline Safety: Public Meeting on Integrity Management of Gas Distribution Pipelines

    Science.gov (United States)

    2012-06-08

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0100] Pipeline Safety: Public Meeting on Integrity Management of Gas Distribution Pipelines AGENCY: Office of Pipeline Safety, Pipeline and Hazardous Materials Safety Administration, DOT. ACTION...

  9. StructRNAfinder: an automated pipeline and web server for RNA families prediction.

    Science.gov (United States)

    Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius

    2018-02-17

    The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers must utilize multiple tools, which requires the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even for researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family according to the Rfam database, but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provide a stand-alone version of StructRNAfinder for use in large-scale projects. The tool was developed under the GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me. The main advantage of StructRNAfinder lies in its large-scale processing and in integrating the data obtained from each tool and database employed along the workflow; the several files generated are displayed in user-friendly reports, useful for downstream analyses and data exploration.

  10. The Effects of Multimedia Annotations on Iranian EFL Learners’ L2 Vocabulary Learning

    Directory of Open Access Journals (Sweden)

    Saeideh Ahangari

    2010-05-01

    Full Text Available In our modern technological world, Computer-Assisted Language Learning (CALL) is a new realm for learning a language in general, and L2 vocabulary in particular. It is assumed that the use of multimedia annotations promotes language learners’ vocabulary acquisition. Therefore, this study set out to investigate the effects of different multimedia annotations (still picture annotations, dynamic picture annotations, and written annotations) on L2 vocabulary learning. To fulfill this objective, the researchers selected sixty-four EFL learners as the participants of this study. The participants were randomly assigned to one of four groups: a control group that received no annotations and three experimental groups that received still picture annotations, dynamic picture annotations, and written annotations, respectively. Each participant was required to take a pre-test, and a vocabulary post-test was designed and administered in order to assess the efficacy of each annotation type. First, a paired t-test was conducted for each group between its pre- and post-test scores in order to measure improvement; then the performance of the four groups was compared through an ANCOVA test. The results showed that using multimedia annotations resulted in a significant difference in the participants’ vocabulary learning. Based on the results of the present study, multimedia annotations are suggested as a vocabulary teaching strategy.
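
The within-group improvement step described in this abstract can be sketched with a paired t-test computed from first principles; the pre/post scores below are hypothetical, purely to illustrate the calculation, not data from the study.

```python
import math
from statistics import mean, stdev

# Hypothetical pre/post vocabulary scores for one annotation group;
# the numbers are illustrative, not data from the study.
pre  = [9, 11, 10, 12, 8, 10, 11, 9, 10, 12]
post = [13, 14, 12, 16, 11, 13, 15, 12, 13, 16]

# Paired t-test: t = mean(d) / (sd(d) / sqrt(n)), with d = post - pre.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"t({n - 1}) = {t:.2f}")  # compare against the critical t value for df = n - 1
```

A significant paired t for each experimental group, followed by an ANCOVA across groups with the pre-test as covariate, mirrors the two-step analysis the abstract reports.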

  11. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, because different image features with different magnitudes result in different performance for automatic image annotation, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.
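
The Gaussian normalization step mentioned above (z-scoring each feature dimension so that features of very different magnitudes contribute comparably) can be sketched as follows; the toy feature matrix is hypothetical, not from the paper.

```python
import numpy as np

# Toy region-by-feature matrix: columns have very different magnitudes
# (e.g. a color mean vs. a texture energy); values are illustrative.
X = np.array([[120.0, 0.02],
              [ 80.0, 0.05],
              [100.0, 0.08]])

# Gaussian (z-score) normalization per feature dimension: after this,
# each column has zero mean and unit variance, so no single feature
# dominates a downstream model by sheer scale.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
```

After normalization, heterogeneous region features can be concatenated and fed to the PLSA model on an equal footing.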

  12. MixtureTree annotator: a program for automatic colorization and visual annotation of MixtureTree.

    Directory of Open Access Journals (Sweden)

    Shu-Chuan Chen

    Full Text Available The MixtureTree Annotator, written in Java, allows the user to automatically color any phylogenetic tree in Newick format generated by any phylogeny reconstruction program and to output the Nexus file. By providing the ability to automatically color the tree by sequence name, the MixtureTree Annotator offers a unique advantage over other programs that perform a similar function. In addition, it is the only package that can efficiently annotate the output produced by MixtureTree with mutation information and coalescent time information. A modified version of FigTree is used to visualize the resulting output file. Certain popular tools that lack good built-in visualization facilities, for example MEGA, Mesquite, PHY-FI, TreeView, treeGraph and Geneious, may give results with human errors, because colors must either be added manually to each node or can only be assigned based on a number, such as branch length, or on taxonomy. In addition to allowing the user to automatically color any given Newick tree by sequence name, the MixtureTree Annotator is the only method that automatically annotates the tree created by the MixtureTree program. The MixtureTree Annotator is fast and easy to use, while still allowing the user full control over the coloring and annotating process.

  13. 75 FR 5244 - Pipeline Safety: Integrity Management Program for Gas Distribution Pipelines; Correction

    Science.gov (United States)

    2010-02-02

    ... Management Program for Gas Distribution Pipelines; Correction AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Regulations to require operators of gas distribution pipelines to develop and implement integrity management...

  14. Strength analysis of copper gas pipeline span

    OpenAIRE

    Ianevski, Philipp

    2016-01-01

    The purpose of the study was to analyze the stresses in a gas pipeline; methods for analyzing piping systems located inside buildings were used. The strength of the gas pipeline is calculated using information on the thickness of the pipe walls and by choosing a suitable material and the inner and outer diameters of the pipeline. Data for this thesis were collected from various internet sources and different books. From the study and research, the final results were reached and calculations were ...

  15. Investigation on potential SCC in gas transmission pipeline in China

    Energy Technology Data Exchange (ETDEWEB)

    Jian, S. [Petroleum Univ., Beijing (China); Zupei, Y.; Yunxin, M. [China Petroleum Pipeline Corp., Beijing (China). Science and Technology Center

    2004-07-01

    Stress corrosion cracking (SCC) is a common phenomenon that occurs on the outer surfaces of buried pipelines. This paper investigated aspects of SCC on 3 transmission pipelines on the West-East Gas Pipeline Project in China. The study comprised 3 different investigations: (1) an investigation of SCC cases on constructed pipelines; (2) an evaluation of the SCC sensitivity of pipeline steels in typical soil environments; and (3) an analysis of the soil environments and operating conditions of the western pipelines. The study included a review of pipeline corrosion investigations, as well as an examination of pipeline failure cases. Investigative digs were conducted at 21 sites to test soil chemistries. Slow strain rate tests were conducted to evaluate the SCC sensitivity of steel pipelines used in China. Potentiodynamic polarization tests were conducted to characterize the electrochemical behaviour of the X70 line pipe steel in different soil environments. Results of the study showed that the environmental conditions in many locations in China contributed to SCC in pipelines. SCC was observed on the surface of X70 steel pipe specimens in both marsh and saline environments. Seasonal temperature changes also contributed additional stress on pipelines, and the movement of soil bodies in mountainous areas contributed to stress and coating damage. It was concluded that proper cathodic protection can alleviate concentrations of local solutions under disbonded coatings, whereas overprotection will accelerate the growth of SCC cracks and the degradation of coatings. Samples gathered from the solutions found under the disbonded coatings of pipelines will be used to form part of a reference database for predicting SCC in oil and gas pipelines in the future. 2 refs., 4 tabs., 5 figs.

  16. Overview of interstate hydrogen pipeline systems

    International Nuclear Information System (INIS)

    Gillette, J.L.; Kolpa, R.L.

    2008-01-01

    The following discussion will focus on the similarities and differences between the two pipeline networks. Hydrogen production is currently concentrated in refining centers along the Gulf Coast and in the Farm Belt. These locations have ready access to natural gas, which is used in the steam methane reforming process to make bulk hydrogen in this country. Production centers could possibly change to lie along coastlines, rivers, lakes, or rail lines, should nuclear power or coal become a significant energy source for hydrogen production processes. Should electrolysis become a dominant process for hydrogen production, water availability would be an additional factor in the location of production facilities. Once produced, hydrogen must be transported to markets. A key obstacle to making hydrogen fuel widely available is the scale of expansion needed to serve additional markets. Developing a hydrogen transmission and distribution infrastructure would be one of the challenges to be faced if the United States is to move toward a hydrogen economy. Initial uses of hydrogen are likely to involve a variety of transmission and distribution methods. Smaller users would probably use truck transport, with the hydrogen in either liquid or gaseous form. Larger users, however, would likely consider using pipelines. This option would require specially constructed pipelines and the associated infrastructure. Pipeline transmission of hydrogen dates back to the late 1930s. These pipelines have generally operated at less than 1,000 pounds per square inch (psi), with a good safety record. Estimates of the existing hydrogen transmission system in the United States range from about 450 to 800 miles. Estimates for Europe range from about 700 to 1,100 miles (Mohipour et al. 2004; Amos 1998). These seemingly large ranges result from using differing criteria in determining pipeline distances. For example, some analysts consider only pipelines above a certain diameter as transmission lines.

  17. Overview of interstate hydrogen pipeline systems.

    Energy Technology Data Exchange (ETDEWEB)

    Gillette, J. L.; Kolpa, R. L.

    2008-02-01

    The following discussion will focus on the similarities and differences between the two pipeline networks. Hydrogen production is currently concentrated in refining centers along the Gulf Coast and in the Farm Belt. These locations have ready access to natural gas, which is used in the steam methane reforming process to make bulk hydrogen in this country. Production centers could possibly change to lie along coastlines, rivers, lakes, or rail lines, should nuclear power or coal become a significant energy source for hydrogen production processes. Should electrolysis become a dominant process for hydrogen production, water availability would be an additional factor in the location of production facilities. Once produced, hydrogen must be transported to markets. A key obstacle to making hydrogen fuel widely available is the scale of expansion needed to serve additional markets. Developing a hydrogen transmission and distribution infrastructure would be one of the challenges to be faced if the United States is to move toward a hydrogen economy. Initial uses of hydrogen are likely to involve a variety of transmission and distribution methods. Smaller users would probably use truck transport, with the hydrogen in either liquid or gaseous form. Larger users, however, would likely consider using pipelines. This option would require specially constructed pipelines and the associated infrastructure. Pipeline transmission of hydrogen dates back to the late 1930s. These pipelines have generally operated at less than 1,000 pounds per square inch (psi), with a good safety record. Estimates of the existing hydrogen transmission system in the United States range from about 450 to 800 miles. Estimates for Europe range from about 700 to 1,100 miles (Mohipour et al. 2004; Amos 1998). These seemingly large ranges result from using differing criteria in determining pipeline distances. For example, some analysts consider only pipelines above a certain diameter as transmission lines.

  18. CloVR-Comparative: automated, cloud-enabled comparative microbial genome sequence analysis pipeline.

    Science.gov (United States)

    Agrawal, Sonia; Arze, Cesar; Adkins, Ricky S; Crabtree, Jonathan; Riley, David; Vangala, Mahesh; Galens, Kevin; Fraser, Claire M; Tettelin, Hervé; White, Owen; Angiuoli, Samuel V; Mahurkar, Anup; Fricke, W Florian

    2017-04-27

    The benefit of increasing genomic sequence data to the scientific community depends on easy-to-use, scalable bioinformatics support. CloVR-Comparative combines commonly used bioinformatics tools into an intuitive, automated, and cloud-enabled analysis pipeline for comparative microbial genomics. CloVR-Comparative runs on annotated complete or draft genome sequences that are uploaded by the user or selected via a taxonomic tree-based user interface and downloaded from NCBI. CloVR-Comparative runs reference-free multiple whole-genome alignments to determine unique, shared and core coding sequences (CDSs) and single nucleotide polymorphisms (SNPs). Output includes short summary reports and detailed text-based results files, graphical visualizations (phylogenetic trees, circular figures), and a database file linked to the Sybil comparative genome browser. Data upload and download, pipeline configuration and monitoring, and access to Sybil are managed through the CloVR-Comparative web interface. CloVR-Comparative and Sybil are distributed as part of the CloVR virtual appliance, which runs on local computers or the Amazon EC2 cloud. This allows representative datasets (e.g. 40 draft and complete Escherichia coli genomes) to be processed in genomics projects while eliminating the need for on-site computational resources and expertise.

  19. Impedance Method for Leak Detection in Zigzag Pipelines

    Science.gov (United States)

    Lay-Ekuakille, A.; Vergallo, P.; Trotta, A.

    2010-01-01

    Transportation of liquids is a primary aspect of human life. The most important infrastructure used for this purpose is the pipeline, which serves as an asset for transporting different liquids and strategic goods, for example chemical substances, oil, gas and water. It is therefore necessary to monitor such infrastructures by means of specific tools. Leakage detection methods are used to reveal liquid leaks in pipelines in many applications, namely waterworks, oil pipelines, industry heat exchangers, etc. The configuration of the pipeline is a key issue because it affects the effectiveness of the method to be used and, consequently, the results to be counterchecked. This research illustrates an improvement of the impedance method for zigzag pipelines, achieved by carrying out an experimental frequency analysis that has been compared with other methods based on frequency response. The impedance method is generally used for simple (straight) pipeline configurations, because complicated pipelines with many curves introduce difficulties and major uncertainties in the calculation of the characteristic impedance and in the statement of the boundary conditions. The paper illustrates the case of a water pipeline in which the leakage is acquired by means of pressure transducers.

  20. Pipeline modeling and assessment in unstable slopes

    Energy Technology Data Exchange (ETDEWEB)

    Caceres, Carlos Nieves [Oleoducto Central S.A., Bogota, Cundinamarca (Colombia); Ordonez, Mauricio Pereira [SOLSIN S.A.S, Bogota, Cundinamarca (Colombia)

    2010-07-01

    The OCENSA pipeline system is vulnerable to geotechnical problems such as faults, landslides and creeping slopes, which are well known in the Andes Mountains and in tropical countries like Colombia. This paper proposes a methodology to evaluate pipe behaviour during the soil displacements of slow landslides. Three different cases of analysis are examined, according to site characteristics. The process starts with a simplified analytical model and develops into 3D finite element numerical simulations applied to the on-site geometry of soil and pipe. Case 1 should be used when the unstable site is subject to landslides impacting significant lengths of pipeline, the pipeline is straight, and the landslide is simple from the geotechnical perspective. Case 2 should be used when the pipeline is straight and the landslide is complex (creeping slopes and non-conventional stabilization solutions). Case 3 should be used if the pipeline presents vertical or horizontal bends.

  1. Improving Microbial Genome Annotations in an Integrated Database Context

    Science.gov (United States)

    Chen, I-Min A.; Markowitz, Victor M.; Chu, Ken; Anderson, Iain; Mavromatis, Konstantinos; Kyrpides, Nikos C.; Ivanova, Natalia N.

    2013-01-01

    Effective comparative analysis of microbial genomes requires a consistent and complete view of biological data. Consistency regards the biological coherence of annotations, while completeness regards the extent and coverage of functional characterization for genomes. We have developed tools that allow scientists to assess and improve the consistency and completeness of microbial genome annotations in the context of the Integrated Microbial Genomes (IMG) family of systems. All publicly available microbial genomes are characterized in IMG using different functional annotation and pathway resources, thus providing a comprehensive framework for identifying and resolving annotation discrepancies. A rule-based system for predicting phenotypes in IMG provides a powerful mechanism for validating functional annotations, whereby the phenotypic traits of an organism are inferred based on the presence of certain metabolic reactions and pathways and compared to experimentally observed phenotypes. The IMG family of systems is available at http://img.jgi.doe.gov/. PMID:23424620

  2. Improving microbial genome annotations in an integrated database context.

    Directory of Open Access Journals (Sweden)

    I-Min A Chen

    Full Text Available Effective comparative analysis of microbial genomes requires a consistent and complete view of biological data. Consistency regards the biological coherence of annotations, while completeness regards the extent and coverage of functional characterization for genomes. We have developed tools that allow scientists to assess and improve the consistency and completeness of microbial genome annotations in the context of the Integrated Microbial Genomes (IMG) family of systems. All publicly available microbial genomes are characterized in IMG using different functional annotation and pathway resources, thus providing a comprehensive framework for identifying and resolving annotation discrepancies. A rule-based system for predicting phenotypes in IMG provides a powerful mechanism for validating functional annotations, whereby the phenotypic traits of an organism are inferred based on the presence of certain metabolic reactions and pathways and compared to experimentally observed phenotypes. The IMG family of systems is available at http://img.jgi.doe.gov/.

  3. Community annotation and bioinformatics workforce development in concert--Little Skate Genome Annotation Workshops and Jamborees.

    Science.gov (United States)

    Wang, Qinghua; Arighi, Cecilia N; King, Benjamin L; Polson, Shawn W; Vincent, James; Chen, Chuming; Huang, Hongzhan; Kingham, Brewster F; Page, Shallee T; Rendino, Marc Farnum; Thomas, William Kelley; Udwary, Daniel W; Wu, Cathy H

    2012-01-01

    Recent advances in high-throughput DNA sequencing technologies have equipped biologists with a powerful new set of tools for advancing research goals. The resulting flood of sequence data has made it critically important to train the next generation of scientists to handle the inherent bioinformatic challenges. The North East Bioinformatics Collaborative (NEBC) is undertaking the genome sequencing and annotation of the little skate (Leucoraja erinacea) to promote advancement of bioinformatics infrastructure in our region, with an emphasis on practical education to create a critical mass of informatically savvy life scientists. In support of the Little Skate Genome Project, the NEBC members have developed several annotation workshops and jamborees to provide training in genome sequencing, annotation and analysis. Acting as a nexus for both curation activities and dissemination of project data, a project web portal, SkateBase (http://skatebase.org) has been developed. As a case study to illustrate effective coupling of community annotation with workforce development, we report the results of the Mitochondrial Genome Annotation Jamborees organized to annotate the first completely assembled element of the Little Skate Genome Project, as a culminating experience for participants from our three prior annotation workshops. We are applying the physical/virtual infrastructure and lessons learned from these activities to enhance and streamline the genome annotation workflow, as we look toward our continuing efforts for larger-scale functional and structural community annotation of the L. erinacea genome.

  4. Community annotation and bioinformatics workforce development in concert—Little Skate Genome Annotation Workshops and Jamborees

    Science.gov (United States)

    Wang, Qinghua; Arighi, Cecilia N.; King, Benjamin L.; Polson, Shawn W.; Vincent, James; Chen, Chuming; Huang, Hongzhan; Kingham, Brewster F.; Page, Shallee T.; Farnum Rendino, Marc; Thomas, William Kelley; Udwary, Daniel W.; Wu, Cathy H.

    2012-01-01

    Recent advances in high-throughput DNA sequencing technologies have equipped biologists with a powerful new set of tools for advancing research goals. The resulting flood of sequence data has made it critically important to train the next generation of scientists to handle the inherent bioinformatic challenges. The North East Bioinformatics Collaborative (NEBC) is undertaking the genome sequencing and annotation of the little skate (Leucoraja erinacea) to promote advancement of bioinformatics infrastructure in our region, with an emphasis on practical education to create a critical mass of informatically savvy life scientists. In support of the Little Skate Genome Project, the NEBC members have developed several annotation workshops and jamborees to provide training in genome sequencing, annotation and analysis. Acting as a nexus for both curation activities and dissemination of project data, a project web portal, SkateBase (http://skatebase.org) has been developed. As a case study to illustrate effective coupling of community annotation with workforce development, we report the results of the Mitochondrial Genome Annotation Jamborees organized to annotate the first completely assembled element of the Little Skate Genome Project, as a culminating experience for participants from our three prior annotation workshops. We are applying the physical/virtual infrastructure and lessons learned from these activities to enhance and streamline the genome annotation workflow, as we look toward our continuing efforts for larger-scale functional and structural community annotation of the L. erinacea genome. PMID:22434832

  5. Reasoning with Annotations of Texts

    OpenAIRE

    Ma , Yue; Lévy , François; Ghimire , Sudeep

    2011-01-01

    International audience; Linguistic and semantic annotations are important features for text-based applications. However, achieving and maintaining a good quality of a set of annotations is known to be a complex task. Many ad hoc approaches have been developed to produce various types of annotations, while comparing those annotations to improve their quality is still rare. In this paper, we propose a framework in which both linguistic and domain information can cooperate to reason with annotat...

  6. PSPP: a protein structure prediction pipeline for computing clusters.

    Directory of Open Access Journals (Sweden)

    Michael S Lee

    2009-07-01

    Full Text Available Protein structures are critical for understanding the mechanisms of biological systems and, subsequently, for drug and vaccine design. Unfortunately, protein sequence data exceed structural data by a factor of more than 200 to 1. This gap can be partially filled by using computational protein structure prediction. While structure prediction Web servers are a notable option, they often restrict the number of sequence queries and/or provide a limited set of prediction methodologies. Therefore, we present a standalone protein structure prediction software package suitable for high-throughput structural genomic applications that performs all three classes of prediction methodologies: comparative modeling, fold recognition, and ab initio. This software can be deployed on a user's own high-performance computing cluster. The pipeline consists of a Perl core that integrates more than 20 individual software packages and databases, most of which are freely available from other research laboratories. The query protein sequences are first divided into domains either by domain boundary recognition or Bayesian statistics. The structures of the individual domains are then predicted using template-based modeling or ab initio modeling. The predicted models are scored with a statistical potential and an all-atom force field. The top-scoring ab initio models are annotated by structural comparison against the Structural Classification of Proteins (SCOP) fold database. Furthermore, secondary structure, solvent accessibility, transmembrane helices, and structural disorder are predicted. The results are generated in text, tab-delimited, and hypertext markup language (HTML) formats. So far, the pipeline has been used to study viral and bacterial proteomes. The standalone pipeline that we introduce here, unlike protein structure prediction Web servers, allows users to devote their own computing assets to process a potentially unlimited number of queries as well as perform

  7. The BioC-BioGRID corpus: full text articles annotated for curation of protein-protein and genetic interactions.

    Science.gov (United States)

    Islamaj Dogan, Rezarta; Kim, Sun; Chatr-Aryamontri, Andrew; Chang, Christie S; Oughtred, Rose; Rust, Jennifer; Wilbur, W John; Comeau, Donald C; Dolinski, Kara; Tyers, Mike

    2017-01-01

    A great deal of information on the molecular genetics and biochemistry of model organisms has been reported in the scientific literature. However, this data is typically described in free text form and is not readily amenable to computational analyses. To this end, the BioGRID database systematically curates the biomedical literature for genetic and protein interaction data. This data is provided in a standardized computationally tractable format and includes structured annotation of experimental evidence. BioGRID curation necessarily involves substantial human effort by expert curators who must read each publication to extract the relevant information. Computational text-mining methods offer the potential to augment and accelerate manual curation. To facilitate the development of practical text-mining strategies, a new challenge was organized in BioCreative V for the BioC task, the collaborative Biocurator Assistant Task. This was a non-competitive, cooperative task in which the participants worked together to build BioC-compatible modules into an integrated pipeline to assist BioGRID curators. As an integral part of this task, a test collection of full text articles was developed that contained both biological entity annotations (gene/protein and organism/species) and molecular interaction annotations (protein-protein and genetic interactions (PPIs and GIs)). This collection, which we call the BioC-BioGRID corpus, was annotated by four BioGRID curators over three rounds of annotation and contains 120 full text articles curated in a dataset representing two major model organisms, namely budding yeast and human. The BioC-BioGRID corpus contains annotations for 6409 mentions of genes and their Entrez Gene IDs, 186 mentions of organism names and their NCBI Taxonomy IDs, 1867 mentions of PPIs and 701 annotations of PPI experimental evidence statements, 856 mentions of GIs and 399 annotations of GI evidence statements. 
The purpose, characteristics and possible future

  8. Ten steps to get started in Genome Assembly and Annotation

    Science.gov (United States)

    Dominguez Del Angel, Victoria; Hjerde, Erik; Sterck, Lieven; Capella-Gutierrez, Salvadors; Notredame, Cederic; Vinnere Pettersson, Olga; Amselem, Joelle; Bouri, Laurent; Bocs, Stephanie; Klopp, Christophe; Gibrat, Jean-Francois; Vlasova, Anna; Leskosek, Brane L.; Soler, Lucile; Binzer-Panchal, Mahesh; Lantz, Henrik

    2018-01-01

    As a part of the ELIXIR-EXCELERATE efforts in capacity building, we present here 10 steps to facilitate researchers getting started in genome assembly and genome annotation. The guidelines given are broadly applicable, intended to be stable over time, and cover all aspects from start to finish of a general assembly and annotation project. Intrinsic properties of genomes are discussed, as is the importance of using high quality DNA. Different sequencing technologies and generally applicable workflows for genome assembly are also detailed. We cover structural and functional annotation and encourage readers to also annotate transposable elements, something that is often omitted from annotation workflows. The importance of data management is stressed, and we give advice on where to submit data and how to make your results Findable, Accessible, Interoperable, and Reusable (FAIR). PMID:29568489

  9. 78 FR 41991 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding

    Science.gov (United States)

    2013-07-12

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice; Issuance of Advisory... Gas and Hazardous Liquid Pipeline Systems. Subject: Potential for Damage to Pipeline Facilities Caused...

  10. CFD analysis of onshore oil pipelines in permafrost

    Science.gov (United States)

    Nardecchia, Fabio; Gugliermetti, Luca; Gugliermetti, Franco

    2017-07-01

    Underground pipelines are built all over the world, and knowledge of their thermal interaction with the soil is crucial for their design. This paper studies the "thermal influenced zone" produced by a buried pipeline and the parameters that can influence its extension, using 2D steady-state CFD simulations, with the aim of improving the design of new pipelines in permafrost. In order to represent a real case, the study refers to the Eastern Siberia-Pacific Ocean Oil Pipeline at the three stations of Mo'he, Jiagedaqi and Qiqi'har. Different burial depths and diameters of the pipe are analyzed; the simulation results show that the effect of the oil pipeline diameter on the thermal field increases with distance from the starting station.
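
Before running full 2D CFD, the heat loss from a buried pipe can be estimated with the classical conduction shape factor for an isothermal cylinder in a semi-infinite medium, q = 2πk·ΔT / cosh⁻¹(z/r); the soil conductivity, burial depth, radius and temperature difference below are illustrative assumptions, not parameters from the paper.

```python
import math

# Illustrative parameters (assumptions, not values from the paper):
k = 1.2    # soil thermal conductivity, W/(m·K)
z = 2.0    # burial depth to pipe centerline, m
r = 0.3    # pipe outer radius, m
dT = 40.0  # oil-to-ground-surface temperature difference, K

# Steady-state heat loss per metre of pipe via the conduction shape
# factor S = 2*pi / acosh(z/r), valid when z is well above r.
q = 2 * math.pi * k * dT / math.acosh(z / r)
print(f"heat loss ≈ {q:.0f} W/m")
```

Such a closed-form estimate gives a sanity check on the thermally influenced zone predicted by the CFD model; note that a larger pipe radius reduces acosh(z/r) and so raises q, consistent with the diameter effect discussed above.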

  11. 78 FR 41496 - Pipeline Safety: Meetings of the Gas and Liquid Pipeline Advisory Committees

    Science.gov (United States)

    2013-07-10

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0156] Pipeline Safety: Meetings of the Gas and Liquid Pipeline Advisory Committees AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice of advisory committee...

  12. Overview of slurry pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Gandhi, R L

    1982-01-01

    Slurry pipelines have proven to be a technically feasible, environmentally attractive and economic method of transporting finely divided particles over long distances. A pipeline system normally consists of preparation, pipeline and utilization facilities and requires optimization of all three components taken together. A considerable amount of research work has been done to develop hydraulic design of a slurry pipeline. Equipment selection and estimation of corrosion-erosion are considered to be as important as the hydraulic design. Future applications are expected to be for the large-scale transport of coal and for the exploitation of remotely located mineral deposits such as iron ore and copper. Application of slurry pipelines for the exploitation of remotely located mineral deposits is illustrated by the Kudremukh iron concentrate slurry pipeline in India.

  13. NoGOA: predicting noisy GO annotations using evidences and sparse representation.

    Science.gov (United States)

    Yu, Guoxian; Lu, Chang; Wang, Jun

    2017-07-21

    Gene Ontology (GO) is a community effort to represent functional features of gene products. GO annotations (GOA) provide functional associations between GO terms and gene products. Due to resource limitations, only a small portion of annotations are manually checked by curators, and the others are electronically inferred. Although quality control techniques have been applied to ensure the quality of annotations, the community consistently reports that there are still considerable noisy (or incorrect) annotations. Given the wide application of annotations, however, how to identify noisy annotations is an important yet seldom-studied open problem. We introduce a novel approach called NoGOA to predict noisy annotations. NoGOA applies sparse representation on the gene-term association matrix to reduce the impact of noisy annotations, and takes advantage of the sparse representation coefficients to measure the semantic similarity between genes. It then preliminarily predicts noisy annotations of a gene based on aggregated votes from the semantic neighborhood genes of that gene. Next, NoGOA estimates the ratio of noisy annotations for each evidence code based on direct annotations in GOA files archived in different periods, weights entries of the association matrix via the estimated ratios, and propagates the weights to ancestors of direct annotations using the GO hierarchy. Finally, it integrates the evidence-weighted association matrix and the aggregated votes to predict noisy annotations. Experiments on archived GOA files of six model species (H. sapiens, A. thaliana, S. cerevisiae, G. gallus, B. taurus and M. musculus) demonstrate that NoGOA achieves significantly better results than other related methods and that removing noisy annotations improves the performance of gene function prediction. The comparative study justifies the effectiveness of integrating evidence codes with sparse representation for predicting noisy GO annotations. Codes and datasets are available at http://mlda.swu.edu.cn/codes.php?name=NoGOA.
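    The neighbor-voting step can be sketched on a toy gene-by-term matrix. The code below uses cosine similarity between gene annotation vectors as a simple stand-in for NoGOA's sparse-representation coefficients (an illustrative simplification, not the paper's method), and flags an annotation as suspect when the gene's nearest neighbors give it little support; the matrix, `k` and `threshold` are made-up examples.

```python
import numpy as np

def flag_suspect_annotations(A, k=3, threshold=0.2):
    """Vote-based screen for suspect gene-term annotations.

    A: binary gene-by-term annotation matrix (rows = genes).
    For each annotated pair (g, t), the k most similar genes vote on
    term t; a low weighted vote marks (g, t) as suspect.
    """
    A = np.asarray(A, dtype=float)
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    S = (A / norms) @ (A / norms).T       # cosine similarity between genes
    np.fill_diagonal(S, 0.0)              # a gene does not vote for itself
    suspects = []
    for g in range(A.shape[0]):
        nbrs = np.argsort(S[g])[::-1][:k]             # k most similar genes
        w = S[g, nbrs]
        denom = w.sum() or 1.0
        for t in np.flatnonzero(A[g]):
            vote = (w * A[nbrs, t]).sum() / denom     # weighted neighbor vote
            if vote < threshold:
                suspects.append((g, t))
    return suspects

# Toy example: gene 0 carries one term (3) that none of its neighbors share.
A = [[1, 1, 0, 1],
     [1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 0]]
suspects = flag_suspect_annotations(A)
```

    On this toy matrix the odd annotation (0, 3) is flagged, while well-supported annotations such as (1, 0) are not.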

  14. Pipeline Drag Reducers

    International Nuclear Information System (INIS)

    Marawan, H.

    2004-01-01

    Pipeline drag reducers have proven to be an extremely powerful tool in fluid transportation. High molecular weight polymers are used to reduce the frictional pressure loss in crude oil, refined fuel and aqueous pipelines. Chemically, the main pipeline drag reducers are one of the following polymers and copolymers, classified according to the type of fluid: low-density polyethylene, a copolymer of 1-hexene cross-linked with divinylbenzene, polyacrylamide, polyalkylene oxide polymers and their copolymers, fluorocarbons, polyalkyl methacrylates, and a terpolymer of styrene, alkyl acrylate and acrylic acid. Drag reduction is the increase in pumpability of a fluid caused by the addition of small amounts of an additive to the fluid. The effectiveness of a drag reducer is normally expressed in terms of percent drag reduction. Frictional pressure loss in a pipeline system wastes energy and is costly. The drag-reducing additive minimizes flow turbulence, increases throughput and reduces energy costs. Flow can be increased by more than 80 % with existing assets. The drag reducer injected in the Mostorod-to-Tanta crude oil pipeline achieved 35.4 % drag reduction and a 23.2 % flow increase over the actual performance. The experimental application of DRA on the Arab Petroleum Pipelines Company (SUMED) line achieved a flow increase ranging from 9 to 32 %.
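    The two effectiveness figures quoted in the abstract are simple ratios. The sketch below computes percent drag reduction (relative drop in frictional pressure loss at the same flow rate) and percent flow increase (at the same operating pressure); the pressure and flow values are made up purely to reproduce the reported 35.4 % and 23.2 % figures, not actual pipeline measurements.

```python
def percent_drag_reduction(dp_baseline, dp_treated):
    """Percent drag reduction: relative drop in frictional pressure loss
    at the same flow rate after injecting the drag-reducing additive."""
    return 100.0 * (dp_baseline - dp_treated) / dp_baseline

def percent_flow_increase(q_baseline, q_treated):
    """Percent flow increase at the same operating pressure."""
    return 100.0 * (q_treated - q_baseline) / q_baseline

# Illustrative numbers chosen to match the reported percentages:
dr = percent_drag_reduction(50.0, 32.3)       # pressure drop, bar
fi = percent_flow_increase(1000.0, 1232.0)    # flow rate, m3/h
```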

  15. BEACON: automated tool for Bacterial GEnome Annotation ComparisON

    KAUST Repository

    Kalkatawi, Manal M.; Alam, Intikhab; Bajic, Vladimir B.

    2015-01-01

    We developed BEACON, a fast tool for an automated and systematic comparison of different annotations of single genomes. The extended annotation assigns putative functions to many genes with previously unknown functions. BEACON is available under the GNU General Public License version 3.0 and is accessible at: http://www.cbrc.kaust.edu.sa/BEACON/

  16. Putative drug and vaccine target protein identification using comparative genomic analysis of KEGG annotated metabolic pathways of Mycoplasma hyopneumoniae.

    Science.gov (United States)

    Damte, Dereje; Suh, Joo-Won; Lee, Seung-Jin; Yohannes, Sileshi Belew; Hossain, Md Akil; Park, Seung-Chun

    2013-07-01

    In the present study, a computational comparative and subtractive genomic/proteomic analysis, aimed at identifying putative therapeutic target and vaccine candidate proteins from Kyoto Encyclopedia of Genes and Genomes (KEGG) annotated metabolic pathways of Mycoplasma hyopneumoniae, was performed for drug design and vaccine production pipelines against M. hyopneumoniae. The comparative genomic and metabolic pathway analysis, following a predefined computational workflow, extracted a total of 41 annotated metabolic pathways from KEGG, among which five were unique to M. hyopneumoniae. A total of 234 proteins were identified as involved in these metabolic pathways. Of these, 125 non-homologous and predicted essential proteins could serve as potential drug targets and vaccine candidates; additional prioritizing parameters characterized 21 proteins as vaccine candidates, while druggability evaluation of each identified protein against the DrugBank database prioritized 42 proteins as suitable drug targets. Copyright © 2013 Elsevier Inc. All rights reserved.
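    The core of a subtractive workflow like this is a sequence of set operations: discard pathogen proteins with host homologs, keep the predicted-essential ones, then split by druggability. The sketch below is a schematic of that logic with made-up protein IDs, not the study's actual dataset or filtering criteria.

```python
def prioritize_targets(pathogen_proteins, host_homologs, essential, druggable):
    """Subtractive filter: keep pathogen proteins that lack host homologs
    and are predicted essential, then split them by druggability.
    All inputs are plain sets of protein IDs."""
    candidates = (pathogen_proteins - host_homologs) & essential
    drug_targets = candidates & druggable
    vaccine_candidates = candidates - druggable
    return drug_targets, vaccine_candidates

# Toy IDs for illustration:
pathogen = {"P1", "P2", "P3", "P4", "P5"}
host_homologs = {"P1"}                      # shared with the host -> unsafe
essential = {"P2", "P3", "P4"}              # predicted essential
druggable = {"P2"}                          # has a DrugBank-style hit
drug_targets, vaccine_candidates = prioritize_targets(
    pathogen, host_homologs, essential, druggable)
```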

  17. Design of Submarine Pipeline With Respect to Corrosion and Material Selection

    OpenAIRE

    El-Mogi, Hossam

    2016-01-01

    Master's thesis in Offshore technology : subsea technology Pipelines are an essential part of the oil and gas industry as they are the main means of transportation. As the offshore technology advances, subsea pipelines are being operated in more demanding environments. For the pipelines to operate efficiently, they have to be carefully designed. One of the main threats to the integrity of the pipeline is corrosion, which has caused many failures. Corrosion in subsea pipelines has different...

  18. Seismic response of buried pipelines: a state-of-the-art review

    International Nuclear Information System (INIS)

    Datta, T.K.

    1999-01-01

    A state-of-the-art review of the seismic response of buried pipelines is presented. The review covers modeling of the soil-pipe system and seismic excitation, methods of response analysis of buried pipelines, seismic behavior of buried pipelines under different parametric variations, seismic stresses at the bends and intersections of pipeline networks, pipe damage in earthquakes, and seismic risk analysis of buried pipelines. Based on the review, the future scope of work on the subject is outlined. (orig.)

  19. Predicting word sense annotation agreement

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector; Johannsen, Anders Trærup; Lopez de Lacalle, Oier

    2015-01-01

    High agreement is a common objective when annotating data for word senses. However, a number of factors make perfect agreement impossible, e.g. the limitations of the sense inventories, the difficulty of the examples or the interpretation preferences of the annotators. Estimating potential agreement is thus a relevant task to supplement the evaluation of sense annotations. In this article we propose two methods to predict agreement on word-annotation instances. We experiment with a continuous representation and a three-way discretization of observed agreement. In spite of the difficulty...
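    Observed agreement on one instance is usually the fraction of annotator pairs that chose the same sense, and a three-way discretization just bins that value. The sketch below shows both; the cut-offs 0.5 and 0.8 are illustrative assumptions, not the bins used in the paper.

```python
from itertools import combinations

def observed_agreement(labels):
    """Fraction of annotator pairs that assigned the same sense to one
    word-annotation instance."""
    pairs = list(combinations(labels, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

def discretize(agreement, low=0.5, high=0.8):
    """Three-way discretization of observed agreement
    (cut-offs are illustrative)."""
    if agreement < low:
        return "low"
    return "mid" if agreement < high else "high"

# Three annotators, one dissenting sense label:
a = observed_agreement(["bank#1", "bank#1", "bank#2"])
```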

  20. Cooling pipeline disposing structure for large-scaled cryogenic structure

    International Nuclear Information System (INIS)

    Takahashi, Hiroyuki.

    1996-01-01

    The present invention concerns an electromagnetic force supporting structure for superconductive coils. As the size of a cryogenic structure increases, cooling takes longer, the temperature difference between the cooling pipelines and the cryogenic structure grows over a wide range, and the difference in heat shrinkage increases, raising thermal stresses. In this design, the cooling pipelines and the structure are connected by a thin metal plate made of a material whose heat conductivity is higher than that of the structure's material by an order of magnitude or more, and the thin plate is bent. The displacement between the cryogenic structure and the cooling pipelines caused by heat shrinkage is absorbed by the elongation and shrinkage of the bent plate, and the resulting thermal stresses are reduced. In addition, heat from the cryogenic structure is transferred through the thin plate. The cooling pipelines can thus be secured to the cryogenic structure in a way that permits cooling by heat transfer while absorbing a large three-dimensional displacement arising from the differing temperature distributions of the large, three-dimensionally shaped cryogenic structure and the cooling pipelines. (N.H.)
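    The stress the flexible connection avoids can be estimated from the textbook relation for a fully constrained member, sigma = E * alpha * dT. The numbers below are generic stainless-steel-like values chosen for illustration, not taken from the patent.

```python
def constrained_thermal_stress(E, alpha, dT):
    """Axial stress (Pa) in a fully constrained member subjected to a
    temperature change dT: sigma = E * alpha * dT.  This is the stress a
    rigid pipe-to-structure connection would see; the bent thin plate
    avoids it by flexing instead of constraining."""
    return E * alpha * dT

def free_contraction(alpha, dT, length):
    """Unconstrained shrinkage (m) over `length` for temperature change dT."""
    return alpha * dT * length

# Illustrative values: E = 200 GPa, alpha = 16e-6 1/K, cooldown of 280 K.
sigma = constrained_thermal_stress(E=200e9, alpha=16e-6, dT=280.0)
delta = free_contraction(alpha=16e-6, dT=280.0, length=10.0)
```

    With these numbers the fully constrained stress is roughly 0.9 GPa, far above typical yield strengths, while the free shrinkage over 10 m is several centimetres; this is the displacement the bent plate must absorb.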

  1. North America pipeline map

    International Nuclear Information System (INIS)

    Anon.

    2005-01-01

    This map presents details of pipelines currently in place throughout North America. Fifty-nine natural gas pipelines are presented, as well as 16 oil pipelines. The map also identifies six proposed natural gas pipelines. Major cities, roads and highways are included as well as state and provincial boundaries. The National Petroleum Reserve is identified, as well as the Arctic National Wildlife Refuge. The following companies placed advertisements on the map with details of the services they provide relating to pipeline management and construction: Ferus Gas Industries Trust; Proline; SulfaTreat Direct Oxidation; and TransGas. 1 map

  2. 76 FR 303 - Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines

    Science.gov (United States)

    2011-01-04

    ... leak detection requirements for all pipelines; whether to require the installation of emergency flow... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 195 [Docket ID PHMSA-2010-0229] RIN 2137-AE66 Pipeline Safety: Safety of On-Shore Hazardous Liquid...

  3. Alignment-Annotator web server: rendering and annotating sequence alignments.

    Science.gov (United States)

    Gille, Christoph; Fähling, Michael; Weyand, Birgit; Wieland, Thomas; Gille, Andreas

    2014-07-01

    Alignment-Annotator is a novel web service designed to generate interactive views of annotated nucleotide and amino acid sequence alignments (i) de novo and (ii) embedded in other software. All computations are performed on the server side. Interactivity is implemented in HTML5, a language native to web browsers. The alignment is initially displayed using default settings and can be modified through the graphical user interface. For example, individual sequences can be reordered or deleted using drag and drop, amino acid color code schemes can be applied and annotations can be added. Annotations can be made manually or imported (from BioDAS servers, UniProt, the Catalytic Site Atlas and the PDB). Some edits take immediate effect while others require server interaction and may take a few seconds to execute. The final alignment document can be downloaded as a zip-archive containing the HTML files. Because of the use of HTML, the resulting interactive alignment can be viewed on any platform, including Windows, Mac OS X, Linux, Android and iOS, in any standard web browser. Importantly, no plugins or Java are required, and therefore Alignment-Annotator represents the first interactive browser-based alignment visualization. http://www.bioinformatics.org/strap/aa/ and http://strap.charite.de/aa/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. Allele Workbench: transcriptome pipeline and interactive graphics for allele-specific expression.

    Directory of Open Access Journals (Sweden)

    Carol A Soderlund

    Full Text Available Sequencing the transcriptome can answer various questions, such as determining the transcripts expressed in a given species for a specific tissue or condition, evaluating differential expression, discovering variants, and evaluating allele-specific expression. Differential expression evaluates the expression differences between different strains, tissues, and conditions. Allele-specific expression evaluates expression differences between parental alleles. Both differential expression and allele-specific expression have been studied for heterosis (hybrid vigor), where the hybrid has improved performance over the parents for one or more traits. The Allele Workbench software was developed for a heterosis study that evaluated allele-specific expression for a mouse F1 hybrid using libraries from multiple tissues with biological replicates. This software has been made into a distributable package, which includes a pipeline, a Java interface to build the database, and a Java interface for query and display of the results. The required input is a reference genome, an annotation file, and one or more RNA-Seq libraries with optional replicates. It evaluates allelic imbalance at the SNP and transcript level and flags transcripts with significant opposite-directional allele-specific expression. The Java interface allows the user to view data from libraries, replicates, genes, transcripts, exons, and variants, including queries on allele imbalance for selected libraries. To determine the impact of allele-specific SNPs on protein folding, variants are annotated with their effect (e.g., missense), and the parental protein sequences may be exported for protein folding analysis. The Allele Workbench processing results in transcript files and read counts that can be used as input to the previously published Transcriptome Computational Workbench, which has a new algorithm for determining a trimmed set of gene ontology terms. The software with demo files is available
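    A common screen for allelic imbalance at a SNP is an exact binomial test of the reference/alternate read counts against the 50:50 split expected under equal parental expression. The sketch below implements that screen with the standard library; it is a generic statistic of this kind, not necessarily the exact test Allele Workbench uses, and the read counts are invented.

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of outcomes no
    more likely than the observed count k out of n."""
    probs = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(pr for pr in probs if pr <= probs[k] + 1e-12)

def allele_imbalance(ref_reads, alt_reads, alpha=0.05):
    """Flag a SNP as allele-imbalanced when its read counts deviate
    significantly from the 50:50 expectation."""
    pval = binomial_two_sided_p(ref_reads, ref_reads + alt_reads)
    return pval, pval < alpha

p_skewed, skewed = allele_imbalance(18, 2)    # strongly imbalanced SNP
p_even, even = allele_imbalance(10, 10)       # balanced SNP
```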

  5. Leadership Pipeline

    DEFF Research Database (Denmark)

    Elmholdt, Claus Westergård

    2013-01-01

    The article examines the empirical basis of the Leadership Pipeline. First, the Leadership Pipeline model of leadership passages and crossroads in upward transitions between organizational leadership levels is described (Freedman, 1998; Charan, Drotter and Noel, 2001). Next, the focus turns to the... the relationship between continuity and discontinuity in leadership competencies across organizational levels is presented and discussed. Finally, the limitations of a competence-based approach to the Leadership Pipeline are discussed, and it is proposed that successful leadership depends just as much on...

  6. Which future for conventional pipeline laying barges?; Quel avenir pour les barges de pose de pipelines conventionnelles ?

    Energy Technology Data Exchange (ETDEWEB)

    Borelli, A.; Perinet, D. [ETPM International (International organizations without location)

    1997-05-01

    The aim of this paper is to study the evolution of conventional pipeline laying barges. The past and present capacities of some barges are presented in order to follow how their equipment has evolved over time in response to market needs. The second part outlines the main characteristics of today's market needs. Different analyses are made according to the different means of pipeline laying: conventional laying, flexible pipes, and rigid pipes using the unrolling technique. Market trends in these three domains show appreciable growth from 1996 to 1997 and are expected to remain at a sustained level during the following years. However, the tendency is an evolution towards smaller-diameter pipes and greater depths. The last part concerns the evolution of laying barges. The most important improvements in the pipeline laying industry concern dynamic positioning, laying techniques (the 'S'-lay technique), and the rate of laying using real-time control techniques. (J.S.)

  7. 76 FR 29333 - Pipeline Safety: Meetings of the Technical Pipeline Safety Standards Committee and the Technical...

    Science.gov (United States)

    2011-05-20

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Technical Hazardous Liquid Pipeline Safety Standards Committee AGENCY: Pipeline and Hazardous Materials... for natural gas pipelines and for hazardous liquid pipelines. Both committees were established under...

  8. Genome Wide Re-Annotation of Caldicellulosiruptor saccharolyticus with New Insights into Genes Involved in Biomass Degradation and Hydrogen Production.

    Science.gov (United States)

    Chowdhary, Nupoor; Selvaraj, Ashok; KrishnaKumaar, Lakshmi; Kumar, Gopal Ramesh

    2015-01-01

    Caldicellulosiruptor saccharolyticus has proven itself to be an excellent candidate for biological hydrogen (H2) production, but it still has major drawbacks, such as sensitivity to high osmotic pressure and low volumetric H2 productivity, which should be considered before it can be used industrially. A whole-genome re-annotation was carried out as an attempt to update the incomplete genome information that causes gaps in knowledge, especially in the area of metabolic engineering, to improve the H2-producing capabilities of C. saccharolyticus. Whole-genome re-annotation was performed manually for 2,682 coding sequences (CDSs). Bioinformatics tools based on sequence similarity, motif search, phylogenetic analysis and fold recognition were employed for re-annotation. Our methodology could successfully add functions for 409 hypothetical proteins (HPs) and 46 proteins previously annotated as putative, and assigned more accurate functions to the known protein sequences. Homology-based gene annotation has been used as a standard method for assigning function to novel proteins, but over the past few years many non-homology-based methods for protein function prediction, such as genomic context approaches, have been developed. Using non-homology-based functional prediction methods, we were able to assign cellular processes or physical complexes to 249 hypothetical sequences. Our re-annotation pipeline highlights the addition of 231 new CDSs, generated from the MicroScope platform, to the original genome, with functional predictions for 49 of them. The re-annotation of HPs and new CDSs is stored in the relational database that is available on the MicroScope web-based platform. In parallel, comparative genome analyses were performed among the members of the genus Caldicellulosiruptor to understand the function and evolutionary processes. Further, with results from the integrated re-annotation studies (homology and genomic context approaches), we strongly suggest that Csac

  9. Genome Wide Re-Annotation of Caldicellulosiruptor saccharolyticus with New Insights into Genes Involved in Biomass Degradation and Hydrogen Production.

    Directory of Open Access Journals (Sweden)

    Nupoor Chowdhary

    Full Text Available Caldicellulosiruptor saccharolyticus has proven itself to be an excellent candidate for biological hydrogen (H2) production, but it still has major drawbacks, such as sensitivity to high osmotic pressure and low volumetric H2 productivity, which should be considered before it can be used industrially. A whole-genome re-annotation was carried out as an attempt to update the incomplete genome information that causes gaps in knowledge, especially in the area of metabolic engineering, to improve the H2-producing capabilities of C. saccharolyticus. Whole-genome re-annotation was performed manually for 2,682 coding sequences (CDSs). Bioinformatics tools based on sequence similarity, motif search, phylogenetic analysis and fold recognition were employed for re-annotation. Our methodology could successfully add functions for 409 hypothetical proteins (HPs) and 46 proteins previously annotated as putative, and assigned more accurate functions to the known protein sequences. Homology-based gene annotation has been used as a standard method for assigning function to novel proteins, but over the past few years many non-homology-based methods for protein function prediction, such as genomic context approaches, have been developed. Using non-homology-based functional prediction methods, we were able to assign cellular processes or physical complexes to 249 hypothetical sequences. Our re-annotation pipeline highlights the addition of 231 new CDSs, generated from the MicroScope platform, to the original genome, with functional predictions for 49 of them. The re-annotation of HPs and new CDSs is stored in the relational database that is available on the MicroScope web-based platform. In parallel, comparative genome analyses were performed among the members of the genus Caldicellulosiruptor to understand the function and evolutionary processes. Further, with results from the integrated re-annotation studies (homology and genomic context approaches), we strongly

  10. 77 FR 16471 - Pipeline Safety: Implementation of the National Registry of Pipeline and Liquefied Natural Gas...

    Science.gov (United States)

    2012-03-21

    ... Registry of Pipeline and Liquefied Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts... Register (75 FR 72878) titled: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting...

  11. ATLAS (Automatic Tool for Local Assembly Structures) - A Comprehensive Infrastructure for Assembly, Annotation, and Genomic Binning of Metagenomic and Metatranscriptomic Data

    Energy Technology Data Exchange (ETDEWEB)

    White, Richard A.; Brown, Joseph M.; Colby, Sean M.; Overall, Christopher C.; Lee, Joon-Yong; Zucker, Jeremy D.; Glaesemann, Kurt R.; Jansson, Georg C.; Jansson, Janet K.

    2017-03-02

    ATLAS (Automatic Tool for Local Assembly Structures) is a comprehensive multi-omics data analysis pipeline that is massively parallel and scalable. ATLAS contains a modular analysis pipeline for assembly, annotation, quantification and genome binning of metagenomics and metatranscriptomics data, and a framework for reference metaproteomic database construction. ATLAS transforms raw sequence data into functional and taxonomic data at the microbial population level and provides genome-centric resolution through genome binning. ATLAS provides robust taxonomy based on majority voting of protein-coding open reading frames rolled up at the contig level using a modified lowest common ancestor (LCA) analysis. ATLAS is user-friendly, easy to install through Bioconda, maintained as open source on GitHub, and implemented in Snakemake for modular, customizable workflows.
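    The contig-level rollup can be illustrated with a toy majority-vote walk down the taxonomic ranks: keep the majority taxon at each rank while it is supported by enough of the contig's ORFs, and stop at the first rank where support drops. This is a simplified sketch in the spirit of ATLAS's modified-LCA voting, not its actual implementation; the lineages and the 0.5 support threshold are made-up examples.

```python
from collections import Counter

def contig_taxonomy(orf_lineages, min_fraction=0.5):
    """Contig-level taxonomy by majority vote over ORF lineages.

    Each lineage is a tuple of taxa from broad to narrow, e.g.
    ("Bacteria", "Proteobacteria", ...).  Support at each rank is
    measured against the total number of ORFs on the contig."""
    n = len(orf_lineages)
    lineages = list(orf_lineages)
    assigned = []
    for rank in range(max(len(l) for l in lineages)):
        votes = Counter(l[rank] for l in lineages if len(l) > rank)
        if not votes:
            break
        taxon, count = votes.most_common(1)[0]
        if count / n <= min_fraction:
            break                      # support too weak: stop at this rank
        assigned.append(taxon)
        # only lineages consistent so far keep voting at deeper ranks
        lineages = [l for l in lineages if len(l) > rank and l[rank] == taxon]
    return tuple(assigned)

orfs = [("Bacteria", "Proteobacteria", "Gammaproteobacteria"),
        ("Bacteria", "Proteobacteria", "Alphaproteobacteria"),
        ("Bacteria", "Firmicutes")]
tax = contig_taxonomy(orfs)
```

    Here the contig is confidently "Bacteria; Proteobacteria", but the class-level vote splits 1-1, so the walk stops there.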

  12. Objective-guided image annotation.

    Science.gov (United States)

    Mao, Qi; Tsang, Ivor Wai-Hung; Gao, Shenghua

    2013-04-01

    Automatic image annotation, which is usually formulated as a multi-label classification problem, is one of the major tools used to enhance the semantic understanding of web images. Many multimedia applications (e.g., tag-based image retrieval) can greatly benefit from image annotation. However, the insufficient performance of image annotation methods prevents these applications from being practical. On the other hand, specific measures are usually designed to evaluate how well one annotation method performs for a specific objective or application, but most image annotation methods do not consider optimization of these measures, so they are inevitably trapped in suboptimal performance on these objective-specific measures. To address this issue, we first summarize a variety of objective-guided performance measures under a unified representation. Our analysis reveals that macro-averaging measures are very sensitive to infrequent keywords, and the Hamming measure is easily affected by skewed distributions. We then propose a unified multi-label learning framework that directly optimizes a variety of objective-specific measures of multi-label learning tasks. Specifically, we first present a multilayer hierarchical structure of learning hypotheses for multi-label problems, based on which a variety of loss functions with respect to objective-guided measures are defined. We then formulate these loss functions as relaxed surrogate functions and optimize them by structural SVMs. Given the analysis of the various measures and the high time complexity of optimizing micro-averaging measures, in this paper we focus on example-based measures that are tailor-made for image annotation tasks but are seldom explored in the literature. Experiments show consistency with the formal analysis on two widely used multi-label datasets, and demonstrate the superior performance of our proposed method over state-of-the-art baseline methods in terms of example-based measures on four
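    The claim that macro-averaging is sensitive to infrequent keywords is easy to demonstrate numerically: macro-F1 averages per-label F1 so every keyword weighs the same, while micro-F1 pools the counts first. The toy counts below (one frequent keyword predicted well, one rare keyword missed entirely) are invented for illustration.

```python
def f1(tp, fp, fn):
    """F1 score from true-positive, false-positive, false-negative counts."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_micro(per_label_counts):
    """per_label_counts: list of (tp, fp, fn) per keyword.
    Returns (macro-F1, micro-F1)."""
    macro = sum(f1(*c) for c in per_label_counts) / len(per_label_counts)
    tp = sum(c[0] for c in per_label_counts)
    fp = sum(c[1] for c in per_label_counts)
    fn = sum(c[2] for c in per_label_counts)
    return macro, f1(tp, fp, fn)

# Frequent keyword predicted well + rare keyword (2 instances) missed entirely:
macro, micro = macro_micro([(90, 5, 5), (0, 0, 2)])
```

    The rare keyword's F1 of 0 drags macro-F1 down to about 0.47, while micro-F1 stays near 0.94: a single infrequent keyword dominates one measure and barely moves the other.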

  13. Data integration for management of pipeline system integrity

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Anne A. de; Miranda, Ivan Vicente Janvrot; Silva, Jose Augusto Pereira da [Pipeway Engenharia, Rio de Janeiro, RJ (Brazil); Guimaraes, Frederico S; Magalhaes, Joao Alfredo P [Minds at Work, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    The need to easily access pipeline information and the increasing number of inspections using distinct techniques from different vendors make the use of systems in the Integrity Management Program indispensable. For this reason, MOPI has been developed by two Brazilian companies, Pipeway Engenharia in partnership with Minds at Work. This tool allows recording of data from the design, construction and operation of the pipeline, storage of documents, comparison between the results of different inspections, and planning of inspections, contracts and maintenance of the pipeline. Furthermore, the information registered in the system can be accessed over a network without limitation on time or number of users. This work presents the main details and features of MOPI. (author)

  14. Pressure Transient Model of Water-Hydraulic Pipelines with Cavitation

    Directory of Open Access Journals (Sweden)

    Dan Jiang

    2018-03-01

    Full Text Available Transient pressure investigation of water-hydraulic pipelines is a challenge in the fluid transmission field, since the flow continuity and momentum equations are partial differential equations, vaporous cavitation is highly dynamic, and the frictional force caused by fluid viscosity is especially uncertain. In this study, owing to the different transient pressure dynamics in upstream and downstream pipelines, the finite difference method (FDM) is adopted to handle pressure transients with and without cavitation, as well as steady friction and frequency-dependent unsteady friction. Unlike the traditional method of characteristics (MOC), the FDM offers simple and convenient computation. Furthermore, the mechanisms of cavitation growth and collapse are captured both upstream and downstream of the water-hydraulic pipeline, i.e., the cavitation start time, the end time, the duration, the maximum volume, and the corresponding time points. By referring to the experimental results of two previous works, the comparative simulation results of the two computation methods are verified against experimental water-hydraulic pipelines, which indicates that the finite difference method shows better data consistency than the MOC.
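    A minimal FDM transient solver can be sketched on the frictionless water-hammer equations after instantaneous valve closure, using a Lax-Friedrichs scheme at Courant number 1. This is a deliberately stripped-down illustration of the finite-difference approach: the paper's cavitation model and unsteady friction are omitted, and all pipe parameters below are generic assumptions. The peak head at the valve should match the classical Joukowsky estimate H0 + a*V0/g.

```python
import numpy as np

def water_hammer_peak(a=1000.0, g=9.81, L=1000.0, H0=100.0, V0=1.0,
                      nx=101, steps=150):
    """Peak head at a valve after instantaneous closure, from a
    Lax-Friedrichs finite-difference solution of the frictionless
    water-hammer equations:
        dH/dt + (a^2/g) dV/dx = 0
        dV/dt + g dH/dx = 0
    """
    dx = L / (nx - 1)
    dt = dx / a                          # Courant number = 1
    H = np.full(nx, H0)                  # piezometric head (m)
    V = np.full(nx, V0)                  # velocity (m/s)
    peak = H0
    for _ in range(steps):
        Hn, Vn = H.copy(), V.copy()
        # Lax-Friedrichs update of the interior nodes
        H[1:-1] = 0.5 * (Hn[2:] + Hn[:-2]) \
            - (a * a / g) * dt / (2 * dx) * (Vn[2:] - Vn[:-2])
        V[1:-1] = 0.5 * (Vn[2:] + Vn[:-2]) \
            - g * dt / (2 * dx) * (Hn[2:] - Hn[:-2])
        H[0] = H0                        # upstream reservoir (fixed head)
        V[0] = Vn[0] - g * dt / dx * (Hn[1] - Hn[0])
        V[-1] = 0.0                      # instantaneously closed valve
        H[-1] = Hn[-1] - (a * a / g) * dt / dx * (V[-1] - V[-2])
        peak = max(peak, float(H[-1]))
    return peak

peak = water_hammer_peak()
joukowsky = 100.0 + 1000.0 * 1.0 / 9.81   # classical head-rise estimate
```

    At Courant number 1 this scheme transports the characteristic variables exactly, so the computed peak reproduces the Joukowsky rise of about 102 m above the initial head; a production solver would add the cavity tracking and friction terms the paper describes.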

  15. 75 FR 45591 - Pipeline Safety: Notice of Technical Pipeline Safety Advisory Committee Meetings

    Science.gov (United States)

    2010-08-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Committee Meetings AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION... safety standards, risk assessments, and safety policies for natural gas pipelines and for hazardous...

  16. MG-Digger: an automated pipeline to search for giant virus-related sequences in metagenomes

    Directory of Open Access Journals (Sweden)

    Jonathan eVerneau

    2016-03-01

    Full Text Available The number of metagenomic studies conducted each year is growing dramatically. Storage and analysis of such big data is difficult and time-consuming. Interestingly, analysis shows that environmental and human metagenomes include a significant amount of non-annotated sequences, representing a 'dark matter'. We established a bioinformatics pipeline that automatically detects metagenome reads matching query sequences from a given set, and applied this tool to the detection of sequences matching large and giant DNA viral members of the proposed order Megavirales or virophages. A total of 1,045 environmental and human metagenomes (≈1 terabase pairs) were collected, processed and stored on our bioinformatics server. In addition, nucleotide and protein sequences from 93 Megavirales representatives, including 19 giant viruses of amoebae, and five virophages were collected. The pipeline was generated by scripts written in the Python language and named MG-Digger. Metagenomes previously found to contain megavirus-like sequences were tested as controls. MG-Digger was able to annotate hundreds of metagenome sequences as best matching those of giant viruses. These sequences were most often similar to phycodnavirus or mimivirus sequences, but included reads related to the recently available pandoraviruses, Pithovirus sibericum, and faustoviruses. Compared to other tools, MG-Digger combines stand-alone use on Linux or Windows operating systems through a user-friendly interface, implementation of ready-to-use customized metagenome databases and query sequence databases, adjustable parameters for BLAST searches, and creation of output files containing selected reads with best-match identification. Compared to Metavir 2, a reference tool in viral metagenome analysis, MG-Digger detected 8% more true-positive Megavirales-related reads in a control metagenome. The present work shows that massive, automated and recurrent analyses of metagenomes are

  17. Decontamination device for pipeline

    International Nuclear Information System (INIS)

    Harashina, Heihachi.

    1994-01-01

    The pipelines to be decontaminated are sections of piping contaminated with radioactive materials; they are connected to a fluid transfer means (for example, a bladeless pump) and a ball collector by way of a connector. A fluid mixture of chemical decontaminating liquid and spheres is sent into the pipelines to be decontaminated. The spheres are, for example, heat-resistant porous hard or soft rubber spheres. The fluid discharged from the pipelines is circulated by way of a bypass means, and the inner surface of the pipelines is decontaminated by this circulation. When the bypass means is closed, the fluid discharged from the pipelines is sent to the ball collector, and the spheres are captured by a hopper. The liquid is then sent to the filtering means, which filters the chemical decontaminating liquid and captures the sludges contained in it. (I.N.)

  18. Commentary on CSA Standard Z662-99, oil and gas pipeline systems : Special Publication Z622.1-01

    Energy Technology Data Exchange (ETDEWEB)

    Kalra, S; Burford, G; Adragna, M; Coyle, S; Hawryn, S; Martin, A; McConnell, J [eds.; Kumaralagan, I; Hassan, H

    2001-04-01

    CSA Standard Z662 presents the requirements for oil and gas pipeline systems and describes what has been accepted as good safety practice. This special publication is not part of the Standard and has not been formally reviewed by the Technical Committee responsible for the Standard; it therefore does not provide formal interpretations of the Standard. This special publication should be considered an informal annotation of portions of the Standard. This paper presents background information regarding certain clauses and requirements in the Standard. Revisions to this first edition of CSA Special Publication Z662.1 are likely to be made whenever new information needs to be released. This publication discusses some of the standards for oil and gas pipelines in terms of design, materials, installation, joining, pressure testing, corrosion control, as well as operation, maintenance and upgrading. The report also refers to offshore gas and oil distribution systems, including steel, plastic and aluminum piping. refs., figs.

  19. High temperature pipeline design

    Energy Technology Data Exchange (ETDEWEB)

    Greenslade, J.G. [Colt Engineering, Calgary, AB (Canada). Pipelines Dept.; Nixon, J.F. [Nixon Geotech Ltd., Calgary, AB (Canada); Dyck, D.W. [Stress Tech Engineering Inc., Calgary, AB (Canada)

    2004-07-01

    It is impractical to transport bitumen and heavy oil by pipeline at ambient temperature unless diluents are added to reduce the viscosity. A diluted bitumen pipeline is commonly referred to as a dilbit pipeline. The diluent routinely used is natural gas condensate. Since natural gas condensate is limited in supply, it must be recovered and reused at high cost. This paper presented an alternative to the use of diluent to reduce the viscosity of heavy oil or bitumen. The following two basic design issues for a hot bitumen (hotbit) pipeline were presented: (1) modelling the restart problem, and (2) establishing the maximum practical operating temperature. The transient behaviour during restart of a high temperature pipeline carrying viscous fluids was modelled using the concept of flow capacity. Although the design conditions were hypothetical, they could be encountered in the Athabasca oilsands. It was shown that environmental disturbances occur when the fluid is cooled during shutdown, because the ground temperature near the pipeline rises. This can change growing conditions, even near deeply buried insulated pipelines. Axial thermal loads also constrain the design and operation of a buried pipeline as higher operating temperatures are considered. As such, strain-based design provides the opportunity to design for a higher operating temperature than allowable-stress-based design methods. Expansion loops can partially relieve the thermal stress at a given temperature. As the design temperature increases, there is a point at which above-grade pipelines become attractive options, although the materials and welding procedures must be suitable for low temperature service. 3 refs., 1 tab., 10 figs.

  20. 77 FR 19799 - Pipeline Safety: Pipeline Damage Prevention Programs

    Science.gov (United States)

    2012-04-02

    ... noted ``when the oil pipeline industry developed the survey for its voluntary spill reporting system...) [cir] The American Public Gas Association (APGA) [cir] The Association of Oil Pipelines (AOPL) [cir... the contrary, all 50 states in the United States have a law designed to prevent excavation damage to...

  1. Evaluation of web-based annotation of ophthalmic images for multicentric clinical trials.

    Science.gov (United States)

    Chalam, K V; Jain, P; Shah, V A; Shah, Gaurav Y

    2006-06-01

    An Internet browser-based annotation system can be used to identify and describe features in digitized retinal images, in multicentric clinical trials, in real time. In this web-based annotation system, the user employs a mouse to draw and create annotations on a transparent layer that encapsulates the observations and interpretations of a specific image. Multiple annotation layers may be overlaid on a single image. These layers may correspond to annotations by different users on the same image or annotations of a temporal sequence of images of a disease process, over a period of time. In addition, geometrical properties of annotated figures may be computed and measured. The annotations are stored in a central repository database on a server, from which they can be retrieved by multiple users in real time. This system facilitates objective evaluation of digital images and comparison of double-blind readings of digital photographs, with an identifiable audit trail. Annotation of ophthalmic images allowed clinically feasible and useful interpretation to track properties of an area of fundus pathology. This provided an objective method to monitor properties of pathologies over time, an essential component of multicentric clinical trials. The annotation system also allowed users to view stereo-pair images stereoscopically. This web-based annotation system is useful and valuable in monitoring patient care, in multicentric clinical trials, telemedicine, teaching and routine clinical settings.
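The layered-annotation model described above, with geometric measurement of drawn figures, can be sketched minimally. The class and field names below are our own illustration, not the system's actual schema; the area computation is the standard shoelace formula.

```python
# Sketch of per-image annotation layers (one per reader or per visit) and a
# geometric property (area) of a drawn outline. Hypothetical data model.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    label: str
    polygon: list          # [(x, y), ...] outline drawn with the mouse

@dataclass
class Layer:
    reader: str
    annotations: list = field(default_factory=list)

def polygon_area(points):
    """Shoelace formula: area of a simple polygon, in pixel^2."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

Overlaying several `Layer` objects on one image then corresponds directly to the multi-reader and temporal-sequence use cases in the abstract.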

  2. Evidence-based gene models for structural and functional annotations of the oil palm genome.

    Science.gov (United States)

    Chan, Kuang-Lim; Tatarinova, Tatiana V; Rosli, Rozana; Amiruddin, Nadzirah; Azizi, Norazah; Halim, Mohd Amin Ab; Sanusi, Nik Shazana Nik Mohd; Jayanthi, Nagappan; Ponomarenko, Petr; Triska, Martin; Solovyev, Victor; Firdaus-Raih, Mohd; Sambanthamurthi, Ravigadevi; Murphy, Denis; Low, Eng-Ti Leslie

    2017-09-08

    Oil palm is an important source of edible oil. The importance of the crop, as well as its long breeding cycle (10-12 years), led to the sequencing of its genome in 2013 to pave the way for genomics-guided breeding. Nevertheless, the first set of gene predictions, although useful, had many fragmented genes. Classification and characterization of genes associated with traits of interest, such as those for fatty acid biosynthesis and disease resistance, were also limited. Lipid-, especially fatty acid (FA)-related genes are of particular interest for the oil palm as they specify oil yields and quality. This paper presents the characterization of the oil palm genome using different gene prediction methods and comparative genomics analysis, identification of FA biosynthesis and disease resistance genes, and the development of an annotation database and bioinformatics tools. Using two independent gene-prediction pipelines, Fgenesh++ and Seqping, 26,059 oil palm genes with transcriptome and RefSeq support were identified from the oil palm genome. These coding regions of the genome have a characteristic broad distribution of GC3 (the fraction of cytosine and guanine in the third position of a codon), with over half the GC3-rich genes (GC3 ≥ 0.75286) being intronless. In comparison, only one-seventh of the oil palm genes identified are intronless. Using comparative genomics analysis, characterization of conserved domains and active sites, and expression analysis, 42 key genes involved in FA biosynthesis in oil palm were identified. For three of them, namely EgFABF, EgFABH and EgFAD3, segmental duplication events were detected. Our analysis also identified 210 candidate resistance genes in six classes, grouped by their protein domain structures. We present an accurate and comprehensive annotation of the oil palm genome, focusing on analysis of important categories of genes (GC3-rich and intronless), as well as those associated with important functions, such as FA
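The GC3 statistic used above is straightforward to compute from a coding sequence; a minimal sketch (the function name is ours, and the 0.75286 threshold is the one quoted in the abstract):

```python
# GC3: fraction of codons whose third position is G or C.
def gc3(cds):
    """Compute GC3 over an in-frame coding sequence string."""
    cds = cds.upper()
    third = cds[2::3]                  # every third base, starting at index 2
    if not third:
        return 0.0
    return sum(b in "GC" for b in third) / len(third)
```

A gene with `gc3(seq) >= 0.75286` would count as GC3-rich under the paper's cut-off.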

  3. Economic evaluation of CO2 pipeline transport in China

    International Nuclear Information System (INIS)

    Zhang Dongjie; Wang Zhe; Sun Jining; Zhang Lili; Li Zheng

    2012-01-01

    Highlights: ► We build a static hydrodynamic model of a CO2 pipeline for CCS applications. ► We study the impact of viscosity, density and elevation on pipeline pressure drop. ► We point out that density has a bigger impact on pressure drop than viscosity. ► We suggest dense-phase transport is preferable to the supercritical state. ► We present cost-optimal pipeline diameters for different flowrates and distances. - Abstract: Carbon capture and sequestration (CCS) is an important option for CO2 mitigation, and an optimized CO2 pipeline transport system is necessary for large scale CCS implementation. In the present work, a hydrodynamic model for CO2 pipeline transport was built and the hydrodynamic performance of a CO2 pipeline, as well as the impacts of multiple factors on pressure drop behavior along the pipeline, was studied. Based on the model, an economic model was established to optimize the CO2 pipeline transport system economically and to evaluate the unit transport cost of CO2 pipelines in China. The hydrodynamic model results show that pipe diameter, soil temperature, and pipeline elevation change have significant influence on the pressure drop behavior of CO2 in the pipeline. The design of the pipeline system, including pipeline diameter and number of boosters, was optimized to achieve the lowest unit CO2 transport cost. Regarding the unit cost, when the transport flow rate and distance are between 1–5 MtCO2/year and 100–500 km, respectively, the unit CO2 transport cost mainly lies between 0.1–0.6 RMB/(tCO2·km), and the electricity consumption cost of the pipeline inlet compressor was found to account for more than 60% of the total cost. The present work provides a reference for CO2 transport pipeline design and for feasibility evaluation of potential CCS projects in China.
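The kind of hydraulic relation such a model rests on can be sketched with a generic single-phase segment calculation (Darcy-Weisbach friction plus a hydrostatic elevation term). This is a simplified sketch, not the authors' model; the friction-factor correlation and the CO2 property values used in the example are assumptions.

```python
# Generic pressure drop over one pipe segment: friction + elevation.
# Assumptions: smooth pipe (Blasius correlation in the turbulent range),
# single-phase flow, constant fluid properties over the segment.
import math

def pressure_drop(q_m3s, d_m, length_m, rho, mu, dz_m=0.0):
    """Return the pressure drop in Pa over the segment."""
    area = math.pi * d_m**2 / 4.0
    v = q_m3s / area                        # mean velocity, m/s
    re = rho * v * d_m / mu                 # Reynolds number
    if re < 2300.0:
        f = 64.0 / re                       # laminar friction factor
    else:
        f = 0.316 * re**-0.25               # Blasius, smooth turbulent pipe
    dp_fric = f * (length_m / d_m) * rho * v**2 / 2.0
    dp_elev = rho * 9.81 * dz_m             # elevation gain adds to the drop
    return dp_fric + dp_elev
```

At fixed flow rate the frictional term falls steeply with diameter (roughly as d to the minus fifth power), which is why diameter dominates the cost optimization described in the abstract.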

  4. Natural gas transport with the aid of pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Volk, A

    1978-01-01

    After a brief explanation of the term natural gas and the chemical composition of natural gases of different origin, the natural gas supply in the FRG and in Western Europe is discussed. Further topics include: (1) planning, construction, and operation of the pipelines; (2) the equipment for pressure increase and the telecommunication equipment which are urgently necessary for gas transport through pipelines; (3) the problem of safety, both in connection with the supply and with the protection of man and material; and (4) problems of profitability of natural gas transport through pipelines.

  5. Recent developments in pipeline welding practice

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    Fourteen chapters are included: overview of pipeline welding systems and quality assurance, CRC automatic welding system, H.C. Price Co. automatic welding system, semi-automatic MIG-welding process, partial penetration welding of steel pipes for gas distribution, construction procedures and quality control in offshore pipeline construction, welding in repair and maintenance of gas transmission pipelines, British Gas studies of welding on pressurized gas transmission pipelines, hot tapping pipelines, underwater welding for offshore pipelines and associated equipment, radial friction welding, material composition vs weld properties, review of NDT of pipeline welds, and safety assurance in pipeline construction. A bibliography of approximately 150 references is included, arranged according to subject and year.

  6. "Annotated Lectures": Student-Instructor Interaction in Large-Scale Global Education

    Directory of Open Access Journals (Sweden)

    Roger Diehl

    2009-10-01

    Full Text Available We describe an "Annotated Lectures" system, which will be used in a global virtual teaching and student collaboration event on embodied intelligence presented by the University of Zurich. The lectures will be broadcast via video-conference to lecture halls of different universities around the globe. Among other collaboration features, an "Annotated Lectures" system will be implemented in a 3D collaborative virtual environment and used by the participating students to make annotations to the video-recorded lectures, which will be sent to and answered by their supervisors, and forwarded to the lecturers in an aggregated way. The "Annotated Lectures" system aims to overcome the issues of limited student-instructor interaction in large-scale education, and to foster an intercultural and multidisciplinary discourse among students who review the lectures in a group. After presenting the concept of the "Annotated Lectures" system, we discuss a prototype version including a description of the technical components and its expected benefit for large-scale global education.

  7. Leadership Pipeline

    DEFF Research Database (Denmark)

    Elmholdt, Claus Westergård

    2012-01-01

    The article analyses the basis of the Leadership Pipeline model with a view to assessing the substance behind the model, and the prospects for generalising the model to a Danish organisational context.

  8. Concept annotation in the CRAFT corpus.

    Science.gov (United States)

    Bada, Michael; Eckert, Miriam; Evans, Donald; Garcia, Kristin; Shipley, Krista; Sitnikov, Dmitry; Baumgartner, William A; Cohen, K Bretonnel; Verspoor, Karin; Blake, Judith A; Hunter, Lawrence E

    2012-07-09

    Manually annotated corpora are critical for the training and evaluation of automated methods to identify concepts in biomedical text. This paper presents the concept annotations of the Colorado Richly Annotated Full-Text (CRAFT) Corpus, a collection of 97 full-length, open-access biomedical journal articles that have been annotated both semantically and syntactically to serve as a research resource for the biomedical natural-language-processing (NLP) community. CRAFT identifies all mentions of nearly all concepts from nine prominent biomedical ontologies and terminologies: the Cell Type Ontology, the Chemical Entities of Biological Interest ontology, the NCBI Taxonomy, the Protein Ontology, the Sequence Ontology, the entries of the Entrez Gene database, and the three subontologies of the Gene Ontology. The first public release includes the annotations for 67 of the 97 articles, reserving two sets of 15 articles for future text-mining competitions (after which these too will be released). Concept annotations were created based on a single set of guidelines, which has enabled us to achieve consistently high interannotator agreement. As the initial 67-article release contains more than 560,000 tokens (and the full set more than 790,000 tokens), our corpus is among the largest gold-standard annotated biomedical corpora. Unlike most others, the journal articles that comprise the corpus are drawn from diverse biomedical disciplines and are marked up in their entirety. Additionally, with a concept-annotation count of nearly 100,000 in the 67-article subset (and more than 140,000 in the full collection), the scale of conceptual markup is also among the largest of comparable corpora. The concept annotations of the CRAFT Corpus have the potential to significantly advance biomedical text mining by providing a high-quality gold standard for NLP systems. 
The corpus, annotation guidelines, and other associated resources are freely available at http://bionlp-corpora.sourceforge.net/CRAFT/index.shtml.

  9. What, me worry? Pipeline certification at the FERC

    International Nuclear Information System (INIS)

    Schneider, J.D.

    1997-01-01

    Some of the new major pipeline projects that will bring Canadian gas into Chicago and the northeastern United States market were described. The seven projects discussed were: Independence Pipeline Company, Columbia's Millennium Project, Alliance Pipeline, Viking Voyageur, Pan Energy's Spectrum, Tennessee's Eastern Express, and National Fuel Expansion. The need for all this new capacity was questioned. The US FERC Commissioners are willing to let the market decide the need for this capacity. It was noted that the serious regulatory issues pipelines will face lie in the radically different rate treatment for pipelines, depending on the support the market shows for their respective projects. Various past decisions of the Federal Energy Regulatory Commission (FERC) were reviewed to illustrate the direction of the Commission's thinking in terms of regulatory approval, and to examine the question of whether FERC's approval policy makes good sense. The author's view is that the 'at risk' conditions of approval, coupled with incremental rate treatment by the Commission, do in fact provide protection for consumers.

  10. Sustainability of social-environmental programs along pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Doebereiner, Christian [Shell Southern Cone Gas and Power (Brazil); Herrera, Brigitte [Transredes S.A. (Bolivia)

    2005-07-01

    The sustainability of social and environmental programs along pipelines has proven to be a major challenge. Gas pipelines in Bolivia and Brazil operate in a diversity of environments and communities with different cultures, values and expectations. However, the pipeline network can also provide opportunities for contributing to regional development and working with local populations on topics of mutual interest. Many of these opportunities are quite strategic, because they arise from topics of mutual interest for both the company and neighboring populations, and because they provide opportunities for achieving results of mutual benefit. These opportunities could include helping to make gas available to local communities, contributions to urban planning, hiring local services and other initiatives. Sustainable and integrated social and environmental programs are therefore key to a successful pipeline operation. These opportunities are often missed or undervalued. Some successful examples are presented from Transredes S.A., Bolivia. (author)

  11. Essential Requirements for Digital Annotation Systems

    Directory of Open Access Journals (Sweden)

    ADRIANO, C. M.

    2012-06-01

    Full Text Available Digital annotation systems are usually based on partial scenarios and arbitrary requirements. Accidental and essential characteristics are usually mixed in non-explicit models. Documents and annotations are linked together accidentally according to the current technology, allowing for the development of disposable prototypes but not for the support of non-functional requirements such as extensibility, robustness and interactivity. In this paper we perform a careful analysis of the concept of annotation, studying the scenarios supported by digital annotation tools. We also derive essential requirements based on a classification of annotation systems applied to existing tools. The analysis performed and the proposed classification can be applied and extended to other types of collaborative systems.

  12. 49 CFR 195.210 - Pipeline location.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Pipeline location. 195.210 Section 195.210 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY... PIPELINE Construction § 195.210 Pipeline location. (a) Pipeline right-of-way must be selected to avoid, as...

  13. BIOCAT: a pattern recognition platform for customizable biological image classification and annotation.

    Science.gov (United States)

    Zhou, Jie; Lamichhane, Santosh; Sterne, Gabriella; Ye, Bing; Peng, Hanchuan

    2013-10-04

    Pattern recognition algorithms are useful in bioimage informatics applications such as quantifying cellular and subcellular objects, annotating gene expressions, and classifying phenotypes. To provide effective and efficient image classification and annotation for the ever-increasing volume of microscopic images, it is desirable to have tools that can combine and compare various algorithms, and build customizable solutions for different biological problems. However, current tools often fall short as user-friendly, extensible tools for annotating higher-dimensional images that correspond to multiple complicated categories. We developed the BIOimage Classification and Annotation Tool (BIOCAT). It is able to apply pattern recognition algorithms to two- and three-dimensional biological image sets, as well as regions of interest (ROIs) in individual images, for automatic classification and annotation. We also propose a 3D anisotropic wavelet feature extractor for extracting textural features from 3D images with xy-z resolution disparity. The extractor is one of the roughly 20 built-in feature extractors, selectors and classifiers in BIOCAT. The algorithms are modularized so that they can be "chained" in a customizable way to form an adaptive solution for various problems, and the plugin-based extensibility gives the tool an open architecture to incorporate future algorithms. We have applied BIOCAT to classification and annotation of images and ROIs of different properties with applications in cell biology and neuroscience. BIOCAT provides a user-friendly, portable platform for pattern recognition based biological image classification of two- and three-dimensional images and ROIs. We show, via diverse case studies, that different algorithms and their combinations have different suitability for various problems. The customizability of BIOCAT is thus expected to be useful for providing effective and efficient solutions for a variety of biological
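The "chaining" of modularized algorithms described above can be sketched in a few lines: a feature extractor, a feature selector and a classifier are interchangeable components composed into one pipeline. This mimics the architecture only; the toy modules below are our own and not BIOCAT's API.

```python
# Minimal extractor -> selector -> classifier chain, BIOCAT-style.
class MeanIntensity:
    """Toy feature extractor: mean pixel value of a 2D image (nested lists)."""
    def __call__(self, img):
        flat = [p for row in img for p in row]
        return [sum(flat) / len(flat)]

class KeepAll:
    """Trivial feature selector: passes every feature through."""
    def fit_select(self, feats):
        return feats
    def select(self, feats):
        return feats

class NearestCentroid:
    """Toy classifier: assign the label of the nearest class centroid."""
    def fit(self, feats, labels):
        groups = {}
        for f, label in zip(feats, labels):
            groups.setdefault(label, []).append(f[0])
        self.cents = {l: sum(v) / len(v) for l, v in groups.items()}
    def predict(self, feats):
        return [min(self.cents, key=lambda l: abs(self.cents[l] - f[0]))
                for f in feats]

class Chain:
    """Compose the three interchangeable modules into one pipeline."""
    def __init__(self, extractor, selector, classifier):
        self.e, self.s, self.c = extractor, selector, classifier
    def fit(self, images, labels):
        self.c.fit(self.s.fit_select([self.e(i) for i in images]), labels)
        return self
    def predict(self, images):
        return self.c.predict(self.s.select([self.e(i) for i in images]))
```

Swapping any module for another with the same interface (e.g. a 3D wavelet extractor) is what makes such a chain adaptable to different problems.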

  14. Incidental electric heating of pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Sonninskii, A V; Sirotin, A M; Vasiliev, Y N

    1981-04-01

    VNIIgaz has improved the conventional Japanese SECT pipeline-heating system, which uses a small steel tube that contains an insulated heater/conductor and is welded to the top of the pipeline. The improved version has two insulated electric heaters - one on the top and the other on the bottom of the pipeline - located inside steel angle irons that are welded to the pipeline. A comparison of experimental results from heating a 200-ft pipeline with both systems at currents of up to 470 A clearly demonstrated the better heating efficiency of the VNIIgaz unit. The improved SECT system would be suitable for various types of pipelines, including gas lines, in the USSR's far north regions.

  15. Pipeline flow of heavy oil with temperature-dependent viscosity

    Energy Technology Data Exchange (ETDEWEB)

    Maza Quinones, Danmer; Carvalho, Marcio da Silveira [Pontifical Catholic University of Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. of Mechanical Engineering], E-mail: msc@puc-rio.br

    2010-07-01

    The heavy oil produced offshore needs to be transported through pipelines between different facilities. The pipelines are usually laid down on the seabed and are subjected to low temperatures. Although heavy oils usually present Newtonian behavior, their viscosity is a strong function of temperature. Therefore, the prediction of pressure drops along the pipelines should include the solution of the energy equation and the dependence of viscosity on temperature. In this work, an asymptotic model is developed to study this problem. The flow is considered laminar and the viscosity varies exponentially with temperature. The model includes one-dimensional equations for the temperature and pressure distribution along the pipeline at a prescribed flow rate. The solution of the coupled differential equations is obtained by second-order finite differences. Results show a nonlinear behavior as a result of the coupled interaction between the velocity, temperature, and temperature-dependent material properties. (author)
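The coupled temperature-pressure calculation described above can be sketched as a simple marching solution. This is an illustrative first-order sketch, not the paper's second-order asymptotic model: it assumes laminar Hagen-Poiseuille friction, an exponential viscosity law, and first-order cooling of the oil toward the seabed temperature, with all coefficients hypothetical.

```python
# March T(x) and the cumulative pressure drop along a heated-oil pipeline.
import math

def pipeline_profile(q, d, length, t_in, t_soil, n=1000,
                     mu_ref=0.5, b=0.05, t_ref=20.0, relax_len=20000.0):
    """Return (outlet temperature, total pressure drop in Pa).

    mu(T) = mu_ref * exp(-b * (T - t_ref))   # exponential viscosity law
    dp/dx = 128 * mu * q / (pi * d**4)       # laminar Hagen-Poiseuille
    dT/dx = -(T - t_soil) / relax_len        # cooling toward the seabed
    """
    dx = length / n
    t = t_in
    dp = 0.0
    for _ in range(n):
        mu = mu_ref * math.exp(-b * (t - t_ref))
        dp += 128.0 * mu * q / (math.pi * d**4) * dx
        t += -(t - t_soil) / relax_len * dx
    return t, dp
```

The nonlinear coupling the abstract mentions is visible here: a hotter inlet keeps viscosity low along the line, so the total pressure drop falls even though the friction law itself is linear in viscosity.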

  16. Functional annotation of hierarchical modularity.

    Directory of Open Access Journals (Sweden)

    Kanchana Padmanabhan

    Full Text Available In biological networks of molecular interactions in a cell, network motifs that are biologically relevant are also functionally coherent, or form functional modules. These functionally coherent modules combine in a hierarchical manner into larger, less cohesive subsystems, thus revealing one of the essential design principles of system-level cellular organization and function: hierarchical modularity. Arguably, hierarchical modularity has not been explicitly taken into consideration by most, if not all, functional annotation systems. As a result, the existing methods would often fail to assign a statistically significant functional coherence score to biologically relevant molecular machines. We developed a methodology for hierarchical functional annotation. Given the hierarchical taxonomy of functional concepts (e.g., Gene Ontology) and the association of individual genes or proteins with these concepts (e.g., GO terms), our method will assign a Hierarchical Modularity Score (HMS) to each node in the hierarchy of functional modules; the HMS score and its p-value measure the functional coherence of each module in the hierarchy. While existing methods annotate each module with a set of "enriched" functional terms in a bag of genes, our complementary method provides the hierarchical functional annotation of the modules and their hierarchically organized components. A hierarchical organization of functional modules often comes as a by-product of cluster analysis of gene expression data or protein interaction data. Otherwise, our method will automatically build such a hierarchy by directly incorporating the functional taxonomy information into the hierarchy search process and by allowing multi-functional genes to be part of more than one component in the hierarchy. In addition, its underlying HMS scoring metric ensures that functional specificity of the terms across different levels of the hierarchical taxonomy is properly treated. We have evaluated our
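The idea of scoring every node in a module hierarchy, rather than only flat gene sets, can be illustrated with a much simpler stand-in metric. The code below is not the authors' HMS (which also yields p-values and handles term specificity); it merely scores each module by the largest fraction of its genes, including those of its descendants, that share one functional term.

```python
# Illustrative hierarchical coherence score (not the actual HMS metric).

def module_genes(tree, node):
    """All genes under a node: its own plus those of its descendants."""
    genes = set(tree[node].get("genes", []))
    for child in tree[node].get("children", []):
        genes |= module_genes(tree, child)
    return genes

def coherence(tree, node, gene2terms):
    """Largest fraction of the module's genes sharing a single term."""
    genes = module_genes(tree, node)
    counts = {}
    for g in genes:
        for t in gene2terms.get(g, []):
            counts[t] = counts.get(t, 0) + 1
    if not genes or not counts:
        return 0.0
    return max(counts.values()) / len(genes)
```

As in the abstract, tight leaf modules score high while the larger subsystems that contain them score lower, reflecting their looser cohesion.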

  17. Making web annotations persistent over time

    Energy Technology Data Exchange (ETDEWEB)

    Sanderson, Robert [Los Alamos National Laboratory; Van De Sompel, Herbert [Los Alamos National Laboratory

    2010-01-01

    As Digital Libraries (DL) become more aligned with the web architecture, their functional components need to be fundamentally rethought in terms of URIs and HTTP. Annotation, a core scholarly activity enabled by many DL solutions, exhibits a clearly unacceptable characteristic when existing models are applied to the web: due to the representations of web resources changing over time, an annotation made about a web resource today may no longer be relevant to the representation that is served from that same resource tomorrow. We assume the existence of archived versions of resources, and combine the temporal features of the emerging Open Annotation data model with the capability offered by the Memento framework that allows seamless navigation from the URI of a resource to archived versions of that resource, and arrive at a solution that provides guarantees regarding the persistence of web annotations over time. More specifically, we provide theoretical solutions and proof-of-concept experimental evaluations for two problems: reconstructing an existing annotation so that the correct archived version is displayed for all resources involved in the annotation, and retrieving all annotations that involve a given archived version of a web resource.
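The Memento navigation the solution relies on is specified in RFC 7089: a client asks a TimeGate for a resource at a given datetime, and the response's Link header advertises mementos with rel="memento" and a datetime attribute. The sketch below parses such a header and picks the memento closest to an annotation's creation time; the header string in the example is made up, and the parser assumes the attribute order shown.

```python
# Pick the archived version (memento) closest to a target datetime,
# given an RFC 7089-style Link header. Simplified attribute parsing.
import re
from datetime import datetime, timezone

def parse_mementos(link_header):
    """Yield (uri, datetime) pairs for rel="memento" entries."""
    pattern = re.compile(
        r'<([^>]+)>;\s*rel="[^"]*memento[^"]*";\s*datetime="([^"]+)"')
    for uri, dt in pattern.findall(link_header):
        yield uri, datetime.strptime(
            dt, "%a, %d %b %Y %H:%M:%S GMT").replace(tzinfo=timezone.utc)

def closest_memento(link_header, target):
    """URI of the memento whose archival datetime is nearest the target."""
    return min(parse_mementos(link_header),
               key=lambda m: abs((m[1] - target).total_seconds()))[0]
```

Reconstructing an annotation then amounts to resolving each annotated URI to its closest memento at the annotation's timestamp.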

  18. Semantic annotation in biomedicine: the current landscape.

    Science.gov (United States)

    Jovanović, Jelena; Bagheri, Ebrahim

    2017-09-22

    The abundance and unstructured nature of biomedical texts, be it clinical or research content, impose significant challenges for the effective and efficient use of information and knowledge stored in such texts. Annotation of biomedical documents with machine intelligible semantics facilitates advanced, semantics-based text management, curation, indexing, and search. This paper focuses on annotation of biomedical entity mentions with concepts from relevant biomedical knowledge bases such as UMLS. As a result, the meaning of those mentions is unambiguously and explicitly defined, and thus made readily available for automated processing. This process is widely known as semantic annotation, and the tools that perform it are known as semantic annotators. Over the last dozen years, the biomedical research community has invested significant efforts in the development of biomedical semantic annotation technology. Aiming to establish grounds for further developments in this area, we review a selected set of state-of-the-art biomedical semantic annotators, focusing particularly on general purpose annotators, that is, semantic annotation tools that can be customized to work with texts from any area of biomedicine. We also examine potential directions for further improvements of today's annotators which could make them even more capable of meeting the needs of real-world applications. To motivate and encourage further developments in this area, along the suggested and/or related directions, we review existing and potential practical applications and benefits of semantic annotators.

  19. 77 FR 2126 - Pipeline Safety: Implementation of the National Registry of Pipeline and Liquefied Natural Gas...

    Science.gov (United States)

    2012-01-13

    ... Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting Requirements.'' The final rule...

  20. 77 FR 36606 - Pipeline Safety: Government/Industry Pipeline Research and Development Forum, Public Meeting

    Science.gov (United States)

    2012-06-19

    ...: Threat Prevention --Working Group 2: Leak Detection/Mitigation & Storage --Working Group 3: Anomaly... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0146] Pipeline Safety: Government/Industry Pipeline Research and Development Forum, Public...

  1. Automatic Function Annotations for Hoare Logic

    Directory of Open Access Journals (Sweden)

    Daniel Matichuk

    2012-11-01

    Full Text Available In systems verification we are often concerned with multiple, inter-dependent properties that a program must satisfy. To prove that a program satisfies a given property, the correctness of intermediate states of the program must be characterized. However, this intermediate reasoning is not always phrased such that it can be easily re-used in the proofs of subsequent properties. We introduce a function annotation logic that extends Hoare logic in two important ways: (1) when proving that a function satisfies a Hoare triple, intermediate reasoning is automatically stored as function annotations, and (2) these function annotations can be exploited in future Hoare logic proofs. This reduces duplication of reasoning between the proofs of different properties, whilst serving as a drop-in replacement for traditional Hoare logic to avoid the costly process of proof refactoring. We explain how this was implemented in Isabelle/HOL and applied to an experimental branch of the seL4 microkernel to significantly reduce the size and complexity of existing proofs.
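The "intermediate reasoning" the abstract refers to is exactly the assertion that the standard sequential-composition rule of Hoare logic introduces and then discards:

```latex
\frac{\{P\}\; c_1 \;\{Q\} \qquad \{Q\}\; c_2 \;\{R\}}
     {\{P\}\; c_1 ;\, c_2 \;\{R\}}
```

In a plain Hoare-logic proof of the conclusion, the intermediate assertion Q is invented for one property and then forgotten; recording such Q's as function annotations, so later proofs of other properties can reuse them, is what the annotation logic adds.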

  2. Pipeline four-dimension management is the trend of pipeline integrity management in the future

    Energy Technology Data Exchange (ETDEWEB)

    Shaohua, Dong; Feifan; Zhongchen, Han [China National Petroleum Corporation (CNPC), Beijing (China)

    2009-07-01

    Pipeline integrity management (PIM) is essential for today's operators to run their pipelines safely and cost-effectively. The latest developments in pipeline integrity management around the world involve changes in regulation and industry standards as well as innovation in technology; where PIM is heading is the question this paper sets out to answer. To that end, the concept of Pipeline 4-Dimension Management (P4DM) is introduced here for the first time. The paper analyzes pipeline HSE management, pipeline integrity management (PIM) and asset integrity management (AIM), identifies the outstanding management problems, and puts forward the P4DM theory. The hierarchy of P4DM and its management elements, fields, space and time are analyzed. The core idea is that P4DM integrates geographic location and time to control and manage the pipeline system throughout the whole process, anywhere and anytime. It covers pipeline integrity, pipeline operation and emergency response, integrated through an IT system, so that ideas, solutions, technology, organization and managers jointly and intelligently control the management process. The paper covers the definition of pipeline 4D management, the research and development of P4DM, the P4DM theory, the relationship between P4DM and PIM, the technology basis of P4DM, how to perform P4DM, and conclusions. P4DM points to the future direction of PIM and also provides new ideas for PetroChina in the fields of technology and management. (author)

  3. Pipelines to eastern Canada

    International Nuclear Information System (INIS)

    Otsason, J.

    1998-01-01

    This presentation focused on four main topics: (1) the existing path of pipelines to eastern Canada, (2) the Chicago hub, (3) transport alternatives, and (4) Vector Pipeline's expansion plans. In the eastern Canadian market, TransCanada Pipelines dominates 96 per cent of the market share and is effectively immune to expansion costs. Issues regarding the attractiveness of the Chicago hub were addressed. One attractive feature is that the Chicago hub has access to multiple supply basins including western Canada, the Gulf Coast, the mid-continent, and the Rockies. Regarding Vector Pipeline's future plans, the company proposes to construct 343 miles of pipeline from Joliet, Illinois to Dawn, Ontario. The project description included discussion of some of the perceived advantages of this route, namely, extensive storage in Michigan and south-western Ontario, the fact that the proposed pipeline traverses major markets which would mitigate excess capacity concerns, arbitrage opportunities, cost-effective expansion capability reducing tolls, and likely lower landed costs in Ontario. Project schedule, costs, rates and tariffs are also discussed. tabs., figs

  4. 78 FR 42889 - Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems

    Science.gov (United States)

    2013-07-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket No. PHMSA-2013-0097] Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION...

  5. Color correction pipeline optimization for digital cameras

    Science.gov (United States)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.
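
As a rough illustration of the two-module pipeline described above, the sketch below chains a gray-world illuminant correction with a 3x3 color matrix. Gray-world is only one of the illuminant estimation algorithms such pipelines evaluate, and the identity matrix stands in for a calibrated sensor-to-standard-color-space transform; the pixel values are invented.

```python
# Hedged sketch of the two color-correction stages: (1) gray-world
# illuminant estimation/correction, (2) a 3x3 color matrix transform.
# The matrix is a placeholder, not a calibrated camera profile.

def gray_world_gains(pixels):
    """Per-channel gains that make the image average neutral gray."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    return [gray / m for m in means]


def apply_pipeline(pixels, matrix):
    """White-balance each RGB pixel, then apply the 3x3 color matrix."""
    gains = gray_world_gains(pixels)
    out = []
    for p in pixels:
        wb = [p[c] * gains[c] for c in range(3)]  # illuminant correction
        out.append([sum(matrix[r][c] * wb[c] for c in range(3))
                    for r in range(3)])
    return out


identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
raw = [[0.8, 0.5, 0.3], [0.4, 0.5, 0.9]]  # two RGB pixels, warm color cast
corrected = apply_pipeline(raw, identity)
```

After correction the per-channel means are equal (the gray-world assumption), which is the kind of cross-module behavior the paper's adaptive matrix optimization has to account for.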

  6. Risk from transport of gas by pipeline 'Kokui-Perm'

    International Nuclear Information System (INIS)

    Yelokhin, A.

    1998-01-01

    Full text of publication follows: the total length of gas pipelines in Russia is 142 thousand km, of which 62% are large-diameter pipelines. Every year more than 70 large accidents occur on gas pipelines in Russia, and more than 50% of them are accompanied by ignition of the gas. The average ecological losses from an accident are: 78 hectares of arable land destroyed; 6.2 hectares of agricultural soil withdrawn from use; 47.5 hectares of forest destroyed. The paper analyzes the causes of accidents on gas pipelines of different diameters. For pipelines of 1220 mm diameter, the causes of accidents are: defects in civil and erection works - 39.1%; external corrosion - 35.9%; mechanical damage - 9.4%; pipe defects - 6.2%; defects in factory equipment - 1.6%; natural disasters and other causes - 7.8%. The paper presents the results of a risk analysis of the 'Kokui - Perm' gas pipeline, which passes near 22 towns and villages and crosses 15 highways, 2 railways and 15 rivers. Concrete recommendations for risk management and the safety of the population are given. (author)

  7. Pipelines and salmon in northern British Columbia : potential impacts

    International Nuclear Information System (INIS)

    Levy, D.A.

    2009-10-01

    Four pipeline projects have been proposed for northern British Columbia that could threaten the health of the Fraser, Skeena, and Kitimat watersheds. The pipelines will expose salmon to risks on several fronts. Enbridge's Northern Gateway pipeline project has generated the most concern for several reasons, including the risks to salmon and freshwater habitat from pipeline failures, notably leaks or ruptures. This paper reviewed the salmon resources in affected watersheds; salmon and BC's economy; salmon diversity and abundance; impacts on fish from pipeline construction, operations and failures; behaviours of different petroleum products in fresh water; hydrocarbon toxicity; history of pipeline failures; sabotage and natural disasters; and Canadian case studies. Salmon are already experiencing stresses from forestry, hydro-electricity, transportation, agriculture, mining, mountain pine beetle, climate change and coalbed methane development. Their cumulative impact will dictate the long-term health and viability of salmon. It was concluded that if all of the proposed pipelines were built, they would extend over 4,000 km, crossing more than 1,000 rivers and streams in some of Canada's most productive salmon habitat. During construction, pipeline stream crossings are vulnerable to increased sedimentation, which can degrade salmon habitat. In the event of a spill, the condensate and oil sands products carried in the pipelines are highly toxic to salmon, with serious and lasting adverse impacts on salmon and their habitat. Any decision to approve such a pipeline should be made in recognition of these risks. 73 refs., 5 tabs., 15 figs., 2 appendices.

  8. Energy geopolitics and Iran-Pakistan-India gas pipeline

    International Nuclear Information System (INIS)

    Verma, Shiv Kumar

    2007-01-01

    With the growing energy demands in India and its neighboring countries, the Iran-Pakistan-India (IPI) gas pipeline assumes special significance. Energy-deficient countries such as India, China, and Pakistan are vying to acquire gas fields in different parts of the world. This has led to two conspicuous developments: first, they are competing against each other, and secondly, a situation is emerging where they might have to confront the US and the western countries in the near future in their attempt to control energy bases. The proposed IPI pipeline is an attempt to acquire such a base. However, Pakistan is playing its own game to maximize its leverage. Pakistan, which refuses to establish even normal trading ties with India, craves to earn hundreds of millions of dollars in transit fees and other annual royalties from a gas pipeline which runs from Iran's South Pars fields to Barmer in western India. Pakistan promises to subsidize its gas imports from Iran and thus also become a major forex earner. It is willing to give pipeline-related 'international guarantees' notwithstanding its record of covert actions in breach of international law (such as the export of terrorism) and its reluctance to reciprocally provide India what World Trade Organization (WTO) rules obligate it to: Most Favored Nation (MFN) status. India is looking at the possibility of using some set of norms for securing gas supply through pipelines, as the European Union has already initiated a discussion on the issue. The key point relevant to India's plan to build a pipeline to source gas from Iran relates to national treatment for pipelines. Under the principle of national treatment, which also figures in relation to foreign direct investment (FDI), the country through which a pipeline transits should provide the same level of security to the transiting pipeline as it provides to its domestic pipelines. 
This paper will endeavor to analyze, first, the significance of this pipeline for India

  9. Energy geopolitics and Iran-Pakistan-India gas pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Verma, Shiv Kumar [Political Geography Division, Center for International Politics, Organization and Disarmament, School of International Studies, Jawaharlal Nehru University, New Delhi 110067 (India)

    2007-06-15

    With the growing energy demands in India and its neighboring countries, the Iran-Pakistan-India (IPI) gas pipeline assumes special significance. Energy-deficient countries such as India, China, and Pakistan are vying to acquire gas fields in different parts of the world. This has led to two conspicuous developments: first, they are competing against each other, and secondly, a situation is emerging where they might have to confront the US and the western countries in the near future in their attempt to control energy bases. The proposed IPI pipeline is an attempt to acquire such a base. However, Pakistan is playing its own game to maximize its leverage. Pakistan, which refuses to establish even normal trading ties with India, craves to earn hundreds of millions of dollars in transit fees and other annual royalties from a gas pipeline which runs from Iran's South Pars fields to Barmer in western India. Pakistan promises to subsidize its gas imports from Iran and thus also become a major forex earner. It is willing to give pipeline-related 'international guarantees' notwithstanding its record of covert actions in breach of international law (such as the export of terrorism) and its reluctance to reciprocally provide India what World Trade Organization (WTO) rules obligate it to: Most Favored Nation (MFN) status. India is looking at the possibility of using some set of norms for securing gas supply through pipelines, as the European Union has already initiated a discussion on the issue. The key point relevant to India's plan to build a pipeline to source gas from Iran relates to national treatment for pipelines. Under the principle of national treatment, which also figures in relation to foreign direct investment (FDI), the country through which a pipeline transits should provide the same level of security to the transiting pipeline as it provides to its domestic pipelines. 
This paper will endeavor to analyze, first, the significance of this pipeline for India

  10. Energy geopolitics and Iran-Pakistan-India gas pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Verma, Shiv Kumar [Political Geography Division, Center for International Politics, Organization and Disarmament, School of International Studies, Jawaharlal Nehru University, New Delhi 110067 (India)]. E-mail: vermajnu@gmail.com

    2007-06-15

    With the growing energy demands in India and its neighboring countries, the Iran-Pakistan-India (IPI) gas pipeline assumes special significance. Energy-deficient countries such as India, China, and Pakistan are vying to acquire gas fields in different parts of the world. This has led to two conspicuous developments: first, they are competing against each other, and secondly, a situation is emerging where they might have to confront the US and the western countries in the near future in their attempt to control energy bases. The proposed IPI pipeline is an attempt to acquire such a base. However, Pakistan is playing its own game to maximize its leverage. Pakistan, which refuses to establish even normal trading ties with India, craves to earn hundreds of millions of dollars in transit fees and other annual royalties from a gas pipeline which runs from Iran's South Pars fields to Barmer in western India. Pakistan promises to subsidize its gas imports from Iran and thus also become a major forex earner. It is willing to give pipeline-related 'international guarantees' notwithstanding its record of covert actions in breach of international law (such as the export of terrorism) and its reluctance to reciprocally provide India what World Trade Organization (WTO) rules obligate it to: Most Favored Nation (MFN) status. India is looking at the possibility of using some set of norms for securing gas supply through pipelines, as the European Union has already initiated a discussion on the issue. The key point relevant to India's plan to build a pipeline to source gas from Iran relates to national treatment for pipelines. Under the principle of national treatment, which also figures in relation to foreign direct investment (FDI), the country through which a pipeline transits should provide the same level of security to the transiting pipeline as it provides to its domestic pipelines. This paper will endeavor to analyze, first, the significance of this

  11. Design Criteria for Suspended Pipelines Based on Structural Analysis

    Directory of Open Access Journals (Sweden)

    Mariana Simão

    2016-06-01

    Full Text Available Mathematical models have become the target of numerous attempts to obtain results that can be extrapolated to the study of hydraulic pressure infrastructures associated with different engineering requirements. Simulation analyses based on finite element method (FEM) models are used to determine the vulnerability of hydraulic systems under different types of actions (e.g., natural events and pressure variation). As part of the numerical simulation of a suspended pipeline, the adequacy of existing supports to sustain the pressure loads is verified. At a certain value of applied load, the pipeline is forced to sway sideways, possibly lifting up off its deadweight supports. Thus, identifying the frequency, consequences and predictability of accidental events is of extreme importance. This study focuses on the stability of vertical supports associated with extreme transient loads and on how a pipeline design can be improved using FEM simulations, at the design stage, to avoid accidents. Distributions of bending moments, axial forces, displacements and deformations along the pipeline and supports are studied for a set of important parametric variations. A good representation of the pipeline displacements is obtained using FEM.
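
A full FEM model is beyond the scope of an abstract, but the load quantities being checked can be sketched with textbook beam formulas. The sketch below assumes a simply supported span under a uniform load; the pipe dimensions and load values are invented for illustration and are not from the paper.

```python
# Back-of-envelope companion to the FEM study: maximum bending moment and
# midspan deflection for a simply supported pipe span under uniform load.
# Formulas: M_max = w*L^2/8, delta = 5*w*L^4/(384*E*I). All inputs assumed.
import math


def pipe_moment_of_inertia(d_outer, t):
    """Second moment of area of a hollow circular section [m^4]."""
    d_inner = d_outer - 2 * t
    return math.pi * (d_outer**4 - d_inner**4) / 64


def max_bending(w, span, E, I):
    """Max moment [N*m] and midspan deflection [m] for uniform load w [N/m]."""
    m_max = w * span**2 / 8
    defl = 5 * w * span**4 / (384 * E * I)
    return m_max, defl


# ~DN300 steel pipe (assumed dimensions), 12 m span, 1.5 kN/m load
I = pipe_moment_of_inertia(d_outer=0.3239, t=0.0095)
m, d = max_bending(w=1500.0, span=12.0, E=210e9, I=I)
```

A design check would compare `m` against the section's moment capacity and `d` against an allowable deflection; the FEM study does this for transient loads and support interaction that these closed-form expressions cannot capture.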

  12. Pipeline mapping and strain assessment using ILI (In-line Inspection) tools

    Energy Technology Data Exchange (ETDEWEB)

    Purvis, Brian [GE PII Pipeline Solutions, Rio de Janeiro, RJ (Brazil); Huewener, Thomas [E.ON Ruhrgas AG, Essen (Germany)

    2009-07-01

    GE PII's IMU mapping inspection system measures pipeline location coordinates (x, y, z) and provides data for determining pipeline curvature and the consequent pipeline bending strain. The changes in strain can be used in structural analyses and integrity evaluation of pipeline systems. This paper reviews the Inertial Measurement Unit (IMU) system and field investigation work performed on a high-pressure gas pipeline for E.ON Ruhrgas AG. The IMU of the pipeline inspection tool provides continuous measurement of the pipeline centreline coordinates. More than one inspection run was performed, which allowed a more accurate strain comparison to be made. Repeatability is important to establish the reasons for increasing strain values detected at specific pipeline sections through in-line inspection surveys conducted at regular intervals over many years. Moreover, the flexibility resulting from a combination of different sensor technologies makes it possible to provide a more complete picture of the overall situation. This paper reviews the work involved in detecting, locating and determining the magnitude and type of strain corresponding to pipeline movement in the field. (author)
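
The curvature-to-strain step can be sketched from geometry alone: fit a circle through three consecutive centreline points and take the outer-fibre bending strain as D/(2R). This is a simplification of what an IMU survey actually computes (no inertial filtering, no horizontal/vertical decomposition), and the sample coordinates are invented.

```python
# Geometry-only sketch of bending strain from centreline coordinates:
# circumradius R of three points, then outer-fibre strain = D / (2R).
import math


def circumradius(p1, p2, p3):
    """Radius of the circle through three 2D points (inf if collinear)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2
    if area == 0:
        return float("inf")  # collinear points: straight pipe, zero strain
    return a * b * c / (4 * area)


def bending_strain(diameter, p1, p2, p3):
    """Outer-fibre bending strain D/(2R) for three centreline points."""
    return diameter / (2 * circumradius(p1, p2, p3))


# Three points lying on a bend of roughly 500 m radius (invented values):
strain = bending_strain(1.0, (0.0, 0.0), (10.0, 0.1), (20.0, 0.4))
```

Comparing such strains between repeat inspection runs at the same chainage is what reveals pipeline movement, as the paper describes.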

  13. SG-ADVISER mtDNA: a web server for mitochondrial DNA annotation with data from 200 samples of a healthy aging cohort.

    Science.gov (United States)

    Rueda, Manuel; Torkamani, Ali

    2017-08-18

    Whole genome and exome sequencing usually include reads containing mitochondrial DNA (mtDNA). Yet, state-of-the-art pipelines and services for human nuclear genome variant calling and annotation do not handle mitochondrial genome data appropriately. As a consequence, any researcher desiring to add mtDNA variant analysis to their investigations is forced to explore the literature for mtDNA pipelines, evaluate them, and implement their own instance of the desired tool. This task is far from trivial, and can be prohibitive for non-bioinformaticians. We have developed SG-ADVISER mtDNA, a web server to facilitate the analysis and interpretation of mtDNA genomic data coming from next generation sequencing (NGS) experiments. The server was built in the context of our SG-ADVISER framework and on top of the MtoolBox platform (Calabrese et al., Bioinformatics 30(21):3115-3117, 2014), and includes most of its functionalities (i.e., assembly of mitochondrial genomes, heteroplasmic fractions, haplogroup assignment, functional and prioritization analysis of mitochondrial variants) as well as a back-end and a front-end interface. The server has been tested with unpublished data from 200 individuals of a healthy aging cohort (Erikson et al., Cell 165(4):1002-1011, 2016) and their data is made publicly available here along with a preliminary analysis of the variants. We observed that individuals over ~90 years old carried low levels of heteroplasmic variants in their genomes. SG-ADVISER mtDNA is a fast and functional tool that allows for variant calling and annotation of human mtDNA data coming from NGS experiments. The server was built with simplicity in mind, and builds on our own experience in interpreting mtDNA variants in the context of sudden death and rare diseases. Our objective is to provide an interface for non-bioinformaticians aiming to acquire (or contrast) mtDNA annotations via MToolBox. 
SG-ADVISER web server is freely available to all users at https://genomics.scripps.edu/mtdna .
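
As a toy illustration of one of the listed functionalities, the heteroplasmic fraction is essentially the share of reads supporting the variant allele at an mtDNA position. The function names and the 99% homoplasmy cut-off below are assumptions for this sketch, not SG-ADVISER's or MToolBox's actual implementation.

```python
# Toy illustration of the heteroplasmic-fraction idea: variant-supporting
# reads over total reads at one mtDNA position. Thresholds are assumed.

def heteroplasmic_fraction(ref_reads, alt_reads):
    """Fraction of reads carrying the variant allele at one position."""
    total = ref_reads + alt_reads
    return alt_reads / total if total else 0.0


def classify(fraction):
    """Rough variant status from the heteroplasmic fraction."""
    if fraction == 0.0:
        return "reference"
    return "homoplasmic" if fraction >= 0.99 else "heteroplasmic"


# 50 variant reads out of 1000 total -> 5% heteroplasmy (invented counts)
f = heteroplasmic_fraction(ref_reads=950, alt_reads=50)
```

The cohort observation above (low heteroplasmy levels in individuals over ~90) is a statement about the distribution of exactly this quantity across positions and samples.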

  14. PANNOTATOR

    DEFF Research Database (Denmark)

    Santos, A R; Barbosa, Eudes Guilherme Vieria; Fiaux, K

    2013-01-01

    In order to ensure quality and standards for functional genome annotation among different strains, we developed and made available PANNOTATOR (http://bnet.egr.vcu.edu/iioab/agenote.php), a web-based automated pipeline for the annotation of closely related genomes well suited to pan-genome studies...

  15. Active learning reduces annotation time for clinical concept extraction.

    Science.gov (United States)

    Kholghi, Mahnoosh; Sitbon, Laurianne; Zuccon, Guido; Nguyen, Anthony

    2017-10-01

    To investigate: (1) the annotation time savings by various active learning query strategies compared to supervised learning and a random sampling baseline, and (2) the benefits of active learning-assisted pre-annotations in accelerating the manual annotation process compared to de novo annotation. There are 73 and 120 discharge summary reports provided by Beth Israel institute in the train and test sets of the concept extraction task in the i2b2/VA 2010 challenge, respectively. The 73 reports were used in user study experiments for manual annotation. First, all sequences within the 73 reports were manually annotated from scratch. Next, active learning models were built to generate pre-annotations for the sequences selected by a query strategy. The annotation/reviewing time per sequence was recorded. The 120 test reports were used to measure the effectiveness of the active learning models. When annotating from scratch, active learning reduced the annotation time up to 35% and 28% compared to a fully supervised approach and a random sampling baseline, respectively. Reviewing active learning-assisted pre-annotations resulted in 20% further reduction of the annotation time when compared to de novo annotation. The number of concepts that require manual annotation is a good indicator of the annotation time for various active learning approaches as demonstrated by high correlation between time rate and concept annotation rate. Active learning has a key role in reducing the time required to manually annotate domain concepts from clinical free text, either when annotating from scratch or reviewing active learning-assisted pre-annotations. Copyright © 2017 Elsevier B.V. All rights reserved.
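
A minimal sketch of one common family of query strategies in such studies is least-confidence uncertainty sampling: route the sequences the model is least sure about to the human annotator, and pre-annotate the rest. The function names and toy probabilities below are invented; the paper's own query strategies may differ.

```python
# Least-confidence uncertainty sampling: pick the k pool items whose
# predicted class distribution has the lowest maximum probability.

def uncertainty(probs):
    """Least-confidence score: 1 - max class probability."""
    return 1.0 - max(probs)


def select_for_annotation(pool, k):
    """pool: list of (sequence_id, class_probabilities). Return the k
    sequence ids whose predictions are most uncertain."""
    ranked = sorted(pool, key=lambda item: uncertainty(item[1]), reverse=True)
    return [seq_id for seq_id, _ in ranked[:k]]


pool = [
    ("seq1", [0.98, 0.01, 0.01]),  # confident -> candidate pre-annotation
    ("seq2", [0.40, 0.35, 0.25]),  # uncertain -> send to human annotator
    ("seq3", [0.70, 0.20, 0.10]),
]
queries = select_for_annotation(pool, k=1)
```

The time savings reported above come from this division of labor: humans annotate only the queried sequences and merely review the model's pre-annotations for the rest.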

  16. Logistics aspects of petroleum pipeline operations

    Directory of Open Access Journals (Sweden)

    W. J. Pienaar

    2010-11-01

    Full Text Available The paper identifies, assesses and describes the logistics aspects of the commercial operation of petroleum pipelines. The nature of petroleum-product supply chains, in which pipelines play a role, is outlined and the types of petroleum pipeline systems are described. An outline is presented of the nature of the logistics activities of petroleum pipeline operations. The reasons for the cost efficiency of petroleum pipeline operations are given. The relative modal service effectiveness of petroleum pipeline transport, based on the most pertinent service performance measures, is offered. The segments in the petroleum-products supply chain where pipelines can play an efficient and effective role are identified.

  17. The Microstructure Evolution of Dual-Phase Pipeline Steel with Plastic Deformation at Different Strain Rates

    Science.gov (United States)

    Ji, L. K.; Xu, T.; Zhang, J. M.; Wang, H. T.; Tong, M. X.; Zhu, R. H.; Zhou, G. S.

    2017-07-01

    Tensile properties of high-deformability dual-phase ferrite-bainite X70 pipeline steel have been investigated at room temperature at strain rates of 2.5 × 10⁻⁵, 1.25 × 10⁻⁴, 2.5 × 10⁻³, and 1.25 × 10⁻² s⁻¹. The microstructures at different amounts of plastic deformation were examined using scanning and transmission electron microscopy. Generally, the ductility of typical body-centered cubic steels is reduced as the strain rate increases. However, we observed a different dependence of ductility on strain rate in the dual-phase X70 pipeline steel. The uniform elongation (UEL%) and elongation to fracture (EL%) at a strain rate of 2.5 × 10⁻³ s⁻¹ increase by about 54 and 74%, respectively, compared to those at 2.5 × 10⁻⁵ s⁻¹, and both reach their maxima at 2.5 × 10⁻³ s⁻¹. This phenomenon is explained by the observed grain structures and dislocation configurations. Whether or not the ductility can be enhanced with increasing strain rate depends on the competition between the homogenization of plastic deformation among the microconstituents (ultra-fine ferrite grains, relatively coarse ferrite grains, and bainite) and the progress of cracks formed as a consequence of localized, inconsistent plastic deformation.

  18. Automatic annotation of lecture videos for multimedia driven pedagogical platforms

    Directory of Open Access Journals (Sweden)

    Ali Shariq Imran

    2016-12-01

    Full Text Available Today’s eLearning websites are heavily loaded with multimedia content, which is often unstructured, unedited, unsynchronized, and lacks inter-links among different multimedia components. Hyperlinking different media modalities may provide a solution for quick navigation and easy retrieval of pedagogical content in media-driven eLearning websites. In addition, finding meta-data to describe and annotate media content in eLearning platforms is a challenging, laborious, error-prone, and time-consuming task. Annotations for multimedia, especially for lecture videos, have thus become an important part of video learning objects. To address this issue, this paper proposes three major contributions, namely automated video annotation, 3-Dimensional (3D) tag clouds, and the hyper interactive presenter (HIP) eLearning platform. Combining the existing state-of-the-art SIFT technique with tag clouds, a novel approach for automatic lecture video annotation for HIP is proposed. New video annotations are generated automatically, providing the needed random access into lecture videos within the platform, and a 3D tag cloud is proposed as a new user interaction mechanism. A preliminary study of the usefulness of the system has been carried out, and the initial results suggest that 70% of the students opted for HIP as their preferred eLearning platform at Gjøvik University College (GUC).
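
The tag-cloud side of such a system can be sketched as simple frequency-to-size scaling over extracted terms; the platform described above additionally combines this with SIFT-based visual analysis, which is not shown. The function name and the sample transcript terms are illustrative assumptions.

```python
# Frequency-to-font-size scaling for a tag cloud: linearly map each term's
# count onto [min_size, max_size]. Terms and sizes are invented examples.
from collections import Counter


def tag_weights(terms, min_size=10, max_size=40):
    """Map term frequencies to font sizes for cloud rendering."""
    counts = Counter(terms)
    lo, hi = min(counts.values()), max(counts.values())
    span = hi - lo or 1  # avoid division by zero when all counts are equal
    return {t: min_size + (c - lo) * (max_size - min_size) / span
            for t, c in counts.items()}


transcript = ["fourier", "transform", "fourier", "signal", "fourier", "signal"]
sizes = tag_weights(transcript)
```

In a 3D tag cloud the same weights would drive size or depth, giving students a visual index into the annotated lecture video.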

  19. Local buckling failure analysis of high-strength pipelines

    Institute of Scientific and Technical Information of China (English)

    Yan Li; Jian Shuai; Zhong-Li Jin; Ya-Tong Zhao; Kui Xu

    2017-01-01

    Pipelines in geological disaster regions typically face the risk of local buckling failure because of their slender structure and complex loads. This paper reveals the local buckling behavior of buried large-diameter, high-strength pipelines under different conditions, including pure bending and bending combined with internal pressure. A finite element analysis was built according to previous data to study the local buckling behavior of pressurized and unpressurized pipes under bending and the differences in their local buckling failure modes. In the parametric analysis, a series of parameters, including pipe geometrical dimensions, pipe material properties and internal pressure, were selected to study their influence on the critical bending moment, critical compressive stress and critical compressive strain of pipes. In particular, the hardening exponent of the pipe material was introduced into the parametric analysis by using the Ramberg-Osgood constitutive model. Results showed that geometrical dimensions, material and internal pressure exert similar effects on the critical bending moment and critical compressive stress, but different, even opposite, effects on the critical compressive strain. Based on these analyses, more accurate design models for the critical bending moment and critical compressive stress are proposed for high-strength pipelines under bending, providing theoretical methods for high-strength pipeline engineering.
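
The Ramberg-Osgood constitutive model mentioned above expresses total strain as an elastic term plus a power-law plastic term. One common parameterization is sketched below; the paper's exact coefficients are not given in the abstract, so the modulus, yield strength, alpha, and hardening exponent n here are assumed illustrative values.

```python
# One common Ramberg-Osgood form (assumed parameterization):
#   strain = stress/E + alpha * (sigma_y/E) * (stress/sigma_y)**n

def ramberg_osgood_strain(stress, E, sigma_y, alpha, n):
    """Total strain = elastic part + power-law plastic part."""
    elastic = stress / E
    plastic = alpha * (sigma_y / E) * (stress / sigma_y) ** n
    return elastic + plastic


# Illustrative X70-like values (assumed): E = 207 GPa, yield ~485 MPa
eps = ramberg_osgood_strain(stress=485e6, E=207e9, sigma_y=485e6,
                            alpha=1.0, n=15)
```

Varying `n` is how a parametric study like the one above probes the effect of strain hardening on critical buckling strain: a higher `n` gives a sharper elastic-plastic transition.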

  20. Hydrocarbons pipeline transportation risk assessment

    Science.gov (United States)

    Zanin, A. V.; Milke, A. A.; Kvasov, I. N.

    2018-04-01

    The paper addresses the risk assessment of pipeline transportation in Arctic conditions. The quality characteristics of a pipeline in this environment are assessed. To this end, a mathematical model of the pipeline was designed and visualized with the software product SOLIDWORKS. The results obtained from the model make it possible to define the optimal characteristics of a pipeline designed for the Arctic seabed. The research examined the risk of avalanche collapse of the pipe, analyzed the internal longitudinal and circumferential loads acting on the pipeline, and took into account the hydrodynamic force of water impact. The calculations can contribute to the further development of pipeline transport under the harsh climate of the Arctic shelf territory of the Russian Federation.

  1. neXtA5: accelerating annotation of articles via automated approaches in neXtProt.

    Science.gov (United States)

    Mottin, Luc; Gobeill, Julien; Pasche, Emilie; Michel, Pierre-André; Cusin, Isabelle; Gaudet, Pascale; Ruch, Patrick

    2016-01-01

    implemented into the neXtProt annotation pipeline. Available at: http://babar.unige.ch:8082/neXtA5. Database URL: http://babar.unige.ch:8082/neXtA5/fetcher.jsp. © The Author(s) 2016. Published by Oxford University Press.

  2. Towards the integration, annotation and association of historical microarray experiments with RNA-seq.

    Science.gov (United States)

    Chavan, Shweta S; Bauer, Michael A; Peterson, Erich A; Heuck, Christoph J; Johann, Donald J

    2013-01-01

    Transcriptome analysis by microarrays has produced important advances in biomedicine. For instance in multiple myeloma (MM), microarray approaches led to the development of an effective disease subtyping via cluster assignment, and a 70 gene risk score. Both enabled an improved molecular understanding of MM, and have provided prognostic information for the purposes of clinical management. Many researchers are now transitioning to Next Generation Sequencing (NGS) approaches and RNA-seq in particular, due to its discovery-based nature, improved sensitivity, and dynamic range. Additionally, RNA-seq allows for the analysis of gene isoforms, splice variants, and novel gene fusions. Given the voluminous amounts of historical microarray data, there is now a need to associate and integrate microarray and RNA-seq data via advanced bioinformatic approaches. Custom software was developed following a model-view-controller (MVC) approach to integrate Affymetrix probe set IDs and gene annotation information from a variety of sources. The tool/approach employs an assortment of strategies to integrate, cross-reference, and associate microarray and RNA-seq datasets. Output from a variety of transcriptome reconstruction and quantitation tools (e.g., Cufflinks) can be directly integrated and/or associated with Affymetrix probe set data, as well as with the necessary gene identifiers and/or symbols from a diversity of sources. Strategies are employed to maximize the annotation and cross-referencing process. Custom gene sets (e.g., the MM 70 gene risk score (GEP-70)) can be specified, and the tool can be directly assimilated into an RNA-seq pipeline. This novel bioinformatic approach aids in both the annotation and the association of historical microarray data with richer RNA-seq data, and is now assisting with the study of MM cancer biology.
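
At its core, the association step is a join between microarray probe set IDs and RNA-seq gene-level quantities via a shared gene identifier. The sketch below uses invented probe IDs, gene symbols, and FPKM values, and ignores the many-to-many mappings and multiple annotation sources the real tool must handle.

```python
# Minimal probe-set-ID -> gene-symbol -> RNA-seq quantity join.
# All identifiers and values below are made up for illustration.

probe_to_gene = {
    "200001_at": "GENE_A",
    "200002_at": "GENE_B",
}

rnaseq_fpkm = {"GENE_A": 12.5, "GENE_B": 0.8, "GENE_C": 4.1}


def associate(probe_ids):
    """Join microarray probe sets with RNA-seq quantities via gene symbol.

    Returns (probe_id, gene_symbol, fpkm) rows; missing mappings yield None
    so downstream steps can flag unannotated probes."""
    rows = []
    for pid in probe_ids:
        gene = probe_to_gene.get(pid)
        rows.append((pid, gene, rnaseq_fpkm.get(gene)))
    return rows


merged = associate(["200001_at", "200002_at"])
```

Running a custom gene set such as the GEP-70 through a join like this is what lets a historical microarray risk score be re-evaluated against RNA-seq quantities.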

  3. Slurry pipeline design approach

    Energy Technology Data Exchange (ETDEWEB)

    Betinol, Roy; Navarro R, Luis [Brass Chile S.A., Santiago (Chile)

    2009-12-19

    Compared to other engineering technologies, the design of a commercial long-distance slurry pipeline is a relatively new engineering concept which gained recognition in the mid-1960s. Slurry pipelines were first introduced to reduce the cost of transporting coal to power generating units. Since then this technology has caught on worldwide for transporting other minerals such as limestone, copper, zinc and iron. In South America, the use of pipelines is common practice in the transport of copper (Chile, Peru and Argentina), iron (Chile and Brazil), zinc (Peru) and bauxite (Brazil). As more mining operations expand and new mine facilities are opened, long-distance slurry pipelines will continue to present a commercially viable option. The intent of this paper is to present the design process and discuss new techniques and approaches used today to ensure a better, safer and more economical slurry pipeline. (author)

  4. Influence of remanent magnetization on pitting corrosion in pipeline steel

    Energy Technology Data Exchange (ETDEWEB)

    Espina-Hernandez, J. H. [ESIME Zacatenco, SEPI Electronica Instituto Politecnico Nacional Mexico, D. F. (Mexico); Caleyo, F.; Hallen, J. M. [DIM-ESIQIE, Instituto Politecnico Nacional Mexico D. F. (Mexico); Lopez-Montenegro, A.; Perez-Baruch, E. [Pemex Exploracion y Produccion, Region Sur Villahermosa, Tabasco (Mexico)

    2010-07-01

    Statistical studies performed in Mexico indicate that leakage due to external pitting corrosion is the most likely cause of failure of buried pipelines. When pipelines are inspected with the routinely used magnetic flux leakage (MFL) technology, the magnetization level of every part of the pipeline changes as the MFL tool travels through it. Remanent magnetization stays in the pipeline wall after inspection, at levels that may differ from one point to the next. This paper studies the influence of this magnetic field on pitting corrosion. Experiments were carried out on grade 52 steel under a level of remanent magnetization and other laboratory conditions that imitated the conditions of a pipeline after an MFL inspection. Non-magnetized control samples and magnetized samples were subjected to pitting by immersion in a solution containing chloride and sulfide ions for seven days, and then inspected with optical microscopy. Results show that the magnetic field in the pipeline wall significantly increases pitting corrosion.

  5. Modelling and transient simulation of water flow in pipelines using WANDA Transient software

    Directory of Open Access Journals (Sweden)

    P.U. Akpan

    2017-09-01

    Full Text Available Pressure transients in conduits such as pipelines are unsteady flow conditions caused by a sudden change in the flow velocity. These conditions may damage pipelines and their fittings if extreme pressures (high or low) are experienced within the pipeline. To avoid this, engineers usually carry out a pressure transient analysis in the hydraulic design phase of pipeline network systems. Modelling and simulation of transients in pipelines is an accepted and cost-effective method of assessing this problem and finding technical solutions. This research predicts the pressure surge for different flow conditions in two different pipeline systems using the WANDA Transient simulation software. Computer models were set up in WANDA Transient for two systems: the Graze experiment (a miniature system) and a simple main water riser system, based on initial laboratory data and system parameters. The same laboratory data and system parameters were used for all the simulations. Results obtained from the computer model simulations compared favourably with the experimental results at a polytropic index of 1.2.
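    For a first-cut estimate of the surge magnitude that tools like WANDA resolve in detail, the classical Joukowsky relation for an instantaneous velocity change is often used. The fluid properties and wave speed below are illustrative assumptions, not values from this study.

```python
def joukowsky_surge(density, wave_speed, delta_velocity):
    """First-cut water-hammer pressure rise (Pa) for an instantaneous
    velocity change: delta_p = rho * a * delta_v (Joukowsky relation)."""
    return density * wave_speed * delta_velocity

# Water (rho ~ 1000 kg/m3) with an assumed pressure-wave speed of 1200 m/s;
# a sudden valve closure stops a 2 m/s flow.
dp = joukowsky_surge(1000.0, 1200.0, 2.0)
print(f"{dp / 1e5:.1f} bar")  # 24.0 bar
```

    Real systems rarely see the full Joukowsky value (closures are not instantaneous, and air vessels or surge tanks attenuate the wave), which is why transient software simulates the full network.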

  6. LNG transport through pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Pfund, P; Philipps, A

    1975-01-01

    LNG pipelines could help solve some peak-shaving problems if operated in conjunction with other facilities that could use the LNG cold recovered during regasification. In some areas at present, LNG is delivered by tanker and regasified near the terminal for transmission through conventional gas pipelines. In other places, utilities liquefy natural gas for easy storage and later peak-shaving use. The only way to avoid this second, expensive liquefaction step would be to convey imported LNG through a suitably designed LNG pipeline. The technical problems involved in LNG pipeline construction have basically been solved in recent years, but the pipelines actually constructed have been only short ones. To be economically justified, long-distance LNG lines require additional credit, which could be obtained by selling the LNG cold recovered during regasification to industrial users located in or near the points of gas consumption. Technical details presented cover the pipe material, stress relief, steel composition, pressure enthalpy, bellows-type expansion joints, and mechanical and thermal insulation.

  7. Computer systems for annotation of single molecule fragments

    Science.gov (United States)

    Schwartz, David Charles; Severin, Jessica

    2016-07-19

    There are provided computer systems for visualizing and annotating single molecule images. Annotation systems in accordance with this disclosure allow a user to mark and annotate single molecules of interest and their restriction enzyme cut sites, thereby determining the restriction fragments of single nucleic acid molecules. The markings and annotations may be automatically generated by the system in certain embodiments, and they may be overlaid translucently onto the single molecule images. An image caching system may be implemented in the computer annotation systems to reduce image processing time. The annotation systems include one or more connectors connecting to one or more databases capable of storing single molecule data as well as other biomedical data. Such a diverse array of data can be retrieved and used to validate the markings and annotations. The annotation systems may be implemented and deployed over a computer network. They may be ergonomically optimized to facilitate user interactions.

  8. Reading Actively Online: An Exploratory Investigation of Online Annotation Tools for Inquiry Learning

    Science.gov (United States)

    Lu, Jingyan; Deng, Liping

    2012-01-01

    This study seeks to design and facilitate active reading among secondary school students with an online annotation tool--Diigo. Two classes of different academic performance levels were recruited to examine their annotation behavior and perceptions of Diigo. We wanted to determine whether the two classes differed in how they used Diigo; how they…

  9. Image annotation under X Windows

    Science.gov (United States)

    Pothier, Steven

    1991-08-01

    A mechanism for attaching graphic and overlay annotation to multiple bits/pixel imagery while providing levels of performance approaching that of native mode graphics systems is presented. This mechanism isolates programming complexity from the application programmer through software encapsulation under the X Window System. It ensures display accuracy throughout operations on the imagery and annotation, including zooms, pans, and modifications of the annotation. Trade-offs that affect speed of display, consumption of memory, and system functionality are explored. The use of resource files to tune the display system is discussed. The mechanism makes use of an abstraction consisting of four parts: a graphics overlay, a dithered overlay, an image overlay, and a physical display window. Data structures are maintained that retain the distinction between the four parts so that they can be modified independently, providing system flexibility. A unique technique for associating user color preferences with annotation is introduced. An interface that allows interactive modification of the mapping between image value and color is discussed. A procedure that provides for the colorization of imagery on 8-bit display systems using pixel dithering is explained. Finally, the application of annotation mechanisms to various applications is discussed.

  10. Review of the Factors that Influence the Condition of Wax Deposition in Subsea Pipelines

    Directory of Open Access Journals (Sweden)

    Koh Junyi

    2018-03-01

    Full Text Available When crude oil is transported via a subsea pipeline, the temperature of the pipeline decreases at depth, creating a temperature difference with the crude oil inside. The crude oil dissipates heat to the surroundings until thermal equilibrium is reached. When the oil cools to its cloud point, wax begins to precipitate and solidify on the walls of the pipeline, obstructing the flow of fluid. The main objective of this review is to quantify the factors that influence wax deposition: the temperature difference between the wall of the pipeline and the fluid flowing within, the flow rate of the fluid in the pipeline, and the residence time of the fluid in the pipeline. The main factor causing wax deposition in the pipeline is found to be the temperature difference between the pipeline and the fluid flowing within. Most of the literature reports that decreasing the temperature difference results in less wax deposited on the wall of the pipeline. The deposited wax content increases with rising flow rate. As for residence time, the amount of deposited wax initially increases as residence time increases, reaches a peak value, and then gradually decreases. Flow-loop systems and cold-finger apparatus were used in the literature to establish these trends. Three new models are generated through regression analysis based on results from other authors. These models relate temperature difference, flow rate, residence time and Reynolds number to wax deposition, and their high R-square and adjusted R-square values indicate their reliability.
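    A multiple linear regression of the kind used to build such models can be sketched as follows. The data points and response values are synthetic, invented purely to illustrate the fitting and R-square calculation, and bear no relation to the review's actual data or models.

```python
import numpy as np

# Synthetic illustration only: columns are temperature difference (degC),
# flow rate (m3/h), and residence time (h); the response was generated from
# a known linear rule plus a small perturbation so the fit is verifiable.
X = np.array([
    [5.0, 10.0, 1.0],
    [10.0, 12.0, 1.5],
    [15.0, 15.0, 2.5],
    [20.0, 18.0, 3.0],
    [25.0, 20.0, 4.0],
])
y = np.array([16.1, 24.4, 35.2, 43.9, 54.1])  # wax deposited (g/m2), synthetic

# Ordinary least squares with an intercept term, i.e. a multiple linear
# regression of deposition on the three operating variables.
A = np.column_stack([np.ones(len(X)), X])
coef, _, _, _ = np.linalg.lstsq(A, y, rcond=None)

# Goodness of fit (R-square) from the fitted values.
y_hat = A @ coef
ss_res = float(np.sum((y - y_hat) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot
print(f"coefficients: {np.round(coef, 3)}, R-square: {r2:.4f}")
```

    An adjusted R-square would additionally penalize the number of predictors, which matters when, as here, there are few data points relative to the model terms.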

  11. Planned and proposed pipeline regulations

    International Nuclear Information System (INIS)

    De Leon, C.

    1992-01-01

    The Research and Special Programs Administration (RSPA) administers the Natural Gas Pipeline Safety Act of 1968 (NGPSA) and the Hazardous Liquid Pipeline Safety Act of 1979 (HLPSA). The RSPA issues and enforces design, construction, operation and maintenance regulations for natural gas pipelines and hazardous liquid pipelines. This paper discusses a number of proposed and pending safety regulations and legislative initiatives currently being considered by the RSPA and the US Congress. Some new regulations have already been enacted. The next few years will see a great deal of regulatory activity regarding natural gas and hazardous liquid pipelines, much of it resulting from legislative requirements. The Office of Pipeline Safety is currently conducting a study to streamline its operations, analyzing the office's business, social and technical operations with the goal of improving overall efficiency, effectiveness, productivity and job satisfaction to meet the challenges of the future.

  12. Motion lecture annotation system to learn Naginata performances

    Science.gov (United States)

    Kobayashi, Daisuke; Sakamoto, Ryota; Nomura, Yoshihiko

    2013-12-01

    This paper describes a learning assistant system that uses motion capture data and annotation to teach "Naginata-jutsu" (the art of the Japanese halberd). Video annotation tools such as YouTube's exist; however, these video-based tools offer only a single angle of view. Our approach, which uses motion-captured data, allows viewing from any angle. A lecturer can write annotations related to parts of the body. We compared the effectiveness of the YouTube annotation tool and the proposed system. The experimental results showed that our system elicited more annotations than the annotation tool of YouTube.

  13. Chechnya: the pipeline front

    Energy Technology Data Exchange (ETDEWEB)

    Anon,

    1999-11-01

    This article examines the impact of the Russian campaign against Chechnya on projects for oil and gas pipelines from the new Caspian republics, which are seeking financial support. Topics discussed include the pipeline transport of oil from Azerbaijan through Chechnya to the Black Sea, the use of oil money to finance the war, the push for non-Russian export routes, the financing of pipelines, the impact of the war on the supply of Russian and Turkmenistan gas to Turkey, the proposed construction of the Trans Caspian pipeline, the weakening of trust between Russia and its neighbours, and the potential for trans Caucasus republics to look to western backers due to the instability of the North Caucasus. (UK)

  14. Predicting wear of hydrotransport pipelines in oil sand slurries

    Energy Technology Data Exchange (ETDEWEB)

    Been, J.; Lu, B.; Wolodko, J. [Alberta Research Council, Edmonton, AB (Canada); Kiel, D. [Coanda Research and Development Corp., Burnaby, BC (Canada)

    2008-07-01

    An overview of erosion and corrosion test methods and techniques was presented. Wear to pipelines is influenced by slurry flow and chemistry, solids loading, and electrochemical interactions. While several experimental techniques have been developed to rank the performance of different pipeline materials, experiments do not currently provide accurate quantitative predictions of pipeline wear in the field. Rotating cylinder electrodes (RCE) and jet impingement methods are used to study the effect of flow velocity on corrosion rate. Slurry pot erosion-corrosion testers are used to rank materials for use in more dilute, less turbulent slurries. Coriolis slurry erosion testers are used to rank the erosion resistance of different pipeline materials. A pilot-scale flow loop is now being constructed by the Alberta Research Council (ARC) in order to replicate wet erosion phenomena in oil sands applications. The flow loop will be used to simulate the field conditions of oil sands pipelines and to develop predictive wear data and models. Coulombic shear stress and characteristic wall velocities have been determined using a 2-layer model that represents the flow as 2 distinct layers. To date, the flow loop pilot study has demonstrated that wear rates in smaller-diameter flow loops are not significantly different from those in larger-diameter field installations. Preliminary calculations have demonstrated that the flow loop can accurately simulate the hydrodynamics and wear typically experienced in field slurry flows. 67 refs., 2 tabs., 7 figs.

  15. Phenex: ontological annotation of phenotypic diversity.

    Directory of Open Access Journals (Sweden)

    James P Balhoff

    2010-05-01

    Full Text Available Phenotypic differences among species have long been systematically itemized and described by biologists in the process of investigating phylogenetic relationships and trait evolution. Traditionally, these descriptions have been expressed in natural language within the context of individual journal publications or monographs. As such, this rich store of phenotype data has been largely unavailable for statistical and computational comparisons across studies or integration with other biological knowledge. Here we describe Phenex, a platform-independent desktop application designed to facilitate efficient and consistent annotation of phenotypic similarities and differences using Entity-Quality syntax, drawing on terms from community ontologies for anatomical entities, phenotypic qualities, and taxonomic names. Phenex can be configured to load only those ontologies pertinent to a taxonomic group of interest. The graphical user interface was optimized for evolutionary biologists accustomed to working with lists of taxa, characters, character states, and character-by-taxon matrices. Annotation of phenotypic data using ontologies and globally unique taxonomic identifiers will allow biologists to integrate phenotypic data from different organisms and studies, leveraging decades of work in systematics and comparative morphology.

  16. Phenex: ontological annotation of phenotypic diversity.

    Science.gov (United States)

    Balhoff, James P; Dahdul, Wasila M; Kothari, Cartik R; Lapp, Hilmar; Lundberg, John G; Mabee, Paula; Midford, Peter E; Westerfield, Monte; Vision, Todd J

    2010-05-05

    Phenotypic differences among species have long been systematically itemized and described by biologists in the process of investigating phylogenetic relationships and trait evolution. Traditionally, these descriptions have been expressed in natural language within the context of individual journal publications or monographs. As such, this rich store of phenotype data has been largely unavailable for statistical and computational comparisons across studies or integration with other biological knowledge. Here we describe Phenex, a platform-independent desktop application designed to facilitate efficient and consistent annotation of phenotypic similarities and differences using Entity-Quality syntax, drawing on terms from community ontologies for anatomical entities, phenotypic qualities, and taxonomic names. Phenex can be configured to load only those ontologies pertinent to a taxonomic group of interest. The graphical user interface was optimized for evolutionary biologists accustomed to working with lists of taxa, characters, character states, and character-by-taxon matrices. Annotation of phenotypic data using ontologies and globally unique taxonomic identifiers will allow biologists to integrate phenotypic data from different organisms and studies, leveraging decades of work in systematics and comparative morphology.
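    A minimal sketch of the Entity-Quality (EQ) annotation pattern that Phenex supports might look like the following. The record structure, helper names, and all ontology IDs are illustrative placeholders for this sketch, not Phenex's actual data model or verified ontology terms.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class EQAnnotation:
    """One Entity-Quality phenotype statement tied to a taxon."""
    taxon: str    # globally unique taxonomic identifier
    entity: str   # anatomical entity term (e.g., from an anatomy ontology)
    quality: str  # phenotypic quality term (e.g., from PATO)

def by_entity(annotations):
    """Group annotations by anatomical entity, enabling cross-taxon
    comparison of the same structure across studies."""
    groups = defaultdict(list)
    for a in annotations:
        groups[a.entity].append(a)
    return dict(groups)

# Illustrative records (IDs are placeholders, not verified ontology terms):
annotations = [
    EQAnnotation("NCBITaxon:7955", "UBERON:pectoral_fin", "PATO:absent"),
    EQAnnotation("NCBITaxon:8049", "UBERON:pectoral_fin", "PATO:elongated"),
    EQAnnotation("NCBITaxon:7955", "UBERON:dorsal_fin", "PATO:round"),
]
grouped = by_entity(annotations)
print(sorted(grouped))  # entities observed across taxa
```

    Because entities and qualities are ontology terms rather than free text, the same grouping works across datasets produced by different authors, which is the integration benefit the abstract describes.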

  17. Model and Interoperability using Meta Data Annotations

    Science.gov (United States)

    David, O.

    2011-12-01

    Software frameworks and architectures need meta data to efficiently support model integration. Modelers have to know the context of a model, often stepping into modeling semantics and auxiliary information usually not provided in a concise structure or universal format consumable by a range of (modeling) tools. XML often seems the obvious solution for capturing meta data, but its wide adoption to facilitate model interoperability is limited by XML schema fragmentation, complexity, and verbosity outside of a data-automation process. Ontologies seem to overcome those shortcomings; however, the practical significance of their use remains to be demonstrated. OMS version 3 took a different approach to meta data representation. The fundamental building block of a modular model in OMS is a software component representing a single physical process, calibration method, or data access approach. Here, programming language features known as annotations or attributes were adopted. Within other (non-modeling) frameworks it has been observed that annotations lead to cleaner and leaner application code. Framework-supported model integration, traditionally accomplished using Application Programming Interface (API) calls, is now achieved using descriptive code annotations. Fully annotated components for various hydrological and Ag-system models now provide information directly for (i) model assembly and building, (ii) data flow analysis for implicit multi-threading or visualization, (iii) automated and comprehensive model documentation of component dependencies and physical data properties, (iv) automated model and component testing, calibration, and optimization, and (v) automated audit-traceability to account for all model resources leading to a particular simulation result. Such a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework but a strong reference to their originating code. Since models and
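    A rough Python analogue of the annotation-driven approach described above can be sketched with a decorator attaching metadata to a component class. OMS itself is Java-based and uses Java annotations; every name, parameter, and unit string below is invented for this illustration only.

```python
# Illustrative Python analogue of annotation-driven component metadata.
def component_meta(**meta):
    """Attach descriptive metadata to a component class, so a framework
    can assemble and document models without explicit API wiring calls."""
    def wrap(cls):
        cls.__component_meta__ = meta
        return cls
    return wrap

@component_meta(
    process="infiltration",
    inputs={"rainfall": "mm/h"},
    outputs={"infiltration_rate": "mm/h"},
)
class Infiltration:
    """Toy component: caps infiltration at a fixed capacity."""
    CAPACITY = 20.0  # mm/h, placeholder physics

    def execute(self, rainfall):
        return min(rainfall, self.CAPACITY)

# A framework can now discover dependencies and units from metadata alone,
# e.g. for data-flow analysis or automated documentation:
meta = Infiltration.__component_meta__
print(meta["inputs"], "->", meta["outputs"])
```

    The point, as in the abstract, is that the component itself carries no framework API calls: the metadata rides along with the class, so the code keeps a strong reference to its origin while staying framework-agnostic.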

  18. Pipelines in power plants

    International Nuclear Information System (INIS)

    Oude-Hengel, H.H.

    1978-01-01

    Since the end of the 1960s, steam-transporting pipelines have received great attention, as pipeline components often fail, sometimes long before their design lifetime is over. Experts must therefore increasingly deal with questions concerning pipelines and their components. Design and calculation, production and operation of pipelines are included in the discussion. Within its frame, planners, producers, operators, and technical surveillance personnel must be able to offer a homogeneous 'plan for assuring the quality of pipelines' in fossil and nuclear power plants. This book tries to make a contribution to this topic. 'Quality assurance' means the efforts made to meet the demands on quality (reliability). The book does not intend to compete with well-known manuals as far as complete coverage of the topic is concerned. A substantial part of its sections serves to show how the quality assurance of pipelines can be at least partially achieved by surveillance measures beginning with the planning, covering the production, and finally accompanying the operation. There is hardly need to mention that the manner of planning, production, and operation has an important influence on quality; this is why other sections present process aspects from the view of the planners, producers, and operators. (orig.) [de

  19. Fatigue analysis of corroded pipelines subjected to pressure and temperature loadings

    International Nuclear Information System (INIS)

    Cunha, Divino J.S.; Benjamin, Adilson C.; Silva, Rita C.C.; Guerreiro, João N.C.; Drach, Patrícia R.C.

    2014-01-01

    In this paper a methodology for the fatigue analysis of pipelines containing corrosion defects is proposed. This methodology is based on the nominal stresses from a global analysis using a one-dimensional finite element (FE) model of the pipeline, together with the application of stress concentration factors (SCFs). As the stresses may exceed the yield limit in the corrosion defects, the methodology also adopts a strain-life approach (the ε–N method), which can produce less conservative fatigue lives than stress-based methods. The proposed methodology is applied to the assessment of the fatigue life of an onshore hot pipeline containing corrosion pits and patches. Five corrosion pits and five corrosion patches of different sizes are considered, all situated on the external surface of the pipeline base material. The SCFs are calculated using solid FE models, and the fatigue analyses are performed for the out-of-phase/non-proportional (NP) biaxial stresses arising from the combined loading (internal pressure and temperature) variations caused by intermittent operation with hot heavy oil (start-up and shut-down). The results show that for buried pipelines subjected to cyclic combined loadings of internal pressure and temperature, fatigue may become an important failure mode when corroded pipeline segments are left in operation without being replaced. -- Highlights: • An ε–N methodology for the fatigue life assessment of corroded pipelines is proposed. • The methodology includes: global analysis, stress amplification, and strain-life calculation. • Different-size corrosion patches and pits on the external surface of the pipeline were analyzed. • It is shown that fatigue is a concern when corroded pipeline segments operate for many years
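    The strain-life (ε–N) approach mentioned above is commonly based on the Coffin-Manson-Basquin relation, which splits the strain amplitude into elastic and plastic parts. The sketch below inverts that relation for fatigue life; the material constants are illustrative values of the right order for a carbon steel, not the paper's data or procedure.

```python
def strain_amplitude(reversals, E, sf, b, ef, c):
    """Coffin-Manson-Basquin strain-life relation:
    eps_a = (sf/E)*(2N)^b + ef*(2N)^c, with reversals = 2N."""
    return (sf / E) * reversals ** b + ef * reversals ** c

def reversals_to_failure(eps_a, E, sf, b, ef, c, lo=1.0, hi=1e12):
    """Invert the (monotonically decreasing) strain-life curve for the
    number of reversals 2N, by bisection in log space."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if strain_amplitude(mid, E, sf, b, ef, c) > eps_a:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative constants (order-of-magnitude for a carbon steel):
# E in MPa, fatigue strength coefficient sf in MPa, ductility coefficient ef,
# and the Basquin / Coffin-Manson exponents b and c.
E, sf, b, ef, c = 207e3, 900.0, -0.09, 0.6, -0.56
two_n = reversals_to_failure(0.002, E, sf, b, ef, c)
print(f"~{two_n:.3g} reversals ({two_n / 2:.3g} cycles) at eps_a = 0.002")
```

    In the paper's setting, the strain amplitude at each defect would come from the SCF-amplified nominal stresses of the global analysis rather than being assumed directly.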

  20. Extracting Cross-Ontology Weighted Association Rules from Gene Ontology Annotations.

    Science.gov (United States)

    Agapito, Giuseppe; Milano, Marianna; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-01-01

    Gene Ontology (GO) is a structured repository of concepts (GO terms) that are associated with one or more gene products through a process referred to as annotation. The analysis of annotated data is an important opportunity for bioinformatics. Among the different approaches to such analysis is the use of association rules (AR), which provide useful knowledge by discovering biologically relevant associations between terms of GO not previously known. In a previous work, we introduced GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules from ontology-based annotated datasets. Here we adapt the GO-WAR algorithm to mine cross-ontology association rules, i.e., rules that involve GO terms from the three sub-ontologies of GO. We conduct a thorough performance evaluation of GO-WAR by mining publicly available GO-annotated datasets, showing how GO-WAR outperforms current state-of-the-art approaches.
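    The idea of weighting association rules by term importance can be sketched on a toy annotated dataset. The GO term IDs, per-term weights, thresholds, and the weighting scheme below are invented for illustration; they do not reproduce the GO-WAR algorithm, which uses information-content-based weights and a more general mining procedure.

```python
from itertools import combinations

# Toy annotated dataset: gene -> set of GO terms (IDs are made up).
annotations = {
    "geneA": {"GO:0001", "GO:0002", "GO:0003"},
    "geneB": {"GO:0001", "GO:0002"},
    "geneC": {"GO:0001", "GO:0003"},
    "geneD": {"GO:0002", "GO:0003"},
}

# Illustrative per-term weights (stand-ins for information content).
weights = {"GO:0001": 0.9, "GO:0002": 0.5, "GO:0003": 0.7}

def weighted_rules(annotations, weights, min_wsupport=0.3, min_conf=0.6):
    """Enumerate 1->1 rules (t1 -> t2) where weighted support is the mean
    weight of the pair times the fraction of genes annotated with both,
    and confidence is P(t2 | t1)."""
    n = len(annotations)
    terms = sorted(weights)
    rules = []
    for t1, t2 in combinations(terms, 2):
        both = sum(1 for ts in annotations.values() if t1 in ts and t2 in ts)
        for a, b in ((t1, t2), (t2, t1)):
            have_a = sum(1 for ts in annotations.values() if a in ts)
            if have_a == 0 or both == 0:
                continue
            wsupport = (weights[a] + weights[b]) / 2 * both / n
            conf = both / have_a
            if wsupport >= min_wsupport and conf >= min_conf:
                rules.append((a, b, round(wsupport, 3), round(conf, 3)))
    return rules

for rule in weighted_rules(annotations, weights):
    print(rule)
```

    Weighting lets a rule between rare, specific terms outrank one between ubiquitous terms with identical raw support, which is the motivation behind weighted AR mining on ontologies.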

  1. Annotating temporal information in clinical narratives.

    Science.gov (United States)

    Sun, Weiyi; Rumshisky, Anna; Uzuner, Ozlem

    2013-12-01

    Temporal information in clinical narratives plays an important role in patients' diagnosis, treatment and prognosis. In order to represent narrative information accurately, medical natural language processing (MLP) systems need to correctly identify and interpret temporal information. To promote research in this area, the Informatics for Integrating Biology and the Bedside (i2b2) project developed a temporally annotated corpus of clinical narratives. This corpus contains 310 de-identified discharge summaries, with annotations of clinical events, temporal expressions and temporal relations. This paper describes the process followed for the development of this corpus and discusses annotation guideline development, annotation methodology, and corpus quality. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Thermal expansion absorbing structure for pipeline

    International Nuclear Information System (INIS)

    Nagata, Takashi; Yamashita, Takuya.

    1995-01-01

    A thermal expansion absorbing structure for a pipeline is disposed at the end of the pipeline, forming a U-shaped cross section that connects a semi-circular torus shell and a short double-walled cylindrical tube. The U-shaped longitudinal cross section deforms in accordance with the shrinking deformation of the pipeline and thereby absorbs thermal expansion. Namely, since the center lines of the outer and inner tubes of the double-walled cylindrical tube incline as the pipeline deforms under thermal expansion, the expansion can be absorbed by a simple configuration, contributing to safety. The entire length of the pipeline can be greatly shortened by applying this structure to a pipeline operating at high temperature, compared with the conventional method of routing the pipeline using only elbows. Especially when applied to a pipeline for an FBR-type reactor, the construction cost of the primary-system facility can be greatly reduced. In addition, it can be applied to pipelines of ordinary chemical plants and any other structures requiring absorption of deformation. (N.H.)

  3. Report of study group 4.1 ''pipeline ageing and rehabilitation''

    Energy Technology Data Exchange (ETDEWEB)

    Serena, L.

    2000-07-01

    This report describes the work on the subject 'pipeline ageing and rehabilitation' carried out by Study Group 4.1 during the 1997-2000 triennium. The report focuses on the ageing and rehabilitation of natural gas transmission pipelines, and in more detail on the following topics: - Definition of pipeline ageing; - Different ageing elements; - Main causes of ageing; - Inspections and monitoring; - Repair methods for ageing pipelines; - Programmes and strategies for pipeline maintenance and rehabilitation. The report includes the state of the art of the different techniques used to assess pipeline ageing, such as pig inspection and landslide area monitoring, as well as advanced monitoring methods used nowadays by pipeline operators; a clarification of the concepts behind different maintenance approaches is also presented. In addition, the report gives information regarding repair methods in use, the methodologies for evaluating defects, and the philosophy on which each repair system is based. The remaining topics deal with the strategies of pipeline and coating rehabilitation, focus attention on the economic and technical considerations beyond the ageing concept, and describe in detail the main causes of ageing as indicated by operators. A questionnaire on these topics was distributed, and the results obtained are included in this report. (author)

  4. Fishing intensity around the BBL pipeline

    NARCIS (Netherlands)

    Hintzen, Niels

    2016-01-01

    Wageningen Marine Research was requested by ACRB B.V. to investigate the fishing activities around the BBL pipeline. This gas pipeline crosses the southern North Sea from Balgzand (near Den Helder) in the Netherlands to Bacton in the UK (230km). This pipeline is abbreviated as the BBL pipeline. Part

  5. Diagnosing in building main pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Telegin, L.G.; Gorelov, A.S.; Kurepin, B.N.; Orekhov, V.I.; Vasil' yev, G.G.; Yakovlev, Ye. I.

    1984-01-01

    General principles are examined for technical diagnosis in building main pipelines. A technique is presented for diagnosis during construction, as well as diagnosis of the technical state of the pipeline-construction machines and mechanisms. The survey materials could be used to set up construction of main pipelines.

  6. Pipeline coating inspection in Mexico applying surface electromagnetic technology

    Energy Technology Data Exchange (ETDEWEB)

    Delgado, O.; Mousatov, A.; Nakamura, E.; Villarreal, J.M. [Instituto Mexicano del Petroleo (IMP), Mexico City (Mexico); Shevnin, V. [Moscow State University (Russian Federation); Cano, B. [Petroleos Mexicanos (PEMEX), Mexico City (Mexico)

    2009-07-01

    The main problems in the pipeline systems in Mexico include: extremely aggressive soil characterized by high clay content and low resistivity; interconnection between several pipes, including electrical contacts of active pipelines with out-of-service pipes; and short distances between pipes in comparison with their depths, which reduce the resolution of coating inspection. The results presented in this work show the efficiency of the Surface Electromagnetic Pipeline Inspection (SEMPI) technology in determining the technical condition of pipelines under the situations mentioned above. The SEMPI technology includes two stages: regional and detailed measurements. The regional stage consists of magnetic field measurements along the pipeline using large distances (10-100 m) between observation points to delimit zones with damaged coating. For quantitative assessment of the leakage and coating resistances along the pipeline, additional voltage and soil resistivity measurements are performed. The second stage includes detailed measurements of the electric field on the pipe intervals with anomalous technical conditions identified in the regional stage. Based on the distribution of the coating electric resistance and the subsoil resistivity values, zones with different grades of coating quality and soil aggressiveness are delimited. (author)

  7. Workflow and web application for annotating NCBI BioProject transcriptome data.

    Science.gov (United States)

    Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A; Barrero, Luz S; Landsman, David; Mariño-Ramírez, Leonardo

    2017-01-01

    The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. URL: http://www.ncbi.nlm.nih.gov/projects/physalis/. Published by Oxford University Press 2017. This work is written by US Government employees and is in the public domain in the US.

  8. Pipeline integrity handbook risk management and evaluation

    CERN Document Server

    Singh, Ramesh

    2013-01-01

    Based on over 40 years of experience in the field, Ramesh Singh goes beyond corrosion control, providing techniques for addressing present and future integrity issues. Pipeline Integrity Handbook provides pipeline engineers with the tools to evaluate and inspect pipelines, safeguard the life cycle of their pipeline asset and ensure that they are optimizing delivery and capability. Presented in easy-to-use, step-by-step order, Pipeline Integrity Handbook is a quick reference for day-to-day use in identifying key pipeline degradation mechanisms and threats to pipeline integrity. The book begins

  9. Internal corrosion control of northern pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Papavinasam, S.

    2005-02-01

The general causes of internal corrosion in pipelines were discussed along with the methods to control them. Efficient methods are needed to determine chemical efficiency for mitigating internal corrosion in transmission pipelines, particularly those used in environmentally sensitive regions in the Arctic where harsh environmental conditions prevail. According to the Office of Pipeline Safety, 15 per cent of pipeline failures in the United States from 1994 to 2000 were caused by internal corrosion. Since pipelines in the United States are slightly older than Canadian pipelines, internal corrosion is a significant issue from a Canadian perspective. There are 306,618 km of energy-related pipelines in western Canada. Between April 2001 and March 2002 there were 808 failures, of which 425 resulted from internal corrosion. The approach to controlling internal corrosion comprises dehydrating the gases at production facilities; controlling the quality of corrosive gases such as carbon dioxide and hydrogen sulphide; and using internal coatings. These approaches are appropriate when supplemented by an adequate integrity management program to ensure that corrosive liquids do not collect at localized areas over the operational lifetime of the pipelines. It was suggested that modeling of pipeline operations may need improvement. This paper described the causes, prediction and control of internal pitting corrosion. It was concluded that carbon steel equipment can continue to be used reliably and safely as pipeline material for northern pipelines if the causes that lead to internal corrosion are scientifically and accurately predicted, and if corrosion inhibitors are properly evaluated and applied. 5 figs.

  10. Fluid pipeline system leak detection based on neural network and pattern recognition

    International Nuclear Information System (INIS)

    Tang Xiujia

    1998-01-01

The mechanism of stress wave propagation along the pipeline system of a nuclear power plant (NPP), caused by turbulent ejection from a pipeline leak, is investigated. A series of characteristic indices are described in the time domain or frequency domain, and a compression algorithm is developed for original data compression. A back-propagation neural network (BPNN), with an input matrix composed of stress wave characteristics in the time or frequency domain, is first proposed to classify various conditions of the pipeline in order to detect leakage in fluid flow pipelines. The capability of the new method has been demonstrated by experiments and finally used to design a handy instrument for pipeline leak detection. A pipeline system usually has many inner branches and often operates under changing dynamic conditions, so it is difficult for traditional pipeline diagnosis facilities to distinguish normal inner pipeline operation from a pipeline fault. The author first proposes identifying pipeline wave propagation by pattern recognition to diagnose pipeline leaks. A series of pattern primitives such as peaks, valleys, horizontal lines, capstan peaks, dominant relations and slave relations are used to extract features of the negative pressure wave form. A context-free grammar gives a symbolic representation of the negative wave form, and a parsing system for structural pattern recognition based on this representation is first proposed to detect and localize leaks in fluid pipelines
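
    The feature-extraction step this record describes can be illustrated with a minimal sketch. The indices below (peak amplitude, RMS energy, zero-crossing count) and the nearest-centroid classifier are illustrative stand-ins of my own choosing; the paper extracts its own characteristic indices and trains a back-propagation neural network rather than the toy classifier shown here.

```python
import math

def waveform_features(samples):
    """Time-domain characteristics of a pressure-wave record:
    peak amplitude, RMS energy and zero-crossing count (a crude
    stand-in for the paper's characteristic indices)."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return peak, rms, crossings

def classify(features, centroids):
    """Nearest-centroid classification over (label, centroid) pairs;
    the paper trains a back-propagation network instead."""
    def dist(entry):
        return sum((f - x) ** 2 for f, x in zip(features, entry[1]))
    return min(centroids, key=dist)[0]
```

    In use, one centroid (or trained output class) per pipeline condition turns a raw pressure record into a leak/no-leak decision.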

  11. Dictionary-driven protein annotation.

    Science.gov (United States)

    Rigoutsos, Isidore; Huynh, Tien; Floratos, Aris; Parida, Laxmi; Platt, Daniel

    2002-09-01

Computational methods seeking to automatically determine the properties (functional, structural, physicochemical, etc.) of a protein directly from the sequence have long been the focus of numerous research groups. With the advent of advanced sequencing methods and systems, the number of amino acid sequences that are being deposited in the public databases has been increasing steadily. This has in turn generated a renewed demand for automated approaches that can annotate individual sequences and complete genomes quickly, exhaustively and objectively. In this paper, we present one such approach that is centered around and exploits the Bio-Dictionary, a collection of amino acid patterns that completely covers the natural sequence space and can capture functional and structural signals that have been reused during evolution, within and across protein families. Our annotation approach also makes use of a weighted, position-specific scoring scheme that is unaffected by the over-representation of well-conserved proteins and protein fragments in the databases used. For a given query sequence, the method permits one to determine, in a single pass, the following: local and global similarities between the query and any protein already present in a public database; the likeness of the query to all available archaeal/bacterial/eukaryotic/viral sequences in the database as a function of amino acid position within the query; the character of secondary structure of the query as a function of amino acid position within the query; the cytoplasmic, transmembrane or extracellular behavior of the query; the nature and position of binding domains, active sites, post-translationally modified sites, signal peptides, etc. In terms of performance, the proposed method is exhaustive, objective and allows for the rapid annotation of individual sequences and full genomes. Annotation examples are presented and discussed in Results, including individual queries and complete genomes that were
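
    The dictionary-driven idea, scanning a query against a collection of sequence patterns and reporting the annotations attached to each hit, can be sketched as below. The pattern and motif label are invented for illustration, and the exact-match scan is a simplification; the real Bio-Dictionary uses a weighted, position-specific scoring scheme.

```python
def annotate_with_patterns(query, dictionary):
    """Report every dictionary pattern found in the query as
    (position, pattern, annotation), sorted by position."""
    hits = []
    for pattern, annotation in dictionary.items():
        start = query.find(pattern)
        while start != -1:
            hits.append((start, pattern, annotation))
            start = query.find(pattern, start + 1)
    return sorted(hits)
```

    A single pass over the dictionary yields, for each region of the query, all the functional or structural signals attached to the matching patterns.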

  12. Trouble in the pipeline?

    Energy Technology Data Exchange (ETDEWEB)

    Snieckus, Darius

    2002-10-01

The author provides a commentary on the political, economic, environmental and social problems facing the proposed US$3-billion Baku-Tbilisi-Ceyhan export pipeline. The 1760 km long pipeline has been designed to carry 1 million b/d of crude oil from the Caspian Sea to Turkey's Mediterranean coast. The pipeline is being constructed by a BP-led consortium made up of Socar, Statoil, Unocal, TPAO, Eni, Itochu, Amerada Hess, TotalFinaElf and BP. (UK)

  13. Canadian pipeline transportation system : transportation assessment

    International Nuclear Information System (INIS)

    2009-07-01

    In addition to regulating the construction and operation of 70,000 km of oil and natural gas pipelines in Canada, the National Energy Board (NEB) regulates the trade of natural gas, oil and natural gas liquids. This report provided an assessment of the Canadian hydrocarbon transportation system in relation to its ability to provide a robust energy infrastructure. Data was collected from NEB-regulated pipeline companies and a range of publicly available sources to determine if adequate pipeline capacity is in place to transport products to consumers. The NEB also used throughput and capacity information received from pipeline operators as well as members of the investment community. The study examined price differentials compared with firm service tolls for transportation paths, as well as capacity utilization on pipelines and the degree of apportionment on major oil pipelines. This review indicated that in general, the Canadian pipeline transportation system continues to work effectively, with adequate pipeline capacity in place to move products to consumers who need them. 9 tabs., 30 figs., 3 appendices.
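
    The market test described in this assessment, comparing location price differentials against firm service tolls and checking capacity utilization, can be reduced to a toy decision rule. The thresholds below are illustrative assumptions, not the NEB's criteria.

```python
def expansion_signal(price_differential, firm_toll, utilization,
                     utilization_threshold=0.95):
    """A market signals a need for new capacity when the price
    differential between two points exceeds the firm service toll
    for that path and the line runs near full (illustrative rule)."""
    return price_differential > firm_toll and utilization > utilization_threshold
```

    Persistent apportionment on oil lines would be a third signal in the same spirit: demand for space exceeding the space available.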

  14. 77 FR 6857 - Pipeline Safety: Notice of Public Meetings on Improving Pipeline Leak Detection System...

    Science.gov (United States)

    2012-02-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... installed to lessen the volume of natural gas and hazardous liquid released during catastrophic pipeline... p.m. Panel 3: Considerations for Natural Gas Pipeline Leak Detection Systems 3:30 p.m. Break 3:45 p...

  15. Neutron backscattered application in investigation for Pipeline Intelligent Gauge (PIG) tracking in RAYMINTEX matrix pipeline

    International Nuclear Information System (INIS)

    Mohd Fakarudin Badul Rahman; Ismail Mustapha; Nor Paiza Mohd Hasan; Pairu Ibrahim; Airwan Affandi Mahmood; Mior Ahmad Khusaini Adnan; Najib Mohammed Zakey

    2012-01-01

In Radiation Vulcanized Natural Rubber Latex (RVNRL) process plants such as RAYMINTEX, pipelines are used extensively to transfer latex from storage vessels for irradiation, producing a high-quality latex. A hydraulically activated Pipeline Intelligent Gauge (PIG) was held back against the latex flow, and the PIG stuck in the pipeline interrupted plant operation. An investigation was carried out using a neutron backscatter scanner to track the stuck PIG in the pipeline of the RVNRL plant. A 50 mCi americium-beryllium (AmBe-241) source emitting fast neutrons in the range 0.5-11 MeV was used, and neutrons in the range 30 eV-0.5 MeV were detected using a helium-3 (He-3) detector. It was observed that there is an unambiguous difference between vapour and RVNRL as a consequence of their differing hydrogen concentrations in the pipeline. Thus, the neutron backscatter technique was capable of determining the location of the stuck PIG in an RVNRL pipeline. (author)

  16. Natural gas pipeline technology overview.

    Energy Technology Data Exchange (ETDEWEB)

    Folga, S. M.; Decision and Information Sciences

    2007-11-01

The United States relies on natural gas for one-quarter of its energy needs. In 2001 alone, the nation consumed 21.5 trillion cubic feet of natural gas. A large portion of natural gas pipeline capacity within the United States is directed from major production areas in Texas and Louisiana, Wyoming, and other states to markets in the western, eastern, and midwestern regions of the country. In the past 10 years, increasing levels of gas from Canada have also been brought into these markets (EIA 2007). The United States has several major natural gas production basins and an extensive natural gas pipeline network, with almost 95% of U.S. natural gas imports coming from Canada. At present, the gas pipeline infrastructure is more developed between Canada and the United States than between Mexico and the United States. Gas flows from Canada to the United States through several major pipelines feeding U.S. markets in the Midwest, Northeast, Pacific Northwest, and California. Some key examples are the Alliance Pipeline, the Northern Border Pipeline, the Maritimes & Northeast Pipeline, the TransCanada Pipeline System, and Westcoast Energy pipelines. Major connections join Texas and northeastern Mexico, with additional connections to Arizona and between California and Baja California, Mexico (INGAA 2007). Of the natural gas consumed in the United States, 85% is produced domestically. Figure 1.1-1 shows the complex North American natural gas network. The pipeline transmission system--the 'interstate highway' for natural gas--consists of 180,000 miles of high-strength steel pipe, normally between 30 and 36 inches in diameter. The primary function of the transmission pipeline company is to move huge amounts of natural gas thousands of miles from producing regions to local natural gas utility delivery points. These delivery points, called 'city gate stations', are usually owned by distribution companies, although some are owned by

  17. 75 FR 4134 - Pipeline Safety: Leak Detection on Hazardous Liquid Pipelines

    Science.gov (United States)

    2010-01-26

    ... safety study on pipeline Supervisory Control and Data Acquisition (SCADA) systems (NTSB/SS-05/02). The... indications of a leak on the SCADA interface was the impetus for this study. The NTSB examined 13 hazardous... pipelines, the line balance technique for leak detection can often be performed with manual calculations...
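
    The line balance technique mentioned in the NTSB study is, at its simplest, a metered volume-in versus volume-out comparison over a time window, which is why it can often be performed with manual calculations. The tolerance fraction below is an illustrative assumption, not a regulatory figure.

```python
def line_balance(inflow_bbl, outflow_bbl, tolerance=0.005):
    """Compare metered volume in against volume out over a period.
    Returns (imbalance, leak_suspected): an imbalance beyond the
    tolerance fraction of throughput suggests a possible leak."""
    imbalance = inflow_bbl - outflow_bbl
    return imbalance, imbalance > tolerance * inflow_bbl
```

    Real implementations also correct for line pack, temperature and pressure before declaring an imbalance, which is where SCADA data comes in.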

  18. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    Science.gov (United States)

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. To overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules, such as FSL.FEAT and NPAIRS.CVA, were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
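
    The two evaluation metrics the system combines, prediction (classification) accuracy and SPI reproducibility, can be sketched as below. Reproducibility is shown as the Pearson correlation between the statistical parametric images from two data halves; the scoring rule that simply sums the two metrics is one plausible choice for ranking, not necessarily the system's exact formula.

```python
import math

def spi_reproducibility(spi_a, spi_b):
    """Pearson correlation between two statistical parametric
    images (as flat voxel lists) computed from split-half data."""
    n = len(spi_a)
    ma, mb = sum(spi_a) / n, sum(spi_b) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(spi_a, spi_b))
    va = sum((a - ma) ** 2 for a in spi_a)
    vb = sum((b - mb) ** 2 for b in spi_b)
    return cov / math.sqrt(va * vb)

def rank_pipelines(results):
    """results: {pipeline_name: (prediction_accuracy, reproducibility)}.
    Rank best-first by the sum of the two metrics (illustrative)."""
    return sorted(results, key=lambda name: -sum(results[name]))
```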

  19. A pipeline for the de novo assembly of the Themira biloba (Sepsidae: Diptera) transcriptome using a multiple k-mer length approach.

    Science.gov (United States)

    Melicher, Dacotah; Torson, Alex S; Dworkin, Ian; Bowsher, Julia H

    2014-03-12

    The Sepsidae family of flies is a model for investigating how sexual selection shapes courtship and sexual dimorphism in a comparative framework. However, like many non-model systems, there are few molecular resources available. Large-scale sequencing and assembly have not been performed in any sepsid, and the lack of a closely related genome makes investigation of gene expression challenging. Our goal was to develop an automated pipeline for de novo transcriptome assembly, and to use that pipeline to assemble and analyze the transcriptome of the sepsid Themira biloba. Our bioinformatics pipeline uses cloud computing services to assemble and analyze the transcriptome with off-site data management, processing, and backup. It uses a multiple k-mer length approach combined with a second meta-assembly to extend transcripts and recover more bases of transcript sequences than standard single k-mer assembly. We used 454 sequencing to generate 1.48 million reads from cDNA generated from embryo, larva, and pupae of T. biloba and assembled a transcriptome consisting of 24,495 contigs. Annotation identified 16,705 transcripts, including those involved in embryogenesis and limb patterning. We assembled transcriptomes from an additional three non-model organisms to demonstrate that our pipeline assembled a higher-quality transcriptome than single k-mer approaches across multiple species. The pipeline we have developed for assembly and analysis increases contig length, recovers unique transcripts, and assembles more base pairs than other methods through the use of a meta-assembly. The T. biloba transcriptome is a critical resource for performing large-scale RNA-Seq investigations of gene expression patterns, and is the first transcriptome sequenced in this Dipteran family.
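
    The multiple k-mer strategy can be caricatured as: pool the contigs produced by assemblies at each k-mer length, then merge contigs whose ends overlap to extend transcripts. The greedy exact-overlap merge below is a deliberate simplification; the actual pipeline runs a dedicated assembler for the meta-assembly step.

```python
def merge_pair(a, b, min_overlap=20):
    """Merge contig b onto contig a if a suffix of a matches a
    prefix of b of at least min_overlap bases; else None."""
    for olen in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-olen:] == b[:olen]:
            return a + b[olen:]
    return None

def meta_assemble(contig_sets, min_overlap=20):
    """Pool contigs from assemblies at several k-mer lengths and
    greedily merge overlapping pairs until no merge applies."""
    contigs = [c for contig_set in contig_sets for c in contig_set]
    merged = True
    while merged:
        merged = False
        for i in range(len(contigs)):
            for j in range(len(contigs)):
                if i == j:
                    continue
                m = merge_pair(contigs[i], contigs[j], min_overlap)
                if m:
                    contigs[j] = m
                    del contigs[i]
                    merged = True
                    break
            if merged:
                break
    return contigs
```

    The payoff the paper reports, longer contigs and more recovered bases than any single-k assembly, comes precisely from this pooling-then-merging step.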

  20. Reading Actively Online: An Exploratory Investigation of Online Annotation Tools for Inquiry Learning / La lecture active en ligne: étude exploratoire sur les outils d'annotation en ligne pour l'apprentissage par l’enquête

    OpenAIRE

    Jingyan Lu; Liping Deng

    2012-01-01

    This study seeks to design and facilitate active reading among secondary school students with an online annotation tool – Diigo. Two classes of different academic performance levels were recruited to examine their annotation behavior and perceptions of Diigo. We wanted to determine whether the two classes differed in how they used Diigo; how they perceived Diigo; and whether how they used Diigo was related to how they perceived it. Using annotation data and surveys in which students reported ...

  1. Contemporary methods of emergency repair works on transit pipelines. Repair works on in-service pipelines

    International Nuclear Information System (INIS)

    Olma, T.; Winckowski, J.

    2007-01-01

The paper presents modern methods and relevant technologies for pipeline failure repair, based on the TD Williamson technique for hermetic plugging of gas pipelines without interrupting service. Rules for managing emergency situations on the Polish section of the Yamal-Europe transit gas pipeline are also discussed. (author)

  2. Current trend of annotating single nucleotide variation in humans--A case study on SNVrap.

    Science.gov (United States)

    Li, Mulin Jun; Wang, Junwen

    2015-06-01

As high-throughput methods such as whole genome genotyping arrays, whole exome sequencing (WES) and whole genome sequencing (WGS) have detected huge numbers of genetic variants associated with human diseases, functional annotation of these variants is an indispensable step in understanding disease etiology. Large-scale functional genomics projects, such as The ENCODE Project and the Roadmap Epigenomics Project, provide genome-wide profiling of functional elements across different human cell types and tissues. With the urgent demand for identification of disease-causal variants, comprehensive and easy-to-use annotation tools are in high demand. Here we review and discuss current progress and trends in the variant annotation field. Furthermore, we introduce a comprehensive web portal for annotating human genetic variants. We use gene-based features and the latest functional genomics datasets to annotate single nucleotide variants (SNVs) in human, at whole genome scale. We further apply several function prediction algorithms to annotate SNVs that might affect different biological processes, including transcriptional gene regulation, alternative splicing, post-transcriptional regulation, translation and post-translational modifications. The SNVrap web portal is freely available at http://jjwanglab.org/snvrap. Copyright © 2014 Elsevier Inc. All rights reserved.
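
    At its core, gene-based annotation of an SNV is an interval lookup of the variant position against feature coordinates. The feature table below is entirely hypothetical and the scan is linear; a portal like SNVrap draws on far larger functional genomics datasets with indexed queries.

```python
# Features as (chrom, start, end, label), 0-based half-open intervals.
# These coordinates and gene names are invented for illustration.
FEATURES = [
    ("chr1", 100, 200, "exon of GENE_A (hypothetical)"),
    ("chr1", 200, 500, "intron of GENE_A (hypothetical)"),
]

def annotate_snv(chrom, pos, features=FEATURES):
    """Return labels of all features overlapping the SNV position,
    or ['intergenic'] when nothing overlaps."""
    return [
        label
        for c, start, end, label in features
        if c == chrom and start <= pos < end
    ] or ["intergenic"]
```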

  3. The effectiveness of annotated (vs. non-annotated) digital pathology slides as a teaching tool during dermatology and pathology residencies.

    Science.gov (United States)

    Marsch, Amanda F; Espiritu, Baltazar; Groth, John; Hutchens, Kelli A

    2014-06-01

    With today's technology, paraffin-embedded, hematoxylin & eosin-stained pathology slides can be scanned to generate high quality virtual slides. Using proprietary software, digital images can also be annotated with arrows, circles and boxes to highlight certain diagnostic features. Previous studies assessing digital microscopy as a teaching tool did not involve the annotation of digital images. The objective of this study was to compare the effectiveness of annotated digital pathology slides versus non-annotated digital pathology slides as a teaching tool during dermatology and pathology residencies. A study group composed of 31 dermatology and pathology residents was asked to complete an online pre-quiz consisting of 20 multiple choice style questions, each associated with a static digital pathology image. After completion, participants were given access to an online tutorial composed of digitally annotated pathology slides and subsequently asked to complete a post-quiz. A control group of 12 residents completed a non-annotated version of the tutorial. Nearly all participants in the study group improved their quiz score, with an average improvement of 17%, versus only 3% (P = 0.005) in the control group. These results support the notion that annotated digital pathology slides are superior to non-annotated slides for the purpose of resident education. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. Automatic annotation of head velocity and acceleration in Anvil

    DEFF Research Database (Denmark)

    Jongejan, Bart

    2012-01-01

We describe an automatic face tracker plugin for the ANVIL annotation tool. The face tracker produces data for velocity and for acceleration in two dimensions. We compare the annotations generated by the face tracking algorithm with independently made manual annotations for head movements. The annotations are a useful supplement to manual annotations and may help human annotators to quickly and reliably determine the onset of head movements and to suggest which kind of head movement is taking place.

  5. Mesotext. Framing and exploring annotations

    NARCIS (Netherlands)

    Boot, P.; Boot, P.; Stronks, E.

    2007-01-01

From the introduction: Annotation is an important item on the wish list for digital scholarly tools. It is one of John Unsworth's primitives of scholarship (Unsworth 2000). Especially in linguistics, a number of tools have been developed that facilitate the creation of annotations to source material

  6. Field sludge characterization obtained from inner of pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Nava, N.; Sosa, E.; Alamilla, J.L. [Instituto Mexicano del Petroleo, Programa de Integridad de Ductos, Eje Central Lazaro Cardenas Norte 152, San Bartolo Atepehuacan, C.P. 07730 (Mexico); Knigth, C. [PEMEX Refinacion, Avenida Marina Nacional 329, Edificio B-2, Piso 11, C.P. 11311 (Mexico); Contreras, A. [Instituto Mexicano del Petroleo, Programa de Integridad de Ductos, Eje Central Lazaro Cardenas Norte 152, San Bartolo Atepehuacan, C.P. 07730 (Mexico)], E-mail: acontrer@imp.mx

    2009-11-15

Physicochemical characterization of sludge obtained from a refined-hydrocarbon transmission pipeline was carried out through Mössbauer spectroscopy and X-ray diffraction. The Mössbauer and X-ray patterns indicate the presence of corrosion products composed of different iron oxide and sulfide phases. Hematite (α-Fe₂O₃), magnetite (Fe₃O₄), maghemite (γ-Fe₂O₃), magnetic and superparamagnetic goethite (α-FeOOH), pyrrhotite (Fe₁₋ₓS), akaganeite (β-FeOOH), and lepidocrocite (γ-FeOOH) were identified as corrosion products in samples obtained from the pipeline transporting Magna and Premium gasoline. For the diesel transmission pipeline, hematite, magnetite, and magnetic goethite were identified. The corrosion products follow a simple reaction mechanism of steel dissolution in aerated aqueous media at near-neutral pH. Chemical composition of the corrosion products depends on H₂O and trace sulfur inherent in the fluids. These results can be useful for decision-making with regard to pipeline corrosion control.

  7. Annotating images by mining image search results

    NARCIS (Netherlands)

    Wang, X.J.; Zhang, L.; Li, X.; Ma, W.Y.

    2008-01-01

    Although it has been studied for years by the computer vision and machine learning communities, image annotation is still far from practical. In this paper, we propose a novel attempt at model-free image annotation, which is a data-driven approach that annotates images by mining their search

  8. sRNAnalyzer—a flexible and customizable small RNA sequencing data analysis pipeline

    Science.gov (United States)

    Kim, Taek-Kyun; Baxter, David; Scherler, Kelsey; Gordon, Aaron; Fong, Olivia; Etheridge, Alton; Galas, David J.

    2017-01-01

    Abstract Although many tools have been developed to analyze small RNA sequencing (sRNA-Seq) data, it remains challenging to accurately analyze the small RNA population, mainly due to multiple sequence ID assignment caused by short read length. Additional issues in small RNA analysis include low consistency of microRNA (miRNA) measurement results across different platforms, miRNA mapping associated with miRNA sequence variation (isomiR) and RNA editing, and the origin of those unmapped reads after screening against all endogenous reference sequence databases. To address these issues, we built a comprehensive and customizable sRNA-Seq data analysis pipeline—sRNAnalyzer, which enables: (i) comprehensive miRNA profiling strategies to better handle isomiRs and summarization based on each nucleotide position to detect potential SNPs in miRNAs, (ii) different sequence mapping result assignment approaches to simulate results from microarray/qRT-PCR platforms and a local probabilistic model to assign mapping results to the most-likely IDs, (iii) comprehensive ribosomal RNA filtering for accurate mapping of exogenous RNAs and summarization based on taxonomy annotation. We evaluated our pipeline on both artificial samples (including synthetic miRNA and Escherichia coli cultures) and biological samples (human tissue and plasma). sRNAnalyzer is implemented in Perl and available at: http://srnanalyzer.systemsbiology.net/. PMID:29069500
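
    The local probabilistic model for multi-mapped reads can be approximated by weighting each candidate reference with its uniquely mapped read count plus a pseudocount, then assigning the read to the most likely reference. This is a simplified reading of sRNAnalyzer's approach, and the miRNA IDs used in the example are invented.

```python
def assign_multimapped(read_hits, unique_counts, pseudocount=1):
    """Assign a multi-mapped read to one of its candidate references
    (read_hits) using probabilities estimated from uniquely mapped
    read counts. Returns (best_reference, probability)."""
    weights = {ref: unique_counts.get(ref, 0) + pseudocount for ref in read_hits}
    total = sum(weights.values())
    probs = {ref: w / total for ref, w in weights.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]
```

    Keeping the probability alongside the assignment lets downstream summaries distinguish confident assignments from near-ties.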

  9. Combining rules, background knowledge and change patterns to maintain semantic annotations.

    Science.gov (United States)

    Cardoso, Silvio Domingos; Chantal, Reynaud-Delaître; Da Silveira, Marcos; Pruski, Cédric

    2017-01-01

Knowledge Organization Systems (KOS) play a key role in enriching biomedical information in order to make it machine-understandable and shareable. This is done by annotating medical documents, or more specifically, associating concept labels from a KOS with pieces of digital information, e.g., images or texts. However, the dynamic nature of KOS may impact the annotations, thus creating a mismatch between the evolved concept and the associated information. To solve this problem, methods to maintain the quality of the annotations are required. In this paper, we define a framework based on rules, background knowledge and change patterns to drive the annotation adaptation process. We experimentally evaluate the proposed approach in realistic case studies and demonstrate the overall performance of our approach on different KOS, considering the precision, recall, F1-score and AUC value of the system.
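
    A rule applied per change pattern might look like the sketch below. The change types (`relabel`, `obsolete`) and the dictionary shapes are assumptions for illustration, not the paper's actual rule language, which also consults background knowledge.

```python
def adapt_annotation(annotation, change):
    """Adapt one annotation to a KOS change using simple change
    patterns: relabeling updates the label in place; an obsoleted
    concept with a replacement is remapped and flagged for review."""
    if change["type"] == "relabel":
        return {**annotation, "label": change["new_label"]}
    if change["type"] == "obsolete" and change.get("replaced_by"):
        return {**annotation,
                "concept": change["replaced_by"],
                "needs_review": True}
    return annotation  # unrecognized change: leave annotation as-is
```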

  10. Crude oil pipeline expansion summary

    International Nuclear Information System (INIS)

    2005-02-01

The Canadian Association of Petroleum Producers has been working with producers to address issues associated with the development of new pipeline capacity from western Canada. This document presents an assessment of the need for additional oil pipeline capacity given the changing mix of crude oil types and forecasted supply growth. It is of particular interest to crude oil producers and contributes to current available information for market participants. While detailed, the underlying analysis does not account for all the factors that may come into play when individual market participants make choices about which expansions they may support. The key focus is on the importance of timely expansion. It was emphasized that if pipeline expansion lags crude supply growth, the consequences would be both significant and unacceptable. Obstacles to timely expansion are also discussed. The report reviews the production and supply forecasts, the existing crude oil pipeline infrastructure, opportunities for new market development, requirements for new pipeline capacity and tolling options for pipeline development. tabs., figs., 1 appendix

  11. 77 FR 45417 - Pipeline Safety: Inspection and Protection of Pipeline Facilities After Railway Accidents

    Science.gov (United States)

    2012-07-31

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Accidents AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT.

  12. Canadian pipeline contractors in holding pattern

    Energy Technology Data Exchange (ETDEWEB)

    Caron, G [Pe Ben Pipelines Ltd.; Osadchuk, V; Sharp, M; Stabback, J G

    1979-05-21

    A discussion of papers presented at a Pipe Line Contractors Association of Canada convention includes comments by G. Caron (Pe Ben Pipelines Ltd.) on the continued slack in big-inch pipeline construction into 1980 owing mainly to delayed U.S. and Canadian decisions on outstanding Alaska Highway gas pipeline issues and associated gas export bids and on the use of automatic welding for expeditious construction of the northern sections of the Alaska Highway pipeline; by V. Osadchuk (Majestic Wiley Contract. Ltd.) on the liquidation of surplus construction equipment because of these delays; by M. Sharp (Can. North. Pipeline Agency) on the need for close U.S. and Canadian governmental and industrial cooperation to permit an early 1980 start for construction of the prebuild sections of the Alaska pipeline; and by J. G. Stabback (Can. Natl. Energy Board) on the Alaska oil pipeline applications by Foothills Pipe Lines Ltd., Trans Mountain Pipe Line Co. Ltd., and Kitimat Pipe Line Ltd.

  13. 76 FR 28326 - Pipeline Safety: National Pipeline Mapping System Data Submissions and Submission Dates for Gas...

    Science.gov (United States)

    2011-05-17

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR 191... Reports AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Issuance of... Pipeline and Hazardous Materials Safety Administration (PHMSA) published a final rule on November 26, 2010...

  14. Measures for security and supervision of pipelines; Massnahmen zur Pipeline-Sicherheit und -Ueberwachung

    Energy Technology Data Exchange (ETDEWEB)

    Horlacher, Hans-Burkhard [TU Dresden (Germany). Inst. fuer Wasserbau und Technische Hydromechanik; Giesecke, Juergen [Stuttgart Univ. (Germany). Inst. fuer Wasserbau

    2010-07-01

    In a previous publication, the two authors dealt with the hydraulic problems as regards mineral oil pipelines. The present report describes the measures mainly used to guarantee the safety of such pipelines. (orig.)

  15. Pipeline integrity evaluation of oil pipelines using free-swimming acoustic technology

    Energy Technology Data Exchange (ETDEWEB)

    Ariaratnam, Samuel T. [Arizona State University, Tempe, Arizona (United States); Chandrasekaran, Muthu [Pure Technologies Limited, Calgary, AB (Canada)

    2010-07-01

    In the United States, the Pipeline and Hazardous Materials Safety Administration (PHMSA) funded a joint academy-industry research project, which developed and refined a free-swimming tool called SmartBall. The tool swims through the pipeline and gives results at a much lower cost than current leak detection methods, and it can detect leaks as small as 0.03 gpm of oil. GPS-synchronized above-ground loggers capture acoustic signals and record the passage of the tool through the pipeline. The tool is spherical and smaller than the pipe, through which it rolls silently; it can overcome obstacles that could otherwise make a pipeline unpiggable. SmartBall uses the great potential of acoustic detection, because when a pressurized product leaks from a pipe, it produces a distinctive acoustic signal that travels through the product; at the same time, it overcomes the problem caused by the very limited range of this signal. This technology can prevent enormous economic consequences such as a 50,000-gallon gasoline spill that happened in 2003 between Tucson and Phoenix.

  16. Consumer energy research: an annotated bibliography. Vol. 3

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, D.C.; McDougall, G.H.G.

    1983-04-01

    This annotated bibliography attempts to provide a comprehensive package of existing information in consumer-related energy research. A concentrated effort was made to collect unpublished material as well as material from journals and other sources, including governments, utilities, research institutes and private firms. A deliberate effort was made to include agencies outside North America. For the most part the bibliography is limited to annotations of empirical studies. However, it includes a number of descriptive reports which appear to make a significant contribution to understanding consumers and energy use. The format of the annotations displays the author, date of publication, title and source of the study. Annotations of empirical studies are divided into four parts: objectives, methods, variables and findings/implications. Care was taken to provide a reasonable amount of detail in the annotations to enable the reader to understand the methodology, the results and the degree to which the implications of the study can be generalized to other situations. Studies are arranged alphabetically by author. The content of the studies reviewed is classified in a series of tables intended to provide a summary of the sources, types and foci of the various studies. These tables are intended to aid researchers interested in specific topics in locating the studies most relevant to their work. The studies are categorized using a number of different classification criteria, for example, methodology used, type of energy form, type of policy initiative, and type of consumer activity. A general overview of the studies is also presented. 17 tabs.

  17. WormBase: Annotating many nematode genomes.

    Science.gov (United States)

    Howe, Kevin; Davis, Paul; Paulini, Michael; Tuli, Mary Ann; Williams, Gary; Yook, Karen; Durbin, Richard; Kersey, Paul; Sternberg, Paul W

    2012-01-01

    WormBase (www.wormbase.org) has been serving the scientific community for over 11 years as the central repository for genomic and genetic information for the soil nematode Caenorhabditis elegans. The resource has evolved from its beginnings as a database housing the genomic sequence and genetic and physical maps of a single species, and now represents the breadth and diversity of nematode research, currently serving genome sequence and annotation for around 20 nematodes. In this article, we focus on WormBase's role in genome sequence annotation, describing how we annotate and integrate data from a growing collection of nematode species and strains. We also review our approaches to sequence curation, and discuss the impact of large functional genomics projects such as modENCODE on annotation quality.

  18. ONEMercury: Towards Automatic Annotation of Earth Science Metadata

    Science.gov (United States)

    Tuarob, S.; Pouchard, L. C.; Noy, N.; Horsburgh, J. S.; Palanisamy, G.

    2012-12-01

    Earth sciences have become more data-intensive, requiring access to heterogeneous data collected from multiple places, times, and thematic scales. For example, research on climate change may involve exploring and analyzing observational data such as the migration of animals and temperature shifts across the earth, as well as various model-observation inter-comparison studies. Recently, DataONE, a federated data network built to facilitate access to and preservation of environmental and ecological data, has been established. ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for discovering and accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple data repositories and makes it searchable via a common search interface built upon cutting-edge search engine technology, allowing users to interact with the system, intelligently filter the search results on the fly, and fetch the data from distributed data sources. Linking data from heterogeneous sources always has a cost. A problem that ONEMercury faces is the varying level of annotation in the harvested metadata records. Poorly annotated records tend to be missed during the search process because they lack meaningful keywords. Furthermore, such records are incompatible with the advanced search functionality offered by ONEMercury, which requires metadata records to be semantically annotated. The explosion in the number of metadata records harvested from an increasing number of data repositories makes it impossible to annotate them manually, highlighting the need for a tool capable of automatically annotating poorly curated metadata records. In this paper, we propose a topic-model (TM) based approach for automatic metadata annotation. Our approach mines topics in the set of well-annotated records and suggests keywords for poorly annotated records based on topic similarity.

  19. Challenges in the development of market-based pipeline investments

    International Nuclear Information System (INIS)

    Von Bassenheim, G.; Mohitpour, M.; Klaudt, D.; Jenkins, A.

    2000-01-01

    The challenges, risks and uncertainties that the natural gas industry faces in developing market-based pipeline projects were discussed. Market-based pipeline investments are fundamentally different from user-driven projects. Market-based projects involve finding enough energy users and linking them through a pipeline infrastructure to viable supplies of natural gas. Each project is unique, developed individually, and requires strong corporate vision and support before it can be successfully implemented. The three phases of a pipeline investment are the business development phase, the project development phase, and the implementation/operations phase. Market-based companies will need a clear vision of long-term goals and the desire to succeed. The company will have to prepare a detailed strategy and policies that clearly define geographic areas of operations, risk tolerance, availability of capital and expected project performance. 3 refs., 3 tabs., 2 figs

  20. Teaching and Learning Communities through Online Annotation

    Science.gov (United States)

    van der Pluijm, B.

    2016-12-01

    What do colleagues do with your assigned textbook? What do they say or think about the material? Do you want students to be more engaged in their learning experience? If so, online materials that complement the standard lecture format provide new opportunities through managed, online group annotation that leverages the ubiquity of internet access while personalizing learning. The concept is illustrated with the new online textbook "Processes in Structural Geology and Tectonics", by Ben van der Pluijm and Stephen Marshak, which offers a platform for sharing experiences, supplementary materials and approaches, including readings, mathematical applications, exercises, challenge questions, quizzes, alternative explanations, and more. The annotation framework used is Hypothes.is, which offers a free, open-platform markup environment for annotation of websites and PDF postings. The annotations can be public, grouped or individualized, as desired, including export access and download of annotations. A teacher group, hosted by a moderator/owner, limits access to members of a user group of teachers, so that its members can use, copy or transcribe annotations for their own lesson material. Likewise, an instructor can host a student group that encourages sharing of observations, questions and answers among students and instructor. The instructor can also create one or more closed groups that offer study help and hints to students. Options galore, all of which aim to engage students and to promote greater responsibility for their learning experience. Beyond new capacity, the ability to analyze student annotations supports individual learners and their needs. For example, student notes can be analyzed for key phrases and concepts to identify misunderstandings, omissions and problems. Example annotations can also be shared to enhance notetaking skills and to help with studying. Lastly, online annotation allows active application to posted lecture slides, supporting real-time notetaking.

  1. Displaying Annotations for Digitised Globes

    Science.gov (United States)

    Gede, Mátyás; Farbinger, Anna

    2018-05-01

    Thanks to the efforts of the various globe digitising projects, nowadays there are plenty of old globes that can be examined as 3D models on the computer screen. These globes usually contain many interesting details that an average observer would not fully discover at first sight. The authors developed a website that can display annotations for such digitised globes. These annotations help observers of the globe discover all the important, interesting details. Annotations consist of a plain-text title, an HTML-formatted descriptive text and a corresponding polygon, and are stored in KML format. The website is powered by the Cesium virtual globe engine.

  2. ToTem: a tool for variant calling pipeline optimization.

    Science.gov (United States)

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatic analyses of next-generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process, with the possibility of plugging in almost any tool or custom code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are presented as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole-genome sequencing (WGS) data. ToTem is freely available as a web application at https://totem.software.
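
    The kind of benchmarking loop that such a tool automates can be sketched as follows. The caller, thresholds and fold data below are hypothetical stand-ins, not ToTem's actual API or settings; the sketch only shows how parameter settings can be scored by mean F-measure across folds:

```python
# Illustrative parameter sweep with per-fold F-measure scoring
# (hypothetical toy caller; not ToTem's implementation).
from itertools import product
from statistics import mean

def f_measure(called, truth):
    """Harmonic mean of precision and recall for two variant sets."""
    tp = len(called & truth)
    precision = tp / len(called) if called else 0.0
    recall = tp / len(truth) if truth else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

def toy_caller(sample, min_qual, min_depth):
    """Stand-in variant caller: keeps variants above both thresholds."""
    return {v for (v, qual, depth) in sample
            if qual >= min_qual and depth >= min_depth}

# Each fold pairs observed (variant, quality, depth) triples with a truth set.
folds = [
    ([("chr1:100", 40, 30), ("chr1:200", 10, 5), ("chr2:50", 25, 20)],
     {"chr1:100", "chr2:50"}),
    ([("chr1:300", 50, 25), ("chr3:70", 12, 8), ("chr3:90", 28, 40)],
     {"chr1:300", "chr3:90"}),
]

scores = {
    params: mean(f_measure(toy_caller(sample, *params), truth)
                 for sample, truth in folds)
    for params in product([20, 30], [10, 15])  # min_qual x min_depth grid
}
best = max(scores, key=scores.get)
print(best, scores[best])
```

    A real optimizer would additionally hold out folds to penalize settings that only work on the data they were tuned on, as the abstract describes.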

  3. Offshore Pipeline Locations in the Gulf of Mexico, Geographic NAD27, MMS (2007) [pipelines_vectors_mms_2007

    Data.gov (United States)

    Louisiana Geographic Information Center — Offshore Minerals Management Pipeline Locations for the Gulf of Mexico (GOM). Contains the lines of the pipeline in the GOM. All pipelines existing in the databases...

  4. Offshore Pipeline Locations in the Gulf of Mexico, Geographic NAD27, MMS (2007) [pipelines_points_mms_2007

    Data.gov (United States)

    Louisiana Geographic Information Center — Offshore Minerals Management Pipeline Locations for the Gulf of Mexico (GOM). Contains the points of the pipeline in the GOM. All pipelines existing in the databases...

  5. THE DIMENSIONS OF COMPOSITION ANNOTATION.

    Science.gov (United States)

    MCCOLLY, WILLIAM

    ENGLISH TEACHER ANNOTATIONS WERE STUDIED TO DETERMINE THE DIMENSIONS AND PROPERTIES OF THE ENTIRE SYSTEM FOR WRITING CORRECTIONS AND CRITICISMS ON COMPOSITIONS. FOUR SETS OF COMPOSITIONS WERE WRITTEN BY STUDENTS IN GRADES 9 THROUGH 13. TYPESCRIPTS OF THE COMPOSITIONS WERE ANNOTATED BY CLASSROOM ENGLISH TEACHERS. THEN, 32 ENGLISH TEACHERS JUDGED…

  6. Combining gene prediction methods to improve metagenomic gene annotation

    Directory of Open Access Journals (Sweden)

    Rosen Gail L

    2011-01-01

    Abstract. Background: Traditional gene annotation methods rely on characteristics that may not be available in short reads generated by next-generation technology, resulting in suboptimal performance for metagenomic (environmental) samples. Therefore, in recent years, new programs have been developed that optimize performance on short reads. In this work, we benchmark three metagenomic gene prediction programs and combine their predictions to improve metagenomic read gene annotation. Results: We not only analyze the programs' performance at different read lengths, as similar studies have, but also separate different types of reads, including intra- and intergenic regions, for analysis. The main deficiencies are in the algorithms' ability to predict non-coding regions and gene edges, resulting in more false positives and false negatives than desired. In fact, the specificities of the algorithms are notably worse than the sensitivities. By combining the programs' predictions, we show significant improvement in specificity at minimal cost to sensitivity, resulting in a 4% improvement in accuracy for 100 bp reads and a ~1% improvement in accuracy for 200 bp reads and above. To correctly annotate the start and stop of genes, we find that a consensus of all the predictors performs best for shorter read lengths, while unanimous agreement is better for longer read lengths, boosting annotation accuracy by 1-8%. We also demonstrate use of the classifier combinations on a real dataset. Conclusions: To optimize the performance for both prediction and annotation accuracy, we conclude that the consensus of all methods (or a majority vote) is best for reads 400 bp and shorter, while the intersection of GeneMark and Orphelia predictions is best for reads 500 bp and longer. We demonstrate that most methods predict over 80% coding (including partially coding) reads on a real human gut sample sequenced by Illumina technology.
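
    The two combination rules described above, majority vote for shorter reads and predictor intersection for longer ones, can be sketched as follows. The predictor outputs are mocked, and "MetaGene" is a hypothetical stand-in for the third benchmarked program:

```python
# Sketch of combining per-read gene predictions (mocked predictor output).
def majority_vote(predictions):
    """Call a read 'coding' if more than half of the predictors do.
    predictions: dict mapping predictor name -> set of coding read ids."""
    votes = {}
    for called in predictions.values():
        for read in called:
            votes[read] = votes.get(read, 0) + 1
    threshold = len(predictions) / 2
    return {read for read, n in votes.items() if n > threshold}

def intersection(predictions, a, b):
    """Stricter rule for long reads: both named predictors must agree."""
    return predictions[a] & predictions[b]

preds = {
    "GeneMark": {"r1", "r2", "r3"},
    "Orphelia": {"r1", "r3", "r4"},
    "MetaGene": {"r1", "r2"},
}
print(sorted(majority_vote(preds)))                        # ['r1', 'r2', 'r3']
print(sorted(intersection(preds, "GeneMark", "Orphelia")))  # ['r1', 'r3']
```

    As the abstract reports, the looser majority rule trades a little precision for recall, which pays off when individual short-read predictors are noisy.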

  7. Statistical method to compare massive parallel sequencing pipelines.

    Science.gov (United States)

    Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P

    2017-03-01

    Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which drastically cuts sequencing time and cost. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted, considering the biological variability as a random effect. Among the 390,339 base pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single-nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Similar results were obtained when only single-nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.
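
    The sensitivity and specificity figures quoted above rest on per-position confusion counts against the gold standard. A minimal sketch of that computation, with invented positions and calls (the log-linear modelling itself is not reproduced here), is:

```python
# Sketch of per-pipeline confusion counts against a gold standard
# (invented toy data, not the study's pipelines or panel).
def confusion(calls, truth, positions):
    """Sensitivity and specificity of one pipeline's variant calls
    against a gold-standard call set over a set of sequenced positions."""
    tp = len(calls & truth)          # variants both agree on
    fn = len(truth - calls)          # gold-standard variants missed
    fp = len(calls - truth)          # calls not confirmed by gold standard
    tn = len(positions) - tp - fn - fp
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}

positions = {f"chr1:{i}" for i in range(1, 1001)}           # 1000 bp panel
sanger = {"chr1:10", "chr1:20", "chr1:30", "chr1:40"}       # gold standard
pipeline_a = {"chr1:10", "chr1:20", "chr1:30", "chr1:999"}  # 1 FN, 1 FP

m = confusion(pipeline_a, sanger, positions)
print(m)
```

    Note the asymmetry visible even in this toy: with mostly non-variant positions, specificity stays near 1.0 while sensitivity is far more affected by each missed variant, mirroring the pattern in the study's results.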

  8. MimoSA: a system for minimotif annotation

    Directory of Open Access Journals (Sweden)

    Kundeti Vamsi

    2010-06-01

    Abstract. Background: Minimotifs are short peptide sequences within one protein that are recognized by other proteins or molecules. While there are now several minimotif databases, they are incomplete. There are reports of many minimotifs in the primary literature which have yet to be annotated, while entirely novel minimotifs continue to be published on a weekly basis. Our recently proposed function and sequence syntax for minimotifs enables us to build a general tool that will facilitate structured annotation and management of minimotif data from the biomedical literature. Results: We have built the MimoSA application for minimotif annotation. The application supports management of the Minimotif Miner (MnM) database, literature tracking, and annotation of new minimotifs. MimoSA enables the visualization, organization, selection and editing of minimotifs and their attributes in the MnM database. For the literature components, MimoSA provides paper status tracking and scoring of papers for annotation through a freely available machine-learning approach based on word correlation. The paper-scoring algorithm is also available as a separate program, TextMine. Form-driven annotation of minimotif attributes enables entry of new minimotifs into the MnM database. Several supporting features increase the efficiency of annotation. The layered architecture of MimoSA allows for extensibility by separating the functions of paper scoring, minimotif visualization, and database management. MimoSA is readily adaptable to other annotation efforts that manually curate literature into a MySQL database. Conclusions: MimoSA is an extensible application that facilitates minimotif annotation and integrates with the Minimotif Miner database. We have built MimoSA as an application that integrates dynamic abstract scoring with a high-performance relational model of minimotif syntax. MimoSA's TextMine, an efficient paper-scoring algorithm, can be used to score papers for annotation.
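
    A word-correlation paper scorer in the spirit of the approach described above can be sketched as follows. This cosine-similarity toy is an assumption about the general technique, not the MimoSA/TextMine code, and the abstracts are invented:

```python
# Naive word-correlation abstract scorer (illustrative only):
# rank candidate papers by similarity to known annotated papers.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def score_abstract(abstract, annotated_abstracts):
    """Score a candidate abstract against a profile built from
    already-annotated minimotif papers."""
    profile = Counter(w for text in annotated_abstracts
                      for w in text.lower().split())
    return cosine(Counter(abstract.lower().split()), profile)

known = ["short peptide motif binds SH3 domain",
         "proline rich motif recognized by WW domain"]
hit = score_abstract("novel peptide motif binds PDZ domain", known)
miss = score_abstract("pipeline corrosion in arctic soils", known)
print(round(hit, 3), round(miss, 3))
```

    In use, papers above a score threshold would be queued for manual curation first, which is the triage role the abstract assigns to paper scoring.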

  9. Northern pipelines : challenges and needs

    Energy Technology Data Exchange (ETDEWEB)

    Dean, D.; Brownie, D. [ProLog Canada Inc., Calgary, AB (Canada); Fafara, R. [TransCanada PipeLines Ltd., Calgary, AB (Canada)

    2007-07-01

    Working Group 10 presented experiences acquired from the operation of pipeline systems in a northern environment. There are currently 3 pipelines operating north of 60, notably the Shiha gas pipeline near Fort Liard, the Ikhil gas pipeline in Inuvik and the Norman Wells oil pipeline. Each has its unique commissioning, operating and maintenance challenges, as well as specific training and logistical support requirements for the use of in-line inspection tools and other forms of integrity assessment. The effectiveness of cathodic protection systems in a permafrost northern environment was also discussed. It was noted that the delay of the Mackenzie Gas Pipeline Project by two to three years due to joint regulatory review may lead to resource constraints for the project as well as competition for already scarce human resources. The issue of a potential timing conflict with the Alaskan Pipeline Project was also addressed as well as land use issues for routing of supply roads. Integrity monitoring and assessment issues were outlined with reference to pipe soil interaction monitoring in discontinuous permafrost; south facing denuded slope stability; base lining projects; and reclamation issues. It was noted that automatic welding and inspection will increase productivity, while reducing the need for manual labour. In response to anticipated training needs, companies are planning to involve and train Aboriginal labour and will provide camp living conditions that will attract labour. tabs., figs.

  10. Proceedings of the ice scour and Arctic marine pipelines workshop

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    This conference was organized to discuss the challenges facing engineers in Arctic offshore oil and gas operations, particularly those dealing with the design, installation and operation of offshore pipelines. In addition to the usual engineering considerations, formidable enough in themselves, Arctic offshore pipelines also face constraints due to permafrost, ice cover, and ice scouring from icebergs. Besides an examination of the roles played by these constraints, speakers addressed the forces and deformation mechanisms experienced by different soils during ice scouring events, modeling of the scouring process, and the application of models to the issue of pipeline burial and protection. Some of the regulatory concerns regarding Arctic pipelines were also discussed. refs., tabs., figs.

  11. Pydpiper: A Flexible Toolkit for Constructing Novel Registration Pipelines

    Directory of Open Access Journals (Sweden)

    Miriam eFriedel

    2014-07-01

    Using neuroimaging technologies to elucidate the relationship between genotype and phenotype and between brain and behavior will be a key contribution to biomedical research in the twenty-first century. Among the many methods for analyzing neuroimaging data, image registration deserves particular attention due to its wide range of applications. Finding strategies to register many images together and analyze the differences between them can be a challenge, particularly given that different experimental designs require different registration strategies. Moreover, writing software that can handle different types of image registration pipelines in a flexible, reusable and extensible way can be challenging. In response to this challenge, we have created Pydpiper, a neuroimaging registration toolkit written in Python. Pydpiper is an open-source, freely available pipeline framework that provides multiple modules for various image registration applications. Pydpiper offers five key innovations: (1) a robust file-handling class that allows access to outputs from all stages of registration at any point in the pipeline; (2) the ability of the framework to eliminate duplicate stages; (3) reusable, easy-to-subclass modules; (4) a development toolkit written for non-developers; (5) four complete applications that run complex image registration pipelines "out of the box." In this paper, we discuss both the general Pydpiper framework and the various ways in which component modules can be pieced together to easily create new registration pipelines. This includes a discussion of the core principles motivating code development and a comparison of Pydpiper with other available toolkits. We also provide a comprehensive, line-by-line example to orient users with limited programming knowledge and highlight some of the most useful features of Pydpiper. In addition, we present the four current applications of the code.
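
    Duplicate-stage elimination, one of the framework features listed above, can be illustrated with a toy pipeline class. The Stage structure and method names here are hypothetical simplifications, not Pydpiper's actual classes:

```python
# Sketch of duplicate-stage elimination in a registration pipeline
# (hypothetical data model, not Pydpiper's implementation).
from dataclasses import dataclass

@dataclass(frozen=True)        # frozen -> hashable, so stages can be set keys
class Stage:
    command: str
    inputs: tuple
    outputs: tuple

class Pipeline:
    def __init__(self):
        self.stages = []
        self._seen = set()

    def add_stage(self, stage: Stage) -> bool:
        """Queue a stage unless an identical one is already queued."""
        if stage in self._seen:
            return False       # duplicate eliminated, not re-executed
        self._seen.add(stage)
        self.stages.append(stage)
        return True

p = Pipeline()
p.add_stage(Stage("mincblur", ("scan1.mnc",), ("scan1_blur.mnc",)))
added_again = p.add_stage(Stage("mincblur", ("scan1.mnc",), ("scan1_blur.mnc",)))
print(len(p.stages), added_again)  # 1 False
```

    The payoff is that two registration applications sharing a preprocessing step (for example, the same blur of the same scan) schedule the work once, which matters when pipelines contain thousands of stages.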

  12. Oil pipeline valve automation for spill reduction

    Energy Technology Data Exchange (ETDEWEB)

    Mohitpour, Mo; Trefanenko, Bill [Enbridge Technology Inc, Calgary (Canada); Tolmasquim, Sueli Tiomno; Kossatz, Helmut [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2003-07-01

    Liquid pipeline codes generally stipulate the placement of block valves along liquid transmission pipelines, such as on each side of major river crossings where environmental hazards could cause, or are foreseen to potentially cause, serious consequences. Codes, however, do not stipulate any requirement for block valve spacing for low-vapour-pressure petroleum transportation, nor for remote pipeline valve operation to reduce spills. A review of pipeline codes for valve requirements and spill limitation in high-consequence areas is thus presented, along with criteria for an acceptable spill volume that could be caused by a pipeline leak or full rupture. A technique for deciding economically and technically effective pipeline block valve automation for remote operation, to reduce oil spills and control hazards, is also provided. In this review, industry practice is highlighted, and application of the criteria for maximum permissible oil spill and the technique for deciding valve automation, as applied to the ORSUB pipeline, is presented. ORSUB is one of the three initially selected pipelines that have been studied. These pipelines represent about 14% of the total length of petroleum transmission lines operated by PETROBRAS Transporte S.A. (TRANSPETRO) in Brazil. Based on the implementation of valve motorization on these three pipelines, motorization of block valves for remote operation on the remaining pipelines is intended, depending on the success of these implementations, historical records of failure and appropriate ranking. (author)

  13. Stabilité des pipelines non ensouillés. Etude bibliographique Stability of Unburied Pipelines. Bibliographic Study

    OpenAIRE

    Alliot J. M.

    2006-01-01

    The integrity of an unburied subsea pipeline depends to a very large extent on its stability on the seabed along its entire length. Hence the determination of this stability is of great importance in the engineering design of pipelines. This article examines the principal problems raised by the stability of unburied pipelines in the field of soil mechanics. These problems mainly concern the reactions of the soil to pipelines and their assessment, i.e. the forces of soil resistance...

  14. Tissue-specific Proteogenomic Analysis of Plutella xylostella Larval Midgut Using a Multialgorithm Pipeline.

    Science.gov (United States)

    Zhu, Xun; Xie, Shangbo; Armengaud, Jean; Xie, Wen; Guo, Zhaojiang; Kang, Shi; Wu, Qingjun; Wang, Shaoli; Xia, Jixing; He, Rongjun; Zhang, Youjun

    2016-06-01

    The diamondback moth, Plutella xylostella (L.), is the major cosmopolitan pest of brassica and other cruciferous crops. Its larval midgut is a dynamic tissue that interfaces with a wide variety of toxicological and physiological processes. The draft sequence of the P. xylostella genome was recently released, but its annotation remains challenging because of the low sequence coverage of this branch of life and the poor description of exon/intron splicing rules for these insects. Peptide sequencing by computational assignment of tandem mass spectra to genome sequence information provides an independent experimental approach for confirming or refuting protein predictions, a concept that has been termed proteogenomics. In this study, we carried out an in-depth proteogenomic analysis to complement the genome annotation of the P. xylostella larval midgut based on shotgun HPLC-ESI-MS/MS data by means of a multialgorithm pipeline. A total of 876,341 tandem mass spectra were searched against the predicted P. xylostella protein sequences and a whole-genome six-frame translation database. Based on a data set comprising 2694 novel genome-search-specific peptides, we discovered 439 novel protein-coding genes and corrected 128 existing gene models. To obtain the most accurate data to seed further insect genome annotation, more than half of the novel protein-coding genes (235 of 439) were further validated by RT-PCR amplification and sequencing of the corresponding transcripts. Furthermore, we validated 53 novel alternative splicings. Finally, a total of 6764 proteins were identified, resulting in one of the most comprehensive proteogenomic studies of a non-model animal. As the first tissue-specific proteogenomic analysis of P. xylostella, this study provides the fundamental basis for high-throughput proteomics and functional genomics approaches aimed at deciphering the molecular mechanisms of resistance and controlling this pest.
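
    The whole-genome six-frame translation used to build such a spectral search database can be sketched as follows (standard genetic code, stop codons shown as '*'; a real database builder would additionally split the translations at stops and index the resulting peptides):

```python
# Six-frame translation sketch using the standard genetic code.
BASES = "TCAG"
# Canonical amino-acid string ordered by codon index over T, C, A, G.
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def revcomp(seq):
    """Reverse complement of an ACGT sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def translate(seq):
    """Translate complete codons, ignoring any trailing 1-2 bases."""
    return "".join(CODON_TABLE[seq[i:i + 3]]
                   for i in range(0, len(seq) - 2, 3))

def six_frames(seq):
    """Three forward frames plus three reverse-complement frames."""
    rc = revcomp(seq)
    return ([translate(seq[f:]) for f in range(3)] +
            [translate(rc[f:]) for f in range(3)])

frames = six_frames("ATGGCCATTGTA")
print(frames[0])  # MAIV
```

    Matching spectra against all six frames is what allows peptides from unpredicted genes to surface, since no prior gene model is assumed.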

  16. A reference methylome database and analysis pipeline to facilitate integrative and comparative epigenomics.

    Directory of Open Access Journals (Sweden)

    Qiang Song

    DNA methylation is implicated in a surprising diversity of regulatory and evolutionary processes and diseases in eukaryotes. The introduction of whole-genome bisulfite sequencing has enabled the study of DNA methylation at single-base resolution, revealing many new aspects of DNA methylation and highlighting the usefulness of methylome data in understanding a variety of genomic phenomena. As the number of publicly available whole-genome bisulfite sequencing studies reaches into the hundreds, reliable and convenient tools for comparing and analyzing methylomes become increasingly important. We present MethPipe, a pipeline for both low- and high-level methylome analysis, and MethBase, an accompanying database of annotated methylomes from the public domain. Together these resources enable researchers to extract interesting features from methylomes and compare them with those identified in the public methylomes in our database.
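
    Two of the basic quantities such methylome pipelines report can be sketched in a few lines: per-site methylation levels from bisulfite read counts, and a crude scan for runs of lowly methylated sites. This is illustrative only, not MethPipe code, and the cutoff and data are invented:

```python
# Toy methylome computations (illustrative, not MethPipe's algorithms).
def methylation_level(methylated, total):
    """Fraction of reads reporting methylation at one CpG site."""
    if total == 0:
        raise ValueError("site has no coverage")
    return methylated / total

def low_meth_runs(sites, cutoff=0.5):
    """Group consecutive sites whose level is below `cutoff`;
    a very rough analogue of hypomethylated-region calling."""
    runs, current = [], []
    for pos, (meth, total) in sites:
        if methylation_level(meth, total) < cutoff:
            current.append(pos)
        else:
            if current:
                runs.append(current)
            current = []
    if current:
        runs.append(current)
    return runs

# (position, (methylated reads, total reads)), sorted by position
sites = [(1055, (18, 20)), (1302, (3, 25)), (1310, (1, 10)), (1400, (9, 10))]
print(low_meth_runs(sites))  # [[1302, 1310]]
```

    Production tools replace the hard cutoff with statistical models (for example, accounting for coverage at each site), but the input, per-site counts at single-base resolution, is the same.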

  17. Theory and Application of Magnetic Flux Leakage Pipeline Detection.

    Science.gov (United States)

    Shi, Yan; Zhang, Chao; Li, Rui; Cai, Maolin; Jia, Guanwei

    2015-12-10

    Magnetic flux leakage (MFL) detection is one of the most popular methods of pipeline inspection. It is a nondestructive testing technique that uses magnetically sensitive sensors to detect the leakage field of defects on both the internal and external surfaces of pipelines. This paper introduces the main principles of MFL detection and the measurement and processing of MFL data. As the key to a quantitative analysis of MFL detection, the identification of the leakage magnetic signal is also discussed, and the advantages and disadvantages of different identification methods are analyzed. The paper then briefly introduces the expert systems used. Finally, future developments in pipeline MFL detection are predicted.

  18. Comparing Existing Pipeline Networks with the Potential Scale of Future U.S. CO2 Pipeline Networks

    Energy Technology Data Exchange (ETDEWEB)

    Dooley, James J.; Dahowski, Robert T.; Davidson, Casie L.

    2008-02-29

    There is growing interest regarding the potential size of a future U.S. dedicated CO2 pipeline infrastructure if carbon dioxide capture and storage (CCS) technologies are commercially deployed on a large scale. In trying to understand the potential scale of a future national CO2 pipeline network, comparisons are often made to the existing pipeline networks used to deliver natural gas and liquid hydrocarbons to markets within the U.S. This paper assesses the potential scale of the CO2 pipeline system needed under two hypothetical climate policies and compares this to the extant U.S. pipeline infrastructures used to deliver CO2 for enhanced oil recovery (EOR), and to move natural gas and liquid hydrocarbons from areas of production and importation to markets. The data presented here suggest that the need to increase the size of the existing dedicated CO2 pipeline system should not be seen as a significant obstacle for the commercial deployment of CCS technologies.

  19. Generating pipeline networks for corrosion assessment

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, J. [Cimarron Engineering Ltd., Calgary, AB (Canada)

    2008-07-01

Production characteristics and gas-fluid compositions must be known in order to assess pipelines for internal corrosion risk. In this study, a gathering system pipeline network was built in order to determine corrosion risk for gathering system pipelines. Connections were established between feeder and collector lines in order to measure upstream production and the weighted average of the upstream composition of each pipeline in the system. A Norsok M-506 carbon dioxide (CO{sub 2}) corrosion rate model was used to calculate corrosion rates. A spreadsheet was then used to tabulate the obtained data. The analysis used straight lines drawn between the 'from' and 'to' legal sub-division (LSD) endpoints in order to represent pipelines on an Alberta township system (ATS) and identify connections between pipelines. Well connections were established based on matching surface hole location and 'from' LSDs. Well production, composition, pressure, and temperature data were sourced and recorded as well attributes. XSL hierarchical computations were used to determine the production and composition properties of the commingled inflows. It was concluded that the corrosion assessment process can identify locations within the pipeline network where potential deadlegs branched off from flowing pipelines. 4 refs., 2 tabs., 2 figs.
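The flow-weighted mixing step described above can be sketched in a few lines: the composition downstream of a junction is the production-weighted average of the inflow compositions. The function and field names below are illustrative, not taken from the study's spreadsheet.

```python
# Hedged sketch of commingled-inflow mixing at a pipeline junction.

def commingle(inflows):
    """inflows: list of (rate, {component: mole_fraction}) pairs.
    Returns (total_rate, flow-weighted blended composition)."""
    total = sum(rate for rate, _ in inflows)
    blended = {}
    for rate, comp in inflows:
        for species, frac in comp.items():
            blended[species] = blended.get(species, 0.0) + rate * frac
    return total, {s: x / total for s, x in blended.items()}

# Two hypothetical feeders with different CO2 mole fractions:
rate, comp = commingle([(100.0, {"CO2": 0.02}), (300.0, {"CO2": 0.06})])
print(rate, comp["CO2"])  # 400.0 0.05
```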

  20. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    Science.gov (United States)

    Jittamai, Phongchai

This dissertation defines the operational problems of, and develops solution methodologies for, the distribution of multiple products through an oil pipeline subject to delivery time-window constraints. A multiple-product oil pipeline is a pipeline system composed of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are widely used in logistics and scheduling, are incorporated in this study. The distribution of multiple products through an oil pipeline subject to delivery time-windows is modeled as a multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of the operating issues and problem complexity of single-source pipeline problems, along with a solution methodology to compute an input schedule that yields the minimum total time violation of the due delivery time-windows. The problem is proved to be NP-complete. A heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute an input schedule for the pipeline problem. This algorithm runs in no more than O(T·E) time. The dissertation also extends the study to examine some operating attributes and the problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced; it too runs in no more than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the problems tested were more than 30% greater than optimal values and
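The objective the heuristic minimizes can be stated compactly: each delivery contributes the amount by which it misses its time-window. The sketch below uses assumed (arrival, earliest, latest) tuples, not the dissertation's data structures.

```python
# Total time-window violation: a delivery arriving inside
# [earliest, latest] contributes nothing; otherwise it contributes
# the amount by which it misses the window.

def total_violation(deliveries):
    """deliveries: list of (arrival, earliest, latest) times."""
    total = 0.0
    for arrival, earliest, latest in deliveries:
        if arrival < earliest:
            total += earliest - arrival
        elif arrival > latest:
            total += arrival - latest
    return total

# On-time, 2 units early, 2 units late:
print(total_violation([(5, 4, 8), (2, 4, 8), (10, 4, 8)]))  # 4.0
```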

  1. A discrete event simulation model for evaluating time delays in a pipeline network

    Energy Technology Data Exchange (ETDEWEB)

    Spricigo, Deisi; Muggiati, Filipe V.; Lueders, Ricardo; Neves Junior, Flavio [Federal University of Technology of Parana (UTFPR), Curitiba, PR (Brazil)

    2009-07-01

Currently in the oil industry the logistic chain stands out as a strong candidate for obtaining the highest profit, since recent studies have pointed to cost reductions from the adoption of better policies for the distribution of oil derivatives, particularly where pipelines are used to transport products. Although there are models to represent transfers of oil derivatives in pipelines, they are quite complex and computationally burdensome. In this paper, we are interested in models that are less detailed in terms of fluid dynamics but provide more information about operational decisions in a pipeline network. We propose a discrete event simulation model in ARENA that allows simulating a pipeline network based on average historical data. Time delays for transferring different products can be evaluated through different routes. It is assumed that transport operations follow their historical behavior, so average time delays can be estimated within certain bounds. Due to its stochastic nature, time quantities are characterized by average and dispersion measures. This allows comparing different operational scenarios for product transportation. Simulation results are compared to data obtained from a real-world pipeline network, and different scenarios of production and demand are analyzed. (author)
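The statistical idea behind the model can be sketched minimally: treat historical transfer delays on a route as samples and report average and dispersion measures. The distribution and its parameters below are invented for illustration, not taken from the paper.

```python
# Characterizing route transfer delays by average and dispersion,
# as in the abstract's stochastic treatment of time quantities.
import random
import statistics

def delay_estimate(samples):
    """Average and dispersion (standard deviation) of observed delays."""
    return statistics.mean(samples), statistics.stdev(samples)

random.seed(42)
# Pretend historical delays (hours) for one route: a hypothetical
# triangular distribution -- congestion occasionally stretches a
# nominal 12-hour transfer toward 20 hours.
history = [random.triangular(10, 20, 12) for _ in range(1000)]
mean, dispersion = delay_estimate(history)
print(f"{mean:.1f} +/- {dispersion:.1f} h")
```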

  2. Calculation of NPP pipeline seismic stability

    International Nuclear Information System (INIS)

    Kirillov, A.P.; Ambriashvili, Yu.K.; Kaliberda, I.V.

    1982-01-01

A simplified procedure for assessing the seismic stability of NPP pipelines with WWER reactors is described. During the selection and arrangement of pipeline saddles and hydraulic shock absorbers, the procedure introduces resilient mountings of very high rigidity into the calculated scheme of the pipeline and performs the calculations with a step-by-step method. It is concluded that applying the procedure makes it possible to determine strains due to seismic loads, to analyze the stress state in pipeline elements and the supporting capacity of pipeline saddles under seismic loads, and to plan seismic protection measures.

  3. Slurry pipeline technology: an overview

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Jay P. [Pipeline Systems Incorporated (PSI), Belo Horizonte, MG (Brazil); Lima, Rafael; Pinto, Daniel; Vidal, Alisson [Ausenco do Brasil Engenharia Ltda., Nova Lima, MG (Brazil). PSI Div.

    2009-12-19

Slurry pipelines represent an economical and environmentally friendly transportation means for many solid materials. This paper provides an overview of the technology, its evolution and current Brazilian activity. Mineral resources are increasingly moving farther away from ports, processing plants and end use points, and slurry pipelines are an important mode of solids transport. Application guidelines are discussed. State-of-the-art technical solutions such as pipeline system simulation, pipe materials, pumps, valves, automation, telecommunications, and construction techniques that have made the technology successful are presented. A discussion of where long-distance slurry pipelines fit in a picture that also includes thickened and paste material pipelining is included. (author)

  4. Annotating individual human genomes.

    Science.gov (United States)

    Torkamani, Ali; Scott-Van Zeeland, Ashley A; Topol, Eric J; Schork, Nicholas J

    2011-10-01

    Advances in DNA sequencing technologies have made it possible to rapidly, accurately and affordably sequence entire individual human genomes. As impressive as this ability seems, however, it will not likely amount to much if one cannot extract meaningful information from individual sequence data. Annotating variations within individual genomes and providing information about their biological or phenotypic impact will thus be crucially important in moving individual sequencing projects forward, especially in the context of the clinical use of sequence information. In this paper we consider the various ways in which one might annotate individual sequence variations and point out limitations in the available methods for doing so. It is arguable that, in the foreseeable future, DNA sequencing of individual genomes will become routine for clinical, research, forensic, and personal purposes. We therefore also consider directions and areas for further research in annotating genomic variants. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. ANNOTATING INDIVIDUAL HUMAN GENOMES*

    Science.gov (United States)

    Torkamani, Ali; Scott-Van Zeeland, Ashley A.; Topol, Eric J.; Schork, Nicholas J.

    2014-01-01

Advances in DNA sequencing technologies have made it possible to rapidly, accurately and affordably sequence entire individual human genomes. As impressive as this ability seems, however, it will not likely amount to much if one cannot extract meaningful information from individual sequence data. Annotating variations within individual genomes and providing information about their biological or phenotypic impact will thus be crucially important in moving individual sequencing projects forward, especially in the context of the clinical use of sequence information. In this paper we consider the various ways in which one might annotate individual sequence variations and point out limitations in the available methods for doing so. It is arguable that, in the foreseeable future, DNA sequencing of individual genomes will become routine for clinical, research, forensic, and personal purposes. We therefore also consider directions and areas for further research in annotating genomic variants. PMID:21839162

  6. A novel approach to pipeline tensioner modeling

    Energy Technology Data Exchange (ETDEWEB)

    O' Grady, Robert; Ilie, Daniel; Lane, Michael [MCS Software Division, Galway (Ireland)

    2009-07-01

As subsea pipeline developments continue to move into deep and ultra-deep water locations, there is an increasing need for the accurate prediction of expected pipeline fatigue life. A significant factor that must be considered as part of this process is the fatigue damage sustained by the pipeline during installation. The magnitude of this installation-related damage is governed by a number of different agents, one of which is the dynamic behavior of the tensioner systems during pipe-laying operations. There are a variety of traditional finite element methods for representing dynamic tensioner behavior. These existing methods, while basic in nature, have been proven to provide adequate forecasts in terms of the dynamic variation in typical installation parameters such as top tension and sagbend/overbend strain. However, due to the simplicity of these current approaches, some of them tend to over-estimate the frequency of tensioner pay out/in under dynamic loading. This excessive level of pay out/in motion results in the prediction of additional stress cycles at certain roller beds, which in turn leads to the prediction of unrealistic fatigue damage to the pipeline. This unwarranted fatigue damage then equates to an over-conservative value for the accumulated damage experienced by a pipeline weld during installation, and so leads to a reduction in the estimated fatigue life for the pipeline. This paper describes a novel approach to tensioner modeling which allows for greater control over the velocity of dynamic tensioner pay out/in and so provides a more accurate estimation of fatigue damage experienced by the pipeline during installation. The paper reports on a case study, as outlined in the following section, in which a comparison is made between results from this new tensioner model and from a more conventional approach. The comparison considers typical installation parameters as well as an in-depth look at the predicted fatigue damage for the two methods
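The idea of limiting the velocity of pay out/in can be caricatured in a few lines: tension error drives the payout command, but the commanded velocity is clipped to a maximum. This toy is only meant to convey the concept; all parameters and the control law are invented, not the paper's model.

```python
# Toy velocity-limited tensioner: payout speed responds to tension
# error but is clipped, damping the spurious high-frequency pay
# out/in that simpler models predict. Gains and limits are invented.

def payout_velocity(tension, setpoint, gain=0.01, v_max=0.5):
    """Commanded payout velocity (m/s); positive pays pipe out."""
    v = gain * (tension - setpoint)
    return max(-v_max, min(v_max, v))

# 100 kN over the setpoint would command 1.0 m/s; the limiter
# clips it to 0.5 m/s:
print(payout_velocity(1100.0, 1000.0))  # 0.5
```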

  7. Lay Pipeline Abandonment Head during Some

    African Journals Online (AJOL)

    2016-12-01

    Dec 1, 2016 ... is very cruel to the structural integrity of the pipeline structure after ... and properties may be jeopardized should the pipeline structure be used for oil or gas transport when such ... pipelines under bending may alter the material.

  8. 78 FR 53190 - Pipeline Safety: Notice to Operators of Hazardous Liquid and Natural Gas Pipelines of a Recall on...

    Science.gov (United States)

    2013-08-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0185] Pipeline Safety: Notice to Operators of Hazardous Liquid and Natural Gas Pipelines of a Recall on Leak Repair Clamps Due to Defective Seal AGENCY: Pipeline and Hazardous Materials Safety...

  9. GSV Annotated Bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Randy S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pope, Paul A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jiang, Ming [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trucano, Timothy G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aragon, Cecilia R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ni, Kevin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wei, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); Chilton, Lawrence K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bakel, Alan [Argonne National Lab. (ANL), Argonne, IL (United States)

    2010-09-14

The following annotated bibliography was developed as part of the geospatial algorithm verification and validation (GSV) project for the Simulation, Algorithms and Modeling program of NA-22. Verification and Validation of geospatial image analysis algorithms covers a wide range of technologies. Papers in the bibliography are thus organized into the following five topic areas: Image processing and analysis, usability and validation of geospatial image analysis algorithms, image distance measures, scene modeling and image rendering, and transportation simulation models. Many other papers were studied during the course of the investigation. The annotations for these articles can be found in the paper "On the verification and validation of geospatial image analysis algorithms".

  10. Repairing method for reactor primary system pipeline

    International Nuclear Information System (INIS)

    Hosokawa, Hideyuki; Uetake, Naoto; Hara, Teruo.

    1997-01-01

Pipelines that have been decontaminated of radioactive nuclides deposited during operation of a nuclear power plant, or replacement pipelines for those contaminated with radioactive nuclides, are connected to each system of the nuclear power plant. They are heated in an oxygen-containing gas phase to form an oxide film on the pipeline surface. The thickness of the oxide film formed in the gas phase is 1 nm or greater, preferably 100 nm. The oxygen concentration in the gas phase must be 0.1% or greater. The heating is conducted by circulating a heated gas through the inside of the pipelines, or by placing a movable heater, such as a high-frequency induction heater, inside the pipelines to form the oxide film. Redeposition of radioactive nuclides can then be suppressed, and since the oxide film is formed in the gas phase, large-scale facilities are not necessary, enabling low-cost repair of reactor primary system pipelines. (N.H.)

  11. STRESS AND STRAIN STATE OF REPAIRING SECTION OF PIPELINE

    Directory of Open Access Journals (Sweden)

    V. V. Nikolaev

    2015-01-01

Full Text Available The reliability of continuous pipeline operation is a pressing problem. For this reason, an effective warning system for failures and accidents on main pipelines should be developed, applicable not only at the design and operation stages but also during selective repair. A change from the linear position, unloaded by bending, changes the stress and strain state of the pipeline; moreover, the stress and strain state should be determined and controlled in the course of the repair works. The article presents a mathematical model of the straining of a pipeline section in a viscoelastic setting, taking into account soil creep and the high-speed stress state of the pipeline, with the purpose of evaluating stresses and the load-supporting capacity of the pipeline section under repair as a function of time. The stress and strain state analysis of the pipeline includes calculation of longitudinal and circumferential stresses with account of axis-asymmetrical straining, and was performed on the basis of the momentless theory of shells. To prove the consistency of the data, the calculation results were compared with analytical solutions for different cases (strain of a long pipeline section only under a cross-axis action; strain of a long pipeline section under a longitudinal stress; strain of a long pipeline section on an elastic foundation under a cross-axis action). The comparison shows that the calculation error is not more than 3%. An analysis of the change in the stress-strain state of the pipeline section was carried out with the development of this model, which indicates an enlargement of span deflection in comparison with the solution in the elastic approach. It is also shown that, for a consistent assessment of pipeline maintenance conditions, it is necessary to consider the areas of rheological processes of soils. On the base of complex analysis of pipelines there were determined stresses and time

  12. U.S. interstate pipelines ran more efficiently in 1994

    International Nuclear Information System (INIS)

    True, W.R.

    1995-01-01

    Regulated US interstate pipelines began 1995 under the momentum of impressive efficiency improvements in 1994. Annual reports filed with the US Federal Energy Regulatory Commission (FERC) show that both natural-gas and petroleum liquids pipeline companies increased their net incomes last year despite declining operating revenues. This article discusses trends in the pipeline industry and gives data on the following: pipeline revenues, incomes--1994; current pipeline costs; pipeline costs--estimated vs. actual; current compressor construction costs; compressor costs--estimated vs. actual; US interstate mileage; investment in liquids pipelines; 10-years of land construction costs; top 10 interstate liquids pipelines; top 10 interstate gas pipelines; liquids pipeline companies; and gas pipeline companies

  13. Solar Tutorial and Annotation Resource (STAR)

    Science.gov (United States)

    Showalter, C.; Rex, R.; Hurlburt, N. E.; Zita, E. J.

    2009-12-01

    We have written a software suite designed to facilitate solar data analysis by scientists, students, and the public, anticipating enormous datasets from future instruments. Our “STAR" suite includes an interactive learning section explaining 15 classes of solar events. Users learn software tools that exploit humans’ superior ability (over computers) to identify many events. Annotation tools include time slice generation to quantify loop oscillations, the interpolation of event shapes using natural cubic splines (for loops, sigmoids, and filaments) and closed cubic splines (for coronal holes). Learning these tools in an environment where examples are provided prepares new users to comfortably utilize annotation software with new data. Upon completion of our tutorial, users are presented with media of various solar events and asked to identify and annotate the images, to test their mastery of the system. Goals of the project include public input into the data analysis of very large datasets from future solar satellites, and increased public interest and knowledge about the Sun. In 2010, the Solar Dynamics Observatory (SDO) will be launched into orbit. SDO’s advancements in solar telescope technology will generate a terabyte per day of high-quality data, requiring innovation in data management. While major projects develop automated feature recognition software, so that computers can complete much of the initial event tagging and analysis, still, that software cannot annotate features such as sigmoids, coronal magnetic loops, coronal dimming, etc., due to large amounts of data concentrated in relatively small areas. Previously, solar physicists manually annotated these features, but with the imminent influx of data it is unrealistic to expect specialized researchers to examine every image that computers cannot fully process. A new approach is needed to efficiently process these data. Providing analysis tools and data access to students and the public have proven
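The shape-interpolation tools mentioned above rest on a standard construction. Below is a minimal natural cubic spline interpolator of the textbook kind (tridiagonal solve via the Thomas algorithm); it is a generic sketch, not the STAR suite's actual code.

```python
# Natural cubic spline: zero second derivative at both endpoints.

def natural_cubic_spline(xs, ys):
    """Return a callable interpolating the knots (xs, ys)."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    # Tridiagonal system for knot second derivatives M[0..n-1];
    # rows 0 and n-1 encode the natural boundary M = 0.
    a, b, c, d = [0.0] * n, [1.0] * n, [0.0] * n, [0.0] * n
    for i in range(1, n - 1):
        a[i], b[i], c[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i]
                      - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n):                  # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * n
    M[n - 1] = d[n - 1] / b[n - 1]
    for i in range(n - 2, -1, -1):         # back substitution
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]

    def spline(x):
        i = n - 2
        for j in range(n - 1):             # locate interval (clamped)
            if x <= xs[j + 1]:
                i = j
                break
        hh = xs[i + 1] - xs[i]
        A = (xs[i + 1] - x) / hh
        B = (x - xs[i]) / hh
        return (A * ys[i] + B * ys[i + 1]
                + ((A ** 3 - A) * M[i] + (B ** 3 - B) * M[i + 1])
                * hh * hh / 6.0)

    return spline

f = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0])
print(round(f(1.0), 6))  # 1.0 -- the spline passes through its knots
```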

  14. Discovering gene annotations in biomedical text databases

    Directory of Open Access Journals (Sweden)

    Ozsoyoglu Gultekin

    2008-03-01

Full Text Available Abstract Background Genes and gene products are frequently annotated with Gene Ontology concepts based on the evidence provided in genomics articles. Manually locating and curating information about a genomic entity from the biomedical literature requires vast amounts of human effort. Hence, there is clearly a need for automated computational tools to annotate the genes and gene products with Gene Ontology concepts by computationally capturing the related knowledge embedded in textual data. Results In this article, we present an automated genomic entity annotation system, GEANN, which extracts information about the characteristics of genes and gene products in article abstracts from PubMed, and translates the discovered knowledge into Gene Ontology (GO) concepts, a widely-used standardized vocabulary of genomic traits. GEANN utilizes textual "extraction patterns", and a semantic matching framework to locate phrases matching a pattern and produce Gene Ontology annotations for genes and gene products. In our experiments, GEANN reached a precision level of 78% at a recall level of 61%. On a select set of Gene Ontology concepts, GEANN either outperforms or is comparable to two other automated annotation studies. Use of WordNet for semantic pattern matching improves the precision and recall by 24% and 15%, respectively, and the improvement due to semantic pattern matching becomes more apparent as the Gene Ontology terms become more general. Conclusion GEANN is useful for two distinct purposes: (i) automating the annotation of genomic entities with Gene Ontology concepts, and (ii) providing existing annotations with additional "evidence articles" from the literature. The use of textual extraction patterns constructed based on the existing annotations achieves high precision.
The semantic pattern matching framework provides a more flexible pattern matching scheme with respect to "exact matching", with the advantage of locating approximate
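Precision and recall figures like those reported above are computed by comparing predicted annotations with a curated gold set. The GO identifiers below are arbitrary examples, not GEANN's evaluation data.

```python
# Scoring a set of predicted annotations against a curated set.

def precision_recall(predicted, gold):
    """Return (precision, recall) of predicted vs. gold annotations."""
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

predicted = {"GO:0006355", "GO:0005634", "GO:0003677", "GO:0008150"}
gold = {"GO:0006355", "GO:0005634", "GO:0046872"}
p, r = precision_recall(predicted, gold)
print(p)  # 0.5 -- two of four predictions are in the gold set
```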

  15. Phylogenetic molecular function annotation

    International Nuclear Information System (INIS)

    Engelhardt, Barbara E; Jordan, Michael I; Repo, Susanna T; Brenner, Steven E

    2009-01-01

    It is now easier to discover thousands of protein sequences in a new microbial genome than it is to biochemically characterize the specific activity of a single protein of unknown function. The molecular functions of protein sequences have typically been predicted using homology-based computational methods, which rely on the principle that homologous proteins share a similar function. However, some protein families include groups of proteins with different molecular functions. A phylogenetic approach for predicting molecular function (sometimes called 'phylogenomics') is an effective means to predict protein molecular function. These methods incorporate functional evidence from all members of a family that have functional characterizations using the evolutionary history of the protein family to make robust predictions for the uncharacterized proteins. However, they are often difficult to apply on a genome-wide scale because of the time-consuming step of reconstructing the phylogenies of each protein to be annotated. Our automated approach for function annotation using phylogeny, the SIFTER (Statistical Inference of Function Through Evolutionary Relationships) methodology, uses a statistical graphical model to compute the probabilities of molecular functions for unannotated proteins. Our benchmark tests showed that SIFTER provides accurate functional predictions on various protein families, outperforming other available methods.
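A drastically simplified flavor of phylogeny-based propagation can be sketched as pushing a distribution over candidate functions from parent to child through a transition rule that lets function change with small probability along a branch. This toy only conveys the idea; SIFTER's actual statistical graphical model is far richer, and the `stay` parameter is invented.

```python
# Toy parent-to-child propagation of function probabilities: each
# function keeps `stay` of its mass, and the remainder spreads
# evenly over the other candidate functions.

def propagate(parent_probs, stay=0.9):
    """Push a function-probability vector one branch down the tree."""
    n = len(parent_probs)
    leak = (1.0 - stay) / (n - 1)
    return [stay * p + leak * (1.0 - p) for p in parent_probs]

child = propagate([1.0, 0.0, 0.0])   # parent certain of function 0
print([round(c, 3) for c in child])  # [0.9, 0.05, 0.05]
```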

  16. Annotating the human genome with Disease Ontology

    Science.gov (United States)

    Osborne, John D; Flatow, Jared; Holko, Michelle; Lin, Simon M; Kibbe, Warren A; Zhu, Lihua (Julie); Danila, Maria I; Feng, Gang; Chisholm, Rex L

    2009-01-01

    Background The human genome has been extensively annotated with Gene Ontology for biological functions, but minimally computationally annotated for diseases. Results We used the Unified Medical Language System (UMLS) MetaMap Transfer tool (MMTx) to discover gene-disease relationships from the GeneRIF database. We utilized a comprehensive subset of UMLS, which is disease-focused and structured as a directed acyclic graph (the Disease Ontology), to filter and interpret results from MMTx. The results were validated against the Homayouni gene collection using recall and precision measurements. We compared our results with the widely used Online Mendelian Inheritance in Man (OMIM) annotations. Conclusion The validation data set suggests a 91% recall rate and 97% precision rate of disease annotation using GeneRIF, in contrast with a 22% recall and 98% precision using OMIM. Our thesaurus-based approach allows for comparisons to be made between disease containing databases and allows for increased accuracy in disease identification through synonym matching. The much higher recall rate of our approach demonstrates that annotating human genome with Disease Ontology and GeneRIF for diseases dramatically increases the coverage of the disease annotation of human genome. PMID:19594883

  17. Annotating non-coding regions of the genome.

    Science.gov (United States)

    Alexander, Roger P; Fang, Gang; Rozowsky, Joel; Snyder, Michael; Gerstein, Mark B

    2010-08-01

    Most of the human genome consists of non-protein-coding DNA. Recently, progress has been made in annotating these non-coding regions through the interpretation of functional genomics experiments and comparative sequence analysis. One can conceptualize functional genomics analysis as involving a sequence of steps: turning the output of an experiment into a 'signal' at each base pair of the genome; smoothing this signal and segmenting it into small blocks of initial annotation; and then clustering these small blocks into larger derived annotations and networks. Finally, one can relate functional genomics annotations to conserved units and measures of conservation derived from comparative sequence analysis.
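The signal-smooth-segment chain described above can be illustrated on toy data: moving-average smoothing of a per-base signal, then merging consecutive above-threshold bases into annotation blocks. This is purely illustrative; real functional genomics pipelines operate genome-wide with far more refined methods.

```python
# Toy version of signal smoothing and segmentation into blocks.

def smooth(signal, window=3):
    """Centered moving average (window should be odd)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def segment(signal, threshold):
    """Merge runs of above-threshold positions into (start, end)
    blocks, end-exclusive."""
    blocks, start = [], None
    for i, v in enumerate(signal):
        if v >= threshold and start is None:
            start = i
        elif v < threshold and start is not None:
            blocks.append((start, i))
            start = None
    if start is not None:
        blocks.append((start, len(signal)))
    return blocks

raw = [0, 0, 5, 6, 7, 0, 0, 4, 5, 0]   # hypothetical per-base signal
print(segment(smooth(raw), threshold=3))  # [(2, 5), (7, 9)]
```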

  18. A quick guide to pipeline engineering

    CERN Document Server

    Alkazraji, D

    2008-01-01

Pipeline engineering requires an understanding of a wide range of topics. Operators must take into account numerous pipeline codes and standards, calculation approaches, and reference materials in order to make accurate and informed decisions. A Quick Guide to Pipeline Engineering provides concise, easy-to-use, and accessible information on onshore and offshore pipeline engineering. Topics covered include: design; construction; testing; operation and maintenance; and decommissioning. Basic principles are discussed and clear guidance on regulations is provided, in a way that will

  19. Elastic-plastic dynamic analysis of pipelines

    International Nuclear Information System (INIS)

    Veloso Filho, D.; Loula, A.F.D.; Guerreiro, J.N.C.

    1982-01-01

    A model for the structural analysis of spatial pipelines constituted by material with perfect elastoplastic behavior and subjected to time-dependent stress is presented. The spatial discretization is done using the Finite Element method, and for the time integration of the equations of motion a stable finite difference algorithm is used. (E.G.) [pt
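The constitutive kernel implied by "perfect elastoplastic behavior" is the classical one-dimensional return mapping: an elastic trial stress, capped at the yield stress with the excess converted to plastic strain. The sketch below is a generic textbook formulation with illustrative steel-like parameters, not the authors' model.

```python
# 1-D elastic-perfectly-plastic stress update (return mapping).

def update_stress(stress, dstrain, eps_p, E=200e3, sigma_y=250.0):
    """Advance stress by a strain increment; returns (stress, eps_p).
    E (MPa) and sigma_y (MPa) are illustrative steel-like values."""
    trial = stress + E * dstrain          # elastic predictor
    if abs(trial) <= sigma_y:
        return trial, eps_p               # still elastic
    sign = 1.0 if trial > 0 else -1.0
    dgamma = (abs(trial) - sigma_y) / E   # plastic corrector
    return sign * sigma_y, eps_p + sign * dgamma

s, ep = 0.0, 0.0
for de in [0.001, 0.001, 0.001]:          # monotonic loading
    s, ep = update_stress(s, de, ep)
print(s)  # 250.0 -- stress is capped at the yield stress
```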

  20. Systems Theory and Communication. Annotated Bibliography.

    Science.gov (United States)

    Covington, William G., Jr.

    This annotated bibliography presents annotations of 31 books and journal articles dealing with systems theory and its relation to organizational communication, marketing, information theory, and cybernetics. Materials were published between 1963 and 1992 and are listed alphabetically by author. (RS)

  1. The surplus value of semantic annotations

    NARCIS (Netherlands)

    Marx, M.

    2010-01-01

    We compare the costs of semantic annotation of textual documents to its benefits for information processing tasks. Semantic annotation can improve the performance of retrieval tasks and facilitates an improved search experience through faceted search, focused retrieval, better document summaries,

  2. Optimal valve location in long oil pipelines

    OpenAIRE

    Grigoriev, A.; Grigorieva, N.V.

    2007-01-01

    We address the valve location problem, one of the basic problems in design of long oil pipelines. Whenever a pipeline is depressurized, the shutoff valves block the oil flow and seal the damaged part of the pipeline. Thus, the quantity of oil possibly contaminating the area around the pipeline is determined by the volume of the damaged section of the pipeline between two consecutive valves. Then, ecologic damage can be quantified by the amount of leaked oil and the environmental characteristi...

  3. Solving an unpiggable pipeline challenge

    Energy Technology Data Exchange (ETDEWEB)

    Walker, James R. [GE Oil and Gas, PII Pipeline Solutions, Cramlington Northumberland (United Kingdom); Kern, Michael [National Grid, New Hampshire (United Kingdom)

    2009-07-01

    Technically, any pipeline can be retrofitted to enable in line inspection. Sensibly however, the expense of excavations and construction of permanent facilities have been, in many cases, exceedingly prohibitive. Even where traditional modifications are feasible from engineering perspectives, flow interruption may not be an option - either because they are critical supply lines or because the associated lost revenues could be nearly insurmountable. Savvy pipeline integrity managers know the safety issue that is at stake over the long term. They are also well aware of the accuracy benefits that high-quality in-line inspection data offer over potentially supply disruptive alternatives such as hydrostatic testing. To complicate matters further, many operators, particularly in the US, now face regulatory pressure to assess the integrity of their yet-uninspected pipelines located in highly populated areas. This paper describes an important project National Grid undertook that made use of a unique pipeline access method that did not require permanent installation of expensive facilities required for in line inspection of a pipeline previously considered 'unpiggable'. Since the pipeline was located in an urban area, flow disruption had to be minimized. This paper will define the project background, its challenges, outcomes and lessons learned for the future. (author)

  4. Fishing activity near offshore pipelines, 2017

    NARCIS (Netherlands)

    Machiels, Marcel

    2018-01-01

    On the North Sea bottom lie numerous pipelines to link oil- or gas offshore drilling units, - platforms and processing stations on land. Although pipeline tubes are coated and covered with protective layers, the pipelines risk being damaged through man-made hazards like anchor dropping and fishing

  5. Annotation-based enrichment of Digital Objects using open-source frameworks

    Directory of Open Access Journals (Sweden)

    Marcus Emmanuel Barnes

    2017-07-01

    Full Text Available The W3C Web Annotation Data Model, Protocol, and Vocabulary unify approaches to annotations across the web, enabling their aggregation, discovery and persistence over time. In addition, new JavaScript libraries give users the ability to annotate multi-format content. In this paper, we describe how we have leveraged these developments to provide annotation features alongside Islandora’s existing preservation, access, and management capabilities. We also discuss our experience developing with the Web Annotation Model as an open web architecture standard, as well as our approach to integrating mature external annotation libraries. The resulting software (the Web Annotation Utility Module for Islandora) accommodates annotation across multiple formats. This solution can be used in various digital scholarship contexts.

  6. 76 FR 54531 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by the Passage of Hurricanes

    Science.gov (United States)

    2011-09-01

    ... prescribed in Sec. 195.452(h).'' Operators of shallow-water gas and hazardous liquid pipelines in the Gulf of... pipeline safety: 1. Identify persons who normally engage in shallow-water commercial fishing, shrimping... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...

  7. Pipeline oil fire detection with MODIS active fire products

    Science.gov (United States)

    Ogungbuyi, M. G.; Martinez, P.; Eckardt, F. D.

    2017-12-01

    We investigate 85,129 MODIS satellite active fire events from 2007 to 2015 in the Niger Delta of Nigeria. The region is the oil base of the Nigerian economy and the hub of oil exploration, where oil facilities (i.e. flowlines, flow stations, trunklines, oil wells and oil fields) are domiciled, and from where crude oil and refined products are transported to different Nigerian locations through a network of pipeline systems. Pipelines and other oil facilities are consistently susceptible to oil leaks due to operational or maintenance error, and to acts of deliberate sabotage of pipeline equipment, which often result in explosions and fire outbreaks. We used ground oil spill reports obtained from the National Oil Spill Detection and Response Agency (NOSDRA) database (see www.oilspillmonitor.ng) to validate the MODIS satellite data. The NOSDRA database shows an estimated 10,000 spill events from 2007-2015. The spill events were filtered to include the largest spills by volume and events occurring only in the Niger Delta (i.e. 386 spills). By projecting both MODIS fires and spills as 'input vector' layers with 'Points' geometry, and the Nigerian pipeline networks as 'from vector' layers with 'LineString' geometry in a geographical information system, we extracted the MODIS events nearest to the pipelines (i.e. 2192) within a 1000 m distance in a spatial vector analysis. The extraction distance is based on global right-of-way (ROW) practice in pipeline management, which earmarks a 30 m strip of land for the pipeline. The KML files of the extracted fires in a Google map validated their source origin as oil facilities. Land cover mapping confirmed the fire anomalies. The aim of the study is to propose near-real-time monitoring of spill events along pipeline routes using the 250 m spatial resolution MODIS active fire detection sensor when such spills are accompanied by fire events in the study location.
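
    The spatial filtering step described above (keeping only fire detections within a fixed distance of the pipeline network) can be sketched in pure Python. The coordinates and the 1000 m threshold below are illustrative stand-ins for the study's projected GIS layers, not its data.

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the segment (ax, ay)-(bx, by)."""
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return math.hypot(px - ax, py - ay)
    # Project the point onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

# Hypothetical pipeline segment and fire detections, in projected metres.
pipeline = (0, 0, 5000, 0)
fires = [(100, 200), (2500, 900), (4000, 3000)]

# Keep only detections within the 1000 m buffer used in the study.
near = [p for p in fires if point_segment_distance(*p, *pipeline) <= 1000]
print(len(near))  # 2
```

In a real GIS workflow this distance test would be done with spatial predicates on the full LineString network rather than a single segment.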

  8. Living and working near pipelines : Landowner guide 2002

    International Nuclear Information System (INIS)

    Anon

    2002-01-01

    Natural gas, oil and other commodities are transported by pipelines throughout most of the country. Safety in the vicinity of a pipeline is very important because damage to a pipeline could endanger public safety and/or the environment. Before digging, written approval must be obtained from the pipeline company. If a landowner is having difficulty negotiating an agreement with the pipeline company, they should call the National Energy Board. It is illegal to construct or excavate without authorization, and the pipeline company must approve or deny a request within 10 business days. The pipeline company then has three days to locate its pipeline. A section dealing with the pipeline right-of-way is included, as well as the safety zone and the restricted area. A 10-step checklist of safety tips assists the landowner in taking the appropriate measures in the vicinity of a pipeline. A brief overview of the responsibilities of the National Energy Board is provided, followed by a list of the main pipelines regulated by the National Energy Board. 2 figs

  9. Submarine pipelines and the North Sea environment

    International Nuclear Information System (INIS)

    Haldane, D.; Paul, M.A.; Reuben, R.L.; Side, J.C.

    1992-01-01

    The function and design of pipelines for use on the United Kingdom continental shelf are described. Environmental influences which can threaten the integrity of seabed pipelines in the North Sea include hydrodynamic forces due to residual, tidal and wave currents, the nature of seabed sediments and corrosion by seawater. Damage may be caused to pipelines by interaction with vessel anchors and with fishing gear. Special care has to be taken over the selection of the general area for the landfall of a pipeline and the engineering of the installation where the pipeline comes ashore. Trenching and other protection techniques for pipelines are discussed together with hydrostatic testing and commissioning and subsequent inspection, maintenance and repair. (UK)

  10. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an

  11. Transmission pipeline calculations and simulations manual

    CERN Document Server

    Menon, E Shashi

    2014-01-01

    Transmission Pipeline Calculations and Simulations Manual is a valuable time- and money-saving tool to quickly pinpoint the essential formulae, equations, and calculations needed for transmission pipeline routing and construction decisions. The manual's three-part treatment starts with gas and petroleum data tables, followed by self-contained chapters concerning applications. Case studies at the end of each chapter provide practical experience for problem solving. Topics in this book include pressure and temperature profile of natural gas pipelines, how to size pipelines for specified f

  12. PANNZER2: a rapid functional annotation web server.

    Science.gov (United States)

    Törönen, Petri; Medlar, Alan; Holm, Liisa

    2018-05-08

    The unprecedented growth of high-throughput sequencing has led to an ever-widening annotation gap in protein databases. While computational prediction methods are available to make up the shortfall, a majority of public web servers are hindered by practical limitations and poor performance. Here, we introduce PANNZER2 (Protein ANNotation with Z-scoRE), a fast functional annotation web server that provides both Gene Ontology (GO) annotations and free text description predictions. PANNZER2 uses SANSparallel to perform high-performance homology searches, making bulk annotation based on sequence similarity practical. PANNZER2 can output GO annotations from multiple scoring functions, enabling users to see which predictions are robust across predictors. Finally, PANNZER2 predictions scored within the top 10 methods for molecular function and biological process in the CAFA2 NK-full benchmark. The PANNZER2 web server is updated on a monthly schedule and is accessible at http://ekhidna2.biocenter.helsinki.fi/sanspanz/. The source code is available under the GNU Public Licence v3.

  13. Tubular lining material for pipelines having bends

    Energy Technology Data Exchange (ETDEWEB)

    Moringa, A.; Sakaguchi, Y.; Hyodo, M.; Yagi, I.

    1987-03-24

    A tubular lining material for pipelines having bends or curved portions comprises a tubular textile jacket made of warps and wefts woven in tubular form and overlaid with a coating of a flexible synthetic resin. It is applied to the inner surface of a pipeline having bends or curved portions by inserting the tubular lining material, with a binder on its inner surface, into the pipeline and allowing it to advance within the pipeline, with or without the aid of a leading rope-like elongated element, while turning the tubular lining material inside out under fluid pressure. In this manner the tubular lining material is applied onto the inner surface of the pipeline with the binder interposed between the pipeline and the lining material. The lining material is characterized in that part or all of the warps are comprised of an elastic yarn around which, over its full length, a synthetic fiber yarn or yarns have been left- and/or right-handedly coiled. This tubular lining material is particularly suitable for lining a pipeline having an inner diameter of 25-200 mm and a plurality of bends, such as gas service pipelines or house pipelines, without wrinkles occurring in the lining material at a bend.

  14. Dynamic pressure measures for long pipeline leak detection

    Energy Technology Data Exchange (ETDEWEB)

    Likun Wang; Hongchao Wang; Min Xiong; Bin Xu; Dongjie Tan; Hengzhang Zhou [PetroChina Pipeline Company, Langfang (China). R and D Center]

    2009-07-01

    A pipeline leak detection method based on dynamic pressure is studied. The features of the dynamic pressure signal generated by a pipeline leak are analyzed. The dynamic pressure method is compared with the static pressure method, noting the advantages and disadvantages of each for pipeline leak detection. The dynamic pressure signal is well suited to leak detection because it captures rapid changes in pipeline internal pressure. Field tests show that the dynamic pressure method detects pipeline leaks rapidly and precisely. (author)

  15. Location class change impact on onshore gas pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Cassia de Oliveira; Oliveira, Luiz Fernando Seixas de [DNV Energy Solutions, Oslo (Norway); Leal, Cesar Antonio [DNV Energy Solutions, Porto Alegre, RS (Brazil); Faertes, Denise [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil). Gas and Energy

    2009-07-01

    During a pipeline's life cycle, significant changes in the population along its route may occur. Such changes are indirectly evaluated by the increase in the number of buildings constructed along the route, which determines the so-called Location Class. After licensing, such changes create differences between what is required by the standards and what was actually built. This work has two goals. One is to study the requirements of international standards and legislation, as well as some solutions used in the United States, Canada, the United Kingdom and the Netherlands, to provide a technical basis for a comparative analysis of how location class changes during the life cycle of a pipeline are treated in each country. The other goal is to present a risk-based methodology for developing a guideline for decision-making about what to do in case of a location class change. In particular, special attention is given to the requirements imposed for continuation of the pipeline's operating license. This work is of great importance for the Brazilian pipeline segment, since the existing Brazilian design standard for transmission and distribution pipelines, ABNT NBR 12712, does not deal with this issue. Moreover, a summary of the main solutions found in those countries, together with a guideline customized for the Brazilian reality, is presented. (author)

  16. Arctic pipeline planning design, construction, and equipment

    CERN Document Server

    Singh, Ramesh

    2013-01-01

    Utilize the most recent developments to combat challenges such as ice mechanics. The perfect companion for engineers wishing to learn state-of-the-art methods or further develop their knowledge of best practice techniques, Arctic Pipeline Planning provides a working knowledge of the technology and techniques for laying pipelines in the coldest regions of the world. Arctic Pipeline Planning provides must-have elements that can be utilized through all phases of arctic pipeline planning and construction. This includes information on how to: Solve challenges in designing arctic pipelines Protect pipelines from everyday threats such as ice gouging and permafrost Maintain safety and communication for construction workers while supporting typical codes and standards Covers such issues as land survey, trenching or above ground, environmental impact of construction Provides on-site problem-solving techniques utilized through all phases of arctic pipeline planning and construction Is packed with easy-to-read and under...

  17. A Flexible Object-of-Interest Annotation Framework for Online Video Portals

    Directory of Open Access Journals (Sweden)

    Robert Sorschag

    2012-02-01

    Full Text Available In this work, we address the use of object recognition techniques to annotate what is shown where in online video collections. These annotations are suitable to retrieve specific video scenes for object related text queries which is not possible with the manually generated metadata that is used by current portals. We are not the first to present object annotations that are generated with content-based analysis methods. However, the proposed framework possesses some outstanding features that offer good prospects for its application in real video portals. Firstly, it can be easily used as background module in any video environment. Secondly, it is not based on a fixed analysis chain but on an extensive recognition infrastructure that can be used with all kinds of visual features, matching and machine learning techniques. New recognition approaches can be integrated into this infrastructure with low development costs and a configuration of the used recognition approaches can be performed even on a running system. Thus, this framework might also benefit from future advances in computer vision. Thirdly, we present an automatic selection approach to support the use of different recognition strategies for different objects. Last but not least, visual analysis can be performed efficiently on distributed, multi-processor environments and a database schema is presented to store the resulting video annotations as well as the off-line generated low-level features in a compact form. We achieve promising results in an annotation case study and the instance search task of the TRECVID 2011 challenge.

  18. Global offshore pipeline markets

    International Nuclear Information System (INIS)

    Knight, R.; Parsons, B.

    2001-01-01

    In this article, two experts forecast a recovery in the offshore pipeline market followed by accelerating growth. A number of clearly definable macro trends are affecting the world offshore oil and gas industry and will be of considerable significance to the offshore pipelines industry. The authors' view is of markets that show every chance of enjoying long-term growth prospects driven by the fundamentals of a continuing increase in demand for offshore oil and gas. The offshore industry however has a highly cyclical nature, due to the impact of variations in oil and gas prices and the differing state of maturity of individual regions. Therefore those companies that are able to offer the widest range of pipe types and diameters and methods of installation across the greatest range of geographic markets are likely to prosper most. Thus, this continues to be a market best suited to those able to operate on a global scale and make a corporate commitment measured in decades

  19. Oil pipeline energy consumption and efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hooker, J.N.

    1981-01-01

    This report describes an investigation of energy consumption and efficiency of oil pipelines in the US in 1978. It is based on a simulation of the actual movement of oil on a very detailed representation of the pipeline network, and it uses engineering equations to calculate the energy that pipeline pumps must have exerted on the oil to move it in this manner. The efficiencies of pumps and drivers are estimated so as to arrive at the amount of energy consumed at pumping stations. The throughput in each pipeline segment is estimated by distributing each pipeline company's reported oil movements over its segments in proportions predicted by regression equations that show typical throughput and throughput capacity as functions of pipe diameter. The form of the equations is justified by a generalized cost-engineering study of pipelining, and their parameters are estimated using new techniques developed for the purpose. A simplified model of flow scheduling is chosen on the basis of actual energy use data obtained from a few companies. The study yields energy consumption and intensiveness estimates for crude oil trunk lines, crude oil gathering lines and oil products lines, for the nation as well as by state and by pipe diameter. It characterizes the efficiency of typical pipelines of various diameters operating at capacity. Ancillary results include estimates of oil movements by state and by diameter and approximate pipeline capacity utilization nationwide.
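
    The core engineering calculation the report describes, converting hydraulic work into station energy consumption via pump and driver efficiencies, can be sketched as follows. The function name, flow rate, pressure rise and efficiency values are illustrative assumptions, not the report's data.

```python
# Hedged sketch: energy drawn at a pumping station over a period is the
# hydraulic power (flow rate * pressure rise) divided by the pump and
# driver efficiencies, integrated over the operating hours.
def station_energy_kwh(flow_m3_s, delta_p_pa, hours,
                       pump_eff=0.80, driver_eff=0.92):
    hydraulic_w = flow_m3_s * delta_p_pa          # watts of useful work
    input_w = hydraulic_w / (pump_eff * driver_eff)
    return input_w * hours / 1000.0               # kilowatt-hours

# Illustrative station: 0.5 m^3/s against a 4 MPa rise for one day.
print(round(station_energy_kwh(0.5, 4.0e6, 24)))  # 65217
```

Summing such station-level estimates over every segment of the network is what yields the national consumption figures the report presents.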

  20. Oil and gas pipeline construction cost analysis and developing regression models for cost estimation

    Science.gov (United States)

    Thaduri, Ravi Kiran

    In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, Labor, ROW and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameter, location, pipeline volume and year of completion. In a pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths and locations. The Compressor stations are analyzed based on the capacity, year of completion and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor station, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor station for various capacities and locations.
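
    A minimal sketch of the log-space least-squares fitting behind such power-law cost models, using synthetic data points rather than the study's 180-pipeline dataset:

```python
import math

# Hedged sketch: fit a power-law unit-cost model, cost = a * D**b, by
# ordinary least squares in log space. The diameters and costs below are
# invented for illustration only.
diam = [8, 12, 16, 20, 24, 30]          # pipe diameter, inches
cost = [210, 305, 398, 470, 560, 660]   # unit cost (arbitrary units)

x = [math.log(d) for d in diam]
y = [math.log(c) for c in cost]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
# Slope and intercept of the log-log regression line.
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = math.exp(ybar - b * xbar)
print(f"cost ~ {a:.1f} * D^{b:.2f}")
```

Real cost models of this kind add further regressors (length, location, year) and diagnostics, but the fitting mechanics are the same.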

  1. Economic evaluation: wood stave pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Rook, M.E.

    The spray of leakage from the wood stave water supply pipeline serving the New England Power Company's (NEPCO) Searsburg hydroelectric development had caused this facility to be dubbed ''The Searsburg Car Wash.'' In July, 1982, excessive leakage from this pipeline prompted NEPCO to perform a technical inspection which would inform the company's decision to replace, repair, or abandon the pipeline. The inspection indicated that a combination of interrelated factors has led to rapid deterioration. The feasibility study, which included a benefit -cost analysis of a times replacement with a continued repair program weighed annually by a risk factor representing the probability of pipeline failure during the replacement period, determined that direct replacement was most advantageous. 4 figures, 1 figures.

  2. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  3. Acoustic system for communication in pipelines

    Science.gov (United States)

    Martin, II, Louis Peter; Cooper, John F [Oakland, CA]

    2008-09-09

    A system for communication in a pipe, or pipeline, or network of pipes containing a fluid. The system includes an encoding and transmitting sub-system connected to the pipe, or pipeline, or network of pipes that transmits a signal in the frequency range of 3-100 kHz into the pipe, or pipeline, or network of pipes containing a fluid, and a receiver and processor sub-system connected to the pipe, or pipeline, or network of pipes containing a fluid that receives said signal and uses said signal for a desired application.

  4. Drive Control System for Pipeline Crawl Robot Based on CAN Bus

    International Nuclear Information System (INIS)

    Chen, H J; Gao, B T; Zhang, X H; Deng, Z Q

    2006-01-01

    The drive control system plays an important role in a pipeline robot. In order to inspect flaws and corrosion in seabed crude oil pipelines, an original mobile pipeline robot with a crawler drive unit, a power and monitor unit, a central control unit, and an ultrasonic wave inspection device was developed. The CAN bus connects these different function units and provides a reliable information channel. Considering the limited space, a compact hardware system was designed based on an ARM processor with two CAN controllers. With a made-to-order CAN protocol for the crawl robot, an intelligent drive control system was developed. The implementation of the crawl robot demonstrates that the presented drive control scheme can meet the motion control requirements of the underwater pipeline crawl robot.
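
    The made-to-order CAN protocol itself is not published in the abstract. Purely as a hedged illustration of how a drive command might be packed into a CAN data field, the layout below (node id plus two signed wheel speeds) is invented, not the paper's protocol.

```python
import struct

# Hypothetical 8-byte CAN data field for a drive command: 1-byte node id,
# two signed 16-bit wheel speeds (big-endian), 3 reserved pad bytes.
def pack_drive_command(node_id, left_speed, right_speed):
    return struct.pack(">Bhh3x", node_id, left_speed, right_speed)

frame = pack_drive_command(0x12, 500, -500)
print(len(frame), frame.hex())  # classic CAN data fields carry at most 8 bytes
```

The actual transmission would hand this payload to a CAN controller driver; only the payload packing is sketched here.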

  5. Drive Control System for Pipeline Crawl Robot Based on CAN Bus

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H J [Department of Electrical Engineering, Harbin Institute of Technology Harbin, Heilongjiang, 150001 (China); Gao, B T [Department of Electrical Engineering, Harbin Institute of Technology Harbin, Heilongjiang, 150001 (China); Zhang, X H [Department of Electrical Engineering, Harbin Institute of Technology Harbin, Heilongjiang, 150001 (China); Deng, Z Q [School of Mechanical Engineering, Harbin Institute of Technology Harbin, Heilongjiang, 150001 (China)

    2006-10-15

    The drive control system plays an important role in a pipeline robot. In order to inspect flaws and corrosion in seabed crude oil pipelines, an original mobile pipeline robot with a crawler drive unit, a power and monitor unit, a central control unit, and an ultrasonic wave inspection device was developed. The CAN bus connects these different function units and provides a reliable information channel. Considering the limited space, a compact hardware system was designed based on an ARM processor with two CAN controllers. With a made-to-order CAN protocol for the crawl robot, an intelligent drive control system was developed. The implementation of the crawl robot demonstrates that the presented drive control scheme can meet the motion control requirements of the underwater pipeline crawl robot.

  6. Pipelines, inexpensive and safe mode of transport

    Energy Technology Data Exchange (ETDEWEB)

    Grover, D D

    1979-01-01

    Pipelines are the leading bulk commodity transporter and should play an even more important role in the future of energy transportation and distribution. As fossil fuel and low-cost uranium resources become depleted, it will be economical to produce hydrogen by electrolysis and transport it through underground pipelines to points of consumption. The cost would be only two to three times that of transporting natural gas per unit of heat energy and substantially less than the cost of transporting electric energy in overhead, extra-high-voltage transmission lines. Pipeline design, including economic pipe diameter; pipe material; operation by remote control and automation; cathodic protection; pipeline construction; and pipeline maintenance, particularly as regards the 1157 km long Oil India Pipeline, are discussed.

  7. Health, safety and environment risk assessment in gas pipelines by indexing method: case of Kermanshah-Sanandaj oil pipeline

    Directory of Open Access Journals (Sweden)

    Y. Hamidi

    2009-10-01

    Full Text Available Background and Aims: Using pipelines for oil products transportation involves a range of safety, health and environmental risks; this option, however, is dominant owing to its numerous advantages. The purpose of this study was a relative risk assessment of the abovementioned risks in the Kermanshah-Sanandaj Oil Pipeline. Methods: The method used in this study was the Kent Muhlbauer method, in which relative risk is assessed using third-party damage, corrosion, design, incorrect operations and a leak impact factor. Results: After applying this method, collecting the required data and performing the needed experiments, the scoring results showed 96 risk segments along the pipeline length, of which lengths 100+860, 101+384 and 103+670 had relative risk scores of 9.74, 9.82 and 9.91 respectively; these segments were therefore identified as focal risk points and priorities for improvement actions. Conclusion: Regarding the importance of pipeline failure, inspection and regular patrols along the pipeline route, precise control of the cathodic protection of the pipeline and the use of communication technologies such as SCADA or optical fibers along the pipeline route were amongst the most important control actions suggested by the study.
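
    A minimal sketch of the Kent Muhlbauer indexing arithmetic: four threat indices are summed and divided by a leak impact factor to give a relative risk score. The segment names and scores below are invented for illustration; how the resulting scores are ranked (in Muhlbauer's original convention, a higher score indicates a safer segment) depends on the scoring scheme adopted.

```python
# Hedged sketch of Muhlbauer-style index arithmetic; all numbers are
# invented for illustration, not the study's data.
def relative_risk(third_party, corrosion, design, incorrect_ops, leak_impact):
    """Sum the four threat indices and divide by the leak impact factor."""
    return (third_party + corrosion + design + incorrect_ops) / leak_impact

# Hypothetical segment scores (each index nominally scored 0-100).
segments = {
    "KP 100+860": (55, 60, 70, 65, 26),
    "KP 101+384": (50, 58, 72, 60, 25),
}
for name, scores in segments.items():
    print(name, round(relative_risk(*scores), 2))
```

Ranking all 96 segments by such a score is what lets the study single out focal points for improvement actions.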

  8. Bulletin 2005-12 : revised Alberta pipeline regulation issued

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-05-31

    A revised Pipeline Regulation has been issued and is currently available on the Alberta Energy and Utilities Board (EUB) website. Changes to the regulation reflect both changes in EUB regulatory policy and processes and technological improvements. Goals of the revision include improvements in overall pipeline performance and the implementation of recommendations from the Public Safety and Sour Gas Committee concerning sour gas pipeline safety. The regulation was reorganized for greater clarity and structured into 11 parts. Issues concerning the transition to the revised regulation were presented. The summary of notable administrative changes includes clarifications of when a pipeline application is not required; when ABSA approval is required for steam lines; situations in which low-pressure natural gas lines must be licensed; and emergency response requirements. Technical clarifications include requirements for pipeline operations and maintenance manuals; composite materials; limitations on amounts of H{sub 2}S in polymeric pipe; pressure mismatches; approval for testing with gaseous media; venting of small volumes of raw gas; right-of-way surveillance; inspection of surface construction activities; annual corrosion evaluations; registering of pipelines and excavators in controlled areas with Alberta One-Call; ground disturbance training; restoration and signage maintenance on abandoned pipelines; sour service steel pipelines; unused and abandoned pipelines; and remediation of stub ends in operating pipelines.

  9. Leak detection for city gas pipelines based on instantaneous energy distribution characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Zhigang, Chen [Beijing University of Civil Engineering and Architecture, Beijing (China)]

    2010-07-01

    Many natural gas pipelines are used in our cities. The development of efficient leakage detection systems is fundamental to safety. This paper investigated a new solution to the leak detection problem in city gas pipelines based on instantaneous energy distribution. In a theoretical approach, the Hilbert-Huang transform (HHT) was used to provide the instantaneous energy distribution feature of an unstable pressure signal. Signal noise was eliminated using the instantaneous energy distribution. A leakage detection model with instantaneous energy distribution (IED) was then established. The correlation coefficients of the instantaneous energy distribution were obtained through correlation analysis. It was found that the instantaneous energy distribution characteristics differ between pipeline states. A strong correlation of IED signal characteristics was found between the two ends of a city gas pipeline under the same operation. The test results demonstrated the reliability and validity of the method.
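
    The instantaneous-energy step (the squared envelope of the analytic signal) can be sketched with a DFT-based Hilbert transform in pure Python. This covers only the envelope computation, not the empirical mode decomposition that the full HHT also performs, and the test signal is synthetic.

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via the DFT method (pure-Python, O(n^2) sketch)."""
    n = len(x)
    X = [sum(x[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
         for m in range(n)]
    # Zero the negative frequencies, double the positive ones.
    h = [0.0] * n
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        for m in range(1, n // 2):
            h[m] = 2.0
    else:
        for m in range(1, (n + 1) // 2):
            h[m] = 2.0
    Xa = [X[m] * h[m] for m in range(n)]
    return [sum(Xa[m] * cmath.exp(2j * math.pi * m * k / n)
                for m in range(n)) / n for k in range(n)]

# Instantaneous energy = squared envelope of a synthetic pressure signal.
signal = [math.sin(2 * math.pi * k / 16) for k in range(64)]
energy = [abs(z) ** 2 for z in analytic_signal(signal)]
```

For this pure sinusoid the envelope is constant, so the instantaneous energy stays near 1; a leak transient would show up as a localized spike in `energy`.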

  10. ANNOTATION SUPPORTED OCCLUDED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Devinder Kumar

    2012-08-01

    Full Text Available Tracking occluded objects at different depths has become an extremely important component of study for any video sequence, with wide applications in object tracking, scene recognition, video coding and editing, and mosaicking. The paper studies the ability of annotation to track the occluded object based on pyramids with variation in depth, further establishing a threshold at which the ability of the system to track the occluded object fails. Image annotation is applied on 3 similar video sequences varying in depth. In the experiment, one bike occludes the other at a depth of 60cm, 80cm and 100cm respectively. Another experiment is performed on tracking humans at similar depths to corroborate the results. The paper also computes the frame-by-frame error incurred by the system, supported by detailed simulations. This system can be effectively used to analyze the error in motion tracking and further correct the error, leading to flawless tracking. This can be of great interest to computer scientists while designing surveillance systems etc.
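
    The frame-by-frame tracking error the paper reports can be illustrated with a standard overlap measure. The sketch below assumes axis-aligned bounding boxes and uses intersection-over-union (IoU) between the annotated ground-truth box and the tracker's box; the paper does not specify its exact error metric, so this is an assumed stand-in.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def per_frame_error(annotated_boxes, tracked_boxes):
    """Per-frame tracking error as 1 - IoU against the annotated ground truth."""
    return [1.0 - iou(g, p) for g, p in zip(annotated_boxes, tracked_boxes)]
```

    Plotting `per_frame_error` across a sequence makes the occlusion-induced failure threshold visible as a sustained jump in error.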

  11. SAS- Semantic Annotation Service for Geoscience resources on the web

    Science.gov (United States)

    Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.

    2015-12-01

    There is a growing need for increased integration across the data and model resources that are disseminated on the web to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision for allowing pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically-enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, in order to enable queries and analysis over annotations from a single environment (the web). SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework that supports the integration between models and data and allows semantically heterogeneous resources to interact with minimal human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge-base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models that are published on the web and allow users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.

  12. Stress analysis and mitigation measures for floating pipeline

    Science.gov (United States)

    Wenpeng, Guo; Yuqing, Liu; Chao, Li

    2017-03-01

    Pipeline floating is a contingent and uncertain type of accident that occurs on natural gas pipelines during the rainy season and is significantly harmful to pipeline safety. Treatment measures against pipeline floating accidents are summarized in this paper on the basis of practical project cases. The stress states of a pipeline upon floating are analyzed by means of the finite element method. The effectiveness of prevention measures and subsequent mitigation measures for pipeline floating is verified, giving guidance for the mitigation of such accidents.

  13. Inquiry into the Alaska road pipeline. Enquete sur le pipeline de la route de l'Alaska

    Energy Technology Data Exchange (ETDEWEB)

    Rowan, P

    1977-01-01

    This report is addressed to the Minister of Indian Affairs and to the Canadian North in Ottawa and deals with the social and economic impacts of a proposed gas pipeline in the South of Yukon and with the attitudes of the Yukonnese people with respect to this project. Many public hearings were held. The report discusses the Yukon people (Indians and non-Indians) and the consequences of the projected pipeline on the environment, the economics of the region and the way of life of its people. The report also presents the claims of the Indians pertaining to the land. It is advocated that an advance payment of $50M be made to the Indians and that the pipeline corporation pay a compensation of at least $200M to a fund administered by the Yukon. An organization for planning and regulating the pipeline should be established. It is advised to delay constructing the pipeline until August 1981. Other recommendations are made. Many witnesses supported a layout following roughly the Tintina groove, but none supported the Dawson deviation. Most witnesses opposed constructing the lateral Dempster pipeline for the moment. The report is illustrated with numerous colour photographs. 7 figs., 2 tabs.

  14. Methodology for environmental audit of execution in gas-pipelines and pipelines

    International Nuclear Information System (INIS)

    Hurtado Palomino, Maria Patricia; Vargas Bejarano, Carlos Hernando

    1999-01-01

    First, the constructive aspects and the environmental impacts related to gas pipeline and pipeline construction are presented; then a methodology for conducting environmental audits of execution in gas pipelines and pipelines is described. It comprises four basic stages: planning, pre-audit, execution and analysis, and post-audit, each with its respective activities. Generalities of a practical case are also presented in order to evaluate the applicability of the proposed methodology

  15. Best practices for the abandonment of pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Mackean, M; Reed, R; Snow, B [Nabors Canada, Calgary, AB (Canada). Abandonrite Service

    2006-07-01

    Pipeline regulations implemented in 2006 require that licensees register all pipelines. Training must also be provided for ground disturbance supervisors. In addition, signage must be maintained on abandoned pipelines, and discontinued pipelines must be properly isolated. Corrosion control and internal inhibition are required for discontinued lines. However, pipelines are often neglected during the well abandonment process. This presentation provided recommendations for coordinating well and pipeline abandonment processes. Pipeline ends can be located, depressurized, flushed and purged while wells are being abandoned. Contaminated soils around the wells can also be identified prior to reclamation activities. Administrative reviews must be conducted in order to provide accurate information on pipeline location, reclamation certification, and line break history. Field operation files must be reviewed before preliminary field work is conducted. Site inspections should be used to determine if all ends of the line are accessible. Landowners and occupants near the line must also be notified, and relevant documentation must be obtained. Skilled technicians must be used to assess the lines for obstructions as well as to cut and cap the lines after removing risers. The presentation also examined issues related to pressure change, movement, cold tapping, and live dead legs. tabs., figs.

  16. A Robust Bayesian Approach to an Optimal Replacement Policy for Gas Pipelines

    Directory of Open Access Journals (Sweden)

    José Pablo Arias-Nicolás

    2015-06-01

    Full Text Available In the paper, we address Bayesian sensitivity issues when integrating experts’ judgments with available historical data in a case study about strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions, made in previous papers, about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions, and among them, we propose the least sensitive action as the optimal choice to rank different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold determining no need for replacement. Graphical tools are introduced to help decision-makers to determine if pipelines are to be replaced and the corresponding priorities.
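
    As a minimal sketch of the kind of updating behind such replacement priorities (not the paper's actual robust-Bayesian machinery, which works with classes of priors and non-dominated actions), a conjugate Gamma-Poisson model yields a posterior failure rate per pipeline class that can be compared against the acceptable threshold. All numbers below are hypothetical.

```python
def posterior_failure_rate(prior_shape, prior_rate, failures, exposure):
    """Conjugate Gamma-Poisson update: with a Gamma(shape, rate) prior on the
    failure rate and `failures` observed over `exposure` (e.g. km-years),
    the posterior is Gamma(shape + failures, rate + exposure); return its mean."""
    return (prior_shape + failures) / (prior_rate + exposure)

def needs_replacement(rate, threshold):
    """Flag a pipeline class whose estimated failure rate exceeds the acceptable value."""
    return rate > threshold

# hypothetical class: Gamma(1, 1) prior, 4 failures over 9 km-years of service
rate = posterior_failure_rate(1.0, 1.0, 4, 9.0)  # 0.5 failures per km-year
```

    The robust-Bayesian twist in the paper is to repeat this kind of computation over a whole class of priors and keep only the replacement rankings that no admissible prior overturns.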

  17. MitoBamAnnotator: A web-based tool for detecting and annotating heteroplasmy in human mitochondrial DNA sequences.

    Science.gov (United States)

    Zhidkov, Ilia; Nagar, Tal; Mishmar, Dan; Rubin, Eitan

    2011-11-01

    The use of Next-Generation Sequencing of mitochondrial DNA is becoming widespread in biological and clinical research. This, in turn, creates a need for a convenient tool that detects and analyzes heteroplasmy. Here we present MitoBamAnnotator, a user-friendly web-based tool that allows maximum flexibility and control in heteroplasmy research. MitoBamAnnotator provides the user with a comprehensively annotated overview of mitochondrial genetic variation, allowing for an in-depth analysis with no prior knowledge in programming. Copyright © 2011 Elsevier B.V. and Mitochondria Research Society. All rights reserved.

  18. Correction of the Caulobacter crescentus NA1000 genome annotation.

    Directory of Open Access Journals (Sweden)

    Bert Ely

    Full Text Available Bacterial genome annotations are accumulating rapidly in the GenBank database and the use of automated annotation technologies to create these annotations has become the norm. However, these automated methods commonly result in a small, but significant percentage of genome annotation errors. To improve accuracy and reliability, we analyzed the Caulobacter crescentus NA1000 genome utilizing the computer programs Artemis and MICheck to manually examine the third codon position GC content, alignment to a third codon position GC frame plot peak, and matches in the GenBank database. We identified 11 new genes, modified the start site of 113 genes, and changed the reading frame of 38 genes that had been incorrectly annotated. Furthermore, our manual method of identifying protein-coding genes allowed us to remove 112 non-coding regions that had been designated as coding regions. The improved NA1000 genome annotation resulted in a reduction in the use of rare codons, since noncoding regions with atypical codon usage were removed from the annotation and 49 new coding regions were added to the annotation. Thus, a more accurate codon usage table was generated as well. These results demonstrate that a comparison of the locations of peaks in third codon position GC content to the locations of protein-coding regions could be used to verify the annotation of any genome that has a GC content that is greater than 60%.
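
    The screening signal used above — elevated GC content at third codon positions inside true coding regions of a high-GC genome — reduces to a few lines of code. A minimal sketch (the toy sequence is illustrative, not from NA1000; a real analysis would slide this over candidate ORFs in each reading frame):

```python
def gc3(seq):
    """Fraction of G or C at third codon positions of an in-frame coding sequence."""
    third = seq[2::3].upper()
    return sum(base in "GC" for base in third) / len(third)

# toy in-frame ORF fragment (illustrative only)
print(gc3("ATGGCCGATCTG"))  # 0.75
```

    In a genome with overall GC content above 60%, genuine coding regions tend to push GC3 well above the genome-wide average, while misannotated non-coding regions do not, which is what makes the peak comparison informative.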

  19. Improved annotation through genome-scale metabolic modeling of Aspergillus oryzae

    DEFF Research Database (Denmark)

    Vongsangnak, Wanwipa; Olsen, Peter; Hansen, Kim

    2008-01-01

    Background: Since ancient times the filamentous fungus Aspergillus oryzae has been used in the fermentation industry for the production of fermented sauces and the production of industrial enzymes. Recently, the genome sequence of A. oryzae with 12,074 annotated genes was released but the number...... to a genome scale metabolic model of A. oryzae. Results: From our assembled EST sequences we identified 1,046 newly predicted genes in the A. oryzae genome. Furthermore, it was possible to assign putative protein functions to 398 of the newly predicted genes. Noteworthy, our annotation strategy resulted...... model was validated and shown to correctly describe the phenotypic behavior of A. oryzae grown on different carbon sources. Conclusion: A much enhanced annotation of the A. oryzae genome was performed and a genome-scale metabolic model of A. oryzae was reconstructed. The model accurately predicted...

  20. Comparisons of sediment losses from a newly constructed cross-country natural gas pipeline and an existing in-road pipeline

    Science.gov (United States)

    Pamela J. Edwards; Bridget M. Harrison; Daniel J. Holz; Karl W.J. Williard; Jon E. Schoonover

    2014-01-01

    Sediment loads were measured for about one year from natural gas pipelines in two studies in north central West Virginia. One study involved a 1-year-old pipeline buried within the bed of a 25-year-old skid road, and the other involved a newly constructed cross-country pipeline. Both pipelines were the same diameter and were installed using similar trenching and...

  1. 77 FR 27279 - Pipeline Safety: Information Collection Activities

    Science.gov (United States)

    2012-05-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... collections relate to the pipeline integrity management requirements for gas transmission pipeline operators... Management in High Consequence Areas Gas Transmission Pipeline Operators. OMB Control Number: 2137-0610...

  2. 75 FR 53733 - Pipeline Safety: Information Collection Activities

    Science.gov (United States)

    2010-09-01

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2010-0246] Pipeline Safety: Information Collection Activities AGENCY: Pipeline and Hazardous... liquefied natural gas, hazardous liquid, and gas transmission pipeline systems operated by a company. The...

  3. 77 FR 46155 - Pipeline Safety: Information Collection Activities

    Science.gov (United States)

    2012-08-02

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... collections relate to the pipeline integrity management requirements for gas transmission pipeline operators... Management in High Consequence Areas Gas Transmission Pipeline Operators. OMB Control Number: 2137-0610...

  4. Generation of silver standard concept annotations from biomedical texts with special relevance to phenotypes.

    Science.gov (United States)

    Oellrich, Anika; Collier, Nigel; Smedley, Damian; Groza, Tudor

    2015-01-01

    Electronic health records and scientific articles possess differing linguistic characteristics that may impact the performance of natural language processing tools developed for one or the other. In this paper, we investigate the performance of four extant concept recognition tools: the clinical Text Analysis and Knowledge Extraction System (cTAKES), the National Center for Biomedical Ontology (NCBO) Annotator, the Biomedical Concept Annotation System (BeCAS) and MetaMap. Each of the four concept recognition systems is applied to four different corpora: the i2b2 corpus of clinical documents, a PubMed corpus of Medline abstracts, a clinical trials corpus and the ShARe/CLEF corpus. In addition, we assess the individual system performances with respect to one gold standard annotation set, available for the ShARe/CLEF corpus. Furthermore, we built a silver standard annotation set from the individual systems' output and assess the quality as well as the contribution of individual systems to the quality of the silver standard. Our results demonstrate that mainly the NCBO annotator and cTAKES contribute to the silver standard corpora (F1-measures in the range of 21% to 74%) and their quality (best F1-measure of 33%), independent from the type of text investigated. While BeCAS and MetaMap can contribute to the precision of silver standard annotations (precision of up to 42%), the F1-measure drops when combined with NCBO Annotator and cTAKES due to a low recall. In conclusion, the performances of individual systems need to be improved independently from the text types, and the leveraging strategies to best take advantage of individual systems' annotations need to be revised. The textual content of the PubMed corpus, accession numbers for the clinical trials corpus, and assigned annotations of the four concept recognition systems as well as the generated silver standard annotation sets are available from http://purl.org/phenotype/resources. The textual content of the Sh
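
    The evaluation arithmetic behind these figures is standard: precision, recall, and F1 over sets of annotations, plus a voting rule to aggregate system outputs into a silver standard. A minimal sketch (the voting threshold and toy annotations are assumptions, not the paper's exact harmonization procedure, which also has to reconcile overlapping text spans):

```python
from collections import Counter

def precision_recall_f1(predicted, gold):
    """Set-based precision, recall, and F1 between two annotation sets."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def silver_standard(system_outputs, min_votes=2):
    """Keep annotations proposed by at least `min_votes` of the systems."""
    votes = Counter(a for output in system_outputs for a in set(output))
    return {a for a, n in votes.items() if n >= min_votes}
```

    The recall-driven F1 drop reported for BeCAS and MetaMap follows directly from the harmonic mean: precise systems that miss many gold annotations cannot lift F1 even when their votes improve silver-standard precision.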

  5. Generation of silver standard concept annotations from biomedical texts with special relevance to phenotypes.

    Directory of Open Access Journals (Sweden)

    Anika Oellrich

    Full Text Available Electronic health records and scientific articles possess differing linguistic characteristics that may impact the performance of natural language processing tools developed for one or the other. In this paper, we investigate the performance of four extant concept recognition tools: the clinical Text Analysis and Knowledge Extraction System (cTAKES), the National Center for Biomedical Ontology (NCBO) Annotator, the Biomedical Concept Annotation System (BeCAS) and MetaMap. Each of the four concept recognition systems is applied to four different corpora: the i2b2 corpus of clinical documents, a PubMed corpus of Medline abstracts, a clinical trials corpus and the ShARe/CLEF corpus. In addition, we assess the individual system performances with respect to one gold standard annotation set, available for the ShARe/CLEF corpus. Furthermore, we built a silver standard annotation set from the individual systems' output and assess the quality as well as the contribution of individual systems to the quality of the silver standard. Our results demonstrate that mainly the NCBO annotator and cTAKES contribute to the silver standard corpora (F1-measures in the range of 21% to 74%) and their quality (best F1-measure of 33%), independent from the type of text investigated. While BeCAS and MetaMap can contribute to the precision of silver standard annotations (precision of up to 42%), the F1-measure drops when combined with NCBO Annotator and cTAKES due to a low recall. In conclusion, the performances of individual systems need to be improved independently from the text types, and the leveraging strategies to best take advantage of individual systems' annotations need to be revised. The textual content of the PubMed corpus, accession numbers for the clinical trials corpus, and assigned annotations of the four concept recognition systems as well as the generated silver standard annotation sets are available from http://purl.org/phenotype/resources.
The textual content

  6. Environmental, public health, and safety assessment of fuel pipelines and other freight transportation modes

    International Nuclear Information System (INIS)

    Strogen, Bret; Bell, Kendon; Breunig, Hanna; Zilberman, David

    2016-01-01

    Highlights:
    • Externalities are examined for pipelines, truck, rail, and barge.
    • Safety impact factors include incidences of injuries, illnesses, and fatalities.
    • Environmental impact factors include CO_2eq emissions and air pollution disease burden.
    • Externalities are estimated for constructing and operating a large domestic pipeline.
    • A large pipeline has lower cumulative impacts than other modes within ten years.
    Abstract: The construction of pipelines along high-throughput fuel corridors can alleviate demand for rail, barge, and truck transportation. Pipelines have a very different externality profile than other freight transportation modes due to differences in construction, operation, and maintenance requirements; labor, energy, and material input intensity; location and profile of emissions from operations; and frequency and magnitude of environmental and safety incidents. Therefore, public policy makers have a strong justification to influence the economic viability of pipelines. We use data from prior literature and U.S. government statistics to estimate environmental, public health, and safety characterization factors for pipelines and other modes. In 2008, two pipeline companies proposed the construction of an ethanol pipeline from the Midwest to Northeast United States. This proposed project informs our case study of a 2735-km $3.5 billion pipeline (2009 USD), for which we evaluate potential long-term societal impacts including life-cycle costs, greenhouse gas emissions, employment, injuries, fatalities, and public health impacts. Although it may take decades to break even economically, and would result in lower cumulative employment, such a pipeline would likely have fewer safety incidents, pollution emissions, and health damages than the alternative multimodal system in less than ten years; these results stand even if comparing future cleaner ground transport modes to a pipeline that utilizes electricity produced from coal

  7. 78 FR 46560 - Pipeline Safety: Class Location Requirements

    Science.gov (United States)

    2013-08-01

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... class location requirements for gas transmission pipelines. Section 5 of the Pipeline Safety, Regulatory... and, with respect to gas transmission pipeline facilities, whether applying IMP requirements to...

  8. 77 FR 15453 - Pipeline Safety: Information Collection Activities

    Science.gov (United States)

    2012-03-15

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... information collection titled, ``Gas Pipeline Safety Program Certification and Hazardous Liquid Pipeline... collection request that PHMSA will be submitting to OMB for renewal titled, ``Gas Pipeline Safety Program...

  9. Development of ecologically safe method for main oil and gas pipeline trenching

    Directory of Open Access Journals (Sweden)

    Akhmedov Asvar Mikdadovich

    2014-05-01

    Full Text Available Constructive, technical and technological reliability of a major pipeline ensures ecological safety at different stages of its life cycle, beginning with project preparation activities and extending to the end of major pipeline operation. Even in the transition into a new life cycle stage, whether the pipeline needs major repairs or reconstruction, technical and technological solutions should be found that preserve the ecological stability of the nature-anthropogenic system. Development of ecology protection technologies for the construction, reconstruction and major repairs of main pipelines is of great importance not only for a region but for ecological safety across the globe. The article presents a new way of trenching the main oil and gas pipeline that preserves and increases ecological safety during its service. The paper gives an updated technological plan for the overhaul of the main oil and gas pipeline using the new technology of pipeline trenching. The suggested technical solution contributes to environment preservation with the help of deteriorating shells: the shells’ material decomposes into environment-friendly components of carbon dioxide, water and humus. The quantity of polluting agents in the atmosphere decreases with the decrease of the construction term and the quantity of technical equipment.

  10. Pipeline Decommissioning Trial AWE Berkshire UK - 13619

    Energy Technology Data Exchange (ETDEWEB)

    Agnew, Kieran [AWE, Aldermaston, Reading, RG7 4PR (United Kingdom)

    2013-07-01

    This Paper details the implementation of a 'Decommissioning Trial' to assess the feasibility of decommissioning the redundant pipeline operated by AWE located in Berkshire UK. The paper also presents the tool box of decommissioning techniques that were developed during the decommissioning trial. Constructed in the 1950's and operated until 2005, AWE used a pipeline for the authorised discharge of treated effluent. Now redundant, the pipeline is under a care and surveillance regime awaiting decommissioning. The pipeline is some 18.5 km in length and extends from the AWE site to the River Thames. Along its route the pipeline passes along and under several major roads, railway lines and rivers as well as travelling through woodland, agricultural land and residential areas. Currently under care and surveillance, AWE is considering a number of options for decommissioning the pipeline. One option is to remove the pipeline. In order to assist option evaluation and assess the feasibility of removing the pipeline, a decommissioning trial was undertaken and sections of the pipeline were removed within the AWE site. The objectives of the decommissioning trial were to: - Demonstrate to stakeholders that the pipeline can be removed safely, securely and cleanly - Develop a 'tool box' of methods that could be deployed to remove the pipeline - Replicate the conditions and environments encountered along the route of the pipeline The onsite trial was also designed to replicate the prevailing physical conditions and constraints encountered along the remainder of its route, i.e. working along a narrow corridor, working in close proximity to roads, working in proximity to above ground and underground services (e.g. Gas, Water, Electricity). By undertaking the decommissioning trial AWE have successfully demonstrated the pipeline can be decommissioned in a safe, secure and clean manner and have developed a tool box of decommissioning techniques. The tool box includes

  11. Ten steps to get started in Genome Assembly and Annotation [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Victoria Dominguez Del Angel

    2018-02-01

    Full Text Available As a part of the ELIXIR-EXCELERATE efforts in capacity building, we present here 10 steps to facilitate researchers getting started in genome assembly and genome annotation. The guidelines given are broadly applicable, intended to be stable over time, and cover all aspects from start to finish of a general assembly and annotation project. Intrinsic properties of genomes are discussed, as is the importance of using high quality DNA. Different sequencing technologies and generally applicable workflows for genome assembly are also detailed. We cover structural and functional annotation and encourage readers to also annotate transposable elements, something that is often omitted from annotation workflows. The importance of data management is stressed, and we give advice on where to submit data and how to make your results Findable, Accessible, Interoperable, and Reusable (FAIR).
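
    One assembly statistic routinely reported in the workflows such guidelines describe is N50: the contig length at which contigs of at least that length contain half of the total assembled bases. A minimal computation (the toy contig lengths are illustrative, not from any real assembly):

```python
def n50(contig_lengths):
    """N50: the length L such that contigs of length >= L contain at least
    half of the total assembled bases."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2.0
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

# toy assembly of four contigs
print(n50([100, 50, 30, 20]))  # 100
```

    N50 rewards fewer, longer contigs, which is why it is a common quality target when comparing assemblies of the same genome; it says nothing about correctness, so it is usually read alongside completeness metrics.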

  12. Annotation of regular polysemy and underspecification

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria

    2013-01-01

    We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...

  13. PCAS – a precomputed proteome annotation database resource

    Directory of Open Access Journals (Sweden)

    Luo Jingchu

    2003-11-01

    Full Text Available Abstract Background Many model proteomes or "complete" sets of proteins of given organisms are now publicly available. Much effort has been invested in computational annotation of those "draft" proteomes. Motif or domain based algorithms play a pivotal role in functional classification of proteins. Employing most available computational algorithms, mainly motif or domain recognition algorithms, we set out to develop an online proteome annotation system with integrated proteome annotation data to complement existing resources. Results We report here the development of PCAS (ProteinCentric Annotation System) as an online resource of pre-computed proteome annotation data. We applied most available motif or domain databases and their analysis methods, including hmmpfam search of HMMs in Pfam, SMART and TIGRFAM, RPS-PSIBLAST search of PSSMs in CDD, pfscan of PROSITE patterns and profiles, as well as PSI-BLAST search of SUPERFAMILY PSSMs. In addition, signal peptides and TM regions are predicted using SignalP and TMHMM respectively. We mapped SUPERFAMILY and COGs to InterPro, so the motif or domain databases are integrated through InterPro. PCAS displays table summaries of pre-computed data and a graphical presentation of motifs or domains relative to the protein. As of now, PCAS contains the human IPI, mouse IPI, and rat IPI, A. thaliana, C. elegans, D. melanogaster, S. cerevisiae, and S. pombe proteomes. PCAS is available at http://pak.cbi.pku.edu.cn/proteome/gca.php Conclusion PCAS gives better annotation coverage for model proteomes by employing a wider collection of available algorithms. Besides presenting the most confident annotation data, PCAS also allows customized query so users can inspect statistically less significant boundary information as well. Therefore, besides providing general annotation information, PCAS could be used as a discovery platform. We plan to update PCAS twice a year. We will upgrade PCAS when new proteome annotation algorithms
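
    Of the motif formalisms PCAS aggregates, PROSITE patterns are the simplest to illustrate. A hedged sketch of translating a basic pattern into a regular expression, handling only literal residues, `x`, and `x(n)` (real PROSITE syntax also has repeat ranges, bracketed alternatives, exclusions, and anchors, which tools like pfscan handle properly):

```python
import re

def prosite_to_regex(pattern):
    """Translate a simplified PROSITE-style pattern (literal residues, 'x',
    and 'x(n)' only) into a regular expression for scanning protein sequences."""
    parts = []
    for token in pattern.split("-"):
        repeat = re.fullmatch(r"x\((\d+)\)", token)
        if repeat:
            parts.append("." * int(repeat.group(1)))  # x(n): n arbitrary residues
        elif token == "x":
            parts.append(".")                          # x: one arbitrary residue
        else:
            parts.append(token)                        # literal amino acid
    return "".join(parts)

print(prosite_to_regex("C-x(2)-C-x(3)-H"))  # C..C...H
```

    Scanning a proteome then reduces to `re.finditer` over each sequence; the profile-based methods PCAS also runs (HMMs, PSSMs) score matches probabilistically instead of this all-or-nothing matching.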

  14. 77 FR 51848 - Pipeline Safety: Information Collection Activities

    Science.gov (United States)

    2012-08-27

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Program for Gas Distribution Pipelines. DATES: Interested persons are invited to submit comments on or.... These regulations require operators of hazardous liquid pipelines and gas pipelines to develop and...

  15. 77 FR 26822 - Pipeline Safety: Verification of Records

    Science.gov (United States)

    2012-05-07

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0068] Pipeline Safety: Verification of Records AGENCY: Pipeline and Hazardous Materials... issuing an Advisory Bulletin to remind operators of gas and hazardous liquid pipeline facilities to verify...

  16. 77 FR 74275 - Pipeline Safety: Information Collection Activities

    Science.gov (United States)

    2012-12-13

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No.... These regulations require operators of hazardous liquid pipelines and gas pipelines to develop and... control room. Affected Public: Operators of both natural gas and hazardous liquid pipeline systems. Annual...

  17. A semi-automatic annotation tool for cooking video

    Science.gov (United States)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  18. Increase of ecological safety of the pipeline

    International Nuclear Information System (INIS)

    Dr Movsumov, Sh.N.; Prof Aliyev, F.G.

    2005-01-01

Full text: To increase the ecological safety of a pipeline, it is necessary to decrease the damage (risk) that the pipeline poses to the surrounding natural environment, which depends on: the frequency of pipeline damage; the volume of oil spilled; and the sensitivity factor of the environment where the oil spill occurred. The frequency of pipeline damage depends on the physico-chemical properties of the pipeline material, on its technical characteristics (wall thickness, pipe length, working pressure), on the seismicity of the district the pipeline crosses, and also on the way the pipeline is laid (underground or aboveground). The volume of oil spilled depends on the diameter of the damage received, on the stability of the pipeline against mechanical and other external actions, on the ambient temperature, on the capacity of the pipeline, on the distance between the valves installed in the pipeline, and also on the time necessary for their full closing. The sensitivity factor of the environment depends on the geological structure and landscapes of the district (mountains, rivers, settlements) the pipeline crosses. The report considers questions of increasing the ecological safety of the pipeline at its design, construction, and exploitation. To improve the ecological safety of the pipeline it is necessary to carry out the following actions: ecological education of the public living along the oil pipeline route; carrying out ecological monitoring; working out a public plan of response to oil spills. For ecological education of the public it is necessary: to inform the public on all (technical, ecological, socio-economic and legal) questions connected to the oil pipeline, and also on methods of protecting their rights when participating in the acceptance of ecologically significant decisions; to create public groups for carrying out activity on observance of the legislation and prevention of risks; exposure of hot

  19. Experiments with crowdsourced re-annotation of a POS tagging data set

    DEFF Research Database (Denmark)

    Hovy, Dirk; Plank, Barbara; Søgaard, Anders

    2014-01-01

Crowdsourcing lets us collect multiple annotations for an item from several annotators. Typically, these are annotations for non-sequential classification tasks. While there has been some work on crowdsourcing named entity annotations, researchers have assumed that syntactic tasks such as part-of-speech (POS) tagging cannot be crowdsourced. This paper shows that workers can actually annotate sequential data almost as well as experts. Further, we show that the models learned from crowdsourced annotations fare as well as the models learned from expert annotations in downstream tasks.
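The aggregation step implied above — collapsing several annotators' tag sequences into one consensus sequence — can be sketched with per-token majority voting. This is a hedged illustration only; the paper's own aggregation may use a more elaborate model, and the tag set shown is hypothetical:

```python
from collections import Counter

def aggregate_tags(annotations):
    """Majority-vote a consensus tag sequence from several annotators.

    annotations: list of equal-length tag sequences, one per annotator.
    Ties are broken by the first-encountered tag (Counter keeps
    insertion order for equal counts in Python 3.7+).
    """
    n_tokens = len(annotations[0])
    return [
        Counter(seq[i] for seq in annotations).most_common(1)[0][0]
        for i in range(n_tokens)
    ]
```

For example, three workers tagging a two-token sentence as `NOUN VERB`, `NOUN NOUN`, `NOUN VERB` yield the consensus `NOUN VERB`.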

  20. Pipeline monitoring with unmanned aerial vehicles

    Science.gov (United States)

    Kochetkova, L. I.

    2018-05-01

Pipeline leakage during transportation of combustible substances leads to explosions and fires, causing loss of life and destruction of production and residential facilities. Continuous pipeline monitoring allows leaks to be identified in due time and measures for their elimination to be taken quickly. The paper describes a solution for identifying pipeline leakage using unmanned aerial vehicles. It is recommended to apply spectral analysis of the input RGB signal to identify pipeline damage. The application of multi-zone digital images allows potential spills of oil hydrocarbons, as well as possible soil pollution, to be detected. The method of multi-temporal digital images within the visible region makes it possible to detect changes in soil morphology for subsequent analysis. The given solution is cost-efficient and reliable, reducing the time and labor required in comparison with other methods of pipeline monitoring.

  1. Efficiency improvements in pipeline transportation systems

    Energy Technology Data Exchange (ETDEWEB)

    Banks, W. F.; Horton, J. F.

    1977-09-09

    This report identifies potential energy-conservative pipeline innovations that are most energy- and cost-effective and formulates recommendations for the R, D, and D programs needed to exploit those opportunities. From a candidate field of over twenty classes of efficiency improvements, eight systems are recommended for pursuit. Most of these possess two highly important attributes: large potential energy savings and broad applicability outside the pipeline industry. The R, D, and D program for each improvement and the recommended immediate next step are described. The eight technologies recommended for R, D, and D are gas-fired combined cycle compressor station; internally cooled internal combustion engine; methanol-coal slurry pipeline; methanol-coal slurry-fired and coal-fired engines; indirect-fired coal-burning combined-cycle pump station; fuel-cell pump station; drag-reducing additives in liquid pipelines; and internal coatings in pipelines.

  2. 75 FR 73160 - Pipeline Safety: Information Collection Activities

    Science.gov (United States)

    2010-11-29

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...-Related Conditions on Gas, Hazardous Liquid, and Carbon Dioxide Pipelines and Liquefied Natural Gas... Pipelines and Liquefied Natural Gas Facilities.'' The Pipeline Safety Laws (49 U.S.C. 60132) require each...

  3. California Natural Gas Pipelines: A Brief Guide

    Energy Technology Data Exchange (ETDEWEB)

    Neuscamman, Stephanie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Price, Don [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pezzola, Genny [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glascoe, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-22

    The purpose of this document is to familiarize the reader with the general configuration and operation of the natural gas pipelines in California and to discuss potential LLNL contributions that would support the Partnership for the 21st Century collaboration. First, pipeline infrastructure will be reviewed. Then, recent pipeline events will be examined. Selected current pipeline industry research will be summarized. Finally, industry acronyms are listed for reference.

  4. Ground Truth Annotation in T Analyst

    DEFF Research Database (Denmark)

    2015-01-01

    This video shows how to annotate the ground truth tracks in the thermal videos. The ground truth tracks are produced to be able to compare them to tracks obtained from a Computer Vision tracking approach. The program used for annotation is T-Analyst, which is developed by Aliaksei Laureshyn, Ph...

  5. Application of whole slide image markup and annotation for pathologist knowledge capture.

    Science.gov (United States)

    Campbell, Walter S; Foster, Kirk W; Hinrichs, Steven H

    2013-01-01

The ability to transfer image markup and annotation data from one scanned image of a slide to a newly acquired image of the same slide within a single vendor platform was investigated. The goal was to study the ability to use image markup and annotation data files as a mechanism to capture and retain pathologist knowledge without retaining the entire whole slide image (WSI) file. Accepted mathematical principles were investigated as a method to overcome variations in scans of the same glass slide and to accurately associate image markup and annotation data across different WSI of the same glass slide. Trilateration was used to link fixed points within the image and slide to the placement of markups and annotations of the image in a metadata file. Variation in markup and annotation placement between WSI of the same glass slide was reduced from over 80 μm to less than 4 μm in the x-axis and from 17 μm to 6 μm in the y-axis (P < 0.025). This methodology allows for the creation of a highly reproducible image library of histopathology images and interpretations for educational and research use.
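The trilateration idea above — re-locating a markup point from its distances to fixed landmarks shared by two scans — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the vendor's implementation; landmark coordinates and units (slide microns) are hypothetical:

```python
import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover (x, y) from distances r1, r2, r3 to three known landmarks.

    Subtracting pairs of the three circle equations
    (x - xi)^2 + (y - yi)^2 = ri^2 leaves a 2x2 linear system.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d  # zero iff the three landmarks are collinear
    return (c * e - b * f) / det, (a * f - c * d) / det
```

To transfer an annotation, its distances to three registration landmarks would be measured on the first scan, the same landmarks re-located on the new scan, and `trilaterate` evaluated in the new scan's coordinate frame.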

  6. Pipeline integrity: ILI baseline data for QRA

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Todd R. [Tuboscope Pipeline Services, Houston, TX (United States)]. E-mail: tporter@varco.com; Silva, Jose Augusto Pereira da [Pipeway Engenharia, Rio de Janeiro, RJ (Brazil)]. E-mail: guto@pipeway.com; Marr, James [MARR and Associates, Calgary, AB (Canada)]. E-mail: jmarr@marr-associates.com

    2003-07-01

The initial phase of a pipeline integrity management program (IMP) is conducting a baseline assessment of the pipeline system and segments as part of Quantitative Risk Assessment (QRA). This gives the operator's integrity team the opportunity to identify critical areas and deficiencies in the protection, maintenance, and mitigation strategies. As a part of data gathering and integration of a wide variety of sources, in-line inspection (ILI) data is a key element. In order to move forward in the integrity program development and execution, the baseline geometry of the pipeline must be determined with accuracy and confidence. From this, all subsequent analysis and conclusions will be derived. Tuboscope Pipeline Services (TPS), in conjunction with Pipeway Engenharia of Brazil, operate ILI inertial navigation system (INS) and Caliper geometry tools to address this integrity requirement. This INS and Caliper ILI tool data provides pipeline trajectory at centimeter-level resolution and sub-metre 3D position accuracy, along with internal geometry - ovality, dents, misalignment, and wrinkle/buckle characterization. Global strain can be derived from precise INS curvature measurements and departure from the initial pipeline state. Accurate pipeline elevation profile data is essential in the identification of sag/over bend sections for fluid dynamic and hydrostatic calculations. This data, along with pipeline construction, operations, direct assessment and maintenance data, is integrated in LinaViewPRO™, a pipeline data management system for decision support functions, and subsequent QRA operations. This technology provides the baseline for an informed, accurate and confident integrity management program. This paper/presentation will detail these aspects of an effective IMP, and experience will be presented showing the benefits for liquid and gas pipeline systems. (author)

  7. Propagating annotations of molecular networks using in silico fragmentation.

    Science.gov (United States)

    da Silva, Ricardo R; Wang, Mingxun; Nothias, Louis-Félix; van der Hooft, Justin J J; Caraballo-Rodríguez, Andrés Mauricio; Fox, Evan; Balunas, Marcy J; Klassen, Jonathan L; Lopes, Norberto Peporine; Dorrestein, Pieter C

    2018-04-18

    The annotation of small molecules is one of the most challenging and important steps in untargeted mass spectrometry analysis, as most of our biological interpretations rely on structural annotations. Molecular networking has emerged as a structured way to organize and mine data from untargeted tandem mass spectrometry (MS/MS) experiments and has been widely applied to propagate annotations. However, propagation is done through manual inspection of MS/MS spectra connected in the spectral networks and is only possible when a reference library spectrum is available. One of the alternative approaches used to annotate an unknown fragmentation mass spectrum is through the use of in silico predictions. One of the challenges of in silico annotation is the uncertainty around the correct structure among the predicted candidate lists. Here we show how molecular networking can be used to improve the accuracy of in silico predictions through propagation of structural annotations, even when there is no match to a MS/MS spectrum in spectral libraries. This is accomplished through creating a network consensus of re-ranked structural candidates using the molecular network topology and structural similarity to improve in silico annotations. The Network Annotation Propagation (NAP) tool is accessible through the GNPS web-platform https://gnps.ucsd.edu/ProteoSAFe/static/gnps-theoretical.jsp.
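The consensus re-ranking idea described above can be sketched as follows — a deliberately simplified illustration of combining an in silico candidate score with structural-similarity support from network neighbors. The weighting scheme, the `alpha` parameter, and the `similarity` function are assumptions for illustration, not the published NAP algorithm:

```python
def network_consensus_rerank(candidates, neighbor_candidates, similarity, alpha=0.5):
    """Re-rank one spectrum node's structure candidates using its neighbors.

    candidates: list of (structure, in_silico_score) for the node.
    neighbor_candidates: list of such lists, one per connected node.
    similarity: function(structure_a, structure_b) -> value in [0, 1].
    alpha: weight of the original in silico score (assumed value).
    """
    reranked = []
    for struct, score in candidates:
        # Support = best structural similarity to any candidate of each
        # neighbor, averaged: connected spectra should be similar molecules.
        support = sum(
            max(similarity(struct, s) for s, _ in neigh)
            for neigh in neighbor_candidates
        ) / max(len(neighbor_candidates), 1)
        reranked.append((struct, alpha * score + (1 - alpha) * support))
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)
```

With this scheme, a candidate that scores slightly lower in isolation can rise to the top when its structure resembles the candidates of connected spectra, which is the intuition behind annotation propagation.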

  8. Gene calling and bacterial genome annotation with BG7.

    Science.gov (United States)

    Tobes, Raquel; Pareja-Tobes, Pablo; Manrique, Marina; Pareja-Tobes, Eduardo; Kovach, Evdokim; Alekhin, Alexey; Pareja, Eduardo

    2015-01-01

New massive sequencing technologies are providing many bacterial genome sequences from diverse taxa, but a refined annotation of these genomes is crucial for obtaining scientific findings and new knowledge. Thus, bacterial genome annotation has emerged as a key point to investigate in bacteria. Any efficient tool designed specifically to annotate bacterial genomes sequenced with massively parallel technologies has to consider the specific features of bacterial genomes (absence of introns and scarcity of nonprotein-coding sequence) and of next-generation sequencing (NGS) technologies (presence of errors and not perfectly assembled genomes). These features make it convenient to focus on coding regions and, hence, on protein sequences, which are the elements directly related with biological functions. In this chapter we describe how to annotate bacterial genomes with BG7, an open-source tool based on a protein-centered gene calling/annotation paradigm. BG7 is specifically designed for the annotation of bacterial genomes sequenced with NGS. This tool is sequence-error tolerant, maintaining its capabilities for the annotation of highly fragmented genomes or for annotating mixed sequences coming from several genomes (such as those obtained from metagenomics samples). BG7 has been designed with scalability as a requirement, with a computing infrastructure completely based on cloud computing (Amazon Web Services).

  9. Flooding simulation of hilly pipeline commissioning process

    Energy Technology Data Exchange (ETDEWEB)

    Nan, Zhang [China National Oil and Gas Exploration and Development Corporation and China University of Petroleum, Beijing (China); Jing, Gong [China University of Petroleum, Beijing (China); Baoli, Zhu [China National Oil and Gas Exploration and Development Corporation, Beijing (China); Lin, Zheng [CNPC Oil and Gas Control Center, Beijing (China)

    2010-07-01

When the construction of a pipeline has been completed, the pipeline is flooded as part of the commissioning process: the empty pipe is filled with water or oil. In a pipeline situated in hilly terrain, air entrapped in the fluid causes problems with the flooding process, and it is necessary to discharge the accumulated air to address this issue. The aim of this paper is to provide a model for predicting the location and volume of air pockets in a pipeline. This model was developed based on the fundamentals of mass balance and momentum transfer in multiphase flow; it was then applied to a pipeline in China and compared with the SCADA data. Results showed a good match between the model's predictions of hydraulic movement and the real data from SCADA. The two-phase flow model developed can predict hydraulic movement during pipeline flooding in a hilly area and thus can be used to predict the water front location and air pocket movement in the pipe.

  10. An integrated system for pipeline condition monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Strong, Andrew P.; Lees, Gareth; Hartog, Arthur; Twohig, Richard; Kader, Kamal; Hilton, Graeme; Mullens, Stephen; Khlybov, Artem [Schlumberger, Southampton (United Kingdom); Sanderson, Norman [BP Exploration, Sunbury (United Kingdom)

    2009-07-01

    monitoring in a wide range of other applications such as: long sub sea flow lines; offshore riser systems; settlement in tank farms; facilities perimeter security. An important element of this system is a bespoke direct-bury optical sensor cable, designed to allow distributed strain measurement and hence enable monitoring of ground movement, whilst withstanding the rigors of the pipeline environment. The system can also be configured for detection of third-party interference and leaks with the majority of existing buried cables. In this paper, we outline the optical sensing methods employed in the system, and the results of the extensive field trials performed to fully evaluate and prove the system for use on long hydrocarbon transmission pipelines. Specifically, we will describe the detection of small gas releases, simulated ground movement and detection and recognition of a number of different types of third party interventions at the full 100 km target range. Finally, the tracking of a pig during pigging operations is demonstrated on a pilot installation. (author)

  11. Annotation of the Evaluative Language in a Dependency Treebank

    Directory of Open Access Journals (Sweden)

    Šindlerová Jana

    2017-12-01

Full Text Available In the paper, we present our efforts to annotate evaluative language in the Prague Dependency Treebank 2.0. The project is a follow-up of the series of annotations of small plaintext corpora. It uses automatic identification of potentially evaluative nodes through mapping a Czech subjectivity lexicon to syntactically annotated data. These nodes are then manually checked by an annotator and either dismissed as standing in a non-evaluative context, or confirmed as evaluative. In the latter case, information about the polarity orientation, the source and target of evaluation is added by the annotator. The annotations unveiled several advantages and disadvantages of the chosen framework. The advantages involve a more structured and easy-to-handle environment for the annotator, visibility of syntactic patterning of the evaluative state, effective solving of discontinuous structures, and a new perspective on the influence of good/bad news. The disadvantages include little capability of treating cases with evaluation spread among several syntactically connected nodes at once, little capability of treating metaphorical expressions, and disregarding the effects of negation and intensification in the current scheme.
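The first, automatic stage described above — flagging potentially evaluative nodes by matching treebank lemmas against a subjectivity lexicon — can be sketched as a minimal illustration. The node representation and lexicon format here are assumptions, not the project's actual data structures:

```python
def flag_evaluative_candidates(nodes, subjectivity_lexicon):
    """Return ids of treebank nodes whose lemma appears in the lexicon.

    nodes: iterable of (node_id, lemma) pairs from the syntactically
    annotated data; subjectivity_lexicon: set of evaluative lemmas.
    In the workflow described above, flagged nodes then go to a human
    annotator, who dismisses false positives or confirms them and adds
    polarity, source, and target information.
    """
    return [nid for nid, lemma in nodes if lemma.lower() in subjectivity_lexicon]
```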

  12. Reading Actively Online: An Exploratory Investigation of Online Annotation Tools for Inquiry Learning / La lecture active en ligne: étude exploratoire sur les outils d'annotation en ligne pour l'apprentissage par l’enquête

    Directory of Open Access Journals (Sweden)

    Jingyan Lu

    2012-11-01

Full Text Available This study seeks to design and facilitate active reading among secondary school students with an online annotation tool – Diigo. Two classes of different academic performance levels were recruited to examine their annotation behavior and perceptions of Diigo. We wanted to determine whether the two classes differed in how they used Diigo; how they perceived Diigo; and whether how they used Diigo was related to how they perceived it. Using annotation data and surveys in which students reported on their use and perceptions of Diigo, we found that although the tool facilitated individual annotations, the two classes used and perceived it differently. Overall, the study showed Diigo to be a promising tool for enhancing active reading in the inquiry learning process.

  13. Beyond the pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Barnsley, J.; Ellis, D.; McIntosh, J.

    1979-12-01

A study was conducted on the lives of women and their families in Fort Nelson, British Columbia, and Whitehorse, Yukon Territory, two communities which are to be affected by the proposed construction of the Alaska Highway gas pipeline. The women's socio-economic concerns resulting from the proposed construction were examined by means of interviews with samples of women living in the two communities. Results from the study include descriptions of the communities and their basic services, community planning and housing, women's work in the home and for wages, and the perceived impact of the pipeline on such matters as employment, social services, living costs, business, housing, crime, and the overall community. Recommendations are made to improve the planning process for the pipeline to include the taking into account of women's needs in such areas as training, health care, housing, and community services. 213 refs., 4 figs., 2 tabs.

  14. The caBIG annotation and image Markup project.

    Science.gov (United States)

    Channin, David S; Mongkolwat, Pattanasak; Kleper, Vladimir; Sepukar, Kastubh; Rubin, Daniel L

    2010-04-01

Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about who acquired the image, where, and how, it says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.

  15. Interoperable Multimedia Annotation and Retrieval for the Tourism Sector

    NARCIS (Netherlands)

    Chatzitoulousis, Antonios; Efraimidis, Pavlos S.; Athanasiadis, I.N.

    2015-01-01

    The Atlas Metadata System (AMS) employs semantic web annotation techniques in order to create an interoperable information annotation and retrieval platform for the tourism sector. AMS adopts state-of-the-art metadata vocabularies, annotation techniques and semantic web technologies.

  16. Publishing a Quality Context-aware Annotated Corpus and Lexicon for Harassment Research

    OpenAIRE

    Rezvan, Mohammadreza; Shekarpour, Saeedeh; Balasuriya, Lakshika; Thirunarayan, Krishnaprasad; Shalin, Valerie; Sheth, Amit

    2018-01-01

Having a quality annotated corpus is essential, especially for applied research. Despite the recent focus of the Web science community on research into cyberbullying, the community still does not have standard benchmarks. In this paper, we publish first, a quality annotated corpus and second, an offensive words lexicon capturing different types of harassment: (i) sexual harassment, (ii) racial harassment, (iii) appearance-related harassment, (iv) intellectual harassment, and (v) politic...

  17. A Novel Approach to Semantic and Coreference Annotation at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Firpo, M

    2005-02-04

A case is made for the importance of high quality semantic and coreference annotation. The challenges of providing such annotation are described. Asperger's Syndrome is introduced, and the connections are drawn between the needs of text annotation and the abilities of persons with Asperger's Syndrome to meet those needs. Finally, a pilot program is recommended wherein semantic annotation is performed by people with Asperger's Syndrome. The primary points embodied in this paper are as follows: (1) Document annotation is essential to the Natural Language Processing (NLP) projects at Lawrence Livermore National Laboratory (LLNL); (2) LLNL does not currently have a system in place to meet its need for text annotation; (3) Text annotation is challenging for a variety of reasons, many related to its very rote nature; (4) Persons with Asperger's Syndrome are particularly skilled at rote verbal tasks, and behavioral experts agree that they would excel at text annotation; and (5) A pilot study is recommended in which two to three people with Asperger's Syndrome annotate documents and then the quality and throughput of their work is evaluated relative to that of their neuro-typical peers.

  18. Influence of grade on the reliability of corroding pipelines

    International Nuclear Information System (INIS)

    Maes, M.A.; Dann, M.; Salama, M.M.

    2008-01-01

This paper focuses on a comparative analysis of the reliability associated with the evolution of corrosion between normal and high-strength pipe material. The use of high-strength steel grades such as X100 and X120 for high-pressure gas pipelines in the Arctic is currently being considered. To achieve this objective, a time-dependent reliability analysis using variable Y/T ratios in a multiaxial finite strain analysis of thin-walled pipeline is performed. This analysis allows for the consideration of longitudinal grooves and the presence of companion axial tension and bending loads. Limit states models are developed based on suitable strain hardening models for the ultimate behavior of corroded medium- and high-strength pipeline material. In an application, the evolution of corrosion is modeled in pipelines of different grades that have been subjected to an internal corrosion inspection after a specified time, which allows for a Bayesian updating of long-term corrosion estimates and, hence, the derivation of annual probabilities of failure as a function of time. The effect of grade and Y/T is clearly demonstrated.

  19. Crude oil growth impact on pipelines

    International Nuclear Information System (INIS)

    Devries, O.

    2005-01-01

    This paper provided an outline of crude oil production and supply in Canada. Details of oil sands projects in Athabasca, Cold Lake and Peace River were presented. A chart of oil sands growth by major project was provided. A list of new emerging oil sands crude types was also presented along with details of a synthetic bitumen blending synergy. Maps of Western Canadian crude oil markets were provided, along with details of refinery and market demand by crude type. Various pipeline alternatives to new markets were examined, with reference to Enbridge Pipeline's supply and capacity. Details of the Hardisty to U.S Gulf Coast Pipeline and the Edmonton to Prince Rupert Pipeline and its terminal and dock facilities were presented. It was concluded that pipeline capacity and seasonal factors will influence market demand, while linefill, crude types and the quality of the product will influence operational strategies. tabs., figs

  20. Gene Ontology annotation of the rice blast fungus, Magnaporthe oryzae

    Directory of Open Access Journals (Sweden)

    Deng Jixin

    2009-02-01

Full Text Available Abstract Background Magnaporthe oryzae is the causal agent of rice blast, the most destructive disease of rice worldwide. The genome of this fungal pathogen has been sequenced and an automated annotation has recently been updated to Version 6 http://www.broad.mit.edu/annotation/genome/magnaporthe_grisea/MultiDownloads.html. However, a comprehensive manual curation remains to be performed. Gene Ontology (GO) annotation is a valuable means of assigning functional information using standardized vocabulary. We report an overview of the GO annotation for Version 5 of the M. oryzae genome assembly. Methods A similarity-based (i.e., computational) GO annotation with manual review was conducted, which was then integrated with a literature-based GO annotation with computational assistance. For the similarity-based GO annotation a stringent reciprocal best hits method was used to identify similarity between predicted proteins of M. oryzae and GO proteins from multiple organisms with published associations to GO terms. Significant alignment pairs were manually reviewed. Functional assignments were further cross-validated with manually reviewed data, conserved domains, or data determined by wet lab experiments. Additionally, biological appropriateness of the functional assignments was manually checked. Results In total, 6,286 proteins received GO term assignment via the homology-based annotation, including 2,870 hypothetical proteins. Literature-based experimental evidence, such as microarray, MPSS, T-DNA insertion mutation, or gene knockout mutation, resulted in 2,810 proteins being annotated with GO terms. Of these, 1,673 proteins were annotated with new terms developed for the Plant-Associated Microbe Gene Ontology (PAMGO). In addition, 67 experiment-determined secreted proteins were annotated with PAMGO terms. Integration of the two data sets resulted in 7,412 proteins (57%) being annotated with 1,957 distinct and specific GO terms. Unannotated proteins
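The stringent reciprocal best hits criterion used in the similarity-based stage can be sketched as follows. This is a minimal illustration over precomputed search results; the input shape (query id mapped to a list of subject/bitscore pairs) is an assumption, not the authors' pipeline:

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Pairs (a, b) where b is a's best hit in B and a is b's best hit in A.

    hits_ab / hits_ba: dict mapping each query id to a list of
    (subject_id, bitscore) tuples from the A-vs-B / B-vs-A searches.
    Only mutually best-scoring pairs survive, which is what makes the
    criterion stringent.
    """
    best_ab = {q: max(hs, key=lambda h: h[1])[0] for q, hs in hits_ab.items() if hs}
    best_ba = {q: max(hs, key=lambda h: h[1])[0] for q, hs in hits_ba.items() if hs}
    return [(a, b) for a, b in best_ab.items() if best_ba.get(b) == a]
```

Each surviving pair links a predicted protein to a GO-annotated protein, and the GO terms of the latter become candidate transfers subject to the manual review described above.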

  1. Public perceptions of CO2 transportation in pipelines

    International Nuclear Information System (INIS)

    Gough, Clair; O'Keefe, Laura; Mander, Sarah

    2014-01-01

This paper explores the response by members of the lay public to the prospect of an onshore CO2 pipeline through their locality as part of a proposed CCS development and presents results from deliberative Focus Groups held along a proposed pipeline route. Although there is a reasonable level of general knowledge about CO2 across the lay public, understanding of its specific properties is more limited. The main concerns expressed around pipelines focused on five areas: (i) safe operation of the pipeline; (ii) the risks to people, livestock and vegetation arising from the leakage of CO2 from the pipeline; (iii) the innovative and ‘first of its kind' nature of the pipeline and the consequent lack of operational CO2 pipelines in the UK to demonstrate the technology; (iv) impacts on coastal erosion at the landfall site; and (v) the potential disruption to local communities during pipeline construction. Participants expressed scepticism over the motivations of CO2 pipeline developers. Trust that the developer will minimise risk during the route selection and subsequent construction, operation and maintenance of the pipeline is key; building trust within the local community requires early engagement processes, tailored to deliver a variety of engagement and information approaches. - Highlights: • Lay publics express good general knowledge of CO2 but not of its specific properties. • Key concerns relate to risk and safety and ‘first of a kind' nature of CO2 pipeline. • Group participants are sceptical about motivations of CO2 pipeline developers. • Communities' trust in developer is a major element of their risk assessment

  2. Hydrogeological considerations in northern pipeline development. [Permafrost affected by hot or chilled pipeline]

    Energy Technology Data Exchange (ETDEWEB)

    Harlan, R L

    1974-11-01

    Some of the hydrogeological implications of construction and operation of oil and gas pipelines in northern regions of Canada are considered in relation to their potential environmental impacts and those factors affecting the security of the pipeline itself. Although the extent to which water in permafrost participates in the subsurface flow regime has not been fully demonstrated, the role of liquid as well as vapor transport in frozen earth materials can be shown from theory to be highly significant; water movement rates in frozen soil are on the same order as those in unsaturated, unfrozen soil. Below 0°C, the unfrozen water content in a fine-grained porous medium is dependent on temperature but independent of the total water content. The thermal gradient controls the rate and direction of water movement in permafrost. Groundwater stabilizes streamflow and, in the absence of large lakes, provides the main source of flow during the winter. As groundwater is frequently confined by the permafrost, degradation of the permafrost can have significant consequences. The thaw bulb formed around a hot oil pipeline can induce liquefaction and flow of the thawed material. A chilled pipeline could restrict groundwater movement, resulting in a buildup of artesian conditions and icings. The liberation and absorption of latent heat on freezing and thawing affects the thermal regime near the ground surface. Recommendations are given for pipeline construction, and areas for further study are pointed out. (DLC)

  3. Annotation of selection strengths in viral genomes

    DEFF Research Database (Denmark)

    McCauley, Stephen; de Groot, Saskia; Mailund, Thomas

    2007-01-01

    Motivation: Viral genomes tend to code in overlapping reading frames to maximize information content. This may result in atypical codon bias and particular evolutionary constraints. Due to the fast mutation rate of viruses, there is additional strong evidence for varying selection between intra......- and intergenomic regions. The presence of multiple coding regions complicates the concept of Ka/Ks ratio, and thus begs for an alternative approach when investigating selection strengths. Building on the paper by McCauley & Hein (2006), we develop a method for annotating a viral genome coding in overlapping...... may thus achieve an annotation both of coding regions as well as selection strengths, allowing us to investigate different selection patterns and hypotheses. Results: We illustrate our method by applying it to a multiple alignment of four HIV2 sequences, as well as four Hepatitis B sequences. We...

  4. Annotating functional RNAs in genomes using Infernal.

    Science.gov (United States)

    Nawrocki, Eric P

    2014-01-01

    Many different types of functional non-coding RNAs participate in a wide range of important cellular functions, but the large majority of these RNAs are not routinely annotated in published genomes. Several programs have been developed for identifying RNAs, including specific tools tailored to a particular RNA family as well as more general ones designed to work for any family. Many of these tools utilize covariance models (CMs), statistical models of the conserved sequence and structure of an RNA family. In this chapter, as an illustrative example, the Infernal software package and CMs from the Rfam database are used to identify RNAs in the genome of the archaeon Methanobrevibacter ruminantium, uncovering some additional RNAs not present in the genome's initial annotation. Analysis of the results and comparison with family-specific methods demonstrate some important strengths and weaknesses of this general approach.
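Infernal reports hits in a whitespace-delimited tabular format (`--tblout`), with comment lines starting with `#`. A minimal post-processing parser might look like the following; the column indices assume the cmsearch/cmscan tblout layout and should be verified against the Infernal user guide before use:

```python
def parse_tblout(lines, evalue_cutoff=0.01):
    """Parse Infernal --tblout lines into a list of hit dicts.

    Assumed field positions (0-based): target name at 0, query name at 2,
    bit score at 14, E-value at 15. Hits above the E-value cutoff are dropped.
    """
    hits = []
    for line in lines:
        if line.startswith('#') or not line.strip():
            continue  # skip comments and blank lines
        f = line.split()
        hit = {
            'target': f[0],
            'query': f[2],
            'score': float(f[14]),
            'evalue': float(f[15]),
        }
        if hit['evalue'] <= evalue_cutoff:
            hits.append(hit)
    return hits

# Synthetic example line (field layout is an assumption, not real output):
example = "tRNA - contig1 - cm 1 71 1033 1104 + no 1 0.52 0.0 65.2 1.4e-12 ! -"
```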

  5. Annotating spatio-temporal datasets for meaningful analysis in the Web

    Science.gov (United States)

    Stasch, Christoph; Pebesma, Edzer; Scheider, Simon

    2014-05-01

    More and more environmental datasets that vary in space and time are available in the Web. This comes with the advantage that the data can be used for purposes other than those originally foreseen, but also with the danger that users may apply inappropriate analysis procedures due to a lack of important assumptions made during the data collection process. In order to guide users towards a meaningful (statistical) analysis of spatio-temporal datasets available in the Web, we have developed a Higher-Order-Logic formalism that captures some relevant assumptions in our previous work [1]. It allows proofs about meaningful spatial prediction and aggregation to be carried out in a semi-automated fashion. In this poster presentation, we will present a concept for annotating spatio-temporal datasets available in the Web with concepts defined in our formalism. To this end, we have defined a subset of the formalism as a Web Ontology Language (OWL) pattern. It captures the distinction between the different spatio-temporal variable types, i.e. point patterns, fields, lattices and trajectories, which in turn determine whether a particular dataset can be interpolated or aggregated in a meaningful way using a certain procedure. The actual annotations that link spatio-temporal datasets with the concepts in the ontology pattern are provided as Linked Data. In order to allow data producers to add the annotations to their datasets, we have implemented a Web portal that uses a triple store at the backend to store the annotations and to make them available in the Linked Data cloud. Furthermore, we have implemented functions in the statistical environment R to retrieve the RDF annotations and, based on these annotations, to support a stronger typing of spatio-temporal datatypes, guiding towards a meaningful analysis in R. [1] Stasch, C., Scheider, S., Pebesma, E., Kuhn, W. (2014): "Meaningful spatial prediction and aggregation", Environmental Modelling & Software, 51, 149-165.

  6. Mackenzie Valley Pipeline market demand, supply, and infrastructure analysis : final report

    International Nuclear Information System (INIS)

    2004-01-01

    Mackenzie Valley Pipeline Co-Venturers is a consortium of petroleum companies proposing to construct a 1,400 km long, large-diameter, high-pressure natural gas transmission pipeline from the northwestern edge of the Northwest Territories to the Alberta-Northwest Territories border. The Mackenzie Valley Pipeline will bring natural gas from the Mackenzie Delta region to markets in Alberta, central and eastern Canada and the United States. Navigant Consulting Ltd. prepared this assessment of the long-term market need for natural gas produced from the Mackenzie Delta. It presents an analysis of gas demand, supply and infrastructure. Three sensitivity cases were examined, incorporating different assumptions about the initial capacity of the pipeline, potential expansion of its capacity and different levels of gas demand in Canada and the United States. The report indicates that gas markets in North America support construction of the proposed 34 million cubic metre per day pipeline in the 2009 timeframe, with possible expansion in 2015 and 2020. It also indicates that there will be enough capacity on the intra-Alberta gas transmission system to accommodate the projected deliveries of Mackenzie Delta gas. The increase in gas demand is due to an increase in residential and commercial gas consumption, electric power generation and the energy intensive bitumen extraction and processing activities in the Alberta oil sands industry. 36 tabs., 56 figs

  7. Semantator: annotating clinical narratives with semantic web ontologies.

    Science.gov (United States)

    Song, Dezhao; Chute, Christopher G; Tao, Cui

    2012-01-01

    To facilitate clinical research, clinical data needs to be stored in a machine-processable and understandable way. Manually annotating clinical data is time-consuming. Automatic approaches (e.g., Natural Language Processing systems) have been adopted to convert such data into structured formats; however, the quality of such automatically extracted data may not always be satisfactory. In this paper, we propose Semantator, a semi-automatic tool for document annotation with Semantic Web ontologies. With a loaded free-text document and an ontology, Semantator supports the creation/deletion of ontology instances for any document fragment and the linking/disconnecting of instances with the properties in the ontology, and also enables automatic annotation by connecting to the NCBO annotator and cTAKES. By representing annotations in Semantic Web standards, Semantator supports reasoning based upon the underlying semantics of the owl:disjointWith and owl:equivalentClass predicates. We present discussions based on user experiences of using Semantator.

  8. Linking Disparate Datasets of the Earth Sciences with the SemantEco Annotator

    Science.gov (United States)

    Seyed, P.; Chastain, K.; McGuinness, D. L.

    2013-12-01

    Use of Semantic Web technologies for data management in the Earth sciences (and beyond) has great potential but is still in its early stages, since the challenges of translating data into a more explicit or semantic form for immediate use within applications has not been fully addressed. In this abstract we help address this challenge by introducing the SemantEco Annotator, which enables anyone, regardless of expertise, to semantically annotate tabular Earth Science data and translate it into linked data format, while applying the logic inherent in community-standard vocabularies to guide the process. The Annotator was conceived under a desire to unify dataset content from a variety of sources under common vocabularies, for use in semantically-enabled web applications. Our current use case employs linked data generated by the Annotator for use in the SemantEco environment, which utilizes semantics to help users explore, search, and visualize water or air quality measurement and species occurrence data through a map-based interface. The generated data can also be used immediately to facilitate discovery and search capabilities within 'big data' environments. The Annotator provides a method for taking information about a dataset, that may only be known to its maintainers, and making it explicit, in a uniform and machine-readable fashion, such that a person or information system can more easily interpret the underlying structure and meaning. Its primary mechanism is to enable a user to formally describe how columns of a tabular dataset relate and/or describe entities. For example, if a user identifies columns for latitude and longitude coordinates, we can infer the data refers to a point that can be plotted on a map. Further, it can be made explicit that measurements of 'nitrate' and 'NO3-' are of the same entity through vocabulary assignments, thus more easily utilizing data sets that use different nomenclatures. The Annotator provides an extensive and searchable
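The column-to-vocabulary mapping idea described above can be sketched as a small translation step from tabular rows to N-Triples. All URIs, column names, and the row-numbering scheme here are hypothetical placeholders, not the Annotator's actual vocabulary or output:

```python
def rows_to_ntriples(rows, column_map, base_uri="http://example.org/obs/"):
    """Translate tabular rows into N-Triples lines.

    rows: list of dicts (one per CSV row).
    column_map: user-supplied mapping from column name to predicate URI,
    capturing the 'annotation' of what each column means.
    """
    triples = []
    for i, row in enumerate(rows):
        subj = f"<{base_uri}{i}>"  # one subject per row (illustrative scheme)
        for col, pred in column_map.items():
            if col in row:
                triples.append(f'{subj} <{pred}> "{row[col]}" .')
    return triples

# Example: annotate 'lat' and 'nitrate' columns with (assumed) vocabulary URIs.
rows = [{"lat": "42.0", "nitrate": "1.3"}]
column_map = {
    "lat": "http://www.w3.org/2003/01/geo/wgs84_pos#lat",
    "nitrate": "http://example.org/voc/nitrate",
}
```

Mapping both 'nitrate' and 'NO3-' columns to the same predicate URI is what lets datasets with different nomenclatures be queried together.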

  9. 49 CFR 192.627 - Tapping pipelines under pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Tapping pipelines under pressure. 192.627 Section... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Operations § 192.627 Tapping pipelines under pressure. Each tap made on a pipeline under pressure must be performed by a crew qualified to make...

  10. Quantifying brain tissue volume in multiple sclerosis with automated lesion segmentation and filling

    Directory of Open Access Journals (Sweden)

    Sergi Valverde

    2015-01-01

    Full Text Available Lesion filling has been successfully applied to reduce the effect of hypo-intense T1-w Multiple Sclerosis (MS) lesions on automatic brain tissue segmentation. However, a study of fully automated pipelines incorporating lesion segmentation and lesion filling on tissue volume analysis has not yet been performed. Here, we analyzed the percentage error introduced by automating the lesion segmentation and filling processes in the tissue segmentation of 70 clinically isolated syndrome patient images. First, images were processed using the LST and SLS toolkits with different pipeline combinations that differed in either automated or manual lesion segmentation, and lesion filling or masking out lesions. Then, images processed following each of the pipelines were segmented into gray matter (GM) and white matter (WM) using SPM8, and compared with the same images where expert lesion annotations were filled before segmentation. Our results showed that fully automated lesion segmentation and filling pipelines significantly reduced the percentage error in GM and WM volume on images of MS patients, and performed similarly to the images where expert lesion annotations were masked before segmentation. In all the pipelines, the amount of misclassified lesion voxels was the main cause of the observed error in GM and WM volume. However, the percentage error was significantly lower when automatically estimated lesions were filled and not masked before segmentation. These results are relevant and suggest that the LST and SLS toolboxes allow accurate brain tissue volume measurements without any kind of manual intervention, which can be convenient not only in terms of time and economic costs, but also to avoid the inherent intra- and inter-rater variability of manual annotations.
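The volume-error comparison described above reduces to counting labeled voxels and taking a percentage difference against the expert-filled reference. The label coding and voxel size in this sketch are assumptions for illustration, not the study's actual conventions:

```python
def tissue_volume(labels, tissue, voxel_ml=0.001):
    """Volume in ml of one tissue class from a flat list of voxel labels.

    voxel_ml is the assumed volume of a single voxel (here 1 mm^3 = 0.001 ml).
    """
    return sum(1 for v in labels if v == tissue) * voxel_ml

def percent_error(measured, reference):
    """Unsigned percentage error of a measured volume vs. a reference volume."""
    return abs(measured - reference) / reference * 100.0

# Toy segmentation: label 1 = GM, label 2 = WM (an illustrative coding).
auto_seg = [1] * 95 + [2] * 105   # automated pipeline output
ref_seg = [1] * 100 + [2] * 100   # expert-filled reference segmentation
gm_err = percent_error(tissue_volume(auto_seg, 1), tissue_volume(ref_seg, 1))
```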

  11. A multi-ontology approach to annotate scientific documents based on a modularization technique.

    Science.gov (United States)

    Gomes, Priscilla Corrêa E Castro; Moura, Ana Maria de Carvalho; Cavalcanti, Maria Cláudia

    2015-12-01

    Scientific text annotation has become an important task for biomedical scientists. Nowadays, there is an increasing need for the development of intelligent systems to support new scientific findings. Public databases available on the Web provide useful data, but much more useful information is only accessible in scientific texts. Text annotation may help, as it relies on the use of ontologies to maintain annotations based on a uniform vocabulary. However, it is difficult to use an ontology, especially one that covers a large domain. In addition, since scientific texts explore multiple domains, which are covered by distinct ontologies, the task becomes even more difficult. Moreover, there are dozens of ontologies in the biomedical area, and they are usually large in terms of the number of concepts. It is in this context that ontology modularization can be useful. This work presents an approach to annotate scientific documents using modules of different ontologies, which are built according to a module extraction technique. The main idea is to analyze a set of single-ontology annotations on a text to find out the user's interests. Based on these annotations, a set of modules is extracted from a set of distinct ontologies and made available to the user for complementary annotation. The reduced size and focus of the extracted modules tend to facilitate the annotation task. An experiment was conducted to evaluate this approach, with the participation of a bioinformatics specialist from the Laboratory of Peptides and Proteins of the IOC/Fiocruz, who was interested in discovering new drug targets aimed at combating tropical diseases. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. The impact of changing pipeline BS and W specifications : a survey

    International Nuclear Information System (INIS)

    Renouf, G.; Ranganathan, R.; Scoular, R.J.; Soveran, D.

    1997-01-01

    The current situation in Canada and the US regarding basic sediment and water (BS&W) specifications was presented. In Canada, BS and W specifications are 0.5 per cent and are the same for light and heavy oils, while in the US, BS and W specifications range from 0.5 to 3 per cent. Some pipelines allow more relaxed specifications for heavy oil. A telephone survey was conducted in which 12 producers, 25 pipeline representatives and 18 upgrader facilities from 45 different sites in Canada and the US were contacted. Contacts were questioned on which components in crude oil most affected their operations and on their reaction to changing pipeline BS and W specifications. According to the majority of contacts, the most troublesome components were solids. Downstream users cannot accept any increase in solids within the crude. Many pipeline companies would like to see solids regulated separately from water. There are several advantages and disadvantages for producers, pipeliners, and refiners in relaxing water limits. Among the advantages, flash evaporation provided the strongest case for relaxing water limits. 1 fig

  13. Integrated surface management for pipeline construction: The Mid-America Pipeline Company Four Corners Project

    Science.gov (United States)

    Maria L. Sonett

    1999-01-01

    Integrated surface management techniques for pipeline construction through arid and semi-arid rangeland ecosystems are presented in a case history of a 412-mile pipeline construction project in New Mexico. Planning, implementation and monitoring for restoration of surface hydrology, soil stabilization, soil cover, and plant species succession are discussed. Planning...

  14. A Pipelined and Parallel Architecture for Quantum Monte Carlo Simulations on FPGAs

    Directory of Open Access Journals (Sweden)

    Akila Gothandaraman

    2010-01-01

    Full Text Available Recent advances in Field-Programmable Gate Array (FPGA) technology make reconfigurable computing using FPGAs an attractive platform for accelerating scientific applications. We develop a deeply pipelined and parallel architecture for Quantum Monte Carlo simulations using FPGAs. Quantum Monte Carlo simulations enable us to obtain the structural and energetic properties of atomic clusters. We experiment with different pipeline structures for each component of the design and develop a deeply pipelined architecture that provides the best performance in terms of achievable clock rate, while making modest use of the FPGA resources. We discuss the details of the pipelined and generic architecture that is used to obtain the potential energy and wave function of a cluster of atoms.

  15. Pipeline dreams face up to reality

    International Nuclear Information System (INIS)

    Ryan, Orla

    1999-01-01

    This article gives details of two gas pipelines which are expected to be built in Turkey to meet the estimated demand for gas. The Blue Stream pipeline, a joint ENI/Gazprom project, will convey Russian gas across the Black Sea to Turkey, and PSG, a joint Bechtel/General Electric venture, will bring gas from Turkmenistan to Turkey across the Caspian Sea. Construction of the pipelines and financing aspects are discussed. (uk)

  16. Cost reducing factors in effective pipeline piling structure design and construction in Alberta's thermal SAGD gathering pipeline systems

    Energy Technology Data Exchange (ETDEWEB)

    Farrokhzad, M.A. [IMV Projects, Calgary, AB (Canada)

    2008-10-15

    Oil sands steam assisted gravity drainage (SAGD) gathering pipeline systems are typically arranged so that above-ground steam and production pipelines lie next to each other on the same steel structure. Longitudinal and lateral loads build up in the pipeline supports, and these loads keep changing until pipeline temperatures reach a steady-state condition. SAGD pipelines are required to have enough flexibility to absorb thermal expansion or contraction movements. However, most pipeline engineers consider only upper and lower temperature limits in the design of steel structures and pilings. This paper examined the effect of considering both the thermal gradient and the time factor in designing supports for pipelines. The study examined how these factors impacted standard load calculations and pile sizing. Sixteen stress analysis models for steam and production lines were prepared, and designated thermal gradients were introduced into each model. Longitudinal and lateral loads caused by thermal gradient movements were calculated for all supports. The models were analyzed and absolute values for longitudinal and lateral loads were recorded. Results of the study showed that engineers do not necessarily need to rely on maximum temperatures as the condition that results in maximum longitudinal and lateral loads on supports. It was concluded that costs related to pipeline construction can be significantly reduced by considering the effects of thermal gradients in stress analyses and load calculations. 5 refs., 14 figs.
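The thermal loading discussed above starts from two textbook relations: free expansion ΔL = α·L·ΔT, and the axial force F = E·A·α·ΔT induced in a fully restrained pipe segment. A minimal sketch (the material constants below are generic carbon-steel values, not the project's design inputs):

```python
def free_expansion_m(alpha, length_m, delta_t):
    """Unrestrained length change of a pipe segment: dL = alpha * L * dT."""
    return alpha * length_m * delta_t

def axial_thermal_force_n(youngs_pa, area_m2, alpha, delta_t):
    """Axial force (N) in a fully restrained segment: F = E * A * alpha * dT."""
    return youngs_pa * area_m2 * alpha * delta_t

# Illustrative inputs: E = 200 GPa, wall cross-section 0.01 m^2,
# alpha = 12e-6 /K (typical carbon steel), 100 K temperature rise.
F = axial_thermal_force_n(200e9, 0.01, 12e-6, 100.0)
```

A thermal-gradient analysis of the kind the paper describes would evaluate these expressions over intermediate temperatures rather than only at the design extremes.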

  17. Image annotation by deep neural networks with attention shaping

    Science.gov (United States)

    Zheng, Kexin; Lv, Shaohe; Ma, Fang; Chen, Fei; Jin, Chi; Dou, Yong

    2017-07-01

    Image annotation is the task of assigning semantic labels to an image. Recently, deep neural networks with visual attention have been utilized successfully in many computer vision tasks. In this paper, we show that the conventional attention mechanism is easily misled by the salient class, i.e., the attended region always contains part of the image area describing the content of the salient class at different attention iterations. To this end, we propose a novel attention shaping mechanism, which aims to maximize the non-overlapping area between consecutive attention processes by taking into account the history of previous attention vectors. Several weighting policies are studied to utilize the history information in different manners. On two benchmark datasets, i.e., PASCAL VOC2012 and MIRFlickr-25k, the average precision is improved by up to 10% in comparison with state-of-the-art annotation methods.
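One simple way to encourage non-overlapping attention between iterations is to downweight positions already covered by the attention history and renormalize. This is a sketch of the general idea only, not the paper's exact weighting policy:

```python
def shape_attention(current, history, strength=1.0):
    """Shape an attention vector away from previously attended positions.

    current: attention weights for this iteration (non-negative, sum to 1).
    history: list of earlier attention vectors of the same length.
    strength: how aggressively past coverage suppresses re-attention
    (an illustrative knob, one of many possible weighting policies).
    """
    if not history:
        return list(current)
    # Coverage per position: the strongest past attention it has received.
    coverage = [max(h[i] for h in history) for i in range(len(current))]
    shaped = [a * (1.0 - strength * c) for a, c in zip(current, coverage)]
    total = sum(shaped)
    # Renormalize; fall back to the unshaped vector if everything was suppressed.
    return [s / total for s in shaped] if total > 0 else list(current)

# A position fully attended in the past is pushed to zero weight now.
shaped = shape_attention([0.5, 0.5], history=[[1.0, 0.0]])
```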

  18. Ontology modularization to improve semantic medical image annotation.

    Science.gov (United States)

    Wennerberg, Pinar; Schulz, Klaus; Buitelaar, Paul

    2011-02-01

    Searching for medical images and patient reports is a significant challenge in a clinical setting. The contents of such documents are often not described in sufficient detail, making it difficult to utilize the inherent wealth of information contained within them. Semantic image annotation addresses this problem by describing the contents of images and reports using medical ontologies. Medical images and patient reports are then linked to each other through common annotations. Subsequently, search algorithms can more effectively find related sets of documents on the basis of these semantic descriptions. A prerequisite to realizing such a semantic search engine is that the data contained within should have been previously annotated with concepts from medical ontologies. One major challenge in this regard is the size and complexity of medical ontologies as annotation sources. Manual annotation is particularly time-consuming and labor-intensive in a clinical environment. In this article we propose an approach to reducing the size of clinical ontologies for more efficient manual image and text annotation. More precisely, our goal is to identify smaller fragments of a large anatomy ontology that are relevant for annotating medical images from patients suffering from lymphoma. Our work is in the area of ontology modularization, which is a recent and active field of research. We describe our approach, methods and data set in detail and we discuss our results. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Security of pipeline facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S.C. [Alberta Energy and Utilities Board, Calgary, AB (Canada); Van Egmond, C.; Duquette, L. [National Energy Board, Calgary, AB (Canada); Revie, W. [Canada Centre for Mineral and Energy Technology, Ottawa, ON (Canada)

    2005-07-01

    This working group provided an update on provincial, federal and industry directions regarding the security of pipeline facilities. The decision to include security issues in the NEB Act was discussed as well as the Pipeline Security Management Assessment Project, which was created to establish a better understanding of existing security management programs as well as to assist the NEB in the development and implementation of security management regulations and initiatives. Amendments to the NEB were also discussed. Areas of pipeline security management assessment include physical safety management; cyber and information security management; and personnel security. Security management regulations were discussed, as well as implementation policies. Details of the Enbridge Liquids Pipelines Security Plan were examined. It was noted that the plan incorporates flexibility for operations and is integrated with Emergency Response and Crisis Management. Asset characterization and vulnerability assessments were discussed, as well as security and terrorist threats. It was noted that corporate security threat assessment and auditing are based on threat information from the United States intelligence community. It was concluded that the oil and gas industry is a leader in security in North America. The Trans Alaska Pipeline Incident was discussed as a reminder of how costly accidents can be. Issues of concern for the future included geographic and climate issues. It was concluded that limited resources are an ongoing concern, and that the regulatory environment is becoming increasingly prescriptive. Other concerns included the threat of not taking international terrorism seriously, and open media reporting of vulnerability of critical assets, including maps. tabs., figs.

  20. Estimating the Density of Fluid in a Pipeline System with an Electropump

    DEFF Research Database (Denmark)

    Sadeghi, H.; Poshtan, J.; Poulsen, Niels Kjølstad

    2018-01-01

    to detect the product in the pipeline is to sample the fluid in a laboratory and perform an offline measurement of its physical characteristics. The measurement requires sophisticated laboratory equipment and can be time-consuming and susceptible to human error. In this paper, for performing the online......To transfer petroleum products, a common pipeline is often used to continuously transfer various products in batches. Separating the different products requires detecting the interface between the batches at the storage facilities or pump stations along the pipelines. The conventional technique...
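A minimal online sketch of the interface-detection idea, flagging the first large jump between consecutive density readings. The threshold and the density values in the example are illustrative; the paper's estimator is model-based, not a simple threshold rule:

```python
def detect_interface(densities, threshold):
    """Return the index of the first reading where the density jump from the
    previous reading exceeds `threshold` (e.g. kg/m^3), else None.

    A batch interface between two products shows up as an abrupt change in
    the estimated fluid density stream.
    """
    for i in range(1, len(densities)):
        if abs(densities[i] - densities[i - 1]) > threshold:
            return i
    return None

# Illustrative stream: one product around 830 kg/m^3, then a jump to ~845.
samples = [830.0, 831.0, 830.0, 845.0, 846.0]
```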

  1. Energy cost reduction in oil pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Limeira, Fabio Machado; Correa, Joao Luiz Lavoura; Costa, Luciano Macedo Josino da; Silva, Jose Luiz da; Henriques, Fausto Metzger Pessanha [Petrobras Transporte S.A. (TRANSPETRO), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    One of the key questions of modern society concerns the rational use of the planet's natural resources and energy. Due to energy shortages, many companies are forced to reduce their workload, especially during peak hours, when residential demand reaches its peak and there is not enough energy to fulfill the needs of all users, which affects major industries. Therefore, using energy more wisely has become a strategic issue for any company, due to the limited supply and also to the excessive cost it represents. With the objective of saving energy and reducing costs for oil pipelines, it has been identified that the increase in energy consumption is primarily related to pumping stations and also to the way many facilities are operated, that is, differently from what was originally designed. In order to exploit this opportunity and optimize the process, this article examines the possibility of gains by evaluating alternatives regarding changes in the pump scheme configuration and non-use of pump stations at peak hours. Initially, an oil pipeline with potential to reduce energy costs was chosen, followed by an analysis of its operating history in order to confirm whether there was sufficient room to change the operating mode. After confirming the pipeline choice, the system is briefly described and the literature is reviewed, explaining how the energy cost is calculated and also the main characteristics of pumping systems in series and in parallel. Next, technically feasible alternatives are studied in order to operate and also to negotiate the energy demand contract. Finally, costs are calculated to identify the most economical alternative, that is, for a scenario with no increase in the actual transported volume of the pipeline and for another scenario that considers an increase of about 20%. The conclusion of this study indicates that the chosen pipeline can achieve a reduction in energy costs of up to 25% without the need for investments in new
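The cost comparison described above rests on the basic hydraulic power relation P = ρ·g·Q·H/η (pumps in series add head at a given flow; pumps in parallel add flow at a given head) together with a tariff model. The density, efficiency, and tariff values in this sketch are illustrative assumptions:

```python
G = 9.81  # m/s^2

def hydraulic_power_kw(flow_m3s, head_m, efficiency=0.75, rho=850.0):
    """Shaft power (kW) to pump a flow Q against a head H: P = rho*g*Q*H/eta.

    Defaults assume a typical crude density (850 kg/m^3) and pump efficiency.
    """
    return rho * G * flow_m3s * head_m / efficiency / 1000.0

def energy_cost(power_kw, hours, tariff_per_kwh):
    """Energy cost over a period under a flat tariff (a simplified model;
    real demand contracts distinguish peak and off-peak rates)."""
    return power_kw * hours * tariff_per_kwh

# Comparing the same duty pumped at a peak vs. off-peak tariff shows
# the kind of saving the article targets by avoiding peak-hour pumping.
p_kw = hydraulic_power_kw(0.5, 300.0)
peak_cost = energy_cost(p_kw, 4, 0.30)      # illustrative peak tariff
offpeak_cost = energy_cost(p_kw, 4, 0.10)   # illustrative off-peak tariff
```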

  2. Estimation of efficiency of hydrotransport pipelines polyurethane coating application in comparison with steel pipelines

    Science.gov (United States)

    Aleksandrov, V. I.; Vasilyeva, M. A.; Pomeranets, I. B.

    2017-10-01

    The paper presents analytical calculations of specific pressure loss in hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry. The calculations are based on the results of experimental studies of the dependence of specific pressure loss on the hydraulic roughness of pipeline internal surfaces lined with polyurethane coating. The experiments showed that the hydraulic roughness of the polyurethane coating is smaller by a factor of four than that of steel pipelines, resulting in a decrease of the hydraulic resistance coefficients entering the formula for specific pressure loss, the Darcy-Weisbach formula. Relative and equivalent roughness coefficients are calculated for pipelines with and without polyurethane coating. Comparative calculations show that the application of polyurethane coating in hydrotransport pipelines is conducive to a decrease in specific energy consumption in hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry by a factor of 1.5. The experiments were performed on a laboratory hydraulic test rig to estimate the character and rate of physical roughness change in pipe samples with polyurethane coating. The experiments showed that over the following 484 hours of operation, roughness changed only negligibly in all pipe samples. As a result of processing the experimental data by the methods of mathematical statistics, an empirical formula was obtained for calculating the operating roughness of the polyurethane coating surface, depending on the duration of pipeline operation with iron ore processing tailings slurry.
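The Darcy-Weisbach calculation referred to above can be sketched directly: specific pressure loss is Δp/L = f·(1/D)·ρ·v²/2, with the friction factor f depending on Reynolds number and relative roughness. This sketch uses the explicit Swamee-Jain approximation in place of an iterative Colebrook solution; the inputs in the example are illustrative, not the paper's data:

```python
import math

def swamee_jain_f(reynolds, rel_roughness):
    """Explicit Swamee-Jain approximation of the Darcy friction factor
    for turbulent pipe flow: f = 0.25 / [log10(eps/(3.7D) + 5.74/Re^0.9)]^2."""
    return 0.25 / math.log10(rel_roughness / 3.7 + 5.74 / reynolds**0.9) ** 2

def pressure_loss_pa_per_m(f, diameter_m, density, velocity):
    """Darcy-Weisbach specific pressure loss: dp/L = f * rho * v^2 / (2 * D)."""
    return f * density * velocity**2 / (2.0 * diameter_m)

# A smoother lining (lower relative roughness) gives a lower friction factor,
# and hence a lower specific pressure loss at the same flow conditions.
f_steel = swamee_jain_f(1e5, 1e-3)
f_lined = swamee_jain_f(1e5, 1e-4)
```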

  3. Bicycle: a bioinformatics pipeline to analyze bisulfite sequencing data.

    Science.gov (United States)

    Graña, Osvaldo; López-Fernández, Hugo; Fdez-Riverola, Florentino; González Pisano, David; Glez-Peña, Daniel

    2018-04-15

    High-throughput sequencing of bisulfite-converted DNA is a technique used to measure DNA methylation levels. Although a considerable number of computational pipelines have been developed to analyze such data, none of them tackles all the peculiarities of the analysis together, revealing limitations that can force the user to manually perform additional steps needed for complete processing of the data. This article presents bicycle, an integrated, flexible analysis pipeline for bisulfite sequencing data. Bicycle analyzes whole genome bisulfite sequencing data, targeted bisulfite sequencing data and hydroxymethylation data. To show how bicycle surpasses other available pipelines, we compared them on a defined set of features that are summarized in a table. We also tested bicycle with both simulated and real datasets to show its level of performance, and compared it to different state-of-the-art methylation analysis pipelines. Bicycle is publicly available under the GNU LGPL v3.0 license at http://www.sing-group.org/bicycle. Users can also download a customized Ubuntu LiveCD including bicycle and the other bisulfite sequencing data pipelines compared here. In addition, a docker image with bicycle and its dependencies, which allows straightforward use of bicycle on any platform (e.g. Linux, OS X or Windows), is also available. ograna@cnio.es or dgpena@uvigo.es. Supplementary data are available at Bioinformatics online.
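At its core, a bisulfite analysis reports a per-cytosine methylation level: the fraction of reads in which a cytosine survived conversion (i.e., was methylated). A minimal sketch of this final quantification step (not bicycle's actual implementation):

```python
def methylation_level(meth_count, unmeth_count):
    """Per-cytosine methylation level from bisulfite read counts.

    meth_count: reads where the cytosine was read as C (protected, methylated).
    unmeth_count: reads where it was read as T (converted, unmethylated).
    Returns None when the position has no coverage.
    """
    total = meth_count + unmeth_count
    return meth_count / total if total else None

def genome_wide_levels(counts):
    """Map {position: (meth, unmeth)} to {position: level}, skipping
    uncovered positions."""
    return {pos: methylation_level(m, u)
            for pos, (m, u) in counts.items() if m + u > 0}

# Example: position 10 covered by 3 methylated and 1 unmethylated read.
levels = genome_wide_levels({10: (3, 1), 20: (0, 0)})
```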

  4. Development of high productivity pipeline girth welding

    International Nuclear Information System (INIS)

    Yapp, David; Liratzis, Theocharis

    2010-01-01

    The trend of increasing oil and gas consumption implies growth in long-distance pipeline installations. Welding is a critical factor in the installation of pipelines, both onshore and offshore, and the rate at which a pipeline can be laid is generally determined by the speed of welding. This has resulted in substantial developments in pipeline welding techniques. Arc welding is still the dominant process used in practice; forge welding processes have had limited successful application to date, in spite of large investments in process development. Power beam processes have also been investigated in detail, and the latest laser systems now show promise for practical application. In recent years the use of high-strength steels has substantially reduced the cost of pipeline installation, with X70 and X80 being commonly used. The use of high-strength pipeline steels produced by thermomechanical processing has also been researched. All candidate processes must meet three requirements: high productivity, satisfactory weld properties, and consistent weld quality.

  5. Sea water pipeline for nuclear power plant

    International Nuclear Information System (INIS)

    Ueno, Ken-ichi.

    1992-01-01

    Heating coils, for example, are wound around sea water pipelines as heaters. The outer wall surface of the sea water pipelines is heated by the coils. By heating the outer wall surfaces, the inner wall surfaces of the sea water pipelines can be warmed above a predetermined temperature to kill marine organisms deposited on the inner surfaces. Further, thermocouples for the external wall and the internal wall are provided so that the temperature at the inner wall surface of the sea water pipelines can be controlled. Further, heat-retaining material is disposed on the external surface of the sea water system pipelines. With such a constitution, the marine organisms deposited on the internal wall surface of the sea water system pipelines are killed, suppressing the amount of marine organism deposition. Accordingly, maintenance is facilitated and operational reliability after maintenance is improved. (I.N.)

  6. [Prescription annotations in Welfare Pharmacy].

    Science.gov (United States)

    Han, Yi

    2018-03-01

    Welfare Pharmacy contains medical formulas documented by the government and official prescriptions used by the official pharmacy in the pharmaceutical process. In the last years of the Southern Song Dynasty, anonymous authors added many prescription annotations, conducted textual research on the names, sources, composition and origins of the prescriptions, and supplemented important historical data on medical cases and researched historical facts. The annotations of Welfare Pharmacy gathered the essence of medical theory and can be used as precious materials for correctly understanding the syndrome differentiation, compatibility regularity and clinical application of the prescriptions. This article investigates in depth the style and form of the prescription annotations in Welfare Pharmacy; the names of prescriptions and the evolution of terminology; the major functions of the prescriptions; processing methods, instructions for taking medicine and taboos of prescriptions; the medical cases and clinical efficacy of prescriptions; and the backgrounds, sources, composition and cultural meanings of prescriptions. It proposes that the prescription annotations played an active role in the textual dissemination, patent medicine production, and clinical diagnosis and treatment of Welfare Pharmacy. This not only helps in understanding the changes in the names and terms of traditional Chinese medicines in Welfare Pharmacy, but also provides a basis for understanding the knowledge sources, compatibility regularity, important drug innovations and clinical medications of the prescriptions in Welfare Pharmacy. Copyright© by the Chinese Pharmaceutical Association.

  7. Functional Annotation of Ion Channel Structures by Molecular Simulation.

    Science.gov (United States)

    Trick, Jemma L; Chelvaniththilan, Sivapalan; Klesse, Gianni; Aryal, Prafulla; Wallace, E Jayne; Tucker, Stephen J; Sansom, Mark S P

    2016-12-06

    Ion channels play key roles in cell membranes, and recent advances are yielding an increasing number of structures. However, their functional relevance is often unclear and better tools are required for their functional annotation. In sub-nanometer pores such as ion channels, hydrophobic gating has been shown to promote dewetting to produce a functionally closed (i.e., non-conductive) state. Using the serotonin receptor (5-HT3R) structure as an example, we demonstrate the use of molecular dynamics to aid the functional annotation of channel structures via simulation of the behavior of water within the pore. Three increasingly complex simulation analyses are described: water equilibrium densities; single-ion free-energy profiles; and computational electrophysiology. All three approaches correctly predict the 5-HT3R crystal structure to represent a functionally closed (i.e., non-conductive) state. We also illustrate the application of water equilibrium density simulations to annotate different conformational states of a glycine receptor. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
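    The first of the three analyses, a water equilibrium density profile along the pore axis, can be sketched with plain NumPy on a synthetic trajectory. Real work would use an MD analysis library and actual simulation coordinates; here the gate position and water counts are invented purely to show the shape of the computation:

    ```python
    import numpy as np

    def water_occupancy_profile(z_coords, z_min, z_max, n_bins=50):
        """Time-averaged water count along the pore (channel) axis.

        z_coords: array of shape (n_frames, n_waters) of water z positions.
        A dewetted (hydrophobically gated) region shows up as a dip toward zero.
        """
        edges = np.linspace(z_min, z_max, n_bins + 1)
        counts = np.zeros(n_bins)
        for frame in z_coords:
            hist, _ = np.histogram(frame, bins=edges)
            counts += hist
        return edges, counts / len(z_coords)

    # Synthetic "trajectory": waters avoid a hydrophobic gate around z = 0.
    rng = np.random.default_rng(0)
    frames = []
    for _ in range(100):
        zs = rng.uniform(-20.0, 20.0, 800)
        frames.append(zs[np.abs(zs) > 3.0][:500])   # exclude the gate region
    z = np.array(frames)

    edges, profile = water_occupancy_profile(z, -20.0, 20.0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gate = profile[np.abs(centers) < 2.5].mean()
    bulk = profile[np.abs(centers) > 5.0].mean()
    print(f"mean occupancy near gate: {gate:.2f}, in bulk: {bulk:.2f}")
    ```

    A near-zero occupancy in the gate region relative to the bulk is the signature used to call a structure functionally closed.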

  8. Black powder removal in a Mexico gas pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morrow, John R. [TDW Services, Inc., New Castle, DE (United States); Drysdale, Colin; Warterfield, Bob D. [T.D.Williamson, Inc., Tulsa, OK (United States)

    2008-07-01

    This paper focuses on the cleaning methodology and operational constraints involved in the removal of black powder from a high pressure natural gas transmission pipeline. In this case, the accumulation of black powder along the pipeline system over the seven-year period since it was put into service was creating significant problems in the areas of maintenance, customer relations, and cost to the pipeline operator, due to clogging of filters, reduced gas flow, and penalties as a result of non-compliant delivery contracts. The pipeline cleaning project consisted of running cleaning pigs, or scrapers, with batches of cleaning solution through each section of the pipeline while dealing with such factors as three (3) pipeline section lengths in excess of 160 km (100 miles), gas flow velocity fluctuations, shutdowns, and gas delivery schedule requirements. The cleaning program for the entire pipeline system included the use of chemical- and diesel-based cleaning solutions, running multiple cleaning pigs, a liquid injection and separation system, mobile storage tanks, and various equipment and personnel for logistical support. Upon completion of the cleaning program, the level of black powder and other solids in all pipeline sections was reduced to approximately a 0.5% liquid/solid ratio and the pipeline system returned to normal optimum operation. (author)

  9. New territory for NGL pipelines

    International Nuclear Information System (INIS)

    Turner, C.L.; Billings, F.E.

    1994-01-01

    Even though the NGL pipeline industry appears mature, new geographic territory exists for the expansion of NGL pipelines. However, the most fertile territory to pursue lies in the collective opportunities to better link the existing NGL industry. Associations like the Gas Processors Association cannot, by themselves, fill the role demanded by the need to share information between the links of the chain on a more real-time basis. The Association cannot substitute for picking up the phone or calling a meeting of industry participants to discuss proposed changes in policies and procedures. All stakeholders must participate in squeezing out the inefficiencies of the industry. Some expansion and extension of NGL pipelines will occur in the future without ownership participation or commitments from the supply and demand businesses. However, significant expansions linking new supply sources and demand markets will only be made as the supply and demand businesses share long-term strategies and help define the pipeline opportunity. The successful industries of the twenty-first century will not be dominated by a single profitable sector, but rather by those industries which foster cooperation as well as competition. A healthy NGL industry will be comprised of profitable supply businesses and profitable demand businesses, linked together by profitable pipeline businesses.

  10. Crystallographic texture control helps improve pipeline steel resistance to hydrogen-induced cracking

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F; Hallen, J M; Herrera, O; Venegas, V [ESIQIE, Instituto Politecnico Nacional, Mexico, (Mexico); Baudin, T [Universite de Paris Sud, Orsay, (France)

    2010-07-01

    The resistance to HIC of sour service pipeline steels has been improved through several strategies, but none has proven totally effective in preventing HIC under difficult operating conditions. Crystallographic texture plays a significant role in determining the behavior of HIC in pipeline steels. The present study set out to show that crystallographic texture control, through warm rolling schedules, helps improve pipeline steel resistance to HIC. Several samples of an API 5L X52 grade pipeline steel were produced using different thermomechanical processes (austenitization, controlled rolling and recrystallization). These samples were subjected to cathodic charging. Scanning electron microscopy and automated FEG/EBSD were used to perform metallographic inspections and to collect microstructure data. The results showed that a strong γ-fiber texture significantly reduces or even prevents HIC damage. It is possible to improve the HIC resistance of pipeline steels using crystallographic texture control and grain boundary engineering.

  11. Annotating abstract pronominal anaphora in the DAD project

    DEFF Research Database (Denmark)

    Navarretta, Costanza; Olsen, Sussi Anni

    2008-01-01

    In this paper we present an extension of the MATE/GNOME annotation scheme for anaphora (Poesio 2004) which accounts for abstract anaphora in Danish and Italian. By abstract anaphora we mean pronouns whose linguistic antecedents are verbal phrases, clauses and discourse segments. The extended scheme, which we call the DAD annotation scheme, allows the annotation of information about abstract anaphora that is important for investigating their use, see Webber (1988), Gundel et al. (2003), Navarretta (2004), and that can influence their automatic treatment. Intercoder agreement scores obtained by applying the DAD annotation scheme to texts and dialogues in the two languages are given and show that the information proposed in the scheme can be recognised reliably.

  12. Annotated bibliography

    International Nuclear Information System (INIS)

    1997-08-01

    Under a cooperative agreement with the U.S. Department of Energy's Office of Science and Technology, Waste Policy Institute (WPI) is conducting a five-year research project to develop a research-based approach for integrating communication products in stakeholder involvement related to innovative technology. As part of the research, WPI developed this annotated bibliography, which contains almost 100 citations of articles/books/resources involving topics related to communication and public involvement aspects of deploying innovative cleanup technology. To compile the bibliography, WPI performed on-line literature searches (e.g., Dialog, International Association of Business Communicators, Public Relations Society of America, Chemical Manufacturers Association, etc.), consulted past years' proceedings of major environmental waste cleanup conferences (e.g., Waste Management), networked with professional colleagues and DOE sites to gather reports or case studies, and received input during the August 1996 Research Design Team meeting held to discuss the project's research methodology. Articles were selected for annotation based upon their perceived usefulness to the broad range of public involvement and communication practitioners.

  13. Supporting Keyword Search for Image Retrieval with Integration of Probabilistic Annotation

    Directory of Open Access Journals (Sweden)

    Tie Hua Zhou

    2015-05-01

    The ever-increasing quantities of digital photo resources are annotated with enriching vocabularies to form semantic annotations. Photo-sharing social networks have boosted the need for efficient and intuitive querying to respond to user requirements in large-scale image collections. In order to help users formulate efficient and effective image retrieval, we present a novel integration of a probabilistic model into a keyword query architecture that models the probability distribution of image annotations, allowing users to obtain satisfactory results from image retrieval via the integration of multiple annotations. We focus on the annotation integration step in order to specify the meaning of each image annotation, thus leading to the most representative annotations of the intent of a keyword search. For this demonstration, we show how a probabilistic model has been integrated with semantic annotations to allow users to intuitively define explicit and precise keyword queries in order to retrieve satisfactory image results distributed in heterogeneous large data sources. Our experiments on the SBU database (collected by Stony Brook University) show that (i) our integrated annotation contains higher-quality representatives and semantic matches; and (ii) annotation integration can indeed improve image search result quality.
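    The general idea of ranking images by a probability distribution over their annotations can be sketched in a few lines. This is not the authors' actual model; the image names, tags, and the simple frequency-based scoring are all invented for illustration:

    ```python
    from collections import Counter

    def annotation_distribution(tags):
        """Probability distribution over an image's annotations (by frequency)."""
        counts = Counter(tags)
        total = sum(counts.values())
        return {tag: n / total for tag, n in counts.items()}

    def score(query_terms, dist):
        """Score an image for a keyword query by summing annotation probabilities."""
        return sum(dist.get(term, 0.0) for term in query_terms)

    # Hypothetical images with user-supplied tags (repeats = stronger evidence).
    images = {
        "img1": ["beach", "beach", "sunset", "sea"],
        "img2": ["city", "night", "sunset"],
        "img3": ["sea", "boat", "beach"],
    }
    dists = {name: annotation_distribution(tags) for name, tags in images.items()}
    ranked = sorted(dists, key=lambda n: score(["beach", "sea"], dists[n]),
                    reverse=True)
    print(ranked)  # ['img1', 'img3', 'img2']
    ```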

  14. Millennium Pipeline Presentation : a new northeast passage

    International Nuclear Information System (INIS)

    Wolnik, J.

    1997-01-01

    Routes of the proposed Millennium Pipeline project were presented. The pipeline is to originate at the Empress gas field in Alberta and link up to eastern markets in the United States. One of the key advantages of the pipeline is that it will have the lowest proposed rates from Empress to Chicago and through links via affiliates to New York and other eastern markets. It will include 380 miles of new 36-inch pipeline and have a capacity of 650 million cubic feet per day. In many instances it will follow existing rights-of-way. The pipeline is expected to be in service for the 1999 winter heating season. The project sponsors are Columbia Gas Transmission, CMS Energy, MCN Energy, and Westcoast Energy. 6 figs

  15. AGA: Interactive pipeline for reproducible gene expression and DNA methylation data analyses [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Michael Considine

    2015-10-01

    Automated Genomics Analysis (AGA) is an interactive program to analyze high-throughput genomic data sets on a variety of platforms. An easy to use, point-and-click, guided pipeline is implemented to combine, define, and compare datasets, and customize their outputs. In contrast to other automated programs, AGA enables flexible selection of sample groups for comparison from complex sample annotations. Batch correction techniques are also included, further enabling the combination of datasets from diverse studies in this comparison. AGA also allows users to save plots, tables, data, and log files containing key portions of the R script that was run, supporting reproducible analyses. The link between the interface and R supports collaborative research, enabling advanced R users to extend preliminary analyses generated by bioinformatics novices.

  16. Accuracy Limitations of Pipelined ADCs

    NARCIS (Netherlands)

    Quinn, P.J.; Roermund, van A.H.M.

    2005-01-01

    In this paper, the key characteristics of the main errors which affect the performance of a switched capacitor pipelined ADC are presented and their effects on the ADC transfer characteristics demonstrated. Clear and concise relationships are developed to aid optimized design of the pipeline ADC and
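    The kind of error the paper analyzes can be illustrated with a toy behavioral model of a 1-bit-per-stage pipelined ADC. The 1.9 stage gain below stands in for a capacitor-mismatch or amplifier gain error and is an arbitrary illustrative value, not one from the paper:

    ```python
    def pipeline_adc(v, n_bits, stage_gain=2.0):
        """Ideal 1-bit-per-stage pipelined ADC model (input in [0, 1)).

        Each stage resolves one bit and passes an amplified residue to the
        next stage. A stage_gain other than the ideal 2 distorts the ADC
        transfer characteristic.
        """
        code = 0
        for _ in range(n_bits):
            bit = 1 if v >= 0.5 else 0
            code = (code << 1) | bit
            v = stage_gain * v - bit  # residue passed to the next stage
        return code

    # Sweep all 256 exact input levels of an 8-bit converter.
    ideal = [pipeline_adc(k / 256, 8) for k in range(256)]
    errory = [pipeline_adc(k / 256, 8, stage_gain=1.9) for k in range(256)]
    print(ideal == list(range(256)), ideal == errory)  # True False
    ```

    With the ideal gain of 2 the model reproduces the straight-line transfer characteristic exactly; the reduced gain produces code errors of the kind whose relationships the paper develops for switched-capacitor implementations.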

  17. Managing the market risk in pipeline capacity positions

    International Nuclear Information System (INIS)

    Simard, T.S.

    1998-01-01

    Managing the risk involved in adding new pipeline capacity was explored in this presentation. Topics discussed included: (1) pipeline capacity positions as basis swaps, (2) physical capacity versus basis transactions, (3) managing the market price risk in a capacity position, and (4) sharing of pipeline market risk. Pipeline owners were advised to recognize that pipeline capacity carries significant market price risk, that basis markets can sometimes be more volatile than outright markets, and to treat physical capacity market risk the same way as one would treat a financial basis position. 2 figs

  18. The Bakou-Ceyhan pipeline: paradoxes and coherence of the USA strategy of pipelines

    International Nuclear Information System (INIS)

    Jafalian, A.

    2004-01-01

    In 2002, construction of the Bakou-Ceyhan pipeline, from the Caspian Sea to the Mediterranean Sea, began, in spite of controversies pitting industrialists against politicians and experts. US diplomatic activity in favor of this pipeline contributed largely to resolving these problems. The author presents US policy and strategy in the region, the economic constraints, and the negotiations. (A.L.B.)

  19. 77 FR 32631 - Lion Oil Trading & Transportation, Inc., Magnolia Pipeline Company, and El Dorado Pipeline...

    Science.gov (United States)

    2012-06-01

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. OR12-13-000] Lion Oil... of the Commission's Rules of Practice and Procedure, 18 CFR 385.202 (2011), Lion Oil Trading & Transportation, Inc., Magnolia Pipeline Company, and El Dorado Pipeline Company, collectively, Lion Companies...

  20. Quick Pad Tagger : An Efficient Graphical User Interface for Building Annotated Corpora with Multiple Annotation Layers

    OpenAIRE

    Marc Schreiber; Kai Barkschat; Bodo Kraft; Albert Zundorf

    2015-01-01

    More and more domain-specific applications on the internet make use of Natural Language Processing (NLP) tools (e.g. Information Extraction systems). The output quality of these applications relies on the output quality of the NLP tools used. Often, the quality can be increased by annotating a domain-specific corpus. However, annotating a corpus is a time-consuming and exhausting task. To reduce the annotation time we present...

  1. Alternatives for operational cost reduction in oil pipelines; Alternativas para reducao de custos energeticos operacionais em oleodutos

    Energy Technology Data Exchange (ETDEWEB)

    Krause, Philipe Barroso; Carneiro, Leonardo Motta; Pires, Luis Fernando Goncalves [Pontificia Universidade Catolica do Rio de Janeiro (SIMDUT/DEM/ PUC-Rio), RJ (Brazil). Dept. de Engenharia Mecancia. Nucleo de Simulacao Termo-Hidraulica de Dutos

    2012-07-01

    This paper gives a brief overview of some cost reduction alternatives for optimizing oil pipeline operation. Four different alternatives are presented, based on previous studies made on existing pipelines, to demonstrate the response obtained with these solutions. Pipeline operation, especially on mature pipelines, tends to have a high operational cost, whether by tradition, the aging of the installation, or changes in operational characteristics, such as nominal flow, product, or even flow direction, for which the pipeline wasn't originally designed. The alternatives shown allow for an increased survival time of the pipeline, without resorting to major changes such as replacement of pipes or adding pumping stations to the system. The alternatives studied vary from no implementation cost to high installation cost or increased operational cost, depending on the system and the alternative chosen. They range from changing the pump arrays during operation or changing the product's viscosity with different blends, which represent virtually no cost to the pipeline operation, to the use of VFDs, with a high installation cost, or DRA, which increases the operational cost. (author)

  2. Nondestructive inspection of the condition of oil pipeline cleaning units

    International Nuclear Information System (INIS)

    Berdonosov, V.A.; Boiko, D.A.; Lapshin, B.M.; Chakhlov, V.L.

    1989-01-01

    One of the reasons for shutdowns of main oil pipelines is stoppage of the cleaning unit, caused by damage to it, while cleaning paraffin deposits from the inner surface. The authors propose a method of searching for and determining the condition of the cleaning unit that does not require dismantling of the pipeline: the initial search for the cleaning unit is done with acoustic instruments (the increased acoustic noise at the point of its stoppage is recorded), followed by inspection with a radiographic method. An experimental model of an instrument was developed that makes it possible to determine, from the acoustic noise, the location of a stopped cleaning unit in an oil pipeline. The instrument consists of two blocks, the remote sensor and the indicator block, which are connected to each other with a cable up to 10 m long. The design makes it possible to place the sensor at any accessible point of a linear part of the pipeline (in a pit, on a valve, etc.) while the indicator block remains on the surface of the ground. The results obtained make it possible to adopt optimum solutions for eliminating malfunctions and to prevent emergency situations without dismantling the pipeline. With the equipment developed, it is possible to inspect oil and gas pipelines for various causes of reduced throughput.

  3. Pydpiper: a flexible toolkit for constructing novel registration pipelines.

    Science.gov (United States)

    Friedel, Miriam; van Eede, Matthijs C; Pipitone, Jon; Chakravarty, M Mallar; Lerch, Jason P

    2014-01-01

    Using neuroimaging technologies to elucidate the relationship between genotype and phenotype and brain and behavior will be a key contribution to biomedical research in the twenty-first century. Among the many methods for analyzing neuroimaging data, image registration deserves particular attention due to its wide range of applications. Finding strategies to register together many images and analyze the differences between them can be a challenge, particularly given that different experimental designs require different registration strategies. Moreover, writing software that can handle different types of image registration pipelines in a flexible, reusable and extensible way can be challenging. In response to this challenge, we have created Pydpiper, a neuroimaging registration toolkit written in Python. Pydpiper is an open-source, freely available software package that provides multiple modules for various image registration applications. Pydpiper offers five key innovations. Specifically: (1) a robust file handling class that allows access to outputs from all stages of registration at any point in the pipeline; (2) the ability of the framework to eliminate duplicate stages; (3) reusable, easy to subclass modules; (4) a development toolkit written for non-developers; (5) four complete applications that run complex image registration pipelines "out-of-the-box." In this paper, we will discuss both the general Pydpiper framework and the various ways in which component modules can be pieced together to easily create new registration pipelines. This will include a discussion of the core principles motivating code development and a comparison of Pydpiper with other available toolkits. We also provide a comprehensive, line-by-line example to orient users with limited programming knowledge and highlight some of the most useful features of Pydpiper. In addition, we will present the four current applications of the code.
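    Pydpiper's duplicate-stage elimination (innovation 2 above) can be sketched in a few lines. This is a hypothetical mini-framework, not Pydpiper's actual API, and the commands below are invented:

    ```python
    class Pipeline:
        """Minimal sketch of a stage graph with duplicate-stage elimination.

        Stages are identified by their command string; adding the same
        command twice reuses the existing stage instead of re-running it.
        """

        def __init__(self):
            self.stages = {}   # command -> stage index
            self.order = []    # stages in insertion order

        def add_stage(self, command, inputs=(), output=None):
            if command in self.stages:         # duplicate: reuse existing stage
                return self.stages[command]
            idx = len(self.order)
            self.stages[command] = idx
            self.order.append({"cmd": command, "in": list(inputs), "out": output})
            return idx

    p = Pipeline()
    a = p.add_stage("blur img1.mnc", output="img1_blur.mnc")
    b = p.add_stage("blur img1.mnc", output="img1_blur.mnc")   # eliminated
    c = p.add_stage("register img1_blur.mnc img2.mnc", inputs=["img1_blur.mnc"])
    print(len(p.order), a == b)  # 2 True
    ```

    Two registration applications that each request the same blurring step thus share a single stage, which is the point of deduplication in a large pipeline graph.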

  4. Metatranscriptomic analysis of diverse microbial communities reveals core metabolic pathways and microbiome-specific functionality.

    Science.gov (United States)

    Jiang, Yue; Xiong, Xuejian; Danska, Jayne; Parkinson, John

    2016-01-12

    Metatranscriptomics is emerging as a powerful technology for the functional characterization of complex microbial communities (microbiomes). Use of unbiased RNA-sequencing can reveal both the taxonomic composition and active biochemical functions of a complex microbial community. However, the lack of established reference genomes, computational tools and pipelines make analysis and interpretation of these datasets challenging. Systematic studies that compare data across microbiomes are needed to demonstrate the ability of such pipelines to deliver biologically meaningful insights on microbiome function. Here, we apply a standardized analytical pipeline to perform a comparative analysis of metatranscriptomic data from diverse microbial communities derived from mouse large intestine, cow rumen, kimchi culture, deep-sea thermal vent and permafrost. Sequence similarity searches allowed annotation of 19 to 76% of putative messenger RNA (mRNA) reads, with the highest frequency in the kimchi dataset due to its relatively low complexity and availability of closely related reference genomes. Metatranscriptomic datasets exhibited distinct taxonomic and functional signatures. From a metabolic perspective, we identified a common core of enzymes involved in amino acid, energy and nucleotide metabolism and also identified microbiome-specific pathways such as phosphonate metabolism (deep sea) and glycan degradation pathways (cow rumen). Integrating taxonomic and functional annotations within a novel visualization framework revealed the contribution of different taxa to metabolic pathways, allowing the identification of taxa that contribute unique functions. The application of a single, standard pipeline confirms that the rich taxonomic and functional diversity observed across microbiomes is not simply an artefact of different analysis pipelines but instead reflects distinct environmental influences. 
At the same time, our findings show how microbiome complexity and availability of

  5. Extending eScience Provenance with User-Submitted Semantic Annotations

    Science.gov (United States)

    Michaelis, J.; Zednik, S.; West, P.; Fox, P. A.; McGuinness, D. L.

    2010-12-01

    eScience-based systems generate provenance of their data products, related to such things as data processing, data collection conditions, expert evaluation, and data product quality. Recent advances in web-based technology offer users the possibility of making annotations to both data products and steps in accompanying provenance traces, thereby expanding the utility of such provenance for others. These contributing users may have varying backgrounds, ranging from system experts to outside domain experts to citizen scientists. Furthermore, such users may wish to make varying types of annotations, ranging from documenting the purpose of a provenance step to raising concerns about the quality of data dependencies. Semantic Web technologies allow for such kinds of rich annotations to be made to provenance through the use of ontology vocabularies for (i) organizing provenance, and (ii) organizing user/annotation classifications. Furthermore, through Linked Data practices, Semantic linkages may be made from provenance steps to external data of interest. A desire for Semantically-annotated provenance has been motivated by data management issues in the Mauna Loa Solar Observatory’s (MLSO) Advanced Coronal Observing System (ACOS). In ACOS, photometer-based readings are taken of solar activity and subsequently processed into final data products consumable by end users. At intermediate stages of ACOS processing, factors such as evaluations by human experts and weather conditions are logged, which could impact data product quality. Linking such factors to provenance via user-submitted annotations could be significantly beneficial for other users. Likewise, the background of a user could affect the credibility of their annotations. For example, an annotation made by a citizen scientist describing the purpose of a provenance step may not be as reliable as a similar annotation made by an ACOS project member.
For this work, we have developed a software package that

  6. Harnessing Collaborative Annotations on Online Formative Assessments

    Science.gov (United States)

    Lin, Jian-Wei; Lai, Yuan-Cheng

    2013-01-01

    This paper harnesses collaborative annotations by students as learning feedback on online formative assessments to improve the learning achievements of students. Through the developed Web platform, students can conduct formative assessments, collaboratively annotate, and review historical records in a convenient way, while teachers can generate…

  7. Integrity assessment of pipelines - additional remarks; Avaliacao da integridade de dutos - observacoes adicionais

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Luis F.C. [PETROBRAS S.A., Salvador, BA (Brazil). Unidade de Negocios. Exploracao e Producao

    2005-07-01

    Integrity assessment of pipelines is part of a process that aims to enhance the operating safety of pipelines. During this task, questions normally come up related to the interpretation of inspection reports and the way the impact of several parameters on pipeline integrity is regarded. In order to answer such questions satisfactorily, the integrity assessment team must be able to suitably approach different subjects such as corrosion control and monitoring, assessment of metal loss and geometric anomalies, and third-party activities. This paper presents additional remarks on some of these questions based on the integrity assessment of almost fifty pipelines carried out at PETROBRAS E&P Bahia over the past eight years. (author)

  8. Pipeline investigation report : crude oil pipeline-third party damage : Trans Mountain Pipeline LP 610 millimetre-diameter crude oil pipeline : kilometre post 3.10, Westridge dock transfer line, Burnaby, British Columbia

    International Nuclear Information System (INIS)

    2009-03-01

    This report discussed an oil spill which occurred in July 2007 when a contractor's excavator bucket punctured a pipeline during the excavation of a trench for a new storm sewer line at a location in Burnaby, British Columbia (BC). The puncture caused the release of approximately 234 cubic meters of crude oil, which flowed into Burrard Inlet Bay via a storm sewer system. Eleven houses were sprayed with crude oil, and many other properties required restoration. Approximately 250 residents left their homes. While emergency workers and firefighters responding to the incident were sprayed with crude oil, no explosions, fires, or injuries occurred. The report provided details of studies conducted to determine the placement of the sewer line, as well as attempts made by the contractors to determine the lateral connection of the crude oil pipeline. Discrepancies between the location of the pipeline design drawing and its actual location on other construction drawings were also noted by the contractor. Twenty-four minutes after the rupture, the terminal was fully isolated and the drain-down of the pipeline was completed within an hour. The cause of the accident was attributed to inaccurate construction drawings and inadequate communications between contractors and consulting companies. 3 figs

  9. Permanent cathodic protection monitoring systems for offshore pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Britton, Jim [Deepwater Corrosion Services Inc., Houston, TX (United States)

    2009-07-01

    Historically, offshore pipeline cathodic protection monitoring has relied on portable survey techniques, typically ROV-assisted or surface-deployed survey methods. These methods have been shown to have technical as well as economic shortcomings; this is particularly true of buried offshore pipelines, where accuracy is always questionable. As more focus is being placed on offshore pipeline integrity, it was time for a new method to emerge. The technology discussed involves the retrofitting of permanent clamp-on monitors onto the pipeline, which can measure pipeline-to-seawater potential as well as current density. The sensors can be interrogated locally using light-powered subsea voltage readouts. The technology can be applied during pipeline construction, during installation of life-extension CP systems, or during routine subsea pipeline interventions. The new method eliminates the need for long cables or expensive acoustic or modulated data transfer and provides all the information required to fully verify CP system performance, thus eliminating the need for expensive close-interval surveys. Some deployment case histories are presented, along with the feasibility of application on deep water pipelines and comparative economics. (author)

  10. Surface wave propagation effects on buried segmented pipelines

    Directory of Open Access Journals (Sweden)

    Peixin Shi

    2015-08-01

    This paper deals with surface wave propagation (WP) effects on buried segmented pipelines. Both a simplified analytical model and a finite element (FE) model are developed for estimating the axial joint pullout movement of jointed concrete cylinder pipelines (JCCPs), whose joints have a brittle tensile failure mode under surface WP effects. The models account for the effects of peak ground velocity (PGV), WP velocity, the predominant period of seismic excitation, shear transfer between soil and pipelines, the axial stiffness of pipelines, joint characteristics, and the cracking strain of concrete mortar. FE simulation of the JCCP interaction with surface waves recorded during the 1985 Michoacan earthquake results in joint pullout movement that is consistent with field observations. The models are extended to estimate the joint axial pullout movement of cast iron (CI) pipelines, whose joints have a ductile tensile failure mode. A simplified analytical equation and an FE model are developed for estimating the joint pullout movement of CI pipelines. The joint pullout movement of CI pipelines is mainly affected by the variability of joint tensile capacity and accumulates at local weak joints in the pipeline.

  11. Assessing and preparing a pipeline for in line inspection

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Larry [T.D. Williamson Inc., Tulsa, OK (United States)

    2003-07-01

    In today's pipeline environment, operators around the world face new and emerging state and federal regulations requiring validation of their pipelines' integrity. In-line inspection, or smart pigging, is generally the preferred methodology for investigating metal loss and corrosion in pipelines. Although many pipelines can accommodate smart pigging, many others cannot, for various reasons, ranging from not having pig launchers and receivers installed on the line to impassable bends or restrictions and the general cleanliness of the pipeline itself. Pipeline cleanliness, more often than not, is one of the main reasons for inaccurate in-line inspection data gathering or failed smart pig runs. (author)

  12. Crowdsourcing and annotating NER for Twitter #drift

    DEFF Research Database (Denmark)

    Fromreide, Hege; Hovy, Dirk; Søgaard, Anders

    2014-01-01

    We present two new NER datasets for Twitter; a manually annotated set of 1,467 tweets (kappa=0.942) and a set of 2,975 expert-corrected, crowdsourced NER annotated tweets from the dataset described in Finin et al. (2010). In our experiments with these datasets, we observe two important points: (a) language drift on Twitter is significant, and while off-the-shelf systems have been reported to perform well on in-sample data, they often perform poorly on new samples of tweets; (b) state-of-the-art performance across various datasets can be obtained from crowdsourced annotations, making it more feasible...
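    The inter-annotator agreement quoted above (kappa=0.942) is a kappa statistic. As an illustrative sketch only (the record does not state which variant was used), here is a minimal Cohen's kappa computation over two annotators' label sequences, using toy NER labels rather than the actual dataset:

```python
# Cohen's kappa: observed agreement corrected for chance agreement.
# The label sequences below are invented toy data for illustration.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: probability of agreeing if each labeled at random
    # according to their own label distribution.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["PER", "O", "LOC", "O", "O",   "ORG", "O", "O"]
b = ["PER", "O", "LOC", "O", "ORG", "ORG", "O", "O"]
print(cohens_kappa(a, b))  # 0.8 (7/8 observed agreement, 0.375 by chance)
```

    A kappa near 0.94, as reported for the manually annotated set, indicates agreement far above what chance alone would produce.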

  13. The Dangers of Pipeline Thinking: How the School-to-Prison Pipeline Metaphor Squeezes out Complexity

    Science.gov (United States)

    McGrew, Ken

    2016-01-01

    In this essay Ken McGrew critically examines the "school-to-prison pipeline" metaphor and associated literature. The origins and influence of the metaphor are compared with the origins and influence of the competing "prison industrial complex" concept. Specific weaknesses in the "pipeline literature" are examined.…

  14. SNAD: sequence name annotation-based designer

    Directory of Open Access Journals (Sweden)

    Gorbalenya Alexander E

    2009-08-01

    Abstract Background A growing diversity of biological data is tagged with unique identifiers (UIDs) associated with polynucleotides and proteins to ensure efficient computer-mediated data storage, maintenance, and processing. These identifiers, which are not informative for most people, are often substituted with biologically meaningful names in various presentations to facilitate the utilization and dissemination of sequence-based knowledge. This substitution is commonly done manually, which may be a tedious exercise prone to mistakes and omissions. Results Here we introduce SNAD (Sequence Name Annotation-based Designer), which mediates automatic conversion of sequence UIDs (associated with a multiple alignment or phylogenetic tree, or supplied as a plain text list) into biologically meaningful names and acronyms. This conversion is directed by precompiled or user-defined templates that exploit the wealth of annotation available in cognate entries of external databases. Using examples, we demonstrate how this tool can be used to generate names for practical purposes, particularly in virology. Conclusion A tool for controllable annotation-based conversion of sequence UIDs into biologically meaningful names and acronyms has been developed and placed into service, fostering links between the quality of sequence annotation and the efficiency of communication and knowledge dissemination among researchers.
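    The template-driven substitution that SNAD automates can be sketched as follows. The UIDs, annotation fields, and template syntax below are invented for illustration and are not SNAD's actual data or API:

```python
# Hypothetical sketch of annotation-based UID-to-name conversion in the
# spirit of SNAD: a template is filled from annotation fields fetched for
# each UID. All records and field names here are invented for illustration.

ANNOTATIONS = {  # stand-in for annotation retrieved from an external database
    "AB000001.1": {"organism": "Hepatitis C virus", "acronym": "HCV", "year": 1994},
    "AB000002.1": {"organism": "Dengue virus 2", "acronym": "DENV-2", "year": 1997},
}

def rename_uids(uids, template="{acronym}_{year}"):
    """Substitute each UID with a name built from its annotation fields;
    UIDs without annotation are left unchanged."""
    return [template.format(**ANNOTATIONS[uid]) if uid in ANNOTATIONS else uid
            for uid in uids]

labels = rename_uids(["AB000001.1", "AB000002.1", "ZZ999999.9"])
print(labels)  # ['HCV_1994', 'DENV-2_1997', 'ZZ999999.9']
```

    The same substitution could be applied to leaf labels of a phylogenetic tree or to sequence names in an alignment, which is the use case the record describes.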

  15. An open annotation ontology for science on web 3.0.

    Science.gov (United States)

    Ciccarese, Paolo; Ocana, Marco; Garcia Castro, Leyla Jael; Das, Sudeshna; Clark, Tim

    2011-05-17

    There is currently a gap between the rich and expressive collection of published biomedical ontologies and the natural language expression of biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities and of needs for representing and integrating the results of biomedical text mining. An analysis of the strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools were then developed, along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. This paper presents Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated but is not required to be under update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotation. AO is freely available under an open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/. The Annotation Ontology meets critical requirements for
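    The "stand-off" annotation idea described above, in which metadata lives apart from the document and anchors to positions within it while carrying provenance, can be sketched as follows. The field names are invented for illustration and do not correspond to AO's actual OWL vocabulary:

```python
# Minimal illustration of stand-off annotation: the annotation record is
# stored outside the document (which need not be under the annotator's
# control) and anchors to it by character offsets. Field names are invented.
from dataclasses import dataclass

@dataclass
class StandoffAnnotation:
    doc_url: str        # the annotated web document
    start: int          # character offset where the anchored span begins
    end: int            # character offset where it ends
    ontology_term: str  # e.g. an IRI of a biomedical ontology class
    annotator: str      # provenance: human or algorithmic agent
    version: int        # provenance: supports versioning of annotations

    def anchored_text(self, doc_text: str) -> str:
        """Resolve the anchor against a snapshot of the document text."""
        return doc_text[self.start:self.end]

doc = "BRCA1 mutations increase breast cancer risk."
ann = StandoffAnnotation("http://example.org/paper1", 0, 5,
                         "http://example.org/ontology/Gene",
                         "text-miner-v2", 1)
print(ann.anchored_text(doc))  # BRCA1
```

    Because the annotation only references the document, many annotators (human or algorithmic) can layer independent metadata over the same paper without modifying it.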

  16. ACID: annotation of cassette and integron data

    Directory of Open Access Journals (Sweden)

    Stokes Harold W

    2009-04-01

    Abstract Background Although integrons and their associated gene cassettes are present in ~10% of bacteria and can represent up to 3% of the genome in which they are found, very few have been properly identified and annotated in public databases. These genetic elements have been overlooked in comparison to other vectors that facilitate lateral gene transfer between microorganisms. Description By automating the identification of integron integrase genes and of the non-coding cassette-associated attC recombination sites, we were able to assemble a database containing all publicly available sequence information regarding these genetic elements. Specialists manually curated the database, and this information was used to improve the automated detection and annotation of integrons and their encoded gene cassettes. ACID (annotation of cassette and integron data) can be searched using a range of queries, and the data can be downloaded in a number of formats. Users can readily annotate their own data and integrate it into ACID using the tools provided. Conclusion ACID is a community resource providing easy access to annotations of integrons and making tools available to detect them in novel sequence data. ACID also hosts a forum to prompt integron-related discussion, which can hopefully lead to a more universal definition of this genetic element.

  17. 78 FR 5866 - Pipeline Safety: Annual Reports and Validation

    Science.gov (United States)

    2013-01-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0319] Pipeline Safety: Annual Reports and Validation AGENCY: Pipeline and Hazardous Materials... 2012 gas transmission and gathering annual reports, remind pipeline owners and operators to validate...

  18. Potential impacts of OCS oil and gas activities on fisheries. Volume 2. Annotated bibliography for OCS oil and gas impact studies. Final report

    International Nuclear Information System (INIS)

    Tear, L.M.

    1989-10-01

    This volume is the second of two volumes of the final report, Potential Impacts of OCS Oil and Gas Activities on Fisheries. The volume presents an annotated bibliography of published and grey literature related to the impacts of OCS oil and gas activity on finfish and shellfish in marine and estuarine waters. The studies presented in the bibliography include those related to the following pollutants or impact-causing activities: Rig/reef effects, Drilling discharges (muds or cuttings), Oil (petroleum hydrocarbons), Trace metals, Produced water, Habitat alteration, Debris, Rig placement (avoidance), Pipelines, and Socioeconomic effects. The studies are listed alphabetically by the primary author's last name. An index is provided to help the reader identify studies related to a specific impact.

  19. Automated and Accurate Estimation of Gene Family Abundance from Shotgun Metagenomes.

    Directory of Open Access Journals (Sweden)

    Stephen Nayfach

    2015-11-01

    Shotgun metagenomic DNA sequencing is a widely applicable tool for characterizing the functions that are encoded by microbial communities. Several bioinformatic tools can be used to functionally annotate metagenomes, allowing researchers to draw inferences about the functional potential of the community and to identify putative functional biomarkers. However, little is known about how decisions made during annotation affect the reliability of the results. Here, we use statistical simulations to rigorously assess how to optimize annotation accuracy and speed, given parameters of the input data like read length and library size. We identify best practices in metagenome annotation and use them to guide the development of the Shotgun Metagenome Annotation Pipeline (ShotMAP). ShotMAP is an analytically flexible, end-to-end annotation pipeline that can be implemented either on a local computer or a cloud compute cluster. We use ShotMAP to assess how different annotation databases affect the interpretation of how marine metagenome and metatranscriptome functional capacity changes across seasons. We also apply ShotMAP to data obtained from a clinical microbiome investigation of inflammatory bowel disease. This analysis finds that gut microbiota collected from Crohn's disease patients are functionally distinct from gut microbiota collected from either ulcerative colitis patients or healthy controls, with differential abundance of metabolic pathways related to host-microbiome interactions that may serve as putative biomarkers of disease.
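    The core bookkeeping in gene family abundance estimation can be sketched as follows, as a hedged illustration of the general idea rather than ShotMAP's actual algorithm: count annotated reads per family, normalize by family length so longer genes do not appear artificially more abundant, and convert to relative abundances.

```python
# Length-normalized relative abundance of gene families from annotated reads.
# The KEGG-style family IDs and lengths below are toy values for illustration.
from collections import Counter

def relative_abundance(read_annotations, family_lengths):
    """read_annotations: one family ID per annotated read.
    family_lengths: average gene length (bp) per family, for normalization."""
    counts = Counter(read_annotations)
    # Longer genes recruit more reads by chance, so divide counts by length.
    weighted = {fam: counts[fam] / family_lengths[fam] for fam in counts}
    total = sum(weighted.values())
    return {fam: w / total for fam, w in weighted.items()}

reads = ["K00001", "K00001", "K02588", "K02588", "K02588", "K02588"]
lengths = {"K00001": 1000, "K02588": 2000}
print(relative_abundance(reads, lengths))
# K02588 recruits twice the reads but is twice as long, so both end up at 0.5.
```

    Relative abundances of this kind are what downstream comparisons (e.g. seasonal marine samples, or Crohn's disease versus controls) are computed over.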

  20. Location of leaks in pressurized underground pipelines

    International Nuclear Information System (INIS)

    Eckert, E.G.; Maresca, J.W. Jr.

    1993-01-01

    Millions of underground storage tanks (USTs) are used to store petroleum and other chemicals. The pressurized underground pipelines associated with USTs containing petroleum motor fuels are typically 2 in. in diameter and 50 to 200 ft in length, and typically operate at pressures of 20 to 30 psi. Longer lines, with diameters up to 4 in., are found in some high-volume facilities. There are many systems that can be used to detect leaks in pressurized underground pipelines. When a leak is detected, the first step in the remediation process is to find its location. Passive-acoustic measurements, combined with advanced signal-processing techniques, provide a nondestructive method of leak location that is accurate and relatively simple, and that can be applied to a wide variety of pipelines and pipeline products.
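    One common signal-processing approach to passive-acoustic leak location, offered here as an illustrative sketch rather than the specific technique of this record, is to cross-correlate the leak noise recorded by two sensors bracketing the leak: the time delay of the correlation peak, together with the known propagation speed, yields the leak position.

```python
# Cross-correlation time-delay leak location, sketched on synthetic signals.
import numpy as np

def locate_leak(sig_a, sig_b, fs, sensor_spacing, wave_speed):
    """Distance of the leak from sensor A (same units as sensor_spacing).

    lag = arrival time at A minus arrival time at B = (d_a - d_b) / c,
    and d_a + d_b = sensor_spacing, so d_a = (sensor_spacing + c * lag) / 2.
    """
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = (np.argmax(corr) - (len(sig_b) - 1)) / fs  # seconds
    return (sensor_spacing + wave_speed * lag) / 2

# Synthetic check: the same noise burst arrives 5 ms earlier at sensor A,
# so with c = 1200 the leak is 6 units closer to A: (60 - 6) / 2 = 27.
fs = 10_000                                   # sample rate, Hz
burst = np.random.default_rng(0).standard_normal(200)
sig_a = np.concatenate([np.zeros(100), burst, np.zeros(700)])
sig_b = np.concatenate([np.zeros(150), burst, np.zeros(650)])
print(locate_leak(sig_a, sig_b, fs, sensor_spacing=60.0, wave_speed=1200.0))
```

    In practice the propagation speed depends on the pipe material and product, and the leak noise is buried in background noise, which is where the "advanced signal-processing techniques" mentioned above come in.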