WorldWideScience

Sample records for graphics processing pipeline

  1. PipelineDog: a simple and flexible graphic pipeline construction and maintenance tool.

    Science.gov (United States)

    Zhou, Anbo; Zhang, Yeting; Sun, Yazhou; Xing, Jinchuan

    2018-05-01

    Analysis pipelines are an essential part of bioinformatics research, and ad hoc pipelines are frequently created by researchers for prototyping and proof-of-concept purposes. However, most existing pipeline management systems or workflow engines are too complex for rapid prototyping or for learning the pipeline concept. A lightweight, user-friendly and flexible solution is thus desirable. In this study, we developed a new pipeline construction and maintenance tool, PipelineDog. This is a web-based integrated development environment with a modern web graphical user interface. It offers cross-platform compatibility, project management capabilities, code formatting and error-checking functions, and an online repository. It uses an easy-to-read/write script system that encourages code reuse. With the online repository, it also encourages the sharing of pipelines, which enhances analysis reproducibility and accountability. For most users, PipelineDog requires no software installation. Overall, this web application provides a way to rapidly create and easily manage pipelines. The PipelineDog web app is freely available at http://web.pipeline.dog. The command line version is available at http://www.npmjs.com/package/pipelinedog and the online repository at http://repo.pipeline.dog. ysun@kean.edu or xing@biology.rutgers.edu or ysun@diagnoa.com. Supplementary data are available at Bioinformatics online.
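
    As an illustration of the lightweight pipeline concept targeted here (a generic Python sketch of step composition, not PipelineDog's actual script syntax), an ad hoc analysis pipeline can be prototyped as a chain of single-purpose steps:

        from functools import reduce

        def pipeline(*steps):
            # Compose processing steps left to right into a single callable.
            return lambda data: reduce(lambda acc, step: step(acc), steps, data)

        # Hypothetical prototype: load lines, drop comments, count records.
        count_records = pipeline(
            lambda path: open(path).read().splitlines(),
            lambda lines: [ln for ln in lines if not ln.startswith("#")],
            len,
        )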

  2. Granatum: a graphical single-cell RNA-Seq analysis pipeline for genomics scientists.

    Science.gov (United States)

    Zhu, Xun; Wolfgruber, Thomas K; Tasato, Austin; Arisdakessian, Cédric; Garmire, David G; Garmire, Lana X

    2017-12-05

    Single-cell RNA sequencing (scRNA-Seq) is an increasingly popular platform to study heterogeneity at the single-cell level. Computational methods to process scRNA-Seq data are not very accessible to bench scientists, as they require significant bioinformatics skills. We have developed Granatum, a web-based scRNA-Seq analysis pipeline that makes analysis more broadly accessible to researchers. Without a single line of programming code, users can click through the pipeline, setting parameters and visualizing results via the interactive graphical interface. Granatum conveniently walks users through the various steps of scRNA-Seq analysis. It has a comprehensive list of modules, including plate merging and batch-effect removal, outlier-sample removal, gene-expression normalization, imputation, gene filtering, cell clustering, differential gene expression analysis, pathway/ontology enrichment analysis, protein network interaction visualization, and pseudo-time cell series construction. Granatum enables broad adoption of scRNA-Seq technology by empowering bench scientists with an easy-to-use graphical interface for scRNA-Seq data analysis. The package is freely available for research use at http://garmiregroup.org/granatum/app.

  3. Three-dimensional range data compression using computer graphics rendering pipeline.

    Science.gov (United States)

    Zhang, Song

    2012-06-20

    This paper presents the idea of naturally encoding three-dimensional (3D) range data into regular two-dimensional (2D) images utilizing the computer graphics rendering pipeline. The graphics pipeline provides a means to sample 3D geometry data into regular 2D images and to retrieve the depth information for each sampled pixel. The depth information for each pixel is then encoded into the red, green, and blue color channels of regular 2D images, which can further be compressed with existing 2D image compression techniques. By this means, 3D geometry data obtained by 3D range scanners can be instantaneously compressed into 2D images, providing a novel way of storing 3D range data in 2D form. We present experimental results to verify the performance of the proposed technique.
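
    The encoding idea can be sketched in a few lines of Python (a minimal sketch with numpy and hypothetical near/far depth bounds; the paper's actual encoding is engineered to survive lossy 2D compression, which this naive bit-splitting would not):

        import numpy as np

        def encode_depth_to_rgb(depth, z_near, z_far):
            # Quantize a depth map into 24 bits spread across R, G and B.
            d = np.clip((depth - z_near) / (z_far - z_near), 0.0, 1.0)
            q = np.round(d * (2**24 - 1)).astype(np.uint32)
            rgb = np.empty(depth.shape + (3,), dtype=np.uint8)
            rgb[..., 0] = (q >> 16) & 0xFF   # red: most significant byte
            rgb[..., 1] = (q >> 8) & 0xFF    # green: middle byte
            rgb[..., 2] = q & 0xFF           # blue: least significant byte
            return rgb

        def decode_rgb_to_depth(rgb, z_near, z_far):
            q = ((rgb[..., 0].astype(np.uint32) << 16)
                 | (rgb[..., 1].astype(np.uint32) << 8)
                 | rgb[..., 2].astype(np.uint32))
            return z_near + q / (2**24 - 1) * (z_far - z_near)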

  4. CLAMP - a toolkit for efficiently building customized clinical natural language processing pipelines.

    Science.gov (United States)

    Soysal, Ergin; Wang, Jingqi; Jiang, Min; Wu, Yonghui; Pakhomov, Serguei; Liu, Hongfang; Xu, Hua

    2017-11-24

    Existing general clinical natural language processing (NLP) systems such as MetaMap and the Clinical Text Analysis and Knowledge Extraction System have been successfully applied to information extraction from clinical text. However, end users often have to customize existing systems for their individual tasks, which can require substantial NLP skills. Here we present CLAMP (Clinical Language Annotation, Modeling, and Processing), a newly developed clinical NLP toolkit that provides not only state-of-the-art NLP components, but also a user-friendly graphical user interface that helps users quickly build customized NLP pipelines for their individual applications. Our evaluation shows that the CLAMP default pipeline achieved good performance on named entity recognition and concept encoding. We also demonstrate the efficiency of the CLAMP graphical user interface in building customized, high-performance NLP pipelines with two use cases: extracting smoking status and lab test values. CLAMP is publicly available for research use, and we believe it is a unique asset for the clinical NLP community.

  5. The PC graphics handbook

    CERN Document Server

    Sanchez, Julio

    2003-01-01

    Part I - Graphics Fundamentals. PC GRAPHICS OVERVIEW: History and Evolution; Short History of PC Video; PS/2 Video Systems; SuperVGA; Graphics Coprocessors and Accelerators; Graphics Applications; State-of-the-Art in PC Graphics; 3D Application Programming Interfaces. POLYGONAL MODELING: Vector and Raster Data; Coordinate Systems; Modeling with Polygons. IMAGE TRANSFORMATIONS: Matrix-based Representations; Matrix Arithmetic; 3D Transformations. PROGRAMMING MATRIX TRANSFORMATIONS: Numeric Data in Matrix Form; Array Processing. PROJECTIONS AND RENDERING: Perspective; The Rendering Pipeline. LIGHTING AND SHADING: Lightin…

  6. A pipeline for comprehensive and automated processing of electron diffraction data in IPLT.

    Science.gov (United States)

    Schenk, Andreas D; Philippsen, Ansgar; Engel, Andreas; Walz, Thomas

    2013-05-01

    Electron crystallography of two-dimensional crystals allows the structural study of membrane proteins in their native environment, the lipid bilayer. Determining the structure of a membrane protein at near-atomic resolution by electron crystallography remains, however, a very labor-intensive and time-consuming task. To simplify and accelerate the data processing aspect of electron crystallography, we implemented a pipeline for the processing of electron diffraction data using the Image Processing Library and Toolbox (IPLT), which provides a modular, flexible, integrated, and extendable cross-platform, open-source framework for image processing. The diffraction data processing pipeline is organized as several independent modules implemented in Python. The modules can be accessed either from a graphical user interface or through a command line interface, thus meeting the needs of both novice and expert users. The low-level image processing algorithms are implemented in C++ to achieve optimal processing performance, and their interface is exported to Python using a wrapper. For enhanced performance, the Python processing modules are complemented with a central data managing facility that provides a caching infrastructure. The validity of our data processing algorithms was verified by processing a set of aquaporin-0 diffraction patterns with the IPLT pipeline and comparing the resulting merged data set with that obtained by processing the same diffraction patterns with the classical set of MRC programs.

  7. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline is developed in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline, offering a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and the calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an
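
    A rough Python analogue of such a "context" object (hypothetical names; the actual CASA pipeline context is considerably richer) would record each stage's heuristic decisions alongside its results so the weblog can report them:

        class PipelineContext:
            # Tracks heuristic decisions and results across pipeline stages.
            def __init__(self):
                self.stages = []

            def record(self, stage, decisions, result):
                # Each entry feeds a weblog-style quality-assurance report.
                self.stages.append({"stage": stage,
                                    "decisions": decisions,
                                    "result": result})

        ctx = PipelineContext()
        ctx.record("bandpass_calibration",
                   decisions={"solution_interval": "inf"},
                   result={"flagged_fraction": 0.03})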

  8. Standardization process aligned to integrated management system: the case of TRANSPETRO's Oil Pipelines and Terminals Unit

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Maria Fatima Ludovico de [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio/ITUC), Rio de Janeiro, RJ (Brazil). Instituto Tecnologico; Labrunie, Charles; Araujo, Dario Doria de [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil). Diretoria de Terminais e Oleodutos

    2009-07-01

    This paper presents the implementation by PETROBRAS Transporte S.A. - TRANSPETRO of its Oil Pipelines and Terminals Standardization Program (PRONOT) within the scope of the 'Integrated Management System' (IMS). This program, launched in 2006 in the regions where the company operates, aims at standardizing all of its oil pipeline and terminal operations. Its implementation was planned in two phases: the first, already successfully concluded, refers to pipeline operations, industrial maintenance and right-of-way activities management; and the second, initiated in 2009, encompasses cross-sectional activities including health, safety and environment (HSE); training and development of the oil pipeline workforce; communication with stakeholders; oil pipeline integrity; and engineering project requirements. The documentation structures of the TRANSPETRO IMS and PRONOT are described and represented graphically to emphasize the intentional alignment of the standardization process carried out by the Oil Pipelines and Terminals Unit with the corporate IMS, based upon a national and international literature review and practical research focusing on the best international practices. (author)

  9. Applications of the pipeline environment for visual informatics and genomics computations

    Directory of Open Access Journals (Sweden)

    Genco Alex

    2011-07-01

    Abstract Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and the integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources, providing a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment (http://pipeline.loni.ucla.edu) provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The

  10. Data processing pipeline for Herschel HIFI

    Science.gov (United States)

    Shipman, R. F.; Beaulieu, S. F.; Teyssier, D.; Morris, P.; Rengel, M.; McCoey, C.; Edwards, K.; Kester, D.; Lorenzani, A.; Coeur-Joly, O.; Melchior, M.; Xie, J.; Sanchez, E.; Zaal, P.; Avruch, I.; Borys, C.; Braine, J.; Comito, C.; Delforge, B.; Herpin, F.; Hoac, A.; Kwon, W.; Lord, S. D.; Marston, A.; Mueller, M.; Olberg, M.; Ossenkopf, V.; Puga, E.; Akyilmaz-Yabaci, M.

    2017-12-01

    Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations, in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products and were included in the Herschel Science Archive. Aims: The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI mission. Pre-launch laboratory testing was supported, as were routine mission operations. Methods: A modular software design allowed components to be easily added, removed, amended and/or extended as the understanding of the HIFI data developed during and after mission operations. Results: The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules had been in use since the HIFI pre-launch instrument-level testing. Conclusions: Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness. Herschel was an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  11. Flooding simulation of hilly pipeline commissioning process

    Energy Technology Data Exchange (ETDEWEB)

    Nan, Zhang [China National Oil and Gas Exploration and Development Corporation and China University of Petroleum, Beijing (China); Jing, Gong [China University of Petroleum, Beijing (China); Baoli, Zhu [China National Oil and Gas Exploration and Development Corporation, Beijing (China); Lin, Zheng [CNPC Oil and Gas Control Center, Beijing (China)

    2010-07-01

    When the construction of a pipeline has been completed, the pipeline is flooded as part of the commissioning process. This method consists of filling the empty pipe with water or oil. In a pipeline situated in hilly terrain, air entrapped in the fluid causes problems with the flooding process, and it is necessary to discharge the accumulated air to address this issue. The aim of this paper is to provide a model for predicting the location and volume of air pockets in a pipeline. This model was developed based on the fundamentals of mass balance and momentum transfer in multiphase flow and was then applied to a pipeline in China and compared with the SCADA data. Results showed a good match between the model's predictions of hydraulic movement and the real data from SCADA. The two-phase flow model developed can predict hydraulic movement during pipeline flooding in a hilly area, and thus it can be used to predict the water front location and air pocket movement in the pipe.
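
    As a toy illustration of why hilly terrain matters (a purely geometric heuristic in Python, not the paper's mass-balance/momentum model), local high points of the elevation profile are the natural candidates for air-pocket accumulation:

        def air_pocket_candidates(elevation):
            # Indices of local maxima along the pipeline elevation profile,
            # where entrapped air tends to collect during flooding.
            return [i for i in range(1, len(elevation) - 1)
                    if elevation[i] > elevation[i - 1]
                    and elevation[i] >= elevation[i + 1]]

        profile = [10, 40, 25, 60, 55, 70, 30]   # hypothetical elevations (m)
        print(air_pocket_candidates(profile))    # -> [1, 3, 5]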

  12. TESS Data Processing and Quick-look Pipeline

    Science.gov (United States)

    Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office

    2018-01-01

    We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30-minute full-frame images.

  13. ARTIP: Automated Radio Telescope Image Processing Pipeline

    Science.gov (United States)

    Sharma, Ravi; Gyanchandani, Dolly; Kulkarni, Sarang; Gupta, Neeraj; Pathak, Vineet; Pande, Arti; Joshi, Unmesh

    2018-02-01

    The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging radio-interferometric data. ARTIP starts with raw data, i.e., a measurement set, and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging, to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard Python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.

  14. Collection Of Software For Computer Graphics

    Science.gov (United States)

    Hibbard, Eric A.; Makatura, George

    1990-01-01

    Ames Research Graphics System (ARCGRAPH) collection of software libraries and software utilities assisting researchers in generating, manipulating, and visualizing graphical data. Defines metafile format containing device-independent graphical data. File format used with various computer-graphics-manipulation and -animation software packages at Ames, including SURF (COSMIC Program ARC-12381) and GAS (COSMIC Program ARC-12379). Consists of two-stage "pipeline" used to put out graphical primitives. ARCGRAPH libraries developed on VAX computer running VMS.

  15. Analysis of buried pipelines at Kozloduy

    International Nuclear Information System (INIS)

    Asfura, A.

    1999-01-01

    This paper describes the analysis of the buried pipelines at the Kozloduy NPP. It describes the studied pipelines and their properties, details the methodology applied, and presents the evaluation of the soil strain field together with a graphical representation of the results obtained.

  16. Graphical Language for Data Processing

    Science.gov (United States)

    Alphonso, Keith

    2011-01-01

    A graphical language for processing data allows processing elements to be connected with virtual wires that represent data flows between processing modules. The processing of complex data, such as lidar data, requires many different algorithms to be applied. The purpose of this innovation is to automate the processing of complex data, such as lidar data, without the need for complex scripting and programming languages. The system consists of a set of user-interface components that allow the user to drag and drop various algorithmic and processing components onto a process graph. By working graphically, the user can completely visualize the process flow and create complex diagrams. This innovation supports the nesting of graphs, such that a graph can be included in another graph as a single step for processing. In addition to the user interface components, the system includes a set of .NET classes that represent the graph internally. These classes provide the internal system representation of the graphical user interface. The system includes a graph execution component that reads the internal representation of the graph (as described above) and executes that graph. The execution of the graph follows the interpreted model of execution in that each node is traversed and executed from the original internal representation. In addition, there are components that allow external code elements, such as algorithms, to be easily integrated into the system, thus making the system infinitely expandable.
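
    The interpreted execution model described can be sketched in a few lines (a Python sketch for illustration only; the actual system is built on .NET classes, and these names are hypothetical):

        class Node:
            def __init__(self, func, inputs=()):
                self.func, self.inputs = func, inputs   # inputs act as virtual wires

        def execute(graph):
            # Interpret the graph: each node runs once all of its inputs have run.
            results = {}
            def run(name):
                if name not in results:
                    node = graph[name]
                    args = [run(dep) for dep in node.inputs]
                    results[name] = node.func(*args)
                return results[name]
            for name in graph:
                run(name)
            return results

        # A nested graph can itself be wrapped as a single Node whose
        # func calls execute() on the subgraph.
        graph = {
            "load":   Node(lambda: [3.0, 1.0, 2.0]),
            "sort":   Node(sorted, inputs=("load",)),
            "report": Node(lambda xs: f"min={xs[0]}", inputs=("sort",)),
        }
        print(execute(graph)["report"])   # -> min=1.0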

  17. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. GPU-accelerated applications are found in both scientific and commercial domains. Sorting is considered one of the very important operations in many applications, so its efficient implementation is essential for overall application performance. This paper represents an effort to analyze and evaluate implementations of representative sorting algorithms on graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. Algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
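
    For reference, the Radix sort evaluated there can be sketched on the CPU in a few lines of Python; GPU implementations parallelize the per-digit histogram and scatter steps across thousands of threads:

        def radix_sort(values, key_bits=32, radix_bits=8):
            # Least-significant-digit radix sort for non-negative integers.
            mask = (1 << radix_bits) - 1
            for shift in range(0, key_bits, radix_bits):
                buckets = [[] for _ in range(1 << radix_bits)]
                for v in values:
                    buckets[(v >> shift) & mask].append(v)   # stable scatter
                values = [v for bucket in buckets for v in bucket]
            return values

        print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))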

  18. Graphic Design in Libraries: A Conceptual Process

    Science.gov (United States)

    Ruiz, Miguel

    2014-01-01

    Providing successful library services requires efficient and effective communication with users; therefore, it is important that content creators who develop visual materials understand key components of design and, specifically, develop a holistic graphic design process. Graphic design, as a form of visual communication, is the process of…

  19. The Dark Energy Survey Image Processing Pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, E.; et al.

    2018-01-09

    The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed in the time domain component nightly. On a bi-annual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.

  20. Runtime Modifications of Spark Data Processing Pipelines

    NARCIS (Netherlands)

    Lazovik, E.; Medema, M.; Albers, T.; Langius, E.A.F.; Lazovik, A.

    2017-01-01

    Distributed data processing systems are the standard means for large-scale data analysis in the Big Data field. These systems are based on processing pipelines where the processing is done via a composition of multiple elements or steps. In current distributed data processing systems, the code and

  1. BarraCUDA - a fast short read sequence aligner using graphics processing units

    Directory of Open Access Journals (Sweden)

    Klus Petr

    2012-01-01

    Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  2. BarraCUDA - a fast short read sequence aligner using graphics processing units

    LENUS (Irish Health Repository)

    Klus, Petr

    2012-01-13

    Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  3. The Kepler Science Data Processing Pipeline Source Code Road Map

    Science.gov (United States)

    Wohler, Bill; Jenkins, Jon M.; Twicken, Joseph D.; Bryson, Stephen T.; Clarke, Bruce Donald; Middour, Christopher K.; Quintana, Elisa Victoria; Sanderfer, Jesse Thomas; Uddin, Akm Kamal; Sabale, Anima

    2016-01-01

    We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed, developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center, the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at Ames Research Center, software development and operations departments, and a data center which hosts the computers required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler Science Processing Pipeline, including the software algorithms. We present the high-performance, parallel computing software modules of the pipeline that perform transit photometry, pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument characterization.

  4. The Analysis of Task and Data Characteristic and the Collaborative Processing Method in Real-Time Visualization Pipeline of Urban 3DGIS

    Directory of Open Access Journals (Sweden)

    Dongbo Zhou

    2017-03-01

    Parallel processing in the real-time visualization of three-dimensional Geographic Information Systems (3DGIS) has tended to concentrate on algorithm levels in recent years, and most existing methods employ multiple threads in a Central Processing Unit (CPU) or kernels in a Graphics Processing Unit (GPU) to improve efficiency in the computation of the Levels of Detail (LODs) for three-dimensional (3D) models and in the display of Digital Elevation Models (DEMs) and Digital Orthophoto Maps (DOMs). The systematic analysis of the task and data characteristics of parallelism in the real-time visualization of 3DGIS continues to fall behind the development of hardware. In this paper, the basic procedures of real-time visualization of urban 3DGIS are first reviewed, and then the real-time visualization pipeline is analyzed. Further, the pipeline is decomposed into different task stages based on the task order and the input-output dependency. Based on the analysis of task parallelism in different pipeline stages, the data parallelism characteristics of each task are summarized by studying the algorithms involved. Finally, this paper proposes a parallel co-processing mode and a collaborative strategy for real-time visualization of urban 3DGIS. It also provides a fundamental basis for developing parallel algorithms and strategies in 3DGIS.

  5. Kepler Science Operations Center Pipeline Framework

    Science.gov (United States)

    Klaus, Todd C.; McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Middour, Christopher; Caldwell, Douglas A.; Jenkins, Jon M.

    2010-01-01

    The Kepler mission is designed to continuously monitor up to 170,000 stars at a 30-minute cadence for 3.5 years, searching for Earth-size planets. The data are processed at the Science Operations Center (SOC) at NASA Ames Research Center. Because of the large volume of data and the memory- and CPU-intensive nature of the analysis, significant computing hardware is required. We have developed generic pipeline framework software that is used to distribute and synchronize the processing across a cluster of CPUs and to manage the resulting products. The framework is written in Java and is therefore platform-independent, and scales from a single, standalone workstation (for development and research on small data sets) to a full cluster of homogeneous or heterogeneous hardware with minimal configuration changes. A plug-in architecture provides customized control of the unit of work without the need to modify the framework itself. Distributed transaction services provide for atomic storage of pipeline products for a unit of work across a relational database and the custom Kepler DB. Generic parameter management and data accountability services are provided to record the parameter values, software versions, and other meta-data used for each pipeline execution. A graphical console allows for the configuration, execution, and monitoring of pipelines. An alert and metrics subsystem is used to monitor the health and performance of the pipeline. The framework was developed for the Kepler project based on Kepler requirements, but the framework itself is generic and could be used for a variety of applications where these features are needed.
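
    The plug-in idea can be illustrated with a small registry sketch (rendered in Python for brevity, although the actual framework is written in Java; all names here are hypothetical):

        PIPELINE_MODULES = {}

        def pipeline_module(name):
            # Register a custom unit-of-work handler without modifying
            # the framework core.
            def register(cls):
                PIPELINE_MODULES[name] = cls
                return cls
            return register

        @pipeline_module("photometry")
        class Photometry:
            def process(self, unit_of_work):
                return f"photometry on {unit_of_work}"

        result = PIPELINE_MODULES["photometry"]().process("cadence 42")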

  6. Development of Protective Coatings for Co-Sequestration Processes and Pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Bierwagen, Gordon; Huang, Yaping

    2011-11-30

    The program, entitled Development of Protective Coatings for Co-Sequestration Processes and Pipelines, examined the sensitivity of existing coating systems to supercritical carbon dioxide (SCCO2) exposure and developed a new coating system to protect pipelines from corrosion under SCCO2 exposure. A literature review was also conducted regarding pipeline corrosion sensors to monitor pipes used in handling co-sequestration fluids. The research was to ensure safety and reliability for a pipeline transporting SCCO2 from the power plant to the sequestration site to mitigate the greenhouse gas effect. Results showed that one commercial coating and one designed formulation can both be supplied as potential candidates for an internal pipeline coating to transport SCCO2.

  7. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    Science.gov (United States)

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed where four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
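
    The two NPAIRS-style metrics mentioned, prediction accuracy and SPI reproducibility, can be sketched as follows (a simplified Python illustration assuming a generic fit/predict model interface, not the system's actual API):

        import numpy as np

        def split_half_evaluate(X, y, fit, predict, weight_map):
            # Split the scans in half, train a model on each half, then measure:
            # prediction accuracy on the held-out half, and reproducibility as
            # the correlation of the two statistical parametric images (SPIs).
            idx = np.random.permutation(len(y))
            a, b = idx[::2], idx[1::2]
            m1, m2 = fit(X[a], y[a]), fit(X[b], y[b])
            accuracy = np.mean([np.mean(predict(m1, X[b]) == y[b]),
                                np.mean(predict(m2, X[a]) == y[a])])
            reproducibility = np.corrcoef(weight_map(m1), weight_map(m2))[0, 1]
            return accuracy, reproducibility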

  8. A Midas plugin to enable construction of reproducible web-based image processing pipelines.

    Science.gov (United States)

    Grauer, Michael; Reynolds, Patrick; Hoogstoel, Marion; Budin, Francois; Styner, Martin A; Oguz, Ipek

    2013-01-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web-based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based user interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK-based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  9. A Midas Plugin to Enable Construction of Reproducible Web-based Image Processing Pipelines

    Directory of Open Access Journals (Sweden)

    Michael eGrauer

    2013-12-01

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by those biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web-based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based UI, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible for those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science by sharing datasets and processing pipelines between collaborators. In this paper, we present a description of this innovative Midas plugin, along with results obtained from building and executing several ITK-based image processing workflows for diffusion weighted MRI (DW MRI) of rodent brain images, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.

  10. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has

  11. Layout Design of a 1-Stage 80 Msps Pipeline ADC with Mentor Graphics 0.35 µm for High-Speed Camera Applications

    Directory of Open Access Journals (Sweden)

    Hamzah Afandi

    2012-10-01

    The 1-stage layout is part of an 8-stage, 80 Msps pipeline ADC. The 1-stage layout consists of three units: an op-amp, a switched capacitor, and a precision comparator with latch. The pipeline ADC works stage by stage and requires synchronization of the digital outputs of the 8 stages using a unit delay circuit (D-FF). The pipeline ADC requires a clock generator to support its operation. The CMOS transconductance op-amp unit is designed to the ADC's specifications, with capacitive loads, a large input impedance, and minimized noise. The precision comparator has an offset voltage (Vos) approximately equal to 0 V. The switched-capacitor design uses an NMOS switch for the sampling and multiplying operations. In the sampling and multiplying phases, the ADC requires a clock pulse in non-overlapping mode. The width of the non-overlapping period was adjusted to the time constant of the sampling and multiplying processes. The total period of each pulse equals 12.5 ns, corresponding to a frequency of 80 MHz. In the 1-stage layout an additional correction capacitor was required to correct the residual voltage. The total area of the 1-bit, 1-stage pipeline ADC layout is 200 µm × 98 µm.

  12. GRAPHIC INTERFACES FOR ENGINEERING APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Ion PANA

    2012-05-01

    Using the Fitness-for-Service calculation method effectively requires graphical interfaces. This paper presents an example of such an interface, built with the Visual Basic program and used in the evaluation of pipelines in a research contract [4].

  13. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    Gade-Nielsen, Nicolai Fog; Jørgensen, John Bagterp; Dammann, Bernd

    2012-01-01

    The high floating-point performance and memory bandwidth of Graphical Processing Units (GPUs) make them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires ree… as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards.

  14. A Relational Reasoning Approach to Text-Graphic Processing

    Science.gov (United States)

    Danielson, Robert W.; Sinatra, Gale M.

    2017-01-01

    We propose that research on text-graphic processing could be strengthened by the inclusion of relational reasoning perspectives. We briefly outline four aspects of relational reasoning: "analogies", "anomalies", "antinomies", and "antitheses". Next, we illustrate how text-graphic researchers have been…

  15. Identification of Learning Processes by Means of Computer Graphics.

    Science.gov (United States)

    Sorensen, Birgitte Holm

    1993-01-01

    Describes a development project for the use of computer graphics and video in connection with an inservice training course for primary education teachers in Denmark. Topics addressed include research approaches to computers; computer graphics in learning processes; activities relating to computer graphics; the role of the teacher; and student…

  16. ToTem: a tool for variant calling pipeline optimization.

    Science.gov (United States)

    Tom, Nikola; Tom, Ondrej; Malcikova, Jitka; Pavlova, Sarka; Kubesova, Blanka; Rausch, Tobias; Kolarik, Miroslav; Benes, Vladimir; Bystry, Vojtech; Pospisilova, Sarka

    2018-06-26

    High-throughput bioinformatics analyses of next generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process and offers the possibility of plugging in almost any tool or code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are interpreted as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. ToTem is a tool for automated pipeline optimization which is freely available as a web application at https://totem.software.
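
    The precision/recall/F-measure benchmark at the heart of such optimization reduces to comparing called variants against a truth set (a minimal Python sketch; ToTem itself wraps this in cross-validation to penalize over-fitted parameter choices):

        def benchmark(called, truth):
            # called, truth: sets of variants, e.g. (chrom, pos, ref, alt) tuples.
            tp = len(called & truth)
            precision = tp / len(called) if called else 0.0
            recall = tp / len(truth) if truth else 0.0
            f_measure = (2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
            return precision, recall, f_measure

        calls = {("chr1", 100, "A", "T"), ("chr1", 200, "G", "C")}
        truth = {("chr1", 100, "A", "T"), ("chr2", 50, "T", "G")}
        print(benchmark(calls, truth))   # -> (0.5, 0.5, 0.5)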

  17. Sequanix: a dynamic graphical interface for Snakemake workflows.

    Science.gov (United States)

    Desvillechabrol, Dimitri; Legendre, Rachel; Rioualen, Claire; Bouchier, Christiane; van Helden, Jacques; Kennedy, Sean; Cokelaer, Thomas

    2018-06-01

    We designed a PyQt graphical user interface, Sequanix, aimed at democratizing the use of Snakemake pipelines in the NGS space and beyond. By default, Sequanix includes the Sequana NGS pipelines (Snakemake format) (http://sequana.readthedocs.io), and it is also capable of loading any external Snakemake pipeline. New users can easily and visually edit configuration files of expert-validated pipelines and can interactively execute these production-ready workflows. Sequanix will be useful both to Snakemake developers in exposing their pipelines and to a wide audience of users. Source on http://github.com/sequana/sequana, bio-containers on http://bioconda.github.io and Singularity Hub (http://singularity-hub.org). dimitri.desvillechabrol@pasteur.fr or thomas.cokelaer@pasteur.fr. Supplementary data are available at Bioinformatics online.
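
    For context, the pipelines Sequanix exposes are declared as Snakemake rules with input/output patterns; a minimal example (hypothetical file names, not part of Sequana itself) looks like:

        # Snakefile: count lines per sample, then combine the results.
        SAMPLES = ["A", "B"]

        rule all:
            input: "results/summary.txt"

        rule count:
            input: "data/{sample}.fastq"
            output: "results/{sample}.count"
            shell: "wc -l < {input} > {output}"

        rule summarize:
            input: expand("results/{sample}.count", sample=SAMPLES)
            output: "results/summary.txt"
            shell: "cat {input} > {output}"

    Sequanix's role is then to let users edit the accompanying configuration file and launch such a workflow without touching the command line.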

  18. Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Todd, Richard A. [RIS Corp.]; Radford, David C. [ORNL Physics Div.]

    2013-12-30

    Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arrays such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.

  19. Pipeline Processing for VISTA

    Science.gov (United States)

    Lewis, J. R.; Irwin, M.; Bunclark, P.

    2010-12-01

    The VISTA telescope is a 4-metre instrument which has recently been commissioned at Paranal, Chile. Equipped with an infrared camera of sixteen 2K×2K Raytheon detectors and a 1.7-square-degree field of view, VISTA represents a huge leap in infrared survey capability in the southern hemisphere. Pipeline processing of IR data is far more technically challenging than for optical data: IR detectors are inherently more unstable, while the sky emission is over 100 times brighter than most objects of interest and varies in a complex spatial and temporal manner. To compensate for this, exposure times are kept short, leading to high nightly data rates. VISTA is expected to generate an average of 250 GB of data per night over the next 5-10 years, which far exceeds the current total data rate of all 8m-class telescopes. In this presentation we discuss the pipelines that have been developed to deal with IR imaging data from VISTA and the primary issues involved in an end-to-end system capable of: robustly removing instrument and night-sky signatures; monitoring data quality and system integrity; providing astrometric and photometric calibration; and generating photon noise-limited images and science-ready astronomical catalogues.

  20. Pipeline operators training and certification using thermohydraulic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Barreto, Claudio V.; Plasencia C, Jose [Pontificia Universidade Catolica (PUC-Rio), Rio de Janeiro, RJ (Brazil). Nucleo de Simulacao Termohidraulica de Dutos (SIMDUT); Montalvao, Filipe; Costa, Luciano [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    The continuous training and certification of pipeline operators at TRANSPETRO's Pipeline National Operations Control Center (CNCO) is an essential task for the efficiency and safety of oil and derivatives transport operations through the Brazilian pipeline network. For this objective, a hydraulic simulator is considered an excellent tool, allowing the creation of different operational scenarios for training on pipeline hydraulic behavior as well as for testing the operator's responses to normal and abnormal real-time operational conditions. The hydraulic simulator is built on pipeline simulation software that supplies the hydraulic responses normally acquired from the pipeline remote units in the field. The pipeline simulation software has a communication interface system that sends and receives data to and from the SCADA supervisory system database. The SCADA graphical interface is used to create and customize human-machine interfaces (HMIs) from which the operator/instructor has total control of the pipeline system and instrumentation by sending commands. Therefore, it is possible to have realistic training outside of the real production systems, while acquiring experience during training hours with the operation of a real pipeline. A pilot project was initiated at TRANSPETRO's CNCO to evaluate the advantages of hydraulic simulators in pipeline operator training and certification programs. The first part of the project was the development of three simulators for different pipelines. The excellent results permitted the project's expansion to a total of twenty different pipelines, implemented in training programs for pipelines presently operated by CNCO as well as for the new ones that are being migrated. The main objective of this paper is to present an overview of the implementation process and the development of a training environment using commercial pipeline simulation software. This paper also presents

  1. An integrated SNP mining and utilization (ISMU) pipeline for next generation sequencing data.

    Science.gov (United States)

    Azam, Sarwar; Rathore, Abhishek; Shah, Trushar M; Telluri, Mohan; Amindala, BhanuPrakash; Ruperao, Pradeep; Katta, Mohan A V S K; Varshney, Rajeev K

    2014-01-01

    Open-source single nucleotide polymorphism (SNP) discovery pipelines for next generation sequencing data commonly require working knowledge of command line interfaces, massive computational resources and expertise, which is a daunting task for biologists. Further, the SNP information generated may not be readily usable for downstream processes such as genotyping. Hence, a comprehensive pipeline, called Integrated SNP Mining and Utilization (ISMU), has been developed by integrating several open-source next generation sequencing (NGS) tools with a graphical user interface for SNP discovery and utilization in developing genotyping assays. The pipeline features functionalities such as pre-processing of raw data, integration of open-source alignment tools (Bowtie2, BWA, Maq, NovoAlign and SOAP2), SNP prediction methods (SAMtools/SOAPsnp/CNS2snp and CbCC) and interfaces for developing genotyping assays. The pipeline outputs a list of high-quality SNPs between all pairwise combinations of the genotypes analyzed, in addition to the reference genome/sequence. Visualization tools (Tablet and Flapjack) integrated into the pipeline enable inspection of the alignment and of errors, if any. The pipeline also provides a confidence score or polymorphism information content value with flanking sequences for identified SNPs in the standard format required for developing marker genotyping (KASP and Golden Gate) assays. The pipeline enables users to process a range of NGS datasets, such as whole genome re-sequencing, restriction site associated DNA sequencing and transcriptome sequencing data, at a fast speed. The pipeline is very useful for the plant genetics and breeding community with no computational expertise in order to discover SNPs and utilize them in genomics, genetics and breeding studies. The pipeline has been parallelized to process huge datasets of next generation sequencing. It has been developed in the Java language and is available at http://hpc.icrisat.cgiar.org/ISMU as a standalone

  2. High-throughput bioinformatics with the Cyrille2 pipeline system

    Directory of Open Access Journals (Sweden)

    de Groot Joost CW

    2008-02-01

    Abstract Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: (1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; (2) the Scheduler, which forms the functional core of the system and which tracks what data enter the system and determines what jobs must be scheduled for execution; and (3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines.
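
    The Scheduler/Executor split can be sketched as a simple job queue (a hypothetical Python sketch; the real system persists jobs and dispatches them to a compute cluster):

        import queue

        jobs = queue.Queue()

        def schedule(new_datasets):
            # Scheduler role: track what data enters the system and decide
            # which jobs must be executed for it.
            for dataset in new_datasets:
                jobs.put(("preprocess", dataset))

        def execute_all(runners):
            # Executor role: search for scheduled jobs and execute them.
            while not jobs.empty():
                tool, dataset = jobs.get()
                runners[tool](dataset)

        schedule(["sample_A.fastq", "sample_B.fastq"])
        execute_all({"preprocess": lambda d: print("preprocessing", d)})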

  3. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must support a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  4. Standardization process for pipeline right-of-way activities: the case of TRANSPETRO's Oil Pipeline and Terminals Business Unit

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, Kassandra Senra de Morais M.; Goncalves, Bruno Martins [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil). Diretoria de Terminais e Oleodutos; Almeida, Maria Fatima Ludovico de [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio/ITUC), Rio de Janeiro, RJ (Brazil). Instituto Tecnologico

    2009-07-01

    This paper describes the experience of PETROBRAS Transporte S.A. concerning the standardization process for its pipeline right-of-way (ROW) activities. This standardization initiative has been carried out within the Oil Pipelines and Terminals Standardization Program (PRONOT), focusing on planning, standardization and implementation of all norms and corporate procedures referring to TRANSPETRO's right-of-way activities. The process promoted the integration of isolated regional initiatives, a sense of unity and the creation of a learning network consisting of 60 employees. This paper presents the last phase's results concerning implementation of corporate standards, based upon achievements of previous phases. It covers the following topics: a general view of the whole process by way of introduction; the potential of integration of recent standardization results with TRANSPETRO's corporate management tools and information systems; definition of four performance indicators and their metrics related to pipeline right-of-way management, as well as a corporate standard for the requirements for contracting services related to rights-of-way inspection, maintenance and communication; challenges, barriers and benefits perceived by the team responsible for formulating and implementing standards and procedures in TRANSPETRO's Oil Pipelines and Terminals Unit. (author)

  5. HiCUP: pipeline for mapping and processing Hi-C data.

    Science.gov (United States)

    Wingett, Steven; Ewels, Philip; Furlan-Magaril, Mayra; Nagano, Takashi; Schoenfelder, Stefan; Fraser, Peter; Andrews, Simon

    2015-01-01

    HiCUP is a pipeline for processing sequence data generated by Hi-C and Capture Hi-C (CHi-C) experiments, which are techniques used to investigate three-dimensional genomic organisation. The pipeline maps data to a specified reference genome and removes artefacts that would otherwise hinder subsequent analysis. HiCUP also produces an easy-to-interpret yet detailed quality control (QC) report that assists in refining experimental protocols for future studies. The software is freely available and has already been used for processing Hi-C and CHi-C data in several recently published peer-reviewed studies.

  6. IN-SITU TEST OF PRESSURE PIPELINE VIBRATION BASED ON DATA ACQUISITION AND SIGNAL PROCESSING

    OpenAIRE

    Hou, Huimin; Xu, Cundong; Liu, Hui; Wang, Rongrong; Jie, Junkun; Ding, Lianying

    2015-01-01

    Pipeline vibration of high frequency and large amplitude is an important factor that impacts the safe operation of pumping stations and the efficiency of the pumps. By conducting an in-situ vibration test of the pipeline system in a pumping station, we can objectively analyze the mechanism of pipeline vibration and evaluate the stability of pipeline operation. Using DASP (data acquisition & signal processing) in the in-situ test on the 2# pipeline of the third pumping station in the gen...

  7. Research on numerical simulation and protection of transient process in long-distance slurry transportation pipelines

    International Nuclear Information System (INIS)

    Lan, G; Jiang, J; Li, D D; Yi, W S; Zhao, Z; Nie, L N

    2013-01-01

    The calculation of water-hammer pressure for a single-phase liquid in a pipeline of uniform characteristics is already mature, but less research has addressed the calculation of slurry water-hammer pressure in complex pipelines carrying flows of solid particles. In this paper, based on developments in slurry pipelines at home and abroad, the fundamental principles and methods of numerical simulation of transient processes are presented, and several boundary conditions are given. Through numerical simulation and analysis of the transient processes of a practical long-distance slurry transportation pipeline system, effective protection measures and operating suggestions are presented. A model for calculating the water-hammer interaction of the solid and fluid phases is established based on a practical long-distance slurry pipeline transportation system. After performing a numerical simulation of the transient process and analyzing and comparing the results, effective protection measures and operating advice are recommended, which has guiding significance for the design and operating management of practical long-distance slurry pipeline transportation systems

  8. Research on numerical simulation and protection of transient process in long-distance slurry transportation pipelines

    Science.gov (United States)

    Lan, G.; Jiang, J.; Li, D. D.; Yi, W. S.; Zhao, Z.; Nie, L. N.

    2013-12-01

    The calculation of water-hammer pressure for a single-phase liquid in a pipeline of uniform characteristics is already mature, but less research has addressed the calculation of slurry water-hammer pressure in complex pipelines carrying flows of solid particles. In this paper, based on developments in slurry pipelines at home and abroad, the fundamental principles and methods of numerical simulation of transient processes are presented, and several boundary conditions are given. Through numerical simulation and analysis of the transient processes of a practical long-distance slurry transportation pipeline system, effective protection measures and operating suggestions are presented. A model for calculating the water-hammer interaction of the solid and fluid phases is established based on a practical long-distance slurry pipeline transportation system. After performing a numerical simulation of the transient process and analyzing and comparing the results, effective protection measures and operating advice are recommended, which has guiding significance for the design and operating management of practical long-distance slurry pipeline transportation systems.
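
    The classical single-phase transient calculation that this work extends can be sketched with the method of characteristics (a minimal illustration under assumed pipe parameters, not the slurry model of the paper):

```python
# Water hammer after instantaneous valve closure, method of characteristics.
# All parameters are illustrative assumptions.
import numpy as np

a, g = 1000.0, 9.81            # wave speed (m/s), gravity (m/s^2)
L, D, f = 5000.0, 0.5, 0.02    # pipe length (m), diameter (m), friction factor
N = 50                         # number of reaches
dx = L / N
dt = dx / a                    # Courant condition: dt = dx / a
B = a / g                      # characteristic impedance (velocity form)
R = f * dx / (2 * g * D)       # friction coefficient on V|V|
H0, V0 = 100.0, 2.0            # reservoir head (m), initial velocity (m/s)

H = np.full(N + 1, H0)         # initial head (friction slope neglected)
V = np.full(N + 1, V0)
peak = H[-1]
for _ in range(400):
    Cp = H[:-2] + B * V[:-2] - R * V[:-2] * np.abs(V[:-2])   # C+ from node i-1
    Cm = H[2:] - B * V[2:] + R * V[2:] * np.abs(V[2:])       # C- from node i+1
    Hn, Vn = H.copy(), V.copy()
    Hn[1:-1] = 0.5 * (Cp + Cm)
    Vn[1:-1] = (Cp - Cm) / (2 * B)
    Cm0 = H[1] - B * V[1] + R * V[1] * np.abs(V[1])
    Hn[0], Vn[0] = H0, (H0 - Cm0) / B                        # upstream reservoir
    Vn[-1] = 0.0                                             # closed valve
    Hn[-1] = H[-2] + B * V[-2] - R * V[-2] * np.abs(V[-2])
    H, V = Hn, Vn
    peak = max(peak, H[-1])

print(f"peak head at valve: {peak:.1f} m "
      f"(Joukowsky estimate {H0 + a * V0 / g:.1f} m)")
```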

  9. Discrete pre-processing step effects in registration-based pipelines, a preliminary volumetric study on T1-weighted images.

    Science.gov (United States)

    Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock

    2017-01-01

    Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that the steps have an effect on the volumetric output. To date, studies have compared between and not within pipelines, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, where each participant contributed three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect of either scan-rescan or repeated pipeline run was detected. We then supply a correction for noise in the data resulting from pre-processing.

  10. Printing--Graphic Arts--Graphic Communications

    Science.gov (United States)

    Hauenstein, A. Dean

    1975-01-01

    Recently, "graphic arts" has shifted from printing skills to a conceptual approach of production processes. "Graphic communications" must embrace the total system of communication through graphic media, to serve broad career education purposes; students taught concepts and principles can be flexible and adaptive. The author…

  11. Development of the spent fuel disassembling process by utilizing the 3D graphic design technology

    International Nuclear Information System (INIS)

    Song, T. K.; Lee, J. Y.; Kim, S. H.; Yun, J. S.

    2001-01-01

    For developing the spent fuel disassembling process, a 3D graphic simulation has been established by utilizing the 3D graphic design technology widely used in industry. The spent fuel disassembling process consists of a downender, a rod extraction device, a rod cutting device, a pellet extracting device and a skeleton compaction device. In this study, the 3D graphical design model of these devices is implemented as a conceptual design, and a virtual workcell with kinematics is established to simulate the motion of each device. By implementing this graphic simulation, all the unit processes involved in the spent fuel disassembling process are analyzed and optimized. The 3D graphical model and the 3D graphic simulation can be effectively used for designing the process equipment, as well as for optimizing the process and the maintenance procedures

  12. Integration of a neuroimaging processing pipeline into a pan-canadian computing grid

    International Nuclear Information System (INIS)

    Lavoie-Courchesne, S; Chouinard-Decorte, F; Doyon, J; Bellec, P; Rioux, P; Sherif, T; Rousseau, M-E; Das, S; Adalat, R; Evans, A C; Craddock, C; Margulies, D; Chu, C; Lyttelton, O

    2012-01-01

    The ethos of the neuroimaging field is quickly moving towards the open sharing of resources, including both imaging databases and processing tools. As a neuroimaging database represents a large volume of datasets and as neuroimaging processing pipelines are composed of heterogeneous, computationally intensive tools, such open sharing raises specific computational challenges. This motivates the design of novel dedicated computing infrastructures. This paper describes an interface between PSOM, a code-oriented pipeline development framework, and CBRAIN, a web-oriented platform for grid computing. This interface was used to integrate a PSOM-compliant pipeline for preprocessing of structural and functional magnetic resonance imaging into CBRAIN. We further tested the capacity of our infrastructure to handle a real large-scale project. A neuroimaging database including close to 1000 subjects was preprocessed using our interface and publicly released to help the participants of the ADHD-200 international competition. This successful experiment demonstrated that our integrated grid-computing platform is a powerful solution for high-throughput pipeline analysis in the field of neuroimaging.

  13. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware display a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  14. Discrete-Event Execution Alternatives on General Purpose Graphical Processing Units

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.

    2006-01-01

    Graphics cards, traditionally designed as accelerators for computer graphics, have evolved to support more general-purpose computation. General Purpose Graphical Processing Units (GPGPUs) are now being used as highly efficient, cost-effective platforms for executing certain simulation applications. While most of these applications belong to the category of time-stepped simulations, little is known about the applicability of GPGPUs to discrete event simulation (DES). Here, we identify some of the issues and challenges that the GPGPU stream-based interface raises for DES, and present some possible approaches to moving DES to GPGPUs. Initial performance results on simulation of a diffusion process show that DES-style execution on GPGPU runs faster than DES on CPU and also significantly faster than time-stepped simulations on either CPU or GPGPU.
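
    For contrast with the event-driven approach, the time-stepped diffusion baseline mentioned above can be sketched in a few lines (a CPU stand-in with numpy; grid size and coefficient are arbitrary assumptions):

```python
# Explicit time-stepped 2-D diffusion; alpha = D*dt/dx^2 must stay below
# 0.25 for stability of the explicit scheme.
import numpy as np

u = np.zeros((128, 128))
u[64, 64] = 1.0                # point source in the middle of the grid
alpha = 0.2
for _ in range(100):
    # The right-hand side is evaluated in full before the in-place add,
    # so this is a proper explicit Euler step on the old field.
    u[1:-1, 1:-1] += alpha * (u[2:, 1:-1] + u[:-2, 1:-1]
                              + u[1:-1, 2:] + u[1:-1, :-2]
                              - 4.0 * u[1:-1, 1:-1])
print(f"total mass: {u.sum():.6f}")  # conserved until the front hits the boundary
```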

  15. A software pipeline for processing and identification of fungal ITS sequences

    Directory of Open Access Journals (Sweden)

    Kristiansson Erik

    2009-01-01

    Full Text Available Abstract Background Fungi from environmental samples are typically identified to species level through DNA sequencing of the nuclear ribosomal internal transcribed spacer (ITS region for use in BLAST-based similarity searches in the International Nucleotide Sequence Databases. These searches are time-consuming and regularly require a significant amount of manual intervention and complementary analyses. We here present software – in the form of an identification pipeline for large sets of fungal ITS sequences – developed to automate the BLAST process and several additional analysis steps. The performance of the pipeline was evaluated on a dataset of 350 ITS sequences from fungi growing as epiphytes on building material. Results The pipeline was written in Perl and uses a local installation of NCBI-BLAST for the similarity searches of the query sequences. The variable subregion ITS2 of the ITS region is extracted from the sequences and used for additional searches of higher sensitivity. Multiple alignments of each query sequence and its closest matches are computed, and query sequences sharing at least 50% of their best matches are clustered to facilitate the evaluation of hypothetically conspecific groups. The pipeline proved to speed up the processing, as well as enhance the resolution, of the evaluation dataset considerably, and the fungi were found to belong chiefly to the Ascomycota, with Penicillium and Aspergillus as the two most common genera. The ITS2 was found to indicate a different taxonomic affiliation than did the complete ITS region for 10% of the query sequences, though this figure is likely to vary with the taxonomic scope of the query sequences. Conclusion The present software readily assigns large sets of fungal query sequences to their respective best matches in the international sequence databases and places them in a larger biological context. The output is highly structured to be easy to process, although it still needs

  16. FPGA Implementation of a Simple 3D Graphics Pipeline

    Directory of Open Access Journals (Sweden)

    Vladimir Kasik

    2015-01-01

    Full Text Available Conventional methods for computing 3D projections are nowadays usually implemented on standard or graphics processors. The performance of these devices is limited especially by their architecture, which to some extent works in a sequential manner. In this article we describe a project which utilizes parallel computation for simple projection of a wireframe 3D model. The algorithm is optimized for an FPGA-based implementation. The design of the numerical logic is described in VHDL with the use of several basic IP cores used especially for computing trigonometric functions. The implemented algorithms allow smooth rotation of the model in two axes (azimuth and elevation) and a change of the viewing angle. Tests carried out on an FPGA Xilinx Spartan-6 development board have resulted in real-time rendering at over 5000 fps. In the conclusion of the article, we discuss additional possibilities for increasing the computational output in graphics applications via the use of HPC (High Performance Computing).
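
    The core mathematics of such a wireframe stage, rotation by azimuth and elevation followed by projection, can be sketched in software (the paper's version is VHDL on an FPGA; this numpy analogue is only illustrative):

```python
# Rotate a unit cube's vertices by azimuth/elevation, then project to 2-D.
import numpy as np

def rotate(points, azimuth, elevation):
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])  # azimuth about z
    Rx = np.array([[1, 0, 0], [0, ce, -se], [0, se, ce]])  # elevation about x
    return points @ Rz.T @ Rx.T

def project(points, d=4.0):
    # Simple perspective divide; the viewing distance d is an arbitrary choice.
    return points[:, :2] * (d / (d + points[:, 2:3]))

cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)
screen = project(rotate(cube, np.radians(30), np.radians(20)))
print(screen)  # wireframe edges would be drawn between these projected vertices
```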

  17. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate the perform...

  18. Commercial Off-The-Shelf (COTS) Graphics Processing Board (GPB) Radiation Test Evaluation Report

    Science.gov (United States)

    Salazar, George A.; Steele, Glen F.

    2013-01-01

    Large round trip communications latency for deep space missions will require more onboard computational capabilities to enable the space vehicle to undertake many tasks that have traditionally been ground-based, mission control responsibilities. As a result, visual display graphics will be required to provide simpler vehicle situational awareness through graphical representations, as well as provide capabilities never before done in a space mission, such as augmented reality for in-flight maintenance or telepresence activities. These capabilities will require graphics processors and associated support electronic components for high computational graphics processing. In an effort to understand the performance of commercial graphics card electronics operating in the expected radiation environment, a preliminary test was performed on five commercial off-the-shelf (COTS) graphics cards. This paper discusses the preliminary evaluation test results of five COTS graphics processing cards tested to the International Space Station (ISS) low earth orbit radiation environment. Three of the five graphics cards were tested to a total dose of 6000 rads (Si). The test articles, test configuration, preliminary results, and recommendations are discussed.

  19. Functional graphical languages for process control

    International Nuclear Information System (INIS)

    1996-01-01

    A wide variety of safety systems are in use today in the process industries. Most of these systems rely on control software using procedural programming languages. This study investigates the use of functional graphical languages for controls in the process industry. Different vendor proprietary software and languages are investigated and evaluation criteria are outlined based on ability to meet regulatory requirements, reference sites involving applications with similar safety concerns, QA/QC procedures, community of users, type and user-friendliness of the man-machine interface, performance of operational code, and degree of flexibility. (author) 16 refs., 4 tabs

  20. A novel process for small-scale pipeline natural gas liquefaction

    International Nuclear Information System (INIS)

    He, T.B.; Ju, Y.L.

    2014-01-01

    Highlights: • A novel process was proposed to liquefy natural gas by utilizing pressure exergy. • The process has zero energy consumption. • The maximum liquefaction rate of the process is 12.61%. • The maximum exergy utilization rate is 0.1961. • The economic analysis showed that the payback period of the process is quite short. - Abstract: A novel process for small-scale pipeline natural gas liquefaction is designed and presented. The novel process can utilize the pressure exergy of the pipeline to liquefy a part of the natural gas without any energy consumption. Thermodynamic analysis, including mass and energy balances and exergy analysis, is adopted in this paper. The liquefaction rate and exergy utilization rate are chosen as the objective functions. Several key parameters are optimized to approach the maximum liquefaction rate and exergy utilization rate. The optimization results showed that the maximum liquefaction rate is 12.61% and the maximum exergy utilization rate is 0.1961. Furthermore, the economic performance of the process is also discussed and compared using the maximum liquefaction rate and exergy utilization rate as indexes. In conclusion, the novel process is suitable for pressure exergy utilization due to its simplicity, zero energy consumption and short payback period
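
    The exergy bookkeeping behind the reported utilization rate rests on the specific flow exergy, e = (h - h0) - T0(s - s0); a toy evaluation follows (with placeholder property values, since the actual study would use a natural gas equation of state):

```python
# Specific flow exergy of pipeline gas relative to a dead state at T0.
# Enthalpy and entropy values below are illustrative placeholders.
T0 = 298.15                 # dead-state temperature, K
h, h0 = 880.0e3, 900.0e3    # specific enthalpy at state and dead state, J/kg
s, s0 = 4.50e3, 4.85e3      # specific entropy at state and dead state, J/(kg K)

e = (h - h0) - T0 * (s - s0)
print(f"specific exergy: {e / 1e3:.1f} kJ/kg")
```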

  1. Pipeline leak detection and location by on-line-correlation with a process computer

    International Nuclear Information System (INIS)

    Siebert, H.; Isermann, R.

    1977-01-01

    A method for leak detection in pipelines using a correlation technique is described. For leak detection, as well as for leak localisation and estimation of the leak flow, recursive estimation algorithms are used. The efficiency of the methods is demonstrated with a process computer and a pipeline model operating on-line. It is shown that very small leaks can be detected. (orig.) [de
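
    The correlation idea can be illustrated with a cross-correlation time-delay estimate between two sensors bracketing the leak (an assumed two-sensor setup, not the paper's recursive estimator):

```python
# Locate a leak from the arrival-time difference of its noise at two sensors.
# Sample rate, wave speed and sensor spacing are illustrative assumptions.
import numpy as np

fs, c, L = 1000.0, 1200.0, 2000.0     # Hz, m/s, m
rng = np.random.default_rng(0)
noise = rng.normal(size=4096)         # broadband leak noise
delay = 300                           # true extra travel time, in samples

s1 = noise                            # sensor nearer the leak
s2 = np.zeros_like(noise)             # farther sensor hears it later
s2[delay:] = noise[:-delay]

lag = np.argmax(np.correlate(s2, s1, mode="full")) - (len(s1) - 1)
dt = lag / fs                         # estimated time difference, s
x = (L - c * dt) / 2.0                # leak position measured from sensor 1
print(f"estimated leak position: {x:.1f} m from sensor 1")
```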

  2. Real-time colouring and filtering with graphics shaders

    Science.gov (United States)

    Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.

    2017-11-01

    Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computation like filtering, smoothing and line ratio algorithms can be integrated as part of the graphics pipeline. We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).
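
    The essence of a transfer function, mapping voxel intensity to colour and opacity before compositing, can be sketched on the CPU (shwirl's actual shaders run on the GPU; the ramp shapes below are arbitrary assumptions):

```python
# Map a spectral-cube stand-in through an RGBA transfer function and
# composite front to back along one axis.
import numpy as np

def transfer(intensity):
    """Normalized intensity -> RGBA; the ramp shapes are assumptions."""
    i = np.clip(intensity, 0.0, 1.0)
    return np.stack([i, i ** 2, 1.0 - i, 0.1 * i], axis=-1)  # R, G, B, alpha

cube = np.random.default_rng(2).random((32, 32, 32))  # stand-in spectral cube
rgba = transfer(cube)

img = np.zeros((32, 32, 3))     # accumulated colour
acc = np.zeros((32, 32, 1))     # accumulated opacity
for z in range(cube.shape[2]):  # front-to-back alpha compositing
    w = (1.0 - acc) * rgba[:, :, z, 3:]
    img += w * rgba[:, :, z, :3]
    acc += w
print(img.min(), img.max())
```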

  3. Mathematical simulation for compensation capacities area of pipeline routes in ship systems

    Science.gov (United States)

    Ngo, G. V.; Sakhno, K. N.

    2018-05-01

    In this paper, the authors considered the problem of enhancing the manufacturability of ship system pipelines at the design stage. An analysis of the arrangements and possibilities for compensating deviations of pipeline routes has been carried out. The task was set to produce the “fit pipe” together with the rest of the pipes in the route. It was proposed to compensate for deviations by movement of the pipeline route during pipe installation and to calculate the maximum values of these displacements in the analyzed path. The theoretical basis of deviation compensation for pipeline routes using rotations of parallel section pairs of pipes is assembled. Mathematical and graphical simulations of the compensation capacity areas of pipeline routes with various configurations are completed. Prerequisites have been created for an automated program that will allow one to determine the values of the compensatory capacity area for pipeline routes and to assign the quantities of necessary allowances.

  4. Automated processing pipeline for neonatal diffusion MRI in the developing Human Connectome Project.

    Science.gov (United States)

    Bastiani, Matteo; Andersson, Jesper L R; Cordero-Grande, Lucilio; Murgasova, Maria; Hutter, Jana; Price, Anthony N; Makropoulos, Antonios; Fitzgibbon, Sean P; Hughes, Emer; Rueckert, Daniel; Victor, Suresh; Rutherford, Mary; Edwards, A David; Smith, Stephen M; Tournier, Jacques-Donald; Hajnal, Joseph V; Jbabdi, Saad; Sotiropoulos, Stamatios N

    2018-05-28

    The developing Human Connectome Project is set to create and make available to the scientific community a 4-dimensional map of functional and structural cerebral connectivity from 20 to 44 weeks post-menstrual age, to allow exploration of the genetic and environmental influences on brain development, and the relation between connectivity and neurocognitive function. A large set of multi-modal MRI data from fetuses and newborn infants is currently being acquired, along with genetic, clinical and developmental information. In this overview, we describe the neonatal diffusion MRI (dMRI) image processing pipeline and the structural connectivity aspect of the project. Neonatal dMRI data poses specific challenges, and standard analysis techniques used for adult data are not directly applicable. We have developed a processing pipeline that deals directly with neonatal-specific issues, such as severe motion and motion-related artefacts, small brain sizes, high brain water content and reduced anisotropy. This pipeline allows automated analysis of in-vivo dMRI data, probes tissue microstructure, reconstructs a number of major white matter tracts, and includes an automated quality control framework that identifies processing issues or inconsistencies. We here describe the pipeline and present an exemplar analysis of data from 140 infants imaged at 38-44 weeks post-menstrual age. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Passivation process of X80 pipeline steel in bicarbonate solutions

    Science.gov (United States)

    Zhou, Jian-Long; Li, Xiao-Gang; Du, Cui-Wei; Pan, Ying; Li, Tao; Liu, Qian

    2011-04-01

    The passivation process of X80 pipeline steel in bicarbonate solutions was investigated using potentiodynamic, dynamic electrochemical impedance spectroscopy (DEIS), and Mott-Schottky measurements. The results show that the shape of the polarization curves changes with HCO3- concentration. The critical 'passive' concentration for X80 pipeline steel in bicarbonate solutions is 0.009 mol/L HCO3-. No anodic current peak exists in HCO3- solutions when the concentration is lower than 0.009 mol/L, whereas there are one and two anodic current peaks when the HCO3- concentration ranges from 0.009 to 0.05 mol/L and is higher than 0.1 mol/L, respectively. DEIS measurements show that there exist an active dissolution range, transition range, pre-passive range, passive layer formation range, passive range, and trans-passive range for X80 pipeline steel in 0.1 mol/L HCO3- solutions. The results of the DEIS measurements are in complete agreement with the potentiodynamic diagram. An equivalent circuit containing three sub-layers is used to explain the Nyquist plots in the passive range. The corresponding fitted capacitance and impedance values are analyzed and explained. The Mott-Schottky plots show that the passive film on X80 pipeline steel is an n-type semiconductor, and the capacitance measurements are in good accordance with the results of the DEIS experiment.

  6. Stochastic Analysis of a Queue Length Model Using a Graphics Processing Unit

    Czech Academy of Sciences Publication Activity Database

    Přikryl, Jan; Kocijan, J.

    2012-01-01

    Roč. 5, č. 2 (2012), s. 55-62 ISSN 1802-971X R&D Projects: GA MŠk(CZ) MEB091015 Institutional support: RVO:67985556 Keywords : graphics processing unit * GPU * Monte Carlo simulation * computer simulation * modeling Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2012/AS/prikryl-stochastic analysis of a queue length model using a graphics processing unit.pdf

  7. Micromagnetic simulations using Graphics Processing Units

    International Nuclear Information System (INIS)

    Lopez-Diaz, L; Aurelio, D; Torres, L; Martinez, E; Hernandez-Lopez, M A; Gomez, J; Alejos, O; Carpentieri, M; Finocchio, G; Consolo, G

    2012-01-01

    The methodology for adapting a standard micromagnetic code to run on graphics processing units (GPUs) and exploit the potential for parallel calculations of this platform is discussed. GPMagnet, a general purpose finite-difference GPU-based micromagnetic tool, is used as an example. Speed-up factors of two orders of magnitude can be achieved with GPMagnet with respect to a serial code. This allows for running extensive simulations, nearly inaccessible with a standard micromagnetic solver, at reasonable computational times. (topical review)

  8. The Pan-STARRS PS1 Image Processing Pipeline

    Science.gov (United States)

    Magnier, E.

    The Pan-STARRS PS1 Image Processing Pipeline (IPP) performs the image processing and data analysis tasks needed to enable the scientific use of the images obtained by the Pan-STARRS PS1 prototype telescope. The primary goals of the IPP are to process the science images from the Pan-STARRS telescopes and make the results available to other systems within Pan-STARRS. It is also responsible for combining all of the science images in a given filter into a single representation of the non-variable component of the night sky, defined as the "Static Sky". To achieve these goals, the IPP also performs other analysis functions to generate the calibrations needed in the science image processing, and to occasionally use the derived data to generate improved astrometric and photometric reference catalogs. It also provides the infrastructure needed to store the incoming data and the resulting data products. The IPP inherits lessons learned, and in some cases code and prototype code, from several other astronomy image analysis systems, including Imcat (Kaiser), the Sloan Digital Sky Survey (REF), the Elixir system (Magnier & Cuillandre), and Vista (Tonry). Imcat and Vista have a large number of robust image processing functions. SDSS has demonstrated a working analysis pipeline and large-scale database system for a dedicated project. The Elixir system has demonstrated an automatic image processing system and an object database system for operational usage. This talk will present an overview of the IPP architecture, functional flow, code development structure, and selected analysis algorithms. Also discussed is the highly parallel HW configuration necessary to support PS1 operational requirements. Finally, results are presented of the processing of images collected during PS1 early commissioning tasks utilizing the Pan-STARRS Test Camera #3.

  9. Pipeline Processing with an Iterative, Context-Based Detection Model

    Science.gov (United States)

    2016-01-22

    wave precursor artifacts. Distortion definitely is reduced with the addition of more channels to the processed data stream (comparing trace 3 to ...). ... limitations of fully automatic hypothesis evaluation with a test case of two events in Central Asia – a deep Hindu Kush earthquake and a shallow earthquake in ... (AFRL-RV-PS-TR-2016-0080, Pipeline Processing with an Iterative, Context-Based Detection Model, T. Kværna, et al.)

  10. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    Science.gov (United States)

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key bio-marker for the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background by using a local Otsu's threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
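
    The threshold-plus-watershed core of such a pipeline can be sketched with scikit-image (a simplified illustration: global Otsu instead of the paper's local Otsu, and the Bayesian-network step is omitted):

```python
# Split touching nuclei: Otsu threshold, distance transform, marker-based
# watershed. The synthetic two-disc image stands in for real microscopy data.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation

def segment_nuclei(image):
    mask = image > filters.threshold_otsu(image)
    distance = ndi.distance_transform_edt(mask)
    markers = measure.label(distance > 0.6 * distance.max())  # one seed per nucleus
    return segmentation.watershed(-distance, markers, mask=mask)

yy, xx = np.mgrid[0:100, 0:100]
blob = (((xx - 40) ** 2 + (yy - 50) ** 2) < 15 ** 2) | \
       (((xx - 66) ** 2 + (yy - 50) ** 2) < 15 ** 2)
img = ndi.gaussian_filter(blob.astype(float), 2.0)  # soften the edges
labels = segment_nuclei(img)
print(f"nuclei found: {labels.max()}")              # expect 2
```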

  11. Development and Applications of Pipeline Steel in Long-Distance Gas Pipeline of China

    Science.gov (United States)

    Chunyong, Huo; Yang, Li; Lingkang, Ji

    In past decades, with the wide use of microalloying and Thermal Mechanical Control Processing (TMCP) technology, a good balance of strength, toughness, plasticity and weldability has been achieved in pipeline steel, and oil and gas pipelines have been greatly developed in China to meet strong domestic energy demand. In this paper, the development history of pipeline steel and gas pipelines in China is briefly reviewed. The microstructure characteristics and mechanical performance of pipeline steels used in some representative Chinese gas pipelines built at different stages are summarized. Through analysis of the evolution of the pipeline service environment, some prospective development trends for the application of pipeline steel in China are also presented.

  12. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.

  13. Leak detection in gas pipeline by acoustic and signal processing - A review

    Science.gov (United States)

    Adnan, N. F.; Ghazali, M. F.; Amin, M. M.; Hamat, A. M. A.

    2015-12-01

    The pipeline system is the most important part of media transport for delivering fluid to another station. Weak maintenance and poor safety will contribute to financial losses in terms of fluid waste and environmental impacts. There are many classifications of techniques, which makes it easier to show their specific methods and applications. A discussion of gas leak detection in pipeline systems using the acoustic method is presented in this paper. Wave propagation in the pipeline is a key parameter in the acoustic method: when a leak occurs, the pressure balance of the pipe is disturbed, and waves are generated by the friction between the leak flow and the pipe wall. Signal processing is used to decompose the raw signal and present it in the time-frequency domain. Findings based on the acoustic method can be used for comparative study in the future. Acoustic signals combined with the Hilbert-Huang transform (HHT) are the best method to detect leaks in gas pipelines. More experiments and simulations need to be carried out to obtain fast detection of leaks and estimation of their location.

  14. Visualisation for Stochastic Process Algebras: The Graphic Truth

    DEFF Research Database (Denmark)

    Smith, Michael James Andrew; Gilmore, Stephen

    2011-01-01

    and stochastic activity networks provide an automaton-based view of the model, which may be easier to visualise, at the expense of portability. In this paper, we argue that we can achieve the benefits of both approaches by generating a graphical view of a stochastic process algebra model, which is synchronised...

  15. iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM

    International Nuclear Information System (INIS)

    Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.

    2011-01-01

    A new graphical user interface to the MOSFLM program has been developed to simplify the processing of macromolecular diffraction data. The interface, iMOSFLM, allows data processing via a series of clearly defined tasks and provides visual feedback on the progress of each stage. iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed

  16. Development of the updated system of city underground pipelines based on Visual Studio

    Science.gov (United States)

    Zhang, Jianxiong; Zhu, Yun; Li, Xiangdong

    2009-10-01

    Our city operates an integrated pipeline network management system with ArcGIS Engine 9.1 as the underlying development platform and Oracle9i as the basic database for storing data. In this system, ArcGIS SDE 9.1 is applied as the spatial data engine, and the system is a comprehensive management application developed with Visual Studio visual development tools. Because the pipeline update function of the system suffered from slow updates and even occasional data loss, and to ensure that the underground pipeline data can be updated conveniently and frequently in real time and remain current and complete, we added a new update module to the system, developed and researched by ourselves. The module provides powerful data update functions, including data input, data output and rapid bulk updates. The new module adopts Visual Studio visual development tools and uses Access as the basic database for storing data. Graphics can be edited in AutoCAD, and the database is updated through a link between the graphics and the system. Practice shows that the update module is compatible with the original system and updates the database reliably and efficiently.

  17. Spectra processing with computer graphics

    International Nuclear Information System (INIS)

    Kruse, H.

    1979-01-01

    A program for processing gamma-ray spectra in rock analysis is described. The peak search is performed by applying a cross-correlation function. The experimental data are approximated by an analytical function represented by the sum of a polynomial and a multiple peak function. The latter is a Gaussian, joined on the low-energy side by an exponential. A modified Gauss-Newton algorithm is applied for the purpose of fitting the data to the function. The processing of the values derived from a lunar sample demonstrates the effect of different choices of polynomial orders for approximating the background over various fitting intervals. Observations on applications of interactive graphics are presented. 3 figures, 1 table
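
    The peak model described can be reproduced approximately with a least-squares fit (a sketch only: scipy's curve_fit stands in for the modified Gauss-Newton algorithm, and the joining point is fixed at one sigma below the centroid for simplicity):

```python
# Gaussian joined on its low-energy side by an exponential, on a linear
# background; fitted to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def peak(x, amp, mu, sigma, beta, b0, b1):
    join = mu - sigma                           # simplified joining point
    gauss = amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    g_join = amp * np.exp(-0.5)                 # Gaussian value at the join
    expo = np.minimum(beta * (x - join), 0.0)   # clamp to avoid overflow
    tail = g_join * np.exp(expo)                # exponential low-energy side
    return np.where(x >= join, gauss, tail) + b0 + b1 * x

x = np.linspace(0, 100, 400)
rng = np.random.default_rng(1)
y = peak(x, 120, 55, 4, 0.8, 5, 0.02) + rng.normal(0, 2, x.size)
popt, _ = curve_fit(peak, x, y, p0=[100, 50, 5, 0.5, 0, 0])
print("fitted amp, mu, sigma, beta, b0, b1:", np.round(popt, 3))
```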

  18. Fully automated rodent brain MR image processing pipeline on a Midas server: from acquired images to region-based statistics.

    Science.gov (United States)

    Budin, Francois; Hoogstoel, Marion; Reynolds, Patrick; Grauer, Michael; O'Leary-Moore, Shonagh K; Oguz, Ipek

    2013-01-01

    Magnetic resonance imaging (MRI) of rodent brains enables study of the development and the integrity of the brain under certain conditions (alcohol, drugs etc.). However, these images are difficult to analyze for biomedical researchers with limited image processing experience. In this paper we present an image processing pipeline running on a Midas server, a web-based data storage system. It is composed of the following steps: rigid registration, skull-stripping, average computation, average parcellation, parcellation propagation to individual subjects, and computation of region-based statistics on each image. The pipeline is easy to configure and requires very little image processing knowledge. We present results obtained by processing a data set using this pipeline and demonstrate how this pipeline can be used to find differences between populations.

  19. The ALFALFA Extragalactic Catalog and Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Haynes, Martha P.; Giovanelli, Riccardo; ALFALFA Team

    2018-06-01

    The Arecibo Legacy Fast ALFA 21cm HI Survey has reached completion. The observations and data are used by team members and the astronomical community in a variety of scientific initiatives with gas-rich galaxies, cluster environments, and studies of low redshift cosmology. The survey covers nearly 7000 square degrees of high galactic latitude sky visible from Arecibo, Puerto Rico and ~4400 hours of observations from 2005 to 2011. We present the extragalactic HI source catalog of over ~31,000 detections, their measured properties, and associated derived parameters. The observations were carefully reduced using a custom made data reduction pipeline and interface. Team members interacted with this pipeline through observation planning, calibration, imaging, source extraction, and cataloging. We describe this processing workflow as it pertains to the complexities of the single-dish multi-feed data reduction as well as known caveats of the source catalog and spectra for use in future astronomical studies and analysis. The ALFALFA team at Cornell has been supported by NSF grants AST-0607007, AST-1107390 and AST-1714828 and by grants from the Brinson Foundation.

  20. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  1. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed; Anciaux-Sedrakian, Ani; Rozanska, Xavier; Klahr, Diego; Guignon, Thomas; Fleurat-Lessard, Paul

    2012-01-01

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  2. 78 FR 32010 - Pipeline Safety: Public Workshop on Integrity Verification Process

    Science.gov (United States)

    2013-05-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Hazardous Materials Safety Administration, DOT. ACTION: Notice of public meeting. SUMMARY: This notice is announcing a public workshop to be held on the concept of ``Integrity Verification Process.'' The Integrity...

  3. The developing human connectome project: A minimal processing pipeline for neonatal cortical surface reconstruction.

    Science.gov (United States)

    Makropoulos, Antonios; Robinson, Emma C; Schuh, Andreas; Wright, Robert; Fitzgibbon, Sean; Bozek, Jelena; Counsell, Serena J; Steinweg, Johannes; Vecchiato, Katy; Passerat-Palmbach, Jonathan; Lenz, Gregor; Mortari, Filippo; Tenev, Tencho; Duff, Eugene P; Bastiani, Matteo; Cordero-Grande, Lucilio; Hughes, Emer; Tusor, Nora; Tournier, Jacques-Donald; Hutter, Jana; Price, Anthony N; Teixeira, Rui Pedro A G; Murgasova, Maria; Victor, Suresh; Kelly, Christopher; Rutherford, Mary A; Smith, Stephen M; Edwards, A David; Hajnal, Joseph V; Jenkinson, Mark; Rueckert, Daniel

    2018-06-01

    The Developing Human Connectome Project (dHCP) seeks to create the first 4-dimensional connectome of early life. Understanding this connectome in detail may provide insights into normal as well as abnormal patterns of brain development. Following established best practices adopted by the WU-MINN Human Connectome Project (HCP), and pioneered by FreeSurfer, the project utilises cortical surface-based processing pipelines. In this paper, we propose a fully automated processing pipeline for the structural Magnetic Resonance Imaging (MRI) of the developing neonatal brain. This proposed pipeline consists of a refined framework for cortical and sub-cortical volume segmentation, cortical surface extraction, and cortical surface inflation, which has been specifically designed to address considerable differences between adult and neonatal brains, as imaged using MRI. Using the proposed pipeline our results demonstrate that images collected from 465 subjects ranging from 28 to 45 weeks post-menstrual age (PMA) can be processed fully automatically; generating cortical surface models that are topologically correct, and correspond well with manual evaluations of tissue boundaries in 85% of cases. Results improve on state-of-the-art neonatal tissue segmentation models and significant errors were found in only 2% of cases, where these corresponded to subjects with high motion. Downstream, these surfaces will enhance comparisons of functional and diffusion MRI datasets, supporting the modelling of emerging patterns of brain connectivity. Copyright © 2018 Elsevier Inc. All rights reserved.

  4. A low-cost system for graphical process monitoring with colour video symbol display units

    International Nuclear Information System (INIS)

    Grauer, H.; Jarsch, V.; Mueller, W.

    1977-01-01

    A system for computer-controlled graphic process supervision, using colour symbol video displays, is described. It has the following characteristics: a compact unit with no external memory for image storage; a problem-oriented, simple descriptive tailoring to the process program; no restriction on the graphical representation of process variables; and computer and display independence through the implementation of colours and parameterized code creation for the display. (WB) [de

  5. Freezing around a pipeline carrying cooled gas in flooded areas

    Energy Technology Data Exchange (ETDEWEB)

    Koval' kov, V P; Krivoshein, B L

    1978-12-01

    The USSR's NIPIESUneftegazstroi mathematically analyzed the problem of ice formation around a subcooled-gas pipeline submerged in water in cold regions and derived charts for determining heat-transfer coefficients and the rate of ice formation for various water and gas temperatures. Because the ice halo that forms around these pipelines necessitates additional anchoring of the line, NIPIESUneftegazstroi sought to quantify the weight required in order to minimize the cost and material needed. The differential heat-transfer equations given can be used to calculate heat-transfer coefficients and the specific heat flux from the water to the ice halo, as well as the radius of the ice halo. Values of the ice-halo radius are plotted graphically as a parabolic function of time (to 15,000 h) for pipeline surface temperatures of 30.2, 27.5, 23, 18.5, and 14°F. An equation indicates the limiting value of the temperature of the transported gas at which icing of an insulated pipeline will not occur.

  6. CCDLAB: A Graphical User Interface FITS Image Data Reducer, Viewer, and Canadian UVIT Data Pipeline

    Science.gov (United States)

    Postma, Joseph E.; Leahy, Denis

    2017-11-01

    CCDLAB was originally developed as a FITS image data reducer and viewer, and development was then continued to provide ground support for the development of the UVIT detector system provided by the Canadian Space Agency to the Indian Space Research Organization’s ASTROSAT satellite and UVIT telescopes. After the launch of ASTROSAT and during UVIT’s first-light and PV phase starting in 2015 December, necessity required the development of a data pipeline to produce scientific images out of the Level 1 format data produced for UVIT by ISRO. Given the previous development of CCDLAB for UVIT ground support, the author provided a pipeline for the new Level 1 format data to be run through CCDLAB with the additional satellite-dependent reduction operations required to produce scientific data. Features of the pipeline are discussed with focus on the relevant data-reduction challenges intrinsic to UVIT data.

  7. Rapid data processing for ultrafast X-ray computed tomography using scalable and modular CUDA based pipelines

    Science.gov (United States)

    Frust, Tobias; Wagner, Michael; Stephan, Jan; Juckeland, Guido; Bieberle, André

    2017-10-01

    Ultrafast X-ray tomography is an advanced imaging technique for the study of dynamic processes based on the principles of electron beam scanning. A typical application of this technique is the study of multiphase flows, that is, flows of mixtures of substances such as gas-liquid flows in pipelines or chemical reactors. At Helmholtz-Zentrum Dresden-Rossendorf (HZDR) a number of such tomography scanners are operated. Currently, there are two main points limiting their application in some fields. First, after each CT scan sequence the data from the radiation detector must be downloaded from the scanner to a data processing machine. Second, the current data processing is comparably time-consuming relative to the CT scan sequence interval. To enable online observations or to use this technique to control actuators in real time, a modular and scalable data processing tool has been developed, consisting of user-definable stages working independently together in a so-called data processing pipeline that keeps up with the CT scanner's maximal frame rate of up to 8 kHz. The newly developed data processing stages are freely programmable and combinable. In order to achieve the highest processing performance, all relevant data processing steps required for a standard slice image reconstruction were individually implemented in separate stages using Graphics Processing Units (GPUs) and NVIDIA's CUDA programming language. Data processing performance tests on different high-end GPUs (Tesla K20c, GeForce GTX 1080, Tesla P100) showed excellent performance. Program Files doi:http://dx.doi.org/10.17632/65sx747rvm.1 Licensing provisions: LGPLv3 Programming language: C++/CUDA Supplementary material: Test data set, used for the performance analysis. Nature of problem: Ultrafast computed tomography is performed with a scan rate of up to 8 kHz. To obtain cross-sectional images from projection data, computer-based image reconstruction algorithms must be applied. The
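
    The stage/queue pattern of such a pipeline can be mimicked in miniature (a Python analogue for illustration only; the actual implementation chains C++/CUDA stages):

```python
# Independent pipeline stages connected by FIFO queues; a None "poison pill"
# shuts the chain down in order.
import queue
import threading

def stage(name, func, inq, outq):
    def work():
        while True:
            item = inq.get()
            if item is None:          # propagate shutdown downstream
                outq.put(None)
                break
            outq.put(func(item))      # process one frame, pass it on
    t = threading.Thread(target=work, name=name, daemon=True)
    t.start()
    return t

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
stage("normalize", lambda x: x / 2.0, q1, q2)     # stand-in processing steps
stage("reconstruct", lambda x: x ** 0.5, q2, q3)

for frame in [4.0, 16.0, 64.0]:                   # detector frames (assumed)
    q1.put(frame)
q1.put(None)

while (out := q3.get()) is not None:
    print(f"slice image value: {out}")
```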

  8. Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods

    Energy Technology Data Exchange (ETDEWEB)

    Volentic, J [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)

    1998-12-31

    Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The length of the long-line pipelines reached 5 191 km; the distribution network was 12 296 km. The international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. The transport and distribution system described represents multibillion investments stored in the ground, which are exposed to environmental influences and to pipeline operational stresses. In spite of all technical and maintenance measures that have to be performed on operating gas pipelines, gradual ageing takes place anyway, expressed in degradation processes both in the steel tube and in the anti-corrosion coating. Within a certain time horizon, consistent and regular application of the methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds by postponing the need for funds for a complete or partial reconstruction or a new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of the regular maintenance of SPP s.p. Bratislava high pressure gas pipelines. (orig.) 6 refs.

  10. PGPB's pipeline integrity management system

    Energy Technology Data Exchange (ETDEWEB)

    Urencio, Claudio; Sanchez, Luis; Moreno, Carlos [PGPB - Pemex Gas y Petroquimica Basica (Mexico)

    2005-07-01

    Pemex Gas has 12,134 km of natural gas transmission pipelines, 1,835 km for LPG and 1,216 km for basic petrochemicals. Most of this infrastructure was built in the 1970s and is reaching 35 years of operating life. To manage the integrity of the three systems, Pemex Gas has a portfolio of technological tools. These tools allow the company to improve decision making, align the budget with its strategic goals, achieve efficient asset utilization, and increase value generation. The integrity management process starts with the risk evaluation of assets, using software called IAP (Integrity Assessment Program). This information is integrated into the SIIA (Assets Identification System). The results of both programs are used to construct the Risk Atlas, which graphically identifies each pipeline segment, with its related risk and the factors that influence its behavior. The Risk Atlas provides information about the consequences to people, the environment and facilities, so that customized plans can be designed to prevent or mitigate emergencies. Finally, a detailed analysis of the resulting information and scenario simulations helps determine the best investment projects to minimize risk across all assets. (author)

  11. Pigging the unpiggable: a total integrated maintenance approach of the Progreso Process Pipelines in Yucatan, Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez Graciano, Luis [PEMEX Refinacion, Mexico, MX (Mexico); Gonzalez, Oscar L. [NDT Systems and Services, Stutensee (Germany)

    2009-07-01

    Pemex Refinacion and NDT Systems and Services executed a Total Integrated Maintenance Program for the Process Pipeline System in the Yucatan Peninsula in Mexico, in order to modernize, enhance and bring the pipeline system up to the best industry standards and ensure the integrity, reliability and safe operation of the system. This approach consisted of using multi-diameter ultrasonic inspection technology to determine the current status of the pipelines, repairing every 'integrity diminishing' feature present in the system, and establishing a certified maintenance program to ensure the future reliability and safety of the pipelines. Due to the complex nature of the pipeline construction, dating from 1984, several special modifications, integrations and solutions were necessary to carry out the in-line inspection survey, as for all traditionally unpiggable systems. The Progreso Pipeline System consists of three major pipelines, which transport diesel, jet fuel and gasoline, respectively. The outside diameter of two of the pipelines varies along their length among 12, 14 and 16 inches, making the inspection survey more difficult and demanding a special inspection tool solution. The system is located on the coast of the Yucatan Peninsula, on the Mexican Caribbean, and its main purpose is to transport product from docked tanker ships to the Pemex Storage and Distribution Terminal. (author)

  12. Visual operations management tools applied to the oil pipelines and terminals standardization process: the experience of TRANSPETRO

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Maria Fatima Ludovico de [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio/ITUC), Rio de Janeiro, RJ (Brazil). Instituto Tecnologico; Santiago, Adilson; Ribeiro, Kassandra Senra; Arruda, Daniela Mendonca [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper describes the process by which visual operations management (VOM) tools were implemented for standards and operational procedures in TRANSPETRO's Oil Pipelines and Terminals Unit. It provides: a brief literature review of visual operations management tools applied to total quality management and standardization processes; a discussion of the assumptions from the second level of VOM (visual standards) upon which TRANSPETRO's oil pipelines and terminals business processes and operational procedures are based; and a description of the VOM implementation process, involving more than 100 employees, with one illustrative example of 'Quick Guides' for right-of-way management activities. Finally, it discusses the potential impacts and benefits of using VOM tools in the current practices of TRANSPETRO's Oil Pipelines and Terminals Unit, reinforcing the importance of such visual guides as vital to implementing regional and corporate procedures focused on the main operational processes. (author)

  13. Development of high productivity pipeline girth welding

    International Nuclear Information System (INIS)

    Yapp, David; Liratzis, Theocharis

    2010-01-01

    The trend of increasing oil and gas consumption implies growth in long-distance pipeline installation. Welding is a critical factor in the installation of pipelines, both onshore and offshore, and the rate at which a pipeline can be laid is generally determined by the speed of welding. This has resulted in substantial developments in pipeline welding techniques. Arc welding is still the dominant process used in practice, and forge welding processes have had limited successful application to date, in spite of large investments in process development. Power beam processes have also been investigated in detail, and the latest laser systems now show promise for practical application. In recent years the use of high strength steels has substantially reduced the cost of pipeline installation, with X70 and X80 being commonly used. The use of high strength pipeline steel produced by thermomechanical processing has also been researched. All welding processes must meet three requirements: high productivity, satisfactory weld properties, and weld quality.

  14. Graphic Arts: Book Three. The Press and Related Processes.

    Science.gov (United States)

    Farajollahi, Karim; And Others

    The third of a three-volume set of instructional materials for a graphic arts course, this manual consists of nine instructional units dealing with presses and related processes. Covered in the units are basic press fundamentals, offset press systems, offset press operating procedures, offset inks and dampening chemistry, preventive maintenance…

  15. The Use of Computer Graphics in the Design Process.

    Science.gov (United States)

    Palazzi, Maria

    This master's thesis examines applications of computer technology to the field of industrial design and ways in which technology can transform the traditional process. Following a statement of the problem, the history and applications of the fields of computer graphics and industrial design are reviewed. The traditional industrial design process…

  16. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased-array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (general-purpose GPU) real-time processing of phased-array radar data is possible with relatively low-cost commercial GPUs.
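
    As a hedged sketch of the kind of cuFFT-based step such a chain contains, the fragment below pulse-compresses a batch of range lines: forward FFT, per-bin multiply by a matched-filter spectrum, inverse FFT. The function and buffer names are illustrative, not taken from the study:

```cpp
#include <cufft.h>
#include <cuda_runtime.h>

// Multiply each range line by the matched-filter spectrum (frequency domain).
__global__ void applyFilter(cufftComplex* x, const cufftComplex* h,
                            int n, int batch) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n * batch) {
        cufftComplex a = x[i], b = h[i % n];
        x[i].x = a.x * b.x - a.y * b.y;   // complex multiply, real part
        x[i].y = a.x * b.y + a.y * b.x;   // complex multiply, imaginary part
    }
}

void pulseCompress(cufftComplex* d_data, const cufftComplex* d_filt,
                   int n, int batch) {
    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, batch);            // batched 1-D FFT plan
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
    int threads = 256, blocks = (n * batch + threads - 1) / threads;
    applyFilter<<<blocks, threads>>>(d_data, d_filt, n, batch);
    cufftExecC2C(plan, d_data, d_data, CUFFT_INVERSE);  // note: unnormalized
    cufftDestroy(plan);
}
```

    Batching the FFTs is what keeps the GPU busy: one plan covers all range lines of a data cube, so the per-call launch overhead is amortized across the batch.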

  17. A Theoretical Analysis of Learning with Graphics--Implications for Computer Graphics Design.

    Science.gov (United States)

    ChanLin, Lih-Juan

    This paper reviews the literature pertinent to learning with graphics. Dual coding theory provides an explanation of how graphics are stored and processed in semantic memory. Levels-of-processing theory suggests how graphics can be employed in learning to encourage deeper processing. In addition to dual coding theory and level of processing…

  18. Pipelines : moving biomass and energy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, A. [Alberta Univ., Edmonton, AB (Canada). Dept. of Mechanical Engineering

    2006-07-01

    Moving biomass and energy through pipelines was presented. Field-sourced biomass utilization for fuel was discussed in terms of competing cost factors, economies of scale, and differing fuel plant sizes. The cost versus scale in a bioenergy facility was illustrated in chart format. The transportation cost of biomass was presented, as it is a major component of total biomass processing cost, typically in the range of 25-45 per cent of total processing costs for truck transport. Issues in large-scale biomass utilization, scale effects in transportation, and the components of transport cost were identified. Other topics related to transportation included approaches to pipeline transport; the cost of wood chips in pipeline transport; and the distance-variable cost of transporting wood chips by pipeline. Practical applications were also offered. In addition, the presentation provided and illustrated a model for an ethanol plant supplied by truck transport, as well as a sample configuration of 19 truck-based ethanol plants versus one large facility supplied by truck plus 18 pipelines. Last, pipeline transport of bio-oil and of syngas were discussed. It was concluded that pipeline transport can help in reducing congestion issues in large-scale biomass utilization and that it can offer a means to achieve large plant size. Some current research at the University of Alberta on pipeline transport of raw biomass, bio-oil and hydrogen production from biomass for oil sands and pipeline transport was also presented. tabs., figs.

  20. Extending the Fermi-LAT Data Processing Pipeline to the Grid

    Science.gov (United States)

    Zimmer, S.; Arrabito, L.; Glanzman, T.; Johnson, T.; Lavalley, C.; Tsaregorodtsev, A.

    2012-12-01

    The Data Handling Pipeline (“Pipeline”) has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT), which launched in June 2008. Since then it has been in use to completely automate the production of data quality monitoring quantities, the reconstruction and routine analysis of all data received from the satellite, and the delivery of science products to the collaboration and the Fermi Science Support Center. Aside from the reconstruction of raw data from the satellite (Level 1), data reprocessing and various event-level analyses are also reasonably heavy loads on the pipeline and computing resources. These other loads, unlike Level 1, can run continuously for weeks or months at a time. In addition, the pipeline receives heavy use in performing production Monte Carlo tasks. In daily use it receives a new data download every 3 hours and launches about 2000 jobs to process each download, typically completing the processing of the data before the next download arrives. The need for manual intervention has been reduced to less than 0.01% of submitted jobs. The Pipeline software is written almost entirely in Java and comprises several modules. It includes web services that allow online monitoring and provide charts summarizing workflow aspects and performance information. The server supports communication with several batch systems, such as LSF and BQS and, more recently, Sun Grid Engine and Condor. This is accomplished through dedicated job control services that for Fermi are running at SLAC and at the other computing site involved in this large-scale framework, the Lyon computing center of IN2P3. Although different in task logic, a separate interface to the DIRAC system is being evaluated in order to communicate with EGI sites and utilize Grid resources, using dedicated Grid-optimized systems rather than developing our own. More recently the Pipeline and its associated data catalog have been generalized for use by other experiments, and are

  1. Measuring Cognitive Load in Test Items: Static Graphics versus Animated Graphics

    Science.gov (United States)

    Dindar, M.; Kabakçi Yurdakul, I.; Inan Dönmez, F.

    2015-01-01

    The majority of multimedia learning studies focus on the use of graphics in the learning process, but very few of them examine the role of graphics in testing students' knowledge. This study investigates the use of static graphics versus animated graphics in a computer-based English achievement test from a cognitive load theory perspective. Three…

  3. PANDA: a pipeline toolbox for analyzing brain diffusion images.

    Science.gov (United States)

    Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang

    2013-01-01

    Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.

  4. Business process modeling applied to oil pipeline and terminal processes: a proposal for TRANSPETRO's oil pipelines and terminals in Rio de Janeiro and Minas Gerais

    Energy Technology Data Exchange (ETDEWEB)

    Santiago, Adilson da Silva [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil); Caulliraux, Heitor Mansur [Universidade Federal do Rio de Janeiro (COPPE/UFRJ/GPI), RJ (Brazil). Coordenacao de Pos-graduacao em Engenharia. Grupo de Producao Integrada; Almeida, Maria Fatima Ludovico de [Pontificia Universidade Catolica do Rio de Janeiro (PUC-Rio/ITUC), Rio de Janeiro, RJ (Brazil). Instituto Tecnologico; Felippe, Adriana Vieira de Oliveira [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Business process modeling (BPM) using event-driven process chain (EPC) diagrams to lay out business process workflows is now widely adopted around the world. The EPC method was developed within the framework of the ARIS Toolset by Prof. August-Wilhelm Scheer at the Institut für Wirtschaftsinformatik of the Universität des Saarlandes in the early 1990s. It is used by many companies to model, analyze and redesign business processes. As such it forms the core modeling technique in ARIS, which serves to link the different aspects of the so-called control view, discussed in the section on ARIS business process modeling. This paper describes a proposal made to TRANSPETRO's Oil Pipelines and Terminals Division in the states of Rio de Janeiro and Minas Gerais, to be jointly developed by specialists and managers from TRANSPETRO and from COPPETEC, the collaborative research arm of Rio de Janeiro Federal University (UFRJ). The proposal is based on ARIS business process modeling and is presented here according to its seven phases: information survey and definition of the project structure; mapping and analysis of Campos Eliseos Terminal (TECAM) processes; validation of TECAM process maps; mapping and analysis of the remaining organizational units' processes; validation of the remaining organizational units' process maps; proposal of a business process model for all organizational units of TRANSPETRO's Oil Pipelines and Terminals Division in Rio de Janeiro and Minas Gerais; and critical analysis of the process itself and of the results and potential benefits of BPM. (author)

  5. Recent developments in pipeline welding practice

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    Fourteen chapters are included: overview of pipeline welding systems and quality assurance, CRC automatic welding system, H.C. Price Co. automatic welding system, semi-automatic MIG-welding process, partial penetration welding of steel pipes for gas distribution, construction procedures and quality control in offshore pipeline construction, welding in repair and maintenance of gas transmission pipelines, British Gas studies of welding on pressurized gas transmission pipelines, hot tapping pipelines, underwater welding for offshore pipelines and associated equipment, radial friction welding, material composition vs weld properties, review of NDT of pipeline welds, and safety assurance in pipeline construction. A bibliography of approximately 150 references is included, arranged according to subject and year.

  6. Pipeline system operability review

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, Kjell [Det Norske Veritas (Norway); Davies, Ray [CC Technologies, Dublin, OH (United States)

    2005-07-01

    Pipeline operators are continuously working to improve the safety of their systems and operations. In the US, both liquid and gas pipeline operators have worked with the regulators over many years to develop more systematic approaches to pipeline integrity management. To successfully manage pipeline integrity, vast amounts of data from different sources need to be collected, overlaid and analyzed in order to assess the current condition and predict future degradation. The efforts undertaken by the operators have had a significant impact on pipeline safety; nevertheless, during recent years we have seen a number of major high-profile accidents. One can therefore ask how effective the pipeline integrity management systems and processes are. This paper presents one methodology, the 'Pipeline System Operability Review', that can evaluate and rate the effectiveness of both the management systems and procedures and the technical condition of the hardware. The results from the review can be used to compare the performance of different pipelines within one operating company, as well as to benchmark against international best practices. (author)

  8. SimVascular 2.0: an Integrated Open Source Pipeline for Image-Based Cardiovascular Modeling and Simulation

    Science.gov (United States)

    Lan, Hongzhi; Merkow, Jameson; Updegrove, Adam; Schiavazzi, Daniele; Wilson, Nathan; Shadden, Shawn; Marsden, Alison

    2015-11-01

    SimVascular (www.simvascular.org) is currently the only fully open source software package that provides a complete pipeline from medical image based modeling to patient-specific blood flow simulation and analysis. It was initially released in 2007 and has contributed to numerous advances in fundamental hemodynamics research, surgical planning, and medical device design. However, early versions had several major barriers preventing wider adoption by new users, large-scale application in clinical and research studies, and educational access. In recent years, SimVascular 2.0 has made significant progress by integrating open source alternatives to the expensive commercial libraries previously required for anatomic modeling, mesh generation and the linear solver. In addition, it has simplified the cross-platform compilation process, improved the graphical user interface and launched a comprehensive documentation website. Many enhancements and new features have been incorporated across the whole pipeline, such as 3-D segmentation, Boolean operations on discrete triangulated surfaces, and multi-scale coupling for closed-loop boundary conditions. In this presentation we briefly overview the modeling/simulation pipeline and the advances of the new SimVascular 2.0.

  9. MIC in long oil pipelines: diagnosis, treatment and monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Jenneman, Gary; Harris, Jennifer; Webb, Robert [ConocoPhillips (Canada)

    2011-07-01

    The paper presents the diagnosis, treatment and monitoring of microbially influenced corrosion (MIC) in long oil pipelines. The presence of inorganic solids, bacteria, gases and organic acids in the produced water in oil pipelines causes MIC, which is hard to detect or test for and does not produce any unique type of corrosion. A chemical analysis of water from pig runs is presented in tabular form, and a graphical analysis of pig sludge solids is shown. From the biometabolite analysis, 23 putative hydrocarbon biometabolites were identified, and biometabolites for the anaerobic biodegradation of aromatic hydrocarbons were also detected. Operational considerations include the history of MIC in upstream pipelines, water slugging, and the presence of suspended solids, among others. From the microbiological, chemical, metallurgical and operational evidence it was suggested that MIC is a likely mechanism. The mitigation program is described, and suggestions for successful mitigation measures include removal of oxygen sources, scale inhibitor injection, and increasing CO2 inhibitor concentration.

  10. Prospects of Frequency-Time Correlation Analysis for Detecting Pipeline Leaks by Acoustic Emission Method

    International Nuclear Information System (INIS)

    Faerman, V A; Cheremnov, A G; Avramchuk, V V; Luneva, E E

    2014-01-01

    In the current work, the relevance of developing nondestructive test methods for pipeline leak detection is considered. It is shown that acoustic emission testing is currently one of the most widespread leak detection methods. The main disadvantage of this method is that it cannot be applied to monitoring long pipeline sections, which in turn complicates and slows down the inspection of the line pipe sections of main pipelines. The prospects of developing alternative techniques and methods based on the spectral analysis of signals are considered, and their possible application to leak detection on the basis of the correlation method is outlined. As an alternative, the calculation of a time-frequency correlation function is proposed. This function represents the correlation between the spectral components of the analyzed signals. In this work, the technique for calculating the time-frequency correlation function is described. Experimental data are presented that demonstrate a clear advantage of the time-frequency correlation function over the simple correlation function: it is more effective in suppressing the noise components in the frequency range of the useful signal, which makes the maximum of the function more pronounced. The main drawback of applying time-frequency correlation analysis to leak detection problems is the great number of calculations, which may further increase pipeline inspection time. However, this drawback can be partially mitigated by the development and implementation of efficient algorithms (including parallel ones) for computing the fast Fourier transform using the computer's central processing unit and graphics processing unit.
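
    One plausible reading of such a time-frequency correlation function, sketched under the assumption that it correlates the short-time spectra of two sensors per frequency bin over a frame lag (a naive DFT is used here only to stay self-contained; the authors point to FFTs on CPU and GPU for the real workload):

```cpp
#include <complex>
#include <vector>
#include <cmath>

using cd = std::complex<double>;

// Naive DFT of one signal frame (an FFT library would be used in practice).
static std::vector<cd> dft(const std::vector<double>& x) {
    const double PI = std::acos(-1.0);
    int n = static_cast<int>(x.size());
    std::vector<cd> X(n);
    for (int k = 0; k < n; ++k)
        for (int t = 0; t < n; ++t)
            X[k] += x[t] * std::polar(1.0, -2.0 * PI * k * t / n);
    return X;
}

// Time-frequency correlation: split both sensor signals into frames,
// take each frame's spectrum, then correlate the two spectra per
// frequency bin k over the frame lag tau. A pronounced maximum of
// R[k][tau] in the band of the useful signal indicates the leak delay.
std::vector<std::vector<double>> tfCorrelation(const std::vector<double>& a,
                                               const std::vector<double>& b,
                                               int frame, int maxLag) {
    int frames = static_cast<int>(a.size()) / frame;
    std::vector<std::vector<cd>> A(frames), B(frames);
    for (int m = 0; m < frames; ++m) {
        A[m] = dft({a.begin() + m * frame, a.begin() + (m + 1) * frame});
        B[m] = dft({b.begin() + m * frame, b.begin() + (m + 1) * frame});
    }
    std::vector<std::vector<double>> R(frame, std::vector<double>(maxLag));
    for (int k = 0; k < frame; ++k)
        for (int tau = 0; tau < maxLag; ++tau) {
            cd acc = 0.0;
            for (int m = 0; m + tau < frames; ++m)
                acc += A[m][k] * std::conj(B[m + tau][k]);
            R[k][tau] = std::abs(acc);
        }
    return R;
}
```

    Keeping the correlation per frequency bin is what suppresses out-of-band noise: bins outside the useful signal's band contribute nothing to the peak, unlike in a plain time-domain cross-correlation.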

  11. Stochastic process corrosion growth models for pipeline reliability

    International Nuclear Information System (INIS)

    Bazán, Felipe Alexander Vargas; Beck, André Teófilo

    2013-01-01

    Highlights: • A novel non-linear stochastic process corrosion growth model is proposed. • Corrosion rate is modeled as random Poisson pulses. • Time to corrosion initiation and inherent time-variability are properly represented. • Continuous corrosion growth histories are obtained. • The model is shown to precisely fit actual corrosion data at two time points. -- Abstract: Linear random variable corrosion models are extensively employed in reliability analysis of pipelines. However, linear models grossly neglect well-known characteristics of the corrosion process. Herein, a non-linear model is proposed, in which the corrosion rate is represented as a Poisson square wave process. The resulting model represents the inherent time-variability of corrosion growth, produces continuous growth and leads to mean growth at a less-than-one power of time. Different corrosion models are adjusted to the same set of actual corrosion data for two inspections. The proposed non-linear random process corrosion growth model leads to the best fit to the data, while better representing the problem physics.
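
    A minimal Monte Carlo sketch of a corrosion rate modeled as a Poisson square wave, with depth as the time integral of the rate; the distributions and parameter values are illustrative placeholders, not those fitted in the paper:

```cpp
#include <algorithm>
#include <random>
#include <cstdio>

// Simulate one corrosion-depth history: the corrosion rate jumps to a
// new random level at exponentially distributed waiting times and is
// held constant in between (a Poisson square wave). Depth grows
// continuously as the integral of the rate.
double simulateDepth(double tEnd, double jumpRate, double meanRate,
                     std::mt19937& rng) {
    std::exponential_distribution<double> wait(jumpRate);        // years between jumps
    std::exponential_distribution<double> level(1.0 / meanRate); // mm/year levels
    double t = 0.0, depth = 0.0, r = level(rng);
    while (t < tEnd) {
        double dt = std::min(wait(rng), tEnd - t);
        depth += r * dt;              // continuous growth at the current rate
        t += dt;
        r = level(rng);               // new rate level after the jump
    }
    return depth;                     // mm of metal loss at tEnd
}

int main() {
    std::mt19937 rng(42);
    double sum = 0.0; int n = 10000;
    for (int i = 0; i < n; ++i) sum += simulateDepth(20.0, 0.5, 0.1, rng);
    std::printf("mean depth after 20 years: %.3f mm\n", sum / n);
}
```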

  12. Pipeline four-dimension management is the trend of pipeline integrity management in the future

    Energy Technology Data Exchange (ETDEWEB)

    Shaohua, Dong; Feifan; Zhongchen, Han [China National Petroleum Corporation (CNPC), Beijing (China)

    2009-07-01

    Pipeline integrity management is essential for today's operators to run their pipelines safely and cost-effectively. The latest developments in pipeline integrity management around the world involve changes in regulation and industry standards and innovation in technology. Where pipeline integrity management (PIM) is heading is the question this paper answers; to that end, the concept of P4DM is introduced. The paper analyzes pipeline HSE management, pipeline integrity management (PIM) and asset integrity management (AIM), identifies the management problem, and puts forward the theory of Pipeline 4-Dimension Management (P4DM). Within the hierarchy of P4DM, the management elements, fields, space and time are analyzed. The main content is that P4DM integrates geographic location and time to control and manage the pipeline system throughout the whole process, anywhere and anytime. It covers pipeline integrity, pipeline operation and emergency management, integrated by an IT system. In this way, the ideas, solutions, technology, organization and managers jointly and intelligently control the management process. The paper covers the definition of pipeline 4D management, the research and development of P4DM, the theory of P4DM, the relationship between P4DM and PIM, the technological basis of P4DM, how to perform P4DM, and conclusions. P4DM provides a development direction for PIM in the future, as well as new ideas for PetroChina in the fields of technology and management. (author)

  13. Tumor image signatures and habitats: a processing pipeline of multimodality metabolic and physiological images.

    Science.gov (United States)

    You, Daekeun; Kim, Michelle M; Aryal, Madhava P; Parmar, Hemant; Piert, Morand; Lawrence, Theodore S; Cao, Yue

    2018-01-01

    To create tumor "habitats" from the "signatures" discovered from multimodality metabolic and physiological images, we developed a framework of a processing pipeline. The processing pipeline consists of six major steps: (1) creating superpixels as a spatial unit in a tumor volume; (2) forming a data matrix [Formula: see text] containing all multimodality image parameters at superpixels; (3) forming and clustering a covariance or correlation matrix [Formula: see text] of the image parameters to discover major image "signatures"; (4) clustering the superpixels and organizing the parameter order of the [Formula: see text] matrix according to the one found in step 3; (5) creating "habitats" in the image space from the superpixels associated with the "signatures"; and (6) pooling and clustering a matrix consisting of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a dataset of multimodality images in glioblastoma (GBM), which consisted of 10 image parameters. Three major image "signatures" were identified. The three major "habitats", plus their overlaps, were created. To test the generalizability of the processing pipeline, a second image dataset from GBM, acquired on scanners different from the first, was processed. Also, to demonstrate the clinical association of image-defined "signatures" and "habitats", the patterns of recurrence of the patients were analyzed together with image parameters acquired before chemoradiation therapy. An association of the recurrence patterns with image-defined "signatures" and "habitats" was revealed. These image-defined "signatures" and "habitats" can be used to guide stereotactic tissue biopsy for genetic and mutation status analysis and to analyze for prediction of treatment outcomes, e.g., patterns of failure.

  14. Enhancement of Hydrodynamic Processes in Oil Pipelines Considering Rheologically Complex High-Viscosity Oils

    Science.gov (United States)

    Konakhina, I. A.; Khusnutdinova, E. M.; Khamidullina, G. R.; Khamidullina, A. F.

    2016-06-01

    This paper describes a mathematical model of flow-related hydrodynamic processes for rheologically complex high-viscosity bitumen oil and oil-water suspensions and presents methods to improve the design and performance of oil pipelines.

  15. Partial wave analysis using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Niklaus; Liu Beijiang; Wang Jike, E-mail: nberger@ihep.ac.c [Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Lu, Shijingshan, 100049 Beijing (China)

    2010-04-01

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples, however, the unbinned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics of up to two orders of magnitude compared to earlier experiments is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs), originally developed for 3D computer games, have an architecture of massively parallel single-instruction, multiple-data floating point units that is almost ideally suited to the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.

  16. The method of predicting the process of condensation of moisture and hydrate formation in the gas pipeline

    OpenAIRE

    Хвостова, Олена Вікторівна

    2014-01-01

    The problem of ensuring the required value of moisture content, one of the natural gas quality indicators, during transportation to the consumer is considered in the paper. A method for predicting possible moisture condensation and hydrate formation processes in gas pipelines, considering the mixing of gas flows with different moisture contents, was developed. Predicting moisture condensation and hydrate formation in gas pipelines is a relevant task, since timely prevention of these processes ...

  17. Gathering pipeline methane emissions in Fayetteville shale pipelines and scoping guidelines for future pipeline measurement campaigns

    Directory of Open Access Journals (Sweden)

    Daniel J. Zimmerle

    2017-11-01

    Gathering pipelines, which transport gas from well pads to downstream processing, are a sector of the natural gas supply chain for which little measured methane emissions data are available. This study performed leak detection and measurement on 96 km of gathering pipeline and the associated 56 pigging facilities and 39 block valves. The study found one underground leak accounting for 83% (4.0 kg CH4/hr) of total measured emissions. Methane emissions for the 4684 km of gathering pipeline in the study area were estimated at 402 kg CH4/hr [95 to 1065 kg CH4/hr, 95% CI], or 1% [0.2% to 2.6%] of all methane emissions measured during a prior aircraft study of the same area. Emissions estimated by this study fall within the uncertainty range of emissions estimated using emission factors from EPA’s 2015 Greenhouse Gas Inventory and study activity estimates. While EPA’s current inventory is based upon emission factors from distribution mains measured in the 1990s, this study indicates that using emission factors from more recent distribution studies could significantly underestimate emissions from gathering pipelines. To guide broader studies of pipeline emissions, we also estimate the fraction of the pipeline length within a basin that must be measured to constrain the uncertainty of pipeline emissions estimates to within 1% of total basin emissions. The study provides both substantial insight into the mix of emission sources and guidance for future gathering pipeline studies, but since measurements were made in a single basin, the results are not sufficiently representative to provide methane emission factors at the regional or national level.

  18. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    Science.gov (United States)

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337

  19. Graphics gems II

    CERN Document Server

    Arvo, James

    1991-01-01

    Graphics Gems II is a collection of articles shared by a diverse group of people, reflecting ideas and approaches in graphics programming that can benefit other computer graphics programmers. This volume presents techniques for doing well-known graphics operations faster or more easily. The book contains chapters devoted to topics on two-dimensional and three-dimensional geometry and algorithms, image processing, frame buffer techniques, and ray tracing techniques. The radiosity approach, matrix techniques, and numerical and programming techniques are likewise discussed. Graphics artists and comput

  20. Estimation of efficiency of hydrotransport pipelines polyurethane coating application in comparison with steel pipelines

    Science.gov (United States)

    Aleksandrov, V. I.; Vasilyeva, M. A.; Pomeranets, I. B.

    2017-10-01

    The paper presents analytical calculations of the specific pressure loss in hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry. The calculations are based on the results of experimental studies of the dependence of specific pressure loss on the hydraulic roughness of pipeline internal surfaces lined with polyurethane coating. The experiments proved that the hydraulic roughness of the polyurethane coating is a factor of four smaller than that of steel pipelines, resulting in a decrease of the hydraulic resistance coefficients entered into the calculating formula for specific pressure loss, the Darcy-Weisbach formula. Relative and equivalent roughness coefficients are calculated for pipelines with and without polyurethane coating. Comparative calculations show that applying a polyurethane coating to hydrotransport pipelines reduces the specific energy consumption of hydraulic transport of the Kachkanarsky GOK iron ore processing tailings slurry by a factor of 1.5. The experiments were performed on a laboratory hydraulic test rig with a view to estimating the character and rate of physical roughness change in pipe samples with polyurethane coating. The experiments showed that over 484 hours of operation, roughness changed inappreciably in all pipe samples. As a result of processing the experimental data by the methods of mathematical statistics, an empirical formula was obtained for calculating the operating roughness of the polyurethane coating surface, depending on the duration of pipeline operation with iron ore processing tailings slurry.
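
    The comparison rests on the Darcy-Weisbach relation, Δp/L = λρv²/(2D), with the friction factor λ tied to the pipe's relative roughness. A sketch using the explicit Swamee-Jain approximation for λ; the slurry properties and roughness values below are illustrative, not the paper's measured data:

```cpp
#include <cmath>
#include <cstdio>

// Swamee-Jain explicit approximation of the Darcy friction factor
// (valid for turbulent flow; avoids iterating Colebrook's equation).
double frictionFactor(double eps, double D, double Re) {
    return 0.25 / std::pow(std::log10(eps / (3.7 * D)
                                      + 5.74 / std::pow(Re, 0.9)), 2);
}

// Specific pressure loss (Pa per metre) from the Darcy-Weisbach formula.
double pressureLossPerMetre(double eps, double D, double v,
                            double rho, double mu) {
    double Re = rho * v * D / mu;               // Reynolds number
    return frictionFactor(eps, D, Re) * rho * v * v / (2.0 * D);
}

int main() {
    double D = 0.5, v = 3.0, rho = 1400.0, mu = 5e-3;  // illustrative slurry values
    double steel = pressureLossPerMetre(200e-6, D, v, rho, mu); // eps ~0.2 mm
    double pu    = pressureLossPerMetre(50e-6,  D, v, rho, mu); // ~4x smaller
    std::printf("steel: %.1f Pa/m, polyurethane: %.1f Pa/m, ratio %.2f\n",
                steel, pu, steel / pu);
}
```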

  1. Studies on the Exergy Transfer Law for the Irreversible Process in the Waxy Crude Oil Pipeline Transportation

    Directory of Open Access Journals (Sweden)

    Qinglin Cheng

    2018-04-01

    With the increasing demand for oil products in China, the energy consumption of pipeline operation will continue to rise greatly, as will the cost of oil transportation. In practical engineering, saving energy, reducing energy consumption and adapting to the international oil situation are development trends that pose difficult problems. Based on the basic principles of non-equilibrium thermodynamics, this paper derives the field equilibrium equations of the non-equilibrium thermodynamic process of pipeline transportation. By seeking the bilinear form of "force" and "flow" in the non-equilibrium thermodynamic entropy generation rate, the oil pipeline exergy balance equation and the dynamic equation of irreversible exergy transfer in the pipeline were established. The exergy balance equation was applied to the energy balance evaluation system, making the system more complete. The exergy flow transfer law of the waxy oil pipeline was explored in depth in terms of dynamic exergy, pressure exergy, thermal exergy and diffusion exergy. Taking an oil pipeline as an example, the influencing factors of the exergy transfer coefficient and the exergy flow density were analyzed separately.
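
    For reference, the bilinear structure invoked here is the standard form of linear non-equilibrium thermodynamics; as a hedged sketch of the underlying relations the paper builds on (not its derived pipeline equations):

```latex
% Entropy production rate as a bilinear form of fluxes J_k and
% conjugate forces X_k, and the exergy destroyed by irreversibility
% (Gouy-Stodola theorem), with T_0 the ambient reference temperature:
\sigma = \sum_k J_k X_k \ge 0, \qquad \dot{E}_d = T_0 \, \dot{S}_{\mathrm{gen}}
```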

  2. Graphics processing unit based computation for NDE applications

    Science.gov (United States)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to reduce the cost of numerical simulation. Breakthroughs in Graphics Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of the 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general-purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. The performance improvement of the GPU implementation over the serial CPU implementation is then discussed.
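
    A minimal sketch of the kind of scheme involved: one explicit finite-difference (FTCS) update of the 2-D heat equation with one thread per grid point. The launch configuration and the stability bound in the comment are generic, not the paper's exact implementation:

```cpp
#include <cuda_runtime.h>
#include <utility>

// One explicit FTCS update of u_t = alpha * (u_xx + u_yy) on an
// nx-by-ny grid; each thread updates one interior point.
// r = alpha * dt / h^2 must satisfy r <= 0.25 for stability in 2-D.
__global__ void heatStep(const float* u, float* uNew,
                         int nx, int ny, float r) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i > 0 && i < nx - 1 && j > 0 && j < ny - 1) {
        int c = j * nx + i;
        uNew[c] = u[c] + r * (u[c - 1] + u[c + 1]
                              + u[c - nx] + u[c + nx] - 4.0f * u[c]);
    }
}

void run(float* d_u, float* d_uNew, int nx, int ny, int steps, float r) {
    dim3 block(16, 16), grid((nx + 15) / 16, (ny + 15) / 16);
    for (int s = 0; s < steps; ++s) {
        heatStep<<<grid, block>>>(d_u, d_uNew, nx, ny, r);
        std::swap(d_u, d_uNew);   // ping-pong buffers between steps
    }
}
```

    Because every grid point depends only on its four neighbors from the previous step, the update is embarrassingly parallel, which is why explicit schemes like this map so well to GPUs.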

  3. Fishing activity near offshore pipelines, 2017

    NARCIS (Netherlands)

    Machiels, Marcel

    2018-01-01

    On the North Sea bottom lie numerous pipelines linking offshore oil or gas drilling units, platforms and processing stations on land. Although the pipeline tubes are coated and covered with protective layers, the pipelines risk being damaged by man-made hazards like anchor dropping and fishing.

  4. Research of processes of heat exchange in horizontal pipeline

    Science.gov (United States)

    Nikolaev, A. K.; Dokoukin, V. P.; Lykov, Y. V.; Fetisov, V. G.

    2018-03-01

    The energy crisis, which is becoming more evident in Russia, stems in many respects from unjustifiably high consumption of energy resources. Development and exploitation of the principal oil and gas deposits, located in remote areas with severe climatic conditions, require considerable investments, essentially increasing the cost of power generation. Account should also be taken of the fact that oil and gas resources are nonrenewable. An alternative fuel for heat and power generation is coal, the reserves of which in Russia are quite substantial. For this reason coal extraction by 2020 will amount to 450-550 million tons. The use of coal as a solid fuel for heat power plants and heating plants is complicated by its transportation from extraction sites to processing and consumption sites. The remoteness of the principal coal mining areas (Kuzbass, the Kansk-Achinsk field, Vorkuta) from the main centers of its consumption in the European part of the country, Siberia and the Far East makes the problem of coal transportation urgent. Of all possible transportation methods (railway, conveyor, pipeline), the most efficient is hydrotransport, which provides continuous transportation at comparatively low capital and operating costs, as confirmed by the construction and operation of extended coal pipelines in many countries.

  5. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
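
    The all-pairs distance computation maps naturally onto one thread per entry of the distance matrix; a minimal sketch, without the shared-memory tiling a tuned version (like the article's final implementation) would add:

```cpp
#include <cuda_runtime.h>
#include <math.h>

// Euclidean distance between every pair of the n instances in X
// (row-major, n x d); thread (i, j) computes one entry of the n x n
// distance matrix D. A tuned version would stage tiles of X in shared
// memory so that global reads are coalesced and reused.
__global__ void allPairsDistance(const float* X, float* D, int n, int d) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && j < n) {
        float s = 0.0f;
        for (int k = 0; k < d; ++k) {
            float diff = X[i * d + k] - X[j * d + k];
            s += diff * diff;
        }
        D[i * n + j] = sqrtf(s);
    }
}

// Typical launch: dim3 block(16, 16);
//                 dim3 grid((n + 15) / 16, (n + 15) / 16);
//                 allPairsDistance<<<grid, block>>>(d_X, d_D, n, d);
```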

  6. TMSEEG: A MATLAB-Based Graphical User Interface for Processing Electrophysiological Signals during Transcranial Magnetic Stimulation.

    Science.gov (United States)

    Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C; Wong, Willy; Daskalakis, Zafiris J; Farzan, Faranak

    2016-01-01

    Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in the adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprising multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and an interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) the capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in the initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline

  8. ANALYSIS ON TECHNOLOGICAL PROCESSES CLEANING OIL PIPELINES

    Directory of Open Access Journals (Sweden)

    Mariana PǍTRAŞCU

    2015-05-01

    This paper presents research concerning the technological processes of cleaning oil pipelines. Several technologies and materials are known for cleaning sludge deposits, iron and manganese oxides, dross, stone, etc. from the inner walls of drinking water or industrial pipes. For the oil industry, methods for removing waste materials from pipes and from liquid and gas transport networks have long been known to be tedious and expensive operations. The main methods and associated problems can be summarized as follows: 1) blowing with compressed air; 2) manual or mechanical brushing, or sanding, wet or dry; 3) washing with a high-pressure water jet, solvent or chemical solution to remove stone and hard deposits; 4) combined methods using cleaning machines with water jets, cutters, chains, rotary cutter heads, etc.

  9. Graphics workflow optimization when editing standard tasks using modern graphics editing programs

    OpenAIRE

    Khabirova, Maja

    2012-01-01

    This work focuses on the description and characteristics of common problems which graphic designers face daily when working for advertising agencies. This work describes tasks and organises them according to the type of graphic being processed and the types of output. In addition, this work describes the ways these common tasks can be completed using modern graphics editing software. It also provides a practical definition of a graphic designer and graphic agency. The aim of this work is to m...

  10. Development of the Write Process for Pipeline-Ready Heavy Oil

    Energy Technology Data Exchange (ETDEWEB)

    Lee Brecher; Charles Mones; Frank Guffey

    2009-03-07

    Work completed under this program advances the goal of demonstrating Western Research Institute's (WRI's) WRITE™ process for upgrading heavy oil at field scale. MEG Energy Corporation (MEG), located in Calgary, Alberta, Canada, supported efforts at WRI to develop the WRITE™ process as an oil sands field-upgrading technology through this Task 51 Jointly Sponsored Research project. The project consisted of six tasks: (1) optimization of the distillate recovery unit (DRU), (2) demonstration and design of a continuous coker, (3) conceptual design and cost estimate for a commercial facility, (4) design of a WRITE™ pilot plant, (5) hydrotreating studies, and (6) establishing a petroleum analysis laboratory. WRITE™ is a heavy oil and bitumen upgrading process that produces residuum-free, pipeline-ready oil from heavy material whose undiluted density and viscosity exceed prevailing pipeline specifications. WRITE™ uses two processing stages to achieve low- and high-temperature conversion of heavy oil or bitumen. The first-stage DRU operates at mild thermal cracking conditions, yielding a light overhead product and a heavy residuum or bottoms material. These bottoms flow to the second-stage continuous coker, which operates at severe pyrolysis conditions, yielding light pyrolyzate and coke. The combined pyrolyzate and mildly cracked overhead streams form WRITE™'s synthetic crude oil (SCO) production. The main objectives of this project were to (1) complete testing and analysis at bench scale with the DRU and continuous coker reactors and provide results to MEG for process evaluation and scale-up determinations and (2) complete a technical and economic assessment of WRITE™ technology to determine its viability. The DRU test program was completed and a processing envelope developed. These results were used for process assessment and for scale-up. Tests in the continuous coker were intended to

  11. High-Throughput Tabular Data Processor - Platform independent graphical tool for processing large data sets.

    Science.gov (United States)

    Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz

    2018-01-01

    High-throughput technologies generate considerable amounts of data that often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility in terms of input file handling provides long-term functionality in high-throughput analysis pipelines, as the program is not limited by currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp).

  12. A study on an autonomous pipeline maintenance robot, 5

    International Nuclear Information System (INIS)

    Fukuda, Toshio; Hosokai, Hidemi; Otsuka, Masashi.

    1989-01-01

    Path planning is very important for a pipeline maintenance robot because pipelines carry many obstacles, such as flanges and T-joints, and because they are connected into very complicated networks. Furthermore, the previously reported maintenance robot Mark III can transit from one pipe to another, which the path planner must take into account. This paper describes an expert system dedicated to path planning, named PPES (Path Planning Expert System). A human operator only needs to give the system a task; the system automatically replies with the optimal path, based on the calculation of task levels, together with a list of control commands. The task level is the criterion used to select the optimal path. It combines the difference in potential energy, the static joint torques, the velocity of the robot, and the number of gripper or body movement steps the robot requires. The system also provides graphic illustrations, so that the operator can easily check and understand the plant map and the result of the path planning. (author)
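
    The abstract does not give the exact weighting of the task-level criterion, so the following Python sketch is only an illustration of the idea under assumed weights: each candidate path is scored as a weighted sum of the quantities listed above, and the path with the lowest task level is selected. The weights, cost terms and candidate paths are all hypothetical.

    # Hypothetical "task level" criterion: a weighted sum of the cost terms
    # named in the abstract. All weights and values are assumptions.
    WEIGHTS = {"potential_energy": 1.0, "joint_torque": 0.5,
               "velocity": 0.2, "steps": 0.1}

    def task_level(path):
        """Score one candidate path (lower is better)."""
        return sum(WEIGHTS[term] * value for term, value in path["costs"].items())

    candidate_paths = [
        {"name": "via flange A", "costs": {"potential_energy": 3.0,
         "joint_torque": 2.0, "velocity": 1.0, "steps": 12}},
        {"name": "via T-joint B", "costs": {"potential_energy": 1.5,
         "joint_torque": 4.0, "velocity": 1.2, "steps": 9}},
    ]

    best = min(candidate_paths, key=task_level)
    print(best["name"], task_level(best))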

  13. Slurry pipeline design approach

    Energy Technology Data Exchange (ETDEWEB)

    Betinol, Roy; Navarro R, Luis [Brass Chile S.A., Santiago (Chile)

    2009-12-19

    Compared to other engineering technologies, the design of commercial long-distance slurry pipelines is a relatively new engineering discipline, which gained recognition in the mid-1960s. Slurry pipelines were first introduced to reduce the cost of transporting coal to power generating units. Since then the technology has caught on worldwide for transporting other minerals such as limestone, copper, zinc and iron. In South America, pipelines are commonly used to transport copper (Chile, Peru and Argentina), iron (Chile and Brazil), zinc (Peru) and bauxite (Brazil). As more mining operations expand and new mine facilities open, long-distance slurry pipelines will continue to present a commercially viable option. The intent of this paper is to present the design process and discuss new techniques and approaches used today to ensure better, safer and more economical slurry pipelines. (author)

  14. Graphics Processing Units for HEP trigger systems

    International Nuclear Information System (INIS)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.

    2016-01-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We discuss the use of online parallel computing on GPUs for synchronous low-level triggers, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  15. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We discuss the use of online parallel computing on GPUs for synchronous low-level triggers, focusing on the CERN NA62 experiment trigger system. The use of GPUs in higher-level trigger systems is also briefly considered.

  16. A Pipeline for Large Data Processing Using Regular Sampling for Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Adhinarayanan, Vignesh [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Turton, Terece [Univ. of Texas, Austin, TX (United States); Feng, Wu [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Rogers, David Honegger [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-12

    Large simulation data requires substantial time and computational resources to compute, store, analyze, visualize, and evaluate in user studies. Today, the largest cost of a supercomputer is not hardware but maintenance, in particular energy consumption. Our goal is to balance energy consumption against the cognitive value of visualizations of the resulting data. This requires us to go through the entire processing pipeline, from simulation to user studies. To reduce resource demands, data can be sampled or compressed. While this adds computation time, the overhead is negligible compared to the simulation time. We built a processing pipeline around the example of regular sampling. The reasons for this choice are two-fold: a simple example reduces unnecessary complexity, since we know what to expect from the results, and it provides a good baseline for future, more elaborate sampling methods. We measured time and energy for each test, and we conducted user studies on Amazon Mechanical Turk (AMT) for a range of results produced through sampling.
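
    As a rough illustration of the idea (not the authors' implementation), the Python sketch below regularly subsamples a 2D field by a fixed stride and reports the reconstruction error; the stride and the field are hypothetical.

    import numpy as np

    # Hypothetical 2D scalar field standing in for large simulation output.
    x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 512),
                       np.linspace(0, 4 * np.pi, 512))
    field = np.sin(x) * np.cos(y)

    stride = 8                        # regular sampling: keep every 8th point
    sampled = field[::stride, ::stride]

    # Nearest-neighbour reconstruction back to the full grid for comparison.
    recon = np.repeat(np.repeat(sampled, stride, axis=0), stride, axis=1)
    err = np.abs(recon - field).mean()
    print(f"kept {sampled.size / field.size:.1%} of the data, "
          f"mean abs. error {err:.3f}")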

  17. A Robust Bayesian Approach to an Optimal Replacement Policy for Gas Pipelines

    Directory of Open Access Journals (Sweden)

    José Pablo Arias-Nicolás

    2015-06-01

    Full Text Available In this paper, we address Bayesian sensitivity issues when integrating experts' judgments with available historical data in a case study of strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions made in previous papers about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions, and among them we propose the least sensitive action as the optimal choice for ranking different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold above which replacement is needed. Graphical tools are introduced to help decision-makers determine whether pipelines are to be replaced and the corresponding priorities.
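
    The paper's robust Bayesian analysis is considerably more involved, but a minimal conjugate sketch conveys the ranking idea: with a Gamma prior on each pipeline class's failure rate and Poisson failure counts, the posterior means rank the classes and can be compared against an acceptability threshold. All priors, counts and the threshold in this Python sketch are hypothetical.

    # Minimal Gamma-Poisson sketch (not the paper's robust analysis):
    # posterior mean rate per class = (a + failures) / (b + exposure).
    classes = {
        # name: (prior_a, prior_b, observed_failures, km_years_of_exposure)
        "cast iron, corrosive soil": (1.0, 1.0, 14, 120.0),
        "cast iron, benign soil":    (1.0, 1.0,  5, 200.0),
    }
    threshold = 0.05  # acceptable failures per km-year (assumed)

    for name, (a, b, k, t) in classes.items():
        post_mean = (a + k) / (b + t)
        flag = "replace" if post_mean > threshold else "keep"
        print(f"{name}: posterior rate {post_mean:.3f} -> {flag}")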

  18. Best practices for the abandonment of pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Mackean, M; Reed, R; Snow, B [Nabors Canada, Calgary, AB (Canada). Abandonrite Service

    2006-07-01

    Pipeline regulations implemented in 2006 require that licensees register all pipelines. Training must also be provided for ground disturbance supervisors. In addition, signage must be maintained on abandoned pipelines, and discontinued pipelines must be properly isolated. Corrosion control and internal inhibition are required for discontinued lines. However, pipelines are often neglected during the well abandonment process. This presentation provided recommendations for coordinating well and pipeline abandonment processes. Pipeline ends can be located, depressurized, flushed and purged while wells are being abandoned. Contaminated soils around the wells can also be identified prior to reclamation activities. Administrative reviews must be conducted in order to provide accurate information on pipeline location, reclamation certification, and line-break history. Field operation files must be reviewed before preliminary field work is conducted. Site inspections should be used to determine whether all ends of the line are accessible. Landowners and occupants near the line must also be notified, and relevant documentation must be obtained. Skilled technicians must be used to assess the lines for obstructions as well as to cut and cap the lines after removing risers. The presentation also examined issues related to pressure change, movement, cold tapping, and live dead legs. tabs., figs.

  19. Mathematics of shape description a morphological approach to image processing and computer graphics

    CERN Document Server

    Ghosh, Pijush K

    2009-01-01

    Image processing problems are often not well defined because real images are contaminated with noise and other uncertain factors. In Mathematics of Shape Description, the authors take a mathematical approach to address these problems, using a morphological and set-theoretic approach to image processing and computer graphics and presenting a simple shape model based on two basic shape operators, Minkowski addition and decomposition. This book is ideal for professional researchers and engineers in information processing, image measurement, shape description, shape representation and computer graphics. Postgraduate and advanced undergraduate students in pure and applied mathematics, computer science, robotics and engineering will also benefit from this book. Key features: explains the fundamental and advanced relationships between algebraic systems and shape description through the set-theoretic approach; promotes interaction of image processing, geometry and mathematics in the field of algebraic geometry; p...
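
    For readers unfamiliar with the operator, the following Python sketch computes the Minkowski addition of two small discrete point sets (the discrete analogue of the book's shape operator); the sets themselves are hypothetical.

    # Minkowski addition of two discrete point sets A and B:
    # A (+) B = { a + b : a in A, b in B }.
    A = {(0, 0), (1, 0), (0, 1)}          # a small triangular shape
    B = {(0, 0), (1, 0), (0, 1), (1, 1)}  # a unit-square structuring element

    minkowski_sum = {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}
    print(sorted(minkowski_sum))
    # Dilating A by B in mathematical morphology is exactly this operation.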

  20. Pipeline engineering

    CERN Document Server

    Liu, Henry

    2003-01-01

    PART I: PIPE FLOWS. INTRODUCTION: Definition and Scope; Brief History of Pipelines; Existing Major Pipelines; Importance of Pipelines; Freight (Solids) Transport by Pipelines; Types of Pipelines; Components of Pipelines; Advantages of Pipelines; References. SINGLE-PHASE INCOMPRESSIBLE NEWTONIAN FLUID: Introduction; Flow Regimes; Local Mean Velocity and Its Distribution (Velocity Profile); Flow Equations for One-Dimensional Analysis; Hydraulic and Energy Grade Lines; Cavitation in Pipeline Systems; Pipe in Series and Parallel; Interconnected Reservoirs; Pipe Network; Unsteady Flow in Pipe. SINGLE-PHASE COMPRESSIBLE FLOW IN PIPE: Flow Ana...

  1. Location of leaks in pressurized underground pipelines

    International Nuclear Information System (INIS)

    Eckert, E.G.; Maresca, J.W. Jr.

    1993-01-01

    Millions of underground storage tanks (USTs) are used to store petroleum and other chemicals. The pressurized underground pipelines associated with USTs containing petroleum motor fuels are typically 2 in. in diameter and 50 to 200 ft in length. These pipelines typically operate at pressures of 20 to 30 psi. Longer lines, with diameters up to 4 in., are found in some high-volume facilities. There are many systems that can be used to detect leaks in pressurized underground pipelines. When a leak is detected, the first step in the remediation process is to find its location. Passive-acoustic measurements, combined with advanced signal-processing techniques, provide a nondestructive method of leak location that is accurate, relatively simple, and applicable to a wide variety of pipelines and pipeline products.
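
    The abstract does not detail the signal processing, but a standard passive-acoustic approach cross-correlates the leak noise recorded by two sensors bracketing the leak and converts the peak time delay into a position. The Python sketch below illustrates that idea on synthetic data; the sensor spacing, propagation speed and signals are all assumptions.

    import numpy as np

    fs = 10_000.0        # sample rate, Hz (assumed)
    L = 60.0             # sensor separation, m (assumed)
    c = 1200.0           # acoustic propagation speed in the pipe, m/s (assumed)
    leak_pos = 22.0      # true leak position from sensor 1, m (to be recovered)

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(4_000)           # broadband leak noise
    d1 = int(round(leak_pos / c * fs))           # travel delay to sensor 1
    d2 = int(round((L - leak_pos) / c * fs))     # travel delay to sensor 2
    s1 = np.roll(noise, d1) + 0.1 * rng.standard_normal(noise.size)
    s2 = np.roll(noise, d2) + 0.1 * rng.standard_normal(noise.size)

    # Cross-correlate; the peak lag gives the differential arrival time.
    xcorr = np.correlate(s1, s2, mode="full")
    lag = np.argmax(xcorr) - (noise.size - 1)    # in samples
    dt = lag / fs                                # dt = (x - (L - x)) / c
    estimate = (L + c * dt) / 2.0
    print(f"estimated leak position: {estimate:.1f} m (true {leak_pos} m)")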

  2. SUPRA: open-source software-defined ultrasound processing for real-time applications: A 2D and 3D pipeline from beamforming to B-mode.

    Science.gov (United States)

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to a clinical system and, as backed by point-spread-function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
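
    SUPRA's beamformer is far more complete, but the core delay-and-sum idea can be sketched in a few lines of Python: for each image point, the element signals are sampled at their round-trip delays and summed coherently. The aperture geometry, sampling rate and channel data below are all assumptions.

    import numpy as np

    fs = 40e6                 # sample rate, Hz (assumed)
    c = 1540.0                # speed of sound in tissue, m/s
    elements_x = np.linspace(-0.01, 0.01, 64)   # 64-element aperture, m
    point = np.array([0.0, 0.03])               # image point (x, z), m

    n_samples = 4096
    rng = np.random.default_rng(1)
    rf = rng.standard_normal((elements_x.size, n_samples))  # fake RF data

    # Round-trip delay: plane-wave transmit to the point plus return path.
    tx_dist = point[1]
    rx_dist = np.hypot(elements_x - point[0], point[1])
    delays = (tx_dist + rx_dist) / c                    # seconds per element
    idx = np.clip((delays * fs).astype(int), 0, n_samples - 1)

    # Delay-and-sum: pick each channel at its delay and sum coherently.
    value = rf[np.arange(elements_x.size), idx].sum()
    print(f"beamformed sample at (0, 30 mm): {value:.3f}")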

  3. Slurry pipeline technology: an overview

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, Jay P. [Pipeline Systems Incorporated (PSI), Belo Horizonte, MG (Brazil); Lima, Rafael; Pinto, Daniel; Vidal, Alisson [Ausenco do Brasil Engenharia Ltda., Nova Lima, MG (Brazil). PSI Div.

    2009-12-19

    Slurry pipelines represent an economical and environmentally friendly transportation means for many solid materials. This paper provides an overview of the technology, its evolution and current Brazilian activity. Mineral resources are increasingly moving farther away from ports, processing plants and end use points, and slurry pipelines are an important mode of solids transport. Application guidelines are discussed. State-of-the-art technical solutions that have made the technology successful, such as pipeline system simulation, pipe materials, pumps, valves, automation, telecommunications, and construction techniques, are presented. A discussion of where long-distance slurry pipelines fit in a picture that also includes pipelining of thickened and paste materials is included. (author)

  4. Knowledge Pipeline: A Task Oriented Way to Implement Knowledge Management

    International Nuclear Information System (INIS)

    Pan Jiajie

    2014-01-01

    The concept of the knowledge pipeline: an organization maintains many pipelines, each named after a task or business process. Knowledge contributors put knowledge into the corresponding pipelines, and a maintenance team keeps the knowledge in the pipelines clear and valid. Users can then draw knowledge for their tasks or business processes as easily as opening a faucet.

  5. Impact of memory bottleneck on the performance of graphics processing units

    Science.gov (United States)

    Son, Dong Oh; Choi, Hong Jun; Kim, Jong Myon; Kim, Cheol Hong

    2015-12-01

    Recent graphics processing units (GPUs) can process general-purpose applications as well as graphics applications with the help of various user-friendly application programming interfaces (APIs) supported by GPU vendors. Unfortunately, utilizing the hardware resources of the GPU efficiently is a challenging problem, since the GPU architecture is totally different from the traditional CPU architecture. To solve this problem, many studies have focused on techniques for improving system performance using GPUs. In this work, we analyze GPU performance while varying GPU parameters such as the number of cores and the clock frequency. According to our simulations, GPU performance can be improved by 125.8% and 16.2% on average as the number of cores and the clock frequency increase, respectively. However, performance saturates when memory bottlenecks occur due to huge volumes of data requests to the memory. The performance of GPUs can be improved further as the memory bottleneck is reduced by changing GPU parameters dynamically.

  6. LiGRO: a graphical user interface for protein-ligand molecular dynamics.

    Science.gov (United States)

    Kagami, Luciano Porto; das Neves, Gustavo Machado; da Silva, Alan Wilter Sousa; Caceres, Rafael Andrade; Kawano, Daniel Fábio; Eifler-Lima, Vera Lucia

    2017-10-04

    To speed up the drug-discovery process, molecular dynamics (MD) calculations performed in GROMACS can be coupled to docking simulations for the post-screening analysis of large compound libraries. This requires generating the topology of the ligands in different software, some basic knowledge of Linux command lines, and a certain familiarity in handling the output files. LiGRO, the Python-based graphical interface introduced here, was designed to overcome these protein-ligand parameterization challenges by allowing the graphical (not command-line-based) control of GROMACS (MD and analysis), ACPYPE (ligand topology builder) and PLIP (protein-binder interactions monitor), programs that can be used together to fully perform and analyze the outputs of complex MD simulations (including energy minimization and NVT/NPT equilibration). By allowing the calculation of linear interaction energies in a simple and quick fashion, LiGRO can be used in the drug-discovery pipeline to select compounds with a better protein-binding interaction profile. The design of LiGRO allows researchers to freely download and modify the software, with the source code being available under the terms of a GPLv3 license from http://www.ufrgs.br/lasomfarmacia/ligro/ .
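
    In its standard form, the linear interaction energy (LIE) estimate that LiGRO reports combines ensemble-averaged ligand-environment interaction energies from the bound and free simulations; the Python sketch below shows that formula with commonly used empirical coefficients, which are assumptions here (the abstract does not state which values LiGRO adopts).

    # Standard LIE binding free-energy estimate (coefficients assumed):
    # dG ~= alpha * d<V_vdw> + beta * d<V_elec> + gamma
    ALPHA, BETA, GAMMA = 0.18, 0.5, 0.0   # common empirical values

    def lie_binding_energy(vdw_bound, elec_bound, vdw_free, elec_free):
        """Inputs are MD ensemble averages of ligand-environment energies."""
        return (ALPHA * (vdw_bound - vdw_free)
                + BETA * (elec_bound - elec_free) + GAMMA)

    # Hypothetical ensemble averages (kJ/mol) from bound and free simulations.
    print(f"dG_bind ~ {lie_binding_energy(-110.0, -45.0, -80.0, -30.0):.1f} kJ/mol")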

  7. Graphic filter library implemented in CUDA language

    OpenAIRE

    Peroutková, Hedvika

    2009-01-01

    This thesis deals with reducing the computation time of raster image processing by parallel computing on a graphics processing unit. Raster image processing here refers to the application of graphic filters, which can be applied in sequence with different settings. The thesis evaluates the suitability of parallelization on a graphics card for raster image adjustments, based on a multi-criteria comparison. The filters are implemented for the graphics processing unit in the CUDA language. Opacity ...

  8. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for endoscopy should be small and able to produce a good-quality image or video, to reduce patient discomfort and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and the limitations of the human vision system, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented in a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. Control and data transfer are done by a USB 3.0 endpoint in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6 LX150 FPGA.
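
    As a loose software analogue of such a pipeline (the paper's implementation is in FPGA logic), the Python sketch below chains three of the named stages, noise reduction, demosaicking and gamma correction, over a synthetic sensor frame; the filter choices and constants are assumptions.

    import numpy as np

    def denoise(img):
        """3x3 box blur as a stand-in for a real noise-reduction filter."""
        out = img.copy()
        h, w = img.shape
        out[1:-1, 1:-1] = sum(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        return out

    def demosaic_luma(bayer):
        """Naive 2x2 block average of the Bayer mosaic (a real demosaicker
        interpolates full RGB per pixel)."""
        return (bayer[0::2, 0::2] + bayer[0::2, 1::2] +
                bayer[1::2, 0::2] + bayer[1::2, 1::2]) / 4.0

    def gamma_correct(img, gamma=2.2):
        return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

    # Synthetic 64x64 sensor frame in [0, 1] standing in for camera output.
    rng = np.random.default_rng(2)
    frame = np.clip(rng.normal(0.4, 0.1, (64, 64)), 0.0, 1.0)
    out = gamma_correct(demosaic_luma(denoise(frame)))
    print(out.shape, float(out.mean()))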

  9. Graphics gems

    CERN Document Server

    Heckbert, Paul S

    1994-01-01

    Graphics Gems IV contains practical techniques for 2D and 3D modeling, animation, rendering, and image processing. The book presents articles on polygons and polyhedra; a mix of formulas, optimized algorithms, and tutorial information on the geometry of 2D, 3D, and n-D space; transformations; and parametric curves and surfaces. The text also includes articles on ray tracing; shading 3D models; and frame buffer techniques. Articles on image processing; algorithms for graphical layout; basic interpolation methods; and subroutine libraries for vector and matrix algebra are also demonstrated. Com

  10. Neutron backscatter application in the investigation of Pipeline Intelligent Gauge (PIG) tracking in the RAYMINTEX matrix pipeline

    International Nuclear Information System (INIS)

    Mohd Fakarudin Badul Rahman; Ismail Mustapha; Nor Paiza Mohd Hasan; Pairu Ibrahim; Airwan Affandi Mahmood; Mior Ahmad Khusaini Adnan; Najib Mohammed Zakey

    2012-01-01

    Radiation Vulcanized Natural Rubber Latex (RVNRL) process plants such as RAYMINTEX use pipelines extensively to transfer the latex product from the storage vessel through irradiation, producing a high-quality latex. A hydraulically activated Pipeline Intelligent Gauge (PIG) was held back against the latex flow, and the resulting stuck PIG threatened to interrupt plant operation. An investigation was carried out using a neutron backscatter scanner to track the stuck PIG in the pipeline of the RVNRL plant. A 50 mCi americium-beryllium (241AmBe) fast-neutron source emitting in the 0.5-11 MeV range was used, and backscattered thermal neutrons in the 30 eV-0.5 MeV range were detected using a helium-3 (3He) detector. An unambiguous contrast is observed between vapour and RVNRL as a consequence of their different hydrogen concentrations in the pipeline. The neutron backscatter technique was thus capable of determining the location of the stuck PIG in an RVNRL pipeline. (author)

  11. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    OpenAIRE

    Anzt, H.; Dongarra, J.; Heuveline, Vincent; Tomov, S.

    2011-01-01

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution. Analyzing the r...
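
    For orientation, the synchronous baseline these methods relax is ordinary Jacobi iteration; the Python sketch below implements it alongside a block variant in which each block update reads whatever values are currently available, loosely mimicking asynchronous progress. The test matrix and iteration counts are hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100
    A = rng.standard_normal((n, n))
    A += np.diag(np.abs(A).sum(axis=1))      # force diagonal dominance
    b = rng.standard_normal(n)

    def jacobi(A, b, iters=200):
        """Synchronous Jacobi: every component uses the previous iterate."""
        D, R = np.diag(A), A - np.diagflat(np.diag(A))
        x = np.zeros_like(b)
        for _ in range(iters):
            x = (b - R @ x) / D
        return x

    def block_relax(A, b, iters=200, nblocks=4):
        """Blocks update in turn, each reading the newest x values
        (a crude stand-in for asynchronous block updates)."""
        D, R = np.diag(A), A - np.diagflat(np.diag(A))
        x = np.zeros_like(b)
        for _ in range(iters):
            for idx in np.array_split(np.arange(len(b)), nblocks):
                x[idx] = (b[idx] - R[idx] @ x) / D[idx]
        return x

    for solver in (jacobi, block_relax):
        print(solver.__name__, np.linalg.norm(A @ solver(A, b) - b))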

  12. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  13. CloVR-Comparative: automated, cloud-enabled comparative microbial genome sequence analysis pipeline.

    Science.gov (United States)

    Agrawal, Sonia; Arze, Cesar; Adkins, Ricky S; Crabtree, Jonathan; Riley, David; Vangala, Mahesh; Galens, Kevin; Fraser, Claire M; Tettelin, Hervé; White, Owen; Angiuoli, Samuel V; Mahurkar, Anup; Fricke, W Florian

    2017-04-27

    The benefit of increasing genomic sequence data to the scientific community depends on easy-to-use, scalable bioinformatics support. CloVR-Comparative combines commonly used bioinformatics tools into an intuitive, automated, and cloud-enabled analysis pipeline for comparative microbial genomics. CloVR-Comparative runs on annotated complete or draft genome sequences that are uploaded by the user or selected via a taxonomic tree-based user interface and downloaded from NCBI. CloVR-Comparative runs reference-free multiple whole-genome alignments to determine unique, shared and core coding sequences (CDSs) and single nucleotide polymorphisms (SNPs). Output includes short summary reports and detailed text-based results files, graphical visualizations (phylogenetic trees, circular figures), and a database file linked to the Sybil comparative genome browser. Data upload and download, pipeline configuration and monitoring, and access to Sybil are managed through the CloVR-Comparative web interface. CloVR-Comparative and Sybil are distributed as part of the CloVR virtual appliance, which runs on local computers or the Amazon EC2 cloud. Representative datasets (e.g. 40 draft and complete Escherichia coli genomes) are processed in genomics projects, while eliminating the need for on-site computational resources and expertise.

  14. DIGITALIZATION CULTURE VS ARCHAEOLOGICAL VISUALIZATION: INTEGRATION OF PIPELINES AND OPEN ISSUES

    Directory of Open Access Journals (Sweden)

    L. Cipriani

    2017-02-01

    Full Text Available Scholars with different backgrounds have carried out extensive surveys of how 3D digital models, data acquisition and processing have changed over the years in the fields of archaeology and architecture, and more generally across the Cultural Heritage panorama; the current framework, focused on reality-based modelling, is split into several branches: acquisition, communication and analysis of buildings (Pintus et al., 2014). Despite the wide set of well-structured and all-encompassing surveys on IT applications in Cultural Heritage, several open issues remain, in particular when the purpose of digital simulacra is to fit the "pre-informatics" legacy of architectural/archaeological representation (historical drawings with their graphic codes and aesthetics). Starting from a series of heterogeneous matters that came up while studying two Italian UNESCO sites, this paper underlines the importance of integrating pipelines from different technological fields in order to achieve multipurpose models capable of complying with the graphic codes of traditional survey, as well as semantic enrichment and, last but not least, data compression/portability and texture reliability under different lighting simulations.

  15. Pipelines in power plants

    International Nuclear Information System (INIS)

    Oude-Hengel, H.H.

    1978-01-01

    Since the end of the 1960s, steam-carrying pipelines have received great attention, as pipeline components often fail, sometimes long before their design lifetime is over. Experts must therefore increasingly deal with questions concerning pipelines and their components. Design and calculation, production, and operation of pipelines are included in the discussion. Within the frame of this discussion, planners, producers, operators, and technical surveillance personnel must be able to offer a homogeneous 'plan for assuring the quality of pipelines' in fossil and nuclear power plants. This book tries to contribute to this topic. 'Quality assurance' means the efforts made to meet the demands on quality (reliability). The book does not intend to compete with well-known manuals as far as complete coverage of the topic is concerned. A substantial part of its sections serves to show how quality assurance of pipelines can be at least partially achieved by surveillance measures beginning with planning, covering production, and finally accompanying operation. There is hardly any need to mention that the manner of planning, production, and operation has an important influence on quality. This is why other sections contain process aspects from the view of planners, producers, and operators. (orig.) [de]

  16. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and from principal component analysis to dimensionality reduction. Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It
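
    One of the tools the book covers, principal component analysis, reduces in practice to a singular value decomposition of centered data; a minimal Python sketch with hypothetical data follows.

    import numpy as np

    # PCA via SVD: project centered data onto its top-k principal directions.
    rng = np.random.default_rng(4)
    X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))  # toy data

    Xc = X - X.mean(axis=0)                 # center each column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = 2
    projected = Xc @ Vt[:k].T               # coordinates in the top-2 subspace
    explained = (S[:k] ** 2).sum() / (S ** 2).sum()
    print(projected.shape, f"variance explained: {explained:.1%}")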

  17. Mechanical properties of bovine cortical bone based on the automated ball indentation technique and graphics processing method.

    Science.gov (United States)

    Zhang, Airong; Zhang, Song; Bian, Cuirong

    2018-02-01

    Cortical bone provides the main form of support in humans and other vertebrates against various forces, so capturing its mechanical properties is important. In this study, the mechanical properties of cortical bone were investigated using automated ball indentation and graphics processing at both the macroscopic and microstructural levels under dry conditions. First, all polished samples were photographed under a metallographic microscope, and the area ratio of circumferential lamellae to osteons was calculated with the graphics processing method. Second, fully computer-controlled automated ball indentation (ABI) tests were performed to explore the micro-mechanical properties of the cortical bone at room temperature and a constant indenter speed. The indentation defects were examined with a scanning electron microscope. Finally, the macroscopic mechanical properties of the cortical bone were estimated with the graphics processing method and the rule of mixtures. Combining ABI and graphics processing proved to be an effective tool for obtaining the mechanical properties of cortical bone, and the indenter size had a significant effect on the measurement. The methods presented in this paper provide an innovative approach to acquiring the macroscopic mechanical properties of cortical bone in a nondestructive manner. Copyright © 2017 Elsevier Ltd. All rights reserved.
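
    The final estimation step pairs the measured area fractions with the rule of mixtures; in its simplest linear form this is an area-weighted average of the constituent properties, as the Python sketch below shows. The fractions and moduli are hypothetical placeholders, not the paper's measurements.

    # Linear rule of mixtures: E_macro = sum_i (area_fraction_i * E_i).
    phases = {
        "circumferential lamellae": {"area_fraction": 0.35, "modulus_gpa": 22.0},
        "osteons":                  {"area_fraction": 0.65, "modulus_gpa": 18.0},
    }

    assert abs(sum(p["area_fraction"] for p in phases.values()) - 1.0) < 1e-9
    e_macro = sum(p["area_fraction"] * p["modulus_gpa"] for p in phases.values())
    print(f"estimated macroscopic modulus: {e_macro:.1f} GPa")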

  18. Pipelines. Economy's veins; Pipelines. Adern der Wirtschaft

    Energy Technology Data Exchange (ETDEWEB)

    Feizlmayr, Adolf; Goestl, Stefan [ILF Beratende Ingenieure, Muenchen (Germany)

    2011-02-15

    According to existing forecasts, more than 1 million km of gas, oil and water pipelines will be built by the year 2030, the predominant portion being gas pipelines. The safe continued use of aging pipelines is a large challenge; in addition, diagnostic technology, evaluation and risk assessment have to be developed further. In the design of new oil and gas pipelines, aspects of environmental protection, the energy efficiency of transport and thus the reduction of carbon dioxide emissions, public acceptance and the market strategy of the exporters gain in importance. Offshore pipelines will soon exceed the present limit of 2,000 m water depth and penetrate into greater sea depths.

  19. Workflows for microarray data processing in the Kepler environment

    Science.gov (United States)

    2012-01-01

    Background Microarray data analysis has been the subject of extensive and ongoing pipeline development due to its complexity, the availability of several options at each analysis step, and the development of new analysis demands, including integration with new data sources. Bioinformatics pipelines are usually custom built for different applications, making them typically difficult to modify, extend and repurpose. Scientific workflow systems are intended to address these issues by providing general-purpose frameworks in which to develop and execute such pipelines. The Kepler workflow environment is a well-established system under continual development that is employed in several areas of scientific research. Kepler provides a flexible graphical interface, featuring clear display of parameter values, for design and modification of workflows. It has capabilities for developing novel computational components in the R, Python, and Java programming languages, all of which are widely used for bioinformatics algorithm development, along with capabilities for invoking external applications and using web services. Results We developed a series of fully functional bioinformatics pipelines addressing common tasks in microarray processing in the Kepler workflow environment. These pipelines consist of a set of tools for GFF file processing of NimbleGen chromatin immunoprecipitation on microarray (ChIP-chip) datasets and more comprehensive workflows for Affymetrix gene expression microarray bioinformatics and basic primer design for PCR experiments, which are often used to validate microarray results. Although functional in themselves, these workflows can be easily customized, extended, or repurposed to match the needs of specific projects and are designed to be a toolkit and starting point for specific applications. These workflows illustrate a workflow programming paradigm focusing on local resources (programs and data) and therefore are close to traditional shell scripting or

  20. Workflows for microarray data processing in the Kepler environment

    Directory of Open Access Journals (Sweden)

    Stropp Thomas

    2012-05-01

    Full Text Available Abstract Background Microarray data analysis has been the subject of extensive and ongoing pipeline development due to its complexity, the availability of several options at each analysis step, and the development of new analysis demands, including integration with new data sources. Bioinformatics pipelines are usually custom built for different applications, making them typically difficult to modify, extend and repurpose. Scientific workflow systems are intended to address these issues by providing general-purpose frameworks in which to develop and execute such pipelines. The Kepler workflow environment is a well-established system under continual development that is employed in several areas of scientific research. Kepler provides a flexible graphical interface, featuring clear display of parameter values, for design and modification of workflows. It has capabilities for developing novel computational components in the R, Python, and Java programming languages, all of which are widely used for bioinformatics algorithm development, along with capabilities for invoking external applications and using web services. Results We developed a series of fully functional bioinformatics pipelines addressing common tasks in microarray processing in the Kepler workflow environment. These pipelines consist of a set of tools for GFF file processing of NimbleGen chromatin immunoprecipitation on microarray (ChIP-chip) datasets and more comprehensive workflows for Affymetrix gene expression microarray bioinformatics and basic primer design for PCR experiments, which are often used to validate microarray results. Although functional in themselves, these workflows can be easily customized, extended, or repurposed to match the needs of specific projects and are designed to be a toolkit and starting point for specific applications. These workflows illustrate a workflow programming paradigm focusing on local resources (programs and data) and therefore are close to

  1. Workflows for microarray data processing in the Kepler environment.

    Science.gov (United States)

    Stropp, Thomas; McPhillips, Timothy; Ludäscher, Bertram; Bieda, Mark

    2012-05-17

    Microarray data analysis has been the subject of extensive and ongoing pipeline development due to its complexity, the availability of several options at each analysis step, and the development of new analysis demands, including integration with new data sources. Bioinformatics pipelines are usually custom built for different applications, making them typically difficult to modify, extend and repurpose. Scientific workflow systems are intended to address these issues by providing general-purpose frameworks in which to develop and execute such pipelines. The Kepler workflow environment is a well-established system under continual development that is employed in several areas of scientific research. Kepler provides a flexible graphical interface, featuring clear display of parameter values, for design and modification of workflows. It has capabilities for developing novel computational components in the R, Python, and Java programming languages, all of which are widely used for bioinformatics algorithm development, along with capabilities for invoking external applications and using web services. We developed a series of fully functional bioinformatics pipelines addressing common tasks in microarray processing in the Kepler workflow environment. These pipelines consist of a set of tools for GFF file processing of NimbleGen chromatin immunoprecipitation on microarray (ChIP-chip) datasets and more comprehensive workflows for Affymetrix gene expression microarray bioinformatics and basic primer design for PCR experiments, which are often used to validate microarray results. Although functional in themselves, these workflows can be easily customized, extended, or repurposed to match the needs of specific projects and are designed to be a toolkit and starting point for specific applications. These workflows illustrate a workflow programming paradigm focusing on local resources (programs and data) and therefore are close to traditional shell scripting or R

  2. Multibeam GPU Transient Pipeline for the Medicina BEST-2 Array

    Science.gov (United States)

    Magro, A.; Hickish, J.; Adami, K. Z.

    2013-09-01

    Radio transient discovery using next generation radio telescopes will pose several digital signal processing and data transfer challenges, requiring specialized high-performance backends. Several accelerator technologies are being considered as prototyping platforms, including Graphics Processing Units (GPUs). In this paper we present a real-time pipeline prototype capable of processing multiple beams concurrently, performing Radio Frequency Interference (RFI) rejection through thresholding, correcting for the delay in signal arrival times across the frequency band using brute-force dedispersion, event detection and clustering, and finally candidate filtering, with the capability of persisting data buffers containing interesting signals to disk. This setup was deployed at the BEST-2 SKA pathfinder in Medicina, Italy, where several benchmarks and test observations of astrophysical transients were conducted. These tests show that on the deployed hardware eight 20 MHz beams can be processed simultaneously for 640 Dispersion Measure (DM) values. Furthermore, the clustering and candidate filtering algorithms employed prove to be good candidates for online event detection techniques. The number of beams which can be processed increases proportionally to the number of servers deployed and number of GPUs, making it a viable architecture for current and future radio telescopes.
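
    The dedispersion step corrects the frequency-dependent arrival delay, which scales roughly as 4.15 ms x DM x nu_GHz^-2 for a dispersion measure DM in pc cm^-3; a brute-force search simply applies the corresponding shift for every trial DM and sums over frequency. The Python sketch below illustrates this with a hypothetical band and random data, not the deployed pipeline's code.

    import numpy as np

    n_chan, n_time = 64, 1024
    freqs = np.linspace(0.39, 0.41, n_chan)      # GHz, a ~20 MHz band (assumed)
    dt = 1e-3                                    # sample time, s (assumed)
    data = np.random.default_rng(5).standard_normal((n_chan, n_time))

    def dedisperse(data, dm):
        """Shift each channel by its dispersion delay and sum over frequency."""
        # Delay relative to the highest frequency, in samples.
        delays = 4.15e-3 * dm * (freqs ** -2 - freqs.max() ** -2) / dt
        out = np.zeros(n_time)
        for ch, d in enumerate(np.round(delays).astype(int)):
            out += np.roll(data[ch], -d)   # np.roll wraps; real code would pad
        return out

    # Search a grid of trial DMs; a real pulse peaks at the matching trial.
    trials = np.arange(0.0, 640.0, 10.0)
    series = np.array([dedisperse(data, dm) for dm in trials])
    print(series.shape)   # (n_trials, n_time) time series to threshold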

  3. The Harvard Automated Processing Pipeline for Electroencephalography (HAPPE): Standardized Processing Software for Developmental and High-Artifact Data.

    Science.gov (United States)

    Gabard-Durnam, Laurel J; Mendez Leal, Adriana S; Wilkinson, Carol L; Levin, April R

    2018-01-01

    Electroencephalography (EEG) recordings collected with developmental populations present particular challenges from a data processing perspective. These EEGs have a high degree of artifact contamination and often short recording lengths. As both sample sizes and EEG channel densities increase, traditional processing approaches like manual data rejection are becoming unsustainable. Moreover, such subjective approaches preclude standardized metrics of data quality, despite the heightened importance of such measures for EEGs with high rates of initial artifact contamination. There is presently a paucity of automated resources for processing these EEG data and no consistent reporting of data quality measures. To address these challenges, we propose the Harvard Automated Processing Pipeline for EEG (HAPPE) as a standardized, automated pipeline compatible with EEG recordings of variable lengths and artifact contamination levels, including high-artifact and short EEG recordings from young children or those with neurodevelopmental disorders. HAPPE processes event-related and resting-state EEG data from raw files through a series of filtering, artifact rejection, and re-referencing steps to processed EEG suitable for time-frequency-domain analyses. HAPPE also includes a post-processing report of data quality metrics to facilitate the evaluation and reporting of data quality in a standardized manner. Here, we describe each processing step in HAPPE, perform an example analysis with EEG files we have made freely available, and show that HAPPE outperforms seven alternative, widely-used processing approaches. HAPPE removes more artifact than all alternative approaches while simultaneously preserving greater or equivalent amounts of EEG signal in almost all instances. We also provide distributions of HAPPE's data quality metrics in an 867-file dataset as a reference distribution and in support of HAPPE's performance across EEG data with variable artifact contamination and
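
    HAPPE's actual pipeline uses considerably more sophisticated steps (e.g. wavelet-enhanced ICA); as a much smaller illustration of the filter-then-reject pattern it follows, the Python sketch below band-pass filters synthetic EEG channels and flags channels whose amplitude is a robust outlier. All thresholds and data are hypothetical.

    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 250.0                                      # sample rate, Hz (assumed)
    rng = np.random.default_rng(6)
    eeg = rng.standard_normal((32, 30 * int(fs)))   # 32 channels, 30 s
    eeg[7] *= 25.0                                  # one artifact-laden channel

    # 1-40 Hz band-pass, a typical developmental-EEG analysis band.
    b, a = butter(4, [1.0 / (fs / 2), 40.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)

    # Reject channels whose robust amplitude deviates wildly from the median.
    amp = np.median(np.abs(filtered), axis=1)
    bad = np.where(amp > 5.0 * np.median(amp))[0]
    print("flagged channels:", bad)                 # expect channel 7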

  4. The Harvard Automated Processing Pipeline for Electroencephalography (HAPPE): Standardized Processing Software for Developmental and High-Artifact Data

    Directory of Open Access Journals (Sweden)

    Laurel J. Gabard-Durnam

    2018-02-01

    Full Text Available Electroencephalography (EEG) recordings collected with developmental populations present particular challenges from a data processing perspective. These EEGs have a high degree of artifact contamination and often short recording lengths. As both sample sizes and EEG channel densities increase, traditional processing approaches like manual data rejection are becoming unsustainable. Moreover, such subjective approaches preclude standardized metrics of data quality, despite the heightened importance of such measures for EEGs with high rates of initial artifact contamination. There is presently a paucity of automated resources for processing these EEG data and no consistent reporting of data quality measures. To address these challenges, we propose the Harvard Automated Processing Pipeline for EEG (HAPPE) as a standardized, automated pipeline compatible with EEG recordings of variable lengths and artifact contamination levels, including high-artifact and short EEG recordings from young children or those with neurodevelopmental disorders. HAPPE processes event-related and resting-state EEG data from raw files through a series of filtering, artifact rejection, and re-referencing steps to processed EEG suitable for time-frequency-domain analyses. HAPPE also includes a post-processing report of data quality metrics to facilitate the evaluation and reporting of data quality in a standardized manner. Here, we describe each processing step in HAPPE, perform an example analysis with EEG files we have made freely available, and show that HAPPE outperforms seven alternative, widely-used processing approaches. HAPPE removes more artifact than all alternative approaches while simultaneously preserving greater or equivalent amounts of EEG signal in almost all instances. We also provide distributions of HAPPE's data quality metrics in an 867-file dataset as a reference distribution and in support of HAPPE's performance across EEG data with variable artifact

  5. Systems Biology Graphical Notation: Process Description language Level 1 Version 1.3.

    Science.gov (United States)

    Moodie, Stuart; Le Novère, Nicolas; Demir, Emek; Mi, Huaiyu; Villéger, Alice

    2015-09-04

    The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Process Description language represents biological entities and processes between these entities within a network. SBGN PD focuses on the mechanistic description and temporal dependencies of biological interactions and transformations. The nodes (elements) are split into entity nodes describing, e.g., metabolites, proteins, genes and complexes, and process nodes describing, e.g., reactions and associations. The edges (connections) provide descriptions of relationships (or influences) between the nodes, such as consumption, production, stimulation and inhibition. Among all three languages of SBGN, PD is the closest to metabolic and regulatory pathways in biological literature and textbooks, but its well-defined semantics offer superior precision in expressing biological knowledge.

  6. 78 FR 70623 - Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory...

    Science.gov (United States)

    2013-11-26

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2009-0203] Pipeline Safety: Meeting of the Gas Pipeline Advisory Committee and the Liquid Pipeline Advisory Committee AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. [[Page...

  7. The VLITE Post-Processing Pipeline

    Science.gov (United States)

    Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.

    2018-01-01

    A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE; http://vlite.nrao.edu/) on the Karl G. Jansky Very Large Array (VLA). In contrast to other radio sky surveys, the commensal observing mode of VLITE results in varying depths, sensitivities, and spatial resolutions across the sky based on the configuration of the VLA, location on the sky, and time on source specified by the primary observer for their independent science objectives. Therefore, previously developed tools and methods for generating source catalogs and survey statistics are not always appropriate for VLITE's diverse and growing set of data. A raw catalog of point sources extracted from every VLITE image will be created from source fit parameters stored in a queryable database. Point sources will be measured using the Python Blob Detector and Source Finder software (PyBDSF; Mohan & Rafferty 2015). Sources in the raw catalog will be associated with previous VLITE detections in a resolution- and sensitivity-dependent manner, and cross-matched to other radio sky surveys to aid in the detection of transient and variable sources. Final data products will include separate, tiered point source catalogs grouped by sensitivity limit and spatial resolution.

  8. Overview of interstate hydrogen pipeline systems

    International Nuclear Information System (INIS)

    Gillette, J.L.; Kolpa, R.L.

    2008-01-01

    The use of hydrogen in the energy sector of the United States is projected to increase significantly in the future. Current uses are predominantly in the petroleum refining sector, with hydrogen also being used in the manufacture of chemicals and other specialized products. Growth in hydrogen consumption is likely to appear in the refining sector, where greater quantities of hydrogen will be required as the quality of the raw crude decreases, and in the mining and processing of tar sands and other energy resources that are not currently used at a significant level. Furthermore, the use of hydrogen as a transportation fuel has been proposed both by automobile manufacturers and the federal government. Assuming that the use of hydrogen will significantly increase in the future, there would be a corresponding need to transport this material. A variety of production technologies are available for making hydrogen, and there are equally varied raw materials. Potential raw materials include natural gas, coal, nuclear fuel, and renewables such as solar, wind, or wave energy. As these raw materials are not uniformly distributed throughout the United States, it would be necessary to transport either the raw materials or the hydrogen long distances to the appropriate markets. While hydrogen may be transported in a number of possible forms, pipelines currently appear to be the most economical means of moving it in large quantities over great distances. One means of controlling hydrogen pipeline costs is to use common rights-of-way (ROWs) whenever feasible. For that reason, information on hydrogen pipelines is the focus of this document. Many of the features of hydrogen pipelines are similar to those of natural gas pipelines. Furthermore, as hydrogen pipeline networks expand, many of the same construction and operating features of natural gas networks would be replicated. As a result, the description of hydrogen pipelines will be very similar to that of natural gas pipelines

  9. Overview of interstate hydrogen pipeline systems.

    Energy Technology Data Exchange (ETDEWEB)

    Gillette, J .L.; Kolpa, R. L

    2008-02-01

    The use of hydrogen in the energy sector of the United States is projected to increase significantly in the future. Current uses are predominantly in the petroleum refining sector, with hydrogen also being used in the manufacture of chemicals and other specialized products. Growth in hydrogen consumption is likely to appear in the refining sector, where greater quantities of hydrogen will be required as the quality of the raw crude decreases, and in the mining and processing of tar sands and other energy resources that are not currently used at a significant level. Furthermore, the use of hydrogen as a transportation fuel has been proposed both by automobile manufacturers and the federal government. Assuming that the use of hydrogen will significantly increase in the future, there would be a corresponding need to transport this material. A variety of production technologies are available for making hydrogen, and there are equally varied raw materials. Potential raw materials include natural gas, coal, nuclear fuel, and renewables such as solar, wind, or wave energy. As these raw materials are not uniformly distributed throughout the United States, it would be necessary to transport either the raw materials or the hydrogen long distances to the appropriate markets. While hydrogen may be transported in a number of possible forms, pipelines currently appear to be the most economical means of moving it in large quantities over great distances. One means of controlling hydrogen pipeline costs is to use common rights-of-way (ROWs) whenever feasible. For that reason, information on hydrogen pipelines is the focus of this document. Many of the features of hydrogen pipelines are similar to those of natural gas pipelines. Furthermore, as hydrogen pipeline networks expand, many of the same construction and operating features of natural gas networks would be replicated. As a result, the description of hydrogen pipelines will be very similar to that of natural gas pipelines

  10. Touch-sensitive graphics terminal applied to process control

    International Nuclear Information System (INIS)

    Bennion, S.I.; Creager, J.D.; VanHouten, R.D.

    1981-01-01

    Limited initial demonstrations of the system described took place during September 1980. A single CRT was used as an input device in the control center while operating a furnace and a pellet inspection gage. These two process line devices were completely controlled, despite the longer than desired response times noted, using a single control station located in the control center. The operator could conveniently execute any function from this remote location which could be performed locally at the hard-wired control panels. With the installation of the enhancements, the integrated touchscreen/graphics terminal will provide a preferable alternative to normal keyboard command input devices.

  11. StreakDet data processing and analysis pipeline for space debris optical observations

    Science.gov (United States)

    Virtanen, Jenni; Flohrer, Tim; Muinonen, Karri; Granvik, Mikael; Torppa, Johanna; Poikonen, Jonne; Lehti, Jussi; Santti, Tero; Komulainen, Tuomo; Naranen, Jyri

    We describe a novel data processing and analysis pipeline for optical observations of space debris. The monitoring of space object populations requires reliable acquisition of observational data, to support the development and validation of space debris environment models, the build-up and maintenance of a catalogue of orbital elements. In addition, data is needed for the assessment of conjunction events and for the support of contingency situations or launches. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparably slowly, and within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a “track before detect” problem, resulting in streaks, i.e., object trails of arbitrary lengths, in the images. The scope of the ESA-funded StreakDet (Streak detection and astrometric reduction) project is to investigate solutions for detecting and reducing streaks from optical images, particularly in the low signal-to-noise ratio (SNR) domain, where algorithms are not readily available yet. For long streaks, the challenge is to extract precise position information and related registered epochs with sufficient precision. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, there is a need to discuss and compare these approaches for space debris analysis, in order to develop and evaluate prototype implementations. In the StreakDet project, we develop algorithms applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations. The proposed processing pipeline starts from the

  12. The graphics future in scientific applications-trends and developments in computer graphics

    CERN Document Server

    Enderle, G

    1982-01-01

    Computer graphics methods and tools are being used to a great extent in scientific research. The future development in this area will be influenced both by new hardware developments and by software advances. In the hardware sector, the development of raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D workstations will appear on the market. One of the main activities in the software sector is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education.

  13. Explicet: graphical user interface software for metadata-driven management, analysis and visualization of microbiome data.

    Science.gov (United States)

    Robertson, Charles E; Harris, J Kirk; Wagner, Brandie D; Granger, David; Browne, Kathy; Tatem, Beth; Feazel, Leah M; Park, Kristin; Pace, Norman R; Frank, Daniel N

    2013-12-01

    Studies of the human microbiome, and microbial community ecology in general, have blossomed of late and are now a burgeoning source of exciting research findings. Along with the advent of next-generation sequencing platforms, which have dramatically increased the scope of microbiome-related projects, several high-performance sequence analysis pipelines (e.g. QIIME, MOTHUR, VAMPS) are now available to investigators for microbiome analysis. The subject of our manuscript, the graphical user interface-based Explicet software package, fills a previously unmet need for a robust, yet intuitive means of integrating the outputs of the software pipelines with user-specified metadata and then visualizing the combined data.

  14. Graphics Gems III IBM version

    CERN Document Server

    Kirk, David

    1994-01-01

    This sequel to Graphics Gems (Academic Press, 1990) and Graphics Gems II (Academic Press, 1991) is a practical collection of computer graphics programming tools and techniques. Graphics Gems III contains a larger percentage of gems related to modeling and rendering, particularly lighting and shading. This new edition also covers image processing, numerical and programming techniques, modeling and transformations, 2D and 3D geometry and algorithms, ray tracing and radiosity, rendering, and more clever new tools and tricks for graphics programming. Volume III also includes a

  15. Image processing and computer graphics in radiology. Pt. A

    International Nuclear Information System (INIS)

    Toennies, K.D.

    1993-01-01

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly practice-oriented and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems and information and communication systems, man-machine interactions and software engineering. Methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de

  16. Image processing and computer graphics in radiology. Pt. B

    International Nuclear Information System (INIS)

    Toennies, K.D.

    1993-01-01

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly practice-oriented and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems and information and communication systems, man-machine interactions and software engineering. Methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG) [de

  17. High-Performance Pseudo-Random Number Generation on Graphics Processing Units

    OpenAIRE

    Nandapalan, Nimalan; Brent, Richard P.; Murray, Lawrence M.; Rendell, Alistair

    2011-01-01

    This work considers the deployment of pseudo-random number generators (PRNGs) on graphics processing units (GPUs), developing an approach based on the xorgens generator to rapidly produce pseudo-random numbers of high statistical quality. The chosen algorithm has configurable state size and period, making it ideal for tuning to the GPU architecture. We present a comparison of both speed and statistical quality with other common parallel, GPU-based PRNGs, demonstrating favourable performance o...
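
    As a rough illustration of the xorshift family on which xorgens builds, the core recurrence fits in a few lines of Python. The shift triple below is Marsaglia's classic 64-bit example rather than the xorgens tuning, and on a GPU each thread would run an independently seeded copy of such a generator:

        import numpy as np

        def xorshift64_step(state):
            # One xorshift step; (13, 7, 17) is a known full-period shift
            # triple for 64-bit xorshift, not the xorgens parameters.
            state ^= (state << 13) & 0xFFFFFFFFFFFFFFFF
            state ^= state >> 7
            state ^= (state << 17) & 0xFFFFFFFFFFFFFFFF
            return state

        def uniforms(seed, n):
            # Map successive 64-bit states to floats in [0, 1).
            state, out = seed, np.empty(n)
            for i in range(n):
                state = xorshift64_step(state)
                out[i] = state / 2**64
            return out

        print(uniforms(88172645463325252, 5))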

  18. MAP3D: a media processor approach for high-end 3D graphics

    Science.gov (United States)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with high performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications in the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.

  19. COMPUTER GRAPHICS IN ENGINEERING GRAPHICS DEPARTMENT OF MOSCOW AVIATION INSTITUTE EDUCATIONAL PROCESS

    OpenAIRE

    Ludmila P. Bobrik; Leonid V. Markin

    2013-01-01

    The current state of engineering training of technical university students and the place of the “Engineering graphics” course at MAI are analyzed in this paper. Problems of the bachelor's degree programme and the experience of creating a graduating specialty based on the «Engineering graphics» department are also considered.

  20. Distributed acoustic sensing for pipeline monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Hill, David; McEwen-King, Magnus [OptaSense, QinetiQ Ltd., London (United Kingdom)

    2009-07-01

    Optical fibre is deployed widely across the oil and gas industry. As well as being deployed regularly to provide high bandwidth telecommunications and infrastructure for SCADA it is increasingly being used to sense pressure, temperature and strain along buried pipelines, on subsea pipelines and downhole. In this paper we present results from the latest sensing capability using standard optical fibre to detect acoustic signals along the entire length of a pipeline. In Distributed Acoustic Sensing (DAS) an optical fibre is used for both sensing and telemetry. In this paper we present results from the OptaSense™ system which has been used to detect third party intervention (TPI) along buried pipelines. In a typical deployment the system is connected to an existing standard single-mode fibre, up to 50km in length, and was used to independently listen to the acoustic / seismic activity at every 10 meter interval. We will show that through the use of advanced array processing of the independent, simultaneously sampled channels it is possible to detect and locate activity within the vicinity of the pipeline and through sophisticated acoustic signal processing to obtain the acoustic signature to classify the type of activity. By combining spare fibre capacity in existing buried fibre optic cables; processing and display techniques commonly found in sonar; and state-of-the-art in fibre-optic distributed acoustic sensing, we will describe the new monitoring capabilities that are available to the pipeline operator. Without the expense of retrofitting sensors to the pipeline, this technology can provide a high performance, rapidly deployable and cost effective method of providing gapless and persistent monitoring of a pipeline. We will show how this approach can be used to detect, classify and locate activity such as; third party interference (including activity indicative of illegal hot tapping); real time tracking of pigs; and leak detection. We will also show how an

  1. Generating pipeline networks for corrosion assessment

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, J. [Cimarron Engineering Ltd., Calgary, AB (Canada)

    2008-07-01

    Production characteristics and gas-fluid compositions of fluids must be known in order to assess pipelines for internal corrosion risk. In this study, a gathering system pipeline network was built in order to determine corrosion risk for gathering system pipelines. Connections were established between feeder and collector lines in order to measure upstream production and the weighted average of the upstream composition of each pipeline in the system. A Norsok M-506 carbon dioxide (CO₂) corrosion rate model was used to calculate corrosion rates. A spreadsheet was then used to tabulate the obtained data. The analysis used straight lines drawn between the 'from' and 'to' legal sub-division (LSD) endpoints in order to represent pipelines on an Alberta township system (ATS) and identify connections between pipelines. Well connections were established based on matching surface hole location and 'from' LSDs. Well production, composition, pressure, and temperature data were sourced and recorded as well attributes. XSL hierarchical computations were used to determine the production and composition properties of the commingled inflows. It was concluded that the corrosion assessment process can identify locations within the pipeline network where potential deadlegs branched off from flowing pipelines. 4 refs., 2 tabs., 2 figs.
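
    The commingling computation described above reduces to a production-weighted average propagated along the 'from'/'to' connections. A minimal sketch, with hypothetical production rates and CO₂ mole fractions standing in for real well and line data:

        # Hypothetical three-line network: feeders A and B flow into C.
        lines = {
            "A": {"to": "C", "prod": 50.0, "co2": 0.02},
            "B": {"to": "C", "prod": 30.0, "co2": 0.05},
            "C": {"to": None, "prod": 0.0, "co2": 0.0},
        }

        def commingled(line_id):
            # Total flow entering a line and its production-weighted CO2 fraction.
            feeders = [k for k, v in lines.items() if v["to"] == line_id]
            prod = lines[line_id]["prod"]
            co2_flow = prod * lines[line_id]["co2"]
            for f in feeders:
                f_prod, f_co2 = commingled(f)
                prod += f_prod
                co2_flow += f_prod * f_co2
            return prod, (co2_flow / prod if prod else 0.0)

        print(commingled("C"))  # -> (80.0, 0.03125)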

  2. Identification of natural images and computer-generated graphics based on statistical and textural features.

    Science.gov (United States)

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that it can achieve an identification accuracy of 97.89% for computer-generated graphics, and an identification accuracy of 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance, compared with some existing methods based only on statistical features or other features. The method has a great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
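
    The classification stage can be sketched with scikit-learn's SVC, which wraps the same LIBSVM library named in the abstract. The 31-dimensional feature vectors below are random stand-ins for the paper's statistical and textural features:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Stand-in data: 200 images x 31 features; label 1 = natural image,
        # label 0 = computer-generated graphic.
        X = rng.normal(size=(200, 31))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))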

  3. COMPUTER GRAPHICS IN ENGINEERING GRAPHICS DEPARTMENT OF MOSCOW AVIATION INSTITUTE EDUCATIONAL PROCESS

    Directory of Open Access Journals (Sweden)

    Ludmila P. Bobrik

    2013-01-01

    The current state of engineering training of technical university students and the place of the “Engineering graphics” course at MAI are analyzed in this paper. Problems of the bachelor's degree programme and the experience of creating a graduating specialty based on the «Engineering graphics» department are also considered.

  4. Energy Level Composite Curves-a new graphical methodology for the integration of energy intensive processes

    International Nuclear Information System (INIS)

    Anantharaman, Rahul; Abbas, Own Syed; Gundersen, Truls

    2006-01-01

    Pinch Analysis, Exergy Analysis and Optimization have all been used independently or in combination for the energy integration of process plants. In order to address the issue of energy integration, taking into account composition and pressure effects, the concept of energy level as proposed by [X. Feng, X.X. Zhu, Combining pinch and exergy analysis for process modifications, Appl. Therm. Eng. 17 (1997) 249] has been modified and expanded in this work. We have developed a strategy for energy integration that uses process simulation tools to define the interaction between the various subsystems in the plant and a graphical technique to help the engineer interpret the results of the simulation with physical insights that point towards exploring possible integration schemes to increase energy efficiency. The proposed graphical representation of energy levels of processes is very similar to the Composite Curves of Pinch Analysis-the interpretation of the Energy Level Composite Curves reduces to the Pinch Analysis case when dealing with heat transfer. Other similarities and differences are detailed in this work. Energy integration of a methanol plant is taken as a case study to test the efficacy of this methodology. Potential integration schemes are identified that would have been difficult to visualize without the help of the new graphical representation

  5. XPIWIT--an XML pipeline wrapper for the Insight Toolkit.

    Science.gov (United States)

    Bartschat, Andreas; Hübner, Eduard; Reischl, Markus; Mikut, Ralf; Stegmaier, Johannes

    2016-01-15

    The Insight Toolkit offers plenty of features for multidimensional image analysis. Current implementations, however, often suffer either from a lack of flexibility due to hard-coded C++ pipelines for a certain task or by slow execution times, e.g. caused by inefficient implementations or multiple read/write operations for separate filter execution. We present an XML-based wrapper application for the Insight Toolkit that combines the performance of a pure C++ implementation with an easy-to-use graphical setup of dynamic image analysis pipelines. Created XML pipelines can be interpreted and executed by XPIWIT in console mode either locally or on large clusters. We successfully applied the software tool for the automated analysis of terabyte-scale, time-resolved 3D image data of zebrafish embryos. XPIWIT is implemented in C++ using the Insight Toolkit and the Qt SDK. It has been successfully compiled and tested under Windows and Unix-based systems. Software and documentation are distributed under Apache 2.0 license and are publicly available for download at https://bitbucket.org/jstegmaier/xpiwit/downloads/. johannes.stegmaier@kit.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
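
    The idea of an XML-described image analysis pipeline can be illustrated with a toy interpreter; the schema and filter names below are invented for illustration and are not XPIWIT's actual format:

        import xml.etree.ElementTree as ET
        import numpy as np
        from scipy import ndimage

        # Hypothetical pipeline description -- not XPIWIT's real schema.
        xml = """<pipeline>
          <filter name="gaussian" sigma="2.0"/>
          <filter name="threshold" level="0.5"/>
        </pipeline>"""

        FILTERS = {
            "gaussian": lambda img, p: ndimage.gaussian_filter(img, float(p["sigma"])),
            "threshold": lambda img, p: (img > float(p["level"])).astype(float),
        }

        img = np.random.default_rng(7).random((64, 64))
        for step in ET.fromstring(xml):
            img = FILTERS[step.get("name")](img, step.attrib)  # chain the filters
        print(img.mean())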

  6. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
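
    The two steps offloaded to the GPU are dense linear algebra, which is why they map well to CUDA. A CPU-side numpy sketch of the same chain, with a common-average-reference spatial filter and a periodogram standing in for the paper's autoregressive spectral estimator:

        import numpy as np

        fs, n_ch, n_samp = 1000, 64, 250            # 250 ms of data at 1 kHz
        rng = np.random.default_rng(1)
        X = rng.normal(size=(n_ch, n_samp))         # channels x samples
        W = np.eye(n_ch) - 1.0 / n_ch               # common-average-reference filter

        Y = W @ X                                   # spatial filter: one GEMM
        spec = np.abs(np.fft.rfft(Y, axis=1)) ** 2  # per-channel power spectrum
        freqs = np.fft.rfftfreq(n_samp, 1 / fs)
        mu = spec[:, (freqs >= 8) & (freqs <= 12)].mean(axis=1)
        print(mu.shape)                             # one mu-band feature per channel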

  7. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.edu [Facultad de Ingenieria, Instituto de Matematica y Estadistica 'Prof. Ing. Rafael Laguardia', Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)

  8. SraTailor: graphical user interface software for processing and visualizing ChIP-seq data.

    Science.gov (United States)

    Oki, Shinya; Maehara, Kazumitsu; Ohkawa, Yasuyuki; Meno, Chikara

    2014-12-01

    Raw data from ChIP-seq (chromatin immunoprecipitation combined with massively parallel DNA sequencing) experiments are deposited in public databases as SRAs (Sequence Read Archives) that are publicly available to all researchers. However, to graphically visualize ChIP-seq data of interest, the corresponding SRAs must be downloaded and converted into BigWig format, a process that involves complicated command-line processing. This task requires users to possess skill with script languages and sequence data processing, a requirement that prevents a wide range of biologists from exploiting SRAs. To address these challenges, we developed SraTailor, a GUI (Graphical User Interface) software package that automatically converts an SRA into a BigWig-formatted file. Simplicity of use is one of the most notable features of SraTailor: entering an accession number of an SRA and clicking the mouse are the only steps required to obtain BigWig-formatted files and to graphically visualize the extents of reads at given loci. SraTailor is also able to make peak calls, generate files of other formats, process users' own data, and accept various command-line-like options. Therefore, this software makes ChIP-seq data fully exploitable by a wide range of biologists. SraTailor is freely available at http://www.devbio.med.kyushu-u.ac.jp/sra_tailor/, and runs on both Mac and Windows machines. © 2014 The Authors Genes to Cells © 2014 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
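
    The command-line chain that SraTailor hides is roughly the following, sketched with Python's subprocess. The tools are the standard SRA Toolkit, Bowtie 2, SAMtools, BEDTools and UCSC utilities; the accession, index name and chrom.sizes file are placeholders, and the exact steps SraTailor runs internally may differ:

        import subprocess

        acc = "SRR000001"  # placeholder accession
        steps = [
            f"fastq-dump {acc}",                                 # SRA -> FASTQ
            f"bowtie2 -x genome -U {acc}.fastq -S {acc}.sam",    # align ('genome' = prebuilt index)
            f"samtools sort -o {acc}.bam {acc}.sam",             # SAM -> sorted BAM
            f"bedtools genomecov -ibam {acc}.bam -bg > {acc}.bedgraph",  # coverage track
            f"bedGraphToBigWig {acc}.bedgraph chrom.sizes {acc}.bw",     # -> BigWig
        ]
        for cmd in steps:
            subprocess.run(cmd, shell=True, check=True)  # stop on the first failure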

  9. Smart Sound Processing for Defect Sizing in Pipelines Using EMAT Actuator Based Multi-Frequency Lamb Waves

    Directory of Open Access Journals (Sweden)

    Joaquín García-Gómez

    2018-03-01

    Pipeline inspection is a topic of particular interest to companies. Especially important is defect sizing, which allows them to avoid subsequent costly repairs in their equipment. A solution for this issue is using ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is the absence of the need to have direct contact with the surface of the material under investigation, which must be a conductive one. Specifically interesting is the meander-line-coil based Lamb wave generation, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, the variety of defect sizes changes the behavior of the signal when it passes through the pipeline. Because of that, it is necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimations of the depth of the defect, and to select the features that better estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE).

  10. Smart Sound Processing for Defect Sizing in Pipelines Using EMAT Actuator Based Multi-Frequency Lamb Waves.

    Science.gov (United States)

    García-Gómez, Joaquín; Gil-Pita, Roberto; Rosa-Zurera, Manuel; Romero-Camacho, Antonio; Jiménez-Garrido, Jesús Antonio; García-Benavides, Víctor

    2018-03-07

    Pipeline inspection is a topic of particular interest to companies. Especially important is defect sizing, which allows them to avoid subsequent costly repairs in their equipment. A solution for this issue is using ultrasonic waves sensed through Electro-Magnetic Acoustic Transducer (EMAT) actuators. The main advantage of this technology is the absence of the need to have direct contact with the surface of the material under investigation, which must be a conductive one. Specifically interesting is the meander-line-coil based Lamb wave generation, since the directivity of the waves allows a study based on the circumferential wrap-around received signal. However, the variety of defect sizes changes the behavior of the signal when it passes through the pipeline. Because of that, it is necessary to apply advanced techniques based on Smart Sound Processing (SSP). These methods involve extracting useful information from the signals sensed with EMAT at different frequencies to obtain nonlinear estimations of the depth of the defect, and to select the features that better estimate the profile of the pipeline. The proposed technique has been tested using both simulated and real signals in steel pipelines, obtaining good results in terms of Root Mean Square Error (RMSE).
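
    A hedged sketch of the estimation stage described in the two records above: given per-frequency features extracted from the wrap-around signals, a small nonlinear regressor predicts defect depth and RMSE scores the fit. All values here are synthetic placeholders, not the paper's features or model:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(2)
        # Synthetic stand-ins: 8 features (e.g. amplitudes/delays at several
        # excitation frequencies); target = depth as a fraction of wall thickness.
        X = rng.uniform(size=(300, 8))
        depth = 0.6 * X[:, 0] + 0.3 * X[:, 4] ** 2 + 0.05 * rng.normal(size=300)

        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X[:200], depth[:200])
        pred = model.predict(X[200:])
        print("RMSE:", np.sqrt(mean_squared_error(depth[200:], pred)))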

  11. Effort problem of chemical pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Okrajni, J.; Ciesla, M.; Mutwil, K. [Silesian Technical University, Katowice (Poland)

    1998-12-31

    The problem of assessing the technical state of chemical pipelines working under mechanical and thermal loading is presented in the paper. The effort of the pipelines after a long operating period has been analysed. Material, geometrical and loading conditions of the crack initiation and crack growth process in the chosen object have been discussed. Areas of maximal effort have been determined. The changes in material structure after the long operating period have been described. Mechanisms of crack initiation and crack growth in the pipeline elements have been analysed and the mutual relations between chemical and mechanical influences have been shown. (orig.) 16 refs.

  12. Structural reliability analysis applied to pipeline risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, M. [GL Industrial Services, Loughborough (United Kingdom); Mendes, Renato F.; Donato, Guilherme V.P. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Quantitative Risk Assessment (QRA) of pipelines requires two main components to be provided. These are models of the consequences that follow from some loss of containment incident, and models for the likelihood of such incidents occurring. This paper describes how PETROBRAS have used Structural Reliability Analysis for the second of these, to provide pipeline- and location-specific predictions of failure frequency for a number of pipeline assets. This paper presents an approach to estimating failure rates for liquid and gas pipelines, using Structural Reliability Analysis (SRA) to analyze the credible basic mechanisms of failure such as corrosion and mechanical damage. SRA is a probabilistic limit state method: for a given failure mechanism it quantifies the uncertainty in parameters to mathematical models of the load-resistance state of a structure and then evaluates the probability of load exceeding resistance. SRA can be used to benefit the pipeline risk management process by optimizing in-line inspection schedules, and as part of the design process for new construction in pipeline rights of way that already contain multiple lines. A case study is presented to show how the SRA approach has recently been used on PETROBRAS pipelines and the benefits obtained from it. (author)
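
    At its core, the limit-state evaluation in SRA is the probability that load exceeds resistance under parameter uncertainty. A minimal Monte Carlo sketch with purely illustrative distributions (a real assessment derives them from corrosion growth models and inspection data):

        import numpy as np

        rng = np.random.default_rng(3)
        n = 1_000_000
        resistance = rng.normal(loc=10.0, scale=1.0, size=n)  # illustrative units
        load = rng.normal(loc=6.0, scale=1.5, size=n)

        pof = np.mean(load > resistance)  # probability that load exceeds resistance
        print(f"estimated probability of failure: {pof:.2e}")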

  13. Beyond the pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Barnsley, J.; Ellis, D.; McIntosh, J.

    1979-12-01

    A study was conducted on the lives of women and their families in Fort Nelson, British Columbia, and Whitehorse, Yukon Territory, two communities which are to be affected by the proposed construction of the Alaska Highway gas pipeline. The women's socio-economic concerns resulting from the proposed construction were examined by means of interviews with samples of women living in the two communities. Results from the study include descriptions of the communities and their basic services, community planning and housing, women's work in the home and for wages, and the perceived impact of the pipeline on such matters as employment, social services, living costs, business, housing, crime, and the overall community. Recommendations are made to improve the planning process for the pipeline to take into account women's needs in such areas as training, health care, housing, and community services. 213 refs., 4 figs., 2 tabs.

  14. 76 FR 53086 - Pipeline Safety: Safety of Gas Transmission Pipelines

    Science.gov (United States)

    2011-08-25

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket No. PHMSA-2011-0023] RIN 2137-AE72 Pipeline Safety: Safety of Gas Transmission Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), Department of Transportation (DOT...

  15. Hierarchical data structures for graphics program languages

    International Nuclear Information System (INIS)

    Gonauser, M.; Schinner, P.; Weiss, J.

    1978-01-01

    Graphic data processing with a computer makes exacting demands on the interactive capability of the program language and the management of the graphic data. A description of the structure of a graphics program language which has been shown by initial practical experiments to possess a particularly favorable interactive capability is followed by the evaluation of various data structures (list, tree, ring) with respect to their interactive capability in processing graphics. A practical structure is proposed. (orig.) [de

  16. Determination of surgical variables for a brain shift correction pipeline using an Android application

    Science.gov (United States)

    Vijayan, Rohan; Conley, Rebekah H.; Thompson, Reid C.; Clements, Logan W.; Miga, Michael I.

    2016-03-01

    Brain shift describes the deformation that the brain undergoes from mechanical and physiological effects typically during a neurosurgical or neurointerventional procedure. With respect to image guidance techniques, brain shift has been shown to compromise the fidelity of these approaches. In recent work, a computational pipeline has been developed to predict "brain shift" based on preoperatively determined surgical variables (such as head orientation), and subsequently correct preoperative images to more closely match the intraoperative state of the brain. However, a clinical workflow difficulty in the execution of this pipeline has been acquiring the surgical variables by the neurosurgeon prior to surgery. In order to simplify and expedite this process, an Android, Java-based application designed for tablets was developed to provide the neurosurgeon with the ability to orient 3D computer graphic models of the patient's head, determine the expected location and size of the craniotomy, and provide the trajectory into the tumor. These variables are exported for use as inputs for the biomechanical models of the preoperative computing phase for the brain shift correction pipeline. The accuracy of the application's exported data was determined by comparing it to data acquired from the physical execution of the surgeon's plan on a phantom head. Results indicated good overlap of craniotomy predictions, craniotomy centroid locations, and estimates of the patient's head orientation with respect to gravity. However, improvements in the app interface and mock surgical setup are needed to minimize error.

  17. The JCSG high-throughput structural biology pipeline

    International Nuclear Information System (INIS)

    Elsliger, Marc-André; Deacon, Ashley M.; Godzik, Adam; Lesley, Scott A.; Wooley, John; Wüthrich, Kurt; Wilson, Ian A.

    2010-01-01

    The Joint Center for Structural Genomics high-throughput structural biology pipeline has delivered more than 1000 structures to the community over the past ten years. The JCSG has made a significant contribution to the overall goal of the NIH Protein Structure Initiative (PSI) of expanding structural coverage of the protein universe, as well as making substantial inroads into structural coverage of an entire organism. Targets are processed through an extensive combination of bioinformatics and biophysical analyses to efficiently characterize and optimize each target prior to selection for structure determination. The pipeline uses parallel processing methods at almost every step in the process and can adapt to a wide range of protein targets from bacterial to human. The construction, expansion and optimization of the JCSG gene-to-structure pipeline over the years have resulted in many technological and methodological advances and developments. The vast number of targets and the enormous amounts of associated data processed through the multiple stages of the experimental pipeline required the development of a variety of valuable resources that, wherever feasible, have been converted to free-access web-based tools and applications

  18. THREE-DIMENSIONAL MODELING TOOLS IN THE PROCESS OF FORMATION OF GRAPHIC COMPETENCE OF THE FUTURE BACHELOR OF COMPUTER SCIENCE

    Directory of Open Access Journals (Sweden)

    Kateryna P. Osadcha

    2017-12-01

    The article is devoted to some aspects of the formation of the future bachelor's graphic competence in computer sciences while teaching the fundamentals of working with three-dimensional modelling tools. An analysis, classification and systematization of three-dimensional modelling tools are given. The aim of the research is to investigate and classify the available three-dimensional modelling tools and to correlate the skills being formed with those demanded at the labour market, in order to use them further in the process of forming graphic competence while training future bachelors in computer sciences. The peculiarities of this process at the present stage of development of information technologies are outlined by revealing, analyzing and systematizing three-dimensional modelling tools and the types of three-dimensional graphics. The result of the research is a choice of three-dimensional modelling software for training future bachelors in computer sciences.

  19. Storyboard dalam Pembuatan Motion Graphic

    Directory of Open Access Journals (Sweden)

    Satrya Mahardhika

    2013-10-01

    Motion graphics is a category of animation that builds animations from many design elements in each component. Motion graphics requires a long process, including preproduction, production, and postproduction. Preproduction has an important role, providing guidance and instructions for the production and animation processes that follow. Preproduction includes research, making the story, script, screenplay, characters, environment design and storyboards. The storyboard is determined through camera angles, blocking, sets, and the many supporting roles involved in a scene. The storyboard is also useful as a production reference for recording or taping each scene in sequence or by priority. The example used is the creation of an advertisement using motion graphic animation, in which the storyboard plays an important role as a blueprint for every scene, giving instructions for transition movement, layout, blocking, and camera movement, all of which should be planned step by step in animation production. Planning before making the animation or motion graphic makes the job more organized, presentable, and more efficient.

  20. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphics Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the features of different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
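
    For reference, the per-pixel operation that both the CPU and CUDA versions compute is a 3 x 3 Laplacian convolution combined with the original image. A compact numpy/scipy sketch, vectorized on the CPU where the GPU version would assign one thread per pixel:

        import numpy as np
        from scipy.ndimage import convolve

        laplacian = np.array([[0,  1, 0],
                              [1, -4, 1],
                              [0,  1, 0]], dtype=float)

        def sharpen(img, strength=1.0):
            # Subtracting the Laplacian response boosts edges.
            edges = convolve(img.astype(float), laplacian, mode="nearest")
            return np.clip(img - strength * edges, 0, 255)

        img = np.random.default_rng(4).integers(0, 256, size=(512, 512))
        print(sharpen(img).shape)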

  1. 76 FR 70953 - Pipeline Safety: Safety of Gas Transmission Pipelines

    Science.gov (United States)

    2011-11-16

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket ID PHMSA-2011-0023] RIN 2137-AE72 Pipeline Safety: Safety of Gas Transmission Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Advance notice of...

  2. Storyboard dalam Pembuatan Motion Graphic

    OpenAIRE

    Satrya Mahardhika; A.F. Choiril Anam Fathoni

    2013-01-01

    Motion graphics is a category of animation that builds animations from many design elements in each component. Motion graphics requires a long process, including preproduction, production, and postproduction. Preproduction has an important role, providing guidance and instructions for the production and animation processes that follow. Preproduction includes research, making the story, script, screenplay, characters, environment design and storyboards. The storyboard will ...

  3. Supertracker: A Programmable Parallel Pipeline Arithmetic Processor For Auto-Cueing Target Processing

    Science.gov (United States)

    Mack, Harold; Reddi, S. S.

    1980-04-01

    Supertracker represents a programmable parallel pipeline computer architecture that has been designed to meet the real time image processing requirements of auto-cueing target data processing. The prototype breadboard currently under development is being designed to perform input video preprocessing and processing for 525-line and 875-line TV-format FLIR video, automatic display gain and contrast control, and automatic target cueing, classification, and tracking. The video preprocessor is capable of performing operations on full frames of video data in real time, e.g., frame integration, storage, 3 x 3 convolution, and neighborhood processing. The processor architecture is being implemented using bit-slice microprogrammable arithmetic processors, operating in parallel. Each processor is capable of up to 20 million operations per second. Multiple frame memories are used for additional flexibility.

  4. Alaska-Canada Pipeline Project : getting it done

    Energy Technology Data Exchange (ETDEWEB)

    Brintnell, R. [Enbridge Pipelines Inc., Calgary, AB (Canada)

    2005-07-01

    Enbridge's unique qualifications for the proposed Alaska-Canada pipeline that will extend from Prudhoe Bay, Alaska to Fort Saskatchewan, Alberta were discussed. Enbridge is Canada's largest local distribution company (LDC), handling approximately 14 bcf of natural gas per day through pipeline, processing and marketing. It also operates the world's longest liquids pipeline, delivering more than 2 million barrels per day. The company also has 20 years of operational experience in permafrost regions. The key challenges facing the construction of the proposed new high pressure liquids rich pipeline were discussed with reference to market outlook; cost reduction; U.S. fiscal and regulatory issues; Alaska fiscal contract; and, Canadian regulatory efficiency. A successful project will mean a $15 billion capital expenditure in Canada, $16 billion in government revenues, 12,000 construction work years, and tens of thousands of new jobs. It will also improve Alberta's position as the key energy hub and will increase the utilization of the existing infrastructure. Canadian consumers will benefit from access to a new supply basin and a more secure source of clean-burning natural gas at a cost competitive price. In order to get the project completed, the following requirements must be met: regulatory regimes must be clear and predictable; land access must be ensured in a timely manner; access to skilled human resources, material and equipment must also be ensured to facilitate timely and efficient project implementation; and, the safe and environmentally sound operation of the pipelines must also be ensured. This paper highlighted Canadian regulatory options in terms of the National Energy Board Act, Canadian Environmental Assessment Act, the Yukon Environmental and Socio-Economic Assessment Act, and the Northern Pipeline Act. Enbridge's proposed straddle plant at Fort Saskatchewan was discussed along with inter-connecting pipeline options. Enbridge

  5. Initial Assessment of Parallelization of Monte Carlo Calculation using Graphics Processing Units

    International Nuclear Information System (INIS)

    Choi, Sung Hoon; Joo, Han Gyu

    2009-01-01

    Monte Carlo (MC) simulation is an effective tool for calculating neutron transport in complex geometry. However, because Monte Carlo simulates each neutron's behavior one by one, it takes a very long computing time if enough neutrons are used for high precision. Accordingly, methods that reduce the computing time are required. A Monte Carlo code is well suited to parallel calculation, since it simulates the behavior of each neutron independently and thus parallel computation is natural. The parallelization of Monte Carlo codes, however, has traditionally been done using multiple CPUs. Driven by the global demand for high-quality 3D graphics, the Graphics Processing Unit (GPU) has developed into a highly parallel, multi-core processor. This parallel processing capability of GPUs can be made available to engineering computing once a suitable interface is provided. Recently, NVIDIA introduced CUDA™, a general-purpose parallel computing architecture. CUDA is a software environment that allows developers to program the GPU using C/C++ or other languages. In this work, a GPU-based Monte Carlo code is developed and an initial assessment of its parallel performance is presented
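
    The independence that makes Monte Carlo attractive for GPUs is visible even in a toy problem. Below, every history's first free flight is sampled at once, with plain numpy standing in for one-thread-per-history CUDA code and an illustrative one-region attenuation problem:

        import numpy as np

        rng = np.random.default_rng(5)
        n_hist = 10_000_000
        sigma_t = 0.5    # illustrative total cross-section, 1/cm
        slab = 4.0       # slab thickness, cm

        # Free-flight distance for every history at once: -ln(u) / sigma_t.
        flight = -np.log(rng.uniform(size=n_hist)) / sigma_t
        uncollided = np.mean(flight > slab)
        print(f"MC: {uncollided:.4f}  analytic: {np.exp(-sigma_t * slab):.4f}")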

  6. An Advanced Pre-Processing Pipeline to Improve Automated Photogrammetric Reconstructions of Architectural Scenes

    Directory of Open Access Journals (Sweden)

    Marco Gaiani

    2016-02-01

    Automated image-based 3D reconstruction methods are increasingly flooding our 3D modeling applications. Fully automated solutions give the impression that from a sample of randomly acquired images we can derive quite impressive visual 3D models. Although the level of automation is reaching very high standards, image quality is a fundamental pre-requisite to produce successful and photo-realistic 3D products, in particular when dealing with large datasets of images. This article presents an efficient pipeline based on color enhancement, image denoising, color-to-gray conversion and image content enrichment. The pipeline stems from an analysis of various state-of-the-art algorithms and aims to adjust the most promising methods, giving solutions to typical failure causes. The assessment evaluation proves how effective image pre-processing, which considers the entire image dataset, can improve the automated orientation procedure and dense 3D point cloud reconstruction, even in the case of poor texture scenarios.

  7. Bulletin 2005-12 : revised Alberta pipeline regulation issued

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-05-31

    A revised Pipeline Regulation has been issued and is currently available on the Alberta Energy and Utilities Board (EUB) website. Changes to the regulation reflect both changes in EUB regulatory policy and processes, and technological improvements. Goals of the revision include improvements in overall pipeline performance, and the implementation of recommendations derived from the Public Safety and Sour Gas Committee concerning sour gas pipeline safety. The regulation was re-organized for greater clarity, and structured into 11 parts. Issues concerning the transition to the revised regulation were presented. The summary of notable administrative changes included clarifications of when a pipeline application is not required; when ABSA approval is required for steam lines; situations for which low-pressure natural gas lines must be licensed; and emergency response requirements. Technical clarifications include requirements for pipeline operations and maintenance manuals; composite materials; limitations on amounts of H₂S in polymeric pipe; pressure mismatches; approval for testing with gaseous media; venting of small volumes of raw gas; right-of-way surveillance; inspection of surface construction activities; annual corrosion evaluations; registering of pipelines and excavators in controlled areas with Alberta One-Call; ground disturbance training; restoration and signage maintenance on abandoned pipelines; sour service steel pipelines; unused pipelines and abandoned pipelines; and remediation of stub ends in operating pipelines.

  8. CCP4i2: the new graphical user interface to the CCP4 program suite.

    Science.gov (United States)

    Potterton, Liz; Agirre, Jon; Ballard, Charles; Cowtan, Kevin; Dodson, Eleanor; Evans, Phil R; Jenkins, Huw T; Keegan, Ronan; Krissinel, Eugene; Stevenson, Kyle; Lebedev, Andrey; McNicholas, Stuart J; Nicholls, Robert A; Noble, Martin; Pannu, Navraj S; Roth, Christian; Sheldrick, George; Skubak, Pavol; Turkenburg, Johan; Uski, Ville; von Delft, Frank; Waterman, David; Wilson, Keith; Winn, Martyn; Wojdyr, Marcin

    2018-02-01

    The CCP4 (Collaborative Computational Project, Number 4) software suite for macromolecular structure determination by X-ray crystallography brings together many programs and libraries that, by means of well established conventions, interoperate effectively without adhering to strict design guidelines. Because of this inherent flexibility, users are often presented with diverse, even divergent, choices for solving every type of problem. Recently, CCP4 introduced CCP4i2, a modern graphical interface designed to help structural biologists to navigate the process of structure determination, with an emphasis on pipelining and the streamlined presentation of results. In addition, CCP4i2 provides a framework for writing structure-solution scripts that can be built up incrementally to create increasingly automatic procedures.

  9. Processing-in-Memory Enabled Graphics Processors for 3D Rendering

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Chenhao; Song, Shuaiwen; Wang, Jing; Zhang, Weigong; Fu, Xin

    2017-02-06

    The performance of 3D rendering on a Graphics Processing Unit, which converts a 3D vector stream into a 2D frame with 3D image effects, significantly impacts users' gaming experience on modern computer systems. Due to the high texture throughput in 3D rendering, main memory bandwidth becomes a critical obstacle to improving overall rendering performance. 3D stacked memory systems such as the Hybrid Memory Cube (HMC) provide opportunities to significantly overcome the memory wall by directly connecting logic controllers to DRAM dies. Based on the observation that texel fetches significantly impact off-chip memory traffic, we propose two architectural designs to enable Processing-In-Memory-based GPUs for efficient 3D rendering.

  10. Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline

    Science.gov (United States)

    Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.

    2017-05-01

    In the oil and gas industry, the pipeline is a major component in the transmission and distribution of oil and gas. Distribution sometimes takes the pipeline across various types of environmental conditions, so a pipeline should operate safely and not harm the surrounding environment. Corrosion is still a major cause of failure in some equipment components in a production facility. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline, so the pipeline system requires care and periodic inspections. Every production facility in an industry has a level of risk for damage, which results from the likelihood and consequences of the damage caused. The purpose of this research is to analyze the risk level of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection according to API 581, considering both the likelihood of failure and the consequences of failure of an equipment component. The result is then used to determine the next inspection plans. Nine pipeline components were observed, including straight inlet pipes, connection tees, and straight outlet pipes. The risk assessment of the nine pipeline components is presented in a risk matrix; the components were found to lie at a medium risk level. The failure mechanism used in this research is thinning. Based on the corrosion rate calculation, the remaining age of the pipeline components can be obtained, so their remaining lifetime is known; the calculated remaining lifetime varies for each component. The next step is planning the inspection of the pipeline components by external NDT methods.
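
    The remaining-life arithmetic behind the thinning mechanism is simple enough to state directly. A worked sketch with placeholder wall-thickness data; a real assessment would take the corrosion rate from inspection history or a model such as Norsok M-506:

        # Placeholder inspection data for one component (mm, years).
        t_initial = 12.7    # nominal wall thickness
        t_measured = 11.2   # latest measured thickness
        years = 10.0        # interval between the two measurements
        t_required = 8.0    # minimum allowable thickness from the design code

        rate = (t_initial - t_measured) / years            # 0.15 mm/year
        remaining_life = (t_measured - t_required) / rate  # 21.3 years
        print(f"rate {rate:.2f} mm/y, remaining life {remaining_life:.1f} y")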

  11. Lamb wave propagation modelling and simulation using parallel processing architecture and graphical cards

    International Nuclear Information System (INIS)

    Paćko, P; Bielak, T; Staszewski, W J; Uhl, T; Spencer, A B; Worden, K

    2012-01-01

    This paper demonstrates new parallel computation technology and an implementation for Lamb wave propagation modelling in complex structures. A graphical processing unit (GPU) and compute unified device architecture (CUDA), available in low-cost graphical cards in standard PCs, are used for Lamb wave propagation numerical simulations. The local interaction simulation approach (LISA) wave propagation algorithm has been implemented as an example. Other algorithms suitable for parallel discretization can also be used in practice. The method is illustrated using examples related to damage detection. The results demonstrate good accuracy and effective computational performance of very large models. The wave propagation modelling presented in the paper can be used in many practical applications of science and engineering. (paper)

  12. Comparison between research data processing capabilities of AMD and NVIDIA architecture-based graphic processors

    International Nuclear Information System (INIS)

    Dudnik, V.A.; Kudryavtsev, V.I.; Us, S.A.; Shestakov, M.V.

    2015-01-01

    A comparative analysis has been made to describe the potentialities of hardware and software tools of the two most widely used modern architectures of graphic processors (AMD and NVIDIA). Special features and differences of the GPU architectures are exemplified by fragments of GPGPU programs. Time consumption for program development has been estimated. Some pieces of advice are given as to the optimum choice of GPU type for speeding up the processing of scientific research results. Recommendations are formulated for the use of software tools that reduce the time of GPGPU application programming for the given types of graphic processors

  13. CPL: Common Pipeline Library

    Science.gov (United States)

    ESO CPL Development Team

    2014-02-01

    The Common Pipeline Library (CPL) is a set of ISO-C libraries that provide a comprehensive, efficient and robust software toolkit to create automated astronomical data reduction pipelines. Though initially developed as a standardized way to build VLT instrument pipelines, the CPL may be more generally applied to any similar application. The code also provides a variety of general purpose image- and signal-processing functions, making it an excellent framework for the creation of more generic data handling packages. The CPL handles low-level data types (images, tables, matrices, strings, property lists, etc.) and medium-level data access methods (a simple data abstraction layer for FITS files). It also provides table organization and manipulation, keyword/value handling and management, and support for dynamic loading of recipe modules using programs such as EsoRex (ascl:1504.003).

  14. Modeling and monitoring of pipelines and networks advanced tools for automatic monitoring and supervision of pipelines

    CERN Document Server

    Torres, Lizeth

    2017-01-01

    This book focuses on the analysis and design of advanced techniques for on-line automatic computational monitoring of pipelines and pipe networks. It discusses how to improve the systems’ security considering mathematical models of the flow, historical flow rate and pressure data, with the main goal of reducing the number of sensors installed along a pipeline. The techniques presented in the book have been implemented in digital systems to enhance the abilities of the pipeline network’s operators in recognizing anomalies. A real leak scenario in a Mexican water pipeline is used to illustrate the benefits of these techniques in locating the position of a leak. Intended for an interdisciplinary audience, the book addresses researchers and professionals in the areas of mechanical, civil and control engineering. It covers topics on fluid mechanics, instrumentation, automatic control, signal processing, computing, construction and diagnostic technologies.

  15. Worldwide natural gas pipeline situation. Sekai no tennen gas pipeline jokyo

    Energy Technology Data Exchange (ETDEWEB)

    Arimoto, T [Osaka Gas Co. Ltd., Osaka (Japan)

    1993-03-01

    Constructing natural gas pipelines over wide areas requires huge investments. Many countries are building natural gas supply infrastructures with public support, as a matter of national policy to promote the use of natural gas. This paper describes the present state of pipeline construction in Western Europe, the U.S.A., Korea and Taiwan. In Western Europe, transporting companies established in line with national policy own trunk pipelines and storage facilities, and import and distribute natural gas. The U.S.A. has 2300 small and large pipeline companies bearing the transportation business. Pipelines extend about 1.9 million kilometers in total, with trunk pipelines accounting for about 440,000 kilometers. The companies are given eminent domain for the right of way. Korea plans to build a pipeline network of 1600 kilometers by around 2000. Taiwan has completed trunk pipelines extending 330 kilometers in two years. In Japan, the industry is preparing draft plans for wide-area pipeline construction. 5 figs., 1 tab.

  16. Corral framework: Trustworthy and fully functional data intensive parallel astronomical pipelines

    Science.gov (United States)

    Cabral, J. B.; Sánchez, B.; Beroiz, M.; Domínguez, M.; Lares, M.; Gurovich, S.; Granitto, P.

    2017-07-01

    Data processing pipelines represent an important slice of the astronomical software library that includes chains of processes that transform raw data into valuable information via data reduction and analysis. In this work we present Corral, a Python framework for astronomical pipeline generation. Corral features a Model-View-Controller design pattern on top of an SQL Relational Database capable of handling: custom data models; processing stages; and communication alerts, and also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller provides concept separation between the user logic and the data models, delivering at the same time multi-processing and distributed computing capabilities. Corral represents an improvement over commonly found data processing pipelines in astronomy, since the design pattern relieves the programmer of dealing with processing flow and parallelization issues, allowing them to focus on the specific algorithms needed for the successive data transformations, and at the same time provides a broad measure of quality over the created pipeline. Corral and working examples of pipelines that use it are available to the community at https://github.com/toros-astro.

  17. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    Science.gov (United States)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
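
    The particle-sorting step mentioned above can be sketched with Thrust, CUDA's standard parallel algorithms library: assign each particle a grid-cell key, then sort particle indices by that key so particles sharing a cell become contiguous in memory before the grid interpolation phase. This is a generic illustration of the idea, not the thesis code; the array names are assumptions.

        #include <thrust/device_vector.h>
        #include <thrust/sort.h>

        // Reorder particles so that those in the same grid cell are adjacent,
        // improving memory coherence during field gather and charge deposition.
        void sortParticlesByCell(thrust::device_vector<int>& cellId,       // key: cell index per particle
                                 thrust::device_vector<int>& particleIdx)  // value: particle index
        {
            thrust::sort_by_key(cellId.begin(), cellId.end(), particleIdx.begin());
        }

    After the sort, the interpolation kernels walk cell-contiguous particles, which is the access pattern that reduces the parallelization overheads the thesis identifies.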

  18. Graphics Processing Unit Accelerated Hirsch-Fye Quantum Monte Carlo

    Science.gov (United States)

    Moore, Conrad; Abu Asal, Sameer; Rajagoplan, Kaushik; Poliakoff, David; Caprino, Joseph; Tomko, Karen; Thakur, Bhupender; Yang, Shuxiang; Moreno, Juana; Jarrell, Mark

    2012-02-01

    In Dynamical Mean Field Theory and its cluster extensions, such as the Dynamic Cluster Algorithm, the bottleneck of the algorithm is solving the self-consistency equations with an impurity solver. Hirsch-Fye Quantum Monte Carlo is one of the most commonly used impurity and cluster solvers. This work implements optimizations of the algorithm, such as enabling large data re-use, suitable for the Graphics Processing Unit (GPU) architecture. The GPU's sheer number of concurrent parallel computations and large bandwidth to many shared memories takes advantage of the inherent parallelism in the Green function update and measurement routines, and can substantially improve the efficiency of the Hirsch-Fye impurity solver.

  19. Natural gas pipeline technology overview.

    Energy Technology Data Exchange (ETDEWEB)

    Folga, S. M.; Decision and Information Sciences

    2007-11-01

    transmission companies. Compressor stations at required distances boost the pressure that is lost through friction as the gas moves through the steel pipes (EPA 2000). The natural gas system is generally described in terms of production, processing and purification, transmission and storage, and distribution (NaturalGas.org 2004b). Figure 1.1-2 shows a schematic of the system through transmission. This report focuses on the transmission pipeline, compressor stations, and city gates.

  20. Allele Workbench: transcriptome pipeline and interactive graphics for allele-specific expression.

    Directory of Open Access Journals (Sweden)

    Carol A Soderlund

    Full Text Available Sequencing the transcriptome can answer various questions such as determining the transcripts expressed in a given species for a specific tissue or condition, evaluating differential expression, discovering variants, and evaluating allele-specific expression. Differential expression evaluates the expression differences between different strains, tissues, and conditions. Allele-specific expression evaluates expression differences between parental alleles. Both differential expression and allele-specific expression have been studied for heterosis (hybrid vigor), where the hybrid has improved performance over the parents for one or more traits. The Allele Workbench software was developed for a heterosis study that evaluated allele-specific expression for a mouse F1 hybrid using libraries from multiple tissues with biological replicates. This software has been made into a distributable package, which includes a pipeline, a Java interface to build the database, and a Java interface for query and display of the results. The required input is a reference genome, annotation file, and one or more RNA-Seq libraries with optional replicates. It evaluates allelic imbalance at the SNP and transcript level and flags transcripts with significant opposite directional allele-specific expression. The Java interface allows the user to view data from libraries, replicates, genes, transcripts, exons, and variants, including queries on allele imbalance for selected libraries. To determine the impact of allele-specific SNPs on protein folding, variants are annotated with their effect (e.g., missense), and the parental protein sequences may be exported for protein folding analysis. The Allele Workbench processing results in transcript files and read counts that can be used as input to the previously published Transcriptome Computational Workbench, which has a new algorithm for determining a trimmed set of gene ontology terms. The software with demo files is available

  1. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
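
    The transfer-hiding strategy described above can be sketched briefly: page-locked (pinned) host memory plus cudaMemcpyAsync on two CUDA streams lets one slice upload while the previous slice is being processed. The kernel below is only a placeholder for a per-slice reconstruction step; the names, sizes and double-buffering layout are illustrative assumptions, not the Sandia implementation.

        #include <cuda_runtime.h>

        // Placeholder for a per-slice reconstruction kernel (assumption).
        __global__ void backproject(const float* slice, float* volume, int k, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) volume[(size_t)k * n + i] += slice[i];  // dummy accumulation
        }

        int main() {
            const int nSlices = 8, n = 1 << 20;
            const size_t bytes = n * sizeof(float);

            float* host;                              // pinned host buffer: required
            cudaMallocHost(&host, nSlices * bytes);   // for truly asynchronous copies

            float *dev[2], *volume;
            cudaMalloc(&dev[0], bytes);
            cudaMalloc(&dev[1], bytes);
            cudaMalloc(&volume, nSlices * bytes);
            cudaMemset(volume, 0, nSlices * bytes);

            cudaStream_t streams[2];
            cudaStreamCreate(&streams[0]);
            cudaStreamCreate(&streams[1]);

            // Double-buffering: while slice k is reconstructed, slice k+1 uploads,
            // amortizing the host-to-device transfer bottleneck inherent to GPGPU.
            for (int k = 0; k < nSlices; ++k) {
                int b = k % 2;
                cudaMemcpyAsync(dev[b], host + (size_t)k * n, bytes,
                                cudaMemcpyHostToDevice, streams[b]);
                backproject<<<(n + 255) / 256, 256, 0, streams[b]>>>(dev[b], volume, k, n);
            }
            cudaDeviceSynchronize();  // wait for both streams before using the volume
            return 0;
        }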

  2. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
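
    For readers unfamiliar with the pattern, a minimal sketch of the shared-memory reduction that such optimization studies start from is shown below: each block loads two elements per thread, folds its data in shared memory with sequential addressing (which avoids bank conflicts), and writes one partial sum. The kernel name and launch configuration are illustrative, not taken from the paper.

        #include <cuda_runtime.h>

        // Each block reduces 2 * blockDim.x input elements to a single partial sum;
        // the partial sums are then reduced again, or summed on the host.
        __global__ void reduceSum(const float* in, float* out, int n) {
            extern __shared__ float s[];
            unsigned t = threadIdx.x;
            unsigned i = blockIdx.x * blockDim.x * 2 + t;

            // first add during the load halves the number of blocks required
            float v = (i < n) ? in[i] : 0.0f;
            if (i + blockDim.x < n) v += in[i + blockDim.x];
            s[t] = v;
            __syncthreads();

            // tree reduction with sequential addressing (bank-conflict free)
            for (unsigned stride = blockDim.x / 2; stride > 0; stride >>= 1) {
                if (t < stride) s[t] += s[t + stride];
                __syncthreads();
            }
            if (t == 0) out[blockIdx.x] = s[0];  // one partial sum per block
        }

    A typical launch is reduceSum<<<blocks, 256, 256 * sizeof(float)>>>(d_in, d_out, n), repeated on the partial sums until a single value remains.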

  3. Gamma camera image processing and graphical analysis mutual software system

    International Nuclear Information System (INIS)

    Wang Zhiqian; Chen Yongming; Ding Ailian; Ling Zhiye; Jin Yongjie

    1992-01-01

    GCCS is a special-purpose interactive software system for gamma camera image processing and graphical analysis. It is mainly used to analyse various patient data acquired from a gamma camera. The system runs on an IBM PC, PC/XT or PC/AT. It consists of several parts: system management, data management, device management, a program package and user programs. The system provides two kinds of user interfaces: command menus and command characters. It is easy to modify and extend this system because it is highly modularized. The user programs include almost all the clinical protocols in current use

  4. Graphical symbol recognition

    OpenAIRE

    K.C. , Santosh; Wendling , Laurent

    2015-01-01

    The chapter focuses on one of the key issues in document image processing i.e., graphical symbol recognition. Graphical symbol recognition is a sub-field of a larger research domain: pattern recognition. The chapter covers several approaches (i.e., statistical, structural and syntactic) and specially designed symbol recognition techniques inspired by real-world industrial problems. It, in general, contains research problems, state-of-the-art methods that convey basic s...

  5. GUIdock-VNC: using a graphical desktop sharing system to provide a browser-based interface for containerized software.

    Science.gov (United States)

    Mittal, Varun; Hung, Ling-Hong; Keswani, Jayant; Kristiyanto, Daniel; Lee, Sung Bong; Yeung, Ka Yee

    2017-04-01

    Software container technology such as Docker can be used to package and distribute bioinformatics workflows consisting of multiple software implementations and dependencies. However, Docker is a command line-based tool, and many bioinformatics pipelines consist of components that require a graphical user interface. We present a container tool called GUIdock-VNC that uses a graphical desktop sharing system to provide a browser-based interface for containerized software. GUIdock-VNC uses the Virtual Network Computing protocol to render the graphics within most commonly used browsers. We also present a minimal image builder that can add our proposed graphical desktop sharing system to any Docker packages, with the end result that any Docker packages can be run using a graphical desktop within a browser. In addition, GUIdock-VNC uses the Oauth2 authentication protocols when deployed on the cloud. As a proof-of-concept, we demonstrated the utility of GUIdock-noVNC in gene network inference. We benchmarked our container implementation on various operating systems and showed that our solution creates minimal overhead. © The Authors 2017. Published by Oxford University Press.

  6. Computer graphics from basic to application

    International Nuclear Information System (INIS)

    Kim, Do Hyeong; Mun, Sung Min

    1998-04-01

    This book covers the concepts of computer graphics, its background and history, its necessity and fields of application such as construction design, image processing, automobile design, fashion design and TV broadcasting, the basic principles of computers, computer graphics hardware, and computer graphics software such as the Adobe Illustrator toolbox and Adobe Photoshop; it introduces QuarkXPress, its applications and operating environment, gives a summary of 3D graphics and the differences between versions of 3D Studio, and covers AutoCAD applications.

  7. Computer graphics from basic to application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Do Hyeong; Mun, Sung Min

    1998-04-15

    This book covers the concepts of computer graphics, its background and history, its necessity and fields of application such as construction design, image processing, automobile design, fashion design and TV broadcasting, the basic principles of computers, computer graphics hardware, and computer graphics software such as the Adobe Illustrator toolbox and Adobe Photoshop; it introduces QuarkXPress, its applications and operating environment, gives a summary of 3D graphics and the differences between versions of 3D Studio, and covers AutoCAD applications.

  8. Numerical simulation of wave-induced scour and backfilling processes beneath submarine pipelines

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Baykal, Cüneyt; Sumer, B. Mutlu

    2014-01-01

    A fully-coupled hydrodynamic/morphodynamic numerical model is presented and utilized for the simulation of wave-induced scour and backfilling processes beneath submarine pipelines. The model is based on solutions to Reynolds-averaged Navier–Stokes equations, coupled with k−ω turbulence closure... Simulations for Keulegan–Carpenter numbers KC≤30 demonstrate reasonable match with previous experiments, both in terms of the equilibrium scour depth as well as the scour time scale. Wave-induced backfilling processes are additionally studied by subjecting initial conditions taken from scour simulations with larger KC to new wave climates... characterized by lower KC values. The simulations considered demonstrate the ability of the model to predict backfilling toward expected equilibrium scour depths based on the new wave climate, in line with experimental expectations. The simulated backfilling process is characterized by two stages: (1

  9. Representation stigma: Perceptions of tools and processes for design graphics

    Directory of Open Access Journals (Sweden)

    David Barbarash

    2016-12-01

    Full Text Available Practicing designers and design students across multiple fields were surveyed to measure preference and perception of traditional hand and digital tools to determine if common biases for an individual toolset are realized in practice. Significant results were found, primarily with age being a determinant in preference of graphic tools and processes; this finding demonstrates a hard line between generations of designers. Results show that while there are strong opinions in tools and processes, the realities of modern business practice and production gravitate towards digital methods despite a traditional tool preference in more experienced designers. While negative stigmas regarding computers remain, younger generations are more accepting of digital tools and images, which should eventually lead to a paradigm shift in design professions.

  10. Development of the graphic design and control system based on a graphic simulator for the spent fuel dismantling equipment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. Y.; Kim, S. H.; Song, T. G.; Yoon, J. S

    2000-06-01

    In this study, the graphic design system is developed for designing the spent fuel rod consolidation and the dismantling processes. This system is used throughout the design stages from the conceptual design to the motion analysis. Also, the real-time control system of the rod extracting equipment is developed. This system utilizes the graphic simulator which simulates the motion of the equipment in real time by synchronously connecting the control PC with the graphic server through the TCP/IP network. The developed system is expected to be used as an effective tool in designing the process equipment for the spent fuel management. And the real-time graphic control system can be effectively used to enhance the reliability and safety of the spent fuel handling process by providing the remote monitoring function of the process.

  11. Development of the graphic design and control system based on a graphic simulator for the spent fuel dismantling equipment

    International Nuclear Information System (INIS)

    Lee, J. Y.; Kim, S. H.; Song, T. G.; Yoon, J. S.

    2000-06-01

    In this study, the graphic design system is developed for designing the spent fuel rod consolidation and the dismantling processes. This system is used throughout the design stages from the conceptual design to the motion analysis. Also, the real-time control system of the rod extracting equipment is developed. This system utilizes the graphic simulator which simulates the motion of the equipment in real time by synchronously connecting the control PC with the graphic server through the TCP/IP network. The developed system is expected to be used as an effective tool in designing the process equipment for the spent fuel management. And the real-time graphic control system can be effectively used to enhance the reliability and safety of the spent fuel handling process by providing the remote monitoring function of the process

  12. CLOTU: An online pipeline for processing and clustering of 454 amplicon reads into OTUs followed by taxonomic annotation

    Directory of Open Access Journals (Sweden)

    Shalchian-Tabrizi Kamran

    2011-05-01

    Full Text Available Abstract Background The implementation of high throughput sequencing for exploring biodiversity poses high demands on bioinformatics applications for automated data processing. Here we introduce CLOTU, an online and open access pipeline for processing 454 amplicon reads. CLOTU has been constructed to be highly user-friendly and flexible, since different types of analyses are needed for different datasets. Results In CLOTU, the user can filter out low quality sequences, trim tags, primers, adaptors, perform clustering of sequence reads, and run BLAST against NCBInr or a customized database in a high performance computing environment. The resulting data may be browsed in a user-friendly manner and easily forwarded to downstream analyses. Although CLOTU is specifically designed for analyzing 454 amplicon reads, other types of DNA sequence data can also be processed. A fungal ITS sequence dataset generated by 454 sequencing of environmental samples is used to demonstrate the utility of CLOTU. Conclusions CLOTU is a flexible and easy to use bioinformatics pipeline that includes different options for filtering, trimming, clustering and taxonomic annotation of high throughput sequence reads. Some of these options are not included in comparable pipelines. CLOTU is implemented in a Linux computer cluster and is freely accessible to academic users through the Bioportal web-based bioinformatics service (http://www.bioportal.uio.no).

  13. Monitoring device for the reactor pipelines

    International Nuclear Information System (INIS)

    Fukumoto, Akira.

    1983-01-01

    Purpose: To enable rapid and accurate operator monitoring of the state of pipelines in a BWR type reactor. Constitution: Specific symbols are attached respectively to the fluid supply sources constituting the pipelines of a nuclear reactor facility, a plurality of fluid passing points, and the equipment to be supplied with the fluid, and a symmetrical matrix comprising these symbols in rows and columns is constituted. Then, a matrix is prepared based on detection signals for the states of the fluid supply sources, the equipment to be supplied with fluid and the pipeline equipment, by setting the matrix elements between signals expressing a state capable of passing the fluid to 1 and the matrix elements between signals expressing a state incapable of passing the fluid to 0. The matrix thus prepared in a signal processing circuit is compared with a matrix, previously stored in a memory circuit, expressing the normal state of the pipelines, to judge the state of the pipelines in a short time and without misjudgement. (Moriyama, K.)
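
    The comparison step lends itself to a very small sketch: treat the pipeline line-up as a Boolean matrix whose element (r, c) is 1 when fluid can pass between components r and c, and flag any element that differs from the stored normal-state matrix. The three-component example below is invented for illustration and is not from the patent.

        #include <array>
        #include <cstdio>

        constexpr int N = 3;  // e.g. supply source, valve station, fed equipment (illustrative)
        using Matrix = std::array<std::array<int, N>, N>;

        // Report the first element where the measured line-up deviates from
        // the stored normal-state matrix; return true if any deviation exists.
        bool deviates(const Matrix& normal, const Matrix& measured) {
            for (int r = 0; r < N; ++r)
                for (int c = 0; c < N; ++c)
                    if (normal[r][c] != measured[r][c]) {
                        std::printf("element (%d,%d): expected %d, got %d\n",
                                    r, c, normal[r][c], measured[r][c]);
                        return true;
                    }
            return false;
        }

        int main() {
            Matrix normal   = {{{1, 1, 0}, {1, 1, 1}, {0, 1, 1}}};  // fluid can pass
            Matrix measured = {{{1, 1, 0}, {1, 1, 0}, {0, 0, 1}}};  // a path is blocked
            std::printf(deviates(normal, measured) ? "pipeline state abnormal\n"
                                                   : "pipeline state normal\n");
            return 0;
        }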

  14. Optimal hub location in pipeline networks

    Energy Technology Data Exchange (ETDEWEB)

    Dott, D.R.; Wirasinghe, S.C.; Chakma, A. [Univ. of Calgary, Alberta (Canada)

    1996-12-31

    This paper discusses optimization strategies and techniques for the location of natural gas marketing hubs in the North American gas pipeline network. A hub is a facility at which inbound and outbound network links meet and freight is redirected towards their destinations. Common examples of hubs used in the gas pipeline industry include gas plants, interconnects and market centers. Characteristics of the gas pipeline industry which are relevant to the optimization of transportation costs using hubs are presented. Allocation techniques for solving location-allocation problems are discussed. An outline of the research in process by the authors in the field of optimal gas hub location concludes the paper.

  15. Pipeline modeling and assessment in unstable slopes

    Energy Technology Data Exchange (ETDEWEB)

    Caceres, Carlos Nieves [Oleoducto Central S.A., Bogota, Cundinamarca (Colombia); Ordonez, Mauricio Pereira [SOLSIN S.A.S, Bogota, Cundinamarca (Colombia)

    2010-07-01

    The OCENSA pipeline system is vulnerable to geotechnical problems such as faults, landslides or creeping slopes, which are well-known in the Andes Mountains and tropical countries like Colombia. This paper proposes a methodology to evaluate the pipe behaviour during the soil displacements of slow landslides. Three different cases of analysis are examined, according to site characteristics. The process starts with a simplified analytical model and develops into 3D finite element numerical simulations applied to the on-site geometry of soil and pipe. Case 1 should be used when the unstable site is subject to landslides impacting significant lengths of pipeline, pipeline is straight, and landslide is simple from the geotechnical perspective. Case 2 should be used when pipeline is straight and landslide is complex (creeping slopes and non-conventional stabilization solutions). Case 3 should be used if the pipeline presents vertical or horizontal bends.

  16. United States petroleum pipelines: An empirical analysis of pipeline sizing

    Science.gov (United States)

    Coburn, L. L.

    1980-12-01

    The undersizing theory hypothesizes that integrated oil companies have a strong economic incentive to size the petroleum pipelines they own and ship over in a way that forces some of the demand onto higher-cost alternatives. The DOJ theory posits that excess or monopoly profits are earned due to the natural monopoly characteristics of petroleum pipelines and the existence of market power in some pipelines at either the upstream or downstream market. The theory holds that independent petroleum pipelines owned by companies not otherwise affiliated with the petroleum industry (independent pipelines) do not have these incentives, and that all the efficiencies of pipeline transportation are passed to the ultimate consumer. Integrated oil companies, on the other hand, keep these cost efficiencies for themselves in the form of excess profits.

  17. Underground pipeline corrosion

    CERN Document Server

    Orazem, Mark

    2014-01-01

    Underground pipelines transporting liquid petroleum products and natural gas are critical components of civil infrastructure, making corrosion prevention an essential part of asset-protection strategy. Underground Pipeline Corrosion provides a basic understanding of the problems associated with corrosion detection and mitigation, and of the state of the art in corrosion prevention. The topics covered in part one include: basic principles for corrosion in underground pipelines, AC-induced corrosion of underground pipelines, significance of corrosion in onshore oil and gas pipelines, n

  18. Extending the Fermi-LAT data processing pipeline to the grid

    Energy Technology Data Exchange (ETDEWEB)

    Zimmer, S. [Stockholm Univ., Stockholm (Sweden); The Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Arrabito, L. [Univ. Montpellier 2, Montpellier (France); Glanzman, T. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Johnson, T. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Lavalley, C. [Univ. Montpellier 2, Montpellier (France); Tsaregorodtsev, A. [Centre de Physique des Particules de Marseille, Marseille (France)

    2015-05-12

    The Data Handling Pipeline ("Pipeline") has been developed for the Fermi Gamma-Ray Space Telescope (Fermi) Large Area Telescope (LAT) which launched in June 2008. Since then it has been in use to completely automate the production of data quality monitoring quantities, reconstruction and routine analysis of all data received from the satellite and to deliver science products to the collaboration and the Fermi Science Support Center. Aside from the reconstruction of raw data from the satellite (Level 1), data reprocessing and various event-level analyses are also reasonably heavy loads on the pipeline and computing resources. These other loads, unlike Level 1, can run continuously for weeks or months at a time. Additionally, it receives heavy use in performing production Monte Carlo tasks.

  19. DROIDS 1.20: A GUI-Based Pipeline for GPU-Accelerated Comparative Protein Dynamics.

    Science.gov (United States)

    Babbitt, Gregory A; Mortensen, Jamie S; Coppola, Erin E; Adams, Lily E; Liao, Justin K

    2018-03-13

    Traditional informatics in comparative genomics work only with static representations of biomolecules (i.e., sequence and structure), thereby ignoring the molecular dynamics (MD) of proteins that define function in the cell. A comparative approach applied to MD would connect this very short timescale process, defined in femtoseconds, to one of the longest in the universe: molecular evolution measured in millions of years. Here, we leverage advances in graphics-processing-unit-accelerated MD simulation software to develop a comparative method of MD analysis and visualization that can be applied to any two homologous Protein Data Bank structures. Our open-source pipeline, DROIDS (Detecting Relative Outlier Impacts in Dynamic Simulations), works in conjunction with existing molecular modeling software to convert any Linux gaming personal computer into a "comparative computational microscope" for observing the biophysical effects of mutations and other chemical changes in proteins. DROIDS implements structural alignment and Benjamini-Hochberg-corrected Kolmogorov-Smirnov statistics to compare nanosecond-scale atom bond fluctuations on the protein backbone, color mapping the significant differences identified in protein MD with single-amino-acid resolution. DROIDS is simple to use, incorporating graphical user interface control for Amber16 MD simulations, cpptraj analysis, and the final statistical and visual representations in R graphics and UCSF Chimera. We demonstrate that DROIDS can be utilized to visually investigate molecular evolution and disease-related functional changes in MD due to genetic mutation and epigenetic modification. DROIDS can also be used to potentially investigate binding interactions of pharmaceuticals, toxins, or other biomolecules in a functional evolutionary context as well. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
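
    As an illustration of the multiple-testing step named above, the Benjamini-Hochberg procedure finds the largest rank k such that the k-th smallest p-value satisfies p(k) <= (k/m)*alpha, and rejects the k most significant sites. The sketch below is a generic rendering of that procedure with made-up p-values; it is not DROIDS source code.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        // Benjamini-Hochberg step-up procedure: returns how many of the m
        // hypotheses (e.g. per-residue KS tests) are rejected at FDR alpha.
        int bhReject(std::vector<double> pvals, double alpha) {
            std::sort(pvals.begin(), pvals.end());
            const int m = static_cast<int>(pvals.size());
            int k = 0;
            for (int i = 0; i < m; ++i)
                if (pvals[i] <= alpha * (i + 1) / m) k = i + 1;  // largest passing rank
            return k;  // reject the k smallest p-values
        }

        int main() {
            std::vector<double> p = {0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.74};
            std::printf("sites flagged at FDR 0.05: %d\n", bhReject(p, 0.05));  // prints 2
            return 0;
        }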

  20. A graphical method for estimating the tunneling factor for mode conversion processes

    International Nuclear Information System (INIS)

    Swanson, D.G.

    1994-01-01

    The fundamental parameter characterizing the strength of any mode conversion process is the tunneling parameter, which is typically determined from a model dispersion relation which is transformed into a differential equation. Here a graphical method is described which gives the tunneling parameter from quantities directly measured from a simple graph of the dispersion relation. The accuracy of the estimate depends only on the accuracy of the measurements

  1. Pipeline engineering. 8. rev. and enl. ed.

    International Nuclear Information System (INIS)

    Wagner, W.

    2000-01-01

    Apart from calculating the strength of pipeline components, planning and design are the most important tasks in the areas of apparatus manufacturing, fluid engineering, process engineering and thermal engineering. It is therefore necessary that the flow diagrams of a plant are clearly understandable and in accordance with the technical rules even in the early stages of planning. This book concentrates on steel pipelines which are not laid underground but are of the type used mostly in industrial applications. The pictures and equations provided can be used for the design of pipelines; tables and diagrams are given to facilitate estimation of elasticity, pipeline pressure losses and insulating thicknesses. An overview of the equations is given at the end of the book. Many examples facilitate learning. (orig.) [de]

  2. [Influence of the recording interval and a graphic organizer on the writing process/product and on other psychological variables].

    Science.gov (United States)

    García Sánchez, Jesús N; Rodríguez Pérez, Celestino

    2007-05-01

    An experimental study of the influence of the recording interval and a graphic organizer on the processes of writing composition and on the final product is presented. We studied 326 participants, age 10 to 16 years old, by means of a nested design. Two groups were compared: one group was aided in the writing process with a graphic organizer and the other was not. Each group was subdivided into two further groups: one with a mean recording interval of 45 seconds and the other with approximately 90 seconds recording interval in a writing log. The results showed that the group aided by a graphic organizer obtained better results both in processes and writing product, and that the groups assessed with an average interval of 45 seconds obtained worse results. Implications for educational practice are discussed, and limitations and future perspectives are commented on.

  3. A Reliability Assessment of the Hydrostatic Test of Pipeline with 0.8 Design Factor in the West–East China Natural Gas Pipeline III

    Directory of Open Access Journals (Sweden)

    Kai Wen

    2018-05-01

    Full Text Available The use of a 0.8 design factor in the Chinese pipeline industry is a breakthrough with the success of the test pipe section in the west–east China gas pipeline III. For such a design factor, the traditional P-V (Pressure-Volume) curve based pressure test control cannot describe the details of the process, and the 0/1 type failure is not an efficient index to show the safety level of the pipeline. In this paper, a reliability based assessment method is proposed to monitor the real-time failure probability of the pipeline during the hydrostatic test process. The reliability index can be used as the degree of risk. Following the actual hydrostatic testing of a test pipe section with 0.8 design factor in the west–east China gas pipeline III, reliability analysis was performed using the Monte Carlo technique. The basic values of input parameters of the limit state equations are based on the data collected from either the tested section or the recommended values in the codes. The analysis of limit states, i.e., the yielding deformation and the excessive plastic deformation of the pipeline, proceeded based on these distributions. Finally, it is found that the gradually increased water pressure makes the failure probability increase accordingly. A reliability assessment method was thus proposed and illustrated with the practical pressure test process.
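
    The flavor of such a limit-state Monte Carlo run can be sketched in a few lines: sample yield strength and wall thickness from assumed distributions, compute the hoop stress at the current test pressure via Barlow's formula (sigma = P*D / (2*t)), and count realizations exceeding yield. Every numeric value below is an invented placeholder, not data from the west-east pipeline III test section.

        #include <cstdio>
        #include <random>

        // Crude Monte Carlo estimate of P(hoop stress > yield strength)
        // at one pressure step of a hydrostatic test. Illustrative values only.
        int main() {
            const double D = 1.219;           // outer diameter, m (assumed 48'')
            const double pressure = 14.0e6;   // current test pressure, Pa (assumed)

            std::mt19937_64 rng(42);
            std::normal_distribution<double> yield(555e6, 20e6);   // X80-like yield, Pa
            std::normal_distribution<double> thick(0.0184, 3e-4);  // wall thickness, m

            const int trials = 1000000;
            int failures = 0;
            for (int i = 0; i < trials; ++i) {
                double sigma = pressure * D / (2.0 * thick(rng));  // Barlow hoop stress
                if (sigma > yield(rng)) ++failures;                // yielding limit state
            }
            std::printf("failure probability ~ %.2e\n",
                        static_cast<double>(failures) / trials);
            return 0;
        }

    Re-running such an estimate at each pressure increment yields the kind of real-time failure-probability curve the paper monitors: the estimate grows as the water pressure is raised.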

  4. Removable pipeline plug

    International Nuclear Information System (INIS)

    Vassalotti, M.; Anastasi, F.

    1984-01-01

    A removable plugging device for a pipeline, and particularly for pressure testing a steam pipeline in a boiling water reactor, wherein an inflatable annular sealing member seals off the pipeline and characterized by radially movable shoes for holding the plug in place, each shoe being pivotally mounted for self-adjusting engagement with even an out-of-round pipeline interior

  5. Pipeline integrity management

    Energy Technology Data Exchange (ETDEWEB)

    Guyt, J.; Macara, C.

    1997-12-31

    This paper focuses on some of the issues necessary for pipeline operators to consider when addressing the challenge of managing the integrity of their systems. Topics are: Definition; business justification; creation and safeguarding of technical integrity; control and deviation from technical integrity; pipelines; pipeline failure assessment; pipeline integrity assessment; leak detection; emergency response. 6 figs., 3 tabs.

  6. 75 FR 13342 - Pipeline Safety: Workshop on Distribution Pipeline Construction

    Science.gov (United States)

    2010-03-19

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... natural gas distribution construction. Natural gas distribution pipelines are subject to a unique subset... distribution pipeline construction practices. This workshop will focus solely on natural gas distribution...

  7. Critique and Process: Signature Pedagogies in the Graphic Design Classroom

    Science.gov (United States)

    Motley, Phillip

    2017-01-01

    Like many disciplines in design and the visual fine arts, critique is a signature pedagogy in the graphic design classroom. It serves as both a formative and summative assessment while also giving students the opportunity to practice the habits of graphic design. Critiques help students become keen observers of relevant disciplinary criteria;…

  8. A DDC Bibliography on Optical or Graphic Information Processing (Information Sciences Series). Volume I.

    Science.gov (United States)

    Defense Documentation Center, Alexandria, VA.

    This unclassified-unlimited bibliography contains 183 references, with abstracts, dealing specifically with optical or graphic information processing. Citations are grouped under three headings: display devices and theory, character recognition, and pattern recognition. Within each group, they are arranged in accession number (AD-number) sequence.…

  9. GRAPHICAL MODELS OF THE AIRCRAFT MAINTENANCE PROCESS

    Directory of Open Access Journals (Sweden)

    Stanislav Vladimirovich Daletskiy

    2017-01-01

    Full Text Available Aircraft maintenance is realized by a rapid sequence of maintenance organizational and technical states, and its research and analysis are carried out by statistical methods. The maintenance process comprises aircraft technical states, connected with the objective patterns of change in the technical qualities of the aircraft as a maintenance object, and organizational states, which determine the subjective organization and planning of aircraft use. The objective maintenance process is realized in the Maintenance and Repair System, which does not include maintenance organization and planning and is a set of related elements: aircraft, Maintenance and Repair measures, executors and documentation that sets the rules of their interaction for maintaining the aircraft's reliability and readiness for flight. The aircraft organizational and technical states are considered, and their characteristics and heuristic estimates of the connections in the knots and arcs of graphs, and of aircraft organizational states during regular maintenance and at technical state failure, are given. It is shown that in real conditions of aircraft maintenance, planned control of the aircraft technical state, and maintenance control through it, is defined only by Maintenance and Repair conditions for a given Maintenance and Repair type and form structure, and correspondingly by the principles of assigning Maintenance and Repair work types for execution, that is, by the maintenance and reconstruction strategies of the aircraft and all its units. The realization of the planned Maintenance and Repair process determines the constant maintenance component. The proposed graphical models allow quantitative correlations between graph knots to be revealed and maintenance processes to be improved by statistical research methods, which reduces manning, timetables and expenses while providing safe civil aviation aircraft maintenance.

  10. A Novel Method to Enhance Pipeline Trajectory Determination Using Pipeline Junctions.

    Science.gov (United States)

    Sahli, Hussein; El-Sheimy, Naser

    2016-04-21

    Pipeline inspection gauges (pigs) have been used for many years to perform various maintenance operations in oil and gas pipelines. Different pipeline parameters can be inspected during the pig journey. Although pigs use many sensors to detect the required pipeline parameters, matching these data with the corresponding pipeline location is a critically important task. High-end, tactical-grade inertial measurement units (IMUs) are used in pigging applications to locate the problems detected by the other sensors and to reconstruct the trajectories of the pig. These IMUs are accurate; however, their high cost and large size limit their use in small diameter pipelines (8″ or less). This paper describes a new methodology for the use of MEMS-based IMUs using an extended Kalman filter (EKF) and the pipeline junctions to increase the position parameters' accuracy and to reduce the total RMS errors even during the unavailability of above ground markers (AGMs). The results of this new proposed method using a micro-electro-mechanical systems (MEMS)-based IMU revealed that the position RMS errors were reduced by approximately 85% compared to the standard EKF solution. Therefore, this approach will enable the mapping of small diameter pipelines, which was not possible before.
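
    The core of the junction aiding can be reduced to a toy scalar update: when the pig passes a junction whose chainage is known from survey data, the along-pipe position estimate is pulled toward it with a Kalman gain. A real pig navigator fuses a full 3D EKF with IMU mechanization and odometer data, so the sketch below, with invented numbers, only shows why a surveyed junction shrinks the position error.

        #include <cstdio>

        struct PosEstimate { double x; double var; };  // along-pipe position, m; variance, m^2

        // Scalar Kalman measurement update using a junction of known chainage.
        PosEstimate junctionUpdate(PosEstimate est, double junctionChainage, double measVar) {
            double K = est.var / (est.var + measVar);  // Kalman gain in [0, 1)
            est.x  += K * (junctionChainage - est.x);  // pull estimate toward the junction
            est.var = (1.0 - K) * est.var;             // uncertainty collapses after the fix
            return est;
        }

        int main() {
            PosEstimate est = {1523.4, 25.0};          // drifted dead-reckoning estimate
            est = junctionUpdate(est, 1520.0, 0.04);   // junction surveyed to about +/-0.2 m
            std::printf("corrected position: %.2f m (variance %.4f m^2)\n", est.x, est.var);
            return 0;
        }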

  11. Pollution from pipelines

    International Nuclear Information System (INIS)

    1991-01-01

    During the 1980s, over 3,900 spills from land-based pipelines released nearly 20 million gallons of oil into U.S. waters, almost twice as much as was released by the March 1989 Exxon Valdez oil spill. Although the Department of Transportation is responsible for preventing water pollution from petroleum pipelines, GAO found that it has not established a program to prevent such pollution. DOT has instead delegated this responsibility to the Coast Guard, which has a program to stop water pollution from ships, but not from pipelines. This paper reports that, in the absence of any federal program to prevent water pollution from pipelines, both the Coast Guard and the Environmental Protection Agency have taken steps to plan for and respond to oil spills, including those from pipelines, as required by the Clean Water Act. The Coast Guard cannot, however, adequately plan for or ensure a timely response to pipeline spills because it generally is unaware of specific locations and operators of pipelines

  12. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline.

    Science.gov (United States)

    Loh, K B; Ramli, N; Tan, L K; Roziah, M; Rahmat, K; Ariffin, H

    2012-07-01

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using an automated ROI with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly early in the first 12 months. The second phase lasted 12-24 months during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. Diffusion tensor imaging outperforms conventional MRI in depicting white matter maturation. • DTI will become an important clinical tool for diagnosing paediatric neurological diseases. • DTI appears especially helpful for developmental abnormalities, tumours and white matter disease. • An automated processing pipeline assists quantitative analysis of high throughput DTI data.
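
    For reference, the two scalar maps quantified in this study are standard functions of the diffusion tensor eigenvalues, written here in LaTeX:

        \mathrm{MD} = \bar{\lambda} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3},
        \qquad
        \mathrm{FA} = \sqrt{\frac{3}{2}}\;
        \frac{\sqrt{(\lambda_1-\bar{\lambda})^2 + (\lambda_2-\bar{\lambda})^2 + (\lambda_3-\bar{\lambda})^2}}
             {\sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}

    FA rises toward 1 as diffusion becomes more directional (as in myelinated tracts), while MD falls as water motion becomes more restricted, consistent with the triphasic trends reported above.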

  13. 75 FR 63774 - Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines

    Science.gov (United States)

    2010-10-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Pipelines AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), Department of... Gas Pipeline Safety Act of 1968, Public Law 90-481, delegated to DOT the authority to develop...

  14. 77 FR 61825 - Pipeline Safety: Notice of Public Meeting on Pipeline Data

    Science.gov (United States)

    2012-10-11

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... program performance measures for gas distribution, gas transmission, and hazardous liquids pipelines. The... distribution pipelines (49 CFR 192.1007(e)), gas transmission pipelines (49 CFR 192.945) and hazardous liquids...

  15. Pipelines 'R' us

    International Nuclear Information System (INIS)

    Thomas, P.

    1997-01-01

    The geopolitical background to the export of oil and gas from Kazakhstan by pipeline is explored with particular reference to the sensitivities of the USA. There are now a number of pipeline proposals which would enable Kazakhstan to get its hydrocarbons to world markets. The construction of two of these formed part of a major oil deal signed recently with China in the face of stiff competition from major US companies. The most convenient and cost effective route, connecting up with Iran's existing pipeline network to the Gulf, is unlikely to be developed given continuing US sanctions against Iran. Equally unlikely seems to be the Turkmenistan to Pakistan pipeline in the light of the political volatility of Afghanistan. US companies continue to face limits on export capacity via the existing Russian pipelines from Kazakhstan. A temporary solution could be to carry some oil in the existing pipeline from Azerbaijan to Georgia which has been upgraded and is due to become operational soon, and later in a second proposed pipeline on this route. The Caspian Pipeline Consortium, consisting of three countries and eleven international companies, is building a 1500 km pipeline from the Tergiz field to Novorossiysk on the Black Sea with a view to completion in 2000. An undersea pipeline crossing the Caspian from Azerbaijan is being promoted by Turkey. There is an international perception that within the next five years Kazakhstan could be in a position to export its oil via as many as half a dozen different routes. (UK)

  16. Public perceptions of CO2 transportation in pipelines

    International Nuclear Information System (INIS)

    Gough, Clair; O'Keefe, Laura; Mander, Sarah

    2014-01-01

    This paper explores the response by members of the lay public to the prospect of an onshore CO2 pipeline through their locality as part of a proposed CCS development and presents results from deliberative Focus Groups held along a proposed pipeline route. Although there is a reasonable level of general knowledge about CO2 across the lay public, understanding of its specific properties is more limited. The main concerns expressed around pipelines focused on five areas: (i) safe operation of the pipeline; (ii) the risks to people, livestock and vegetation arising from the leakage of CO2 from the pipeline; (iii) the innovative and 'first of its kind' nature of the pipeline and the consequent lack of operational CO2 pipelines in the UK to demonstrate the technology; (iv) impacts on coastal erosion at the landfall site; and (v) the potential disruption to local communities during pipeline construction. Participants expressed scepticism over the motivations of CO2 pipeline developers. Trust that the developer will minimise risk during the route selection and subsequent construction, operation and maintenance of the pipeline is key; building trust within the local community requires early engagement processes, tailored to deliver a variety of engagement and information approaches. - Highlights: • Lay publics express good general knowledge of CO2 but not of its specific properties. • Key concerns relate to risk and safety and the 'first of a kind' nature of a CO2 pipeline. • Group participants are sceptical about the motivations of CO2 pipeline developers. • Communities' trust in the developer is a major element of their risk assessment

  17. 76 FR 44985 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding

    Science.gov (United States)

    2011-07-27

    .... PHMSA-2011-0177] Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding AGENCY... liquid pipelines to communicate the potential for damage to pipeline facilities caused by severe flooding... pipelines in case of flooding. ADDRESSES: This document can be viewed on the Office of Pipeline Safety home...

  18. Hazard identification studies applied to oil pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Savio, Augusto; Alpert, Melina L. [TECNA S.A., Buenos Aires (Argentina)], e-mail: asavio@tecna.com, e-mail: malpert@tecna.com

    2008-07-01

    In order to assess risks inherent to an oil pipeline, it is imperative to analyze what happens 'outside the process'. HAZID (HAZard IDentification) studies are mainly carried out for this purpose. HAZID is a formal study which identifies hazards and risks associated with an operation or facility and enables assessment of their acceptability. It is a brainstorming exercise guided by a typical 'Checklist', divided into four Sections: External Hazards, Facilities Hazards, Health Hazards, and Issues pertaining to Project Execution, which are further subdivided into Hazard Categories. For each Category, there are 'Guide-words' and 'Prompts'. Even though an oil pipeline risk assessment can be performed by means of the above-referenced 'Checklist', carrying out the actual process can become lengthy and tedious due to the lack of specificity. This work aims at presenting a 'Checklist' better suited to oil pipeline risk assessment, although it could be used for gas pipeline risk assessment too. Prepared ad hoc, this list is based on the spill causes established by CONCAWE (CONservation of Clean Air and Water in Europe). Performing oil pipeline risk assessment by means of a specially formulated Checklist enables the Study Team to easily identify risks, shortens execution time and provides both accuracy and specificity. (author)

  19. Pipeline transportation of emerging partially upgraded bitumen

    International Nuclear Information System (INIS)

    Luhning, R.W.; Anand, A.; Blackmore, T.; Lawson, D.S.

    2002-01-01

    The recoverable reserves of Canada's vast oil deposits are estimated at 335 billion barrels (bbl), most of which are in the Alberta oil sands. Canada was the largest import supplier of crude oil to the United States in 2001, followed by Saudi Arabia. By 2011, oil sands production is expected to increase to 50 per cent of Canada's oil, and conventional oil production will decline as more production is provided by synthetic light oil and bitumen. This paper lists the announced oil sands projects; if all were to proceed, production would reach 3,445,000 bbl per day by 2011. The three main challenges regarding the transportation and marketing of this new production are described. The first is to expand the physical capacity of existing pipelines. The second is the supply of low viscosity diluent (such as natural gas condensate or synthetic diluent) to reduce the viscosity and density of the bitumen as it passes through the pipelines. The current pipeline specifications and procedures to transport partially upgraded products are presented. The final challenge is the projected refinery market constraint to process the bitumen and synthetic light oil into consumer fuel products. These challenges can be addressed by modifying refineries and increasing Canadian access in Petroleum Administration Defense District (PADD) II and IV. The technology for partial upgrading of bitumen to produce pipeline specification oil, reduce diluent requirements and add sales value, is currently under development. The number of existing refineries that could potentially accept partially upgraded product is listed. The partially upgraded bitumen will be in demand for additional upgrading to end user products, and new opportunities will be presented as additional pipeline capacity is made available to transport crude to U.S. markets and overseas. The paper describes the following emerging partial upgrading methods: the OrCrude upgrading process, rapid thermal processing, CPJ process for

  20. Pipeline technology. Petroleum oil - long-distance pipelines. Pipelinetechnik. Mineraloelfernleitungen

    Energy Technology Data Exchange (ETDEWEB)

    Krass, W; Kittel, A; Uhde, A

    1979-01-01

    All questions and concerns of pipeline engineering are dealt with in detail. Some chapters apply only or partly to petroleum pipelines, for example those on the importance of petroleum pipelines, planning, calculation, and operation. The sections on pipes and fittings, laying, rights of way, and corrosion protection, accessories and telecontrol technique, however, are of general interest, for example also for gas pipelines. In the chapter on materials, a very good summary of today's pipe materials, including the thermomechanically treated steels, is given. Besides methods of improving toughness, the problems of stress corrosion cracking and the ways of avoiding it are pointed out. The pipe production methods and, at the end of the chapter, the factory tests are explained. The section on laying deals with the laying methods applied for years in pipeline construction, a large part referring to welding methods and tests. Active and passive corrosion protection are explained in full detail. In addition to the strength calculation, presented with special regard to petroleum pipelines, theoretical fundamentals and calculation methods for pressure are dealt with. Besides general questions of pumps, accessories, and drives, there is a section dealing with measurement and control techniques. Furthermore, telecontrol and transmission techniques and communication systems are explained in detail. Here, problems are referred to which are applicable not only to the operation of mineral oil pipelines. The book is completed by remarks on pipeline operation, emphasizing general operation control, maintenance, repair methods, and damage and its elimination. The last chapter contains a collection of the legal fundamentals and the technical rules.

  1. Validation of pig operations through pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Tolmasquim, Sueli Tiomno [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil); Nieckele, Angela O. [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Mecanica

    2005-07-01

    In the oil industry, pigging operations in pipelines have been widely applied for different purposes: pipe cleaning, inspection, liquid removal and product separation, among others. An efficient and safe pigging operation requires that a number of operational parameters, such as maximum and minimum pressures in the pipeline and pig velocity, be well evaluated during the planning stage and maintained within stipulated limits while the operation is accomplished. With the objective of providing an efficient tool to assist in the control and design of pig operations through pipelines, a numerical code was developed, based on a finite difference scheme, which allows the simulation of two-fluid transient flow, such as liquid-liquid, gas-gas or liquid-gas products, in the pipeline. Modules to automatically control process variables were included to employ different strategies to reach an efficient operation. Different test cases were investigated to corroborate the robustness of the methodology. To validate the methodology, the results obtained with the code were compared with a real liquid displacement operation of a section of the OSPAR oil pipeline, belonging to PETROBRAS, with 30'' diameter and 60 km length, presenting good agreement. (author)

  2. Application of risk assessment techniques to 'major hazard' pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Cox, R A

    1982-12-01

    A risk analysis for a hazardous-material pipeline (carrying LPG, ammonia, or high-pressure gas) is presented. The analysis gives results in a form that will assist the decisionmaker in pipeline planning and route selection. The large inventory of hazardous materials in such pipelines means that risks exist even though the accident record of pipeline transportation compares favorably with that for competing modes of transport. Risk analysis techniques - commonly used in the civil aviation, nuclear, and process industries - can be equally well applied to pipelines and can produce results that not only give a measure of the risk but also indicate the principal sources of risk and possible areas for improvement. A number of pipeline risk analyses have demonstrated the viability of the technique and its usefulness as an aid to practical engineering in design, planning, and maintenance/repair phases.

  3. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL

    Directory of Open Access Journals (Sweden)

    Guan-Jie Hua

    2017-10-01

    Full Text Available A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. The distance-matrix methods, such as Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods suffer from a computational performance issue, and although several new methods using high-performance hardware and frameworks have been proposed, the issue remains. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedup over the implementations of Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively.

  4. MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL.

    Science.gov (United States)

    Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi

    2017-01-01

    A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. The distance-matrix methods, such as Unweighted Pair Group Method with Arithmetic Mean and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods suffer from a computational performance issue, and although several new methods using high-performance hardware and frameworks have been proposed, the issue remains. In this work, a novel parallel Unweighted Pair Group Method with Arithmetic Mean approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphics cards achieves approximately 3-fold to 7-fold speedup over the implementations of Unweighted Pair Group Method with Arithmetic Mean on a modern CPU and a single GPU, respectively.
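
    The serial algorithm that MGUPGMA parallelises is simple to state: repeatedly merge the closest pair of clusters and replace their distances by a size-weighted average. A minimal sketch of this O(n^3) baseline follows (not the authors' GPU code); the GPU versions accelerate exactly the distance-matrix scan and update that dominate this loop.

```python
# Naive serial UPGMA: merge the closest clusters and average distances,
# weighted by cluster size. This is the baseline MGUPGMA accelerates.
def upgma(dist, names):
    """dist: full symmetric matrix (list of lists); names: taxon labels."""
    clusters = {i: (names[i], 1) for i in range(len(names))}  # id -> (tree, size)
    d = {(min(i, j), max(i, j)): dist[i][j]
         for i in range(len(names)) for j in range(len(names)) if i != j}
    nxt = len(names)
    while len(clusters) > 1:
        a, b = min(d, key=d.get)                    # closest pair of clusters
        (ta, na), (tb, nb) = clusters.pop(a), clusters.pop(b)
        for c in clusters:                          # size-weighted average update
            new = (d.pop((min(a, c), max(a, c))) * na
                   + d.pop((min(b, c), max(b, c))) * nb) / (na + nb)
            d[(min(nxt, c), max(nxt, c))] = new
        del d[(a, b)]                               # drop the merged pair itself
        clusters[nxt] = ((ta, tb), na + nb)
        nxt += 1
    return next(iter(clusters.values()))[0]

m = [[0, 2, 6, 10],
     [2, 0, 6, 10],
     [6, 6, 0, 8],
     [10, 10, 8, 0]]
print(upgma(m, list("ABCD")))   # A and B join first, then C, then D
```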

  5. Fast DRR splat rendering using common consumer graphics hardware

    International Nuclear Information System (INIS)

    Spoerk, Jakob; Bergmann, Helmar; Wanschitz, Felix; Dong, Shuo; Birkfellner, Wolfgang

    2007-01-01

    Digitally rendered radiographs (DRR) are a vital part of various medical image processing applications such as 2D/3D registration for patient pose determination in image-guided radiotherapy procedures. This paper presents a technique to accelerate DRR creation by using conventional graphics hardware for the rendering process. DRR computation itself is done by an efficient volume rendering method named wobbled splatting. For programming the graphics hardware, NVIDIA's C for Graphics (Cg) is used. The description of an algorithm used for rendering DRRs on the graphics hardware is presented, together with a benchmark comparing this technique to a CPU-based wobbled splatting program. Results show a reduction of rendering time by about 70%-90% depending on the amount of data. For instance, rendering a volume of 2x10^6 voxels is feasible at an update rate of 38 Hz compared to 6 Hz on a common Intel-based PC using the graphics processing unit (GPU) of a conventional graphics adapter. In addition, wobbled splatting using graphics hardware for DRR computation provides higher resolution DRRs with comparable image quality due to special processing characteristics of the GPU. We conclude that DRR generation on common graphics hardware using the freely available Cg environment is a major step toward 2D/3D registration in clinical routine.
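
    Wobbled splatting itself involves a structured jittered sampling grid and footprint weighting on the GPU; the underlying scatter operation can be sketched in NumPy. The toy volume, the orthographic geometry and the uniform jitter below are illustrative assumptions, not the authors' Cg implementation.

```python
# Minimal "splat" DRR sketch: every voxel is projected along one axis
# and scatter-added into the detector bin it lands on. The random jitter
# is a crude stand-in for the structured "wobble" of the real method.
import numpy as np

rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64)).astype(np.float32)        # toy CT volume
nz, ny, nx = vol.shape

z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
rows = np.clip(np.round(y + rng.uniform(-0.5, 0.5, y.shape)).astype(int), 0, ny - 1)
cols = np.clip(np.round(x + rng.uniform(-0.5, 0.5, x.shape)).astype(int), 0, nx - 1)

# The splat: scatter-add voxel values into detector bins (orthographic
# projection along z).
detector = np.zeros((ny, nx), dtype=np.float32)
np.add.at(detector, (rows.ravel(), cols.ravel()), vol.ravel())
detector /= detector.max()
print(detector.shape, float(detector.min()), float(detector.max()))
```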

  6. Perception in statistical graphics

    Science.gov (United States)

    VanderPlas, Susan Ruth

    There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, yet there is substantially less research on the interplay between graph, eye, brain, and mind than is needed to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs that produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of the numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics designed specifically for the perceptual system.

  7. RAP: RNA-Seq Analysis Pipeline, a new cloud-based NGS web application.

    Science.gov (United States)

    D'Antonio, Mattia; D'Onorio De Meo, Paolo; Pallocca, Matteo; Picardi, Ernesto; D'Erchia, Anna Maria; Calogero, Raffaele A; Castrignanò, Tiziana; Pesole, Graziano

    2015-01-01

    The study of RNA has been dramatically improved by the introduction of Next Generation Sequencing platforms allowing massive and cheap sequencing of selected RNA fractions, also providing information on strand orientation (RNA-Seq). The complexity of transcriptomes and of their regulative pathways makes RNA-Seq one of the most complex fields of NGS application, addressing several aspects of the expression process (e.g. identification and quantification of expressed genes and transcripts, alternative splicing and polyadenylation, fusion genes and trans-splicing, post-transcriptional events, etc.). In order to provide researchers with an effective and friendly resource for analyzing RNA-Seq data, we present here RAP (RNA-Seq Analysis Pipeline), a cloud computing web application implementing a complete but modular analysis workflow. This pipeline integrates both state-of-the-art bioinformatics tools for RNA-Seq analysis and in-house developed scripts to offer the user a comprehensive strategy for data analysis. RAP is able to perform quality checks (adopting FastQC and NGS QC Toolkit), identify and quantify expressed genes and transcripts (with Tophat, Cufflinks and HTSeq), detect alternative splicing events (using SpliceTrap) and chimeric transcripts (with ChimeraScan). This pipeline is also able to identify splicing junctions and constitutive or alternative polyadenylation sites (implementing custom analysis modules) and to call statistically significant differences in gene and transcript expression, splicing pattern and polyadenylation site usage (using Cuffdiff2 and DESeq). Through a user-friendly web interface, the RAP workflow can be suitably customized by the user, and it is automatically executed on our cloud computing environment. This strategy allows access to bioinformatics tools and computational resources without specific bioinformatics and IT skills. RAP provides a set of tabular and graphical results that can be helpful to browse, filter and export
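
    RAP is a hosted web application, but the modular, resumable workflow it describes can be sketched generically. The stage commands below are placeholder `echo` calls, not RAP's actual tool invocations; in the real pipeline each stage would run FastQC, a splice-aware aligner, a quantifier and so on, each with its own parameters on a cloud back end.

```python
# Generic sketch of a modular, resumable analysis workflow of the kind
# RAP wraps in a web interface. Stage commands are placeholders.
import pathlib
import subprocess

STAGES = [
    ("quality_check",  ["echo", "FastQC-style QC would run here"]),
    ("alignment",      ["echo", "splice-aware alignment would run here"]),
    ("quantification", ["echo", "gene/transcript quantification would run here"]),
    ("differential",   ["echo", "differential expression would run here"]),
]

def run_pipeline(workdir="rap_demo"):
    work = pathlib.Path(workdir)
    work.mkdir(exist_ok=True)
    for name, cmd in STAGES:
        marker = work / f"{name}.done"
        if marker.exists():                  # resume: skip finished stages
            print(f"[skip] {name}")
            continue
        print(f"[run ] {name}")
        subprocess.run(cmd, check=True)      # abort the workflow on failure
        marker.touch()

run_pipeline()
```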

  8. 77 FR 34123 - Pipeline Safety: Public Meeting on Integrity Management of Gas Distribution Pipelines

    Science.gov (United States)

    2012-06-08

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2012-0100] Pipeline Safety: Public Meeting on Integrity Management of Gas Distribution Pipelines AGENCY: Office of Pipeline Safety, Pipeline and Hazardous Materials Safety Administration, DOT. ACTION...

  9. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GEFORCE GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.
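
    The O(n²) step that motivates the GPU port is the all-pairs neighbour scan in each flocking update: every document compares its position and similarity against every other. A toy sketch of one such update follows; the similarities, radius and constants are invented, not the paper's parameters.

```python
# Toy document-flocking update: each document "boid" scans all others
# (the O(n^2) step), moving toward similar neighbours and away from
# dissimilar ones. Illustrative constants only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
pos = rng.uniform(0, 100, (n, 2))                 # document positions on a canvas
sim = rng.uniform(0, 1, (n, n))
sim = (sim + sim.T) / 2                           # toy symmetric similarity matrix

def step(pos, radius=10.0, dt=0.5):
    new = pos.copy()
    for i in range(n):                            # the O(n^2) neighbour scan
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1)
        mask = (dist > 0) & (dist < radius)
        if not mask.any():
            continue
        sign = np.where(sim[i, mask] > 0.5, 1.0, -1.0)   # attract or repel
        force = (sign[:, None] * diff[mask] / dist[mask, None]).mean(axis=0)
        new[i] += dt * force
    return new

for _ in range(50):
    pos = step(pos)
print("canvas bounds after flocking:", pos.min(axis=0), pos.max(axis=0))
```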

  10. FamSeq: a variant calling program for family-based sequencing data using graphics processing units.

    Directory of Open Access Journals (Sweden)

    Gang Peng

    2014-10-01

    Full Text Available Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.
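
    The principle of sharpening a genotype call with pedigree information can be shown for the simplest pedigree, a trio. The population prior and genotype likelihoods below are invented for illustration; FamSeq evaluates the same Mendelian model over whole pedigrees (including inbreeding loops) with Bayesian networks, Elston-Stewart peeling or MCMC.

```python
# Minimal trio sketch of Mendelian variant calling: combine each member's
# genotype likelihoods P(reads|G) with transmission probabilities
# P(child G | parents) to get the child's posterior. Numbers are invented.
from itertools import product

G = ("RR", "RA", "AA")                           # genotypes at a biallelic site
prior = {"RR": 0.9, "RA": 0.09, "AA": 0.01}      # population prior (assumed)

def transmit(g):                                 # P(transmitted allele | genotype)
    return {"RR": {"R": 1.0, "A": 0.0},
            "RA": {"R": 0.5, "A": 0.5},
            "AA": {"R": 0.0, "A": 1.0}}[g]

def mendel(child, father, mother):               # P(child genotype | parents)
    f, m = transmit(father), transmit(mother)
    want = "".join(sorted(child))
    return sum(f[a] * m[b] for a, b in product("RA", repeat=2)
               if "".join(sorted(a + b)) == want)

# Genotype likelihoods P(reads | G) for each family member (invented):
lik = {"father": {"RR": 0.7, "RA": 0.3, "AA": 0.0},
       "mother": {"RR": 0.8, "RA": 0.2, "AA": 0.0},
       "child":  {"RR": 0.4, "RA": 0.4, "AA": 0.2}}

post = dict.fromkeys(G, 0.0)
for f, m, c in product(G, repeat=3):
    post[c] += (prior[f] * lik["father"][f] * prior[m] * lik["mother"][m]
                * mendel(c, f, m) * lik["child"][c])
z = sum(post.values())
print({c: round(p / z, 3) for c, p in post.items()})
```

    The child's raw likelihoods leave 20% probability on the homozygous variant, but the parents' data make two A transmissions very unlikely, so the posterior reallocates that mass to the Mendelian-consistent genotypes.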

  11. A new segmentation strategy for processing magnetic anomaly detection data of shallow depth ferromagnetic pipeline

    Science.gov (United States)

    Feng, Shuo; Liu, Dejun; Cheng, Xing; Fang, Huafeng; Li, Caifang

    2017-04-01

    Magnetic anomalies produced by underground ferromagnetic pipelines through the polarization of the earth's magnetic field are used to obtain information on the location, buried depth and other parameters of the pipelines. In order to achieve fast inversion and interpretation of the measured data, it is necessary to develop a fast and stable forward method. Magnetic dipole reconstruction (MDR), as a kind of integration numerical method, is well suited to simulating a thin pipeline anomaly. In MDR the pipeline model must be cut into small magnetic dipoles through different segmentation methods. The segmentation method has an impact on the stability and speed of the forward calculation. Rapid and accurate simulation of deep-buried pipelines has been achieved with the existing segmentation method. In practical measurement, however, the depth of an underground pipe is uncertain, and for shallow-buried pipelines the present segmentation may generate significant errors. This paper addresses this problem in three stages. First, the cause of the inaccuracy is analyzed by simulation experiment. Secondly, a new variable-interval section segmentation is proposed based on the existing segmentation; it allows the MDR method to obtain simulation results quickly while ensuring the accuracy of models at different depths. Finally, measured data are inverted based on the new segmentation method. The result proves that inversion based on the new segmentation can achieve fast and accurate recovery of the depth parameters of underground pipes without being limited by pipeline depth.
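
    The forward model at the heart of MDR is a sum of point-dipole fields over the segments of the pipe. The sketch below (illustrative magnetisation, geometry and segment counts, not the paper's algorithm) reproduces the effect the authors analyse: for a shallow pipe, a segmentation that is coarse relative to the burial depth gives a noticeably different field than a fine one, while for a deep pipe the two agree closely.

```python
# Minimal MDR-style forward model: the pipe is cut into segments, each
# replaced by a point dipole with field
#   B = mu0/(4*pi) * (3*(m.r)*r/|r|^5 - m/|r|^3).
# Magnetisation and geometry are illustrative assumptions.
import numpy as np

MU0 = 4e-7 * np.pi

def dipole_b(m, r):
    rn = np.linalg.norm(r)
    return MU0 / (4 * np.pi) * (3 * np.dot(m, r) * r / rn**5 - m / rn**3)

def pipeline_anomaly(obs, depth, length=100.0, n_seg=200, m_per_m=50.0):
    """Field at `obs` from a horizontal pipe along x, buried `depth` m."""
    xs = np.linspace(-length / 2, length / 2, n_seg)
    dx = xs[1] - xs[0]
    m = np.array([0.0, 0.0, m_per_m * dx])   # vertical magnetisation (assumed)
    b = np.zeros(3)
    for x in xs:
        b += dipole_b(m, obs - np.array([x, 0.0, -depth]))
    return b

for depth in (1.0, 5.0):
    coarse = pipeline_anomaly(np.zeros(3), depth, n_seg=50)
    fine = pipeline_anomaly(np.zeros(3), depth, n_seg=2000)
    err = np.linalg.norm(coarse - fine) / np.linalg.norm(fine)
    print(f"depth {depth} m: coarse-vs-fine segmentation error {err:.1%}")
```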

  12. Three-dimensional photoacoustic tomography based on graphics-processing-unit-accelerated finite element method.

    Science.gov (United States)

    Peng, Kuan; He, Ling; Zhu, Ziqiang; Tang, Jingtian; Xiao, Jiaying

    2013-12-01

    Compared with commonly used analytical reconstruction methods, the frequency-domain finite element method (FEM) based approach has proven to be an accurate and flexible algorithm for photoacoustic tomography. However, the FEM-based algorithm is computationally demanding, especially for three-dimensional cases. To enhance the algorithm's efficiency, in this work a parallel computational strategy is implemented in the framework of the FEM-based reconstruction algorithm using the graphics-processing-unit parallel framework known as the compute unified device architecture (CUDA). A series of simulation experiments is carried out to test the accuracy and the accelerating effect of the improved method. The results obtained indicate that the parallel calculation does not change the accuracy of the reconstruction algorithm, while its computational cost is significantly reduced, by a factor of 38.9, with a GTX 580 graphics card using the improved method.

  13. The PREP Pipeline: Standardized preprocessing for large-scale EEG analysis

    Directory of Open Access Journals (Sweden)

    Nima Bigdely-Shamlo

    2015-06-01

    Full Text Available The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode/.

  14. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    Science.gov (United States)

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
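
    The core of the robust-referencing idea fits in a few lines: estimate an average reference, flag channels whose residuals are outliers, and re-estimate the reference from the remaining channels until the flagged set stabilises. The noisiness measure and threshold below are simplified placeholders; PREP combines several detection criteria and interpolates bad channels rather than simply excluding them.

```python
# Sketch of iterative robust referencing: the average reference is
# recomputed from "good" channels only, so one bad channel cannot
# contaminate it. Thresholds are simplified placeholders.
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, (32, 5000))      # 32 channels of toy data
eeg[7] += rng.normal(0, 20, 5000)       # one very noisy channel

good = np.ones(len(eeg), dtype=bool)
for _ in range(5):
    ref = eeg[good].mean(axis=0)        # reference from good channels only
    spread = (eeg - ref).std(axis=1)
    med = np.median(spread)
    mad = np.median(np.abs(spread - med)) + 1e-12
    new_good = (spread - med) / mad < 5.0   # robust z-score cut (assumed)
    if (new_good == good).all():
        break
    good = new_good

print("channels flagged noisy:", np.where(~good)[0])
referenced = eeg - eeg[good].mean(axis=0)   # final robust average reference
```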

  15. Proposal and design of a natural gas liquefaction process recovering the energy obtained from the pressure reducing stations of high-pressure pipelines

    Science.gov (United States)

    Tan, Hongbo; Zhao, Qingxuan; Sun, Nannan; Li, Yanzhong

    2016-12-01

    Taking advantage of the refrigerating effect of expansion at an appropriate temperature, a fraction of the high-pressure natural gas transported by pipelines could be liquefied in a city gate station through a well-organized pressure reducing process without consuming any extra energy. The authors propose such a new process, which mainly consists of a turbo-expander-driven booster, throttle valves, multi-stream heat exchangers and separators, to yield liquefied natural gas (LNG) and liquid light hydrocarbons (LLHs) utilizing the high pressure of the pipelines. Based on an assessment of the effects of several key parameters on the system performance by a steady-state simulation in Aspen HYSYS, an optimal design condition of the proposed process was determined. The results showed that the new process is better suited to a pressure reducing station (PRS) on higher-pressure pipelines. For feed gas at a pressure of 10 MPa, a maximum total liquefaction rate (y_tot) of 15.4% and a maximum exergy utilization rate (EUR) of 21.7% could be reached at the optimal condition. The present process could be used as a small-scale natural gas liquefying and peak-shaving plant at a city gate station.

  16. Characteristics of operating pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Gallyamov, A K; Armenskii, E A; Gimaev, R G; Mastobaev, B N; Shammazov, A M

    1977-04-01

    The range of pressure variation in the Kamennyi Log--Perm oil pipeline was determined from operational data with the aid of a pattern-recognition method. This has made it possible to track pressure changes during operation. 2 references, 1 table.

  17. Logistics aspects of pipeline transport in the supply of petroleum products

    Directory of Open Access Journals (Sweden)

    Wessel Pienaar

    2008-09-01

    Full Text Available The commercial transportation of crude oil and petroleum products by pipeline is receiving increased attention in South Africa. Transnet Pipeline Transport has recently obtained permission from the National Energy Regulator of South Africa (Nersa) to construct and operate a new petroleum products pipeline of 60 cm diameter from Durban to Gauteng. At an operating speed of 10 km/h the proposed 60 cm Transnet pipeline would be able to deliver 3.54 million litres of petroleum product per hour. This is equivalent to 89 deliveries per hour using road tank vehicles with an average carrying capacity of 40 000 litres of fuel per vehicle. This pipeline throughput is also equivalent to two trains departing per hour, each consisting of 42 petroleum tank wagons with an average carrying capacity of 42 500 litres of fuel per wagon. Considering that such road trucks and rail wagons return empty to the upstream refineries in Durban, it is clear that there is no tenable long-term alternative to pipeline transport: pipeline transport is substantially cheaper than road and rail transport; pipeline transport is much safer than rail and especially road transport; and pipeline transport frees up alternative road and rail transport capacity. Pipeline transport is a non-containerised bulk mode of transport for the carriage of suitable liquids (for example, petroleum commodities, which include crude oil, refined fuel products and liquid petro-chemicals), gas, slurrified coal and certain water-suspended ores and minerals. In South Africa, petroleum products account for the majority of commercial pipeline traffic, followed by crude oil and natural gas. There are three basic types of petroleum pipeline transport systems: gathering pipeline systems, crude oil trunk pipeline systems and refined products pipeline systems. Collectively, these systems provide a continuous link between extraction, processing, distribution, and wholesalers' depots in areas of consumption. The following
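
    The modal-equivalence arithmetic quoted above is easy to reproduce; the figures below are taken directly from the abstract.

```python
# Convert the pipeline's hourly throughput into equivalent road-tanker
# and rail deliveries, using the abstract's own figures.
pipeline_l_per_h = 3_540_000      # 60 cm line at 10 km/h, litres per hour
truck_l = 40_000                  # average road tanker payload, litres
wagon_l = 42_500                  # average rail tank wagon payload, litres
wagons_per_train = 42

print(f"{pipeline_l_per_h / truck_l:.0f} truck deliveries/h")            # ~89
print(f"{pipeline_l_per_h / (wagon_l * wagons_per_train):.1f} trains/h") # ~2
```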

  18. Virtual Instrumentation Corrosion Controller for Natural Gas Pipelines

    Science.gov (United States)

    Gopalakrishnan, J.; Agnihotri, G.; Deshpande, D. M.

    2012-12-01

    Corrosion is an electrochemical process. Corrosion in natural gas (methane) pipelines leads to leakages. Corrosion occurs when an anode and a cathode are connected through an electrolyte. The rate of corrosion in a metallic pipeline can be controlled by impressing a current on it, thereby making it act as the cathode of the corrosion cell. A technologically advanced and energy-efficient corrosion controller is required to protect natural gas pipelines. The proposed virtual instrumentation (VI) based corrosion controller precisely controls the external corrosion of underground metallic pipelines, enhances their life and ensures safety. The design and development of a proportional-integral-derivative (PID) corrosion controller using VI (LabVIEW) is carried out. When deployed in the field, the controller maintains the pipe-to-soil potential (PSP) within the safe operating limit without entering the over- or under-protection zone. The technique can be deployed horizontally to protect other metallic structures, such as oil pipelines, that need corrosion protection.
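
    The control loop described can be sketched against a toy plant in a few lines. The plant response, the gains and the -0.85 V style setpoint (a commonly cited cathodic-protection criterion) are illustrative assumptions; the paper's controller is implemented in LabVIEW against a real rectifier and reference electrode.

```python
# PID sketch holding pipe-to-soil potential (PSP) at a protection
# setpoint by adjusting impressed current. Toy plant and gains only.
setpoint = -0.85          # target PSP, volts (assumed criterion)
kp, ki, kd = 10.0, 2.0, 0.0
dt = 1.0

def plant(current):
    """Toy pipe: PSP shifts 50 mV more negative per ampere impressed."""
    return -0.60 - 0.05 * current

psp, current = plant(0.0), 0.0
integral = prev_err = 0.0
for _ in range(120):
    err = psp - setpoint                  # positive while under-protected
    integral += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    current = kp * err + ki * integral + kd * deriv
    psp = plant(current)

print(f"settled PSP: {psp:.3f} V with {current:.2f} A impressed")
```

    The integral term is what removes the steady-state offset a purely proportional controller would leave, holding the PSP on the setpoint rather than merely near it.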

  19. 49 CFR 192.937 - What is a continual process of evaluation and assessment to maintain a pipeline's integrity?

    Science.gov (United States)

    2010-10-01

    ... Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Gas Transmission Pipeline Integrity Management § 192.937 What is a...

  20. 75 FR 5244 - Pipeline Safety: Integrity Management Program for Gas Distribution Pipelines; Correction

    Science.gov (United States)

    2010-02-02

    ... Management Program for Gas Distribution Pipelines; Correction AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Regulations to require operators of gas distribution pipelines to develop and implement integrity management...

  1. Pipeline protection with multi component liquid polyurethane coating systems

    Energy Technology Data Exchange (ETDEWEB)

    Kuprion, Rainer; Hornig, Maja [TIB Chemicals Ag, Mannheim (Germany)

    2009-07-01

    Protective coating systems are one of the major defence mechanisms against corrosion for transmission pipelines and for pipes within refineries or petrochemical processing facilities. More and more pipelines are being constructed each year for the supply and transmission of gas and oil, but in addition many existing pipelines are approaching an age at which inspection reveals the necessity to consider complete refurbishment. However, the number of rehabilitation projects each year is still relatively small. Therefore, in the coming years, a rising need can be expected, with owners and operating companies faced with the option of either replacing the pipeline or refurbishing the existing one. If the pipeline is known to have external corrosion, then safe and economic operation should be assured; rehabilitation should be done before it is too late in order to ensure its future integrity and operational life. Rehabilitation of pipelines has been both the economic solution and, more significantly, the ecological solution, and in many of those cases the coatings selected for external protection have been multi-component liquids based on 100% solids polyurethanes. (author)

  2. Assessing fugitive emissions of CH4 from high-pressure gas pipelines

    Science.gov (United States)

    Worrall, Fred; Boothroyd, Ian; Davies, Richard

    2017-04-01

    The impact of unconventional natural gas production using hydraulic fracturing methods in shale gas basins has been assessed using life-cycle emissions inventories, covering areas such as pre-production, production and transmission processes. The transmission of natural gas from well pad to processing plants and its transport to domestic sites is an important source of fugitive CH4, yet emissions factors and fluxes from transmission processes are often based upon very out-of-date measurements. It is important to measure natural gas losses accurately when gas is compressed and transported between production and processing facilities so as to accurately determine life-cycle CH4 emissions. This study considers CH4 emissions from the UK National Transmission System (NTS) of high-pressure natural gas pipelines. Mobile surveys of CH4 emissions using a Picarro Surveyor cavity ring-down spectrometer were conducted across four areas in the UK, with routes bisecting high-pressure pipelines and separate control routes away from the pipelines. A manual survey of soil gas measurements was also conducted along one of the high-pressure pipelines using a tunable diode laser. In total, 92 km of high-pressure pipeline and 72 km of control route were driven over a 10-day period. When wind- and distance-adjusted, CH4 fluxes were significantly greater on routes with a pipeline than on those without. The smallest detectable leak was 3% above ambient (1.03 relative concentration); any leaks below 3% above ambient were assumed to be ambient. The number of leaks detected along the pipelines correlates with the estimated length of pipe joints, suggesting constant fugitive CH4 emissions from these joints. Scaling up to the UK National Transmission System pipeline length of 7600 km gives a fugitive CH4 flux of 4700 ± 2864 kt CH4/yr; this fugitive emission from high-pressure pipelines is 0.016% of the annual gas supply.

  3. Pipeline network and environment

    International Nuclear Information System (INIS)

    Oliveira Nascimento, I.; Wagner, J.; Silveira, T.

    2012-01-01

    Rio de Janeiro is one of the 27 federative units of Brazil. It is located in the eastern portion of the Southeast region and occupies an area of 43,696.054 km², making it effectively the 3rd smallest state in Brazil. In recent years this state has suffered from erosion problems caused by the deployment of the pipeline network. Pipeline deployment, part of the activities related to the oil industry, has caused an increasingly intense conflict between the environment and economic activities, modifying the soil structure and the distribution of surface and subsurface flows. This study aimed to analyze the erosion caused by the removal of soil for the deployment of transport pipelines, with the consequent emergence of numerous gullies, landslides and silting of rivers. For the development of this study, bibliographic research, field work and digital mapping were performed to prepare an initial diagnosis of the active processes and their consequent environmental impacts. For these reasons, we conclude that the problems could have been avoided or mitigated if there had been prior geological risk management. (author)

  4. A real-time GNSS-R system based on software-defined radio and graphics processing units

    Science.gov (United States)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. To date, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enabled signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massive parallel computing performance, allow the whole signal processing chain to be handled without interfering with the PC's CPU. Thus, this paper describes a GNSS-R system which has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.

  5. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Rath, N., E-mail: Nikolaus@rath.org; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q. [Department of Applied Physics and Applied Mathematics, Columbia University, 500 W 120th St, New York, New York 10027 (United States); Kato, S. [Department of Information Engineering, Nagoya University, Nagoya (Japan)

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  6. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    International Nuclear Information System (INIS)

    Rath, N.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.; Kato, S.

    2014-01-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  7. The graphics future in scientific applications

    International Nuclear Information System (INIS)

    Enderle, G.

    1982-01-01

    Computer graphics methods and tools are being used to a great extent in scientific research. Future development in this area will be influenced both by new hardware developments and by software advances. On the hardware side, the development of raster technology will lead to the increased use of colour workstations with more local processing power. Colour hardcopy devices for creating plots, slides, or movies will be available at a lower price than today. The first real 3D workstations are appearing on the marketplace. One of the main activities in the software sector is the standardization of computer graphics systems, graphical files, and device interfaces. This will lead to more portable graphical application programs and to a common base for computer graphics education. (orig.)

  8. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open-source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and is investigated here. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configuration of the kernels. It was found that the kernel optimization approach can significantly improve performance. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.
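
    The basic (non-adaptive) operation being mapped onto the CUDA BLAS and FFT libraries is matched filtering: correlate the received samples with the transmitted chirp via FFTs. A pure NumPy sketch with illustrative waveform parameters:

```python
# Matched-filter pulse compression of a linear-FM (chirp) pulse via
# FFTs, the operation a GPU implementation would hand to cuFFT.
import numpy as np

fs, T, B = 10e6, 20e-6, 2e6                 # sample rate, pulse width, bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)          # transmitted LFM pulse

rng = np.random.default_rng(0)
echo = 0.05 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
echo[1000:1000 + len(t)] += 0.5 * chirp              # weak target at sample 1000

H = np.conj(np.fft.fft(chirp, len(echo)))            # matched filter spectrum
compressed = np.fft.ifft(np.fft.fft(echo) * H)
print("target found at sample", int(np.abs(compressed).argmax()))   # ~1000
```

    Compression raises the target roughly by the time-bandwidth product above the noise, which is why the weak echo buried in noise emerges as a sharp peak at its true delay.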

  9. Recent developments in MrBUMP: better search-model preparation, graphical interaction with search models, and solution improvement and assessment.

    Science.gov (United States)

    Keegan, Ronan M; McNicholas, Stuart J; Thomas, Jens M H; Simpkin, Adam J; Simkovic, Felix; Uski, Ville; Ballard, Charles C; Winn, Martyn D; Wilson, Keith S; Rigden, Daniel J

    2018-03-01

    Increasing sophistication in molecular-replacement (MR) software and the rapid expansion of the PDB in recent years have allowed the technique to become the dominant method for determining the phases of a target structure in macromolecular X-ray crystallography. In addition, improvements in bioinformatic techniques for finding suitable homologous structures for use as MR search models, combined with developments in refinement and model-building techniques, have pushed the applicability of MR to lower sequence identities and made weak MR solutions more amenable to refinement and improvement. MrBUMP is a CCP4 pipeline which automates all stages of the MR procedure. Its scope covers everything from the sourcing and preparation of suitable search models right through to rebuilding of the positioned search model. Recent improvements to the pipeline include the adoption of more sensitive bioinformatic tools for sourcing search models, enhanced model-preparation techniques including better ensembling of homologues, and the use of phase improvement and model building on the resulting solution. The pipeline has also been deployed as an online service through CCP4 online, which allows its users to exploit large bioinformatic databases and coarse-grained parallelism to speed up the determination of a possible solution. Finally, the molecular-graphics application CCP4mg has been combined with MrBUMP to provide an interactive visual aid to the user during the process of selecting and manipulating search models for use in MR. Here, these developments in MrBUMP are described with a case study to explore how some of the enhancements to the pipeline and to CCP4mg can help to solve a difficult case.

  10. Compensated Mass Balance Method For Oil Pipeline Leakage Detection using SCADA

    OpenAIRE

    Mohamed Zaid A. Karim; Amr A. Aziz Gaafar Alrasheedy

    2015-01-01

    After oil is extracted from reservoirs below the ground surface and processed, the products are transported through a network of oil pipelines to oil terminals. Thus, oil pipelines play a major role in the economic structure. However, oil pipelines can be damaged for many reasons, such as (i) pipeline corrosion or wear, (ii) operation outside the design limits, (iii) unintentional third-party damage and (iv) intentional damage. As a result of this damage, oil would leak fro...
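
    The compensated mass balance principle named in the title can be sketched with synthetic SCADA signals: integrate inflow minus outflow, subtract the change in line pack (inventory, computed from pressure and temperature in a real system), and alarm when the residual exceeds a noise-tuned threshold. All signals and the threshold below are synthetic assumptions, not the paper's data.

```python
# Compensated mass balance sketch: alarm when integrated inflow-outflow,
# corrected for line-pack change, exceeds a threshold. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 600                                  # ten minutes of 1 Hz SCADA samples
q_in = 100.0 + rng.normal(0, 0.5, n)     # inlet flow, m^3/h
q_out = 100.0 + rng.normal(0, 0.5, n)    # outlet flow, m^3/h
q_out[300:] -= 5.0                       # a 5 m^3/h leak starts at sample 300

# Line pack would come from pressure/temperature profiles in a real
# system; held constant here for simplicity.
linepack = np.full(n, 500.0)             # m^3

dt_h = 1.0 / 3600.0                      # sample interval in hours
imbalance = np.cumsum((q_in - q_out) * dt_h) - (linepack - linepack[0])
threshold = 0.2                          # m^3, tuned to metering noise (assumed)
hits = imbalance > threshold
print("leak alarm at sample", int(np.argmax(hits)) if hits.any() else None)
```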

  11. An Overview of New Progresses in Understanding Pipeline Corrosion

    Energy Technology Data Exchange (ETDEWEB)

    Tan, M. YJ; Varela, F.; Huo, Y.; Gupta, R.; Abreu, D.; Mahdavi, F.; Hinton, B.; Forsyth, M. [Deakin University, Victoria (Australia)

    2016-12-15

    An approach to achieving the ambitious goal of cost-effectively extending the safe operating life of energy pipelines to 100 years is the application of health monitoring and life prediction tools that are able to provide both long-term remnant pipeline life prediction and in-situ pipeline condition monitoring. A critical step is the enhancement of the technological capabilities required for understanding and quantifying the effects of key factors influencing buried steel pipeline corrosion and environmentally assisted materials degradation, and the development of condition monitoring technologies able to provide in-situ monitoring and site-specific warning of pipeline damage. This paper provides an overview of our current research aimed at developing new sensors and electrochemical cells for monitoring, categorising and quantifying the level and nature of external pipeline and coating damage under the combined effects of various inter-related variables and processes such as localised corrosion, coating cracking and disbondment, cathodic shielding, and transient loss of cathodic protection.

  12. The leak detection and location system design of petroleum pipeline

    International Nuclear Information System (INIS)

    Liu Lixia

    2011-01-01

    In order to improve the sensitivity and location precision of petroleum pipeline leak detection based on the traditional negative pressure wave method, a multi-point distributed detection and location monitoring system composed of detection nodes along the pipeline, monitoring sub-stations and a pressure monitoring center was designed using a C/S structure. Each detection node acquires the pressure signal in the pipeline and sends it to the monitoring center through the GPRS network, which achieves online monitoring of the whole pipeline in real time. The received data are analyzed and processed with a multi-point distributed negative pressure wave detection and correlation analysis method. The system can rapidly detect a leak in the pipeline and locate it accurately in time to avoid enormous economic losses and environmental pollution accidents. (author)
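
    The location step rests on a simple relation: a leak launches a negative pressure wave in both directions at speed a, so the arrival-time difference Δt between the upstream and downstream sensors gives the leak position x = (L + a·Δt)/2 from the upstream end. A minimal sketch with illustrative numbers:

```python
# Negative-pressure-wave (NPW) leak location from the arrival-time
# difference at two end sensors. Values are illustrative.
L = 50_000.0      # sensor spacing, m
a = 1_000.0       # NPW propagation speed in the product, m/s (assumed)

def locate(t_upstream, t_downstream):
    dt = t_upstream - t_downstream
    return (L + a * dt) / 2.0          # distance from the upstream sensor

# A leak 18 km from the upstream sensor arrives there after 18 s and at
# the downstream sensor after 32 s:
print(f"leak located {locate(18.0, 32.0) / 1000:.1f} km from upstream sensor")
```

    In practice the arrival times come from cross-correlating the two pressure records, which is why the abstract pairs the multi-point detection with a correlation analysis method.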

  13. Visual operations management tools in oil pipelines and terminals standardization processes

    Energy Technology Data Exchange (ETDEWEB)

    De Ludovico Almeida, Maria Fatima [Pontifical Catholic University of Rio de Janeiro (Brazil); Santiago, Adilson; Senra Ribeiro, Kassandra; Mendonca Arruda, Daniela [Petrobras Transporte (Brazil)

    2010-07-01

    Visual operations management (VOM) takes advantage of visual cues to communicate information, simplify processes and improve the quality and safety of operations. Because of heightened competition, the importance of standardization and quality management processes has become more evident for pipeline companies. Petrobras Transporte's marine terminal units have been working over the last years to be recognized as a reference in the activities they pursue. This is based on Petrobras Transporte's strategic plan 2020, which foresees, among other things, the specialization of the technical workforce, operational safety excellence, capital discipline, customer satisfaction, the search for new technologies and markets and the rendering of new services. To achieve these goals, the Marine Terminals standardization program must be adhered to. Focusing on the communication and adoption of standards and procedures, this paper describes how visual guides were conceived and implemented within Petrobras Transporte to enable operators and technicians to meet operational, environmental and occupational health and safety requirements.

  14. Porcupine: A visual pipeline tool for neuroimaging analysis.

    Directory of Open Access Journals (Sweden)

    Tim van Mourik

    2018-05-01

    Full Text Available The field of neuroimaging is rapidly adopting a more reproducible approach to data acquisition and analysis. Data structures and formats are being standardised and data analyses are getting more automated. However, as data analysis becomes more complicated, researchers often have to write longer analysis scripts, spanning different tools across multiple programming languages. This makes it more difficult to share or recreate code, reducing the reproducibility of the analysis. We present a tool, Porcupine, that constructs one's analysis visually and automatically produces analysis code. The graphical representation improves understanding of the performed analysis, while retaining the flexibility of modifying the produced code manually to custom needs. Not only does Porcupine produce the analysis code, it also creates a shareable environment for running the code in the form of a Docker image. Together, this forms a reproducible way of constructing, visualising and sharing one's analysis. Currently, Porcupine links to Nipype functionalities, which in turn accesses most standard neuroimaging analysis tools. Our goal is to release researchers from the constraints of specific implementation details, thereby freeing them to think about novel and creative ways to solve a given problem. Porcupine improves the overview researchers have of their processing pipelines, and facilitates both the development and communication of their work. This will reduce the threshold at which less expert users can generate reusable pipelines. With Porcupine, we bridge the gap between a conceptual and an implementational level of analysis and make it easier for researchers to create reproducible and shareable science. We provide a wide range of examples and documentation, as well as installer files for all platforms on our website: https://timvanmourik.github.io/Porcupine. Porcupine is free, open source, and released under the GNU General Public License v3.0.

  15. The Kepler Science Operations Center Pipeline Framework Extensions

    Science.gov (United States)

    Klaus, Todd C.; Cote, Miles T.; McCauliff, Sean; Girouard, Forrest R.; Wohler, Bill; Allen, Christopher; Chandrasekaran, Hema; Bryson, Stephen T.; Middour, Christopher; Caldwell, Douglas A.; hide

    2010-01-01

    The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including managing targets, generating on-board data compression tables, monitoring photometer health and status, processing the science data, and exporting the pipeline products to the mission archive. We describe how the generic pipeline framework software developed for Kepler is extended to achieve these goals, including pipeline configurations for processing science data and other support roles, and custom unit of work generators that control how the Kepler data are partitioned and distributed across the computing cluster. We describe the interface between the Java software that manages the retrieval and storage of the data for a given unit of work and the MATLAB algorithms that process these data. The data for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing these files to be used to debug and evolve the algorithms offline.
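
    The "unit of work" idea is straightforward to sketch: a generator partitions the catalog into self-contained tasks, each serialized into a single file carrying everything a worker needs. The partitioning key and file format below are assumptions for illustration, not the Kepler SOC's actual Java classes.

```python
# Sketch of a unit-of-work generator: group targets into independent
# tasks and write one self-describing input file per task.
import json
import pathlib

targets = [{"id": i, "channel": i % 4} for i in range(20)]   # toy target catalog

def units_of_work(targets, key="channel"):
    groups = {}
    for t in targets:
        groups.setdefault(t[key], []).append(t)
    return [{"unit": k, "targets": v} for k, v in sorted(groups.items())]

outdir = pathlib.Path("units")
outdir.mkdir(exist_ok=True)
for unit in units_of_work(targets):
    # One file per unit, mirroring the idea that a single file carries
    # everything the (MATLAB) worker algorithm needs.
    (outdir / f"unit-{unit['unit']}.json").write_text(json.dumps(unit))
print(len(list(outdir.glob("unit-*.json"))), "units written")
```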

  16. Method of drying long-distance pipelines in sections

    Energy Technology Data Exchange (ETDEWEB)

    Steinhaus, H.; Meiners, D.

    1989-04-11

    This invention provides a method of drying long-distance pipelines using a vacuum, achieving high-quality drying over the whole length of the pipeline in a manageable and easily monitored process. Evacuation of the pipeline is effected by means of a vacuum pump located at least at one point of the pipeline section. The section is subsequently scavenged or flooded with scavenging gas. After a predetermined reduced pressure is reached, and while the vacuum pump continues to draw off, scavenging is effected from the end or ends remote from the evacuation point, with a molar flow rate of the scavenging gas stream that is equal to or less than that of the evacuation stream, at least initially. Because the scavenging is effected not from the evacuation point but from a remote point, and with a feed speed or feed amount that is throttled at least initially, no condensation occurs even on the inner walls of the pipeline.

  17. INTLIB-6, Graphic Device Interface Library for ENDF/B Processing Codes

    International Nuclear Information System (INIS)

    Dunford, L.

    1999-01-01

    1 - Description of program or function: The graphic subroutine libraries DISSPLA and GRALIB (USCD1211) generally produce output which is independent of the output graphic device. A set of device-dependent interface routines is required to translate the device-independent output to the form required for each graphic device available. The interface library INTLIB provides interface routines for the following output formats: TEKTRONIX (LN03 PLUS, video display terminal); POSTSCRIPT (LN03 PLUS with PostScript, LaserJet III in PostScript mode, video display terminal); REGIS (VT240 and VT1200); HPGL (LaserJet III in HPGL mode); FR80 (COMP80 film, fiche and hard copy).

  18. Proof of pipeline strength based on measurements of inspection pigs; Festigkeitsnachweis von Pipelines aufgrund der Messergebnisse von Pruefmolchen

    Energy Technology Data Exchange (ETDEWEB)

    De la Camp, H.J.; Feser, G.; Hofmann, A.; Wolf, B.; Schmidt, H. [TUeV Sueddeutschland Bau und Betrieb GmbH, Muenchen (Germany); Herforth, H.E.; Juengling, K.H.; Schmidt, W. [TUeV Anlagentechnik GmbH, Berlin-Schoeneberg (Germany). Unternehmensgruppe TUeV Rheinland/Berlin-Brandenburg

    2002-01-01

    The report is aimed at collecting and documenting the state of the art and the extensive know-how of experts and pipeline operators with regard to judging the structural integrity of pipelines. In order to assess the actual mechanical strength of pipelines based on measurement results obtained by inspection pigs, guidance is given for future procedures, which could eventually serve as the basis for an industry standard. A literature study of the commercially available types of inspection pigs describes and synoptically lists their respective pros and cons. In essence, besides checklists of operating data for the pipeline and the pig runs, the report mainly comprises the evaluation of defects and the corresponding calculation procedures. Recommendations regarding maintenance planning, verification of defects and repetition of pig runs are included. (orig.)

  19. An X window based graphics user interface for radiation information processing system developed with object-oriented programming technology

    International Nuclear Information System (INIS)

    Gao Wenhuan; Fu Changqing; Kang Kejun

    1993-01-01

    X Window is a network-oriented and network-transparent windowing system, now dominant in the Unix domain. Object-oriented programming technology can remarkably improve the extensibility of a software system. An introduction to graphics user interfaces is given, and how to develop a graphics user interface for a radiation information processing system with object-oriented programming technology, based on X Window and independent of the application, is described briefly.

  20. US DOE Pipeline Unplugging Requirements Development

    International Nuclear Information System (INIS)

    Rivera, J.; McDaniel, D.

    2009-01-01

    Department of Energy (DOE) sites around the country have an ongoing effort to transport and process several tons of radioactive waste in the form of slurry (liquids and solids) from storage tanks to processing facilities. The system of pipes used for the transportation of this waste needs technology for maintenance and for the prevention (and correction) of pipeline plugging. The unplugging technologies that have been tested and evaluated at Florida International University include ones from NuVision Engineering, AIMM and AquaMiser. NuVision's technology acts on a blockage the way ocean waves act on beach erosion; it can operate on a long pipeline that has drained down below a blockage. AIMM Technology's Hydrokinetic(TM) process uses sonic resonance with a cleaning water stream; this sonic resonance travels through the water stream and transfers vibration to both the pipe and the blockage. The AquaMiser line of water-blasting equipment uses 15,000- to 40,000-psi water injection technology to unplug pipelines; some sites cannot allow this level of pressure in their pipes. After reviewing the results of every test, including the benefits, advantages and disadvantages of each technology, requirements were developed for pressure, personnel training, environmental concerns, safety, compatibility with current systems, operability, reliability, maintainability and cost. (authors)

  1. Changes in DP systems to match order processing in pipeline engineering and manufacturing

    International Nuclear Information System (INIS)

    Pletschen, W.; Weber, J.

    1987-01-01

    Pipelines hold a pivotal position as the linking element between the mechanical and the electrical engineering components; hence, their production and machining are highly important. Information systems like GRAPLAN, MISTER, PVK, DOPLAS, and PFPD have been used successfully in recent years and are constantly being upgraded to meet the requirements of advanced nuclear pipeline systems, which call for DP systems featuring variable dimensioning and suitable interlinkage capabilities. (DG)

  2. 78 FR 41991 - Pipeline Safety: Potential for Damage to Pipeline Facilities Caused by Flooding

    Science.gov (United States)

    2013-07-12

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION: Notice; Issuance of Advisory... Gas and Hazardous Liquid Pipeline Systems. Subject: Potential for Damage to Pipeline Facilities Caused...

  3. 78 FR 41496 - Pipeline Safety: Meetings of the Gas and Liquid Pipeline Advisory Committees

    Science.gov (United States)

    2013-07-10

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0156] Pipeline Safety: Meetings of the Gas and Liquid Pipeline Advisory Committees AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Notice of advisory committee...

  4. Overview of slurry pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Gandhi, R L

    1982-01-01

    Slurry pipelines have proven to be a technically feasible, environmentally attractive and economic method of transporting finely divided particles over long distances. A pipeline system normally consists of preparation, pipeline and utilization facilities and requires optimization of all three components taken together. A considerable amount of research work has been done to develop hydraulic design of a slurry pipeline. Equipment selection and estimation of corrosion-erosion are considered to be as important as the hydraulic design. Future applications are expected to be for the large-scale transport of coal and for the exploitation of remotely located mineral deposits such as iron ore and copper. Application of slurry pipelines for the exploitation of remotely located mineral deposits is illustrated by the Kudremukh iron concentrate slurry pipeline in India.

  5. Color correction pipeline optimization for digital cameras

    Science.gov (United States)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module, in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to higher color-rendition accuracy.
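
    A minimal sketch of the two-module pipeline: estimate white-balance gains from an illuminant estimate (plain grey-world here) and then apply a 3x3 color matrix. The matrix and the synthetic color cast are placeholders; in the paper both stages are tuned jointly and selected per image content.

```python
# Two-stage color correction sketch: grey-world white balance followed
# by a (placeholder, white-point-preserving) 3x3 color matrix.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.uniform(0, 1, (120, 160, 3)) * np.array([1.0, 0.8, 0.6])  # cast (toy)

means = raw.reshape(-1, 3).mean(axis=0)
gains = means.mean() / means                 # grey-world illuminant correction
wb = raw * gains

ccm = np.array([[ 1.6, -0.4, -0.2],          # placeholder sensor->sRGB matrix;
                [-0.3,  1.5, -0.2],          # rows sum to 1 so white stays white
                [-0.1, -0.5,  1.6]])
out = np.clip(wb.reshape(-1, 3) @ ccm.T, 0, 1).reshape(raw.shape)
print("channel means after correction:", out.reshape(-1, 3).mean(axis=0))
```

    An illuminant-estimation error at the first stage gets multiplied through the matrix at the second, which is why the paper optimizes the matrix with the first module's error behavior in mind.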

  6. Alberta benefits : economic impacts of northern gas pipeline construction

    International Nuclear Information System (INIS)

    Rylska, N.L.; Graebeiel, J.E.; Mirus, R.K.; Janzen, S.S.; Frost, R.J.

    2003-11-01

    This paper describes the potential economic impact and benefits to Alberta from the proposed development of the Alaska Highway Pipeline (AHP) and the Mackenzie Valley Pipeline (MVP). It also includes a planning framework for business and industry in the province. Each proposed pipeline was evaluated separately. The paper includes a list of Alberta companies that stand to benefit from the construction of one or both pipelines. The main findings indicate that northern pipeline development will bring opportunities to Alberta business in design, construction and management. There will be a secondary impact on petrochemical industries and infrastructure. Both pipeline developments will increase employment and yield billions of dollars in gross domestic product. The existing oil and gas industry in Alberta will receive value-added opportunities in areas of specialized expertise such as natural gas and natural gas liquid storage, natural gas liquid processing, and gas to liquid technology projects. The industry will also benefit from power generation and cogeneration. The northern pipelines have the potential to improve the role of First Nations in economic development. Gas consumers in Alberta should benefit from a secure supply of gas and lower prices. refs., tabs., figs

  7. Pipeline Drag Reducers

    International Nuclear Information System (INIS)

    Marawan, H.

    2004-01-01

    Pipeline drag reducers have proven to be an extremely powerful tool in fluid transportation. High molecular weight polymers are used to reduce the frictional pressure loss in crude oil, refined fuel and aqueous pipelines. The chemical structure of the main pipeline drag reducers in use is one of the following polymers and copolymers, classified according to the type of fluid: low density polyethylene, copolymer of 1-hexene cross-linked with divinyl benzene, polyacrylamide, polyalkylene oxide polymers and their copolymers, fluorocarbons, polyalkyl methacrylates, and terpolymers of styrene, alkyl acrylate and acrylic acid. Drag reduction is the increase in pumpability of a fluid caused by the addition of small amounts of an additive to the fluid. The effectiveness of a drag reducer is normally expressed in terms of percent drag reduction. Frictional pressure loss in a pipeline system is a waste of energy and is costly. A drag reducing additive suppresses flow turbulence, increases throughput and reduces energy costs; flow can be increased by more than 80% with existing assets. The drag reducer injected in the Mostorod to Tanta crude oil pipeline achieved a 35.4% drag reduction and a 23.2% flow increase over actual performance, and the experimental application of DRA on the Arab Petroleum Pipeline Company (SUMED) line achieved flow increases ranging from 9-32%
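
    The percent-drag-reduction metric mentioned above compares frictional pressure drops at the same flow rate with and without the additive. A trivial sketch with illustrative numbers of my own, not the paper's measurements:

    ```python
    def percent_drag_reduction(dp_untreated, dp_treated):
        """Percent drag reduction from frictional pressure drops measured
        at the same flow rate, without and with the additive."""
        return 100.0 * (dp_untreated - dp_treated) / dp_untreated

    # Illustrative values only: a 31 bar frictional drop reduced to 20 bar
    print(percent_drag_reduction(31.0, 20.0))   # -> ~35.5 (percent)
    ```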

  8. A study on a real-time leak detection method for pressurized liquid refrigerant pipeline based on pressure and flow rate

    International Nuclear Information System (INIS)

    Tian, Shen; Du, Juanli; Shao, Shuangquan; Xu, Hongbo; Tian, Changqing

    2016-01-01

    Highlights: • A real-time leak detection method is developed for ammonia pipelines in cold storage. • A locating algorithm based on the pressure difference profile is provided. • The method is validated by R22 and ammonia leak experiments. • The minimum detectable leak ratio is 1% for R22 and 4% for ammonia. • The location estimating errors are −27% ~ 17% for R22 and −27% ~ 27% for ammonia. - Abstract: Leakage from pressurized liquid ammonia pipelines has been a serious problem in large commercial cold storages, because such pipelines can release large amounts of liquid ammonia and often operate without safety supervision. The present paper presents a detection method for a pressurized liquid ammonia pipeline with a leak. The variations of pressure, flow rate and pressure difference profile are studied. A leak indicator (σ), derived from a one-dimensional steady-state flow model, is used to detect leak occurrence by comparing it with a threshold value (σ_Le). A locating algorithm based on the pressure difference profile along the pipeline is also proposed, which accounts for the static pressure increase at the leak point. Experiments on different leak positions and ratios in liquid R22 and ammonia pipelines were carried out to validate the method. It is found that, with a relatively low false alarm rate (about three percent), the minimum detectable leak ratio reached 1% for the R22 pipeline and 4% for the ammonia pipeline. The locating errors are between −27% ~ 17% for the R22 pipeline and −27% ~ 27% for the ammonia pipeline.
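
    The paper derives its leak indicator from a one-dimensional steady-state flow model; the general idea, flagging a persistent imbalance between inlet and outlet measurements, can be sketched with a simplified mass-balance indicator (my own construction, not the authors' exact σ):

    ```python
    import numpy as np

    def leak_indicator(q_in, q_out, window=50):
        """Moving-average mass-balance residual between inlet and outlet
        flow rates; a sustained positive value suggests a leak."""
        residual = np.asarray(q_in) - np.asarray(q_out)
        kernel = np.ones(window) / window
        return np.convolve(residual, kernel, mode="valid")

    def detect_leak(q_in, q_out, sigma_le):
        """Flag samples where the smoothed residual exceeds the threshold."""
        return leak_indicator(q_in, q_out) > sigma_le
    ```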

  9. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Science.gov (United States)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  10. Use of geographic information systems for applications on gas pipeline rights-of-way

    Energy Technology Data Exchange (ETDEWEB)

    Sydelko, P.J.; Wilkey, P.L.

    1992-12-01

    Geographic information system (GIS) applications for the siting and monitoring of gas pipeline rights-of-way (ROWs) were developed for areas near Rio Vista, California. The data layers developed for this project represent geographic features such as landcover, elevation, aspect, slope, soils, hydrography, transportation, endangered species, wetlands, and public land surveys. A GIS was used to develop and store spatial data from several sources; to manipulate spatial data to evaluate environmental and engineering issues associated with the siting, permitting, construction, maintenance, and monitoring of gas pipeline ROWs; and to graphically display analysis results. Examples of these applications include (1) determination of environmentally sensitive areas, such as endangered species habitat, wetlands, and areas of highly erosive soils; (2) evaluation of engineering constraints, including shallow depth to bedrock, major hydrographic features, and shallow water table; (3) classification of satellite imagery for landuse/landcover that will affect ROWs; and (4) identification of alternative ROW corridors that avoid environmentally sensitive areas or areas with severe engineering constraints.
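
    The corridor-screening queries described in (1) and (4) map naturally onto standard GIS operations: buffer the candidate centerline, then intersect it with sensitive-feature layers. A minimal sketch using the geopandas library, with hypothetical file names and an arbitrary 200 m corridor width:

    ```python
    import geopandas as gpd

    # Hypothetical layers: candidate centerline and a wetlands polygon layer.
    route = gpd.read_file("candidate_route.shp").to_crs(epsg=26910)
    wetlands = gpd.read_file("wetlands.shp").to_crs(epsg=26910)

    # Buffer the centerline into a 200 m construction corridor
    # (EPSG:26910 is UTM zone 10N, a metric CRS covering Rio Vista, CA).
    corridor = route.geometry.buffer(200).unary_union

    # Flag wetlands the corridor would disturb.
    conflicts = wetlands[wetlands.intersects(corridor)]
    print(f"{len(conflicts)} wetland polygons intersect the corridor")
    ```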

  11. Pipelines programming paradigms: Prefab plumbing

    International Nuclear Information System (INIS)

    Boeheim, C.

    1991-08-01

    Mastery of CMS Pipelines is a process of learning increasingly sophisticated tools and techniques that can be applied to your problem. This paper presents a compilation of techniques that can be used as a reference for solving similar problems

  12. North America pipeline map

    International Nuclear Information System (INIS)

    Anon.

    2005-01-01

    This map presents details of pipelines currently in place throughout North America. Fifty-nine natural gas pipelines are presented, as well as 16 oil pipelines. The map also identifies six proposed natural gas pipelines. Major cities, roads and highways are included as well as state and provincial boundaries. The National Petroleum Reserve is identified, as well as the Arctic National Wildlife Refuge. The following companies placed advertisements on the map with details of the services they provide relating to pipeline management and construction: Ferus Gas Industries Trust; Proline; SulfaTreat Direct Oxidation; and TransGas. 1 map

  13. 76 FR 303 - Pipeline Safety: Safety of On-Shore Hazardous Liquid Pipelines

    Science.gov (United States)

    2011-01-04

    ... leak detection requirements for all pipelines; whether to require the installation of emergency flow... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 195 [Docket ID PHMSA-2010-0229] RIN 2137-AE66 Pipeline Safety: Safety of On-Shore Hazardous Liquid...

  14. Leadership Pipeline

    DEFF Research Database (Denmark)

    Elmholdt, Claus Westergård

    2013-01-01

    The article examines the empirical basis of the Leadership Pipeline. First, the Leadership Pipeline model of leadership passages and crossroads in upward transitions between organizational leadership levels is described (Freedman, 1998; Charan, Drotter and Noel, 2001). Next, focus is placed on the...... the relationship between continuity and discontinuity in leadership competencies across organizational levels is presented and discussed. Finally, the limitations of a competence-based approach to the Leadership Pipeline are discussed, and it is proposed that successful leadership depends just as much on...

  15. Animated GIFs as vernacular graphic design

    DEFF Research Database (Denmark)

    Gürsimsek, Ödül Akyapi

    2016-01-01

    Online television audiences create a variety of digital content on the internet. Fans of television production design produce and share such content to express themselves and engage with the objects of their interest. These digital expressions, which exist in the form of graphics, text, videos...... and often a mix of some of these modes, seem to enable participatory conversations by the audience communities that continue over a period of time. One example of such multimodal digital content is the graphic format called the animated GIF (graphics interchange format). This article focuses on content...... as design, both in the sense that multimodal meaning making is an act of design and in the sense that web-based graphics are designed graphics that are created through a design process. She specifically focuses on the transmedia television production entitled Lost and analyzes the design of animated GIFs...

  16. Smoldyn on graphics processing units: massively parallel Brownian dynamics simulations.

    Science.gov (United States)

    Dematté, Lorenzo

    2012-01-01

    Space is a very important aspect in the simulation of biochemical systems; recently, the need for simulation algorithms able to cope with space has become more and more compelling. Complex and detailed models of biochemical systems need to deal with the movement of single molecules and particles, taking into consideration localized fluctuations, transport phenomena, and diffusion. A common drawback of spatial models lies in their complexity: models can become very large, and their simulation can be time consuming, especially if we want to capture the system's behavior in a reliable way using stochastic methods in conjunction with a high spatial resolution. In order to deliver on the promise made by systems biology to understand a system as a whole, we need to scale up the size of the models we are able to simulate, moving from sequential to parallel simulation algorithms. In this paper, we analyze Smoldyn, a widely used algorithm for stochastic simulation of chemical reactions with spatial resolution and single-molecule detail, and we propose an alternative, innovative implementation that exploits the parallelism of Graphics Processing Units (GPUs). The implementation executes the most computationally demanding steps (computation of diffusion, unimolecular and bimolecular reactions, as well as the most common cases of molecule-surface interaction) on the GPU, computing them in parallel for each molecule of the system. The implementation offers good speed-ups and real-time, high-quality graphics output
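
    The per-molecule parallelism exploited on the GPU can be illustrated with the core Brownian-dynamics update, here vectorized over all molecules with NumPy as a stand-in for the per-thread GPU computation (a schematic sketch, not Smoldyn's actual code):

    ```python
    import numpy as np

    def brownian_step(pos, diff_coef, dt, rng):
        """Advance every molecule by one diffusion step: the displacement is
        Gaussian with std sqrt(2*D*dt) per coordinate (Einstein relation)."""
        sigma = np.sqrt(2.0 * diff_coef * dt)
        return pos + rng.normal(0.0, sigma, size=pos.shape)

    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 1e-6, size=(100_000, 3))  # 1e5 molecules in a 1 um box
    pos = brownian_step(pos, diff_coef=1e-12, dt=1e-6, rng=rng)  # D in m^2/s
    ```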

  17. Sentinel-1 Archive and Processing in the Cloud using the Hybrid Pluggable Processing Pipeline (HyP3) at the ASF DAAC

    Science.gov (United States)

    Arko, S. A.; Hogenson, R.; Geiger, A.; Herrmann, J.; Buechler, B.; Hogenson, K.

    2016-12-01

    In the coming years there will be an unprecedented amount of SAR data available on a free and open basis to research and operational users around the globe. The Alaska Satellite Facility (ASF) DAAC hosts, through an international agreement, data from the Sentinel-1 spacecraft and will be hosting data from the upcoming NASA ISRO SAR (NISAR) mission. To more effectively manage and exploit these vast datasets, ASF DAAC has begun moving portions of the archive to the cloud and utilizing cloud services to provide higher-level processing on the data. The Hybrid Pluggable Processing Pipeline (HyP3) project is designed to support higher-level data processing in the cloud and extend the capabilities of researchers to larger scales. Built upon a set of core Amazon cloud services, the HyP3 system allows users to request data processing using a number of canned algorithms or their own algorithms once they have been uploaded to the cloud. The HyP3 system automatically accesses the ASF cloud-based archive through the DAAC RESTful application programming interface and processes the data on Amazon's Elastic Compute Cloud (EC2). Final products are distributed through Amazon's Simple Storage Service (S3) and are available for user download. This presentation will provide an overview of ASF DAAC's activities moving the Sentinel-1 archive into the cloud and developing the integrated HyP3 system, covering both the benefits and difficulties of working in the cloud. Additionally, we will focus on the utilization of HyP3 for higher-level processing of SAR data. Two example algorithms, for sea-ice tracking and change detection, will be discussed, as well as the mechanism for integrating new algorithms into the pipeline for community use.

  18. 76 FR 29333 - Pipeline Safety: Meetings of the Technical Pipeline Safety Standards Committee and the Technical...

    Science.gov (United States)

    2011-05-20

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Technical Hazardous Liquid Pipeline Safety Standards Committee AGENCY: Pipeline and Hazardous Materials... for natural gas pipelines and for hazardous liquid pipelines. Both committees were established under...

  19. Remaining Sites Verification Package for the 100-F-26:12, 1.8-m (72-in.) Main Process Sewer Pipeline. Attachment to Waste Site Reclassification Form 2007-034

    International Nuclear Information System (INIS)

    Capron, J.M.

    2008-01-01

    The 100-F-26:12 waste site was an approximately 308-m-long, 1.8-m-diameter east-west-trending reinforced concrete pipe that joined the North Process Sewer Pipelines (100-F-26:1) and the South Process Pipelines (100-F-26:4) with the 1.8-m reactor cooling water effluent pipeline (100-F-19). In accordance with this evaluation, the verification sampling results support a reclassification of this site to Interim Closed Out. The results of verification sampling show that residual contaminant concentrations do not preclude any future uses and allow for unrestricted use of shallow zone soils. The results also demonstrate that residual contaminant concentrations are protective of groundwater and the Columbia River

  20. Optimal Energy Consumption Analysis of Natural Gas Pipeline

    Science.gov (United States)

    Liu, Enbin; Li, Changjun; Yang, Yi

    2014-01-01

    There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters, and different transport schemes have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption, a practice at odds with energy-reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production unit consumption indicators to establish an objective function for achieving the goal of lowering energy consumption. By using a dynamic programming method to solve the model and by preparing calculation software, we ensure that the solution process is quick and efficient. Using the established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. Compared with the measured energy consumption, the pipeline has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410
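
    The dynamic-programming formulation described, choosing each station's discharge pressure stage by stage along the line, can be sketched as follows. This is a toy illustration with made-up cost and pressure-drop functions, not the authors' model:

    ```python
    # Toy dynamic program: pick each station's discharge pressure to
    # minimize total compression energy along the line.
    import math

    PRESSURES = [6.0, 7.0, 8.0]   # candidate discharge pressures, MPa
    N_STATIONS = 5

    def drop(p):
        """Hypothetical pressure drop over the segment after a station."""
        return 1.0 + 0.05 * p

    def energy(p_in, p_out):
        """Hypothetical compression energy for one station."""
        return math.inf if p_out <= p_in else 100.0 * math.log(p_out / p_in)

    def optimize(p_start=5.0):
        # DP state: suction pressure arriving at the current station,
        # mapped to (best cumulative energy, chosen discharge pressures).
        states = {p_start: (0.0, [])}
        for _ in range(N_STATIONS):
            nxt = {}
            for p_in, (cost, plan) in states.items():
                for p_out in PRESSURES:
                    c = cost + energy(p_in, p_out)
                    arrive = round(p_out - drop(p_out), 3)
                    if c < nxt.get(arrive, (math.inf, None))[0]:
                        nxt[arrive] = (c, plan + [p_out])
            states = nxt
        return min(states.values(), key=lambda t: t[0])

    best_energy, best_plan = optimize()
    print(best_energy, best_plan)
    ```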

  1. Accelerating Solution Proposal of AES Using a Graphic Processor

    Directory of Open Access Journals (Sweden)

    STRATULAT, M.

    2011-11-01

    The main goal of this work is to analyze the possibility of using a graphics processing unit in non-graphical calculations. Graphics Processing Units are nowadays used not only for game engines and movie encoding/decoding, but also for a vast range of applications, such as cryptography. We used the graphics processing unit as a cryptographic coprocessor in order to accelerate the AES algorithm. Our implementation of AES runs on a GPU using the CUDA architecture. The performance obtained shows that the CUDA implementation can reach a throughput of 11.95 Gbps. The tests were conducted in two directions: running on small data sizes located in memory, and on large data stored in files on hard drives.

  2. 77 FR 16471 - Pipeline Safety: Implementation of the National Registry of Pipeline and Liquefied Natural Gas...

    Science.gov (United States)

    2012-03-21

    ... Registry of Pipeline and Liquefied Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts... Register (75 FR 72878) titled: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting...

  3. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  4. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
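
    The core speed-up reported comes from replacing a triple-nested loop with a vendor-tuned GEMM call; the effect is easy to reproduce in miniature, since NumPy dispatches matrix multiplication to an optimized BLAS. A generic illustration, not SAMMY itself:

    ```python
    import time
    import numpy as np

    m, k, n = 4000, 4000, 5000   # scaled-down stand-in for 16,000 x 20,000
    a = np.random.rand(m, k)
    b = np.random.rand(k, n)

    t0 = time.perf_counter()
    c = a @ b                    # dispatched to an optimized BLAS dgemm
    print(f"GEMM took {time.perf_counter() - t0:.2f} s")

    # The naive triple loop the BLAS call replaces (do not run at full size):
    # for i in range(m):
    #     for j in range(n):
    #         c[i, j] = sum(a[i, l] * b[l, j] for l in range(k))
    ```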

  5. CT applications of medical computer graphics

    International Nuclear Information System (INIS)

    Rhodes, M.L.

    1985-01-01

    Few applications of computer graphics show as much promise and early success as those for CT. Unlike electron microscopy, ultrasound, business, military, and animation applications, CT image data are inherently digital. CT pictures can be processed directly by programs well established in the fields of computer graphics and digital image processing. Methods for reformatting digital pictures, enhancing structure shape, reducing image noise, and rendering three-dimensional (3D) scenes of anatomic structures have all become routine at many CT centers. In this chapter, the authors provide a brief introduction to computer graphics terms and techniques commonly applied to CT pictures and, when appropriate, to those showing promise for magnetic resonance images. Topics discussed here are image-processing options that are applied to digital images already constructed. In the final portion of this chapter, techniques for "slicing" CT image data are presented, and geometric principles that describe the specification of oblique and curved images are outlined. Clinical examples are included
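
    The oblique-image specification mentioned at the end can be illustrated by resampling a CT volume along an arbitrarily oriented plane; a minimal sketch using scipy's map_coordinates, my own construction rather than the chapter's algorithm:

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def oblique_slice(volume, origin, u, v, size=128):
        """Sample a size x size oblique plane through a 3D volume.
        origin: a point on the plane; u, v: orthonormal in-plane axes (voxels)."""
        s = np.arange(size) - size / 2
        grid_u, grid_v = np.meshgrid(s, s)
        # Coordinates of every sample point, shape (3, size, size)
        coords = (origin[:, None, None]
                  + grid_u * u[:, None, None]
                  + grid_v * v[:, None, None])
        return map_coordinates(volume, coords, order=1)  # trilinear interp.

    vol = np.random.rand(64, 64, 64)            # stand-in for CT data
    center = np.array([32.0, 32.0, 32.0])
    u = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)  # plane tilted 45 degrees
    img = oblique_slice(vol, center, u, v)
    ```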

  6. The pipeline service obligation under changing LDC purchasing practices

    International Nuclear Information System (INIS)

    Neff, S.J.

    1990-01-01

    Historically, interstate natural gas pipelines served as aggregators and transporters of gas supplies from the producing fields to the city-gate. In turn, local distribution companies (LDCs) bought gas from pipelines at the city-gate under long-term sales contracts and resold the gas to retail customers. Once a pipeline/LDC sales relationship was established through a regulated certificate process, the LDC was assured of gas supply up to the level of its contract demand (CD) at just and reasonable rates until abandonment of the pipeline's sales service obligation was granted by the Federal Energy Regulatory Commission (FERC). During the years of regulated wellhead pricing and limited gas deliverability, pipelines signed long-term take-or-pay contracts with producers to induce them to develop and commit new gas supplies. Those supply cost obligations were reflected in tariff minimum bill provisions. For years, this pipeline/LDC arrangement was mutually beneficial and provided assured firm service. With the load diversity on large interstate pipeline systems and the make-up provisions under take-or-pay clauses, these gas purchase contracts provided supply reliability without negative economic consequence to the pipelines. Then, with the issuance of FERC Order Nos. 380, 436, and 500, LDCs' obligations to purchase gas from pipeline suppliers according to the terms of those long-term sales agreements were irrevocably altered. The impacts of those orders, the elimination of minimum bills and the advent of open-access transportation, caused a serious erosion of the mutual obligations between pipelines and their LDC customers. The result has been a significant loss of pipeline sales markets as LDC customers have chosen alternative supplies, often at the urging of state public utility commissions (PUCs), to lower short-term costs

  7. R graphics

    CERN Document Server

    Murrell, Paul

    2005-01-01

    R is revolutionizing the world of statistical computing. Powerful, flexible, and best of all free, R is now the program of choice for tens of thousands of statisticians. Destined to become an instant classic, R Graphics presents the first complete, authoritative exposition on the R graphical system. Paul Murrell, widely known as the leading expert on R graphics, has developed an in-depth resource that takes nothing for granted and helps both neophyte and seasoned users master the intricacies of R graphics. After an introductory overview of R graphics facilities, the presentation first focuses

  8. 75 FR 45591 - Pipeline Safety: Notice of Technical Pipeline Safety Advisory Committee Meetings

    Science.gov (United States)

    2010-08-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part... Committee Meetings AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT. ACTION... safety standards, risk assessments, and safety policies for natural gas pipelines and for hazardous...

  9. Decontamination device for pipeline

    International Nuclear Information System (INIS)

    Harashina, Heihachi.

    1994-01-01

    The pipelines to be decontaminated are sections of pipework contaminated with radioactive materials; they are connected to a fluid transfer means (for example, a bladeless pump) and a ball collector by way of a connector. A fluid mixture of chemical decontaminating liquid and spheres is sent into the pipelines to be decontaminated. The spheres are, for example, heat-resistant porous hard or soft rubber spheres. The fluid discharged from the pipelines is recirculated by way of a bypass means, and the inner surface of the pipelines is decontaminated by this circulation. When the bypass means is closed, the fluid discharged from the pipelines is sent to the ball collector, where the spheres are captured by a hopper. The liquid is then sent to a filtering means that filters the chemical decontaminating liquid and captures the sludges it contains. (I.N.)

  10. PROFESSIONALLY ORIENTED COURSE OF ENGINEERING-GRAPHICAL TRAINING

    Directory of Open Access Journals (Sweden)

    Olga V. Zhuykova

    2015-01-01

    The aim of the article is to present the results of managing competence-oriented self-directed student learning while studying graphical subjects at Kalashnikov Izhevsk State Technical University. Methods. A technology of self-directed engineering-graphical training of future bachelors, based on the analysis of educational literature and teaching experience and providing individualization and professional orientation, is suggested. The method of team expert appraisal was used at all stages of self-directed learning management. This method is one of the main methods of qualimetry (the science concerned with assessing and evaluating the quality of any objects and processes); it permits revealing the components of engineering-graphical competence, establishing the criteria and markers for determining the level of its development, and performing expert evaluation of student tasks and estimation procedures. Results. It has been established that revitalizing student self-directed learning through professional orientation and individualization increases the level of student engineering-graphical competence. Scientific novelty. Criteria-based evaluation procedures for determining the level of student engineering-graphical competence development in the process of professionally oriented self-directed learning of graphical subjects at a technical university are developed. Practical significance. Professionally focused educational trajectories of self-directed engineering-graphical training are designed and given substantive content. Such training is currently being implemented at Kalashnikov Izhevsk State Technical University in the «Instrument Engineering» major.

  11. Determination of the actual hydraulic characteristics of a main oil pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Gubin, V V; Mironenko, N Ya; Titov, N S; Skovorodnikov, Yu A

    1976-01-01

    A method is presented for constructing the hydraulic characteristics of sections of an oil pipeline by pattern recognition methods. In the theory of pattern recognition, the characteristics of a complex object are studied by means of adaptation algorithms. These algorithms allow the generation of models of processes and the establishment of relationships between their defining parameters and output characteristics on the basis of successive processing of information on the object (for example, dispatcher data on a pipeline section). The analysis does not require analytic formalization of the processes. This work presents a solution to the problem of determining the pressure loss to friction over the sections of a main pipeline on the basis of the following initial data: oil flow rate, diameter of the pipeline, length of the section, and viscosity of the oil. The range of variation of the pressure drop is divided into intervals (classes), and the task of determining the continuous value is reduced to recognizing its membership in one of the classes, with an accuracy equal to the size of an interval. The potential functions method from pattern recognition theory is used to perform the classification. The algorithm presented allows actual operational characteristics of main oil pipelines to be determined with an accuracy sufficient for practical application.
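
    The classification step, assigning an operating point to one of the pressure-drop intervals, can be illustrated with a toy potential-functions classifier that scores each class by summed kernel similarity to its training examples. This is an illustrative reconstruction of the general method, not the authors' algorithm:

    ```python
    import numpy as np

    def potential(x, sample, alpha=1.0):
        """Potential-function kernel: decays with distance from a sample."""
        return 1.0 / (1.0 + alpha * np.sum((x - sample) ** 2))

    def classify(x, classes):
        """classes: dict mapping interval label -> array of training points
        (e.g. normalized flow rate, viscosity). Returns the label whose
        training samples exert the highest total potential at x."""
        scores = {label: sum(potential(x, s) for s in pts)
                  for label, pts in classes.items()}
        return max(scores, key=scores.get)

    # Toy data: two pressure-drop intervals in (flow, viscosity) space
    classes = {"low dP": np.array([[0.2, 0.3], [0.25, 0.35]]),
               "high dP": np.array([[0.8, 0.7], [0.75, 0.8]])}
    print(classify(np.array([0.7, 0.75]), classes))   # -> "high dP"
    ```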

  12. High temperature pipeline design

    Energy Technology Data Exchange (ETDEWEB)

    Greenslade, J.G. [Colt Engineering, Calgary, AB (Canada). Pipelines Dept.; Nixon, J.F. [Nixon Geotech Ltd., Calgary, AB (Canada); Dyck, D.W. [Stress Tech Engineering Inc., Calgary, AB (Canada)

    2004-07-01

    It is impractical to transport bitumen and heavy oil by pipeline at ambient temperature unless diluents are added to reduce the viscosity. A diluted bitumen pipeline is commonly referred to as a dilbit pipeline. The diluent routinely used is natural gas condensate; since condensate is limited in supply, it must be recovered and reused at high cost. This paper presented an alternative to the use of diluent: reducing the viscosity of heavy oil or bitumen by heating. Two basic design issues for a hot bitumen (hotbit) pipeline were presented: (1) modelling the restart problem, and (2) establishing the maximum practical operating temperature. The transient behaviour during restart of a high temperature pipeline carrying viscous fluids, after the fluid has cooled during shutdown, was modelled using the concept of flow capacity. Although the design conditions were hypothetical, they could be encountered in the Athabasca oilsands. It was shown that environmental disturbances occur because the ground temperature near the pipeline rises; this can change growing conditions, even near deeply buried insulated pipelines. Axial thermal loads also constrain the design and operation of a buried pipeline as higher operating temperatures are considered. As such, strain-based design provides the opportunity to design for higher operating temperatures than allowable-stress-based design methods. Expansion loops can partially relieve the thermal stress at a given temperature. As the design temperature increases, there is a point at which above-grade pipelines become attractive options, although the materials and welding procedures must be suitable for low temperature service. 3 refs., 1 tab., 10 figs.

  13. Can we be more Graphic about Graphic Design?

    OpenAIRE

    Vienne, Véronique

    2012-01-01

    Can you objectify a subjective notion? This is the question graphic designers must face when they talk about their work. Even though graphic design artifacts are omnipresent in our culture, graphic design is still an exceptionally ill-defined profession. This is one of the reasons design criticism is still a rudimentary discipline. No one knows for sure what this thing we sometimes call “graphic communication”, for lack of a better word, really is: a technique my Webster’s dictionary describes as “the ...

  14. 77 FR 19799 - Pipeline Safety: Pipeline Damage Prevention Programs

    Science.gov (United States)

    2012-04-02

    ... noted ``when the oil pipeline industry developed the survey for its voluntary spill reporting system...) [cir] The American Public Gas Association (APGA) [cir] The Association of Oil Pipelines (AOPL) [cir... the contrary, all 50 states in the United States have a law designed to prevent excavation damage to...

  15. Optimizing the TESS Planet Finding Pipeline

    Science.gov (United States)

    Chitamitara, Aerbwong; Smith, Jeffrey C.; Tenenbaum, Peter; TESS Science Processing Operations Center

    2017-10-01

    The Transiting Exoplanet Survey Satellite (TESS) is a new NASA all-sky planet-finding survey that will observe stars within 200 light years that are 10-100 times brighter than those of the highly successful Kepler mission. TESS is expected to detect ~1000 planets smaller than Neptune and dozens of Earth-size planets. As in the Kepler mission, the Science Processing Operations Center (SPOC) processing pipeline at NASA Ames Research Center is tasked with calibrating the raw pixel data, generating systematic-error-corrected light curves, and then detecting and validating transit signals. The Transiting Planet Search (TPS) component of the pipeline must be modified and tuned for the new data characteristics of TESS. For example, because each sector is viewed for as little as 28 days, the pipeline will identify transiting planets based on a minimum of two transit signals rather than three, as in the Kepler mission. This may result in a significantly higher false positive rate. The study presented here measures the detection efficiency of the TESS pipeline using simulated data. Transiting planets identified by TPS are compared to transiting planets from the simulated transit model using the measured epochs, periods, transit durations and the expected detection statistic of injected transit signals (expected MES). From these comparisons, the recovery and false positive rates of TPS are measured. Measurements of recovery in TPS are then used to adjust TPS configuration parameters to maximize the planet recovery rate and minimize false detections. The improvements in recovery rate between the initial TPS configuration and various adjustments will be presented and discussed.
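
    The recovery measurement described, matching detections to injected signals by ephemeris, can be sketched simply; the matching tolerances below are arbitrary choices of mine, not the SPOC pipeline's:

    ```python
    def matches(injected, detected, period_tol=0.01, epoch_tol=0.1):
        """True if a detection recovers an injected transit signal:
        periods agree to 1% and epochs to 0.1 day (arbitrary tolerances)."""
        rel_period = abs(detected["period"] - injected["period"]) / injected["period"]
        return rel_period < period_tol and \
            abs(detected["epoch"] - injected["epoch"]) < epoch_tol

    def recovery_rate(injections, detections):
        """Fraction of injected signals recovered by at least one detection."""
        found = sum(any(matches(inj, det) for det in detections)
                    for inj in injections)
        return found / len(injections)

    injections = [{"period": 10.5, "epoch": 2.3}, {"period": 3.2, "epoch": 0.7}]
    detections = [{"period": 10.48, "epoch": 2.31}]
    print(recovery_rate(injections, detections))   # -> 0.5
    ```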

  16. Research on pipeline leak detection method based on pressure and dynamic pressure

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Likun; Xiong, Min; Zhao, Jinyun; Wang, Hongchao; Xu, Bin; Yu, DongLiang; Sun, Yi; Cai, Yongjun [RnD center of PetroChina Pipeline Company, Langfang, Hebei, (China)

    2010-07-01

    Pipeline leakages are very frequent and need to be detected as fast as possible to avoid safety and environmental issues. Many leakage detection processes have been developed. Acoustic wave methods based on static pressure and on dynamic pressure are both used for pipeline leakage detection. This study investigated a new pipeline leak detection method based on joint pressure and dynamic pressure. A dynamic pressure transmitter was designed based on a piezoelectric dynamic pressure sensor. The study showed that the dynamic pressure signal should be used for pipeline leak detection when the pipeline internal pressure changes quickly, while the static pressure signal gives better results when the internal pressure changes slowly. The in-field results showed that the location error using dynamic pressure is reduced to 80 m at a leakage ratio of 0.6% of pipeline throughput.
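
    Leak location from pressure-wave measurements is commonly computed from the difference in arrival times of the negative pressure wave at the two ends of the line. A sketch of the standard textbook relation, not taken from this paper:

    ```python
    def leak_position(length, wave_speed, t_up, t_down):
        """Leak distance from the upstream sensor via negative-pressure-wave
        arrival times: x = (L + a * (t_up - t_down)) / 2, where a is the
        wave speed; a leak at midline gives t_up == t_down and x == L/2."""
        return (length + wave_speed * (t_up - t_down)) / 2.0

    # Illustrative numbers: 40 km line, ~1000 m/s wave speed in liquid
    x = leak_position(40_000.0, 1000.0, t_up=12.0, t_down=16.0)
    print(f"Leak approximately {x / 1000:.1f} km from upstream sensor")  # 18.0
    ```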

  17. Update on the SDSS-III MARVELS data pipeline development

    Science.gov (United States)

    Li, Rui; Ge, J.; Thomas, N. B.; Petersen, E.; Wang, J.; Ma, B.; Sithajan, S.; Shi, J.; Ouyang, Y.; Chen, Y.

    2014-01-01

    MARVELS (Multi-object APO Radial Velocity Exoplanet Large-area Survey), one of the four surveys in the SDSS-III program, monitored over 3,300 stars during 2008-2012, with each being visited an average of 26 times over a 2-year window. Although the early data pipeline was able to detect over 20 brown dwarf candidates and several hundred binaries, no giant planet candidates had been reliably identified due to its large systematic errors. Learning from past data pipeline lessons, we re-designed the entire pipeline to handle various types of systematic effects caused by the instrument (such as trace, slant, distortion, drifts and dispersion) and by observing-condition changes (such as illumination profile and continuum). We also introduced several advanced methods to precisely extract the RV signals. To date, we have achieved a long-term RMS RV measurement error of 14 m/s for HIP-14810 (one of our reference stars) after removal of the known planet signal based on previous HIRES RV measurements. This new 1-D data pipeline has been used to robustly identify four giant planet candidates within the small fraction of the survey data that has been processed (Thomas et al., this meeting). The team is currently working to optimize the pipeline, especially the 2-D interference-fringe RV extraction, where early results show a 1.5 times improvement over the 1-D data pipeline. We are quickly approaching the survey baseline performance requirement of 10-35 m/s RMS for 8-12 solar-type stars. With this fine-tuned pipeline and the soon-to-be-processed plates of data, we expect to discover many more giant planet candidates and make a large statistical impact on exoplanet studies.

  18. Maritimes and Northeast Pipeline : from pipe dream to reality

    International Nuclear Information System (INIS)

    Langan, P.T.

    1998-01-01

    A general project description and time schedule of the Maritimes and Northeast Pipeline project was presented. The pipeline project is a component of the Sable Offshore Energy Project which involves the development of six separate gas fields near Sable Island on the Scotian Shelf about 250 km off the south coast of Nova Scotia. The six fields under development represent about 3.5 trillion cubic feet of proven gas supply. Another 2 trillion cubic feet of gas has been discovered in nearby pools. There is an estimated additional 13 trillion cubic feet of potential gas reserve in the Scotian Shelf region. The 2 billion-dollar offshore project involves twenty-eight production wells, construction and installation of six platforms and a 225-km long two-phase pipeline from the central platform that will transport the product to shore. A gas plant will be constructed on-shore at Goldboro at which point the liquids will be stripped from the gas stream and transported by an onshore pipeline to Point Tupper, Cape Breton Island, to a fractionation facility for further market processing. The Maritimes and Northeast Pipeline will transport the gas product to markets in Nova Scotia, New Brunswick and New England. A number of unique challenges associated with the Maritimes and Northeast Pipeline project such as the problems of serving a new market, the highly competitive anchor market in the U.S., supply and operating characteristics, the regulatory process, and various competing projects were also reviewed. Sable offshore gas is scheduled to flow by late 1999

  19. Historical analysis of US pipeline accidents triggered by natural hazards

    Science.gov (United States)

    Girgin, Serkan; Krausmann, Elisabeth

    2015-04-01

    Natural hazards, such as earthquakes, floods, landslides, or lightning, can initiate accidents in oil and gas pipelines with potentially major consequences on the population or the environment due to toxic releases, fires and explosions. Accidents of this type are also referred to as Natech events. Many major accidents highlight the risk associated with natural-hazard impact on pipelines transporting dangerous substances. For instance, in the USA in 1994, flooding of the San Jacinto River caused the rupture of 8 and the undermining of 29 pipelines by the floodwaters. About 5.5 million litres of petroleum and related products were spilled into the river and ignited. As a result, 547 people were injured and significant environmental damage occurred. Post-incident analysis is a valuable tool for better understanding the causes, dynamics and impacts of pipeline Natech accidents in support of future accident prevention and mitigation. Therefore, data on onshore hazardous-liquid pipeline accidents collected by the US Pipeline and Hazardous Materials Safety Administration (PHMSA) was analysed. For this purpose, a database-driven incident data analysis system was developed to aid the rapid review and categorization of PHMSA incident reports. Using an automated data-mining process followed by a peer review of the incident records and supported by natural hazard databases and external information sources, the pipeline Natechs were identified. As a by-product of the data-collection process, the database now includes over 800,000 incidents from all causes in industrial and transportation activities, which are automatically classified in the same way as the PHMSA record. This presentation describes the data collection and reviewing steps conducted during the study, provides information on the developed database and data analysis tools, and reports the findings of a statistical analysis of the identified hazardous liquid pipeline incidents in terms of accident dynamics and

  20. Optimizing the fMRI data-processing pipeline using prediction and reproducibility performance metrics: I. A preliminary group analysis

    DEFF Research Database (Denmark)

    Strother, Stephen C.; Conte, Stephen La; Hansen, Lars Kai

    2004-01-01

    We argue that published results demonstrate that new insights into human brain function may be obscured by poor and/or limited choices in the data-processing pipeline, and review the work on performance metrics for optimizing pipelines: prediction, reproducibility, and related empirical Receiver......, temporal detrending, and between-subject alignment) in a group analysis of BOLD-fMRI scans from 16 subjects performing a block-design, parametric-static-force task. Large-scale brain networks were detected using a multivariate linear discriminant analysis (canonical variates analysis, CVA) that was tuned...... of baseline scans have constant, equal means, and this assumption was assessed with prediction metrics. Higher-order polynomial warps compared to affine alignment had only a minor impact on the performance metrics. We found that both prediction and reproducibility metrics were required for optimizing...

  1. Natural Gas pipelines: economics of incremental capacity

    International Nuclear Information System (INIS)

    Kimber, M.

    2000-01-01

    A number of gas transmission pipeline systems in Australia exhibit capacity constraints, and yet there is little evidence of creative or innovative processes from either the service providers or the regulators which might provide a market-based response to these constraints. There is no provision in the Code in its current form to allow it to accommodate these processes. This aspect is one of many that require review to make the Code work. It is unlikely that the current members of the National Gas Pipeline Advisory Committee (NGPAC) or its advisers have sufficient understanding of the analysis of risk and the consequential commercial drivers to implement the necessary changes. As a result, the Code will increasingly lose touch with the commercial realities of the energy market and will continue to inhibit investment in new and expanded infrastructure where market risk is present. The recent report prepared for the Business Council of Australia indicates a need to revitalise the energy reform process. It is important for the Australian energy industry to provide leadership and advice to governments to continue the process of reform and, in particular, to amend the Code to make it more relevant. These amendments must include a mechanism by which price signals can be generated to provide timely and effective information for existing service providers or new entrants to install incremental pipeline capacity

  2. A microscale protein NMR sample screening pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, Paolo; Swapna, G. V. T.; Huang, Yuanpeng J.; Aramini, James M. [State University of New Jersey, Center for Advanced Biotechnology and Medicine, Department of Molecular Biology and Biochemistry, Rutgers (United States); Anklin, Clemens [Bruker Biospin Corporation (United States); Conover, Kenith; Hamilton, Keith; Xiao, Rong; Acton, Thomas B.; Ertekin, Asli; Everett, John K.; Montelione, Gaetano T., E-mail: guy@cabm.rutgers.ed [State University of New Jersey, Center for Advanced Biotechnology and Medicine, Department of Molecular Biology and Biochemistry, Rutgers (United States)

    2010-01-15

    As part of efforts to develop improved methods for NMR protein sample preparation and structure determination, the Northeast Structural Genomics Consortium (NESG) has implemented an NMR screening pipeline for protein target selection, construct optimization, and buffer optimization, incorporating efficient microscale NMR screening of proteins using a micro-cryoprobe. The process is feasible because the newest generation probe requires only small amounts of protein, typically 30-200 µg in 8-35 µl volume. Extensive automation has been made possible by the combination of database tools, mechanization of key process steps, and the use of a micro-cryoprobe that gives excellent data while requiring little optimization and manual setup. In this perspective, we describe the overall process used by the NESG for screening NMR samples as part of a sample optimization process, assessing optimal construct design and solution conditions, as well as for determining protein rotational correlation times in order to assess protein oligomerization states. Database infrastructure has been developed to allow for flexible implementation of new screening protocols and harvesting of the resulting output. The NESG micro NMR screening pipeline has also been used for detergent screening of membrane proteins. Descriptions of the individual steps in the NESG NMR sample design, production, and screening pipeline are presented in the format of a standard operating procedure.

  3. An Exploratory Study of the Effect of Screen Size and Resolution on the Legibility of Graphics in Automated Job Performance Aids. Final Report.

    Science.gov (United States)

    Dwyer, Daniel J.

    Designed to assess the effect of alternative display (CRT) screen sizes and resolution levels on user ability to identify and locate printed circuit (PC) board points, this study is the first in a protracted research program on the legibility of graphics in computer-based job aids. Air Force maintenance training pipeline students (35 male and 1…

  4. Interactive Graphic Journalism

    Directory of Open Access Journals (Sweden)

    Laura Schlichting

    2016-12-01

    This paper examines graphic journalism (GJ) in a transmedial context, and argues that transmedial graphic journalism (TMGJ) is an important and fruitful new form of visual storytelling that will re-invigorate the field of journalism as it steadily tests out and plays with new media, ultimately leading to new challenges in both the production and reception process. With TMGJ, linear narratives may be broken up, and ethical issues concerning emotional and entertainment value are raised when it comes to ‘playing the news’. The aesthetic characteristics of TMGJ will be described, and interactivity’s influence on non-fiction storytelling will be explored in an analysis of The Nisoor Square Shooting (2011) and Ferguson Firsthand (2015).

  5. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique for accelerating a radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require a mesh structure on the analysis target. In the presented method, the computation process is divided into small processes suitable for the parallel architecture of graphics hardware, which executes them in a single-instruction multiple-data manner.

  6. Gamma scan technique for detecting coupon inside the mother pipeline

    International Nuclear Information System (INIS)

    Rasif Mohd Zain; Roslan Yahya; Mohamad Rabaie Shari; Airwan Affandi Mahmood; Mior Ahmad Khusaini Adnan

    2012-01-01

    Many times a year, natural gas transmission and distribution companies need to make new connections to pipelines to expand or modify their existing systems through a hot tapping procedure. This procedure involves the installation of a new pipeline connection while the pipeline remains in service, flowing natural gas under pressure. The hot tap procedure includes attaching a branch connection and valve on the outside of an operating pipeline, and then cutting out the pipeline wall within the branch and removing the wall section, called the coupon, through the valve. A critical problem occurs during hot tapping when a coupon falls into the mother pipeline. To overcome this problem, a gamma-ray absorption technique was chosen, whereby a mapping technique is used to detect the coupon position. The technique is non-destructive: it applies Co-60 (5 mCi) as a sealed radioisotope source emitting gamma radiation and a NaI(Tl) scintillation detector. The result provides a visible representation of the density profile inside the pipeline from which the coupon can be located. This paper provides the details of the technique used and presents the results obtained. (author)
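
    Gamma absorption scanning rests on the Beer-Lambert attenuation law: a dense object in the beam path depresses the transmitted count rate. A toy scan-profile sketch with illustrative values of my own, not the paper's data:

    ```python
    import numpy as np

    MU_STEEL = 0.42   # illustrative linear attenuation coefficient, 1/cm

    def transmitted(i0, thickness_cm):
        """Beer-Lambert law: I = I0 * exp(-mu * x)."""
        return i0 * np.exp(-MU_STEEL * thickness_cm)

    # Scan along the pipe: extra ~1.2 cm of steel where the coupon lies
    positions = np.linspace(0.0, 3.0, 31)            # metres along pipeline
    thickness = np.full_like(positions, 2 * 0.9)     # two pipe walls, cm
    thickness[14:17] += 1.2                          # coupon adds material
    counts = transmitted(10_000.0, thickness)

    coupon_at = positions[np.argmin(counts)]
    print(f"Count-rate dip (likely coupon) near {coupon_at:.1f} m")
    ```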

  7. Methodology for reducing energy and resource costs in construction of trenchless crossover of pipelines

    Science.gov (United States)

    Toropov, V. S.

    2018-05-01

    The paper suggests a set of measures for selecting the equipment and its components in order to reduce energy costs during the pulling of the pipeline into the well when constructing trenchless pipeline crossings of various materials using horizontal directional drilling technology. A methodology for reducing energy costs has been developed by regulating the operating modes of the equipment during the process of pulling the working pipeline into a drilled and pre-expanded well. Since the power of the drilling rig is the most important criterion in selecting equipment for the construction of a trenchless crossing, an algorithm is proposed for calculating the required capacity of the rig when operating in different modes during the pull-in process.
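
    The rig-capacity calculation turns on the pull force needed to drag the string through the bore. A simplified estimate considering only friction on the buoyant pipe weight, with coefficients of my own choosing rather than the paper's methodology:

    ```python
    def pull_force(length_m, weight_n_per_m, buoyancy_n_per_m, mu=0.3):
        """Simplified pullback force: friction on the buoyant pipe weight.
        Ignores capstan effects at bends and fluidic drag."""
        net = abs(weight_n_per_m - buoyancy_n_per_m)   # buoyant weight, N/m
        return mu * net * length_m                     # N

    def rig_power(force_n, pull_speed_m_s, efficiency=0.8):
        """Mechanical power the rig must deliver at a given pullback speed."""
        return force_n * pull_speed_m_s / efficiency   # W

    f = pull_force(300.0, 900.0, 600.0)                # 300 m of pipe
    print(f"{f / 1000:.0f} kN, {rig_power(f, 0.2) / 1000:.1f} kW")
    ```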

  8. Leadership Pipeline

    DEFF Research Database (Denmark)

    Elmholdt, Claus Westergård

    2012-01-01

    The article analyses the foundations of the Leadership Pipeline model with a view to assessing the substance behind the model and the prospects for generalising the model to a Danish organisational context.

  9. A validated pipeline for detection of SNVs and short InDels from RNA Sequencing

    Directory of Open Access Journals (Sweden)

    Nitin Mandloi

    2017-12-01

    In this study, we have developed a pipeline to detect germline variants from RNA-seq data. The pipeline steps include pre-processing, alignment, the GATK best practices for RNA-seq, and variant filtering. The pre-processing step includes base and adapter trimming and removal of contaminating reads from rRNA, tRNA, mitochondrial DNA and repeat regions. Read alignment of the pre-processed reads is performed using STAR/HISAT. After this, we used the GATK best practices for RNA-seq datasets to call germline variants. We benchmarked our pipeline on NA12878 RNA-seq data downloaded from SRA (SRR1258218). After variant calling, the quality-passing variants were compared against the gold standard variants provided by the GIAB consortium. Of the total ~3.6 million high quality variants reported as gold standard variants for this sample (considering the whole genome), our pipeline identified ~58,104 variants as expressed in RNA-seq. Our pipeline achieved more than 99% sensitivity in the detection of germline variants.
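
    The benchmarking step, comparing called variants against a gold-standard set, reduces to set arithmetic on normalized variant keys. A minimal sketch, simplified to exact matching on chrom/pos/ref/alt and ignoring genotype:

    ```python
    def variant_key(line):
        """Normalize a VCF data line to a hashable (chrom, pos, ref, alt) key."""
        chrom, pos, _id, ref, alt = line.split("\t")[:5]
        return (chrom, int(pos), ref, alt)

    def sensitivity(called_vcf_lines, truth_vcf_lines):
        """Fraction of truth variants recovered by the caller."""
        called = {variant_key(l) for l in called_vcf_lines
                  if not l.startswith("#")}
        truth = {variant_key(l) for l in truth_vcf_lines
                 if not l.startswith("#")}
        return len(called & truth) / len(truth)

    # Usage note: restrict the truth set to expressed regions before comparing,
    # since RNA-seq can only recover variants in transcribed sequence.
    ```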

  10. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a Graphics Processing Unit (GPU) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details of the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered, and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20) versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) is discussed, together with the critical issues related to the GPU implementation. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.

  11. Interactive and Animated Scalable Vector Graphics and R Data Displays

    Directory of Open Access Journals (Sweden)

    Deborah Nolan

    2012-01-01

    We describe an approach to creating interactive and animated graphical displays using R's graphics engine and Scalable Vector Graphics (SVG), an XML vocabulary for describing two-dimensional graphical displays. We use the svg() graphics device in R and then post-process the resulting XML documents. The post-processing identifies the elements in the SVG that correspond to the different components of the graphical display, e.g., points, axes, labels, lines. One can then annotate these elements to add interactivity and animation effects. One can also use JavaScript to provide dynamic interactive effects to the plot, enabling rich user interactions and compelling visualizations. The resulting SVG documents can be embedded within HTML documents and can involve JavaScript code that integrates the SVG and HTML objects. The functionality is provided via the SVGAnnotation package and makes static plots generated via R graphics functions available as stand-alone, interactive and animated plots for the Web and other venues.
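
    The post-processing idea, parsing the SVG as XML and annotating the elements that encode plot components, is language-independent. A minimal sketch with Python's standard library, assuming a hypothetical "plot.svg" device output; the paper itself works through R's SVGAnnotation package:

    ```python
    import xml.etree.ElementTree as ET

    SVG_NS = "http://www.w3.org/2000/svg"
    ET.register_namespace("", SVG_NS)

    tree = ET.parse("plot.svg")   # hypothetical graphics-device output
    for i, circle in enumerate(tree.iter(f"{{{SVG_NS}}}circle")):
        # Attach a tooltip and a hover effect to every plotted point.
        title = ET.SubElement(circle, f"{{{SVG_NS}}}title")
        title.text = f"observation {i}"
        circle.set("onmouseover", "this.setAttribute('r', 6)")
        circle.set("onmouseout", "this.setAttribute('r', 3)")

    tree.write("plot_interactive.svg")
    ```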

  12. Implementation of RLS-based Adaptive Filterson nVIDIA GeForce Graphics Processing Unit

    OpenAIRE

    Hirano, Akihiro; Nakayama, Kenji

    2011-01-01

    This paper presents an efficient implementation of RLS-based adaptive filters with a large number of taps on the nVIDIA GeForce graphics processing unit (GPU) using the CUDA software development environment. Modifying the order and combination of calculations reduces the number of accesses to slow off-chip memory. Assigning tasks to multiple threads also takes memory access order into account. For a 4096-tap case, the GPU program is almost three times faster than a CPU program.
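
    For reference, the per-sample recursive-least-squares update that such implementations parallelize has the following well-known textbook form; a plain NumPy version, not the authors' GPU kernel:

    ```python
    import numpy as np

    def rls_update(w, P, x, d, lam=0.999):
        """One recursive-least-squares step.
        w: weights (n,), P: inverse correlation matrix (n, n),
        x: input vector (n,), d: desired sample, lam: forgetting factor."""
        Px = P @ x
        k = Px / (lam + x @ Px)           # gain vector
        e = d - w @ x                     # a priori error
        w = w + k * e                     # weight update
        P = (P - np.outer(k, Px)) / lam   # inverse-correlation update
        return w, P, e
    ```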

  13. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    Belleman, R.G.; Bédorf, J.; Portegies Zwart, S.F.

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to

  14. Process for testing noise emission from containers or pipelines made of steel, particularly for nuclear reactor plants

    International Nuclear Information System (INIS)

    Votava, E.; Stipsits, G.; Sommer, R.

    1982-01-01

    In a process for noise emission testing of steel containers or pipelines, particularly for testing primary circuit components of nuclear reactor plants, measuring sensors and/or associated electronic amplifiers are used, which are tuned for receiving the frequency band of the sound emission spectrum above a limiting frequency f_G, but are limited or non-resonant for frequency bands below f_G. (orig./HP)

  15. 49 CFR 195.210 - Pipeline location.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Pipeline location. 195.210 Section 195.210 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY... PIPELINE Construction § 195.210 Pipeline location. (a) Pipeline right-of-way must be selected to avoid, as...

  16. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools.

  17. GMAW (Gas Metal Arc Welding) process development for girth welding of high strength pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Rajan, Vaidyanath; Daniel, Joe; Quintana, Marie [The Lincoln Electric Company, Cleveland, OH (United States); Chen, Yaoshan [Center for Reliable Energy Systems (CRES), Dublin, OH (United States); Souza, Antonio [Lincoln Electric do Brasil, Guarulhos, SP (Brazil)

    2009-07-01

    This paper highlights some of the results and findings from the first phase of a consolidated program co-funded by US Department of Transportation Pipeline and Hazardous Materials Safety Administration (PHMSA) and Pipeline Research Council Inc (PRCI) to develop pipe weld assessment and qualification methods and optimize X100 pipe welding technologies. One objective of the program is to establish the range of viable welding options for X100 line pipe, and define the essential variables to provide welding process control for reliable and consistent mechanical performance of the weldments. In this first phase, a series of narrow gap girth welds were made with pulsed gas metal arc welding (GMAW), instrumented with thermocouples in the heat affected zone (HAZ) and weld metal to obtain the associated thermal profiles, and instrumented to measure true energy input as opposed to conventional heat input. Results reveal that true heat input is 16%-22% higher than conventional heat input. The thermal profile measurements correlate very well with thermal model predictions using true energy input data, which indicates the viability of treating the latter as an essential variable. Ongoing microstructural and mechanical testing work will enable validation of an integrated thermal-microstructural model being developed for these applications. Outputs from this model will be used to correlate essential welding process variables with weld microstructure and hardness. This will ultimately enable development of a list of essential variables and the ranges needed to ensure mechanical properties are achieved in practice, recommendations for controlling and monitoring these essential variables and test methods suitable for classification of welding consumables. (author)

  18. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    International Nuclear Information System (INIS)

    He, Qingyun; Chen, Hongli; Feng, Jingchao

    2015-01-01

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, validated against three basic benchmarks in rectangular and round ducts. • CPU parallelization and GPU acceleration were compared against a single CPU core for both MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared; the results showed that the AMG method performs best. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. The solver is developed on the open source code OpenFOAM and is based on a consistent and conservative scheme suitable for simulating MHD flow under strong magnetic fields in fusion liquid-metal blankets with structured or unstructured meshes. We verified the validity of the implementation on several standard cases, including benchmark I (the Shercliff and Hunt cases), benchmark II (fully developed circular pipe MHD flow) and benchmark III (the KIT experimental case). Computational performance of the GPU implementation was examined by comparing its double-precision run times with those of essentially the same algorithms and meshes on the CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770k) by a factor of at least 2.

  19. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    He, Qingyun; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn; Feng, Jingchao

    2015-12-15

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, validated against three basic benchmarks in rectangular and round ducts. • CPU parallelization and GPU acceleration were compared against a single CPU core for both MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared; the results showed that the AMG method performs best. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. The solver is developed on the open source code OpenFOAM and is based on a consistent and conservative scheme suitable for simulating MHD flow under strong magnetic fields in fusion liquid-metal blankets with structured or unstructured meshes. We verified the validity of the implementation on several standard cases, including benchmark I (the Shercliff and Hunt cases), benchmark II (fully developed circular pipe MHD flow) and benchmark III (the KIT experimental case). Computational performance of the GPU implementation was examined by comparing its double-precision run times with those of essentially the same algorithms and meshes on the CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770k) by a factor of at least 2.

  20. Incidental electric heating of pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Sonninskii, A V; Sirotin, A M; Vasiliev, Y N

    1981-04-01

    VNIIgaz has improved the conventional Japanese SECT pipeline-heating system, which uses a small steel tube that contains an insulated heater/conductor and is welded to the top of the pipeline. The improved version has two insulated electric heaters - one on the top and the other on the bottom of the pipeline - located inside steel angle irons that are welded to the pipeline. A comparison of experimental results from heating a 200-ft pipeline with both systems at currents of up to 470 A clearly demonstrated the better heating efficiency of the VNIIgaz unit. The improved SECT system would be suitable for various types of pipelines, including gas lines, in the USSR's far north regions.

  1. Onset of scour below pipelines and self-burial

    DEFF Research Database (Denmark)

    Sumer, B. Mutlu; Truelsen, Christoffer; Sichmann, T.

    2001-01-01

    This paper summarizes the results of an experimental study on the onset of scour below, and self-burial of, pipelines in currents/waves. Pressure was measured on the surface of a slightly buried pipe at two points, one at the upstream side and the other at the downstream side of the pipe, both in the sand bed. The latter enabled the pressure gradient (which drives a seepage flow underneath the pipe) to be calculated. The results indicated that excessive seepage flow and the resulting piping are the major factors causing the onset of scour below the pipeline. The onset of scour always occurred locally (not along the length of the pipeline as a two-dimensional process). The critical condition corresponding to the onset of scour was determined both in the case of currents and in the case of waves. Once the scour breaks out, it propagates along the length of the pipeline, scour holes being...

  2. Simplified Technique for Predicting Offshore Pipeline Expansion

    Science.gov (United States)

    Seo, J. H.; Kim, D. K.; Choi, H. S.; Yu, S. Y.; Park, K. S.

    2018-06-01

    In this study, we propose a method for estimating the amount of expansion that occurs in subsea pipelines, which could be applied in the design of robust structures that transport oil and gas from offshore wells. We begin with a literature review and general discussion of existing estimation methods and terminologies with respect to subsea pipelines. Because of the high pressures and high temperatures involved, the production of fluid from offshore wells typically causes physical deformation of subsea structures, e.g., expansion and contraction during the transportation process. In severe cases, vertical and lateral buckling occurs, which has a significant negative impact on structural safety and is related to on-bottom stability, free spans, structural collapse, and many other factors. In addition, these factors may affect the production rate with respect to flow assurance, wax, and hydration, to name a few. In this study, we developed a simple and efficient method for generating a reliable pipe expansion design in the early stage, which can lead to savings in both cost and computation time. Specifically, we propose an applicable diagram, which we call the standard dimensionless ratio (SDR) versus virtual anchor length (L_A) diagram, that provides an efficient procedure for estimating subsea pipeline expansion based on reliable applied scenarios. With this user guideline, offshore pipeline structural designers can reliably determine the amount of subsea pipeline expansion, and the results will also be useful for the installation, design, and maintenance of subsea pipelines.
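
    As a rough illustration of the quantities involved, the sketch below estimates a virtual anchor length and end expansion from a textbook-style friction-anchored model; this is not the authors' SDR-versus-L_A diagram, and all inputs are illustrative.

```python
# Simplified end-expansion estimate for a subsea pipeline: the pipe expands
# freely near its end until accumulated soil friction fully restrains it at
# the virtual anchor point L_A. Valid while L_A is shorter than half the line.
import math

E = 207e9        # steel Young's modulus [Pa]
alpha = 1.17e-5  # thermal expansion coefficient [1/K]
nu = 0.3         # Poisson's ratio
D, t = 0.324, 0.0127      # outer diameter and wall thickness [m]
dT = 60.0                 # temperature rise above ambient [K]
p = 10e6                  # internal pressure [Pa]
f = 2000.0                # axial soil friction per unit length [N/m]

A = math.pi * (D - t) * t          # steel cross-section, thin-wall approximation
# Fully unrestrained axial strain: thermal term plus end-cap pressure term.
eps_free = alpha * dT + p * D / (4 * t * E) * (1 - 2 * nu)
L_A = eps_free * E * A / f         # virtual anchor length
delta = eps_free**2 * E * A / (2 * f)   # end expansion = integral of strain to L_A
print(f"L_A = {L_A:.1f} m, end expansion = {delta*1000:.1f} mm")
```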

  3. 77 FR 2126 - Pipeline Safety: Implementation of the National Registry of Pipeline and Liquefied Natural Gas...

    Science.gov (United States)

    2012-01-13

    ... Natural Gas Operators AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No...: ``Pipeline Safety: Updates to Pipeline and Liquefied Natural Gas Reporting Requirements.'' The final rule...

  4. Halifax Lateral Pipeline Project : comprehensive study report

    International Nuclear Information System (INIS)

    1998-12-01

    The National Energy Board has requested the preparation of a comprehensive study report (CSR) for the proposed Halifax Lateral Pipeline Project in support of Maritimes and Northeast Pipeline Company's proposal to construct the lateral pipeline to transport natural gas produced in offshore Nova Scotia to the Tufts Cove electric generating station in the Halifax Regional Municipality. The project will also enhance the access of natural gas to potential markets located along the pipeline route. This CSR was prepared according to guidelines of the Canadian Environmental Assessment Agency. The report presents: (1) an overview of the project, (2) a summary of the regulatory requirements for assessment, (3) a description of the environmental assessment and regulatory process to date, (4) a summary of the predicted residual environmental and socio-economic effects associated with the project, and (5) a summary of the public consultation process. The environmental and socio-economic assessment focused on these eleven issues: groundwater resources, surface water resources, wetlands, soils, air quality, fish habitat, rare herpetiles, mammals, avifauna, rare plants and archaeological heritage resources. The report identified potential interactions between the project and valued socio-economic and environmental components. These were addressed in combination with recommended mitigative measures to reduce potential adverse effects. It was concluded that the overall environmental effects from the proposed project are likely to be minimal and can be effectively managed with good environmental management methods. 14 refs., 5 tabs., 5 figs., 2 appendices

  5. 77 FR 36606 - Pipeline Safety: Government/Industry Pipeline Research and Development Forum, Public Meeting

    Science.gov (United States)

    2012-06-19

    ...: Threat Prevention --Working Group 2: Leak Detection/Mitigation & Storage --Working Group 3: Anomaly... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID PHMSA-2012-0146] Pipeline Safety: Government/Industry Pipeline Research and Development Forum, Public...

  6. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort was explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
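
    To make the Rice decompression step concrete, the sketch below is a pure-Python Golomb-Rice codec. It assumes a common textbook bit convention (unary quotient of ones terminated by a zero, then a k-bit remainder), which may differ from the CCSDS variant a given mission actually uses.

```python
# Minimal Rice (Golomb power-of-two) encoder/decoder over lists of 0/1 bits.
def rice_decode(bits, k, n_values):
    """Decode n_values non-negative integers from an iterable of 0/1 bits."""
    it = iter(bits)
    out = []
    for _ in range(n_values):
        q = 0
        while next(it) == 1:        # unary-coded quotient
            q += 1
        r = 0
        for _ in range(k):          # k-bit binary remainder, MSB first
            r = (r << 1) | next(it)
        out.append((q << k) | r)
    return out

def rice_encode(values, k):
    bits = []
    for v in values:
        bits += [1] * (v >> k) + [0]                          # quotient, unary
        bits += [(v >> i) & 1 for i in range(k - 1, -1, -1)]  # remainder
    return bits

# Round-trip check.
assert rice_decode(rice_encode([3, 17, 0, 42], k=3), 3, 4) == [3, 17, 0, 42]
```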

  7. Graphical Model Theory for Wireless Sensor Networks

    International Nuclear Information System (INIS)

    Davis, William B.

    2002-01-01

    Information processing in sensor networks, with many small processors, demands a theory of computation that allows the minimization of processing effort, and the distribution of this effort throughout the network. Graphical model theory provides a probabilistic theory of computation that explicitly addresses complexity and decentralization for optimizing network computation. The junction tree algorithm, for decentralized inference on graphical probability models, can be instantiated in a variety of applications useful for wireless sensor networks, including: sensor validation and fusion; data compression and channel coding; expert systems, with decentralized data structures, and efficient local queries; pattern classification, and machine learning. Graphical models for these applications are sketched, and a model of dynamic sensor validation and fusion is presented in more depth, to illustrate the junction tree algorithm

  8. Thinning an object boundary on digital image using pipelined algorithm

    International Nuclear Information System (INIS)

    Dewanto, S.; Aliyanta, B.

    1997-01-01

    In digital image processing, thinning of an object boundary is required to analyze the image structure through measured parameters such as the area and circumference of the image object. The process requires a sufficiently large memory and is time consuming if all image pixels are stored in memory and the subsequent processing begins only after all pixels have been transformed. A pipelined algorithm can reduce the processing time: it uses a buffer memory of adjustable size, so that the next thinning step does not need to wait for the transformation of all pixels. This paper describes the pipelined algorithm and presents results of applying it to digital images.
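
    For reference, a compact implementation of one standard thinning scheme (Zhang-Suen) is sketched below; the paper's pipelined algorithm streams such neighbourhood tests through a small adjustable buffer instead of operating on whole images, but the per-pixel logic is of this kind.

```python
# Zhang-Suen thinning of a binary image (0/1 NumPy array) to a 1-pixel skeleton.
import numpy as np

def zhang_suen(img):
    img = img.copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # Neighbours P2..P9, clockwise starting from north.
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                               # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((y, x))             # sub-iteration 1
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((y, x))             # sub-iteration 2
            for y, x in to_delete:
                img[y, x] = 0
                changed = True
    return img

# Toy usage: thin a filled rectangle down to its skeleton.
img = np.zeros((7, 14), dtype=int)
img[1:6, 1:13] = 1
skel = zhang_suen(img)
```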

  9. Pipelines to eastern Canada

    International Nuclear Information System (INIS)

    Otsason, J.

    1998-01-01

    This presentation focused on four main topics: (1) the existing path of pipelines to eastern Canada, (2) the Chicago hub, (3) transport alternatives, and (4) Vector Pipeline's expansion plans. In the eastern Canadian market, TransCanada Pipelines holds 96 per cent of the market share and is effectively immune to expansion costs. Issues regarding the attractiveness of the Chicago hub were addressed; one attractive feature is that the hub has access to multiple supply basins, including western Canada, the Gulf Coast, the mid-continent, and the Rockies. Regarding Vector Pipeline's future plans, the company proposes to construct 343 miles of pipeline from Joliet, Illinois to Dawn, Ontario. The project description includes a discussion of some of the perceived advantages of this route, namely extensive storage in Michigan and south-western Ontario, the fact that the proposed pipeline traverses major markets (which would mitigate excess-capacity concerns), arbitrage opportunities, cost-effective expansion capability reducing tolls, and likely lower landed costs in Ontario. The project schedule, costs, rates and tariffs are also discussed. tabs., figs

  10. Design of Flow Big Data System Based on Smart Pipeline Theory

    Directory of Open Access Journals (Sweden)

    Zhang Jianqing

    2017-01-01

    As telecom operators build more and more intelligent pipes, big-data analysis and processing of the huge volumes of data that the intelligent pipeline generates has become an inevitable trend. The intelligent pipe describes operational data and sales data; the operator's pipe flow data creates value for e-commerce business forms and business models in the mobile e-business environment. The intelligent pipe is the third dimension of the 3D-pipeline mobile e-commerce system, and this intelligent-operation dimension makes mobile e-business three-dimensional. This paper discusses smart pipeline theory and the smart-pipeline flow big data system, together with their system framework and core technology.

  11. 78 FR 42889 - Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems

    Science.gov (United States)

    2013-07-18

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Part 192 [Docket No. PHMSA-2013-0097] Pipeline Safety: Reminder of Requirements for Utility LP-Gas and LPG Pipeline Systems AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION...

  12. Pipeline, utilities to spend $127 million on scada systems

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    Spending for new or upgraded supervisory control and data acquisition (scada) systems and for additional remote-terminal units (RTUs) by North American pipelines and utilities will exceed $165 million through February 1996. New and updated scada systems will total 122 at a cost of more than $127 million; 143 RTU add-on projects will cost more than $38 million. Pipelines and combined utilities/pipelines will spend $89.5 million for 58 scada-system projects and $30.2 million for RTU add-on projects. Scada systems are computerized hardware and software systems that perform monitoring and control functions. In gas utilities, these systems perform functions normally associated with gas transmission and distribution as well as production-plant process control. In gas and oil pipelines, the systems perform these functions as well as such specialized functions as batch tracking, leak detection, and gas load flow

  13. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    International Nuclear Information System (INIS)

    Loh, K.B.; Ramli, N.; Tan, L.K.; Roziah, M.; Rahmat, K.; Ariffin, H.

    2012-01-01

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using automated ROIs with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly early in the first 12 months. The second phase lasted 12-24 months, during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)
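
    For reference, FA and MD follow directly from the eigenvalues of the diffusion tensor, which is what the atlas-based VOI extraction averages over regions; a minimal NumPy sketch with illustrative eigenvalues:

```python
# Fractional anisotropy and mean diffusivity from diffusion tensor eigenvalues.
import numpy as np

def fa_md(eigenvalues):
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()                                   # mean diffusivity
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return fa, md

# Example: a prolate tensor typical of coherent white matter (units mm^2/s).
fa, md = fa_md([1.7e-3, 0.3e-3, 0.3e-3])
print(f"FA = {fa:.2f}, MD = {md:.2e} mm^2/s")
```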

  14. Quantification of diffusion tensor imaging in normal white matter maturation of early childhood using an automated processing pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Loh, K.B.; Ramli, N.; Tan, L.K.; Roziah, M. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); Rahmat, K. [University of Malaya, Department of Biomedical Imaging, University Malaya Research Imaging Centre (UMRIC), Faculty of Medicine, Kuala Lumpur (Malaysia); University Malaya, Biomedical Imaging Department, Kuala Lumpur (Malaysia); Ariffin, H. [University of Malaya, Department of Paediatrics, Faculty of Medicine, Kuala Lumpur (Malaysia)

    2012-07-15

    The degree and status of white matter myelination can be sensitively monitored using diffusion tensor imaging (DTI). This study looks at the measurement of fractional anisotropy (FA) and mean diffusivity (MD) using automated ROIs with an existing DTI atlas. Anatomical MRI and structural DTI were performed cross-sectionally on 26 normal children (newborn to 48 months old), using 1.5-T MRI. The automated processing pipeline was implemented to convert diffusion-weighted images into the NIfTI format. DTI-TK software was used to register the processed images to the ICBM DTI-81 atlas, while AFNI software was used for automated atlas-based volumes of interest (VOIs) and statistical value extraction. DTI exhibited consistent grey-white matter contrast. Triphasic temporal variation of the FA and MD values was noted, with FA increasing and MD decreasing rapidly early in the first 12 months. The second phase lasted 12-24 months, during which the rate of FA and MD changes was reduced. After 24 months, the FA and MD values plateaued. DTI is a superior technique to conventional MR imaging in depicting WM maturation. The use of the automated processing pipeline provides a reliable environment for quantitative analysis of high-throughput DTI data. (orig.)

  15. Proceedings of the ice scour and Arctic marine pipelines workshop

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    This conference was organized to discuss the challenges facing engineers in Arctic offshore oil and gas operations, particularly those dealing with the design, installation and operation of offshore pipelines. Adding to the usual engineering considerations, formidable enough in themselves, Arctic offshore pipelines also face constraints due to permafrost, ice cover, and ice scouring from icebergs. In addition to an examination of the roles played by these constraints, other topics addressed by various speakers included the forces and deformation mechanisms experienced by different soils during ice scouring events, modelling of the scouring process, and the application of models to the issue of pipeline burial and protection. Some of the regulatory concerns regarding Arctic pipelines were also discussed. refs., tabs., figs.

  16. Saudi Aramco experience towards establishing Pipelines Integrity Management Systems (PIMS)

    Energy Technology Data Exchange (ETDEWEB)

    AlAhmari, Saad A. [Saudi Aramco, Dhahran (Saudi Arabia)

    2009-12-19

    Saudi Aramco's pipeline network transports hydrocarbons to export terminals, processing plants and domestic users. This network has faced several safety- and operations-related challenges that call for a more effective Pipelines Integrity Management System (PIMS). Saudi Aramco therefore decided to develop its PIMS on the basis of geographical information system (GIS) support through different phases: establishing the integrity management framework and the risk calculation approach, conducting a gap analysis toward the envisioned PIMS, establishing the required scope of work, screening the PIMS applications market, selecting suitable tools that satisfy the expected deliverables, and implementing the PIMS applications. Saudi Aramco expects great benefits from implementing PIMS, e.g., enhanced safety, a more robust pipeline network, optimized inspection and maintenance expenditures, and easier pipeline management and decision-making. Saudi Aramco's experience in adopting PIMS includes many challenges and lessons learned associated with all of the PIMS development phases, including performing the gap analysis, conducting QA/QC sensitivity analysis of the acquired data, establishing the scope of work, selecting the appropriate applications and implementing PIMS. (author)

  17. Saudi Aramco experience towards establishing Pipelines Integrity Management System (PIMS)

    Energy Technology Data Exchange (ETDEWEB)

    Al-Ahmari, Saad A. [Saudi Aramco, Dhahran (Saudi Arabia)

    2009-07-01

    Saudi Aramco's pipeline network transports hydrocarbons to export terminals, processing plants and domestic users. This network has faced several safety- and operations-related challenges that call for a more effective Pipelines Integrity Management System (PIMS). Saudi Aramco therefore decided to develop its PIMS on the basis of geographical information system (GIS) support through different phases: establishing the integrity management framework and the risk calculation approach, conducting a gap analysis toward the envisioned PIMS, establishing the required scope of work, screening the PIMS applications market, selecting suitable tools that satisfy the expected deliverables, and implementing the PIMS applications. Saudi Aramco expects great benefits from implementing PIMS, e.g., enhanced safety, a more robust pipeline network, optimized inspection and maintenance expenditures, and easier pipeline management and decision-making. Saudi Aramco's experience in adopting PIMS includes many challenges and lessons learned associated with all of the PIMS development phases, including performing the gap analysis, conducting QA/QC sensitivity analysis of the acquired data, establishing the scope of work, selecting the appropriate applications and implementing PIMS. (author)

  18. Extending Graphic Statics for User-Controlled Structural Morphogenesis

    OpenAIRE

    Fivet, Corentin; Zastavni, Denis; Cap, Jean-François; Structural Morphology Group International Seminar 2011

    2011-01-01

    The first geometrical definitions of any structure are of primary importance when considering pertinence and efficiency in structural design processes. Engineering history has taught us that graphic statics can be a very powerful tool, since it allows the designer to take shapes and forces into account simultaneously. However, current and past graphic statics methods are more suitable for analysis than for structural morphogenesis. This contribution introduces new graphical methods that can supp...

  19. Theory and Application of Magnetic Flux Leakage Pipeline Detection.

    Science.gov (United States)

    Shi, Yan; Zhang, Chao; Li, Rui; Cai, Maolin; Jia, Guanwei

    2015-12-10

    Magnetic flux leakage (MFL) detection is one of the most popular methods of pipeline inspection. It is a nondestructive testing technique which uses magnetic sensitive sensors to detect the magnetic leakage field of defects on both the internal and external surfaces of pipelines. This paper introduces the main principles, measurement and processing of MFL data. As the key point of a quantitative analysis of MFL detection, the identification of the leakage magnetic signal is also discussed. In addition, the advantages and disadvantages of different identification methods are analyzed. Then the paper briefly introduces the expert systems used. At the end of this paper, future developments in pipeline MFL detection are predicted.
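
    A common way to illustrate the shape of MFL signals is an idealized point-dipole defect model. The sketch below is such a generic illustration (not one of the identification methods reviewed in the paper); amplitudes are in arbitrary units and sign conventions vary in the literature.

```python
# Idealized 2D point-dipole model of the leakage field above a surface defect.
import numpy as np

def mfl_dipole(x, h=2e-3, m=1.0):
    """Axial (bx) and radial (by) leakage components along a scan line.

    x : axial scan positions relative to the defect [m]
    h : sensor lift-off above the defect [m]
    m : dipole strength (arbitrary units)
    """
    r2 = x ** 2 + h ** 2
    bx = m * (x ** 2 - h ** 2) / r2 ** 2   # axial component: extremum over defect
    by = m * (2 * x * h) / r2 ** 2         # radial component: characteristic S-shape
    return bx, by

x = np.linspace(-0.02, 0.02, 401)          # +-20 mm scan around the defect
bx, by = mfl_dipole(x)
```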

  20. Logistics aspects of petroleum pipeline operations

    Directory of Open Access Journals (Sweden)

    W. J. Pienaar

    2010-11-01

    The paper identifies, assesses and describes the logistics aspects of the commercial operation of petroleum pipelines. The nature of the petroleum-product supply chains in which pipelines play a role is outlined, and the types of petroleum pipeline systems are described, together with the nature of the logistics activities of pipeline operations. The reasons for the cost efficiency of pipeline operations are given, and the relative modal service effectiveness of pipeline transport is assessed against the most pertinent service performance measures. Finally, the segments in the petroleum-products supply chain where pipelines can play an efficient and effective role are identified.

  1. Detection and localization of leak of pipelines of RBMK reactor. Methods of processing of acoustic noise

    International Nuclear Information System (INIS)

    Tcherkaschov, Y.M.; Strelkov, B.P.; Chimanski, S.B.; Lebedev, V.I.; Belyanin, L.A.

    1997-01-01

    A method based on the detection and monitoring of acoustic leak signals was developed for leak detection in the inlet and outlet pipelines of RBMK reactors. This report reviews the methods of processing and analysis of acoustic noise that were included in the software of the leak detection system and are used to solve the following problems: leak detection by the sound-pressure-level method under conditions of powerful background noise and strong signal attenuation; early detection of a small leak by a high-sensitivity correlation method; location of a sound source under conditions of strong signal reflection, by correlation and sound-pressure methods; and evaluation of leak size from the analysis of the sound level and the location of the sound source. The operation of the techniques is illustrated with test results from a fragment of the leak detection system, obtained at the Leningrad NPP operating at power levels of 460, 700, 890 and 1000 MWe. 16 figs
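
    To make the correlation method concrete, the sketch below locates a leak from the arrival-time difference of its noise at two sensors bracketing it; the sensor spacing, sound speed and sampling rate are illustrative assumptions, and the data are synthetic.

```python
# Leak localization by cross-correlation of two sensor signals.
import numpy as np

fs = 50_000.0      # sampling rate [Hz]
c = 1200.0         # sound propagation speed along the pipe [m/s]
L = 30.0           # distance between the two sensors [m]

rng = np.random.default_rng(2)
leak = rng.standard_normal(20_000)          # broadband leak noise
true_delay = int(0.004 * fs)                # 4 ms arrival-time difference
s1 = np.roll(leak, true_delay) + 0.5 * rng.standard_normal(leak.size)  # late arrival
s2 = leak + 0.5 * rng.standard_normal(leak.size)

xc = np.correlate(s1 - s1.mean(), s2 - s2.mean(), mode="full")
tau = (np.argmax(xc) - (leak.size - 1)) / fs   # estimated delay t1 - t2 [s]
x_leak = (L + c * tau) / 2                     # distance from sensor 1
print(f"tau = {tau*1e3:.2f} ms, leak at {x_leak:.1f} m from sensor 1")
```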

  2. The pipeline fracture behavior and pressure assessment under HIC (Hydrogen induced cracking) environment

    Energy Technology Data Exchange (ETDEWEB)

    Shaohua, Dong [China National Petroleum Corporation (CNPC), Beijing (China); Lianwei, Wang [University of Science and Technology Beijing (USTB), Beijing (China)

    2009-07-01

    As hydrogen is transported and diffuses, the hydrogen concentration around a pipeline crack tip reaches a critical density after an incubation period; the pipeline material then degrades and its critical stress intensity factor falls. Hydrogen transport and diffusion are thus the cause of hydrogen induced cracking (HIC) in pipelines. The stress intensity factor for HIC under a given environmental stress condition is the key to assessing the material's fracture behaviour. This paper studies the relationships among the hydrogen concentration, the crack-tip stress and strain fields, hydrogen diffusion and internal pressure in the crack-tip process zone, and then determines the length of the HIC process zone. Based on the theory that crack propagation arises from micro-crack nucleation, a dislocation model is produced as a fracture criterion for HIC, the interaction between material and environment under HIC is analysed, and the pipeline's maximum load pressure and the threshold J-integral (J_ISCC) are calculated step by step, which is very significant for safe pipeline operation. (author)

  3. Hydrocarbons pipeline transportation risk assessment

    Science.gov (United States)

    Zanin, A. V.; Milke, A. A.; Kvasov, I. N.

    2018-04-01

    The issue of risk assessment for pipeline transportation under Arctic conditions is addressed in the paper. Pipeline quality characteristics in this environment have been assessed. To achieve the stated objective, a mathematical model of the pipeline was designed and visualized using the software product SOLIDWORKS. The results obtained with the mathematical model made it possible to define the optimal pipeline characteristics for designs on the Arctic sea bottom. In the course of the research, the risks of avalanche pipe collapse were examined, the internal longitudinal and circumferential loads acting on the pipeline were analyzed, and the hydrodynamic force of water impact was taken into consideration. The calculations can contribute to the further development of pipeline transport under the harsh climate conditions of the Russian Federation's Arctic shelf territory.

  4. Rough surface scattering simulations using graphics cards

    International Nuclear Information System (INIS)

    Klapetek, Petr; Valtr, Miroslav; Poruba, Ales; Necas, David; Ohlidal, Miloslav

    2010-01-01

    In this article we present results of rough surface scattering calculations using a graphics processing unit implementation of the finite difference time domain (FDTD) algorithm. Numerical results are compared to real measurements, and computational performance is compared to a CPU implementation of the same algorithm. Atomic force microscope measurements of surface morphology are used as the basis for the computations. It is shown that graphics processing unit capabilities can be used to speed up the presented computationally demanding algorithms without loss of precision.
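
    The inner loop that FDTD codes offload to the GPU is the staggered Yee update; a minimal 1D sketch in normalized units is shown below (grid size, time steps and source are illustrative, and the paper's code is of course 2D/3D).

```python
# 1D FDTD (Yee scheme) in normalized units, Courant number 1/2.
import numpy as np

nx, nt = 400, 800
ez = np.zeros(nx)           # electric field on integer grid points
hy = np.zeros(nx - 1)       # magnetic field, staggered half a cell
c = 0.5                     # normalized Courant number

for n in range(nt):
    hy += c * np.diff(ez)                          # update H from the curl of E
    ez[1:-1] += c * np.diff(hy)                    # update E from the curl of H
    ez[nx // 4] += np.exp(-((n - 40) / 12) ** 2)   # soft Gaussian source
```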

  5. Critical frameworks for graphic design: graphic design and visual culture

    OpenAIRE

    Dauppe, Michele-Anne

    2011-01-01

    The paper considers an approach to the study of graphic design which addresses the expanding nature of graphic design in the 21st century and the purposeful application of theory to the subject of graphic design. In recent years graphic design has expanded its domain from the world of print culture (e.g., books, posters) into what is sometimes called screen culture. Everything from a mobile phone to a display in an airport lounge to the A.T.M. carries graphic design. It has become ever more ubiquitous...

  6. Shore approach of Camarupim pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Bernardi, Tiaraju P.; Oliveira Neto, Vasco A. de; Siqueira, Jakson [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    Camarupim Field is located in the northern portion of the Espirito Santo Basin and was discovered with the drilling of well 1-ESS-164 in 2006. It is a gas field whose production starts in mid-2009. The production unit will be an FPSO (Floating Production, Storage and Offloading vessel), and the gas will flow through a pipeline with a diameter ranging from 12 inches to 24 inches and approximately 60 km long, from the FPSO Cidade de Sao Mateus to the UTGC (Unit for Treatment of Gas Cacimbas, Linhares-ES). The FPSO will have a processing capacity of 10 MMm3/day of gas. Because the pipeline approaches land within an environmental protection area that is a spawning ground for sea turtles, the connection between the offshore and onshore stretches of the pipeline is made through a shore approach using the known and proven technique of horizontal directional drilling, about 950 m in length. This paper presents the assumptions adopted, the technique employed, the challenges faced by the team and the lessons learned in constructing the directional hole. (author)

  7. STRESS AND STRAIN STATE OF REPAIRING SECTION OF PIPELINE

    Directory of Open Access Journals (Sweden)

    V. V. Nikolaev

    2015-01-01

    Reliable continuous operation of pipelines is a pressing problem. For this reason, an effective system for preventing failures and accidents on main pipelines should be developed, covering not only design and operation but also selective repair. Any departure from the linear position, unloaded by bending, changes the stress and strain state of a pipeline; moreover, the stress and strain state must be determined and controlled while repair works are carried out. The article presents a mathematical model of the straining of a pipeline section in a viscoelastic setting, taking into account soil creep and the stress state of the pipeline, with the purpose of evaluating the stresses and load-supporting capacity of the pipeline section under repair as functions of time. The stress and strain state analysis includes the calculation of longitudinal and circumferential stresses, accounting for axially asymmetric straining, and was performed on the basis of the momentless theory of shells. To demonstrate the consistency of the data, the calculation results were compared with analytical solutions for several cases: the straining of a long pipeline section under a transverse load only; under a longitudinal stress; and of a long pipeline section resting on an elastic foundation under a transverse load. The comparison shows that the calculation error does not exceed 3%. An analysis of the change in the stress-strain state of the pipeline section was carried out with this model, indicating a larger span deflection than the solution of the problem in the elastic approach. It is also shown that, for a consistent assessment of pipeline maintenance conditions, the rheological processes of the soils must be considered. On the basis of a complex analysis of pipelines, the stresses and time...
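
    For reference, the elastic-foundation comparison case mentioned above has a classical closed-form solution (Hetenyi's infinite beam on a Winkler foundation under a point load); a sketch with illustrative inputs follows, as a sanity check of the kind such models are validated against.

```python
# Deflection of an infinite beam on a Winkler elastic foundation under a
# point load P: w(x) = (P*beta / 2k) * exp(-beta*x) * (cos(beta*x) + sin(beta*x)).
import numpy as np

E, I = 207e9, 8.0e-5     # steel modulus [Pa] and section moment of inertia [m^4]
k = 5e6                  # foundation (soil) modulus [N/m^2]
P = 1e5                  # transverse point load [N]

beta = (k / (4 * E * I)) ** 0.25               # characteristic wavenumber [1/m]
x = np.linspace(0, 10, 200)                    # distance from the load [m]
w = (P * beta / (2 * k)) * np.exp(-beta * x) * (np.cos(beta * x) + np.sin(beta * x))
print(f"max deflection {w[0]*1000:.2f} mm at the load point")
```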

  8. Nine Years of XMM-Newton Pipeline: Experience and Feedback

    Science.gov (United States)

    Michel, Laurent; Motch, Christian

    2009-05-01

    The Strasbourg Astronomical Observatory is a member of the Survey Science Centre (SSC) of the XMM-Newton satellite. Among other responsibilities, we provide database access to the 2XMMi catalogue and run the part of the data processing pipeline performing the cross-correlation of EPIC sources with archival catalogs. These tasks were all developed in Strasbourg. Pipeline processing has been flawlessly in operation since 1999. We describe here the workload and infrastructure set up in Strasbourg to support SSC activities. Our nine-year-long SSC experience could be used in the framework of the Simbol-X ground segment.

  9. Nine Years of XMM-Newton Pipeline: Experience and Feedback

    International Nuclear Information System (INIS)

    Michel, Laurent; Motch, Christian

    2009-01-01

    The Strasbourg Astronomical Observatory is a member of the Survey Science Centre (SSC) of the XMM-Newton satellite. Among other responsibilities, we provide database access to the 2XMMi catalogue and run the part of the data processing pipeline performing the cross-correlation of EPIC sources with archival catalogs. These tasks were all developed in Strasbourg. Pipeline processing has been flawlessly in operation since 1999. We describe here the workload and infrastructure set up in Strasbourg to support SSC activities. Our nine-year-long SSC experience could be used in the framework of the Simbol-X ground segment.

  10. Web-based execution of graphical work-flows: a modular platform for multifunctional scientific process automation

    International Nuclear Information System (INIS)

    De Ley, E.; Jacobs, D.; Ounsy, M.

    2012-01-01

    The Passerelle process automation suite offers a fundamentally modular solution platform, based on a layered integration of several best-of-breed technologies. It has been successfully applied by Synchrotron Soleil as the sequencer for data acquisition and control processes on its beamlines, integrated with TANGO as a control bus and GlobalScreen™ as the SCADA package. Since last year it has been used as the graphical work-flow component for the development of an Eclipse-based Data Analysis Work Bench at ESRF. The top layer of Passerelle exposes an actor-based development paradigm, based on the Ptolemy framework (UC Berkeley). Actors provide explicit reusability and strong decoupling, combined with an inherently concurrent execution model. Actor libraries exist for TANGO integration, web services, database operations, flow control, rules-based analysis, mathematical calculations, launching external scripts, etc. Passerelle's internal architecture is based on OSGi, the major Java framework for modular service-based applications. A large set of modules exists that can be recombined as desired to obtain different features and deployment models. Besides desktop versions of the Passerelle work-flow workbench, there is also the Passerelle Manager, a secured web application including a graphical editor for centralized design, execution, management and monitoring of process flows, integrating standard Java Enterprise services with OSGi. We present the internal technical architecture, some interesting application cases and the lessons learned. (authors)
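
    As a generic illustration of the actor paradigm described above (a toy sketch, not the Passerelle or Ptolemy API): each actor runs concurrently in its own thread and communicates only through queues, which is what gives the strong decoupling and inherent concurrency.

```python
# Minimal two-stage actor pipeline using threads and queues.
import threading, queue

def actor(fn, inbox, outbox):
    """Run fn on every item from inbox, forwarding results to outbox."""
    def run():
        for item in iter(inbox.get, None):   # None is the end-of-stream token
            outbox.put(fn(item))
        outbox.put(None)                     # propagate end-of-stream
    threading.Thread(target=run).start()

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
actor(lambda x: x * 2, q1, q2)               # hypothetical "acquire" stage
actor(lambda x: f"value={x}", q2, q3)        # hypothetical "format" stage

for i in range(5):
    q1.put(i)
q1.put(None)
for msg in iter(q3.get, None):
    print(msg)
```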

  11. GASVOL 18'' gas pipeline - risk based inspection study

    Energy Technology Data Exchange (ETDEWEB)

    Bjoernoey, Ola H.; Etterdal, Birger A. [Det Norske Veritas (DNV), Oslo (Norway); Guarize, Rosimar; Oliveira, Luiz F.S. [Det Norske Veritas (DNV) (Brazil); Faertes, Denise; Dias, Ricardo [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2003-07-01

    This paper describes a risk-based approach and inspection planning as part of the Pipeline Integrity Management (PIM) system for the 95.5 km long 18'' GASVOL gas pipeline in the south-eastern region of Brazil, transporting circa 5 000 000 m3 of dry gas per day. Pipeline systems can be subject to several degradation mechanisms, and inspection and monitoring are used to ensure system integrity. Modern pipeline regulations and codes are normally based on a core safety or risk philosophy; the detailed design requirements presented in design codes are practical interpretations established to fulfill these core objectives. A given pipeline, designed, constructed and installed according to a pipeline code, is therefore the realization of a structure which, along its whole length, meets the applicable safety objectives of that code. The main objective of Pipeline Integrity Management is to control and document the integrity of the pipeline for its whole service life, and to do so in a cost-effective manner. DNV has a specific approach to RBI planning, starting with an initial qualitative assessment in which pipelines and damage types are ranked according to risk and the potential risk reduction offered by an inspection, and then carried forward to a quantitative detailed assessment, where the level of complexity and accuracy can vary with the availability of information and owner needs. Detailed assessment requires significant effort in data gathering. The findings depend on the accuracy of the inspection data and on DNV's interpretation of the pipeline reference system and simplifications in the inspection data reported. The following specific failure mechanisms were investigated: internal corrosion, external corrosion, third-party interference, landslides and black powder. RBI planning is, generally speaking, a 'living process'. In order to optimize future inspections, it is essential that the analyses utilize the most recent information regarding

  12. Integration of rocket turbine design and analysis through computer graphics

    Science.gov (United States)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  13. The LCOGT Science Archive and Data Pipeline

    Science.gov (United States)

    Lister, Tim; Walker, Z.; Ciardi, D.; Gelino, C. R.; Good, J.; Laity, A.; Swain, M.

    2013-01-01

    Las Cumbres Observatory Global Telescope (LCOGT) is building and deploying a world-wide network of optical telescopes dedicated to time-domain astronomy. In the past year, we have deployed and commissioned four new 1m telescopes at McDonald Observatory, Texas and at CTIO, Chile, with more to come at SAAO, South Africa and Siding Spring Observatory, Australia. To handle these new data sources coming from the growing LCOGT network, and to serve them to end users, we have constructed a new data pipeline and Science Archive. We describe the new LCOGT pipeline, currently under development and testing, which makes use of the ORAC-DR automated recipe-based data reduction pipeline and illustrate some of the new data products. We also present the new Science Archive, which is being developed in partnership with the Infrared Processing and Analysis Center (IPAC) and show some of the new features the Science Archive provides.

  14. FAST CALCULATION OF THE LOMB-SCARGLE PERIODOGRAM USING GRAPHICS PROCESSING UNITS

    International Nuclear Information System (INIS)

    Townsend, R. H. D.

    2010-01-01

    I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match eight CPU cores, and on a high-end GPU it is faster by a factor approaching 30. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
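
    For reference, the classical Lomb-Scargle periodogram that the GPU code evaluates is sketched below in direct O(N x Nf) NumPy form; this is the textbook formula, not the author's GPU kernel, and the test signal is synthetic.

```python
# Classical Lomb-Scargle periodogram for irregularly sampled time series.
import numpy as np

def lomb_scargle(t, y, freqs):
    y = y - y.mean()
    p = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2 * np.pi * f
        # Time offset tau makes the sine and cosine terms orthogonal.
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        p[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return p

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 100, 300))            # irregular sampling times
y = np.sin(2 * np.pi * 0.17 * t) + 0.5 * rng.standard_normal(t.size)
freqs = np.linspace(0.01, 0.5, 2000)
print(f"peak at {freqs[np.argmax(lomb_scargle(t, y, freqs))]:.3f} cycles/unit")
```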

  15. Canary: an atomic pipeline for clinical amplicon assays.

    Science.gov (United States)

    Doig, Kenneth D; Ellul, Jason; Fellowes, Andrew; Thompson, Ella R; Ryland, Georgina; Blombery, Piers; Papenfuss, Anthony T; Fox, Stephen B

    2017-12-15

    High throughput sequencing requires bioinformatics pipelines to process large volumes of data into meaningful variants that can be translated into a clinical report. These pipelines often suffer from a number of shortcomings: they lack robustness and have many components written in multiple languages, each with a variety of resource requirements. Pipeline components must be linked together with a workflow system to achieve the processing of FASTQ files through to a VCF file of variants. Crafting these pipelines requires considerable bioinformatics and IT skills beyond the reach of many clinical laboratories. Here we present Canary, a single program that can be run on a laptop, which takes FASTQ files from amplicon assays through to an annotated VCF file ready for clinical analysis. Canary can be installed and run with a single command using Docker containerization or run as a single JAR file on a wide range of platforms. Although it is a single utility, Canary performs all the functions present in more complex and unwieldy pipelines. All variants identified by Canary are 3' shifted and represented in their most parsimonious form to provide a consistent nomenclature, irrespective of sequencing variation. Further, proximate in-phase variants are represented as a single HGVS 'delins' variant. This allows for correct nomenclature and consequences to be ascribed to complex multi-nucleotide polymorphisms (MNPs), which are otherwise difficult to represent and interpret. Variants can also be annotated with hundreds of attributes sourced from MyVariant.info to give up to date details on pathogenicity, population statistics and in-silico predictors. Canary has been used at the Peter MacCallum Cancer Centre in Melbourne for the last 2 years for the processing of clinical sequencing data. By encapsulating clinical features in a single, easily installed executable, Canary makes sequencing more accessible to all pathology laboratories. Canary is available for download as source

  16. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution.

    Science.gov (United States)

    Correia, J R C C C; Martins, C J A P

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.

  17. General purpose graphics-processing-unit implementation of cosmological domain wall network evolution

    Science.gov (United States)

    Correia, J. R. C. C. C.; Martins, C. J. A. P.

    2017-10-01

    Topological defects unavoidably form at symmetry breaking phase transitions in the early universe. To probe the parameter space of theoretical models and set tighter experimental constraints (exploiting the recent advances in astrophysical observations), one requires more and more demanding simulations, and therefore more hardware resources and computation time. Improving the speed and efficiency of existing codes is essential. Here we present a general purpose graphics-processing-unit implementation of the canonical Press-Ryden-Spergel algorithm for the evolution of cosmological domain wall networks. This is ported to the Open Computing Language standard, and as a consequence significant speedups are achieved both in two-dimensional (2D) and 3D simulations.

  18. An Application of Graphics Processing Units to Geosimulation of Collective Crowd Behaviour

    Directory of Open Access Journals (Sweden)

    Cjoskāns Jānis

    2017-12-01

    The goal of the paper is to assess ways of improving the computational performance and efficiency of collective crowd behaviour simulation by using parallel computing methods implemented on a graphics processing unit (GPU). To perform an experimental evaluation of the benefits of parallel computing, a new GPU-based simulator prototype is proposed and its runtime performance is analysed. Based on practical examples of pedestrian dynamics geosimulation, the obtained performance measurements are compared to several other available multi-agent simulation tools to determine the efficiency of the proposed simulator, as well as to provide generic guidelines for improving the efficiency of parallel simulation of collective crowd behaviour.

  19. Solution of relativistic quantum optics problems using clusters of graphical processing units

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, D.F., E-mail: daviel.gordon@nrl.navy.mil; Hafizi, B.; Helle, M.H.

    2014-06-15

    Numerical solution of relativistic quantum optics problems requires high performance computing due to the rapid oscillations in a relativistic wavefunction. Clusters of graphical processing units are used to accelerate the computation of a time-dependent relativistic wavefunction in an arbitrary external potential. The stationary states in a Coulomb potential and uniform magnetic field are determined analytically and numerically, so that they can be used as initial conditions in fully time-dependent calculations. Relativistic energy levels in extreme magnetic fields are recovered as a means of validation. The relativistic ionization rate is computed for an ion illuminated by a laser field near the usual barrier suppression threshold, and the ionizing wavefunction is displayed.

  20. LNG transport through pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Pfund, P; Philipps, A

    1975-01-01

    LNG pipelines could help solve some peakshaving problems if operated in conjunction with other facilities that could use the LNG cold recovered during regasification. In some areas at present, LNG is delivered by tanker and regasified near the terminal for transmission through conventional gas pipelines. In other places, utilities liquefy natural gas for easy storage and later peakshaving use. The only way to avoid the expensive second liquefaction step would be to convey imported LNG through a suitably designed LNG pipeline. The technical problems involved in LNG pipeline construction have basically been solved in recent years, but the pipelines actually constructed have been only short ones. To be economically justified, long-distance LNG lines require additional credit, which could be obtained by selling the LNG cold recovered during regasification to industrial users located in or near the points of gas consumption. Technical details presented cover the pipe material, stress relief, steel composition, pressure enthalpy, bellows-type expansion joints, and mechanical and thermal insulation.

  1. Planned and proposed pipeline regulations

    International Nuclear Information System (INIS)

    De Leon, C.

    1992-01-01

    The Research and Special Programs Administration administers the Natural Gas Pipeline Safety Act of 1968 (NGPSA) and the Hazardous Liquid Pipeline Safety Act of 1979 (HLPSA). The RSPA issues and enforces design, construction, operation and maintenance regulations for natural gas pipelines and hazardous liquid pipelines. This paper discusses a number of proposed and pending safety regulations and legislative initiatives currently being considered by the RSPA and the US Congress. Some new regulations have been enacted. The next few years will see a great deal of regulatory activity regarding natural gas and hazardous liquid pipelines, much of it resulting from legislative requirements. The office of Pipeline Safety is currently conducting a study to streamline its operations. This study is analyzing the office's business, social and technical operations with the goal of improving overall efficiency, effectiveness, productivity and job satisfaction to meet the challenges of the future

  2. Chechnya: the pipeline front

    Energy Technology Data Exchange (ETDEWEB)

    Anon,

    1999-11-01

    This article examines the impact of the Russian campaign against Chechnya on projects for oil and gas pipelines from the new Caspian republics, which are seeking financial support. Topics discussed include the pipeline transport of oil from Azerbaijan through Chechnya to the Black Sea, the use of oil money to finance the war, the push for non-Russian export routes, the financing of pipelines, the impact of the war on the supply of Russian and Turkmenistan gas to Turkey, the proposed construction of the Trans Caspian pipeline, the weakening of trust between Russia and its neighbours, and the potential for trans Caucasus republics to look to western backers due to the instability of the North Caucasus. (UK)

  3. Use of general purpose graphics processing units with MODFLOW

    Science.gov (United States)

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
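
    As a CPU-side sketch of the scheme described, the SciPy code below implements a Jacobi-preconditioned conjugate gradient on a CSR matrix; in the UPCG solver the same sparse matrix-vector and vector-vector kernels run on the GPGPU. All names are ours, not MODFLOW's.

    ```python
    import numpy as np
    import scipy.sparse as sp

    def jacobi_pcg(A, b, tol=1e-8, maxiter=500):
        """Conjugate gradient with a Jacobi (diagonal) preconditioner."""
        A = sp.csr_matrix(A)
        minv = 1.0 / A.diagonal()              # Jacobi preconditioner
        x = np.zeros_like(b, dtype=float)
        r = b - A @ x
        z = minv * r
        p = z.copy()
        rz = r @ z
        for _ in range(maxiter):
            Ap = A @ p                          # the SpMV kernel (GPU-side in UPCG)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = minv * r
            rz, rz_old = r @ z, rz
            p = z + (rz / rz_old) * p
        return x
    ```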

  4. Bio-Docklets: virtualization containers for single-step execution of NGS pipelines.

    Science.gov (United States)

    Kim, Baekdoo; Ali, Thahmina; Lijeron, Carlos; Afgan, Enis; Krampis, Konstantinos

    2017-08-01

    Processing of next-generation sequencing (NGS) data requires significant technical skills, involving installation, configuration, and execution of bioinformatics data pipelines, in addition to specialized postanalysis visualization and data mining software. In order to address some of these challenges, developers have leveraged virtualization containers toward seamless deployment of preconfigured bioinformatics software and pipelines on any computational platform. We present an approach for abstracting the complex data operations of multistep, bioinformatics pipelines for NGS data analysis. As examples, we have deployed 2 pipelines for RNA sequencing and chromatin immunoprecipitation sequencing, preconfigured within Docker virtualization containers we call Bio-Docklets. Each Bio-Docklet exposes a single data input and output endpoint and from a user perspective, running the pipelines as simply as running a single bioinformatics tool. This is achieved using a "meta-script" that automatically starts the Bio-Docklets and controls the pipeline execution through the BioBlend software library and the Galaxy Application Programming Interface. The pipeline output is postprocessed by integration with the Visual Omics Explorer framework, providing interactive data visualizations that users can access through a web browser. Our goal is to enable easy access to NGS data analysis pipelines for nonbioinformatics experts on any computing environment, whether a laboratory workstation, university computer cluster, or a cloud service provider. Beyond end users, the Bio-Docklets also enables developers to programmatically deploy and run a large number of pipeline instances for concurrent analysis of multiple datasets. © The Authors 2017. Published by Oxford University Press.

  5. A novel approach to pipeline tensioner modeling

    Energy Technology Data Exchange (ETDEWEB)

    O'Grady, Robert; Ilie, Daniel; Lane, Michael [MCS Software Division, Galway (Ireland)

    2009-07-01

    As subsea pipeline developments continue to move into deep and ultra-deep water locations, there is an increasing need for the accurate prediction of expected pipeline fatigue life. A significant factor that must be considered as part of this process is the fatigue damage sustained by the pipeline during installation. The magnitude of this installation-related damage is governed by a number of different agents, one of which is the dynamic behavior of the tensioner systems during pipe-laying operations. There are a variety of traditional finite element methods for representing dynamic tensioner behavior. These existing methods, while basic in nature, have been proven to provide adequate forecasts in terms of the dynamic variation in typical installation parameters such as top tension and sagbend/overbend strain. However, due to the simplicity of these current approaches, some of them tend to over-estimate the frequency of tensioner pay out/in under dynamic loading. This excessive level of pay out/in motion results in the prediction of additional stress cycles at certain roller beds, which in turn leads to the prediction of unrealistic fatigue damage to the pipeline. This unwarranted fatigue damage then equates to an over-conservative value for the accumulated damage experienced by a pipeline weld during installation, and so leads to a reduction in the estimated fatigue life for the pipeline. This paper describes a novel approach to tensioner modeling which allows for greater control over the velocity of dynamic tensioner pay out/in and so provides a more accurate estimation of fatigue damage experienced by the pipeline during installation. The paper reports on a case study, as outlined in the following section, in which a comparison is made between results from this new tensioner model and from a more conventional approach. The comparison considers typical installation parameters as well as an in-depth look at the predicted fatigue damage for the two methods.

  6. Thermal expansion absorbing structure for pipeline

    International Nuclear Information System (INIS)

    Nagata, Takashi; Yamashita, Takuya.

    1995-01-01

    A thermal expansion absorbing structure is disposed at the end of a pipeline, forming a U-shaped cross-section that connects a semi-circular torus shell and a short double-walled cylindrical tube. The U-shaped longitudinal cross-section deforms in accordance with the shrinking deformation of the pipeline and thereby absorbs thermal expansion. Namely, since the central lines of the outer and inner tubes of the double-walled cylindrical tube incline as the pipeline deforms under thermal expansion, the expansion can be absorbed by a simple configuration, contributing to safety. The entire length of the pipeline can be greatly shortened when the structure is applied to a pipeline operating at high temperature, compared with the conventional method of routing the pipeline using only elbows. In particular, when it is applied to the pipelines of an FBR-type reactor, the construction cost of the primary-system facility can be greatly reduced. The structure can also be applied to pipelines in ordinary chemical plants and to any other structure requiring the absorption of deformation. (N.H.)

  7. An overview of Samarco's pipelines and their KPI'S (Key Performance Indicators)

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Ivan; Andrade, Ricardo Bruno Nebias; Silva, Tatiana [Samarco Mineracao S.A., Belo Horizonte, MG (Brazil)

    2009-07-01

    Samarco owns and operates the world's biggest slurry pipeline grid, composed of three pipelines with a total length of 801 km. This paper presents some important key performance indicators (KPIs) of Samarco's pipelines: pumped tonnage, slurry concentration, availability and safety. It also presents the main features and the flow sheet of each pipeline. The objective of this paper is to give a brief overview of Samarco's pipeline operations and of how it was possible to improve the main KPIs presented. (author)

  8. Fishing intensity around the BBL pipeline

    NARCIS (Netherlands)

    Hintzen, Niels

    2016-01-01

    Wageningen Marine Research was requested by ACRB B.V. to investigate the fishing activities around the BBL pipeline. This gas pipeline crosses the southern North Sea from Balgzand (near Den Helder) in the Netherlands to Bacton in the UK (230km). This pipeline is abbreviated as the BBL pipeline. Part

  9. Diagnosing in building main pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Telegin, L.G.; Gorelov, A.S.; Kurepin, B.N.; Orekhov, V.I.; Vasil'yev, G.G.; Yakovlev, Ye. I.

    1984-01-01

    General principles of technical diagnostics in the construction of main pipelines are examined. A technique is presented for diagnostics during construction, as well as for diagnosing the technical state of pipeline-construction machines and mechanisms. The survey material could be used in organizing the construction of main pipelines.

  10. Rapid Large Scale Reprocessing of the ODI Archive using the QuickReduce Pipeline

    Science.gov (United States)

    Gopu, A.; Kotulla, R.; Young, M. D.; Hayashi, S.; Harbeck, D.; Liu, W.; Henschel, R.

    2015-09-01

    The traditional model of astronomers collecting their observations as raw instrument data is being increasingly replaced by astronomical observatories serving standard calibrated data products to observers and to the public at large once proprietary restrictions are lifted. For this model to be effective, observatories need the ability to periodically re-calibrate archival data products as improved master calibration products or pipeline improvements become available, and also to allow users to rapidly calibrate their data on-the-fly. Traditional astronomy pipelines are heavily I/O dependent and do not scale with increasing data volumes. In this paper, we present the One Degree Imager - Portal, Pipeline and Archive (ODI-PPA) calibration pipeline framework, which integrates the efficient and parallelized QuickReduce pipeline to enable a large number of simultaneous, parallel data reduction jobs - initiated by operators and/or users - while also ensuring rapid processing times and full data provenance. Our integrated pipeline system allows re-processing of the entire ODI archive (~15,000 raw science frames, ~3.0 TB compressed) within ~18 hours using twelve 32-core compute nodes on the Big Red II supercomputer. Our flexible, fast, easy-to-operate, and highly scalable framework improves access to ODI data, in particular when data rates double with an upgraded focal plane (scheduled for 2015), and also serves as a template for future data processing infrastructure across the astronomical community and beyond.
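
    The node-level pattern, fanning independent reduction jobs out over many cores, can be sketched with Python's standard library; `calibrate` below is a hypothetical stand-in for one QuickReduce job, not ODI-PPA code, which additionally schedules jobs across cluster nodes.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def calibrate(frame_path):
        """Hypothetical stand-in for one reduction job: read a raw
        frame, apply master calibrations, write the data product."""
        ...

    def reprocess(frame_paths, workers=32):
        """Run independent calibration jobs in parallel on one node."""
        with ProcessPoolExecutor(max_workers=workers) as pool:
            # Blocks until every frame in the backlog is reduced.
            list(pool.map(calibrate, frame_paths))
    ```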

  11. Topographic Digital Raster Graphics - USGS DIGITAL RASTER GRAPHICS

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — USGS Topographic Digital Raster Graphics downloaded from LABINS (http://data.labins.org/2003/MappingData/drg/drg_stpl83.cfm). A digital raster graphic (DRG) is a...

  12. Pipeline integrity handbook risk management and evaluation

    CERN Document Server

    Singh, Ramesh

    2013-01-01

    Based on over 40 years of experience in the field, Ramesh Singh goes beyond corrosion control, providing techniques for addressing present and future integrity issues. Pipeline Integrity Handbook provides pipeline engineers with the tools to evaluate and inspect pipelines, safeguard the life cycle of their pipeline asset and ensure that they are optimizing delivery and capability. Presented in easy-to-use, step-by-step order, Pipeline Integrity Handbook is a quick reference for day-to-day use in identifying key pipeline degradation mechanisms and threats to pipeline integrity. The book begins

  13. FMAP: Functional Mapping and Analysis Pipeline for metagenomics and metatranscriptomics studies.

    Science.gov (United States)

    Kim, Jiwoong; Kim, Min Soo; Koh, Andrew Y; Xie, Yang; Zhan, Xiaowei

    2016-10-10

    Given the lack of a complete and comprehensive library of microbial reference genomes, determining the functional profile of diverse microbial communities is challenging. The available functional analysis pipelines lack several key features: (i) an integrated alignment tool, (ii) operon-level analysis, and (iii) the ability to process large datasets. Here we introduce our open-sourced, stand-alone functional analysis pipeline for analyzing whole metagenomic and metatranscriptomic sequencing data, FMAP (Functional Mapping and Analysis Pipeline). FMAP performs alignment, gene family abundance calculations, and statistical analysis (three levels of analyses are provided: differentially-abundant genes, operons and pathways). The resulting output can be easily visualized with heatmaps and functional pathway diagrams. FMAP functional predictions are consistent with currently available functional analysis pipelines. FMAP is a comprehensive tool for providing functional analysis of metagenomic/metatranscriptomic sequencing data. With the added features of integrated alignment, operon-level analysis, and the ability to process large datasets, FMAP will be a valuable addition to the currently available functional analysis toolbox. We believe that this software will be of great value to the wider biology and bioinformatics communities.

  14. Internal corrosion control of northern pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Papavinasam, S.

    2005-02-01

    The general causes of internal corrosion in pipelines were discussed along with the methods to control them. Efficient methods are needed to determine chemical efficiency for mitigating internal corrosion in transmission pipelines, particularly those used in environmentally sensitive regions in the Arctic where harsh environmental conditions prevail. According to the Office of Pipeline Safety, 15 per cent of pipeline failures in the United States from 1994 to 2000 were caused by internal corrosion. Since pipelines in the United States are slightly older than Canadian pipelines, internal corrosion is a significant issue from a Canadian perspective. There are 306,618 km of energy-related pipelines in western Canada. Between April 2001 and March 2002 there were 808 failures, of which 425 resulted from internal corrosion. The approach to controlling internal corrosion comprises dehydrating the gases at production facilities; controlling the quality of corrosive gases such as carbon dioxide and hydrogen sulphide; and using internal coatings. These approaches are adequate when supplemented by an integrity management program that ensures corrosive liquids do not collect at localized areas over the operational lifetime of the pipelines. It was suggested that modeling of pipeline operations may need improvement. This paper described the causes, prediction and control of internal pitting corrosion. It was concluded that carbon steel equipment can continue to be used reliably and safely as pipeline material for northern pipelines if the causes that lead to internal corrosion are scientifically and accurately predicted, and if corrosion inhibitors are properly evaluated and applied. 5 figs.

  15. Implementation and design of a teleoperation system based on a VMEBUS/68020 pipelined architecture

    Science.gov (United States)

    Lee, Thomas S.

    1989-01-01

    A pipelined control design and architecture for a force-feedback teleoperation system is described; the system is being implemented at the Jet Propulsion Laboratory and will be integrated with the autonomous portion of the testbed to achieve shared control. At the local site, the operator sees real-time force/torque displays and moves two 6-degree-of-freedom (dof) force-reflecting hand-controllers as his hands feel the contact forces/torques generated at the remote site where the robots interact with the environment. He also uses a graphical user menu to monitor robot states and specify system options. The teleoperation software is written in the C language and runs on MC68020-based processor boards in the VME chassis, which utilizes a real-time operating system; the hardware is configured to realize a four-stage pipeline configuration. The environment is very flexible, such that the system can easily be configured as a stand-alone facility for performing independent research in human factors, force control, and time-delayed systems.

  16. System of maintenance of sustainability of oil pipelines

    International Nuclear Information System (INIS)

    Aleskerov, R.

    2005-01-01

    The development of ecological science, which defines the interrelation and interaction of living and non-living matter, opens new opportunities for solving the problem of systemic maintenance of the stability of oil pipelines and other engineering structures and devices of strategic purpose. This work presents a methodology for the systemic maintenance of oil pipeline stability. It is known that in the transport of oil and gas a large number of automatic and electronic devices are used for monitoring and signalling the parameters of dangerous and harmful factors, the condition of the technological and test equipment, and for diagnostics and control of the pipelines. The safety parameters of an oil pipeline are monitored throughout its operation, taking into account the influence of severe climatic conditions along the entire line (1, 2, 3). The stable operation of the various parts of an oil pipeline system therefore depends on the reliability and accuracy of these devices and instruments. However, the influence of variations in geomagnetic fields and of geodynamic processes, which corrupt instrument readings and so undermine the reliability of the entire oil pipeline system, is not taken into account; this in turn leads to failures and to losses of human and natural resources. At present, according to the accepted methodology for occupational safety, only the potential hazards of human activity are considered, with protective measures developed subsequently (4). On this basis, the entire surrounding material world is divided into the following objects, which together form the working conditions: subjects of work; means of production; products of work; the industrial environment; the technological process; the environmental-climatic complex; fauna and flora; and people (human labour). As is apparent from the above, the accepted methodology considers only the potential hazards of human activity and the corresponding environmental-climatic conditions.

  17. Trouble in the pipeline?

    Energy Technology Data Exchange (ETDEWEB)

    Snieckus, Darius

    2002-10-01

    The author provides a commentary on the political, economic, environmental and social problems facing the proposed US$3 billion Baku-Tbilisi-Ceyhan export pipeline. The 1760 km long pipeline has been designed to carry 1 million b/d of crude oil from the Caspian Sea to Turkey's Mediterranean coast. The pipeline is being constructed by a BP-led consortium made up of Socar, Statoil, Unocal, TPAO, Eni, Itochu, Amerada Hess, TotalFinaElf and BP. (UK)

  18. Canadian pipeline transportation system : transportation assessment

    International Nuclear Information System (INIS)

    2009-07-01

    In addition to regulating the construction and operation of 70,000 km of oil and natural gas pipelines in Canada, the National Energy Board (NEB) regulates the trade of natural gas, oil and natural gas liquids. This report provided an assessment of the Canadian hydrocarbon transportation system in relation to its ability to provide a robust energy infrastructure. Data was collected from NEB-regulated pipeline companies and a range of publicly available sources to determine if adequate pipeline capacity is in place to transport products to consumers. The NEB also used throughput and capacity information received from pipeline operators as well as members of the investment community. The study examined price differentials compared with firm service tolls for transportation paths, as well as capacity utilization on pipelines and the degree of apportionment on major oil pipelines. This review indicated that in general, the Canadian pipeline transportation system continues to work effectively, with adequate pipeline capacity in place to move products to consumers who need them. 9 tabs., 30 figs., 3 appendices.

  19. 77 FR 6857 - Pipeline Safety: Notice of Public Meetings on Improving Pipeline Leak Detection System...

    Science.gov (United States)

    2012-02-09

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket ID... installed to lessen the volume of natural gas and hazardous liquid released during catastrophic pipeline... p.m. Panel 3: Considerations for Natural Gas Pipeline Leak Detection Systems 3:30 p.m. Break 3:45 p...

  20. Graphics in DAQSIM

    International Nuclear Information System (INIS)

    Wang, C.C.; Booth, A.W.; Chen, Y.M.; Botlo, M.

    1993-06-01

    At the Superconducting Super Collider Laboratory (SSCL) a tool called DAQSIM has been developed to study the behavior of Data Acquisition (DAQ) systems. This paper reports and discusses the graphics used in DAQSIM. DAQSIM graphics includes graphical user interface (GUI), animation, debugging, and control facilities. DAQSIM graphics not only provides a convenient DAQ simulation environment; it also serves as an efficient manager in simulation development and verification.

  1. SU-E-P-59: A Graphical Interface for XCAT Phantom Configuration, Generation and Processing

    International Nuclear Information System (INIS)

    Myronakis, M; Cai, W; Dhou, S; Cifter, F; Lewis, J; Hurwitz, M

    2015-01-01

    Purpose: To design a comprehensive open-source, publicly available, graphical user interface (GUI) to facilitate the configuration, generation, processing and use of the 4D Extended Cardiac-Torso (XCAT) phantom. Methods: The XCAT phantom includes over 9000 anatomical objects as well as respiratory, cardiac and tumor motion. It is widely used for research studies in medical imaging and radiotherapy. The phantom generation process involves the configuration of a text script to parameterize the geometry, motion, and composition of the whole body and objects within it, and to generate simulated PET or CT images. To avoid the need for manual editing or script writing, our MATLAB-based GUI uses slider controls, drop-down lists, buttons and graphical text input to parameterize and process the phantom. Results: Our GUI can be used to: a) generate parameter files; b) generate the voxelized phantom; c) combine the phantom with a lesion; d) display the phantom; e) produce average and maximum intensity images from the phantom output files; f) incorporate irregular patient breathing patterns; and g) generate DICOM files containing phantom images. The GUI provides local help information using tool-tip strings on the currently selected phantom, minimizing the need for external documentation. The DICOM generation feature is intended to simplify the process of importing the phantom images into radiotherapy treatment planning systems or other clinical software. Conclusion: The GUI simplifies and automates the use of the XCAT phantom for imaging-based research projects in medical imaging or radiotherapy. This has the potential to accelerate research conducted with the XCAT phantom, or to ease the learning curve for new users. This tool does not include the XCAT phantom software itself. We would like to acknowledge funding from MRA, Varian Medical Systems Inc.

  2. SU-E-P-59: A Graphical Interface for XCAT Phantom Configuration, Generation and Processing

    Energy Technology Data Exchange (ETDEWEB)

    Myronakis, M; Cai, W; Dhou, S; Cifter, F; Lewis, J [Brigham and Women’s Hospital, Boston, MA (United States); Hurwitz, M [Newton, MA (United States)

    2015-06-15

    Purpose: To design a comprehensive open-source, publicly available, graphical user interface (GUI) to facilitate the configuration, generation, processing and use of the 4D Extended Cardiac-Torso (XCAT) phantom. Methods: The XCAT phantom includes over 9000 anatomical objects as well as respiratory, cardiac and tumor motion. It is widely used for research studies in medical imaging and radiotherapy. The phantom generation process involves the configuration of a text script to parameterize the geometry, motion, and composition of the whole body and objects within it, and to generate simulated PET or CT images. To avoid the need for manual editing or script writing, our MATLAB-based GUI uses slider controls, drop-down lists, buttons and graphical text input to parameterize and process the phantom. Results: Our GUI can be used to: a) generate parameter files; b) generate the voxelized phantom; c) combine the phantom with a lesion; d) display the phantom; e) produce average and maximum intensity images from the phantom output files; f) incorporate irregular patient breathing patterns; and g) generate DICOM files containing phantom images. The GUI provides local help information using tool-tip strings on the currently selected phantom, minimizing the need for external documentation. The DICOM generation feature is intended to simplify the process of importing the phantom images into radiotherapy treatment planning systems or other clinical software. Conclusion: The GUI simplifies and automates the use of the XCAT phantom for imaging-based research projects in medical imaging or radiotherapy. This has the potential to accelerate research conducted with the XCAT phantom, or to ease the learning curve for new users. This tool does not include the XCAT phantom software itself. We would like to acknowledge funding from MRA, Varian Medical Systems Inc.
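
    The average- and maximum-intensity images mentioned in both records reduce to single-axis projections of the voxel volume; a NumPy sketch follows (the z-y-x array layout is our assumption, not the tool's documented convention).

    ```python
    import numpy as np

    def projections(volume):
        """Maximum- and average-intensity projections along the z axis
        of a voxelized phantom stored as a z-y-x NumPy array."""
        mip = volume.max(axis=0)    # maximum intensity projection
        aip = volume.mean(axis=0)   # average intensity projection
        return mip, aip
    ```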

  3. 75 FR 4134 - Pipeline Safety: Leak Detection on Hazardous Liquid Pipelines

    Science.gov (United States)

    2010-01-26

    ... safety study on pipeline Supervisory Control and Data Acquisition (SCADA) systems (NTSB/SS-05/02). The... indications of a leak on the SCADA interface was the impetus for this study. The NTSB examined 13 hazardous... pipelines, the line balance technique for leak detection can often be performed with manual calculations...

  4. Contemporary methods of emergency repair works on transit pipelines. Repair works on in-service pipelines

    International Nuclear Information System (INIS)

    Olma, T.; Winckowski, J.

    2007-01-01

    The paper presents modern methods and relevant technologies for repairing pipeline failures, based on the TD Williamson technique for hermetic plugging of gas pipelines without interrupting service. Rules for managing emergency situations on the Polish section of the Yamal - Europe transit gas pipeline are also discussed. (author)

  5. Northern pipelines : backgrounder

    International Nuclear Information System (INIS)

    2002-04-01

    Most analysts agree that demand for natural gas in North America will continue to grow. Favourable market conditions created by rising demand and declining production have sparked renewed interest in northern natural gas development. The 2002 Annual Energy Outlook forecasted U.S. consumption to increase at an annual average rate of 2 per cent from 22.8 trillion cubic feet to 33.8 TCF by 2020, mostly due to rapid growth in demand for electric power generation. Natural gas prices are also expected to increase at an annual average rate of 1.6 per cent, reaching $3.26 per thousand cubic feet in 2020. There are currently 3 proposals for pipelines to move northern gas to US markets. They include a stand-alone Mackenzie Delta Project, the Alaska Highway Pipeline Project, and an offshore route that would combine Alaskan and Canadian gas in a pipeline across the floor of the Beaufort Sea. Current market conditions and demand suggest that the projects are not mutually exclusive, but complementary. The factors that differentiate northern pipeline proposals are reserves, preparedness for market, costs, engineering, and environmental differences. Canada has affirmed its role to provide the regulatory and fiscal certainty needed by industry to make investment decisions. The Government of the Yukon does not believe that the Alaska Highway Project will shut in Mackenzie Delta gas, but will instead pave the way for development of a new northern natural gas industry. The Alaska Highway Pipeline Project will bring significant benefits for the Yukon, the Northwest Territories and the rest of Canada. Unresolved land claims are one of the challenges that has to be addressed for both Yukon and the Northwest Territories, as the proposed Alaska Highway Pipeline will travel through the traditional territories of several Yukon First Nations. 1 tab., 4 figs.

  6. Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units

    Science.gov (United States)

    Demchik, Vadim

    2011-03-01

    Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU is discussed. The speed-up obtained is a factor of hundreds in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on the GPU in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.
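
    Of the generators benchmarked, Marsaglia's XOR128 is compact enough to sketch in full; the Python below mirrors the 32-bit integer arithmetic a GPU thread would perform, using Marsaglia's canonical seeds as defaults. On the GPU, each thread runs its own instance with distinct seeds.

    ```python
    MASK = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic

    def xor128(x=123456789, y=362436069, z=521288629, w=88675123):
        """Marsaglia's XOR128 xor-shift generator, yielding 32-bit ints."""
        while True:
            t = (x ^ (x << 11)) & MASK
            x, y, z = y, z, w
            w = (w ^ (w >> 19) ^ t ^ (t >> 8)) & MASK
            yield w
    ```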

  7. Alternating current corrosion of cathodically protected pipelines: Discussion of the involved processes and their consequences on the critical interference values

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, M. [SGK Swiss Society for Corrosion Protection, Technoparkstr. 1, CH-8005 Zuerich (Switzerland)

    2012-12-15

    Based on laboratory studies and model concepts, a profound understanding of the processes involved in ac corrosion and of the required limits has been obtained in recent years. However, there was no information on whether these thresholds could be effectively applied to pipelines or whether operational constraints would make their implementation impossible. Therefore, an extensive field test was carried out, which demonstrated the relevance of the laboratory tests for field application and confirmed all threshold values. Detailed analysis made it possible to explain the observed threshold values on the basis of thermodynamic and kinetic considerations. The results summarized in the present work are the basis for the normative work defining the thresholds for the operating conditions of cathodically protected pipelines. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  8. Pipeline oil fire detection with MODIS active fire products

    Science.gov (United States)

    Ogungbuyi, M. G.; Martinez, P.; Eckardt, F. D.

    2017-12-01

    We investigate 85,129 MODIS satellite active fire events from 2007 to 2015 in the Niger Delta of Nigeria. The region is the oil base of the Nigerian economy and the hub of oil exploration where oil facilities (i.e. flowlines, flow stations, trunklines, oil wells and oil fields) are domiciled, and from where crude oil and refined products are transported to different Nigerian locations through a network of pipeline systems. Pipelines and other oil facilities are consistently susceptible to oil leaks due to operational or maintenance error, and to acts of deliberate sabotage of the pipeline equipment, which often result in explosions and fire outbreaks. We used ground oil spill reports obtained from the National Oil Spill Detection and Response Agency (NOSDRA) database (see www.oilspillmonitor.ng) to validate the MODIS satellite data. The NOSDRA database shows an estimate of 10,000 spill events from 2007 - 2015. The spill events were filtered to include the largest spills by volume and events occurring only in the Niger Delta (i.e. 386 spills). By projecting both the MODIS fires and the spills as 'input vector' layers with 'Points' geometry, and the Nigerian pipeline networks as 'from vector' layers with 'LineString' geometry in a geographical information system, we extracted in a spatial vector analysis the MODIS events nearest to the pipelines (i.e. 2192 events within a 1000 m distance). The extraction distance follows the global Right of Way (ROW) practice in pipeline management, which earmarks a 30 m strip of land for the pipeline. The KML files of the extracted fires, viewed in Google Maps, confirmed that the fires originated at oil facilities. Land cover mapping confirmed the fire anomalies. The aim of the study is to propose near-real-time monitoring of spill events along pipeline routes using the 250 m spatial resolution of the MODIS active fire detection sensor whenever such spills are accompanied by fire events in the study location.
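
    The nearest-distance extraction step can be sketched with Shapely; coordinates are assumed to be in a projected CRS measured in metres (lon/lat degrees would need reprojection first), and all names are illustrative.

    ```python
    from shapely.geometry import LineString, Point

    def fires_near_pipeline(fire_points, pipeline_coords, max_dist=1000.0):
        """Keep fire detections within max_dist metres of a pipeline.

        fire_points: iterable of (x, y) tuples in a metric CRS.
        pipeline_coords: ordered (x, y) vertices of the pipeline route.
        """
        pipeline = LineString(pipeline_coords)
        return [p for p in fire_points
                if pipeline.distance(Point(p)) <= max_dist]
    ```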

  9. The inspection of a radiologically contaminated pipeline using a teleoperated pipe crawler

    International Nuclear Information System (INIS)

    Fogle, R.F.; Kuelske, K.; Kellner, R.A.

    1995-01-01

    In the 1950s, the Savannah River Site built an open, unlined retention basin to temporarily store potentially radionuclide-contaminated cooling water from a chemical separations process and storm water drainage from a nearby waste management facility that stored large quantities of nuclear fission byproducts in carbon steel tanks. The retention basin was retired from service in 1972 when a new, lined basin was completed. In 1978, the old retention basin was excavated, backfilled with uncontaminated dirt, and covered with grass. At the same time, much of the underground process pipeline leading to the basin was abandoned. Since the closure of the retention basin, new environmental regulations require that the basin undergo further assessment to determine whether additional remediation is required. A visual and radiological inspection of the pipeline was necessary to aid in the remediation decision-making process for the retention basin system. A teleoperated pipe crawler inspection system was developed to survey the abandoned sections of underground pipelines leading to the retired retention basin. This paper will describe the background to this project, the scope of the investigation, the equipment requirements, and the results of the pipeline inspection.

  10. A Hybrid Scheme Based on Pipelining and Multitasking in Mobile Application Processors for Advanced Video Coding

    Directory of Open Access Journals (Sweden)

    Muhammad Asif

    2015-01-01

    One of the key requirements for mobile devices is to provide high-performance computing at low power consumption. The processors used in these devices provide specific hardware resources to handle computationally intensive video processing and interactive graphical applications. Moreover, processors designed for low-power applications may introduce limitations on the availability and usage of resources, which present additional challenges to system designers. Owing to the specific design of the JZ47x series of mobile application processors, a hybrid software-hardware implementation scheme for an H.264/AVC encoder is proposed in this work. The proposed scheme distributes the encoding tasks among hardware and software modules. A series of optimization techniques are developed to speed up memory access and data transfer among memories. Moreover, an efficient data reuse design is proposed for the deblocking filter video processing unit to reduce memory accesses. Furthermore, fine-grained macroblock (MB) level parallelism is effectively exploited and a pipelined approach is proposed for efficient utilization of the hardware processing cores. Finally, based on the parallelism in the proposed design, encoding tasks are distributed between two processing cores. Experiments show that the hybrid encoder is 12 times faster than a highly optimized sequential encoder thanks to the proposed techniques.
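
    The MB-level pipelining idea, one stage working on macroblock i while the next stage is still busy with macroblock i-1, can be sketched with a bounded queue between two workers. This illustrates the pattern only, not the JZ47x implementation; `stage1` and `stage2` are placeholders for, say, motion estimation and deblocking.

    ```python
    import queue
    import threading

    def pipelined(macroblocks, stage1, stage2, depth=8):
        """Run two processing stages concurrently, connected by a
        bounded queue so the stages overlap on successive macroblocks."""
        q = queue.Queue(maxsize=depth)
        out = []

        def producer():
            for mb in macroblocks:
                q.put(stage1(mb))
            q.put(None)                 # end-of-stream sentinel

        threading.Thread(target=producer, daemon=True).start()
        while (item := q.get()) is not None:
            out.append(stage2(item))
        return out
    ```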

  11. Nova Gas's pipeline to Asia

    International Nuclear Information System (INIS)

    Lea, N.

    1996-01-01

    The involvement of the Calgary-based company NOVA Gas International (NGI) in Malaysia's peninsular gas utilization (PGU) project is described. Phases I and II of the project involved linking onshore gas processing plants with a natural gas transmission system. Phase III of the PGU project was a gas transmission pipeline running from midway up the west coast of peninsular Malaysia to the Malaysia-Thailand border. The complex 549 km pipeline project included route selection, survey and soil investigation, an archaeological study, an environmental impact assessment, land acquisition, meter-station construction, telecommunication systems and office buildings. NGI was the prime contractor on the project through a joint venture with OGP Technical Services, jointly owned by NGI and Petronas, the Malaysian state oil company. Much of NGI's success was attributed to excellent interpersonal skills, particularly NGI's ability to build confidence and credibility with its partners.

  12. High performance graphics processors for medical imaging applications

    International Nuclear Information System (INIS)

    Goldwasser, S.M.; Reynolds, R.A.; Talton, D.A.; Walsh, E.S.

    1989-01-01

    This paper describes a family of high-performance graphics processors with special hardware for interactive visualization of 3D human anatomy. The basic architecture expands to multiple parallel processors, each processor using pipelined arithmetic and logical units for high-speed rendering of Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) data. User-selectable display alternatives include multiple 2D axial slices, reformatted images in sagittal or coronal planes and shaded 3D views. Special facilities support applications requiring color-coded display of multiple datasets (such as radiation therapy planning), or dynamic replay of time-varying volumetric data (such as cine-CT or gated MR studies of the beating heart). The current implementation is a single processor system which generates reformatted images in true real time (30 frames per second), and shaded 3D views in a few seconds per frame. It accepts full-scale medical datasets in their native formats, so that minimal preprocessing delay exists between data acquisition and display.
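
    The reformatted display modes amount to orthogonal slices through the volume, which the special-purpose hardware extracts fast enough for real time; a NumPy sketch (assuming a z-y-x layout, our assumption) shows the data access pattern. The shaded 3D views additionally need a ray-casting or surface-shading pass not shown here.

    ```python
    import numpy as np

    def reformat(volume, i, j, k):
        """Orthogonal slices of a z-y-x CT/MR volume: the axial,
        coronal and sagittal views listed in the record above."""
        axial    = volume[i, :, :]   # fixed z
        coronal  = volume[:, j, :]   # fixed y
        sagittal = volume[:, :, k]   # fixed x
        return axial, coronal, sagittal
    ```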

  13. PLUGGING AND UNPLUGGING OF WASTE TRANSFER PIPELINES

    International Nuclear Information System (INIS)

    Ebadian, M.A.

    1999-01-01

    This project, which began in FY97, involves both flow loop research on the plugging and unplugging of waste transfer pipelines and large-scale industrial equipment tests of plug locating and unplugging technologies. In FY98, the related work was performed under the project name "Mixing, Settling, and Pipe Unplugging of Waste Transfer Lines." Mixing, settling, and pipeline plugging and unplugging are critical to the design and maintenance of a waste transfer pipeline system, especially for High-Level Waste (HLW) pipeline transfer. The major objective of this work is to recreate pipeline plugging conditions for equipment testing of plug locating and removal, and to provide systematic operating data for the modification of equipment design and the enhancement of the performance of waste transfer lines used at DOE sites. As the waste tank clean-out and decommissioning program becomes active at the DOE sites, there is an increasing potential that the waste slurry transfer lines will become plugged and unable to transport waste slurry from one tank to another or from the mixing tank to processing facilities. Transfer systems may become plugged if the solids concentration of the material being transferred increases beyond the capability of the prime mover or if upstream mixing is inadequately performed. Plugging can occur due to solids settling in the mixing tank, the pumping system, or the transfer lines. In order to enhance and optimize slurry removal and transfer, refined and reliable data on the mixing, sampling, and pipe unplugging systems must be obtained under both laboratory-scale and simulated in-situ operating conditions.

  14. Facilitating major additions to gas pipeline capacity: innovative approaches to financing, contracting, and regulation

    International Nuclear Information System (INIS)

    Schlesinger, B.; George, R.

    1997-01-01

    The North American gas pipeline industry is in the process of changing from a highly regulated merchant business to a less-regulated, more competitive, transportation industry. This has changed the risk profiles of many companies. This study examined various innovative approaches to successfully financing major pipeline projects, emphasizing pipeline capacity financing, contractual terms between shippers and pipelines, and regulatory developments. Besides suggesting options to enhance prospects for financing major pipeline expansion projects, the study also aimed at creating a better understanding of the regulatory, market and commercial changes in the pipeline industry and their financing implications. The study also includes a review of the evolution of gas markets and a record of consultations with lenders, producers, marketers and users. Innovative financing, contracting and regulatory solutions are identified and assessed. 25 refs., 17 tabs., 16 figs.

  15. Crude oil pipeline expansion summary

    International Nuclear Information System (INIS)

    2005-02-01

    The Canadian Association of Petroleum Producers has been working with producers to address issues associated with the development of new pipeline capacity from western Canada. This document presents an assessment of the need for additional oil pipeline capacity given the changing mix of crude oil types and forecast supply growth. It is of particular interest to crude oil producers and adds to the information available to market participants. While detailed, the underlying analysis does not account for all the factors that may come into play when individual market participants make choices about which expansions they may support. The key focus is on the importance of timely expansion; it is emphasized that if pipeline expansion lags crude supply growth, the consequences would be both significant and unacceptable. Obstacles to timely expansion are also discussed. The report reviews the production and supply forecasts, the existing crude oil pipeline infrastructure, opportunities for new market development, requirements for new pipeline capacity and tolling options for pipeline development. tabs., figs., 1 appendix.

  16. 77 FR 45417 - Pipeline Safety: Inspection and Protection of Pipeline Facilities After Railway Accidents

    Science.gov (United States)

    2012-07-31

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No... Accidents AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA); DOT.

  17. Development of ecologically safe method for main oil and gas pipeline trenching

    Directory of Open Access Journals (Sweden)

    Akhmedov Asvar Mikdadovich

    2014-05-01

    The constructive, technical and technological reliability of a main pipeline ensures ecological safety at every stage of its life cycle, from project preparation to the end of operation. Even during the transition to a new life-cycle stage, whether the pipeline needs major repairs or reconstruction, technical and technological solutions should be found that preserve the ecological stability of the natural-anthropogenic system. The development of environmentally protective technologies for the construction, reconstruction and major repair of main pipelines is important not only regionally but for ecological safety globally. The article presents a new method of trenching main oil and gas pipelines that preserves and increases ecological safety during service. An updated technological plan for overhauling a main oil and gas pipeline using the new trenching technology is given. The suggested technical solution protects the environment through the use of degradable shells, whose material decomposes into environmentally benign components: carbon dioxide, water and humus. The amount of atmospheric pollution also falls as the construction time and the quantity of equipment required decrease.

  18. Canadian pipeline contractors in holding pattern

    Energy Technology Data Exchange (ETDEWEB)

    Caron, G [Pe Ben Pipelines Ltd.; Osadchuk, V; Sharp, M; Stabback, J G

    1979-05-21

    A discussion of papers presented at a Pipe Line Contractors Association of Canada convention includes comments by G. Caron (Pe Ben Pipelines Ltd.) on the continued slack in big-inch pipeline construction into 1980 owing mainly to delayed U.S. and Canadian decisions on outstanding Alaska Highway gas pipeline issues and associated gas export bids and on the use of automatic welding for expeditious construction of the northern sections of the Alaska Highway pipeline; by V. Osadchuk (Majestic Wiley Contract. Ltd.) on the liquidation of surplus construction equipment because of these delays; by M. Sharp (Can. North. Pipeline Agency) on the need for close U.S. and Canadian governmental and industrial cooperation to permit an early 1980 start for construction of the prebuild sections of the Alaska pipeline; and by J. G. Stabback (Can. Natl. Energy Board) on the Alaska oil pipeline applications by Foothills Pipe Lines Ltd., Trans Mountain Pipe Line Co. Ltd., and Kitimat Pipe Line Ltd.

  19. R-GPU : A reconfigurable GPU architecture

    NARCIS (Netherlands)

    van den Braak, G.J.; Corporaal, H.

    2016-01-01

    Over the last decade, Graphics Processing Unit (GPU) architectures have evolved from a fixed-function graphics pipeline to a programmable, energy-efficient compute accelerator for massively parallel applications. The compute power arises from the GPU's Single Instruction/Multiple Threads (SIMT) execution model.

  20. Wave Pipelining Using Self Reset Logic

    Directory of Open Access Journals (Sweden)

    Miguel E. Litvin

    2008-01-01

    This study presents a novel design approach combining wave pipelining and self reset logic, which provides an elegant solution for high-speed data throughput with significant savings in power and area compared with other dynamic CMOS logic implementations. To overcome some limitations of existing SRL designs, we employ a new SRL family, namely dual-rail self reset logic with input disable (DRSRL-ID). These gates exhibit fairly constant timing parameters, especially the width of the output pulse, for varying fan-out and logic depth, helping to accommodate process, supply voltage, and temperature (PVT) variations. These properties simplify the implementation of wave-pipelined circuits. General timing analysis is provided and compared with previous implementations. Results of the circuit implementation are presented together with conclusions and future work.

  1. 76 FR 28326 - Pipeline Safety: National Pipeline Mapping System Data Submissions and Submission Dates for Gas...

    Science.gov (United States)

    2011-05-17

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR 191... Reports AGENCY: Pipeline and Hazardous Materials Safety Administration (PHMSA), DOT. ACTION: Issuance of... Pipeline and Hazardous Materials Safety Administration (PHMSA) published a final rule on November 26, 2010...

  2. Measures for security and supervision of pipelines; Massnahmen zur Pipeline-Sicherheit und -Ueberwachung

    Energy Technology Data Exchange (ETDEWEB)

    Horlacher, Hans-Burkhard [TU Dresden (Germany). Inst. fuer Wasserbau und Technische Hydromechanik; Giesecke, Juergen [Stuttgart Univ. (Germany). Inst. fuer Wasserbau

    2010-07-01

    In a previous publication, the two authors dealt with the hydraulic problems of mineral oil pipelines. The present report describes the measures mainly used to guarantee the safety of such pipelines. (orig.)

  3. Comparing MapReduce and Pipeline Implementations for Counting Triangles

    Directory of Open Access Journals (Sweden)

    Edelmira Pasarella

    2017-01-01

    A common method of defining a parallel solution for a computational problem consists in finding a way to use the Divide and Conquer paradigm so that processors act on their own data and are scheduled in a parallel fashion. MapReduce is a programming model that follows this paradigm, and allows for the definition of efficient solutions by both decomposing a problem into steps on subsets of the input data and combining the results of each step to produce final results. Albeit used to implement a wide variety of computational problems, MapReduce performance can be negatively affected whenever the replication factor grows or the size of the input is larger than the resources available at each processor. In this paper we show an alternative approach to implementing the Divide and Conquer paradigm, named the dynamic pipeline. The main features of dynamic pipelines are illustrated on a parallel implementation of the well-known problem of counting triangles in a graph. This problem is especially interesting either when the input graph does not fit in memory or when it is dynamically generated. To evaluate the properties of dynamic pipelines, a dynamic pipeline of processes and an ad hoc version of MapReduce are implemented in the language Go, exploiting its ability to deal with channels and spawned processes. An empirical evaluation is conducted on graphs of different topologies, sizes, and densities. The observed results suggest that dynamic pipelines allow for an efficient implementation of the problem of counting triangles in a graph, particularly in dense and large graphs, drastically reducing the execution time with respect to the MapReduce implementation.
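
    For reference, the sequential core of the problem both frameworks parallelize fits in a few lines (the paper's implementations are in Go; Python is used here only for brevity): for each edge (u, v), the triangles through it are the common neighbours of u and v, and each triangle is then counted once per edge.

    ```python
    def count_triangles(edges):
        """Count triangles in an undirected graph given each edge once."""
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        # Each triangle is seen from all three of its edges.
        return sum(len(adj[u] & adj[v]) for u, v in edges) // 3
    ```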

  4. Pipeline integrity evaluation of oil pipelines using free-swimming acoustic technology

    Energy Technology Data Exchange (ETDEWEB)

    Ariaratnam, Samuel T. [Arizona State University, Tempe, Arizona (United States); Chandrasekaran, Muthu [Pure Technologies Limited, Calgary, AB (Canada)

    2010-07-01

    In the United States, the Pipeline and Hazardous Materials Safety Administration (PHMSA) funded a joint academy-industry research project, which developed and refined a free-swimming tool called SmartBall. The tool swims through the pipeline and gives results at a much lower cost than current leak detection methods, and it can detect leaks as small as 0.03 gpm of oil. GPS-synchronized above-ground loggers capture acoustic signals and record the passage of the tool through the pipeline. The tool is spherical and smaller than the pipe, through which it rolls silently; it can overcome obstacles that could otherwise make a pipeline unpiggable. SmartBall uses the great potential of acoustic detection, because when a pressurized product leaks from a pipe, it produces a distinctive acoustic signal that travels through the product; at the same time, it overcomes the problem caused by the very limited range of this signal. This technology can prevent enormous economic consequences such as a 50,000-gallon gasoline spill that happened in 2003 between Tucson and Phoenix.

  5. A closed solution for the collapse load of pressurized pipelines in free spans

    Energy Technology Data Exchange (ETDEWEB)

    Bezerra, Luciano M. [Brasilia Univ., DF (Brazil). Dept. de Engenharia Civil; Murray, David W.; Xuejun Song [University of Alberta (Canada). Civil Engineering Dept.

    2005-07-01

    Submarine pipelines for oil exploitation are generally under internal pressure and compressive thermal loading. Due to rough sea-bottom terrain, these pipelines may be supported only intermittently and span freely. The collapse of such pipelines may release oil into the environment. A common engineering practice for determining the collapse load of such pipelines is finite element modeling. This paper presents an analytical method for determining the collapse load of pressurized pipelines extended over free spans. The formulation also takes into account the internal pressure and the initial imperfection generally present in these pipelines. The collapse load is determined from a deduced transcendental equation. Results of the presented formulation are compared with sophisticated finite element analyses. While a sophisticated finite element analysis requires hours of computer processing, the present formulation takes practically no time to provide a good approximation of the collapse load of pressurized free-span pipelines under compression. The present paper is not intended to substitute for more precise finite element analyses but to provide an easier, faster, and practical way to determine a first approximation of the collapse load of pressurized free-span pipelines. (author)
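
    The record does not reproduce the transcendental equation, but the numerical step it implies, finding the root of f(P) = 0 between two bracketing pressures, is a one-liner with SciPy; `f` below is a placeholder of similar character, not the paper's equation.

    ```python
    import math
    from scipy.optimize import brentq

    def f(P):
        """Placeholder transcendental equation in the load parameter P
        (the paper's actual equation is not given in this record)."""
        return P * math.cos(P) - 0.5

    # brentq needs a sign change over the bracket: f(0) < 0 < f(1).
    P_collapse = brentq(f, 0.0, 1.0)
    ```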

  6. CloudNeo: a cloud pipeline for identifying patient-specific tumor neoantigens.

    Science.gov (United States)

    Bais, Preeti; Namburi, Sandeep; Gatti, Daniel M; Zhang, Xinyu; Chuang, Jeffrey H

    2017-10-01

    We present CloudNeo, a cloud-based computational workflow for identifying patient-specific tumor neoantigens from next generation sequencing data. Tumor-specific mutant peptides can be detected by the immune system through their interactions with the human leukocyte antigen complex, and neoantigen presence has recently been shown to correlate with antitumor T-cell immunity and efficacy of checkpoint inhibitor therapy. However, computing capabilities to identify neoantigens from genomic sequencing data are a limiting factor for understanding their role. This challenge has grown as cancer datasets become increasingly abundant, making them cumbersome to store and analyze on local servers. Our cloud-based pipeline provides scalable computation capabilities for neoantigen identification while eliminating the need to invest in local infrastructure for data transfer, storage or compute. The pipeline is a Common Workflow Language (CWL) implementation of human leukocyte antigen (HLA) typing using Polysolver or HLAminer combined with custom scripts for mutant peptide identification and NetMHCpan for neoantigen prediction. We have demonstrated the efficacy of these pipelines on Amazon cloud instances through the Seven Bridges Genomics implementation of the NCI Cancer Genomics Cloud, which provides graphical interfaces for running and editing, infrastructure for workflow sharing and version tracking, and access to TCGA data. The CWL implementation is at: https://github.com/TheJacksonLaboratory/CloudNeo. For users who have obtained licenses for all internal software, integrated versions in CWL and on the Seven Bridges Cancer Genomics Cloud platform (https://cgc.sbgenomics.com/, recommended version) can be obtained by contacting the authors. jeff.chuang@jax.org. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  7. Trends in Continuity and Interpolation for Computer Graphics.

    Science.gov (United States)

    Gonzalez Garcia, Francisco

    2015-01-01

    In every computer graphics-oriented application today, it is common practice to texture 3D models to obtain realistic materials. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields. It presents techniques that improve on existing state-of-the-art approaches related to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).

  8. Offshore Pipeline Locations in the Gulf of Mexico, Geographic NAD27, MMS (2007) [pipelines_vectors_mms_2007]

    Data.gov (United States)

    Louisiana Geographic Information Center — Offshore Minerals Management Pipeline Locations for the Gulf of Mexico (GOM). Contains the lines of the pipeline in the GOM. All pipelines existing in the databases...

  9. Offshore Pipeline Locations in the Gulf of Mexico, Geographic NAD27, MMS (2007) [pipelines_points_mms_2007]

    Data.gov (United States)

    Louisiana Geographic Information Center — Offshore Minerals Management Pipeline Locations for the Gulf of Mexico (GOM). Contains the points of the pipeline in the GOM. All pipelines existing in the databases...

  10. Optimization of pipeline transport for CO2 sequestration

    International Nuclear Information System (INIS)

    Zhang, Z.X.; Wang, G.X.; Massarotto, P.; Rudolph, V.

    2006-01-01

    Coal fired power generation will continue to provide energy to the world for the foreseeable future. However, this energy use is a significant contributor to increased atmospheric CO2 concentration and, hence, global warming. Capture and disposal of CO2 has received increased R&D attention in the last decade as the technology promises to be the most cost effective for large scale reductions in CO2 emissions. This paper addresses CO2 transport via pipeline from capture site to disposal site, in terms of system optimization, energy efficiency and overall economics. Technically, CO2 can be transported through pipelines in the form of a gas, a supercritical fluid or in the subcooled liquid state. Operationally, most CO2 pipelines used for enhanced oil recovery transport CO2 as a supercritical fluid. In this paper, supercritical fluid and subcooled liquid transport are examined and compared, including their impacts on energy efficiency and cost. Using a commercially available process simulator, ASPEN PLUS 10.1, the results show that subcooled liquid transport maximizes the energy efficiency and minimizes the cost of CO2 transport over long distances under both isothermal and adiabatic conditions. Pipeline transport of subcooled liquid CO2 can be ideally used in areas of cold climate or by burying and insulating the pipeline. In very warm climates, periodic refrigeration to cool the CO2 below its critical point of 31.1 °C may prove economical. Simulations have been used to determine the maximum safe pipeline distances to subsequent booster stations as a function of inlet pressure, environmental temperature and ground level heat flux conditions.
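
    A crude order-of-magnitude version of the booster-spacing question can be written in a few lines. The sketch below is not the ASPEN PLUS model from the record: it assumes isothermal, constant-property liquid flow, a horizontal pipe and an illustrative friction factor, and simply asks how far the fluid can travel before the pressure falls from the inlet value to a chosen minimum.

    # Minimal sketch (not the ASPEN PLUS simulation): estimate booster-station
    # spacing for liquid CO2 transport from a Darcy-Weisbach pressure balance.
    # All numbers are illustrative placeholders.

    def max_booster_spacing_km(p_inlet_mpa, p_min_mpa, flow_m3_s, diameter_m,
                               density_kg_m3=950.0, friction_factor=0.012):
        """Distance over which pressure falls from p_inlet to p_min, assuming
        isothermal, horizontal flow with constant properties (a crude model)."""
        area = 3.141592653589793 * diameter_m ** 2 / 4.0
        velocity = flow_m3_s / area                  # mean flow velocity, m/s
        # Darcy-Weisbach: dp/dx = f * rho * v^2 / (2 * D)   [Pa/m]
        dp_per_m = friction_factor * density_kg_m3 * velocity ** 2 / (2.0 * diameter_m)
        allowable_dp_pa = (p_inlet_mpa - p_min_mpa) * 1e6
        return allowable_dp_pa / dp_per_m / 1000.0   # km

    # Example: 0.5 m pipe, 0.2 m3/s of subcooled liquid CO2, 15 MPa down to 9 MPa.
    print(f"{max_booster_spacing_km(15.0, 9.0, 0.2, 0.5):.0f} km between boosters")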

  11. Graphics on demand: the automatic data visualization on the WEB

    Directory of Open Access Journals (Sweden)

    Ramzi Guetari

    2017-06-01

    Full Text Available Data visualization is an effective tool for communicating the results of opinion surveys, epidemiological studies, statistics on consumer habits, etc. The graphical representation of data usually assists human information processing by reducing demands on attention, working memory, and long-term memory. It allows, among other things, faster reading of the information (by acting on forms, directions, colors, etc.), independence from language (or culture), and better capture of the audience's attention. Data that could be graphically represented may be structured or unstructured. Unstructured data, whose volume grows exponentially, often hide important and even vital information for society and companies. It therefore takes a lot of work to extract valuable information from unstructured data. If it is easier to understand a message through structured data, such as a table, than through a long narrative text, it is even easier to convey a message through a graphic than through a table. In our opinion, it is often very useful to synthesize unstructured data in the form of graphical representations. In this paper, we present an approach for processing unstructured data containing statistics in order to represent them graphically. This approach transforms the unstructured data into structured data that globally conveys the same countable information. The graphical representation of such structured data is then straightforward. The approach deals with both quantitative and qualitative data and is based on natural language processing and text mining techniques. An application that implements this process is also presented in this paper.
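
    As a toy illustration of the idea (the paper's NLP and text-mining pipeline is far richer), the following Python sketch pulls label/number pairs out of a sentence with a naive regular expression, structures them as a table, and renders a bar chart with matplotlib. The sentence and pattern are invented for the example.

    # Minimal illustration: extract statistics from free text and chart them.
    import re
    import matplotlib.pyplot as plt

    text = "Survey results: tea 42 percent, coffee 35 percent, water 23 percent."

    # Naive pattern: a word immediately followed by "<number> percent".
    pairs = re.findall(r"(\w+)\s+(\d+(?:\.\d+)?)\s+percent", text)
    data = {label: float(value) for label, value in pairs}   # structured form

    plt.bar(list(data.keys()), list(data.values()))
    plt.ylabel("percent of respondents")
    plt.title("Statistics extracted from unstructured text")
    plt.show()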

  12. Reasons for decision in the matter of TransCanada PipeLines Limited and TransCanada Keystone Pipeline GP Ltd. : application dated 5 June 2006 for leave to transfer pipeline facilities and for a determination of the transfer price

    International Nuclear Information System (INIS)

    2007-01-01

    TransCanada Pipelines Limited and its fully owned subsidiary TransCanada Keystone Pipeline GP Ltd. applied to the National Energy Board in June 2006 for leave to transfer certain pipeline facilities comprising part of TransCanada's mainline natural gas transmission system from TransCanada to Keystone for use in Keystone's proposed new oil pipeline. The transfer would involve the conversion of the facilities from gas service to oil service for use in the Keystone Project. The new oil pipeline would extend from Hardisty, Alberta to Wood River and Patoka, Illinois. It would initially provide access for western Canada crude oil producers to the southern Petroleum Administration for Defense District (PADD) 2 region of the United States. This is a major refining area which presently has minimal access for western Canada crude oil because of the limited pipeline capacity into the region. The Board held a hearing process to seek the views of interested parties regarding the list of issues that should be considered in dealing with the application. The list of issues included arguments of the Communications, Energy and Paperworkers Union of Canada (CEP) regarding the Board's jurisdiction; regulatory standards; energy supply markets and pipelines; potential impacts of the transfer such as potential costs to gas shippers and the impact of the transfer on mainline operations; and the transfer at net book value (NBV). This document presented the Board's views on the transfer and the public interest. After considering all factors, the Board approved the sale and purchase of the facilities from TransCanada to Keystone. The Board further ordered that TransCanada may reduce the mainline rate base by the NBV of the facilities upon their transfer to Keystone, and that Keystone may include the NBV in its pipeline oil plant upon the transfer of the facilities. 5 tabs., 14 figs., 6 appendices

  13. Northern pipelines: challenges and needs

    Energy Technology Data Exchange (ETDEWEB)

    Dean, D.; Brownie, D. [ProLog Canada Inc., Calgary, AB (Canada); Fafara, R. [TransCanada PipeLines Ltd., Calgary, AB (Canada)

    2007-07-01

    Working Group 10 presented experiences acquired from the operation of pipeline systems in a northern environment. There are currently 3 pipelines operating north of 60, notably the Shiha gas pipeline near Fort Liard, the Ikhil gas pipeline in Inuvik and the Norman Wells oil pipeline. Each has its unique commissioning, operating and maintenance challenges, as well as specific training and logistical support requirements for the use of in-line inspection tools and other forms of integrity assessment. The effectiveness of cathodic protection systems in a permafrost northern environment was also discussed. It was noted that the delay of the Mackenzie Gas Pipeline Project by two to three years due to joint regulatory review may lead to resource constraints for the project as well as competition for already scarce human resources. The issue of a potential timing conflict with the Alaskan Pipeline Project was also addressed as well as land use issues for routing of supply roads. Integrity monitoring and assessment issues were outlined with reference to pipe soil interaction monitoring in discontinuous permafrost; south facing denuded slope stability; base lining projects; and reclamation issues. It was noted that automatic welding and inspection will increase productivity, while reducing the need for manual labour. In response to anticipated training needs, companies are planning to involve and train Aboriginal labour and will provide camp living conditions that will attract labour. tabs., figs.

  14. Water level detection pipeline

    International Nuclear Information System (INIS)

    Koshikawa, Yukinobu; Imanishi, Masatoshi; Niizato, Masaru; Takagi, Masahiro

    1998-01-01

    In the present invention, the water levels of a feedwater heater and a drain tank in a nuclear power plant are detected with high accuracy. Detection pipeline headers connected to the upper and lower portions of a feedwater heater or a drain tank are connected with each other. The connection line is branched at two appropriate positions, an upper detection pipeline and a lower detection pipeline are connected thereto, and a gauge entrance valve is disposed in each of the detection pipelines. A diaphragm of a pressure-difference generator is connected to a flange formed at the end portion. When detecting the change of water level in the feedwater heater or the drain tank as a change of pressure difference, the gauge entrance valves on the exit side of the upper and lower detection pipelines are connected by a connection pipe. The gauge entrance valve is closed, and a tube is connected to the lower detection pipe to inject water to the diaphragm of the pressure-difference generator through the connection pipe, thereby enabling calibration of the pressure-difference generator. The accuracy of instrument calibration is improved, and workability during flange maintenance is also improved. (I.S.)

  15. Simulation of pipeline in the area of the underwater crossing

    International Nuclear Information System (INIS)

    Burkov, P; Chernyavskiy, D; Burkova, S; Konan, E C

    2014-01-01

    The article studies the stress-strain behavior of the Alexandrovskoye-Anzhero-Sudzhensk section of a main oil pipeline using the Ansys software system. This method of examination and assessment of the technical condition of pipeline transport facilities studies the objects and the processes that affect their technical condition, including research based on computer simulation. Such an approach makes it possible to develop the theory, calculation methods and design of pipeline transport facilities, units and machine parts, regardless of their industry and purpose, with a view to improving existing constructions and creating new structures and machines of high performance, durability, reliability and maintainability, and of low material consumption and cost, that are competitive on the world market.

  16. Bayesian Graphical Models

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Nielsen, Thomas Dyhre

    2016-01-01

    Mathematically, a Bayesian graphical model is a compact representation of the joint probability distribution for a set of variables. The most frequently used type of Bayesian graphical models are Bayesian networks. The structural part of a Bayesian graphical model is a graph consisting of nodes...
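
    To make the compact representation concrete, the following minimal Python sketch (network structure and numbers invented for illustration, not taken from the chapter) encodes a three-node binary network as local conditional probability tables and evaluates the joint distribution as their product.

    # A Bayesian network factors the joint distribution into one conditional
    # table per node given its parents: Rain -> Sprinkler, (Rain, Sprinkler) -> Wet.
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {  # P(Sprinkler | Rain)
        (True,): {True: 0.01, False: 0.99},
        (False,): {True: 0.40, False: 0.60},
    }
    P_wet = {  # P(GrassWet | Rain, Sprinkler)
        (True, True): {True: 0.99, False: 0.01},
        (True, False): {True: 0.80, False: 0.20},
        (False, True): {True: 0.90, False: 0.10},
        (False, False): {True: 0.00, False: 1.00},
    }

    def joint(rain, sprinkler, wet):
        """P(rain, sprinkler, wet) = P(rain) P(sprinkler|rain) P(wet|rain,sprinkler)."""
        return (P_rain[rain]
                * P_sprinkler[(rain,)][sprinkler]
                * P_wet[(rain, sprinkler)][wet])

    # Sanity check: the joint sums to 1 over all eight configurations.
    total = sum(joint(r, s, w) for r in (True, False)
                for s in (True, False) for w in (True, False))
    print(joint(True, False, True), total)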

  17. New method for NPP sodium coolant pipeline austenization

    International Nuclear Information System (INIS)

    Malashonok, V.A.; Rotshtejn, A.V.; Gotshalk, A.L.; Miryushchenko, E.F.

    1980-01-01

    Heat treatment technology is considered for pipelines intended for the NPP cooling systems employing sodium coolant. Various techniques are discussed which are used for protecting the pipeline internal surfaces against oxidation in the process of heat treatment. It is noted that the austenite formation of welded joints of steel 12Kh18N9 and steel Kh16N11M3 at temperatures of 1050 and 1100 deg C releases welding-induced stresses and reduces a possibility of local damages. Evacuation down to 1 mm Hg appears to be the most rational protective technique. The considered procedure of the pipeline heat treatment has been utilized for mounting the equipment of the BN-600 reactor at the Beloyarskaya NPP. The economic gain resulting from the use of the procedure, owing to decrease in argon consumption and reduction of labour input, makes up 150 000 roubles

  18. Oil pipeline valve automation for spill reduction

    Energy Technology Data Exchange (ETDEWEB)

    Mohitpour, Mo; Trefanenko, Bill [Enbridge Technology Inc, Calgary (Canada); Tolmasquim, Sueli Tiomno; Kossatz, Helmut [TRANSPETRO - PETROBRAS Transporte S.A., Rio de Janeiro, RJ (Brazil)

    2003-07-01

    Liquid pipeline codes generally stipulate placement of block valves along liquid transmission pipelines, such as on each side of major river crossings where environmental hazards could cause or are foreseen to potentially cause serious consequences. Codes, however, do not stipulate any requirement for block valve spacing for low vapour pressure petroleum transportation, nor for remote pipeline valve operation to reduce spills. A review of pipeline codes for valve requirements and spill limitation in high consequence areas is thus presented, along with a criterion for an acceptable spill volume that could be caused by a pipeline leak/full rupture. A technique for deciding economically and technically effective pipeline block valve automation for remote operation to reduce oil spills and control hazards is also provided. In this review, industry practice is highlighted, and the application of the criterion for maximum permissible oil spill and the technique for deciding valve automation thus developed, as applied to the ORSUB pipeline, is presented. ORSUB is one of the three initially selected pipelines that have been studied. These pipelines represent about 14% of the total length of petroleum transmission lines operated by PETROBRAS Transporte S.A. (TRANSPETRO) in Brazil. Based on the implementation of valve motorization on these three pipelines, motorization of block valves for remote operation on the remaining pipelines is intended, depending on the success of these implementations, on historical records of failure and appropriate ranking. (author)

  19. PC Graphic file programing

    International Nuclear Information System (INIS)

    Yang, Jin Seok

    1993-04-01

    This book describes the basics of graphics knowledge and the understanding and implementation of graphic file formats. The first part deals with graphic data, the storage and compression of graphic data, and programming topics such as assembly, the stack, compiling and linking programs, and practice and debugging. The next part covers graphic file formats such as MacPaint, GEM/IMG, PCX, GIF, and TIFF files; hardware considerations such as monochrome screen drivers and high-speed color screen drivers; the basic concept of dithering; and format conversion.

  20. Stability of Unburied Pipelines. Bibliographic Study (Stabilité des pipelines non ensouillés. Etude bibliographique)

    OpenAIRE

    Alliot J. M.

    2006-01-01

    The integrity of an unburied subsea pipeline depends to a very large extent on its stability on the seabed along its entire length. Hence the determination of this stability is of great importance in the engineering design of pipelines. This article proposes to examine the principal problems raised by the stability of unburied pipelines in the field of soil mechanics. These problems mainly concern the reactions of the soil to pipelines and their assessment, i. e. the forces of soil resistance...

  1. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary aerospace vehicle designs: offline graphics systems using vellum-inking or photographic processes; online graphics systems characterized by directly coupled, low-cost storage tube terminals with limited interactive capabilities; and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of the computer results, slow line speed (300 baud), poor hard copy, and the early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  2. Pipeline leakage recognition based on the projection singular value features and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Wei; Zhang, Laibin; Mingda, Wang; Jinqiu, Hu [College of Mechanical and Transportation Engineering, China University of Petroleum, Beijing, (China)

    2010-07-01

    The negative pressure wave method is one of the processes used to detect leaks in oil pipelines. The development of new leakage recognition processes is difficult because it is practically impossible to collect leakage pressure samples. The method of leakage feature extraction and the selection of the recognition model are also important in pipeline leakage detection. This study investigated a new feature extraction approach, Singular Value Projection (SVP), which projects the singular values onto a standard basis. A new pipeline recognition model based on multi-class Support Vector Machines was also developed. It was found that SVP provides a clear and concise recognition feature of the negative pressure wave. Field experiments proved that the model achieves a high recognition accuracy rate. This approach to pipeline leakage detection based on SVP and SVM has high application value.
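
    The abstract does not give the exact definition of Singular Value Projection, so the following Python sketch only illustrates the general recipe with invented data: embed each pressure-signal segment in a matrix, use its singular values as a compact feature vector, and classify with a multi-class-capable SVM (scikit-learn's SVC).

    # Hedged sketch of the recipe: singular-value features + SVM classifier.
    import numpy as np
    from sklearn.svm import SVC

    def singular_value_features(signal, rows=8):
        """Reshape a 1-D pressure segment into a matrix and return its singular
        values -- a compact, noise-robust descriptor of the waveform."""
        cols = len(signal) // rows
        matrix = np.asarray(signal[:rows * cols]).reshape(rows, cols)
        return np.linalg.svd(matrix, compute_uv=False)

    rng = np.random.default_rng(0)
    # Toy data: class 0 = normal noise, class 1 = leak-like negative pressure step.
    segments, labels = [], []
    for _ in range(100):
        base = rng.normal(0.0, 0.05, 256)
        if rng.random() < 0.5:
            base[128:] -= 1.0          # sudden pressure drop (leak signature)
            labels.append(1)
        else:
            labels.append(0)
        segments.append(singular_value_features(base))

    clf = SVC(kernel="rbf").fit(segments[:80], labels[:80])
    print("held-out accuracy:", clf.score(segments[80:], labels[80:]))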

  3. Correlation between designed wall thickness of gas pipelines and external and internal corrosion processes; Adequacao de espessura de parede projetada em funcao de processos de corrosao externa e interna em gasodutos

    Energy Technology Data Exchange (ETDEWEB)

    Gomes, Jose Antonio da Cunha Ponciano [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE). Programa de Engenharia Metalurgica

    2004-07-01

    Corrosion control on gas pipelines plays an important role on the assessment of pipeline integrity and reliability. In many countries a great extension of buried pipelines is used on transport and distribution systems. This extension will be certainly increased in a near future due to the increasing consumption of natural gas. Inadequate corrosion control can drive to pipeline failures, bringing up the possibility of accidents in populated or environmental protected areas, bringing together severe economical, legal and environmental consequences. Corrosion is frequently considered as a natural and inevitable phenomenon. Based upon this assumption, some recommendations are included on design standards of gas pipelines in order to compensate its detrimental effect. The aim of this work is to present a review of the correlation between external corrosion process and the guidelines established during the project phase of gas pipelines. It is intended to contribute for a better understanding of the impacts of corrosion on integrity, reliability and readiness of gas transport and distribution systems. Some aspects regarding external corrosion of pipelines extracted from technical papers will be summarised. Information provided will be compared to design criterion prescribed by the NBR 12712 Standard. (author)

  4. Comparing Existing Pipeline Networks with the Potential Scale of Future U.S. CO2 Pipeline Networks

    Energy Technology Data Exchange (ETDEWEB)

    Dooley, James J.; Dahowski, Robert T.; Davidson, Casie L.

    2008-02-29

    There is growing interest regarding the potential size of a future U.S. dedicated CO2 pipeline infrastructure if carbon dioxide capture and storage (CCS) technologies are commercially deployed on a large scale. In trying to understand the potential scale of a future national CO2 pipeline network, comparisons are often made to the existing pipeline networks used to deliver natural gas and liquid hydrocarbons to markets within the U.S. This paper assesses the potential scale of the CO2 pipeline system needed under two hypothetical climate policies and compares this to the extant U.S. pipeline infrastructures used to deliver CO2 for enhanced oil recovery (EOR), and to move natural gas and liquid hydrocarbons from areas of production and importation to markets. The data presented here suggest that the need to increase the size of the existing dedicated CO2 pipeline system should not be seen as a significant obstacle for the commercial deployment of CCS technologies.

  5. GPUmotif: an ultra-fast and energy-efficient motif analysis program using graphics processing units.

    Science.gov (United States)

    Zandevakili, Pooya; Hu, Ming; Qin, Zhaohui

    2012-01-01

    Computational detection of TF binding patterns has become an indispensable tool in functional genomics research. With the rapid advance of new sequencing technologies, large amounts of protein-DNA interaction data have been produced. Analyzing this data can provide substantial insight into the mechanisms of transcriptional regulation. However, the massive amount of sequence data presents daunting challenges. In our previous work, we have developed a novel algorithm called Hybrid Motif Sampler (HMS) that enables more scalable and accurate motif analysis. Despite much improvement, HMS is still time-consuming due to the requirement to calculate matching probabilities position-by-position. Using the NVIDIA CUDA toolkit, we developed a graphics processing unit (GPU)-accelerated motif analysis program named GPUmotif. We proposed a "fragmentation" technique to hide data transfer time between memories. Performance comparison studies showed that commonly-used model-based motif scan and de novo motif finding procedures such as HMS can be dramatically accelerated when running GPUmotif on NVIDIA graphics cards. As a result, energy consumption can also be greatly reduced when running motif analysis using GPUmotif. The GPUmotif program is freely available at http://sourceforge.net/projects/gpumotif/
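
    The position-by-position matching probabilities that make HMS expensive reduce, at their core, to scoring every window of a sequence against a position weight matrix (PWM). The plain-numpy sketch below shows that inner loop on a toy motif; it is a CPU illustration of the computation, not GPUmotif's CUDA code.

    # Score each sequence window under a PWM motif model (log-probabilities).
    import numpy as np

    BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

    def pwm_scan(sequence, pwm):
        """Log-probability of each window of len(pwm) under the motif model."""
        idx = np.array([BASES[b] for b in sequence])
        width = pwm.shape[0]
        log_pwm = np.log(pwm)
        scores = np.empty(len(idx) - width + 1)
        for start in range(len(scores)):
            window = idx[start:start + width]
            scores[start] = log_pwm[np.arange(width), window].sum()
        return scores

    pwm = np.array([[0.7, 0.1, 0.1, 0.1],   # position 1 strongly prefers A
                    [0.1, 0.1, 0.7, 0.1],   # position 2 prefers G
                    [0.1, 0.1, 0.1, 0.7]])  # position 3 prefers T
    print(pwm_scan("ACGTAGTAGT", pwm).round(2))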

  6. GPUmotif: an ultra-fast and energy-efficient motif analysis program using graphics processing units.

    Directory of Open Access Journals (Sweden)

    Pooya Zandevakili

    Full Text Available Computational detection of TF binding patterns has become an indispensable tool in functional genomics research. With the rapid advance of new sequencing technologies, large amounts of protein-DNA interaction data have been produced. Analyzing this data can provide substantial insight into the mechanisms of transcriptional regulation. However, the massive amount of sequence data presents daunting challenges. In our previous work, we have developed a novel algorithm called Hybrid Motif Sampler (HMS) that enables more scalable and accurate motif analysis. Despite much improvement, HMS is still time-consuming due to the requirement to calculate matching probabilities position-by-position. Using the NVIDIA CUDA toolkit, we developed a graphics processing unit (GPU)-accelerated motif analysis program named GPUmotif. We proposed a "fragmentation" technique to hide data transfer time between memories. Performance comparison studies showed that commonly-used model-based motif scan and de novo motif finding procedures such as HMS can be dramatically accelerated when running GPUmotif on NVIDIA graphics cards. As a result, energy consumption can also be greatly reduced when running motif analysis using GPUmotif. The GPUmotif program is freely available at http://sourceforge.net/projects/gpumotif/

  7. Calculation of NPP pipeline seismic stability

    International Nuclear Information System (INIS)

    Kirillov, A.P.; Ambriashvili, Yu.K.; Kaliberda, I.V.

    1982-01-01

    A simplified procedure for calculating the seismic stability of pipelines in NPPs with WWER reactors is described. During the selection and arrangement of pipeline saddles and hydraulic shock absorbers, the procedure envisages introducing resilient mountings of very high rigidity into the calculation scheme of the pipeline and performing the calculations with a step-by-step method. It is concluded that applying this design procedure permits determining strains due to seismic loads, analyzing the stressed state in pipeline elements, and assessing the supporting capacity of pipeline saddles with allowance for seismic loads, in order to plan measures for seismic protection.

  8. Computer graphics as an information means for power plants

    International Nuclear Information System (INIS)

    Kollmannsberger, J.; Pfadler, H.

    1990-01-01

    Computer-aided graphics have proved increasingly successful as an aid to process control in large plants. The specific requirements for such systems and the methods of planning and implementing graphic systems in power station control rooms are described. Operating experience from completed plants is evaluated. (orig.)

  9. On-line, real-time monitoring for petrochemical and pipeline process control applications

    Energy Technology Data Exchange (ETDEWEB)

    Kane, Russell D.; Eden, D.C.; Cayard, M.S.; Eden, D.A.; Mclean, D.T. [InterCorr International, Inc., 14503 Bammel N. Houston, Suite 300, Houston Texas 77014 (United States); Kintz, J. [BASF Corporation, 602 Copper Rd., Freeport, Texas 77541 (United States)

    2004-07-01

    Corrosion problems in petroleum and petrochemical plants and pipelines may be inherent to the processes, but costly and damaging equipment losses are not. With the continual drive to increase productivity while protecting product quality, safety and the environment, corrosion must become a variable that can be continuously monitored and assessed. This millennium has seen the introduction of new real-time, online measurement technologies and vast improvements in methods of electronic data handling. The 'replace when it fails' approach is receding into distant memory; facilities management today is embracing new technology and rapidly appreciating the value it has to offer. It provides the capability to increase system run time between major inspections, reduce the time and expense associated with turnaround or in-line inspections, and reduce major upsets that cause unplanned shutdowns. The end result is practical knowledge of how 'hard' facilities can be pushed before excessive corrosion damage results, so that process engineers can understand the impact of their process control actions and implement true asset management. This paper refers to the use of an online, real-time electrochemical corrosion monitoring system, SmartCET, in a plant running a mostly organic process medium. It also highlights other pertinent examples where similar systems have been used to provide useful real-time information to detect system upsets, which would not have been possible otherwise. This monitoring/process control approach has allowed operators and engineers to see, for the first time, changes in corrosion behavior caused by specific variations in process parameters. Process adjustments have been identified that reduce corrosion rates while maintaining acceptable yields and quality. The monitoring system has provided a new window into the chemistry of the process, helping chemical engineers improve their processes.

  10. ESO Reflex: a graphical workflow engine for data reduction

    Science.gov (United States)

    Hook, Richard; Ullgrén, Marko; Romaniello, Martino; Maisala, Sami; Oittinen, Tero; Solin, Otto; Savolainen, Ville; Järveläinen, Pekka; Tyynelä, Jani; Péron, Michèle; Ballester, Pascal; Gabasch, Armin; Izzo, Carlo

    ESO Reflex is a prototype software tool that provides a novel approach to astronomical data reduction by integrating a modern graphical workflow system (Taverna) with existing legacy data reduction algorithms. Most of the raw data produced by instruments at the ESO Very Large Telescope (VLT) in Chile are reduced using recipes. These are compiled C applications following an ESO standard and utilising routines provided by the Common Pipeline Library (CPL). Currently these are run in batch mode as part of the data flow system to generate the input to the ESO/VLT quality control process and are also exported for use offline. ESO Reflex can invoke CPL-based recipes in a flexible way through a general purpose graphical interface. ESO Reflex is based on the Taverna system that was originally developed within the UK life-sciences community. Workflows have been created so far for three VLT/VLTI instruments, and the GUI allows the user to make changes to these or create workflows of their own. Python scripts or IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available. Taverna is intended for use with web services and experiments using ESO Reflex to access Virtual Observatory web services have been successfully performed. ESO Reflex is the main product developed by Sampo, a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal was to look into the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Sampo concluded early in 2008. This contribution will describe ESO Reflex and show several examples of its use both locally and using Virtual Observatory remote web services. ESO Reflex is expected to be released to the community in early 2009.

  11. ESO Reflex: A Graphical Workflow Engine for Data Reduction

    Science.gov (United States)

    Hook, R.; Romaniello, M.; Péron, M.; Ballester, P.; Gabasch, A.; Izzo, C.; Ullgrén, M.; Maisala, S.; Oittinen, T.; Solin, O.; Savolainen, V.; Järveläinen, P.; Tyynelä, J.

    2008-08-01

    Sampo {http://www.eso.org/sampo} (Hook et al. 2005) is a project led by ESO and conducted by a software development team from Finland as an in-kind contribution to joining ESO. The goal is to assess the needs of the ESO community in the area of data reduction environments and to create pilot software products that illustrate critical steps along the road to a new system. Those prototypes will not only be used to validate concepts and understand requirements but will also be tools of immediate value for the community. Most of the raw data produced by ESO instruments can be reduced using CPL {http://www.eso.org/cpl} recipes: compiled C programs following an ESO standard and utilizing routines provided by the Common Pipeline Library. Currently reduction recipes are run in batch mode as part of the data flow system to generate the input to the ESO VLT/VLTI quality control process and are also made public for external users. Sampo has developed a prototype application called ESO Reflex {http://www.eso.org/sampo/reflex/} that integrates a graphical user interface and existing data reduction algorithms. ESO Reflex can invoke CPL-based recipes in a flexible way through a dedicated interface. ESO Reflex is based on the graphical workflow engine Taverna {http://taverna.sourceforge.net} that was originally developed by the UK eScience community, mostly for work in the life sciences. Workflows have been created so far for three VLT/VLTI instrument modes ( VIMOS/IFU {http://www.eso.org/instruments/vimos/}, FORS spectroscopy {http://www.eso.org/instruments/fors/} and AMBER {http://www.eso.org/instruments/amber/}), and the easy-to-use GUI allows the user to make changes to these or create workflows of their own. Python scripts and IDL procedures can be easily brought into workflows and a variety of visualisation and display options, including custom product inspection and validation steps, are available.

  12. A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture

    Science.gov (United States)

    Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.

    2017-12-01

    This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained darkroom, which includes an imaging tank, a motorized rotating bearing and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capture and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following this pre-processing binarization of digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.

  13. Geospatial informatics applications for assessment of pipeline safety and security

    Energy Technology Data Exchange (ETDEWEB)

    Roper, W. [George Mason University, Fairfax, VA (United States). Dept. of Civil, Environmental and Infrastructure

    2005-07-01

    A variety of advanced technologies are available to enhance planning, designing, managing, operating and maintaining the components of the electric utility system. Aerial and satellite remote sensing represents one area of rapid development that can be leveraged to address some of these challenges. Airborne remote sensing can be an effective technology to assist pipeline risk management to assure safety in design, construction, operation, maintenance, and emergency response of pipeline facilities. Industrial and scientific advances in airborne and satellite remote sensing systems and data processing techniques are opening new technological opportunities for developing an increased capability of accomplishing the pipeline mapping and safety needs of the industry. These technologies have significant and unique potential for application to a number of cross cutting energy system security issues. This paper addresses some of the applications of these technologies to pipeline and power industry infrastructure, economics and relative effectiveness of these technologies and issues related to technology implementation and diffusion. (Author)

  14. Lay Pipeline Abandonment Head during Some

    African Journals Online (AJOL)

    2016-12-01

    Dec 1, 2016 ... is very crucial to the structural integrity of the pipeline structure after ... and properties may be jeopardized should the pipeline structure be used for oil or gas transport when such ... pipelines under bending may alter the material.

  15. Development of the data logging and graphical presentation for gamma scanning, trouble shooting and process evaluation in the petroleum refinery column

    International Nuclear Information System (INIS)

    Saengchantr, Dhanaj; Chueinta Siripone

    2009-07-01

    Full text: Software for data logging and graphical presentation of gamma scanning for troubleshooting and process evaluation of petroleum refinery columns was developed. By setting the gamma source and gamma detector in opposite orientations alongside the column and recording the radiation transmitted through the column at several elevations, a graphical profile of relative gamma intensity (density) versus vertical elevation can be obtained. In comparison with the engineering drawing, physical and process abnormalities can be clearly evaluated during field investigation. The program can also accumulate up to 8 data sets, each of 1,000 points, conveniently allowing comparison of different operational parameter adjustments during remediation of a problem and/or process optimization. Together with other factors, this development also enhanced the technological capability of the TINT Service Center serving the petroleum refinery industry.
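
    On the presentation side, the core of such software is overlaying scan runs as relative intensity versus elevation, so that density anomalies can be read against the column internals. Here is a minimal matplotlib sketch with a synthetic profile standing in for field data (labels and numbers are invented for illustration).

    # Overlay two gamma-scan runs: intensity on the x-axis, elevation on the y-axis.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    elevation = np.linspace(0.0, 20.0, 400)          # metres along the column
    runs = {}
    for label, tray_loading in [("baseline scan", 1.0), ("after adjustment", 0.7)]:
        # Trays appear as periodic dips in the transmitted intensity.
        dips = (np.sin(2.0 * np.pi * elevation) > 0.9).astype(float)
        runs[label] = (1.0 - 0.3 * tray_loading * dips
                       + rng.normal(0.0, 0.01, elevation.size))

    for label, intensity in runs.items():
        plt.plot(intensity, elevation, label=label)
    plt.xlabel("relative transmitted gamma intensity")
    plt.ylabel("elevation along column (m)")
    plt.gca().invert_xaxis()   # denser material = lower transmitted intensity
    plt.legend()
    plt.show()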

  16. 78 FR 53190 - Pipeline Safety: Notice to Operators of Hazardous Liquid and Natural Gas Pipelines of a Recall on...

    Science.gov (United States)

    2013-08-28

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration [Docket No. PHMSA-2013-0185] Pipeline Safety: Notice to Operators of Hazardous Liquid and Natural Gas Pipelines of a Recall on Leak Repair Clamps Due to Defective Seal AGENCY: Pipeline and Hazardous Materials Safety...

  17. Diverless pipeline repair system for deep water

    Energy Technology Data Exchange (ETDEWEB)

    Spinelli, Carlo M. [Eni Gas and Power, Milan (Italy); Fabbri, Sergio; Bachetta, Giuseppe [Saipem/SES, Venice (Italy)

    2009-07-01

    SiRCoS (Sistema Riparazione Condotte Sottomarine) is a diverless pipeline repair system composed of a suite of tools to perform reliable subsea pipeline repair interventions in deep and ultra-deep water, building on the long-standing experience of Eni and Saipem in designing, laying and operating deep water pipelines. The key element of SiRCoS is a Connection System comprising two end connectors and a repair spool piece to replace a damaged pipeline section. A Repair Clamp with elastomeric seals is also available for local pipe damage. The Connection System is based on a pipe cold-forging process, consisting of swaging the pipe inside connectors with a suitable profile by using high-pressure seawater. Three swaging operations have to be performed to replace the damaged pipe length. This technology has been developed through extensive theoretical work and laboratory testing, ending in a Type Approval by DNV for pipe sizes ranging from 20 inches to 48 inches OD. A complete SiRCoS system has been realised for the Green Stream pipeline, thoroughly tested in the workshop as well as in shallow water, and is now ready in the event of an emergency situation. The key functional requirements for the system are diverless repair intervention and full piggability after repair. Eni owns this technology, which is now available to other operators under a Repair Club arrangement providing stand-by repair services carried out by Saipem Energy Services. The paper gives a description of the main features of the Repair System as well as an insight into the technological developments on pipe cold-forging reliability and long-term duration evaluation. (author)

  18. AN APPROACH TO EFFICIENT FEM SIMULATIONS ON GRAPHICS PROCESSING UNITS USING CUDA

    Directory of Open Access Journals (Sweden)

    Björn Nutti

    2014-04-01

    Full Text Available The paper presents a highly efficient way of simulating the dynamic behavior of deformable objects by means of the finite element method (FEM), with computations performed on Graphics Processing Units (GPUs). The presented implementation reduces bottlenecks related to memory accesses by grouping the necessary data per node pair, in contrast to the classical per-element grouping. This strategy avoids memory access patterns that are not suitable for the GPU memory architecture. Furthermore, the presented implementation takes advantage of the underlying sparse-block-matrix structure, and it is demonstrated how to avoid potential bottlenecks in the algorithm. To achieve plausible deformation behavior for large local rotations, the objects are modeled by means of a simplified co-rotational FEM formulation.
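
    The per-node-pair grouping can be illustrated without a GPU. The numpy sketch below (a toy 1-D spring network, not the paper's CUDA implementation) stores one stiffness value per node pair and assembles nodal forces with a scatter-add over those pairs, which mirrors the write-once-per-pair data layout idea.

    # Assemble internal forces f for a spring network, grouped by node pairs.
    import numpy as np

    # Node pairs (mesh edges) and one spring stiffness per pair -- illustrative.
    pairs = np.array([[0, 1], [1, 2], [2, 3], [0, 2]])
    k = np.array([10.0, 10.0, 10.0, 5.0])
    u = np.array([0.0, 0.1, 0.15, 0.3])        # 1-D nodal displacements

    # Per-pair force along the connection: k_ij * (u_j - u_i).
    rel = k * (u[pairs[:, 1]] - u[pairs[:, 0]])

    f = np.zeros_like(u)
    np.add.at(f, pairs[:, 0], rel)              # pulls node i toward node j
    np.add.at(f, pairs[:, 1], -rel)             # equal and opposite on node j
    print(f)   # per-node force balance (sums to zero overall)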

  19. Repairing method for reactor primary system pipeline

    International Nuclear Information System (INIS)

    Hosokawa, Hideyuki; Uetake, Naoto; Hara, Teruo.

    1997-01-01

    Pipelines from which radioactive nuclides deposited during plant operation have been removed by decontamination, or replacement pipelines for those contaminated with radioactive nuclides, are connected to each system of the nuclear power plant. They are heated in a gas phase containing oxygen to form an oxide film on the surface of the pipelines. The thickness of the oxide film formed in the gas phase is 1 nm or greater, preferably 100 nm. The concentration of oxygen in the gas phase must be 0.1% or greater. The heating is conducted by circulating a heated gas inside the pipelines or by placing a movable heater, such as a high-frequency induction heater, inside the pipelines to form the oxide film. Redeposition of radioactive nuclides can thus be suppressed and, since the oxide film is formed in the gas phase, large-scale facilities are not necessary, enabling low-cost repair of reactor primary system pipelines. (N.H.)

  20. U.S. interstate pipelines ran more efficiently in 1994

    International Nuclear Information System (INIS)

    True, W.R.

    1995-01-01

    Regulated US interstate pipelines began 1995 under the momentum of impressive efficiency improvements in 1994. Annual reports filed with the US Federal Energy Regulatory Commission (FERC) show that both natural-gas and petroleum liquids pipeline companies increased their net incomes last year despite declining operating revenues. This article discusses trends in the pipeline industry and gives data on the following: pipeline revenues, incomes--1994; current pipeline costs; pipeline costs--estimated vs. actual; current compressor construction costs; compressor costs--estimated vs. actual; US interstate mileage; investment in liquids pipelines; 10-years of land construction costs; top 10 interstate liquids pipelines; top 10 interstate gas pipelines; liquids pipeline companies; and gas pipeline companies

  1. Prediction of scour below submerged pipeline crossing a river using ANN.

    Science.gov (United States)

    Azamathulla, H M; Zakaria, Nor Azazi

    2011-01-01

    The process involved in local scour below pipelines is so complex that it is difficult to establish a general empirical model providing accurate estimates of scour. This paper describes the use of artificial neural networks (ANN) to estimate pipeline scour depth. Data sets of laboratory measurements were collected from published works and used to train the network or evolve the program. The developed networks were validated using observations that were not involved in training. The ANN was found to be more effective than regression equations in predicting the scour depth around pipelines.
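
    A hedged sketch of the general workflow, with synthetic placeholder data in place of the published laboratory measurements: train a small feed-forward network to map flow and pipe parameters to scour depth, then score it on held-out samples. The input features and the stand-in target relation are invented for illustration.

    # Train an ANN regressor on synthetic scour data with scikit-learn.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 300
    velocity = rng.uniform(0.2, 1.5, n)     # mean flow velocity (m/s)
    diameter = rng.uniform(0.05, 0.5, n)    # pipe diameter (m)
    gap = rng.uniform(0.0, 0.2, n)          # initial gap under the pipe (m)
    X = np.column_stack([velocity, diameter, gap])
    # Stand-in relation plus noise; the real target is measured scour depth.
    y = 0.6 * diameter * np.tanh(velocity) * np.exp(-3.0 * gap / diameter)
    y = y + rng.normal(0.0, 0.002, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8),
                                       max_iter=5000, random_state=0))
    model.fit(X_tr, y_tr)
    print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))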

  2. Hybrid Pluggable Processing Pipeline (HyP3): Programmatic Access to Cloud-Based Processing of SAR Data

    Science.gov (United States)

    Weeden, R.; Horn, W. B.; Dimarchi, H.; Arko, S. A.; Hogenson, K.

    2017-12-01

    A problem often faced by Earth science researchers is the question of how to scale algorithms that were developed against few datasets and take them to regional or global scales. This problem only gets worse as we look to a future with larger and larger datasets becoming available. One significant hurdle can be having the processing and storage resources available for such a task, not to mention the administration of those resources. As a processing environment, the cloud offers nearly unlimited potential for compute and storage, with limited administration required. The goal of the Hybrid Pluggable Processing Pipeline (HyP3) project was to demonstrate the utility of the Amazon cloud to process large amounts of data quickly and cost effectively. Principally built by three undergraduate students at the ASF DAAC, the HyP3 system relies on core Amazon cloud services such as Lambda, Relational Database Service (RDS), Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Elastic Beanstalk. HyP3 provides an Application Programming Interface (API) through which users can programmatically interface with the HyP3 system; allowing them to monitor and control processing jobs running in HyP3, and retrieve the generated HyP3 products when completed. This presentation will focus on the development techniques and enabling technologies that were used in developing the HyP3 system. Data and process flow, from new subscription through to order completion will be shown, highlighting the benefits of the cloud for each step. Because the HyP3 system can be accessed directly from a user's Python scripts, powerful applications leveraging SAR products can be put together fairly easily. This is the true power of HyP3; allowing people to programmatically leverage the power of the cloud.
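
    For a feel of what "programmatic access" looks like in practice, here is a minimal submit-poll-retrieve sketch in Python. The base URL, endpoint paths, JSON fields and granule name are hypothetical placeholders invented for illustration; they are not the documented HyP3 API, which should be consulted for real use.

    # Generic job-submission pattern against a hypothetical REST API.
    import time
    import requests

    API = "https://hyp3.example.org/api"        # hypothetical base URL
    AUTH = {"Authorization": "Bearer <token>"}  # placeholder credential

    def submit_and_wait(granule_name, poll_seconds=30):
        resp = requests.post(f"{API}/jobs", json={"granule": granule_name},
                             headers=AUTH, timeout=30)
        resp.raise_for_status()
        job_id = resp.json()["job_id"]
        while True:
            status = requests.get(f"{API}/jobs/{job_id}", headers=AUTH,
                                  timeout=30).json()
            if status["state"] in ("SUCCEEDED", "FAILED"):
                return status
            time.sleep(poll_seconds)            # job still running in the cloud

    result = submit_and_wait("S1A_EXAMPLE_GRANULE")
    print(result["state"], result.get("product_url"))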

  3. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools

    Directory of Open Access Journals (Sweden)

    O'Callaghan Sean

    2012-05-01

    Full Text Available Abstract Background Gas chromatography–mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools, suitable for scripting of high-throughput, customized processing pipelines. Results PyMS comprises a library of functions for processing of instrument GC-MS data developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). Conclusions PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios PyMS performs
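
    As a generic illustration of the smoothing, baseline-correction and peak-detection steps listed above, the sketch below runs them over a synthetic total-ion chromatogram with scipy. It deliberately does not reproduce PyMS's own function names; consult the PyMS documentation for the actual API.

    # Smooth, baseline-correct and pick peaks in a synthetic chromatogram.
    import numpy as np
    from scipy.signal import savgol_filter, find_peaks

    rng = np.random.default_rng(7)
    t = np.linspace(0, 30, 3000)                       # retention time (min)
    tic = (200 * np.exp(-0.5 * ((t - 8) / 0.10) ** 2)  # three Gaussian peaks
           + 350 * np.exp(-0.5 * ((t - 15) / 0.15) ** 2)
           + 120 * np.exp(-0.5 * ((t - 22) / 0.12) ** 2)
           + 20 + 0.5 * t                              # drifting baseline
           + rng.normal(0, 3, t.size))                 # detector noise

    smoothed = savgol_filter(tic, window_length=21, polyorder=3)  # noise smoothing
    baseline = np.percentile(smoothed, 10)                        # crude baseline
    peaks, _ = find_peaks(smoothed - baseline, height=50, distance=50)
    for i in peaks:
        print(f"peak at {t[i]:.2f} min, apex intensity {smoothed[i]:.0f}")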

  4. Crystallographic texture control helps improve pipeline steel resistance to hydrogen-induced cracking

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F; Hallen, J M; Herrera, O; Venegas, V [ESIQIE, Instituto Politecnico Nacional, Mexico, (Mexico); Baudin, T [Universite de Paris Sud, Orsay, (France)

    2010-07-01

    The resistance to HIC of sour service pipeline steels has been improved through several strategies, but none has proven totally effective in preventing HIC under difficult operating conditions. Crystallographic texture plays a significant role in determining the behavior of HIC in pipeline steels. The present study set out to show that crystallographic texture control, through warm rolling schedules, helps improve pipeline steel resistance to HIC. Several samples of an API 5L X52 grade pipeline steel were produced using different thermomechanical processes (austenization, controlled rolling and recrystallization). These samples were subjected to cathodic charging. Scanning electron microscopy and automated FEG/EBSD were used to perform metallographic inspections and to collect microstructure data. The results showed that a strong γ-fiber texture significantly reduces or even prevents HIC damage. It is possible to improve the HIC resistance of pipeline steels using crystallographic texture control and grain boundary engineering.

  5. VPipe: Virtual Pipelining for Scheduling of DAG Stream Query Plans

    Science.gov (United States)

    Wang, Song; Gupta, Chetan; Mehta, Abhay

    There are data streams all around us that can be harnessed for tremendous business and personal advantage. For an enterprise-level stream processing system such as CHAOS [1] (Continuous, Heterogeneous Analytic Over Streams), handling of complex query plans with resource constraints is challenging. While several scheduling strategies exist for stream processing, efficient scheduling of complex DAG query plans is still largely unsolved. In this paper, we propose a novel execution scheme for scheduling complex directed acyclic graph (DAG) query plans with meta-data enriched stream tuples. Our solution, called Virtual Pipelined Chain (or VPipe Chain for short), effectively extends the "Chain" pipelining scheduling approach to complex DAG query plans.
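
    The abstract does not spell out the VPipe algorithm itself, so the sketch below only illustrates the underlying idea in Python: topologically order a DAG of stream operators, then greedily group nodes into linear chains (single-successor/single-predecessor segments), each of which can be scheduled as one pipeline. The operator names and toy query plan are invented for the example.

    # Decompose a DAG of stream operators into pipelineable linear chains.
    from collections import defaultdict

    edges = [("src", "filter"), ("filter", "join"), ("src2", "join"),
             ("join", "agg"), ("agg", "sink")]

    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))

    # Kahn's algorithm for a topological order.
    order = []
    ready = [n for n in sorted(nodes) if indeg[n] == 0]
    while ready:
        u = ready.pop()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)

    # Greedy chaining: extend while the tail has exactly one successor and
    # that successor has a single predecessor (a true pipeline segment).
    pred_count = defaultdict(int)
    for u, v in edges:
        pred_count[v] += 1
    chains, used = [], set()
    for n in order:
        if n in used:
            continue
        chain = [n]
        used.add(n)
        while len(succ[chain[-1]]) == 1 and pred_count[succ[chain[-1]][0]] == 1:
            nxt = succ[chain[-1]][0]
            chain.append(nxt)
            used.add(nxt)
        chains.append(chain)
    print(chains)   # e.g. [['src2'], ['src', 'filter'], ['join', 'agg', 'sink']]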

  6. Object-oriented graphics programming in C++

    CERN Document Server

    Stevens, Roger T

    2014-01-01

    Object-Oriented Graphics Programming in C++ provides programmers with the information needed to produce realistic pictures on a PC monitor screen.The book is comprised of 20 chapters that discuss the aspects of graphics programming in C++. The book starts with a short introduction discussing the purpose of the book. It also includes the basic concepts of programming in C++ and the basic hardware requirement. Subsequent chapters cover related topics in C++ programming such as the various display modes; displaying TGA files, and the vector class. The text also tackles subjects on the processing

  7. Reliability and risk evaluation of a port oil pipeline transportation system in variable operation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Soszynska, Joanna, E-mail: joannas@am.gdynia.p [Department of Mathematics, Gdynia Maritime University, ul. Morska 83, 81-225 Gdynia (Poland)

    2010-02-15

    The semi-Markov model of the system operation process is proposed and its selected characteristics are determined. A system composed of multi-state components is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability is applied to the reliability and risk evaluation of the port oil pipeline transportation system. The pipeline system is described and the unknown parameters of its operation process are identified on the basis of real statistical data. The mean values of the unconditional sojourn times of the pipeline system operation process in particular operation states are found and applied to determining the transient probabilities of this process in these states. The different reliability structures of the piping in its various operation states are fixed, and their conditional reliability functions are approximately determined on the basis of data coming from experts. Finally, after applying the earlier estimated transient probabilities and the system conditional reliability functions in particular operation states, the unconditional reliability function, the mean values and standard deviations of the pipeline lifetimes in particular reliability states, the risk function and the moment when the risk exceeds a critical value are found.

  8. Reliability and risk evaluation of a port oil pipeline transportation system in variable operation conditions

    International Nuclear Information System (INIS)

    Soszynska, Joanna

    2010-01-01

    The semi-Markov model of the system operation process is proposed and its selected characteristics are determined. A system composed of multi-state components is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability is applied to the reliability and risk evaluation of the port oil pipeline transportation system. The pipeline system is described and the unknown parameters of its operation process are identified on the basis of real statistical data. The mean values of the unconditional sojourn times of the pipeline system operation process in particular operation states are found and applied to determining the transient probabilities of this process in these states. The different reliability structures of the piping in its various operation states are fixed, and their conditional reliability functions are approximately determined on the basis of data coming from experts. Finally, after applying the earlier estimated transient probabilities and the system conditional reliability functions in particular operation states, the unconditional reliability function, the mean values and standard deviations of the pipeline lifetimes in particular reliability states, the risk function and the moment when the risk exceeds a critical value are found.
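
    A hedged numerical sketch of the construction described above: given an embedded transition matrix for the operation process and mean sojourn times in each operation state (both illustrative here, not the paper's estimates), the long-run fraction of time spent in state k is pi_k * m_k / sum_j pi_j * m_j, where pi is the stationary distribution of the embedded chain.

    # Limiting operation-state probabilities of a semi-Markov process.
    import numpy as np

    P = np.array([[0.0, 0.7, 0.3],     # embedded Markov chain of operation states
                  [0.4, 0.0, 0.6],
                  [0.5, 0.5, 0.0]])
    m = np.array([12.0, 5.0, 8.0])     # mean sojourn times (hours) per state

    # Stationary distribution pi of the embedded chain: pi P = pi, sum(pi) = 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi = pi / pi.sum()

    p = pi * m / (pi * m).sum()        # fraction of time spent in each state
    print("limiting operation-state probabilities:", p.round(3))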

  9. Corporate social responsibility along pipelines: communities and corporations working together

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Edison D.R.; Lopes, Luciano E.; Danciguer, Lucilene; Macarini, Samuel; Souza, Maira de [Grupo de Aplicacao Interdisciplinar a Aprendizagem (GAIA), Campinas, SP (Brazil)

    2009-07-01

    improving communities' life quality. 7. Follow-up, supporting community leaders during dissemination of information about pipelines, project fund-raising and implementation. 8. Creation and follow-up of company networks to support some of the projects elaborated by the communities. 9. Impact evaluation, measuring the results accomplished by the whole project after its realization. The overall process is monitored with management and quality tools such as PDCA and process and results indicators. The elaboration of projects by community members, organizing their needs and requests, facilitates management decisions regarding private social investment. During the follow-up, GAIA supports the communities' fund-raising from several organizations, as well as creating networks of potential local supporters. Those initiatives tend to dilute the requests from communities to companies. Thus, companies foster the communities' autonomy and citizenship, creating a situation in which both companies and communities benefit. (author)

  10. Optimum thermal design of steam pipelines and its impact on environment pollution

    International Nuclear Information System (INIS)

    Abdallah, A.M.; Karameldin, A.

    1999-01-01

    The major portion of the electric power generated all over the world, produced from conventional and nuclear fuels, is generated by steam. Moreover, steam is used extensively in the electronics, food, seawater desalination, and many other industries. In the last fifty years, little improvement has been made in the thermal efficiency of steam boilers. The major developments have been carried out in the direction of maintaining this efficiency on low-grade fuel and reducing labor and maintenance charges. Because the annual cost of fuel (nuclear and non-nuclear) is often greater than the combined cost of other expenses in steam power plants, a great amount of money can be saved by designing steam pipelines so as to minimize their total annual cost. This can be done through optimal design against the total annual cost of the pipelines, which includes the cost of insulation material and the cost of burned fuel plus the cost of maintenance. To deal with this situation, a case study of a superheated main steam pipeline at the Shobrah Elkhema power plant is investigated. A general simplified working formula for calculating the heat transfer coefficient around a tube has been correlated and verified to facilitate the development of the heat transfer mathematical model together with the steam pipeline total cost algorithm. The total cost algorithm has been optimized and solved by a digital computer program derived specially for this study. The obtained results are presented in graphical form and analyzed. The results reveal that the optimal steam pipeline insulation must be chosen carefully. An insulation thickness of 0.225 up to 0.235 m covers an operating time of 10-20 years and a fuel price of $0.125 up to more than $0.2/kg. The calculated optimized insulation thickness reduces the emissions of sulfur dioxide, nitrogen oxide and carbon dioxide from 375, 75 and 26,778 kg/m/yr to less than 7, 1.7 and 590 kg/m/yr, respectively.
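
    A toy version of this cost optimization can be set up directly. In the sketch below, all coefficients are illustrative stand-ins (not the paper's data): the annual cost per metre of pipeline is the annualized insulation cost plus the cost of fuel burned to cover heat conducted through the cylindrical insulation layer, minimized over the thickness.

    # Minimize total annual cost per metre over insulation thickness.
    import numpy as np
    from scipy.optimize import minimize_scalar

    r_pipe = 0.15          # pipe outer radius (m)
    k_ins = 0.05           # insulation thermal conductivity (W/m K)
    dT = 400.0             # steam-to-ambient temperature difference (K)
    hours = 8000.0         # operating hours per year
    fuel_cost = 1.5e-5     # $ per Wh of lost heat (illustrative)
    ins_cost = 120.0       # annualized $ per m^3 of insulation (illustrative)

    def annual_cost_per_m(t):
        """Cost per metre of pipeline for insulation thickness t (m)."""
        # Radial conduction through a cylindrical shell: q = 2*pi*k*dT / ln(r2/r1).
        q = 2 * np.pi * k_ins * dT / np.log((r_pipe + t) / r_pipe)   # W/m lost
        heat = q * hours * fuel_cost                                 # $/m/yr fuel
        material = ins_cost * np.pi * ((r_pipe + t) ** 2 - r_pipe ** 2)
        return heat + material

    res = minimize_scalar(annual_cost_per_m, bounds=(0.01, 0.6), method="bounded")
    print(f"optimal thickness: {res.x:.3f} m, cost: {res.fun:.1f} $/m/yr")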

  11. Graphical Argument in the Essayist Prose of the Pesquisa FAPESP Journal

    Directory of Open Access Journals (Sweden)

    Irene Machado

    2016-03-01

This article investigates the concept of graphical argumentation as an exercise in essayistic prose, developed as writing expands within printed texts. It argues that when the scope of the word is extended into visual graphic processes such as drawings, photographs and infographics, arguments become achievements of diagrammatic reasoning far more than of rhetorical elaboration. Proof of this lies in graphic arguments, which have become an inalienable modelling device in texts of scientific communication, such as those produced in the Pesquisa FAPESP journal.

  12. Integrity assessment of pipelines - additional remarks; Avaliacao da integridade de dutos - observacoes adicionais

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Luis F.C. [PETROBRAS S.A., Salvador, BA (Brazil). Unidade de Negocios. Exploracao e Producao

    2005-07-01

Integrity assessment of pipelines is part of a process that aims to enhance the operating safety of pipelines. During this task, questions normally come up concerning the interpretation of inspection reports and how to judge the impact of several parameters on pipeline integrity. In order to answer such questions satisfactorily, the integrity assessment team must be able to approach different subjects suitably, such as corrosion control and monitoring, assessment of metal loss and geometric anomalies, and third-party activities. This paper presents additional remarks on some of these questions, based on the integrity assessment of almost fifty pipelines carried out at PETROBRAS E&P Bahia over the past eight years. (author)

  13. The negative binomial distribution as a model for external corrosion defect counts in buried pipelines

    International Nuclear Information System (INIS)

    Valor, Alma; Alfonso, Lester; Caleyo, Francisco; Vidal, Julio; Perez-Baruch, Eloy; Hallen, José M.

    2015-01-01

Highlights: • Observed external-corrosion defects in underground pipelines revealed a tendency to cluster. • The Poisson distribution is unable to fit extensive count data for this type of defect. • In contrast, the negative binomial distribution provides a suitable count model for them. • Two spatial stochastic processes lead to the negative binomial distribution for defect counts. • They are the Gamma-Poisson mixed process and the compound Poisson process. • A Roger's process also arises as a plausible temporal stochastic process leading to corrosion defect clustering and to negative binomially distributed defect counts. - Abstract: The spatial distribution of external corrosion defects in buried pipelines is usually described as a Poisson process, which implies that corrosion defects are randomly distributed along the pipeline. However, in real operating conditions, the spatial distribution of defects departs considerably from Poisson statistics due to the aggregation of defects in groups or clusters. In this work, the statistical analysis of real corrosion data from underground pipelines operating in southern Mexico leads to the conclusion that the negative binomial distribution provides a better description of defect counts. The origin of this distribution from several processes is discussed. The processes analysed are the mixed Gamma-Poisson, compound Poisson and Roger's processes. The physical reasons behind them are discussed for the specific case of soil corrosion.
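
    To make the over-dispersion argument concrete, the sketch below fits both a Poisson and a negative binomial model to synthetic clustered counts generated from a Gamma-Poisson mixture. The data and the method-of-moments fit are illustrative stand-ins, not the paper's Mexican pipeline data or its estimation procedure.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic clustered defect counts per pipeline segment (illustrative only).
    rng = np.random.default_rng(0)
    # Gamma-Poisson mixture: segment-specific rates drawn from a Gamma distribution
    rates = rng.gamma(shape=0.8, scale=5.0, size=2000)
    counts = rng.poisson(rates)

    # Poisson fit: the MLE of the rate is simply the sample mean
    lam = counts.mean()
    ll_pois = stats.poisson.logpmf(counts, lam).sum()

    # Negative binomial fit via the method of moments (a simple stand-in for MLE)
    mean, var = counts.mean(), counts.var()
    p = mean / var              # valid because clustering makes var > mean
    r = mean * p / (1.0 - p)
    ll_nb = stats.nbinom.logpmf(counts, r, p).sum()

    print(f"log-likelihood Poisson: {ll_pois:.1f}, negative binomial: {ll_nb:.1f}")
    # For over-dispersed counts the negative binomial log-likelihood is markedly higher,
    # mirroring the paper's conclusion that Poisson statistics underfit clustered defects.
    ```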

  14. On the Role of Computer Graphics in Engineering Design Graphics Courses.

    Science.gov (United States)

    Pleck, Michael H.

The implementation of two- and three-dimensional computer graphics in a freshman engineering design course at the university level is described. An assessment of the capabilities and limitations of computer graphics is made, along with a presentation of the fundamental role which computer graphics plays in engineering design instruction.…

  15. A quick guide to pipeline engineering

    CERN Document Server

    Alkazraji, D

    2008-01-01

Pipeline engineering requires an understanding of a wide range of topics. Operators must take into account numerous pipeline codes and standards, calculation approaches, and reference materials in order to make accurate and informed decisions. A Quick Guide to Pipeline Engineering provides concise, easy-to-use, and accessible information on onshore and offshore pipeline engineering. Topics covered include: design; construction; testing; operation and maintenance; and decommissioning. Basic principles are discussed and clear guidance on regulations is provided, in a way that will

  16. Bicycle: a bioinformatics pipeline to analyze bisulfite sequencing data.

    Science.gov (United States)

    Graña, Osvaldo; López-Fernández, Hugo; Fdez-Riverola, Florentino; González Pisano, David; Glez-Peña, Daniel

    2018-04-15

High-throughput sequencing of bisulfite-converted DNA is a technique used to measure DNA methylation levels. Although a considerable number of computational pipelines have been developed to analyze such data, none of them addresses all the peculiarities of the analysis together, revealing limitations that can force the user to perform manually the additional steps needed for complete processing of the data. This article presents bicycle, an integrated, flexible analysis pipeline for bisulfite sequencing data. Bicycle analyzes whole-genome bisulfite sequencing data, targeted bisulfite sequencing data and hydroxymethylation data. To show how bicycle surpasses other available pipelines, we compared them on a defined set of features, summarized in a table. We also tested bicycle with both simulated and real datasets to show its level of performance, and compared it to different state-of-the-art methylation analysis pipelines. Bicycle is publicly available under the GNU LGPL v3.0 license at http://www.sing-group.org/bicycle. Users can also download a customized Ubuntu LiveCD including bicycle and the other bisulfite sequencing data pipelines compared here. In addition, a docker image with bicycle and its dependencies, which allows straightforward use of bicycle on any platform (e.g. Linux, OS X or Windows), is also available. ograna@cnio.es or dgpena@uvigo.es. Supplementary data are available at Bioinformatics online.

  17. Optimal valve location in long oil pipelines

    OpenAIRE

    Grigoriev, A.; Grigorieva, N.V.

    2007-01-01

We address the valve location problem, one of the basic problems in the design of long oil pipelines. Whenever a pipeline is depressurized, the shutoff valves block the oil flow and seal off the damaged part of the pipeline. Thus, the quantity of oil that can contaminate the area around the pipeline is determined by the volume of the damaged section of the pipeline between two consecutive valves. Ecological damage can then be quantified by the amount of leaked oil and the environmental characteristi...
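
    As a rough illustration of this problem, the sketch below places at most k valves at discrete candidate positions so as to minimize the largest inter-valve section (and hence, for a uniform cross-section, the worst-case spill volume), using a binary search on the answer with a greedy feasibility check. This is a simplified stand-in under those assumptions, not the authors' model or algorithm.

    ```python
    from bisect import bisect_right

    def min_max_spill(candidates, length, k):
        """Place at most k shutoff valves at candidate positions (km) along a
        pipeline of the given length so that the largest inter-valve section is
        as small as possible. Uniform cross-section assumed, so spill volume is
        proportional to section length. Returns (max_section, valve_positions)."""
        candidates = sorted(candidates)

        def feasible(limit):
            pos, used, placed = 0.0, 0, []
            while length - pos > limit:
                # Furthest candidate reachable within the current section limit
                i = bisect_right(candidates, pos + limit) - 1
                if i < 0 or candidates[i] <= pos:
                    return None        # no candidate valve can close this section
                pos = candidates[i]
                placed.append(pos)
                used += 1
                if used > k:
                    return None        # valve budget exceeded
            return placed

        # Binary search on the answer (the largest allowed section length)
        lo, hi, best = 0.0, length, (length, [])
        for _ in range(60):
            mid = (lo + hi) / 2.0
            placed = feasible(mid)
            if placed is not None:
                best, hi = (mid, placed), mid
            else:
                lo = mid
        return best

    # Example: a 100 km line, five candidate sites, budget of two valves
    print(min_max_spill([10, 25, 40, 70, 85], length=100, k=2))
    ```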

  18. Solving an unpiggable pipeline challenge

    Energy Technology Data Exchange (ETDEWEB)

    Walker, James R. [GE Oil and Gas, PII Pipeline Solutions, Cramlington Northumberland (United Kingdom); Kern, Michael [National Grid, New Hampshire (United Kingdom)

    2009-07-01

Technically, any pipeline can be retrofitted to enable in-line inspection. In practice, however, the expense of excavations and the construction of permanent facilities has in many cases been prohibitive. Even where traditional modifications are feasible from an engineering perspective, flow interruption may not be an option, either because the lines are critical supply lines or because the associated lost revenue would be unacceptably large. Experienced pipeline integrity managers know the safety issue that is at stake over the long term. They are also well aware of the accuracy benefits that high-quality in-line inspection data offer over potentially supply-disruptive alternatives such as hydrostatic testing. To complicate matters further, many operators, particularly in the US, now face regulatory pressure to assess the integrity of their as-yet-uninspected pipelines located in highly populated areas. This paper describes an important project undertaken by National Grid that made use of a unique pipeline access method, requiring no permanent installation of the expensive facilities normally needed for in-line inspection, on a pipeline previously considered 'unpiggable'. Since the pipeline was located in an urban area, flow disruption had to be minimized. The paper defines the project background, its challenges, outcomes and lessons learned for the future. (author)

  19. A document processing pipeline for annotating chemical entities in scientific documents.

    Science.gov (United States)

    Campos, David; Matos, Sérgio; Oliveira, José L

    2015-01-01

    The recognition of drugs and chemical entities in text is a very important task within the field of biomedical information extraction, given the rapid growth in the amount of published texts (scientific papers, patents, patient records) and the relevance of these and other related concepts. If done effectively, this could allow exploiting such textual resources to automatically extract or infer relevant information, such as drug profiles, relations and similarities between drugs, or associations between drugs and potential drug targets. The objective of this work was to develop and validate a document processing and information extraction pipeline for the identification of chemical entity mentions in text. We used the BioCreative IV CHEMDNER task data to train and evaluate a machine-learning based entity recognition system. Using a combination of two conditional random field models, a selected set of features, and a post-processing stage, we achieved F-measure results of 87.48% in the chemical entity mention recognition task and 87.75% in the chemical document indexing task. We present a machine learning-based solution for automatic recognition of chemical and drug names in scientific documents. The proposed approach applies a rich feature set, including linguistic, orthographic, morphological, dictionary matching and local context features. Post-processing modules are also integrated, performing parentheses correction, abbreviation resolution and filtering erroneous mentions using an exclusion list derived from the training data. The developed methods were implemented as a document annotation tool and web service, freely available at http://bioinformatics.ua.pt/becas-chemicals/.
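
    The feature-based CRF approach described above can be sketched with the sklearn-crfsuite library. The toy corpus, feature set and hyper-parameters below are illustrative assumptions and not the actual becas-chemicals configuration, which combines two CRF models with a much richer feature set and post-processing.

    ```python
    # pip install sklearn-crfsuite
    import sklearn_crfsuite

    def token_features(sent, i):
        """Orthographic and local-context features, in the spirit of the rich
        feature set described above (a simplified illustration)."""
        w = sent[i]
        feats = {
            'lower': w.lower(),
            'is_upper': w.isupper(),
            'is_title': w.istitle(),
            'has_digit': any(c.isdigit() for c in w),
            'prefix3': w[:3],
            'suffix3': w[-3:],
        }
        if i > 0:
            feats['prev_lower'] = sent[i - 1].lower()
        else:
            feats['BOS'] = True
        if i < len(sent) - 1:
            feats['next_lower'] = sent[i + 1].lower()
        else:
            feats['EOS'] = True
        return feats

    # Tiny toy corpus with BIO labels for chemical mentions (illustrative only)
    sents = [['Aspirin', 'inhibits', 'COX-1', '.'],
             ['We', 'dissolved', 'sodium', 'chloride', 'in', 'water', '.']]
    labels = [['B-CHEM', 'O', 'B-CHEM', 'O'],
              ['O', 'O', 'B-CHEM', 'I-CHEM', 'O', 'B-CHEM', 'O']]

    X = [[token_features(s, i) for i in range(len(s))] for s in sents]
    crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X, labels)
    print(crf.predict(X))
    ```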

  20. Write Is Right: Using Graphic Organizers to Improve Student Mathematical Problem Solving

    Science.gov (United States)

    Zollman, Alan

    2012-01-01

    Teachers have used graphic organizers successfully in teaching the writing process. This paper describes graphic organizers and their potential mathematics benefits for both students and teachers, elucidates a specific graphic organizer adaptation for mathematical problem solving, and discusses results using the "four-corners-and-a-diamond"…