WorldWideScience

Sample records for unit process flow

  1. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    Science.gov (United States)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results be accurate and the simulation models fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The accuracy of the resulting numerical model was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve increases and the numerical stability becomes more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem, and computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphics Processing Units (GPUs) to significantly decrease the simulation time [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphics hardware technology are compared against single-core (sequential) and multi-core (parallel) CPU implementations. References: [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014). A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources, 71, 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013). 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics, 225, 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014). An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software, 78, 1-15.
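The weakly coupled flow-bed update the abstract describes can be sketched compactly. Below is a minimal 1D Exner step driven by a given velocity field, assuming a Grass-type transport law q_s = A_g*u^3; the function name, parameter values and periodic boundaries are illustrative choices for this sketch, not the paper's actual 2D closure or finite-volume scheme.

```python
# Illustrative 1D Exner bed update, weakly coupled to a given flow field.
# The Grass law q_s = A_g * u**3 and all parameter values are assumptions
# for demonstration; the paper's 2D scheme differs in detail.

def exner_step(z, u, dx, dt, A_g=0.001, porosity=0.4):
    """Advance bed elevation z one step: dz/dt + xi * dq_s/dx = 0,
    with xi = 1/(1 - porosity) and q_s from the Grass formula."""
    n = len(z)
    qs = [A_g * ui ** 3 for ui in u]
    xi = 1.0 / (1.0 - porosity)
    z_new = z[:]
    for i in range(n):
        # central difference with periodic boundaries
        dqdx = (qs[(i + 1) % n] - qs[(i - 1) % n]) / (2.0 * dx)
        z_new[i] = z[i] - dt * xi * dqdx
    return z_new

# Uniform flow transports sediment without deforming the bed:
z = [0.0] * 8
u = [1.0] * 8
z1 = exner_step(z, u, dx=0.5, dt=0.01)
```

With periodic boundaries the central-difference divergence sums to zero, so the update conserves total bed volume, a property the weakly coupled scheme also relies on.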

  2. Performance Analysis of the United States Marine Corps War Reserve Materiel Program Process Flow

    Science.gov (United States)

    2016-12-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. MBA Professional Report: Performance Analysis of the United States Marine Corps War Reserve Materiel Program Process Flow. Author(s): Nathan A. Campbell. ...an item is requested but not maintained in the WRM inventory. By conducting a process analysis and using computer modeling, our recommendations are

  3. Large eddy simulations of turbulent flows on graphics processing units: Application to film-cooling flows

    Science.gov (United States)

    Shinn, Aaron F.

    Computational Fluid Dynamics (CFD) simulations can be very computationally expensive, especially for Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) of turbulent flows. In LES the large, energy-containing eddies are resolved by the computational mesh, but the smaller (sub-grid) scales are modeled. In DNS, all scales of turbulence are resolved, including the smallest dissipative (Kolmogorov) scales. Clusters of CPUs have been the standard approach for such simulations, but an emerging approach is the use of Graphics Processing Units (GPUs), which deliver impressive computing performance compared to CPUs. Recently there has been great interest in the scientific computing community to use GPUs for general-purpose computation (such as the numerical solution of PDEs) rather than graphics rendering. To explore the use of GPUs for CFD simulations, an incompressible Navier-Stokes solver was developed for a GPU. This solver is capable of simulating unsteady laminar flows or performing a LES or DNS of turbulent flows. The Navier-Stokes equations are solved via a fractional-step method and are spatially discretized using the finite volume method on a Cartesian mesh. An immersed boundary method based on a ghost cell treatment was developed to handle flow past complex geometries. The implementation of these numerical methods had to suit the architecture of the GPU, which is designed for massive multithreading. The details of this implementation will be described, along with strategies for performance optimization. Validation of the GPU-based solver was performed for fundamental benchmark problems, and a performance assessment indicated that the solver was over an order of magnitude faster compared to a CPU. The GPU-based Navier-Stokes solver was used to study film-cooling flows via Large Eddy Simulation. In modern gas turbine engines, the film-cooling method is used to protect turbine blades from hot combustion gases. Therefore, understanding the physics of

  4. Fast blood flow visualization of high-resolution laser speckle imaging data using graphics processing unit.

    Science.gov (United States)

    Liu, Shusen; Li, Pengcheng; Luo, Qingming

    2008-09-15

    Laser speckle contrast analysis (LASCA) is a non-invasive, full-field optical technique that produces a two-dimensional map of blood flow in biological tissue by analyzing speckle images captured by a CCD camera. Due to the heavy computation required for speckle contrast analysis, video-frame-rate visualization of blood flow, which is essential for medical usage, is hardly achievable for high-resolution image data using the CPU (Central Processing Unit) of an ordinary PC (Personal Computer). In this paper, we introduce the GPU (Graphics Processing Unit) into our data processing framework for laser speckle contrast imaging to achieve fast and high-resolution blood flow visualization on PCs by exploiting the high floating-point processing power of commodity graphics hardware. By using the GPU, a 12- to 60-fold performance enhancement is obtained in comparison to optimized CPU implementations.
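The per-pixel computation that makes LASCA a good GPU target is simple to state: the speckle contrast K = sigma/mean over a small sliding window. A pure-Python sketch (the window size and the `speckle_contrast` helper are illustrative assumptions; the paper's implementation runs this per-pixel loop on graphics hardware):

```python
# Sketch of the core LASCA computation: spatial speckle contrast
# K = stdev / mean over a sliding window (here 3x3, valid region only).
# Each output pixel is independent, which is what makes it GPU-friendly.

import statistics

def speckle_contrast(img, win=3):
    """Return a contrast map K = stdev/mean over win x win windows."""
    h, w = len(img), len(img[0])
    r = win // 2
    out = []
    for y in range(r, h - r):
        row = []
        for x in range(r, w - r):
            vals = [img[y + dy][x + dx]
                    for dy in range(-r, r + 1)
                    for dx in range(-r, r + 1)]
            m = statistics.mean(vals)
            s = statistics.pstdev(vals)
            row.append(s / m if m > 0 else 0.0)
        out.append(row)
    return out

# A perfectly uniform image has zero contrast everywhere:
flat = [[10.0] * 5 for _ in range(5)]
K = speckle_contrast(flat)
```

Lower contrast corresponds to more blurring of the speckles during the exposure, i.e. faster flow.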

  5. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Processing Units

    Science.gov (United States)

    This paper presents the CCHE2D implicit flow model parallelized using the CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized Alternating Direction Implicit (ADI) solver using the Parallel Cyclic Reduction (PCR) algorithm on the GPU is developed and tested. This solve...
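ADI sweeps reduce the 2D implicit step to many independent tridiagonal systems, and PCR solves each of them in O(log n) parallel steps. The following serial Python sketch of the standard PCR recurrence is an assumption about the general algorithm, not code from the CCHE2D model; on a GPU the inner loop over i runs as one thread per equation.

```python
# Serial sketch of Parallel Cyclic Reduction (PCR) for a tridiagonal
# system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].
# Each doubling of the stride decouples equations further; after
# ceil(log2(n)) levels every equation involves a single unknown.

def pcr_solve(a, b, c, d):
    n = len(b)
    a, b, c, d = a[:], b[:], c[:], d[:]
    stride = 1
    while stride < n:
        na, nb, nc, nd = a[:], b[:], c[:], d[:]
        for i in range(n):
            bi, di, ai, ci = b[i], d[i], 0.0, 0.0
            if i - stride >= 0:          # eliminate coupling to i - stride
                alpha = -a[i] / b[i - stride]
                ai = alpha * a[i - stride]
                bi += alpha * c[i - stride]
                di += alpha * d[i - stride]
            if i + stride < n:           # eliminate coupling to i + stride
                gamma = -c[i] / b[i + stride]
                ci = gamma * c[i + stride]
                bi += gamma * a[i + stride]
                di += gamma * d[i + stride]
            na[i], nb[i], nc[i], nd[i] = ai, bi, ci, di
        a, b, c, d = na, nb, nc, nd
        stride *= 2
    return [d[i] / b[i] for i in range(n)]

# Diagonally dominant system constructed so the solution is x = [1, 2, 3, 4]:
x = pcr_solve([0.0, 1.0, 1.0, 1.0], [4.0] * 4,
              [1.0, 1.0, 1.0, 0.0], [6.0, 12.0, 18.0, 19.0])
```

Unlike the serial Thomas algorithm, every equation is updated independently at each level, which is the property the GPU implementation exploits.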

  6. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    Science.gov (United States)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow-transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speed-up of 19 is achieved compared to ArcGIS and 15 compared to SAGA. We show that on GPUs the topological-sort-based flow accumulation algorithm leads on average to a speed-up by a factor of 7 over the flow-transfer algorithm. Thus a total speed-up of the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km × 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many applications other than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
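The indegree/topological-sort strategy can be illustrated on a receiver-list abstraction of D8 routing. This serial sketch (the `receiver` encoding and Kahn's algorithm are illustrative assumptions, not the paper's CUDA kernels) shows why cells at equal topological depth can be processed in parallel:

```python
# Sketch of flow accumulation over a D8 flow-direction graph using
# topological (Kahn) ordering. Cells with indegree 0 (ridges) are
# processed first; on a GPU, each "wave" of ready cells runs concurrently.

from collections import deque

def flow_accumulation(receiver):
    """receiver[i] = index of the downstream cell of i, or -1 for an outlet.
    Returns acc[i] = number of cells draining through i (including itself)."""
    n = len(receiver)
    indegree = [0] * n
    for r in receiver:
        if r >= 0:
            indegree[r] += 1
    acc = [1] * n
    q = deque(i for i in range(n) if indegree[i] == 0)  # ridge cells first
    while q:
        i = q.popleft()
        r = receiver[i]
        if r >= 0:
            acc[r] += acc[i]           # pass accumulated area downstream
            indegree[r] -= 1
            if indegree[r] == 0:       # all upstream contributions received
                q.append(r)
    return acc

# Tiny example: cells 0 and 1 drain to 2, which drains to outlet 3.
acc = flow_accumulation([2, 2, 3, -1])
```

Because a D8 flow-direction grid is acyclic, the queue always empties, and each cell is visited exactly once after all its upstream neighbours.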

  7. Real-time blood flow visualization using the graphics processing unit.

    Science.gov (United States)

    Yang, Owen; Cuccia, David; Choi, Bernard

    2011-01-01

    Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was combined with CUDA and integrated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ∼10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.

  8. Evaluation of the Synthoil process. Volume III. Unit block flow diagrams for a 100,000 barrel/stream day facility

    Energy Technology Data Exchange (ETDEWEB)

    Salmon, R.; Edwards, M.S.; Ulrich, W.C.

    1977-06-01

    This volume consists of individual block flowsheets for the various units of the Synthoil facility, showing the overall flows into and out of each unit. Material balances for the following units are incomplete because these are proprietary processes and the information was not provided by the respective vendors: Unit 24-Claus Sulfur Plant; Unit 25-Oxygen Plant; Unit 27-Sulfur Plant (Redox Type); and Unit 28-Sour Water Stripper and Ammonia Recovery Plant. The process information in this form was specifically requested by ERDA/FE for inclusion in the final report.

  9. Investigation of crossover processes in a unitized bidirectional vanadium/air redox flow battery

    Science.gov (United States)

    grosse Austing, Jan; Nunes Kirchner, Carolina; Komsiyska, Lidiya; Wittstock, Gunther

    2016-02-01

    In this paper the losses in coulombic efficiency are investigated for a vanadium/air redox flow battery (VARFB) comprising a two-layered positive electrode. Ultraviolet/visible (UV/Vis) spectroscopy is used to monitor the concentrations of V2+ and V3+ during operation. The most likely cause of the largest part of the coulombic losses is the permeation of oxygen from the positive to the negative electrode, followed by oxidation of V2+ to V3+. The total vanadium crossover is followed by inductively coupled plasma mass spectrometry (ICP-MS) analysis of the positive electrolyte after one VARFB cycle. During one cycle, 6% of the vanadium species initially present in the negative electrolyte are transferred to the positive electrolyte, which can account for at most 20% of the coulombic losses. The diffusion coefficients of V2+ and V3+ through Nafion® 117 are determined as D(V2+, N117) = 9.05·10⁻⁶ cm² min⁻¹ and D(V3+, N117) = 4.35·10⁻⁶ cm² min⁻¹ and are used to calculate the vanadium crossover due to diffusion, which allows differentiation between crossover due to diffusion and crossover due to migration/electroosmotic convection. In order to optimize the coulombic efficiency of the VARFB, membranes need to be designed with reduced oxygen permeation and vanadium crossover.
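The diffusion-only part of the crossover can be estimated from the reported coefficients with Fick's first law, J = D·c/L. In the sketch below only the two diffusion coefficients come from the abstract; the membrane thickness (nominal Nafion 117), area, concentration and exposure time are illustrative assumptions.

```python
# Back-of-the-envelope diffusive crossover estimate via Fick's first law.
# Only D_V2 and D_V3 are from the abstract; thickness (~0.0183 cm for
# Nafion 117), area, concentration and duration are assumed values.

def crossover_mol(D_cm2_min, c_mol_cm3, L_cm, area_cm2, t_min):
    """Moles transported across the membrane in t_min minutes,
    assuming a steady linear concentration gradient."""
    flux = D_cm2_min * c_mol_cm3 / L_cm      # mol cm^-2 min^-1
    return flux * area_cm2 * t_min

D_V2 = 9.05e-6   # cm^2/min, V2+ through Nafion 117 (from the abstract)
D_V3 = 4.35e-6   # cm^2/min, V3+ through Nafion 117 (from the abstract)
n_V2 = crossover_mol(D_V2, 1.0e-3, 0.0183, 10.0, 60.0)
n_V3 = crossover_mol(D_V3, 1.0e-3, 0.0183, 10.0, 60.0)
```

Under identical gradients the transported amounts scale with the diffusion coefficients, so V2+ is predicted to cross roughly twice as fast as V3+, consistent with the reported coefficient ratio.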

  10. Development of a Chemically Reacting Flow Solver on the Graphic Processing Units

    Science.gov (United States)

    2011-05-10

    been implemented on the GPU by Schive et al. (2010); the outcome of their work is the GAMER code for astrophysical simulation. Thibault and... model all the elementary reactions and their reverse processes. 4.2 Chemistry Model: An elementary reaction takes the form... are read from separate data files which contain all the species information used for the computation along with the elementary reactions. 4.3

  11. Signal processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Boswell, J.

    1983-01-01

    The architecture of the signal processing unit (SPU) comprises a ROM connected to a program bus, and an input-output bus connected to a data bus and register through a pipelined multiplier-accumulator (PMAC) and a pipelined arithmetic logic unit (PALU), each associated with a random access memory (RAM1, RAM2). The system clock frequency is 20 MHz. The PMAC is further detailed, and has a capability of 20 mega-operations per second. There is also a block diagram for the PALU, showing interconnections between the register block (RBL), bus separator (BS), register (REG), shifter (SH) and combination unit. The first and second RAMs have formats of 64x16 and 32x32 bits, respectively. Further data: a 5-V power supply and 2.5-micron n-channel silicon-gate MOS technology with about 50,000 transistors.

  12. TEP process flow diagram

    Energy Technology Data Exchange (ETDEWEB)

    Wilms, R Scott [Los Alamos National Laboratory; Carlson, Bryan [Los Alamos National Laboratory; Coons, James [Los Alamos National Laboratory; Kubic, William [Los Alamos National Laboratory

    2008-01-01

    This presentation describes the development of the proposed Process Flow Diagram (PFD) for the Tokamak Exhaust Processing System (TEP) of ITER. A brief review of design efforts leading up to the PFD is followed by a description of the hydrogen-like, air-like, and water-like processes. Two new design values are described: the most-common and most-demanding design values. The proposed PFD is shown to meet specifications under the most-common and most-demanding design values.

  13. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    Science.gov (United States)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment induced under rapid flows undergoes several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to predict accurately the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using Newtonian and the non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
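The Herschel-Bulkley-Papanastasiou closure mentioned above has a compact form: an exponentially regularized yield-stress term plus a power-law term. A sketch with illustrative parameter values (tau_y, k, n and the regularization parameter m below are assumptions, not the paper's calibrated values):

```python
# Sketch of the Herschel-Bulkley-Papanastasiou effective viscosity:
# mu_eff(g) = tau_y * (1 - exp(-m*g)) / g + k * g**(n - 1),
# where g is the shear rate. Parameter values are illustrative only.

import math

def hbp_viscosity(gamma_dot, tau_y=10.0, k=1.0, n=0.6, m=100.0):
    """Effective viscosity; the Papanastasiou factor (1 - exp(-m*g))
    regularizes the yield-stress term so it stays finite as g -> 0."""
    g = max(gamma_dot, 1e-12)            # avoid division by zero
    yield_term = tau_y * (1.0 - math.exp(-m * g)) / g
    power_law = k * g ** (n - 1.0)
    return yield_term + power_law

low = hbp_viscosity(1e-6)    # near-unyielded: very viscous
high = hbp_viscosity(100.0)  # fully sheared: shear-thinning
```

As the shear rate tends to zero the regularized term approaches tau_y*m rather than diverging, which is what makes the model usable inside an explicit SPH step.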

  14. Material flow of production process

    OpenAIRE

    Hanzelová Marcela

    2001-01-01

    This paper deals with the material flow of a production process. We present a block diagram of the material flow and compare engine capacities across various plants. The paper uses an IPO (Input-Process-Output) diagram, which describes a process in terms of its inputs and outputs. The production program is regarded as a string of serial, branching and parallel processes with respect to the IPO diagram. The process is not important with respect to events; we look at the process as a black box. For process is ...

  15. Flow generating processes

    NARCIS (Netherlands)

    Lanen, van H.A.J.; Fendeková, M.; Kupczyk, E.; Kasprzyk, A.; Pokojski, W.

    2004-01-01

    This chapter starts with an overview of how climatic water deficits affect hydrological processes in different types of catchments. It then continues with a more comprehensive description of drought-relevant processes. Two catchments in climatologically contrasting regions are used for illustrative p

  16. Material flow of production process

    Directory of Open Access Journals (Sweden)

    Hanzelová Marcela

    2001-12-01

    This paper deals with the material flow of a production process. We present a block diagram of the material flow and compare engine capacities across various plants. The paper uses an IPO (Input-Process-Output) diagram, which describes a process in terms of its inputs and outputs. The production program is regarded as a string of serial, branching and parallel processes with respect to the IPO diagram. The process is not important with respect to events; we look at the process as a black box. A process uses various materials and raw materials. The foundation for material analysis is a detailed model of the production process with defined flows of material, energy, waste, etc. Material flow is the organised movement of mass (material, money, information, people, etc.). Material analysis is made against the direction of material flow (i.e., from the end to the beginning). Material analysis is performed to determine the demand for individual materials, stocks, forms, etc. Particular attention is necessary for the elementary materials and raw materials on which the production program is based and which create the greater part of production costs. The fluency of the material flow depends on respecting the capacity parameters of each node with regard to standardized qualitative parameters and allowed limits.

  17. THOR Particle Processing Unit PPU

    Science.gov (United States)

    Federica Marcucci, Maria; Bruno, Roberto; Consolini, Giuseppe; D'Amicis, Raffaella; De Lauretis, Marcello; De Marco, Rossana; De Michelis, Paola; Francia, Patrizia; Laurenza, Monica; Materassi, Massimo; Vellante, Massimo; Valentini, Francesco

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. On board THOR, data collected by the Turbulent Electron Analyser, the Ion Mass Spectrum analyser and the Cold Solar Wind ion analyser instruments will be processed by a common digital processor unit, the Particle Processing Unit (PPU). The PPU architecture will be based on state-of-the-art spaceflight processors and will be fully redundant, in order to efficiently and safely handle the data from the numerous sensors of the instrument suite. The approach of a common processing unit for particle instruments is very important for enabling efficient management of correlative plasma measurements, also facilitating interoperation with other instruments on the spacecraft. Moreover, it permits technical and programmatic synergies, giving the possibility to optimize and save spacecraft resources.

  18. Stability of Armour Units in Oscillatory Flow

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Thompson, A. C.

    1983-01-01

    As part of a program to study the hydraulics of wave attack on rubble mound breakwaters, tests were made on model armour units in a steady flow through a layer laid on a slope. The flow angle has little effect on stability for dolosse or rock layers. The head drop at failure across each type of layer is similar, but the dolosse layer is more permeable and fails as a whole. There was no viscous scale effect. These results and earlier tests in oscillating flow suggest a 'reservoir' effect is important in the stability in steep waves.

  19. Simulation-based patient flow analysis in an endoscopy unit

    DEFF Research Database (Denmark)

    Koo, Pyung-Hoi; Nielsen, Karl Brian; Jang, Jaejin

    2010-01-01

    One of the major elements in improving the efficiency of healthcare services is patient flow. Patients require a variety of healthcare resources as they receive healthcare services. Poor management of patient flow results in long waiting times for patients, under- or over-utilization of medical resources, low quality of care and high healthcare cost. This article addresses patient flow problems at a gastrointestinal endoscopy unit. We attempt to analyze the main factors that contribute to inefficient patient flow and process bottlenecks, and to propose efficient patient scheduling and staff

  20. Estimated Water Flows in 2005: United States

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C A; Belles, R D; Simon, A J

    2011-03-16

    Flow charts depicting water use in the United States have been constructed from publicly available data and estimates of water use patterns. Approximately 410,500 million gallons per day of water are managed throughout the United States for use in farming, power production, residential, commercial, and industrial applications. Water is obtained from four major resource classes: fresh surface-water, saline (ocean) surface-water, fresh groundwater and saline (brackish) groundwater. Water that is not consumed or evaporated during its use is returned to surface bodies of water. The flow patterns are represented in a compact 'visual atlas' of 52 state-level (all 50 states in addition to Puerto Rico and the Virgin Islands) and one national water flow chart representing a comprehensive systems view of national water resources, use, and disposition.

  1. Modeling process flow using diagrams

    NARCIS (Netherlands)

    Kemper, B.; de Mast, J.; Mandjes, M.

    2010-01-01

    In the practice of process improvement, tools such as the flowchart, the value-stream map (VSM), and a variety of ad hoc variants of such diagrams are commonly used. The purpose of this paper is to present a clear, precise, and consistent framework for the use of such flow diagrams in process

  3. Control structures for flow process

    Directory of Open Access Journals (Sweden)

    Mircea Dulău

    2011-12-01

    In the industrial domain, a large number of applications are covered by slow processes, including flow, pressure, temperature and level control. Each control system must be treated in steady and dynamic states and from the point of view of the possible technical solutions. Based on mathematical models of the processes and design calculations, PC programs allow simulation and the determination of the control system performance. The paper presents a part of an industrial process with classical control loops for flow and temperature. The mathematical model of the flow control process was derived, the control structure was designed based on experimental criteria, and the version which ensures the imposed performance was chosen. Using Matlab, the robustness properties were studied.
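A minimal version of such a flow control loop, a first-order process under discrete PI control, can be simulated in a few lines; the gains and process parameters below are illustrative assumptions, not those identified in the paper.

```python
# Minimal sketch of a flow control loop: a first-order process
# dq/dt = (K*u - q)/T driven by a discrete PI controller.
# All parameter values are illustrative assumptions.

def simulate_pi(setpoint=2.0, K=1.0, T=5.0, Kp=2.0, Ki=0.8,
                dt=0.01, steps=5000):
    """Simulate the closed loop and return the final flow value."""
    q, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - q
        integral += error * dt
        u = Kp * error + Ki * integral        # PI control law
        q += dt * (K * u - q) / T             # explicit Euler process step
    return q

final_flow = simulate_pi()
```

The integral term guarantees zero steady-state error for a constant setpoint, so after the transient the simulated flow settles at the setpoint; the proportional gain mainly shapes the transient.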

  4. Flow Logic for Process Calculi

    DEFF Research Database (Denmark)

    Nielson, Hanne Riis; Nielson, Flemming; Pilegaard, Henrik

    2012-01-01

    Flow Logic is an approach to statically determining the behavior of programs and processes. It borrows methods and techniques from Abstract Interpretation, Data Flow Analysis and Constraint Based Analysis while presenting the analysis in a style more reminiscent of Type Systems. Traditionally developed for programming languages, this article provides a tutorial development of the approach of Flow Logic for process calculi based on a decade of research. We first develop a simple analysis for the π-calculus; this consists of the specification, semantic soundness (in the form of subject reduction and adequacy results), and a Moore Family result showing that a least solution always exists, as well as providing insights on how to implement the analysis. We then show how to strengthen the analysis technology by introducing reachability components, interaction points, and localized environments...

  5. Stability of Armour Units in Oscillatory Flow

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; Thompson, A. C.

    Despite numerous breakwater model tests, very little is known today about the various phenomena and parameters that determine the hydraulic stability characteristics of different types of armour. This is because separation of parameters is extremely difficult in traditional tests. With the object of separating some of the factors, a deterministic test, in which horizontal beds of armour units were exposed to oscillatory flow, was performed in a pulsating water tunnel.

  6. Relativistic hydrodynamics on graphics processing units

    CERN Document Server

    Sikorski, Jan; Porter-Sobieraj, Joanna; Słodkowski, Marcin; Krzyżanowski, Piotr; Książek, Natalia; Duda, Przemysław

    2016-01-01

    Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by the lack of an efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers an unprecedented increase of computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using the Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. A third order Runge-Kutta scheme was used for integration in the t...

  7. Temperature of the Central Processing Unit

    Directory of Open Access Journals (Sweden)

    Ivan Lavrov

    2016-10-01

    Heat is inevitably generated in semiconductors during operation. Cooling in a computer, and in its main part, the Central Processing Unit (CPU), is crucial, allowing proper functioning without overheating, malfunctioning, and damage. In order to estimate the temperature as a function of time, it is important to solve the differential equations describing the heat flow and to understand how it depends on the physical properties of the system. This project aims to answer these questions by considering a simplified model of the CPU + heat sink. A similarity with an electrical circuit and certain methods from electrical circuit analysis are discussed.
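The electrical-circuit analogy reduces the CPU + heat sink to a single thermal RC node, C·dT/dt = P − (T − T_amb)/R. A sketch with assumed parameter values (the article's actual model and constants may differ):

```python
# Lumped thermal model of the CPU + heat sink: C * dT/dt = P - (T - T_amb)/R,
# the thermal analogue of charging an RC circuit from a current source.
# Parameter values below are illustrative assumptions.

def cpu_temperature(P=50.0, R=0.5, C=20.0, T_amb=25.0, dt=0.1, t_end=600.0):
    """Integrate the single-node heat balance with explicit Euler.
    P in watts, R in K/W, C in J/K; returns temperature at t_end seconds."""
    T = T_amb
    for _ in range(int(t_end / dt)):
        T += dt * (P - (T - T_amb) / R) / C
    return T

T_final = cpu_temperature()
# Steady state approaches T_amb + P*R = 25 + 50*0.5 = 50 degC,
# with time constant R*C = 10 s, exactly as in an RC circuit.
```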

  8. Process Flow Diagrams for Training and Operations

    Science.gov (United States)

    Venter, Jacobus

    This paper focuses on the use of process flow diagrams for training first responders who execute search and seizure warrants at electronic crime scenes. A generic process flow framework is presented, and the design goals and layout characteristics of process flow diagrams are discussed. An evaluation of the process flow diagrams used in training courses indicates that they are beneficial to first responders performing searches and seizures, and they speed up investigations, including those conducted by experienced personnel.

  9. Hedging Cash Flows from Commodity Processing

    OpenAIRE

    Dahlgran, Roger A.

    2005-01-01

    Agribusinesses make long-term plant-investment decisions based on discounted cash flow. It is therefore incongruous for an agribusiness firm to use cash flow as a plant-investment criterion and then to completely discard cash flow in favor of batch profits as an operating objective. This paper assumes that cash flow and its stability are important to commodity processors and examines methods for hedging cash flows under continuous processing. Its objectives are (a) to determine how standard he...

  10. ON DEVELOPING CLEANER ORGANIC UNIT PROCESSES

    Science.gov (United States)

    Organic waste products, potentially harmful to the human health and the environment, are primarily produced in the synthesis stage of manufacturing processes. Many such synthetic unit processes, such as halogenation, oxidation, alkylation, nitration, and sulfonation are common to...

  11. Group flow, complex flow, unit vector flow, and the (2+ϵ)-flow conjecture

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2014-01-01

    If F is a (possibly infinite) subset of an abelian group Γ, then we define f(F,Γ) as the smallest natural number such that every f(F,Γ)-edge-connected (finite) graph G has a flow where all flow values are elements in F. We prove that f(F,Γ) exists if and only if some odd sum of elements in F equals some even sum. We discuss various instances of this problem. We prove that every 6-edge-connected graph has a flow whose flow values are the three roots of unity in the complex plane. If the edge-connectivity 6 can be reduced, then it can be reduced to 4, and the 3-flow conjecture follows. We prove that every 14-edge-connected graph has a flow whose flow values are the five roots of unity in the complex plane. Any such flow is balanced modulo 5. So, if the edge-connectivity 14 can be reduced to 9, then the 5-flow conjecture follows, as observed by F. Jaeger. We use vector flow to prove that, for each...

  12. Base-flow index grid for the conterminous United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This 1-kilometer raster (grid) dataset for the conterminous United States was created by interpolating base-flow index (BFI) values estimated at U.S. Geological...

  13. Mesh-particle interpolations on graphics processing units and multicore central processing units.

    Science.gov (United States)

    Rossinelli, Diego; Conti, Christian; Koumoutsakos, Petros

    2011-06-13

    Particle-mesh interpolations are fundamental operations for particle-in-cell codes, as implemented in vortex methods, plasma dynamics and electrostatics simulations. In these simulations, the mesh is used to solve the field equations and the gradients of the fields are used in order to advance the particles. The time integration of particle trajectories is performed through an extensive resampling of the flow field at the particle locations. The computational performance of this resampling turns out to be limited by the memory bandwidth of the underlying computer architecture. We investigate how mesh-particle interpolation can be efficiently performed on graphics processing units (GPUs) and multicore central processing units (CPUs), and we present two implementation techniques. The single-precision results for the multicore CPU implementation show an acceleration of 45-70×, depending on system size, and an acceleration of 85-155× for the GPU implementation over an efficient single-threaded C++ implementation. In double precision, we observe a performance improvement of 30-40× for the multicore CPU implementation and 20-45× for the GPU implementation. With respect to the 16-threaded standard C++ implementation, the present CPU technique leads to a performance increase of roughly 2.8-3.7× in single precision and 1.7-2.4× in double precision, whereas the GPU technique leads to an improvement of 9× in single precision and 2.2-2.8× in double precision.
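The mesh-to-particle gather at the heart of this resampling is easy to state in 1D with linear weights; the sketch below is illustrative (real vortex and PIC codes use higher-order kernels in 2D/3D, which is what stresses memory bandwidth):

```python
# Sketch of a 1D mesh-to-particle (M2P) gather with linear (hat) weights.
# Each particle reads the two nearest mesh nodes; the scattered, per-particle
# memory access pattern is what makes this kernel bandwidth-bound.

def m2p_linear(field, h, positions):
    """Interpolate a node-centred field (spacing h, node i at x = i*h)
    to arbitrary particle positions."""
    out = []
    n = len(field)
    for x in positions:
        i = int(x / h)
        i = min(max(i, 0), n - 2)           # clamp to a valid cell
        w = x / h - i                        # fractional offset in the cell
        out.append((1.0 - w) * field[i] + w * field[i + 1])
    return out

# Linear interpolation reproduces a linear field exactly:
h = 0.5
field = [2.0 * i * h + 1.0 for i in range(6)]   # samples of f(x) = 2x + 1
vals = m2p_linear(field, h, [0.3, 1.1, 2.2])
```

The particle-to-mesh scatter is the transpose operation; on GPUs it additionally requires atomic updates or colouring, which is why the two directions are often optimized separately.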

  14. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Full Text Available Graphics processing units (GPUs have been increasingly used for general-purpose computation in recent years. GPU-accelerated applications are found in both scientific and commercial domains. Sorting is considered one of the most important operations in many applications, so its efficient implementation is essential for overall application performance. This paper represents an effort to analyze and evaluate implementations of representative sorting algorithms on graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort were evaluated on the Compute Unified Device Architecture (CUDA platform that is used to execute applications on NVIDIA graphics processing units. The algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
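
    As a reference point for the algorithms evaluated above, here is a minimal CPU-side sketch of least-significant-digit radix sort, the digit-by-digit strategy that GPU radix sorts parallelize; the function name and digit width are illustrative:

```python
def radix_sort(values, bits=32, radix_bits=8):
    """LSD radix sort for non-negative integers: one stable
    bucket (counting-sort) pass per radix_bits-wide digit."""
    mask = (1 << radix_bits) - 1
    for shift in range(0, bits, radix_bits):
        buckets = [[] for _ in range(mask + 1)]
        for v in values:
            buckets[(v >> shift) & mask].append(v)
        # Stable concatenation preserves the order of earlier passes.
        values = [v for bucket in buckets for v in bucket]
    return values

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

On a GPU, each pass is typically implemented with parallel histograms and prefix sums rather than Python lists.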

  15. Concentrated flow erosion processes under planned fire

    Science.gov (United States)

    Langhans, Christoph; Noske, Phil; Van Der Sant, Rene; Lane, Patrick; Sheridan, Gary

    2016-04-01

    The role of wildfire in accelerating erosion rates for a certain period after fire has been well documented. Much less information is available on the erosion rates and processes after planned fires, which typically burn at much lower intensity. Observational evidence and some studies in southern and southeastern Australia suggest that erosion after planned fire can be significant if rainfall exceeds critical intensities and durations. Understanding erosion processes and rates under these event conditions is critically important for planning burn locations away from human assets such as water supplies and infrastructure. We conducted concentrated flow experiments to understand what critical conditions are required for significant erosion to occur on planned burn hillslopes. Concentrated flow run-on was applied to pre-wetted, unbounded plots of 10 m at rates of 0.5, 1, 1.5 and 2 L/s, with three replicates for each rate applied at 1 m distance from each other. The experiments were carried out at three sites within one burn perimeter, with burn severities ranging from low to high and two replicates at each site. Run-on was applied until an apparent steady state in runoff was reached at the lower plot boundary, typically within 0.7 to 2.5 minutes. The experiments were filmed, and erosion depth was measured by survey methods at 1 m intervals. Soil surface properties, including potential sediment-trapping objects, were measured and surveyed near the plots. We found that fire severity increased plot-scale average erosion depth significantly, even though experiments were typically much shorter on the high-severity plots. Unit stream power was a good predictor of average erosion depth. Uncontrolled variations in soil surface properties explained process behaviour: finer, ash-rich surface material was much less likely to be trapped by fallen, charred branches and litter than coarser, ash-depleted material.

  16. The importance of shallow confining units to submarine groundwater flow

    Science.gov (United States)

    Bratton, J.F.

    2007-01-01

    In addition to variable density flow, the lateral and vertical heterogeneity of submarine sediments creates important controls on coastal aquifer systems. Submarine confining units produce semi-confined offshore aquifers that are recharged on shore. These low-permeability deposits are usually either late Pleistocene to Holocene in age, or date to the period of the last interglacial highstand. Extensive confining units consisting of peat form in tropical mangrove swamps, and in salt marshes and freshwater marshes and swamps at mid-latitudes. At higher latitudes, fine-grained glaciomarine sediments are widespread. The net effect of these shallow confining units is that groundwater from land often flows farther offshore before discharging than would normally be expected. In many settings, the presence of such confining units is critical to determining how and where pollutants from land will be discharged into coastal waters. Alternatively, these confining units may also protect fresh groundwater supplies from saltwater intrusion into coastal wells.

  17. Analysis and Optimization of Central Processing Unit Process Parameters

    Science.gov (United States)

    Kaja Bantha Navas, R.; Venkata Chaitana Vignan, Budi; Durganadh, Margani; Rama Krishna, Chunduri

    2017-05-01

    The rapid growth of computing has made it possible to process more data, which increases heat dissipation. Hence the CPU in the system unit must be cooled to keep it within its operating temperature. This paper presents a novel approach to the optimization of operating parameters of a Central Processing Unit with a single response, based on the response graph method. The proposed approach consists of a series of steps capable of decreasing the uncertainty caused by engineering judgment in the Taguchi method. Orthogonal array values were taken from the ANSYS report. The method shows good convergence between the experimental and the optimum process parameters.

  18. Quantum Central Processing Unit and Quantum Algorithm

    Institute of Scientific and Technical Information of China (English)

    王安民

    2002-01-01

    Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including the quantum Fourier transform, Shor's algorithm and Grover's algorithm, is obtained in a unified way.
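
    The quantum Fourier transform mentioned above has a simple dense-matrix form that can be checked numerically. This sketch only builds the matrix and verifies unitarity; it is not the gate-level network construction used in the paper:

```python
import numpy as np

def qft_matrix(n_qubits):
    """Dense matrix of the quantum Fourier transform on n_qubits:
    F[j, k] = omega**(j*k) / sqrt(N), with omega = exp(2*pi*i/N)."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)
# A valid QFT matrix is unitary: F @ F^dagger equals the identity.
print(np.allclose(F @ F.conj().T, np.eye(8)))  # True
```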

  19. Syllables as Processing Units in Handwriting Production

    Science.gov (United States)

    Kandel, Sonia; Alvarez, Carlos J.; Vallee, Nathalie

    2006-01-01

    This research focused on the syllable as a processing unit in handwriting. Participants wrote, in uppercase letters, words that had been visually presented. The interletter intervals provide information on the timing of motor production. In Experiment 1, French participants wrote words that shared the initial letters but had different syllable…

  20. Graphics processing unit-assisted lossless decompression

    Science.gov (United States)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
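
    A Rice code, as referenced above, pairs a unary-coded quotient with a k-bit binary remainder. A minimal decoder sketch, assuming a ones-terminated-by-zero unary convention and MSB-first remainders (conventions differ between implementations, so treat these as illustrative choices):

```python
def rice_decode(bits, k):
    """Decode a list of bits as Rice-coded non-negative integers
    with parameter k: each value is a unary quotient (q ones, then
    a terminating 0) followed by a k-bit remainder; value = q*2**k + r."""
    values, i = [], 0
    while i < len(bits):
        q = 0
        while bits[i] == 1:      # unary part: count ones up to the 0
            q += 1
            i += 1
        i += 1                   # skip the terminating 0
        r = 0
        for _ in range(k):       # k-bit remainder, most significant bit first
            r = (r << 1) | bits[i]
            i += 1
        values.append((q << k) + r)
    return values

# 19 with k = 4: quotient 1, remainder 3 -> unary "10" + remainder "0011"
print(rice_decode([1, 0, 0, 0, 1, 1], 4))  # [19]
```

Because each codeword's length depends on the data, parallel GPU decompression needs packet boundaries known in advance, which is what the described packet-based approach provides.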

  1. Graphics processing unit-assisted lossless decompression

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.

  2. Process flows for cyber forensic training and operations

    CSIR Research Space (South Africa)

    Venter, JP

    2006-02-01

    Full Text Available In this paper the development and testing of Cyber First Responder Process Flows is discussed. A generic process flow framework is presented and design principles and layout characteristics as well as important points within the process flows...

  3. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tn, Terry; Toups, Larry; Howe, A. Scott; Smitherman, David

    2011-01-01

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU previously served as a test bed for testing technologies and sub-systems in a terrestrial surface environment in 2010, in the Pressurized Excursion Module (PEM) configuration. Due to the amount of work involved in making the HDU project successful, the project has required a team to integrate a variety of contributions from NASA centers and outside collaborators. The size of the team and the number of systems involved with the HDU make integration a complicated process. However, because the HDU shell manufacturing is complete, the team has a head start on FY-11 integration activities and can focus on integrating upgrades to existing systems as well as integrating new additions. To complete the development of the FY-11 HDU from conception to rollout for operations in July 2011, a cohesive integration strategy has been developed to integrate the various systems of the HDU and the payloads. The highlighted HDU work for FY-11 will focus on performing upgrades to the PEM configuration, adding the X-Hab as a second level, adding a new porch providing the astronauts a larger work area outside the HDU for EVA preparations, and adding a hygiene module. Together these upgrades result in a prototype configuration of the Deep Space Habitat (DSH), an element under evaluation by NASA's Human Exploration Framework Team (HEFT). Scheduled activities include early fit-checks and the utilization of a habitat avionics test bed prior to installation into the HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing, sub-system interface length estimation and human factors analysis.
Decision processes on integration and use of all new subsystems will be defined early in the project to

  4. Stability of Armour Units in Flow Through a Layer

    DEFF Research Database (Denmark)

    Burcharth, Hans F.; C. Thompson, Alex

    1984-01-01

    As part of a program to study the hydraulics of wave attack on rubble mound breakwaters, tests were made on model armour units in a steady flow through a layer laid on a slope. The flow angle has little effect on stability for dolosse or rock layers. The head drop at failure across each type of layer is similar, but the dolosse layer is more permeable and fails as a whole. There was no viscous scale effect. These results and earlier tests in oscillating flow suggest a 'reservoir' effect is important in the stability in steep waves.

  5. Disjunctive Information Flow for Communicating Processes

    DEFF Research Database (Denmark)

    Li, Ximeng; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    The security validation of practical computer systems calls for the ability to specify and verify information flow policies that are dependent on data content. Such policies play an important role in concurrent, communicating systems: consider a scenario where messages are sent to different processes according to their tagging. We devise a security type system that enforces content-dependent information flow policies in the presence of communication and concurrency. The type system soundly guarantees a compositional noninterference property. All theoretical results have been formally proved...

  6. Numerical Integration with Graphical Processing Unit for QKD Simulation

    Science.gov (United States)

    2014-03-27

    This research investigates using graphical processing unit (GPU) technology to more efficiently simulate existing and proposed Quantum Key Distribution (QKD) systems. Programming with a GPU requires a different approach than conventional programming for central processing units, here based on the Compute Unified Device Architecture (CUDA) application programming interface.

  7. Flow Characteristics of Solenoid Control Valve During Fuel Stopping Process for the Unit Pump

    Institute of Scientific and Technical Information of China (English)

    仇滔; 雷艳; 彭璟; 李旭初; 李彬

    2013-01-01

    The solenoid valve in the unit pump system (UPS) controls the pressure build-up within the high-pressure fuel line. When it is opened, the high-pressure fuel flows at high velocity from the narrow cross-section into the low-pressure fuel line. During its operation, the flow in the region of the solenoid valve is quite complex and affects the performance of the UPS. This study presents a visualization test method for the solenoid valve of the UPS. For optical observation, a glass window was incorporated into the pump. The bubbles induced by cavitation under various operating conditions were observed with a high-speed camera, and all color images were transformed into gray-scale pictures. Simulation was conducted on the basis of test data such as the pressure and the displacement of the valve core. The test results show that cavitation occurs during the opening process of the control valve, and the simulated cavitation behaviour in the control valve agrees well with the experimental results. The simulation shows that cavitation in the control valve mainly occurs at three positions: the cone angle of the valve port, the region downstream of the cone angle, and the clearance between the valve stop and the core. The reasons for cavitation at these positions are discussed based on the simulated flow and pressure fields.%Opening the control valve of the electronically controlled unit pump connects the high- and low-pressure fuel circuits to achieve fuel cut-off control. During cut-off, a large amount of high-pressure fuel flows at high velocity through the narrow control valve region into the low-pressure circuit; the flow field in this region is very complex and strongly affects the pressure-relief and fuel cut-off characteristics of the unit pump. An optical viewing window was designed into the body of the unit pump, and a high-speed camera was used to obtain flow field images at the control valve outlet at different valve openings during the opening process; the images were then processed. Combining the measured valve displacement with the corresponding transient inlet and outlet pressures, transient flow field simulations of the control valve region were carried out with three-dimensional simulation software. The tests proved that two-phase flow exists in this region during valve opening. The simulations show that...

  8. Estimating overland flow erosion capacity using unit stream power

    Institute of Scientific and Technical Information of China (English)

    Hui-Ming SHIH; Chih Ted YANG

    2009-01-01

    Soil erosion caused by water flow is a complex problem. Both empirical and physically based approaches have been used for the estimation of surface erosion rates, but their applications are mainly limited to experimental areas or laboratory studies. The maximum sediment concentration that overland flow can carry is not considered in most existing surface erosion models, and the lack of an erosion capacity limitation may cause overestimation of sediment concentration. A correlation analysis is used in this study to determine significant factors that impact surface erosion capacity. The result shows that unit stream power is the most dominant factor for overland flow erosion, which is consistent with experimental data. A bounded regression formula is used to reflect the limits that sediment concentration can be neither less than zero nor greater than a maximum value. The coefficients used in the model are calibrated using published laboratory data, and the computed results agree with the laboratory data very well. A one-dimensional overland flow diffusive wave model is used in conjunction with the developed soil erosion equation to simulate field experimental results. This study concludes that the non-linear regression method using unit stream power as the dominant factor performs well for estimating overland flow erosion capacity.
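
    The bounded-regression idea described above can be sketched as follows. The logistic form and the coefficients a, b and c_max are illustrative assumptions, not the calibrated values from the study:

```python
import math

def unit_stream_power(velocity, slope):
    """Yang's unit stream power: mean flow velocity times energy slope."""
    return velocity * slope

def erosion_capacity(vs, c_max=300.0, a=-4.0, b=2.0):
    """Bounded (logistic) regression on unit stream power vs that keeps
    the predicted sediment concentration strictly between 0 and c_max.
    The coefficients here are invented for illustration."""
    return c_max / (1.0 + math.exp(-(a + b * math.log(vs))))

for v, s in [(0.5, 0.01), (1.0, 0.05), (2.0, 0.10)]:
    vs = unit_stream_power(v, s)
    print(f"VS = {vs:.3f} m/s  ->  C = {erosion_capacity(vs):.1f}")
```

Whatever the fitted coefficients, the logistic form guarantees the physical bounds by construction, which is the point the abstract makes.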

  9. Modeling and design of a combined transverse and axial flow threshing unit for rice harvesters

    Directory of Open Access Journals (Sweden)

    Zhong Tang

    2014-11-01

    Full Text Available The thorough investigation of both the grain threshing and grain separating processes is a crucial consideration for effective structural design and variable optimization of the tangential flow threshing cylinder and longitudinal axial flow threshing cylinder composite unit (TLFC unit of small and medium-sized (SME combine harvesters. The objective of this paper was to obtain the structural variables of a TLFC unit by theoretical modeling and experimentation on a tangential flow threshing cylinder unit (TFC unit and a longitudinal axial flow threshing cylinder unit (LFC unit. Threshing and separation equations for five types of threshing teeth (knife bar, trapezoidal tooth, spike tooth, rasp bar, and rectangular bar were obtained using probability theory. Results demonstrate that the threshing and separation capacity of the knife bar TFC unit was stronger than that of the other threshing teeth. The length of the LFC unit was divided into four sections, with helical blades on the first section (0-0.17 m, spike teeth on the second section (0.17-1.48 m, trapezoidal teeth on the third section (1.48-2.91 m, and the discharge plate on the fourth section (2.91-3.35 m. Test results showed an un-threshed grain rate of 0.243%, an un-separated grain rate of 0.346%, and a broken grain rate of 0.184%. As evidenced by these results, threshing and separation performance is significantly improved by analyzing and optimizing the structure and variables of a TLFC unit. The results of this research can be used to successfully design the TLFC unit of small and medium-sized combine harvesters.

  10. Graphics Processing Unit Assisted Thermographic Compositing

    Science.gov (United States)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general-purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general-purpose fashion has been allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.

  11. Accelerating the Fourier split operator method via graphics processing units

    CERN Document Server

    Bauke, Heiko

    2010-01-01

    Current generations of graphics processing units have turned into highly parallel devices with general computing capabilities. Thus, graphics processing units may be utilized, for example, to solve time-dependent partial differential equations by the Fourier split operator method. In this contribution, we demonstrate that graphics processing units are capable of calculating fast Fourier transforms much more efficiently than traditional central processing units. Thus, graphics processing units render efficient implementations of the Fourier split operator method possible. Performance gains of more than an order of magnitude as compared to implementations for traditional central processing units are reached in the solution of the time-dependent Schrödinger equation and the time-dependent Dirac equation.
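
    A minimal 1D sketch of the Fourier split operator method for the free Schrödinger equation (units with hbar = m = 1), using NumPy FFTs in place of GPU kernels; the grid size, time step and Gaussian initial state are arbitrary demo choices:

```python
import numpy as np

# Alternate a half-step of the potential phase with a full kinetic step
# applied in Fourier space, where the kinetic operator is diagonal.
N, L, dt, steps = 256, 40.0, 0.01, 100
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
V = np.zeros_like(x)                         # free particle for this demo

psi = np.exp(-x**2)                          # Gaussian initial state
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))

half_V = np.exp(-0.5j * V * dt)
kinetic = np.exp(-0.5j * k**2 * dt)
for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_V * psi

norm = np.sum(np.abs(psi)**2) * (L / N)
print(round(norm, 6))                        # 1.0: the evolution is unitary
```

The two FFTs per step dominate the cost, which is why the FFT speedup on GPUs carries over directly to the whole method.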

  12. Flow field measurements in the cell culture unit

    Science.gov (United States)

    Walker, Stephen; Wilder, Mike; Dimanlig, Arsenio; Jagger, Justin; Searby, Nancy

    2002-01-01

    The cell culture unit (CCU) is being designed to support cell growth for long-duration life science experiments on the International Space Station (ISS). The CCU is a perfused loop system that provides a fluid environment for controlled cell growth experiments within cell specimen chambers (CSCs), and is intended to accommodate diverse cell specimen types. Many of the functional requirements depend on the fluid flow field within the CSC (e.g., feeding and gas management). A design goal of the CCU is to match, within experimental limits, all environmental conditions, other than the effects of gravity on the cells, whether the hardware is in microgravity (µg), normal Earth gravity, or up to 2g on the ISS centrifuge. In order to achieve this goal, two steps are being taken. The first step is to characterize the environmental conditions of current 1g cell biology experiments being performed in laboratories using ground-based hardware. The second step is to ensure that the design of the CCU allows the fluid flow conditions found in 1g to be replicated from microgravity up to 2g. The techniques that are being used to take these steps include flow visualization, particle image velocimetry (PIV), and computational fluid dynamics (CFD). Flow visualization using the injection of dye has been used to gain a global perspective of the characteristics of the CSC flow field. To characterize laboratory cell culture conditions, PIV is being used to determine the flow field parameters of cell suspension cultures grown in Erlenmeyer flasks on orbital shakers. These measured parameters will be compared to PIV measurements in the CSCs to ensure that the flow field that cells encounter in CSCs is within the bounds determined for typical laboratory experiments. Using CFD, a detailed simulation is being developed to predict the flow field within the CSC for a wide variety of flow conditions, including microgravity environments.
Results from all these measurements and analyses of the

  13. Visualizing Flow of Uncertainty through Analytical Processes.

    Science.gov (United States)

    Wu, Yingcai; Yuan, Guo-Xun; Ma, Kwan-Liu

    2012-12-01

    Uncertainty can arise in any stage of a visual analytics process, especially in data-intensive applications with a sequence of data transformations. Additionally, throughout the process of multidimensional, multivariate data analysis, uncertainty due to data transformation and integration may split, merge, increase, or decrease. This dynamic characteristic along with other features of uncertainty pose a great challenge to effective uncertainty-aware visualization. This paper presents a new framework for modeling uncertainty and characterizing the evolution of the uncertainty information through analytical processes. Based on the framework, we have designed a visual metaphor called uncertainty flow to visually and intuitively summarize how uncertainty information propagates over the whole analysis pipeline. Our system allows analysts to interact with and analyze the uncertainty information at different levels of detail. Three experiments were conducted to demonstrate the effectiveness and intuitiveness of our design.

  14. Magnetohydrodynamics simulations on graphics processing units

    CERN Document Server

    Wong, Hon-Cheng; Feng, Xueshang; Tang, Zesheng

    2009-01-01

    Magnetohydrodynamics (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive, and Beowulf clusters or even supercomputers are often used to run the codes that implement these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the authors' knowledge, the first implementation to accelerate computation of MHD simulations on GPUs. Numerical tests have been performed to validate the correctness of our GPU MHD code. Performance measurements show that our GPU-based implementation achieves speedups of 2 (1D problem with 2048 grids), 106 (2D problem with 1024^2 grids), and 43 (3D problem with 128^3 grids), respectively.

  15. Graphics Processing Units for HEP trigger systems

    Science.gov (United States)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  16. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko; Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution by assigning the calculations for equally spaced node points to each scalar processor in the GPU. The numbers of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
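
    The per-node independence that makes this calculation map well onto GPU threads can be shown with a small NumPy sketch of 2D Gaussian kernel density estimation; the bandwidth, sample count and node locations are illustrative:

```python
import numpy as np

def gaussian_kde_grid(samples, nodes, h):
    """Evaluate a 2D Gaussian kernel density estimate at grid nodes.
    Each node gets the average of kernels centered on all samples;
    nodes are independent of each other, so one GPU thread per node works."""
    d2 = ((nodes[:, None, :] - samples[None, :, :]) ** 2).sum(axis=2)
    kernels = np.exp(-d2 / (2 * h * h)) / (2 * np.pi * h * h)
    return kernels.mean(axis=1)

rng = np.random.default_rng(0)
samples = rng.standard_normal((1000, 2))        # bivariate normal particles
nodes = np.array([[0.0, 0.0], [2.0, 2.0]])      # equally spaced in practice
density = gaussian_kde_grid(samples, nodes, h=0.3)
print(density[0] > density[1])                   # True: denser at the mode
```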

  17. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  18. Energy Efficient Iris Recognition With Graphics Processing Units

    National Research Council Canada - National Science Library

    Rakvic, Ryan; Broussard, Randy; Ngo, Hau

    2016-01-01

    In the past few years, however, this growth has slowed for central processing units (CPUs). Instead, there has been a shift to multicore computing, specifically with general-purpose graphics processing units (GPUs)...

  19. Numerical simulation of the flow around a steerable propulsion unit

    Energy Technology Data Exchange (ETDEWEB)

    Pacuraru, F; Lungu, A; Ungureanu, C; Marcu, O, E-mail: florin.pacuraru@ugal.r [Department of Ship Hydrodynamics, ' Dunarea de Jos' University of Galati 47 Domneasca Street, Galati 800008 (Romania)

    2010-08-15

    Azimuth propulsion units have become during the last decade a more and more popular solution for all kinds of vessels. Azimuth thruster system, combining the propulsion and steering units of conventional ships replaces traditional propellers and lengthy drive shafts and rudders ensuring an excellent vessel steering. In many cases the interaction between the propeller and other components of the propulsion system strongly affects the inflow to the propeller and therefore its performance. The correct estimation of this influence is important for propulsion systems which consist of more than one element, such as pods (shaft, gondola and propeller), ducted propellers (duct, struts and propeller) or bow thrusters (ship form, tunnel, gondola and propeller). The paper proposes a numerical investigation based on RANS computation for solving the viscous flow around an azimuth thruster system to provide a detailed insight into the critical flow regions for determining the optimum inclination angle for struts, for studying the hydrodynamic interactions between various components of the system, for predicting the hydrodynamic performance of the propulsion system and to investigate regions with possible flow separations.

  20. Utilization of milli-scale coiled flow inverter in combination with phase separator for continuous flow liquid-liquid extraction processes

    NARCIS (Netherlands)

    Vural Gürsel, Iris; Kurt, Safa Kutup; Aalders, Jasper; Wang, Qi; Noël, Timothy; Nigam, Krishna D P; Kockmann, Norbert; Hessel, Volker

    2016-01-01

    Process-design intensification situated under the umbrella of Novel Process Windows heads for process integration and here most development is needed for flow separators. The vision is to achieve multi-step synthesis in flow on pilot scale. This calls for scale-up of separation units. This study is

  1. Hydrogeologic unit flow characterization using transition probability geostatistics.

    Science.gov (United States)

    Jones, Norman L; Walker, Justin R; Carle, Steven F

    2005-01-01

    This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has some advantages over traditional indicator kriging methods including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining upward sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids and/or grids with nonuniform cell thicknesses.
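
    The core of transition probability simulation, stripped of data conditioning and 3D geometry, is a Markov chain over facies categories. A minimal vertical-column sketch; the category names and transition matrix values are invented for illustration:

```python
import numpy as np

# Rows are the current facies, columns the facies of the next cell up.
T = np.array([[0.7, 0.2, 0.1],    # sand -> sand / silt / clay
              [0.3, 0.5, 0.2],    # silt
              [0.1, 0.3, 0.6]])   # clay

rng = np.random.default_rng(42)
column = [0]                       # start the column in sand
for _ in range(200):
    column.append(rng.choice(3, p=T[column[-1]]))

counts = np.bincount(column, minlength=3)
print(counts.sum())                # 201 simulated cells
```

Diagonal entries control mean unit thickness, and off-diagonal asymmetries encode juxtapositional tendencies such as fining-upward sequences, which is the interpretive advantage the abstract describes.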

  2. Nitrocarburizing treatments using flowing afterglow processes

    Science.gov (United States)

    Jaoul, C.; Belmonte, T.; Czerwiec, T.; David, N.

    2006-09-01

    Nitrocarburizing of pure iron samples is achieved at 853 K and is easily controlled by introducing C3H8 in the afterglow of a flowing microwave Ar-N2-H2 plasma. Carbon uptake in the solid is possible with methane but strongly limited. The use of propane enhances the carbon flux, and the ε/α configuration is synthesized for the first time by this kind of process. For this stack, diffusion paths in the ternary system, determined from chemical analyses by secondary neutral mass spectrometry, satisfactorily reproduce the X-ray diffraction results, which, like the optical micrographs, reveal only ε and α phases. Propane offers accurate control of the nitrocarburizing conditions. As an example, a modulation of the N and C contents in iron could be achieved to create new carbonitride multilayers.

  3. Nitrocarburizing treatments using flowing afterglow processes

    Energy Technology Data Exchange (ETDEWEB)

    Jaoul, C. [Laboratoire de Science et Genie des Surfaces (UMR CNRS 7570), Ecole des Mines, Parc de Saurupt, 54042 Nancy Cedex (France); Belmonte, T. [Laboratoire de Science et Genie des Surfaces (UMR CNRS 7570), Ecole des Mines, Parc de Saurupt, 54042 Nancy Cedex (France)]. E-mail: Thierry.Belmonte@mines.inpl-nancy.fr; Czerwiec, T. [Laboratoire de Science et Genie des Surfaces (UMR CNRS 7570), Ecole des Mines, Parc de Saurupt, 54042 Nancy Cedex (France); David, N. [Laboratoire de Chimie du Solide Mineral, Universite Henri Poincare Nancy-I, Vandoeuvre-Les-Nancy (France)

    2006-09-30

    Nitrocarburizing of pure iron samples is achieved at 853 K and is easily controlled by introducing C3H8 in the afterglow of a flowing microwave Ar-N2-H2 plasma. The carbon uptake in the solid is actually possible with methane but strongly limited. The use of propane enhances the carbon flux and the ε/α configuration is synthesized for the first time by this kind of process. For this stack, diffusion paths in the ternary system determined from chemical analyses by secondary neutral mass spectrometry reproduce satisfactorily X-ray diffraction results which only reveal, as optical micrographs, ε and α phases. Propane offers an accurate control of the nitrocarburizing conditions. As an example, a modulation of N and C contents in iron could be achieved to create new carbonitride multilayers.

  4. Active microchannel fluid processing unit and method of making

    Science.gov (United States)

    Bennett, Wendy D [Kennewick, WA; Martin, Peter M [Kennewick, WA; Matson, Dean W [Kennewick, WA; Roberts, Gary L [West Richland, WA; Stewart, Donald C [Richland, WA; Tonkovich, Annalee Y [Pasco, WA; Zilka, Jennifer L [Pasco, WA; Schmitt, Stephen C [Dublin, OH; Werner, Timothy M [Columbus, OH

    2001-01-01

    The present invention is an active microchannel fluid processing unit and method of making, both relying on having (a) at least one inner thin sheet; (b) at least one outer thin sheet; (c) defining at least one first sub-assembly for performing at least one first unit operation by stacking a first of the at least one inner thin sheet in alternating contact with a first of the at least one outer thin sheet into a first stack and placing an end block on the at least one inner thin sheet, the at least one first sub-assembly having at least a first inlet and a first outlet; and (d) defining at least one second sub-assembly for performing at least one second unit operation either as a second flow path within the first stack or by stacking a second of the at least one inner thin sheet in alternating contact with second of the at least one outer thin sheet as a second stack, the at least one second sub-assembly having at least a second inlet and a second outlet.

  5. Use of general purpose graphics processing units with MODFLOW.

    Science.gov (United States)

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
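
    The iteration at the heart of such a solver can be sketched with the simplest of the listed preconditioners, Jacobi (diagonal scaling). The actual UPCG solver operates on CSR-stored sparse matrices and offloads each vector operation to the GPGPU; this dense NumPy stand-in only shows the algorithm, with an invented test system:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    """Conjugate gradient with a Jacobi (inverse-diagonal) preconditioner.
    On a GPGPU, each matrix-vector product and dot product below would
    run as a kernel, with the vectors resident in device memory."""
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r
    p = z.copy()
    for i in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x, i + 1
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, max_iter

# Symmetric positive definite test system (1D diffusion stencil).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = jacobi_pcg(A, b)
```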

  6. FlowCal: A User-Friendly, Open Source Software Tool for Automatically Converting Flow Cytometry Data from Arbitrary to Calibrated Units.

    Science.gov (United States)

    Castillo-Hair, Sebastian M; Sexton, John T; Landry, Brian P; Olson, Evan J; Igoshin, Oleg A; Tabor, Jeffrey J

    2016-07-15

    Flow cytometry is widely used to measure gene expression and other molecular biological processes with single cell resolution via fluorescent probes. Flow cytometers output data in arbitrary units (a.u.) that vary with the probe, instrument, and settings. Arbitrary units can be converted to the calibrated unit molecules of equivalent fluorophore (MEF) using commercially available calibration particles. However, there is no convenient, nonproprietary tool available to perform this calibration. Consequently, most researchers report data in a.u., limiting interpretation. Here, we report a software tool named FlowCal to overcome current limitations. FlowCal can be run using an intuitive Microsoft Excel interface, or customizable Python scripts. The software accepts Flow Cytometry Standard (FCS) files as inputs and is compatible with different calibration particles, fluorescent probes, and cell types. Additionally, FlowCal automatically gates data, calculates common statistics, and produces publication quality plots. We validate FlowCal by calibrating a.u. measurements of E. coli expressing superfolder GFP (sfGFP) collected at 10 different detector sensitivity (gain) settings to a single MEF value. Additionally, we reduce day-to-day variability in replicate E. coli sfGFP expression measurements due to instrument drift by 33%, and calibrate S. cerevisiae Venus expression data to MEF units. Finally, we demonstrate a simple method for using FlowCal to calibrate fluorescence units across different cytometers. FlowCal should ease the quantitative analysis of flow cytometry data within and across laboratories and facilitate the adoption of standard fluorescence units in synthetic biology and beyond.
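
    The bead-based calibration idea can be sketched as follows. The bead intensities and MEF values below are invented for illustration, and FlowCal's actual fitting procedure may differ; a log-log standard curve is one common choice because detector response is approximately a power law of the reading:

```python
import numpy as np

# Known MEF values for a hypothetical 6-peak calibration bead set.
bead_mef = np.array([792, 2079, 6588, 16471, 47497, 137049], dtype=float)
# Median a.u. measured for each bead peak at this instrument gain (made up).
bead_au = np.array([24.1, 61.8, 193.5, 481.0, 1392.0, 4023.0])

# Fit a straight line in log-log space to obtain the standard curve.
slope, intercept = np.polyfit(np.log10(bead_au), np.log10(bead_mef), 1)

def au_to_mef(au):
    """Convert arbitrary-unit readings to MEF with the fitted standard curve."""
    return 10 ** (intercept + slope * np.log10(au))

# Calibrate some hypothetical cell measurements.
cells_au = np.array([30.0, 300.0, 3000.0])
cells_mef = au_to_mef(cells_au)
```

Because the curve is refitted from beads run at each gain setting, readings taken at different sensitivities map onto a common MEF scale, which is what lets the authors collapse ten gain settings to a single value.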

  7. Influence of Processing Parameters on the Flow Path in Friction Stir Welding

    Science.gov (United States)

    Schneider, J. A.; Nunes, A. C., Jr.

    2006-01-01

    Friction stir welding (FSW) is a solid-phase welding process that unites thermal and mechanical aspects to produce a high quality joint. The process variables are rpm, translational weld speed, and downward plunge force. The strain-temperature history of a metal element at each point on the cross-section of the weld is determined by the individual flow path taken by the particular filament of metal flowing around the tool, as influenced by the process variables. The resulting properties of the weld are determined by this strain-temperature history. Thus, to control FSW properties, an improved understanding of the effect of the processing parameters on the metal flow path is necessary.

  8. Fluid flow for chemical and process engineers

    CERN Document Server

    Holland, F

    1995-01-01

    This major new edition of a popular undergraduate text covers topics of interest to chemical engineers taking courses on fluid flow. These topics include non-Newtonian flow, gas-liquid two-phase flow, pumping and mixing. It expands on the explanations of principles given in the first edition and is more self-contained. Two strong features of the first edition were the extensive derivation of equations and worked examples to illustrate calculation procedures. These have been retained. A new extended introductory chapter has been provided to give the student a thorough basis to understand the methods covered in subsequent chapters.

  9. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip;

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent...

  10. Classification and Denomination of Flow Units for Clastic Reservoirs of Continental Deposit

    Institute of Scientific and Technical Information of China (English)

    CHANG Xue-jun; TANG Yue-gang; HAO Jian-ming; ZHANG Kai; ZHENG Jia-peng

    2004-01-01

    On the basis of other researchers' achievements and the authors' understanding of flow units, a proposal on the classification and denomination of flow units for clastic reservoirs of continental deposits is put forward according to the practical needs of oilfield development and relevant theories. The specific development and geological implications are given for each type of flow unit, providing a scientific basis for oilfield development.

  11. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at the Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  12. Similarities in basalt and rhyolite lava flow emplacement processes

    Science.gov (United States)

    Magnall, Nathan; James, Mike; Tuffen, Hugh; Vye-Brown, Charlotte

    2016-04-01

    Here we use field observations of rhyolite and basalt lava flows to show similarities in flow processes that span compositionally diverse lavas. The eruption, and subsequent emplacement, of rhyolite lava flows is currently poorly understood because rhyolite eruptions occur so infrequently. In contrast, the emplacement of basaltic lava flows is much better understood, owing to very frequent eruptions at locations such as Mt Etna and Hawaii. The 2011-2012 eruption of Cordón Caulle in Chile enabled the first scientific observations of the emplacement of an extensive rhyolite lava flow. The 30 to 100 m thick flow infilled a topographic depression with a negligible slope angle (0 - 7°). The flow split into two main channels; the southern flow advanced 4 km while the northern flow advanced 3 km before stalling. Once the flow stalled, the channels inflated and secondary flows, or breakouts, formed from the flow front and margins. This cooling-limited rather than volume-limited flow behaviour is common in basaltic lava flows but had never been observed in rhyolite lava flows. We draw on fieldwork conducted at Cordón Caulle and at Mt Etna to compare the emplacement of rhyolite and basaltic flows. The fieldwork identified emplacement features that are present in both lavas, such as inflation, breakouts from the flow front and margins, and squeeze-ups on the flow surfaces. At Cordón Caulle, a breakout inflates upon extrusion due to a combination of continued lava supply and vesicle growth. This growth leads to fracturing and breakup of the breakout surface, and in some cases a large central fracture tens of metres deep forms. In contrast, breakouts from basaltic lava flows have a greater range of morphologies depending on the properties of the material in the flow's core. At Mt Etna, a range of breakout morphologies is observed, including toothpaste breakouts, flows topped with bladed lava, and breakouts of pahoehoe or a'a lava. This

  13. A process model for work-flow management in construction

    OpenAIRE

    Jongeling, Rogier

    2006-01-01

    This thesis describes a novel approach for management of work-flow in construction, based on the combination of location-based scheduling and modelling with 4D CAD. Construction planners need to carefully design a process that ensures a continuous and reliable flow of resources through different locations in a project. The flow of resources through locations, termed work-flow, and the resultant ability to control the hand-over between both locations and crews, greatly empowers the management ...

  14. Optimized Technology for Residuum Processing in the ARGG Unit

    Institute of Scientific and Technical Information of China (English)

    Pan Luoqi; Yuan hongxing; Nie Baiqiu

    2006-01-01

    The influence of feedstock properties on the operation of the FCC unit was studied to identify the cause of the deteriorated product distribution associated with the increasingly heavy feedstock of the ARGG unit. In order to maximize the economic benefits of the ARGG unit, a string of measures, including modification of the catalyst formulation, retention of high catalyst activity, application of mixed termination agents to control the reaction temperature and once-through operation, and optimization of the catalyst regeneration technique, was adopted to adapt the ARGG unit to processing heavy feedstock with a carbon residue averaging 7%. The heavy oil processing technology has brought about apparent economic benefits.

  15. Features, Events, and Processes in SZ Flow and Transport

    Energy Technology Data Exchange (ETDEWEB)

    K. Economy

    2004-11-16

    This analysis report evaluates and documents the inclusion or exclusion of the saturated zone (SZ) features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment (TSPA) for license application (LA) of a nuclear waste repository at Yucca Mountain, Nevada. A screening decision, either "Included" or "Excluded", is given for each FEP along with the technical basis for the decision. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.114 (d), (e), (f) (DIRS 156605). This scientific report focuses on FEP analysis of flow and transport issues relevant to the SZ (e.g., fracture flow in volcanic units, anisotropy, radionuclide transport on colloids, etc.) to be considered in the TSPA model for the LA. For included FEPs, this analysis summarizes the implementation of the FEP in TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded).

  16. Features, Events, and Processes in SZ Flow and Transport

    Energy Technology Data Exchange (ETDEWEB)

    S. Kuzio

    2005-08-20

    This analysis report evaluates and documents the inclusion or exclusion of the saturated zone (SZ) features, events, and processes (FEPs) with respect to modeling used to support the total system performance assessment (TSPA) for license application (LA) of a nuclear waste repository at Yucca Mountain, Nevada. A screening decision, either Included or Excluded, is given for each FEP along with the technical basis for the decision. This information is required by the U.S. Nuclear Regulatory Commission (NRC) at 10 CFR 63.11(d), (e), (f) [DIRS 173273]. This scientific report focuses on FEP analysis of flow and transport issues relevant to the SZ (e.g., fracture flow in volcanic units, anisotropy, radionuclide transport on colloids, etc.) to be considered in the TSPA model for the LA. For included FEPs, this analysis summarizes the implementation of the FEP in TSPA-LA (i.e., how the FEP is included). For excluded FEPs, this analysis provides the technical basis for exclusion from TSPA-LA (i.e., why the FEP is excluded).

  17. State Space Reduction of Linear Processes Using Control Flow Reconstruction

    NARCIS (Netherlands)

    Pol, van de Jaco; Timmer, Mark; Liu, Z.; Ravn, A.P.

    2009-01-01

    We present a new method for fighting the state space explosion of process algebraic specifications, by performing static analysis on an intermediate format: linear process equations (LPEs). Our method consists of two steps: (1) we reconstruct the LPE's control flow, detecting control flow parameters

  18. State Space Reduction of Linear Processes using Control Flow Reconstruction

    NARCIS (Netherlands)

    Pol, van de Jaco; Timmer, Mark

    2009-01-01

    We present a new method for fighting the state space explosion of process algebraic specifications, by performing static analysis on an intermediate format: linear process equations (LPEs). Our method consists of two steps: (1) we reconstruct the LPE's control flow, detecting control flow parameters

  19. Interpretation of the exergy equation for steady-flow processes

    NARCIS (Netherlands)

    Siemons, Roland V.

    1986-01-01

    We define and discuss the terms in exergy equations, with particular reference to the role of chemical terms in the exergy loss for steady-flow processes. Although there is a chemical contribution to exergy, exergy losses of steady-flow processes may be calculated by using a simple expression for th

  20. Improving emergency department flow through Rapid Medical Evaluation unit.

    Science.gov (United States)

    Chartier, Lucas; Josephson, Timothy; Bates, Kathy; Kuipers, Meredith

    2015-01-01

    The Toronto Western Hospital is an academic hospital in Toronto, Canada, with an annual Emergency Department (ED) volume of 64,000 patients. Despite increases in patient volumes of almost six percent per annum over the last decade, there have been no commensurate increases in resources, infrastructure, and staffing. This has led to a substantial increase in patient wait times, most notably for patients with lower-acuity presentations. Despite requiring only minimal care, these patients contribute disproportionately to ED congestion, which can adversely impact resource utilization and quality of care for all patients. We undertook a retrospective evaluation of a quality improvement initiative aimed at reducing the wait times experienced by patients with lower-acuity presentations. A rapid improvement event was organized by frontline workers to overhaul processes of care, leading to the creation of the Rapid Medical Evaluation (RME) unit - a new pathway of care for patients with lower-acuity presentations. The RME unit was designed by re-purposing existing resources and re-assigning one physician and one nurse to the specific care of these patients. We evaluated the performance of the RME unit by measuring physician initial assessment (PIA) times and total length of stay (LOS) times for multiple groups of patients assigned to various ED care pathways, during three periods lasting three months each. Weekly measurements of the mean and 90th percentile of PIA and LOS times showed special cause variation in all targeted patient groups. Notably, the patients seen in the RME unit saw their median PIA and LOS times decrease from 98 min to 70 min and from 165 min to 130 min, respectively, from baseline. Despite ever-growing numbers of patient visits, wait times for all patients with lower-acuity presentations remained low, and wait times of patients with higher-acuity presentations assigned to other ED care pathways were not adversely affected. By

  1. Progress in modeling of fluid flows in crystal growth processes

    Institute of Scientific and Technical Information of China (English)

    Qisheng Chen; Yanni Jiang; Junyi Yan; Ming Qin

    2008-01-01

    Modeling of fluid flows in crystal growth processes has become an important research area in theoretical and applied mechanics. Most crystal growth processes involve fluid flows, such as flows in the melt, solution or vapor. Theoretical modeling has played an important role in developing technologies used for growing semiconductor crystals for high performance electronic and optoelectronic devices. The application of devices requires large diameter crystals with a high degree of crystallographic perfection, low defect density and uniform dopant distribution. In this article, the flow models developed in modeling of crystal growth processes such as the Czochralski, ammonothermal and physical vapor transport methods are reviewed. In the Czochralski growth modeling, the flow models for thermocapillary flow, turbulent flow and MHD flow have been developed. In the ammonothermal growth modeling, the buoyancy and porous media flow models have been developed based on a single-domain and continuum approach for the composite fluid-porous layer systems. In the physical vapor transport growth modeling, the Stefan flow model has been proposed based on the flow-kinetics theory for the vapor growth. In addition, perspectives for future studies on crystal growth modeling are proposed.

  2. Flow units from integrated WFT and NMR data

    Energy Technology Data Exchange (ETDEWEB)

    Kasap, E.; Altunbay, M.; Georgi, D.

    1997-08-01

    Reliable and continuous permeability profiles are vital as both hard and soft data required for delineating reservoir architecture. They can improve the vertical resolution of seismic data, well-to-well stratigraphic correlations, and kriging between the well locations. In conditional simulations, permeability profiles are imposed as the conditioning data. Variograms, covariance functions and other geostatistical indicators are more reliable when based on good quality permeability data. Nuclear Magnetic Resonance (NMR) logging and Wireline Formation Tests (WFT) separately generate a wealth of information, and their synthesis extends the value of this information further by providing continuous and accurate permeability profiles without increasing the cost. NMR and WFT data present a unique combination because WFTs provide discrete, in situ permeability based on fluid-flow, whilst NMR responds to the fluids in the pore space and yields effective porosity, pore-size distribution, bound and moveable fluid saturations, and permeability. The NMR permeability is derived from the T2-distribution data. Several equations have been proposed to transform T2 data to permeability. Regardless of the transform model used, the NMR-derived permeabilities depend on interpretation parameters that may be rock specific. The objective of this study is to integrate WFT permeabilities with NMR-derived, T2 distribution-based permeabilities and thereby arrive at core quality, continuously measured permeability profiles. We outlined the procedures to integrate NMR and WFT data and applied the procedure to a field case. Finally, this study advocates the use of hydraulic unit concepts to extend the WFT-NMR derived, core quality permeabilities to uncored intervals or uncored wells.
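
    As an illustration of a T2-to-permeability transform, the sketch below uses the common SDR form k = C * phi^4 * T2lm^2, where T2lm is the log-mean of the T2 distribution and C is the kind of rock-specific constant that would be calibrated against WFT permeabilities. The distribution shape and constant here are invented:

```python
import numpy as np

def sdr_permeability(phi, t2_lm, c=4.0):
    """SDR-type transform: k [mD] = c * phi^4 * T2lm^2, with T2lm in ms.
    The prefactor c is rock specific and would be tuned against
    in situ (e.g. WFT) or core permeability."""
    return c * phi**4 * t2_lm**2

# Hypothetical T2 distribution: amplitudes over log-spaced relaxation times.
t2_bins = np.logspace(0, 3, 30)                  # 1 to 1000 ms
amps = np.exp(-0.5 * ((np.log10(t2_bins) - 2.0) / 0.3) ** 2)

# Log-mean of the distribution (amplitude-weighted mean of log T2).
t2_lm = 10 ** (np.sum(amps * np.log10(t2_bins)) / np.sum(amps))

k = sdr_permeability(phi=0.25, t2_lm=t2_lm)
```

Calibrating `c` at the depths where WFT measurements exist, then applying the transform along the whole logged interval, is one way to realize the continuous "core quality" profile the abstract describes.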

  3. Parallelization of heterogeneous reactor calculations on a graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Malofeev, V. M., E-mail: vm-malofeev@mail.ru; Pal’shin, V. A. [National Research Center Kurchatov Institute (Russian Federation)

    2016-12-15

    Parallelization is applied to the neutron calculations performed by the heterogeneous method on a graphics processing unit. The parallel algorithm of the modified TREC code is described. The efficiency of the parallel algorithm is evaluated.

  4. Diffusion tensor fiber tracking on graphics processing units.

    Science.gov (United States)

    Mittmann, Adiel; Comunello, Eros; von Wangenheim, Aldo

    2008-10-01

    Diffusion tensor magnetic resonance imaging has been successfully applied to the process of fiber tracking, which determines the location of fiber bundles within the human brain. This process, however, can be quite lengthy when run on a regular workstation. We present a means of executing this process by making use of the graphics processing units of computers' video cards, which provide a low-cost parallel execution environment that algorithms like fiber tracking can benefit from. With this method we have achieved performance gains varying from 14 to 40 times on common computers. Because of accuracy issues inherent to current graphics processing units, we define a variation index in order to assess how close the results obtained with our method are to those generated by programs running on the central processing units of computers. This index shows that results produced by our method are acceptable when compared to those of traditional programs.

  5. Flow-units verification, using statistical zonation and application of Stratigraphic Modified Lorenz Plot in Tabnak

    Directory of Open Access Journals (Sweden)

    Seyed Kourosh Mahjour

    2016-06-01

    The relationship between the two main reservoir parameters, porosity and permeability, is very complex and obscure in carbonate rocks. To get a better understanding of flow behavior, the porosity-permeability relationship of the reservoir units, reservoir zonation and flow units were defined. The significance of dividing the sedimentary intervals into flow units is that they reflect groups of rocks with similar geologic and physical properties and depositional environment that affect fluid flow. Variations in rock properties result from depositional, diagenetic and post-depositional changes. A flow unit is a volume of reservoir rock that is continuous laterally and vertically and has similar averages of those rock properties that affect fluid flow. Different methods exist for the zonation of reservoirs based on petrophysical data and well logs, among them the permeability-porosity cross plot and the Pickett and the Soder and Gill methods. In this study, the flow units in the Tabnak gas field in the south of Iran are determined using the Testerman Zonation Technique and the Stratigraphic Modified Lorenz Plot (SMLP). To determine these units, petrophysical data are combined and core porosity and permeability are compared for verification in three wells. Comparing the flow units derived from the two methods shows that they correlate relatively well in permeable zones.
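
    The SMLP construction itself is straightforward: cumulative flow capacity (k·h) is plotted against cumulative storage capacity (φ·h) in stratigraphic order, and slope breaks mark flow-unit boundaries. A minimal sketch with invented core-plug data (the flow zone indicator per plug is included as a common companion diagnostic):

```python
import numpy as np

# Hypothetical core plugs in stratigraphic (top-down) order:
# interval thickness h [m], porosity phi [-], permeability k [mD].
h   = np.array([1.0, 1.5, 0.5, 2.0, 1.0])
phi = np.array([0.22, 0.18, 0.05, 0.25, 0.12])
k   = np.array([250.0, 80.0, 0.5, 600.0, 10.0])

# SMLP coordinates: cumulative flow capacity vs cumulative storage
# capacity, both normalised to 1; slope changes mark unit boundaries.
flow_cap = np.cumsum(k * h) / np.sum(k * h)
storage_cap = np.cumsum(phi * h) / np.sum(phi * h)

# Flow zone indicator per plug (k in mD, phi as a fraction);
# plugs with similar FZI belong to the same hydraulic flow unit.
fzi = 0.0314 * np.sqrt(k / phi) / (phi / (1 - phi))
```

Plotting `storage_cap` on the x-axis against `flow_cap` on the y-axis gives the SMLP; the steep segments (here the high-k plugs) are "speed zones" and the flat segments baffles.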

  6. Business Process Compliance through Reusable Units of Compliant Processes

    NARCIS (Netherlands)

    Shumm, D.; Turetken, O.; Kokash, N.; Elgammal, A.; Leymann, F.; Heuvel, J. van den

    2010-01-01

    Compliance management is essential for ensuring that organizational business processes and supporting information systems are in accordance with a set of prescribed requirements originating from laws, regulations, and various legislative or technical documents such as Sarbanes-Oxley Act or ISO 17799

  7. RILL EROSION PROCESS AND RILL FLOW HYDRAULIC PARAMETERS

    Institute of Scientific and Technical Information of China (English)

    Fen-li ZHENG; Pei-qing XIAO; Xue-tian GAO

    2004-01-01

    In the rill erosion process, run-on water and sediment from upslope areas and rill flow hydraulic parameters have significant effects on sediment detachment and transport. However, there is a lack of data quantifying the effects of run-on water and sediment and of rill flow hydraulic parameters on the rill erosion process at steep hillslopes, especially in the Loess Plateau of China. A dual-box system, consisting of a 2-m-long feeder box and a 5-m-long test box with a 26.8% slope gradient, was used to quantify the effects of upslope runoff and sediment, and of rill flow hydraulic parameters, on the rill erosion process. The results showed that detachment-transport dominated the rill erosion process; upslope runoff always caused net rill detachment in the downslope rill flow channel, and the net rill detachment caused by upslope runoff increased with a decrease of the sediment concentration of the runoff from the feeder box or an increase of rainfall intensity. Upslope runoff discharging into the rill flow channel, or an increase of rainfall intensity, caused the rill flow to shift from laminar to turbulent. Upslope runoff had an important effect on rill flow hydraulic parameters such as rill flow velocity, hydraulic radius, Reynolds number, Froude number and the Darcy-Weisbach resistance coefficient. The net rill detachment caused by upslope runoff increased as the relative increments of rill flow velocity, Reynolds number and Froude number caused by upslope runoff increased. In contrast, the net rill detachment decreased with an increase of the relative decrement of the Darcy-Weisbach resistance coefficient caused by upslope runoff. These findings will help to improve the understanding of the effects of run-on water and sediment on the erosion process and to find control strategies that minimize the impact of run-on water.
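
    The hydraulic parameters listed above follow from mean velocity, hydraulic radius, and slope by standard definitions; a short sketch with illustrative values (the kinematic viscosity assumes clear water, which sediment-laden rill flow only approximates):

```python
import math

def rill_hydraulics(v, R, slope, nu=1.0e-6, g=9.81):
    """Hydraulic parameters of rill flow from mean velocity v [m/s],
    hydraulic radius R [m], and energy slope [m/m].
    nu is the kinematic viscosity [m^2/s] (clear water assumed)."""
    Re = v * R / nu                   # Reynolds number
    Fr = v / math.sqrt(g * R)         # Froude number
    f = 8 * g * R * slope / v**2      # Darcy-Weisbach resistance coefficient
    return Re, Fr, f

# Illustrative shallow, fast rill flow on a 26.8% slope.
Re, Fr, f = rill_hydraulics(v=0.5, R=0.004, slope=0.268)
```

With these illustrative numbers the flow sits near the laminar-turbulent transition (Re around 2000) and is supercritical (Fr > 1), the regime in which upslope runoff was observed to tip the flow into turbulence.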

  8. Recharge and flow processes in a till aquitard

    DEFF Research Database (Denmark)

    Schrøder, Thomas Morville; Høgh Jensen, Karsten; Dahl, Mette

    1999-01-01

    Eastern Denmark is primarily covered by clay till. The transformation of the excess rainfall into laterally diverted groundwater flow, drain flow, stream flow, and recharge to the underlying aquifer is governed by complicated interrelated processes. Distributed hydrological models provide a framework for assessing the individual flow components and for establishing the overall water balance. Traditionally such models are calibrated against measurements of stream flow, head in the aquifer and perhaps drainage flow. The head in the near-surface clay till deposits has generally not been measured ... the shallow wells and one in the valley adjacent to the stream. Precipitation and stream flow gauging along with potential evaporation estimates from a nearby weather station provide the basic data for the overall water balance assessment. The geological composition was determined from geoelectrical surveys...

  9. Simulation on flow process of filtered molten metals

    Institute of Scientific and Technical Information of China (English)

    房文斌; 耿耀宏; 魏尊杰; 安阁英; 叶荣茂

    2002-01-01

    The filtration and flow process of molten metals was analyzed by water simulation experiments. The fluid dynamic behaviour of molten metal flowing through a foam ceramic filter was described and calculated with the Ergun equation. The results show that the filter is most useful for stabilizing molten metal flow and that the filtered flow is laminar, so that inclusions can be removed more effectively.
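
    Assuming the pressure drop across the foam ceramic filter is computed with the Ergun relation for flow through porous media, a minimal sketch reads as follows (fluid properties are those of water, matching the simulation experiments; the filter geometry is invented):

```python
def ergun_dp(L, v, eps, dp, rho=1000.0, mu=1.0e-3):
    """Ergun equation: pressure drop [Pa] across a porous bed of length L [m]
    at superficial velocity v [m/s], void fraction eps, and equivalent
    particle/strut diameter dp [m]. Defaults model water, the simulation fluid."""
    viscous = 150 * mu * (1 - eps) ** 2 * v / (eps**3 * dp**2)   # laminar term
    inertial = 1.75 * rho * (1 - eps) * v**2 / (eps**3 * dp)     # inertial term
    return L * (viscous + inertial)

# Illustrative 20 mm thick foam filter, 85% porosity, 1 mm equivalent
# strut diameter, 0.1 m/s superficial velocity.
dP = ergun_dp(L=0.02, v=0.1, eps=0.85, dp=1.0e-3)
```

At low enough velocity the inertial term vanishes and the relation reduces to a Darcy-type law, consistent with the laminar filtered flow reported in the abstract.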

  10. Minimization of entropy production in separate and connected process units

    Energy Technology Data Exchange (ETDEWEB)

    Roesjorde, Audun

    2004-08-01

    ... production of a heat-integrated distillation column separating benzene and toluene was influenced by changing two important system parameters. The two parameters were the ratio between the pressure in the rectifying and stripping section and the total rate of heat transfer per Kelvin (UA{sub total}). In Chapter 4, UA{sub total} was evenly distributed in the column. The results showed that there was an upper and a lower bound on the pressure ratio, for which the heat-integrated column had a lower entropy production than the adiabatic column. A lower bound was also found on UA{sub total}. In Chapter 5, we allowed the UA{sub total} to distribute itself in an optimal way. This enabled even lower entropy productions and widened the range of the two parameters for which the heat-integrated distillation column performed better than the adiabatic one. As in Chapter 3, we found that heat exchange was most important close to the condenser and reboiler. This made us propose a new design for the heat-integrated distillation column, with heat transfer between the topmost and bottommost trays only. This enabled further reductions in the entropy production. The next step in the development was to study several units in connection. In Chapter 6, we minimized the entropy production of a heat exchanger, a plug-flow reactor, and a heat exchanger in series. This was a preparatory study for the larger process optimization in Chapter 7. By shifting heat transfer from the reactor to the heat exchanger up-front, the entropy production was reduced. It was also found that the ambient temperature profile along the reactor was of less importance to the entropy production. Finally, in Chapter 7, we were able to minimize the entropy production of a process producing propylene from propane. We showed that it is meaningful to use the entropy production in a chemical process as the objective function in an optimization that aims to find the most energy-efficient state of operation and, in some aspects, design.

  11. The perceptual flow of phonetic feature processing

    DEFF Research Database (Denmark)

    Greenberg, Steven; Christiansen, Thomas Ulrich

    2008-01-01

    How does the brain process spoken language? It is our thesis that word intelligibility and consonant identification are insufficient by themselves to model how the speech signal is decoded - a finer-grained approach is required. In this study, listeners identified 11 different Danish consonants s...

  12. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5 kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.

  13. Information Flow in the Launch Vehicle Design/Analysis Process

    Science.gov (United States)

    Humphries, W. R., Sr.; Holland, W.; Bishop, R.

    1999-01-01

    This paper describes the results of a team effort aimed at defining the information flow between disciplines at the Marshall Space Flight Center (MSFC) engaged in the design of space launch vehicles. The information flow is modeled at a first level and is described using three types of templates: an N x N diagram, discipline flow diagrams, and discipline task descriptions. It is intended to provide engineers with an understanding of the connections between what they do and where it fits in the overall design process of the project. It is also intended to provide design managers with a better understanding of information flow in the launch vehicle design cycle.

  14. 4D flow MRI post-processing strategies for neuropathologies

    Science.gov (United States)

    Schrauben, Eric Mathew

    4D flow MRI allows for the measurement of a dynamic 3D velocity vector field. Blood flow velocities in large vascular territories can be qualitatively visualized with the added benefit of quantitative probing. Within cranial pathologies theorized to have vascular-based contributions or effects, 4D flow MRI provides a unique platform for comprehensive assessment of hemodynamic parameters. Targeted blood flow derived measurements, such as flow rate, pulsatility, retrograde flow, or wall shear stress may provide insight into the onset or characterization of more complex neuropathologies. Therefore, the thorough assessment of each parameter within the context of a given disease has important medical implications. Not surprisingly, the last decade has seen rapid growth in the use of 4D flow MRI. Data acquisition sequences are available to researchers on all major scanner platforms. However, the use has been limited mostly to small research trials. One major reason that has hindered the more widespread use and application in larger clinical trials is the complexity of the post-processing tasks and the lack of adequate tools for these tasks. Post-processing of 4D flow MRI must be semi-automated, fast, user-independent, robust, and reliably consistent for use in a clinical setting, within large patient studies, or across a multicenter trial. Development of proper post-processing methods coupled with systematic investigation in normal and patient populations pushes 4D flow MRI closer to clinical realization while elucidating potential underlying neuropathological origins. Within this framework, the work in this thesis assesses venous flow reproducibility and internal consistency in a healthy population. A preliminary analysis of venous flow parameters in healthy controls and multiple sclerosis patients is performed in a large study employing 4D flow MRI. These studies are performed in the context of the chronic cerebrospinal venous insufficiency hypothesis. 

  15. Model for Understanding Flow Processes and Distribution in Rock Rubble

    Science.gov (United States)

    Green, R. T.; Manepally, C.; Fedors, R.; Gwo, J.

    2006-12-01

    Recent studies of the potential high-level nuclear waste repository at Yucca Mountain, Nevada, suggest that degradation of emplacement drifts may be caused by either persistent stresses induced by thermal decay of the spent nuclear fuel disposed in the drifts or seismic ground motion. Of significant interest to the performance of the repository is how seepage of water onto the engineered barriers in degraded emplacement drifts would be altered by rubble. Difficulty arises because of the uncertainty associated with the heterogeneity of the natural system complicated by the unknown fragment size and distribution of the rock rubble. A prototype experiment was designed to understand the processes that govern the convergence and divergence of flow in the rubble. This effort is expected to provide additional realism in the corresponding process models and performance assessment of the repository, and to help evaluate the chemistry of water contacting the waste as well as conditions affecting waste package corrosion in the presence of rubble. The rubble sample for the experiment was collected from the lower lithophysal unit of the Topopah Spring (Tptpll) unit in the Enhanced Characterization of the Repository Block Cross Drift and is used as an approximate analog. Most of the potential repository is planned to be built in the Tptpll unit. Sample fragment size varied from 1.0 mm [0.04 in] to 15 cm [6 in]. Ongoing experiments use either a single or multiple sources of infiltration at the top to simulate conditions that could exist in a degraded drift. Seepage is evaluated for variable infiltration rates, rubble particle size distribution, and rubble layering. Comparison of test results with previous bench-scale tests performed on smaller-sized fragments and different geological media will be presented. This paper is an independent product of CNWRA and does not necessarily reflect the view or regulatory position of NRC. The NRC staff views expressed herein are preliminary

  16. Peripheral processing facilitates optic flow-based depth perception

    Directory of Open Access Journals (Sweden)

    Jinglin Li

    2016-10-01

    Flying insects, such as flies or bees, rely on consistent information regarding the depth structure of the environment when performing their flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets and spatial navigation. Insects are thought to obtain depth information visually from the retinal image displacements ('optic flow') during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially if the vast range of light intensities encountered in natural environments is taken into account. This question is addressed here by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells resemble those of a band-pass filter, which strongly reduces the contrast dependency of EMDs, effectively enhancing the representation of the nearness of objects and, especially, of their contours. We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light ...
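The correlation-type elementary motion detector (EMD) invoked above can be sketched in a few lines; the following is a minimal, hypothetical Reichardt-style correlator, not the authors' model of the fly visual system:

```python
import math

def emd_response(signal_a, signal_b, tau=1.0, dt=0.1):
    """Minimal correlation-type elementary motion detector (Reichardt model).

    Each arm low-pass filters (delays) one photoreceptor signal and multiplies
    it with the undelayed signal from the neighbouring receptor; the detector
    output is the difference of the two mirror-symmetric arms.
    """
    alpha = dt / (tau + dt)          # first-order low-pass coefficient
    lp_a = lp_b = 0.0
    out = []
    for a, b in zip(signal_a, signal_b):
        lp_a += alpha * (a - lp_a)   # delayed (filtered) copy of A
        lp_b += alpha * (b - lp_b)   # delayed (filtered) copy of B
        out.append(lp_a * b - lp_b * a)  # opponent subtraction
    return out

# A sinusoidal pattern moving from receptor A toward B (B lags A by 0.4 s)
# yields a net positive response after the filter transient dies out.
t = [i * 0.1 for i in range(200)]
a = [math.sin(2 * math.pi * 0.5 * x) for x in t]
b = [math.sin(2 * math.pi * 0.5 * (x - 0.4)) for x in t]
r = emd_response(a, b)
print(sum(r[50:]) > 0)  # True
```

Swapping the inputs (motion in the opposite direction) flips the sign of the mean response, which is the direction selectivity the abstract relies on.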

  17. Adaptive-optics Optical Coherence Tomography Processing Using a Graphics Processing Unit

    Science.gov (United States)

    Shafer, Brandon A.; Kriske, Jeffery E.; Kocaoglu, Omer P.; Turner, Timothy L.; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T.

    2015-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities, and moderate price compared to super computers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability. PMID:25570838

  18. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    Science.gov (United States)

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities, and moderate price compared to super computers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability.

  19. Information systems for material flow management in construction processes

    Science.gov (United States)

    Mesároš, P.; Mandičák, T.

    2015-01-01

    The article describes the options for the management of material flows in the construction process. Management and resource planning is one of the key factors influencing the effectiveness of a construction project, yet it is very difficult to set these flows correctly. Several options and tools are currently available to do this; information systems and their modules can be used for the management of materials in the construction process.

  20. Stochastic equations, flows and measure-valued processes

    CERN Document Server

    Dawson, Donald A

    2010-01-01

    We first prove some general results on pathwise uniqueness, comparison property and existence of non-negative strong solutions of stochastic equations driven by white noises and Poisson random measures. The results are then used to prove the strong existence of two classes of stochastic flows associated with coalescents with multiple collisions, that is, generalized Fleming-Viot flows and flows of continuous-state branching processes with immigration. One of them unifies the different treatments of three kinds of flows in Bertoin and Le Gall (2005). Two scaling limit theorems for the generalized Fleming-Viot flows are proved, which lead to sub-critical branching immigration superprocesses. From those theorems we derive easily a generalization of the limit theorem for finite point motions of the flows in Bertoin and Le Gall (2006).

  1. Unit Operations for the Food Industry: Equilibrium Processes & Mechanical Operations

    OpenAIRE

    Guiné, Raquel

    2013-01-01

    Unit operations are an area of engineering that is at the same time very fascinating and most essential for the industry in general and the food industry in particular. This book was prepared in a way to achieve simultaneously the academic and practical perspectives. It is organized into two parts: the unit operations based on equilibrium processes and the mechanical operations. Each topic starts with a presentation of the fundamental concepts and principles, followed by a discussion of ...

  2. Formalizing the Process of Constructing Chains of Lexical Units

    Directory of Open Access Journals (Sweden)

    Grigorij Chetverikov

    2015-06-01

    The paper investigates mathematical aspects of describing the construction of chains of lexical units on the basis of finite-predicate algebra. An analysis of the construction peculiarities is carried out, and an application of the method of finding the power of a linear logical transformation for removing characteristic words of a dictionary entry is given. Analysis and perspectives of the results of the study are provided.

  3. Fast Pyrolysis Process Development Unit for Validating Bench Scale Data

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Robert C. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab., Center for Sustainable Environmental Technologies, Bioeconomy Inst.; Jones, Samuel T. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab., Center for Sustainable Environmental Technologies, Bioeconomy Inst.

    2010-03-31

    The purpose of this project was to prepare and operate a fast pyrolysis process development unit (PDU) that can validate experimental data generated at the bench scale. In order to do this, a biomass preparation system, a modular fast pyrolysis fluidized bed reactor, modular gas clean-up systems, and modular bio-oil recovery systems were designed and constructed. Instrumentation for centralized data collection and process control were integrated. The bio-oil analysis laboratory was upgraded with the addition of analytical equipment needed to measure C, H, O, N, S, P, K, and Cl. To provide a consistent material for processing through the fluidized bed fast pyrolysis reactor, the existing biomass preparation capabilities of the ISU facility needed to be upgraded. A stationary grinder was installed to reduce biomass from bale form to 5-10 cm lengths. A 25 kg/hr rotary kiln drier was installed. It has the ability to lower moisture content to the desired level of less than 20% wt. An existing forage chopper was upgraded with new screens. It is used to reduce biomass to the desired particle size of 2-25 mm fiber length. To complete the material handling between these pieces of equipment, a bucket elevator and two belt conveyors must be installed. The bucket elevator has been installed. The conveyors are being procured using other funding sources. Fast pyrolysis bio-oil, char and non-condensable gases were produced from an 8 kg/hr fluidized bed reactor. The bio-oil was collected in a fractionating bio-oil collection system that produced multiple fractions of bio-oil. This bio-oil was fractionated through two separate, but equally important, mechanisms within the collection system. The aerosols and vapors were selectively collected by utilizing laminar flow conditions to prevent aerosol collection and electrostatic precipitators to collect the aerosols. The vapors were successfully collected through a selective condensation process. 
The combination of these two mechanisms ...

  4. Accurate, reliable control of process gases by mass flow controllers

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, J.; McKnight, T.

    1997-02-01

    The thermal mass flow controller, or MFC, has become an instrument of choice for monitoring and controlling process gas flow throughout the materials processing industry. These MFCs are used on CVD processes, etching tools, and furnaces and, within the semiconductor industry, are used on 70% of the processing tools. Reliability and accuracy are major concerns for the users of MFCs. Calibration and characterization technologies for the development and implementation of mass flow devices are described. A test facility is available to industry and universities to test and develop gas flow sensors and controllers and evaluate their performance with respect to environmental effects, reliability, reproducibility, and accuracy. Additional work has been conducted in the area of accuracy. A gravimetric calibrator was invented that allows flow sensors to be calibrated in corrosive, reactive gases to an accuracy of 0.3% of reading, at least an order of magnitude better than previously possible. Although MFCs are typically specified with accuracies of 1% of full scale, MFCs may often be implemented with unwarranted confidence due to the conventional use of surrogate gas factors. Surrogate gas factors are corrections applied to process flow indications when an MFC has been calibrated on a laboratory-safe surrogate gas but is actually used on a toxic or corrosive process gas. Previous studies have indicated that the use of these factors may cause process flow errors of typically 10%, but possibly as great as 40% of full scale. This paper will present possible sources of error in MFC process gas flow monitoring and control, and will present an overview of corrective measures which may be implemented with MFC use to significantly reduce these sources of error.
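The surrogate-gas correction described above is conventionally a ratio of thermal gas correction factors (GCFs); a sketch, where the factor values are assumptions for illustration, not vendor calibration data:

```python
def corrected_flow(indicated_sccm, calibration_gcf, process_gcf):
    """Convert an MFC reading calibrated on a surrogate gas to the process
    gas, using the conventional ratio of thermal gas correction factors
    (GCFs). The GCF values used below are illustrative assumptions, not
    vendor calibration data.
    """
    return indicated_sccm * process_gcf / calibration_gcf

# MFC calibrated on N2 (GCF 1.00 by convention) but flowing a process gas
# with an assumed GCF of 0.86:
actual = corrected_flow(indicated_sccm=100.0, calibration_gcf=1.00, process_gcf=0.86)
print(actual)  # 86.0
```

The 10-40% errors cited in the record arise because published GCFs assume ideal thermal properties; direct calibration on the process gas (as with the gravimetric calibrator described) removes this assumption entirely.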

  5. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. Presentation at the ICWL 2008 conference. August, 20, 2008, Jinhua, China.

  6. Infiltration-excess overland flow estimated by TOPMODEL for the conterminous United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This 5-kilometer resolution raster (grid) dataset for the conterminous United States represents the average percentage of infiltration-excess overland flow in total...

  7. Saturation overland flow estimated by TOPMODEL for the conterminous United States

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This 5-kilometer resolution raster (grid) dataset for the conterminous United States represents the average percentage of saturation overland flow in total...

  8. Characterization of suspended bacteria from processing units in an advanced drinking water treatment plant of China.

    Science.gov (United States)

    Wang, Feng; Li, Weiying; Zhang, Junpeng; Qi, Wanqi; Zhou, Yanyan; Xiang, Yuan; Shi, Nuo

    2017-05-01

    In a drinking water treatment plant (DWTP), removal of organic pollutants is the primary focus, while the suspended bacteria are often neglected. In this study, the suspended bacteria from each processing unit in a DWTP employing an ozone-biological activated carbon process were characterized using heterotrophic plate counts (HPCs), flow cytometry, and 454-pyrosequencing. The results showed opposing trends in HPC and total cell counts in the sand filtration tank (SFT), where the cultivability of suspended bacteria increased to 34%. The cultivability level of the other units stayed below 3%, except for the ozone contact tank (OCT, 13.5%) and the activated carbon filtration tank (ACFT, 34.39%). This means that the filtration processes markedly promoted the cultivability of suspended bacteria, which indicates biodegrading capability. In the OCT, microbial diversity indexes declined drastically, and the dominant bacteria were affiliated with the Proteobacteria phylum (99.9%) and the Betaproteobacteria class (86.3%), which were also dominant in the effluent of the other units. In addition, the primary genus was Limnohabitans in the effluents of the SFT (17.4%) and the ACFT (25.6%), which was inferred to be a crucial contributor to the biodegrading function in the filtration units. Overall, this paper provides an overview of the community composition of each processing unit in a DWTP as well as a reference for better developing microbial function for drinking water treatment in the future.

  9. A Conductivity Relationship for Steady-state Unsaturated Flow Processes under Optimal Flow Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Liu, H. H.

    2010-09-15

    Optimality principles have been used for investigating physical processes in different areas. This work attempts to apply an optimality principle (that water flow resistance is minimized on a global scale) to steady-state unsaturated flow processes. Based on the calculus of variations, we show that under optimal conditions, hydraulic conductivity for steady-state unsaturated flow is proportional to a power function of the magnitude of the water flux. This relationship is consistent with the intuitive expectation that for an optimal water flow system, locations where relatively large water fluxes occur should correspond to relatively small resistance (or large conductance). Similar results were also obtained for hydraulic structures in river basins and tree leaves, as reported in other studies. Consistency of this theoretical result with observed fingering-flow behavior in unsaturated soils and an existing model is also demonstrated.
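The central result, hydraulic conductivity proportional to a power function of the water-flux magnitude, reduces to a one-line relation; `k0` and `n` below are hypothetical constants chosen for illustration, not values fitted in the paper:

```python
def optimal_conductivity(q, k0=1.0e-5, n=0.5):
    """K = k0 * |q|**n: conductivity as a power function of flux magnitude
    under the optimality result described above. k0 and n are hypothetical
    constants for illustration only."""
    return k0 * abs(q) ** n

# Doubling the flux magnitude scales K by 2**n, so larger fluxes see
# larger conductance (smaller resistance), as the abstract argues:
ratio = optimal_conductivity(2.0) / optimal_conductivity(1.0)
print(round(ratio, 4))  # 1.4142
```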

  10. Numerical simulations of rarefied gas flows in thin film processes

    NARCIS (Netherlands)

    Dorsman, R.

    2007-01-01

    Many processes exist in which a thin film is deposited from the gas phase, e.g. Chemical Vapor Deposition (CVD). These processes are operated at ever decreasing reactor operating pressures and with ever decreasing wafer feature dimensions, reaching into the rarefied flow regime.

  11. Environmental Data Flow Six Sigma Process Improvement Savings Overview

    Energy Technology Data Exchange (ETDEWEB)

    Paige, Karen S [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-20

    An overview of the Environmental Data Flow Six Sigma improvement project covers LANL’s environmental data processing following receipt from the analytical laboratories. The Six Sigma project identified thirty-three process improvements, many of which focused on cutting costs or reducing the time it took to deliver data to clients.

  13. Study of an ammonia-based wet scrubbing process in a continuous flow system

    Energy Technology Data Exchange (ETDEWEB)

    Resnik, Kevin P.; Pennline, Henry W.

    2013-03-01

    A continuous gas and liquid flow, regenerative scrubbing process for CO{sub 2} capture was demonstrated at the bench-scale level. An aqueous ammonia-based solution captures CO{sub 2} from simulated flue gas in an absorber and releases a nearly pure stream of CO{sub 2} in the regenerator. After the regeneration, the solution of ammonium compounds is recycled to the absorber. The design of a continuous flow unit was based on earlier exploratory results from a semi-batch reactor, where a CO{sub 2} and N{sub 2} simulated flue gas mixture flowed through a well-mixed batch of ammonia-based solution. During the semi-batch tests, the solution was cycled between absorption and regeneration steps to measure the carrying capacity of the solution at various initial ammonia concentrations and temperatures. Consequently, a series of tests were conducted on the continuous unit to observe the effect of various parameters on CO{sub 2} removal efficiency and regenerator effectiveness within the flow system. The parameters that were studied included absorber temperature, regenerator temperature, initial NH{sub 3} concentration, simulated flue gas flow rate, liquid solvent inventory in the flow system, and height of the packed-bed absorber. From this testing and subsequent testing, ammonia losses from both the absorption and regeneration steps were quantified, and attempts were made to maintain steady state during operations. Implications of experimental results with respect to process design are discussed.

  14. Heat flow and ground water flow in the Great Plains of the United States

    Science.gov (United States)

    Gosnold, William D.

    1985-12-01

    Regional groundwater flow in deep aquifers adds advective components to the surface heat flow over extensive areas within the Great Plains province. The regional groundwater flow is driven by topographically controlled piezometric surfaces for confined aquifers that recharge either at high elevations on the western edge of the province or from subcrop contacts. The aquifers discharge at lower elevations to the east. The asymmetrical geometry of the Denver and Kennedy Basins is such that the surface areas of aquifer recharge are small compared to the areas of discharge. Consequently, positive advective heat flow occurs over most of the province. The advective component of heat flow in the Denver Basin is on the order of 15 mW m^-2 along a zone about 50 km wide that parallels the structure contours of the Dakota aquifer on the eastern margin of the Basin. The advective component of heat flow in the Kennedy Basin is on the order of 20 mW m^-2 and occurs over an extensive area that coincides with the discharge areas of the Madison (Mississippian) and Dakota (Cretaceous) aquifers. Groundwater flow in Paleozoic and Mesozoic aquifers in the Williston Basin causes thermal anomalies that are seen in geothermal gradient data and in oil well temperature data. The pervasive nature of advective heat flow components in the Great Plains tends to mask the heat flow structure of the crust, and only heat flow data from holes drilled into the crystalline basement can be used for tectonic heat flow studies.

  15. Rethinking the process of detrainment: jets in obstructed natural flows

    Science.gov (United States)

    Mossa, Michele; de Serio, Francesca

    2016-12-01

    A thorough understanding of the mixing and diffusion of turbulent jets released into porous obstructions is still lacking in the literature. This issue is of interest because it is not strictly limited to vegetated flows, but also includes outflows from different sources that spread among oyster or wind farms, as well as aerial pesticide treatments sprayed onto orchards. The aim of the present research is to analyze this process from a theoretical point of view. Specifically, by examining the entrainment coefficient, it is deduced that the presence of a canopy prevents a momentum jet from entraining ambient fluid and instead promotes its detrainment. In nature, detrainment is usually associated with buoyancy-driven flows, such as plumes or density currents flowing in a stratified environment. The present study proves that detrainment also occurs when a momentum-driven jet is released into a non-stratified obstructed current, such as a vegetated flow.

  16. Modeling of material flow in friction stir welding process

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a 3D numerical model to study the material flow in the friction stir welding process. Results indicate that the material in front of the pin moves upwards due to the extrusion of the pin, and the upward-moving material then rotates with the pin. Behind the rotating tool, the material starts to move downwards and to deposit in the wake; this process is what allows friction stir welding to proceed successfully. The tangential movement of the material makes the main contribution to the material flow in friction stir welding. A swirl exists on the advancing side, and with increasing translational velocity the reverse flow of the material on the advancing side becomes faster. The shoulder increases the velocity of material flow in both the radial and tangential directions near the top surface. Variations of the process parameters do affect the velocity field near the pin, especially in the region where the material flow is faster.

  17. Rotating thermal flows in natural and industrial processes

    CERN Document Server

    Lappa, Marcello

    2012-01-01

    Rotating Thermal Flows in Natural and Industrial Processes provides the reader with a systematic description of the different types of thermal convection and flow instabilities in rotating systems, as present in materials and crystal growth, thermal engineering, meteorology, oceanography, geophysics and astrophysics. It expressly shows how the isomorphism between small- and large-scale phenomena becomes beneficial to the definition and ensuing development of an integrated comprehensive framework. This allows the reader to understand and assimilate the underlying, quintessential mechanisms without ...

  18. Prediction of hygiene in food processing equipment using flow modelling

    DEFF Research Database (Denmark)

    Friis, Alan; Jensen, Bo Boye Busk

    2002-01-01

    Computational fluid dynamics (CFD) has been applied to investigate the design of closed process equipment with respect to cleanability. The CFD simulations were validated using the standardized cleaning test proposed by the European Hygienic Engineering and Design Group. CFD has been proven ... expansions in tubes. Results show that cleaning can be efficient in complex geometries even when the critical wall shear stress (determined in uni-axial flow) is not exceeded. This highlights the need for considerations concerning three-dimensional flow, the degree of turbulence and the type of flow pattern. ... The controlling factors for cleaning identified were the wall shear stress and the nature and magnitude of recirculation zones present. ...

  19. Prediction of Plastic Flow Characteristics in Ferrite/Pearlite Steel Using an FEM Unit Cell Method

    Institute of Scientific and Technical Information of China (English)

    Hong Li; Jingtao Han; Jing Liu; Lv Zhang

    2004-01-01

    The flow stress of ferrite/pearlite steel under uni-axial tension was simulated with the finite element method (FEM) using the commercial software MARC/MENTAT. Flow stress curves of ferrite/pearlite steels were calculated based on a unit cell model. The effects of the volume fraction, distribution and aspect ratio of pearlite on tensile properties have been investigated.

  20. Comparison of Inflation Processes at the 1859 Mauna Loa Flow, HI, and the McCartys Flow Field, NM

    Science.gov (United States)

    Bleacher, Jacob E.; Garry, W. Brent; Zimbelman, James R.; Crumpler, Larry S.

    2012-01-01

    Basaltic lavas typically form channels or tubes during flow emplacement. However, the importance of sheet flow in the development of basaltic terrains has received recognition over the last 15 years. George Walker's research on the 1859 Mauna Loa Flow was published posthumously in 2009. In this paper he discusses the concept of endogenous growth, or inflation, for the distal portion of this otherwise channel-dominated lava flow. We used this work as a guide when visiting the 1859 flow to help us better interpret the inflation history of the McCartys flow field in NM. Both well-preserved flows display similar clues about the process of inflation. The McCartys lava flow field is among the youngest (approx. 3000 yrs) basaltic lava flows in the continental United States. It was emplaced over slopes of ... crust or sagging along fractures that enable gas release. It is not clear which of these processes is responsible for polygonal terrains, and it is possible that one explanation is not the sole cause of this morphology among all inflated flows. Often, these smooth surfaces within an inflated sheet display lineated surfaces and occasional squeeze-ups along swale contacts. We interpret the lineations to preserve original flow direction and have begun mapping these orientations to better interpret the emplacement history. At the scale of 10s to 100s of meters the flow comprises multiple topographic plateaus and depressions. Some depressions display level floors with surfaces as described above, while some are bowl shaped with floors covered in broken lava slabs. The boundaries between plateaus and depressions are also typically smooth, grooved surfaces that have been tilted to angles sometimes approaching vertical. The upper margin of these tilted surfaces displays large cracks, sometimes containing squeeze-ups. The bottom boundary with smooth-floored depressions typically shows embayment by younger lavas. It appears that this style of terrain represents the

  1. COST ESTIMATION MODELS FOR DRINKING WATER TREATMENT UNIT PROCESSES

    Science.gov (United States)

    Cost models for unit processes typically utilized in a conventional water treatment plant and in package treatment plant technology are compiled in this paper. The cost curves are represented as a function of specified design parameters and are categorized into four major catego...

  2. Determinants of profitability of smallholder palm oil processing units ...

    African Journals Online (AJOL)

    ... of profitability of smallholder palm oil processing units in Ogun state, Nigeria. ... as well as their geographical spread covering the entire land space of the state. ... The F-ratio value is statistically significant (P<0.01) implying that the model is ...

  3. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate ... the performance of an implementation and demonstrate that while there are some implementational pitfalls, a careful implementation can result in impressive improvements. ...

  4. Utilizing Graphics Processing Units for Network Anomaly Detection

    Science.gov (United States)

    2012-09-13

    matching system using deterministic finite automata and extended finite automata resulting in a speedup of 9x over the CPU implementation [SGO09]. [Kov10] Nicholas S. Kovach. Accelerating malware detection via a graphics processing unit, 2010. http://www.dtic.mil/dtic/tr

  5. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2010-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the Graphics Processing Unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expansion ...

  6. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2014-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the graphics processing unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expansion ...

  7. Flows of engineered nanomaterials through the recycling process in Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Caballero-Guzman, Alejandro; Sun, Tianyin; Nowack, Bernd, E-mail: nowack@empa.ch

    2015-02-15

    Highlights: • Recycling is one of the likely end-of-life fates of nanoproducts. • We assessed the material flows of four nanomaterials in the Swiss recycling system. • After recycling, most nanomaterials will flow to landfills or incineration plants. • Recycled construction waste, plastics and textiles may contain nanomaterials. - Abstract: The use of engineered nanomaterials (ENMs) in diverse applications has increased in recent years and will likely continue to do so in the near future. As the number of applications increases, more and more waste containing nanomaterials will be generated. A portion of this waste will enter the recycling system, for example, in electronic products, textiles and construction materials. The fate of these materials during and after waste management and recycling operations is poorly understood. The aim of this work is to model the flows of nano-TiO{sub 2}, nano-ZnO, nano-Ag and CNT in the recycling system in Switzerland. The basis for this study is published information on ENM flows in the Swiss system. We developed a method to assess their flows after recycling. To incorporate the uncertainties inherent in the limited information available, we applied a probabilistic material flow analysis approach. The results show that the recycling process does not result in significant further propagation of nanomaterials into new products. Instead, the largest proportion will flow as waste that can subsequently be properly handled in incineration plants or landfills. Smaller fractions of ENMs will be eliminated or end up in materials that are sent abroad to undergo further recovery processes. Only a small amount of ENMs will flow back into the productive processes of the economy in a limited number of sectors. Overall, the results suggest that risk assessment during recycling should focus on occupational exposure, release of ENMs in landfills and incineration plants, and toxicity assessment of a small number of recycled inputs.
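    The probabilistic material flow analysis described above can be sketched in a few lines: transfer coefficients are sampled (here from a Dirichlet distribution, so each draw is non-negative and sums to one) and propagated through the mass balance. All numbers below are illustrative placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # Monte Carlo draws

# Illustrative transfer coefficients (NOT the study's values): fraction of the
# ENM mass entering recycling that ends up in each end-of-life compartment.
# Dirichlet sampling keeps every draw non-negative and summing to one.
compartments = ["landfill", "incineration", "export", "recycled product"]
alpha = np.array([60.0, 25.0, 10.0, 5.0])
tc = rng.dirichlet(alpha, size=N)                  # shape (N, 4)

inflow = rng.normal(100.0, 10.0, size=N)           # t/a entering recycling (assumed)
flows = tc * inflow[:, None]                       # t/a into each compartment

for name, f in zip(compartments, flows.T):
    lo, hi = np.percentile(f, [5, 95])
    print(f"{name:17s} mean={f.mean():6.1f} t/a  (5th-95th pct: {lo:.1f}-{hi:.1f})")
```

    Reporting percentile intervals rather than single values is what distinguishes the probabilistic approach from a deterministic mass balance.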

  8. Grout long radius flow testing to support Saltstone disposal Unit 5 design

    Energy Technology Data Exchange (ETDEWEB)

    Stefanko, D. B.; Langton, C. A.; Serrato, M. G.; Brooks, T. E. II; Huff, T. H.

    2013-02-24

    The Saltstone Facility, located within the Savannah River Site (SRS) near Aiken, South Carolina, consists of two facility segments: the Saltstone Production Facility (SPF) and the Saltstone Disposal Facility (SDF). The SPF receives decontaminated legacy low-level sodium salt waste solution that is a byproduct of prior nuclear material processing. The salt solution is mixed with cementitious materials to form a grout slurry known as “Saltstone”. The grout is pumped to the SDF where it is placed in a Saltstone Disposal Unit (SDU) to solidify. SDU 6 is referred to as a “mega vault” and is currently in the design stage. The conceptual design for SDU 6 is a single-cell, cylindrical geometry approximately 114.3 meters in diameter by 13.1 meters high, larger than previous cylindrical SDU designs of 45.7 meters in diameter by 7.01 meters high (30 million gallons versus 2.9 million gallons of capacity). Saltstone slurry will be pumped into the new waste disposal unit through roof openings at a projected flow rate of about 34.1 cubic meters per hour. Nine roof openings are included in the design to discharge material into the SDU with an estimated grout pour radius of 22.9 to 24.4 meters and an initial drop height of 13.1 meters. The conceptual design for the new SDU does not include partitions to limit the pour radius of the grout slurry during placement, other than introducing material from different pour points. This paper addresses two technical issues associated with the larger diameter of SDU 6: saltstone flow distance in a tank 114.3 meters in diameter and quality of the grout. A long-radius flow test scaled to match the velocity of an advancing grout front was designed to address these technology gaps. The emphasis of the test was to quantify the flow distance and to collect samples to evaluate cured properties including compressive strength, porosity, density, and saturated hydraulic conductivity. Two clean cap surrogate mixes (saltstone premix plus water

  9. Grout Long Radius Flow Testing to Support Saltstone Disposal Unit 6 Design - 13352

    Energy Technology Data Exchange (ETDEWEB)

    Stefanko, D.B.; Langton, C.A.; Serrato, M.G. [Savannah River National Laboratory, Savannah River Nuclear Solutions, LLC, Savannah River Site, Aiken, SC 29808 (United States); Brooks, T.E. II; Huff, T.H. [Savannah River Remediation, LLC, Savannah River Site, Aiken, SC 29808 (United States)

    2013-07-01

    The Saltstone Facility, located within the Savannah River Site (SRS) near Aiken, South Carolina, consists of two facility segments: the Saltstone Production Facility (SPF) and the Saltstone Disposal Facility (SDF). The SPF receives decontaminated legacy low-level sodium salt waste solution that is a byproduct of prior nuclear material processing. The salt solution is mixed with cementitious materials to form a grout slurry known as 'Saltstone'. The grout is pumped to the SDF where it is placed in a Saltstone Disposal Unit (SDU) to solidify. SDU 6 is referred to as a 'mega vault' and is currently in the design stage. The conceptual design for SDU 6 is a single-cell, cylindrical geometry approximately 114.3 meters in diameter by 13.1 meters high, larger than previous cylindrical SDU designs of 45.7 meters in diameter by 7.01 meters high (30 million gallons versus 2.9 million gallons of capacity). Saltstone slurry will be pumped into the new waste disposal unit through roof openings at a projected flow rate of about 34.1 cubic meters per hour. Nine roof openings are included in the design to discharge material into the SDU with an estimated grout pour radius of 22.9 to 24.4 meters and an initial drop height of 13.1 meters. The conceptual design for the new SDU does not include partitions to limit the pour radius of the grout slurry during placement, other than introducing material from different pour points. This paper addresses two technical issues associated with the larger diameter of SDU 6: Saltstone flow distance in a tank 114.3 meters in diameter and quality of the grout. A long-radius flow test scaled to match the velocity of an advancing grout front was designed to address these technology gaps. The emphasis of the test was to quantify the flow distance and to collect samples to evaluate cured properties including compressive strength, porosity, density, and saturated hydraulic conductivity. Two clean cap surrogate mixes (Saltstone premix

  10. Analysis of impact of general-purpose graphics processor units in supersonic flow modeling

    Science.gov (United States)

    Emelyanov, V. N.; Karpenko, A. G.; Kozelkov, A. S.; Teterina, I. V.; Volkov, K. N.; Yalozo, A. V.

    2017-06-01

    Computational methods are widely used in the prediction of complex flowfields associated with off-normal situations in aerospace engineering. Modern graphics processing units (GPU) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations at both high performance and low cost. Possibilities for the use of GPUs in the simulation of external and internal flows on unstructured meshes are discussed. The finite volume method is applied to solve three-dimensional unsteady compressible Euler and Navier-Stokes equations on unstructured meshes with high-resolution numerical schemes. CUDA technology is used for the programming implementation of the parallel computational algorithms. Solutions of some benchmark test cases on GPUs are reported, and the computed results are compared with experimental and computational data. Approaches to optimization of the CFD code related to the use of different types of memory are considered. The speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared. Performance measurements show that the numerical schemes developed achieve a 20-50x speedup on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising perspective for designing a GPU-based software framework for applications in CFD.
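    The per-cell flux update at the heart of such finite-volume schemes is what maps naturally onto GPU threads: each cell's new state depends only on its neighbors' fluxes. A minimal sketch of that update (1D linear advection with first-order upwind fluxes, written in NumPy rather than CUDA, and not the paper's compressible solver):

```python
import numpy as np

# One finite-volume step for linear advection u_t + a u_x = 0 on a periodic
# domain, with first-order upwind fluxes (a > 0). Each cell update depends
# only on neighboring face fluxes, which is why such loops parallelize well.
def upwind_step(u, a, dx, dt):
    flux = a * np.roll(u, 1)                    # flux through each cell's left face
    return u - dt / dx * (np.roll(flux, -1) - flux)

nx, a = 200, 1.0
dx = 1.0 / nx
dt = 0.5 * dx / a                               # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx
u0 = np.exp(-200.0 * (x - 0.3) ** 2)            # initial Gaussian pulse
u = u0.copy()
for _ in range(100):
    u = upwind_step(u, a, dx, dt)               # pulse advects to the right
```

    The scheme is conservative (total mass is preserved exactly on the periodic domain) and monotone, which is what the assertions on such kernels typically check.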

  11. Flow manipulation and control methodologies for vacuum infusion processes

    Science.gov (United States)

    Alms, Justin B.

    Vacuum Infusion Processes (VIPs) are very attractive composite manufacturing processes since large structures such as fuselages and wind blades can be fabricated in a cost-effective manner. In VIPs, the fabric layers are placed on a one-sided mold which is closed by enveloping the entire mold with a thin plastic film and evacuating the air out. The vacuum compresses the fabric and, when a resin inlet is opened, resin flows into the mold. The resin is allowed to cure before demolding the structure. However, VIPs cause non-repeatable and problematic resin filling patterns due to the heterogeneous nature of the material, nesting between various layers, and the hand labor utilized for laying up the fabric. The design of the manufacturing process routinely involves a trial-and-error model, which makes manufacturing costs and development time difficult to estimate. The clear solution to improving the reliability and robustness of VIPs is to implement a system capable of on-line flow control. While on-line flow control has been studied and developed for other composite manufacturing processes, VIPs have been largely ignored as there are few process parameters that lend themselves to effective flow control. In this work, two new processes were discovered with the goal of on-line control of VIPs in mind. These two processes, referred to as the Flow Flooding Chamber (FFC) and Vacuum Induced Preform Relaxation (VIPR), are discussed. They both employ an external vacuum chamber to influence the permeability of the fabric temporarily, which allows one to redirect the resin flow to resin-starved regions of the mold. The VIPR process in addition uses a low, regulated vacuum pressure in the external chamber to increase the permeability of the fabric in a controllable manner. The objective is to understand how the VIPR process affects the resin flow in order to implement it in a complete flow control and automated environment which will reduce or eliminate the variability

  12. Compositional variation within thick (>10 m) flow units of Mauna Kea Volcano cored by the Hawaii Scientific Drilling Project

    Science.gov (United States)

    Huang, Shichun; Vollinger, Michael J.; Frey, Frederick A.; Rhodes, J. Michael; Zhang, Qun

    2016-07-01

    Geochemical analyses of stratigraphic sequences of lava flows are necessary to understand how a volcano works. Typically, one sample from each lava flow is collected and studied with the assumption that this sample is representative of the flow composition. This assumption may not be valid. The thickness of flows ranges up to 100 m. Geochemical heterogeneity in thin flows may be created by interaction with the surficial environment, whereas magmatic processes occurring during emplacement may create geochemical heterogeneities in thick flows. The Hawaii Scientific Drilling Project (HSDP) cored ∼3.3 km of basalt erupted at Mauna Kea Volcano. In order to determine geochemical heterogeneities within a flow, multiple samples from four thick (9.3-98.4 m) HSDP flow units were analyzed for major and trace elements. We found that major element abundances in three submarine flow units are controlled by the varying proportion of olivine, the primary phenocryst phase in these samples. Post-magmatic alteration of a subaerial flow led to loss of SiO2, CaO, Na2O, K2O and P2O5, and as a consequence, contents of immobile elements, such as Fe2O3 and Al2O3, increase. The mobility of SiO2 is important because Mauna Kea shield lavas divide into two groups that differ in SiO2 content. Post-magmatic mobility of SiO2 adds complexity to determining whether these groups reflect differences in source or process. The most mobile elements during post-magmatic subaerial and submarine alteration are K and Rb; Ba, Sr and U were also mobile, but their abundances are not highly correlated with K and Rb. The Ba/Th ratio has been used to document an important role for a plagioclase-rich source component for basalt from the Galapagos, Iceland and Hawaii. Although Ba/Th is anomalously high in Hawaiian basalt, variation in Ba abundance within a single flow shows that it is not a reliable indicator of a deep source component.
In contrast, ratios involving elements that are typically immobile, such as La/Nb, La

  13. Point process models for household distributions within small areal units

    Directory of Open Access Journals (Sweden)

    Zack W. Almquist

    2012-06-01

    Spatio-demographic data sets are increasingly available worldwide, permitting ever more realistic modeling and analysis of social processes ranging from mobility to disease transmission. The information provided by these data sets is typically aggregated by areal unit, for reasons of both privacy and administrative cost. Unfortunately, such aggregation does not permit fine-grained assessment of geography at the level of individual households. In this paper, we propose to partially address this problem via the development of point process models that can be used to effectively simulate the location of individual households within small areal units.
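    The simplest member of the model family proposed above is the homogeneous Poisson point process: the household count is drawn from a Poisson distribution scaled by the unit's area, and locations are placed uniformly given the count. A sketch with illustrative parameters (intensity and unit size are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Homogeneous Poisson point process on a rectangular areal unit: the household
# count is Poisson(intensity * area); given the count, locations are uniform.
def simulate_households(intensity, width, height, rng):
    n = rng.poisson(intensity * width * height)
    return rng.uniform([0.0, 0.0], [width, height], size=(n, 2))

pts = simulate_households(intensity=250.0, width=1.0, height=1.0, rng=rng)
print(f"{len(pts)} households placed")
```

    Inhomogeneous variants replace the constant intensity with a spatial covariate surface, which is how aggregate census counts can constrain the simulated locations.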

  14. A Brief Review of High Entropy Alloys and Serration Behavior and Flow Units

    Institute of Scientific and Technical Information of China (English)

    Yong ZHANG; Jun-wei QIAO; Peter K. LIAW

    2016-01-01

    Multicomponent alloys with high entropy of mixing, e.g., high entropy alloys (HEAs) and/or multiprincipal-element alloys (MEAs), are attracting increasing attention, because materials with novel properties are being developed based on the design strategy of equiatomic ratios, multiple components, and high entropy of mixing in the liquid or random solution state. Recently, HEAs with ultrahigh strength and fracture toughness, excellent magnetic properties, high fatigue, wear and corrosion resistance, great phase stability/high resistance to heat-softening behavior, sluggish diffusion effects, and potential superconductivity, etc., were developed. HEAs can even have very high irradiation resistance and may have some self-healing effects, and can potentially be used as first-wall and nuclear-fuel-cladding materials. Serration behaviors and flow units are powerful methods for understanding the plastic deformation or fracture of materials. These methods have been successfully used to study the plasticity of amorphous alloys (also bulk metallic glasses, BMGs). The flow units are proposed as: free volumes, shear transition zones (STZs), tension-transition zones (TTZs), liquid-like regions, soft regions or soft spots, etc. The flow units in crystalline alloys are usually dislocations, which may interact with solute atoms of interstitial or substitutional types. Moreover, the flow units often change with the testing temperature and loading strain rate; e.g., at low temperature and high strain rate, plastic deformation is carried by twinning as the flow unit, while at high temperatures the grain boundary becomes the weak area and acts as the flow unit. The serration shapes are related to the types of flow units, and the serration behavior can be analyzed using the power law and a modified power law.

  15. Carbonate gravity-flow processes on the Lower Permian slope, northwest Delaware basin

    Energy Technology Data Exchange (ETDEWEB)

    Loucks, R.G.; Brown, A.A.; Achauer, C.W. (ARCO Oil and Gas Co., Plano, TX (United States))

    1991-03-01

    Wolfcampian carbonate gravity-flow deposits accumulated on a low-angle slope in front of a platform of relatively low relief ({approximately}220 m). A 25 m core, located approximately 15 km basinward of the shelf margin, was examined to determine processes of carbonate deposition in the middle to distal slope environments. The majority of the deposits are cohesive debris flows composed of clast-supported conglomerates with a calcareous siliciclastic mudstone matrix. Other deposits include high- and low-density turbidites of lime packstones (sand- to boulder-size range), lime grainstones, and siliciclastic muddy siltstones, and suspension deposits of calcareous siliciclastic mudstones. Cohesive debris flows are generally massive and structureless, although several flows show an inverse-graded zone at their base indicating dispersive pressure forces that developed in a traction carpet. Other flows display coarse-tail fining-upward sequences indicating deposition by suspension settling from liquefied flow. At the base of each high-density, gravelly turbidite are one to several inversely graded zones of carbonate clasts indicating a traction-carpet zone. These traction carpets are overlain by normally graded units of shell and clast material. The upper units appear to have been deposited directly out of suspension. The low-density turbidites are interpreted to be the residual products of more shelfward-deposited debris flows and high-density turbidity currents. Many of the depositional features described here for carbonate gravity-flow deposits are identical to those in siliciclastic deposits; therefore, the depositional processes controlling these features are probably similar.

  16. Erosional processes in channelized water flows on Mars

    Science.gov (United States)

    Baker, V. R.

    1979-01-01

    A hypothesis is investigated according to which the Martian outflow channels were formed by high-velocity flows of water or dynamically similar liquid. It is suggested that the outflow channels are largely the result of several interacting erosional mechanisms, including fluvial processes involving ice covers, macroturbulence, streamlining, and cavitation.

  17. Numerical Modeling of Fluid Flow in the Tape Casting Process

    DEFF Research Database (Denmark)

    Jabbari, Masoud; Hattel, Jesper Henri

    2011-01-01

    The flow behavior of the fluid in the tape casting process is analyzed. A simple geometry is assumed for running the numerical calculations in ANSYS Fluent, and the main parameters are expressed in non-dimensional form. The effect of different values for substrate velocity and pressure force ...

  18. Coaching, lean processes and the concept of flow

    DEFF Research Database (Denmark)

    Skytte Gørtz, Kim Erik

    2008-01-01

    The chapter takes us inside Nordea Bank to look at how coaching was used to support their leadership development as they underwent a major change effort implementation. Drawing on the literature on Lean processes, flow and coaching, it demonstrates some of the challenges and opportunities...

  19. Accelerated space object tracking via graphic processing unit

    Science.gov (United States)

    Jia, Bin; Liu, Kui; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    In this paper, a hybrid Monte Carlo Gauss mixture Kalman filter is proposed for the continuous orbit estimation problem. Specifically, the graphics processing unit (GPU) aided Monte Carlo method is used to propagate the uncertainty of the estimate when observations are not available, and the Gauss mixture Kalman filter is used to update the estimate when observation sequences are available. A typical space object tracking problem using ground radar is used to test the performance of the proposed algorithm, which is compared with the popular cubature Kalman filter (CKF). The simulation results show that the ordinary CKF diverges in 5 observation periods. In contrast, the proposed hybrid Monte Carlo Gauss mixture Kalman filter achieves satisfactory performance in all observation periods. In addition, by using the GPU, the computational time is more than 100 times lower than with a conventional central processing unit (CPU).
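    The hybrid scheme's two phases can be sketched in a 1D toy problem: vectorized Monte Carlo propagation of an ensemble (the data-parallel work the paper offloads to the GPU), followed by a Gaussian measurement update built from the ensemble moments. The dynamics, noise levels and the perturbed-observation update below are illustrative assumptions, not the paper's orbit model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo propagation of an ensemble through assumed nonlinear dynamics,
# then a Kalman-style update from the ensemble mean and variance (scalar case).
n = 50_000
x = rng.normal(0.0, 1.0, n)              # prior ensemble of the scalar state

def propagate(x, dt=0.1):
    return x + dt * np.sin(x)            # assumed nonlinear dynamics

for _ in range(10):                      # observation gap: pure MC propagation
    x = propagate(x)

# An observation arrives: z = x + v with v ~ N(0, R).
R, z = 0.25, 1.2
m, P = x.mean(), x.var()                 # prior moments from the ensemble
K = P / (P + R)                          # Kalman gain for the scalar case
v = rng.normal(0.0, np.sqrt(R), n)       # perturbed observations (EnKF-style)
x = x + K * (z + v - x)                  # posterior ensemble
```

    The perturbed-observation form keeps the posterior ensemble spread consistent with the Kalman posterior variance rather than collapsing it.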

  20. Ising Processing Units: Potential and Challenges for Discrete Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Coffrin, Carleton James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagarajan, Harsha [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bent, Russell Whitford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-05

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods on a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.
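    The Ising abstraction proposed above is simply the minimization of E(s) = sum over i&lt;j of J_ij s_i s_j plus sum over i of h_i s_i, with spins s_i in {-1, +1}. A software simulated annealer can stand in for the specialized hardware; the random couplings, fields and cooling schedule below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Ising model: E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i, s_i in {-1, +1}.
n = 16
J = np.triu(rng.normal(0.0, 1.0, (n, n)), k=1)   # upper-triangular couplings
h = rng.normal(0.0, 1.0, n)

def energy(s):
    return s @ J @ s + h @ s

def anneal(s, T0=2.0, cooling=0.999, steps=5000):
    s, T = s.copy(), T0
    for _ in range(steps):
        i = rng.integers(n)
        # O(n) energy change from flipping spin i (J[i] + J[:, i] symmetrizes triu)
        dE = -2.0 * s[i] * ((J[i] + J[:, i]) @ s + h[i])
        if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
            s[i] = -s[i]
        T *= cooling                                  # geometric cooling
    return s

s0 = rng.choice([-1, 1], n)
s_best = anneal(s0)
print(f"E(start) = {energy(s0):.2f}, E(annealed) = {energy(s_best):.2f}")
```

    The incremental energy-difference formula is what hardware annealers also exploit: evaluating a single-spin flip is O(n), not O(n^2).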

  1. A Universal Quantum Network Quantum Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    WANG An-Min

    2001-01-01

    A new construction scheme for a universal quantum network which is compatible with known quantum gate-assembly schemes is proposed. Our quantum network is standard, easy to assemble, reusable, scalable and even potentially programmable. Moreover, we can construct a whole quantum network to implement general quantum algorithms and quantum simulation procedures. In the above senses, it is a realization of the quantum central processing unit.

  2. Accelerating Malware Detection via a Graphics Processing Unit

    Science.gov (United States)

    2010-09-01

    ...operating systems for the future [Szo05]. The PE format is an updated version of the common object file format (COFF) [Mic06]. Microsoft released a new ...

  3. An Architecture of Deterministic Quantum Central Processing Unit

    OpenAIRE

    Xue, Fei; Chen, Zeng-Bing; Shi, Mingjun; Zhou, Xianyi; Du, Jiangfeng; Han, Rongdian

    2002-01-01

    We present an architecture of a QCPU (Quantum Central Processing Unit), based on a discrete quantum gate set, that can be programmed to approximate any n-qubit computation in a deterministic fashion. It can be built efficiently to implement computations with any required accuracy. The QCPU makes it possible to implement universal quantum computation with fixed, general-purpose hardware. Thus the complexity of the quantum computation can be put into the software rather than the hardware.

  4. Process Measurement Deviation Analysis for Flow Rate due to Miscalibration

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Eunsuk; Kim, Byung Rae; Jeong, Seog Hwan; Choi, Ji Hye; Shin, Yong Chul; Yun, Jae Hee [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)

    2016-10-15

    An analysis was initiated to identify the root cause, and the exemption of the high static line pressure correction for differential pressure (DP) transmitters was identified as one of the major deviation factors. The miscalibrated DP transmitter range was identified as another major deviation factor. This paper presents considerations to be incorporated into process flow measurement instrumentation calibration. The analysis identified that the DP flow transmitter electrical output decreased by 3%; thereafter, the flow rate indication decreased by 1.9% as a result of the high static line pressure correction exemption and measurement range miscalibration. After re-calibration, the flow rate indication increased by 1.9%, which is consistent with the analysis result. This paper presents the brief calibration procedures for the Rosemount DP flow transmitter and analyzes three possible cases of measurement deviation, including error and cause. Generally, a DP transmitter must be calibrated with a precise process input range according to the calibration procedure provided for the specific DP transmitter. Especially in the case of a DP transmitter installed at high static line pressure, it is important to correct for the high static line pressure effect to avoid the inherent systematic error of the Rosemount DP transmitter. Otherwise, failure to apply the correction may lead to an indication deviating from the actual value.
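    With square-root flow extraction, indicated flow is proportional to the square root of the DP signal, so a given percentage error in the transmitter output produces a smaller percentage error in indicated flow near full scale. A sketch of that arithmetic (the abstract's 1.9% figure also folds in the static line pressure correction and the specific range settings, which are not modeled here):

```python
import math

# Square-root flow extraction: indicated flow is proportional to sqrt(DP).
# Illustrative arithmetic only, not the paper's full deviation analysis.
def indicated_flow(dp_fraction, q_max=100.0):
    """Indicated flow for a DP reading given as a fraction of span."""
    return q_max * math.sqrt(dp_fraction)

q_true = indicated_flow(1.00)   # true flow at full-scale DP
q_low = indicated_flow(0.97)    # transmitter output reading 3% low
drop = 100.0 * (1.0 - q_low / q_true)
print(f"flow indication reads {drop:.2f}% low")   # about 1.5% near full scale
```

    At lower operating points the percentage flow error grows, since the square-root curve steepens toward zero DP; this is one reason the observed 1.9% deviation depends on the operating range.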

  5. Gene flow among different taxonomic units: evidence from nuclear and cytoplasmic markers in Cedrus plantation forests.

    Science.gov (United States)

    Fady, B; Lefèvre, F; Reynaud, M; Vendramin, G G; Bou Dagher-Kharrat, M; Anzidei, M; Pastorelli, R; Savouré, A; Bariteau, M

    2003-10-01

Hybridization and introgression are important natural evolutionary processes that can be successfully investigated using molecular markers and open- and controlled-pollinated progeny. In this study, we collected open-pollinated seeds from Cedrus atlantica, Cedrus libani and C. libani x C. atlantica hybrids from three French plantation forests. We also used pollen from C. libani and Cedrus brevifolia to pollinate C. atlantica trees. The progeny were analyzed using three different types of molecular markers: RAPDs, AFLPs and cpSSRs. Chloroplast DNA was found to be paternally inherited in Cedrus from the progeny of controlled crosses. Heteroplasmy, although possible, could not be unambiguously detected. There was no indication of strong reproductive isolating barriers among the three Mediterranean Cedrus taxa. Gene flow between C. atlantica and C. libani accounted for 67 to 81% of viable open-pollinated seedlings in two plantation forests. We propose that Mediterranean Cedrus taxa should be considered as units of a single collective species comprising two regional groups, North Africa and the Middle East. We recommend the use of cpSSRs for monitoring gene flow between taxa in plantation forests, especially in areas where garden specimens of one species are planted in the vicinity of selected seed stands and gene-conservation reserves of another species.

  6. Wildfire impacts on the processes that generate debris flows in burned watersheds

    Science.gov (United States)

    Parise, M.; Cannon, S.H.

    2012-01-01

    Every year, and in many countries worldwide, wildfires cause significant damage and economic losses due to both the direct effects of the fires and the subsequent accelerated runoff, erosion, and debris flow. Wildfires can have profound effects on the hydrologic response of watersheds by changing the infiltration characteristics and erodibility of the soil, which leads to decreased rainfall infiltration, significantly increased overland flow and runoff in channels, and movement of soil. Debris-flow activity is among the most destructive consequences of these changes, often causing extensive damage to human infrastructure. Data from the Mediterranean area and Western United States of America help identify the primary processes that result in debris flows in recently burned areas. Two primary processes for the initiation of fire-related debris flows have been so far identified: (1) runoff-dominated erosion by surface overland flow; and (2) infiltration-triggered failure and mobilization of a discrete landslide mass. The first process is frequently documented immediately post-fire and leads to the generation of debris flows through progressive bulking of storm runoff with sediment eroded from the hillslopes and channels. As sediment is incorporated into water, runoff can convert to debris flow. The conversion to debris flow may be observed at a position within a drainage network that appears to be controlled by threshold values of upslope contributing area and its gradient. At these locations, sufficient eroded material has been incorporated, relative to the volume of contributing surface runoff, to generate debris flows. Debris flows have also been generated from burned basins in response to increased runoff by water cascading over a steep, bedrock cliff, and incorporating material from readily erodible colluvium or channel bed. Post-fire debris flows have also been generated by infiltration-triggered landslide failures which then mobilize into debris flows. 

  7. BitTorrent Processing Unit: An Outlook on BPU Development

    Institute of Scientific and Technical Information of China (English)

    Zone; 杨原青

    2007-01-01

    In the early days of computing, arithmetic, graphics, and input/output processing were all handled by the CPU (Central Processing Unit). As processing became more specialized, however, NVIDIA was the first to split graphics processing off, proposing the GPU (Graphics Processing Unit) concept in 1999. Eight years on, the GPU has become the mainstay of graphics processing and a term familiar to all gamers. Recently, two Taiwanese companies proposed the concept of the BPU (BitTorrent Processing Unit), a unit dedicated to BitTorrent processing. Below, we take a look at this novel concept product.

  8. Flows of engineered nanomaterials through the recycling process in Switzerland.

    Science.gov (United States)

    Caballero-Guzman, Alejandro; Sun, Tianyin; Nowack, Bernd

    2015-02-01

The use of engineered nanomaterials (ENMs) in diverse applications has increased in recent years and will likely continue to do so in the near future. As the number of applications increases, more and more waste containing nanomaterials will be generated. A portion of this waste will enter the recycling system, for example, in electronic products, textiles and construction materials. The fate of these materials during and after the waste management and recycling operations is poorly understood. The aim of this work is to model the flows of nano-TiO2, nano-ZnO, nano-Ag and CNT in the recycling system in Switzerland. The basis for this study is published information on ENM flows in the Swiss system. We developed a method to assess their flows after recycling. To incorporate the uncertainties inherent in the limited information available, we applied a probabilistic material flow analysis approach. The results show that the recycling processes do not result in significant further propagation of nanomaterials into new products. Instead, the largest proportion will flow as waste that can subsequently be properly handled in incineration plants or landfills. Smaller fractions of ENMs will be eliminated or end up in materials that are sent abroad to undergo further recovery processes. Only a small amount of ENMs will flow back into the productive process of the economy, in a limited number of sectors. Overall, the results suggest that risk assessment during recycling should focus on occupational exposure, release of ENMs in landfills and incineration plants, and toxicity assessment of a small number of recycled inputs.
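A probabilistic material flow analysis of this kind propagates uncertain transfer coefficients through the system by Monte Carlo sampling. A minimal sketch of the idea, with hypothetical transfer coefficients and inflow (not the study's values):

```python
import random

random.seed(42)

def sample_partition(tc_means, spread=0.1):
    """Sample perturbed transfer coefficients and renormalize so they sum to 1
    (mass is conserved in every Monte Carlo run)."""
    raw = [max(1e-9, random.gauss(m, spread * m)) for m in tc_means]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical mean transfer coefficients for ENM mass entering recycling:
# fractions going to incineration, landfill, export, and back into products.
tc_means = [0.45, 0.35, 0.15, 0.05]
inflow = 1000.0  # kg/year of ENM entering the recycling system (hypothetical)

runs = [[f * inflow for f in sample_partition(tc_means)] for _ in range(10000)]
mean_flows = [sum(r[i] for r in runs) / len(runs) for i in range(4)]
# mean_flows approximates the expected mass reaching each compartment,
# and the spread of `runs` quantifies the uncertainty of each flow
```

The renormalization step is one simple way to keep each sampled run mass-balanced; the study's actual distributions and system structure are more detailed.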

  9. Transient flow analysis of integrated valve opening process

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Xinming; Qin, Benke; Bo, Hanliang, E-mail: bohl@tsinghua.edu.cn; Xu, Xingxing

    2017-03-15

Highlights: • The control rod hydraulic driving system (CRHDS) is a new type of built-in control rod drive technology and the integrated valve (IV) is its key control component. • A transient flow experiment induced by the IV is conducted and the test results are analyzed to obtain its working mechanism. • A theoretical model of the IV opening process is established and applied to obtain the evolution of the transient flow characteristic parameters. - Abstract: The control rod hydraulic driving system (CRHDS) is a new type of built-in control rod drive technology and the integrated valve (IV) is its key control component. The working principle of the IV is analyzed and an IV hydraulic experiment is conducted. A transient flow phenomenon occurs during the valve opening process. A theoretical model of the IV opening process is established from the loop system control equations and boundary conditions. The valve opening boundary condition equation is established based on three-dimensional flow field analysis of the IV and dynamic analysis of the valve core movement. The model calculation results are in good agreement with the experimental results. On this basis, the model is used to analyze the transient flow under high temperature conditions. The peak pressure head is consistent with that at room temperature, while the pressure fluctuation period is longer than at room temperature. Furthermore, the variation of the pressure transients with the fluid and loop structure parameters is analyzed. The peak pressure increases with the flow rate and decreases with increasing valve opening time. The pressure fluctuation period increases with the loop pipe length, and the fluctuation amplitude remains largely unchanged under different equilibrium pressure conditions. These results lay the basis for the vibration reduction analysis of the CRHDS.
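A generic first-order check on the peak pressure in such valve-induced transients is the Joukowsky relation, Δp = ρ·a·Δv, which ties the surge to the change in flow velocity and the pressure-wave speed in the pipe. This is a textbook water-hammer estimate, not the paper's loop model, and the numbers below are hypothetical:

```python
# Joukowsky estimate of a pressure surge caused by a rapid change
# in flow velocity (e.g. a fast valve opening or closure).
rho = 1000.0   # water density, kg/m^3 (room temperature)
a = 1200.0     # pressure wave speed in the pipe, m/s (hypothetical)
dv = 0.5       # change in flow velocity at the valve, m/s (hypothetical)

dp = rho * a * dv            # surge pressure, Pa
head = dp / (rho * 9.81)     # equivalent pressure head, m of water
```

Such an order-of-magnitude estimate is useful for sanity-checking a detailed transient model: the peak from the loop equations should not wildly exceed the Joukowsky bound for the same velocity change.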

  10. Hyperconcentrated flows as influenced by coupled wind-water processes

    Institute of Scientific and Technical Information of China (English)

    XU; Jiongxin

    2005-01-01

Using data from more than 40 rivers in the middle Yellow River basin, a study has been made of the influence of coupled wind-water processes on hyperconcentrated flows. A simple "vehicle" model has been proposed to describe hyperconcentrated flows: the liquid phase of the two-phase flow is a "vehicle" in which coarse sediment particles are carried as the solid phase. The formation and characteristics of hyperconcentrated flows are closely related to the formation and characteristics of these liquid and solid phases. Surface materials and geomorphic agents of the middle Yellow River basin form patterns of combination that deeply influence the formation and characteristics of the liquid and solid phases of hyperconcentrated flows. The combination of a high percentage of relatively coarse material with a low percentage of fine material appears in areas dominated by the wind process, where the supply of relatively coarse sediment is sufficient but the supply of fine sediment is not. The combination of a low percentage of relatively coarse material with a high percentage of fine material appears in areas dominated by the water process, where the supply of fine sediment is sufficient but the supply of relatively coarse sediment is not. In areas dominated by coupled wind-water processes, medium percentages of coarse and fine materials occur in combination, and thus both coarse and fine sediments are in relatively sufficient supply. The manner in which the mean annual sediment concentrations of the liquid and solid phases vary with total suspended sediment concentration differs: with increasing total suspended sediment concentration, the mean annual sediment concentration of the liquid phase increased to a limit and then remained constant, whereas that of the solid phase increased continuously. Thus, the magnitude of the total suspended sediment concentration depends on the supply conditions of the relatively coarse sediment.

  11. Features, Events, and Processes in UZ Flow and Transport

    Energy Technology Data Exchange (ETDEWEB)

    J.E. Houseworth

    2001-04-10

Unsaturated zone (UZ) flow and radionuclide transport is a component of the natural barriers that affects potential repository performance. The total system performance assessment (TSPA) model, and underlying process models, of this natural barrier component capture some, but not all, of the associated features, events, and processes (FEPs) as identified in the FEPs Database (Freeze, et al. 2001 [154365]). This analysis and model report (AMR) discusses all FEPs identified as associated with UZ flow and radionuclide transport. The purpose of this analysis is to give a comprehensive summary of all UZ flow and radionuclide transport FEPs and their treatment in, or exclusion from, TSPA models. The scope of this analysis is to provide a summary of the FEPs associated with UZ flow and radionuclide transport and to provide a reference roadmap to other documentation where detailed discussions of these FEPs, treated explicitly in TSPA models, are offered. Other FEPs may be screened out from treatment in TSPA by direct regulatory exclusion or through arguments concerning low probability and/or low consequence of the FEPs for potential repository performance. Arguments for exclusion of FEPs are presented in this analysis. Exclusion of specific FEPs from the UZ flow and transport models does not necessarily imply that the FEP is excluded from the TSPA. Similarly, in the treatment of included FEPs, only the way in which the FEPs are included in the UZ flow and transport models is discussed in this document. This report has been prepared in accordance with the technical work plan for the unsaturated zone subproduct element (CRWMS M&O 2000 [153447]). The purpose of this report is to document that all FEPs are either included in UZ flow and transport models for TSPA, or can be excluded from UZ flow and transport models for TSPA on the basis of low probability or low consequence.

  12. Installation of a Low Flow Unit at the Abiquiu Hydroelectric Facility

    Energy Technology Data Exchange (ETDEWEB)

    Jack Q. Richardson

    2012-06-28

Final Technical Report for the Recovery Act project for the installation of a low flow unit at the Abiquiu Hydroelectric Facility. The Abiquiu hydroelectric facility previously comprised two 6.9 MW vertical-flow Francis turbine-generators. This project installed a new 3.1 MW horizontal low-flow turbine-generator. The total plant flow range available to capture energy and generate power increased from 250-1,300 cfs to 75-1,550 cfs. Fifty full-time equivalent (FTE) construction jobs were created by this project; 50% (25 FTE) were credited to ARRA funding due to the ARRA 50% project cost match. The Abiquiu facility has increased capacity and efficiency and provides an improved aquatic environment, owing to installed dissolved oxygen capabilities during traditional low flow periods in the Rio Chama. A new powerhouse addition was constructed to house the new turbine-generator equipment.

  13. On-line sample processing methods in flow analysis

    DEFF Research Database (Denmark)

    Miró, Manuel; Hansen, Elo Harald

    2008-01-01

In this chapter, the state of the art of flow injection and related approaches for automation and miniaturization of sample processing, regardless of the aggregate state of the sample medium, is overviewed. The potential of the various generations of flow injection for implementation of in-line dilution, derivatization, separation and preconcentration methods, encompassing solid reactors, solvent extraction, sorbent extraction, precipitation/coprecipitation, hydride/vapor generation and digestion/leaching protocols, as hyphenated to a plethora of detection devices, is discussed in detail.

  14. Structured Process Energy-Exergy-Flow Diagram and Ideality Index for Analysis of Energy Transformation in Chemical Processes (Part 1)

    National Research Council Canada - National Science Library

    Hiroshi OAKI; Masaru ISHIDA; Tsuneo IKAWA

    1981-01-01

      A new diagram called structured process energy-exergy-flow diagram (SPEED) is proposed to systematically analyze the structure of energy flow in chemical processes and to design the process structures effectively...

  15. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on a CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different characteristics of the different GPU memories, an improved scheme is developed that exploits shared memory instead of global memory, further increasing efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
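The per-pixel operation being parallelized is standard Laplacian sharpening: compute the Laplacian at each pixel and subtract it from the original value. A pure-Python CPU reference of that computation (the GPU version maps one thread to each pixel; the 4-neighbour kernel and clamping here are common choices, assumed rather than taken from the paper):

```python
# Laplacian sharpening with the 4-neighbour kernel [[0,1,0],[1,-4,1],[0,1,0]]:
# out = img - laplacian(img), clamped to the 8-bit range.
def sharpen(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]       # border pixels left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            out[y][x] = min(255, max(0, img[y][x] - lap))
    return out

flat = [[100] * 5 for _ in range(5)]                         # uniform region
edge = [[100] * 5 for _ in range(2)] + [[200] * 5 for _ in range(3)]
sharp = sharpen(edge)   # pixels on each side of the edge are pushed apart
```

Because every output pixel depends only on a fixed neighbourhood of input pixels, the loop nest is embarrassingly parallel, which is exactly what makes the CUDA mapping (and the shared-memory tiling of the neighbourhood) effective.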

  16. Numerical simulation for thermal flow filling process of casting

    Institute of Scientific and Technical Information of China (English)

    CHEN Ye; ZHAO Yu-hong; HOU Hua

    2006-01-01

The solution algorithm (SOLA) method was used to solve the velocity and pressure fields of the thermal flow filling process, and the volume of fluid (VOF) method was used for the free surface problem. Since the "donor-acceptor" rule often blurs the free interface, the explicit difference method was adopted, and a method describing the free surface state at 0<F<1 was proposed to deal with this problem. In order to raise the computation efficiency, the following algorithms were investigated and validated: 1) an internal and external area separation simplification algorithm; 2) a method for reducing the necessary search area. With the improved algorithms, the filling processes of valve cover castings with gravity casting and an upper cylinder block casting with low-pressure casting were simulated; the simulation results are believable and the computation efficiency is greatly improved. The SOLA-VOF model and its difference method for the thermal fluid flow filling process were introduced.

  17. Sustaining processes from recurrent flows in body-forced turbulence

    CERN Document Server

    Lucas, Dan

    2016-01-01

By extracting unstable invariant solutions directly from body-forced three-dimensional turbulence, we study the dynamical processes at play when the forcing is large scale and unidirectional in either the momentum or the vorticity equation. In the former case, the dynamical processes familiar from recent work on linearly stable shear flows, variously called the Self-Sustaining Process (Waleffe 1997) or Vortex-Wave Interaction (Hall & Smith 1991; Hall & Sherwin 2010), are important even when the base flow is linearly unstable. In the latter case, where the forcing drives Taylor-Green vortices, a number of mechanisms are observed among the various types of periodic orbits isolated. In particular, two different transient growth mechanisms are discussed to explain the more complex states found.

  18. Simulation of Evaporator for Two-phase Flow in the New Plate-fin Desalination Unit

    Directory of Open Access Journals (Sweden)

    Shu Xu

    2013-04-01

In this study a new desalination unit is established. It has four cells: a cooling cell, a heating cell, an evaporation cell and a condensation cell. Seawater is pumped into the cooling cell to be preheated and then goes to the evaporation cell. In the new desalination unit, the evaporation and condensation cells are heated and cooled by the heating and cooling cells, respectively. The heating of the evaporation cell is provided by hot water flowing upward along the heating cell, and the cooling of the condensation cell is provided by seawater in the cooling cell. Fluent 6.3 is used to numerically simulate the gas-liquid two-phase flow of boiling evaporation. The simulation yields the flow and pressure distributions of the fluid in the new desalination unit and the heat transfer performance of the evaporator.

  19. Geological Factors Affecting Flow Spatial Continuity in Water Injection of Units Operating in the LGITJ–0102 Ore Body

    Directory of Open Access Journals (Sweden)

    Ilver M. Soto-Loaiza

    2016-05-01

The objective of the investigation was to identify the geological factors affecting the spatial continuity of the flow during the process of flank water injection in the units operating in the Lower Lagunilla Hydrocarbon Ore Body. This included the evaluation of the recovery factor and of petrophysical properties such as porosity, permeability, water saturation and rock type and quality in each flow unit. It was observed that the rock type of the geologic structure in the ore body is variable. The lowest values of the petrophysical properties were found in the southern area, while a high variability of these parameters was observed in the northern and central areas. It was concluded that the northern area has great potential for the development of new injection projects for petroleum recovery.

  20. Numerical investigations on dynamic process of muzzle flow

    Institute of Scientific and Technical Information of China (English)

    JIANG Xiao-hai; FAN Bao-chun; LI Hong-zhi

    2008-01-01

The integrated process of a quiescent projectile accelerated by high-pressure gas until it shoots out at supersonic speed, beyond the range of the precursor flow field, was simulated numerically. The calculation was based on the ALE equations and a second-order accurate Roe method that adopted chimera grids and a dynamic mesh. From the predicted results, the coupling and interaction among the precursor flow field, the propellant gas flow field and the high-speed projectile were discussed in detail. The shock-vortex interaction, shock-wave reflection, shock-projectile interaction with shock diffraction, and shock focusing were clearly demonstrated to explain their effect on the acceleration of the projectile.

  1. Material flow-based economic assessment of landfill mining processes.

    Science.gov (United States)

    Kieckhäfer, Karsten; Breitenstein, Anna; Spengler, Thomas S

    2017-02-01

This paper provides an economic assessment of alternative processes for landfill mining compared to landfill aftercare, with the goal of assisting landfill operators in choosing between the two alternatives. A material flow-based assessment approach is developed and applied to a landfill in Germany. In addition to landfill aftercare, six alternative landfill mining processes are considered. These range from simple approaches where most of the material is incinerated or landfilled again to sophisticated technology combinations that allow for recovering highly differentiated products such as metals, plastics, glass, recycling sand, and gravel. For the alternatives, the net present value of all relevant cash flows associated with plant installation and operation, supply, recycling, and disposal of material flows, recovery of land and landfill airspace, as well as landfill closure and aftercare is computed, together with extensive sensitivity analyses. The economic performance of landfill mining processes is found to be significantly influenced by the prices of thermal treatment (waste incineration as well as refuse-derived fuels incineration plant) and recovered land or airspace. The results indicate that the simple process alternatives have the highest economic potential, which contradicts the aim of recovering most of the resources. Copyright © 2016 Elsevier Ltd. All rights reserved.
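The comparison metric here is the net present value (NPV) of each alternative's cash flows. A minimal NPV helper with purely hypothetical numbers (not the study's German case data), illustrating how a mining alternative with a large up-front cost is weighed against steady aftercare costs:

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[t] occurs at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical landfill mining alternative: large plant investment in year 0,
# then yearly net revenue from recovered materials, land and airspace.
mining = [-5_000_000] + [700_000] * 15
# Hypothetical aftercare alternative: no investment, a steady yearly cost.
aftercare = [0] + [-250_000] * 15

r = 0.05  # discount rate (hypothetical)
mining_better = npv(r, mining) > npv(r, aftercare)
```

The sensitivity analysis in the paper amounts to recomputing such NPVs while varying prices (thermal treatment, recovered land or airspace) and observing when the ranking of alternatives flips.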

  2. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2008-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. In F. W. B. Li, J. Zhao, T. K. Shih, R. W. H. Lau, Q. Li & D. McLeod (Eds.), Advances in Web Based Learning - Proceedings of the 7th

  3. Fast calculation of HELAS amplitudes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

We use the graphics processing unit (GPU) for fast calculations of helicity amplitudes of physics processes. As our first attempt, we compute $u\bar{u} \to n\gamma$ ($n=2$ to $8$) processes in $pp$ collisions at $\sqrt{s} = 14$ TeV by transferring the MadGraph-generated HELAS amplitudes (FORTRAN) into newly developed HEGET (HELAS Evaluation with GPU Enhanced Technology) codes written in CUDA, a C-platform developed by NVIDIA for general-purpose computing on the GPU. Compared with the usual CPU programs, we obtain 40-150 times better performance on the GPU.

  4. Analysis on contact and flow features in CMP process

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chaohui; LUO Jianbin; LIU Jinquan; DU Yongping

    2006-01-01

Contact pressure and flow features of the chemical mechanical polishing/planarization (CMP) process were analyzed using a one-dimensional two-layer contact model that accounts for slurry flow. In this model, deformations of the bulk pad substrate and the asperities were considered. The deformations of the bulk pad substrate and the asperity layer, as well as the contact pressure and fluid pressure, were obtained with numerical methods. Numerical simulation results show a counterintuitive phenomenon: a diverging clearance is formed in the leading region of the wafer, giving rise to a suction (subambient) pressure. A high stress concentration is present at the wafer edge, which can introduce overpolishing. The research provides theoretical explanations for these two fundamental features of usual CMP processes.

  5. Preface "Nonlinear processes in oceanic and atmospheric flows"

    Directory of Open Access Journals (Sweden)

    E. García-Ladona

    2010-05-01

Nonlinear phenomena are essential ingredients in many oceanic and atmospheric processes, and successful understanding of them benefits from multidisciplinary collaboration between oceanographers, meteorologists, physicists and mathematicians. The present Special Issue on "Nonlinear Processes in Oceanic and Atmospheric Flows" contains selected contributions from attendants to the workshop which, in the above spirit, was held in Castro Urdiales, Spain, in July 2008. Here we summarize the Special Issue contributions, which include papers on the characterization of ocean transport in the Lagrangian and in the Eulerian frameworks, generation and variability of jets and waves, interactions of fluid flow with plankton dynamics or heavy drops, scaling in meteorological fields, and statistical properties of El Niño Southern Oscillation.

  6. Preface "Nonlinear processes in oceanic and atmospheric flows"

    CERN Document Server

    Mancho, A M; Turiel, A; Hernandez-Garcia, E; Lopez, C; Garcia-Ladona, E; 10.5194/npg-17-283-2010

    2010-01-01

Nonlinear phenomena are essential ingredients in many oceanic and atmospheric processes, and successful understanding of them benefits from multidisciplinary collaboration between oceanographers, meteorologists, physicists and mathematicians. The present Special Issue on "Nonlinear Processes in Oceanic and Atmospheric Flows" contains selected contributions from attendants to the workshop which, in the above spirit, was held in Castro Urdiales, Spain, in July 2008. Here we summarize the Special Issue contributions, which include papers on the characterization of ocean transport in the Lagrangian and in the Eulerian frameworks, generation and variability of jets and waves, interactions of fluid flow with plankton dynamics or heavy drops, scaling in meteorological fields, and statistical properties of El Niño Southern Oscillation.

  7. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text reception, production, and translation processes. In this talk I describe some of the functions and features of the TPR-DB v1.4, and how they can be deployed in empirical human translation process research.

  8. Processes of Turbulent Liquid Flows in Pipelines and Channels

    Directory of Open Access Journals (Sweden)

    R. I. Yesman

    2011-01-01

The paper proposes a methodology for the analysis and calculation of processes pertaining to turbulent liquid flows in pipes and channels. Various modes of liquid motion in pipelines of thermal power devices and equipment are considered in the paper. The presented dependencies can be used in practical calculations of friction losses during transportation of various energy carriers.

  9. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
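The quantitative comparison uses a scaled root-mean-square difference metric. One common form, assumed here since the paper's exact scaling is not given in the abstract, normalizes the RMS difference by the mean of the reference signal:

```python
import math

def scaled_rmsd(estimate, reference):
    """RMS difference between two equal-length profiles, scaled by the
    mean of the reference (assumed normalization convention)."""
    n = len(reference)
    mse = sum((e - r) ** 2 for e, r in zip(estimate, reference)) / n
    return math.sqrt(mse) / (sum(reference) / n)

mc = [10.0, 12.0, 11.0, 9.0]        # stand-in Monte Carlo scatter profile
analytic = [10.5, 11.5, 11.0, 9.5]  # stand-in analytical estimate
d = scaled_rmsd(analytic, mc)       # dimensionless agreement score
```

A value near zero indicates close agreement between the analytical estimate and the Monte Carlo reference; the scaling makes profiles of different magnitude comparable.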

  10. Sulfur Flow Analysis for New Generation Steel Manufacturing Process

    Institute of Scientific and Technical Information of China (English)

    HU Chang-qing; ZHANG Chun-xia; HAN Xiao-wei; YIN Rui-yu

    2008-01-01

Sulfur flow in the new generation steel manufacturing process is analyzed by the method of material flow analysis, and measures for SO2 emission reduction are put forward based on an assessment of the results. The results of the sulfur flow analysis indicate that 90% of the sulfur comes from fuels. Sulfur is discharged from the steel manufacturing route at various steps, chief among them BF and BOF slag desulfurization. In the sintering process, sulfur is removed by gasification, and the sintering process is the main source of SO2 emissions. The sulfur content of coke oven gas (COG) is an important factor affecting SO2 emissions. Therefore, SO2 emission reduction should start from the optimization and integration of the steel manufacturing route; the sulfur burden should be reduced through energy saving and consumption reduction, and the sulfur content of fuel should be controlled. At the same time, BF and BOF slag desulfurization should be further optimized, and desulfurization of coke oven gas and sintering exhaust gas should be adopted for SO2 emission reduction and resource reuse, to achieve harmonious coordination of economic, social, and environmental effects for sustainable development.

  11. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can thus obtain the congestion level among the ONUs to be registered, and this information may be exploited to change the size of the quiet window and decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
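
    The collision mechanics behind such congestion estimation can be illustrated with a toy model (not the paper's CET): each unregistered ONU picks a random response slot within the quiet window, and a serial-number response survives only if no other ONU picks the same slot. A minimal Monte Carlo sketch, assuming idealized discrete slots:

    ```python
    import random
    from collections import Counter

    def registration_success_ratio(n_onus, n_slots, trials=2000, seed=1):
        """Fraction of ONUs whose serial-number response does not collide,
        averaged over random slot choices within the quiet window."""
        rng = random.Random(seed)
        ok = 0
        for _ in range(trials):
            counts = Counter(rng.randrange(n_slots) for _ in range(n_onus))
            ok += sum(1 for c in counts.values() if c == 1)
        return ok / (trials * n_onus)

    # A wider quiet window lowers the collision probability, at the cost of
    # a longer registration pause -- the trade-off the OLT's estimate informs.
    print(registration_success_ratio(16, 64), registration_success_ratio(16, 256))
    ```

    For independent uniform choices the per-ONU success probability is (1 - 1/W)^(n-1), which the simulation tracks closely.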

  12. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  13. Porting a Hall MHD Code to a Graphic Processing Unit

    Science.gov (United States)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd-order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  14. Line-by-line spectroscopic simulations on graphics processing units

    Science.gov (United States)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of processor resources available and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations on fluid dynamics alone. Program summary: Program title: GPU4RE Catalogue identifier: ADZY_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 62 776 No. of bytes in distributed program, including test data, etc.: 1 513 247 Distribution format: tar.gz Programming language: C++ Computer: x86 PC Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. 
It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C
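
    At its core, a line-by-line computation sums one broadened profile per spectral line over a fine wavenumber grid; each grid point is independent, which is exactly what makes the method so data-parallel and GPU-friendly. A minimal CPU sketch using Lorentzian profiles (illustrative only; the line positions and strengths below are made up):

    ```python
    import numpy as np

    def absorption_coefficient(nu_grid, centers, strengths, gamma):
        """Sum of Lorentzian lines:
        k(nu) = sum_i S_i * (gamma/pi) / ((nu - nu_i)^2 + gamma^2)."""
        nu_col = np.asarray(nu_grid)[:, None]               # (n_grid, 1)
        return np.sum(strengths * (gamma / np.pi) /
                      ((nu_col - centers) ** 2 + gamma ** 2), axis=1)

    nu = np.linspace(2000.0, 2100.0, 200001)                # wavenumber grid, cm^-1
    centers = np.array([2030.0, 2055.0, 2070.0])            # hypothetical line positions
    strengths = np.array([1.0, 0.5, 0.2])                   # hypothetical line strengths
    k = absorption_coefficient(nu, centers, strengths, gamma=0.1)
    # The integrated coefficient approaches the summed line strengths.
    print(k.sum() * (nu[1] - nu[0]))
    ```

    A real code would use Voigt profiles and database line lists (e.g. HITRAN-style), but the parallel structure is the same.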

  15. Prediction of hygiene in food processing equipment using flow modelling

    DEFF Research Database (Denmark)

    Friis, Alan; Jensen, Bo Boye Busk

    2002-01-01

    Computational fluid dynamics (CFD) has been applied to investigate the design of closed process equipment with respect to cleanability. The CFD simulations were validated using the standardized cleaning test proposed by the European Hygienic Engineering and Design Group. CFD has been proven...... as a tool which can be used by manufacturers to facilitate their equipment design for high hygienic standards before constructing any prototypes. The study of hydrodynamic cleanability of closed processing equipment was discussed based on modelling the flow in a valve house, an up-stand and various...

  16. Process Improvements to Reform Patient Flow in the Emergency Department.

    Science.gov (United States)

    Whatley, Shawn D; Leung, Alexander K; Duic, Marko

    2016-01-01

    Emergency departments (ED) function to diagnose, stabilize, manage and dispose patients as efficiently as possible. Although problems may be suspected at triage, ED physician input is required at each step of the patient journey through the ED, from diagnosis to disposition. If we want timely diagnosis, appropriate treatment and great outcomes, then ED processes should connect patients and physicians as quickly as possible. This article discusses the key concepts of ED patient flow, value and efficiency. Based on these fundamentals, it describes the significant impact of ED process improvements implemented on measures of ED efficiency at a large community ED in Ontario, Canada.

  17. Analysis of Optimal Process Flow Diagrams of Light Naphtha Isomerization Process by Mathematic Modelling Method

    Directory of Open Access Journals (Sweden)

    Chuzlov Vjacheslav

    2016-01-01

    Full Text Available An approach to the simulation of catalytic reactors for hydrocarbon refining processes is presented. Kinetic and thermodynamic research of the light naphtha isomerization process was conducted. The kinetic parameters of the chemical conversion of the hydrocarbon feedstock on different types of platinum-containing catalysts were established. The efficiency of including different types of isomerization technologies in the oil refinery flow diagram was estimated.

  18. Impact of trucking network flow on preferred biorefinery locations in the southern United States

    Science.gov (United States)

    Timothy M. Young; Lee D. Han; James H. Perdue; Stephanie R. Hargrove; Frank M. Guess; Xia Huang; Chung-Hao Chen

    2017-01-01

    The impact of the trucking transportation network flow was modeled for the southern United States. The study addresses a gap in existing research by applying a Bayesian logistic regression and Geographic Information System (GIS) geospatial analysis to predict biorefinery site locations. A one-way trucking cost assuming a 128.8 km (80-mile) haul distance was estimated...

  19. Detection and quantification of flow consistency in business process models

    DEFF Research Database (Denmark)

    Burattin, Andrea; Bernstein, Vered; Neurauter, Manuel

    2017-01-01

    Business process models abstract complex business processes by representing them as graphical models. Their layout, as determined by the modeler, may have an effect when these models are used. However, this effect is currently not fully understood. In order to systematically study this effect......, a basic set of measurable key visual features is proposed, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold: first, to empirically identify key visual features of business process models which are perceived as meaningful to the user and second......, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics...

  20. Accelerating Radio Astronomy Cross-Correlation with Graphics Processing Units

    CERN Document Server

    Clark, M A; Greenhill, L J

    2011-01-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from "Large-N" arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi architecture, sustaining up to 79% of the peak single precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared to ASIC and FPGA implementations have the potential to greatly shorten the cycle of correlator development and deployment, for case...
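
    Per frequency channel, the X-engine stage reduces to accumulating the outer product of the antenna-voltage vector with its conjugate over time. A NumPy reference of that reduction (a CPU-side sketch of the concept, not the Fermi kernel):

    ```python
    import numpy as np

    def x_engine(voltages):
        """Cross-correlate N antenna inputs over T time samples for one channel.
        voltages: complex array of shape (N, T); returns the (N, N) visibility matrix."""
        return voltages @ voltages.conj().T / voltages.shape[1]

    rng = np.random.default_rng(0)
    v = rng.standard_normal((8, 1024)) + 1j * rng.standard_normal((8, 1024))
    vis = x_engine(v)
    print(vis.shape)   # one visibility per antenna pair, including autocorrelations
    ```

    The visibility matrix is Hermitian with real, positive autocorrelations on the diagonal, so production correlators compute only the upper triangle; the tiling strategy in the abstract is about reusing each loaded sample across many such pair products.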

  1. Significantly reducing registration time in IGRT using graphics processing units

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Denis de Senneville, Baudouin; Tanderup, Kari

    2008-01-01

    Purpose/Objective For online IGRT, rapid image processing is needed. Fast parallel computations using graphics processing units (GPUs) have recently been made more accessible through general purpose programming interfaces. We present a GPU implementation of the Horn and Schunck method...... respiration phases in a free breathing volunteer and 41 anatomical landmark points in each image series. The registration method used is a multi-resolution GPU implementation of the 3D Horn and Schunck algorithm. It is based on the CUDA framework from Nvidia. Results On an Intel Core 2 CPU at 2.4GHz each...... registration took 30 minutes. On an Nvidia Geforce 8800GTX GPU in the same machine this registration took 37 seconds, making the GPU version 48.7 times faster. The nine image series of different respiration phases were registered to the same reference image (full inhale). Accuracy was evaluated on landmark...
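
    The Horn and Schunck method iterates a Jacobi-style update that balances the optical-flow constraint against a smoothness term weighted by alpha^2. A 2-D, single-resolution sketch of the classical algorithm (the abstract's CUDA version is 3-D and multi-resolution):

    ```python
    import numpy as np

    def neighbor_mean(f):
        """Average of the four direct neighbors, with edge replication."""
        p = np.pad(f, 1, mode="edge")
        return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

    def horn_schunck(I1, I2, alpha=0.1, n_iter=200):
        """Estimate flow (u, v) with Ix*u + Iy*v + It ~ 0 plus smoothness."""
        Iy, Ix = np.gradient(I1.astype(float))        # row (y) and column (x) derivatives
        It = I2.astype(float) - I1.astype(float)
        u = np.zeros_like(It)
        v = np.zeros_like(It)
        for _ in range(n_iter):
            ub, vb = neighbor_mean(u), neighbor_mean(v)
            common = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
            u, v = ub - Ix * common, vb - Iy * common
        return u, v

    # Synthetic ramp shifted one pixel to the right: expected u ~ 1, v ~ 0.
    x = np.arange(32, dtype=float)
    I1 = np.tile(x, (32, 1))
    I2 = np.tile(x - 1.0, (32, 1))
    u, v = horn_schunck(I1, I2)
    print(u.mean())
    ```

    Every pixel's update depends only on its neighbors from the previous iteration, which is why the scheme maps so cleanly onto CUDA thread blocks.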

  2. Fast free-form deformation using graphics processing units.

    Science.gov (United States)

    Modat, Marc; Ridgway, Gerard R; Taylor, Zeike A; Lehmann, Manja; Barnes, Josephine; Hawkes, David J; Fox, Nick C; Ourselin, Sébastien

    2010-06-01

    A large number of algorithms have been developed to perform non-rigid registration, a tool commonly used in medical image analysis. The free-form deformation algorithm is a well-established technique, but is extremely time consuming. In this paper we present a parallel-friendly formulation of the algorithm suitable for graphics processing unit execution. Using our approach we perform registration of T1-weighted MR images in less than 1 min and show the same level of accuracy as a classical serial implementation when performing segmentation propagation. This technology could be of significant utility in time-critical applications such as image-guided interventions, or in the processing of large data sets. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
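
    Free-form deformation parameterizes the displacement field with a lattice of control points blended by cubic B-splines; each output voxel reads only its 4 (per axis) surrounding control points, which is what makes the interpolation parallel-friendly. A 1-D sketch of that step (the paper's GPU version works on 3-D lattices; the control values below are hypothetical):

    ```python
    import numpy as np

    def bspline_weights(t):
        """Cubic B-spline basis values for fractional position t in [0, 1)."""
        return np.array([
            (1 - t) ** 3 / 6.0,
            (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
            (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
            t ** 3 / 6.0,
        ])

    def ffd_displacement(x, control, spacing):
        """Displacement at position x from a 1-D control-point grid
        (padded by one extra point at each end)."""
        i, t = divmod(x / spacing, 1.0)
        w = bspline_weights(t)
        idx = int(i)
        return float(w @ control[idx:idx + 4])

    control = np.array([0.0, 1.0, 1.0, 1.0, 1.0, 0.0])   # hypothetical control displacements
    print(ffd_displacement(7.5, control, spacing=5.0))
    ```

    The four basis functions form a partition of unity, so a region of uniform control displacements reproduces that displacement exactly.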

  3. Mantle flow in subduction systems: The mantle wedge flow field and implications for wedge processes

    Science.gov (United States)

    Long, Maureen D.; Wirth, Erin A.

    2013-02-01

    The mantle wedge above subducting slabs is associated with many important processes, including the transport of melt and volatiles. Our understanding of mantle wedge dynamics is incomplete, as the mantle flow field above subducting slabs remains poorly understood. Because seismic anisotropy is a consequence of deformation, measurements of shear wave splitting can constrain the geometry of mantle flow. In order to identify processes that make first-order contributions to the pattern of wedge flow, we have compiled a data set of local S splitting measurements from mantle wedges worldwide. There is a large amount of variability in splitting parameters, with average delay times ranging from ~0.1 to 0.3 s up to ~1.0-1.5 s and large variations in fast directions. We tested for relationships between splitting parameters and a variety of parameters related to subduction processes. We also explicitly tested the predictions made by 10 different models that have been proposed to explain splitting patterns in the mantle wedge. We find that no simple model can explain all of the trends observed in the global data set. Mantle wedge flow is likely controlled by a combination of downdip motion of the slab, trench migration, ambient mantle flow, small-scale convection, proximity to slab edges, and slab morphology, with the relative contributions of these in any given subduction system controlled by the subduction kinematics and mantle rheology. There is also a likely contribution from B-type olivine and/or serpentinite fabric in many subduction zones, governed by the local thermal structure and volatile distribution.

  4. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  5. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
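
    The article's running example, the all-pairs distance computation, parallelizes well because every pair is independent. A vectorized NumPy version (a CPU analogue of the kind of CUDA kernel discussed, using the standard squared-norm expansion):

    ```python
    import numpy as np

    def all_pairs_distance(X):
        """Euclidean distance between every pair of rows in X, shape (n, d) -> (n, n).
        Uses the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b."""
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
        return np.sqrt(np.maximum(d2, 0.0))   # clamp tiny negatives from rounding

    X = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 4.0]])
    print(all_pairs_distance(X))
    ```

    On a GPU the same computation is typically tiled so that each thread block stages a chunk of rows in shared memory, the coalesced-read practice the article highlights.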

  6. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in the ATLAS events up to Phase II instantaneous luminosities (5×10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  7. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2016-07-08

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining an increasing attention by the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  8. Controlling a Linear Process in Turbulent Channel Flow

    Science.gov (United States)

    Lim, Junwoo; Kim, John

    1999-11-01

    Recent studies have shown that controllers developed based on a linear system theory work surprisingly well in reducing the viscous drag in turbulent boundary layers, suggesting that the essential dynamics of near-wall turbulence may well be approximated by the linearized model. Of particular interest is the linear process due to the coupling term between the wall-normal velocity and wall-normal vorticity terms in the linearized Navier-Stokes (N-S) equations, which enhances non-normality of the linearized system. This linear process is investigated through numerical simulations of a turbulent channel flow. It is shown that the linear coupling term plays an important role in fully turbulent -- and hence, nonlinear -- flows. Near-wall turbulence is shown to decay in the absence of the linear coupling term. The fact that the coupling term plays an essential role in maintaining near-wall turbulence suggests that an effective control algorithm for the drag reduction in turbulent flows should be aimed at reducing the effect of the coupling term in the wall region. Designing a control algorithm that directly accounts for the coupling term in a cost to be minimized will be discussed.
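
    For reference, in the standard linearized setting for a parallel base flow U(y), the wall-normal velocity v and wall-normal vorticity η satisfy the Orr-Sommerfeld and Squire equations (a textbook form consistent with the discussion above, not reproduced from the abstract):

    ```latex
    \left(\frac{\partial}{\partial t} + U\frac{\partial}{\partial x}\right)\nabla^{2}v
      - U''\frac{\partial v}{\partial x} - \frac{1}{Re}\nabla^{4}v = 0,
    \qquad
    \left(\frac{\partial}{\partial t} + U\frac{\partial}{\partial x}\right)\eta
      - \frac{1}{Re}\nabla^{2}\eta = -U'\frac{\partial v}{\partial z}.
    ```

    The term -U' ∂v/∂z on the right of the Squire equation is the coupling term discussed above: it forces η from v but not vice versa, making the linearized operator non-normal, and suppressing it removes the dominant near-wall energy-growth mechanism.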

  9. Explicit isospectral flows associated to the AKNS operator on the unit interval. II

    Science.gov (United States)

    Amour, Laurent

    2012-10-01

    Explicit flows associated to any tangent vector fields on any isospectral manifold for the AKNS operator acting in L² × L² on the unit interval are written down. The manifolds are of infinite dimension (and infinite codimension). The flows are called isospectral and also are Hamiltonian flows. It is proven that they may be explicitly expressed in terms of regularized determinants of infinite matrix-valued functions with entries depending only on the spectral data at the starting point of the flow. The tangent vector fields are decomposed as ∑_k ξ_k T_k where ξ ∈ ℓ² and the T_k ∈ L² × L² form a particular basis of the tangent vector spaces of the infinite dimensional manifold. The paper here is a continuation of Amour ["Explicit isospectral flows for the AKNS operator on the unit interval," Inverse Probl. 25, 095008 (2009)], 10.1088/0266-5611/25/9/095008 where, except for a finite number, all the components of the sequence ξ are zero in order to obtain an explicit expression for the isospectral flows. The regularized determinants induce counter-terms allowing for the consideration of finite quantities when the sequences ξ run all over ℓ².

  10. Pore size determination using normalized J-function for different hydraulic flow units

    Directory of Open Access Journals (Sweden)

    Ali Abedini

    2015-06-01

    Full Text Available Pore size determination of hydrocarbon reservoirs is one of the main challenging areas in reservoir studies. Precise estimation of this parameter leads to enhanced reservoir simulation, process evaluation, and further forecasting of reservoir behavior. Hence, it is of great importance to estimate the pore size of reservoir rocks with an appropriate accuracy. In the present study, a modified J-function was developed and applied to determine the pore radius in one of the hydrocarbon reservoir rocks located in the Middle East. The capillary pressure data vs. water saturation (Pc–Sw) as well as routine reservoir core analysis data, including porosity (φ) and permeability (k), were used to develop the J-function. First, the normalized porosity (φz), the rock quality index (RQI), and the flow zone indicator (FZI) concepts were used to categorize all data into discrete hydraulic flow units (HFU) containing unique pore geometry and bedding characteristics. Thereafter, the modified J-function was used to normalize all capillary pressure curves corresponding to each of the predetermined HFU. The results showed that the reservoir rock was classified into five separate rock types with definite HFU and reservoir pore geometry. Eventually, the pore radius for each of these HFUs was determined using a developed equation obtained by the normalized J-function corresponding to each HFU. The proposed equation is a function of reservoir rock characteristics including φz, FZI, the lithology index (J*), and the pore size distribution index (ɛ). Using this methodology, the reservoir under study was classified into five discrete HFU with unique equations for permeability, normalized J-function and pore size. The proposed technique can be applied to any reservoir to determine the pore size of the reservoir rock, especially one with a high range of heterogeneity in the reservoir rock properties.
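
    The HFU classification quantities named above have standard closed forms (with permeability in mD and RQI in μm). A small sketch of computing them, assuming those conventional units:

    ```python
    import numpy as np

    def hfu_indicators(k_md, phi):
        """Rock quality index (um), normalized porosity, and flow zone indicator.
        RQI = 0.0314 * sqrt(k/phi); phi_z = phi/(1 - phi); FZI = RQI/phi_z."""
        rqi = 0.0314 * np.sqrt(k_md / phi)
        phi_z = phi / (1.0 - phi)
        fzi = rqi / phi_z
        return rqi, phi_z, fzi

    # Example core plug: k = 100 mD, phi = 0.20
    rqi, phi_z, fzi = hfu_indicators(np.array([100.0]), np.array([0.2]))
    print(rqi[0], phi_z[0], fzi[0])
    ```

    Samples are then typically grouped into HFUs by clustering log(FZI) values; plugs within one cluster share a pore-geometry trend and hence one permeability-porosity relation.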

  11. Post-processing methods of PIV instantaneous flow fields for unsteady flows in turbomachines

    OpenAIRE

    Cavazzini, G.; A. Dazin; Pavesi, G; Dupont, P; G. Bois

    2012-01-01

    The Particle Image Velocimetry is undoubtedly one of the most important techniques in fluid dynamics, since it allows one to obtain a direct and instantaneous visualization of the flow field in a non-intrusive way. This innovative technique has spread to a wide range of research fields, from aerodynamics to medicine, from biology to turbulence research, and to combustion processes. The book is aimed at presenting the PIV technique and its wide range of possible applications so as to p...

  12. Analysis of stochastic characteristics of the Benue River flow process

    Institute of Scientific and Technical Information of China (English)

    Martins Y.OTACHE; Mohammad BAKIR; LI Zhijia

    2008-01-01

    Stochastic characteristics of the Benue River streamflow process are examined under conditions of data austerity. The streamflow process is investigated for trend, non-stationarity, and seasonality over a time period of 26 years. Results of trend analyses with the Mann-Kendall test show that there is no trend in the annual mean discharges. Monthly flow series examined with the seasonal Kendall test indicate the presence of a positive change in the trend for some months, especially August, January, and February. For the stationarity test, daily and monthly flow series appear to be stationary, whereas at the 1%, 5%, and 10% significance levels, the stationarity alternative hypothesis is rejected for the annual flow series. Though monthly flow appears to be stationary by this test, because of high seasonality it could be said to exhibit periodic stationarity based on the seasonality analysis. The following conclusions are drawn: (1) There is seasonality in both the mean and variance, with unimodal distribution. (2) Days with a high mean also have a high variance. (3) Skewness coefficients for the months within the dry season are greater than those of the wet season, and seasonal autocorrelations for streamflow during the dry season are generally larger than those of the wet season; indeed, they are significantly different for most of the months. (4) The autocorrelation functions estimated "over time" are greater in absolute value for data that have not been deseasonalised but were initially normalised by logarithmic transformation only, while the autocorrelation functions for i=1,2,…,365 estimated "over realisations" have coefficients significantly different from the other coefficients.
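
    The Mann-Kendall test used for the trend analyses is based on the sign statistic S over all sample pairs, normalized by its variance under the no-trend null. A minimal sketch, ignoring the tie correction a full implementation would include:

    ```python
    import math

    def mann_kendall(x):
        """Return (S, Z) for the Mann-Kendall trend test, without tie correction."""
        n = len(x)
        s = sum((x[j] > x[i]) - (x[j] < x[i])
                for i in range(n - 1) for j in range(i + 1, n))
        var = n * (n - 1) * (2 * n + 5) / 18.0
        if s > 0:
            z = (s - 1) / math.sqrt(var)      # continuity correction
        elif s < 0:
            z = (s + 1) / math.sqrt(var)
        else:
            z = 0.0
        return s, z

    # One out-of-order pair in an otherwise rising series still yields a
    # significant upward trend (|Z| > 1.96 at the 5% level).
    print(mann_kendall([1, 2, 3, 5, 4, 6, 7]))
    ```

    The seasonal Kendall variant mentioned in the abstract applies the same statistic within each calendar month and sums the per-season S values before normalizing.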

  13. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time with various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
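
    A representative stage in such a chain is frequency-domain pulse compression (a matched filter), which maps directly onto cuFFT transforms plus an element-wise multiply. A NumPy CPU reference of that one stage (illustrative, not the authors' pipeline; the chirp parameters are made up):

    ```python
    import numpy as np

    def pulse_compress(rx, tx):
        """Frequency-domain matched filter: circular cross-correlation of the
        received signal with the transmit waveform."""
        n = len(rx)
        return np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx, n)))

    n, delay = 1024, 200
    t = np.arange(128)
    tx = np.exp(1j * np.pi * 0.002 * t ** 2)   # hypothetical linear-FM chirp
    rx = np.zeros(n, dtype=complex)
    rx[delay:delay + len(tx)] = tx             # noiseless echo delayed by 200 samples
    y = pulse_compress(rx, tx)
    print(int(np.argmax(np.abs(y))))           # peak index reveals the delay
    ```

    In the GPU chain the FFTs would be batched across range gates and channels, with the verification-against-CPU step mirroring exactly this kind of reference computation.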

  14. Four-dimensional structural and Doppler optical coherence tomography imaging on graphics processing units.

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczynska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2012-10-01

    The authors present the application of graphics processing unit (GPU) programming for real-time three-dimensional (3-D) Fourier domain optical coherence tomography (FdOCT) imaging with implementation of flow visualization algorithms. One of the limitations of FdOCT is data processing time, which is generally longer than data acquisition time. Utilizing additional algorithms, such as Doppler analysis, further increases computation time. The general purpose computing on GPU (GPGPU) has been used successfully for structural OCT imaging, but real-time 3-D imaging of flows has so far not been presented. We have developed software for structural and Doppler OCT processing capable of visualization of two-dimensional (2-D) data (2000 A-scans, 2048 pixels per spectrum) with an image refresh rate higher than 120 Hz. The 3-D imaging of 100×100 A-scans data is performed at a rate of about 9 volumes per second. We describe the software architecture, organization of threads, and optimization. Screen shots recorded during real-time imaging of a flow phantom and the human eye are presented.
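
    The Doppler step in such pipelines typically estimates the phase shift between consecutive A-scans via a Kasai-style autocorrelation, Δφ = arg Σ A(z, j+1)·A*(z, j), with flow velocity proportional to Δφ. A toy NumPy sketch with a known, synthetic phase shift (not the authors' GPU code):

    ```python
    import numpy as np

    def doppler_phase(ascans):
        """Mean phase shift between adjacent A-scans of a complex OCT frame
        of shape (depth, n_ascans), via the lag-1 autocorrelation."""
        corr = np.sum(ascans[:, 1:] * np.conj(ascans[:, :-1]))
        return np.angle(corr)

    depth_profile = np.exp(-np.linspace(0, 3, 256))   # hypothetical reflectivity profile
    true_shift = 0.8                                  # rad per A-scan, set by the flow
    frame = depth_profile[:, None] * np.exp(1j * true_shift * np.arange(64))[None, :]
    print(doppler_phase(frame))
    ```

    Because the estimate is a single reduction over the frame (or a windowed version per pixel), it adds little work on top of the per-A-scan FFTs, which is why the authors can keep Doppler processing inside the real-time budget.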

  15. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to a traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000×1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
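
    The LSA kernel is a truncated SVD of the term-document matrix: keeping the top k singular triplets gives reduced document representations, and by the Eckart-Young theorem the rank-k reconstruction error can only shrink as k grows. A small NumPy sketch (the paper's CUDA/CUBLAS implementation parallelizes the same linear algebra; the counts below are a toy example):

    ```python
    import numpy as np

    def lsa(term_doc, k):
        """Project documents into a k-dimensional latent space via truncated SVD."""
        U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
        return (np.diag(s[:k]) @ Vt[:k]).T     # one k-vector per document

    A = np.array([[2., 0., 1.],    # toy term-document count matrix
                  [1., 0., 1.],    # rows: terms, columns: documents
                  [0., 3., 0.],
                  [0., 2., 1.]])
    docs = lsa(A, k=2)
    print(docs.shape)
    ```

    Document similarity queries then run in the k-dimensional space instead of the full vocabulary dimension, which is where the practical speedup of LSA comes from.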

  16. Continuous Flow in Labour-Intensive Manufacturing Process

    Science.gov (United States)

    Pacheco Eng., Jhonny; Carbajal MSc., Eduardo; Stoll-Ing., Cesar, Dr.

    2017-06-01

    Continuous-flow manufacturing represents the peak of standardized production, and usually means high output in a strict production line. Furthermore, low-tech industry demands high labour intensity; in this context the efficiency of the production line is tied to the job-shop organization. Labour-intensive manufacturing processes are a common characteristic of developing countries. This research aims to propose a methodology for production planning in order to fulfil a variable monthly production quota. The main idea is to use a clock as an orchestra director in order to synchronize the rate (takt time) of customer demand with the manufacturing time. In this way, the study is able to propose a stark reduction of stock in process, over-processing, and unnecessary variability.
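
    The synchronization clock described above runs at the takt time: the available production time divided by the customer demand for the period. A minimal sketch with hypothetical figures:

    ```python
    def takt_time(available_seconds, demand_units):
        """Seconds available per unit if production exactly tracks customer demand."""
        return available_seconds / demand_units

    # E.g. one 8-hour shift minus two 10-minute breaks, demand of 460 units/day:
    shift = 8 * 3600 - 2 * 10 * 60
    print(takt_time(shift, 460))
    ```

    When the monthly quota changes, the takt time is recomputed and the line's pacing clock (and staffing) adjusted to match, which is what keeps work-in-process stock from accumulating.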

  17. Design flow for implementing image processing in FPGAs

    Science.gov (United States)

    Trakalo, M.; Giles, G.

    2007-04-01

    A design flow for implementing a dynamic gamma algorithm in an FPGA is described. Real-time video processing makes enormous demands on processing resources. An FPGA solution offers some advantages over commercial video chip and DSP implementation alternatives. The traditional approach to FPGA development involves a system engineer designing, modeling and verifying an algorithm and writing a specification. A hardware engineer uses the specification as a basis for coding in VHDL and testing the algorithm in the FPGA with supporting electronics. This process is work intensive and the verification of the image processing algorithm executing on the FPGA does not occur until late in the program. The described design process allows the system engineer to design and verify a true VHDL version of the algorithm, executing in an FPGA. This process yields reduced risk and development time. The process is achieved by using Xilinx System Generator in conjunction with Simulink® from The MathWorks. System Generator is a tool that bridges the gap between the high level modeling environment and the digital world of the FPGA. System Generator is used to develop the dynamic gamma algorithm for the contrast enhancement of a candidate display product. The results of this effort are to increase the dynamic range of the displayed video, resulting in a more useful image for the user.
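
    A gamma correction of the kind described is usually realized in the FPGA as a lookup table indexed by input intensity; "dynamic" gamma re-generates the table as scene statistics change. A Python sketch of building such a LUT, assuming a hypothetical 8-bit video pipeline (illustrative of the algorithm class, not the authors' design):

    ```python
    def gamma_lut(gamma, bits=8):
        """Lookup table mapping each input code to its gamma-corrected output:
        out = max_code * (in / max_code) ** gamma."""
        top = (1 << bits) - 1
        return [round(top * (i / top) ** gamma) for i in range(top + 1)]

    lut = gamma_lut(0.5)
    # Mid-grey (128) is lifted toward white, stretching shadow contrast.
    print(lut[0], lut[128], lut[255])
    ```

    In the System Generator flow, exactly this table would be verified against the Simulink model and then loaded into FPGA block RAM, so updating gamma at runtime is just a table rewrite.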

  18. Shock detachment process on cones in hypervelocity flows

    Science.gov (United States)

    Leyva, Ivett A.

    1999-11-01

    The shock detachment process on cones in hypervelocity flows is one of the flows most sensitive to relaxation effects. The critical angle for shock detachment under frozen conditions can be very different from the critical angle under chemical and thermal equilibrium. The rate of increase of the detachment distance with cone angle is also affected by the relaxation rate. The purpose of this study is to explain the effects of nonequilibrium on the shock detachment distance and its growth rate on cones in hypervelocity flows. The study consists of an experimental and a computational program. The experimental part has been carried out at Caltech's hypervelocity reflected shock tunnel (T5). Six different free-stream conditions were chosen, four using N2 as the test gas and two using CO2. About 170 shots were performed on 24 cones. The cones range in diameter from 2 cm to 16 cm, with half-angles varying from 55° to 75°. The experimental data obtained are holographic interferograms of every shot, and surface temperature and pressure measurements for the bigger cones. Extensive numerical simulations were made for the N2 flows and some were also made for the CO2 flows. The code employed is a Navier-Stokes solver that can account for thermal and chemical nonequilibrium in axisymmetric flows. The experimental and computational data obtained for the shock detachment distance confirm a previous theoretical model that predicts that the detachment distance grows more slowly for relaxing flows than for frozen or equilibrium flows. This difference is explained in terms of the behavior of the sonic line inside the shock layer. Different growth rates result depending on whether the detachment distance is controlled by the diameter of the cone (frozen and equilibrium cases) or by the extent of the relaxation zone inside the shock layer (nonequilibrium flows). The experimental data are also complemented with computational data to observe the behavior of the detachment

  19. A laminar flow unit for the care of critically ill newborn infants

    Directory of Open Access Journals (Sweden)

    Perez JM

    2013-10-01

    Full Text Available Jose MR Perez,1 Sergio G Golombek,2 Carlos Fajardo,3 Augusto Sola4 1Stella Maris Hospital, International Neurodevelopment Neonatal Center (CINN), Sao Paulo, Brazil; 2M Fareri Children's Hospital, Westchester Medical Center, New York Medical College, Valhalla, NY, USA; 3University of Calgary, Calgary, Canada; 4St Jude Hospital, Fullerton, CA, USA. Introduction: Medical and nursing care of newborns is predicated on the delicate control and balance of several vital parameters. Closed incubators and open radiant warmers are the most widely used devices for the care of neonates in intensive care; however, several well-known limitations of these devices have not been resolved. Laminar flow is widely used in many fields of medicine and may have applications in neonatal care. Objective: To describe the neonatal laminar flow unit, a new piece of equipment we designed for the care of ill newborns. Methods: The idea, design, and development of this device were completed in Sao Paulo, Brazil. The unit is an open mobile bed designed with the objective of maintaining the advantages of the incubator and radiant warmer while overcoming some of their inherent shortcomings; these shortcomings include noise, magnetic fields and acrylic barriers in incubators, and lack of isolation and water loss through skin in radiant warmers. The unit has a pump that aspirates environmental air, which is warmed by electrical resistance and decontaminated with high-efficiency particulate air (HEPA) filters (laminar flow). The flow is directed by an air flow directioner. The unit has an embedded humidifier to increase humidity in the infant's microenvironment and a servo control mechanism for regulation of skin temperature. Results: The laminar flow unit is open and facilitates access of care providers and family, which is not the case with incubators. It provides warming by convection at an air velocity of 0.45 m/s, much faster than an incubator (0.1 m/s). The system

  20. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    Science.gov (United States)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive, and Beowulf clusters or even supercomputers are often used to run the codes implementing these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the authors' knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code. An accuracy evaluation comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement, depending on the graphics card used, the problem size, and the precision, compared to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support visualization of the simulation results, and thus the whole MHD simulation and visualization process can be performed entirely on GPUs.
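
    The reason finite-volume solvers of this kind map well onto GPUs is that each cell's update depends only on its immediate neighbors, so all cells can be updated simultaneously (one CUDA thread per cell). A minimal stand-in, assuming a 1D upwind advection step rather than the actual GPU-MHD scheme:

```python
# Per-cell update pattern that makes explicit finite-volume solvers
# GPU-friendly: every new cell value reads only old neighbor values,
# so the whole sweep is embarrassingly parallel.
# 1D upwind advection stand-in, not the GPU-MHD scheme itself.

def upwind_step(u, c, dt, dx):
    """One explicit upwind step for u_t + c u_x = 0 (c > 0), periodic domain."""
    n = len(u)
    nu = c * dt / dx            # CFL number, must be <= 1 for stability
    # Each entry of the result is independent of the others.
    return [u[i] - nu * (u[i] - u[(i - 1) % n]) for i in range(n)]

u0 = [0.0] * 8
u0[2] = 1.0                     # a single pulse
u1 = upwind_step(u0, c=1.0, dt=0.5, dx=1.0)   # nu = 0.5
```

On a GPU, the list comprehension becomes a kernel launch with one thread per cell; double-buffering (old array read, new array written) avoids any race conditions.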

  1. Accelerating sparse linear algebra using graphics processing units

    Science.gov (United States)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is on a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
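
    The workhorse of iterative sparse solvers is the sparse matrix-vector product (SpMV). A sketch of it in compressed sparse row (CSR) form shows why it parallelizes: each output row is an independent dot product, the row-per-thread mapping commonly used on GPUs. This is a generic CSR sketch, not the paper's implementation:

```python
# y = A @ x with A stored in compressed sparse row (CSR) form.
# Each output row is independent, so on a GPU each row (or a warp per
# row) can be computed by its own thread.

def csr_spmv(values, col_idx, row_ptr, x):
    y = []
    for row in range(len(row_ptr) - 1):       # parallel across rows
        s = 0.0
        for k in range(row_ptr[row], row_ptr[row + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0])   # [3.0, 3.0, 9.0]
```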

  2. GENETIC ALGORITHM ON GENERAL PURPOSE GRAPHICS PROCESSING UNIT: PARALLELISM REVIEW

    Directory of Open Access Journals (Sweden)

    A.J. Umbarkar

    2013-01-01

    Full Text Available The Genetic Algorithm (GA) is an effective and robust method for solving many optimization problems. However, it may take many runs (iterations) and much time to reach an optimal solution. The execution time needed to find an optimal solution also depends upon the niching technique applied to the evolving population. This paper reviews how various authors, researchers, and scientists have implemented GAs on GPGPUs (general-purpose graphics processing units), with and without parallelism. Many problems have been solved on GPGPUs using GAs. GAs are easy to parallelize because of their SIMD nature and can therefore be implemented well on GPGPUs. Thus, speedup can definitely be achieved if the bottlenecks in GAs are identified and implemented effectively on GPGPUs. The paper reviews various applications solved using GAs on GPGPUs, with future scope in the area of optimization.
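
    A minimal sketch of the parallelism the review discusses: fitness evaluation is an independent map over the population, which becomes one thread per individual on a GPGPU. The OneMax problem and tournament selection below are generic textbook choices, not taken from any reviewed paper:

```python
# Why GAs parallelize well: every individual's fitness is computed
# independently, so the population maps onto GPGPU threads.
# OneMax and tournament selection are generic illustrative choices.

import random

def fitness(individual):
    """OneMax toy problem: maximize the number of 1-bits."""
    return sum(individual)

def evaluate_population(pop):
    # On a GPGPU this map would run as one thread per individual.
    return [fitness(ind) for ind in pop]

def tournament_select(pop, scores, rng, k=2):
    """Pick k random individuals, keep the fittest."""
    picks = rng.sample(range(len(pop)), k)
    return pop[max(picks, key=lambda i: scores[i])]

rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(16)] for _ in range(8)]
scores = evaluate_population(pop)
best = max(scores)
parent = tournament_select(pop, scores, rng)
```

Crossover and mutation are likewise per-individual (or per-gene) operations, which is the "SIMD nature" the abstract refers to.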

  3. Centralization of Intensive Care Units: Process Reengineering in a Hospital

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    2010-03-01

    Full Text Available Centralization of intensive care units (ICUs) is a concept that has been around for several decades, and the OECD countries have led the way in adopting it in their operations. Singapore Hospital was built in 1981, before the concept of centralization of ICUs took off. The hospital's ICUs were never centralized and were spread out across eight different blocks according to the specialization they were associated with. Aware of the concept of centralization and its benefits, the hospital recognizes the importance of having a centralized ICU to better handle major disasters. Using simulation models, this paper studies the feasibility of centralizing the ICUs in Singapore Hospital, subject to space constraints. The results will prove helpful to those who consider reengineering the intensive care process in hospitals.
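
    One classical way to reason about why pooling ICU beds can help, separate from the paper's simulation models, is the Erlang loss formula: it gives the probability that an arriving patient finds all beds occupied. The bed counts and offered loads below are hypothetical, not figures from the Singapore Hospital study:

```python
# Erlang-B sketch of the pooling argument behind ICU centralization.
# All parameters are hypothetical.

def erlang_b(servers, offered_load):
    """Blocking probability for an M/M/c/c loss system (stable recursion)."""
    b = 1.0
    for c in range(1, servers + 1):
        b = (offered_load * b) / (c + offered_load * b)
    return b

# Two separate 4-bed ICUs, each offered 3 Erlangs of load...
separate = erlang_b(4, 3.0)
# ...versus one centralized 8-bed ICU offered the combined 6 Erlangs.
pooled = erlang_b(8, 6.0)
# Pooling lowers the chance a critical patient finds no free bed.
```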

  4. Simulating Lattice Spin Models on Graphics Processing Units

    CERN Document Server

    Levy, Tal; Rabani, Eran; 10.1021/ct100385b

    2012-01-01

    Lattice spin models are useful for studying critical phenomena and allow the extraction of equilibrium and dynamical properties. Simulations of such systems are usually based on Monte Carlo (MC) techniques, and the main difficulty is often the large computational effort needed when approaching critical points. In this work, it is shown how such simulations can be accelerated with the use of NVIDIA graphics processing units (GPUs) using the CUDA programming architecture. We have developed two different algorithms for lattice spin models, the first useful for equilibrium properties near a second-order phase transition point and the second for dynamical slowing down near a glass transition. The algorithms are based on parallel MC techniques, and speedups from 70- to 150-fold over conventional single-threaded computer codes are obtained using consumer-grade hardware.
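
    The standard device for running Metropolis MC on a lattice in parallel, which GPU spin-model codes rely on, is the checkerboard decomposition: sites of one sublattice share no neighbors, so they can all be updated simultaneously. A pure-Python sketch of one such sublattice sweep for the 2D Ising model (the actual CUDA kernels are not reproduced here):

```python
# Checkerboard Metropolis sweep for a 2D Ising model: all sites with
# (i + j) % 2 == parity have neighbors only on the other sublattice,
# so their updates are mutually independent (one GPU thread per site).

import math
import random

def metropolis_sublattice(spins, beta, parity, rng):
    L = len(spins)
    for i in range(L):
        for j in range(L):
            if (i + j) % 2 != parity:
                continue
            # Neighbor sum with periodic boundaries.
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb          # energy cost of a flip
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]

rng = random.Random(1)
L = 8
spins = [[rng.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
for sweep in range(10):
    metropolis_sublattice(spins, beta=0.6, parity=0, rng=rng)
    metropolis_sublattice(spins, beta=0.6, parity=1, rng=rng)
```

The two half-sweeps together visit every site once; on a GPU each half-sweep is a single kernel launch.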

  5. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    CERN Document Server

    Xu, Ji; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool for studying the behavior of macromolecular systems. However, many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications. Many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to MD simulation with the free software GROMACS on a single CPU core, our codes achieve about a 10-fold speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the observed results agree perfectly with computations on the CPU. Therefore, our single-GPU codes already provide an inexpensive alternative for macromolecular simulations on traditional CPU clusters, and they can also be used as a basis to develop parallel GPU programs to further spee...
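
    At the core of most MD codes is the velocity Verlet integrator, whose per-particle position and velocity updates are exactly what a GPU parallelizes. A one-particle harmonic-oscillator sketch, a stand-in for the real macromolecular force field:

```python
# Velocity Verlet integration, the time-stepping loop at the heart of MD.
# On a GPU, each particle's update is one thread; here a single harmonic
# oscillator stands in for the force field.

def velocity_verlet(x, v, force, mass, dt, steps):
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / mass) * dt * dt   # position update
        f_new = force(x)                               # force at new position
        v = v + 0.5 * (f + f_new) / mass * dt          # velocity update
        f = f_new
    return x, v

k = 1.0                                 # spring constant
x, v = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x,
                       mass=1.0, dt=0.01, steps=1000)
energy = 0.5 * v * v + 0.5 * k * x * x  # stays near the initial 0.5
```

Verlet's good long-time energy behavior is why it (rather than, say, Euler) is standard in MD.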

  6. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor of 50 over a more conventional CPU implementation. (fast track communication)

  7. Air pollution modelling using a graphics processing unit with CUDA

    CERN Document Server

    Molnar, Ferenc; Meszaros, Robert; Lagzi, Istvan; 10.1016/j.cpc.2009.09.008

    2010-01-01

    The Graphics Processing Unit (GPU) is a powerful tool for parallel computing. In the past years the performance and capabilities of GPUs have increased, and the Compute Unified Device Architecture (CUDA) - a parallel computing architecture - has been developed by NVIDIA to utilize this performance in general-purpose computations. Here we show for the first time a possible application of the GPU for environmental studies serving as a basis for decision-making strategies. A stochastic Lagrangian particle model has been developed on CUDA to estimate the transport and transformation of radionuclides from a single point source during an accidental release. Our results show that the parallel implementation achieves typical acceleration values in the order of 80-120 times compared to a single-threaded CPU implementation on a 2.33 GHz desktop computer. Only very small differences have been found between the results obtained from GPU and CPU simulations, which are comparable with the effect of stochastic tran...
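
    The stochastic Lagrangian particle idea can be sketched as follows: each particle advects with the mean wind plus an independent random (turbulent) kick, so particles map one-per-thread onto a GPU. All coefficients below are illustrative, not the paper's dispersion parameters:

```python
# Stochastic Lagrangian dispersion sketch: mean advection plus a random
# walk per particle. Each particle is independent -> trivially parallel.
# Wind speed, diffusivity, and step sizes are illustrative only.

import random

def disperse(n_particles, steps, u_wind, sigma, dt, seed=0):
    rng = random.Random(seed)
    xs = [0.0] * n_particles        # all released from a single point source
    for _ in range(steps):
        for i in range(n_particles):        # one GPU thread per particle
            xs[i] += u_wind * dt + rng.gauss(0.0, sigma) * (dt ** 0.5)
    return xs

xs = disperse(n_particles=500, steps=100, u_wind=2.0, sigma=1.0, dt=0.1)
mean_x = sum(xs) / len(xs)          # ~ u_wind * steps * dt = 20
```

The plume's spread grows like sigma * sqrt(t), and concentration fields are recovered by binning particle positions.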

  8. PO*WW*ER mobile treatment unit process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous organics into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst-case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  9. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    2012-01-01

    The high floating-point performance and memory bandwidth of Graphical Processing Units (GPUs) make them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires...... on their applicability for GPUs. We examine published techniques for iterative methods in interior point methods (IPMs) by applying them to simple test cases, such as a system of masses connected by springs. Iterative methods allow us to deal with the ill-conditioning occurring in the later iterations of the IPM as well...... as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards....
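
    A matrix-free conjugate gradient solve on the mass-spring test system mentioned above illustrates why iterative methods suit GPUs: the solver touches the matrix only through matrix-vector products, never forming a dense matrix. This is a generic CG sketch, not the paper's IPM code:

```python
# Matrix-free conjugate gradient on a fixed-end spring chain: the
# tridiagonal stiffness matrix is applied as a function, so only
# vector operations (ideal GPU work) are ever performed.

def spring_chain_matvec(x):
    """Apply the tridiagonal stiffness matrix diag(-1, 2, -1)."""
    n = len(x)
    y = []
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        y.append(2.0 * x[i] - left - right)
    return y

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=200):
    x = [0.0] * len(b)
    r = b[:]                       # residual b - A x (x starts at 0)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

b = [1.0] * 5                      # unit load on every mass
x = conjugate_gradient(spring_chain_matvec, b)
```

Every operation inside the loop (matvec, dot products, axpy updates) is a data-parallel vector kernel, which is exactly the workload GPUs excel at.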

  10. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered, and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20) versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.

  11. Polymer Field-Theory Simulations on Graphics Processing Units

    CERN Document Server

    Delaney, Kris T

    2012-01-01

    We report the first CUDA graphics-processing-unit (GPU) implementation of the polymer field-theoretic simulation framework for determining fully fluctuating expectation values of equilibrium properties for periodic and select aperiodic polymer systems. Our implementation is suitable both for self-consistent field theory (mean-field) solutions of the field equations, and for fully fluctuating simulations using the complex Langevin approach. Running on NVIDIA Tesla T20 series GPUs, we find double-precision speedups of up to 30x compared to single-core serial calculations on a recent reference CPU, while single-precision calculations proceed up to 60x faster than those on the single CPU core. Due to intensive communications overhead, an MPI implementation running on 64 CPU cores remains two times slower than a single GPU.

  12. Graphics Processing Units and High-Dimensional Optimization.

    Science.gov (United States)

    Zhou, Hua; Lange, Kenneth; Suchard, Marc A

    2010-08-01

    This paper discusses the potential of graphics processing units (GPUs) in high-dimensional optimization problems. A single GPU card with hundreds of arithmetic cores can be inserted in a personal computer and dramatically accelerates many statistical algorithms. To exploit these devices fully, optimization algorithms should reduce to multiple parallel tasks, each accessing a limited amount of data. These criteria favor EM and MM algorithms that separate parameters and data. To a lesser extent, block relaxation and coordinate descent and ascent also qualify. We demonstrate the utility of GPUs in nonnegative matrix factorization, PET image reconstruction, and multidimensional scaling. Speedups of 100-fold can easily be attained. Over the next decade, GPUs will fundamentally alter the landscape of computational statistics. It is time for more statisticians to get on-board.
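
    Nonnegative matrix factorization, one of the showcased applications, illustrates the "separated parameters" pattern well: in the classic Lee-Seung multiplicative updates, every entry of W and H is updated independently from shared read-only products. A small pure-Python sketch (the paper's GPU code is not reproduced):

```python
# Lee-Seung multiplicative updates for NMF: V ~ W H with W, H >= 0.
# Each entry update reads only shared precomputed matrices, so all
# entries can be updated in parallel -- the pattern that suits GPUs.

import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, rank, iters=200, seed=0):
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    eps = 1e-12                       # guards against division by zero
    for _ in range(iters):
        WtV = matmul(transpose(W), V)
        WtWH = matmul(transpose(W), matmul(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        VHt = matmul(V, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

# A rank-1 nonnegative matrix should be recovered almost exactly.
V = [[1.0, 2.0], [2.0, 4.0]]
W, H = nmf(V, rank=1)
approx = matmul(W, H)
```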

  13. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity, O(n^2). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking-based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GEFORCE GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.
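
    The dominant cost the paper parallelizes is the O(n^2) pairwise neighbor check. A simplified sketch, with a hypothetical bag-of-words similarity and a toy attraction rule standing in for the actual flocking equations:

```python
# O(n^2) flocking-style clustering sketch: each "document bird" examines
# every other bird and drifts toward nearby birds carrying similar
# documents. Similarity measure and motion rule are simplified stand-ins.

def similarity(a, b):
    """Cosine similarity between two bag-of-words dicts."""
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = (sum(v * v for v in a.values()) ** 0.5
           * sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def attract_step(positions, docs, radius=5.0, threshold=0.5, pull=0.1):
    new = []
    for i, (x, y) in enumerate(positions):        # O(n^2) pair loop:
        dx = dy = 0.0                             # the GPU-parallel part
        for j, (ox, oy) in enumerate(positions):
            if i == j or abs(ox - x) > radius or abs(oy - y) > radius:
                continue
            if similarity(docs[i], docs[j]) > threshold:
                dx += (ox - x) * pull
                dy += (oy - y) * pull
        new.append((x + dx, y + dy))
    return new

docs = [{"gpu": 2, "cuda": 1}, {"gpu": 1, "cuda": 2}, {"stroke": 3}]
positions = [(0.0, 0.0), (1.0, 1.0), (4.0, 4.0)]
positions = attract_step(positions, docs)   # similar docs drift together
```

After repeated steps, similar documents form spatial clusters; on a GPU the outer loop runs as one thread per bird.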

  14. Implementing wide baseline matching algorithms on a graphics processing unit.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

    Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphical processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference of Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.
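
    The difference-of-Gaussians (DoG) computation at the heart of the feature extractor can be sketched in 1D: blur the signal at two scales and subtract, a per-sample operation that maps one thread per pixel on the GPU. Kernel radius and sigmas below are illustrative:

```python
# Difference-of-Gaussians sketch (1D stand-in for the 2D image case):
# two Gaussian blurs at different scales are subtracted. Each output
# sample is independent -> one GPU thread per pixel.

import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]          # normalized to sum to 1

def convolve(signal, kernel):
    r = len(kernel) // 2
    n = len(signal)
    # Clamp-to-edge boundary handling.
    return [sum(kernel[j + r] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-r, r + 1)) for i in range(n)]

def difference_of_gaussians(signal, sigma1=1.0, sigma2=2.0, radius=4):
    g1 = convolve(signal, gaussian_kernel(sigma1, radius))
    g2 = convolve(signal, gaussian_kernel(sigma2, radius))
    return [a - b for a, b in zip(g1, g2)]

step = [0.0] * 10 + [1.0] * 10         # an edge between indices 9 and 10
dog = difference_of_gaussians(step)
# DoG crosses zero at the edge, with extrema flanking it on either side.
peak = max(range(len(dog)), key=lambda i: abs(dog[i]))
```

In the full pipeline this is repeated across an octave of scales, and extrema in the resulting scale space become candidate features.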

  15. Groundwater flow and sorption processes in fractured rocks (I)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Won Young; Woo, Nam Chul; Yum, Byoung Woo; Choi, Young Sub; Chae, Byoung Kon; Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja; Lee, Kil Yong; Lee, Seung Gu; Youn, Youn Yul; Choon, Sang Ki [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)

    1996-12-01

    The objective of this study is to characterize groundwater flow and sorption processes of contaminants (groundwater solutes) along fractured crystalline rocks in Korea. Considering that crystalline rock mass is an essential condition for using underground space, the significance of characterizing fractured crystalline rocks cannot be overemphasized. The behavior of groundwater contaminants is studied in relation to the subsurface structure, and eventually a quantitative technique will be developed to evaluate the impacts of the contaminants on the subsurface environment. The study has been carried out at the Samkwang mine area in the Chung-Nam Province. The site has Pre-Cambrian crystalline gneiss as bedrock, and the groundwater flow system through the bedrock fractures can be understood through study of the subsurface geologic structure via the mining tunnels. Borehole tests included core logging, televiewer logging, constant-pressure fixed-interval-length (FIL) tests and tracer tests. The results are summarized as follows: 1) To determine the hydraulic parameters of the fractured rock, the transient flow analysis produced better results than the steady-state flow analysis. 2) Based on the relationship between fracture distribution and measured transmissivities, the shallow part of the system could be considered a porous and continuous medium due to the well-developed fractures and weathering. However, the deeper part shows flow characteristics of a fracture-dominated system, satisfying the assumptions of the cubic law. 3) Transmissivities from the FIL tests averaged 6.12 x 10^-7 m^2/s. 4) Tracer test results indicate that groundwater flow in the study area is controlled by the connection, extension and geometry of fractures in the bedrock. 5) Hydraulic conductivity of the tracer-test interval was at a maximum of 7.2 x 10^-6 m/sec, with an effective porosity of 1.8%. 6) Composition of the groundwater varies
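
    The cubic law invoked in point 2) can be made concrete: for an idealized smooth parallel-plate fracture, transmissivity scales with the cube of the aperture. The fluid properties below are generic values for water, not measurements from the Samkwang site:

```python
# Cubic-law sketch: transmissivity of a single parallel-plate fracture,
# T = rho * g * b^3 / (12 * mu). Fluid properties are generic water
# values, not site data.

def fracture_transmissivity(aperture_m, rho=998.0, g=9.81, mu=1.0e-3):
    """Transmissivity (m^2/s) of a smooth fracture of aperture b (m)."""
    return rho * g * aperture_m ** 3 / (12.0 * mu)

t_10um = fracture_transmissivity(10e-6)
t_20um = fracture_transmissivity(20e-6)
ratio = t_20um / t_10um   # doubling the aperture gives 8x the transmissivity
```

This cubic sensitivity is why a handful of wide fractures can dominate flow in an otherwise tight rock mass.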

  16. Which factors, processes and storages influence low flow (Q347)?

    Science.gov (United States)

    Margreth, Michael; Scherrer, Simon; Smoorenburg, Maarten; Naef, Felix

    2013-04-01

    In Switzerland, estimation of residual water is based on Q347 (the flow exceeded during 347 days per year). In ungauged catchments, Q347 has to be determined with simplified approaches; however, these statistical models often provide inaccurate results. The runoff reaction of a river depends on the spatial distribution of the Dominant Runoff Processes (DRP) like Hortonian Overland Flow (HOF), Saturated Overland Flow (SOF), Sub-Surface Flow (SSF) or Deep Percolation (DP) within its catchment area. Low flow is fed by slowly reacting groundwater or deep hillslope storages. These storages are thought to be located mainly beneath permeable soils in highly permeable bedrock like talus, deposits of debris flows or rock fall, gravel of river deposits, lateral moraines or karst systems, represented in DRP maps by slowly reacting SOF3, SSF3 or DP areas. To better understand these mechanisms, the relation between areas of slowly reacting SOF3, SSF3 and DP and the form of the recession curves was analysed in 27 catchments of the Swiss Plateau and Jura. Results show that drainage characteristics and the percentage of SOF3, SSF3 and DP areas in catchments relate well. The more extended the recharge areas, the smoother and longer the recession curves. For example, the recession to Q347 in the Eulach River (area of SOF3, SSF3, DP = 54%) takes 95 days, in the Töss River only 10 days (area of SOF3, SSF3, DP = 9%). However, the differences in Q347 cannot be explained with these percentages. The runoff volume from Q347 to Q365 in 14 investigated catchments is only between 0.2 and 14 mm, about 1.5% of the annual precipitation volume. It seems that the storages mentioned above no longer contribute significantly when the discharge falls below Q347. It was found that catchments with high Q347 consist mainly of sandstone, conglomerate or large-scale wetlands. It seems that mainly porous and fissured solid rocks contribute to Q347. Very small Q347 values are usually caused by seepage loss of
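
    The definition of Q347 used above can be sketched directly: sort the daily flows into a flow-duration curve and read off the discharge exceeded on 347 of 365 days. The synthetic recession record below is illustrative only:

```python
# Reading Q347 off a flow-duration curve: Q347 is the daily discharge
# exceeded on 347 of 365 days (~95% exceedance). Synthetic data only.

def q_exceeded(daily_flows, days_exceeded=347):
    """Flow exceeded on at least `days_exceeded` days of a 365-day year."""
    ranked = sorted(daily_flows, reverse=True)      # flow-duration curve
    idx = round(len(ranked) * days_exceeded / 365) - 1
    return ranked[idx]

# A simple synthetic year: exponential recession toward baseflow.
flows = [100.0 * (0.99 ** d) + 1.0 for d in range(365)]
q347 = q_exceeded(flows)     # sits far down the recession, near baseflow
```

A catchment with large slowly-draining storages has a flatter flow-duration curve and hence a higher Q347 than a flashy catchment with the same mean flow.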

  18. Features, Events and Processes in UZ Flow and Transport

    Energy Technology Data Exchange (ETDEWEB)

    P. Persoff

    2005-08-04

    The purpose of this report is to evaluate and document the inclusion or exclusion of the unsaturated zone (UZ) features, events, and processes (FEPs) with respect to modeling that supports the total system performance assessment (TSPA) for license application (LA) for a nuclear waste repository at Yucca Mountain, Nevada. A screening decision, either Included or Excluded, is given for each FEP, along with the technical basis for the screening decision. This information is required by the U.S. Nuclear Regulatory Commission (NRC) in 10 CFR 63.114 (d, e, and f) [DIRS 173273]. The FEPs deal with UZ flow and radionuclide transport, including climate, surface water infiltration, percolation, drift seepage, and thermally coupled processes. This analysis summarizes the implementation of each FEP in TSPA-LA (that is, how the FEP is included) and also provides the technical basis for exclusion from TSPA-LA (that is, why the FEP is excluded). This report supports TSPA-LA.

  19. The process flow and structure of an integrated stroke strategy

    Directory of Open Access Journals (Sweden)

    Emma F. van Bussel

    2013-06-01

    Full Text Available Introduction: In the Canadian province of Alberta, access to and quality of stroke care were suboptimal, especially in remote areas. The government introduced the Alberta Provincial Stroke Strategy (APSS) in 2005, an integrated strategy to improve access to stroke care, quality and efficiency which utilizes telehealth. Research question: What is the process flow and the structure of the care pathways of the APSS? Methodology: Information for this article was obtained using documentation, archival APSS records, interviews with experts, direct observation and participant observation. Results: The process flow is described. The APSS integrated evidence-based practice, multidisciplinary communication, and telestroke services. It includes regular quality evaluation and improvement. Conclusion: Access, efficiency and quality of care improved since the start of the APSS across many domains, through improvement of expertise and equipment in small hospitals, accessible consultation of stroke specialists using telestroke, enhancing preventive care, enhancing multidisciplinary collaboration, and introducing uniform best-practice protocols and bypass protocols for the emergency medical services. Discussion: The APSS overcame substantial obstacles to decrease discrepancies and to deliver integrated, higher-quality care. Telestroke has proven itself to be safe and feasible. The APSS works efficiently, which is in line with other projects worldwide, and is, based on limited results, cost effective. Further research on cost-effectiveness is necessary.

  1. The ATLAS Fast Tracker Processing Units - track finding and fitting

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00384270; The ATLAS collaboration; Alison, John; Ancu, Lucian Stefan; Andreani, Alessandro; Annovi, Alberto; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Bogdan, Mircea Arghir; Bryant, Patrick; Calabro, Domenico; Citraro, Saverio; Crescioli, Francesco; Dell'Orso, Mauro; Donati, Simone; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Greco, Virginia; Horyn, Lesya Anna; Iovene, Alessandro; Kalaitzidis, Panagiotis; Kim, Young-Kee; Kimura, Naoki; Kordas, Kostantinos; Kubota, Takashi; Lanza, Agostino; Liberali, Valentino; Luciano, Pierluigi; Magnin, Betty; Sakellariou, Andreas; Sampsonidis, Dimitrios; Saxon, James; Shojaii, Seyed Ruhollah; Sotiropoulou, Calliope Louisa; Stabile, Alberto; Swiatlowski, Maximilian; Volpi, Guido; Zou, Rui; Shochet, Mel

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the start of High Level Trigger processing. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second is the Second Stage board. The associative memories perform the pattern matching, looking for correlations within the incoming data that are compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  2. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the start of High Level Trigger processing. The Fast Tracker can process incoming data from the whole inner detector at the full first-level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second is the Second Stage board. The associative memories perform the pattern matching, looking for correlations within the incoming data that are compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application-specific integrated circuits, called associative memory chips. The Auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...
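The coarse-resolution pattern matching described in these two records can be sketched in miniature: hit positions are binned into coarse "superstrips", and a track candidate is any event whose per-layer superstrip IDs match a pattern stored in the bank. The bin width, pattern bank, and hit values below are invented for illustration; the real system does this in associative memory chips, not software.

```python
# Toy sketch of the associative-memory pattern-matching idea: detector hits
# are coarsened into superstrip IDs per layer, and an event matches a stored
# pattern when every layer's superstrip agrees.

COARSE = 10  # assumed superstrip width, in arbitrary position units

def to_pattern(hits_per_layer):
    """Map one hit position per layer to a tuple of coarse superstrip IDs."""
    return tuple(int(h // COARSE) for h in hits_per_layer)

# Pattern bank of track candidates (the real bank holds ~1e9 patterns in
# custom AM chips; these two are made up).
pattern_bank = {
    to_pattern([12, 25, 38, 51]): "track A",
    to_pattern([70, 72, 74, 76]): "track B",
}

def match(event_hits):
    """Return the matched track label, or None if no pattern fires."""
    return pattern_bank.get(to_pattern(event_hits))

print(match([14, 28, 33, 55]))   # same superstrips as track A -> "track A"
print(match([5, 5, 5, 5]))       # no stored pattern -> None
```

Matched candidates would then go on to a full-precision fit, mirroring the Auxiliary card's role of rejecting bad candidates.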

  3. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). 
This paper has several goals regarding distributed processing...
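The work-splitting idea behind this kind of distributed processing can be illustrated with a toy example: a grid is partitioned into row chunks, each chunk is handled by a separate worker process, and the partial results are combined. The grid and the per-cell computation are placeholders; a real Beowulf cluster distributes work across machines (e.g. via MPI) rather than local processes.

```python
import multiprocessing as mp

def process_chunk(rows):
    # stand-in for a per-cell numerically intensive calculation
    return sum(x * x for row in rows for x in row)

def run(grid, workers=4):
    """Split the grid into row chunks and process them in parallel."""
    chunks = [grid[i::workers] for i in range(workers)]
    with mp.Pool(workers) as pool:
        return sum(pool.map(process_chunk, chunks))

# made-up raster: 100 x 100 cells of small integer values
grid = [[(r * 100 + c) % 7 for c in range(100)] for r in range(100)]

if __name__ == "__main__":
    print(run(grid))
```

The decomposition is embarrassingly parallel, which is exactly the property that let the USGS experiments scale a fifty-day single-threaded calibration across cluster nodes.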

  4. Density functional theory calculation on many-cores hybrid central processing unit-graphic processing unit architectures.

    Science.gov (United States)

    Genovese, Luigi; Ospici, Matthieu; Deutsch, Thierry; Méhaut, Jean-François; Neelov, Alexey; Goedecker, Stefan

    2009-07-21

    We present the implementation of a full electronic structure calculation code on a hybrid parallel architecture with graphic processing units (GPUs). This implementation is performed on a free software code based on Daubechies wavelets. Such code shows very good performances, systematic convergence properties, and an excellent efficiency on parallel computers. Our GPU-based acceleration fully preserves all these properties. In particular, the code is able to run on many cores which may or may not have a GPU associated, and thus on parallel and massive parallel hybrid machines. With double precision calculations, we may achieve considerable speedup, between a factor of 20 for some operations and a factor of 6 for the whole density functional theory code.

  5. Mass flow-rate control unit to calibrate hot-wire sensors

    Energy Technology Data Exchange (ETDEWEB)

    Durst, F.; Uensal, B. [FMP Technology GmbH, Erlangen (Germany); Haddad, K. [FMP Technology GmbH, Erlangen (Germany); Friedrich-Alexander-Universitaet Erlangen-Nuernberg, LSTM-Erlangen, Institute of Fluid Mechanics, Erlangen (Germany); Al-Salaymeh, A.; Eid, Shadi [University of Jordan, Mechanical Engineering Department, Faculty of Engineering and Technology, Amman (Jordan)

    2008-02-15

    Hot-wire anemometry is a measuring technique that is widely employed in fluid mechanics research to study the velocity fields of gas flows. It is general practice to calibrate hot-wire sensors against velocity. Calibrations are usually carried out under atmospheric pressure conditions, and these suggest that the wire is sensitive to the instantaneous local volume flow rate. It is pointed out, however, that hot wires are sensitive to the instantaneous local mass flow rate and, of course, also to the gas heat conductivity. To calibrate hot wires with respect to mass flow rates per unit area, i.e., with respect to (ρU), requires special calibration test rigs. Such a device is described and its application is summarized within the (ρU) range 0.1-25 kg/m² s. Calibrations are shown to yield the same hot-wire response curves for density variations in the range 1-7 kg/m³. The application of the calibrated wires to measure pulsating mass flows is demonstrated, and suggestions are made for carrying out extensive calibrations to yield the (ρU) wire response as a basis for advanced fluid mechanics research on (ρU) data in density-varying flows. (orig.)
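A typical way to use such (ρU) calibration data is to fit a King's-law-style response, E² = A + B(ρU)ⁿ. The sketch below fits synthetic data spanning the 0.1-25 kg/m² s range mentioned in the abstract; the coefficients, exponent grid, and noise level are assumptions, not values from the paper.

```python
import numpy as np

# Synthetic hot-wire calibration data: squared bridge voltage vs mass flux.
# A_true, B_true, n_true are illustrative, not measured values.
rng = np.random.default_rng(0)
A_true, B_true, n_true = 1.2, 0.8, 0.45
rho_u = np.linspace(0.1, 25.0, 50)               # mass flux rho*U
E2 = A_true + B_true * rho_u**n_true
E2 += rng.normal(0.0, 0.005, rho_u.size)         # small measurement noise

def fit_kings_law(x, y, n_grid=np.linspace(0.3, 0.6, 301)):
    """Grid-search the exponent n; solve A, B by linear least squares."""
    best = None
    for n in n_grid:
        M = np.column_stack([np.ones_like(x), x**n])
        coef, *_ = np.linalg.lstsq(M, y, rcond=None)
        sse = float(np.sum((M @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], n)
    return best[1], best[2], best[3]

A, B, n = fit_kings_law(rho_u, E2)
print(A, B, n)   # recovered calibration constants
```

Once fitted, the curve can be inverted to read an instantaneous (ρU) from a measured voltage, including in pulsating flows.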

  6. Numerical Analysis on Flow and Solute Transmission during Heap Leaching Processes

    Directory of Open Access Journals (Sweden)

    J. Z. Liu

    2015-01-01

    Full Text Available Based on fluid flow and elastic deformation of the rock skeleton during the heap leaching process, a deformation-flow coupling model is developed. For a leaching column of 1 m height, a solution concentration of 1 unit, and a leaching time of 10 days, numerical simulations and indoor experiments are conducted. Numerical results indicate that volumetric strain and solvent concentration decrease with increasing bed depth, while the concentration of dissolved mineral first increases and then decreases beyond a certain position; the peak values of the concentration curves move leftward with time. A comparison between the experimental results and the numerical solutions is given, which shows the two agree in their overall trend.
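The solute-transport part of such a model can be illustrated with a bare 1-D advection-diffusion scheme for a 1 m column held at unit inlet concentration (the geometry and concentration come from the abstract). The velocity, diffusivity, and time step below are assumed values, and the sketch omits the deformation coupling entirely.

```python
import numpy as np

# 1-D explicit upwind advection + central diffusion down a leaching column.
L, nx = 1.0, 101
dx = L / (nx - 1)
v, D = 1e-2, 1e-4          # assumed pore velocity (m/s) and diffusivity (m^2/s)
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # respect advective and diffusive limits

c = np.zeros(nx)
c[0] = 1.0                  # unit solvent concentration held at the inlet
for _ in range(250):
    adv = -v * (c[1:-1] - c[:-2]) / dx                 # upwind advection
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # central diffusion
    c[1:-1] += dt * (adv + dif)
    c[0], c[-1] = 1.0, c[-2]                           # inlet / outflow BCs

# the profile decreases monotonically with depth, as in the simulated trend
print(bool(c[10] > c[50] > c[90]))
```

The stable time step keeps all update coefficients positive, so the depth profile stays monotone, matching the qualitative result that solvent concentration decreases with bed depth.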

  7. Hyporheic flow and transport processes: mechanisms, models, and biogeochemical implications

    Science.gov (United States)

    Boano, Fulvio; Harvey, Judson W.; Marion, Andrea; Packman, Aaron I.; Revelli, Roberto; Ridolfi, Luca; Anders, Wörman

    2014-01-01

    Fifty years of hyporheic zone research have shown the important role played by the hyporheic zone as an interface between groundwater and surface waters. However, it is only in the last two decades that what began as an empirical science has become a mechanistic science devoted to modeling studies of the complex fluid dynamical and biogeochemical mechanisms occurring in the hyporheic zone. These efforts have led to the picture of surface-subsurface water interactions as regulators of the form and function of fluvial ecosystems. Rather than being isolated systems, surface water bodies continuously interact with the subsurface. Exploration of hyporheic zone processes has led to a new appreciation of their wide reaching consequences for water quality and stream ecology. Modern research aims toward a unified approach, in which processes occurring in the hyporheic zone are key elements for the appreciation, management, and restoration of the whole river environment. In this unifying context, this review summarizes results from modeling studies and field observations about flow and transport processes in the hyporheic zone and describes the theories proposed in hydrology and fluid dynamics developed to quantitatively model and predict the hyporheic transport of water, heat, and dissolved and suspended compounds from sediment grain scale up to the watershed scale. The implications of these processes for stream biogeochemistry and ecology are also discussed.

  8. Information Specificity Vulnerability: Comparison of Medication Information Flows in Different Health Care Units

    Science.gov (United States)

    Aarnio, Eeva; Raitoharju, Reetta

    Information on a patient's medication is often vital, especially when the patient's condition is critical. However, the information does not yet move freely between different health care units and organizations. Before putting into practice any system that makes inter-organizational transmission of medication information possible, some prerequisites and characteristics of the information in different user organizations should be defined. There are, for instance, units with different levels of urgency and data/information intensity (e.g. emergency department vs. medical floor). The higher the urgency level, the more vulnerable the medication information flow is to discontinuation situations. As a conceptual framework, a scoring system based on asset specificity in transaction cost theory and previous literature on information flows of different health care units is created to define the vulnerability of the information flows. As there is a national medication database under planning, the scoring system could be used to assess the prerequisites for the medication database in Finland.

  9. SOFTWARE SOLUTIONS FOR MEASURING AND FORECASTING THE CASH GENERATING UNIT FLOWS RELATED TO INTANGIBLE ASSETS

    Directory of Open Access Journals (Sweden)

    Veronica R GROSU

    2016-08-01

    Full Text Available In light of the difficulties encountered in assessing the value of a CGU (Cash Generating Unit) and of the cash flows associated with goodwill or other intangible assets of a company, and after performing the impairment test as provided by IAS 36 - Intangible Assets and the forecasts related to it, the aim of this paper is to identify and suggest software instruments that would assist in the measurement and forecasting of these elements. The employment of the SPSS and NeuroShell programmes in analyzing and forecasting the changes in CGU and CGU flows has helped compare the results and the ensuing error margins, thus giving the business entity the possibility to select the best software option, depending on certain variables identified on a micro- or macroeconomic level that may affect the depreciation or the increases in value of the underlying assets for CGU or CGU flows.

  10. Thermochemical Process Development Unit: Researching Fuels from Biomass, Bioenergy Technologies (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2009-01-01

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a unique facility dedicated to researching thermochemical processes to produce fuels from biomass.

  11. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    National Research Council Canada - National Science Library

    Sungki Kim; Wonil Ko; Sungsig Bang

    2015-01-01

    ...) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method...

  12. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n³) or better with system size n, which may be compared with the O(n⁵) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.
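The core idea of recasting a quantity as a high-dimensional integral evaluated by Monte Carlo, with statistical error falling as 1/√N, can be shown on a trivial one-dimensional integral. This is not the MC-MP2 algorithm itself, just the estimator it relies on; the integrand and sample count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_integral(f, n):
    """Estimate the integral of f over [0, 1] as a sample mean,
    returning the estimate and its 1/sqrt(n) statistical error."""
    x = rng.uniform(0.0, 1.0, n)
    vals = f(x)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

est, err = mc_integral(lambda x: x**2, 1_000_000)
print(est, err)   # estimate near the exact value 1/3
```

Because each sample is independent, throwing more processors (or GPUs) at the problem shrinks the error without any restructuring of the computation, which is why the method scales so cleanly with computer size.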

  13. Engineered Barrier System Degradation, Flow, and Transport Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2000-07-17

    The Engineered Barrier System Degradation, Flow, and Transport Process Model Report (EBS PMR) is one of nine PMRs supporting the Total System Performance Assessment (TSPA) being developed by the Yucca Mountain Project for the Site Recommendation Report (SRR). The EBS PMR summarizes the development and abstraction of models for processes that govern the evolution of conditions within the emplacement drifts of a potential high-level nuclear waste repository at Yucca Mountain, Nye County, Nevada. Details of these individual models are documented in 23 supporting Analysis/Model Reports (AMRs). Nineteen of these AMRs are for process models, and the remaining 4 describe the abstraction of results for application in TSPA. The process models themselves cluster around four major topics: "Water Distribution and Removal Model, Physical and Chemical Environment Model, Radionuclide Transport Model, and Multiscale Thermohydrologic Model". One AMR (Engineered Barrier System-Features, Events, and Processes/Degradation Modes Analysis) summarizes the formal screening analysis used to select the Features, Events, and Processes (FEPs) included in TSPA and those excluded from further consideration. Performance of a potential Yucca Mountain high-level radioactive waste repository depends on both the natural barrier system (NBS) and the engineered barrier system (EBS) and on their interactions. Although the waste packages are generally considered as components of the EBS, the EBS as defined in the EBS PMR includes all engineered components outside the waste packages. The principal function of the EBS is to complement the geologic system in limiting the amount of water contacting nuclear waste. A number of alternatives were considered by the Project for different EBS designs that could provide better performance than the design analyzed for the Viability Assessment. The design concept selected was Enhanced Design Alternative II (EDA II).

  14. Counter-rotating type axial flow pump unit in turbine mode for micro grid system

    Science.gov (United States)

    Kasahara, R.; Takano, G.; Murakami, T.; Kanemoto, T.; Komaki, K.

    2012-11-01

    Traditional pumped storage systems contribute to adjusting the electric power unbalance between day and night, in general. This serial research proposes a hybrid power system combining a wind power unit with a pump-turbine unit, to provide constant output for the grid system, even in suddenly fluctuating or turbulent wind. In the pumping mode, the pump should operate unsteadily at not only the normal but also the partial discharge. The operation may be unstable in the rising portion of the head characteristics at the lower discharge, and/or bring cavitation at the low suction head. To simultaneously overcome both weak points, the authors have proposed a superior pump unit that is composed of counter-rotating impellers and a peculiar motor with double rotational armatures. This paper discusses operation of the above unit in turbine mode. It is concluded from the numerical simulations that this type of unit can also be operated acceptably in turbine mode, because the unit works so as to match the angular momentum change through the front runners/impellers with that through the rear runners/impellers, namely to take axial flow at not only the inlet but also the outlet, without guide vanes.

  15. Optimization of protein electroextraction from microalgae by a flow process.

    Science.gov (United States)

    Coustets, Mathilde; Joubert-Durigneux, Vanessa; Hérault, Josiane; Schoefs, Benoît; Blanckaert, Vincent; Garnier, Jean-Pierre; Teissié, Justin

    2015-06-01

    Classical methods used for large-scale treatments, such as mechanical or chemical extraction, affect the integrity of extracted cytosolic proteins by releasing proteases contained in vacuoles. Our previous flow-process electroextraction experiments on yeasts proved that pulsed electric field technology preserves the integrity of released cytosolic proteins by not affecting vacuole membranes. Furthermore, large cell culture volumes are easily treated by the flow technology. Based on this previous knowledge, we developed a new protocol to electro-extract total cytoplasmic proteins from microalgae (Nannochloropsis salina, Chlorella vulgaris and Haematococcus pluvialis). Given that induction of electropermeabilization is under the control of target cell size, and the mean diameter of N. salina is only 2.5 μm, we used repetitive 2 ms long pulses of alternating polarities with stronger field strengths than previously described for yeasts. The electric treatment was followed by a 24 h incubation period in a salty buffer. The amount of total protein release was observed by a classical Bradford assay. A more accurate evaluation of protein release was obtained by SDS-PAGE. Similar results were obtained with C. vulgaris and H. pluvialis under milder electrical conditions, as expected from their larger size.

  16. Dissipation process of binary mixture gas in thermally relativistic flow

    CERN Document Server

    Yano, Ryosuke

    2016-01-01

    In this paper, we discuss the dissipation process of a binary mixture gas in thermally relativistic flow by focusing on the characteristics of the diffusion flux. As an analytical object, we consider the relativistic rarefied-shock-layer problem around a triangle prism. Numerical results for the diffusion flux are compared with the Navier-Stokes-Fourier (NSF) order approximation of the diffusion flux, which is calculated using the diffusion and thermal-diffusion coefficients by Kox et al. [Physica A, 84, 1, pp. 165-174 (1976)]. In the case of uniform flow with small Lorentz contraction, the diffusion flux obtained by calculating the relativistic Boltzmann equation is roughly approximated by the NSF order approximation inside the shock wave, whereas the diffusion flux in the vicinity of the wall is markedly different from the NSF order approximation. The magnitude of the diffusion flux obtained by calculating the relativistic Boltzmann equation is simil...

  17. Coded Ultrasound for Blood Flow Estimation Using Subband Processing

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Nielsen, Michael Bachmann

    2008-01-01

    ...signals are used to increase SNR, followed by subband processing. The received broadband signal is filtered using a set of narrow-band filters. Estimating the velocity in each of the bands and averaging the results yields better performance compared with what would be possible when transmitting a narrow-band pulse directly. Also, the spatial resolution of the narrow-band pulse would be too poor for brightness-mode (B-mode) imaging, and additional transmissions would be required to update the B-mode image. For the described approach in the paper, there is no need for additional transmissions, because the excitation signal is broadband and has good spatial resolution after pulse compression. This means that time can be saved by using the same data for B-mode imaging and blood flow estimation. Two different coding schemes are used in this paper, Barker codes and Golay codes. The performance of the codes...

  18. Accelerating chemical database searching using graphics processing units.

    Science.gov (United States)

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097.
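The Elias gamma code named in the abstract is a standard prefix-free code for positive integers (n is written as ⌊log₂ n⌋ zeros followed by its binary representation), commonly applied to the gaps between set bits in sparse fingerprints. A minimal reference implementation:

```python
def elias_gamma_encode(n: int) -> str:
    """Elias gamma code for a positive integer, as a bit string."""
    if n < 1:
        raise ValueError("Elias gamma is defined for integers >= 1")
    binary = bin(n)[2:]                      # e.g. 9 -> '1001'
    return "0" * (len(binary) - 1) + binary  # 9 -> '0001001'

def elias_gamma_decode(bits: str) -> list[int]:
    """Decode a concatenated stream of Elias gamma codes."""
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":                # count the zero prefix
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return out

stream = "".join(elias_gamma_encode(n) for n in [1, 2, 9, 100])
print(elias_gamma_decode(stream))   # [1, 2, 9, 100]
```

Small gaps dominate sparse fingerprints, and gamma codes them in few bits, which is how the paper's 0.097 compression ratio becomes plausible; the GPU's role is to decode and compare millions of such streams in parallel.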

  19. Massively Parallel Latent Semantic Analyzes using a Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, Joseph M [ORNL; Cui, Xiaohui [ORNL

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large Term-Document datasets using Singular Value Decomposition. However, with the ever expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms. The performance of this implementation is compared to traditional LSA implementation on CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary appreciably when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
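The LSA core that the abstract offloads to the GPU is a truncated SVD of the term-document matrix, followed by similarity computations in the reduced space. A CPU sketch with an illustrative random matrix and rank:

```python
import numpy as np

# Illustrative term-document matrix (100 terms x 40 documents) and rank.
rng = np.random.default_rng(1)
term_doc = rng.random((100, 40))

# Truncated SVD: keep the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
k = 5
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents as k-dim latent vectors

def cosine(a, b):
    """Cosine similarity between two latent document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(doc_vecs[0], doc_vecs[1]))
```

The SVD is the expensive, dense-linear-algebra step, which is why mapping it onto CUBLAS-style GPU kernels gave the paper its speedups.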

  20. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification of positive, negative, or neutral {P, N, Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and −1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries; this reduces the computing time for the UnifiedMetrics by a factor of 3.
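The Pearson-correlation step described above can be sketched as follows; the two toy lexicons and their polarity scores are invented for illustration:

```python
import numpy as np

# Two made-up sentiment lexicons mapping words to polarity scores in [-1, 1].
lexicon_a = {"good": 0.8, "bad": -0.7, "okay": 0.1, "awful": -0.9}
lexicon_b = {"good": 0.9, "bad": -0.6, "okay": 0.0, "awful": -0.8}

# Align the lexicons on their shared entries before correlating.
shared = sorted(set(lexicon_a) & set(lexicon_b))
a = np.array([lexicon_a[w] for w in shared])
b = np.array([lexicon_b[w] for w in shared])

r = np.corrcoef(a, b)[0, 1]   # Pearson correlation, in [-1, 1]
print(round(r, 3))
```

A correlation near 1 means the two sources agree on polarity and can be merged directly; the GPU parallelism in the paper comes from running this kind of per-entry computation across thousands of cores at once.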

  1. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
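A much-simplified flavor of the alignment task (MUMmerGPU itself builds a suffix tree on the GPU) is exact seed matching of reads against an indexed reference; the k-mer index below and the sequences are illustrative only:

```python
from collections import defaultdict

K = 4  # assumed seed length

def build_index(reference):
    """Map every k-mer in the reference to the positions where it occurs."""
    index = defaultdict(list)
    for i in range(len(reference) - K + 1):
        index[reference[i:i + K]].append(i)
    return index

reference = "ACGTACGTTGCA"     # toy reference sequence
index = build_index(reference)

def seed_hits(read):
    """Positions in the reference where the read's first k-mer matches exactly."""
    return index.get(read[:K], [])

print(seed_hits("ACGTTG"))   # 'ACGT' occurs at positions 0 and 4
print(seed_hits("GGGG"))     # no exact seed -> []
```

Because each read's lookup is independent, millions of reads can be matched in parallel, which is the property MUMmerGPU exploits on the graphics card.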

  2. Kinematic modelling of disc galaxies using graphics processing units

    Science.gov (United States)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
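
    Of the optimizers listed, the nested-grids brute force is the simplest to sketch: search a coarse parameter grid, then recentre a finer grid on the best point. The toy arctan rotation-curve model and the parameters v_max and r_turn below are illustrative assumptions, not GBKFIT's actual model set:

```python
import numpy as np

def rotation_velocity(r, v_max, r_turn):
    # Simple arctan rotation-curve model (an assumption for illustration).
    return v_max * (2 / np.pi) * np.arctan(r / r_turn)

def fit_nested_grids(r, v_obs, bounds, levels=4, pts=11):
    # Brute-force chi-square search, refined over successively smaller grids
    # centred on the previous level's best point.
    (v_lo, v_hi), (t_lo, t_hi) = bounds
    best = None
    for _ in range(levels):
        vs = np.linspace(v_lo, v_hi, pts)
        ts = np.linspace(t_lo, t_hi, pts)
        chi2 = [((v_obs - rotation_velocity(r, v, t)) ** 2).sum()
                for v in vs for t in ts]
        k = int(np.argmin(chi2))
        best = (vs[k // pts], ts[k % pts])
        dv, dt = (v_hi - v_lo) / pts, (t_hi - t_lo) / pts
        v_lo, v_hi = best[0] - dv, best[0] + dv
        t_lo, t_hi = best[1] - dt, best[1] + dt
    return best

r = np.linspace(0.5, 10, 40)                 # radii (arbitrary units)
v_obs = rotation_velocity(r, 200.0, 1.5)     # noiseless synthetic curve
v_fit, t_fit = fit_nested_grids(r, v_obs, [(50, 400), (0.1, 5)])
```

    Each grid level evaluates all parameter combinations independently, which is exactly the structure that maps well onto one-model-per-GPU-thread execution.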

  3. Graphics processing unit-accelerated quantitative trait loci detection.

    Science.gov (United States)

    Chapuis, Guillaume; Filangi, Olivier; Elsen, Jean-Michel; Lavenier, Dominique; Le Roy, Pascale

    2013-09-01

    Mapping quantitative trait loci (QTL) using genetic marker information is a time-consuming analysis that has interested the mapping community in recent decades. The increasing amount of genetic marker data allows one to consider ever more precise QTL analyses while increasing the demand for computation. Part of the difficulty of detecting QTLs resides in finding appropriate critical values or threshold values, above which a QTL effect is considered significant. Different approaches exist to determine these thresholds, using either empirical methods or algebraic approximations. In this article, we present a new implementation of existing software, QTLMap, which takes advantage of the data parallel nature of the problem by offsetting heavy computations to a graphics processing unit (GPU). Developments on the GPU were implemented using Cuda technology. This new implementation performs up to 75 times faster than the previous multicore implementation, while maintaining the same results and level of precision (Double Precision) and computing both QTL values and thresholds. This speedup allows one to perform more complex analyses, such as linkage disequilibrium linkage analyses (LDLA) and multiQTL analyses, in a reasonable time frame.
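
    The empirical route to threshold values can be sketched with a permutation test: shuffle the phenotypes to break any genotype-phenotype association, record the genome-wide maximum statistic for each shuffle, and take an upper quantile. The statistic below is a simple squared-correlation stand-in for QTLMap's likelihood-ratio statistics, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

def assoc_scores(geno, pheno):
    # Squared correlation per marker, a simple stand-in association
    # statistic (QTLMap uses likelihood-ratio based statistics).
    g = (geno - geno.mean(0)) / geno.std(0)
    p = (pheno - pheno.mean()) / pheno.std()
    return (g.T @ p / len(p)) ** 2

def permutation_threshold(geno, pheno, n_perm=200, alpha=0.05):
    # Genome-wide empirical threshold: distribution of the maximum
    # statistic under shuffled (null) phenotypes.
    maxima = [assoc_scores(geno, rng.permutation(pheno)).max()
              for _ in range(n_perm)]
    return float(np.quantile(maxima, 1 - alpha))

n_indiv, n_markers = 100, 50
geno = rng.integers(0, 2, size=(n_indiv, n_markers)).astype(float)
pheno = rng.normal(size=n_indiv) + 2.0 * geno[:, 10]   # marker 10 is a QTL
thr = permutation_threshold(geno, pheno)
scores = assoc_scores(geno, pheno)
significant = np.flatnonzero(scores > thr)             # should contain 10
```

    The permutations are mutually independent, which is why this threshold computation is such a natural target for GPU offloading.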

  4. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  5. Parallelizing the Cellular Potts Model on graphics processing units

    Science.gov (United States)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs ˜80× faster than serial implementations, and ˜5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
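
    The checkerboard idea rests on the fact that lattice sites of one checkerboard colour share no edges, so all of them can be updated concurrently without conflicts. A serial sketch on a drastically simplified Ising-style lattice (not the full CPM Hamiltonian; the lattice size, temperature and sweep count are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def checkerboard_sweep(spins, beta, parity):
    # Metropolis update of every site of one checkerboard colour at once.
    # Equal-colour sites have no common edges, so their flips are mutually
    # independent: the property the GPU exploits for conflict-free updates.
    i, j = np.indices(spins.shape)
    mask = (i + j) % 2 == parity
    nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
          + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
    dE = 2 * spins * nb                       # energy change of each flip
    accept = rng.random(spins.shape) < np.exp(-beta * np.clip(dE, 0, None))
    spins[mask & accept] *= -1
    return spins

def energy(s):
    # Nearest-neighbour Ising energy (periodic boundaries).
    return -(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))).sum()

spins = rng.choice([-1, 1], size=(64, 64))
e_start = energy(spins)
for _ in range(50):
    for parity in (0, 1):
        checkerboard_sweep(spins, beta=1.0, parity=parity)
e_end = energy(spins)    # a low-temperature quench lowers the energy
```

    On a GPU, each site of the active colour maps to one thread; the locks and atomics described above are needed for the CPM-specific bookkeeping (volume and surface counters shared between cells), not for the checkerboard update itself.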

  6. Kinematic Modelling of Disc Galaxies using Graphics Processing Units

    CERN Document Server

    Bekiaris, Georgios; Fluke, Christopher J; Abraham, Roberto

    2015-01-01

    With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the Graphics Processing Unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and Nested Sampling algorithms, but also a naive brute-force approach based on Nested Grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multi-threaded dual CPU configuration. Our method's accuracy, precision and robustness a...

  7. Efficient graphics processing unit-based voxel carving for surveillance

    Science.gov (United States)

    Ober-Gecks, Antje; Zwicker, Marius; Henrich, Dominik

    2016-07-01

    A graphics processing unit (GPU)-based implementation of a space carving method for the reconstruction of the photo hull is presented. In particular, the generalized voxel coloring with item buffer approach is transferred to the GPU. The fast computation on the GPU is realized by an incrementally calculated standard deviation within the likelihood ratio test, which is applied as color consistency criterion. A fast and efficient computation of complete voxel-pixel projections is provided using volume rendering methods. This generates a speedup of the iterative carving procedure while considering all given pixel color information. Different volume rendering methods, such as texture mapping and raycasting, are examined. The termination of the voxel carving procedure is controlled through an anytime concept. The photo hull algorithm is examined for its applicability to real-world surveillance scenarios as an online reconstruction method. For this reason, a GPU-based redesign of a visual hull algorithm is provided that utilizes geometric knowledge about known static occluders of the scene in order to create a conservative and complete visual hull that includes all given objects. This visual hull approximation serves as input for the photo hull algorithm.
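
    The incrementally calculated standard deviation used in the colour-consistency test maps naturally onto Welford's one-pass update, which accumulates mean and variance per voxel without storing all pixel samples. A sketch with hypothetical grey-level samples and a hypothetical threshold (the paper's actual criterion is a likelihood ratio test):

```python
class RunningStats:
    # Welford's one-pass algorithm: mean and variance are updated
    # incrementally as each projected pixel value arrives.
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

def consistent(pixel_values, max_std=12.0):
    # A voxel survives carving if the colours projecting onto it agree
    # closely (max_std is a hypothetical tuning parameter).
    rs = RunningStats()
    for v in pixel_values:
        rs.push(v)
    return rs.variance() ** 0.5 <= max_std

keep = consistent([128, 130, 127, 131])      # low spread: photo-consistent
carve = not consistent([30, 200, 90, 250])   # high spread: carve the voxel
```

    Because the update needs only three accumulators per voxel, it fits the GPU's limited per-thread state and lets the iterative carving procedure stream pixel projections without a second pass.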

  8. Influence of laminar flow on preorientation of coal tar pitch structural units: Raman microspectroscopic study

    Science.gov (United States)

    Urban, O.; Jehlička, J.; Pokorný, J.; Rouzaud, J. N.

    2003-08-01

    In order to estimate the role of laminar flow of viscous, aromatic matter of a carbonaceous precursor on microtextural preorientation in the pregraphitization stage, we performed experiments with coal tar pitch (CTP). The principal hypotheses of preorientation of basic structural units (BSUs) in the case of laminar flow (pressure impregnation of CTP into a porous matrix) and of secondary release of volatiles during carbonization were studied. Glass microplates, a planar porous medium with an average spacing of 5 μm between single microplates, were used as a suitable porous matrix. Samples of CTP were carbonized up to 2500 °C. Optical microscopy reveals large flow domains in the sample of cokes carbonized between glass microplates. Raman microspectroscopy and high-resolution transmission electron microscopy (HRTEM) show that, at the nanometric scale, the samples do not support the proposed hypotheses. With increasing temperature of pyrolysis, the graphitization of CTP impregnated into the porous matrix proceeds to a lower degree of structural ordering in comparison with singly pyrolyzed CTP. This is explained by the release of volatile matter during carbonization in geometrically restricted spaces. More evident structural changes were found in the sample of single coke, where parts of fine-grain mosaics, relicts of the so-called 'QI parts', reveal higher structural organization in comparison with the large and prolonged flow domains, similar to the flow domains of cokes from microplates.

  9. Numerical study of cross flow fan performance in an indoor air conditioning unit

    Science.gov (United States)

    Yet, New Mei; Raghavan, Vijay R.; Chinc, W. M.

    2012-06-01

    The cross flow fan is a unique type of turbomachinery where the air stream flows transversely across the impeller, passing the blades twice. Due to its complex geometry, and highly turbulent and unsteady air-flow, a numerical method is used in this work to conduct the characterization study on the performance of a cross flow fan. A 2D cross-sectional model of a typical indoor air conditioning unit has been chosen for the simulation instead of a three-dimensional (3D) model due to the highly complex geometry of the fan. The simplified 2D model has been validated with experiments where it is found that the RMS error between the simulation and experimental results is less than 7%. The important parameters that affect the cross flow fan performance, i.e. the internal and external blade angles, the blade thickness, and the casing design, are analyzed in this study. The formation of an eccentric vortex is observed within the impeller.

  10. Undergraduate Game Degree Programs in the United Kingdom and United States: A Comparison of the Curriculum Planning Process

    Science.gov (United States)

    McGill, Monica M.

    2010-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…

  12. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge in the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of a remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA) and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental clean up and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  13. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    Science.gov (United States)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  14. Performance study of a heat recovery tower with synthetic (polyurethane) flow channels to operate in a solar desalination unit

    OpenAIRE

    Frederico Pinheiro Rodrigues

    2010-01-01

    Because of the lack of drinkable water in various semi-arid regions and the necessary use of renewable energies, the present work presents a performance study of a heat recovery tower to operate in a solar desalination unit for decentralized water production. The solar desalination unit has two parts: a heating unit and a desalination unit. This work presents the field results with a desalination tower with synthetic (polyurethane) flow channels. The tower operation consists of the heating ...

  15. Qualitative Assessment of Flow and Transport Mechanisms in Bioremediation Processes

    Science.gov (United States)

    Terry, N.; Hou, Z.

    2008-12-01

    Recent studies suggest that time-lapse crosshole geophysical methods may be effective in monitoring subsurface hydrological and biochemical mechanisms. These methods have potential to provide a minimally invasive, cost-effective, high resolution, field relevant means to gain information previously limited to wellbore data. Our study area is located at a DOE Hanford site, an area heavily polluted with toxic chromate. Time-lapse crosshole seismic and radar data sets have been collected in order to monitor spatio-temporal responses to these processes. Before using these data for parameter estimation and monitoring hydrobiogeochemical processes, we need to 1) identify the critical parameters involved in these processes; 2) determine the sensitivity of seismic/radar responses to these parameters; and 3) choose the most appropriate forward modeling approach for forward and inverse modeling. In this study, we treat critical parameters (e.g., hydraulic conductivity, flow rate, and the dispersion coefficients) as random variables, which can be described by their probabilistic density distributions. Then we adopt a stochastic sampling method within the minimum relative entropy (MRE) framework to generate many realistic models based on the well-log data. From here, the geophysical (crosshole seismic and radar) responses are computed using different forward models to study the sensitivity of the responses to those aforementioned parameters, and the performances of the different forward modeling approaches are compared. Finally, geophysical data are used for hydrobiogeochemical parameter estimation through Bayesian inverse modeling. Our study provides guidance on favorable situations in which borehole geophysical data can be effectively used for monitoring subsurface hydrobiogeochemical processes.

  16. The Open Physiology workflow: modeling processes over physiology circuitboards of interoperable tissue units

    Science.gov (United States)

    de Bono, Bernard; Safaei, Soroush; Grenon, Pierre; Nickerson, David P.; Alexander, Samuel; Helvensteijn, Michiel; Kok, Joost N.; Kokash, Natallia; Wu, Alan; Yu, Tommy; Hunter, Peter; Baldock, Richard A.

    2015-01-01

    A key challenge for the physiology modeling community is to enable the searching, objective comparison and, ultimately, re-use of models and associated data that are interoperable in terms of their physiological meaning. In this work, we outline the development of a workflow to modularize the simulation of tissue-level processes in physiology. In particular, we show how, via this approach, we can systematically extract, parcellate and annotate tissue histology data to represent component units of tissue function. These functional units are semantically interoperable, in terms of their physiological meaning. In particular, they are interoperable with respect to [i] each other and with respect to [ii] a circuitboard representation of long-range advective routes of fluid flow over which to model long-range molecular exchange between these units. We exemplify this approach through the combination of models for physiology-based pharmacokinetics and pharmacodynamics to quantitatively depict biological mechanisms across multiple scales. Links to the data, models and software components that constitute this workflow are found at http://open-physiology.org/. PMID:25759670

  17. Feasibility of utilizing bioindicators for testing microbial inactivation in sweetpotato purees processed with a continuous-flow microwave system.

    Science.gov (United States)

    Brinley, T A; Dock, C N; Truong, V-D; Coronel, P; Kumar, P; Simunovic, J; Sandeep, K P; Cartwright, G D; Swartzel, K R; Jaykus, L-A

    2007-06-01

    Continuous-flow microwave heating has potential in aseptic processing of various food products, including purees from sweetpotatoes and other vegetables. Establishing the feasibility of a new processing technology for achieving commercial sterility requires evaluating microbial inactivation. This study aimed to assess the feasibility of using commercially available plastic pouches of bioindicators containing spores of Geobacillus stearothermophilus ATCC 7953 and Bacillus subtilis ATCC 35021 for evaluating the degree of microbial inactivation achieved in vegetable purees processed in a continuous-flow microwave heating unit. Sweetpotato puree seeded with the bioindicators was subjected to 3 levels of processing based on the fastest particles: undertarget process (F(0) approximately 0.65), target process (F(0) approximately 2.8), and overtarget process (F(0) approximately 10.10). After initial experiments, we found it was necessary to engineer a setup with 2 removable tubes connected to the continuous-flow microwave system to facilitate the injection of indicators into the unit without interrupting the puree flow. Using this approach, 60% of the indicators injected into the system could be recovered postprocess. Spore survival after processing, as evaluated by use of growth indicator dyes and standard plating methods, verified inactivation of the spores in sweetpotato puree. The log reduction results for B. subtilis were equivalent to the predesigned degrees of sterilization (F(0)). This study presents the first report suggesting that bioindicators such as the flexible, food-grade plastic pouches can be used for microbial validation of commercial sterilization in aseptic processing of foods using a continuous-flow microwave system.
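
    The F(0) values above are sterilization values: minutes of equivalent lethality at the 121.1 °C reference temperature. A minimal sketch of the standard F0 integral with the conventional z = 10 °C; the time-temperature profile below is hypothetical, not from the study:

```python
def f0_value(times_min, temps_c, t_ref=121.1, z=10.0):
    # Trapezoidal integration of the lethal rate 10**((T - Tref)/z)
    # over a time-temperature profile (times in minutes, temps in deg C).
    f0 = 0.0
    for k in range(1, len(times_min)):
        dt = times_min[k] - times_min[k - 1]
        l0 = 10 ** ((temps_c[k - 1] - t_ref) / z)
        l1 = 10 ** ((temps_c[k] - t_ref) / z)
        f0 += 0.5 * (l0 + l1) * dt
    return f0

# Hypothetical hold at the reference temperature for 2.8 min: every minute
# at 121.1 deg C contributes exactly one minute of F0, matching the
# "target process" level of roughly 2.8 above.
target = f0_value([0.0, 2.8], [121.1, 121.1])   # -> 2.8
```

    In practice the integral is evaluated over the measured profile of the fastest (least-heated) particle, which is why the study defines its process levels that way.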

  18. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Anzt, Hartwig [Karlsruhe Inst. of Technology (KIT) (Germany); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom); Heuveline, Vincent [Karlsruhe Inst. of Technology (KIT) (Germany)

    2011-11-30

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution. Analyzing the results, we observe that even for our most basic asynchronous relaxation scheme, despite its lower convergence rate compared to the Gauss-Seidel relaxation (that we expected), the asynchronous iteration running on GPUs is still able to provide solution approximations of certain accuracy in considerably shorter time than Gauss-Seidel running on CPUs. Hence, it overcompensates for the slower convergence by exploiting the scalability and the good fit of the asynchronous schemes for the highly parallel GPU architectures. Further, enhancing the most basic asynchronous approach with hybrid schemes – using multiple iterations within the ”subdomain” handled by a GPU thread block and Jacobi-like asynchronous updates across the ”boundaries”, subject to tuning various parameters – we manage to not only recover the loss of global convergence but often accelerate convergence of up to two times (compared to the effective but difficult to parallelize Gauss-Seidel type of schemes), while keeping the execution time of a global iteration practically the same. This shows the high potential of the asynchronous methods not only as a stand-alone numerical solver for linear systems of equations fulfilling certain convergence conditions but more importantly as a smoother in multigrid methods. Due to the explosion of parallelism in today's architecture designs, the significance and the need for asynchronous methods, as the ones described in this work, is expected to grow.
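
    The component-wise relaxation being compared can be sketched serially. This Python illustration is not the paper's CUDA implementation: the "asynchronous" variant below only imitates asynchrony by letting in-place updates see each other's latest values in random order, whereas the synchronous (Jacobi) variant updates all components from the old iterate:

```python
import numpy as np

rng = np.random.default_rng(2)

def relax(A, b, x, n_sweeps, asynchronous=False):
    # Component-wise relaxation: x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii.
    n = len(b)
    for _ in range(n_sweeps):
        if asynchronous:
            # In-place updates in random order, each seeing the newest
            # values of the others (a serial stand-in for asynchrony).
            for i in rng.permutation(n):
                x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
        else:
            # Synchronous Jacobi sweep: every component uses the old iterate.
            x = (b - A @ x + np.diag(A) * x) / np.diag(A)
    return x

n = 50
A = np.eye(n) * 4 + rng.random((n, n)) * 0.02   # strictly diagonally dominant
b = rng.random(n)
x_true = np.linalg.solve(A, b)
x_async = relax(A, b, np.zeros(n), 30, asynchronous=True)
```

    Diagonal dominance guarantees convergence for both orderings; the paper's point is that on a GPU the relaxed, lock-free update order costs some convergence rate but wins it back (and more) in raw throughput.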

  19. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; ST Charles, Jesse Lee [ORNL

    2008-01-01

    Abstract: Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, are highly parallel and have found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
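
    The O(n²) kernel of the flocking analogy can be sketched in a few lines. Here only the cohesion rule is kept (each document-boid drifts toward documents similar to it), with a hypothetical two-topic similarity matrix and a linear attraction in place of the full velocity rules; the document similarities and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def cohesion_step(pos, sim, rate=0.02):
    # O(n^2) pairwise pass: every document-boid moves toward documents
    # whose content is similar to its own. Alignment/separation rules and
    # the GPU mapping are omitted in this sketch.
    vel = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if i != j and sim[i, j] > 0.5:
                vel[i] += rate * (pos[j] - pos[i])
    return pos + vel

# Two hypothetical topics: similarity 1 within a topic, 0 across topics.
labels = np.array([0] * 10 + [1] * 10)
sim = (labels[:, None] == labels[None, :]).astype(float)
pos = rng.random((20, 2))            # random starting positions in the plane
for _ in range(100):
    pos = cohesion_step(pos, sim)
# Each topic has now contracted into its own tight cluster.
```

    The doubly nested loop is exactly the O(n²) cost the paper targets: on the GPU each boid's inner loop runs in its own thread, which is why the pairwise pass parallelizes so well.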

  20. Eigenanalysis of a neural network for optic flow processing

    Science.gov (United States)

    Weber, F.; Eichner, H.; Cuntz, H.; Borst, A.

    2008-01-01

    Flies gain information about self-motion during free flight by processing images of the environment moving across their retina. The visual course control center in the brain of the blowfly contains, among others, a population of ten neurons, the so-called vertical system (VS) cells that are mainly sensitive to downward motion. VS cells are assumed to encode information about rotational optic flow induced by self-motion (Krapp and Hengstenberg 1996 Nature 384 463-6). Recent evidence supports a connectivity scheme between the VS cells where neurons with neighboring receptive fields are connected to each other by electrical synapses at the axonal terminals, whereas the boundary neurons in the network are reciprocally coupled via inhibitory synapses (Haag and Borst 2004 Nat. Neurosci. 7 628-34; Farrow et al 2005 J. Neurosci. 25 3985-93; Cuntz et al 2007 Proc. Natl Acad. Sci. USA). Here, we investigate the functional properties of the VS network and its connectivity scheme by reducing a biophysically realistic network to a simplified model, where each cell is represented by a dendritic and axonal compartment only. Eigenanalysis of this model reveals that the whole population of VS cells projects the synaptic input provided from local motion detectors on to its behaviorally relevant components. The two major eigenvectors consist of a horizontal and a slanted line representing the distribution of vertical motion components across the fly's azimuth. They are, thus, ideally suited for reliably encoding translational and rotational whole-field optic flow induced by respective flight maneuvers. The dimensionality reduction compensates for the contrast and texture dependence of the local motion detectors of the correlation-type, which becomes particularly pronounced when confronted with natural images and their highly inhomogeneous contrast distribution.
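
    The dimensionality-reduction argument can be illustrated with a toy eigenanalysis: build noisy response vectors from a horizontal line and a slanted line across ten cells (stand-ins for the translational and rotational eigenvectors reported above) and check that two eigenvalues carry nearly all the variance. All numbers are hypothetical, not fly data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical stand-in for VS-cell population responses: each sample is a
# random mix of a horizontal line (translation-like) and a slanted line
# (rotation-like) across 10 cells, plus local-detector noise.
azimuth = np.linspace(-1, 1, 10)
translation = np.ones(10)           # horizontal line across azimuth
rotation = azimuth                  # slanted line across azimuth
samples = (rng.normal(size=(500, 1)) * translation
           + rng.normal(size=(500, 1)) * rotation
           + 0.1 * rng.normal(size=(500, 10)))

cov = np.cov(samples, rowvar=False)
evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
explained = evals[::-1] / evals.sum()       # variance fractions, descending
# Nearly all variance lies in the 2-D subspace spanned by the horizontal
# and slanted lines, mirroring the two major eigenvectors in the text.
```

    The projection onto this low-dimensional subspace is what averages away the inhomogeneous, texture-dependent noise of the individual correlation-type motion detectors.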

  1. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    Science.gov (United States)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of the GPU technology is susceptible to reduce significantly the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). This implementation is based on a second-order centred-difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size
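
    A minimal sketch of the staggered-grid idea, reduced to the 1-D elastic velocity-stress system with 2nd-order differences (the viscoelastic memory variables and the MPI/OpenCL layers of the actual code are omitted; grid size, material parameters and the initial pulse are hypothetical):

```python
import numpy as np

def staggered_step(v, s, rho, mu, dt, dx):
    # One leapfrog update of the 1-D velocity-stress system
    #   rho * dv/dt = ds/dx,   ds/dt = mu * dv/dx
    # with 2nd-order centred differences on a staggered grid: velocities
    # sit half a cell from stresses, so each difference is centred.
    v[1:] += dt / rho * (s[1:] - s[:-1]) / dx
    s[:-1] += dt * mu * (v[1:] - v[:-1]) / dx
    return v, s

n, dx = 400, 1.0
rho, mu = 1.0, 1.0                  # wave speed sqrt(mu/rho) = 1
dt = 0.5 * dx                       # CFL-stable time step for this speed
v = np.zeros(n)
s = np.exp(-0.01 * (np.arange(n) - 200.0) ** 2)   # Gaussian stress pulse
for _ in range(100):
    v, s = staggered_step(v, s, rho, mu, dt, dx)
# The pulse has split into two half-amplitude waves travelling outward.
```

    Each grid point's update reads only its immediate neighbours, which is why this stencil maps so directly onto one-thread-per-point GPU kernels, with the PCI-E transfers noted above as the main overhead.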

  2. Coded ultrasound for blood flow estimation using subband processing

    DEFF Research Database (Denmark)

    Gran, Fredrik; Udesen, Jesper; Nielsen, Michael Bachmann

    2007-01-01

    This paper further investigates the use of coded excitation for blood flow estimation in medical ultrasound. Traditional autocorrelation estimators use narrow-band excitation signals to provide sufficient signal-to-noise-ratio (SNR) and velocity estimation performance. In this paper, broadband...... was carried out using an experimental ultrasound scanner and a commercial linear array 7 MHz transducer. A circulating flow rig was scanned with a beam-to-flow angle of 60 degrees. The flow in the rig was laminar and had a parabolic flow-profile with a peak velocity of 0.09 m/s. The mean relative standard...
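
    The narrow-band autocorrelation estimator that the coded approach is compared against can be sketched as follows. This Kasai-style lag-one estimator assumes a 0° beam-to-flow angle and hypothetical acquisition parameters for simplicity (the experiment above used a 60° angle, which scales the axial velocity by its cosine):

```python
import numpy as np

def kasai_velocity(iq, f_prf, f0=7e6, c=1540.0):
    # Axial velocity from the phase of the lag-one autocorrelation of
    # complex (IQ) samples acquired at the pulse repetition frequency.
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    return c * f_prf * np.angle(r1) / (4 * np.pi * f0)

# Synthetic echo from a scatterer moving at 0.09 m/s along the beam axis
# (hypothetical f_prf and sample count; noise and angle effects ignored).
f_prf, f0, c = 5000.0, 7e6, 1540.0
v_true = 0.09
f_d = 2 * v_true * f0 / c                  # Doppler shift, roughly 818 Hz
n = np.arange(64)
iq = np.exp(2j * np.pi * f_d * n / f_prf)  # slow-time IQ signal
v_est = kasai_velocity(iq, f_prf)          # recovers about 0.09 m/s
```

    Because the estimate rides on the phase of a single complex lag, its variance depends directly on SNR, which is the motivation for the broadband coded excitation investigated in the paper.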

  3. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements. Catalogue identifier: AFBT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPLv3 No. of lines in distributed program, including test data, etc.: 913552 No. of bytes in distributed program, including test data, etc.: 270876249 Distribution format: tar.gz Programming language: CUDA/C, MATLAB. Computer: Intel x64 CPU, GPU supporting CUDA technology. Operating system: 64-bit Windows 7 Professional. Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized. RAM: Dependent on user's parameters, typically between several gigabytes and several tens of gigabytes Classification: 6.5, 18. Nature of problem: Speed-up of data processing in optical coherence microscopy Solution method: Utilization of GPU for massively parallel data processing Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data) Running time: 1.8 s for one B-scan (150× faster in comparison to the CPU implementation)

  4. Continuous 'Passive' flow-proportional monitoring of drainage using a new modified Sutro weir (MSW) unit.

    Science.gov (United States)

    Vendelboe, Anders Lindblad; Rozemeijer, Joachim; de Jonge, Lis Wollesen; de Jonge, Hubert

    2016-03-01

    In view of their crucial role in water and solute transport, enhanced monitoring of agricultural subsurface drain tile systems is important for adequate water quality management. However, existing techniques for monitoring flow and contaminant loads from tile drains are expensive and labour intensive. The aim of this study was to develop a cost-effective and simple method for monitoring loads from tile drains. The Flowcap is a modified Sutro weir (MSW) unit that can be attached to the outlet of tile drains. It is capable of registering total flow, contaminant loads, and flow-averaged concentrations. The MSW builds on a modern passive sampling technique that responds to hydraulic pressure and measures average concentrations over time (days to months) for various substances. Mounting the samplers in the MSW allowed a flow-proportional part of the drainage to be sampled. Laboratory testing yielded a strong linear correlation between the accumulated sampler flow, q_total, and the accumulated drainage flow, Q_total (r² > 0.96). The slope of this correlation was used to calculate the total drainage discharge, and therefore the contaminant load, from the sampled volume. A calibration of the MSW under controlled laboratory conditions was needed before the monitoring results could be interpreted. The MSW does not require a shed, electricity, or maintenance. This enables large-scale monitoring of contaminant loads via tile drains, which can improve contaminant transport models and yield valuable information for the selection and evaluation of mitigation options to improve water quality. Results from this type of monitoring can provide data for the evaluation and optimisation of best management practices in agriculture, so as to produce the highest yield without compromising water quality and recipient surface waters.
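    The calibration step described above amounts to fitting the slope of q_total against Q_total and inverting it in the field; a minimal sketch with made-up numbers, not the study's data:

```python
import numpy as np

def calibrate_msw(q_sampler, Q_drain):
    """Fit q_total ≈ slope * Q_total; in the field, total drainage is
    then recovered as q_measured / slope. Names are illustrative."""
    slope, _intercept = np.polyfit(Q_drain, q_sampler, 1)
    r2 = np.corrcoef(Q_drain, q_sampler)[0, 1] ** 2
    return slope, r2
```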

  5. Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET (HELAS Evaluation with GPU Enhanced Technology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph-generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general-purpose computing on GPUs) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final-state jets we can evaluate on a GPU is limited to 4 for pure gluon processes (gg → 4g), or 5 for processes with one or more quark lines such as qq̄ → 5g and qq → qq + 3g. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the gg → 4g processes, for which the GPU gain over the CPU is about 20.

  6. ASAMgpu V1.0 - a moist fully compressible atmospheric model using graphics processing units (GPUs)

    Science.gov (United States)

    Horn, S.

    2012-03-01

    In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence, OpenGL and GLSL are used, so the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the use of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. Results for four test cases are shown in this paper: a rising dry heat bubble, a cold-bubble-induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.
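    An explicit three-step Runge-Kutta step of the kind used in such models can be sketched as follows; the abstract does not give the coefficients, so the Wicker-Skamarock form common in compressible atmospheric models is assumed:

```python
def rk3_step(f, y, dt):
    """One explicit three-step Runge-Kutta step (Wicker-Skamarock
    coefficients 1/3, 1/2, 1; an assumption, not taken from the paper)."""
    y1 = y + dt / 3.0 * f(y)
    y2 = y + dt / 2.0 * f(y1)
    return y + dt * f(y2)

# usage: integrate dy/dt = -y from y(0) = 1 to t = 1
y, dt = 1.0, 0.01
for _ in range(100):
    y = rk3_step(lambda u: -u, y, dt)
# y now approximates exp(-1)
```

    In the full model, the acoustic terms are sub-stepped inside each of the three stages (the time-splitting mentioned above).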

  7. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Meier, David E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edwards, Matthew K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Orton, Robert D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rapko, Brian M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smart, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  8. Attributes for NHDPlus Catchments (Version 1.1) for the Conterminous United States: Base-Flow Index

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This tabular data set represents the mean base-flow index expressed as a percent, compiled for every catchment in NHDPlus for the conterminous United States. Base...

  9. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    He, Qingyun; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn; Feng, Jingchao

    2015-12-15

    Highlights: • A 3D PISO MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, validated by three basic benchmarks in rectangular and round ducts. • Parallel CPU and GPU acceleration were compared against a single-core CPU for MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared, and the results showed that the AMG method is better for these calculations. - Abstract: A pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. The solver is developed on the open-source code OpenFOAM and is based on a consistent and conservative scheme suitable for simulating MHD flow under a strong magnetic field in a fusion liquid-metal blanket with structured or unstructured meshes. We verified the validity of the implementation on several standard cases, including benchmark I (the Shercliff and Hunt cases), benchmark II (fully developed circular-pipe MHD flow), and benchmark III (the KIT experimental case). Computational performance of the GPU implementation was examined by comparing its double-precision run times with those of essentially the same algorithms and meshes on the CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770k) by a factor of at least 2.

  10. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate corresponds to local averaging (in order to describe instability phenomena or secondary circulation), and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
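    The difference between the two averages is easy to see numerically; the particle counts and velocities below are illustrative, not data from the study:

```python
import numpy as np

# number of particles n and mean velocity u in three realizations
n = np.array([10.0, 20.0, 30.0])
u = np.array([1.0, 2.0, 3.0])

ensemble_avg = u.mean()              # plain ensemble average: 2.0
favre_avg = (n * u).sum() / n.sum()  # mass-weighted average: 140/60 ≈ 2.33

# with constant n the two coincide; with varying n they differ
```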

  11. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. 
The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU
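    As a sequential reference point for what the GPU version parallelises, a minimal single-flow-direction (D8) accumulation over a depressionless DEM can be written as below; diagonal distance weighting and the MFD flow partitioning of the paper are omitted:

```python
import numpy as np

def flow_accumulation_sfd(dem):
    """D8 single-flow-direction accumulation: visiting cells from high
    to low elevation, each cell passes its accumulated area to its
    steepest downslope neighbour. Assumes depressions are already removed."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)        # each cell contributes itself
    order = np.argsort(dem, axis=None)[::-1]    # flat indices, high to low
    for idx in order:
        r, c = divmod(int(idx), cols)
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = dem[r, c] - dem[rr, cc]
                    if drop > best_drop:
                        best_drop, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc
```

    An MFD variant instead splits acc[r, c] among all downslope neighbours in proportion to slope, which is what makes its GPU parallelisation harder.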

  12. Transport phenomena of reactive fluid flow in heterogeneous combustion processes.

    Science.gov (United States)

    Hung, W. S. Y.; Chen, C. S.; Haviland, J. K.

    1972-01-01

    A previously developed computer program was used to model two transient hybrid combustion processes involving tubes of solid Plexiglas. In the first study, representing combustion of a hybrid rocket, the oxidizing gas was oxygen, and calculations were continued sufficiently long to obtain steady-state values. Systematic variations were made in the reaction rate constant, mass flow rate, and pressure, alternatively using constant and temperature-dependent regression rate models for the fuel surface. Consistent results were obtained, as evidenced by the values for the mass fraction of the reaction product and the flame temperature, for which plots are supplied. In the second study, fire initiation in a duct was studied, with an air mixture as the oxidizing gas. It was demonstrated that a satisfactory flame spread mechanism could be reproduced on the computer. In both of the above applications, the general, transient, two-dimensional conservation equations were represented, together with chemical reactions, solid-fuel interface conditions, and heat conduction in the solid fuel.

  13. Investigation of Multiscale and Multiphase Flow, Transport and Reaction in Heavy Oil Recovery Processes

    Energy Technology Data Exchange (ETDEWEB)

    Yorstos, Yannis C.

    2003-03-19

    The report describes progress made in the various thrust areas of the project, which include internal drives for oil recovery, vapor-liquid flows, combustion and reaction processes and the flow of fluids with yield stress.

  14. A Comparative Evaluation of Cash Flow and Batch Profit Hedging Effectiveness in Commodity Processing

    OpenAIRE

    Dahlgran, Roger A.

    2006-01-01

    Agribusinesses make long-term plant-investment decisions based on discounted cash flow. It is therefore incongruous for an agribusiness firm to use cash flow as a plant-investment criterion and then to completely discard cash flow in favor of batch profits as an operating objective. This paper assumes that cash flow and its stability are important to commodity processors and examines methods for hedging cash flows under continuous processing. Its objectives are (a) to determine how standard h...

  15. The ideal oxygen/nitrous oxide fresh gas flow sequence with the Anesthesia Delivery Unit machine.

    Science.gov (United States)

    Hendrickx, Jan F A; Cardinael, Sara; Carette, Rik; Lemmens, Hendrikus J M; De Wolf, Andre M

    2007-06-01

    To determine whether early reduction of oxygen and nitrous oxide fresh gas flow from 6 L/min to 0.7 L/min could be accomplished while maintaining an end-expired nitrous oxide concentration ≥50% with an Anesthesia Delivery Unit anesthesia machine. Prospective, randomized clinical study. Large teaching hospital in Belgium. 53 ASA physical status I and II patients requiring general endotracheal anesthesia and controlled mechanical ventilation. Patients were randomly assigned to one of 4 groups depending on the duration of high oxygen/nitrous oxide fresh gas flow (2 and 4 L/min, respectively) before lowering total fresh gas flow to 0.7 L/min (0.3 and 0.4 L/min oxygen and nitrous oxide, respectively): 1, 2, 3, or 5 minutes (1-minute group, 2-minute group, 3-minute group, and 5-minute group), with n = 10, 12, 13, and 8, respectively. The course of the end-expired nitrous oxide concentration and the bellows volume deficit at end-expiration were compared among the 4 groups during the first 30 minutes. At the end of the high-flow period the end-expired nitrous oxide concentration was 35.6 +/- 6.2%, 48.4 +/- 4.8%, 53.7 +/- 8.7%, and 57.3 +/- 1.6% in the 4 groups, respectively. Thereafter, the end-expired nitrous oxide concentration decreased to a nadir of 36.1 +/- 4.5%, 45.4 +/- 3.8%, 50.9 +/- 6.1%, and 55.4 +/- 2.8% at 3, 4, 6, and 8 minutes after flows were lowered in the 1- to 5-minute groups, respectively. A decrease in bellows volume was observed in most patients, but was most pronounced in the 2-minute group. The bellows volume deficit gradually faded within 15 to 20 minutes in all 4 groups. A 3-minute high-flow period (oxygen and nitrous oxide fresh gas flow of 2 and 4 L/min, respectively) suffices to attain and maintain an end-expired nitrous oxide concentration ≥50% and ensures an adequate bellows volume during the ensuing low-flow period.

  16. Calculating Method for Influence of Material Flow on Energy Consumption in Steel Manufacturing Process

    Institute of Scientific and Technical Information of China (English)

    YU Qing-bo; LU Zhong-wu; CAI Jiu-ju

    2007-01-01

    From the viewpoint of systems energy conservation, the influence of material flow on energy consumption in the steel manufacturing process is an important subject. Quantitative analysis of the relationship between material flow and energy intensity is useful for saving energy in the steel industry. Based on the concept of a standard material flow diagram, all possible situations of ferric material flow in the steel manufacturing process are analyzed. Expressions for the influence on energy consumption of material flow that deviates from the standard material flow diagram are put forward.

  17. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    Science.gov (United States)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of the problems of quality control in mammography units and film processors as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference, and gamma index are included. Breast cancer is the most frequently diagnosed cancer and the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC are smaller than those of the film processors, because almost all processors fall outside the acceptable variation limits, which can affect mammographic image quality and the dose to the breast. Only four mammography units met the minimum score established by the ACR and FDA for the phantom image.

  18. Improving the Quotation Process of an After-Sales Unit

    OpenAIRE

    Matilainen, Janne

    2013-01-01

    The purpose of this study was to model and analyze the quotation process of area managers at a global company. Process improvement requires understanding the fundamentals of the process. The study was conducted as a case study. Data comprised internal documentation of the case company, literature, and semi-structured, themed interviews of process performers and stakeholders. The objective was to produce a model of the current state of the process. The focus was to establish a holistic view o...

  19. Investigation of the Dynamic Melting Process in a Thermal Energy Storage Unit Using a Helical Coil Heat Exchanger

    Directory of Open Access Journals (Sweden)

    Xun Yang

    2017-08-01

    In this study, the dynamic melting process of the phase change material (PCM) in a vertical cylindrical tube-in-tank thermal energy storage (TES) unit was investigated through numerical simulations and experimental measurements. To ensure good heat exchange performance, a concentric helical coil was inserted into the TES unit to pipe the heat transfer fluid (HTF). A numerical model using the computational fluid dynamics (CFD) approach was developed based on the enthalpy-porosity method to simulate the unsteady melting process, including temperature and liquid fraction variations. Temperature measurements using evenly spaced thermocouples were conducted, and the temperature variation at three locations inside the TES unit was recorded. The effects of the HTF inlet parameters were investigated through parametric studies with different temperatures and flow rates. Reasonably good agreement was achieved between the numerical predictions and the temperature measurements, confirming the accuracy of the numerical simulation. The numerical results showed the significance of the buoyancy effect for the dynamic melting process. The TES performance of the system was very sensitive to the HTF inlet temperature; by contrast, no apparent influence was found when changing the HTF flow rate. This study provides a comprehensive approach to investigating the heat exchange process of a TES system using PCM.
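    The enthalpy-porosity method tracks melting through a liquid fraction; the standard linear mushy-zone closure is sketched below (the paper's exact formulation may differ):

```python
def liquid_fraction(T, T_solidus, T_liquidus):
    """Liquid fraction in the enthalpy-porosity method: 0 below the
    solidus, 1 above the liquidus, linear in between (standard closure)."""
    if T <= T_solidus:
        return 0.0
    if T >= T_liquidus:
        return 1.0
    return (T - T_solidus) / (T_liquidus - T_solidus)
```

    In the momentum equations, this fraction drives a porosity-like sink term that immobilises the solid region while leaving the melt free to convect.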

  20. Co-occurrence of Photochemical and Microbiological Transformation Processes in Open-Water Unit Process Wetlands.

    Science.gov (United States)

    Prasse, Carsten; Wenk, Jannis; Jasper, Justin T; Ternes, Thomas A; Sedlak, David L

    2015-12-15

    The fate of anthropogenic trace organic contaminants in surface waters can be complex due to the occurrence of multiple parallel and consecutive transformation processes. In this study, the removal of five antiviral drugs (abacavir, acyclovir, emtricitabine, lamivudine and zidovudine) via both bio- and phototransformation processes was investigated in laboratory microcosm experiments simulating an open-water unit process wetland receiving municipal wastewater effluent. Phototransformation was the main removal mechanism for abacavir, zidovudine, and emtricitabine, with half-lives (t1/2,photo) in wetland water of 1.6, 7.6, and 25 h, respectively. In contrast, removal of acyclovir and lamivudine was mainly attributable to slower microbial processes (t1/2,bio = 74 and 120 h, respectively). Identification of transformation products revealed that bio- and phototransformation reactions took place at different moieties. For abacavir and zidovudine, rapid transformation was attributable to the high reactivity of the cyclopropylamine and azido moieties, respectively. Despite substantial differences in the kinetics of the different antiviral drugs, biotransformation reactions mainly involved oxidation of hydroxyl groups to the corresponding carboxylic acids. Phototransformation rates of parent antiviral drugs and their biotransformation products were similar, indicating that prior exposure to microorganisms (e.g., in a wastewater treatment plant or a vegetated wetland) would not affect the rate of transformation of the part of the molecule susceptible to phototransformation. However, phototransformation strongly affected the rates of biotransformation of the hydroxyl groups, which in some cases resulted in greater persistence of phototransformation products.
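    When photo- and biotransformation act in parallel as first-order processes, their rate constants add, so the half-lives combine harmonically; only t_bio = 74 h (acyclovir) comes from the abstract, and the t_photo value below is hypothetical:

```python
import math

def combined_half_life(t_photo, t_bio):
    """Overall half-life for two parallel first-order removal pathways:
    k_total = k_photo + k_bio, i.e. 1/t_total = 1/t_photo + 1/t_bio."""
    k_total = math.log(2) / t_photo + math.log(2) / t_bio
    return math.log(2) / k_total

# e.g. t_bio = 74 h with a hypothetical t_photo = 100 h gives ~42.5 h
```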

  1. Unit Operation Experiment Linking Classroom with Industrial Processing

    Science.gov (United States)

    Benson, Tracy J.; Richmond, Peyton C.; LeBlanc, Weldon

    2013-01-01

    An industrial-type distillation column, including appropriate pumps, heat exchangers, and automation, was used as a unit operations experiment to provide a link between classroom teaching and real-world applications. Students were presented with an open-ended experiment where they defined the testing parameters to solve a generalized problem. The…

  2. Effect of energetic dissipation processes on the friction unit tribological

    Directory of Open Access Journals (Sweden)

    Moving V. V.

    2007-01-01

    This article presents the influence of temperature on the rheological and friction coefficients of cast-iron friction-unit elements. It was found that the surface layer formed at the friction temperature has good wear resistance, which is attributed to structural hardening of the surface layer and its capacity for stress relaxation.

  3. Molten salt coal gasification process development unit. Phase 1. Volume 1. PDU operations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Kohl, A.L.

    1980-05-01

    This report summarizes the results of a test program conducted on the Molten Salt Coal Gasification Process, which included the design, construction, and operation of a Process Development Unit. In this process, coal is gasified by contacting it with air in a turbulent pool of molten sodium carbonate. Sulfur and ash are retained in the melt, and a small stream is continuously removed from the gasifier for regeneration of sodium carbonate, removal of sulfur, and disposal of the ash. The process can handle a wide variety of feed materials, including highly caking coals, and produces a gas relatively free from tars and other impurities. The gasification step is carried out at approximately 1800°F. The PDU was designed to process 1 ton per hour of coal at pressures up to 20 atm. It is a completely integrated facility including systems for feeding solids to the gasifier, regenerating sodium carbonate for reuse, and removing sulfur and ash in forms suitable for disposal. Five extended test runs were made. The observed product gas composition was quite close to that predicted on the basis of earlier small-scale tests and thermodynamic considerations. All plant systems were operated in an integrated manner during one of the runs. The principal problem encountered during the five test runs was maintaining a continuous flow of melt from the gasifier to the quench tank. Test data and discussions regarding plant equipment and process performance are presented. The program also included a commercial plant study which showed the process to be attractive for use in a combined-cycle, electric power plant. The report is presented in two volumes, Volume 1, PDU Operations, and Volume 2, Commercial Plant Study.

  4. Research on the pyrolysis of hardwood in an entrained bed process development unit

    Energy Technology Data Exchange (ETDEWEB)

    Kovac, R.J.; Gorton, C.W.; Knight, J.A.; Newman, C.J.; O' Neil, D.J. (Georgia Inst. of Tech., Atlanta, GA (United States). Research Inst.)

    1991-08-01

    An atmospheric flash pyrolysis process, the Georgia Tech Entrained Flow Pyrolysis Process, for the production of liquid biofuels from oak hardwood is described. The development of the process began with bench-scale studies and a conceptual design in the 1978--1981 timeframe. Its development and successful demonstration through research on the pyrolysis of hardwood in an entrained bed process development unit (PDU), in the period of 1982--1989, is presented. Oil yields (dry basis) up to 60% were achieved in the 1.5 ton-per-day PDU, far exceeding the initial target/forecast of 40% oil yields. Experimental data, based on over forty runs under steady-state conditions, supported by material and energy balances of near-100% closures, have been used to establish a process model which indicates that oil yields well in excess of 60% (dry basis) can be achieved in a commercial reactor. Experimental results demonstrate a gross product thermal efficiency of 94% and a net product thermal efficiency of 72% or more; the highest values yet achieved with a large-scale biomass liquefaction process. A conceptual manufacturing process and an economic analysis for liquid biofuel production at 60% oil yield from a 200-TPD commercial plant is reported. The plant appears to be profitable at contemporary fuel costs of $21/barrel oil-equivalent. Total capital investment is estimated at under $2.5 million. A rate-of-return on investment of 39.4% and a pay-out period of 2.1 years has been estimated. The manufacturing cost of the combustible pyrolysis oil is $2.70 per gigajoule. 20 figs., 87 tabs.

  5. Design Choices for Thermofluid Flow Components and Systems that are Exported as Functional Mockup Units

    Energy Technology Data Exchange (ETDEWEB)

    Wetter, Michael; Fuchs, Marcus; Nouidui, Thierry

    2015-09-21

    This paper discusses design decisions for exporting Modelica thermofluid flow components as Functional Mockup Units. The purpose is to provide guidelines that will allow building energy simulation programs and HVAC equipment manufacturers to effectively use FMUs for modeling of HVAC components and systems. We provide an analysis for direct input-output dependencies of such components and discuss how these dependencies can lead to algebraic loops that are formed when connecting thermofluid flow components. Based on this analysis, we provide recommendations that increase the computing efficiency of such components and systems that are formed by connecting multiple components. We explain what code optimizations are lost when providing thermofluid flow components as FMUs rather than Modelica code. We present an implementation of a package for FMU export of such components, explain the rationale for selecting the connector variables of the FMUs and finally provide computing benchmarks for different design choices. It turns out that selecting temperature rather than specific enthalpy as input and output signals does not lead to a measurable increase in computing time, but selecting nine small FMUs rather than a large FMU increases computing time by 70%.
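    The algebraic-loop analysis described above reduces to cycle detection in the directed graph of direct input-output dependencies; a generic sketch (not the authors' Modelica/FMU tooling), where `deps` maps each signal to the signals it directly depends on:

```python
def has_algebraic_loop(deps):
    """Depth-first search for a cycle in a dependency graph given as
    {node: [nodes it depends on]}; every referenced node must be a key."""
    state = {v: 0 for v in deps}  # 0 = unvisited, 1 = in progress, 2 = done
    def dfs(v):
        state[v] = 1
        for w in deps[v]:
            if state[w] == 1 or (state[w] == 0 and dfs(w)):
                return True   # back edge found: a loop exists
        state[v] = 2
        return False
    return any(state[v] == 0 and dfs(v) for v in deps)
```

    Connecting two FMUs whose outputs each depend directly on the other's input would produce such a cycle, which is what the recommended connector-variable choices help avoid.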

  6. On the hazard rate process for imperfectly monitored multi-unit systems

    Energy Technology Data Exchange (ETDEWEB)

    Barros, A. [Institut des Sciences et Technologies de l'Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, C. [Institut des Sciences et Technologies de l'Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]; Grall, A. [Institut des Sciences et Technologies de l'Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]

    2005-12-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current state of the units is imperfectly monitored. The definition of the hazard rate process, which exists under perfect monitoring, is extended to the realistic case where the units' failure times are not always detected (non-detection events). The observed hazard rate process defined in this way gives a better representation of the system behavior than the classical failure rate calculated without any information on the units' state, and than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is discussed and illustrated on a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies.

  7. Ambient groundwater flow diminishes nitrate processing in the hyporheic zone of streams

    Science.gov (United States)

    Azizian, Morvarid; Boano, Fulvio; Cook, Perran L. M.; Detwiler, Russell L.; Rippy, Megan A.; Grant, Stanley B.

    2017-05-01

    Modeling and experimental studies demonstrate that ambient groundwater reduces hyporheic exchange, but the implications of this observation for stream N-cycling are not yet clear. Here we utilize a simple process-based model (the Pumping and Streamline Segregation or PASS model) to evaluate N-cycling over two scales of hyporheic exchange (fluvial ripples and riffle-pool sequences), ten ambient groundwater and stream flow scenarios (five gaining and losing conditions and two stream discharges), and three biogeochemical settings (identified based on a principal component analysis of previously published measurements in streams throughout the United States). Model-data comparisons indicate that our model provides realistic estimates for direct denitrification of stream nitrate, but overpredicts nitrification and coupled nitrification-denitrification. Riffle-pool sequences are responsible for most of the N-processing, despite the fact that fluvial ripples generate 3-11 times more hyporheic exchange flux. Across all scenarios, hyporheic exchange flux and the Damköhler Number emerge as primary controls on stream N-cycling; the former regulates trafficking of nutrients and oxygen across the sediment-water interface, while the latter quantifies the relative rates of organic carbon mineralization and advective transport in streambed sediments. Vertical groundwater flux modulates both of these master variables in ways that tend to diminish stream N-cycling. Thus, anthropogenic perturbations of ambient groundwater flows (e.g., by urbanization, agricultural activities, groundwater mining, and/or climate change) may compromise some of the key ecosystem services provided by streams.
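
    The Damköhler number the abstract identifies as a master variable compares the organic-carbon mineralization rate with the advective transport rate through the streambed; for a first-order reaction it reduces to the rate constant times the hyporheic residence time. All parameter values below are illustrative assumptions, not values from the study.

```python
# Hedged illustration of the Damköhler number as a reaction/transport ratio.

def damkohler(k_min, residence_time):
    """Da = first-order mineralization rate constant x hyporheic residence time."""
    return k_min * residence_time

# Short flow paths (ripples) vs. long flow paths (riffle-pool), assumed values:
for name, k, tau in [("ripple (short path)", 2.0e-4, 6.0e2),
                     ("riffle-pool (long path)", 2.0e-4, 3.6e4)]:
    da = damkohler(k, tau)
    regime = "reaction-limited" if da < 1 else "transport-limited"
    print(f"{name}: Da = {da:.2f} ({regime})")
```

This is consistent with the abstract's finding that riffle-pool sequences, with their longer residence times, dominate N-processing even though ripples exchange more water.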

  8. Can a stepwise steady flow computational fluid dynamics model reproduce unsteady particulate matter separation for common unit operations?

    Science.gov (United States)

    Pathapati, Subbu-Srikanth; Sansalone, John J

    2011-07-01

    Computational fluid dynamics (CFD) is emerging as a model for resolving the fate of particulate matter (PM) by unit operations subject to rainfall-runoff loadings. However, compared to steady flow CFD models, there are greater computational requirements for unsteady hydrodynamics and PM loading models. This study therefore examines whether integrating a stepwise steady flow CFD model can reproduce PM separation by common unit operations loaded by unsteady flow and PM loadings, thereby reducing computational effort. Utilizing monitored unit operation data from unsteady events as a metric, this study compares the two CFD modeling approaches for a hydrodynamic separator (HS), a primary clarifier (PC) tank, and a volumetric clarifying filtration system (VCF). Results indicate that while unsteady CFD models reproduce PM separation of each unit operation, stepwise steady CFD models result in significant deviation for HS and PC models as compared to monitored data, overestimating the physical size requirements of each unit required to reproduce monitored PM separation results. In contrast, the stepwise steady flow approach reproduces PM separation by the VCF, a combined gravitational sedimentation and media filtration unit operation that provides attenuation of turbulent energy and flow velocity.
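
    The stepwise steady idea the abstract tests can be sketched as flow-weighted averaging of steady-state results over a discretized hydrograph: each step gets a steady-flow removal efficiency, and event PM separation is the flow-weighted mean. The efficiency curve, the hydrograph, and the equal-duration steps are all illustrative assumptions, not the paper's model.

```python
# Hedged sketch of a stepwise steady approximation of unsteady PM separation.

def eta_steady(q):
    """Assumed steady-flow removal efficiency; falls as flow rate rises."""
    return max(0.0, 0.95 - 0.004 * q)

def event_removal(hydrograph):
    """Flow-weighted removal over equal-duration steady steps (an assumption)."""
    total = sum(hydrograph)
    return sum(q * eta_steady(q) for q in hydrograph) / total

# A hypothetical runoff event discretized into five steady flow steps (L/s):
print(round(event_removal([20, 60, 120, 80, 30]), 3))
```

The abstract's finding is precisely that this shortcut works for some units (the VCF) but deviates significantly for others (HS, PC), where unsteady effects matter.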

  9. Ground-Water Flow Model of the Sierra Vista Subwatershed and Sonoran Portions of the Upper San Pedro Basin, Southeastern Arizona, United States, and Northern Sonora, Mexico

    Science.gov (United States)

    Pool, D.R.; Dickinson, Jesse E.

    2007-01-01

    A numerical ground-water model was developed to simulate seasonal and long-term variations in ground-water flow in the Sierra Vista subwatershed, Arizona, United States, and Sonora, Mexico, portions of the Upper San Pedro Basin. This model includes details of the ground-water flow system that were not simulated by previous models, such as ground-water flow in the sedimentary rocks that surround and underlie the alluvial basin deposits, withdrawals for dewatering purposes at the Tombstone mine, discharge to springs in the Huachuca Mountains, thick low-permeability intervals of silt and clay that separate the ground-water flow system into deep-confined and shallow-unconfined systems, ephemeral-channel recharge, and seasonal variations in ground-water discharge by wells and evapotranspiration. Steady-state and transient conditions during 1902-2003 were simulated by using a five-layer numerical ground-water flow model representing multiple hydrogeologic units. Hydraulic properties of model layers, streamflow, and evapotranspiration rates were estimated as part of the calibration process by using observed water levels, vertical hydraulic gradients, streamflow, and estimated evapotranspiration rates as constraints. Simulations approximate observed water-level trends throughout most of the model area and streamflow trends at the Charleston streamflow-gaging station on the San Pedro River. Differences between observed and simulated water levels, streamflow, and evapotranspiration could be reduced through simulation of climate-related variations in recharge rates and recharge from flood-flow infiltration.

  10. Quantitative investigation of the transition process in Taylor-Couette flow

    Energy Technology Data Exchange (ETDEWEB)

    Tu, Xin Cheng; Kim, Hyoung Bum Kim [Gyeongsang National University, Jinju (Korea, Republic of); Liu, Dong [Jiangsu University, Zhenjiang (China)

    2013-02-15

    The transition process from the circular Couette flow to the Taylor vortex flow regime was experimentally investigated by measuring the instantaneous velocity vector fields in the annular gap flow region between two concentric cylinders. The proper orthogonal decomposition method, vorticity calculation, and frequency analysis were applied to the instantaneous velocity fields to identify the flow characteristics during the transition process. From the results, the kinetic energy and the corresponding reconstructed velocity fields were able to detect the onset of the transition process and the alternation of the flow structure. The intermittency and oscillation of the vortex flows during the transition process were also revealed by the analysis of the instantaneous velocity fields. The results provide a means of identifying the critical Reynolds number of Taylor-Couette flow from velocity measurements.
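
    The proper orthogonal decomposition step the abstract applies to velocity fields can be sketched with an SVD: fluctuation snapshots are stacked as columns, the left singular vectors are the spatial modes, and the squared singular values give each mode's share of the kinetic energy. The snapshot data below are synthetic, not the paper's measurements.

```python
# Hedged sketch of snapshot POD via SVD on a synthetic velocity data set.
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snap = 50, 8
mode = np.sin(np.linspace(0, np.pi, n_points))   # one dominant spatial structure
amps = np.linspace(1.0, 2.0, n_snap)             # its amplitude in each snapshot
snapshots = np.outer(mode, amps) + 0.05 * rng.standard_normal((n_points, n_snap))

fluct = snapshots - snapshots.mean(axis=1, keepdims=True)  # remove the mean field
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)       # U: spatial POD modes
energy = s**2 / np.sum(s**2)                               # energy fraction per mode
print(np.round(energy[:3], 3))
```

Tracking how this modal energy redistributes as the Reynolds number increases is, in essence, how the kinetic-energy criterion in the abstract detects the onset of transition.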

  11. Optimizing resource allocation and patient flow: process analysis and reorganization in three chemotherapy outpatient clinics.

    Science.gov (United States)

    Holmes, Morgan; Bodie, Kelly; Porter, Geoffrey; Sullivan, Victoria; Tarasuk, Joy; Trembley, Jodie; Trudeau, Maureen

    2010-01-01

    Optimizing human and physical resources is a major concern for cancer care decision-makers and practitioners. This issue is particularly acute in the context of ambulatory outpatient chemotherapy clinics, especially when - as is the case almost everywhere in the industrialized world - the number of people requiring systemic therapy is increasing while budgets, staffing and physical space remain static. Recent initiatives at three hospital-based chemotherapy units - in Halifax, Toronto and Kingston - shed light on the value of process analysis and reorganization for using existing human and physical resources to their full potential, improving patient flow and enhancing patient satisfaction. The steps taken in these settings are broadly applicable to other healthcare settings and would likely result in similar benefits in those environments.

  12. The United States Military Entrance Processing Command (USMEPCOM) Uses Six Sigma Process to Develop and Improve Data Quality

    Science.gov (United States)

    2007-06-01

    Original title on 712 A/B: The United States Military Entrance Processing Command (USMEPCOM) uses Six Sigma process to develop and improve data quality. Briefing outline: USMEPCOM Overview/History • Purpose • Define: What is Important

  13. Treatment of volatile organic contaminants in a vertical flow filter: Relevance of different removal processes

    NARCIS (Netherlands)

    De Biase, C.; Reger, D.; Schmidt, A.; Jechalke, S.; Reiche, N.; Martínez-Lavanchy, P.M.; Rosell, M.; Van Afferden, M.; Maier, U.; Oswald, S.E.; Thullner, M.

    2011-01-01

    Vertical flow filters and vertical flow constructed wetlands are established wastewater treatment systems and have also been proposed for the treatment of contaminated groundwater. This study investigates the removal processes of volatile organic compounds in a pilot-scale vertical flow filter.


  15. A Ten-Step Process for Developing Teaching Units

    Science.gov (United States)

    Butler, Geoffrey; Heslup, Simon; Kurth, Lara

    2015-01-01

    Curriculum design and implementation can be a daunting process. Questions quickly arise, such as who is qualified to design the curriculum and how do these people begin the design process. According to Graves (2008), in many contexts the design of the curriculum and the implementation of the curricular product are considered to be two mutually…

  16. Stochastic Modelling of Shiroro River Stream flow Process

    Directory of Open Access Journals (Sweden)

    Musa, J. J

    2013-01-01

    Economists, social scientists and engineers provide insights into the drivers of anthropogenic climate change and the options for adaptation and mitigation, while other scientists, including geographers and biologists, study the impacts of climate change. This project concentrates mainly on the discharge from the Shiroro River. A stochastic approach is presented for modeling the time series with an Autoregressive Moving Average (ARMA) model. The development and use of a stochastic stream flow model involves some basic steps, such as obtaining the stream flow record and other information and selecting the model that best describes the marginal probability distribution of flows. Flow discharge records covering about 22 years (1990-2011) were obtained from the meteorological station at Shiroro and analyzed with three different models, namely the Autoregressive (AR) model, the Autoregressive Moving Average (ARMA) model and the Autoregressive Integrated Moving Average (ARIMA) model. Initial model identification is done by using the autocorrelation function (ACF) and the partial autocorrelation function (PACF). Based on the model analysis and evaluations, predictions were made for the effective use of the flow from the river for farming activities and for the generation of power for both industrial and domestic use. Recommendations are also made for utilizing the potential of the river effectively.
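
    The ACF-based identification step the abstract describes can be sketched with the standard library alone; for an AR(1) model the Yule-Walker estimate of the coefficient is simply the lag-1 autocorrelation of the de-meaned series. The synthetic series below stands in for the discharge record, which is not reproduced here.

```python
# Hedged sketch of AR(1) identification via the autocorrelation function (ACF).
import random

def acf(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mu = sum(x) / n
    c0 = sum((v - mu) ** 2 for v in x) / n
    ck = sum((x[t] - mu) * (x[t + lag] - mu) for t in range(n - lag)) / n
    return ck / c0

# Synthetic AR(1) series with a known coefficient of 0.7:
random.seed(0)
phi_true = 0.7
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + random.gauss(0, 1))

phi_hat = acf(x, 1)  # Yule-Walker AR(1) estimate = lag-1 autocorrelation
print(round(phi_hat, 2))
```

For the full AR/ARMA/ARIMA comparison in the paper, a library such as statsmodels would normally be used; the point here is only how the ACF feeds model identification.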

  17. 76 FR 13973 - United States Warehouse Act; Processed Agricultural Products Licensing Agreement

    Science.gov (United States)

    2011-03-15

    ... Farm Service Agency United States Warehouse Act; Processed Agricultural Products Licensing Agreement... warehouse licenses may be issued under the United States Warehouse Act (USWA). Through this notice, FSA is... processed agricultural products that are stored in climate controlled, cooler, and freezer warehouses....

  18. Genealogy of flows of continuous-state branching processes via flows of partitions and the Eve property

    CERN Document Server

    Labbé, Cyril

    2012-01-01

    We encode the genealogy of a continuous-state branching process associated with a branching mechanism $\Psi$ - or $\Psi$-CSBP in short - using a stochastic flow of partitions. This encoding holds for all branching mechanisms and appears as a very tractable object to deal with asymptotic behaviours and convergences. In particular we study the so-called Eve property - the existence of an ancestor from which the entire population descends asymptotically - and give a necessary and sufficient condition on the $\Psi$-CSBP for this property to hold. Finally, we show that the flow of partitions unifies the lookdown representation and the flow of subordinators when the Eve property holds.

  19. Reactive-Separator Process Unit for Lunar Regolith Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's plans for a lunar habitation outpost call out for process technologies to separate hydrogen sulfide and sulfur dioxide gases from regolith product gas...

  20. Aerodynamic structures and processes in rotationally augmented flow fields

    DEFF Research Database (Denmark)

    Schreck, S.J.; Sørensen, Niels N.; Robinson, M.C.

    2007-01-01

    … Experimental measurements consisted of surface pressure data statistics used to infer sectional boundary layer state and to quantify normal force levels. Computed predictions included high-resolution boundary layer topologies and detailed above-surface flow field structures. This synergy was exploited … to reliably identify and track pertinent features in the rotating blade boundary layer topology as they evolved in response to varying wind speed. Subsequently, boundary layer state was linked to above-surface flow field structure and used to deduce mechanisms underlying augmented aerodynamic force…

  1. Full Stokes finite-element modeling of ice sheets using a graphics processing unit

    Science.gov (United States)

    Seddik, H.; Greve, R.

    2016-12-01

    Thermo-mechanical simulation of ice sheets is an important approach to understand and predict their evolution in a changing climate. For that purpose, higher order (e.g., ISSM, BISICLES) and full Stokes (e.g., Elmer/Ice, http://elmerice.elmerfem.org) models are increasingly used to more accurately model the flow of entire ice sheets. In parallel to this development, the rapidly improving performance and capabilities of Graphics Processing Units (GPUs) make it possible to efficiently offload more of the calculations of complex and computationally demanding problems to those devices. Thus, in order to continue the trend of using full Stokes models with greater resolutions, using GPUs should be considered for the implementation of ice sheet models. We developed the GPU-accelerated ice-sheet model Sainō. Sainō is an Elmer (http://www.csc.fi/english/pages/elmer) derivative implemented in Objective-C which solves the full Stokes equations with the finite element method. It uses the standard OpenCL language (http://www.khronos.org/opencl/) to offload the assembly of the finite element matrix to the GPU. A mesh-coloring scheme is used so that elements with the same color (sharing no nodes) are assembled in parallel on the GPU without the need for synchronization primitives. The current implementation shows that, for the ISMIP-HOM experiment A, during the matrix assembly in double precision with 8000, 87,500 and 252,000 brick elements, Sainō is respectively 2x, 10x and 14x faster than Elmer/Ice (when both models are run on a single processing unit). In single precision, Sainō is even 3x, 20x and 25x faster than Elmer/Ice. A detailed description of the comparative results between Sainō and Elmer/Ice will be presented, along with further perspectives on optimization and the limitations of the current implementation.
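
    The mesh-coloring scheme the abstract describes can be sketched with a greedy algorithm: elements that share no nodes receive the same color, so each color batch can be assembled concurrently without synchronization. The greedy strategy and the tiny 1-D mesh below are illustrative, not Sainō's actual implementation.

```python
# Hedged sketch of greedy mesh coloring for race-free parallel FEM assembly.

def color_elements(elements):
    """Greedy coloring; two elements conflict when they share a node."""
    colors = {}
    for i, nodes in enumerate(elements):
        used = {colors[j] for j in range(i) if set(nodes) & set(elements[j])}
        c = 0
        while c in used:
            c += 1
        colors[i] = c
    return colors

# Four line elements in a chain (node ids); neighbors share exactly one node.
elements = [(0, 1), (1, 2), (2, 3), (3, 4)]
colors = color_elements(elements)
batches = {}
for elem, c in colors.items():
    batches.setdefault(c, []).append(elem)
print(batches)  # all elements within one batch can be assembled concurrently
```

On the GPU, each batch would be one kernel launch: within a batch no two elements write to the same matrix entries, so no atomics or locks are needed.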

  2. Evidence of a sensory processing unit in the mammalian macula

    Science.gov (United States)

    Chimento, T. C.; Ross, M. D.

    1996-01-01

    We cut serial sections through the medial part of the rat vestibular macula for transmission electron microscopic (TEM) examination, computer-assisted 3-D reconstruction, and compartmental modeling. The ultrastructural research showed that many primary vestibular neurons have an unmyelinated segment, often branched, that extends between the heminode (putative site of the spike initiation zone) and the expanded terminal(s) (calyx, calyces). These segments, termed the neuron branches, and the calyces frequently have spine-like processes of various dimensions with bouton endings that morphologically are afferent, efferent, or reciprocal to other macular neural elements. The major questions posed by this study were whether small details of morphology, such as the size and location of neuronal processes or synapses, could influence the output of a vestibular afferent, and whether a knowledge of morphological details could guide the selection of values for simulation parameters. The conclusions from our simulations are (1) values of 5.0 kΩ cm² for membrane resistivity and 1.0 nS for synaptic conductance yield simulations that best match published physiological results; (2) process morphology has little effect on orthodromic spread of depolarization from the head (bouton) to the spike initiation zone (SIZ); (3) process morphology has no effect on antidromic spread of depolarization to the process head; (4) synapses do not sum linearly; (5) synapses are electrically close to the SIZ; and (6) all whole-cell simulations should be run with an active SIZ.

  3. Flow Field Post Processing via Partial Differential Equations

    NARCIS (Netherlands)

    Preusser, T.; Rumpf, M.; Telea, A.

    2006-01-01

    The visualization of stationary and time-dependent flow is an important and challenging topic in scientific visualization. Its aim is to represent transport phenomena governed by vector fields in an intuitively understandable way. In this paper, we review the use of methods based on partial differen

  4. Coded ultrasound for blood flow estimation using subband processing

    DEFF Research Database (Denmark)

    Gran, F.; Udesen, J.; Jensen, J.A.;

    2008-01-01

    This paper investigates the use of coded excitation for blood flow estimation in medical ultrasound. Traditional autocorrelation estimators use narrow-band excitation signals to provide sufficient signal-to-noise-ratio (SNR) and velocity estimation performance. In this paper, broadband coded sign...

  5. Quantifying the implicit process flow abstraction in SBGN-PD diagrams with Bio-PEPA

    National Research Council Canada - National Science Library

    Loewe, Laurence; Moodie, Stuart; Hillston, Jane

    2009-01-01

    .... Its qualitative Process Diagrams (SBGN-PD) are based on an implicit Process Flow Abstraction (PFA) that can also be used to construct quantitative representations, which can be used for automated analyses of the system...

  6. Work flow of signal processing data of ground penetrating radar case of rigid pavement measurements

    Energy Technology Data Exchange (ETDEWEB)

    Handayani, Gunawan [The Earth Physics and Complex Systems Research Group (Jl. Ganesa 10 Bandung Indonesia) gunawanhandayani@gmail.com (Indonesia)

    2015-04-16

    The signal processing of ground penetrating radar (GPR) data requires a certain work flow to obtain good results. Even though GPR data look similar to seismic reflection data, GPR data have particular signatures that seismic reflection data do not have, which has to do with the coupling between the antennae and the ground surface. Because of this, GPR data should be treated differently from the seismic reflection data processing work flow, even though most of the processing steps still follow the same work flow as seismic reflection data, such as filtering, predictive deconvolution, etc. This paper presents the work flow for processing GPR data from rigid pavement measurements. The processing steps start from the raw data, continue with the de-wow process and DC removal, and then follow the standard process to get rid of noise, i.e., filtering. Some radargram features particular to rigid pavement along with pile foundations are presented.
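
    The two early steps named in the abstract can be sketched in a few lines: DC removal is per-trace mean subtraction, and a simple de-wow filter subtracts a running mean to suppress the low-frequency "wow" caused by antenna-ground coupling. The window length and the synthetic trace are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of DC removal and a running-mean de-wow filter on one GPR trace.

def remove_dc(trace):
    """Subtract the trace mean (DC component)."""
    mu = sum(trace) / len(trace)
    return [v - mu for v in trace]

def dewow(trace, window=5):
    """Subtract a centered running mean to suppress low-frequency wow."""
    half = window // 2
    out = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        out.append(trace[i] - sum(trace[lo:hi]) / (hi - lo))
    return out

# Synthetic trace: a slow drift (wow) plus small reflections.
trace = [10 + 0.5 * i + s for i, s in enumerate([0, 1, -1, 2, -2, 1, 0, -1])]
flat = dewow(remove_dc(trace))
print([round(v, 2) for v in flat])
```

In a production work flow these steps would be followed by bandpass filtering and, as the abstract notes, possibly predictive deconvolution.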


  8. Two-Phase Flow in Pipes: Numerical Improvements and Qualitative Analysis for a Refining Process

    Directory of Open Access Journals (Sweden)

    Teixeira R.G.D.

    2015-03-01

    Two-phase flow in pipes occurs frequently in refineries, oil and gas production facilities and petrochemical units. The accurate design of such processing plants requires that numerical algorithms be combined with suitable models for predicting expected pressure drops. In performing such calculations, pressure gradients may be obtained from empirical correlations such as Beggs and Brill, and they must be integrated over the total length of the pipe segment, simultaneously with the enthalpy-gradient equation when the temperature profile is unknown. This paper proposes that the set of differential and algebraic equations involved be solved as a Differential Algebraic Equation (DAE) system, which poses a more CPU-efficient alternative to the “marching algorithm” employed by most related work. Demonstrating the use of specific regularization functions to prevent convergence failures caused by discontinuities inherent in such empirical correlations is also a key feature of this study. The developed numerical techniques are then employed to examine the sensitivity of the results obtained for a typical refinery two-phase flow design problem to heat-transfer parameters.
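
    The regularization idea the abstract credits with preventing convergence failures can be sketched as a smooth blend between two pressure-gradient correlations across a flow-pattern boundary, instead of a discontinuous switch that would trip the DAE solver. The correlation values and the transition point below are illustrative assumptions, not Beggs-Brill numbers.

```python
# Hedged sketch: smooth a flow-pattern switch with a tanh regularization function.
import math

def smooth_step(x, x0, width):
    """Smooth transition from 0 to 1 around x0; differentiable everywhere."""
    return 0.5 * (1.0 + math.tanh((x - x0) / width))

def dpdx(holdup):
    """Pressure gradient (Pa/m) blended between two placeholder correlations."""
    dpdx_segregated = 120.0    # placeholder value for the segregated regime
    dpdx_intermittent = 310.0  # placeholder value for the intermittent regime
    w = smooth_step(holdup, 0.4, 0.05)  # assumed pattern boundary at holdup = 0.4
    return (1 - w) * dpdx_segregated + w * dpdx_intermittent

print(round(dpdx(0.2), 1), round(dpdx(0.4), 1), round(dpdx(0.6), 1))
```

Far from the boundary each regime's correlation dominates, while at the boundary the blend is exactly halfway; crucially, dpdx stays continuous and differentiable, which is what an implicit DAE integrator needs.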

  9. Option pricing with COS method on Graphics Processing Units

    NARCIS (Netherlands)

    B. Zhang (Bo); C.W. Oosterlee (Cornelis)

    2009-01-01

    In this paper, acceleration on the GPU for option pricing by the COS method is demonstrated. In particular, both European and Bermudan options will be discussed in detail. For Bermudan options, we consider both the Black-Scholes model and Levy processes of infinite activity. Moreover, the influence…
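
    The COS method the abstract accelerates on the GPU can be sketched on the CPU for the simplest case: a European call under Black-Scholes, priced with the Fang-Oosterlee cosine-series expansion and checked against the closed-form price. This is a hedged, minimal illustration of the algorithm, not the authors' GPU code; the parameter values are assumptions.

```python
# Minimal CPU sketch of the COS method for a European call under Black-Scholes.
import math, cmath

def cos_call(S0, K, r, sigma, T, N=256, L=10.0):
    c1, c2 = (r - 0.5 * sigma**2) * T, sigma**2 * T   # cumulants of ln(S_T/S_0)
    a, b = c1 - L * math.sqrt(c2), c1 + L * math.sqrt(c2)
    x = math.log(S0 / K)

    def chi(k, c, d):  # integral of e^y * cos(k*pi*(y-a)/(b-a)) over [c, d]
        w = k * math.pi / (b - a)
        return ((math.cos(w * (d - a)) * math.exp(d) - math.cos(w * (c - a)) * math.exp(c)
                 + w * (math.sin(w * (d - a)) * math.exp(d)
                        - math.sin(w * (c - a)) * math.exp(c))) / (1.0 + w * w))

    def psi(k, c, d):  # integral of cos(k*pi*(y-a)/(b-a)) over [c, d]
        if k == 0:
            return d - c
        w = k * math.pi / (b - a)
        return (math.sin(w * (d - a)) - math.sin(w * (c - a))) / w

    price = 0.0
    for k in range(N):
        u = k * math.pi / (b - a)
        phi = cmath.exp(1j * u * c1 - 0.5 * c2 * u * u)  # BS characteristic function
        Vk = 2.0 / (b - a) * K * (chi(k, 0.0, b) - psi(k, 0.0, b))  # call payoff coeffs
        term = (phi * cmath.exp(1j * u * (x - a))).real * Vk
        price += 0.5 * term if k == 0 else term          # first term gets weight 1/2
    return math.exp(-r * T) * price

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes call, used only to check the series."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N_ = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S0 * N_(d1) - K * math.exp(-r * T) * N_(d2)

print(round(cos_call(100, 100, 0.05, 0.2, 1.0), 4),
      round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))
```

The per-strike, per-term independence of the series is what makes the method map so well onto the GPU in the paper.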


  11. Sensitivity study on critical flow models of SPACE for inadvertent opening of containment spray valve in Shin Kori unit 1

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seyun; Kim, Minhee [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    SPACE (Safety and Performance Analysis Code for Nuclear Power Plants) has been developed by KHNP in cooperation with KEPCO E&C and KAERI. The SPACE code is expected to be applied to safety analysis for LOCA (loss-of-coolant accident) and non-LOCA scenarios. The SPACE code solves two-fluid, three-field governing equations and is programmed in C++ using object-oriented concepts. To evaluate its capability to analyze transient phenomena in an actual nuclear power plant, an inadvertent opening of a spray valve during the startup test phase of Shin Kori unit 1 was simulated with SPACE. To assess the critical flow models of SPACE, calculations with several critical flow models were carried out. The calculated transient behaviors of major reactor parameters with four critical flow models generally show good agreement with the measured data.

  12. Development of high-performance and low-noise axial-flow fan units in their local operating region

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Seung; Ha, Min Ho; Cheong, Cheol Ung [Pusan National University, Busan (Korea, Republic of); Kim, Tae Hoon [LG Electronics Inc., Changwon (Korea, Republic of)

    2015-09-15

    Aerodynamic and aeroacoustic performances of an axial-flow fan unit are improved by modifying its housing structure without changing the fan blade. The target axial-flow fan system is used to lower the temperature of the compressor and the condenser in the machine room of a household refrigerator, which has relatively high system resistance due to the complex layout of structures inside it. First, the performance of the fan system is experimentally characterized by measuring its volume flow rate versus static pressure using a fan performance tester satisfying the AMCA (Air Movement and Control Association) regulation AMCA 210-07. The detailed structure of the flow driven by the fan is numerically investigated using a virtual fan performance tester based on computational fluid dynamics techniques. The prediction results reveal possible losses due to radial and tangential velocity components in the wake flow downstream of the fan. The length of the fan housing is chosen as a design parameter for improving the aerodynamic and aeroacoustic performances of the fan unit by reducing the identified radial and tangential velocity components. Three fan units with housings longer than the original are analyzed using the virtual fan performance tester. The results confirm the improved aerodynamic performance of the three proposed designs. The flow field driven by the proposed fan unit is closely examined to find the causes of the observed performance improvements, confirming that the radial and tangential velocity components in the wake flow are reduced. Finally, the improved performance of the proposed fan systems is validated by comparing the P-Q and efficiency curves measured using the fan performance tester. The noise emission from the household refrigerator is also found to be reduced when the new fan units are installed.

  13. The Cilium: Cellular Antenna and Central Processing Unit

    OpenAIRE

    Malicki, Jarema J.; Johnson, Colin A.

    2017-01-01

    Cilia mediate an astonishing diversity of processes. Recent advances provide unexpected insights into the regulatory mechanisms of cilium formation, and reveal diverse regulatory inputs that are related to the cell cycle, cytoskeleton, proteostasis, and cilia-mediated signaling itself. Ciliogenesis and cilia maintenance are regulated by reciprocal antagonistic or synergistic influences, often acting in parallel to each other. By receiving parallel inputs, cilia appear to integrate multiple si...

  14. Simulations of ductile flow in brittle material processing

    Energy Technology Data Exchange (ETDEWEB)

    Luh, M.H.; Strenkowski, J.S.

    1988-12-01

    Research is continuing on the effects of the thermal properties of the cutting tool and workpiece on the overall temperature distribution. Using an Eulerian finite element model, diamond and steel tools cutting aluminum have been simulated at various speeds and depths of cut. The relative magnitude of the thermal conductivity of the tool and the workpiece is believed to be a primary factor in the resulting temperature distribution in the workpiece. This effect is demonstrated by the change in maximum surface temperature for diamond on aluminum vs. steel on aluminum. As a preliminary step toward the study of ductile flow in brittle materials, the relative thermal conductivity of diamond on polycarbonate is simulated. In this case, the maximum temperature shifts from the rake face of the tool to the surface of the machined workpiece, thus promoting ductile flow in the workpiece surface.

  15. Uniting Gradual and Abrupt set Processes in Resistive Switching Oxides

    Science.gov (United States)

    Fleck, Karsten; La Torre, Camilla; Aslam, Nabeel; Hoffmann-Eifert, Susanne; Böttger, Ulrich; Menzel, Stephan

    2016-12-01

    Identifying limiting factors is crucial for a better understanding of the dynamics of the resistive switching phenomenon in transition-metal oxides. This improved understanding is important for the design of fast-switching, energy-efficient, and long-term stable redox-based resistive random-access memory devices. Therefore, this work presents a detailed study of the set kinetics of valence change resistive switches on a time scale from 10 ns to 10⁴ s, taking Pt/SrTiO3/TiN nanocrossbars as a model material. The analysis of the transient currents reveals that the switching process can be subdivided into a linear-degradation process that is followed by a thermal runaway. The comparison with a dynamical electrothermal model of the memory cell allows the deduction of the physical origin of the degradation. The origin is an electric-field-induced increase of the oxygen-vacancy concentration near the Schottky barrier of the Pt/SrTiO3 interface that is accompanied by a steadily rising local temperature due to Joule heating. The positive feedback of the temperature increase on the oxygen-vacancy mobility, and thereby on the conductivity of the filament, leads to a self-acceleration of the set process.

  16. Flow-Injection Responses of Diffusion Processes and Chemical Reactions

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2000-01-01

    The technique of Flow-Injection Analysis (FIA), now 25 years old, offers unique analytical methods that are fast, reliable and consume an absolute minimum of chemicals. These advantages, together with its inherent feasibility for automation, warrant the future applications of FIA as an attractive … be used in the resolution of FIA profiles to obtain information about the content of interferences, in the study of chemical reaction kinetics and to measure absolute concentrations within the FIA detector cell.

  17. Formation of a Methodological Approach to Evaluating the State of Management of Enterprise Flow Processes

    Directory of Open Access Journals (Sweden)

    Dzobko Iryna P.

    2016-02-01

    The formation of a methodological approach to evaluating the state of management of enterprise flow processes is considered. Proceeding from the theoretical propositions on the organization of enterprise flow process management developed in the literature, the hypothesis of the study is that quantitative and qualitative evaluations of management effectiveness correlate and can be combined into an integral index. The article presents the stages of implementing the methodological approach, indicating its components, their characteristics and the research methods used. The composition of indicators by which the effectiveness of enterprise flow process management can be evaluated is determined, and the indicators are grouped according to the flow nature of enterprise processes. The grouping is justified by pairwise determination of canonical correlations between the selected groups (the high correlation coefficients obtained confirm the author's systematization of indicators). It is shown that the specificity of the approach requires extension towards aggregation of the results and determination of the factors that influence the effectiveness of flow process management; the article carries out such aggregation using factor analysis. The distribution of a set of objects into classes according to the results of a cluster analysis is presented. To obtain an integral estimate of the effectiveness of flow process management, a taxonomic index of a multidimensional object is built. A peculiarity of the formed methodological approach to evaluating the state of management of enterprise flow processes is the matrix correlation of integral indicators calculated on

  18. Flagella, flexibility and flow: Physical processes in microbial ecology

    Science.gov (United States)

    Brumley, D. R.; Rusconi, R.; Son, K.; Stocker, R.

    2015-12-01

    How microorganisms interact with their environment and with their conspecifics depends strongly on their mechanical properties, on the hydrodynamic signatures they generate while swimming and on fluid flows in their environment. The rich fluid-structure interaction between flagella - the appendages microorganisms use for propulsion - and the surrounding flow has far-reaching effects for both eukaryotic and prokaryotic microorganisms. Here, we discuss selected recent advances in our understanding of the physical ecology of microorganisms, which have hinged on the ability to directly interrogate the movement of individual cells and their swimming appendages in precisely controlled fluid environments, and to image them at appropriately fast timescales. We review how a flagellar buckling instability can unexpectedly serve a fundamental function in the motility of bacteria, we elucidate the role of hydrodynamics and flexibility in the emergent properties of groups of eukaryotic flagella, and we show how fluid flows characteristic of microbial habitats can strongly bias the migration and spatial distribution of bacteria. The topics covered here are illustrative of the potential inherent in the adoption of experimental methods and conceptual frameworks from physics in understanding the lives of microorganisms.

  19. Novel process windows for enabling, accelerating, and uplifting flow chemistry.

    Science.gov (United States)

    Hessel, Volker; Kralisch, Dana; Kockmann, Norbert; Noël, Timothy; Wang, Qi

    2013-05-01

    Novel Process Windows make use of process conditions that are far from conventional practices. This involves the use of high temperatures, high pressures, high concentrations (solvent-free), new chemical transformations, explosive conditions, and process simplification and integration to boost synthetic chemistry on both the laboratory and production scale. Such harsh reaction conditions can be safely reached in microstructured reactors due to their excellent transport intensification properties. This Review discusses the different routes towards Novel Process Windows and provides several examples for each route grouped into different classes of chemical and process-design intensification.

  20. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    Directory of Open Access Journals (Sweden)

    Sungki Kim

    2015-08-01

    Pyroprocessing, a dry recycling method, converts spent nuclear fuel into U (uranium)/TRU (transuranium) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method. Toward this end, the pyroprocess was classified into four kinds of unit processes: pretreatment, electrochemical reduction, electrorefining and electrowinning. The unit process cost was calculated by classifying the cost consumed at each process into raw material and conversion costs. The unit process costs of pretreatment, electrochemical reduction, electrorefining and electrowinning were calculated as 195 US$/kgU-TRU, 310 US$/kgU-TRU, 215 US$/kgU-TRU and 231 US$/kgU-TRU, respectively. Finally, the total pyroprocess cost was calculated as 951 US$/kgU-TRU. In addition, the cost drivers for the raw material cost were identified as the cost of Li3PO4, needed for the LiCl-KCl purification process, and of the platinum anode electrode used in the electrochemical reduction process.
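
    The process-costing roll-up described above can be reproduced directly: the four unit process costs quoted in the abstract sum to the quoted total.

    ```python
    # Unit process costs from the abstract, in US$/kgU-TRU:
    unit_costs = {
        "pretreatment": 195,
        "electrochemical reduction": 310,
        "electrorefining": 215,
        "electrowinning": 231,
    }

    # The total pyroprocess cost is simply the sum of the unit process costs.
    total_cost = sum(unit_costs.values())
    print(total_cost)  # 951, the total pyroprocess cost quoted above
    ```
    
    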

  1. Flow Past an Accumulator Unit of an Underwater Energy Storage System: Three Touching Balloons in a Floral Configuration

    Institute of Scientific and Technical Information of China (English)

    Ahmadreza Vasel-Be-Hagh; Rupp Carriveau; David S-K Ting

    2014-01-01

    An LES simulation of flow over an accumulator unit of an underwater compressed air energy storage facility was conducted. The accumulator unit consists of three touching underwater balloons arranged in a floral configuration. The structure of the flow was examined via three-dimensional iso-surfaces of the Q-criterion. Vortical cores were observed on the leeward surface of the balloons. The swirling tube flows generated by these vortical cores were depicted through three-dimensional pathlines. The flow dynamics were visualized via time-series snapshots of two-dimensional vorticity contours perpendicular to the flow direction, revealing the turbulent swinging motions of the aforementioned shedding-swirling tube flows. The time history of the hydrodynamic loading was presented in terms of lift and drag coefficients. The drag coefficient of each individual balloon in the floral configuration was smaller than that of a single balloon. It was found that the total drag coefficient of the floral unit of three touching balloons, i.e. the summation of the drag coefficients of the balloons, is only moderately larger than that of a single balloon while providing three times the storage capacity. In addition to its practical significance in designing appropriate foundations and supports, the instantaneous hydrodynamic loading was used to determine the frequency of the turbulent swirling-swinging motions of the shedding vortex tubes; the Strouhal number was found to be larger than that of a single sphere at the same Reynolds number.
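
    The Strouhal number mentioned at the end of the abstract is the standard non-dimensional shedding frequency. A minimal sketch, with purely hypothetical numbers (the paper does not quote these values):

    ```python
    def strouhal(f_shed, length_scale, velocity):
        """Strouhal number St = f * D / U for vortex shedding at
        frequency f behind a body of size D in a flow of speed U."""
        return f_shed * length_scale / velocity

    # Hypothetical illustration: shedding at 0.5 Hz behind a 2 m
    # balloon unit in a 1 m/s current.
    St = strouhal(0.5, 2.0, 1.0)  # St = 1.0
    ```

    In practice the shedding frequency would be read off the dominant peak of the lift-coefficient time history, as done in the paper.
    
    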

  2. COSTS AND PROFITABILITY IN FOOD PROCESSING: PASTRY TYPE UNITS

    Directory of Open Access Journals (Sweden)

    DUMITRANA MIHAELA

    2013-08-01

    For each company, profitability, product quality and customer satisfaction are the most important targets. To attain these targets, managers need to know all about the costs that are used in decision making. What kinds of costs? How are these costs calculated for a specific sector such as food processing? These are only a few questions with answers in our paper. We consider that a case study for this sector may be relevant for all people that are interested in increasing the profitability of this specific activity sector.

  3. Impact of Expanded North Slope of Alaska Crude Oil Production on Crude Oil Flows in the Contiguous United States

    Energy Technology Data Exchange (ETDEWEB)

    DeRosa, Sean E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flanagan, Tatiana Paz [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    The National Transportation Fuels Model was used to simulate a hypothetical increase in North Slope of Alaska crude oil production. The results show that the magnitude of production utilized depends in part on the ability of crude oil and refined products infrastructure in the contiguous United States to absorb and adjust to the additional supply. Decisions about expanding North Slope production can use the National Transportation Fuels Model to take into account the effects on crude oil flows in the contiguous United States.

  4. Discount factors for public sector investment projects using the sum of discounted consumption flows -- estimates for the United Kingdom

    OpenAIRE

    E Kula

    1984-01-01

    In this article a model to estimate a discount factor matrix is derived for discount rates between 1% and 15% for the United Kingdom on the basis of a public-sector project evaluation method known as the sum of discounted consumption flows. These factors can readily be used by project analysts working on United Kingdom projects, especially those in which costs and benefits extend over many years.

  5. Analysis of production flow process with lean manufacturing approach

    Science.gov (United States)

    Siregar, Ikhsan; Arif Nasution, Abdillah; Prasetio, Aji; Fadillah, Kharis

    2017-09-01

    This research was conducted at a company engaged in the production of Fast Moving Consumer Goods (FMCG). The company's production process still contains several activities that cause waste. Non-value-added (NVA) activities are still widely found in its implementation, so the resulting cycle time to make the product is longer. One form of improvement on the production line is to apply the lean manufacturing method to identify waste along the value stream and find non-value-added activities, which can then be eliminated or reduced by utilizing value stream mapping and process activity mapping. The results show 26% value-added activities and 74% non-value-added activities. The current state map of the production process gives a process lead time of 678.11 minutes and a processing time of 173.94 minutes. In the proposed improvement, value-added time is 41% of production process activities and non-value-added time is 59%, while the future state map gives a process lead time of 426.69 minutes and a processing time of 173.89 minutes.
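
    The value-added percentages quoted above follow directly from the lead-time and processing-time figures; a quick check, using only numbers from the abstract:

    ```python
    def value_added_ratio(processing_time, lead_time):
        """Share of total lead time spent on value-adding work."""
        return processing_time / lead_time

    # Figures from the abstract, in minutes:
    current = value_added_ratio(173.94, 678.11)  # current state map
    future = value_added_ratio(173.89, 426.69)   # future state map
    print(round(current * 100), round(future * 100))  # 26 41
    ```

    The current-state ratio rounds to the 26% value-added share reported, and the future-state ratio to 41%.
    
    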

  6. ENTREPRENEURIAL OPPORTUNITIES IN FOOD PROCESSING UNITS (WITH SPECIAL REFERENCE TO BYADGI RED CHILLI COLD STORAGE UNITS IN THE KARNATAKA STATE)

    Directory of Open Access Journals (Sweden)

    P. ISHWARA

    2010-01-01

    After the green revolution, we are now ushering in the evergreen revolution in the country; food processing is an evergreen activity and a key to the agricultural sector. In this paper an attempt has been made to study the working of food processing units, with special reference to red chilli cold storage units in the Byadgi district of Karnataka State. Byadgi has been famous for red chilli since antiquity, and the vast and extensive market yard in Byadgi taluk is famous as the second largest red chilli market in the country. However, the most common and recurring problem faced by farmers is the inability to store enough red chilli from one harvest to the next: red chilli that is locally abundant for only a short period of time has to be stored against times of scarcity. In recent years, owing to oleoresin, demand for red chilli has grown in other countries such as Sri Lanka, Bangladesh, America, Europe, Nepal, Indonesia and Mexico. The study reveals that all the cold storage units of the study area use the vapour compression refrigeration method. All entrepreneurs are satisfied with their turnover and profit and are in a good economic position. Even though average turnover and profit have increased, a few units show a negligible decrease in turnover and profit, owing to competition from the increasing number of cold storages and from early-established units. The cold storages of the study area store red chilli, chilli seeds, chilli powder, tamarind, jeera, dania, turmeric, sunflower, zinger, channa, flower seeds, etc., but 80 per cent of each cold storage is filled with red chilli, owing to the existence of the vast and extensive red chilli market yard in Byadgi. There is no business without problems; in the same way, the entrepreneurs chosen for the study face a few problems in their business like skilled labour, technical and management

  7. Microreactors with integrated UV/Vis spectroscopic detection for online process analysis under segmented flow.

    Science.gov (United States)

    Yue, Jun; Falke, Floris H; Schouten, Jaap C; Nijhuis, T Alexander

    2013-12-21

    Combining reaction and detection in multiphase microfluidic flow is becoming increasingly important for accelerating process development in microreactors. We report the coupling of UV/Vis spectroscopy with microreactors for online process analysis under segmented flow conditions. Two integration schemes are presented: one uses a cross-type flow-through cell subsequent to a capillary microreactor for detection in the transmission mode; the other uses embedded waveguides on a microfluidic chip for detection in the evanescent wave field. Model experiments reveal the capabilities of the integrated systems in real-time concentration measurements and segmented flow characterization. The application of such integration for process analysis during gold nanoparticle synthesis is demonstrated, showing its great potential in process monitoring in microreactors operated under segmented flow.
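
    The real-time concentration measurements mentioned above rest on the standard relation between absorbance and concentration. A minimal sketch of that calculation, with hypothetical numbers (the abstract does not quote any):

    ```python
    def concentration_from_absorbance(absorbance, epsilon, path_cm):
        """Beer-Lambert law A = epsilon * l * c, solved for the
        concentration c given molar absorptivity epsilon (L/(mol*cm))
        and optical path length l (cm)."""
        return absorbance / (epsilon * path_cm)

    # Hypothetical example: absorbance 0.52 measured in a 1 cm flow
    # cell for a species with epsilon = 2.6e4 L/(mol*cm).
    c = concentration_from_absorbance(0.52, 2.6e4, 1.0)  # 2e-05 mol/L
    ```

    Under segmented flow the practical complication, as the paper discusses, is distinguishing absorbance in the liquid segments from signal distortions at the gas-liquid interfaces.
    
    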

  8. The Cilium: Cellular Antenna and Central Processing Unit.

    Science.gov (United States)

    Malicki, Jarema J; Johnson, Colin A

    2017-02-01

    Cilia mediate an astonishing diversity of processes. Recent advances provide unexpected insights into the regulatory mechanisms of cilium formation, and reveal diverse regulatory inputs that are related to the cell cycle, cytoskeleton, proteostasis, and cilia-mediated signaling itself. Ciliogenesis and cilia maintenance are regulated by reciprocal antagonistic or synergistic influences, often acting in parallel to each other. By receiving parallel inputs, cilia appear to integrate multiple signals into specific outputs and may have functions similar to logic gates of digital systems. Some combinations of input signals appear to impose higher hierarchical control related to the cell cycle. An integrated view of these regulatory inputs will be necessary to understand ciliogenesis and its wider relevance to human biology. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Evaluation Of Functional Flows To Prioritize The Restoration Of Spawning Habitat Geomorphic Units Among Three Tributaries Of The Sacramento-San Joaquin Delta

    Science.gov (United States)

    Escobar, M. I.; Pasternack, G. B.

    2006-12-01

    Biologists have identified fish spawning habitat rehabilitation as a primary goal in the recovery of river ecosystems. Prioritization of restoration efforts in large river ecosystems is a management strategy for an efficient use of available resources. Recognizing that science-based tools to evaluate restoration actions lack the incorporation of key hydrogeomorphic and ecologic attributes of river processes, a method to prioritize salmon spawning habitat restoration efforts that explores the complex linkages among different hydrologic, geomorphic, and ecologic variables was developed. The present work summarizes the conceptual background of the method and presents applications to three tributaries of the Sacramento-San Joaquin Delta system to make management conclusions for those rivers. The method is based on the definition of functional flows. Within the spawning habitat context, functional flows are those flow processes that provide optimal habitat conditioning before the freshwater lifestage begins by creating pool-riffle sequences, and that grant healthy habitat throughout the freshwater lifestage by maintaining the required water depth, velocity, and substrate composition. The method incorporates hydrogeomorphic and ecologic attributes through classifying magnitude and timing of functional flows and determining their effects on the habitat. Essential variables to evaluate the status of spawning habitat (i.e. slope, grain size, discharge, channel geometry, shear stress) are non-dimensionalized to provide comparability. Feasible combinations of the variables are put into an algorithm that discloses scenarios of flow functionality for characteristic hydrographs. The method was used to evaluate the ecological functionality of individual geomorphic units along the Mokelumne, Cosumnes, and Yuba Rivers and to compare them within each river and between rivers. Ranking according to the number of days with functional flows provided a hierarchical comparison of the
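
    One standard way to non-dimensionalize bed shear stress against grain size, as the method above requires, is the Shields parameter. A minimal sketch under that assumption (the paper does not specify its exact non-dimensionalization, and the input values below are hypothetical):

    ```python
    def shields_parameter(tau, d50, rho_s=2650.0, rho=1000.0, g=9.81):
        """Dimensionless (Shields) shear stress:
        tau* = tau / ((rho_s - rho) * g * d50),
        with sediment density rho_s, water density rho (kg/m^3),
        and median grain size d50 (m)."""
        return tau / ((rho_s - rho) * g * d50)

    # Hypothetical: 20 Pa of bed shear stress over 50 mm spawning gravel.
    tau_star = shields_parameter(20.0, 0.05)
    ```

    Comparing tau* against a critical value for incipient motion is how such non-dimensional variables make reaches of different slope and grain size comparable.
    
    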

  10. Professional Competence of the Head of External Relations Unit and its Development in the Study Process

    OpenAIRE

    Turuševa, Larisa

    2010-01-01

    Dissertation Annotation Larisa Turuševa's promotion paper „Professional Competence of the Head of External Relations Unit and its Development in the Study Process” presents completed research on the development of the professional competence of heads of external relations units and on the conditions for study programme development. A model of the professional competence of the head of an external relations unit is worked out, and its indicators and levels are described. A study process model for th...

  11. Multilevel Flow Modelling of Process Plant for Diagnosis and Control

    DEFF Research Database (Denmark)

    Lind, Morten

    1982-01-01

    of complex systems. A model of a nuclear power plant (PWR) is presented in the paper for illustration. Due to the consistency of the method, multilevel flow models provide specifications of plant goals and functions and may be used as a basis for design of computer-based support systems for the plant...... operator. Plant control requirements can be derived from the models and due to independence of the actual controller implementation the method may be used as a basis for design of control strategies and for the allocation of control tasks to the computer and the plant operator....

  12. Fluid flow and solute segregation in EFG crystal growth process

    Science.gov (United States)

    Bunoiu, O.; Nicoara, I.; Santailler, J. L.; Duffar, T.

    2005-02-01

    The influence of the die geometry and various growth conditions on the fluid flow and on the solute distribution in the EFG method has been studied using numerical simulation. The commercial FIDAP software has been used to solve the momentum and mass transfer equations in the capillary channel and in the melt meniscus. Two types of shaper design are studied and the results are in good agreement with the void distribution observed in rod-shaped sapphire crystals grown by the EFG method in the various configurations.

  13. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of a solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost-effective module.

  14. Phase II Groundwater Flow Model of Corrective Action Unit 98: Frenchman Flat, Nevada Test Site, Nye County, Nevada, Rev. No.: 0

    Energy Technology Data Exchange (ETDEWEB)

    John McCord

    2006-05-01

    The Phase II Frenchman Flat groundwater flow model is a key element in the "Federal Facility Agreement and Consent Order" (FFACO) (1996) corrective action strategy for the Underground Test Area (UGTA) Frenchman Flat corrective action unit (CAU). The objective of this integrated process is to provide an estimate of the vertical and horizontal extent of contaminant migration for each CAU to predict contaminant boundaries. A contaminant boundary is the model-predicted perimeter that defines the extent of radionuclide-contaminated groundwater from underground testing above background conditions exceeding the "Safe Drinking Water Act" (SDWA) standards. The contaminant boundary will be composed of both a perimeter boundary and a lower hydrostratigraphic unit (HSU) boundary. The computer model will predict the location of this boundary within 1,000 years and must do so at a 95 percent level of confidence. Additional results showing contaminant concentrations and the location of the contaminant boundary at selected times will also be presented. These times may include the verification period, the end of the five-year proof-of-concept period, as well as other times that are of specific interest. This report documents the development and implementation of the groundwater flow model for the Frenchman Flat CAU. Specific objectives of the Phase II Frenchman Flat flow model are to: (1) Incorporate pertinent information and lessons learned from the Phase I Frenchman Flat CAU models. (2) Develop a three-dimensional (3-D), mathematical flow model that incorporates the important physical features of the flow system and honors CAU-specific data and information. (3) Simulate the steady-state groundwater flow system to determine the direction and magnitude of groundwater fluxes based on calibration to Frenchman Flat hydrogeologic data. (4) Quantify the uncertainty in the direction and magnitude of groundwater flow due to uncertainty in
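
    The steady-state flux calculations in objective (3) ultimately rest on Darcy's law. A minimal sketch of that relation, with purely hypothetical values (not Frenchman Flat data, and far simpler than the 3-D calibrated model described above):

    ```python
    def darcy_flux(K, dh, dL):
        """Darcy's law q = -K * dh/dL: specific discharge (m/d) from
        hydraulic conductivity K (m/d) and head change dh (m) over a
        flow-path length dL (m)."""
        return -K * dh / dL

    # Hypothetical: K = 0.5 m/d and a 2 m head drop over 1 km of aquifer.
    q = darcy_flux(0.5, -2.0, 1000.0)  # 0.001 m/d, directed down-gradient
    ```
    
    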

  15. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    Science.gov (United States)

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
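
    The logistic regression framework used above maps basin and storm predictors to a probability between 0 and 1. A minimal sketch of the functional form; the coefficients and predictors below are hypothetical placeholders, not the USGS-fitted equations:

    ```python
    import math

    def debris_flow_likelihood(x, beta0, betas):
        """Logistic model: p = 1 / (1 + exp(-(beta0 + sum(b_i * x_i))))."""
        z = beta0 + sum(b * xi for b, xi in zip(betas, x))
        return 1.0 / (1.0 + math.exp(-z))

    # Hypothetical predictors (e.g. fraction of basin burned at
    # moderate-or-high severity, peak 15-min rainfall intensity in mm/h)
    # with made-up coefficients, purely for illustration:
    p = debris_flow_likelihood([0.6, 24.0], beta0=-3.0, betas=[2.5, 0.08])
    ```

    Because the output is a probability, thresholding or ranking `p` across basins is what turns the regression into a hazard assessment.
    
    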

  16. Synthesis of a parallel data stream processor from data flow process networks

    NARCIS (Netherlands)

    Zissulescu-Ianculescu, Claudiu

    2008-01-01

    In this talk, we address the problem of synthesizing Process Network specifications to FPGA execution platforms. The process networks we consider are special cases of Kahn Process Networks. We call them COMPAAN Data Flow Process Networks (CDFPN) because they are provided by a translator called the C

  17. Dynamic evolution process of turbulent channel flow after opposition control

    Science.gov (United States)

    Ge, Mingwei; Tian, De; Yongqian, Liu

    2017-02-01

    Dynamic evolution of turbulent channel flow after application of opposition control (OC), together with the mechanism of drag reduction, is studied through direct numerical simulation (DNS). In the simulation, the pressure gradient is kept constant, and the flow rate increases due to drag reduction. In the transport of mean kinetic energy (MKE), one part of the energy from the external pressure is dissipated by the mean shear, and the other part is transported to the turbulent kinetic energy (TKE) through a TKE production term (TKP). It is found that the increase of MKE is mainly induced by the reduction of TKP that is directly affected by OC. Further analysis shows that the suppression of the redistribution term of TKE in the wall-normal direction plays a key role in drag reduction, which represses the wall-normal velocity fluctuation and then reduces TKP through the attenuation of its main production term. When OC is suddenly applied, an acute imbalance of energy in space is induced by the wall blowing and suction. Both the skin-friction and TKP terms exhibit a transient growth in the initial phase of OC, which can be attributed to the local effect of the control in the viscous sublayer. Project supported by the National Natural Science Foundation of China (Grant Nos. 11402088 and 51376062), the State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (Grant No. LAPS15005), and the Fundamental Research Funds for the Central Universities (Grant No. 2014MS33).

  18. Development Status of Power Processing Unit for 250mN-Class Hall Thruster

    Science.gov (United States)

    Osuga, H.; Suzuki, K.; Ozaki, T.; Nakagawa, T.; Suga, I.; Tamida, T.; Akuzawa, Y.; Suzuki, H.; Soga, Y.; Furuichi, T.; Maki, S.; Matui, K.

    2008-09-01

    Institute for Unmanned Space Experiment Free Flyer (USEF) and Mitsubishi Electric Corporation (MELCO) are developing the next generation ion engine system under the sponsorship of the Ministry of Economy, Trade and Industry (METI) within six years. The system requirement specifications are a thrust level of over 250 mN and a specific impulse of over 1,500 s with a less than 5 kW electric power supply, and a lifetime of over 3,000 hours. These target specifications required the development of both a Hall Thruster and a Power Processing Unit (PPU). In the 2007 fiscal year, the PPU called the Second Engineering Model (EM2), consisting of all power supplies, was built as a model for the Hall Thruster system. The EM2 PPU showed a discharge efficiency of over 96.2% at 250 V and 350 V for output powers between 1.8 kW and 4.5 kW. The Hall Thruster could also start up quickly and smoothly, controlling the discharge voltage, the inner magnet current, the outer magnet current and the xenon flow rate. This paper reports on the design and test results of the EM2 PPU.

  19. The significance of late-stage processes in lava flow emplacement: squeeze-ups in the 2001 Etna flow field

    Science.gov (United States)

    Applegarth, L. J.; Pinkerton, H.; James, M. R.

    2009-04-01

    The general processes associated with the formation and activity of ephemeral boccas in lava flow fields are well documented (e.g. Pinkerton & Sparks 1976; Polacci & Papale 1997). The importance of studying such behaviour is illustrated by observations of the emplacement of a basaltic andesite flow at Parícutin during the 1940s. Following a pause in advance of one month, this 8 km long flow was reactivated by the resumption of supply from the vent, which forced the rapid drainage of stagnant material in the flow front region. The material extruded during drainage was in a highly plastic state (Krauskopf 1948), and its displacement allowed hot fluid lava from the vent to be transported in a tube to the original flow front, from where it covered an area of 350,000 m2 in one night (Luhr & Simkin 1993). Determining when a flow has stopped advancing, and cannot be drained in such a manner, is therefore highly important in hazard assessment and flow modelling, and our ability to do this may be improved through the examination of relatively small-scale secondary extrusions and boccas. The 2001 flank eruption of Mt. Etna, Sicily, resulted in the emplacement of a 7 km long compound ʻaʻā flow field over a period of 23 days. During emplacement, many ephemeral boccas were observed in the flow field, which were active for between two and at least nine days. The longer-lived examples initially fed well-established flows that channelled fresh material from the main vent. With time, as activity waned, the nature of the extruded material changed. The latest stages of development of all boccas involved the very slow extrusion of material that was either draining from higher parts of the flow or being forced out of the flow interior as changing local flow conditions pressurised parts of the flow that had been stagnant for some time. Here we describe this late-stage activity of the ephemeral boccas, which resulted in the formation of ‘squeeze-ups’ of lava with a markedly different

  20. The Aluminum Deep Processing Project of North United Aluminum Landed in Qijiang

    Institute of Scientific and Technical Information of China (English)

    2014-01-01

    On April 10, North United Aluminum Company signed investment cooperation agreements with Qijiang Industrial Park and with Qineng Electricity & Aluminum Co., Ltd., signifying the landing of North United Aluminum's aluminum deep processing project in Qijiang.

  1. Advanced Recording and Preprocessing of Physiological Signals. [data processing equipment for flow measurement of blood flow by ultrasonics

    Science.gov (United States)

    Bentley, P. B.

    1975-01-01

    The measurement of the volume flow rate of blood in an artery or vein requires both an estimate of the flow velocity and its spatial distribution and the corresponding cross-sectional area. Transcutaneous measurements of these parameters can be performed using ultrasonic techniques that are analogous to the measurement of moving objects by radar. Modern digital data recording and preprocessing methods were applied to the measurement of blood-flow velocity by means of the CW Doppler ultrasonic technique. Only the average flow velocity was measured; no distribution or size information was obtained. Evaluations of current flowmeter design and performance, ultrasonic transducer fabrication methods, and other related items are given. The main thrust was the development of effective data-handling and processing methods through the application of modern digital techniques. The evaluation resulted in useful improvements in both the flowmeter instrumentation and the ultrasonic transducers. Effective digital processing algorithms that provided enhanced blood-flow measurement accuracy and sensitivity were developed. Block diagrams illustrative of the equipment setup are included.
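
    The CW Doppler velocity estimate referred to above comes from the standard Doppler equation. A minimal sketch with hypothetical instrument values (the report does not quote specific numbers here):

    ```python
    import math

    def doppler_velocity(f_doppler, f_carrier, angle_deg, c_sound=1540.0):
        """CW Doppler equation v = f_d * c / (2 * f0 * cos(theta)),
        where c_sound is the assumed speed of sound in tissue (m/s)
        and theta the angle between the beam and the flow direction."""
        return f_doppler * c_sound / (
            2.0 * f_carrier * math.cos(math.radians(angle_deg)))

    # Hypothetical: a 1.3 kHz Doppler shift at a 5 MHz carrier with a
    # 60 degree beam-to-vessel angle.
    v = doppler_velocity(1300.0, 5e6, 60.0)  # ~0.40 m/s average velocity
    ```

    Multiplying such an average velocity by the vessel cross-sectional area would give the volume flow rate the report is ultimately after.
    
    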

  2. User experience network. Supply gas failure alarm on Cardinal Health Infant Flow SiPAP units may not activate.

    Science.gov (United States)

    2009-07-01

    The supply gas failure alarm on Cardinal Health Infant Flow SiPAP units manufactured before April 2009 may not activate in the event of a gas supply loss if the device's silencer accessory is attached. However, the unit's FiO2 (fraction of inspired oxygen) and low-airway-pressure alarms will activate in such cases. If both of these alarms activate simultaneously, users should suspect a failure of the gas supply pressure. Identifying affected units requires testing that can be conducted during the device's next scheduled maintenance.

  3. Thermal/Heat Transfer Analysis Using a Graphic Processing Unit (GPU) Enabled Computing Environment Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project was to use GPU enabled computing to accelerate the analyses of heat transfer and thermal effects. Graphical processing unit (GPU)...

  4. Advanced In-Space Propulsion (AISP): High Temperature Boost Power Processing Unit (PPU) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The task is to investigate the technology path to develop a 10kW modular Silicon Carbide (SiC) based power processing unit (PPU). The PPU utilizes the high...

  5. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) Power Processing Unit (PPU) for Hall Effect...

  6. Power flow control and damping enhancement of a large wind farm using a superconducting magnetic energy storage unit

    DEFF Research Database (Denmark)

    Chen, S. S.; Wang, L.; Lee, W. J.

    2009-01-01

    A novel scheme using a superconducting magnetic energy storage (SMES) unit to perform both power flow control and damping enhancement of a large wind farm (WF) feeding to a utility grid is presented. The studied WF consisting of forty 2 MW wind induction generators (IGs) is simulated by an equiva...

  7. Modeling and flow analysis of pure nylon polymer for injection molding process

    Science.gov (United States)

    Nuruzzaman, D. M.; Kusaseh, N.; Basri, S.; Oumer, A. N.; Hamedon, Z.

    2016-02-01

    In the production of complex plastic parts, injection molding is one of the most popular industrial processes. This paper addresses the modeling and analysis of the flow of nylon (polyamide) polymer in the injection molding process. To determine the best molding conditions, a series of simulations are carried out using Autodesk Moldflow Insight software and the processing parameters are adjusted. This commercial mold-filling software simulates the cavity filling pattern along with temperature and pressure distributions in the mold cavity. In the modeling, as the plastic flows inside the mold cavity, flow parameters such as fill time, pressure, temperature, shear rate and warp at different locations in the cavity are analyzed. Overall, Moldflow is able to perform a relatively sophisticated analysis of the flow process of pure nylon. Prediction of mold-cavity filling is therefore very important and useful before a nylon plastic part is manufactured.

  8. Modeling summer month hydrological drought probabilities in the United States using antecedent flow conditions

    Science.gov (United States)

    Austin, Samuel H.; Nelms, David L.

    2017-01-01

    Climate change raises concern that risks of hydrological drought may be increasing. We estimate hydrological drought probabilities for rivers and streams in the United States (U.S.) using maximum likelihood logistic regression (MLLR). Streamflow data from winter months are used to estimate the chance of hydrological drought during summer months. Daily streamflow data collected from 9,144 stream gages from January 1, 1884 through January 9, 2014 provide hydrological drought streamflow probabilities for July, August, and September as functions of streamflows during October, November, December, January, and February, estimating outcomes 5-11 months ahead of their occurrence. Few drought prediction methods exploit temporal links among streamflows. We find MLLR modeling of drought streamflow probabilities exploits the explanatory power of temporally linked water flows. MLLR models with strong correct classification rates were produced for streams throughout the U.S. One ad hoc test of correct prediction rates of September 2013 hydrological droughts exceeded 90% correct classification. Some of the best-performing models coincide with areas of high concern including the West, the Midwest, Texas, the Southeast, and the Mid-Atlantic. Using hydrological drought MLLR probability estimates in a water management context can inform understanding of drought streamflow conditions, provide warning of future drought conditions, and aid water management decision making.
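The core of the MLLR approach, fitting a logistic model of a binary summer-drought indicator to antecedent winter flows by maximum likelihood, can be sketched as follows. The data are synthetic, and the simple gradient-ascent fit stands in for whatever estimation routine the authors used:

```python
# Maximum-likelihood logistic regression (MLLR) sketch: predict a binary
# summer hydrological drought indicator from standardized winter streamflow.
# Synthetic data, not the USGS gage records.
import numpy as np

rng = np.random.default_rng(0)
n = 500
winter_flow = rng.normal(0.0, 1.0, n)                  # standardized winter streamflow
logit = -0.5 - 2.0 * winter_flow                       # low winter flow -> high drought odds
drought = rng.random(n) < 1.0 / (1.0 + np.exp(-logit)) # summer drought indicator (0/1)

X = np.column_stack([np.ones(n), winter_flow])         # intercept + predictor
w = np.zeros(2)
for _ in range(20000):                                 # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.01 * X.T @ (drought - p) / n

print(w)  # roughly recovers the generating coefficients (-0.5, -2.0)
```

The fitted negative slope is what links low antecedent flows to elevated drought probability months ahead.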

  9. Scale Effects in Laboratory and Pilot-Plant Reactors for Trickle-Flow Processes Les conséquences de l'extrapolation appliquée aux procédés à écoulement ruisselant réalisés en laboratoire et dans les réacteurs des unités-pilotes

    Directory of Open Access Journals (Sweden)

    Sie S. T.

    2006-11-01

    Research and development studies in a laboratory are necessarily conducted on a scale orders of magnitude smaller than commercial practice. In the development and commercialization of an unprecedented novel process technology, available laboratory results have to be translated into the envisaged technology on a commercial scale, i.e. the problem is one of scaling-up. However, in many circumstances the commercial technology is more or less defined as far as the type of reactor is concerned, and laboratory studies are concerned with generating predictive information on the behaviour of new catalysts, alternative feedstocks, etc., in such a reactor. In many cases the complexity of feed composition and reaction kinetics precludes predictions based on a combination of fundamental kinetic data and computer models, so there is no option other than to simulate the commercial reactor on a laboratory scale, i.e. the problem is one of scaling-down. From the point of view of R&D efficiency, the scale of the laboratory experiments should be as small as possible without detracting from the meaningfulness of the results. In the present paper some problems in the scaling-down of a trickle-flow reactor, as applied in hydrotreating processes, to kinetically equivalent laboratory reactors of different sizes are discussed. Two main aspects relating to inequalities in fluid dynamics resulting from the differences in scale are treated in more detail, viz. deviations from ideal plug flow and non-ideal wetting or irrigation of the catalyst particles. Although a laboratory reactor can never be a true small-scale replica of a commercial trickle-flow reactor in all respects, it can nevertheless be made to provide representative data as far as the catalytic conversion aspects are concerned. By resorting to measures such as catalyst bed dilution with fine catalytically inert material it proves possible to

  10. A review of concentrated flow erosion processes on rangelands: Fundamental understanding and knowledge gaps

    Directory of Open Access Journals (Sweden)

    Sayjro K. Nouwakpo

    2016-06-01

    Concentrated flow erosion processes are distinguished from splash and sheetflow processes in their enhanced ability to mobilize and transport large amounts of soil, water and dissolved elements. On rangelands, soil, nutrients and water are scarce and only narrow margins of resource losses are tolerable before crossing the sustainability threshold. In these ecosystems, concentrated flow processes are perceived as indicators of degradation and often warrant the implementation of mitigation strategies. Nevertheless, this negative perception of concentrated flow processes may conflict with the need to improve understanding of the role of these transport vessels in redistributing water, soil and nutrients along the rangeland hillslope. Vegetation influences the development and erosion of concentrated flowpaths and has been the primary factor used to control and mitigate erosion on rangelands. At the ecohydrologic level, vegetation and concentrated flow pathways are engaged in a feedback relationship, the understanding of which might help improve rangeland management and restoration strategies. In this paper, we review published literature on experimental and conceptual research pertaining to concentrated flow processes on rangelands to: (1) present the fundamental science underpinning concentrated flow erosion modeling in these landscapes, (2) discuss the influence of vegetation on these erosion processes, (3) evaluate the contribution of concentrated flow erosion to the overall sediment budget and (4) identify knowledge gaps.

  11. Non-equilibrium reacting gas flows kinetic theory of transport and relaxation processes

    CERN Document Server

    Nagnibeda, Ekaterina; Nagnibeda, Ekaterina

    2009-01-01

    This volume develops the kinetic theory of transport phenomena and relaxation processes in the flows of reacting gas mixtures. The theory is applied to the modeling of non-equilibrium flows behind strong shock waves, in the boundary layer, and in nozzles.

  12. Computational Flow Dynamic Simulation of Micro Flow Field Characteristics Drainage Device Used in the Process of Oil-Water Separation

    Directory of Open Access Journals (Sweden)

    Guangya Jin

    2017-01-01

    Aqueous crude oil often contains large amounts of produced water and heavy sediment, which seriously threatens the safety of crude oil storage and transportation. Therefore, the proper design of a crude oil tank drainage device is a prerequisite for efficient purification of aqueous crude oil. In this work, the composition and physicochemical properties of crude oil samples were tested under the actual conditions encountered. Based on these data, an appropriate crude oil tank drainage device was developed using the principle of the floating ball and multiphase flow. In addition, the flow field characteristics in the device were simulated, and the contours and streamtraces of velocity magnitude at nine different moments were obtained. Meanwhile, the improvement of flow field characteristics after the addition of grids in the crude oil tank drainage device was validated. These findings provide insights into the development of effective selection methods and serve as important references for the oil-water separation process.

  13. PROCESS INTENSIFICATION: MICROWAVE INITIATED REACTIONS USING A CONTINUOUS FLOW REACTOR

    Science.gov (United States)

    The concept of process intensification has been used to develop a continuous narrow channel reactor at Clarkson capable of carrying out reactions under isothermal conditions whilst being exposed to microwave (MW) irradiation thereby providing information on the true effect of mi...

  14. Stochastic flows, reaction-diffusion processes, and morphogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, J.J.; Hatlee, M.D.; Musho, M.K.; Politowicz, P.A.; Walsh, C.A.

    1983-02-01

    Recently, an exact procedure was introduced (C. A. Walsh and J. J. Kozak, Phys. Rev. Lett. 47: 1500 (1981)) for calculating the expected walk length for a walker undergoing random displacements on a finite or infinite (periodic) d-dimensional lattice with traps (reactive sites). The method (based on a classification of the symmetry of the sites surrounding the central deep trap and a coding of the fate of the random walker as it encounters a site of given symmetry) is applied here to several problems in lattice statistics, for each of which exact results are presented. First, we assess the importance of lattice geometry in influencing the efficiency of reaction-diffusion processes in simple and multiple trap systems by reporting values of the expected walk length for square (cubic) versus hexagonal lattices in d = 2,3. We then show how the method may be applied to variable-step (distance-dependent) walks for a single walker on a given lattice and also demonstrate the calculation of the expected walk length for the case of multiple walkers. Finally, we make contact with recent discussions of "mixing" by showing that the degree of chaos associated with flows in certain lattice systems can be calibrated by monitoring the lattice walks induced by the Poincaré map of a certain parabolic function.
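The expected walk length that the exact procedure computes can be cross-checked by brute force. A Monte Carlo sketch for a small periodic square lattice with a single deep trap; the lattice size and trial count are illustrative choices, not values from the paper:

```python
# Monte Carlo estimate of the expected walk length <n> to trapping for a
# random walker on a periodic d=2 square lattice with one deep trap at the
# origin. A brute-force stand-in for the exact lattice-statistics method.
import random

def mean_walk_length(size=5, trials=20000, seed=1):
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    steps_total = 0
    for _ in range(trials):
        # start at a site chosen uniformly from the non-trap sites
        while True:
            x, y = rng.randrange(size), rng.randrange(size)
            if (x, y) != (0, 0):
                break
        steps = 0
        while (x, y) != (0, 0):        # walk until the trap is hit
            dx, dy = rng.choice(moves)
            x, y = (x + dx) % size, (y + dy) % size  # periodic boundaries
            steps += 1
        steps_total += steps
    return steps_total / trials

print(mean_walk_length())  # around 30 for a 5x5 periodic lattice
```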

  15. Numerical calculation of flow and heat transfer process in the new-type external combustion swirl-flowing hot stove

    Institute of Scientific and Technical Information of China (English)

    Shuchen Zhang; Hongzhi Guo; Xiangjun Liu; Zhangping Cai; Xiancheng Gao; Sidong Xu

    2003-01-01

    It is pointed out that an important way to raise the blast temperature of small and medium blast furnaces, which account for about two-thirds of total production in China, from 1000℃ to 1250-1300℃ is to preheat both the combustion-supporting air and the coal gas. Blast temperatures of 1250-1300℃ can be reached by burning blast furnace coal gas alone if high-speed burners are applied and the new-type external combustion swirl-flowing hot stove is used to preheat the combustion-supporting air. Computational results for the flow and heat transfer processes in the hot stove show that there is no eccentric flow at the surface of the bed of thermal storage balls and that the flow field and temperature field distributions are even. The computed blast temperature distributions are similar to the measured data. The numerical results also provide references for developing and designing new-type external combustion swirl-flowing hot stoves.

  16. Identification of vortices in a transonic compressor flow and the stall process

    Institute of Scientific and Technical Information of China (English)

    HUANG Xu-dong; CHEN Hai-xin; FU Song; David Wisler; Aspi Wadia; G. Scott McNulty

    2007-01-01

    A novel vortex identification method for visualization of the flow field is used to study the stall process of a transonic compressor. The parameter η4, one of the five invariants formed by the strain rate and vorticity tensors in the theory of modern rational mechanics, is found to identify vortex stretching and vortex relaxation/breakdown processes well, and is introduced here to identify the tip leakage vortices. Compared with the commonly used dynamic pressure head (DPH) contours, the new method reveals much more flow detail, which may advance understanding of compressor behavior. Vortex details are revealed at both the peak-efficiency and near-stall conditions. A possible stall process is also suggested based on the vortex analysis. The tip leakage flow from mid-chord, in addition to the leading-edge leakage flow, is considered to play an important role in the stall process.

  17. Scaling up ecohydrological processes: role of surface water flow in water-limited landscapes

    CSIR Research Space (South Africa)

    Popp, A

    2009-11-01

    …microscale processes like ecohydrological feedback mechanisms and spatial exchange like surface water flow, the authors derive transition probabilities from a fine-scale simulation model. They applied two versions of the landscape model, one that includes...

  18. Overview of flow studies for recycling metal commodities in the United States

    Science.gov (United States)

    Sibley, Scott F.

    2011-01-01

    Metal supply consists of primary material from a mining operation and secondary material, which is composed of new and old scrap. Recycling, which is the use of secondary material, can contribute significantly to metal production, sometimes accounting for more than 50 percent of raw material supply. From 2001 to 2011, U.S. Geological Survey (USGS) scientists studied 26 metals to ascertain the status and magnitude of their recycling industries. The results were published in chapters A-Z of USGS Circular 1196, entitled, "Flow Studies for Recycling Metal Commodities in the United States." These metals were aluminum (chapter W), antimony (Q), beryllium (P), cadmium (O), chromium (C), cobalt (M), columbium (niobium) (I), copper (X), germanium (V), gold (A), iron and steel (G), lead (F), magnesium (E), manganese (H), mercury (U), molybdenum (L), nickel (Z), platinum (B), selenium (T), silver (N), tantalum (J), tin (K), titanium (Y), tungsten (R), vanadium (S), and zinc (D). Each metal commodity was assigned to a single year: chapters A-M have recycling data for 1998; chapters N-R and U-W have data for 2000, and chapters S, T, and X-Z have data for 2004. This 27th chapter of Circular 1196 is called AA; it includes salient data from each study described in chapters A-Z, along with an analysis of overall trends of metals recycling in the United States during 1998 through 2004 and additional up-to-date reviews of selected metal recycling industries from 1991 through 2008. In the United States for these metals in 1998, 2000, and 2004 (each metal commodity assigned to a single year), 84 million metric tons (Mt) of old scrap was generated. Unrecovered old scrap totaled 43 Mt (about 51 percent of old scrap generated, OSG), old scrap consumed was 38 Mt (about 45 percent of OSG), and net old scrap exports were 3.3 Mt (about 4 percent of OSG). Therefore, there was significant potential for increased recovery from scrap. The total old scrap supply was 88 Mt, and the overall new
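The percentages quoted above follow directly from the reported tonnages. A quick arithmetic check of the old-scrap shares (figures in million metric tons, Mt, as given in the text):

```python
# Sanity-check of the old-scrap percentages quoted in the overview.
old_scrap_generated = 84.0   # Mt of old scrap generated (OSG)
unrecovered = 43.0           # Mt unrecovered
consumed = 38.0              # Mt consumed
net_exports = 3.3            # Mt net old scrap exports

for label, mt in [("unrecovered", unrecovered),
                  ("consumed", consumed),
                  ("net exports", net_exports)]:
    share = 100.0 * mt / old_scrap_generated
    print(f"{label}: {share:.0f}% of old scrap generated")
```

This reproduces the "about 51 percent", "about 45 percent", and "about 4 percent" figures in the abstract.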

  19. Investigation of Multiscale and Multiphase Flow, Transport and Reaction in Heavy Oil Recovery Processes

    Energy Technology Data Exchange (ETDEWEB)

    Yortsos, Yanis C.

    2002-10-08

    In this report, the thrust areas include the following: internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities, and the flow of fluids with yield stress. These find applications, respectively, in foamy oils and the evolution of dissolved gas; internal steam drives and the mechanics of concurrent and countercurrent vapor-liquid flows associated with thermal methods and steam injection such as SAGD; in-situ combustion; the upscaling of displacements in heterogeneous media; and the flow of foams, Bingham plastics, and heavy oils in porous media, including the development of wormholes during cold production.

  20. Model of coupled gas flow and deformation process in heterogeneous coal seams and its application

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chun-hui; ZHAO Quan-sheng; YU Yong-jiang

    2011-01-01

    The heterogeneity of coal was studied by mechanical tests. Probability plots of the experimental data show that the mechanical parameters of heterogeneous coal follow a Weibull distribution. Based on elasto-plastic mechanics and gas dynamics, a model of the coupled gas flow and deformation process in heterogeneous coal was presented, and the effects of coal heterogeneity on gas flow and failure of the coal were investigated. Major findings include: the model captures the effect of coal heterogeneity on both gas flow and mechanical failure, and failure of the coal strongly affects gas flow.
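Assigning element strengths drawn from a Weibull distribution is the standard way such heterogeneity is represented numerically. A minimal sketch; the shape and scale parameters are illustrative, not fitted values from the paper:

```python
# Sample heterogeneous element strengths from a Weibull distribution,
# as used to represent coal heterogeneity. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)
shape_m = 3.0     # Weibull shape (homogeneity index): larger m -> more uniform coal
scale_s0 = 20.0   # characteristic strength, MPa
strengths = scale_s0 * rng.weibull(shape_m, size=10000)  # element strengths, MPa

# Mean of Weibull(m, s0) is s0 * Gamma(1 + 1/m) ~ 17.86 MPa for these values
print(strengths.mean())
```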

  1. Taming hazardous chemistry in flow: the continuous processing of diazo and diazonium compounds.

    Science.gov (United States)

    Deadman, Benjamin J; Collins, Stuart G; Maguire, Anita R

    2015-02-02

    The synthetic utilities of the diazo and diazonium groups are matched only by their reputation for explosive decomposition. Continuous processing technology offers new opportunities to make and use these versatile intermediates at a range of scales with improved safety over traditional batch processes. In this minireview, the state of the art in the continuous flow processing of reactive diazo and diazonium species is discussed.
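The practical safety lever in such continuous processing is the short, well-defined residence time of the hazardous intermediate in the reactor, set by reactor volume and flow rate. A trivial sketch with hypothetical values:

```python
# Residence time in a continuous flow reactor: the key handle for limiting
# the inventory of reactive diazo/diazonium intermediates at any instant.
# Reactor volume and flow rate below are hypothetical.
def residence_time_s(reactor_volume_ml, flow_rate_ml_min):
    """Residence time in seconds for a given coil volume and total flow rate."""
    return 60.0 * reactor_volume_ml / flow_rate_ml_min

# Example: a 2 mL coil reactor at a 0.5 mL/min total flow rate
print(residence_time_s(2.0, 0.5))  # 240.0 seconds
```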

  2. Automated processing of whole blood units: operational value and in vitro quality of final blood components

    Science.gov (United States)

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    Background The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Materials and methods Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. Results The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. Discussion These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement. PMID:22044958

  3. Resource Analysis of Cognitive Process Flow Used to Achieve Autonomy

    Science.gov (United States)

    2016-03-01

    …the software is fixed (e.g., many FPGA implementations). The structural complexity reflected in the state space is determined by the… reflected in the state-space trajectories that move in multiple state-space dimensions simultaneously. In synchronous processing the trajectories… an important component of the configuration state would be the strengths of synaptic connections between neurons. [Figure 7: Decomposition of Internal System]

  4. Relationships among the Energy, Emergy and Money Flows of the United States from 1900 to 2011

    Directory of Open Access Journals (Sweden)

    Daniel Elliott Campbell

    2014-10-01

    Energy Systems Language models of the resource base for the U.S. economy and of economic exchange were used, respectively, (1) to show how energy consumption and emergy use contribute to real and nominal GDP and (2) to propose a model of coupled flows that explains high correlations of these inputs with measures of market-based economic activity. We examined a 3rd power law model of growth supported by excess resources and found evidence that it has governed U.S. economic growth since 1900, i.e., nominal GDP was best explained by a power function of total emergy use with exponent 2.8. We used a weight of evidence approach to identify relationships among emergy, energy, and money flows in the U.S. from 1900 to 2011. All measures of quality adjusted energy consumption had a relationship with nominal GDP that was best described by a hyperbolic function plus a constant, and the relationship between all measures of energy consumption and real GDP was best described by a 2nd order polynomial. The fact that energy consumption per unit of real GDP declined after 1996 as real GDP continued to increase indicates that energy conservation or a shift toward less energy intensive industries has resulted in lower fossil fuel use and reduced CO2 emissions, while maintaining growth in real GDP. Since all energy consumption measures vs. real GDP deviated from a power law relationship after 1996, whereas total emergy use did not, we concluded that total emergy use captured more of the factors responsible for the increase in real GDP than did energy measures alone, and as a result, total emergy use may be the best measure to quantify the biophysical basis for social and economic activity in the information age.
The Emergy to Money Ratio measured as solar emjoules per nominal $ followed a decreasing trend from a high of 1.01E+14 semj/$ in 1902 to 1.56E+12 semj/$ in 2011 with fluctuations in its value corresponding to major periods of inflation and deflation over this
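The reported power-law relationship (nominal GDP as a power function of total emergy use with exponent ~2.8) is the kind of fit obtained by linear regression in log-log space. A sketch on synthetic data, not the historical U.S. series:

```python
# Log-log least-squares fit of a power law y = a * x^k, illustrating how an
# exponent like the paper's ~2.8 would be estimated. Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
emergy = np.linspace(1.0, 10.0, 50)                            # stand-in emergy series
gdp = 2.0 * emergy ** 2.8 * np.exp(rng.normal(0.0, 0.05, 50))  # noisy power law

# A power law is linear in log-log space: log y = k * log x + log a
k, log_a = np.polyfit(np.log(emergy), np.log(gdp), 1)
print(k)  # fitted exponent, close to the generating value 2.8
```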

  5. The International Space Station Alpha (ISSA) End-to-End On-Orbit Maintenance Process Flow

    Science.gov (United States)

    Zingrebe, Kenneth W., II

    1995-01-01

    As a tool for construction and refinement of the on-orbit maintenance system to sustain the International Space Station Alpha (ISSA), the Mission Operations Directorate (MOD) developed an end-to-end on-orbit maintenance process flow. This paper discusses and demonstrates that process flow. This tool is being used by MOD to identify areas which require further work in preparation for MOD's role in the conduct of on-orbit maintenance operations.

  6. Generalized Fleming-Viot processes with immigration via stochastic flows of partitions

    CERN Document Server

    Foucart, Clément

    2011-01-01

    The generalized Fleming-Viot processes were defined in 1999 by Donnelly and Kurtz using a particle model and by Bertoin and Le Gall in 2003 using stochastic flows of bridges. In both methods, the key argument used to characterize these processes is the duality between these processes and exchangeable coalescents. A larger class of coalescent processes, called distinguished coalescents, was recently set up to incorporate an immigration phenomenon in the underlying population. The purpose of this article is to define and characterize a class of probability-measure-valued processes called generalized Fleming-Viot processes with immigration. We consider some stochastic flows of partitions of Z_{+}, in the same spirit as Bertoin and Le Gall's flows, replacing, roughly speaking, composition of bridges by coagulation of partitions. Identifying at any time a population with the integers $\\mathbb{N}:=\\{1,2,...\\}$, the formalism of partitions is effective in the past as well as in the future, especially when there ar...

  7. Sediment transfer processes in a debris-flow dominated catchment in the Swiss Alps

    Science.gov (United States)

    McArdell, B. W.; Berger, C.; Schlunegger, F.

    2009-12-01

    The transfer of sediment from steep hillslopes into channels and subsequent mobilization remains a problem with implications for the development of landscapes as well as applications in natural hazards mitigation. The Illgraben catchment in the Swiss Alps is among the most active catchments in Europe, with several 100,000 cubic meters of sediment exported from the catchment (active area …) by debris flows every year, providing an exceptional opportunity to investigate the transfer of sediment from hillslopes to the outlet of the channel at the distal end of the alluvial fan. Thirty-four debris flows or similar torrential flash-flood/hyperconcentrated flows have been recorded at the debris flow observation station since the year 2000. Data are available for many flow properties including front velocity (max. 10 m/s) and front flow depth (max. 3.25 m), as well as estimates of debris flow volume (max. 85,000 cubic meters). Flow bulk density data are also available from a large force plate installation for most flows since 2004, permitting estimation of sediment export from the catchment by debris flows. The channel morphology is strongly affected by these events, and debris flows can increase their volume considerably by entraining material from the channel bed. Aerial photography of the initiation area and upper catchment (fall 2007, early summer and fall 2008; fall 2009 is planned) and photogrammetric analyses allow detection of areas of land surface elevation change (deposition or erosion). Strong hillslope-channel coupling is expected, with sediment delivery to the steep torrent channels by rockfall and other mass-movement processes. The upper catchment is generally quite active, yet the main sediment source of debris flows varies from event to event. In some cases it was possible to identify the movement of small landslides into torrent channels and the subsequent removal by debris flows. In other cases no landslide activity was obvious and the sediment for the

  8. GATE REGULATION SPEED AND TRANSITION PROCESS OF UNSTEADY FLOW IN CHANNEL

    Institute of Scientific and Technical Information of China (English)

    TAN Guang-ming; DING Zhi-liang; WANG Chang-de; YAO Xiong

    2008-01-01

    Channel operation methods and the speed of gate regulation have a great influence on flow transitions in water conveyance channels. Based on the method of characteristics, a 1-D unsteady-flow numerical model for gate regulation was established in this study. The water flow process was simulated under different boundary conditions, and the influence of gate regulation speed and channel operation methods on the flow transition process was analyzed. The numerical results show that, under otherwise identical conditions, increasing the regulation speed of the gate increases the rates of change of discharge and water level and shortens the response time of the channel, while the discharge and water level ultimately transition to the same equilibrium states. Moreover, the flow reaches a stable state more easily if the water level in front of the sluice, rather than behind it, is kept constant. This study is important for the design of automatic operation control schemes in water conveyance channels.
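The qualitative finding that a faster gate ramp produces larger rates of change of discharge can be illustrated with a much simpler level-pool (storage) model than the paper's 1-D characteristics scheme. The geometry, discharge coefficients, and linear gate ramp below are all illustrative assumptions:

```python
# Level-pool sketch of how gate regulation speed shapes the discharge
# transition. Not the paper's 1-D method-of-characteristics model; all
# parameter values are illustrative.
import math

def max_discharge_rate(ramp_s, t_end=3000.0, dt=1.0):
    """Peak |dQ/dt| (m^3/s per second) during a linear gate-opening ramp."""
    area, h, q_in = 1.0e5, 4.0, 20.0       # storage area (m^2), depth (m), inflow (m^3/s)
    cd_a0, cd_a1 = 2.0, 4.0                # effective gate opening Cd*A before/after (m^2)
    q_prev = cd_a0 * math.sqrt(2.0 * 9.81 * h)
    peak, t = 0.0, 0.0
    while t < t_end:
        frac = min(t / ramp_s, 1.0)        # linear gate ramp over ramp_s seconds
        cd_a = cd_a0 + frac * (cd_a1 - cd_a0)
        q_out = cd_a * math.sqrt(2.0 * 9.81 * h)   # free-outflow gate equation
        peak = max(peak, abs(q_out - q_prev) / dt)
        h += dt * (q_in - q_out) / area    # mass balance on channel storage
        q_prev, t = q_out, t + dt
    return peak

# A ten-times faster gate ramp gives a roughly ten-times larger peak rate of change
print(max_discharge_rate(60.0), max_discharge_rate(600.0))
```

Both runs settle toward the same equilibrium discharge (the steady inflow), mirroring the abstract's conclusion that only the transition, not the final state, depends on regulation speed.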

  9. Heat transfer and fluid flow in biological processes advances and applications

    CERN Document Server

    Becker, Sid

    2015-01-01

    Heat Transfer and Fluid Flow in Biological Processes covers emerging areas in fluid flow and heat transfer relevant to biosystems and medical technology. This book uses an interdisciplinary approach to provide a comprehensive prospective on biofluid mechanics and heat transfer advances and includes reviews of the most recent methods in modeling of flows in biological media, such as CFD. Written by internationally recognized researchers in the field, each chapter provides a strong introductory section that is useful to both readers currently in the field and readers interested in learning more about these areas. Heat Transfer and Fluid Flow in Biological Processes is an indispensable reference for professors, graduate students, professionals, and clinical researchers in the fields of biology, biomedical engineering, chemistry and medicine working on applications of fluid flow, heat transfer, and transport phenomena in biomedical technology. Provides a wide range of biological and clinical applications of fluid...

  10. Regional Groundwater Processes and Flow Dynamics from Age Tracer Data

    Science.gov (United States)

    Morgenstern, Uwe; Stewart, Mike K.; Matthews, Abby

    2016-04-01

    Age tracers are now used in New Zealand on regional scales for quantifying the impact and lag time of land use and climate change on the quantity and quality of available groundwater resources within the framework of the National Policy Statement for Freshwater Management 2014. Age tracers provide measurable information on the dynamics of groundwater systems and reaction rates (e.g. denitrification), essential for conceptualising the regional groundwater - surface water system and informing the development of land use and groundwater flow and transport models. In the Horizons Region of New Zealand, around 200 wells have tracer data available, including tritium, SF6, CFCs, 2H, 18O, Ar, N2, CH4 and radon. Well depths range from shallower wells in gravel aquifers in the Horowhenua and Tararua districts to deeper wells in the aquifers between Palmerston North and Wanganui. Most of the groundwater samples around and north of the Manawatu River west of the Tararua ranges are extremely old (>100 years), even from relatively shallow wells, indicating that these groundwaters are relatively disconnected from fresh surface recharge. The groundwater wells in the Horowhenua tap into a considerably younger groundwater reservoir with groundwater mean residence time (MRT) of 10 - 40 years. Groundwater along the eastern side of the Tararua and Ruahine ranges is typically significantly younger. Groundwater recharge rates, as deduced from groundwater depth and MRT, are extremely low in the central coastal area, consistent with confined groundwater systems, or with upwelling of old groundwater close to the coast. Very low vertical recharge rates along the Manawatu River west of the Manawatu Gorge indicate upwelling groundwater conditions in this area, implying groundwater discharge into the river is more likely here than loss of river water into the groundwater system. High recharge rates observed at several wells in the Horowhenua area and in the area east of the Tararua and
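Mean residence time (MRT) is typically inferred by convolving the tracer input history with a transit-time distribution. A sketch of the steady-state case for an exponential (well-mixed) lumped-parameter model with tritium decay, using a constant synthetic input rather than the New Zealand rainfall record:

```python
# Lumped-parameter (exponential mixing) sketch of how tritium concentration
# constrains groundwater mean residence time (MRT). The constant input
# concentration is a synthetic assumption, not the measured record.
import math

LAMBDA = math.log(2.0) / 12.32   # tritium decay constant, 1/yr (half-life 12.32 yr)

def output_concentration(mrt_yr, c_in=5.0, tau_max=500.0, d_tau=0.01):
    """Steady output (tritium units) for a constant input c_in passed through
    the exponential transit-time distribution g(tau) = exp(-tau/MRT)/MRT,
    with radioactive decay applied along each flow path."""
    total, tau = 0.0, 0.5 * d_tau   # midpoint-rule integration over tau
    while tau < tau_max:
        g = math.exp(-tau / mrt_yr) / mrt_yr
        total += c_in * g * math.exp(-LAMBDA * tau) * d_tau
        tau += d_tau
    return total

# Older water (larger MRT) carries less tritium; analytic value is c_in/(1 + lambda*MRT)
for mrt in (10.0, 40.0, 100.0):
    print(mrt, round(output_concentration(mrt), 2))
```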

  11. Biodiesel and FAME synthesis assisted by microwaves: Homogeneous batch and flow processes

    Energy Technology Data Exchange (ETDEWEB)

    J. Hernando; P. Leton; M.P. Matia; J.L. Novella; J. Alvarez-Builla [Universidad de Alcala, Madrid (Spain). Planta Piloto de Quimica Fina

    2007-07-15

    Fatty acid methyl esters (FAME) have been prepared under microwave irradiation, using homogeneous catalysis, either in batch or in a flow system. The quality of the biodiesel obtained has been confirmed by GC analysis of the isolated product. While the initial experiments were performed in a small-scale laboratory batch reactor, the best experiment was straightforwardly converted into a stop-flow process through the use of a microwave flow system. Compared with conventional heating methods, the process using microwave irradiation proved to be a faster method for alcoholysis of triglycerides with methanol, leading to high yields of FAME. Short communication. 19 refs., 2 tabs.

  12. Laminar flow and convective transport processes scaling principles and asymptotic analysis

    CERN Document Server

    Brenner, Howard

    1992-01-01

    Laminar Flow and Convective Transport Processes: Scaling Principles and Asymptotic Analysis presents analytic methods for the solution of fluid mechanics and convective transport processes, all in the laminar flow regime. This book brings together the results of almost 30 years of research on the use of nondimensionalization, scaling principles, and asymptotic analysis into a comprehensive form suitable for presentation in a core graduate-level course on fluid mechanics and the convective transport of heat. A considerable amount of material on viscous-dominated flows is covered. A unique feat

  13. THE ASYMPTOTIC PROPERTIES OF SUPERCRITICAL BISEXUAL GALTON-WATSON BRANCHING PROCESSES WITH IMMIGRATION OF MATING UNITS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this article, the supercritical bisexual Galton-Watson branching process with immigration of mating units is considered. A necessary condition for almost sure convergence and a sufficient condition for L1 convergence are given for the suitably normed process.

  14. A Framework for Smart Distribution of Bio-signal Processing Units in M-Health

    NARCIS (Netherlands)

    Mei, Hailiang; Widya, Ing; Broens, Tom; Pawar, Pravin; Halteren, van Aart; Shishkov, Boris; Sinderen, van Marten

    2007-01-01

    This paper introduces the Bio-Signal Processing Unit (BSPU) as a functional component that hosts (part of) the bio-signal information processing algorithms that are needed for an m-health application. With our approach, the BSPUs can be dynamically assigned to available nodes between the bio-signal

  15. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    Science.gov (United States)

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-01

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. The integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  16. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphic processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.
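
    The abstract does not include code; as a hedged illustration of the underlying technique, the sketch below shows, sequentially in plain Python, the pairwise halving-tree pattern that CUDA parallel-reduction kernels implement. The function name is ours, not the paper's.

```python
# Illustrative sketch (not from the paper): the pairwise halving-tree
# pattern behind CUDA parallel reduction, written sequentially. Each
# while-iteration mirrors one synchronized step of a thread block:
# element i is combined with its partner at i + half.

def tree_reduce(values, op):
    """Reduce `values` with an associative operator `op` via a halving tree."""
    data = list(values)
    n = len(data)
    while n > 1:
        half = (n + 1) // 2
        for i in range(n // 2):
            data[i] = op(data[i], data[i + half])
        n = half
    return data[0]

print(tree_reduce(range(1, 9), lambda a, b: a + b))  # 36, i.e. sum of 1..8
```

    On a GPU the inner loop runs in parallel across threads with a barrier between rounds, so the number of rounds is logarithmic in the input size.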

  17. A numerical study on the flow and heat transfer characteristics in a noncontact glass transportation unit

    Energy Technology Data Exchange (ETDEWEB)

    Im, Ik Tae; Park, Chan Woo [Chonbuk National University, Jeonju (Korea, Republic of); Kim, Kwang Sun [Korea University of Technology and Education, Chonan (Korea, Republic of)

    2009-12-15

    Vertical sputtering systems are key equipment in the manufacture of liquid crystal display (LCD) panels. During the sputtering process for LCD panels, a glass plate is transported between chambers for various processes, such as deposition of chemicals on the surface. The minimization of surface scratches and damage to the glass, the rate of consumption of gas, and the stability of the floating glass-plate are key considerations in the design of a gas pad. To develop new, non-contact systems of transportation for large, thin glass plates, various shapes of the nozzle of a gas pad unit were considered in this study. In the proposed nozzle design, negative pressure was used to suppress undesirable fluctuations of the glass plate. After the nozzle's shape was varied through numerical simulations in two dimensions, we determined the optimal shape, after which three-dimensional analyses were carried out to verify the results from the two-dimensional analyses. The rate of heat transfer from the glass plate, as a result of the gas jet, was also investigated. The average Nusselt number at the glass surface varied from 22.7 to 26.6 depending on the turbulence model, while the value from the correlation for the jet array was 23.5. It was found that the well-established correlation equation of the Nusselt number for the circular jet array can be applied to the cooling of the glass plates.

  18. Vadose zone processes that control landslide initiation and debris flow propagation

    Science.gov (United States)

    Sidle, Roy C.

    2015-04-01

    Advances in the areas of geotechnical engineering, hydrology, mineralogy, geomorphology, geology, and biology have individually advanced our understanding of factors affecting slope stability; however, the interactions among these processes and attributes as they affect the initiation and propagation of landslides and debris flows are not well understood. Here the importance of interactive vadose zone processes is emphasized related to the mechanisms, initiation, mode, and timing of rainfall-initiated landslides that are triggered by positive pore water accretion, loss of soil suction and increase in overburden weight, and long-term cumulative rain water infiltration. Large- and small-scale preferential flow pathways can both contribute to and mitigate instability, by concentrating and dispersing subsurface flow, respectively. These mechanisms are influenced by soil structure, lithology, landforms, and biota. Conditions conducive to landslide initiation by infiltration versus exfiltration are discussed relative to bedrock structure and joints. The effects of rhizosphere processes on slope stability are examined, including root reinforcement of soil mantles, evapotranspiration, and how root structures affect preferential flow paths. At a larger scale, the nexus between hillslope landslides and in-channel debris flows is examined with emphasis on understanding the timing of debris flows relative to chronic and episodic infilling processes, as well as the episodic nature of large rainfall and related stormflow generation in headwater streams. Understanding the hydrogeomorphic processes and conditions that determine whether or not landslides immediately mobilize into debris flows is important for predicting the timing and extent of devastating debris flow runout in steep terrain. Given the spatial footprint of individual landslides, it is necessary to assess vadose zone processes at appropriate scales to ascertain impacts on mass wasting phenomena. Articulating the appropriate

  19. Command decoder unit. [performance tests of data processing terminals and data converters for space shuttle orbiters

    Science.gov (United States)

    1976-01-01

    The design and testing of laboratory hardware (a command decoder unit) used in evaluating space shuttle instrumentation, data processing, and ground check-out operations is described. The hardware was a modification of another similar instrumentation system. A data bus coupler was designed and tested to interface the equipment to a central bus controller (computer). A serial digital data transfer mechanism was also designed. Redundant power supplies and overhead modules were provided to minimize the probability of a single component failure causing a catastrophic failure. The command decoder unit is packaged in a modular configuration to allow maximum user flexibility in configuring a system. Test procedures and special test equipment for use in testing the hardware are described. Results indicate that the unit will allow NASA to evaluate future software systems for use in space shuttles. The units were delivered to NASA and appear to be adequately performing their intended function. Engineering sketches and photographs of the command decoder unit are included.

  20. Activation process in excitable systems with multiple noise sources: Large number of units

    CERN Document Server

    Franović, Igor; Todorović, Kristina; Kostić, Srđan; Burić, Nikola

    2015-01-01

    We study the activation process in large assemblies of type II excitable units whose dynamics is influenced by two independent noise terms. The mean-field approach is applied to explicitly demonstrate that the assembly of excitable units can itself exhibit macroscopic excitable behavior. In order to facilitate the comparison between the excitable dynamics of a single unit and an assembly, we introduce three distinct formulations of the assembly activation event. Each formulation treats different aspects of the relevant phenomena, including the threshold-like behavior and the role of coherence of individual spikes. Statistical properties of the assembly activation process, such as the mean time-to-first pulse and the associated coefficient of variation, are found to be qualitatively analogous for all three formulations, as well as to resemble the results for a single unit. These analogies are shown to derive from the fact that global variables undergo a stochastic bifurcation from the stochastically stable fix...
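
    The two statistics named above can be computed from any collection of simulated activation times. A minimal sketch follows; the exponential waiting times are synthetic stand-ins for the model's pulses, not the paper's data, and the function name is ours.

```python
# Sketch of the statistics named above: the mean time-to-first-pulse
# and its coefficient of variation (std / mean), computed over trials.
# The exponential waiting times below are synthetic stand-ins for
# simulated activation events, not the paper's data.
import math
import random

def first_pulse_stats(times):
    """Return (mean, coefficient of variation) of first-pulse times."""
    mean = sum(times) / len(times)
    var = sum((t - mean) ** 2 for t in times) / len(times)
    return mean, math.sqrt(var) / mean

random.seed(0)
trials = [random.expovariate(1.0 / 20.0) for _ in range(10000)]
mean_t, cv = first_pulse_stats(trials)
# For exponential waiting times the coefficient of variation is near 1.
```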

  1. Unit Process Wetlands for Removal of Trace Organic Contaminants and Pathogens from Municipal Wastewater Effluents

    Science.gov (United States)

    Jasper, Justin T.; Nguyen, Mi T.; Jones, Zackary L.; Ismail, Niveen S.; Sedlak, David L.; Sharp, Jonathan O.; Luthy, Richard G.; Horne, Alex J.; Nelson, Kara L.

    2013-01-01

    Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe. PMID:23983451

  2. A Shipping Container-Based Sterile Processing Unit for Low Resources Settings.

    Science.gov (United States)

    Boubour, Jean; Jenson, Katherine; Richter, Hannah; Yarbrough, Josiah; Oden, Z Maria; Schuler, Douglas A

    2016-01-01

    Deficiencies in the sterile processing of medical instruments contribute to poor outcomes for patients, such as surgical site infections, longer hospital stays, and deaths. In low resources settings, such as some rural and semi-rural areas and secondary and tertiary cities of developing countries, deficiencies in sterile processing are accentuated due to the lack of access to sterilization equipment, improperly maintained and malfunctioning equipment, lack of power to operate equipment, poor protocols, and inadequate quality control over inventory. Inspired by our sterile processing fieldwork at a district hospital in Sierra Leone in 2013, we built an autonomous, shipping-container-based sterile processing unit to address these deficiencies. The sterile processing unit, dubbed "the sterile box," is a full suite capable of handling instruments from the moment they leave the operating room to the point they are sterile and ready to be reused for the next surgery. The sterile processing unit is self-sufficient in power and water and features an intake for contaminated instruments, decontamination, sterilization via non-electric steam sterilizers, and secure inventory storage. To validate efficacy, we ran tests of decontamination and sterilization performance. Results of 61 trials validate convincingly that our sterile processing unit achieves satisfactory outcomes for decontamination and sterilization and as such holds promise to support healthcare facilities in low resources settings.

  3. A Shipping Container-Based Sterile Processing Unit for Low Resources Settings.

    Directory of Open Access Journals (Sweden)

    Jean Boubour


  4. High Power Silicon Carbide (SiC) Power Processing Unit Development

    Science.gov (United States)

    Scheidegger, Robert J.; Santiago, Walter; Bozak, Karin E.; Pinero, Luis R.; Birchenough, Arthur G.

    2015-01-01

    NASA GRC successfully designed, built and tested a technology-push power processing unit for electric propulsion applications that utilizes high voltage silicon carbide (SiC) technology. The development specifically addresses the need for high power electronics to enable electric propulsion systems in the 100s of kilowatts. This unit demonstrated how high voltage combined with superior semiconductor components resulted in exceptional converter performance.

  5. Experience in design and startup of distillation towers in primary crude oil processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Y.N.; D'yakov, V.G.; Mamontov, G.V.; Sheinman, V.A.; Ukhin, V.V.

    1985-11-01

    This paper describes a refinery in the city of Mathura, India, with a capacity of 7 million metric tons of crude per year, designed and constructed to include the following units: AVT for primary crude oil processing; catalytic cracking; visbreaking; asphalt; and other units. A diagram of the atmospheric tower with stripping sections is shown, and the stabilizer tower is illustrated. The startup and operation of the AVT and visbreaking units are described, and they demonstrate the high reliability and efficiency of the equipment.

  6. Program note: applying the UN process indicators for emergency obstetric care to the United States.

    Science.gov (United States)

    Lobis, S; Fry, D; Paxton, A

    2005-02-01

    The United Nations Process Indicators for emergency obstetric care (EmOC) have been used extensively in countries with high maternal mortality ratios (MMR) to assess the availability, utilization and quality of EmOC services. To compare the situation in high MMR countries to that of a low MMR country, data from the United States were used to determine EmOC service availability, utilization and quality. As was expected, the United States was found to have an adequate amount of good-quality EmOC services that are used by the majority of women with life-threatening obstetric complications.

  7. Event-triggered logical flow control for comprehensive process integration of multi-step assays on centrifugal microfluidic platforms.

    Science.gov (United States)

    Kinahan, David J; Kearney, Sinéad M; Dimov, Nikolay; Glynn, Macdara T; Ducrée, Jens

    2014-07-01

    The centrifugal "lab-on-a-disc" concept has proven to have great potential for process integration of bioanalytical assays, in particular where ease-of-use, ruggedness, portability, fast turn-around time and cost efficiency are of paramount importance. Yet, as all liquids residing on the disc are exposed to the same centrifugal field, an inherent challenge of these systems remains the automation of multi-step, multi-liquid sample processing and subsequent detection. In order to orchestrate the underlying bioanalytical protocols, an ample palette of rotationally and externally actuated valving schemes has been developed. While excelling in the level of flow control, externally actuated valves require interaction with peripheral instrumentation, thus compromising the conceptual simplicity of the centrifugal platform. In turn, for rotationally controlled schemes, such as common capillary burst valves, typical manufacturing tolerances tend to limit the number of consecutive laboratory unit operations (LUOs) that can be automated on a single disc. In this paper, a major advancement on recently established dissolvable film (DF) valving is presented; for the very first time, a liquid handling sequence can be controlled in response to the completion of a preceding liquid-transfer event, i.e. completely independently of external stimulus or changes in the speed of disc rotation. The basic, event-triggered valve configuration is further adapted to leverage conditional, large-scale process integration. First, we demonstrate a fluidic network on a disc encompassing 10 discrete valving steps including logical relationships such as an AND-conditional as well as serial and parallel flow control. Then we present a disc which is capable of implementing common laboratory unit operations such as metering and selective routing of flows. Finally, as a pilot study, these functions are integrated on a single disc to automate a common, multi-step lab protocol for the extraction of total RNA from
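
    The event-triggered logic described above, with valves releasing only once the preceding transfers have completed, including AND-conditionals and serial or parallel branches, can be pictured as a dependency graph. The toy Python sketch below works at that abstraction only; it is not the paper's fluidic design, and the names are ours.

```python
# Toy sketch (not the paper's design): event-triggered flow control as
# a dependency graph. A valve fires only when all the liquid-transfer
# events it depends on have completed (an AND-conditional); serial and
# parallel branches fall out of the same rule.

def run_protocol(dependencies):
    """dependencies: valve -> set of valves that must fire first.
    Returns the firing order, honoring the AND-conditions."""
    done, order = set(), []
    pending = dict(dependencies)
    while pending:
        ready = [v for v, deps in pending.items() if deps <= done]
        if not ready:
            raise RuntimeError("deadlocked valve network")
        for v in sorted(ready):  # deterministic order for ties
            done.add(v)
            order.append(v)
            del pending[v]
    return order

# Two parallel branches (B, C) after A; D requires both to finish (AND).
print(run_protocol({"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}))
```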

  8. Numerical Simulation on Flow and Heat Transfer Performance of Air-cooler for a Natural Gas Storage Compressor Unit

    Science.gov (United States)

    Liu, Biyuan; Zhang, Feng; Ma, Zenghui; Zheng, Zilong; Feng, Jianmei

    2017-08-01

    Heat transfer efficiency has been a key issue for the large air coolers with noise reducers used in natural gas storage compressor units, especially when operated in summer with cooling air at a high temperature. A 3-D numerical simulation model of the whole air cooler was established with CFD software to study the flow field characteristics of different inlet and outlet structures. The system pressure loss distributions were calculated. The relationship among heat exchange efficiency, resistance loss, and the structure of the air cooler was obtained, and the results suggested methods to improve the cooling air flow rate and heat exchange efficiency. Based on these results, effective measures to improve heat exchanger efficiency were proposed and implemented in the actual operating unit.

  9. Processing the ground vibration signal produced by debris flows: the methods of amplitude and impulses compared

    Science.gov (United States)

    Arattano, M.; Abancó, C.; Coviello, V.; Hürlimann, M.

    2014-12-01

    Ground vibration sensors have been increasingly used and tested during the last few years as devices to monitor debris flows, and they have also been proposed as one of the more reliable devices for the design of debris flow warning systems. The need to process the output of ground vibration sensors, to diminish the amount of data to be recorded, is usually due to the limited storage capacity and power supply, normally provided by solar panels, available in the high mountain environment. Different methods can be found in the literature to process the ground vibration signal produced by debris flows. In this paper we discuss the two most commonly employed: the method of impulses and the method of amplitude. These two methods of data processing are analyzed by describing their origin and use, presenting examples of applications, and outlining their main advantages and shortcomings. The two methods are then applied to process the raw ground vibration data produced by a debris flow that occurred in the Rebaixader Torrent (Spanish Pyrenees) in 2012. The results of this work will support the decisions of researchers and technicians who face the task of designing a debris flow monitoring installation or debris flow warning equipment based on ground vibration detectors.
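
    The two processing methods compared in the paper can be sketched in a few lines. The window length, threshold, and sample values below are illustrative choices of ours, not the authors' settings:

```python
# Hedged illustration (parameters are ours, not the paper's): two
# standard ways to condense a geophone record. The amplitude method
# averages the rectified signal over a time window; the impulse method
# counts upward threshold crossings per window.

def amplitude_method(signal, window):
    """Mean absolute amplitude per non-overlapping window of samples."""
    return [sum(abs(x) for x in signal[i:i + window]) / window
            for i in range(0, len(signal), window)]

def impulse_method(signal, window, threshold):
    """Number of upward threshold crossings per window."""
    counts = []
    for i in range(0, len(signal), window):
        chunk = signal[i:i + window]
        counts.append(sum(1 for a, b in zip(chunk, chunk[1:])
                          if a < threshold <= b))
    return counts

sig = [0, 2, -1, 3, 0, 0, 1, 0]           # toy ground-vibration samples
print(amplitude_method(sig, 4))            # [1.5, 0.25]
print(impulse_method(sig, 4, threshold=1)) # [2, 1]
```

    Both reductions keep one number per window, which is what makes them practical under the storage and power constraints described above.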

  10. Energy and mass balances in multiple-effect upward solar distillers with air flow through the last-effect unit

    Energy Technology Data Exchange (ETDEWEB)

    Homing Yeh; Chiidong Ho [Tamkang Univ. Tamsui, Dept. of Chemical Engineering, Taipei Hsien (Taiwan)

    2000-04-01

    Considerable improvement in productivity may be obtained if water vapor in the last-effect unit is carried away directly by flowing air. The theory of a closed-type upward multiple-effect solar distiller has been modified to that of an open-type device, and the energy and mass balances have been derived. The production rate of distilled water for each effect under various climate, design, and operational conditions may be predicted by simultaneously solving the appropriate equations. (Author)

  11. A multiple-point geostatistical method for characterizing uncertainty of subsurface alluvial units and its effects on flow and transport

    Science.gov (United States)

    Cronkite-Ratcliff, C.; Phelps, G.A.; Boucher, A.

    2012-01-01

    This report provides a proof-of-concept to demonstrate the potential application of multiple-point geostatistics for characterizing geologic heterogeneity and its effect on flow and transport simulation. The study presented in this report is the result of collaboration between the U.S. Geological Survey (USGS) and Stanford University. This collaboration focused on improving the characterization of alluvial deposits by incorporating prior knowledge of geologic structure and estimating the uncertainty of the modeled geologic units. In this study, geologic heterogeneity of alluvial units is characterized as a set of stochastic realizations, and uncertainty is indicated by variability in the results of flow and transport simulations for this set of realizations. This approach is tested on a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. Yucca Flat was chosen as a data source for this test case because it includes both complex geologic and hydrologic characteristics and also contains a substantial amount of both surface and subsurface geologic data. Multiple-point geostatistics is used to model geologic heterogeneity in the subsurface. A three-dimensional (3D) model of spatial variability is developed by integrating alluvial units mapped at the surface with vertical drill-hole data. The SNESIM (Single Normal Equation Simulation) algorithm is used to represent geologic heterogeneity stochastically by generating 20 realizations, each of which represents an equally probable geologic scenario. A 3D numerical model is used to simulate groundwater flow and contaminant transport for each realization, producing a distribution of flow and transport responses to the geologic heterogeneity. From this distribution of flow and transport responses, the frequency of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary.
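
    The uncertainty indicator described above reduces to a simple frequency computation across realizations. A minimal sketch with invented data (the flattened grids, threshold, and function name are ours, for illustration only):

```python
# Minimal sketch of the uncertainty summary described above: given one
# simulated concentration field per equally probable realization, the
# per-cell frequency of exceeding a threshold indicates uncertainty
# about the location of the plume boundary.

def exceedance_frequency(realizations, threshold):
    """realizations: list of equal-length concentration grids (flattened).
    Returns, per cell, the fraction of realizations above threshold."""
    n = len(realizations)
    return [sum(1 for r in realizations if r[cell] > threshold) / n
            for cell in range(len(realizations[0]))]

fields = [[0.9, 0.2, 0.0],   # realization 1
          [0.8, 0.6, 0.1],   # realization 2
          [0.7, 0.1, 0.0],   # realization 3
          [0.9, 0.7, 0.0]]   # realization 4
print(exceedance_frequency(fields, threshold=0.5))  # [1.0, 0.5, 0.0]
```

    Cells near 0 or 1 are confidently outside or inside the plume; intermediate values flag where geologic heterogeneity makes the boundary uncertain.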

  12. Process Flow Features as a Host-Based Event Knowledge Representation

    Science.gov (United States)

    2012-06-14

    This work uses virtual machine introspection (VMI) to evaluate the novel use of host-based process flow feature clustering to model process behaviors.

  13. Overview of the Dissertation Process within the Framework of Flow Theory: A Qualitative Study

    Science.gov (United States)

    Cakmak, Esra; Oztekin, Ozge; Isci, Sabiha; Danisman, Sahin; Uslu, Fatma; Karadag, Engin

    2015-01-01

    The purpose of this study is to examine the flow of doctoral students who are also research assistants during the dissertation process. The study was designed using the case study method; the case undertaken was the dissertation process. Eleven participants were selected using maximum variation sampling. Face-to-face,…

  14. On the self-organizing process of large scale shear flows

    Energy Technology Data Exchange (ETDEWEB)

    Newton, Andrew P. L. [Department of Applied Maths, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Kim, Eun-jin [School of Mathematics and Statistics, University of Sheffield, Sheffield, Yorkshire S3 7RH (United Kingdom); Liu, Han-Li [High Altitude Observatory, National Centre for Atmospheric Research, P. O. BOX 3000, Boulder, Colorado 80303-3000 (United States)

    2013-09-15

    Self-organization is invoked as a paradigm to explore the processes governing the evolution of shear flows. By examining the probability density function (PDF) of the local flow gradient (shear), we show that shear flows reach a quasi-equilibrium state as the growth of shear is balanced by shear relaxation. Specifically, the PDFs of the local shear are calculated numerically and analytically in reduced 1D and 0D models, where the PDFs are shown to converge to a bimodal distribution in the case of finite correlated temporal forcing. This bimodal PDF is then shown to be reproduced in nonlinear simulations of 2D hydrodynamic turbulence. Furthermore, the bimodal PDF is demonstrated to result from a self-organizing shear flow with a linear profile. A similar bimodal structure and linear profile of the shear flow are observed in the Gulf Stream, suggesting self-organization.
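
    As a toy illustration of the 0D approach, the PDF of a shear-like variable driven by temporally correlated forcing can be estimated by histogramming a long time series. The stand-in dynamics below (a bistable relaxation plus Ornstein-Uhlenbeck forcing) are entirely ours, not the paper's model:

```python
# Toy 0D illustration (stand-in dynamics, not the paper's model):
# estimate the PDF of a shear-like variable u driven by temporally
# correlated (Ornstein-Uhlenbeck) forcing by histogramming a long run.
import random

def simulate_shear(steps, dt=0.01, tau=1.0, seed=1):
    random.seed(seed)
    u, f, series = 0.0, 0.0, []
    for _ in range(steps):
        # Forcing with finite correlation time tau.
        f += (-f / tau) * dt + random.gauss(0.0, dt ** 0.5)
        # Bistable relaxation: a crude stand-in for shear growth
        # balanced by shear relaxation.
        u += (u - u ** 3 + f) * dt
        series.append(u)
    return series

def pdf_histogram(series, bins=40, lo=-2.0, hi=2.0):
    """Normalized histogram estimate of the PDF on [lo, hi]."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for u in series:
        k = min(bins - 1, max(0, int((u - lo) / width)))
        counts[k] += 1
    return [c / (len(series) * width) for c in counts]

density = pdf_histogram(simulate_shear(50000))
```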

  15. [High flow nasal cannula in infants: Experience in a critical patient unit].

    Science.gov (United States)

    Wegner A, Adriana; Cespedes F, Pamela; Godoy M, María Loreto; Erices B, Pedro; Urrutia C, Luis; Venthur U, Carina; Labbé C, Marcela; Riquelme M, Hugo; Sanchez J, Cecilia; Vera V, Waldo; Wood V, David; Contreras C, Juan Carlos; Urrutia S, Efren

    2015-01-01

    The high flow nasal cannula (HFNC) is a method of respiratory support that is increasingly being used in paediatrics due to its results and safety. The aims were to determine the efficacy of HFNC, and to evaluate the factors related to its failure and the complications associated with its use in infants. An analysis was performed on the demographic, clinical, blood gas, and radiological data, as well as the complications, of patients connected to a HFNC in a critical care unit between June 2012 and September 2014. A comparison was made between the patients who failed and those who responded to HFNC. Failure was defined as the need for further respiratory support during the first 48 hours of connection. The Kolmogorov-Smirnov, Mann-Whitney U, chi-squared, and Fisher's exact tests were used, as well as correlations and a binary logistic regression model, for P≤.05. The study included 109 patients, with median age and weight of 1 month (0.2-20 months) and 3.7 kg (2-10 kg); 95th percentiles: 3.7 months and 5.7 kg, respectively. The most frequent diagnosis and radiological pattern were bronchiolitis (53.2%) and interstitial infiltration (56%). Overall, 70.6% responded. There was a significant difference between failure and response in diagnosis (P=.013), radiography (P=.018), connection context (P<.0001), pCO2 (median 40.7 mmHg [15.4-67 mmHg] versus 47.3 mmHg [28.6-71.3 mmHg], P=.004) and hours on HFNC (median 60.75 hrs [5-621.5 hrs] versus 10.5 hrs [1-29 hrs], P<.0001). The OR of pCO2 ≥ 55 mmHg for failure was 2.97 (95% CI; 1.08-8.17; P=.035). No patient died and no complications were recorded. The percentage success observed was similar to that published. In this sample, the failure of HFNC was only associated with an initial pCO2 ≥ 55 mmHg. With no complications reported regarding its use, it is considered safe, although a randomised, controlled, multicentre study is required to compare and contrast these results. Copyright © 2015 Sociedad Chilena de Pediatr

  16. Numerical simulation of gas-dynamic, thermal processes and evaluation of the stress-strain state in the modeling compressor of the gas-distributing unit

    Science.gov (United States)

    Shmakov, A. F.; Modorskii, V. Ya.

    2016-10-01

    This paper presents the results of numerical modeling of the gas-dynamic processes occurring in the flow path, thermal analysis, and evaluation of the stress-strain state of a three-stage design of the compressor of a gas-pumping unit. Physical and mathematical models of the processes were developed. Numerical simulation was carried out in the engineering software ANSYS 13. The problem is solved in a coupled statement, in which the results of the gas-dynamic calculation are transferred as boundary conditions for the evaluation of the thermal and stress-strain state of the three-stage compressor design. The basic parameters that affect the stress-strain state of the housing and the changing gaps of the labyrinth seals in the construction were determined. A method for analyzing the influence of the pumped gas flow on the strain of the construction was developed.

  17. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Krichinsky, A.M.

    1983-02-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to estimate compositions of materials in vessels involved in unit operations and chemical processes. This program has been implemented in a remotely operated nuclear fuel processing plant. NUMATH provides estimates of the steady-state composition of materials residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for, and cataloged in, container-oriented files. The estimated compositions represent materials collected in applicable vessels, including consideration of materials previously acknowledged in these vessels. The program utilizes process measurements and simple performance models to estimate material holdup and distribution within unit operations. In simulated run-testing, NUMATH typically produced estimates within 5% of the measured inventories for uranium and within 8% of the measured inventories for thorium during steady-state process operation.

  18. Grace: a Cross-platform Micromagnetic Simulator On Graphics Processing Units

    CERN Document Server

    Zhu, Ru

    2014-01-01

    A micromagnetic simulator running on a graphics processing unit (GPU) is presented. It achieves a significant performance boost compared to previous central processing unit (CPU) simulators, up to two orders of magnitude for large input problems. Unlike the GPU implementations of other research groups, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform compatible. It runs on GPUs from vendors including NVIDIA, AMD and Intel, which paves the way for fast micromagnetic simulation on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics. A copy of the simulator software is publicly available.

  19. Fast extended focused imaging in digital holography using a graphics processing unit.

    Science.gov (United States)

    Wang, Le; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen

    2011-05-01

    We present a simple and effective method for reconstructing extended focused images in digital holography using a graphics processing unit (GPU). The Fresnel transform method is simplified by an algorithm named fast Fourier transform pruning with frequency shift. Then the pixel size consistency problem is solved by coordinate transformation and combining the subpixel resampling and the fast Fourier transform pruning with frequency shift. With the assistance of the GPU, we implemented an improved parallel version of this method, which obtained about a 300-500-fold speedup compared with central processing unit codes.
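The core of the reconstruction described here is a Fresnel transform evaluated with an FFT. A minimal single-FFT sketch is shown below for orientation; the paper's pruned-FFT-with-frequency-shift and subpixel-resampling refinements are not reproduced, and the parameter names are illustrative:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, z, pixel):
    """Single-FFT Fresnel reconstruction of a digital hologram:
    multiply by a quadratic chirp, FFT, and take the magnitude."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx // 2) * pixel
    y = (np.arange(ny) - ny // 2) * pixel
    X, Y = np.meshgrid(x, y)
    chirp = np.exp(1j * k / (2 * z) * (X**2 + Y**2))   # quadratic phase factor
    field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
    return np.abs(field)                               # intensity at distance z
```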

  20. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
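The computational burden the cluster addresses is the point-source superposition kernel, whose cost scales with the product of object points and hologram pixels. A hedged CPU sketch of that kernel (parameter names and the phase-hologram output are illustrative, not the paper's exact formulation):

```python
import numpy as np

def cgh_from_points(points, nx, ny, pitch, wavelength):
    """Brute-force CGH: superpose spherical waves from 3D object points
    onto the hologram plane -- the O(N_points * N_pixels) inner loop
    that multi-GPU systems parallelize."""
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    field = np.zeros((ny, nx), dtype=complex)
    for px, py, pz, amp in points:                 # one spherical wave per point
        r = np.sqrt((X - px)**2 + (Y - py)**2 + pz**2)
        field += amp * np.exp(1j * k * r) / r
    return np.angle(field)                         # phase-only hologram
```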

  1. Investigation of Multiscale and Multiphase Flow, Transport and Reaction in Heavy Oil Recovery Processes

    Energy Technology Data Exchange (ETDEWEB)

    Yortsos, Yanis C.

    2001-08-07

    This project is an investigation of various multi-phase and multiscale transport and reaction processes associated with heavy oil recovery. The thrust areas of the project include the following: Internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities and the flow of fluids with yield stress. These find respective applications in foamy oils, the evolution of dissolved gas, internal steam drives, the mechanics of concurrent and countercurrent vapor-liquid flows, associated with thermal methods and steam injection, such as SAGD, the in-situ combustion, the upscaling of displacements in heterogeneous media and the flow of foams, Bingham plastics and heavy oils in porous media and the development of wormholes during cold production.

  2. Investigation of Multiscale and Multiphase Flow, Transport and Reaction in Heavy Oil Recovery Processes

    Energy Technology Data Exchange (ETDEWEB)

    Yortsos, Y.C.

    2001-05-29

    This report is an investigation of various multi-phase and multiscale transport and reaction processes associated with heavy oil recovery. The thrust areas of the project include the following: Internal drives, vapor-liquid flows, combustion and reaction processes, fluid displacements and the effect of instabilities and heterogeneities and the flow of fluids with yield stress. These find respective applications in foamy oils, the evolution of dissolved gas, internal steam drives, the mechanics of concurrent and countercurrent vapor-liquid flows, associated with thermal methods and steam injection, such as SAGD, the in-situ combustion, the upscaling of displacements in heterogeneous media and the flow of foams, Bingham plastics and heavy oils in porous media and the development of wormholes during cold production.

  3. Wildfire-related debris-flow generation through episodic progressive sediment-bulking processes, western USA

    Science.gov (United States)

    Cannon, S.H.; Gartner, J.E.; Parrett, C.; Parise, M.; ,

    2003-01-01

    Debris-flow initiation processes on hillslopes recently burned by wildfire differ from those generally recognized on unburned, vegetated hillslopes. These differences result from fire-induced changes in the hydrologic response to rainfall events. In this study, detailed field and aerial photographic mapping, observations, and measurements of debris-flow events from three sites in the western U.S. are used to describe and evaluate the process of episodic progressive sediment bulking of storm runoff that leads to the generation of post-wildfire debris flows. Our data demonstrate the effects of material erodibility, sediment availability on hillslopes and in channels, the degree of channel confinement, the formation of continuous channel incision, and the upslope contributing area and its gradient on the generation of flows and the magnitude of the response. © 2003 Millpress.

  4. The Physical Flow of Materials and the Associated Costs in the Production Process of a Rolling Mill

    Directory of Open Access Journals (Sweden)

    Holisz-Burzyńska, J.

    2007-01-01

    Full Text Available The efficiency of resource use is, to a large extent, determined by the organization of production flow and the way it is controlled. The optimization of material flow in the production process requires the identification of the physical flows of goods and their costs. This article presents the physical flow of the materials stream in the production process of a Polish rolling mill, together with its logistics and cost analysis.

  5. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation using multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms of data in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
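The two steps moved to the GPU (spatial filtering as a matrix-matrix product, then per-channel spectral power) can be sketched on the CPU as follows. This is a hedged sketch: a simple FFT periodogram stands in for the auto-regressive spectral estimator used in the paper, and the band limits are illustrative:

```python
import numpy as np

def bci_features(data, spatial_filter, fs, band=(8, 30)):
    """(1) Spatial filter as a matrix-matrix product, then
    (2) per-channel band power from a periodogram."""
    filtered = spatial_filter @ data                        # (n_out, n_samples)
    spectrum = np.abs(np.fft.rfft(filtered, axis=1)) ** 2   # power per frequency
    freqs = np.fft.rfftfreq(filtered.shape[1], d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[:, mask].mean(axis=1)                   # band power per channel
```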

  6. 3D visualization of the material flow in friction stir welding process

    Institute of Scientific and Technical Information of China (English)

    Zhao Yanhua; Lin Sanbao; Shen Jiajie; Wu Lin

    2005-01-01

    The material flow in friction stir welded 2014 Al alloy has been investigated using a marker insert technique (MIT). Results of the flow visualization show that the material flow is asymmetrical during the friction stir welding (FSW) process, and there are significant differences between the flow patterns observed on the advancing and retreating sides. On the advancing side, some material transports forward and some moves backward, but on the retreating side, material only transports backward. At the top surface of the weld, significant material transports forward due to the action of the rotating tool shoulder. Combining the data from all the markers, a three-dimensional flow visualization, similar to the 3D image reconstruction technique, was obtained. The three-dimensional plot gives the tendency chart of material flow in the friction stir welding process, and from the plot it can be seen that there is a vertical, circular motion around the longitudinal axis of the weld. On the advancing side of the weld the material is pushed downward, but on the retreating side the material is pushed toward the crown of the weld. The net result of the two relative motions on the advancing and retreating sides is that a circular motion comes into being. Comparatively, the material flow around the longitudinal axis is a secondary motion.

  7. Analysis of nuclear material flow for experimental DUPIC fuel fabrication process at DFDF

    Energy Technology Data Exchange (ETDEWEB)

    Lee, H. H.; Park, J. J.; Shin, J. M.; Lee, J. W.; Yang, M. S.; Baik, S. Y.; Lee, E. P

    1999-08-01

    This report describes the facilities, manufacturing process, and equipment necessary for DUPIC fuel manufacturing experiments. Nuclear material flows among facilities, in PIEF and IMEF, for irradiation tests, post-irradiation examination of DUPIC fuel, quality control, chemical analysis, and treatment of radioactive waste have been analyzed in detail. This may be helpful for DUPIC project participants and facility engineers working in related facilities to understand the overall flow of nuclear material and radioactive waste. (Author). 14 refs., 15 tabs., 41 figs.

  8. A New Method to Track Resin Flow Fronts in Mold Filling Simulation of RTM Process

    Institute of Scientific and Technical Information of China (English)

    Fuhong DAI; Shanyi DU; Boming ZHANG; Dianfu WAN

    2004-01-01

    A new method to track resin flow fronts, referred to as the topological interpolated method (TIM), based on the filling states and topological relations of adjacent nodes, is proposed. An experiment on the mould filling process was conducted. The method was compared with exact solutions and the experimental results, and good agreement was observed. Numerical and experimental comparisons with the conventional contour method were also carried out, showing that TIM can enhance the local accuracy of flow front solutions with respect to the contour method when merging flow fronts and resin approaching the mold wall are involved.

  9. A finite element modeling on the fluid flow and solidification in a continuous casting process

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T.H.; Kim, D.S. [Hanyang University Graduate School, Seoul (Korea); Choi, H.C. [Agency for Defence Development, Taejon (Korea); Kim, S.W. [Hanyang University, Seoul (Korea); Lee, S.K. [Chung Buk National University, Chungju (Korea)

    1999-07-01

    The coupled turbulent flow and solidification is considered in a typical slab continuous casting process using the commercial program FIDAP. The standard κ-ε turbulence model is modified to decay the turbulent viscosity in the mushy zone, and the laminar viscosity is set to a sufficiently large value in the solid region. This coupled turbulent flow and solidification model also accounts for the thermal contact resistance due to the mold powder and the air gap between the strand and mold by using an effective thermal conductivity. From the computed flow pattern, the trajectories of inclusion particles were calculated. The comparison between the predicted and experimental solidified shell thickness shows good agreement. (author). 27 refs., 11 figs., 2 tabs.
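The turbulence-model modification described here (decaying turbulent viscosity through the mushy zone, very large laminar viscosity in the solid) can be sketched as a simple effective-viscosity switch. The linear ramp and the magnitude of the solid-region viscosity are assumptions; the abstract does not give the actual decay law:

```python
def effective_viscosity(mu_t, liquid_fraction, mu_large=1e5):
    """Effective viscosity as a function of liquid fraction fl:
    fl = 0 (solid)  -> very large viscosity to freeze the flow,
    0 < fl < 1      -> turbulent viscosity decayed (assumed linear ramp),
    fl = 1 (liquid) -> full turbulent viscosity."""
    if liquid_fraction <= 0.0:          # solid region
        return mu_large
    if liquid_fraction >= 1.0:          # fully liquid
        return mu_t
    return mu_t * liquid_fraction       # decay across the mushy zone
```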

  10. Effect of Vertical Flow Exchange on Biogeochemical Processes in Hyporheic Zones

    Science.gov (United States)

    Kim, H.; Lee, S.; Shin, D.; Hyun, Y.; Lee, K.

    2008-12-01

    Biogeochemical processes in hyporheic zones are of great interest because they make the hyporheic zones highly productive and complex environments. When contaminants or polluted water pass through hyporheic zones, in particular, biogeochemical processes play an important role in removing contaminants or attenuating contamination under certain conditions. The study site, a reach of Munsan stream (Paju-si, South Korea), exhibits severe contamination of surface water by nitrate released from a nearby Water Treatment Plant (WTP). The objectives of this study are to investigate the hydrologic and biogeochemical processes in the riparian area of the site, which may contribute to natural attenuation of surface-water-derived nitrate, and to analyze the effect of vertical (hyporheic) flow exchange on the biogeochemical processes in the area. To examine hydraulic mixing and dilution processes, vertical hydraulic gradients were measured at several depth levels using minipiezometers, and soil temperatures were measured using iButton loggers installed inside the minipiezometers. Microbial analyses by means of polymerase chain reaction (PCR)-cloning methods were also performed to identify the denitrification process in soil samples. In addition, the correlations between vertical flow exchange, temperature data, and denitrifying bacteria activity were investigated. The results showed that vertical flow exchange and hyporheic soil temperature had significant effects on the biogeochemical processes of the site. This study found strong support for the idea that the biogeochemical function of the hyporheic zone is a predictable outcome of the interaction between microbial activity and flow exchange.

  11. Process and structure: resource management and the development of sub-unit organisational structure.

    Science.gov (United States)

    Packwood, T; Keen, J; Buxton, M

    1992-03-01

    Resource Management (RM) requires hospital units to manage their work in new ways, and the new management processes affect, and are affected by, organisation structure. This paper is concerned with these effects, reporting on the basis of a three-year evaluation of the national RM experiment that was commissioned by the DH. After briefly indicating some of the major characteristics of the RM process, the two main types of unit structures existing in the pilot sites at the beginning of the experiment, unit disciplinary structure and clinical directorates, are analysed. At the end of the experiment, while clinical directorates had become more popular, another variant, clinical grouping, had replaced the unit disciplinary structure. Both types of structure represent a movement towards sub-unit organisation, bringing the work and interests of the service providers and unit managers closer together. Their properties are likewise analysed and their implications, particularly in terms of training and organisational development (OD), are then considered. The paper concludes by considering the causes for these structural changes, which, in the immediate time-scale, appear to owe as much to the NHS Review as to RM.

  12. [Applying graphics processing unit in real-time signal processing and visualization of ophthalmic Fourier-domain OCT system].

    Science.gov (United States)

    Liu, Qiaoyan; Li, Yuejie; Xu, Qiujing; Zhao, Jincheng; Wang, Liwei; Gao, Yonghe

    2013-01-01

    This investigation introduces GPU (Graphics Processing Unit)-based CUDA (Compute Unified Device Architecture) technology into the signal processing of an ophthalmic FD-OCT (Fourier-Domain Optical Coherence Tomography) imaging system. It realizes parallel data processing, using CUDA to optimize the relevant operations and algorithms, in order to remove the technical bottlenecks that currently limit real-time ophthalmic OCT imaging. Laboratory results showed that, with the GPU as a general parallel computing processor, data processing in GPU+CPU mode is dozens of times faster than serial computing and imaging on a traditional CPU platform for the same workload, which meets the clinical requirements for two-dimensional real-time imaging.
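The per-A-scan workload that such systems parallelize is the standard FD-OCT processing chain. A hedged CPU sketch of one A-scan (background subtraction, spectral windowing, inverse FFT, log-magnitude); the exact steps and window used by this system are not specified in the abstract:

```python
import numpy as np

def fdoct_ascan(spectrum, background):
    """Standard FD-OCT A-scan processing: remove the reference/DC term,
    window the spectrum, transform to depth, and convert to dB."""
    signal = spectrum - background              # background subtraction
    signal = signal * np.hanning(signal.size)   # suppress FFT sidelobes
    depth = np.fft.ifft(signal)
    half = depth[: signal.size // 2]            # keep positive depths only
    return 20 * np.log10(np.abs(half) + 1e-12)  # log-magnitude in dB
```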

  13. Empirical analysis of the lane formation process in bidirectional pedestrian flow

    Science.gov (United States)

    Feliciani, Claudio; Nishinari, Katsuhiro

    2016-09-01

    This paper presents an experimental study on pedestrian bidirectional streams and the mechanisms leading to spontaneous lane formation by examining the flow formed by two groups of people walking toward each other in a mock corridor. Flow ratio is changed by changing each group size while maintaining comparable total flow and density. By tracking the trajectories of each pedestrian and analyzing the data obtained, five different phases were recognized as contributing to the transition from unidirectional to bidirectional flow, including the spontaneous creation and dissolution of lanes. It has been shown that a statistical treatment is required to understand the fundamental characteristics of pedestrian dynamics, and some two-dimensional quantities such as order parameter and rotation range were introduced to allow a more complete analysis. All the quantities observed showed a clear relationship with flow ratio and helped distinguish between the different characteristic phases of the experiment. Results show that balanced bidirectional flow becomes the most stable configuration after lanes are formed, but the lane creation process requires pedestrians to move laterally to a larger extent than in low flow-ratio configurations. This finding allows us to understand why balanced bidirectional flow is efficient at low densities but quickly leads to deadlock formation at high densities.
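An order parameter for lane formation is typically built by comparing, for each pedestrian, the walking directions of others in the same lateral band. The sketch below is one common form of such a quantity and is only an assumption; the paper's exact definition may differ:

```python
def lane_order_parameter(positions, directions, band=0.5):
    """Mean squared direction imbalance among same-band neighbours:
    ~1 for perfectly separated lanes, smaller for mixed flow."""
    phi = 0.0
    for i, ((xi, yi), di) in enumerate(zip(positions, directions)):
        same = opp = 0
        for j, ((xj, yj), dj) in enumerate(zip(positions, directions)):
            if i == j:
                continue
            if abs(yj - yi) < band:            # neighbour in same lateral band
                if dj == di:
                    same += 1
                else:
                    opp += 1
        if same + opp:
            phi += ((same - opp) / (same + opp)) ** 2
    return phi / len(positions)
```

For two clean lanes (all same-direction neighbours in each band) the value is 1; alternating directions within one band pull it toward 0.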

  14. Confined gravity flow sedimentary process and its impact on the lower continental slope,Niger Delta

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    There is active gravity flow sedimentation on the lower continental slope of the Niger Delta. High-resolution 3-D seismic data enable a detailed study of the gravity flow deposition process and its impact. The lower continental slope of the Niger Delta is characterized by a stepped, complex topography, which resulted from gravity sliding and spreading during the Miocene and Pliocene. Two types of accommodation are identified on the slope: ponded accommodation as isolated sub-basins, and healed-slope accommodation as connected tortuous corridors, where multi-scale submarine fans and submarine channels developed. The gravity flow deposition process is affected by the characteristics of the gravity flows and of the receiving basin. At the early stage, the deposition process was dominated by a "fill and spill" pattern in the ponded accommodation, whereas it was confined to the healed-slope accommodation during the late stage. On the lower continental slope of the Niger Delta, the complex slope topography controlled the distribution and evolution of the gravity flows, producing complicated depositional patterns.

  15. Efficient Boolean and multi-input flow techniques for advanced mask data processing

    Science.gov (United States)

    Salazar, Daniel; Moore, Bill; Valadez, John

    2012-11-01

    Mask data preparation (MDP) typically involves multiple flows, sometimes consisting of many steps, to ensure that the data is properly written on the mask. This may include multiple inputs, transformations (scaling, orientation, etc.), and processing (layer extraction, sizing, Boolean operations, data filtering). Many MDP techniques currently in practice require multiple passes through the input data and/or multiple file I/O steps to achieve these goals. This paper details an approach which processes the data efficiently, resulting in minimal I/O and greatly improved turnaround times (TAT). This approach adapts advanced processing algorithms to produce an efficient and reliable data flow. In tandem with this processing flow, an internal jobdeck mapping approach, transparent to the user, allows an essentially unlimited number of pattern inputs to be handled in a single pass, resulting in increased flexibility and ease of use. Transformations and processing operations are critical to MDP. Transformations such as scaling, reverse tone and orientation, along with processing including sizing, Boolean operations and data filtering, are key parts of this. These techniques are often employed in sequence and/or in parallel in a complex functional chain. While transformations typically are done "up front" when the data is input, processing is less straightforward, involving multiple reads and writes to handle the more intricate functionality and also the collection of input patterns which may be required to produce the data that comprises a single mask. The approach detailed in this paper consists of two complementary techniques: efficient MDP flow and jobdeck mapping. Efficient MDP flow is achieved by pipelining the output of each step to the input of the subsequent step. Rather than writing the output of a particular processing step to file and then reading it in to the following step, the pipelining or chaining of the steps results in an efficient flow with
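The pipelining idea described here, where each step's output feeds the next step with no intermediate file I/O, maps naturally onto chained generators. A hedged sketch with toy polygon data; the stage names and the record format are illustrative, not the paper's:

```python
def scale(shapes, factor):
    """Pipeline stage: scale each polygon's coordinates lazily."""
    for s in shapes:
        yield [(x * factor, y * factor) for x, y in s]

def orient(shapes, flip_x=False):
    """Pipeline stage: optional mirror transform about the y-axis."""
    for s in shapes:
        yield [(-x, y) if flip_x else (x, y) for x, y in s]

def size_filter(shapes, min_span):
    """Pipeline stage: drop polygons narrower than min_span in x."""
    for s in shapes:
        xs = [x for x, _ in s]
        if max(xs) - min(xs) >= min_span:
            yield s

# Chained generators: a single pass over the input, no intermediate files.
patterns = [[(0, 0), (4, 0), (4, 4)], [(0, 0), (1, 0), (1, 1)]]
out = list(size_filter(orient(scale(iter(patterns), 0.5), flip_x=True), min_span=1.0))
```

Each stage consumes one shape at a time, so memory stays bounded regardless of how many pattern inputs the jobdeck maps into the flow.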

  16. FLUVIAL PROCESSES AND SEDIMENT SCOUR RATE OF THE YELLOW RIVER UNDER ACTION OF UNSTEADY FLOWS

    Institute of Scientific and Technical Information of China (English)

    Yong-Nian XU; Zhi-Yong LIANG; Zhao-Yin WANG

    2001-01-01

    Riverbed scour of the main channel by floods in the Yellow River and its tributaries was investigated, including scour by hyper-concentrated floods. Flood scour usually causes variation of river cross-sections in a way similar to that occurring when the sediment inflow is less than the sediment-carrying capacity. A scour rate equation for the main channel, derived from the momentum and continuity equations, was verified by field data. This equation indicates that the unsteady-flow scour rate is proportional to the flow density, the velocity of the flood peak, the rising rate of flow discharge per unit width, and so on. The maximum scour depth after a flood can be predicted by the scour rate equation proposed in this paper.

  17. Study on the air flow field of the drawing conduit in the spunbonding process

    Directory of Open Access Journals (Sweden)

    Wu Li-Li

    2015-01-01

    Full Text Available The air flow field of the drawing conduit in the spunbonding process has a great effect on the polymer drawing, the filament diameter and orientation. A numerical simulation of the process is carried out, and the results are compared with the experimental data, showing good accuracy of the numerical prediction. This research lays an important foundation for the optimal design of the drawing conduit in the spunbonding process.

  18. Experimental Investigation of a Vertical Tubular Desalination Unit Using Humidification Dehumidification Process

    Institute of Scientific and Technical Information of China (English)

    熊日华; 王世昌; 王志; 解利昕; 李凭力; 朱爱梅

    2005-01-01

    A vertical tubular desalination unit with a shell-and-tube structure was built to perform humidification and dehumidification simultaneously on the tube and shell sides of the column, respectively. The effects of several operating conditions on the productivity and thermal efficiency of the column were investigated. The results show that both the productivity and thermal efficiency of the column increase with the inlet water temperature. The flow rates of water and carrier gas both have optimal operating ranges, which are 10-30 kg·h-1 and 4-7 kg·h-1 for the present column, respectively. Meanwhile, increasing the external steam flow rate promotes the productivity of the column but reduces its thermal efficiency.

  19. Hybrid modeling of convective laminar flow in a permeable tube associated with the cross-flow process

    Science.gov (United States)

    Venezuela, A. L.; Pérez-Guerrero, J. S.; Fontes, S. R.

    2009-03-01

    Confined flows in tubes with permeable surfaces are associated with tangential filtration processes (microfiltration or ultrafiltration). The complexity of the phenomena does not allow for the development of exact analytical solutions; however, approximate solutions are of great interest for calculating the transmembrane outflow and estimating the concentration polarization phenomenon. In the present work, the generalized integral transform technique (GITT) was employed in solving the laminar, steady flow of a Newtonian and incompressible fluid in permeable tubes. The mathematical formulation employed the parabolic differential equation of chemical species conservation (the convective-diffusive equation). The velocity profiles for the entrance-region flow, which appear in the convective terms of the equation, were assessed using solutions obtained from the literature. The velocity at the permeable wall was considered uniform, with the concentration at the tube wall regarded as varying with axial position. A computational methodology using global error control was applied to determine the concentration at the wall and the concentration boundary layer thickness. The results obtained for the local transmembrane flux and the concentration boundary layer thickness were compared against others in the literature.

  20. Liquid phase methanol LaPorte process development unit: Modification, operation, and support studies

    Energy Technology Data Exchange (ETDEWEB)

    1991-02-02

    The primary focus of this Process Development Unit operating program was to prepare for a confident move to the next scale of operation with a simplified and optimized process. The main purpose of these runs was the evaluation of the alternate commercial catalyst (F21/0E75-43) that had been identified in the laboratory under a different subtask of the program. If the catalyst proved superior to the previous catalyst, then the evaluation run would be continued into a 120-day life run. Also, minor changes were made to the Process Development Unit system to improve operations and reliability. The damaged reactor demister from a previous run was replaced, and a new demister was installed in the intermediate V/L separator. The internal heat exchanger was equipped with an expansion loop to relieve thermal stresses so operation at higher catalyst loadings and gas velocities would be possible. These aggressive conditions are important for improving process economics. (VC)

  1. Initiation processes for run-off generated debris flows in the Wenchuan earthquake area of China

    Science.gov (United States)

    Hu, W.; Dong, X. J.; Xu, Q.; Wang, G. H.; van Asch, T. W. J.; Hicher, P. Y.

    2016-01-01

    The frequency of huge debris flows greatly increased in the epicenter area of the Wenchuan earthquake. Field investigation revealed that runoff during rainstorms played a major role in generating debris flows on the loose deposits left by coseismic debris avalanches. However, the mechanisms of these runoff-generated debris flows are not well understood due to the complexity of the initiation processes. To better understand the initiation mechanisms, we simulated and monitored the initiation process in laboratory flume tests, with the help of a 3D laser scanner. We found that runoff incision caused an accumulation of material downslope. This failed as shallow slides when saturated, transforming the process into a debris flow in a second stage. After this initial phase, the debris-flow volume increased rapidly through a chain of subsequent cascading processes, starting with collapses of the side walls, damming and breaching, leading to a rapid widening of the erosion channel. In terms of erosion amount, the subsequent mechanisms were much more important than the initial one. The damming and breaching were found to be the main reasons for the huge magnitude of the debris flows in the post-earthquake area. It was also found that the tested material was susceptible to excess pore pressure and liquefaction in undrained triaxial tests, which may explain the fluidization observed in the flume tests.

  2. Impact of flow induced vibration acoustic loads on the design of the Laguna Verde Unit 2 steam dryer

    Energy Technology Data Exchange (ETDEWEB)

    Forsyth, D. R.; Wellstein, L. F.; Theuret, R. C.; Han, Y.; Rajakumar, C. [Westinghouse Electric Company LLC, Cranberry Township, PA 16066 (United States); Amador C, C.; Sosa F, W., E-mail: forsytdr@westinghouse.com [Comision Federal de Electricidad, Central Nucleoelectrica Laguna Verde, Km 42.5 Carretera Cardel-Nautla, 91680 Alto Lucero, Veracruz (Mexico)

    2015-09-15

    Industry experience with Boiling Water Reactors (BWRs) has shown that increasing the steam flow through the main steam lines (MSLs) to implement an extended power up rate (EPU) may lead to amplified acoustic loads on the steam dryer, which may negatively affect the structural integrity of the component. The source of these acoustic loads has been found to be acoustic resonance of the side branches on the MSLs, specifically, coupling of the vortex shedding frequency and natural acoustic frequency of safety relief valves (SRVs). The resonance that results from this coupling can contribute significant acoustic energy into the MSL system, which may propagate upstream into the reactor pressure vessel steam dome and drive structural vibration of steam dryer components. This can lead to high-cycle fatigue issues. Lock-in between the vortex shedding frequency and SRV natural frequency, as well as the ability for acoustic energy to propagate into the MSL system, are a function of many things, including the plant operating conditions, geometry of the MSL/SRV junction, and placement of SRVs with respect to each other on the MSLs. Comision Federal de Electricidad and Westinghouse designed, fabricated, and installed acoustic side branches (ASBs) on the MSLs which effectively act in the system as an energy absorber, where the acoustic standing wave generated in the side-branch is absorbed and dissipated inside the ASB. These ASBs have been very successful in reducing the amount of acoustic energy which propagates into the steam dome. In addition, modifications to the Laguna Verde Nuclear Power Plant Unit 2 steam dryer have been completed to reduce the stress levels in critical locations in the dryer. 
The objective of this paper is to describe the acoustic side branch concept and the design iterative processes that were undertaken at Laguna Verde Unit 2 to achieve a steam dryer design that meets the guidelines of the American Society of Mechanical Engineers, Boiler and Pressure
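The lock-in condition described in the abstract can be illustrated with a simple Strouhal-number estimate. A minimal sketch, assuming a Strouhal number of 0.4 and a quarter-wave model for the side-branch acoustic mode; the numerical values (steam velocity, branch dimensions, sound speed) are illustrative assumptions, not Laguna Verde design data:

```python
def shedding_frequency(velocity, diameter, strouhal=0.4):
    """Vortex shedding frequency at a side-branch mouth: f = St * U / d."""
    return strouhal * velocity / diameter

def branch_quarter_wave_frequency(sound_speed, branch_length):
    """Natural acoustic frequency of a closed side branch (quarter-wave resonator)."""
    return sound_speed / (4.0 * branch_length)

def lock_in(f_shed, f_acoustic, band=0.2):
    """Flag resonance risk if shedding falls within +/-band of the acoustic mode."""
    return abs(f_shed - f_acoustic) / f_acoustic <= band

# Illustrative values: 60 m/s steam at EPU, 0.15 m branch mouth,
# 480 m/s sound speed in steam, 0.5 m branch length.
f_s = shedding_frequency(60.0, 0.15)              # 160 Hz
f_a = branch_quarter_wave_frequency(480.0, 0.5)   # 240 Hz
at_risk = lock_in(f_s, f_a)
```

Raising the steam flow (EPU) pushes `f_s` upward toward `f_a`, which is why uprates can move a plant into the lock-in band.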

  3. Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing

    Directory of Open Access Journals (Sweden)

    Cally Gill

    2013-09-01

    Full Text Available The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  4. Laser doppler blood flow imaging using a CMOS imaging sensor with on-chip signal processing.

    Science.gov (United States)

    He, Diwei; Nguyen, Hoang C; Hayes-Gill, Barrie R; Zhu, Yiqun; Crowe, John A; Gill, Cally; Clough, Geraldine F; Morgan, Stephen P

    2013-09-18

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue.

  5. Laser Doppler Blood Flow Imaging Using a CMOS Imaging Sensor with On-Chip Signal Processing

    Science.gov (United States)

    He, Diwei; Nguyen, Hoang C.; Hayes-Gill, Barrie R.; Zhu, Yiqun; Crowe, John A.; Gill, Cally; Clough, Geraldine F.; Morgan, Stephen P.

    2013-01-01

    The first fully integrated 2D CMOS imaging sensor with on-chip signal processing for applications in laser Doppler blood flow (LDBF) imaging has been designed and tested. To obtain a space efficient design over 64 × 64 pixels means that standard processing electronics used off-chip cannot be implemented. Therefore the analog signal processing at each pixel is a tailored design for LDBF signals with balanced optimization for signal-to-noise ratio and silicon area. This custom made sensor offers key advantages over conventional sensors, viz. the analog signal processing at the pixel level carries out signal normalization; the AC amplification in combination with an anti-aliasing filter allows analog-to-digital conversion with a low number of bits; low resource implementation of the digital processor enables on-chip processing and the data bottleneck that exists between the detector and processing electronics has been overcome. The sensor demonstrates good agreement with simulation at each design stage. The measured optical performance of the sensor is demonstrated using modulated light signals and in vivo blood flow experiments. Images showing blood flow changes with arterial occlusion and an inflammatory response to a histamine skin-prick demonstrate that the sensor array is capable of detecting blood flow signals from tissue. PMID:24051525
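The per-pixel processing these records describe (normalization plus AC analysis) feeds the standard laser Doppler flux metric: the first moment of the AC power spectrum normalized by the DC intensity squared. A minimal off-chip sketch of that metric; the sampling rate and test signals below are illustrative, not the sensor's actual parameters:

```python
import numpy as np

def doppler_flux(signal, fs):
    """Laser Doppler flux estimate: first moment of the AC power spectrum,
    normalized by the mean (DC) photocurrent squared."""
    signal = np.asarray(signal, dtype=float)
    dc = signal.mean()
    ac = signal - dc
    spectrum = np.abs(np.fft.rfft(ac)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return float((freqs * spectrum).sum() / dc ** 2)

# Faster intensity fluctuations (higher Doppler shifts, i.e. faster moving
# blood cells) yield a larger flux value for the same modulation depth.
fs = 20_000.0
t = np.arange(2048) / fs
slow = 1.0 + 0.1 * np.sin(2 * np.pi * 200 * t)
fast = 1.0 + 0.1 * np.sin(2 * np.pi * 2000 * t)
```

The DC normalization is what makes the metric robust to uneven illumination across the 64 × 64 array.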

  6. [Work process of the nurse who works in child care in family health units].

    Science.gov (United States)

    de Assis, Wesley Dantas; Collet, Neusa; Reichert, Altamira Pereira da Silva; de Sá, Lenilde Duarte

    2011-01-01

    This qualitative study analyzed the work process of nurses performing child care actions in family health units. Nurses were the subjects, and empirical data were obtained through participant observation and interviews. Data analysis followed the principles of thematic analysis. The results reveal that the organization of the nurses' work process remains centered on procedures, with care offered on the basis of client illness, creating obstacles to well-child care (puericulture) practice in primary health care.

  7. Development of the Hydroecological Integrity Assessment Process for Determining Environmental Flows for New Jersey Streams

    Science.gov (United States)

    Kennen, Jonathan G.; Henriksen, James A.; Nieswand, Steven P.

    2007-01-01

    The natural flow regime paradigm and parallel stream ecological concepts and theories have established the benefits of maintaining or restoring the full range of natural hydrologic variation for physiochemical processes, biodiversity, and the evolutionary potential of aquatic and riparian communities. A synthesis of recent advances in hydroecological research coupled with stream classification has resulted in a new process to determine environmental flows and assess hydrologic alteration. This process has national and international applicability. It allows classification of streams into hydrologic stream classes and identification of a set of non-redundant and ecologically relevant hydrologic indices for 10 critical sub-components of flow. Three computer programs have been developed for implementing the Hydroecological Integrity Assessment Process (HIP): (1) the Hydrologic Indices Tool (HIT), which calculates 171 ecologically relevant hydrologic indices on the basis of daily-flow and peak-flow stream-gage data; (2) the New Jersey Hydrologic Assessment Tool (NJHAT), which can be used to establish a hydrologic baseline period, provide options for setting baseline environmental-flow standards, and compare past and proposed streamflow alterations; and (3) the New Jersey Stream Classification Tool (NJSCT), designed for placing unclassified streams into pre-defined stream classes. Biological and multivariate response models including principal-component, cluster, and discriminant-function analyses aided in the development of software and implementation of the HIP for New Jersey. A pilot effort is currently underway by the New Jersey Department of Environmental Protection in which the HIP is being used to evaluate the effects of past and proposed surface-water use, ground-water extraction, and land-use changes on stream ecosystems while determining the most effective way to integrate the process into ongoing regulatory programs. 
Ultimately, this scientifically defensible
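The Hydrologic Indices Tool (HIT) mentioned above computes 171 ecologically relevant indices from daily-flow records. A minimal sketch of a few indices of that kind (an illustrative subset and simplified definitions, not the official HIT formulas):

```python
import statistics

def hydrologic_indices(daily_flows):
    """A small, illustrative subset of ecologically relevant flow indices
    computed from a daily-flow record."""
    mean_flow = statistics.mean(daily_flows)
    cv = statistics.pstdev(daily_flows) / mean_flow              # flow variability
    low_pulses = sum(q < 0.5 * mean_flow for q in daily_flows)   # days below half the mean
    high_pulses = sum(q > 3.0 * mean_flow for q in daily_flows)  # days above 3x the mean
    return {"mean": mean_flow, "cv": cv,
            "low_pulses": low_pulses, "high_pulses": high_pulses}

# Ten days of hypothetical daily flows (cfs) with one storm peak and one dry day.
flows = [10, 12, 9, 11, 50, 8, 4, 10, 11, 9]
idx = hydrologic_indices(flows)
```

Comparing such indices between a baseline period and an altered period is the core of the hydrologic-alteration assessment the HIP software packages automate.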

  8. Evapotranspiration Units for the Diamond Valley Flow System Groundwater Discharge Area, Central Nevada, 2010

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These data were created as part of a hydrologic study to characterize groundwater budgets and water quality in the Diamond Valley Flow System (DVFS), central Nevada....

  9. Flow characteristics at U.S. Geological Survey streamgages in the conterminous United States.

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This dataset represents point locations and flow characteristics for current (as of November 20, 2001) and historical U.S. Geological Survey (USGS) streamgages in...

  10. 75 FR 74005 - Fisheries of the Northeastern United States; Monkfish Fishery; Scoping Process

    Science.gov (United States)

    2010-11-30

    ... National Oceanic and Atmospheric Administration RIN 0648-BA50 Fisheries of the Northeastern United States; Monkfish Fishery; Scoping Process AGENCY: National Marine Fisheries Service (NMFS), National Oceanic and... statement (EIS) and scoping meetings; request for comments. SUMMARY: The New England Fishery...

  11. ECO LOGIC INTERNATIONAL GAS-PHASE CHEMICAL REDUCTION PROCESS - THE THERMAL DESORPTION UNIT - APPLICATIONS ANALYSIS REPORT

    Science.gov (United States)

    ELI ECO Logic International, Inc.'s Thermal Desorption Unit (TDU) is specifically designed for use with Eco Logic's Gas Phase Chemical Reduction Process. The technology uses an externally heated bath of molten tin in a hydrogen atmosphere to desorb hazardous organic compounds fro...

  12. Process methods and levels of automation of wood pallet repair in the United States

    Science.gov (United States)

    Jonghun Park; Laszlo Horvath; Robert J. Bush

    2016-01-01

    This study documented the current status of wood pallet repair in the United States by identifying the types of processing and equipment usage in repair operations from an automation perspective. The wood pallet repair firms included in the study received an average of approximately 1.28 million cores (i.e., used pallets) for recovery in 2012. A majority of the cores...

  13. Catalyzed steam gasification of biomass. Phase 3: Biomass Process Development Unit (PDU) construction and initial operation

    Science.gov (United States)

    Healey, J. J.; Hooverman, R. H.

    1981-12-01

    The design and construction of the process development unit (PDU) are described in detail, examining each system and component in order. Siting, the chip handling system, the reactor feed system, the reactor, the screw conveyor, the ash dump system, the PDU support equipment, control and information management, and shakedown runs are described.

  14. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) power supply for the Power Processing Unit (PPU) of...

  15. Liquid phase methanol LaPorte process development unit: Modification, operation, and support studies

    Energy Technology Data Exchange (ETDEWEB)

    1991-02-02

    This report consists of Detailed Data Acquisition Sheets for Runs E-6 and E-7 for Task 2.2 of the Modification, Operation, and Support Studies of the Liquid Phase Methanol Laporte Process Development Unit. (Task 2.2: Alternate Catalyst Run E-6 and Catalyst Activity Maintenance Run E-7).

  16. Sodium content of popular commercially processed and restaurant foods in the United States

    Science.gov (United States)

    Nutrient Data Laboratory (NDL) of the U.S. Department of Agriculture (USDA) in close collaboration with U.S. Center for Disease Control and Prevention is monitoring the sodium content of commercially processed and restaurant foods in the United States. The main purpose of this manuscript is to prov...

  17. On the use of graphics processing units (GPUs) for molecular dynamics simulation of spherical particles

    NARCIS (Netherlands)

    Hidalgo, R.C.; Kanzaki, T.; Alonso-Marroquin, F.; Luding, S.; Yu, A.; Dong, K.; Yang, R.; Luding, S.

    2013-01-01

    General-purpose computation on Graphics Processing Units (GPU) on personal computers has recently become an attractive alternative to parallel computing on clusters and supercomputers. We present the GPU-implementation of an accurate molecular dynamics algorithm for a system of spheres. The new hybr
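Molecular dynamics codes for spheres of the kind described above evaluate a pairwise normal contact force at every step, which is why the per-pair independence maps so well onto GPU threads. A minimal sketch using a linear spring-dashpot contact model (a common choice in sphere MD/DEM codes, assumed here for illustration; the paper's hybrid algorithm may use a different force law):

```python
def normal_contact_force(overlap, rel_vel, k=1e5, gamma=50.0):
    """Linear spring-dashpot normal force between two spheres:
    F = k * delta - gamma * v_n, active only while they overlap (delta > 0).
    k and gamma are illustrative stiffness and damping coefficients."""
    if overlap <= 0.0:
        return 0.0            # no contact, no force
    return k * overlap - gamma * rel_vel

# Two spheres overlapping by 1e-4 m and approaching at 0.1 m/s:
F = normal_contact_force(1e-4, 0.1)
```

On a GPU each thread would evaluate this function for one contact pair, with the neighbor list rebuilt periodically on the device.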

  18. Graphics Processing Unit-Based Bioheat Simulation to Facilitate Rapid Decision Making Associated with Cryosurgery Training.

    Science.gov (United States)

    Keelan, Robert; Zhang, Hong; Shimada, Kenji; Rabin, Yoed

    2016-04-01

    This study focuses on the implementation of an efficient numerical technique for cryosurgery simulations on a graphics processing unit as an alternative means to accelerate runtime. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a developmental model. The ability to perform rapid simulations of various test cases is critical to facilitate sound decision making associated with medical training. Consistent with clinical practice, the training tool aims at correlating the frozen region contour and the corresponding temperature field with the target region shape. The current study focuses on the feasibility of graphics processing unit-based computation using C++ accelerated massive parallelism, as one possible implementation. Benchmark results on a variety of computation platforms display between 3-fold acceleration (laptop) and 13-fold acceleration (gaming computer) of cryosurgery simulation, in comparison with the more common implementation on a multicore central processing unit. While the general concept of graphics processing unit-based simulations is not new, its application to phase-change problems, combined with the unique requirements for cryosurgery optimization, represents the core contribution of the current study.
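The bioheat simulation accelerated in this study is, at its core, a stencil update applied independently at every grid node, which is exactly the data-parallel pattern a GPU kernel exploits. A minimal CPU sketch of one explicit step of a simplified Pennes-type bioheat equation (vectorized with NumPy as a stand-in for the per-node GPU threads; the grid, coefficients, and fixed boundaries are illustrative assumptions, not the study's model):

```python
import numpy as np

def bioheat_step(T, alpha, dt, dx, w=0.0, T_blood=37.0):
    """One explicit finite-difference step of a simplified bioheat equation:
    dT/dt = alpha * laplacian(T) + w * (T_blood - T).
    All interior nodes update in one vectorized operation."""
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:] + T[1:-1, :-2]
                       - 4.0 * T[1:-1, 1:-1]) / dx**2
    T_new = T + dt * (alpha * lap + w * (T_blood - T))
    # Hold the domain boundary at its previous values (Dirichlet boundary).
    T_new[0, :], T_new[-1, :] = T[0, :], T[-1, :]
    T_new[:, 0], T_new[:, -1] = T[:, 0], T[:, -1]
    return T_new

# Cryoprobe proxy: a cold spot in warm tissue diffuses outward each step.
T = np.full((32, 32), 37.0)
T[16, 16] = -40.0
T1 = bioheat_step(T, alpha=1.4e-7, dt=0.05, dx=1e-3)
```

Phase change (the freezing front) adds an effective-heat-capacity term to this update, but the per-node independence, and hence the GPU speedup, is unchanged.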

  19. ASAMgpu V1.0 – a moist fully compressible atmospheric model using graphics processing units (GPUs

    Directory of Open Access Journals (Sweden)

    S. Horn

    2012-03-01

    Full Text Available In this work the three dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence OpenGL and GLSL are used, with that the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper. A rising dry heat bubble, a cold bubble induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.

  20. ASAMgpu V1.0 – a moist fully compressible atmospheric model using graphics processing units (GPUs

    Directory of Open Access Journals (Sweden)

    S. Horn

    2011-10-01

    Full Text Available In this work the three dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence OpenGL and GLSL are used, with that the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper. A rising dry heat bubble, a cold bubble induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.
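The explicit three-step Runge-Kutta scheme mentioned in both ASAMgpu records can be sketched compactly. The Wicker-Skamarock form below is the variant common in compressible atmospheric models; whether ASAMgpu uses exactly these stage coefficients is an assumption:

```python
import math

def rk3_step(f, y, t, dt):
    """One explicit three-step Runge-Kutta step (Wicker-Skamarock type):
    each stage re-evaluates the tendency f starting from the ORIGINAL state y,
    which keeps storage low -- attractive on memory-limited GPUs."""
    y1 = y + (dt / 3.0) * f(t, y)
    y2 = y + (dt / 2.0) * f(t + dt / 3.0, y1)
    return y + dt * f(t + dt / 2.0, y2)

# Sanity check on dy/dt = -y (exact solution exp(-t)), one step of size 0.1:
y = rk3_step(lambda t, y: -y, 1.0, 0.0, 0.1)
```

In the full model, the fast acoustic modes are sub-stepped inside each of these three stages (the time-splitting the abstract refers to), so the large step `dt` is set by the slower advective motions.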

  1. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units.

    Science.gov (United States)

    Rath, N; Kato, S; Levesque, J P; Mauel, M E; Navratil, G A; Peng, Q

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  2. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    Science.gov (United States)

    Rath, N.; Kato, S.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.

    2014-04-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.
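The control problem in these two records is small but latency-critical: 40 inputs mapped to 40 outputs once per sample at up to 250 kHz. A minimal sketch of one such feedback cycle as a dense matrix-vector product (the kind of data-parallel kernel that maps well onto a GPU); the gain matrix below is random and illustrative, not the HBT-EP controller:

```python
import numpy as np

def control_cycle(samples, gain_matrix):
    """One feedback cycle: map the digitizer inputs to actuator outputs
    with a single matrix-vector product."""
    return gain_matrix @ samples

rng = np.random.default_rng(0)
G = rng.standard_normal((40, 40))   # illustrative gains
x = rng.standard_normal(40)         # one sample from 40 input channels
y = control_cycle(x, G)

# At a 250 kHz sampling rate, acquisition, transfer, compute, and output
# must all fit inside one sample period:
sample_period_us = 1e6 / 250_000.0
```

The peer-to-peer DMA transfers described in the abstract matter precisely because staging data through host memory would consume most of that 4 microsecond budget before any computation starts.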

  3. Quantitative analysis of flow processes in a sand using synchrotron-based X-ray microtomography

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Hopmans, J.W.; Rivers, M.L.

    2005-01-01

    Prior work applying the X-ray microtomography technique to pore-scale multiphase flow problems has ... been of a mostly qualitative nature and no experiments have been presented in the existing literature where a truly quantitative approach to investigating the multiphase flow process has been taken, including a thorough image-processing scheme. The tomographic images presented here show, both ... on observed dynamic effects in the measured pressure-saturation curves; a significantly higher residual and higher capillary pressures were found when the sample was drained fast using a high air-phase pressure.

  4. Continuous-Flow Processes in Heterogeneously Catalyzed Transformations of Biomass Derivatives into Fuels and Chemicals

    Directory of Open Access Journals (Sweden)

    Antonio A. Romero

    2012-07-01

    Full Text Available Continuous flow chemical processes offer several advantages as compared to batch chemistries. These are particularly relevant in the case of heterogeneously catalyzed transformations of biomass-derived platform molecules into valuable chemicals and fuels. This work aims to provide an overview of key continuous flow processes developed to date dealing with a series of transformations of platform chemicals including alcohols, furanics, organic acids and polyols using a wide range of heterogeneous catalysts based on supported metals, solid acids and bifunctional (metal + acidic) materials.
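The key operating parameter when moving a batch transformation like those surveyed above into continuous flow is the mean residence time, tau = V/Q. A minimal sketch of that relationship; the column volume and flow rate below are illustrative, not values from the review:

```python
def residence_time(volume_ml, flow_rate_ml_min):
    """Mean residence time tau = V / Q of a continuous-flow reactor:
    how long, on average, the feed contacts the catalyst bed."""
    return volume_ml / flow_rate_ml_min

def flow_rate_for_tau(volume_ml, tau_min):
    """Flow rate needed to hit a target residence time in a fixed volume."""
    return volume_ml / tau_min

# A 10 mL packed-bed column fed at 2 mL/min gives a 5 min contact time.
tau = residence_time(10.0, 2.0)
```

In flow, conversion is tuned by adjusting Q (and hence tau) rather than by extending a batch reaction time, which is what makes these processes straightforward to scale.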

  5. Effects of anthropogenic water regulation and groundwater lateral flow on land processes

    Science.gov (United States)

    Zeng, Yujin; Xie, Zhenghui; Yu, Yan; Liu, Shuang; Wang, Linying; Zou, Jing; Qin, Peihua; Jia, Binghao

    2016-09-01

    Both anthropogenic water regulation and groundwater lateral flow essentially affect groundwater table patterns. Their relationship is close because lateral flow recharges the groundwater depletion cone, which is induced by over-exploitation. In this study, schemes describing groundwater lateral flow and human water regulation were developed and incorporated into the Community Land Model 4.5. To investigate the effects of human water regulation and groundwater lateral flow on land processes as well as the relationship between the two processes, three simulations using the model were conducted for the years 2003-2013 over the Heihe River Basin in northwestern China. Simulations showed that groundwater lateral flow driven by changes in water heads can essentially change the groundwater table pattern with the deeper water table appearing in the hillslope regions and shallower water table appearing in valley bottom regions and plains. Over the last decade, anthropogenic groundwater exploitation deepened the water table by approximately 2 m in the middle reaches of the Heihe River Basin and rapidly reduced the terrestrial water storage, while irrigation increased soil moisture by approximately 0.1 m3 m-3. The water stored in the mainstream of the Heihe River was also reduced by human surface water withdrawal. The latent heat flux was increased by 30 W m-2 over the irrigated region, with an identical decrease in sensible heat flux. The simulated groundwater lateral flow was shown to effectively recharge the groundwater depletion cone caused by over-exploitation. The offset rate is higher in plains than mountainous regions.
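The groundwater lateral flow that recharges the depletion cone in this study is, between any two model columns, a Darcy flux driven by the head difference. A minimal sketch of that flux law (a simplified two-cell form; the hydraulic conductivity, spacing, and cross-sectional area are illustrative, not Heihe River Basin values):

```python
def lateral_flow(h1, h2, K, distance, area):
    """Darcy flux between two model columns driven by the water-table head
    difference: Q = K * A * (h1 - h2) / L. Positive Q flows from column 1 to 2."""
    return K * area * (h1 - h2) / distance

# Recharge of a pumping-induced depression: water moves toward the lower head.
Q = lateral_flow(h1=100.0, h2=98.0, K=1e-4, distance=1000.0, area=5000.0)  # m^3/s
```

Summing such exchanges over all neighboring columns is what lets the simulated water table relax toward valley bottoms and refill over-exploited cells.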

  6. Towards an optimized flow-sheet for a SANEX demonstration process using centrifugal contactors

    Energy Technology Data Exchange (ETDEWEB)

    Magnusson, D. [European Commission, Joint Research Center, Karlsruhe (Germany). Inst. for Transuranium Elements; Chalmers Univ. of Technology, Gothenburg (Sweden). Nuclear Chemistry, Dept. of Chemical and Biological Engineering; Christiansen, B.; Glatz, J.P.; Malmbeck, R.; Serrano-Purroy, D. [European Commission, Joint Research Center, Karlsruhe (Germany). Inst. for Transuranium Elements; Modolo, G. [Forschungszentrum Juelich GmbH (Germany). Inst. for Energy Research, Safety Research and Reactor Technology; Sorel, C. [Commissariat a l' Energie Atomique Valrho (CEA), DRCP/SCPS, Bagnols-sur-Ceze (France)

    2009-07-01

    The design of an efficient process flow-sheet requires accurate extraction data for the experimental set-up used. Often these data are provided as equilibrium data. Due to the small hold-up volume compared to the flow rate in centrifugal contactors, the time for extraction is often too short to reach equilibrium D-ratios. In this work single-stage kinetics experiments have been carried out to investigate the dependence of the D-ratio on the flow rate and to compare this with equilibrium batch experiments for a SANEX system based on CyMe{sub 4}-BTBP. The first centrifuge experiment was run with spiked solutions, while in the second a genuine actinide/lanthanide fraction from a TODGA process was used. Three different flow rates were tested with each set-up. The results show that even with low flow rates, only around 9% of the equilibrium D-ratio (Am) was reached for the extraction in the spiked test and around 16% in the hot test (the difference is due to the size of the centrifuges). In the hot test the lanthanide scrubbing was inefficient, whereas in the stripping both the actinides and the lanthanides showed good results. Based on these results, improvements to the suggested flow-sheet are discussed. (orig.)

  7. Processing of Egomotion-Consistent Optic Flow in the Rhesus Macaque Cortex.

    Science.gov (United States)

    Cottereau, Benoit R; Smith, Andrew T; Rima, Samy; Fize, Denis; Héjja-Brichard, Yseult; Renaud, Luc; Lejards, Camille; Vayssière, Nathalie; Trotter, Yves; Durand, Jean-Baptiste

    2017-01-01

    The cortical network that processes visual cues to self-motion was characterized with functional magnetic resonance imaging in 3 awake behaving macaques. The experimental protocol was similar to previous human studies in which the responses to a single large optic flow patch were contrasted with responses to an array of 9 similar flow patches. This distinguishes cortical regions where neurons respond to flow in their receptive fields regardless of surrounding motion from those that are sensitive to whether the overall image arises from self-motion. In all 3 animals, significant selectivity for egomotion-consistent flow was found in several areas previously associated with optic flow processing, and notably dorsal middle superior temporal area, ventral intra-parietal area, and VPS. It was also seen in areas 7a (Opt), STPm, FEFsem, FEFsac and in a region of the cingulate sulcus that may be homologous with human area CSv. Selectivity for egomotion-compatible flow was never total but was particularly strong in VPS and putative macaque CSv. Direct comparison of results with the equivalent human studies reveals several commonalities but also some differences.

  8. Wildfire-related debris-flow initiation processes, Storm King Mountain, Colorado

    Science.gov (United States)

    Cannon, S.H.; Kirkham, R.M.; Parise, M.

    2001-01-01

    A torrential rainstorm on September 1, 1994 at the recently burned hillslopes of Storm King Mountain, CO, resulted in the generation of debris flows from every burned drainage basin. Maps (1:5000 scale) of bedrock and surficial materials and of the debris-flow paths, coupled with a 10-m Digital Elevation Model (DEM) of topography, are used to evaluate the processes that generated fire-related debris flows in this setting. These evaluations form the basis for a descriptive model for fire-related debris-flow initiation. The prominent paths left by the debris flows originated in 0- and 1st-order hollows or channels. Discrete soil-slip scars do not occur at the heads of these paths. Although 58 soil-slip scars were mapped on hillslopes in the burned basins, material derived from these soil slips accounted for only about 7% of the total volume of material deposited at canyon mouths. This fact, combined with observations of significant erosion of hillslope materials, suggests that a runoff-dominated process of progressive sediment entrainment by surface runoff, rather than infiltration-triggered failure of discrete soil slips, was the primary mechanism of debris-flow initiation. A paucity of channel incision, along with observations of extensive hillslope erosion, indicates that a significant proportion of material in the debris flows was derived from the hillslopes, with a smaller contribution from the channels. Because of the importance of runoff-dominated rather than infiltration-dominated processes in the generation of these fire-related debris flows, the runoff-contributing area that extends upslope from the point of debris-flow initiation to the drainage divide, and its gradient, becomes a critical constraint in debris-flow initiation. Slope-area thresholds for fire-related debris-flow initiation from Storm King Mountain are defined by functions of the form A_cr(tan θ)^3 = S, where A_cr is the critical area extending upslope from the initiation location to the
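The slope-area threshold reported in this record, A_cr · (tan θ)^3 = S, can be rearranged to give the critical contributing area for a given gradient. A minimal sketch (the threshold constant S = 1 below is a placeholder, not a value from the study):

```python
import math

def critical_area(S, slope_deg):
    """Critical runoff-contributing area from the slope-area threshold
    A_cr * (tan theta)^3 = S: steeper slopes need less contributing
    area to initiate a fire-related debris flow."""
    return S / math.tan(math.radians(slope_deg)) ** 3

# For a fixed threshold constant S, a 30-degree slope needs far less
# contributing area than a 15-degree slope.
steep, gentle = critical_area(1.0, 30.0), critical_area(1.0, 15.0)
```

The cubic dependence on gradient is why the burned 0- and 1st-order hollows, which combine steep slopes with modest contributing areas, were the preferred initiation sites.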

  9. From Graphics Processing Unit to General-Purpose Graphics Processing Unit

    Institute of Scientific and Technical Information of China (English)

    刘金硕; 刘天晓; 吴慧; 曾秋梅; 任梦菲; 顾宜淳

    2013-01-01

    This paper defines GPU (graphics processing unit), general-purpose computation on GPU (GPGPU), and GPU-based programming models and environments. It divides GPU development into four stages, describing the evolution of the GPU architecture from a non-unified rendering architecture to the unified rendering architecture and on to the new-generation Fermi architecture. It then compares the GPGPU architecture with multi-core CPU and distributed cluster architectures from both hardware and software perspectives. The analysis shows that medium-grained, thread-level data-intensive parallel computation is best served by multi-core, multi-threaded parallelism; coarse-grained, network-intensive parallel computation by cluster parallelism; and fine-grained, compute-intensive parallel computation by general-purpose GPU parallelism. Finally, the paper presents future GPGPU research hotspots and directions, namely automatic parallelization for GPGPU, CUDA support for multiple languages, and CUDA performance optimization, and introduces some typical GPGPU applications.

  10. Numerical simulations on the flow fields of dynamic axial compression columns in chromatography processes

    Science.gov (United States)

    Chien Liang, Ru; Che Liu, Cheng; Tsai Liang, Ming; Chen, Jiann Lin

    2017-02-01

    Dynamic axial compression (DAC) columns are key elements in the Simulated Moving Bed, a chromatography process used in the drug industry and chemical engineering. In this study, we apply the computational fluid dynamics (CFD) technique to analyze the flow fields in the DAC column and propose rules for distributor design based on mass conservation in fluid dynamics. Computer aided design (CAD) is used to construct the numerical 3D model for the mesh system. The laminar flow fields are governed by the Navier-Stokes equations, with Darcy's law employed to model the porous zone. Experimental work was conducted as a benchmark for choosing feasible porous parameters for the CFD. In addition, numerical treatments are elaborated to avoid calculation divergence resulting from large source terms. Results show that CFD combined with CAD is a good approach to investigate detailed flow fields in DAC columns, and that the design of distributors is straightforward.

  11. Aerodynamic Study on Supersonic Flows in High-Velocity Oxy-Fuel Thermal Spray Process

    Institute of Scientific and Technical Information of China (English)

    Hiroshi KATANODA; Takeshi MATSUOKA; Seiji KURODA; Jin KAWAKITA; Hirotaka FUKANUMA; Kazuyasu MATSUO

    2005-01-01

    To clarify the characteristics of gas flow in a high-velocity oxy-fuel (HVOF) thermal spray gun, aerodynamic research is performed using a special gun. The gun has a rectangular cross-sectional area and sidewalls of optical glass to visualize the internal flow. The gun consists of a supersonic nozzle with a design Mach number of 2.0 followed by a straight passage called the barrel. Compressed dry air at up to 0.78 MPa is used as the process gas instead of the combustion gas used in a commercial HVOF gun. The high-speed gas flows with shock waves in the gun, and the jets are visualized by the schlieren technique. Complicated internal and external flow fields containing various types of shock waves as well as expansion waves are visualized.

  12. User's guide to the Variably Saturated Flow (VSF) process to MODFLOW

    Science.gov (United States)

    Thoms, R. Brad; Johnson, Richard L.; Healy, Richard W.

    2006-01-01

    A new process for simulating three-dimensional (3-D) variably saturated flow (VSF) using Richards' equation has been added to the 3-D modular finite-difference ground-water model MODFLOW. Five new packages are presented here as part of the VSF Process--the Richards' Equation Flow (REF1) Package, the Seepage Face (SPF1) Package, the Surface Ponding (PND1) Package, the Surface Evaporation (SEV1) Package, and the Root Zone Evapotranspiration (RZE1) Package. Additionally, a new Adaptive Time-Stepping (ATS1) Package is presented for use by both the Ground-Water Flow (GWF) Process and VSF. The VSF Process allows simulation of flow in unsaturated media above the ground-water zone and facilitates modeling of ground-water/surface-water interactions. Model performance is evaluated by comparison to an analytical solution for one-dimensional (1-D) constant-head infiltration (Dirichlet boundary condition), field experimental data for 1-D constant-head infiltration, laboratory experimental data for two-dimensional (2-D) constant-flux infiltration (Neumann boundary condition), laboratory experimental data for 2-D transient drainage through a seepage face, and numerical model results (VS2DT) of a 2-D flow-path simulation using realistic surface boundary conditions. A hypothetical 3-D example case also is presented to demonstrate the new capability using periodic boundary conditions (for example, daily precipitation) and varied surface topography over a larger spatial scale (0.133 square kilometer). The new model capabilities retain the modular structure of the MODFLOW code and preserve MODFLOW's existing capabilities as well as compatibility with commercial pre-/post-processors. The overall success of the VSF Process in simulating mixed boundary conditions and variable soil types demonstrates its utility for future hydrologic investigations. In summary, this report presents a new flow package implementing the governing equations for variably saturated ground-water flow, four new boundary-condition packages, and a new adaptive time-stepping package.

  13. Upland Processes and Controls on September 2013 Debris Flows, Rocky Mountain National Park, Colorado

    Science.gov (United States)

    Patton, A. I.; Rathburn, S. L.; Bilderback, E. L.

    2015-12-01

    The extreme rainstorms that occurred in Colorado in September 2013 initiated numerous debris flows in the northern Front Range. These flows delivered sediment to upland streams, impacted buildings and infrastructure in and near Rocky Mountain National Park (RMNP), and underscored the importance of ongoing hazards in mountainous areas. Slope failures occurred primarily at elevations above 2600 m on south-facing slopes steeper than 40 degrees. The 2013 failures provide a valuable opportunity to better understand the site-specific geomorphic variables that control slope failure in the interior United States and the frequency of debris flows in steep terrain. Slope characteristics including soil depth, vegetation type and prevalence, contributing area, slope convexity/concavity, and soil texture were compared between 11 debris flow sites and 30 control sites that did not fail in RMNP. This analysis indicates that slope morphology is the primary controlling factor: 45% of the debris flows initiated in or below a colluvial hollow, and 36% of the failed sites initiated in other areas of convergent hillslope topography. Only one of the 30 control sites (3%) was located within a colluvial hollow, and only two control sites (6%) were located in other areas of convergent topography. The difference in average maximum soil thickness between debris flow sites (0.9 m) and control sites (0.7 m) is not statistically significant but may reflect the difficulty of using a soil probe in glacially derived soils. Additional research included field mapping and a geochronologic study at one 2013 debris deposit with evidence of multiple mass movements. Preliminary mapping results indicate that up to six debris flows have occurred at this site. Radiocarbon analysis of organic material and 10Be analysis of quartz from boulders in old debris levees constrain the timing of past events in this area. Future land management in RMNP will utilize this understanding of controls on slope failure and event frequency.

  14. Numerical and Experimental Study of the Bleeder Flow in Autoclave Process

    Science.gov (United States)

    Li, Yanxia; Li, Min; Gu, Yizhuo; Zhang, Zuoguang

    2011-08-01

    In the autoclave process, resin flow is a primary mechanism for removing excess resin and voids entrapped in the laminate and for obtaining a uniform, void-free composite part. A numerical method was developed to simulate resin flow in the laminate and the bleeder, and the effects of `bleeder flow' on resin flow and fiber compaction were examined. At the same time, fiber distribution in the cured laminates was investigated by both experiments and simulations for CF/epoxy and CF/BMI composites. The experimental and simulation data demonstrated that fibers consolidate and reconsolidate in the laminate, and that this behavior is affected by the viscosity and gel time of the resin system. Compared with previous studies in which only resin flow in the laminate is considered, these results deepen the understanding of the consolidation process, resin pressure variation, and void control during the autoclave process, which is valuable for studying the performance of composite parts, given that fiber distribution affects some properties of the composite material.

  15. Bandwidth Enhancement between Graphics Processing Units on the Peripheral Component Interconnect Bus

    Directory of Open Access Journals (Sweden)

    ANTON Alin

    2015-10-01

    Full Text Available General-purpose computing on graphics processing units is a new trend in high performance computing. Present-day applications require office and personal supercomputers which are mostly based on many-core hardware accelerators communicating with the host system through the Peripheral Component Interconnect (PCI) bus. Parallel data compression is a difficult topic, but compression has been used successfully to improve the communication between parallel message passing interface (MPI) processes on high performance computing clusters. In this paper we show that special-purpose compression algorithms designed for scientific floating-point data can be used to enhance the bandwidth between 2 graphics processing unit (GPU) devices on the PCI Express (PCIe) 3.0 x16 bus in a home-built personal supercomputer (PSC).
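
Whether in-flight compression helps can be judged with a simple pipeline model: compressed transfer time plus codec time versus the raw transfer time. A back-of-the-envelope sketch (the bus and codec throughputs and the 2:1 ratio are assumed illustrative numbers, not measurements from the paper):

```python
# Effective GPU-to-GPU bandwidth with on-the-fly compression.
# Compression pays off when size/ratio/bus_bw + size/codec_bw < size/bus_bw,
# i.e. when codec throughput is high enough for the achieved ratio.
def effective_bandwidth(size_gb, bus_gbps, ratio, codec_gbps):
    """Effective transfer rate (GB/s) when the payload is compressed in flight."""
    t = size_gb / ratio / bus_gbps + size_gb / codec_gbps
    return size_gb / t

# PCIe 3.0 x16 practical bandwidth ~12 GB/s (assumed); a floating-point
# codec reaching 2:1 at 40 GB/s (assumed).
raw = 12.0
eff = effective_bandwidth(1.0, bus_gbps=12.0, ratio=2.0, codec_gbps=40.0)
print(eff > raw)  # True: compression raised the effective bandwidth
```

The same model shows the break-even point: a slow codec or a ratio near 1:1 makes the compressed path slower than the raw bus.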

  16. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    Science.gov (United States)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem and are thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm, which enables parallel computation over the grid points as well as parallel execution of multiple source excitations. Using GPUs and the Compute Unified Device Architecture (CUDA) programming model, an overall speedup of 26.33x was achieved when combining both approaches, compared with sequential algorithms. Furthermore, we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without loss of image quality during the reconstruction process.
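
The SIRT update itself is simple: each iteration back-projects the row-normalized residual and scales by inverse column sums, x <- x + lam * C * A^T * R * (b - A x). A minimal, pure-Python sketch on a toy 2x2 system (illustrative only; the paper's MIMO formulation and CUDA kernels are not reproduced):

```python
# Simultaneous Iterative Reconstruction Technique (SIRT) on a tiny system.
# Update: x <- x + lam * C * A^T * R * (b - A x), where R and C hold the
# inverse row sums and inverse column sums of A. Each component of the
# update is independent, which is what makes the method easy to parallelize.
def sirt(A, b, lam=1.0, iters=2000):
    m, n = len(A), len(A[0])
    row = [sum(A[i]) for i in range(m)]                       # row sums
    col = [sum(A[i][j] for i in range(m)) for j in range(n)]  # column sums
    x = [0.0] * n
    for _ in range(iters):
        # residual, row-normalized
        r = [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / row[i]
             for i in range(m)]
        # back-project and scale by inverse column sums
        for j in range(n):
            x[j] += lam * sum(A[i][j] * r[i] for i in range(m)) / col[j]
    return x

A = [[1.0, 1.0], [1.0, 2.0]]
b = [3.0, 5.0]          # consistent with x = [1, 2]
x = sirt(A, b)
print([round(v, 3) for v in x])  # approaches [1.0, 2.0]
```

On a GPU, the per-row residuals and per-column updates map naturally onto one thread per ray and one thread per grid point.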

  17. Rapid learning-based video stereolization using graphic processing unit acceleration

    Science.gov (United States)

    Sun, Tian; Jung, Cheolkon; Wang, Lei; Kim, Joongkyu

    2016-09-01

    Video stereolization has received much attention in recent years due to the lack of stereoscopic three-dimensional (3-D) content. Although video stereolization can enrich stereoscopic 3-D content, it is hard to achieve automatic two-dimensional-to-3-D conversion at low computational cost. We propose rapid learning-based video stereolization using graphics processing unit (GPU) acceleration. We first generate an initial depth map based on learning from examples. Then, we refine the depth map using saliency and cross-bilateral filtering to make object boundaries clear. Finally, we perform depth-image-based rendering to generate stereoscopic 3-D views. To accelerate the computation of video stereolization, we provide a parallelizable hybrid GPU-central processing unit (CPU) solution suitable for running on a GPU. Experimental results demonstrate that the proposed method is nearly 180 times faster than CPU-based processing and achieves performance comparable to state-of-the-art methods.

  18. Molecular dynamics for long-range interacting systems on Graphic Processing Units

    CERN Document Server

    Filho, Tarcísio M Rocha

    2012-01-01

    We present implementations of a fourth-order symplectic integrator on graphic processing units for three $N$-body models with long-range interactions of general interest: the Hamiltonian Mean Field, Ring, and two-dimensional self-gravitating models. We discuss the algorithms, speedups, and errors using one and two GPUs. Speedups can be as high as 140 compared to a serial code, and the overall relative error in the total energy is of the same order of magnitude as for the CPU code. The number of particles used in the tests ranges from 10,000 to 50,000,000, depending on the model.
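
A fourth-order symplectic step can be built by composing three leapfrog substeps with the Yoshida coefficients. A minimal sketch for a single harmonic oscillator (illustrative; the paper's N-body force kernels and GPU implementation are not shown):

```python
# Fourth-order symplectic (Yoshida/Forest-Ruth) integrator for
# H = p^2/2 + q^2/2. The scheme composes three leapfrog substeps with
# weights w1, w0, w1; being symplectic, it keeps the energy error bounded
# instead of drifting.
CBRT2 = 2.0 ** (1.0 / 3.0)
W1 = 1.0 / (2.0 - CBRT2)
W0 = -CBRT2 / (2.0 - CBRT2)
C = [W1 / 2.0, (W0 + W1) / 2.0, (W0 + W1) / 2.0, W1 / 2.0]  # drift weights
D = [W1, W0, W1]                                            # kick weights

def yoshida4_step(q, p, dt, force=lambda q: -q):
    for i in range(3):
        q += C[i] * dt * p          # drift
        p += D[i] * dt * force(q)   # kick
    q += C[3] * dt * p              # final drift
    return q, p

q, p = 1.0, 0.0
e0 = 0.5 * (p * p + q * q)
for _ in range(10000):              # integrate to t = 200 with dt = 0.02
    q, p = yoshida4_step(q, p, 0.02)
e = 0.5 * (p * p + q * q)
print(abs(e - e0) < 1e-4)  # True: energy error stays bounded and tiny
```

In the N-body setting, only the force evaluation inside each kick changes; that is the part offloaded to the GPU.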

  19. A Hydrostratigraphic Model and Alternatives for Groundwater Flow and Contaminant Transport Model of Corrective Action Unit 99: Rainier Mesa-Shoshone Mountain, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Geotechnical Sciences Group

    2007-03-01

    The three-dimensional hydrostratigraphic framework model for the Rainier Mesa-Shoshone Mountain Corrective Action Unit was completed in Fiscal Year 2006. The model extends from eastern Pahute Mesa in the north to Mid Valley in the south and centers on the former nuclear testing areas at Rainier Mesa, Aqueduct Mesa, and Shoshone Mountain. The model area also includes an overlap with the existing Underground Test Area Corrective Action Unit models for Yucca Flat and Pahute Mesa. The model area is geologically diverse and includes unextended yet highly deformed Paleozoic terrain and high volcanic mesas between the Yucca Flat extensional basin on the east and caldera complexes of the Southwestern Nevada Volcanic Field on the west. The area also includes a hydrologic divide between two groundwater sub-basins of the Death Valley regional flow system. A diverse set of geological and geophysical data collected over the past 50 years was used to develop a structural model and hydrostratigraphic system for the model area. Three deep characterization wells, a magnetotelluric survey, and reprocessed gravity data were acquired specifically for this modeling initiative. These data and associated interpretive products were integrated using EarthVision® software to develop the three-dimensional hydrostratigraphic framework model. Crucial steps in the model building process included establishing a fault model, developing a hydrostratigraphic scheme, compiling a drill-hole database, and constructing detailed geologic and hydrostratigraphic cross sections and subsurface maps. The more than 100 stratigraphic units in the model area were grouped into 43 hydrostratigraphic units based on each unit's propensity toward aquifer or aquitard characteristics. The authors organized the volcanic units in the model area into 35 hydrostratigraphic units that include 16 aquifers, 12 confining units, 2 composite units (a mixture of aquifer and confining units), and 5 intrusive units.

  20. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must support a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) composed of commercial-equivalent radiation-hardened/tolerant single-board computers, field-programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  1. Flow Stress Behavior and Processing Map of Al-Cu-Mg-Ag Alloy during Hot Compression

    Institute of Scientific and Technical Information of China (English)

    YANG Sheng; YI Danqing; ZHANG Hong; YAO Sujuan

    2008-01-01

    The hot deformation behavior of Al-Cu-Mg-Ag alloy was studied by isothermal hot compression tests in the temperature range of 573-773 K and the strain rate range of 0.001-1 s-1 on a Gleeble 1500D thermomechanical simulator. The results show that the flow stress of the Al-Cu-Mg-Ag alloy increases with strain rate and decreases after a peak value, indicating dynamic recovery and recrystallization. A hyperbolic-sine relationship is found to correlate the flow stress well with strain rate and temperature, and a flow stress equation is estimated to describe the relation between strain rate, stress, and temperature during high-temperature deformation. The processing maps exhibit two domains as optimum fields for hot deformation at different strains: a high-strain-rate domain at 623-773 K and a low-strain-rate domain at 573-673 K.
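
The hyperbolic-sine (Arrhenius-type) relation referred to above is usually written as strain rate = A * sinh(alpha*sigma)^n * exp(-Q/RT); inverting it through the Zener-Hollomon parameter Z = strain_rate * exp(Q/RT) gives the flow stress. A sketch with made-up material constants (A, alpha, n, and Q below are illustrative, not the values fitted in the study):

```python
# Hyperbolic-sine constitutive law for hot deformation:
#   strain_rate = A * sinh(alpha*sigma)**n * exp(-Q/(R*T))
# Inverted via the Zener-Hollomon parameter Z = strain_rate*exp(Q/(R*T)):
#   sigma = (1/alpha) * asinh((Z/A)**(1/n))
import math

R = 8.314          # J/(mol K), gas constant
A = 1.0e10         # 1/s, structure factor (assumed)
ALPHA = 0.015      # 1/MPa, stress multiplier (assumed)
N = 5.0            # stress exponent (assumed)
Q = 150.0e3        # J/mol, deformation activation energy (assumed)

def flow_stress(strain_rate_val, T):
    """Flow stress (MPa) from the sinh law for a given strain rate and T (K)."""
    Z = strain_rate_val * math.exp(Q / (R * T))
    return math.asinh((Z / A) ** (1.0 / N)) / ALPHA

def strain_rate(sigma, T):
    """Inverse relation: strain rate (1/s) for a flow stress sigma (MPa)."""
    return A * math.sinh(ALPHA * sigma) ** N * math.exp(-Q / (R * T))

s = flow_stress(0.01, 673.0)
print(abs(strain_rate(s, 673.0) - 0.01) < 1e-8)  # True: relations consistent
```

Fitting A, alpha, n, and Q to the measured peak stresses over the tested temperature and strain-rate grid is exactly what "the flow stress equation is estimated" amounts to.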

  2. Application of Data Smoothing Method in Signal Processing for Vortex Flow Meters

    Directory of Open Access Journals (Sweden)

    Zhang Jun

    2017-01-01

    Full Text Available The vortex flow meter is typical flow measurement equipment. Its output signals can easily be impaired by environmental conditions. In order to obtain an improved estimate of the time-averaged velocity from the vortex flow meter, a signal filtering method is applied in this paper. The method is based on a simple Savitzky-Golay smoothing filter algorithm. Following the algorithm, a numerical program was developed in Python with the scientific library NumPy. Two sample data sets were processed with the program. The results demonstrate that the processed data agree acceptably with the original data, and an improved estimate of the time-averaged velocity is obtained from the smoothed curves. The simple data smoothing program proved usable and stable for this filter.
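
For a symmetric window, the Savitzky-Golay filter reduces to a fixed convolution; for a 5-point window and a quadratic fit the well-known coefficients are (-3, 12, 17, 12, -3)/35. A dependency-free sketch (the paper's actual NumPy program is not reproduced):

```python
# Savitzky-Golay smoothing, 5-point window, polynomial order 2.
# Each output sample is a local least-squares quadratic fit evaluated at
# the window centre; the fit collapses to the fixed stencil below.
COEFFS = [-3.0, 12.0, 17.0, 12.0, -3.0]  # divided by 35

def savgol5(y):
    """Smooth a sequence; the two samples at each edge are left unchanged."""
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(c * y[i + k] for c, k in zip(COEFFS, range(-2, 3))) / 35.0
    return out

# A quadratic signal is reproduced exactly (an order-2 fit preserves order-2
# polynomials), which is the property that keeps peak shapes intact while
# suppressing high-frequency noise.
signal = [x * x for x in range(10)]
smoothed = savgol5(signal)
print(max(abs(a - b) for a, b in zip(smoothed, signal)) < 1e-9)  # True
```

In SciPy the equivalent call is `scipy.signal.savgol_filter(y, window_length=5, polyorder=2)`.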

  3. Debris-flow deposits and watershed erosion rates near southern Death Valley, CA, United States

    Science.gov (United States)

    Schmidt, K.M.; Menges, C.M.; ,

    2003-01-01

    Debris flows from the steep, granitic hillslopes of the Kingston Range, CA are commensurate in age with nearby fluvial deposits. Quaternary chronostratigraphic differentiation of debris-flow deposits is based upon time-dependent characteristics such as relative boulder strength, derived from Schmidt hammer measurements, degree of surface desert varnish, pedogenesis, and vertical separation. Rock strength is highest for Holocene-aged boulders and decreases for Pleistocene-aged boulders weathering to grus. Volumes of age-stratified debris-flow deposits, constrained by deposit thickness above bedrock, GPS surveys, and geologic mapping, are greatest for Pleistocene deposits. Shallow landslide susceptibility, derived from a topographically based GIS model, in conjunction with deposit volumes produces watershed-scale erosion rates of approximately 2-47 mm ka-1, with time-averaged Holocene rates exceeding Pleistocene rates.

  4. Predicting the probability and volume of postwildfire debris flows in the intermountain western United States

    Science.gov (United States)

    Cannon, S.H.; Gartner, J.E.; Rupert, M.G.; Michael, J.A.; Rea, A.H.; Parrett, C.

    2010-01-01

    Empirical models to estimate the probability of occurrence and volume of postwildfire debris flows can be quickly implemented in a geographic information system (GIS) to generate debris-flow hazard maps either before or immediately following wildfires. Models that can be used to calculate the probability of debris-flow production from individual drainage basins in response to a given storm were developed using logistic regression analyses of a database from 388 basins in 15 burned areas throughout the U.S. Intermountain West. The models describe debris-flow probability as a function of readily obtained measures of areal burned extent, soil properties, basin morphology, and rainfall from short-duration, low-recurrence-interval convective rainstorms. A model for estimating the volume of material that may issue from a basin mouth in response to a given storm was developed using multiple linear regression analysis of a database from 56 basins burned by eight fires. This model describes debris-flow volume as a function of basin gradient, areal burned extent, and storm rainfall. Applications of the probability model and the volume model for hazard assessments are illustrated using information from the 2003 Hot Creek fire in central Idaho. The predictive strength of the approach in this setting is evaluated using information on the response of this fire to a localized thunderstorm in August 2003. The mapping approach presented here identifies the basins most prone to the largest debris-flow events and thus provides the information necessary to prioritize areas for postfire erosion mitigation, warnings, and prefire management efforts throughout the Intermountain West.
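
The probability side of such a model is a standard logistic regression: P = 1/(1 + exp(-x)), where x is a linear combination of the basin and storm variables. A schematic sketch (the coefficients and variable scaling below are invented for illustration and are not the published model):

```python
# Logistic-regression form of a postwildfire debris-flow probability model:
#   P = 1 / (1 + exp(-x)),  x = b0 + b1*burned_frac + b2*gradient + b3*rain
# All coefficients here are purely illustrative placeholders.
import math

B = {"intercept": -4.0, "burned_frac": 3.0, "gradient": 2.0, "storm_rain": 0.15}

def debris_flow_probability(burned_frac, gradient, storm_rain_mm):
    x = (B["intercept"]
         + B["burned_frac"] * burned_frac    # fraction of basin burned, 0-1
         + B["gradient"] * gradient          # mean basin gradient, m/m
         + B["storm_rain"] * storm_rain_mm)  # storm rainfall, mm
    return 1.0 / (1.0 + math.exp(-x))

p_small = debris_flow_probability(0.2, 0.3, 5.0)
p_large = debris_flow_probability(0.9, 0.5, 25.0)
print(p_small < p_large)  # True: heavier rain on a more burned, steeper basin
```

Because every input is a GIS-derivable raster or storm statistic, evaluating the fitted model over all basins yields the hazard map directly.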

  5. Case Studies of Internationalization in Adult and Higher Education: Inside the Processes of Four Universities in the United States and the United Kingdom

    Science.gov (United States)

    Coryell, Joellen Elizabeth; Durodoye, Beth A.; Wright, Robin Redmon; Pate, P. Elizabeth; Nguyen, Shelbee

    2012-01-01

    This report outlines a method for learning about the internationalization processes at institutions of adult and higher education and then provides the analysis of data gathered from the researchers' own institution and from site visits to three additional universities in the United States and the United Kingdom. It was found that campus…

  6. Applying Process-Based Models for Subsurface Flow Treatment Wetlands: Recent Developments and Challenges

    Directory of Open Access Journals (Sweden)

    Guenter Langergraber

    2016-12-01

    Full Text Available To date, only a few process-based models for subsurface flow treatment wetlands have been developed. To model a treatment wetland, such models have to comprise a number of sub-models describing water flow, pollutant transport, pollutant transformation and degradation, the effects of wetland plants, and the transport and deposition of suspended particulate matter. The two most advanced models, the HYDRUS Wetland Module and BIO-PORE, are briefly described. This paper shows typical simulation results for vertical flow wetlands and discusses experiences and challenges in using process-based wetland models, in relation to the sub-models describing the most important wetland processes. It is demonstrated that existing simulation tools can be applied to simulate processes in treatment wetlands. Most important for achieving a good match between measured and simulated pollutant concentrations is a good calibration of the water flow and transport models. Only after these calibrations have been made, and the effect of the influent fractionation on simulation results has been considered, should changing the parameters of the biokinetic models be taken into account. Modelling the effects of wetland plants is possible and has to be considered where important. Up to now, models describing clogging are the least established among the sub-models required for a complete wetland model, and thus further development and research are required.

  7. Toward a Unified Modeling of Learner's Growth Process and Flow Theory

    Science.gov (United States)

    Challco, Geiser C.; Andrade, Fernando R. H.; Borges, Simone S.; Bittencourt, Ig I.; Isotani, Seiji

    2016-01-01

    Flow is the affective state in which a learner is so engaged and involved in an activity that nothing else seems to matter. In this sense, to help students in the skill development and knowledge acquisition (referred to as learners' growth process) under optimal conditions, the instructional designers should create learning scenarios that favor…

  8. Flow Dynamics of green sand in the DISAMATIC moulding process using Discrete element method (DEM)

    DEFF Research Database (Denmark)

    Hovad, Emil; Larsen, P.; Walther, Jens Honore

    2015-01-01

    The production of sand moulds in the DISAMATIC casting process is simulated with the discrete element method (DEM). The main purpose is to simulate the dynamics of the flow of green sand during the production of the sand mould. The sand shot is simulated, which is the first stage of the DISAMATIC...

  9. Simulating the DISAMATIC process using the discrete element method — a dynamical study of granular flow

    DEFF Research Database (Denmark)

    Hovad, Emil; Spangenberg, Jon; Larsen, P.

    2016-01-01

    The discrete element method (DEM) is applied to simulate the dynamics of the flow of green sand while filling a mould using the DISAMATIC process. The focus is to identify relevant physical experiments that can be used to characterize the material properties of green sand in the numerical model...
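
In DEM, each grain is advanced explicitly under gravity and contact forces; a linear spring-dashpot normal contact makes collisions dissipative, which is what lets simulated green sand come to rest in the mould. A one-dimensional toy sketch (stiffness, damping, mass, and time step are illustrative values, not calibrated green-sand properties):

```python
# Minimal DEM contact: a grain falling onto a rigid floor with a linear
# spring-dashpot normal force F = -k*overlap - c*v while in contact.
# The dashpot dissipates energy, so the rebound is slower than the impact.
G = 9.81       # m/s^2, gravity
K = 1.0e4      # N/m, contact stiffness (assumed)
CDAMP = 2.0    # N s/m, contact damping (assumed)
M = 1.0e-3     # kg, grain mass (assumed)
DT = 1.0e-5    # s, explicit time step

def simulate(z0=0.05, steps=200000):
    z, v = z0, 0.0           # height of grain centre and velocity
    v_impact = 0.0
    for _ in range(steps):
        f = -M * G
        if z < 0.0:          # in contact: overlap = -z
            if v_impact == 0.0:
                v_impact = -v            # record first impact speed
            f += -K * z - CDAMP * v      # spring-dashpot normal force
        v += f / M * DT      # semi-implicit Euler: velocity first,
        z += v * DT          # then position with the new velocity
    return v_impact, v

v_in, v_out = simulate()
print(v_in > 0 and abs(v_out) < v_in)  # rebound slower than impact
```

A full DISAMATIC simulation applies the same contact law pairwise between millions of grains plus tangential friction, which is why these runs are computationally demanding.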

  10. Continuous-flow processes for the catalytic partial hydrogenation reaction of alkynes

    Directory of Open Access Journals (Sweden)

    Carmen Moreno-Marrodan

    2017-04-01

    Full Text Available The catalytic partial hydrogenation of substituted alkynes to alkenes is a process of high importance in the manufacture of several market chemicals. The present paper briefly reviews the heterogeneous catalytic systems engineered for this reaction under continuous flow and in the liquid phase. The main contributions that appeared in the literature from 1997 up to August 2016 are discussed in terms of reactor design. A comparison with batch and industrial processes is provided wherever possible.

  11. Turbulent Fluid Flow and Heat Transfer Calculation in Mold Filling and Solidification Processes of Castings

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on time-averaged equations and a modified engineering turbulence model, the mold filling and solidification processes of castings are approximately described. The algorithm for the governing equations is briefly introduced, and some problems of the traditional method, along with methods for improving it, are also presented. Both calculations and tests show that, compared with a laminar model of fluid flow and heat transfer, the simulation results obtained with the turbulence model are closer to the real mold filling and solidification processes of castings.

  12. Experimental investigation and numerical simulation of plastic flow behavior during forward-backward-radial extrusion process

    OpenAIRE

    A. Farhoumand; R. Ebrahimi

    2016-01-01

    The finite element method was employed to investigate the effect of process parameters on plastic deformation behavior in the Forward-Backward-Radial Extrusion (FBRE) process. The results of an axisymmetric model show that the friction between the die components and the sample has a substantial effect on the material flow behavior. Although the strain heterogeneity index (SHI) slightly decreases with an increase in friction, a large portion of the sample experiences significant strain heterogeneity. Increasing...

  13. Continuous ‘Passive’ flow-proportional monitoring of drainage using a new modified Sutro weir (MSW) unit

    DEFF Research Database (Denmark)

    Vendelboe, Anders Lindblad; Rozemeijer, Joachim; de Jonge, Lis Wollesen;

    2016-01-01

    In view of their crucial role in water and solute transport, enhanced monitoring of agricultural subsurface drain tile systems is important for adequate water quality management. However, existing monitoring techniques for flow and contaminant loads from tile drains are expensive and labour-intensive. The aim of this study was to develop a cost-effective and simple method for monitoring loads from tile drains. The Flowcap is a modified Sutro weir (MSW) unit that can be attached to the outlet of tile drains. It is capable of registering total flow, contaminant loads and flow-averaged ... Results from this type of monitoring can provide data for the evaluation and optimisation of best management practices in agriculture in order to produce the highest yield without water quality and recipient surface ... as well as information for the selection and evaluation of mitigation options to improve water quality.

  14. Geothermal Resource/Reservoir Investigations Based on Heat Flow and Thermal Gradient Data for the United States

    Energy Technology Data Exchange (ETDEWEB)

    D. D. Blackwell; K. W. Wisian; M. C. Richards; J. L. Steele

    2000-04-01

    Several activities related to geothermal resources in the western United States are described in this report. A database of geothermal site-specific thermal gradient and heat flow results from individual exploration wells in the western US has been assembled. Extensive temperature gradient and heat flow exploration data from the active exploration of the 1970s and 1980s were collected, compiled, and synthesized, emphasizing previously unavailable company data. Examples of the use and applications of the database are described. The database and results are available on the World Wide Web. In this report, numerical models are used to establish basic qualitative relationships between structure, heat input, and permeability distribution, and the resulting geothermal system. A series of steady-state, two-dimensional numerical models evaluates the effect of permeability and structural variations on an idealized, generic Basin and Range geothermal system, and the results are described.
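
Heat-flow and thermal-gradient data of the kind compiled here are related through Fourier's law, q = k * dT/dz, so a measured gradient plus a thermal conductivity yields the heat flow, and vice versa. A small sketch (the conductivity and gradient values are typical illustrative numbers, not entries from the database):

```python
# Fourier's law for conductive heat flow: q = k * dT/dz.
# A thermal-gradient log plus rock conductivity yields heat flow, and a
# regional heat flow lets one extrapolate temperature to depth.
def heat_flow(k, grad_c_per_km):
    """Heat flow in mW/m^2 from conductivity k (W/m/K) and gradient (C/km)."""
    return k * grad_c_per_km  # (W/m/K)*(1e-3 K/m) -> mW/m^2

def temperature_at_depth(t_surface_c, q_mw_m2, k, depth_km):
    """Temperature (C) at depth assuming a uniform conductive gradient."""
    return t_surface_c + (q_mw_m2 / k) * depth_km

q = heat_flow(2.5, 40.0)                     # 2.5 W/m/K, 40 C/km
t = temperature_at_depth(15.0, q, 2.5, 3.0)  # 3 km deep
print(q, round(t, 1))  # 100.0 mW/m^2 and 135.0 C
```

Layered conductivity and advection by groundwater complicate real profiles, which is one motivation for the numerical models described in the report.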

  15. Methodology for systematic analysis and improvement of manufacturing unit process life-cycle inventory (UPLCI)—CO2PE! initiative (cooperative effort on process emissions in manufacturing). Part 1: Methodology description

    DEFF Research Database (Denmark)

    Kellens, Karel; Dewulf, Wim; Overcash, Michael

    2012-01-01

    Developed in the framework of the CO2PE! collaborative research programme (CO2PE! 2011a), the methodology comprises two approaches with different levels of detail, respectively referred to as the screening approach and the in-depth approach. The screening approach relies on representative, publicly available data and engineering ... and resource efficiency improvements of the manufacturing unit process. To ensure optimal reproducibility and applicability, documentation guidelines for data and metadata are included in both approaches. Guidance on definition of functional unit and reference flow as well as on determination of system boundaries specifies the generic goal and scope definition requirements according to ISO 14040 (2006) and ISO 14044 (2006). The proposed methodology aims at ensuring solid foundations for the provision of high-quality LCI data for the use phase of manufacturing unit processes. Envisaged usage encompasses ...

  16. Hydrologic Data for the Groundwater Flow and Contaminant Transport Model of Corrective Action Units 101 and 102: Central and Western Pahute Mesa, Nye County, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Drici, Warda

    2004-02-01

    This report documents the analysis of the available hydrologic data conducted in support of the development of a Corrective Action Unit (CAU) groundwater flow model for Central and Western Pahute Mesa: CAUs 101 and 102.

  17. Contaminant Transport Parameters for the Groundwater Flow and Contaminant Transport Model of Corrective Action Units 101 and 102: Central and Western Pahute Mesa, Nye County, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Drici, Warda

    2003-08-01

    This report documents the analysis of the available transport parameter data conducted in support of the development of a Corrective Action Unit (CAU) groundwater flow model for Central and Western Pahute Mesa: CAUs 101 and 102.

  18. Linked migration systems: immigration and internal labor flows in the United States.

    Science.gov (United States)

    R. Walker; M. Ellis; R. Barff

    1992-01-01

    We investigate the relationship between immigration and internal labor movements in the US. Wedding the literatures on immigration and internal migration, we develop a mobility model linking these various flows on the basis of the occupational status of workers, production and institutional relations in the economy, and economic restructuring.

  19. Patterns of gene flow between crop and wild carrot, Daucus carota (Apiaceae) in the United States

    Science.gov (United States)

    Studies of gene flow between crops and their wild relatives have implications both for management practices in farming and breeding and for understanding the risk of transgene escape. These types of studies may also yield insight into population dynamics and the evolutionary consequences of gene flow.

  20. Optimal Power Flow in three-phase islanded microgrids with inverter interfaced units

    DEFF Research Database (Denmark)

    Sanseverino, Eleonora Riva; Quang, Ninh Nguyen; Di Silvestre, Maria Luisa

    2015-01-01

    In this paper, the solution of the Optimal Power Flow (OPF) problem for three phase islanded microgrids is studied, the OPF being one of the core functions of the tertiary regulation level for an AC islanded microgrid with a hierarchical control architecture. The study also aims at evaluating...

  1. Relationships among the energy, emergy, and money flows of the United States from 1900 to 2011.

    Science.gov (United States)

    Energy Systems Language models of the resource base for the U.S. economy and of economic exchange were used, respectively, (1) to show how energy consumption and emergy use contribute to real and nominal gross domestic product (GDP) and (2) to propose a model of coupled flows tha...

  2. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity

    Directory of Open Access Journals (Sweden)

    Cuicui Wang

    2015-01-01

    Full Text Available This investigation is among the first to analyze the neural basis of an investment process with money flow information from a financial market, using a simplified task in which volunteers had to choose whether or not to buy stocks based on the display of positive or negative money flow information. After choosing “to buy” or “not to buy,” participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investors' brain activity and capture the error-related negativity (ERN) and feedback-related negativity (FRN) components. The ERN results suggested that there might be higher risk and more conflict when buying stocks with negative net money flow information than with positive net money flow information, and the inverse was also true for the “not to buy” option. The FRN component evoked by the bad outcome of a decision was more negative than that evoked by the good outcome, reflecting the difference between the values of the actual and expected outcomes. From this research, we can better understand how investors perceive money flow information in financial markets and the associated neural cognitive effects in the investment process.

  3. Impact of flow velocity on biochemical processes – a laboratory experiment

    Directory of Open Access Journals (Sweden)

    A. Boisson

    2014-08-01

    Full Text Available Understanding and predicting the hydraulic and chemical properties of natural environments are crucial current challenges. This requires considering hydraulic, chemical and biological processes and evaluating how hydrodynamic properties impact biochemical reactions. In this context, an original laboratory experiment to study the impact of flow velocity on biochemical reactions along a one-dimensional flow streamline has been developed. Based on the example of nitrate reduction, nitrate-rich water passes through plastic tubes at several flow velocities (from 6.2 to 35 mm min−1), while nitrate concentration at the tube outlet is monitored for more than 500 h. This experimental setup allows assessment of the biologically controlled reaction between a mobile electron acceptor (nitrate) and an electron donor (carbon) coming from an immobile phase (the tube), which produces carbon during its degradation by microorganisms. The result is a dynamic of nitrate transformation, associated with biofilm development, that is flow-velocity dependent. It is proposed that the main behaviors of the reaction rates are related to phases of biofilm development, through a simple analytical model including assimilation. The experimental results and their interpretation demonstrate a significant impact of flow velocity on reaction performance and stability, and highlight the relevance of dynamic experiments over static experiments for understanding biogeochemical processes.
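    The velocity dependence described in this record can be sketched with a deliberately simplified first-order advection-reaction model (no biofilm dynamics; the rate constant and inlet concentration are illustrative values, not taken from the experiment):

    ```python
    import math

    def outlet_concentration(c_in, k_per_min, tube_length_mm, velocity_mm_min):
        """First-order advection-reaction along a 1-D streamline:
        C_out = C_in * exp(-k * tau), with residence time tau = L / v."""
        tau_min = tube_length_mm / velocity_mm_min  # residence time [min]
        return c_in * math.exp(-k_per_min * tau_min)

    # Faster flow -> shorter residence time -> less nitrate removed per pass,
    # using the two extreme velocities quoted in the abstract (6.2 and 35 mm/min).
    slow = outlet_concentration(50.0, 0.01, 500.0, 6.2)
    fast = outlet_concentration(50.0, 0.01, 500.0, 35.0)
    ```

    The sketch reproduces only the qualitative trend the experiment probes; the actual system adds biofilm growth phases on top of this baseline.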

  4. Scale-up from batch to flow-through wet milling process for injectable depot formulation.

    Science.gov (United States)

    Lehocký, Róbert; Pěček, Daniel; Štěpánek, František

    2016-12-01

    Injectable depot formulations are aimed at providing long-term sustained release of a drug into systemic circulation, thus reducing plasma level fluctuations and improving patient compliance. The particle size distribution of the formulation in the form of a suspension is a key parameter that controls the release rate. In this work, the process of wet stirred media milling (ball milling) of a poorly water-soluble substance has been investigated with two main aims: (i) to determine the parametric sensitivity of milling kinetics; and (ii) to develop a scale-up methodology for process transfer from a batch to a flow-through arrangement. Ball milling experiments were performed in two types of ball mills: a batch mill with a 30 ml maximum working volume, and a flow-through mill with a 250 ml maximum working volume. Milling parameters were investigated in detail by Quality by Design (QbD) methodologies to map the parametric space. Specifically, the effects of ball size, ball fill level, and rpm on the particle breakage kinetics were systematically investigated in both mills, with an additional parameter (flow rate) in the case of the flow-through mill. The breakage rate was found to follow power-law kinetics with respect to dimensionless time, with an asymptotic d50 particle size in the range of 200-300 nm. In the case of the flow-through mill, the number of theoretical passes through the mill was found to be an important scale-up parameter.
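    A minimal sketch of the reported behavior, assuming a common power-law form for breakage kinetics with an asymptote (the abstract gives only the asymptotic d50 range of 200-300 nm; starting size and exponent are hypothetical):

    ```python
    def d50_nm(tau, d0_nm=5000.0, d_lim_nm=250.0, n=0.7):
        """Power-law breakage kinetics toward an asymptotic size:
        d50(tau) = d_lim + (d0 - d_lim) * (1 + tau)**(-n),
        where tau is dimensionless milling time (parameters illustrative)."""
        return d_lim_nm + (d0_nm - d_lim_nm) * (1.0 + tau) ** (-n)

    def theoretical_passes(flow_rate_ml_min, chamber_volume_ml, duration_min):
        """Number of theoretical passes through the flow-through mill --
        the scale-up parameter highlighted in the abstract."""
        return flow_rate_ml_min * duration_min / chamber_volume_ml
    ```

    Matching the batch and flow-through runs on equal numbers of theoretical passes (rather than equal wall-clock time) is the kind of transfer rule the abstract points to.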

  5. Integrating turbulent flow, biogeochemical, and poromechanical processes in rippled coastal sediment (Invited)

    Science.gov (United States)

    Cardenas, M. B.; Cook, P. L.; Jiang, H.; Traykovski, P.

    2010-12-01

    Coastal sediments are the locus of multiple coupled processes. Turbulent flow associated with waves and currents induces porewater flow through sediment leading to fluid exchange with the water column. This porewater flow is determined by the hydraulic and elastic properties of the sediment. Porewater flow also ultimately controls biogeochemical reactions in the sediment whose rates depend on delivery of reactants and export of products. We present results from numerical modeling studies directed at integrating these processes with the goal of shedding light on these complex environments. We show how denitrification rates inside ripples are largest at intermediate permeability which represents the optimal balance of reactant delivery and anoxic conditions. It is clear that nutrient cycling and distribution within the sediment is strongly dependent on the character of the multidimensional flow field inside of sediment. More recent studies illustrate the importance of the elastic properties of the saturated sediment on modulating fluid exchange between the water column and the sediment when pressure fluctuations along the sediment-water interface occur at the millisecond scale. Pressure fluctuations occur at this temporal scale due to turbulence and associated shedding of vortices due to the ripple geometry. This suggests that biogeochemical cycling may also be affected by these high-frequency elastic effects. Future studies should be directed towards this and should take advantage of modeling tools such as those we present.

  6. A Neuroeconomics Analysis of Investment Process with Money Flow Information: The Error-Related Negativity.

    Science.gov (United States)

    Wang, Cuicui; Vieito, João Paulo; Ma, Qingguo

    2015-01-01

    This investigation is among the first ones to analyze the neural basis of an investment process with money flow information of financial market, using a simplified task where volunteers had to choose to buy or not to buy stocks based on the display of positive or negative money flow information. After choosing "to buy" or "not to buy," participants were presented with feedback. At the same time, event-related potentials (ERPs) were used to record investor's brain activity and capture the event-related negativity (ERN) and feedback-related negativity (FRN) components. The results of ERN suggested that there might be a higher risk and more conflict when buying stocks with negative net money flow information than positive net money flow information, and the inverse was also true for the "not to buy" stocks option. The FRN component evoked by the bad outcome of a decision was more negative than that by the good outcome, which reflected the difference between the values of the actual and expected outcome. From the research, we could further understand how investors perceived money flow information of financial market and the neural cognitive effect in investment process.

  7. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    Science.gov (United States)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is called open source, meaning that anyone can edit the source code, make modifications, and distribute them to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, easing the bridge from one tool to another. Programming such a reader requires knowledge of the file format being read, as well as the equations necessary to obtain the derived values after loading. When these CFD simulations run, extremely large files are loaded and values are calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to handle graphics for computers; however, in recent years, GPUs have been used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly permit more complex computations.

  8. Techno-economic analysis of a gas-to-liquid process with different placements of a CO₂ removal unit

    Energy Technology Data Exchange (ETDEWEB)

    Rafiee, A.; Hillestad, M. [Norwegian University of Science and Technology (NTNU), Department of Chemical Engineering, Trondheim (Norway)

    2012-03-15

    Five placements of a CO₂ removal unit in a gas-to-liquid (GTL) process are evaluated from an economic point of view. The kinetic model is the one given by Iglesia et al. for a cobalt-based Fischer-Tropsch (FT) reactor. For each alternative, the process is optimized with respect to steam-to-carbon ratio, purge ratio of light ends, amount of tail gas recycled to the syngas and FT units, reactor volume, and CO₂ recovery. The results indicate that the carbon and energy efficiencies and the annual net cash flow of the process with or without a CO₂ removal unit are not significantly different, and that there is not much to gain by removing CO₂ from the process. It is optimal to recycle about 97 % of the light ends to the process (mainly to the FT unit) to obtain higher conversion of CO and H₂ in the reactor. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
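    The effect of the roughly 97 % light-ends recycle on overall conversion can be illustrated with a textbook steady-state recycle balance (the single-pass conversion used below is illustrative, not the paper's optimized value):

    ```python
    def overall_conversion(single_pass_x, recycle_fraction):
        """Overall conversion of fresh feed for a reactor with recycle.
        Steady-state balance: X_ov = X_sp / (1 - R * (1 - X_sp)),
        where R is the fraction of unconverted material sent back."""
        return single_pass_x / (1.0 - recycle_fraction * (1.0 - single_pass_x))

    once = overall_conversion(0.6, 0.0)     # no recycle: 0.60
    looped = overall_conversion(0.6, 0.97)  # 97 % recycle: close to complete
    ```

    This is why a high recycle ratio raises the effective CO and H₂ conversion even when per-pass conversion in the FT reactor is modest.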

  9. Post-fire hillslope debris flows: Evidence of a distinct erosion process

    Science.gov (United States)

    Langhans, Christoph; Nyman, Petter; Noske, Philip J.; Van der Sant, Rene E.; Lane, Patrick N. J.; Sheridan, Gary J.

    2017-10-01

    After wildfire, a hitherto unexplained erosion process, which some authors have called 'miniature debris flows on hillslopes' and which leaves behind levee-lined rills, has been observed in some regions of the world. Despite the unusual proposition of debris flow on planar hillslopes, the process has not received much attention. The objectives of this study were to (1) accumulate observational evidence of Hillslope Debris Flows (HDF) as we have defined the process, (2) understand their initiation process by conducting runoff experiments on hillslopes, (3) propose a conceptual model of HDF, and (4) contrast and classify HDF relative to other erosion and transport processes in the post-wildfire hillslope domain. HDF have been observed at relatively steep slope gradients (0.4-0.8), on a variety of geologies, and after fire of at least moderate severity; they consist of a lobe of gravel- to cobble-sized material 0.2-1 m wide that is pushed by runoff damming up behind it. During initiation, runoff moved individual particles that accumulated a small distance downslope until the accumulation of grains failed and formed the granular lobe of the HDF. HDF are a threshold process, and runoff rates of 0.5 to 2 L s−1 were required for their initiation during the experiments. The conceptual model highlights HDF as a geomorphic process distinct from channel debris flows, because they occur on planar, unconfined hillslopes rather than in confined channels. HDF can erode very coarse non-cohesive surface soil, which distinguishes them from rill erosion, which involves suspended and bedload transport. On a matrix of slope and grain size, HDF are enveloped between purely gravity-driven dry ravel and mostly runoff-driven bedload transport in rills.
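    The threshold character of HDF initiation can be expressed as a simple predicate over the ranges reported in the abstract (the sharp cutoffs are of course an idealization of a gradual, site-dependent transition):

    ```python
    def hdf_possible(slope_gradient, runoff_l_per_s,
                     slope_range=(0.4, 0.8), runoff_threshold_l_per_s=0.5):
        """Crude initiation check for hillslope debris flows, using the
        observed slope-gradient range (0.4-0.8) and the lower bound of
        the experimental runoff range (0.5-2 L/s) from the study."""
        return (slope_range[0] <= slope_gradient <= slope_range[1]
                and runoff_l_per_s >= runoff_threshold_l_per_s)
    ```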

  10. Flow behavior of polymers during the roll-to-roll hot embossing process

    Science.gov (United States)

    Deng, Yujun; Yi, Peiyun; Peng, Linfa; Lai, Xinmin; Lin, Zhongqin

    2015-06-01

    The roll-to-roll (R2R) hot embossing process is a recent advancement in the micro hot embossing process and is capable of continuously fabricating micro/nano-structures on polymers, with a high efficiency and a high throughput. However, the fast forming of the R2R hot embossing process limits the time for material flow and results in complicated flow behavior in the polymers. This study presents a fundamental investigation into the flow behavior of polymers and aims towards the comprehensive understanding of the R2R hot embossing process. A three-dimensional (3D) finite element (FE) model based on the viscoelastic model of polymers is established and validated for the fabrication of micro-pyramids using the R2R hot embossing process. The deformation and recovery of micro-pyramids on poly(vinyl chloride) (PVC) film are analyzed in the filling stage and the demolding stage, respectively. Firstly, in the analysis of the filling stage, the temperature distribution on the PVC film is discussed. A large temperature gradient is observed along the thickness direction of the PVC film and the temperature of the top surface is found to be higher than that of the bottom surface, due to the poor thermal conductivity of PVC. In addition, creep strains are demonstrated to depend highly on the temperature and are also observed to concentrate on the top layer of the PVC film because of high local temperature. In the demolding stage, the recovery of the embossed micro-pyramids is obvious. The cooling process is shown to be efficient for the reduction of recovery, especially when the mold temperature is high. In conclusion, this research advances the understanding of the flow behavior of polymers in the R2R hot embossing process and might help in the development of the highly accurate and highly efficient fabrication of microstructures on polymers.

  11. Performance of transonic fan stage with weight flow per unit annulus area of 198 kilograms per second per square meter (40.6(lb/sec)/sq ft)

    Science.gov (United States)

    Kovich, G.; Moore, R. D.; Urasek, D. C.

    1973-01-01

    The overall and blade-element performance are presented for an air compressor stage designed to study the effect of weight flow per unit annulus area on efficiency and flow range. At the design speed of 424.8 m/sec the peak efficiency of 0.81 occurred at the design weight flow and a total pressure ratio of 1.56. Design pressure ratio and weight flow were 1.57 and 29.5 kg/sec (65.0 lb/sec), respectively. Stall margin at design speed was 19 percent based on the weight flow and pressure ratio at peak efficiency and at stall.
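    The headline numbers cross-check easily: the specific flow of 198 kg/s per square meter together with the 29.5 kg/s design weight flow implies the annulus area, and the 19 percent stall margin follows a common weight-flow/pressure-ratio definition (sketched below; stall-point values are not given in the abstract, so the function is shown with the reference point only):

    ```python
    def annulus_area_m2(mass_flow_kg_s, specific_flow_kg_s_m2):
        """Annulus area implied by weight flow per unit annulus area."""
        return mass_flow_kg_s / specific_flow_kg_s_m2

    def stall_margin_pct(pr_stall, w_stall, pr_ref, w_ref):
        """Stall margin from pressure ratio and weight flow at stall and at
        the reference (peak-efficiency) point, a common compressor form:
        SM = [(PR_stall / PR_ref) * (W_ref / W_stall) - 1] * 100."""
        return ((pr_stall / pr_ref) * (w_ref / w_stall) - 1.0) * 100.0

    area = annulus_area_m2(29.5, 198.0)  # roughly 0.149 m^2 of annulus
    ```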

  12. Titanium recycling in the United States in 2004, chap. Y of Sibley, S.F., ed., Flow studies for recycling metal commodities in the United States

    Science.gov (United States)

    Goonan, Thomas G.

    2010-01-01

    As one of a series of reports that describe the recycling of metal commodities in the United States, this report discusses the titanium metal fraction of the titanium economy, which generates and uses titanium metal scrap in its operations. Data for 2004 were selected to demonstrate the titanium flows associated with these operations. This report includes a description of titanium metal supply and demand in the United States to illustrate the extent of titanium recycling and to identify recycling trends. In 2004, U.S. apparent consumption of titanium metal (contained in various titanium-bearing products) was 45,000 metric tons (t) of titanium, which was distributed as follows: 25,000 t of titanium recovered as new scrap, 9,000 t of titanium as titanium metal and titanium alloy products delivered to the U.S. titanium products reservoir, 7,000 t of titanium consumed by steelmaking and other industries, and 4,000 t of titanium contained in unwrought and wrought products exported. Titanium recycling is concentrated within the titanium metals sector of the total titanium market. The titanium market is otherwise dominated by pigment (titanium oxide) products, which generate dissipative losses instead of recyclable scrap. In 2004, scrap (predominantly new scrap) was the source of roughly 54 percent of the titanium metal content of U.S.-produced titanium metal products.
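    A quick balance check confirms that the quoted distribution sums back to the reported 45,000 t apparent consumption:

    ```python
    # U.S. titanium metal flows, 2004 (metric tons of contained titanium),
    # as itemized in the report summary above.
    flows_t = {
        "recovered as new scrap": 25_000,
        "metal and alloy products to U.S. products reservoir": 9_000,
        "consumed by steelmaking and other industries": 7_000,
        "exported in unwrought and wrought products": 4_000,
    }
    apparent_consumption_t = sum(flows_t.values())  # 45,000 t, as reported
    ```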

  13. A Hydrostratigraphic System for Modeling Groundwater Flow and Radionuclide Migration at the Corrective Action Unit Scale, Nevada Test Site and Surrounding Areas, Clark, Lincoln, and Nye Counties, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Prothro, Lance; Drellack Jr., Sigmund; Mercadante, Jennifer

    2009-01-31

    Underground Test Area (UGTA) corrective action unit (CAU) groundwater flow and contaminant transport models of the Nevada Test Site (NTS) and vicinity are built upon hydrostratigraphic framework models (HFMs) that utilize the hydrostratigraphic unit (HSU) as the fundamental modeling component. The delineation and three-dimensional (3-D) modeling of HSUs within the highly complex geologic terrain that is the NTS requires a hydrostratigraphic system that is internally consistent, yet flexible enough to account for overlapping model areas, varied geologic terrain, and the development of multiple alternative HFMs. The UGTA CAU-scale hydrostratigraphic system builds on more than 50 years of geologic and hydrologic work in the NTS region. It includes 76 HSUs developed from nearly 300 stratigraphic units that span more than 570 million years of geologic time, and includes rock units as diverse as marine carbonate and siliciclastic rocks, granitic intrusives, rhyolitic lavas and ash-flow tuffs, and alluvial valley-fill deposits. The UGTA CAU-scale hydrostratigraphic system uses a geology-based approach and a two-level classification scheme. The first, or lowest, level of the hydrostratigraphic system is the hydrogeologic unit (HGU). Rocks in a model area are first classified as one of ten HGUs based on the rock’s ability to transmit groundwater (i.e., the nature of its porosity and permeability), which at the NTS is mainly a function of the rock’s primary lithology, type and degree of postdepositional alteration, and propensity to fracture. The second, or highest, level within the UGTA CAU-scale hydrostratigraphic system is the HSU, which is the fundamental mapping/modeling unit within UGTA CAU-scale HFMs. HSUs are 3-D bodies that are represented in the finite element mesh for the UGTA groundwater modeling process. HSUs are defined systematically by stratigraphically organizing HGUs of similar character into larger HSU designations. The careful integration of

  14. The safety and regulatory process for low calorie sweeteners in the United States.

    Science.gov (United States)

    Roberts, Ashley

    2016-10-01

    Low calorie sweeteners are some of the most thoroughly tested and evaluated of all food additives. Products, including aspartame and saccharin, have undergone several rounds of risk assessment by the United States Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA) in relation to a number of potential safety concerns, including carcinogenicity and, more recently, effects on body weight gain, glycemic control and the gut microbiome. The majority of the modern day sweeteners (acesulfame K, advantame, aspartame, neotame and sucralose) have been approved in the United States through the food additive process, whereas the most recent sweetener approvals, for steviol glycosides and lo han guo, have occurred through the Generally Recognized as Safe (GRAS) system, based on scientific procedures. While the regulatory process and review time for these two types of sweetener evaluations by the FDA differ, the same level of scientific evidence is required to support safety, so as to ensure a reasonable certainty of no harm.

  15. Fast crustal deformation computing method for multiple computations accelerated by a graphics processing unit cluster

    Science.gov (United States)

    Yamaguchi, Takuma; Ichimura, Tsuyoshi; Yagi, Yuji; Agata, Ryoichiro; Hori, Takane; Hori, Muneo

    2017-08-01

    As high-resolution observational data become more common, the demand for numerical simulations of crustal deformation using 3-D high-fidelity modelling is increasing. To increase the efficiency of performing numerical simulations with high computation costs, we developed a fast solver using heterogeneous computing, with graphics processing units (GPUs) and central processing units, and then used the solver in crustal deformation computations. The solver was based on an iterative solver and was devised so that a large proportion of the computation was calculated more quickly using GPUs. To confirm the utility of the proposed solver, we demonstrated a numerical simulation of the coseismic slip distribution estimation, which requires 360,000 crustal deformation computations with 82,196,106 degrees of freedom.
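    The abstract specifies only that the solver is iterative with the heavy kernels offloaded to GPUs; a conjugate-gradient sketch makes that structure concrete (NumPy stands in for the GPU here, and the tiny test matrix is purely illustrative of the sparse stiffness systems involved):

    ```python
    import numpy as np

    def cg(A, b, tol=1e-10, max_iter=1000):
        """Plain conjugate-gradient solver for SPD systems. In a
        heterogeneous scheme like the one described, the dominant cost --
        the matrix-vector product A @ p -- is what gets offloaded to GPUs."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p                    # the GPU-offloadable kernel
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # Small SPD system standing in for a crustal-deformation stiffness matrix:
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = cg(A, b)
    ```

    With 82 million degrees of freedom repeated 360,000 times, shaving the per-solve matvec cost is exactly where the reported speedup comes from.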

  16. Using Graphics Processing Units to solve the classical N-body problem in physics and astrophysics

    CERN Document Server

    Spera, Mario

    2014-01-01

    Graphics Processing Units (GPUs) can speed up the numerical solution of various problems in astrophysics including the dynamical evolution of stellar systems; the performance gain can be more than a factor 100 compared to using a Central Processing Unit only. In this work I describe some strategies to speed up the classical N-body problem using GPUs. I show some features of the N-body code HiGPUs as template code. In this context, I also give some hints on the parallel implementation of a regularization method and I introduce the code HiGPUs-R. Although the main application of this work concerns astrophysics, some of the presented techniques are of general validity and can be applied to other branches of physics such as electrodynamics and QCD.
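    The core of a direct-summation N-body code such as HiGPUs is the O(N²) pairwise-force kernel, which is exactly the part that maps well onto GPUs. A NumPy sketch of that kernel (softening length and units are illustrative; this is not the HiGPUs implementation):

    ```python
    import numpy as np

    def accelerations(pos, mass, eps=1e-3, G=1.0):
        """Direct-summation N-body accelerations: for each body i,
        a_i = G * sum_j m_j * (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2).
        eps is a softening length that regularizes close encounters."""
        d = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]  # pairwise separations
        r2 = (d ** 2).sum(axis=-1) + eps ** 2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)                      # no self-interaction
        return G * (d * (mass[np.newaxis, :, None]
                         * inv_r3[:, :, None])).sum(axis=1)
    ```

    Every (i, j) pair is independent, which is why this kernel parallelizes so naturally across GPU threads.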

  17. Multi-unit Integration in Microfluidic Processes: Current Status and Future Horizons

    Directory of Open Access Journals (Sweden)

    Pratap R. Patnaik

    2011-07-01

    Full Text Available Microfluidic processes, mainly for biological and chemical applications, have expanded rapidly in recent years. While the initial focus was on single units, principally microreactors, technological and economic considerations have caused a shift to integrated microchips in which a number of microdevices function coherently. These integrated devices have many advantages over conventional macro-scale processes. However, the small scale of operation, complexities in the underlying physics and chemistry, and differences in the time constants of the participating units, in the interactions among them and in the outputs of interest make it difficult to design and optimize integrated microprocesses. These aspects are discussed here, current research and applications are reviewed, and possible future directions are considered.

  18. Advanced Investigation and Comparative Study of Graphics Processing Unit-queries Countered

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2014-10-01

    Full Text Available GPU, the Graphics Processing Unit, is the buzzword ruling the market these days. What it is and how it has gained such importance are the questions this research work sets out to answer. The study has been constructed with full attention paid to answering the following questions. What is a GPU? How is it different from a CPU? How good/bad is it computationally when compared to a CPU? Can the GPU replace the CPU, or is that a daydream? How significant is the arrival of the APU (Accelerated Processing Unit) in the market? What tools are needed to make a GPU work? What are the improvement/focus areas for the GPU to stand in the market? All the above questions are discussed and answered in this study with relevant explanations.

  19. Energy flows, material cycles and global development. A process engineering approach to the Earth system

    Energy Technology Data Exchange (ETDEWEB)

    Schaub, Georg [Karlsruher Institut fuer Technologie, Karlsruhe (Germany). Engler-Bunte-Institut; Turek, Thomas [TU Clausthal, Clausthal-Zellerfeld (Germany). Inst. fuer Chemische Verfahrenstechnik

    2011-07-01

    The book deals with the global flows of energy and materials, and changes caused by human activities. Based on these facts, the limitations of anthropogenic energy and material flows and the resulting consequences for the development of human societies are discussed. Different scenarios for lifestyle patterns are correlated with the world's future development of energy supply and climate. The book provides a process engineering approach to the Earth system and global development. It requires basic understanding of mathematics, physics, chemistry and biology, and provides an insight into the complex matter for readers ranging from undergraduate students to experts. (orig.)

  20. Whiteness process of tile ceramics: using a synthetic flow as a modifier agent of color firing

    Science.gov (United States)

    dos Santos, G. R.; Pereira, M. C.; Olzon-Dionysio, M.; de Souza, S. D.; Morelli, M. R.

    2014-01-01

    Synthetic flow is proposed as a modifier agent of firing color in the tile ceramic mass during the sinterization process, turning the red firing color white. 57Fe Mössbauer spectroscopy was therefore used to understand how the iron in the mass interacts in the firing-color mechanism of this system. The results suggest that the change in firing color can be attributed to two main factors: (i) dilution of the hematite content in the sample because of the use of the synthetic flow, and (ii) conversion of part of the hematite into other uncolored crystal structures, which makes the final firing color lighter.