WorldWideScience

Sample records for unit process models

  1. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip;

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent...

  2. COST ESTIMATION MODELS FOR DRINKING WATER TREATMENT UNIT PROCESSES

    Science.gov (United States)

    Cost models for unit processes typically utilized in a conventional water treatment plant and in package treatment plant technology are compiled in this paper. The cost curves are represented as a function of specified design parameters and are categorized into four major catego...

  3. Point process models for household distributions within small areal units

    Directory of Open Access Journals (Sweden)

    Zack W. Almquist

    2012-06-01

    Full Text Available Spatio-demographic data sets are increasingly available worldwide, permitting ever more realistic modeling and analysis of social processes ranging from mobility to disease transmission. The information provided by these data sets is typically aggregated by areal unit, for reasons of both privacy and administrative cost. Unfortunately, such aggregation does not permit fine-grained assessment of geography at the level of individual households. In this paper, we propose to partially address this problem via the development of point process models that can be used to effectively simulate the location of individual households within small areal units.
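The simplest such point-process baseline, scattering a known aggregate count of households uniformly over an areal unit, can be sketched in a few lines (an illustrative numpy sketch, not the authors' model, which conditions on richer covariate information):

```python
import numpy as np

def simulate_households(count, seed=None):
    """Scatter `count` household locations uniformly over a unit-square
    areal unit: the binomial point process that is the simplest baseline
    for disaggregating an area-level household count."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=(count, 2))

pts = simulate_households(250, seed=42)
print(pts.shape)   # (250, 2): one (x, y) pair per household
```

Richer models replace the uniform density with an intensity surface fitted to covariates such as road networks or land use.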

  4. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. Presentation at the ICWL 2008 conference. August, 20, 2008, Jinhua, China.

  5. Simulating Lattice Spin Models on Graphics Processing Units

    CERN Document Server

    Levy, Tal; Rabani, Eran; 10.1021/ct100385b

    2012-01-01

    Lattice spin models are useful for studying critical phenomena and allow the extraction of equilibrium and dynamical properties. Simulations of such systems are usually based on Monte Carlo (MC) techniques, and the main difficulty is often the large computational effort needed when approaching critical points. In this work, it is shown how such simulations can be accelerated with the use of NVIDIA graphics processing units (GPUs) using the CUDA programming architecture. We have developed two different algorithms for lattice spin models, the first useful for equilibrium properties near a second-order phase transition point and the second for dynamical slowing down near a glass transition. The algorithms are based on parallel MC techniques, and speedups from 70- to 150-fold over conventional single-threaded computer codes are obtained using consumer-grade hardware.
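The checkerboard decomposition that underlies such parallel MC updates can be illustrated on the 2D Ising model (a serial numpy sketch of the idea only; the paper's CUDA kernels are not reproduced here):

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising lattice, updating the two
    checkerboard sublattices in turn. Sites of one colour share no
    neighbours, so every site of that colour can be updated at once,
    which is the independence a GPU kernel exploits with one thread
    per lattice site."""
    ii, jj = np.indices(spins.shape)
    for colour in (0, 1):
        mask = (ii + jj) % 2 == colour
        # Sum of the four nearest neighbours with periodic boundaries.
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
               np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                 # energy cost of flipping each site
        accept = rng.random(spins.shape) < np.exp(-beta * dE)
        spins[mask & accept] *= -1
    return spins

def energy(spins):
    """Total nearest-neighbour energy, E = -sum of s_i * s_j over bonds."""
    return -float(np.sum(spins * (np.roll(spins, 1, 0) + np.roll(spins, 1, 1))))

rng = np.random.default_rng(0)
s = np.where(rng.random((32, 32)) < 0.5, 1, -1)
e0 = energy(s)
for _ in range(200):
    checkerboard_sweep(s, beta=0.6, rng=rng)   # below T_c: the system orders
e1 = energy(s)
print(e0, e1)   # energy drops as ordered domains form
```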

  6. Kinematic modelling of disc galaxies using graphics processing units

    Science.gov (United States)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
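The brute-force nested-grid strategy mentioned above can be sketched as follows (an illustrative serial Python version with invented parameter names; GBKFIT's actual implementation evaluates the grid points on the GPU):

```python
import numpy as np

def nested_grid_fit(loss, lo, hi, levels=5, pts=11):
    """Brute-force model fitting on nested grids: evaluate the loss on a
    coarse grid, then repeatedly zoom a finer grid in around the best
    point. Every grid level is embarrassingly parallel, which is what
    makes the approach attractive on a GPU."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for _ in range(levels):
        axes = [np.linspace(l, h, pts) for l, h in zip(lo, hi)]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, len(lo))
        best = grid[np.argmin([loss(p) for p in grid])]
        span = (hi - lo) / (pts - 1)
        lo, hi = best - span, best + span       # zoom in around the minimum
    return best

# Recover (a, b) of the toy model y = a*x + b from noiseless data.
x = np.linspace(0, 1, 20)
y = 2.0 * x + 0.5
best = nested_grid_fit(lambda p: float(np.sum((p[0] * x + p[1] - y) ** 2)),
                       lo=[0, 0], hi=[5, 5])
print(np.round(best, 2))   # close to [2.0, 0.5]
```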

  7. Parallelizing the Cellular Potts Model on graphics processing units

    Science.gov (United States)

    Tapia, José Juan; D'Souza, Roshan M.

    2011-04-01

    The Cellular Potts Model (CPM) is a lattice-based modeling technique used for simulating cellular structures in computational biology. The computational complexity of the model means that current serial implementations restrict the size of simulation to a level well below biological relevance. Parallelization on computing clusters enables scaling the size of the simulation but marginally addresses computational speed due to the limited memory bandwidth between nodes. In this paper we present new data-parallel algorithms and data structures for simulating the Cellular Potts Model on graphics processing units. Our implementations handle most terms in the Hamiltonian, including cell-cell adhesion constraint, cell volume constraint, cell surface area constraint, and cell haptotaxis. We use fine-level checkerboards with lock mechanisms using atomic operations to enable consistent updates while maintaining a high level of parallelism. A new data-parallel memory allocation algorithm has been developed to handle cell division. Tests show that our implementation enables simulations of >10 cells with lattice sizes of up to 256³ on a single graphics card. Benchmarks show that our implementation runs ˜80× faster than serial implementations, and ˜5× faster than previous parallel implementations on computing clusters consisting of 25 nodes. The wide availability and economy of graphics cards mean that our techniques will enable simulation of realistically sized models at a fraction of the time and cost of previous implementations and are expected to greatly broaden the scope of CPM applications.
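One of the Hamiltonian terms listed above, the cell volume constraint, has the standard CPM form H_vol = λ Σ_c (v_c − V_target)², which a short sketch makes concrete (illustrative Python, not the paper's GPU code):

```python
import numpy as np

def volume_energy(lattice, target_volume, lam=1.0):
    """Volume-constraint term of a CPM Hamiltonian:
        H_vol = lam * sum over cells of (v_c - V_target)^2,
    where v_c is the current lattice-site count of cell c.
    Cell id 0 denotes the medium and carries no volume penalty."""
    ids, counts = np.unique(lattice, return_counts=True)
    cells = ids > 0                       # ignore the medium
    dev = counts[cells] - target_volume
    return lam * float(np.sum(dev * dev))

lat = np.zeros((8, 8), dtype=int)
lat[1:4, 1:4] = 1         # cell 1 occupies 9 sites
lat[5:7, 5:7] = 2         # cell 2 occupies 4 sites
print(volume_energy(lat, target_volume=9))   # (9-9)^2 + (4-9)^2 = 25.0
```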

  8. Air pollution modelling using a graphics processing unit with CUDA

    CERN Document Server

    Molnar, Ferenc; Meszaros, Robert; Lagzi, Istvan; 10.1016/j.cpc.2009.09.008

    2010-01-01

    The Graphics Processing Unit (GPU) is a powerful tool for parallel computing. In the past years the performance and capabilities of GPUs have increased, and the Compute Unified Device Architecture (CUDA) - a parallel computing architecture - has been developed by NVIDIA to utilize this performance in general purpose computations. Here we show for the first time a possible application of GPU for environmental studies serving as a basis for decision-making strategies. A stochastic Lagrangian particle model has been developed on CUDA to estimate the transport and the transformation of the radionuclides from a single point source during an accidental release. Our results show that parallel implementation achieves typical acceleration values on the order of 80-120 times compared to CPU using a single-threaded implementation on a 2.33 GHz desktop computer. Only very small differences have been found between the results obtained from GPU and CPU simulations, which are comparable with the effect of stochastic tran...

  9. Kinematic Modelling of Disc Galaxies using Graphics Processing Units

    CERN Document Server

    Bekiaris, Georgios; Fluke, Christopher J; Abraham, Roberto

    2015-01-01

    With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of galaxies currently underway or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the Graphics Processing Unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and Nested Sampling algorithms, but also a naive brute-force approach based on Nested Grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multi-threaded dual CPU configuration. Our method's accuracy, precision and robustness a...

  10. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2008-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. In F. W. B. Li, J. Zhao, T. K. Shih, R. W. H. Lau, Q. Li & D. McLeod (Eds.), Advances in Web Based Learning - Proceedings of the 7th

  11. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    Science.gov (United States)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The resulting numerical model accuracy was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model, as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the prediction of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) to decrease the simulation time significantly [3, 4]. The numerical scheme implemented on the GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the graphical hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References: [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014) A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources. 71, 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013) 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics. 225, 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014) An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software. 78, 1-15.

  12. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    Science.gov (United States)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration, like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPU) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License. The implementation uses a second-order centred-difference scheme to approximate time derivatives, and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computation on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model sizes, as the fixed kernel overheads and the delays introduced by memory transfers to and from the GPU through the PCI-E bus are amortized. Those tests indicate that the GPU memory size...
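The second-order centred-difference time stepping at the core of such solvers can be illustrated on the 1D scalar wave equation (a serial numpy sketch; the paper's viscoelastic, staggered-grid OpenCL code is far more involved):

```python
import numpy as np

def wave_step(u_prev, u_now, courant2):
    """One time step of the 1D scalar wave equation with second-order
    centred differences in both time and space (fixed ends):
        u[n+1] = 2*u[n] - u[n-1] + C^2 * (u[i+1] - 2*u[i] + u[i-1]),
    where C = c*dt/dx is the Courant number (C <= 1 for stability)."""
    u_next = np.copy(u_now)
    u_next[1:-1] = (2.0 * u_now[1:-1] - u_prev[1:-1]
                    + courant2 * (u_now[2:] - 2.0 * u_now[1:-1] + u_now[:-2]))
    return u_next

n = 201
x = np.linspace(0.0, 1.0, n)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)     # Gaussian pulse, zero initial velocity
u_prev, u_now = u0.copy(), u0.copy()
for _ in range(100):
    u_prev, u_now = u_now, wave_step(u_prev, u_now, courant2=0.25)
print(float(np.max(np.abs(u_now))))      # pulse splits into two bounded halves
```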

  13. Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units

    Science.gov (United States)

    This paper presents the CCHE2D implicit flow model parallelized using CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized implicit Alternating Direction Implicit (ADI) solver using Parallel Cyclic Reduction (PCR) algorithm on GPU is developed and tested. This solve...

  14. THE APPLICATION OF THE “UNIVERSAL” MATHEMATICAL MODELS WITHIN THE ANALYSIS OF MUNICIPAL UNIT PROCESSES

    Directory of Open Access Journals (Sweden)

    Tatyana Nikolayevna Gordeeva

    2017-06-01

    Full Text Available The article describes the possibility of applying the well-known Lotka-Volterra interaction model to the analysis of processes within a municipal unit, understood as an administrative territory operating at the lowest managerial level, i.e. local self-government. Objective: to justify the application of mathematical models in the sociology of management and the sociology of local self-government. This approach is possible thanks to the universality of mathematical models, which can be applied in different areas regardless of their specifics. Methods: the qualitative theory of ordinary differential equations, which studies properties of solutions without solving the equations explicitly. Its basis was formed in the classical works of H. Poincaré and A. M. Lyapunov in the late 19th century. Nowadays these methods are widely used to analyze evolution systems describing dynamic processes in mechanics and physics, as well as in economics, ecology, medicine, and sociology. Results: the article demonstrates the applicability of the developed mathematical models to the analysis of municipal unit processes. Application field for the results: interpreting the model analysis results makes it possible to amplify and extend the scientific basis in the field of the sociology of management and of local self-government.
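For reference, the Lotka-Volterra system invoked in the article has the standard form dx/dt = αx − βxy, dy/dt = δxy − γy, and can be integrated numerically in a few lines (an illustrative sketch with arbitrary parameter values, not those of the article):

```python
import numpy as np

def lotka_volterra(x0, y0, alpha, beta, gamma, delta, dt, steps):
    """Integrate the Lotka-Volterra system
        dx/dt = alpha*x - beta*x*y,   dy/dt = delta*x*y - gamma*y
    with the classical fourth-order Runge-Kutta method."""
    def f(s):
        x, y = s
        return np.array([alpha * x - beta * x * y,
                         delta * x * y - gamma * y])
    s = np.array([x0, y0], dtype=float)
    out = [s.copy()]
    for _ in range(steps):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(s.copy())
    return np.array(out)

traj = lotka_volterra(10.0, 5.0, 1.1, 0.4, 0.4, 0.1, dt=0.01, steps=3000)
print(traj.shape)   # (3001, 2): both populations oscillate and stay positive
```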

  15. The Open Physiology workflow: modeling processes over physiology circuitboards of interoperable tissue units

    Science.gov (United States)

    de Bono, Bernard; Safaei, Soroush; Grenon, Pierre; Nickerson, David P.; Alexander, Samuel; Helvensteijn, Michiel; Kok, Joost N.; Kokash, Natallia; Wu, Alan; Yu, Tommy; Hunter, Peter; Baldock, Richard A.

    2015-01-01

    A key challenge for the physiology modeling community is to enable the searching, objective comparison and, ultimately, re-use of models and associated data that are interoperable in terms of their physiological meaning. In this work, we outline the development of a workflow to modularize the simulation of tissue-level processes in physiology. In particular, we show how, via this approach, we can systematically extract, parcellate and annotate tissue histology data to represent component units of tissue function. These functional units are semantically interoperable, in terms of their physiological meaning. In particular, they are interoperable with respect to [i] each other and with respect to [ii] a circuitboard representation of long-range advective routes of fluid flow over which to model long-range molecular exchange between these units. We exemplify this approach through the combination of models for physiology-based pharmacokinetics and pharmacodynamics to quantitatively depict biological mechanisms across multiple scales. Links to the data, models and software components that constitute this workflow are found at http://open-physiology.org/. PMID:25759670

  16. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    Science.gov (United States)

    Putnam, Williama

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude, and accelerate the process of scientific exploration across all scales of global modeling, including: (1) the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; (2) intermediate-resolution seasonal climate and weather prediction at 50- to 25-km on small clusters of GPUs; and (3) long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.

  17. Full Stokes finite-element modeling of ice sheets using a graphics processing unit

    Science.gov (United States)

    Seddik, H.; Greve, R.

    2016-12-01

    Thermo-mechanical simulation of ice sheets is an important approach to understand and predict their evolution in a changing climate. For that purpose, higher order (e.g., ISSM, BISICLES) and full Stokes (e.g., Elmer/Ice, http://elmerice.elmerfem.org) models are increasingly used to more accurately model the flow of entire ice sheets. In parallel to this development, the rapidly improving performance and capabilities of Graphics Processing Units (GPUs) make it possible to efficiently offload more calculations of complex and computationally demanding problems on those devices. Thus, in order to continue the trend of using full Stokes models with greater resolutions, using GPUs should be considered for the implementation of ice sheet models. We developed the GPU-accelerated ice-sheet model Sainō. Sainō is an Elmer (http://www.csc.fi/english/pages/elmer) derivative implemented in Objective-C which solves the full Stokes equations with the finite element method. It uses the standard OpenCL language (http://www.khronos.org/opencl/) to offload the assembly of the finite element matrix on the GPU. A mesh-coloring scheme is used so that elements with the same color (non-sharing nodes) are assembled in parallel on the GPU without the need for synchronization primitives. The current implementation shows that, for the ISMIP-HOM experiment A, during the matrix assembly in double precision with 8000, 87,500 and 252,000 brick elements, Sainō is respectively 2x, 10x and 14x faster than Elmer/Ice (when both models are run on a single processing unit). In single precision, Sainō is even 3x, 20x and 25x faster than Elmer/Ice. A detailed description of the comparative results between Sainō and Elmer/Ice will be presented, and further perspectives in optimization and the limitations of the current implementation.
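The mesh-colouring idea can be sketched with a simple greedy algorithm (illustrative Python; the record does not describe Sainō's actual colouring scheme):

```python
def greedy_colour(elements):
    """Greedy colouring of mesh elements: two elements that share a node
    receive different colours, so all elements of one colour can be
    assembled in parallel without write conflicts on shared nodes.
    `elements` is a list of node-index tuples; returns one colour per
    element."""
    node_owners = {}                       # node -> elements touching it
    for e, nodes in enumerate(elements):
        for n in nodes:
            node_owners.setdefault(n, []).append(e)
    colours = [None] * len(elements)
    for e, nodes in enumerate(elements):
        taken = {colours[o] for n in nodes for o in node_owners[n]
                 if colours[o] is not None}
        c = 0
        while c in taken:                  # smallest colour not yet taken
            c += 1
        colours[e] = c
    return colours

# Four quadrilaterals in a 2x2 patch; all four share the central node 4.
quads = [(0, 1, 4, 3), (1, 2, 5, 4), (3, 4, 7, 6), (4, 5, 8, 7)]
print(greedy_colour(quads))   # [0, 1, 2, 3]: all must differ
```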

  18. Lunar-Forming Giant Impact Model Utilizing Modern Graphics Processing Units

    Indian Academy of Sciences (India)

    J. C. Eiland; T. C. Salzillo; B. H. Hokr; J. L. Highland; W. D. Mayfield; B. M. Wyatt

    2014-12-01

    Recent giant impact models focus on producing a circumplanetary disk of the proper composition around the Earth and defer to earlier works for the accretion of this disk into the Moon. The discontinuity between creating the circumplanetary disk and accretion of the Moon is unnatural and lacks simplicity. In addition, current giant impact theories are being questioned due to their inability to find conditions that will produce a system with both the proper angular momentum and a resultant Moon that is isotopically similar to the Earth. Here we return to first principles and produce a continuous model that can be used to rapidly search the vast impact parameter space to identify plausible initial conditions. This is accomplished by focusing on the three major components of planetary collisions: constant gravitational attraction, short range repulsion and energy transfer. The structure of this model makes it easily parallelizable and well-suited to harness the power of modern Graphics Processing Units (GPUs). The model makes clear the physically relevant processes, and allows a physical picture to naturally develop. We conclude by demonstrating how the model readily produces stable Earth–Moon systems from a single, continuous simulation. The resultant systems possess many desired characteristics such as an iron-deficient, heterogeneously-mixed Moon and accurate axial tilt of the Earth.

  19. Quantum Chemistry for Solvated Molecules on Graphical Processing Units (GPUs)using Polarizable Continuum Models

    CERN Document Server

    Liu, Fang; Kulik, Heather J; Martínez, Todd J

    2015-01-01

    The conductor-like polarization model (C-PCM) with switching/Gaussian smooth discretization is a widely used implicit solvation model in chemical simulations. However, its application in quantum mechanical calculations of large-scale biomolecular systems can be limited by computational expense of both the gas phase electronic structure and the solvation interaction. We have previously used graphical processing units (GPUs) to accelerate the first of these steps. Here, we extend the use of GPUs to accelerate electronic structure calculations including C-PCM solvation. Implementation on the GPU leads to significant acceleration of the generation of the required integrals for C-PCM. We further propose two strategies to improve the solution of the required linear equations: a dynamic convergence threshold and a randomized block-Jacobi preconditioner. These strategies are not specific to GPUs and are expected to be beneficial for both CPU and GPU implementations. We benchmark the performance of the new implementat...
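The (block-)Jacobi preconditioning idea can be made concrete in its simplest scalar-block form inside a standard preconditioned conjugate-gradient loop (an illustrative numpy sketch, not the paper's randomized block-Jacobi implementation):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients with a Jacobi preconditioner,
    here the 1x1-block special case M^{-1} = 1/diag(A). Applying the
    preconditioner is elementwise, which is why Jacobi-type schemes map
    well onto GPUs."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test system: the 1D discrete Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print(np.allclose(A @ x, b, atol=1e-6))   # True
```

Replacing the diagonal with small dense blocks (and sampling which blocks to form, as the paper proposes) changes only the application of M⁻¹.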

  20. ASAMgpu V1.0 - a moist fully compressible atmospheric model using graphics processing units (GPUs)

    Science.gov (United States)

    Horn, S.

    2012-03-01

    In this work the three-dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence, OpenGL and GLSL are used, so the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the use of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. Results for four test cases are shown in this paper: a rising dry heat bubble, a cold-bubble-induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.
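An explicit three-step Runge-Kutta scheme can be sketched as follows (assuming the Wicker-Skamarock form common in atmospheric solvers; the record does not give ASAMgpu's exact coefficients):

```python
import math

def rk3_step(f, q, dt):
    """One step of the three-stage Runge-Kutta scheme popular in explicit
    atmospheric solvers (Wicker-Skamarock form, assumed here):
        q*    = q + dt/3 * f(q)
        q**   = q + dt/2 * f(q*)
        q_new = q + dt   * f(q**)."""
    q1 = q + (dt / 3.0) * f(q)
    q2 = q + (dt / 2.0) * f(q1)
    return q + dt * f(q2)

# Sanity check on dq/dt = q, q(0) = 1, integrated to t = 1 (exact: e).
q, dt = 1.0, 0.001
for _ in range(1000):
    q = rk3_step(lambda y: y, q, dt)
print(abs(q - math.e))   # third-order accurate: error shrinks as dt^3
```

In the full model each stage also sub-cycles the acoustic terms with a smaller time step (the time-splitting mentioned above).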

  1. Developing a multiscale, multi-resolution agent-based brain tumor model by graphics processing units

    Directory of Open Access Journals (Sweden)

    Zhang Le

    2011-12-01

    Full Text Available Abstract Multiscale agent-based modeling (MABM has been widely used to simulate Glioblastoma Multiforme (GBM and its progression. At the intracellular level, the MABM approach employs a system of ordinary differential equations to describe quantitatively specific intracellular molecular pathways that determine phenotypic switches among cells (e.g. from migration to proliferation and vice versa. At the intercellular level, MABM describes cell-cell interactions by a discrete module. At the tissue level, partial differential equations are employed to model the diffusion of chemoattractants, which are the input factors of the intracellular molecular pathway. Moreover, multiscale analysis makes it possible to explore the molecules that play important roles in determining the cellular phenotypic switches that in turn drive the whole GBM expansion. However, owing to limited computational resources, MABM is currently a theoretical biological model that uses relatively coarse grids to simulate a few cancer cells in a small slice of brain cancer tissue. In order to improve this theoretical model to simulate and predict actual GBM cancer progression in real time, a graphics processing unit (GPU-based parallel computing algorithm was developed and combined with the multi-resolution design to speed up the MABM. The simulated results demonstrated that the GPU-based, multi-resolution and multiscale approach can accelerate the previous MABM around 30-fold with relatively fine grids in a large extracellular matrix. Therefore, the new model has great potential for simulating and predicting real-time GBM progression, if real experimental data are incorporated.

  2. Improved Inventory Models for the United States Coast Guard Requirements Determination Process

    Science.gov (United States)

    1993-10-01

    Presutti and Trepp present two versions of a multi-item, supply-availability, safety-level model. They used the method of Lagrange multipliers to solve for ki ... the safety-level factor for item i. The Presutti and Trepp models address units backordered. To convert their unit models to requisition models, that ... requisition size. RESPONSE-TIME MODELING: In their paper, Presutti and Trepp also gave two versions of a multi-item, response-time, safety-level model

  3. Signal processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Boswell, J.

    1983-01-01

    The architecture of the signal processing unit (SPU) comprises a ROM connected to a program bus, and an input-output bus connected to a data bus and register through a pipeline multiplier-accumulator (PMAC) and a pipeline arithmetic logic unit (PALU), each associated with a random access memory (RAM1, RAM2). The system clock frequency is 20 MHz. The PMAC is further detailed, and has a capability of 20 mega-operations per second. There is also a block diagram for the PALU, showing interconnections between the register block (RBL), bus separator (BS), register (REG), shifter (SH) and combination unit. The first and second RAMs have formats of 64×16 and 32×32 bits, respectively. Further data are a 5-V power supply and 2.5-micron n-channel silicon-gate MOS technology with about 50,000 transistors.
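The multiply-accumulate workload such a pipelined multiplier-accumulator is built for can be illustrated with a direct-form FIR filter (a plain-Python sketch of the arithmetic pattern, not the SPU's hardware):

```python
def fir_filter(x, coeffs):
    """Direct-form FIR filter: each output sample is a chain of
    multiply-accumulate (MAC) operations, y[n] = sum_k c[k] * x[n-k],
    exactly the operation a pipelined MAC unit performs once per cycle."""
    out = []
    for n in range(len(x)):
        acc = 0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * x[n - k]       # one MAC per filter tap
        out.append(acc)
    return out

# Two-tap moving average applied to two unit impulses.
print(fir_filter([1, 0, 0, 0, 1], [0.5, 0.5]))   # [0.5, 0.5, 0.0, 0.0, 0.5]
```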

  4. Parallel flow accumulation algorithms for graphical processing units with application to RUSLE model

    Science.gov (United States)

    Sten, Johan; Lilja, Harri; Hyväluoma, Jari; Westerholm, Jan; Aspnäs, Mats

    2016-04-01

    Digital elevation models (DEMs) are widely used in the modeling of surface hydrology, which typically includes the determination of flow directions and flow accumulation. The use of high-resolution DEMs increases the accuracy of flow accumulation computation, but as a drawback, the computational time may become excessively long if large areas are analyzed. In this paper we investigate the use of graphical processing units (GPUs) for efficient flow accumulation calculations. We present two new parallel flow accumulation algorithms based on dependency transfer and topological sorting and compare them to previously published flow transfer and indegree-based algorithms. We benchmark the GPU implementations against industry standards, ArcGIS and SAGA. With the flow-transfer D8 flow routing model and binary input data, a speed-up of 19 is achieved compared to ArcGIS and 15 compared to SAGA. We show that on GPUs the topological sort-based flow accumulation algorithm leads on average to a speed-up by a factor of 7 over the flow-transfer algorithm. Thus a total speed-up of the order of 100 is achieved. We test the algorithms by applying them to the Revised Universal Soil Loss Equation (RUSLE) erosion model. For this purpose we present parallel versions of the slope, LS factor and RUSLE algorithms and show that the RUSLE erosion results for an area of 12 km × 24 km containing 72 million cells can be calculated in less than a second. Since flow accumulation is needed in many hydrological models, the developed algorithms may find use in many other applications than RUSLE modeling. The algorithm based on topological sorting is particularly promising for dynamic hydrological models where flow accumulations are repeatedly computed over an unchanged DEM.
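Topological-sort-based flow accumulation can be sketched with Kahn's algorithm on a drainage graph (an illustrative serial Python version; the paper's GPU algorithms differ in detail):

```python
from collections import deque

def flow_accumulation(downstream, n):
    """Flow accumulation over a D8-style drainage graph by topological
    sorting (Kahn's algorithm). `downstream[i]` is the cell that cell i
    drains into, or -1 for an outlet; every cell contributes one unit of
    flow, which is pushed downstream in topological order."""
    indeg = [0] * n
    for d in downstream:
        if d >= 0:
            indeg[d] += 1
    acc = [1] * n                        # each cell's own unit of flow
    q = deque(i for i in range(n) if indeg[i] == 0)
    while q:
        i = q.popleft()
        d = downstream[i]
        if d >= 0:
            acc[d] += acc[i]             # pass accumulated flow downstream
            indeg[d] -= 1
            if indeg[d] == 0:
                q.append(d)
    return acc

# A tiny valley: cells 0 and 2 drain into cell 1, which drains into outlet 3.
print(flow_accumulation([1, 3, 1, -1], 4))   # [1, 3, 1, 4]
```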

  5. Predicting Summer Dryness Under a Warmer Climate: Modeling Land Surface Processes in the Midwestern United States

    Science.gov (United States)

    Winter, J. M.; Eltahir, E. A.

    2009-12-01

    One of the most significant impacts of climate change is the potential alteration of local hydrologic cycles over agriculturally productive areas. As the world's food supply continues to be taxed by its burgeoning population, a greater percentage of arable land will need to be utilized and land currently producing food must become more efficient. This study seeks to quantify the effects of climate change on soil moisture in the American Midwest. A series of 24-year numerical experiments were conducted to assess the ability of Regional Climate Model Version 3 coupled to Integrated Biosphere Simulator (RegCM3-IBIS) and Biosphere-Atmosphere Transfer Scheme 1e (RegCM3-BATS1e) to simulate the observed hydroclimatology of the midwestern United States. Model results were evaluated using NASA Surface Radiation Budget, NASA Earth Radiation Budget Experiment, Illinois State Water Survey, Climate Research Unit Time Series 2.1, Global Soil Moisture Data Bank, and regional-scale estimations of evapotranspiration. The response of RegCM3-IBIS and RegCM3-BATS1e to a surrogate climate change scenario, a warming of 3 °C at the boundaries and doubling of CO2, was explored. Precipitation increased significantly during the spring and summer in both RegCM3-IBIS and RegCM3-BATS1e, leading to additional runoff. In contrast, enhancement of evapotranspiration and shortwave radiation were modest. Soil moisture remained relatively unchanged in RegCM3-IBIS, while RegCM3-BATS1e exhibited some fall and winter wetting.

  6. Reforging the Wedding Ring: Exploring a Semi-Artificial Model of Population for the United Kingdom with Gaussian process emulators

    Directory of Open Access Journals (Sweden)

    Viet Dung Cao

    2013-10-01

    Full Text Available Background: We extend the "Wedding Ring" agent-based model of marriage formation to include some empirical information on the natural population change for the United Kingdom together with behavioural explanations that drive the observed nuptiality trends. Objective: We propose a method to explore statistical properties of agent-based demographic models. By coupling rule-based explanations driving the agent-based model with observed data we wish to bring agent-based modelling and demographic analysis closer together. Methods: We present a Semi-Artificial Model of Population, which aims to bridge demographic micro-simulation and agent-based traditions. We then utilise a Gaussian process emulator - a statistical model of the base model - to analyse the impact of selected model parameters on two key model outputs: population size and share of married agents. A sensitivity analysis is attempted, aiming to assess the relative importance of different inputs. Results: The resulting multi-state model of population dynamics has enhanced predictive capacity as compared to the original specification of the Wedding Ring, but there are some trade-offs between the outputs considered. The sensitivity analysis allows identification of the most important parameters in the modelled marriage formation process. Conclusions: The proposed methods allow for generating coherent, multi-level agent-based scenarios aligned with some aspects of empirical demographic reality. Emulators permit a statistical analysis of their properties and help select plausible parameter values. Comments: Given non-linearities in agent-based models such as the Wedding Ring, and the presence of feedback loops, the uncertainty in the model may not be directly computable by using traditional statistical methods. The use of statistical emulators offers a way forward.

  7. Sunlight inactivation of viruses in open-water unit process treatment wetlands: modeling endogenous and exogenous inactivation rates.

    Science.gov (United States)

    Silverman, Andrea I; Nguyen, Mi T; Schilling, Iris E; Wenk, Jannis; Nelson, Kara L

    2015-03-03

    Sunlight inactivation is an important mode of disinfection for viruses in surface waters. In constructed wetlands, for example, open-water cells can be used to promote sunlight disinfection and remove pathogenic viruses from wastewater. To aid in the design of these systems, we developed predictive models of virus attenuation that account for endogenous and exogenous sunlight-mediated inactivation mechanisms. Inactivation rate models were developed for two viruses, MS2 and poliovirus type 3; laboratory- and field-scale experiments were conducted to evaluate the models' ability to estimate inactivation rates in a pilot-scale, open-water, unit-process wetland cell. Endogenous inactivation rates were modeled using either photoaction spectra or total, incident UVB irradiance. Exogenous inactivation rates were modeled on the basis of virus susceptibilities to singlet oxygen. Results from both laboratory- and field-scale experiments showed good agreement between measured and modeled inactivation rates. The modeling approach presented here can be applied to any sunlit surface water and utilizes easily measured inputs such as depth, solar irradiance, water matrix absorbance, singlet oxygen concentration, and the virus-specific apparent second-order rate constant with singlet oxygen (k2). Interestingly, the MS2 k2 in the open-water wetland was found to be significantly larger than k2 observed in other waters in previous studies. Examples of how the model can be used to design and optimize natural treatment systems for virus inactivation are provided.
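
    A minimal sketch of the rate decomposition described above, assuming first-order kinetics with an endogenous term plus an exogenous term proportional to the steady-state singlet-oxygen concentration (all numeric values below are illustrative, not the measured rate constants from the study):

```python
import math

def total_inactivation_rate(k_endo, k2, singlet_oxygen_ss):
    """First-order inactivation rate (1/h): an endogenous term plus an
    exogenous term proportional to the steady-state singlet-oxygen
    concentration (mol/L), mirroring the abstract's decomposition."""
    return k_endo + k2 * singlet_oxygen_ss

def log10_reduction(k_total, hours):
    """log10 reduction of infective viruses after `hours` of sunlight,
    from N/N0 = exp(-k t)."""
    return k_total * hours / math.log(10)

# illustrative values: k_endo in 1/h, k2 converted from 1/(M s) to 1/(M h)
k = total_inactivation_rate(k_endo=0.5, k2=1.0e9 * 3600, singlet_oxygen_ss=1e-13)
print(k)                    # total rate in 1/h
print(log10_reduction(k, 6.0))
```

    In practice the endogenous term would itself be computed from depth-averaged UVB irradiance or an action spectrum, as the abstract notes.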

  8. ASAMgpu V1.0 – a moist fully compressible atmospheric model using graphics processing units (GPUs)

    Directory of Open Access Journals (Sweden)

    S. Horn

    2012-03-01

    Full Text Available In this work the three dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence OpenGL and GLSL are used, so that the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold bubble induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.

  9. ASAMgpu V1.0 – a moist fully compressible atmospheric model using graphics processing units (GPUs)

    Directory of Open Access Journals (Sweden)

    S. Horn

    2011-10-01

    Full Text Available In this work the three dimensional compressible moist atmospheric model ASAMgpu is presented. The calculations are done using graphics processing units (GPUs). To ensure platform independence OpenGL and GLSL are used, so that the model runs on any hardware supporting fragment shaders. The MPICH2 library enables interprocess communication, allowing the usage of more than one GPU through domain decomposition. Time integration is done with an explicit three-step Runge-Kutta scheme with a time-splitting algorithm for the acoustic waves. The results for four test cases are shown in this paper: a rising dry heat bubble, a cold bubble induced density flow, a rising moist heat bubble in a saturated environment, and a DYCOMS-II case.

  10. THOR Particle Processing Unit PPU

    Science.gov (United States)

    Federica Marcucci, Maria; Bruno, Roberto; Consolini, Giuseppe; D'Amicis, Raffaella; De Lauretis, Marcello; De Marco, Rossana; De Michelis, Paola; Francia, Patrizia; Laurenza, Monica; Materassi, Massimo; Vellante, Massimo; Valentini, Francesco

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first space mission ever dedicated to plasma turbulence. On board THOR, data collected by the Turbulent Electron Analyser, the Ion Mass Spectrum analyser, and the Cold Solar Wind ion analyser instruments will be processed by a common digital processor unit, the Particle Processing Unit (PPU). The PPU architecture will be based on state-of-the-art spaceflight processors and will be fully redundant, in order to efficiently and safely handle the data from the numerous sensors of the instrument suite. A common processing unit for the particle instruments is important because it enables efficient management of correlative plasma measurements and facilitates interoperation with other instruments on the spacecraft. Moreover, it permits technical and programmatic synergies, giving the possibility to optimize and save spacecraft resources.

  11. Modeling and analysis of chill and fill processes for the cryogenic storage and transfer engineering development unit tank

    Science.gov (United States)

    Hedayat, A.; Cartagena, W.; Majumdar, A. K.; LeClair, A. C.

    2016-03-01

    NASA's future missions may require long-term storage and transfer of cryogenic propellants. The Engineering Development Unit (EDU), a NASA in-house effort supported by both Marshall Space Flight Center (MSFC) and Glenn Research Center, is a cryogenic fluid management (CFM) test article that primarily serves as a manufacturing pathfinder and a risk reduction task for a future CFM payload. The EDU test article comprises a flight-like tank, internal components, insulation, and attachment struts. The EDU is designed to perform integrated passive thermal control performance testing with liquid hydrogen (LH2) in a test-like vacuum environment. A series of tests, with LH2 as a testing fluid, was conducted at Test Stand 300 at MSFC during the summer of 2014. The objective of this effort was to develop a thermal/fluid model for evaluating the thermodynamic behavior of the EDU tank during the chill and fill processes. The Generalized Fluid System Simulation Program, an MSFC in-house general-purpose computer program for flow network analysis, was utilized to model and simulate the chill and fill portion of the testing. The model contained the LH2 supply source, feed system, EDU tank, and vent system. The test setup, modeling description, and comparison of model predictions with the test data are presented.

  12. Computerized prediction of intensive care unit discharge after cardiac surgery: development and validation of a Gaussian processes model.

    Science.gov (United States)

    Meyfroidt, Geert; Güiza, Fabian; Cottem, Dominiek; De Becker, Wilfried; Van Loon, Kristien; Aerts, Jean-Marie; Berckmans, Daniël; Ramon, Jan; Bruynooghe, Maurice; Van den Berghe, Greet

    2011-10-25

    The intensive care unit (ICU) length of stay (LOS) of patients undergoing cardiac surgery may vary considerably and is often difficult to predict within the first hours after admission. The early clinical evolution of a cardiac surgery patient might be predictive of LOS. The purpose of the present study was to develop a predictive model for ICU discharge after non-emergency cardiac surgery by analyzing the first 4 hours of data in the computerized medical record of these patients with Gaussian processes (GP), a machine learning technique. Non-interventional study. Predictive modeling, with separate development (n = 461) and validation (n = 499) cohorts. GP models were developed to predict the probability of ICU discharge the day after surgery (classification task) and to predict the day of ICU discharge as a discrete variable (regression task). GP predictions were compared with predictions by EuroSCORE, nurses, and physicians. The classification task was evaluated using the aROC for discrimination, and the Brier score, scaled Brier score, and Hosmer-Lemeshow test for calibration. The regression task was evaluated by comparing median actual and predicted discharge, a loss penalty function (LPF) ((actual-predicted)/actual), and root mean squared relative errors (RMSRE). Median (P25-P75) ICU length of stay was 3 (2-5) days. For classification, the GP model showed an aROC of 0.758, which was significantly higher than the predictions by nurses, but not better than EuroSCORE and physicians. The GP had the best calibration, with a Brier score of 0.179 and a Hosmer-Lemeshow p-value of 0.382. For regression, GP had the highest proportion of patients with a correctly predicted day of discharge (40%), which was significantly better than the EuroSCORE and the nurses (p = 0.044 vs. nurses), but equivalent to physicians. GP had the lowest RMSRE (0.408) of all predictive models. A GP model that uses PDMS data of the first 4 hours after admission in the ICU of scheduled adult cardiac surgery
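
    As a rough illustration of the Gaussian process machinery (not the study's actual model, kernel, or PDMS features), a zero-mean GP regression with a squared-exponential kernel can map a single early-admission feature to a predicted length of stay:

```python
import numpy as np

def rbf(A, B, length=1.0, var=1.0):
    """Squared-exponential covariance between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xs, noise=0.1):
    """Posterior mean of a zero-mean GP at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))   # train covariance + noise
    return rbf(Xs, X) @ np.linalg.solve(K, y)

# toy data: one hypothetical 4-hour summary feature -> observed LOS in days
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([2.0, 2.5, 4.0, 6.0])
pred = gp_predict(X, y, np.array([[1.5]]))
print(pred)
```

    A real model would use many features, a tuned kernel, and, for the classification task, a GP classifier with a link function rather than this plain regression.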

  13. Modelling process integration and its management – case of a public housing delivery organization in United Arab Emirates

    Directory of Open Access Journals (Sweden)

    Venkatachalam Senthilkumar

    2017-01-01

    Full Text Available A huge volume of project information is generated during the life cycle of an AEC project. This information is categorized into technical and administrative information and managed through appropriate processes. There are many tools, such as Document Management Systems and Building Information Modeling (BIM), available to manage and integrate the technical information. However, the administrative information and its related processes, such as payment, status, authorization, and approval, are not effectively managed. The current study explores the administrative information management process of a local public housing delivery agency. This agency manages more than 2000 housing projects at any time of the year. The administrative processes are characterized by delivery inconsistencies among the various project participants. Though many process management systems are commercially available, there are limitations on the customization of their modules and systems. Hence there is a need to develop an information management system that integrates and manages these housing project processes effectively. This requires modeling the administrative processes and their interfaces among the various stakeholder processes. This study therefore models the administrative processes and the related information over the life cycle of a project using IDEF0 and IDEF1X modeling. The captured processes and information interfaces are analyzed, and appropriate process integration is suggested to avoid delays in the project delivery processes. Further, the resultant model can be used to manage housing delivery projects effectively.

  14. The Structure of Matter in the Unit of 4E Model and Its Impact on Science Process Skills

    Directory of Open Access Journals (Sweden)

    Sevilay KARAMUSTAFAOĞLU

    2014-07-01

    Full Text Available In this research, guidance material enriched with activities supporting science process skills was developed for teachers using the four-stage (4E) model, and the material was implemented to determine its effectiveness in teaching science process skills. For this purpose, the science process skills test developed by Enger and Yager was applied to 48 students. The items were analyzed with the Iteman program, and 5 items with low reliability were removed, leaving 31 items in the final test. Analysis of the collected data indicated that the students' science process skills test scores differed significantly, in favour of the experimental group. Primary education is the first stage of education, and this sort of research is extremely valuable for teacher candidates, who can receive a better science education with its help.

  15. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi; Bliss, Norman B.; Young, Claudia J.; West, Tristram O.; Ogle, Stephen M.

    2014-04-01

    Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m-2 yr-1 and total NPP in the range of 318–490 Tg C yr-1 for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m-2 yr-1 while the MODIS NPP product estimated the mean NPP was less than 500 g C m-2 yr-1. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. Finally, we suggest that high resolution land cover data with species–specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.
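
    The inventory-based approach, estimating NPP from reported crop yields, can be sketched as follows; the harvest index, dry-matter, carbon-fraction, and root:shoot coefficients below are illustrative crop-specific constants, not values from the paper:

```python
def cropland_npp(grain_yield_g_m2, harvest_index=0.5, dry_frac=0.85,
                 carbon_frac=0.45, root_shoot=0.2):
    """Inventory-style NPP (g C m^-2 yr^-1) from reported grain yield.

    Aboveground biomass = dry grain yield / harvest index; belowground
    biomass is added via a root:shoot ratio.  All coefficients are
    illustrative, not the study's crop-specific parameters.
    """
    grain_dry = grain_yield_g_m2 * dry_frac        # remove water content
    aboveground = grain_dry / harvest_index        # total shoot biomass
    total_biomass = aboveground * (1.0 + root_shoot)
    return total_biomass * carbon_frac             # biomass -> carbon

print(cropland_npp(1000.0))   # g C m^-2 yr^-1 for a 1000 g m^-2 yield
```

    Because every coefficient here is crop-specific, the abstract's point follows directly: without species-level crop maps, the same yield statistics can produce very different regional NPP estimates.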

  16. Comparing cropland net primary production estimates from inventory, a satellite-based model, and a process-based model in the Midwest of the United States

    Science.gov (United States)

    Li, Zhengpeng; Liu, Shuguang; Tan, Zhengxi; Bliss, Norman B.; Young, Claudia J.; West, Tristram O.; Ogle, Stephen M.

    2014-01-01

    Accurately quantifying the spatial and temporal variability of net primary production (NPP) for croplands is essential to understand regional cropland carbon dynamics. We compared three NPP estimates for croplands in the Midwestern United States: inventory-based estimates using crop yield data from the U.S. Department of Agriculture (USDA) National Agricultural Statistics Service (NASS); estimates from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) NPP product; and estimates from the General Ensemble biogeochemical Modeling System (GEMS) process-based model. The three methods estimated mean NPP in the range of 469–687 g C m−2 yr−1 and total NPP in the range of 318–490 Tg C yr−1 for croplands in the Midwest in 2007 and 2008. The NPP estimates from crop yield data and the GEMS model showed the mean NPP for croplands was over 650 g C m−2 yr−1 while the MODIS NPP product estimated the mean NPP was less than 500 g C m−2 yr−1. MODIS NPP also showed very different spatial variability of the cropland NPP from the other two methods. We found these differences were mainly caused by the difference in the land cover data and the crop specific information used in the methods. Our study demonstrated that the detailed mapping of the temporal and spatial change of crop species is critical for estimating the spatial and temporal variability of cropland NPP. We suggest that high resolution land cover data with species–specific crop information should be used in satellite-based and process-based models to improve carbon estimates for croplands.

  17. Spared piriform cortical single-unit odor processing and odor discrimination in the Tg2576 mouse model of Alzheimer's disease.

    Science.gov (United States)

    Xu, Wenjin; Lopez-Guzman, Mirielle; Schoen, Chelsea; Fitzgerald, Shane; Lauer, Stephanie L; Nixon, Ralph A; Levy, Efrat; Wilson, Donald A

    2014-01-01

    Alzheimer's disease is a neurodegenerative disorder that is the most common cause of dementia in the elderly today. One of the earliest reported signs of Alzheimer's disease is olfactory dysfunction, which may manifest in a variety of ways. The present study sought to address this issue by investigating odor coding in the anterior piriform cortex, the primary cortical region involved in higher order olfactory function, and how it relates to performance on olfactory behavioral tasks. An olfactory habituation task was performed on cohorts of transgenic and age-matched wild-type mice at 3, 6 and 12 months of age. These animals were then anesthetized and acute, single-unit electrophysiology was performed in the anterior piriform cortex. In addition, in a separate group of animals, a longitudinal odor discrimination task was conducted from 3-12 months of age. Results showed that while odor habituation was impaired at all ages, Tg2576 performed comparably to age-matched wild-type mice on the olfactory discrimination task. The behavioral data mirrored intact anterior piriform cortex single-unit odor responses and receptive fields in Tg2576, which were comparable to wild-type at all age groups. The present results suggest that odor processing in the olfactory cortex and basic odor discrimination is especially robust in the face of amyloid β precursor protein (AβPP) over-expression and advancing amyloid β (Aβ) pathology. Odor identification deficits known to emerge early in Alzheimer's disease progression, therefore, may reflect impairments in linking the odor percept to associated labels in cortical regions upstream of the primary olfactory pathway, rather than in the basic odor processing itself.

  18. Spared piriform cortical single-unit odor processing and odor discrimination in the Tg2576 mouse model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Wenjin Xu

    Full Text Available Alzheimer's disease is a neurodegenerative disorder that is the most common cause of dementia in the elderly today. One of the earliest reported signs of Alzheimer's disease is olfactory dysfunction, which may manifest in a variety of ways. The present study sought to address this issue by investigating odor coding in the anterior piriform cortex, the primary cortical region involved in higher order olfactory function, and how it relates to performance on olfactory behavioral tasks. An olfactory habituation task was performed on cohorts of transgenic and age-matched wild-type mice at 3, 6 and 12 months of age. These animals were then anesthetized and acute, single-unit electrophysiology was performed in the anterior piriform cortex. In addition, in a separate group of animals, a longitudinal odor discrimination task was conducted from 3-12 months of age. Results showed that while odor habituation was impaired at all ages, Tg2576 performed comparably to age-matched wild-type mice on the olfactory discrimination task. The behavioral data mirrored intact anterior piriform cortex single-unit odor responses and receptive fields in Tg2576, which were comparable to wild-type at all age groups. The present results suggest that odor processing in the olfactory cortex and basic odor discrimination is especially robust in the face of amyloid β precursor protein (AβPP) over-expression and advancing amyloid β (Aβ) pathology. Odor identification deficits known to emerge early in Alzheimer's disease progression, therefore, may reflect impairments in linking the odor percept to associated labels in cortical regions upstream of the primary olfactory pathway, rather than in the basic odor processing itself.

  19. Numerical simulation of gas-dynamic, thermal processes and evaluation of the stress-strain state in the modeling compressor of the gas-distributing unit

    Science.gov (United States)

    Shmakov, A. F.; Modorskii, V. Ya.

    2016-10-01

    This paper presents the results of numerical modeling of the gas-dynamic processes occurring in the flow path, a thermal analysis, and an evaluation of the stress-strain state of a three-stage compressor design for a gas-pumping unit. Physical and mathematical models of the processes were developed. Numerical simulation was carried out in the engineering software ANSYS 13. The problem is solved in a coupled statement, in which the results of the gas-dynamic calculation are transferred as boundary conditions for the evaluation of the thermal and stress-strain state of the three-stage compressor design. The basic parameters affecting the stress-strain state of the housing and the changing gaps of the labyrinth seals in the construction were determined. A method for analyzing the influence of the pumped gas flow on the deformation of the construction was developed.

  20. Temperature of the Central Processing Unit

    Directory of Open Access Journals (Sweden)

    Ivan Lavrov

    2016-10-01

    Full Text Available Heat is inevitably generated in semiconductors during operation. Cooling in a computer, and in its main part, the Central Processing Unit (CPU), is crucial, allowing proper functioning without overheating, malfunction, and damage. In order to estimate the temperature as a function of time, it is important to solve the differential equations describing the heat flow and to understand how it depends on the physical properties of the system. This project aims to answer these questions by considering a simplified model of the CPU + heat sink. A similarity with an electrical circuit and certain methods from electrical circuit analysis are discussed.
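
    The circuit analogy described above reduces the CPU + heat sink to a lumped thermal RC model, C dT/dt = P - (T - T_amb)/R, with steady state T_amb + P·R and time constant R·C. A minimal sketch with illustrative parameter values:

```python
def cpu_temperature(power_w, r_th, c_th, t_amb=25.0, dt=0.1, steps=6000):
    """Lumped RC model of CPU + heat sink: C dT/dt = P - (T - T_amb)/R.

    power_w: dissipated power (W); r_th: thermal resistance (K/W);
    c_th: heat capacity (J/K).  Explicit-Euler integration; by analogy
    with an electrical RC circuit, the steady state is t_amb + P * R
    and the time constant is R * C.
    """
    T = t_amb
    for _ in range(steps):
        T += dt * (power_w - (T - t_amb) / r_th) / c_th
    return T

# e.g. 50 W through 0.5 K/W after 600 s: approaches 25 + 50 * 0.5 = 50 degC
print(cpu_temperature(50.0, 0.5, 20.0))
```

    The explicit-Euler step is stable here because dt is much smaller than the time constant R·C; a stiff solver would be needed only for much larger steps.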

  1. ON DEVELOPING CLEANER ORGANIC UNIT PROCESSES

    Science.gov (United States)

    Organic waste products, potentially harmful to the human health and the environment, are primarily produced in the synthesis stage of manufacturing processes. Many such synthetic unit processes, such as halogenation, oxidation, alkylation, nitration, and sulfonation are common to...

  2. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    Science.gov (United States)

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used calculation methods for computer-generated holograms. This paper proposes a novel fast calculation method for a patch model that uses the point-based method. The method provides a calculation time that is proportional to the number of patches, not to the number of point light sources. This means that the method is suited to quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 or more times faster than the ordinary point-based method.
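
    The ordinary point-based method referred to above amounts to summing, at every hologram pixel, the contribution of a spherical wave from each point light source; this is why its cost scales with the number of sources. A simplified sketch (no reference-wave tilt; geometry and parameter values are illustrative):

```python
import numpy as np

def point_source_hologram(points, nx=64, ny=64, pitch=10e-6, wavelength=633e-9):
    """Point-based CGH: sum cos(k r) over point sources at each pixel.

    points: iterable of (x, y, z) source coordinates in meters, z > 0.
    Returns the real-valued fringe pattern (sum of one cosine per source).
    """
    k = 2 * np.pi / wavelength
    xs = (np.arange(nx) - nx / 2) * pitch      # pixel x coordinates
    ys = (np.arange(ny) - ny / 2) * pitch      # pixel y coordinates
    X, Y = np.meshgrid(xs, ys)
    H = np.zeros((ny, nx))
    for (px, py, pz) in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        H += np.cos(k * r)                     # fringe of one point source
    return H

H = point_source_hologram([(0.0, 0.0, 0.1), (1e-4, 0.0, 0.1)])
```

    The paper's contribution replaces the per-source loop with per-patch computation; the sketch only shows the baseline whose cost grows with the source count.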

  3. Magnetohydrodynamics simulations on graphics processing units

    CERN Document Server

    Wong, Hon-Cheng; Feng, Xueshang; Tang, Zesheng

    2009-01-01

    Magnetohydrodynamics (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive, and Beowulf clusters or even supercomputers are often used to run the codes that implement these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the authors' knowledge, the first implementation to accelerate computation of MHD simulations on GPUs. Numerical tests have been performed to validate the correctness of our GPU MHD code. Performance measurements show that our GPU-based implementation achieves speedups of 2 (1D problem with 2048 grids), 106 (2D problem with 1024^2 grids), and 43 (3D problem with 128^3 grids), respectively.

  4. Graphics Processing Units (GPU) and the Goddard Earth Observing System atmospheric model (GEOS-5): Implementation and Potential Applications

    Science.gov (United States)

    Putnam, William M.

    2011-01-01

    Earth system models like the Goddard Earth Observing System model (GEOS-5) have been pushing the limits of large clusters of multi-core microprocessors, producing breathtaking fidelity in resolving cloud systems at a global scale. GPU computing presents an opportunity for improving the efficiency of these leading-edge models. A GPU implementation of GEOS-5 will facilitate the use of cloud-system-resolving resolutions in data assimilation and weather prediction, at resolutions near 3.5 km, improving our ability to extract detailed information from high-resolution satellite observations and ultimately produce better weather and climate predictions.

  5. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Full Text Available Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. GPU-accelerated applications are found in both scientific and commercial domains. Sorting is considered one of the most important operations in many applications, so its efficient implementation is essential for overall application performance. This paper represents an effort to analyze and evaluate implementations of representative sorting algorithms on graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. The algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
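
    Of the three algorithms, radix sort is the one whose histogram-and-scatter structure maps most directly onto GPU histogram + prefix-sum passes. A serial sketch of the counting-based LSD variant (illustrative code, not the paper's CUDA implementation):

```python
def radix_sort(values, base=256):
    """LSD radix sort for non-negative integers.

    Each pass buckets values by one 8-bit digit (a histogram/scatter
    step) and concatenates the buckets in order; stability across
    passes yields a fully sorted result.
    """
    if not values:
        return []
    out = list(values)
    shift = 0
    while max(values) >> shift:            # one pass per 8-bit digit
        buckets = [[] for _ in range(base)]
        for v in out:                      # histogram/scatter pass
            buckets[(v >> shift) & (base - 1)].append(v)
        out = [v for b in buckets for v in b]
        shift += 8
    return out

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
```

    On a GPU the per-digit bucket sizes are computed as a parallel histogram and the scatter offsets as a parallel prefix sum, which is what makes this algorithm a good fit for CUDA.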

  6. Effect of land cover on atmospheric processes and air quality over the continental United States – a NASA Unified WRF (NU-WRF) model study

    Directory of Open Access Journals (Sweden)

    Z. Tao

    2013-02-01

    Full Text Available The land surface plays a crucial role in regulating water and energy fluxes at the land–atmosphere (L–A) interface and controls many processes and feedbacks in the climate system. Land cover and vegetation type remain a key determinant of soil moisture content, which impacts air temperature, planetary boundary layer (PBL) evolution, and precipitation through soil moisture–evapotranspiration coupling. In turn, this affects atmospheric chemistry and air quality. This paper presents the results of a modeling study of the effect of land cover on some key L–A processes with a focus on air quality. The newly developed NASA Unified Weather Research and Forecast (NU-WRF) modeling system couples NASA's Land Information System (LIS) with the community WRF model and allows users to explore L–A processes and feedbacks. Three commonly used satellite-derived land cover datasets, i.e. from the US Geological Survey (USGS) and the University of Maryland (UMD), which are based on the Advanced Very High Resolution Radiometer (AVHRR), and from the Moderate Resolution Imaging Spectroradiometer (MODIS), bear large differences in agriculture, forest, grassland, and urban spatial distributions in the continental United States, and thus provide an excellent case to investigate how land cover change would impact atmospheric processes and air quality. The weeklong simulations demonstrate noticeable differences in soil moisture/temperature, latent/sensible heat flux, PBL height, wind, NO2/ozone, and PM2.5 air quality. These discrepancies can be traced to the land cover properties, e.g. stomatal resistance, albedo and emissivity, and roughness characteristics. This also implies that rapid urban growth may have complex air quality implications, with reductions in peak ozone but more frequent high ozone events.

  7. Effect of land cover on atmospheric processes and air quality over the continental United States – a NASA Unified WRF (NU-WRF) model study

    Directory of Open Access Journals (Sweden)

    Z. Tao

    2013-07-01

    Full Text Available The land surface plays a crucial role in regulating water and energy fluxes at the land–atmosphere (L–A) interface and controls many processes and feedbacks in the climate system. Land cover and vegetation type remain a key determinant of soil moisture content, which impacts air temperature, planetary boundary layer (PBL) evolution, and precipitation through soil-moisture–evapotranspiration coupling. In turn, this affects atmospheric chemistry and air quality. This paper presents the results of a modeling study of the effect of land cover on some key L–A processes with a focus on air quality. The newly developed NASA Unified Weather Research and Forecast (NU-WRF) modeling system couples NASA's Land Information System (LIS) with the community WRF model and allows users to explore the L–A processes and feedbacks. Three commonly used satellite-derived land cover datasets – i.e., from the US Geological Survey (USGS) and the University of Maryland (UMD), which are based on the Advanced Very High Resolution Radiometer (AVHRR), and from the Moderate Resolution Imaging Spectroradiometer (MODIS) – bear large differences in agriculture, forest, grassland, and urban spatial distributions in the continental United States, and thus provide an excellent case to investigate how land cover change would impact atmospheric processes and air quality. The weeklong simulations demonstrate the noticeable differences in soil moisture/temperature, latent/sensible heat flux, PBL height, wind, NO2/ozone, and PM2.5 air quality. These discrepancies can be traced to the land cover properties, e.g., stomatal resistance, albedo and emissivity, and roughness characteristics. This also implies that rapid urban growth may have complex air quality implications, with reductions in peak ozone but more frequent high ozone events.

  8. Analysis and Optimization of Central Processing Unit Process Parameters

    Science.gov (United States)

    Kaja Bantha Navas, R.; Venkata Chaitana Vignan, Budi; Durganadh, Margani; Rama Krishna, Chunduri

    2017-05-01

    The rapid growth of computing has made it possible to process more data, which increases heat dissipation; the CPU in the system unit must therefore be cooled to stay within its operating temperature. This paper presents a novel approach to the optimization of operating parameters of a Central Processing Unit with a single response, based on the response graph method. The proposed approach consists of a series of steps capable of decreasing the uncertainty caused by engineering judgment in the Taguchi method. Orthogonal array values were taken from an ANSYS report. The method shows good convergence between the experimental and the optimum process parameters.
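The Taguchi analysis referred to above ranks factor settings by a signal-to-noise ratio computed over the orthogonal-array trials. A minimal sketch of the "smaller-the-better" form, appropriate when the response is something like CPU temperature, follows; the response values are purely hypothetical, not taken from the paper:

```python
import math

def sn_smaller_is_better(responses):
    # Taguchi "smaller-the-better" signal-to-noise ratio:
    # SN = -10 * log10(mean(y^2)); a larger SN means a lower,
    # more consistent response.
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

# Hypothetical temperature rises (deg C) for two factor settings:
sn_a = sn_smaller_is_better([12.0, 13.0, 12.5])
sn_b = sn_smaller_is_better([9.0, 9.5, 9.2])
```

The setting with the higher SN ratio (here the second) would be preferred when building the response graph.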

  9. Quantum Central Processing Unit and Quantum Algorithm

    Institute of Scientific and Technical Information of China (English)

    王安民

    2002-01-01

    Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including quantum Fourier transformation, Shor's algorithm and Grover's algorithm, is obtained in a unified way.

  10. Syllables as Processing Units in Handwriting Production

    Science.gov (United States)

    Kandel, Sonia; Alvarez, Carlos J.; Vallee, Nathalie

    2006-01-01

    This research focused on the syllable as a processing unit in handwriting. Participants wrote, in uppercase letters, words that had been visually presented. The interletter intervals provide information on the timing of motor production. In Experiment 1, French participants wrote words that shared the initial letters but had different syllable…

  11. Graphics processing unit-assisted lossless decompression

    Science.gov (United States)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
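Rice coding, the lossless scheme named in the patent abstract, splits each non-negative sample into a unary-coded quotient and a k-bit binary remainder; the GPU would decode many such packets in parallel. The sketch below is illustrative only (the bit layout and parameter handling are assumptions, not the patented format):

```python
def rice_encode(values, k):
    # Encode each non-negative integer as a unary quotient
    # (q ones followed by a zero) plus the k-bit remainder.
    bits = []
    for v in values:
        q, r = divmod(v, 1 << k)
        bits.extend([1] * q + [0])
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode(bits, k, count):
    # Inverse transform: read the unary quotient, then k remainder bits.
    out, pos = [], 0
    for _ in range(count):
        q = 0
        while bits[pos] == 1:
            q += 1
            pos += 1
        pos += 1  # skip the terminating 0
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[pos]
            pos += 1
        out.append((q << k) | r)
    return out
```

Because each compressed packet is independent, a GPU implementation can assign one packet per thread and run the decode loop above in parallel.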

  12. Graphics processing unit-assisted lossless decompression

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.

  13. Using trauma informed care as a nursing model of care in an acute inpatient mental health unit: A practice development process.

    Science.gov (United States)

    Isobel, Sophie; Edwards, Clair

    2017-02-01

    Without agreeing on an explicit approach to care, mental health nurses may resort to problem-focused, task-oriented practice. Defining a model of care is important, but there is also a need to consider the philosophical basis of any model. The use of Trauma Informed Care as a guiding philosophy provides a robust framework from which to review nursing practice. This paper describes a nursing workforce practice development process to implement Trauma Informed Care as an inpatient model of mental health nursing care. Trauma Informed Care is an evidence-based approach to care delivery that is applicable to mental health inpatient units. While implementation strategies differ, there is scope for mental health nurses to take on Trauma Informed Care as a guiding philosophy, a model of care or a practice development project across all of their roles and settings, so as to ensure its considered, relevant and meaningful implementation. The principles of Trauma Informed Care may also offer guidance for managing workforce stress and distress associated with practice change.

  14. Added Value Improvement on Arabica Coffee Wet Process Method Using Model Kemitraan Bermediasi (Motramed) on Unit Pengolahan Hasil at Ngada Residence - NTT

    Directory of Open Access Journals (Sweden)

    Djoko Soemarno

    2009-05-01

    Full Text Available Ngada Residence is the main Arabica coffee producing region in Nusa Tenggara Timur province. Production is scattered over the districts of Bajawa and Golewa, where coffee is grown entirely by smallholder farmers at low quality; as a result, farmers receive low prices and coffee development has been slower than in other coffee regions of Indonesia. On the other hand, Arabica coffee from this region has a potentially special taste that could make it export-quality coffee beans. One way to develop this quality is to implement coffee processing with the wet process method and to support a better marketing system through Model Kemitraan Bermediasi (Motramed). This research ran from June until October 2007 in the two central Arabica coffee districts: in Bajawa district, UPH Fa Masa in Beiwali village, UPH Wonga Wali in Susu village, UPH Papa Taki in Bomari village and UPH Suka Maju in Ubedolumolo village; and in Golewa district, UPH Papa Wiu in Mangulewa village, UPH Meza Mogo in Rakateda II village and UPH Ate Riji in Were I village. The research aimed to determine the added value, cost efficiency, and profit of Arabica coffee processing using the wet process method at the Unit Pengolahan Hasil (UPH) in Ngada Residence. Data were analysed using an added-value approximation, R-C ratio analysis and a one-sample t-test. The results showed that the wet process improved the physical and taste quality of the Arabica coffee: better bean size, higher quality grade, fewer defect beans, lower moisture content, a special taste and very few taste defects. This quality improvement raised the market price, yielded an added value of about Rp 4,390 per kg and increased profit for farmers. Key words: Arabica coffee, wet process, quality, added value, efficiency, revenue.

  15. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    Science.gov (United States)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment under rapid flows passes through several states which are only partially described by previous SPH research. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to predict accurately the global erosion phenomena from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases, using a Newtonian and a non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58x over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
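The Herschel-Bulkley-Papanastasiou law mentioned above can be written as an apparent viscosity with a yield-stress term regularized by an exponential. The sketch below uses the conventional parameter names (yield stress tau_y, consistency k, flow index n, regularization m); the numerical values are illustrative assumptions, not those of the paper:

```python
import math

def hbp_viscosity(gamma_dot, tau_y, k, n, m):
    # Apparent viscosity of a Herschel-Bulkley-Papanastasiou fluid.
    # The (1 - exp(-m * gamma_dot)) factor keeps the yield-stress
    # contribution finite as the shear rate gamma_dot tends to zero.
    assert gamma_dot > 0.0
    return tau_y * (1.0 - math.exp(-m * gamma_dot)) / gamma_dot \
        + k * gamma_dot ** (n - 1.0)

# A yield-stress, shear-thinning mixture (illustrative values):
mu_slow = hbp_viscosity(0.1, tau_y=5.0, k=1.0, n=0.8, m=100.0)
mu_fast = hbp_viscosity(10.0, tau_y=5.0, k=1.0, n=0.8, m=100.0)
```

The apparent viscosity drops sharply once the material yields and shears, which is the behaviour the SPH model needs across the yielding, shear and suspension layers.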

  16. A Scientific Trigger Unit for Space-Based Real-Time Gamma Ray Burst Detection, II - Data Processing Model and Benchmarks

    CERN Document Server

    Provost, Hervé Le; Flouzat, Christophe; Kestener, Pierre; Chaminade, Thomas; Donati, Modeste; Château, Frédéric; Daly, François; Fontignie, Jean

    2014-01-01

    The Scientific Trigger Unit (UTS) is satellite equipment designed to detect Gamma Ray Bursts (GRBs) observed by the onboard 6400-pixel camera ECLAIRs. It is foreseen to equip the low-Earth-orbit French-Chinese satellite SVOM and acts as the GRB trigger unit for the mission. The UTS analyses the onboard camera data in real time and in great detail in order to select the GRBs, to trigger a spacecraft slew re-centering each GRB for the narrow-field-of-view instruments, and to alert the ground telescope network for GRB follow-up observations. A few GRBs per week are expected to be observed by the camera; the UTS targets a trigger efficiency close to 100%, while being selective enough to avoid fake alerts. This is achieved by running the complex scientific algorithms on radiation-tolerant hardware, based on an FPGA data pre-processor and a CPU with a Real-Time Operating System. The UTS is a scientific software, firmware and hardware co-development. A Data Processing Model (DPM) has been developed to fully val...

  17. Numerical Integration with Graphical Processing Unit for QKD Simulation

    Science.gov (United States)

    2014-03-27

    This research investigates using graphical processing unit (GPU) technology to more efficiently simulate existing and proposed Quantum Key Distribution (QKD) systems. Programming with a GPU requires a different approach from programming a conventional central processing unit, here using the Compute Unified Device Architecture (CUDA) application programming interface (API).

  18. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tn, Terry; Toups, Larry; Howe, A. Scott; Smitherman, David

    2011-01-01

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU previously served as a test bed for testing technologies and sub-systems in a terrestrial surface environment in 2010, in the Pressurized Excursion Module (PEM) configuration. Due to the amount of work involved in making the HDU project successful, it has required a team to integrate a variety of contributions from NASA centers and outside collaborators. The size of the team and the number of systems involved with the HDU make integration a complicated process. However, because the HDU shell manufacturing is complete, the team has a head start on FY-11 integration activities and can focus on integrating upgrades to existing systems as well as integrating new additions. To complete the development of the FY-11 HDU from conception to rollout for operations in July 2011, a cohesive integration strategy has been developed to integrate the various systems of the HDU and the payloads. The highlighted HDU work for FY-11 will focus on performing upgrades to the PEM configuration, adding the X-Hab as a second level, adding a new porch providing the astronauts a larger work area outside the HDU for EVA preparations, and adding a Hygiene module. Together these upgrades result in a prototype configuration of the Deep Space Habitat (DSH), an element under evaluation by NASA's Human Exploration Framework Team (HEFT). Scheduled activities include early fit-checks and the utilization of a Habitat avionics test bed prior to installation into the HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing, sub-system interface length estimation and human factors analysis. Decision processes on integration and use of all new subsystems will be defined early in the project to

  19. Modeling multiphase materials processes

    CERN Document Server

    Iguchi, Manabu

    2010-01-01

    ""Modeling Multiphase Materials Processes: Gas-Liquid Systems"" describes the methodology and application of physical and mathematical modeling to multi-phase flow phenomena in materials processing. The book focuses on systems involving gas-liquid interaction, the most prevalent in current metallurgical processes. The performance characteristics of these processes are largely dependent on transport phenomena. This volume covers the inherent characteristics that complicate the modeling of transport phenomena in such systems, including complex multiphase structure, intense turbulence, opacity of

  20. INNOVATION PROCESS MODELLING

    Directory of Open Access Journals (Sweden)

    JANUSZ K. GRABARA

    2011-01-01

    Full Text Available Modelling phenomena in accordance with the structural approach enables one to simplify the observed relations and to present the classification grounds. An example may be a model of organisational structure identifying the logical relations between particular units and presenting the division of authority and work.

  1. Determinants of profitability of smallholder palm oil processing units ...

    African Journals Online (AJOL)

    ... of profitability of smallholder palm oil processing units in Ogun state, Nigeria. ... as well as their geographical spread covering the entire land space of the state. ... The F-ratio value is statistically significant (P<0.01) implying that the model is ...

  2. Product Development Process Modeling

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The use of Concurrent Engineering and other modern methods of product development and maintenance requires that a large number of time-overlapped "processes" be performed by many people. However, successfully describing and optimizing these processes is becoming ever more difficult. The perspective of industrial process theory (the definition of process) and the perspective of process implementation (process transition, accumulation, and inter-operations between processes) are used to survey the method used to build one base (multi-view) process model.

  3. Anisotropic interfacial tension, contact angles, and line tensions: A graphics-processing-unit-based Monte Carlo study of the Ising model

    Science.gov (United States)

    Block, Benjamin J.; Kim, Suam; Virnau, Peter; Binder, Kurt

    2014-12-01

    As a generic example for crystals where the crystal-fluid interface tension depends on the orientation of the interface relative to the crystal lattice axes, the nearest-neighbor Ising model on the simple cubic lattice is studied over a wide temperature range, both above and below the roughening transition temperature. Using a thin-film geometry Lx×Ly×Lz with periodic boundary conditions along the z axis and two free Lx×Ly surfaces at which opposing surface fields ±H1 act, under conditions of partial wetting, a single planar interface inclined under a contact angle θ < 90° is stabilized, from which the interface tension, the contact angle, and the line tension (which depends on the contact angle, and on temperature) are obtained. All these quantities are extracted from suitable thermodynamic integration procedures. In order to keep finite-size effects as well as statistical errors small enough, rather large lattice sizes (of the order of 46 million sites) were found to be necessary, and the availability of very efficient code implementations on graphics processing units was crucial for the feasibility of this study.
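The Monte Carlo machinery behind such a study is the standard single-spin-flip Metropolis algorithm. A serial 2-D toy version (the paper simulates the 3-D model with surface fields, massively parallel on GPUs) conveys the acceptance rule:

```python
import math
import random

def metropolis_ising(L=16, T=1.5, sweeps=200, seed=42):
    # Single-spin-flip Metropolis sampling of the 2-D nearest-neighbor
    # Ising model with periodic boundaries (J = 1, k_B = 1). The study
    # itself treats the 3-D model in a thin-film geometry on GPUs.
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]  # ordered start (all spins up)
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    return sum(map(sum, s)) / (L * L)  # magnetization per spin
```

Below the 2-D critical temperature (about 2.269 in these units) the magnetization per spin stays close to 1, which is the ordered regime where interface geometries like those in the paper can be stabilized.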

  4. Process modeling style

    CERN Document Server

    Long, John

    2014-01-01

    Process Modeling Style focuses on other aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work produ

  5. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models. These ...

  6. Modified Claus process probabilistic model

    Energy Technology Data Exchange (ETDEWEB)

    Larraz Mora, R. [Chemical Engineering Dept., Univ. of La Laguna (Spain)

    2006-03-15

    A model is proposed for the simulation of an industrial Claus unit with a straight-through configuration and two catalytic reactors. Process plant design evaluations based on deterministic calculations do not take into account the uncertainties that are associated with the different input variables. A probabilistic simulation method was applied in the Claus model to obtain an impression of how some of these inaccuracies influence plant performance. (orig.)
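The probabilistic method described amounts to propagating input uncertainty through the deterministic unit model by Monte Carlo sampling. In the sketch below, both the response function and the input distributions are purely hypothetical stand-ins for the real Claus model:

```python
import random
import statistics

def sulfur_recovery(h2s_frac, air_ratio):
    # Hypothetical smooth response standing in for the deterministic
    # Claus unit model: recovery peaks at the stoichiometric air ratio.
    return 0.95 - 2.0 * (air_ratio - 0.5 * h2s_frac) ** 2

def monte_carlo(n=10000, seed=1):
    # Sample uncertain inputs, run the deterministic model on each
    # draw, and summarize the spread of the plant performance.
    rng = random.Random(seed)
    out = [sulfur_recovery(rng.gauss(0.9, 0.02), rng.gauss(0.45, 0.01))
           for _ in range(n)]
    return statistics.mean(out), statistics.stdev(out)
```

The resulting mean and standard deviation give the "impression" the abstract mentions: how much the nominal (deterministic) performance figure can be trusted given input uncertainty.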

  7. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models....... These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety...... to biotechnology applications, food, polymer and human health application areas. The book highlights to important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners....

  8. Graphics Processing Unit Assisted Thermographic Compositing

    Science.gov (United States)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer level results at individual workstations. As data sets grow, the methods to work them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
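The data parallelism described, the same per-pixel computation across a stack of frames with no cross-pixel dependence, maps naturally onto a GPU. As a CPU-side sketch (the normalization scheme here is an illustrative assumption, not the application's actual algorithm):

```python
import numpy as np

def normalize_frames(frames):
    # A data-parallel per-pixel operation typical of thermographic
    # compositing: subtract the first (cold) frame and scale each
    # pixel by its own peak response. Every output pixel depends only
    # on its own time series, so a GPU can process pixels independently.
    cold = frames[0]
    delta = frames - cold
    peak = delta.max(axis=0)
    return np.divide(delta, peak,
                     out=np.zeros_like(delta), where=peak > 0)

# Two 2x2 frames: a cold reference, then a heated frame.
frames = np.array([[[1.0, 1.0], [1.0, 1.0]],
                   [[2.0, 3.0], [1.0, 5.0]]])
norm = normalize_frames(frames)
```

On a GPU the same loop body would run as one thread per pixel; NumPy broadcasting plays that role here.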

  9. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar Saavedra, J.A.; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  10. Relativistic hydrodynamics on graphics processing units

    CERN Document Server

    Sikorski, Jan; Porter-Sobieraj, Joanna; Słodkowski, Marcin; Krzyżanowski, Piotr; Książek, Natalia; Duda, Przemysław

    2016-01-01

    Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by the lack of an efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers an unprecedented increase of the computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using the Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. A third-order Runge-Kutta scheme was used for integration in the t...
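The third-order Runge-Kutta integrator referred to, in its strong-stability-preserving Shu-Osher form commonly paired with WENO reconstruction, advances du/dt = L(u) in three convex forward-Euler stages. A scalar sketch (the semi-discrete right-hand side `rhs` is whatever the spatial scheme produces):

```python
def rk3_step(u, rhs, dt):
    # Third-order strong-stability-preserving Runge-Kutta (Shu-Osher):
    # three forward-Euler stages combined convexly, which preserves the
    # stability properties of the underlying spatial discretization.
    u1 = [a + dt * b for a, b in zip(u, rhs(u))]
    u2 = [0.75 * a + 0.25 * (b + dt * c)
          for a, b, c in zip(u, u1, rhs(u1))]
    return [a / 3.0 + (2.0 / 3.0) * (b + dt * c)
            for a, b, c in zip(u, u2, rhs(u2))]
```

For the linear decay test du/dt = -u, ten steps of dt = 0.1 reproduce exp(-1) to better than third-order accuracy per step, which is a quick way to check the implementation.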

  11. Accelerating the Fourier split operator method via graphics processing units

    CERN Document Server

    Bauke, Heiko

    2010-01-01

    Current generations of graphics processing units have turned into highly parallel devices with general computing capabilities. Thus, graphics processing units may be utilized, for example, to solve time-dependent partial differential equations by the Fourier split operator method. In this contribution, we demonstrate that graphics processing units are capable of calculating fast Fourier transforms much more efficiently than traditional central processing units. Thus, graphics processing units render efficient implementations of the Fourier split operator method possible. Performance gains of more than an order of magnitude, as compared to implementations for traditional central processing units, are reached in the solution of the time-dependent Schrödinger equation and the time-dependent Dirac equation.
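The Fourier split operator method alternates between applying the kinetic term in momentum space (via FFT) and the potential term in real space. A minimal NumPy sketch for the 1-D time-dependent Schrödinger equation, using Strang splitting with hbar = m = 1 (the grid and time step are illustrative choices):

```python
import numpy as np

def split_step(psi, V, dx, dt, steps, hbar=1.0, m=1.0):
    # Strang splitting: half a kinetic step in k-space, a full
    # potential step in real space, then another half kinetic step.
    N = len(psi)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    kin = np.exp(-1j * hbar * k**2 * dt / (4.0 * m))
    pot = np.exp(-1j * V * dt / hbar)
    for _ in range(steps):
        psi = np.fft.ifft(kin * np.fft.fft(psi))
        psi = pot * psi
        psi = np.fft.ifft(kin * np.fft.fft(psi))
    return psi

# Free Gaussian wave packet on a periodic grid:
x = np.linspace(-10.0, 10.0, 256, endpoint=False)
dx = x[1] - x[0]
psi0 = np.exp(-x**2 / 2.0).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)  # normalize
psi1 = split_step(psi0, np.zeros_like(x), dx, dt=0.01, steps=100)
```

Each step applies only unitary factors, so the norm of the wave function is conserved to machine precision; on a GPU the FFTs dominate the cost, which is why the speedups reported in the abstract carry over to the whole method.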

  12. Graphics Processing Units for HEP trigger systems

    Science.gov (United States)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  13. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads is varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
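The per-node computation assigned to each GPU scalar processor is a kernel sum over all particles. A vectorized 1-D NumPy sketch (the paper's case is 2-D with a bivariate normal kernel; bandwidth and grid here are illustrative):

```python
import numpy as np

def gaussian_kde_grid(samples, grid, h):
    # One output value per node point: sum a Gaussian kernel of
    # bandwidth h over all particles. The GPU version assigns one
    # node to each scalar processor; broadcasting plays that role here.
    d = (grid[:, None] - samples[None, :]) / h
    return (np.exp(-0.5 * d**2).sum(axis=1)
            / (len(samples) * h * np.sqrt(2.0 * np.pi)))

samples = np.random.default_rng(0).normal(0.0, 1.0, 500)
grid = np.linspace(-6.0, 6.0, 241)
density = gaussian_kde_grid(samples, grid, h=0.4)
```

Since each node's sum is independent, the work partitions cleanly across threads, which is what makes the large GPU speedups in the abstract possible.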

  14. Graphics Processing Units for HEP trigger systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R. [INFN Sezione di Roma “Tor Vergata”, Via della Ricerca Scientifica 1, 00133 Roma (Italy); Bauce, M. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Biagioni, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Chiozzi, S.; Cotta Ramusino, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Fantechi, R. [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); CERN, Geneve (Switzerland); Fiorini, M. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Giagu, S. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); Gianoli, A. [INFN Sezione di Ferrara, Via Saragat 1, 44122 Ferrara (Italy); University of Ferrara, Via Saragat 1, 44122 Ferrara (Italy); Lamanna, G., E-mail: gianluca.lamanna@cern.ch [INFN Sezione di Pisa, Largo B. Pontecorvo 3, 56127 Pisa (Italy); INFN Laboratori Nazionali di Frascati, Via Enrico Fermi 40, 00044 Frascati (Roma) (Italy); Lonardo, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); Messina, A. [INFN Sezione di Roma “La Sapienza”, P.le A. Moro 2, 00185 Roma (Italy); University of Rome “La Sapienza”, P.lee A.Moro 2, 00185 Roma (Italy); and others

    2016-07-11

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  15. Business Process Modeling: Blueprinting

    OpenAIRE

    Al-Fedaghi, Sabah

    2017-01-01

    This paper presents a flow-based methodology for capturing processes specified in business process modeling. The proposed methodology is demonstrated through re-modeling of an IBM Blueworks case study. While the Blueworks approach offers a well-proven tool in the field, this should not discourage workers from exploring other ways of thinking about effectively capturing processes. The diagrammatic representation presented here demonstrates a viable methodology in this context. It is hoped this...

  16. Energy Efficient Iris Recognition With Graphics Processing Units

    National Research Council Canada - National Science Library

    Rakvic, Ryan; Broussard, Randy; Ngo, Hau

    2016-01-01

    .... In the past few years, however, this growth has slowed for central processing units (CPUs). Instead, there has been a shift to multicore computing, specifically with the general purpose graphic processing units (GPUs...

  17. Business Model Process Configurations

    DEFF Research Database (Denmark)

    Taran, Yariv; Nielsen, Christian; Thomsen, Peter

    2015-01-01

    Purpose – The paper aims: 1) To develop systematically a structural list of various business model process configuration and to group (deductively) these selected configurations in a structured typological categorization list. 2) To facilitate companies in the process of BM innovation......, by developing (inductively) an ontological classification framework, in view of the BM process configurations typology developed. Design/methodology/approach – Given the inconsistencies found in the business model studies (e.g. definitions, configurations, classifications) we adopted the analytical induction...... method of data analysis. Findings - A comprehensive literature review and analysis resulted in a list of business model process configurations systematically organized under five classification groups, namely, revenue model; value proposition; value configuration; target customers, and strategic...

  18. WWTP Process Tank Modelling

    DEFF Research Database (Denmark)

    Laursen, Jesper

    hydrofoil shaped propellers. These two sub-processes deliver the main part of the supplied energy to the activated sludge tank, and for this reason they are important for the mixing conditions in the tank. For other important processes occurring in the activated sludge tank, existing models and measurements...

  19. Biosphere Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    J. Schmitt

    2000-05-25

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consists of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor

  20. Steady-State Process Modelling

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter covers the basic principles of steady-state modelling and simulation using a number of case studies. Two principal approaches are illustrated that develop the unit operation models from first principles as well as through application of standard flowsheet simulators. The approaches illustrate the “equation oriented” approach as well as the “sequential modular” approach to solving complex flowsheets for steady-state applications. The applications include the Williams-Otto plant, the hydrodealkylation (HDA) of toluene, conversion of ethylene to ethanol, and a bio-ethanol process.

  1. Numerical modelling of lighting process in pulverized-coal burner of a boiler unit by the low-temperature plasma jet

    Energy Technology Data Exchange (ETDEWEB)

    Miloshevich, H.; Rychkov, A.D. [Siberian Branch of Russian Academy of Sciences, Novosibirsk (Russian Federation). Inst. of Computational Technologies

    1999-07-01

    The authors numerically modelled the process of aeromixture ignition in a pulverized-coal burner by a central axisymmetric jet of air that is heated in an electric arc plasma generator up to about 5000 K. The aim was to investigate the process of coal particle ignition in the flow and establish the conditions under which independent combustion of the pulverized coal mixture occurs. The results obtained showed the important role of radiation heat transfer in initiating the combustion process of solid fuel particles. 8 refs., 5 figs.

  2. Quantification of terrestrial ecosystem carbon dynamics in the conterminous United States combining a process-based biogeochemical model and MODIS and AmeriFlux data

    Directory of Open Access Journals (Sweden)

    M. Chen

    2011-09-01

    Satellite remote sensing provides continuous temporal and spatial information of terrestrial ecosystems. Using these remote sensing data and eddy flux measurements and biogeochemical models, such as the Terrestrial Ecosystem Model (TEM), should provide a more adequate quantification of carbon dynamics of terrestrial ecosystems. Here we use Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI), Land Surface Water Index (LSWI) and carbon flux data of AmeriFlux to conduct such a study. We first modify the gross primary production (GPP) modeling in TEM by incorporating EVI and LSWI to account for the effects of the changes of canopy photosynthetic capacity, phenology and water stress. Second, we parameterize and verify the new version of TEM with eddy flux data. We then apply the model to the conterminous United States over the period 2000–2005 at a 0.05° × 0.05° spatial resolution. We find that the new version of TEM made improvement over the previous version and generally captured the expected temporal and spatial patterns of regional carbon dynamics. We estimate that regional GPP is between 7.02 and 7.78 Pg C yr−1, net primary production (NPP) ranges from 3.81 to 4.38 Pg C yr−1, and net ecosystem production (NEP) varies within 0.08–0.73 Pg C yr−1 over the period 2000–2005 for the conterminous United States. The uncertainty due to parameterization is 0.34, 0.65 and 0.18 Pg C yr−1 for the regional estimates of GPP, NPP and NEP, respectively. The effects of extreme climate and disturbances such as severe drought in 2002 and destructive Hurricane Katrina in 2005 were captured by the model. Our study provides a new independent and more adequate measure of carbon fluxes for the conterminous United States, which will benefit studies of carbon-climate feedback and facilitate policy-making of carbon management and climate.
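
    The abstract does not reproduce the modified GPP formulation, but the idea of a satellite-driven light-use-efficiency model in which EVI scales absorbed light and LSWI enters a water-stress scalar can be sketched as below. The functional forms, `eps_max`, and the temperature limits are illustrative assumptions, not the paper's parameters.

```python
def temperature_scalar(t_air, t_min=0.0, t_opt=20.0, t_max=40.0):
    # Limitation factor in [0, 1], peaking at t_opt and zero outside the
    # viable range (assumed ramp-to-optimum form).
    if t_air <= t_min or t_air >= t_max:
        return 0.0
    num = (t_air - t_min) * (t_air - t_max)   # negative inside the range
    den = num - (t_air - t_opt) ** 2          # also negative, larger magnitude
    return num / den

def gpp_lue(par, evi, lswi, t_air, eps_max=0.05, lswi_max=0.6):
    # GPP = eps_max * T_scalar * W_scalar * EVI * PAR, where the water-stress
    # scalar W is derived from LSWI (hypothetical parameter values).
    w = (1.0 + lswi) / (1.0 + lswi_max)
    return eps_max * temperature_scalar(t_air) * w * evi * par
```

    At the assumed temperature optimum and maximum LSWI, the scalars reduce to 1 and GPP collapses to eps_max × EVI × PAR, which makes the sketch easy to sanity-check.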

  3. Quantification of Terrestrial Ecosystem Carbon Dynamics in the Conterminous United States Combining a Process-Based Biogeochemical Model and MODIS and AmeriFlux data

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Min; Zhuang, Qianlai; Cook, David R.; Coulter, Richard L.; Pekour, Mikhail S.; Scott, Russell L.; Munger, J. W.; Bible, Ken

    2011-09-21

    Satellite remote sensing provides continuous temporal and spatial information of terrestrial ecosystems. Using these remote sensing data and eddy flux measurements and biogeochemical models, such as the Terrestrial Ecosystem Model (TEM), should provide a more adequate quantification of carbon dynamics of terrestrial ecosystems. Here we use Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI), Land Surface Water Index (LSWI) and carbon flux data of AmeriFlux to conduct such a study. We first modify the gross primary production (GPP) modeling in TEM by incorporating EVI and LSWI to account for the effects of the changes of canopy photosynthetic capacity, phenology and water stress. Second, we parameterize and verify the new version of TEM with eddy flux data. We then apply the model to the conterminous United States over the period 2000-2005 at a 0.05° × 0.05° spatial resolution. We find that the new version of TEM generally captured the expected temporal and spatial patterns of regional carbon dynamics. We estimate that regional GPP is between 7.02 and 7.78 Pg C yr-1, net primary production (NPP) ranges from 3.81 to 4.38 Pg C yr-1, and net ecosystem production (NEP) varies within 0.08-0.73 Pg C yr-1 over the period 2000-2005 for the conterminous United States. The uncertainty due to parameterization is 0.34, 0.65 and 0.18 Pg C yr-1 for the regional estimates of GPP, NPP and NEP, respectively. The effects of extreme climate and disturbances such as severe drought in 2002 and destructive Hurricane Katrina in 2005 were captured by the model. Our study provides a new independent and more adequate measure of carbon fluxes for the conterminous United States, which will benefit studies of carbon-climate feedback and facilitate policy-making of carbon management and climate.

  4. Quantification of terrestrial ecosystem carbon dynamics in the conterminous United States combining a process-based biogeochemical model and MODIS and AmeriFlux data

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Min; Zhuang, Qianlai; Cook, D.; Coulter, Richard L.; Pekour, Mikhail S.; Scott, Russell L.; Munger, J. W.; Bible, Ken

    2011-08-31

    Satellite remote sensing provides continuous temporal and spatial information of terrestrial ecosystems. Using these remote sensing data and eddy flux measurements and biogeochemical models, such as the Terrestrial Ecosystem Model (TEM), should provide a more adequate quantification of carbon dynamics of terrestrial ecosystems. Here we use Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI), Land Surface Water Index (LSWI) and carbon flux data of AmeriFlux to conduct such a study. We first modify the gross primary production (GPP) modeling in TEM by incorporating EVI and LSWI to account for the effects of the changes of canopy photosynthetic capacity, phenology and water stress. Second, we parameterize and verify the new version of TEM with eddy flux data. We then apply the model to the conterminous United States over the period 2000-2005 at a 0.05° × 0.05° spatial resolution. We find that the new version of TEM made improvement over the previous version and generally captured the expected temporal and spatial patterns of regional carbon dynamics. We estimate that regional GPP is between 7.02 and 7.78 Pg C yr-1, net primary production (NPP) ranges from 3.81 to 4.38 Pg C yr-1, and net ecosystem production (NEP) varies within 0.08-0.73 Pg C yr-1 over the period 2000-2005 for the conterminous United States. The uncertainty due to parameterization is 0.34, 0.65 and 0.18 Pg C yr-1 for the regional estimates of GPP, NPP and NEP, respectively. The effects of extreme climate and disturbances such as severe drought in 2002 and destructive Hurricane Katrina in 2005 were captured by the model. Our study provides a new independent and more adequate measure of carbon fluxes for the conterminous United States, which will benefit studies of carbon-climate feedback and facilitate policy-making of carbon management and climate.

  5. Foam process models.

    Energy Technology Data Exchange (ETDEWEB)

    Moffat, Harry K.; Noble, David R.; Baer, Thomas A. (Procter & Gamble Co., West Chester, OH); Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

    2008-09-01

    In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion, as it creates the motion of the foam. This continuum-level model uses a homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.
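
    As a rough illustration of the density-driven self-expansion described above, the sketch below assumes an exponential relaxation of density toward a final foam density, with volume following from mass conservation. `foam_density`, `rho0`, `rho_inf` and `k` are invented placeholders, not the report's empirical model.

```python
import math

def foam_density(t, rho0=1000.0, rho_inf=100.0, k=0.5):
    # Toy time-dependent density (kg/m^3): exponential relaxation from the
    # liquid density rho0 toward the final foam density rho_inf.
    return rho_inf + (rho0 - rho_inf) * math.exp(-k * t)

def foam_volume(mass, t):
    # Self-expansion: the mass is fixed, so volume grows as density drops.
    return mass / foam_density(t)
```

    The point of the sketch is only that prescribing rho(t) is enough to drive the motion: the volume (and hence the free-surface position) follows directly from mass conservation.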

  6. Product- and Process Units in the CRITT Translation Process Research Database

    DEFF Research Database (Denmark)

    Carl, Michael

    The first version of the "Translation Process Research Database" (TPR DB v1.0) was released in August 2012, containing logging data of more than 400 translation and text production sessions. The current version of the TPR DB (v1.4) contains data from more than 940 sessions, which represents more than 300 hours of text production. The database provides the raw logging data, as well as tables of pre-processed product and processing units. The TPR-DB includes various types of simple and composed product and process units that are intended to support the analysis and modelling of human text reception, production, and translation processes. In this talk I describe some of the functions and features of the TPR-DB v1.4, and how they can be deployed in empirical human translation process research.

  7. Refactoring Process Models in Large Process Repositories.

    NARCIS (Netherlands)

    Weber, B.; Reichert, M.U.

    2008-01-01

    With the increasing adoption of process-aware information systems (PAIS), large process model repositories have emerged. Over time respective models have to be re-aligned to the real-world business processes through customization or adaptation. This bears the risk that model redundancies are introduced.

  9. Unified Model of Purification Units in Hydrogen Networks

    Institute of Scientific and Technical Information of China (English)

    吴思东; 王彧斐; 冯霄

    2014-01-01

    Purification processes are widely used in hydrogen networks of refineries to increase hydrogen reuse. In refineries, hydrogen purification techniques include hydrocarbon, hydrogen sulfide and CO removal units. In addi-tion, light hydrocarbon recovery from the hydrogen source streams can also result in hydrogen purification. In order to simplify the superstructure and mathematical model of hydrogen network integration, the models of different pu-rification processes are unified in this paper, including mass balance and the expressions for hydrogen recovery and impurity removal ratios, which are given for all the purification units in refineries. Based on the proposed unified model, a superstructure of hydrogen networks with purification processes is constructed.
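
    A minimal sketch of such a unified purification-unit balance, assuming the unit is characterized only by a hydrogen recovery ratio and an impurity removal ratio (the function and variable names are illustrative, not taken from the paper):

```python
def purification_unit(feed_flow, feed_purity, h2_recovery, impurity_removal):
    # Overall mass balance around a generic purification unit: the feed is
    # split into a product stream and a tail-gas (reject) stream.
    h2_in = feed_flow * feed_purity
    imp_in = feed_flow * (1.0 - feed_purity)
    h2_product = h2_recovery * h2_in                 # hydrogen recovered to product
    imp_product = (1.0 - impurity_removal) * imp_in  # impurity slipping through
    product_flow = h2_product + imp_product
    return {
        "product_flow": product_flow,
        "product_purity": h2_product / product_flow,
        "tail_flow": feed_flow - product_flow,       # balance closes the tail gas
    }
```

    Because any removal technique (PSA, membrane, sulfide or CO removal) can be described by the same two ratios, one model form can stand in for all purification units in a network superstructure.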

  10. Mesh-particle interpolations on graphics processing units and multicore central processing units.

    Science.gov (United States)

    Rossinelli, Diego; Conti, Christian; Koumoutsakos, Petros

    2011-06-13

    Particle-mesh interpolations are fundamental operations for particle-in-cell codes, as implemented in vortex methods, plasma dynamics and electrostatics simulations. In these simulations, the mesh is used to solve the field equations and the gradients of the fields are used in order to advance the particles. The time integration of particle trajectories is performed through an extensive resampling of the flow field at the particle locations. The computational performance of this resampling turns out to be limited by the memory bandwidth of the underlying computer architecture. We investigate how mesh-particle interpolation can be efficiently performed on graphics processing units (GPUs) and multicore central processing units (CPUs), and we present two implementation techniques. The single-precision results for the multicore CPU implementation show an acceleration of 45-70×, depending on system size, and an acceleration of 85-155× for the GPU implementation over an efficient single-threaded C++ implementation. In double precision, we observe a performance improvement of 30-40× for the multicore CPU implementation and 20-45× for the GPU implementation. With respect to the 16-threaded standard C++ implementation, the present CPU technique leads to a performance increase of roughly 2.8-3.7× in single precision and 1.7-2.4× in double precision, whereas the GPU technique leads to an improvement of 9× in single precision and 2.2-2.8× in double precision.
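
    The GPU/CPU kernels themselves are not shown in the abstract; the following plain-Python sketch illustrates the basic 1-D linear ("cloud-in-cell") mesh-to-particle gather that such kernels parallelize (function name and data layout are assumptions):

```python
def m2p_gather(mesh_values, dx, particle_positions):
    # Sample a field stored on uniform mesh nodes (spacing dx) at arbitrary
    # particle locations by linear interpolation between the two nearest nodes.
    n = len(mesh_values)
    out = []
    for x in particle_positions:
        i = min(max(int(x / dx), 0), n - 2)  # index of the left mesh node
        w = x / dx - i                        # fractional offset in the cell
        out.append((1.0 - w) * mesh_values[i] + w * mesh_values[i + 1])
    return out
```

    Each particle reads two (in 3-D, eight) mesh values and does little arithmetic per byte loaded, which is why the abstract attributes the performance limit to memory bandwidth.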

  11. Neural and Cognitive Modeling with Networks of Leaky Integrator Units

    Science.gov (United States)

    Graben, Peter beim; Liebscher, Thomas; Kurths, Jürgen

    After reviewing several physiological findings on oscillations in the electroencephalogram (EEG) and their possible explanations by dynamical modeling, we present neural networks consisting of leaky integrator units as a universal paradigm for neural and cognitive modeling. In contrast to standard recurrent neural networks, leaky integrator units are described by ordinary differential equations living in continuous time. We present an algorithm to train the temporal behavior of leaky integrator networks by generalized back-propagation and discuss their physiological relevance. Eventually, we show how leaky integrator units can be used to build oscillators that may serve as models of brain oscillations and cognitive processes.
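
    As a hedged illustration of the continuous-time dynamics described above, the sketch below integrates the standard leaky integrator ODE with forward Euler; the logistic activation and all parameter values are common choices assumed here, not the authors' trained networks:

```python
import math

def simulate_leaky_network(weights, inputs, tau=10.0, dt=0.1, steps=2000):
    # Forward-Euler integration of tau * du_i/dt = -u_i + sum_j w_ij f(u_j) + I_i,
    # with a logistic activation f. Returns the unit states after `steps` steps.
    f = lambda v: 1.0 / (1.0 + math.exp(-v))
    n = len(inputs)
    u = [0.0] * n
    for _ in range(steps):
        du = [(-u[i] + sum(w * f(uj) for w, uj in zip(weights[i], u)) + inputs[i]) / tau
              for i in range(n)]
        u = [ui + dt * dui for ui, dui in zip(u, du)]
    return u
```

    With no recurrent weights, a unit simply relaxes to its input level; recurrent weights (or delayed inhibition) are what allow such networks to oscillate, as discussed in the abstract.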

  12. Model United Nations at CERN

    CERN Document Server

    2012-01-01

    From 20 to 22 January, 300 young people from international secondary schools in Switzerland, France and Turkey will meet at CERN to debate scientific topics at a Model UN Conference.   Representing some 50 countries, they will form committees and a model General Assembly to discuss the meeting’s chosen topic: “UN – World Science Pole for Progress”.

  13. Proton Testing of Advanced Stellar Compass Digital Processing Unit

    DEFF Research Database (Denmark)

    Thuesen, Gøsta; Denver, Troelz; Jørgensen, Finn E

    1999-01-01

    The Advanced Stellar Compass Digital Processing Unit was radiation tested with 300 MeV protons at Proton Irradiation Facility (PIF), Paul Scherrer Institute, Switzerland.

  14. Non-linear Loudspeaker Unit Modelling

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde; Agerkvist, Finn T.

    2008-01-01

    Simulations of a 6½-inch loudspeaker unit are performed and compared with a displacement measurement. The non-linear loudspeaker model is based on the major nonlinear functions and expanded with time-varying suspension behaviour and flux modulation. The results are presented with FFT plots of three frequencies and different displacement levels. The model errors are discussed and analysed, including a test with a loudspeaker unit where the diaphragm is removed.

  15. Modeling of active beam units with Modelica

    DEFF Research Database (Denmark)

    Maccarini, Alessandro; Hultmark, Göran; Vorre, Anders

    2015-01-01

    This paper proposes an active beam model suitable for building energy simulations with the programming language Modelica. The model encapsulates empirical equations derived for a novel active beam terminal unit that operates with low-temperature heating and high-temperature cooling systems. Measurements from a full-scale experiment are used to compare the thermal behavior of the active beam with the one predicted by simulations. The simulation results show that the model corresponds closely with the actual operation. The model predicts the outlet water temperature of the active beam with a maximum mean absolute error of 0.18 °C. In terms of maximum mean absolute percentage error, simulation results differ by 0.9%. The methodology presented is general enough to be applied for modeling other active beam units.

  16. Line-by-line spectroscopic simulations on graphics processing units

    Science.gov (United States)

    Collange, Sylvain; Daumas, Marc; Defour, David

    2008-01-01

    We report here on software that performs line-by-line spectroscopic simulations on gases. Elaborate models (such as narrow band and correlated-K) are accurate and efficient for bands where various components are not simultaneously and significantly active. Line-by-line is probably the most accurate model in the infrared for blends of gases that contain high proportions of H2O and CO2, as this was the case for our prototype simulation. Our implementation on graphics processing units sustains a speedup close to 330 on computation-intensive tasks and 12 on memory-intensive tasks compared to implementations on one core of high-end processors. This speedup is due to data parallelism, efficient memory access for specific patterns and some dedicated hardware operators only available in graphics processing units. It is obtained leaving most of processor resources available and it would scale linearly with the number of graphics processing units in parallel machines. Line-by-line simulation coupled with simulation of fluid dynamics was long believed to be economically intractable, but our work shows that it could be done with some affordable additional resources compared to what is necessary to perform simulations on fluid dynamics alone. Program summary: Program title: GPU4RE; Catalogue identifier: ADZY_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZY_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 62 776; No. of bytes in distributed program, including test data, etc.: 1 513 247; Distribution format: tar.gz; Programming language: C++; Computer: x86 PC; Operating system: Linux, Microsoft Windows. Compilation requires either gcc/g++ under Linux or Visual C++ 2003/2005 and Cygwin under Windows. It has been tested using gcc 4.1.2 under Ubuntu Linux 7.04 and using Visual C++ under Windows.

  17. Modular process modeling for OPC

    Science.gov (United States)

    Keck, M. C.; Bodendorf, C.; Schmidtling, T.; Schlief, R.; Wildfeuer, R.; Zumpe, S.; Niehoff, M.

    2007-03-01

    Modular OPC modeling, describing mask, optics, resist and etch processes separately, is an approach to keep efforts for OPC manageable. By exchanging single modules of a modular OPC model, a fast response to process changes during process development is possible. At the same time, efforts can be reduced, since only single modular process steps have to be re-characterized as input for OPC modeling as the process is adjusted and optimized. Commercially available OPC tools for full chip processing typically make use of semi-empirical models. The goal of our work is to investigate to what extent these OPC tools can be applied for modeling of single process steps as separate modules. For an advanced gate-level process we analyze the modeling accuracy over different process conditions (focus and dose) when combining the models for the single process steps - optics, resist and etch - of differing processes into a model describing the total process.

  18. Optimized Technology for Residuum Processing in the ARGG Unit

    Institute of Scientific and Technical Information of China (English)

    Pan Luoqi; Yuan Hongxing; Nie Baiqiu

    2006-01-01

    The influence of feedstock properties on operation of the FCC unit was studied to identify the cause of the deteriorated product distribution associated with the increasingly heavy feedstock of the ARGG unit. In order to maximize the economic benefits of the ARGG unit, a string of measures, including modification of the catalyst formulation, retention of high catalyst activity, application of mixed termination agents to control the reaction temperature, once-through operation, and optimization of the catalyst regeneration technique, were adopted to adapt the ARGG unit to processing of the heavy feedstock, whose carbon residue averaged 7%. The heavy oil processing technology has brought about apparent economic benefits.

  19. On the hazard rate process for imperfectly monitored multi-unit systems

    Energy Technology Data Exchange (ETDEWEB)

    Barros, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)]. E-mail: anne.barros@utt.fr; Berenguer, C. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France); Grall, A. [Institut des Sciences et Techonologies de l' Information de Troyes (ISTIT-CNRS), Equipe de Modelisation et Surete des Systemes, Universite de Technologie de Troyes (UTT), 12, rue Marie Curie, BP2060, 10010 Troyes cedex (France)

    2005-12-01

    The aim of this paper is to present a stochastic model to characterize the failure distribution of multi-unit systems when the current unit states are imperfectly monitored. The definition of the hazard rate process that exists under perfect monitoring is extended to the realistic case where unit failure times are not always detected (non-detection events). The observed hazard rate process so defined gives a better representation of the system behavior than the classical failure rate calculated without any information on the unit states, and than the hazard rate process based on perfect monitoring information. The quality of this representation is, however, conditioned by the monotonicity property of the process. This problem is discussed and illustrated on a practical example (two parallel units). The results obtained motivate the use of the observed hazard rate process to characterize the stochastic behavior of multi-unit systems and to optimize, for example, preventive maintenance policies.

  20. Professional Competence of the Head of External Relations Unit and its Development in the Study Process

    OpenAIRE

    Turuševa, Larisa

    2010-01-01

    Dissertation Annotation. Larisa Turuševa’s promotion paper “Professional Competence of the Head of External Relations Unit and its Development in the Study Process” presents completed research on the development of the professional competence of heads of external relations units and on the conditions for study programme development. A model of the professional competence of the head of an external relations unit is worked out, and its indicators and levels are described. A study process model for th...

  1. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2016-07-08

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  2. Accelerating sparse linear algebra using graphics processing units

    Science.gov (United States)

    Spagnoli, Kyle E.; Humphrey, John R.; Price, Daniel K.; Kelmelis, Eric J.

    2011-06-01

    The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math processor capable of over 1 TFLOPS of peak computational throughput at a cost similar to a high-end CPU, with an excellent FLOPS-to-watt ratio. High-level sparse linear algebra operations are computationally intense, often requiring large amounts of parallel operations, and would seem a natural fit for the processing power of the GPU. Our work is on a GPU-accelerated implementation of sparse linear algebra routines. We present results from both direct and iterative sparse system solvers. The GPU execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally. For example, the CPU is responsible for the graph-theory portion of the direct solvers while the GPU simultaneously performs the low-level linear algebra routines.
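
    The paper's CUDA kernels are not reproduced in the abstract; as a reference point, the sketch below shows the sequential sparse matrix-vector product over the standard compressed sparse row (CSR) layout, the low-level kernel that both the CPU and GPU paths of such solvers ultimately parallelize (a textbook formulation, not the authors' code):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    # y = A @ x with A stored in CSR form: row r owns the nonzeros
    # values[row_ptr[r]:row_ptr[r+1]], with column indices in col_idx.
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y
```

    On a GPU each row (or group of rows) becomes an independent thread's work, which is exactly the hundreds-to-thousands of simultaneous operations the abstract says CUDA demands.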

  3. Parallelization of heterogeneous reactor calculations on a graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Malofeev, V. M., E-mail: vm-malofeev@mail.ru; Pal’shin, V. A. [National Research Center Kurchatov Institute (Russian Federation)

    2016-12-15

    Parallelization is applied to the neutron calculations performed by the heterogeneous method on a graphics processing unit. The parallel algorithm of the modified TREC code is described. The efficiency of the parallel algorithm is evaluated.

  4. Diffusion tensor fiber tracking on graphics processing units.

    Science.gov (United States)

    Mittmann, Adiel; Comunello, Eros; von Wangenheim, Aldo

    2008-10-01

    Diffusion tensor magnetic resonance imaging has been successfully applied to the process of fiber tracking, which determines the location of fiber bundles within the human brain. This process, however, can be quite lengthy when run on a regular workstation. We present a means of executing this process by making use of the graphics processing units of computers' video cards, which provide a low-cost parallel execution environment that algorithms like fiber tracking can benefit from. With this method we have achieved performance gains varying from 14 to 40 times on common computers. Because of accuracy issues inherent to current graphics processing units, we define a variation index in order to assess how close the results obtained with our method are to those generated by programs running on the central processing units of computers. This index shows that results produced by our method are acceptable when compared to those of traditional programs.

  5. On Activity modelling in process modeling

    Directory of Open Access Journals (Sweden)

    Dorel Aiordachioaie

    2001-12-01

    The paper examines the dynamic dimension of the meta-models of the process modelling process: time. Some principles are considered and discussed as main dimensions of any modelling activity: the compatibility of the substances, the equipresence of phenomena, and the solvability of the model. The activity models are considered and represented at the meta-level.

  6. Centralization of Intensive Care Units: Process Reengineering in a Hospital

    Directory of Open Access Journals (Sweden)

    Arun Kumar

    2010-03-01

    Centralization of intensive care units (ICUs) is a concept that has been around for several decades, and the OECD countries have led the way in adopting it in their operations. Singapore Hospital was built in 1981, before the concept of centralization of ICUs took off. The hospital's ICUs were never centralized and were spread out across eight different blocks according to the specialization they were associated with. Recognizing the concept of centralization and its benefits, the hospital acknowledges the importance of having a centralized ICU to better handle major disasters. Using simulation models, this paper studies the feasibility of centralizing the ICUs in Singapore Hospital, subject to space constraints. The results will prove helpful to those who consider reengineering the intensive care process in hospitals.

  7. Business Process Compliance through Reusable Units of Compliant Processes

    NARCIS (Netherlands)

    Shumm, D.; Turetken, O.; Kokash, N.; Elgammal, A.; Leymann, F.; Heuvel, J. van den

    2010-01-01

    Compliance management is essential for ensuring that organizational business processes and supporting information systems are in accordance with a set of prescribed requirements originating from laws, regulations, and various legislative or technical documents such as Sarbanes-Oxley Act or ISO 17799

  8. Embankment erosion process model

    Science.gov (United States)

    The USDA Agricultural Research Service (ARS) Hydraulic Engineering Research Unit (HERU) laboratory conducts research in support of the USDA NRCS Small Watershed Program by addressing dam safety issues. This presentation describes research on improving methods for predicting earthen embankment erosion...

  9. Auditory processing models

    DEFF Research Database (Denmark)

    Dau, Torsten

    2008-01-01

    The Handbook of Signal Processing in Acoustics will compile the techniques and applications of signal processing as they are used in the many varied areas of Acoustics. The Handbook will emphasize the interdisciplinary nature of signal processing in acoustics. Each Section of the Handbook will pr...

  10. Qualidade microbiológica na obtenção de farinha e fécula de mandioca em unidades tradicionais e modelo Microbiological quality in the flour and starch cassava processing in traditional and model unit

    Directory of Open Access Journals (Sweden)

    Raquel Resende Dósea

    2010-02-01

    Full Text Available The objective of this research was to evaluate the microbiological quality of flour and starch during the different stages of cassava processing, in traditional units and in a model unit. Total and thermotolerant coliform counts, Bacillus cereus, Salmonella spp., bacteria, yeasts and fungi were determined in the flours and starches. B. cereus and Salmonella spp. were not detected in the cassava flour and starch produced in the units studied. The microbial incidence decreased along the flour processing stages and was lower in the model unit. After the roasting step, the microbial load complied with the values established by Brazilian legislation, so this step can be regarded as a critical one in flour production. In starch production, the microbial load in the traditional units was higher than in the model unit, and increasing the number of extraction steps promoted the growth of microorganisms; the use of only four extractions is therefore recommended.

  11. Efficient magnetohydrodynamic simulations on graphics processing units with CUDA

    Science.gov (United States)

    Wong, Hon-Cheng; Wong, Un-Hong; Feng, Xueshang; Tang, Zesheng

    2011-10-01

    Magnetohydrodynamic (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implement these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the best of the authors' knowledge, the first implementation of MHD simulations entirely on GPUs with CUDA, named GPU-MHD, to accelerate the simulation process. GPU-MHD supports both single and double precision computations. A series of numerical tests have been performed to validate the correctness of our code. An accuracy evaluation comparing single and double precision computation results is also given. Performance measurements of both single and double precision are conducted on both the NVIDIA GeForce GTX 295 (GT200 architecture) and GTX 480 (Fermi architecture) graphics cards. These measurements show that our GPU-based implementation achieves between one and two orders of magnitude of improvement, depending on the graphics card used, the problem size, and the precision, when compared to the original serial CPU MHD implementation. In addition, we extend GPU-MHD to support the visualization of the simulation results, and thus the whole MHD simulation and visualization process can be performed entirely on GPUs.
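A full ideal-MHD solver is far beyond a short sketch, but the data-parallel stencil pattern that makes codes like GPU-MHD map well to CUDA can be shown on the simplest hyperbolic equation. The toy below (scalar advection with a Lax-Friedrichs update, not the paper's scheme) updates every cell from only its two neighbours, exactly the work a CUDA kernel would assign to one thread per cell:

```python
import math

def lax_friedrichs_step(u, c, dx, dt):
    """One Lax-Friedrichs update of du/dt + c*du/dx = 0 on a periodic grid.
    Each new cell value depends only on the two old neighbour values, so all
    cells can be updated independently; this is the data-parallel pattern
    a GPU kernel exploits with one thread per cell."""
    n = len(u)
    nu = c * dt / dx  # Courant number; the scheme is stable for |nu| <= 1
    return [0.5 * (u[(i + 1) % n] + u[(i - 1) % n])
            - 0.5 * nu * (u[(i + 1) % n] - u[(i - 1) % n])
            for i in range(n)]

n, c = 200, 1.0
dx = 1.0 / n
dt = 0.8 * dx / c
u = [math.exp(-100.0 * (i * dx - 0.5) ** 2) for i in range(n)]  # Gaussian pulse
mass0 = sum(u)
for _ in range(100):
    u = lax_friedrichs_step(u, c, dx, dt)
```

With periodic boundaries the scheme conserves the discrete integral of `u` exactly, and being a convex combination of neighbour values it obeys a discrete maximum principle; both are easy correctness checks for a parallel port.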

  12. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.

  13. Adaptive-optics Optical Coherence Tomography Processing Using a Graphics Processing Unit*

    Science.gov (United States)

    Shafer, Brandon A.; Kriske, Jeffery E.; Kocaoglu, Omer P.; Turner, Timothy L.; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T.

    2015-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability. PMID:25570838

  14. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    Science.gov (United States)

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving the super high resolution technology closer to clinical viability.
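The computational core of spectral-domain OCT reconstruction, and the step such pipelines offload to the GPU, is a Fourier transform of the spectral interferogram for each A-scan. A minimal single-reflector sketch (synthetic spectrum, naive DFT; real pipelines batch this onto GPU FFT libraries) recovers the reflector's depth bin:

```python
import cmath, math

def dft(x):
    """Naive O(N^2) discrete Fourier transform. Each output bin is an
    independent reduction over the input, which is why this step batches
    so well onto GPU FFT libraries."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * m * j / n) for j in range(n))
            for m in range(n)]

N, depth_bin = 256, 40
# Toy spectral interferogram: DC background plus fringes from one reflector
spectrum = [1.0 + math.cos(2.0 * math.pi * depth_bin * j / N) for j in range(N)]
ascan = [abs(v) for v in dft(spectrum)]               # depth profile magnitude
peak = max(range(1, N // 2), key=lambda m: ascan[m])  # ignore the DC bin
```

The magnitude peak lands in the bin matching the fringe frequency, i.e. the reflector depth; an AOOCT system repeats this transform for thousands of A-scans per frame, which is the parallel workload.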

  15. A Stochastic Unit Commitment Model for a Local CHP Plant

    DEFF Research Database (Denmark)

    Ravn, Hans V.; Riisom, Jannik; Schaumburg-Müller, Camilla

    2005-01-01

    Local CHP development in Denmark has during the 90’s been characterised by large growth primarily due to government subsidies in the form of feed-in tariffs. In line with the liberalisation process in the EU, Danish local CHPs of a certain size must operate on market terms from 2005. This paper...... presents a stochastic unit commitment model for a single local CHP plant (consisting of CHP unit, boiler, and heat storage facility) which takes into account varying spot prices. Further, additional technology is implemented in the model in the form of an immersion heater. Simulations are conducted using...
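A heavily stripped-down version of such a model (one unit, hypothetical price scenarios, a start-up cost, and no heat storage or immersion heater) can be solved by dynamic programming over the hours; the stochastic spot prices enter only through the expected hourly margin:

```python
def commit_chp(scenario_prices, probs, marginal_cost, startup_cost):
    """Day-ahead on/off schedule for a single CHP unit maximizing expected
    profit over spot-price scenarios, via a dynamic program over the hours.
    Toy model only: unit power normalized to 1, no heat storage, no minimum
    up/down times."""
    hours = len(scenario_prices[0])
    exp_price = [sum(p * s[h] for s, p in zip(scenario_prices, probs))
                 for h in range(hours)]
    hourly = [exp_price[h] - marginal_cost for h in range(hours)]  # margin if on
    # v_next[state] = best profit from the next hour onward, entering in `state`
    v_next = {0: 0.0, 1: 0.0}
    for h in reversed(range(hours)):
        v = {}
        for s in (0, 1):
            stay_off = v_next[0]
            turn_on = hourly[h] - (startup_cost if s == 0 else 0.0) + v_next[1]
            v[s] = max(stay_off, turn_on)
        v_next = v
    return v_next[0]  # unit assumed off before the first hour

# Two equally likely price scenarios over three hours (illustrative numbers)
profit = commit_chp([[35, 15, 25], [25, 5, 35]], [0.5, 0.5],
                    marginal_cost=20.0, startup_cost=5.0)
```

In this instance the optimum shuts the unit down in the low-price middle hour and pays a second start-up, for an expected profit of 10; real models add storage and ramp constraints to the same recursion.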

  16. Steady-State Process Modelling

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter covers the basic principles of steady state modelling and simulation using a number of case studies. Two principal approaches are illustrated that develop the unit operation models from first principles as well as through application of standard flowsheet simulators. The approaches i...

  17. GREAT Process Modeller user manual

    OpenAIRE

    Rueda, Urko; España, Sergio; Ruiz, Marcela

    2015-01-01

    This report contains instructions to install, uninstall and use GREAT Process Modeller, a tool that supports Communication Analysis, a communication-oriented business process modelling method. GREAT allows creating communicative event diagrams (i.e. business process models), specifying message structures (which describe the messages associated to each communicative event), and automatically generating a class diagram (representing the data model of an information system that would support suc...

  18. Unit Operations for the Food Industry: Equilibrium Processes & Mechanical Operations

    OpenAIRE

    Guiné, Raquel

    2013-01-01

    Unit operations are an area of engineering that is at the same time very fascinating and most essential for industry in general and the food industry in particular. This book was prepared in a way that achieves both academic and practical perspectives simultaneously. It is organized into two parts: the unit operations based on equilibrium processes and the mechanical operations. Each topic starts with a presentation of the fundamental concepts and principles, followed by a discussion of ...

  19. Formalizing the Process of Constructing Chains of Lexical Units

    Directory of Open Access Journals (Sweden)

    Grigorij Chetverikov

    2015-06-01

    Full Text Available The paper investigates mathematical aspects of describing the construction of chains of lexical units on the basis of finite-predicate algebra. An analysis of the construction peculiarities is carried out, and the application of the method of finding the power of a linear logical transformation for removing characteristic words of a dictionary entry is given. Analysis and perspectives of the results of the study are provided.

  20. Four-component united-atom model of bitumen

    DEFF Research Database (Denmark)

    Hansen, Jesper Schmidt; Lemarchand, Claire; Nielsen, Erik

    2013-01-01

    We propose a four-component united-atom molecular model of bitumen. The model includes realistic chemical constituents and introduces a coarse graining level that suppresses the highest frequency modes. Molecular dynamics simulations of the model are carried out using graphic-processor-units based...... software in time spans in order of microseconds, which enables the study of slow relaxation processes characterizing bitumen. This paper also presents results of the model dynamics as expressed through the mean-square displacement, the stress autocorrelation function, and rotational relaxation...... the stress autocorrelation function, the shear viscosity and shear modulus are evaluated, showing a viscous response at frequencies below 100 MHz. The model predictions of viscosity and diffusivities are compared to experimental data, giving reasonable agreement. The model shows that the asphaltene, resin...

  1. Performance Analysis of the United States Marine Corps War Reserve Materiel Program Process Flow

    Science.gov (United States)

    2016-12-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. MBA Professional Report: Performance Analysis of the United States Marine Corps War Reserve Materiel Program Process Flow. Author: Nathan A. Campbell. ... an item is requested but not maintained in the WRM inventory. By conducting a process analysis and using computer modeling, our recommendations are

  2. Improving the Quotation Process of an After-Sales Unit

    OpenAIRE

    Matilainen, Janne

    2013-01-01

    The purpose of this study was to model and analyze the quotation process of area managers at a global company. Process improvement requires understanding the fundamentals of the process. The study was conducted as a case study. Data comprised internal documentation of the case company, literature, and semi-structured, themed interviews of process performers and stakeholders. The objective was to produce a model of the current state of the process. The focus was to establish a holistic view o...

  3. The Automation of Nowcast Model Assessment Processes

    Science.gov (United States)

    2016-09-01

    AUTHOR(S): Leelinda P Dawson, John W Raby, and Jeffrey A Smith. ... Fig. 6: An example PSA log file, ps_auto_log, using DDA, one case-study date, 3 domains, and 3 model runs ... A case-study date could be set for each run. This process was time-consuming when multiple configurations were required by the user. Also, each run

  4. BPMN Impact on Process Modeling

    OpenAIRE

    Polak, Przemyslaw

    2013-01-01

    Recent years have seen a huge rise in the popularity of BPMN in the area of business process modeling, especially among business analysts. This notation has characteristics that distinguish it significantly from previously popular process modeling notations, such as EPC. The article contains an analysis of some important characteristics of BPMN and provides the author's conclusions on the impact that the popularity and specificity of BPMN can have on the practice of process modeling. Author's obse...

  5. Anisotropic interfacial tension, contact angles, and line tensions: A graphics-processing-unit-based Monte Carlo study of the Ising model

    CERN Document Server

    Block, Benjamin J; Virnau, Peter; Binder, Kurt

    2014-01-01

    As a generic example for crystals where the crystal-fluid interface tension depends on the orientation of the interface relative to the crystal lattice axes, the nearest neighbor Ising model on the simple cubic lattice is studied over a wide temperature range, both above and below the roughening transition temperature. Using a thin film geometry $L_x \\times L_y \\times L_z$ with periodic boundary conditions along the z-axis and two free $L_x \\times L_y$ surfaces at which opposing surface fields $\\pm H_{1}$ act, under conditions of partial wetting, a single planar interface inclined under a contact angle $\\theta < \\pi/2$ relative to the yz-plane is stabilized. In the y-direction, a generalization of the antiperiodic boundary condition is used that maintains the translational invariance in y-direction despite the inhomogeneity of the magnetization distribution in this system. This geometry allows a simultaneous study of the angle-dependent interface tension, the contact angle, and the line tension (which depe...
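The elementary move that such large-scale Monte Carlo studies parallelize can be sketched on the two-dimensional cousin of the model (the paper treats the 3D simple cubic lattice with surface fields; parameters below are illustrative). A serial Metropolis sweep visits spins one at a time, whereas the GPU version updates checkerboard sublattices in parallel:

```python
import math, random

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D nearest-neighbour Ising model with
    periodic boundaries. Spins are visited in random order here; a GPU
    implementation would instead update checkerboard sublattices in
    parallel, since same-colour sites do not interact."""
    n = len(spins)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
              + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
        dE = 2.0 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

rng = random.Random(1)
n = 16
spins = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
for _ in range(400):
    metropolis_sweep(spins, beta=1.0 / 1.5, rng=rng)  # T = 1.5, below T_c ~= 2.27
# Energy per site; each bond counted once (right and down neighbours)
energy = -sum(spins[i][j] * (spins[(i + 1) % n][j] + spins[i][(j + 1) % n])
              for i in range(n) for j in range(n)) / (n * n)
```

Below the critical temperature the lattice orders and the energy per site approaches its ground-state value of -2; interface studies like the paper's then impose boundary fields to stabilize a single inclined interface on top of this bulk behaviour.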

  6. Minimization of entropy production in separate and connected process units

    Energy Technology Data Exchange (ETDEWEB)

    Roesjorde, Audun

    2004-08-01

    The objective of this thesis was to further develop a methodology for minimizing the entropy production of single and connected chemical process units. When chemical process equipment is designed and operated at the lowest entropy production possible, the energy efficiency of the equipment is enhanced. We have found for single process units that the entropy production could be reduced by up to 20-40%, given the degrees of freedom in the optimization. In processes, our results indicated that even bigger reductions were possible. The states of minimum entropy production were studied and important parameters for obtaining significant reductions in the entropy production were identified. From both sustainability and economic viewpoints, knowledge of energy-efficient design and operation is important. In some of the systems we studied, nonequilibrium thermodynamics was used to model the entropy production. In Chapter 2, we gave a brief introduction to different industrial applications of nonequilibrium thermodynamics. The link between local transport phenomena and overall system description makes nonequilibrium thermodynamics a useful tool for understanding the design of chemical process units. We developed the methodology of minimization of entropy production in several steps. First, we analyzed and optimized the entropy production of single units: two alternative concepts of adiabatic distillation, diabatic and heat-integrated distillation, were analyzed and optimized in Chapters 3 to 5. In diabatic distillation, heat exchange is allowed along the column, and it is this feature that increases the energy efficiency of the distillation column. In Chapter 3, we found how a given area of heat transfer should be optimally distributed among the trays in a column separating a mixture of propylene and propane. The results showed that heat exchange was most important on the trays close to the reboiler and condenser. In Chapters 4 and 5, we studied how the entropy
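The flavour of such an optimization can be conveyed by a toy allocation problem (not the thesis's tray-by-tray model): if the entropy production of tray i is approximated by a quadratic force-flux law S_i = R_i Q_i^2 with a fixed total duty, Lagrange multipliers give the optimal duties Q_i proportional to 1/R_i:

```python
def optimal_duties(resistances, total_duty):
    """Duty allocation minimizing S = sum_i R_i * Q_i**2 subject to
    sum_i Q_i = total_duty. Setting dS/dQ_i equal for all i (Lagrange
    multipliers) gives Q_i proportional to 1/R_i. Toy linear force-flux
    model with hypothetical resistances, for illustration only."""
    weights = [1.0 / r for r in resistances]
    scale = total_duty / sum(weights)
    return [w * scale for w in weights]

def entropy_production(resistances, duties):
    return sum(r * q * q for r, q in zip(resistances, duties))

R = [1.0, 2.0, 4.0]              # tray "resistances" (hypothetical values)
Q = optimal_duties(R, 7.0)       # optimal duties, proportional to 1/R_i
s_opt = entropy_production(R, Q)
# Shifting duty between trays away from the optimum raises the entropy production
s_perturbed = entropy_production(R, [Q[0] + 0.5, Q[1] - 0.5, Q[2]])
```

The perturbation check mirrors the thesis's finding in spirit: the optimal distribution of heat duty is uneven and concentrates duty where the "resistance" to transfer is lowest.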

  7. Radiolysis Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Buck, Edgar C.; Wittman, Richard S.; Skomurski, Frances N.; Cantrell, Kirk J.; McNamara, Bruce K.; Soderquist, Chuck Z.

    2012-07-17

    Assessing the performance of spent (used) nuclear fuel in a geological repository requires quantification of time-dependent phenomena that may influence its behavior on a time-scale up to millions of years. A high-level waste repository environment will be a dynamic redox system because of the time-dependent generation of radiolytic oxidants and reductants and the corrosion of Fe-bearing canister materials. One major difference between used fuel and natural analogues, including unirradiated UO2, is the intense radiolytic field. The radiation emitted by used fuel can produce radiolysis products in the presence of water vapor or a thin film of water (including OH• and H• radicals, O2-, eaq, H2O2, H2, and O2) that may increase the waste form degradation rate and change radionuclide behavior. Because H2O2 is the dominant oxidant for spent nuclear fuel in an O2-depleted water environment, the most sensitive parameters have been identified with respect to predictions of a radiolysis model under typical conditions. Compared with the full model of about 100 reactions, it was found that only 30-40 of the reactions are required to determine [H2O2] to one part in 10⁻⁵ and to preserve most of the predictions for major species. This allows a systematic approach for model simplification and offers guidance in designing experiments for validation.
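The structure of such a kinetics model can be caricatured with a single lumped balance (assumed rate constants, nothing like the ~100-reaction model): radiolytic production at a constant rate competing with first-order consumption, which relaxes to the steady state G·D/k:

```python
def h2o2_history(dose_rate, g_value, k_consume, dt, steps):
    """Forward-Euler integration of a two-term toy radiolysis balance
        d[H2O2]/dt = G * D - k * [H2O2]
    where G*D lumps radiolytic production and k lumps all consumption
    paths. The real model couples on the order of 100 reactions; this
    caricature only shows the approach to the steady state G*D/k.
    All parameter values below are illustrative."""
    c, history = 0.0, []
    production = g_value * dose_rate
    for _ in range(steps):
        c += dt * (production - k_consume * c)
        history.append(c)
    return history

hist = h2o2_history(dose_rate=1.0, g_value=1.0e-7, k_consume=1.0e-3,
                    dt=10.0, steps=5000)        # integrate ~50 time constants
steady = 1.0e-7 / 1.0e-3                        # analytic steady state G*D/k
```

The monotone approach to a production/consumption balance is the qualitative behavior the sensitivity analysis exploits: only the reactions that move this balance appreciably need to be retained in the reduced model.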

  8. Modeling of column apparatus processes

    CERN Document Server

    Boyadjiev, Christo; Boyadjiev, Boyan; Popova-Krumova, Petya

    2016-01-01

    This book presents a new approach for the modeling of chemical and interphase mass transfer processes in industrial column apparatuses, using convection-diffusion and average-concentration models. The convection-diffusion type models are used for a qualitative analysis of the processes and to assess the main, small and slight physical effects, and then reject the slight effects. As a result, the process mechanism can be identified. It also introduces average concentration models for quantitative analysis, which use the average values of the velocity and concentration over the cross-sectional area of the column. The new models are used to analyze different processes (simple and complex chemical reactions, absorption, adsorption and catalytic reactions), and make it possible to model the processes of gas purification with sulfur dioxide, which form the basis of several patents.
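The simplest instance of a convection-diffusion column model is the steady one-dimensional balance u dc/dx = D d²c/dx². The sketch below (central differences plus the Thomas algorithm; boundary values and parameters are illustrative, not taken from the book) reproduces the analytic exponential profile:

```python
import math

def column_profile(u, D, n):
    """Steady 1D convection-diffusion along a dimensionless column:
        u * dc/dx = D * d2c/dx2,  c(0) = 0,  c(1) = 1,
    discretized with central differences on n intervals and solved with
    the Thomas algorithm for the resulting tridiagonal system."""
    h = 1.0 / n
    a = [D / h**2 + u / (2 * h)] * (n - 1)   # sub-diagonal coefficients
    b = [-2 * D / h**2] * (n - 1)            # diagonal coefficients
    c = [D / h**2 - u / (2 * h)] * (n - 1)   # super-diagonal coefficients
    d = [0.0] * (n - 1)
    d[-1] -= c[-1] * 1.0                     # fold in the boundary c(1) = 1
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * (n - 1)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return [0.0] + x + [1.0]

u_vel, D_diff, n = 1.0, 0.1, 200
profile = column_profile(u_vel, D_diff, n)
Pe = u_vel / D_diff  # Peclet number
exact = [(math.exp(Pe * i / n) - 1) / (math.exp(Pe) - 1) for i in range(n + 1)]
```

The grid keeps the cell Peclet number u·h/D well below 2, so the central scheme is oscillation-free; average-concentration models of the kind the book proposes reduce exactly this kind of cross-section-resolved problem to one axial coordinate.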

  9. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate...... the performance of an implementation and demonstrate that while there are some implementational pitfalls, a careful implementation can result in impressive improvements....

  10. Utilizing Graphics Processing Units for Network Anomaly Detection

    Science.gov (United States)

    2012-09-13

    matching system using deterministic finite automata and extended finite automata, resulting in a speedup of 9x over the CPU implementation [SGO09]. Kovach ... pages 14–18, 2009. [Kov10] Nicholas S. Kovach. Accelerating malware detection via a graphics processing unit, 2010. http://www.dtic.mil/dtic/tr

  11. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2010-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the Graphics Processing Unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expan

  12. Acceleration of option pricing technique on graphics processing units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2014-01-01

    The acceleration of an option pricing technique based on Fourier cosine expansions on the graphics processing unit (GPU) is reported. European options, in particular with multiple strikes, and Bermudan options will be discussed. The influence of the number of terms in the Fourier cosine series expan
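The Fourier-cosine (COS) expansion the abstracts refer to can be sketched for a European call under Black-Scholes dynamics; each term of the truncated series is independent of the others, which is exactly what a GPU implementation parallelizes. The truncation range and parameters below are conventional illustrative choices, not taken from the paper:

```python
import cmath, math

def cos_call(S0, K, r, sigma, T, N=256):
    """European call by the Fourier-cosine (COS) expansion under
    Black-Scholes dynamics. Every term of the sum is independent,
    which makes the method well suited to GPU parallelization."""
    x = math.log(S0 / K)
    c1, c2 = x + (r - 0.5 * sigma**2) * T, sigma**2 * T
    a, b = c1 - 10.0 * math.sqrt(c2), c1 + 10.0 * math.sqrt(c2)  # truncation

    def chi(k, c, d):   # integral of e^y * cos(k*pi*(y-a)/(b-a)) over [c, d]
        u = k * math.pi / (b - a)
        return ((math.cos(u * (d - a)) + u * math.sin(u * (d - a))) * math.exp(d)
                - (math.cos(u * (c - a)) + u * math.sin(u * (c - a))) * math.exp(c)
                ) / (1.0 + u * u)

    def psi(k, c, d):   # integral of cos(k*pi*(y-a)/(b-a)) over [c, d]
        if k == 0:
            return d - c
        u = k * math.pi / (b - a)
        return (math.sin(u * (d - a)) - math.sin(u * (c - a))) / u

    total = 0.0
    for k in range(N):
        u = k * math.pi / (b - a)
        phi = cmath.exp(1j * u * (r - 0.5 * sigma**2) * T
                        - 0.5 * sigma**2 * u * u * T)  # BS characteristic fn
        V = 2.0 / (b - a) * K * (chi(k, 0.0, b) - psi(k, 0.0, b))  # payoff coeff
        term = (phi * cmath.exp(1j * u * (x - a))).real * V
        total += 0.5 * term if k == 0 else term  # first term gets weight 1/2
    return math.exp(-r * T) * total

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed form, used as the reference price."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

cos_p = cos_call(100.0, 100.0, 0.05, 0.2, 1.0)
bs_p = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

For multiple strikes, as in the papers, the loop body is evaluated for each strike with the same characteristic-function values, a naturally batched workload on the GPU.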

  13. UML in business process modeling

    Directory of Open Access Journals (Sweden)

    Bartosz Marcinkowski

    2013-03-01

    Full Text Available Selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities as well as on proper understanding of the functionality of information systems that shall support the activity of the organization. A number of business process modeling notations were popularized in practice in recent decades. The most significant of these notations include the Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper, it is assessed whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all the stakeholders. After the introduction, the methodology of research is discussed. Section 2 presents selected case study results. The paper is concluded with a summary.

  14. Modelling fluidized catalytic cracking unit stripper efficiency

    Directory of Open Access Journals (Sweden)

    García-Dopico M.

    2015-01-01

    Full Text Available This paper presents our modelling of a FCCU stripper, following our earlier research. This model can measure stripper efficiency against the most important variables: pressure, temperature, residence time and steam flow. Few models in the literature describe the stripper, and those that do usually consider only one variable. Nevertheless, there is general agreement on the importance of the stripper in the overall process, and the scarcity of models may be due to the difficulty of obtaining a comprehensive one. The proposed model, by contrast, uses all the variables of the stripper, calculating efficiency on the basis of steam flow, pressure, residence time and temperature. The correctness of the model is then analysed, and we examine several possible scenarios, such as decreasing the steam flow, which is achieved by increasing the temperature in the stripper.

  15. Dynamic modeling of ultrafiltration membranes for whey separation processes

    NARCIS (Netherlands)

    Saltık, M.B.; Özkan, Leyla; Jacobs, Marc; Padt, van der Albert

    2017-01-01

    In this paper, we present a control relevant rigorous dynamic model for an ultrafiltration membrane unit in a whey separation process. The model consists of a set of differential algebraic equations and is developed for online model based applications such as model based control and process monitori

  16. Modeling Software Processes and Artifacts

    NARCIS (Netherlands)

    van den Berg, Klaas; Bosch, Jan; Mitchell, Stuart

    1997-01-01

    The workshop on Modeling Software Processes and Artifacts explored the application of object technology in process modeling. After the introduction and the invited lecture, a number of participants presented their position papers. First, an overview is given on some background work, and the aims, as

  17. Temperature Modelling of the Biomass Pretreatment Process

    DEFF Research Database (Denmark)

    2012-01-01

    In a second generation biorefinery, the biomass pretreatment stage has an important contribution to the efficiency of the downstream processing units involved in biofuel production. Most of the pretreatment process occurs in a large pressurized thermal reactor that presents an irregular temperature...... distribution. Therefore, an accurate temperature model is critical for observing the biomass pretreatment. More than that, the biomass is also pushed with a constant horizontal speed along the reactor in order to ensure a continuous throughput. The goal of this paper is to derive a temperature model...
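A minimal caricature of the situation described above is biomass pushed at constant speed while relaxing towards a wall temperature, dT/dt + v dT/dx = k(T_wall - T). The sketch below (explicit upwind scheme; all parameters are illustrative, and the paper's pressurized reactor with its irregular temperature field is far richer) marches this to its exponential steady-state profile:

```python
import math

def reactor_profile(v, k, t_wall, t_in, n, steps, dt):
    """Explicit upwind scheme for a thermal plug-flow caricature:
        dT/dt + v * dT/dx = k * (T_wall - T)
    i.e. material pushed at constant speed v while relaxing towards the
    wall temperature, marched until it reaches steady state."""
    dx = 1.0 / n
    T = [t_in] * (n + 1)
    for _ in range(steps):
        new = [t_in]  # inlet boundary condition
        for i in range(1, n + 1):
            adv = -v * (T[i] - T[i - 1]) / dx
            new.append(T[i] + dt * (adv + k * (t_wall - T[i])))
        T = new
    return T

v, k, n = 1.0, 2.0, 100
T = reactor_profile(v, k, t_wall=180.0, t_in=20.0, n=n, steps=4000, dt=0.004)
# Steady analytic profile: T(x) = T_wall + (T_in - T_wall) * exp(-k*x/v)
exact = [180.0 + (20.0 - 180.0) * math.exp(-k * (i / n) / v) for i in range(n + 1)]
```

The time step satisfies the CFL condition v·dt/dx < 1, and after many transit times the numerical profile settles onto the analytic exponential; a model-based observer for the pretreatment reactor would build on a refined version of exactly this transport-relaxation structure.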

  18. Mathematical model of layered metallurgical furnaces and units

    Science.gov (United States)

    Shvydkiy, V. S.; Spirin, N. A.; Lavrov, V. V.

    2016-09-01

    The basic approaches to mathematical modeling of layered steel furnaces and units are considered. It is noted that knowledge about the mechanisms and physical nature of the processes of charge column movement and gas flow in the moving layer, as well as the regularities of heat and mass transfer development in them, is of particular importance. The statement and mathematical description of the problem of potential gas flow in a layered unit of arbitrary profile are presented. On the basis of the proposed mathematical model, the software implementation of an information-modeling system for BF gas dynamics is carried out. The results of computer modeling of BF non-isothermal gas dynamics with regard to the cohesion zone, gas dynamics of the combustion zone, and calculation of hot-blast stoves are provided.

  19. Multi-enzyme Process Modeling

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia

    The subject of this thesis is to develop a methodological framework that can systematically guide mathematical model building for better understanding of multi-enzyme processes. In this way, opportunities for process improvements can be identified by analyzing simulations of either existing ... in the scientific literature. Reliable mathematical models of such multi-catalytic schemes can exploit the potential benefit of these processes. In this way, the best outcome of the process can be obtained, understanding the types of modification that are required for process optimization. ... In this way the model parameters that drive the main dynamic behavior can be identified, and thus a better understanding of this type of process gained. In order to develop, test and verify the methodology, three case studies were selected, specifically the bi-enzyme process for the production of lactobionic acid ... An effective evaluation ...

  20. Business process modeling in healthcare.

    Science.gov (United States)

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making, which is focused on achieving their objective of providing quality medical assistance. In this chapter the application of business process modelling, using the Business Process Modelling Notation (BPMN) standard, is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in health care processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  1. Meta-model Based Model Organization and Transformation of Design Pattern Units in MDA

    Institute of Scientific and Technical Information of China (English)

    Chang-chun YANG; Zi-yi ZHAO; Jing Sun

    2010-01-01

    To achieve the purpose of applying design patterns, which are various in kind and constantly changing, in MDA from idea to application, one way is to solve the problem of pattern disappearance which occurs during pattern instantiation, to guarantee the independence of patterns, and at the same time to apply this process to multiple design patterns. To solve these two problems, the modeling method of design pattern units based on meta-models is adopted, i.e., the basic operations are divided into atomic ones in the meta-model tier and the atoms are then combined to complete design pattern unit meta-models without business logic. After one process of conversion, the purpose of making up various pattern unit meta-models and dividing business logic from pattern logic is achieved.

  2. Modeling nuclear processes by Simulink

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my [Faculty of Engineering, International Islamic University Malaysia, Jalan Gombak, Selangor (Malaysia)

    2015-04-29

    Modelling and simulation are essential parts of the study of dynamic systems behaviour. In nuclear engineering, modelling and simulation are important to assess the expected results of an experiment before the actual experiment is conducted or in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox software that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviours. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, the delayed neutrons effect, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.

  3. Modeling nuclear processes by Simulink

    Science.gov (United States)

    Rashid, Nahrul Khair Alang Md

    2015-04-01

    Modelling and simulation are essential parts of the study of dynamic systems behaviour. In nuclear engineering, modelling and simulation are important to assess the expected results of an experiment before the actual experiment is conducted or in the design of nuclear facilities. In education, modelling can give insight into the dynamics of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox software that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviours. Starting from the describing equations, Simulink models for heat transfer, the radionuclide decay process, the delayed neutrons effect, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.
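The decay example mentioned in both records above amounts to a single integrator block fed by a gain; the same model can be written as a few lines of code (forward-Euler stand-in for Simulink's solver, illustrative half-life):

```python
import math

def decay_curve(n0, half_life, dt, steps):
    """Forward-Euler counterpart of a Simulink integrator block wired for
    radionuclide decay, dN/dt = -lambda * N, with lambda = ln(2)/half-life.
    Parameter values are illustrative."""
    lam = math.log(2.0) / half_life
    n, out = n0, []
    for _ in range(steps):
        n += dt * (-lam * n)  # one Euler step of the decay law
        out.append(n)
    return out

# Integrate over exactly one half-life: the population should roughly halve
curve = decay_curve(n0=1000.0, half_life=10.0, dt=0.01, steps=1000)
```

The small fixed step keeps the Euler solution within a fraction of a percent of the analytic N0·2^(-t/T½); in Simulink the same experiment is a matter of wiring the gain back into the integrator and choosing a solver.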

  4. Determinantal point process models on the sphere

    DEFF Research Database (Denmark)

    Møller, Jesper; Nielsen, Morten; Porcu, Emilio

    We consider determinantal point processes on the d-dimensional unit sphere Sd. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel, which we assume is a complex covariance function defined on Sd × Sd. We review the appealing properties of such processes, including their specific moment properties, density expressions and simulation procedures. Particularly, we characterize and construct isotropic DPP models on Sd, where it becomes essential to specify the eigenvalues…

  5. Graphics processing unit-accelerated solving of the simplified spherical harmonic approximation model

    Institute of Scientific and Technical Information of China (English)

    贺小伟; 陈政; 侯榆青; 郭红波

    2016-01-01

    As a high-order approximation to the radiative transfer equation, the simplified spherical harmonic (SPN) approximation has become a hot research topic in optical molecular imaging. However, low computational efficiency restricts its wide application. This paper presents a graphics processing unit (GPU) parallel acceleration strategy for solving the SPN model. The proposed strategy adopts the compute unified device architecture (CUDA) parallel processing architecture introduced by NVIDIA to parallelize the two most time-consuming modules, generation of the stiffness matrix and solution of the linear equations. Based on the features of CUDA, the strategy optimizes the parallel computation in three respects: task distribution, use of memory units, and data preprocessing. Simulations on a phantom and a digital mouse model were designed to evaluate the acceleration by comparing the stiffness-matrix generation time and the average time per iteration step. Experimental results show an overall speedup of around 30 times, demonstrating the advantage and potential of the proposed strategy in optical molecular imaging.

  6. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature has shown a tendency towards specifying the cumulative hazard process directly. Within this framework we present a new model class where cumulative hazards are described by self-similar additive processes, also known as Sato processes. Furthermore we also analyze specifications obtained via a simple deterministic time-change of a homogeneous Levy process. While the processes in these two classes share the same average behavior over time, the associated intensities exhibit very different properties. Concrete specifications are calibrated to data on the single names included in the iTraxx Europe index. The performances are compared…

  7. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    2010-01-01

    In reduced form default models, the instantaneous default intensity is the classical modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature tends to specify the cumulative hazard process directly. Within this framework we present a new model class where cumulative hazards are described by self-similar additive processes, also known as Sato processes. Furthermore, we analyze specifications obtained via a simple deterministic time-change of a homogeneous Lévy process. While the processes in these two classes share the same average behavior over time, the associated intensities exhibit very different properties. Concrete specifications are calibrated to data on all the single names included in the iTraxx Europe index. The performances are compared with those of the classical CIR…

  8. Modelling of CWS combustion process

    Science.gov (United States)

    Rybenko, I. A.; Ermakova, L. A.

    2016-10-01

    The paper considers the combustion process of coal water slurry (CWS) drops. The physico-chemical process scheme consisting of several independent parallel-sequential stages is offered. This scheme of drops combustion process is proved by the particle size distribution test and research stereomicroscopic analysis of combustion products. The results of mathematical modelling and optimization of stationary regimes of CWS combustion are provided. During modeling the problem of defining possible equilibrium composition of products, which can be obtained as a result of CWS combustion processes at different temperatures, is solved.

  9. Homology modeling, docking studies and molecular dynamic simulations using graphical processing unit architecture to probe the type-11 phosphodiesterase catalytic site: a computational approach for the rational design of selective inhibitors.

    Science.gov (United States)

    Cichero, Elena; D'Ursi, Pasqualina; Moscatelli, Marco; Bruno, Olga; Orro, Alessandro; Rotolo, Chiara; Milanesi, Luciano; Fossa, Paola

    2013-12-01

    Phosphodiesterase 11 (PDE11) is the latest isoform of the PDEs family to be identified, acting on both cyclic adenosine monophosphate and cyclic guanosine monophosphate. The initial reports of PDE11 found evidence for PDE11 expression in skeletal muscle, prostate, testis, and salivary glands; however, the tissue distribution of PDE11 still remains a topic of active study and some controversy. Given the sequence similarity between PDE11 and PDE5, several PDE5 inhibitors have been shown to cross-react with PDE11. Accordingly, many non-selective inhibitors, such as IBMX, zaprinast, sildenafil, and dipyridamole, have been documented to inhibit PDE11. Only recently, a series of dihydrothieno[3,2-d]pyrimidin-4(3H)-one derivatives proved to be selective toward the PDE11 isoform. In the absence of experimental data about PDE11 X-ray structures, we found it interesting to gain a better understanding of the enzyme-inhibitor interactions using in silico simulations. In this work, we describe a computational approach based on homology modeling, docking, and molecular dynamics simulation to derive a predictive 3D model of PDE11. Using a Graphical Processing Unit architecture, it is possible to perform long simulations, find stable interactions involved in the complex, and finally to suggest guidelines for the identification and synthesis of potent and selective inhibitors.

  10. Accelerated space object tracking via graphic processing unit

    Science.gov (United States)

    Jia, Bin; Liu, Kui; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    In this paper, a hybrid Monte Carlo Gauss mixture Kalman filter is proposed for the continuous orbit estimation problem. Specifically, the graphic processing unit (GPU) aided Monte Carlo method is used to propagate the uncertainty of the estimation when the observation is not available and the Gauss mixture Kalman filter is used to update the estimation when the observation sequences are available. A typical space object tracking problem using the ground radar is used to test the performance of the proposed algorithm. The performance of the proposed algorithm is compared with the popular cubature Kalman filter (CKF). The simulation results show that the ordinary CKF diverges in 5 observation periods. In contrast, the proposed hybrid Monte Carlo Gauss mixture Kalman filter achieves satisfactory performance in all observation periods. In addition, by using the GPU, the computational time is over 100 times less than that using the conventional central processing unit (CPU).
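The record combines Monte Carlo propagation during observation gaps with a Gaussian update when measurements arrive. As a rough illustration of that hybrid idea (a toy 1-D analogue with assumed dynamics and noise levels, not the authors' orbit-estimation algorithm), one might write:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo propagation while no observation is available,
# Gaussian (Kalman) measurement update when one arrives.
def propagate(p, dt, q):
    return p + dt * np.sin(p) + rng.normal(0.0, q, p.size)  # drift + process noise

def kalman_update(mean, var, z, r):
    k = var / (var + r)                  # Kalman gain for a direct measurement
    return mean + k * (z - mean), (1.0 - k) * var

particles = rng.normal(1.0, 0.5, 5000)
for _ in range(20):                      # observation gap: propagate uncertainty
    particles = propagate(particles, 0.05, 0.02)

prior_mean, prior_var = particles.mean(), particles.var()
z = 1.9                                  # an observation becomes available
post_mean, post_var = kalman_update(prior_mean, prior_var, z, r=0.1)
```

Summarizing the particle cloud by its mean and variance before the update is what makes the measurement step a standard Gaussian filter update.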

  11. Ising Processing Units: Potential and Challenges for Discrete Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Coffrin, Carleton James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagarajan, Harsha [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bent, Russell Whitford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-05

    The recent emergence of novel computational devices, such as adiabatic quantum computers, CMOS annealers, and optical parametric oscillators, presents new opportunities for hybrid-optimization algorithms that leverage these kinds of specialized hardware. In this work, we propose the idea of an Ising processing unit as a computational abstraction for these emerging tools. Challenges involved in using and benchmarking these devices are presented, and open-source software tools are proposed to address some of these challenges. The proposed benchmarking tools and methodology are demonstrated by conducting a baseline study of established solution methods to a D-Wave 2X adiabatic quantum computer, one example of a commercially available Ising processing unit.

  12. A Universal Quantum Network Quantum Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    WANG An-Min

    2001-01-01

    A new construction scheme of a universal quantum network which is compatible with the known quantum gate-assembly schemes is proposed. Our quantum network is standard, easy-assemble, reusable, scalable and even potentially programmable. Moreover, we can construct a whole quantum network to implement the general quantum algorithm and quantum simulation procedure. In the above senses, it is a realization of the quantum central processing unit.

  13. Accelerating Malware Detection via a Graphics Processing Unit

    Science.gov (United States)

    2010-09-01

    The PE format is an updated version of the common object file format (COFF) [Mic06].

  14. An Architecture of Deterministic Quantum Central Processing Unit

    OpenAIRE

    Xue, Fei; Chen, Zeng-Bing; Shi, Mingjun; Zhou, Xianyi; Du, Jiangfeng; Han, Rongdian

    2002-01-01

    We present an architecture of QCPU(Quantum Central Processing Unit), based on the discrete quantum gate set, that can be programmed to approximate any n-qubit computation in a deterministic fashion. It can be built efficiently to implement computations with any required accuracy. QCPU makes it possible to implement universal quantum computation with a fixed, general purpose hardware. Thus the complexity of the quantum computation can be put into the software rather than the hardware.

  15. BitTorrent Processing Unit: An Outlook on BPU Development

    Institute of Scientific and Technical Information of China (English)

    Zone; 杨原青

    2007-01-01

    In the early days of computing, arithmetic, graphics, and input/output processing were all handled by the CPU (Central Processing Unit). As processing became more specialized, however, NVIDIA was the first to split graphics processing off, proposing the GPU (Graphics Processing Unit) concept in 1999. Eight years on, the GPU has become the mainstay of graphics processing and is familiar to all players. Recently, two Taiwanese companies proposed the concept of a BPU (BitTorrent Processing Unit). Below, let us take a look at this brand-new concept product.

  16. Social Models: Blueprints or Processes?

    Science.gov (United States)

    Little, Graham R.

    1981-01-01

    Discusses the nature and implications of two different models for societal planning: (1) the problem-solving process approach based on Karl Popper; and (2) the goal-setting "blueprint" approach based on Karl Marx. (DC)

  17. Unit testing, model validation, and biological simulation

    Science.gov (United States)

    Watts, Mark D.; Ghayoomie, S. Vahid; Larson, Stephen D.; Gerkin, Richard C.

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models. PMID:27635225

  18. Molecular dynamics for long-range interacting systems on Graphic Processing Units

    CERN Document Server

    Filho, Tarcísio M Rocha

    2012-01-01

    We present implementations of a fourth-order symplectic integrator on graphic processing units for three $N$-body models with long-range interactions of general interest: the Hamiltonian Mean Field, Ring and two-dimensional self-gravitating models. We discuss the algorithms, speedups and errors using one and two GPU units. Speedups can be as high as 140 compared to a serial code, and the overall relative error in the total energy is of the same order of magnitude as for the CPU code. The number of particles used in the tests range from 10,000 to 50,000,000 depending on the model.

  19. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening performed on the CPU is considerably time-consuming, especially for large pictures. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of picture size on performance and the relationship between data transfer time and parallel computing time. Further, according to the different features of different memory types, an improved scheme of our method is developed, which exploits shared memory in the GPU instead of global memory and further increases the efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
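The per-pixel Laplacian operation described here is what makes the problem embarrassingly parallel: each output pixel depends only on its four neighbours, so each CUDA thread can compute one pixel independently. A serial NumPy sketch of the underlying filter (periodic borders via `roll`, an assumption made here for brevity):

```python
import numpy as np

def laplacian_sharpen(img, strength=1.0):
    # 4-neighbour Laplacian; out = img - strength * lap
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.clip(img - strength * lap, 0.0, 255.0)

# A vertical step edge: sharpening overshoots on both sides of the edge,
# brightening the bright side and darkening the dark side.
img = np.full((8, 8), 50.0)
img[:, 4:] = 150.0
out = laplacian_sharpen(img)
```

On a GPU the same arithmetic is mapped one thread per pixel, and the shared-memory variant the abstract mentions caches each thread block's neighbourhood to avoid repeated global-memory reads.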

  20. Model feedstock supply processing plants

    Directory of Open Access Journals (Sweden)

    V. M. Bautin

    2013-01-01

    Full Text Available A model is developed for supplying raw material to processing enterprises that belong to a vertically integrated structure for the production and processing of raw milk. The model is distinguished by its orientation towards a cumulative effect for the integrated structure, which acts as the criterion function; this function is maximized by optimizing the capacities, the volumes of raw-material deliveries and their quality characteristics, the costs of industrial processing of the raw material, and the demand for dairy production.

  1. Model Checking of Boolean Process Models

    CERN Document Server

    Schneider, Christoph

    2011-01-01

    In the field of Business Process Management, formal models for the control flow of business processes have been designed for more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as the reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are low tokens to model explicitly states with a subsequent skipping of activations and arbitrary logical rules of type AND, XOR, OR etc. to model the split and join of the control flow. We apply model checking as a verification method for the safeness and liveness of Boolean systems. Model checking of Boolean systems uses the elementary theory of propositional logic; no modal operators are needed. Our verification builds on a finite complete prefix of a certain T-system attached to the Boolean system. It splits the processes of the Boolean sy...

  2. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Full Text Available Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas cleaning equipment, and modeling of biogas formation processes.

  3. Command Process Modeling & Risk Analysis

    Science.gov (United States)

    Meshkat, Leila

    2011-01-01

    Commanding Errors may be caused by a variety of root causes. It's important to understand the relative significance of each of these causes for making institutional investment decisions. One of these causes is the lack of standardized processes and procedures for command and control. We mitigate this problem by building periodic tables and models corresponding to key functions within it. These models include simulation analysis and probabilistic risk assessment models.

  4. Path modeling and process control

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Rodionova, O.; Pomerantsev, A.

    2007-01-01

    Many production processes are carried out in stages. At the end of each stage, the production engineer can analyze the intermediate results and correct process parameters (variables) of the next stage. Both analysis of the process and correction to process parameters at the next stage should… and having three or more stages. The methods are applied to a process control of a multi-stage production process having 25 variables and one output variable. When moving along the process, variables change their roles. It is shown how the methods of path modeling can be applied to estimate variables of the next stage with the purpose of obtaining optimal or almost optimal quality of the output variable. An important aspect of the methods presented is the possibility of extensive graphic analysis of data that can provide the engineer with a detailed view of the multi-variate variation in data.

  5. Modelling Hospital Materials Management Processes

    Directory of Open Access Journals (Sweden)

    Raffaele Iannone

    2013-06-01

    integrated and detailed analysis and description model for hospital materials management data and tasks, which is able to tackle information from patient requirements to usage, from replenishment requests to supplying and handling activities. The model takes account of medical risk reduction, traceability and streamlined processes perspectives. Second, the paper translates this information into a business process model and mathematical formalization. The study provides a useful guide to the various relevant technology‐related, management and business issues, laying the foundations of an efficient reengineering of the supply chain to reduce healthcare costs and improve the quality of care.

  6. MELCOR modeling of Fukushima unit 2 accident

    Energy Technology Data Exchange (ETDEWEB)

    Sevon, Tuomo [VTT Technical Research Centre of Finland, Espoo (Finland)

    2014-12-15

    A MELCOR model of the Fukushima Daiichi unit 2 accident was created in order to get a better understanding of the event and to improve severe accident modeling methods. The measured pressure and water level could be reproduced relatively well with the calculation. This required adjusting the RCIC system flow rates and the containment leak area so that a good match to the measurements is achieved. Modeling of the gradual flooding of the torus room with water that originated from the tsunami was necessary for a satisfactory reproduction of the measured containment pressure. The reactor lower head did not fail in this calculation, and all the fuel remained in the RPV. 13 % of the fuel was relocated from the core area, and all the fuel rods lost their integrity, releasing at least some volatile radionuclides. According to the calculation, about 90 % of the noble gas inventory and about 0.08 % of the cesium inventory was released to the environment. The release started 78 h after the earthquake, and a second release peak came at 90 h. Uncertainties in the calculation are very large because there is scarce public data available about the Fukushima power plant and because it is not yet possible to inspect the status of the reactor and the containment. Uncertainty in the calculated cesium release is larger than a factor of ten.

  7. Fast calculation of HELAS amplitudes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use the graphics processing unit (GPU) for fast calculations of helicity amplitudes of physics processes. As our first attempt, we compute $u\\overline{u}\\to n\\gamma$ ($n=2$ to 8) processes in $pp$ collisions at $\\sqrt{s} = 14$TeV by transferring the MadGraph generated HELAS amplitudes (FORTRAN) into newly developed HEGET ({\\bf H}ELAS {\\bf E}valuation with {\\bf G}PU {\\bf E}nhanced {\\bf T}echnology) codes written in CUDA, a C-platform developed by NVIDIA for general purpose computing on the GPU. Compared with the usual CPU programs, we obtain 40-150 times better performance on the GPU.

  8. Use of general purpose graphics processing units with MODFLOW.

    Science.gov (United States)

    Hughes, Joseph D; White, Jeremy T

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
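A Jacobi-preconditioned conjugate gradient solve over a compressed sparse row matrix, as described for the UPCG solver, can be sketched in a few lines with SciPy. The matrix below is a hypothetical stand-in (a symmetric positive-definite 2-D five-point Laplacian), not an actual MODFLOW groundwater system:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Build a 2-D five-point Laplacian in CSR form as a stand-in SPD system.
n = 20
lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.csr_matrix(sp.kronsum(lap1d, lap1d))
b = np.ones(A.shape[0])

# Jacobi preconditioning: apply the inverse of the matrix diagonal.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda v: inv_diag * v)

x, info = cg(A, b, M=M)                 # preconditioned conjugate gradient
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

On a GPGPU the same iteration is built from the basic linear-algebra kernels the abstract mentions (sparse matrix-vector products, dot products, axpy), which is why memory copies between CPU and GPU can dominate when they are not kept off the per-iteration path.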

  9. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.

  10. ECONOMIC MODELING PROCESSES USING MATLAB

    Directory of Open Access Journals (Sweden)

    Anamaria G. MACOVEI

    2008-06-01

    Full Text Available To study economic phenomena and processes using mathematical modeling, and to determine an approximate solution to a problem, we need to choose a calculation method and a numerical computer program, namely the MatLab package. Any economic process or phenomenon has a mathematical description of its behavior, from which an economic-mathematical model is drawn up in the following stages: formulation of the problem, analysis and modeling of the process, production of the model, and design verification, validation and implementation of the model. This article presents an economic model whose modeling uses mathematical equations and the MatLab software package, which helps us approximate an effective solution. The input data are the net cost, the direct and total cost, and the link between them. The basic formula for determining the total cost is presented. The calculations for the economic model were made in the MatLab software package, with a graphic representation and interpretation of the results achieved for our specific problem.

  11. Congestion estimation technique in the optical network unit registration process.

    Science.gov (United States)

    Kim, Geunyong; Yoo, Hark; Lee, Dongsoo; Kim, Youngsun; Lim, Hyuk

    2016-07-01

    We present a congestion estimation technique (CET) to estimate the optical network unit (ONU) registration success ratio for the ONU registration process in passive optical networks. An optical line terminal (OLT) estimates the number of collided ONUs via the proposed scheme during the serial number state. The OLT can obtain congestion level among ONUs to be registered such that this information may be exploited to change the size of a quiet window to decrease the collision probability. We verified the efficiency of the proposed method through simulation and experimental results.
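The quiet-window trade-off described here can be illustrated with a toy slotted-contention model (an assumption-laden sketch, not the paper's CET scheme): each unregistered ONU replies in a random slot of the discovery window, and a shared slot means a collision.

```python
import random
from collections import Counter

def registration_success_ratio(n_onus, n_slots, trials=2000, seed=1):
    """Fraction of ONUs whose randomly chosen reply slot is not shared."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        slots = Counter(rng.randrange(n_slots) for _ in range(n_onus))
        ok += sum(c == 1 for c in slots.values())  # unshared slots register
    return ok / (trials * n_onus)

# Enlarging the quiet window lowers the collision probability.
small = registration_success_ratio(8, 4)    # 8 ONUs, tight window
large = registration_success_ratio(8, 64)   # 8 ONUs, wide window
```

This is the behavior the OLT's congestion estimate would exploit: with an estimate of how many ONUs collided, the window size can be grown just enough to make success likely.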

  12. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Full Text Available Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware displays a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  13. Porting a Hall MHD Code to a Graphic Processing Unit

    Science.gov (United States)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.

  14. Modelling Of Manufacturing Processes With Membranes

    Science.gov (United States)

    Crăciunean, Daniel Cristian; Crăciunean, Vasile

    2015-07-01

    The current objectives to increase the standards of quality and efficiency in manufacturing processes can be achieved only through the best combination of inputs, independent of the spatial distance between them. This paper proposes modelling production processes based on the membrane structures introduced in [4]. Inspired by biochemistry, membrane computation [4] is based on the concept of a membrane, represented in its formalism by the mathematical concept of a multiset. The manufacturing process is the evolution of a super cell system from its initial state according to the given actions of aggregation. In this paper we consider that the atomic production unit of the process is the action. The actions and the resources on which the actions are produced are distributed in a virtual network of companies working together. The destination of the output resources is specified by corresponding output events.

  15. Active microchannel fluid processing unit and method of making

    Science.gov (United States)

    Bennett, Wendy D [Kennewick, WA; Martin, Peter M [Kennewick, WA; Matson, Dean W [Kennewick, WA; Roberts, Gary L [West Richland, WA; Stewart, Donald C [Richland, WA; Tonkovich, Annalee Y [Pasco, WA; Zilka, Jennifer L [Pasco, WA; Schmitt, Stephen C [Dublin, OH; Werner, Timothy M [Columbus, OH

    2001-01-01

    The present invention is an active microchannel fluid processing unit and method of making, both relying on having (a) at least one inner thin sheet; (b) at least one outer thin sheet; (c) defining at least one first sub-assembly for performing at least one first unit operation by stacking a first of the at least one inner thin sheet in alternating contact with a first of the at least one outer thin sheet into a first stack and placing an end block on the at least one inner thin sheet, the at least one first sub-assembly having at least a first inlet and a first outlet; and (d) defining at least one second sub-assembly for performing at least one second unit operation either as a second flow path within the first stack or by stacking a second of the at least one inner thin sheet in alternating contact with second of the at least one outer thin sheet as a second stack, the at least one second sub-assembly having at least a second inlet and a second outlet.

  16. Modeling of the Hydroentanglement Process

    Directory of Open Access Journals (Sweden)

    Ping Xiang

    2006-11-01

    Full Text Available Mechanical performance of hydroentangled nonwovens is determined by the degree of the fiber entanglement, which depends on parameters of the fibers, fiberweb, forming surface, water jet and the process speed. This paper develops a computational fluid dynamics model of the hydroentanglement process. Extensive comparison with experimental data showed that the degree of fiber entanglement is linearly related to flow vorticity in the fiberweb, which is induced by impinging water jets. The fiberweb is modeled as a porous material of uniform porosity and the actual geometry of forming wires is accounted for in the model. Simulation results are compared with experimental data for a Perfojet® sleeve and four woven forming surfaces. Additionally, the model is used to predict the effect of fiberweb thickness on the degree of fiber entanglement for different forming surfaces.

  17. Process Models for Security Architectures

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2006-01-01

    Full Text Available This paper presents a model for an integrated security system, which can be implemented in any organization. It is based on security-specific standards and taxonomies such as ISO 7498-2 and the Common Criteria. The functionalities are derived from the classes proposed in the Common Criteria document. In the paper we present the process model for each functionality and also focus on the specific components.

  18. Multi-enzyme Process Modeling

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia

    The subject of this thesis is to develop a methodological framework that can systematically guide mathematical model building for better understanding of multi-enzyme processes. In this way, opportunities for process improvements can be identified by analyzing simulations of either existing...... are affected (in a positive or negative way) by the presence of the other enzymes and compounds in the media. In this thesis the concept of multi-enzyme in-pot term is adopted for processes that are carried out by the combination of enzymes in a single reactor and implemented at pilot or industrial scale...

  19. Accelerating Radio Astronomy Cross-Correlation with Graphics Processing Units

    CERN Document Server

    Clark, M A; Greenhill, L J

    2011-01-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from "Large-N" arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi architecture, sustaining up to 79% of the peak single precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared to ASIC and FPGA implementations have the potential to greatly shorten the cycle of correlator development and deployment, for case...
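Per frequency channel, the X-engine described here reduces to a dense correlation of all antenna pairs over an integration window. A minimal CPU-side NumPy sketch of that core computation (array sizes are illustrative, not taken from the paper):

```python
import numpy as np

def xengine(samples):
    """Correlate all antenna pairs over one integration window.

    samples: complex array of shape (n_antennas, n_samples), one row per
    input (a single frequency channel, as selected by an upstream F-engine).
    Returns the (n_antennas, n_antennas) Hermitian correlation matrix.
    """
    return samples @ samples.conj().T / samples.shape[1]

rng = np.random.default_rng(0)
n_ant, n_samp = 8, 4096
sig = rng.standard_normal((n_ant, n_samp)) + 1j * rng.standard_normal((n_ant, n_samp))
corr = xengine(sig)
# Hermitian by construction; diagonal holds per-antenna powers (~2 here,
# since real and imaginary parts each have unit variance)
print(np.allclose(corr, corr.conj().T))  # -> True
```

Each output entry is independent of the others, which is the data parallelism the GPU implementation exploits across baselines and channels.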

  20. Significantly reducing registration time in IGRT using graphics processing units

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Denis de Senneville, Baudouin; Tanderup, Kari

    2008-01-01

    Purpose/Objective For online IGRT, rapid image processing is needed. Fast parallel computations using graphics processing units (GPUs) have recently been made more accessible through general purpose programming interfaces. We present a GPU implementation of the Horn and Schunck method...... respiration phases in a free breathing volunteer and 41 anatomical landmark points in each image series. The registration method used is a multi-resolution GPU implementation of the 3D Horn and Schunck algorithm. It is based on the CUDA framework from Nvidia. Results On an Intel Core 2 CPU at 2.4GHz each...... registration took 30 minutes. On an Nvidia Geforce 8800GTX GPU in the same machine this registration took 37 seconds, making the GPU version 48.7 times faster. The nine image series of different respiration phases were registered to the same reference image (full inhale). Accuracy was evaluated on landmark...
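The Horn and Schunck method named in this record consists of per-pixel Jacobi-style updates, which is why it maps well onto a GPU. A serial 2D NumPy sketch under standard textbook assumptions (the record's multi-resolution 3D CUDA implementation is far more elaborate; the test image is synthetic):

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """2D Horn-Schunck optical flow between images I1 and I2."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(f):  # 4-neighbour average (periodic boundaries for simplicity)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        ubar, vbar = avg(u), avg(v)
        t = (Ix * ubar + Iy * vbar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = ubar - Ix * t      # every pixel updated independently:
        v = vbar - Iy * t      # the parallel-friendly structure
    return u, v

# synthetic check: a smooth blob shifted one pixel in +x
x, y = np.meshgrid(np.arange(64), np.arange(64))
I1 = np.exp(-((x - 30)**2 + (y - 32)**2) / 50.0)
I2 = np.exp(-((x - 31)**2 + (y - 32)**2) / 50.0)
u, v = horn_schunck(I1, I2, alpha=0.5, n_iter=200)
print(u[28:37, 25:36].mean() > 0)  # flow points in +x over the blob -> True
```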

  1. Fast free-form deformation using graphics processing units.

    Science.gov (United States)

    Modat, Marc; Ridgway, Gerard R; Taylor, Zeike A; Lehmann, Manja; Barnes, Josephine; Hawkes, David J; Fox, Nick C; Ourselin, Sébastien

    2010-06-01

    A large number of algorithms have been developed to perform non-rigid registration and it is a tool commonly used in medical image analysis. The free-form deformation algorithm is a well-established technique, but is extremely time consuming. In this paper we present a parallel-friendly formulation of the algorithm suitable for graphics processing unit execution. Using our approach we perform registration of T1-weighted MR images in less than 1 min and show the same level of accuracy as a classical serial implementation when performing segmentation propagation. This technology could be of significant utility in time-critical applications such as image-guided interventions, or in the processing of large data sets. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  2. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  3. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Krichinsky, A.M.

    1983-02-01

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to estimate compositions of materials in vessels involved in unit operations and chemical processes. This program has been implemented in a remotely operated nuclear fuel processing plant. NUMATH provides estimates of the steady-state composition of materials residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are cataloged in container-oriented files. The estimated compositions represent materials collected in applicable vessels - including consideration for materials previously acknowledged in these vessels. The program utilizes process measurements and simple performance models to estimate material holdup and distribution within unit operations. In simulated run-testing, NUMATH typically produced estimates within 5% of the measured inventories for uranium and within 8% of the measured inventories for thorium during steady-state process operation.

  4. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.

  5. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  6. Network Model Building (Process Mapping)

    OpenAIRE

    Blau, Gary; Yih, Yuehwern

    2004-01-01

    12 slides. Provider Notes: See Project Planning Video (Windows Media). Posted at the bottom are Gary Blau's slides. Before watching, please note that "process mapping" and "modeling" are mentioned in the video and notes. Here they are meant to refer to the NSCORT "project plan".

  7. Model United Nations comes to CERN

    CERN Document Server

    Anaïs Schaeffer

    2012-01-01

    From 20 to 22 January pupils from international schools in Switzerland, France and Turkey came to CERN for three days of "UN-type" conferences.   The MUN organisers, who are all pupils at the Lycée international in Ferney-Voltaire, worked tirelessly for weeks to make the event a real success. The members of the MUN/MFNU association at the Lycée international in Ferney-Voltaire spent several months preparing for their first "Model United Nations" (MUN),  a simulation of a UN session at which young "diplomats" take on the role of delegates representing different nations to discuss a given topic. And as their chosen topic was science, it was only natural that they should hold the event at CERN. For three days, from 20 to 22 January, no fewer than 340 pupils from 12 international schools* in Switzerland, France and Turkey came together to deliberate, consult and debate on the importance of scientific progress fo...

  8. Modeling of the reburning process

    Energy Technology Data Exchange (ETDEWEB)

    Rota, R.; Bonini, F.; Servida, A.; Morbidelli, M.; Carra, S. [Politecnico di Milano, Milano (Italy). Dip. di Chimica Fisica Applicata

    1997-07-01

    Reburning has become a popular method of abating NO{sub x} emission in power plants. Its effectiveness is strongly affected by the interaction between gas phase chemistry and combustion chamber fluid dynamics. Both the mixing of the reactant streams and the elementary reactions in the gas phase control the overall kinetics of the process. This work developed a model coupling a detailed kinetic mechanism to a simplified description of the fluid dynamics of the reburning chamber. The model was checked with reference to experimental data from the literature. Detailed kinetic modeling was found to be essential to describe the reburning process, since the fluid dynamics of the reactor have a strong influence on reactions within. 20 refs., 9 figs., 3 tabs.

  9. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for general phased-array radar on NVIDIA GPUs (Graphics Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it will be demonstrated that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.

  10. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU’s application-specific architecture, harnessing the GPU’s computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1,000×1,000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
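The core of LSA as described in this record is a truncated SVD of the term-document matrix. A small NumPy sketch with a made-up four-term, four-document matrix (the matrix and the two topic labels are illustrative assumptions, not from the record):

```python
import numpy as np

# toy term-document matrix: rows = terms, columns = documents
A = np.array([
    [2, 1, 0, 0],   # "gpu"
    [1, 2, 0, 0],   # "cuda"
    [0, 0, 2, 1],   # "protein"
    [0, 0, 1, 2],   # "enzyme"
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                       # keep the 2 largest singular values
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]        # rank-k approximation of A
doc_vecs = (np.diag(s[:k]) @ Vt[:k, :]).T   # documents in the latent space

# documents 0 and 1 share the "computing" topic, 2 and 3 the "biology" topic
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))  # -> True
```

The SVD is the step that dominates runtime and is offloaded to the GPU in the record's implementation.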

  11. Option pricing with COS method on Graphics Processing Units

    NARCIS (Netherlands)

    B. Zhang (Bo); C.W. Oosterlee (Cornelis)

    2009-01-01

    In this paper, acceleration on the GPU for option pricing by the COS method is demonstrated. In particular, both European and Bermudan options will be discussed in detail. For Bermudan options, we consider both the Black-Scholes model and Levy processes of infinite activity. Moreover, th

  12. Option pricing with COS method on Graphics Processing Units

    NARCIS (Netherlands)

    Zhang, B.; Oosterlee, C.W.

    2009-01-01

    In this paper, acceleration on the GPU for option pricing by the COS method is demonstrated. In particular, both European and Bermudan options will be discussed in detail. For Bermudan options, we consider both the Black-Scholes model and Levy processes of infinite activity. Moreover, the influence
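The essence of the COS method referenced in these two records is recovering a density (and hence discounted expectations) from a characteristic function via a Fourier-cosine expansion. A minimal sketch recovering the standard normal density; the pricing application in the records adds payoff coefficients on top of this step, and the truncation range below is an illustrative choice:

```python
import numpy as np

def cos_density(phi, x, a, b, N=64):
    """Recover a density on [a, b] from its characteristic function phi
    via the COS (Fourier-cosine) expansion."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = 2.0 / (b - a) * (phi(u) * np.exp(-1j * u * a)).real
    F[0] *= 0.5                            # first term of the series is halved
    return F @ np.cos(np.outer(u, x - a))  # sum the cosine series at each x

# standard normal: phi(u) = exp(-u^2/2); truncation range [-10, 10]
phi = lambda u: np.exp(-0.5 * u**2)
x = np.linspace(-3, 3, 7)
approx = cos_density(phi, x, -10.0, 10.0)
exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)) < 1e-8)  # -> True
```

The exponential convergence in N visible here is what makes the method attractive, and each coefficient and each evaluation point is independent, which is the parallelism the GPU version exploits.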

  13. Four-component united-atom model of bitumen

    CERN Document Server

    Hansen, Jesper S; Nielsen, Erik; Dyre, Jeppe C; Schrøder, Thomas B

    2013-01-01

    We propose a four-component molecular model of bitumen. The model includes realistic chemical constituents and introduces a coarse-graining level that suppresses the highest-frequency modes. Molecular dynamics simulations of the model are carried out using graphics-processing-unit-based software over time spans on the order of microseconds, which enables the study of the slow relaxation processes characterizing bitumen. This paper focuses on the high-temperature dynamics as expressed through the mean-square displacement, the stress autocorrelation function, and rotational relaxation. The diffusivity of the individual molecules changes little as a function of temperature and reveals distinct dynamical time scales as a result of the different constituents in the system. Different time scales are also observed for the rotational relaxation. The stress autocorrelation function features a slow non-exponential decay for all temperatures studied. From the stress autocorrelation function, the shear viscosity and shear ...

  14. Animal models and conserved processes

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-09-01

    Full Text Available Abstract Background The concept of conserved processes presents unique opportunities for using nonhuman animal models in biomedical research. However, the concept must be examined in the context that humans and nonhuman animals are evolved, complex, adaptive systems. Given that nonhuman animals are examples of living systems that are differently complex from humans, what does the existence of a conserved gene or process imply for inter-species extrapolation? Methods We surveyed the literature including philosophy of science, biological complexity, conserved processes, evolutionary biology, comparative medicine, anti-neoplastic agents, inhalational anesthetics, and drug development journals in order to determine the value of nonhuman animal models when studying conserved processes. Results Evolution through natural selection has employed components and processes not only to produce the same outcomes among species but also to generate different functions and traits. Many genes and processes are conserved, but new combinations of these processes or different regulation of the genes involved in these processes have resulted in unique organisms. Further, there is a hierarchy of organization in complex living systems. At some levels, the components are simple systems that can be analyzed by mathematics or the physical sciences, while at other levels the system cannot be fully analyzed by reducing it to a physical system. The study of complex living systems must alternate between focusing on the parts and examining the intact whole organism while taking into account the connections between the two. Systems biology aims for this holism. We examined the actions of inhalational anesthetic agents and anti-neoplastic agents in order to address what the characteristics of complex living systems imply for inter-species extrapolation of traits and responses related to conserved processes. Conclusion We conclude that even the presence of conserved processes is

  15. Advanced oxidation processes: overall models

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, M. [Univ. de los Andes, Escuela Basica de Ingenieria, La Hechicera, Merida (Venezuela); Curco, D.; Addardak, A.; Gimenez, J.; Esplugas, S. [Dept. de Ingenieria Quimica. Univ. de Barcelona, Barcelona (Spain)

    2003-07-01

    Modelling AOPs implies considering all the steps included in the process, that is, the mass-transfer, kinetic (reaction) and light-absorption steps. In this way, recent works develop models which relate the global reaction rate to catalyst concentration and radiation absorption. However, the application of such models requires knowing the controlling step of the overall process. In this paper, a simple method is explained which allows one to determine the controlling step. Thus, it is assumed that the reactor is divided into two hypothetical zones (dark and illuminated), and according to the experimental results, obtained by varying only the reaction volume, it can be decided whether reaction occurs only in the illuminated zone or in the whole reactor, including the dark zone. The photocatalytic degradation of phenol, using titania Degussa P-25 as catalyst, is studied as the model reaction. The preliminary results obtained are presented here, showing that, in this case, reaction seems to occur only in the illuminated zone of the photoreactor. A model is developed to explain this behaviour. (orig.)

  16. Model for amorphous aggregation processes

    Science.gov (United States)

    Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz

    2009-11-01

    The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data of three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatin-like protein, and α-lactalbumin. The model is very robust and describes amorphous experimental data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.

  17. Face Processing: Models For Recognition

    Science.gov (United States)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  18. Switching Processes in Queueing Models

    CERN Document Server

    Anisimov, Vladimir V

    2008-01-01

    Switching processes, invented by the author in 1977, is the main tool used in the investigation of traffic problems from automotive to telecommunications. The title provides a new approach to low traffic problems based on the analysis of flows of rare events and queuing models. In the case of fast switching, averaging principle and diffusion approximation results are proved and applied to the investigation of transient phenomena for wide classes of overloading queuing networks.  The book is devoted to developing the asymptotic theory for the class of switching queuing models which covers  mode

  19. GENETIC ALGORITHM ON GENERAL PURPOSE GRAPHICS PROCESSING UNIT: PARALLELISM REVIEW

    Directory of Open Access Journals (Sweden)

    A.J. Umbarkar

    2013-01-01

    Full Text Available Genetic Algorithm (GA) is an effective and robust method for solving many optimization problems. However, it may take many runs (iterations) and much time to reach the optimal solution. The execution time to find the optimal solution also depends upon the niching technique applied to the evolving population. This paper surveys how various authors, researchers and scientists have implemented GA on GPGPUs (general-purpose graphics processing units), with and without parallelism. Many problems have been solved on GPGPUs using GA. GA is easy to parallelize because of its SIMD nature and can therefore be implemented well on a GPGPU. Thus, speedup can definitely be achieved if the bottlenecks in GAs are identified and implemented effectively on a GPGPU. The paper reviews various applications solved using GAs on GPGPUs, with future scope in the area of optimization.
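A minimal binary GA illustrating the structure the review discusses: the fitness evaluation touches every individual independently, which is the SIMD-friendly step typically offloaded to a GPGPU. Everything here (the one-max objective, population size, mutation rate) is an illustrative assumption:

```python
import numpy as np

def genetic_algorithm(fitness, n_pop=200, n_genes=32, n_gen=100, p_mut=0.01, seed=0):
    """Minimal binary GA. The fitness call evaluates the whole population
    at once -- exactly the data-parallel step a GPGPU would accelerate."""
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(n_pop, n_genes))
    for _ in range(n_gen):
        fit = fitness(pop)                      # one vectorized evaluation
        # binary tournament selection
        a, b = rng.integers(0, n_pop, (2, n_pop))
        parents = np.where((fit[a] > fit[b])[:, None], pop[a], pop[b])
        # single-point crossover with the neighbouring parent
        cut = rng.integers(1, n_genes, n_pop)
        mask = np.arange(n_genes)[None, :] < cut[:, None]
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # bit-flip mutation
        flips = rng.random((n_pop, n_genes)) < p_mut
        pop = np.where(flips, 1 - children, children)
    fit = fitness(pop)
    return pop[np.argmax(fit)], fit.max()

# toy objective: maximize the number of ones ("one-max")
best, best_fit = genetic_algorithm(lambda p: p.sum(axis=1))
print(best_fit)  # the optimum is 32; the GA should reach or come close to it
```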

  20. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    CERN Document Server

    Xu, Ji; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool to study the behavior of macromolecular systems. But many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications. Many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to MD simulation with the free software GROMACS on a single CPU core, our codes achieve about 10 times speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the results observed perfectly agree with computations on the CPU. Therefore, our single-GPU codes already provide an inexpensive alternative for macromolecular simulations on traditional CPU clusters and they can also be used as a basis to develop parallel GPU programs to further spee...
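The pairwise force evaluation that dominates MD cost is embarrassingly parallel, which is why it maps well onto GPU threads. A CPU-side NumPy sketch of all-pairs Lennard-Jones forces (the potential and parameters are generic illustrations, not the paper's force field):

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """All-pairs Lennard-Jones forces, vectorized over pairs -- the same
    independent per-pair work a GPU thread block would perform."""
    r = pos[:, None, :] - pos[None, :, :]    # pairwise displacement vectors
    d2 = np.sum(r * r, axis=-1)
    np.fill_diagonal(d2, np.inf)             # no self-interaction
    inv6 = (sigma**2 / d2) ** 3
    # force magnitude / distance: F_ij = 24*eps*(2*r^-12 - r^-6)/r^2 * r_vec
    fmag = 24.0 * eps * (2.0 * inv6**2 - inv6) / d2
    return np.sum(fmag[:, :, None] * r, axis=1)

# two particles separated by 1.5*sigma sit in the attractive regime
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
F = lj_forces(pos)
print(np.allclose(F[0], -F[1]))  # Newton's third law -> True
```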

  1. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)
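The uniform distribution of spin-vector dot products noted in this record is easy to cross-check with a quick Monte Carlo for isotropically distributed, uncorrelated spins, a much weaker statement than the paper's post-Newtonian result but a useful null baseline:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_unit_vectors(n):
    """Isotropically distributed directions on the unit sphere."""
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n = 200_000
dots = np.sum(random_unit_vectors(n) * random_unit_vectors(n), axis=1)

# for isotropic directions the dot product is uniform on [-1, 1]:
# mean 0, variance 1/3
print(abs(dots.mean()) < 0.01, abs(dots.var() - 1/3) < 0.01)  # -> True True
```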

  2. PO*WW*ER mobile treatment unit process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous components into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  3. Iterative Methods for MPC on Graphical Processing Units

    DEFF Research Database (Denmark)

    2012-01-01

    The high floating-point performance and memory bandwidth of Graphical Processing Units (GPUs) make them ideal for a large number of computations which often arise in scientific computing, such as matrix operations. GPUs achieve this performance by utilizing massive parallelism, which requires...... on their applicability for GPUs. We examine published techniques for iterative methods in interior point methods (IPMs) by applying them to simple test cases, such as a system of masses connected by springs. Iterative methods allow us to deal with the ill-conditioning occurring in the later iterations of the IPM as well...... as to avoid the use of dense matrices, which may be too large for the limited memory capacity of current graphics cards....

  4. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Tamascelli, Dario; Dambrosio, Francesco Saverio [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano (Italy); Conte, Riccardo [Department of Chemistry and Cherry L. Emerson Center for Scientific Computation, Emory University, Atlanta, Georgia 30322 (United States); Ceotto, Michele, E-mail: michele.ceotto@unimi.it [Dipartimento di Chimica, Università degli Studi di Milano, via Golgi 19, 20133 Milano (Italy)

    2014-05-07

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20), respectively, versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W) and the critical issues related to the GPU implementation are discussed. The resulting reduction in computational time and power consumption is significant and semiclassical GPU calculations are shown to be environment friendly.

  5. Polymer Field-Theory Simulations on Graphics Processing Units

    CERN Document Server

    Delaney, Kris T

    2012-01-01

    We report the first CUDA graphics-processing-unit (GPU) implementation of the polymer field-theoretic simulation framework for determining fully fluctuating expectation values of equilibrium properties for periodic and select aperiodic polymer systems. Our implementation is suitable both for self-consistent field theory (mean-field) solutions of the field equations, and for fully fluctuating simulations using the complex Langevin approach. Running on NVIDIA Tesla T20 series GPUs, we find double-precision speedups of up to 30x compared to single-core serial calculations on a recent reference CPU, while single-precision calculations proceed up to 60x faster than those on the single CPU core. Due to intensive communications overhead, an MPI implementation running on 64 CPU cores remains two times slower than a single GPU.

  6. Graphics Processing Units and High-Dimensional Optimization.

    Science.gov (United States)

    Zhou, Hua; Lange, Kenneth; Suchard, Marc A

    2010-08-01

    This paper discusses the potential of graphics processing units (GPUs) in high-dimensional optimization problems. A single GPU card with hundreds of arithmetic cores can be inserted in a personal computer and dramatically accelerates many statistical algorithms. To exploit these devices fully, optimization algorithms should reduce to multiple parallel tasks, each accessing a limited amount of data. These criteria favor EM and MM algorithms that separate parameters and data. To a lesser extent block relaxation and coordinate descent and ascent also qualify. We demonstrate the utility of GPUs in nonnegative matrix factorization, PET image reconstruction, and multidimensional scaling. Speedups of 100 fold can easily be attained. Over the next decade, GPUs will fundamentally alter the landscape of computational statistics. It is time for more statisticians to get on-board.
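    Algorithms that "separate parameters and data", such as the MM updates behind nonnegative matrix factorization, reduce to elementwise array operations that parallelize naturally. A minimal CPU sketch of the classical Lee-Seung multiplicative updates (sizes and names are illustrative; a GPU version would launch the same array operations as kernels):

```python
import numpy as np

def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0.
    Each update is an elementwise array operation -- the kind of 'separated'
    computation that maps cleanly onto thousands of GPU cores."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update all of H at once
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update all of W at once
    return W, H

V = np.random.default_rng(1).random((20, 15))   # toy nonnegative data
W, H = nmf(V, rank=5)
err = float(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```

The updates keep every entry nonnegative by construction and monotonically decrease the reconstruction error, which is why they suit the massively parallel, synchronization-light GPU model.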

  7. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity, O(n²). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GEFORCE GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.
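    The O(n²) cost comes from every document-agent scanning every other agent at each step. A toy sketch of one such step (the similarity-driven attract/repel rule is illustrative, not the paper's exact force model):

```python
import numpy as np

def flock_step(pos, vel, sim, radius=1.0, dt=0.1):
    """One naive flocking update. Every agent scans every other agent,
    which is exactly the O(n^2) cost noted in the abstract."""
    acc = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):              # n agents ...
        for j in range(n):          # ... each inspecting all n others
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = float(np.linalg.norm(d))
            if 0.0 < dist < radius:
                # similar documents attract, dissimilar ones repel
                acc[i] += (sim[i, j] - 0.5) * d / dist
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

rng = np.random.default_rng(0)
n_docs = 30
pos = rng.random((n_docs, 2))       # document positions in a 2-D arena
vel = np.zeros((n_docs, 2))
sim = rng.random((n_docs, n_docs))  # pairwise document similarities
pos, vel = flock_step(pos, vel, sim)
```

On a GPU, the inner pairwise loop is what gets distributed: one thread per agent (or per pair), which is where the reported 36-60x gains come from.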

  8. Implementing wide baseline matching algorithms on a graphics processing unit.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.

    2007-10-01

    Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphical processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference of Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.
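    The difference-of-Gaussian extractor is a band-pass filter: subtract two blurs of the same image at different scales and look for extrema. A numpy-only sketch (the 3σ kernel truncation and the scale ratio 1.6 are common choices assumed here, not taken from the paper):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur; the per-pixel convolutions are the
    independent work items a GPU parallelizes."""
    r = int(3 * sigma)
    xs = np.arange(-r, r + 1)
    k = np.exp(-xs**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, out)

def difference_of_gaussians(img, sigma=1.0, k=1.6):
    """Band-pass response: strong at blob-like features near scale sigma."""
    return gaussian_blur(img, sigma) - gaussian_blur(img, k * sigma)

img = np.zeros((64, 64))
img[32, 32] = 1.0                   # a single bright point feature
dog = difference_of_gaussians(img)
peak = np.unravel_index(np.argmax(dog), dog.shape)
```

Because each output pixel depends only on a small neighbourhood, the filter maps directly onto one-thread-per-pixel CUDA kernels, which is what makes the thirteen-fold speedup attainable.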

  9. Combustion Process Modelling and Control

    Directory of Open Access Journals (Sweden)

    Vladimír Maduda

    2007-10-01

    Full Text Available This paper deals with the realization of a combustion control system on programmable logic controllers. The control system design is based on an analysis of the current state of combustion control systems in technological devices of the raw-material processing area, and is composed of two subsystems. The first subsystem is a software system for processing measured data and data from simulation of the combustion mathematical model; its outputs are parameters for setting the controller algorithms. The second subsystem consists of programme modules, each implementing a specific control algorithm, for example proportional regulation, programmed proportional regulation, or proportional regulation with correction on the oxygen in the waste gas. According to the specific combustion control requirements, a concrete control system can be built up from these programme modules. The programme modules were programmed in Automation Studio, which is used for developing, debugging and testing software for B&R controllers.
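    For illustration, the simplest such programme module, proportional regulation, can be sketched as a single scan function plus a toy plant (all gains and the plant model are invented for the example):

```python
def proportional_step(setpoint, measured, kp, u_min=0.0, u_max=100.0):
    """One scan of a proportional regulator with actuator saturation --
    the kind of algorithm a single programme module encapsulates."""
    u = kp * (setpoint - measured)
    return max(u_min, min(u_max, u))

# Toy closed loop: a first-order "furnace" driven by fuel valve position u.
temp, setpoint = 20.0, 800.0
for _ in range(200):
    u = proportional_step(setpoint, temp, kp=0.5)
    temp += 0.05 * (10.0 * u - temp)    # simplistic plant response
```

Note the steady-state offset: the loop settles below the setpoint, which is one reason real systems layer corrections (such as on the oxygen content in the waste gas) on top of plain proportional regulation.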

  10. Lipid Processing Technology: Building a Multilevel Modeling Network

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel; Mustaffa, Azizul Azri; Hukkerikar, Amol

    of these unit operations with respect to performance parameters such as minimum total cost, product yield improvement, operability etc., and process intensification for the retrofit of existing biofuel plants. In the fourth level the information and models developed are used as building blocks...... in the upcoming years major challenges in terms of design and development of better products and more sustainable processes. Although the oleo chemical industry is mature and based on well established processes, the complex systems that lipid compounds form, the lack of accurate predictive models...... for their physical properties and unit operation models for their processing have limited computer-aided methods and tools for process synthesis, modeling and simulation to be widely used for design, analysis, and optimization of these processes. In consequence, the aim of this work is to present the development...

  11. The ATLAS Fast Tracker Processing Units - track finding and fitting

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00384270; The ATLAS collaboration; Alison, John; Ancu, Lucian Stefan; Andreani, Alessandro; Annovi, Alberto; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Bogdan, Mircea Arghir; Bryant, Patrick; Calabro, Domenico; Citraro, Saverio; Crescioli, Francesco; Dell'Orso, Mauro; Donati, Simone; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Greco, Virginia; Horyn, Lesya Anna; Iovene, Alessandro; Kalaitzidis, Panagiotis; Kim, Young-Kee; Kimura, Naoki; Kordas, Kostantinos; Kubota, Takashi; Lanza, Agostino; Liberali, Valentino; Luciano, Pierluigi; Magnin, Betty; Sakellariou, Andreas; Sampsonidis, Dimitrios; Saxon, James; Shojaii, Seyed Ruhollah; Sotiropoulou, Calliope Louisa; Stabile, Alberto; Swiatlowski, Maximilian; Volpi, Guido; Zou, Rui; Shochet, Mel

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the time the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at the full first level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second is the Second Stage board. The associative memories perform the pattern matching, looking for correlations within the incoming data compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application specific integrated circuits, called associative memory chips. The auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  12. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the time the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at the full first level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the first comprises the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second is the Second Stage board. The associative memories perform the pattern matching, looking for correlations within the incoming data compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application specific integrated circuits, called associative memory chips. The auxiliary card prepares the input and rejects bad track candidates obtained from the Associative Memory Board using the full precision a...

  13. Beowulf Distributed Processing and the United States Geological Survey

    Science.gov (United States)

    Maddox, Brian G.

    2002-01-01

    Introduction In recent years, the United States Geological Survey's (USGS) National Mapping Discipline (NMD) has expanded its scientific and research activities. Work is being conducted in areas such as emergency response research, scientific visualization, urban prediction, and other simulation activities. Custom-produced digital data have become essential for these types of activities. High-resolution, remotely sensed datasets are also seeing increased use. Unfortunately, the NMD is also finding that it lacks the resources required to perform some of these activities. Many of these projects require large amounts of computer processing resources. Complex urban-prediction simulations, for example, involve large amounts of processor-intensive calculations on large amounts of input data. This project was undertaken to learn and understand the concepts of distributed processing. Experience was needed in developing these types of applications. The idea was that this type of technology could significantly aid the needs of the NMD scientific and research programs. Porting a numerically intensive application currently being used by an NMD science program to run in a distributed fashion would demonstrate the usefulness of this technology. There are several benefits that this type of technology can bring to the USGS's research programs. Projects can be performed that were previously impossible due to a lack of computing resources. Other projects can be performed on a larger scale than previously possible. For example, distributed processing can enable urban dynamics research to perform simulations on larger areas without making huge sacrifices in resolution. The processing can also be done in a more reasonable amount of time than with traditional single-threaded methods (a scaled version of Chester County, Pennsylvania, took about fifty days to finish its first calibration phase with a single-threaded program). 
This paper has several goals regarding distributed processing

  14. Density functional theory calculation on many-cores hybrid central processing unit-graphic processing unit architectures.

    Science.gov (United States)

    Genovese, Luigi; Ospici, Matthieu; Deutsch, Thierry; Méhaut, Jean-François; Neelov, Alexey; Goedecker, Stefan

    2009-07-21

    We present the implementation of a full electronic structure calculation code on a hybrid parallel architecture with graphic processing units (GPUs). This implementation is performed on a free software code based on Daubechies wavelets. Such code shows very good performances, systematic convergence properties, and an excellent efficiency on parallel computers. Our GPU-based acceleration fully preserves all these properties. In particular, the code is able to run on many cores which may or may not have a GPU associated, and thus on parallel and massive parallel hybrid machines. With double precision calculations, we may achieve considerable speedup, between a factor of 20 for some operations and a factor of 6 for the whole density functional theory code.

  15. Conceptual Model of Quantities, Units, Dimensions, and Values

    Science.gov (United States)

    Rouquette, Nicolas F.; DeKoenig, Hans-Peter; Burkhart, Roger; Espinoza, Huascar

    2011-01-01

    JPL collaborated with experts from industry and other organizations to develop a conceptual model of quantities, units, dimensions, and values based on the current work of the ISO 80000 committee revising the International System of Units & Quantities based on the International Vocabulary of Metrology (VIM). By providing support for ISO 80000 in SysML via the International Vocabulary of Metrology (VIM), this conceptual model provides, for the first time, a standard-based approach for addressing issues of unit coherence and dimensional analysis into the practice of systems engineering with SysML-based tools. This conceptual model provides support for two kinds of analyses specified in the International Vocabulary of Metrology (VIM): coherence of units as well as of systems of units, and dimension analysis of systems of quantities. To provide a solid and stable foundation, the model for defining quantities, units, dimensions, and values in SysML is explicitly based on the concepts defined in VIM. At the same time, the model library is designed in such a way that extensions to the ISQ (International System of Quantities) and SI Units (Système International d'Unités) can be represented, as well as any alternative systems of quantities and units. The model library can be used to support SysML user models in various ways. A simple approach is to define and document libraries of reusable systems of units and quantities for reuse across multiple projects, and to link units and quantity kinds from these libraries to Unit and QuantityKind stereotypes defined in SysML user models.
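    The dimension-analysis part of such a model can be made concrete with a small sketch: represent each dimension as a vector of integer exponents over the ISQ base quantities, so coherence checks reduce to comparing exponent tuples (the class and names below are illustrative, not the SysML library's API):

```python
class Dim:
    """A dimension as integer exponents over the ISQ base quantities
    (length, mass, time, current, temperature, amount, luminous intensity)."""
    BASE = ("L", "M", "T", "I", "Th", "N", "J")

    def __init__(self, **exp):
        self.e = tuple(exp.get(b, 0) for b in self.BASE)

    def _make(self, exps):
        d = Dim()
        d.e = tuple(exps)
        return d

    def __mul__(self, other):   # multiplying quantities adds exponents
        return self._make(a + b for a, b in zip(self.e, other.e))

    def __truediv__(self, other):  # dividing quantities subtracts exponents
        return self._make(a - b for a, b in zip(self.e, other.e))

    def __eq__(self, other):    # dimensional coherence check
        return self.e == other.e

length, mass, time = Dim(L=1), Dim(M=1), Dim(T=1)
force = mass * length / (time * time)        # dimension of the newton
energy = force * length                      # dimension of the joule
power = energy / time                        # dimension of the watt
```

An expression is dimensionally coherent exactly when both sides reduce to the same exponent tuple, which is the check a SysML tool can automate.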

  16. Graphics Processing Unit-Based Bioheat Simulation to Facilitate Rapid Decision Making Associated with Cryosurgery Training.

    Science.gov (United States)

    Keelan, Robert; Zhang, Hong; Shimada, Kenji; Rabin, Yoed

    2016-04-01

    This study focuses on the implementation of an efficient numerical technique for cryosurgery simulations on a graphics processing unit as an alternative means to accelerate runtime. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a developmental model. The ability to perform rapid simulations of various test cases is critical to facilitate sound decision making associated with medical training. Consistent with clinical practice, the training tool aims at correlating the frozen region contour and the corresponding temperature field with the target region shape. The current study focuses on the feasibility of graphics processing unit-based computation using C++ accelerated massive parallelism, as one possible implementation. Benchmark results on a variety of computation platforms display between 3-fold acceleration (laptop) and 13-fold acceleration (gaming computer) of cryosurgery simulation, in comparison with the more common implementation on a multicore central processing unit. While the general concept of graphics processing unit-based simulations is not new, its application to phase-change problems, combined with the unique requirements for cryosurgery optimization, represents the core contribution of the current study.
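    The bioheat problem involves phase change and 3-D geometry, but the parallel structure that makes GPU acceleration attractive shows up already in plain 1-D heat diffusion: every node's update reads only its immediate neighbours. A minimal explicit finite-difference sketch (all material values are illustrative):

```python
import numpy as np

def diffuse_step(T, alpha, dx, dt):
    """One explicit finite-difference step of 1-D heat diffusion.
    Each interior node depends only on its two neighbours, so all nodes
    can be updated simultaneously -- the structure a GPU kernel exploits.
    Stability requires alpha * dt / dx**2 <= 0.5."""
    r = alpha * dt / dx**2
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

T = np.full(101, 37.0)        # tissue at body temperature (degrees C)
T[50] = -150.0                # cryoprobe node (illustrative boundary value)
for _ in range(100):
    T = diffuse_step(T, alpha=1e-7, dx=1e-3, dt=4.0)
    T[50] = -150.0            # hold the probe temperature fixed
```

The frozen-region contour the training tool tracks would then be an isotherm of this field; the real simulation additionally handles latent heat at the phase front.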

  17. Evidence of a sensory processing unit in the mammalian macula

    Science.gov (United States)

    Chimento, T. C.; Ross, M. D.

    1996-01-01

    We cut serial sections through the medial part of the rat vestibular macula for transmission electron microscopic (TEM) examination, computer-assisted 3-D reconstruction, and compartmental modeling. The ultrastructural research showed that many primary vestibular neurons have an unmyelinated segment, often branched, that extends between the heminode (putative site of the spike initiation zone) and the expanded terminal(s) (calyx, calyces). These segments, termed the neuron branches, and the calyces frequently have spine-like processes of various dimensions with bouton endings that morphologically are afferent, efferent, or reciprocal to other macular neural elements. The major questions posed by this study were whether small details of morphology, such as the size and location of neuronal processes or synapses, could influence the output of a vestibular afferent, and whether a knowledge of morphological details could guide the selection of values for simulation parameters. The conclusions from our simulations are (1) values of 5.0 kΩ·cm² for membrane resistivity and 1.0 nS for synaptic conductance yield simulations that best match published physiological results; (2) process morphology has little effect on orthodromic spread of depolarization from the head (bouton) to the spike initiation zone (SIZ); (3) process morphology has no effect on antidromic spread of depolarization to the process head; (4) synapses do not sum linearly; (5) synapses are electrically close to the SIZ; and (6) all whole-cell simulations should be run with an active SIZ.

  18. Optimal opportunistic maintenance model of multi-unit systems

    Institute of Scientific and Technical Information of China (English)

    Zhijun Cheng; Zheng Yang; Bo Guo

    2013-01-01

    An opportunistic maintenance model is presented for a continuously deteriorating series system with economical dependence. The system consists of two kinds of units, which are respectively subjected to the deterioration failure described by a Gamma process and the random failure described by a Poisson process. A two-level opportunistic policy defined by three decision parameters is proposed to coordinate the different maintenance actions and minimize the long-run maintenance cost rate of the system. A computable expression of the average cost rate is established by using the renewal property of the stochastic process of the maintained system state. The optimal values of the three decision parameters are derived by an iteration approach based on the characteristics of the Gamma process. The behavior of the proposed policy is illustrated through a numerical experiment. A comparative study with the widely used corrective maintenance policy demonstrates the advantage of the proposed opportunistic maintenance method in significantly reducing the maintenance cost. Simultaneously, the applicable area of this opportunistic model is discussed by a sensitivity analysis of the set-up cost and random failure rate.
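    The two failure mechanisms in the model can be mimicked with a small simulation: stationary Gamma increments drive gradual deterioration toward a failure threshold, while an independent exponential clock represents the Poisson random failures (all parameter values are invented for the sketch):

```python
import numpy as np

def simulate_lifetime(threshold=10.0, shape=0.5, scale=1.0, dt=1.0,
                      poisson_rate=0.01, seed=0):
    """One unit's lifetime under the two mechanisms in the abstract:
    Gamma-process deterioration toward a failure threshold, raced against
    an exponential (Poisson) random-failure clock."""
    rng = np.random.default_rng(seed)
    random_failure = rng.exponential(1.0 / poisson_rate)  # first Poisson event
    level, t = 0.0, 0.0
    while level < threshold:
        t += dt
        level += rng.gamma(shape * dt, scale)  # stationary, independent increments
        if t >= random_failure:
            return random_failure              # random failure struck first
    return t                                   # deterioration crossed threshold

lifetimes = [simulate_lifetime(seed=s) for s in range(200)]
mean_life = float(np.mean(lifetimes))
```

An opportunistic policy would then be evaluated by layering its decision thresholds on top of this simulation and averaging cost per unit time over many renewal cycles, which is the quantity the paper derives analytically.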

  19. Parallel computing for simultaneous iterative tomographic imaging by graphics processing units

    Science.gov (United States)

    Bello-Maldonado, Pedro D.; López, Ricardo; Rogers, Colleen; Jin, Yuanwei; Lu, Enyue

    2016-05-01

    In this paper, we address the problem of accelerating inversion algorithms for nonlinear acoustic tomographic imaging by parallel computing on graphics processing units (GPUs). Nonlinear inversion algorithms for tomographic imaging often rely on iterative algorithms for solving an inverse problem, thus computationally intensive. We study the simultaneous iterative reconstruction technique (SIRT) for the multiple-input-multiple-output (MIMO) tomography algorithm which enables parallel computations of the grid points as well as the parallel execution of multiple source excitation. Using graphics processing units (GPUs) and the Compute Unified Device Architecture (CUDA) programming model an overall improvement of 26.33x was achieved when combining both approaches compared with sequential algorithms. Furthermore we propose an adaptive iterative relaxation factor and the use of non-uniform weights to improve the overall convergence of the algorithm. Using these techniques, fast computations can be performed in parallel without the loss of image quality during the reconstruction process.
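    The SIRT update can be written in a few lines: correct the current image by the back-projected, normalized residual. Each grid-point update is independent, which is what maps onto GPU threads. A dense-matrix sketch (the system matrix is random here purely for illustration, and the row/column-sum normalization is one common choice):

```python
import numpy as np

def sirt(A, b, iters=2000, relax=1.0):
    """SIRT-style iteration x <- x + relax * C * A^T (R * (b - A x)),
    with R and C the inverse row and column sums of A. Every component
    of x updates independently -- the GPU-friendly structure."""
    R = 1.0 / np.maximum(np.abs(A).sum(axis=1), 1e-12)  # row normalisation
    C = 1.0 / np.maximum(np.abs(A).sum(axis=0), 1e-12)  # column normalisation
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + relax * C * (A.T @ (R * (b - A @ x)))
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 20))        # stand-in for a tomography system matrix
x_true = rng.random(20)
b = A @ x_true                  # consistent, noise-free "projections"
x = sirt(A, b)
resid = float(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The adaptive relaxation factor the paper proposes would replace the fixed `relax` above with an iteration-dependent value to speed convergence.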

  20. Logistic Regression Model on Antenna Control Unit Autotracking Mode

    Science.gov (United States)

    2015-10-20

    412TW-PA-15240. Logistic Regression Model on Antenna Control Unit Autotracking Mode. Daniel T. Laird, Air Force Test Center, Edwards AFB, CA, Oct 15. ...alternative-hypothesis. This paper will present an Antenna Auto-tracking model using Logistic Regression modeling. This paper presents an example of
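    Logistic regression relates a binary outcome (here, whether the autotrack loop holds lock) to a continuous predictor through the log-odds. A generic gradient-descent sketch on invented data (none of it is the report's data):

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-descent logistic regression: p = sigmoid(X @ w + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Invented data: signal-to-noise ratio vs. whether the autotrack loop held lock.
rng = np.random.default_rng(0)
snr = np.concatenate([rng.normal(5.0, 1.0, 100), rng.normal(10.0, 1.0, 100)])
lock = np.concatenate([np.zeros(100), np.ones(100)])
z = (snr - snr.mean()) / snr.std()          # standardise the predictor
w, b = fit_logistic(z[:, None], lock)
p = 1.0 / (1.0 + np.exp(-(z * w[0] + b)))
accuracy = float(np.mean((p > 0.5) == lock))
```

The fitted probabilities support the kind of hypothesis test the report describes: compare predicted lock probability against a threshold and test the alternative hypothesis on held-out trials.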

  1. Mathematical modeling of biological processes

    CERN Document Server

    Friedman, Avner

    2014-01-01

    This book on mathematical modeling of biological processes includes a wide selection of biological topics that demonstrate the power of mathematics and computational codes in setting up biological processes with a rigorous and predictive framework. Topics include: enzyme dynamics, spread of disease, harvesting bacteria, competition among live species, neuronal oscillations, transport of neurofilaments in axon, cancer and cancer therapy, and granulomas. Complete with a description of the biological background and biological question that requires the use of mathematics, this book is developed for graduate students and advanced undergraduate students with only basic knowledge of ordinary differential equations and partial differential equations; background in biology is not required. Students will gain knowledge on how to program with MATLAB without previous programming experience and how to use codes in order to test biological hypothesis.

  2. Thermochemical Process Development Unit: Researching Fuels from Biomass, Bioenergy Technologies (Fact Sheet)

    Energy Technology Data Exchange (ETDEWEB)

    2009-01-01

    The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a unique facility dedicated to researching thermochemical processes to produce fuels from biomass.

  3. Mapping past, present, and future climatic suitability for invasive Aedes aegypti and Aedes albopictus in the United States: a process-based modeling approach using CMIP5 downscaled climate scenarios

    Science.gov (United States)

    Donnelly, M. A. P.; Marcantonio, M.; Melton, F. S.; Barker, C. M.

    2016-12-01

    The ongoing spread of the mosquitoes, Aedes aegypti and Aedes albopictus, in the continental United States leaves new areas at risk for local transmission of dengue, chikungunya, and Zika viruses. All three viruses have caused major disease outbreaks in the Americas with infected travelers returning regularly to the U.S. The expanding range of these mosquitoes raises questions about whether recent spread has been enabled by climate change or other anthropogenic influences. In this analysis, we used downscaled climate scenarios from the NASA Earth Exchange Global Daily Downscaled Projections (NEX GDDP) dataset to model Ae. aegypti and Ae. albopictus population growth rates across the United States. We used a stage-structured matrix population model to understand past and present climatic suitability for these vectors, and to project future suitability under CMIP5 climate change scenarios. Our results indicate that much of the southern U.S. is suitable for both Ae. aegypti and Ae. albopictus year-round. In addition, a large proportion of the U.S. is seasonally suitable for mosquito population growth, creating the potential for periodic incursions into new areas. Changes in climatic suitability in recent decades for Ae. aegypti and Ae. albopictus have occurred already in many regions of the U.S., and model projections of future climate suggest that climate change will continue to reshape the range of Ae. aegypti and Ae. albopictus in the U.S., and potentially the risk of the viruses they transmit.

  4. Mapping Past, Present, and Future Climatic Suitability for Invasive Aedes Aegypti and Aedes Albopictus in the United States: A Process-Based Modeling Approach Using CMIP5 Downscaled Climate Scenarios

    Science.gov (United States)

    Donnelly, Marisa Anne Pella; Marcantonio, Matteo; Melton, Forrest S.; Barker, Christopher M.

    2016-01-01

    The ongoing spread of the mosquitoes, Aedes aegypti and Aedes albopictus, in the continental United States leaves new areas at risk for local transmission of dengue, chikungunya, and Zika viruses. All three viruses have caused major disease outbreaks in the Americas with infected travelers returning regularly to the U.S. The expanding range of these mosquitoes raises questions about whether recent spread has been enabled by climate change or other anthropogenic influences. In this analysis, we used downscaled climate scenarios from the NASA Earth Exchange Global Daily Downscaled Projections (NEX GDDP) dataset to model Ae. aegypti and Ae. albopictus population growth rates across the United States. We used a stage-structured matrix population model to understand past and present climatic suitability for these vectors, and to project future suitability under CMIP5 climate change scenarios. Our results indicate that much of the southern U.S. is suitable for both Ae. aegypti and Ae. albopictus year-round. In addition, a large proportion of the U.S. is seasonally suitable for mosquito population growth, creating the potential for periodic incursions into new areas. Changes in climatic suitability in recent decades for Ae. aegypti and Ae. albopictus have occurred already in many regions of the U.S., and model projections of future climate suggest that climate change will continue to reshape the range of Ae. aegypti and Ae. albopictus in the U.S., and potentially the risk of the viruses they transmit.
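    A stage-structured matrix population model projects stage abundances forward with a matrix whose dominant eigenvalue gives the asymptotic growth rate; climatic suitability then enters by modulating the matrix entries. A sketch with invented vital rates (not fitted mosquito data):

```python
import numpy as np

# Hypothetical stage-structured projection matrix (egg, juvenile, adult);
# every entry is illustrative, not a fitted Aedes vital rate.
A = np.array([
    [0.0, 0.0, 50.0],   # fecundity: eggs per adult per time step
    [0.3, 0.1, 0.0],    # egg -> juvenile (0.3); juveniles remaining (0.1)
    [0.0, 0.2, 0.5],    # juvenile -> adult (0.2); adult survival (0.5)
])

lam = float(max(np.linalg.eigvals(A).real))   # asymptotic growth rate

# Cross-check: the growth factor of repeated projection approaches lam.
n = np.ones(3)
for _ in range(50):
    n = A @ n
ratio = float((A @ n).sum() / n.sum())
```

A growth rate above 1 marks a climatically suitable location and time of year; mapping suitability amounts to recomputing this eigenvalue with temperature-dependent entries across the downscaled climate grid.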

  5. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    National Research Council Canada - National Science Library

    Sungki Kim; Wonil Ko; Sungsig Bang

    2015-01-01

    ...) metal ingots in a high-temperature molten salt phase. This paper provides the unit process cost of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year by utilizing the process costing method...

  6. Principles of polymer processing modelling

    Directory of Open Access Journals (Sweden)

    Agassant Jean-François

    2016-01-01

    Full Text Available Polymer processing involves three thermo-mechanical stages: plastication of solid polymer granules or powder to a homogeneous fluid, which is shaped under pressure in moulds or dies and finally cooled and eventually drawn to obtain the final plastic part. Physical properties of polymers (high viscosity, non-linear rheology, low thermal diffusivity) as well as the complex shape of most plastic parts make modelling a challenge. Several examples (film blowing, extrusion dies, injection moulding, blow moulding) are presented and discussed.

  7. Monte Carlo MP2 on Many Graphical Processing Units.

    Science.gov (United States)

    Doran, Alexander E; Hirata, So

    2016-10-11

    In the Monte Carlo second-order many-body perturbation (MC-MP2) method, the long sum-of-product matrix expression of the MP2 energy, whose literal evaluation may be poorly scalable, is recast into a single high-dimensional integral of functions of electron pair coordinates, which is evaluated by the scalable method of Monte Carlo integration. The sampling efficiency is further accelerated by the redundant-walker algorithm, which allows a maximal reuse of electron pairs. Here, a multitude of graphical processing units (GPUs) offers a uniquely ideal platform to expose multilevel parallelism: fine-grain data-parallelism for the redundant-walker algorithm in which millions of threads compute and share orbital amplitudes on each GPU; coarse-grain instruction-parallelism for near-independent Monte Carlo integrations on many GPUs with few and infrequent interprocessor communications. While the efficiency boost by the redundant-walker algorithm on central processing units (CPUs) grows linearly with the number of electron pairs and tends to saturate when the latter exceeds the number of orbitals, on a GPU it grows quadratically before it increases linearly and then eventually saturates at a much larger number of pairs. This is because the orbital constructions are nearly perfectly parallelized on a GPU and thus completed in a near-constant time regardless of the number of pairs. In consequence, an MC-MP2/cc-pVDZ calculation of a benzene dimer is 2700 times faster on 256 GPUs (using 2048 electron pairs) than on two CPUs, each with 8 cores (which can use only up to 256 pairs effectively). We also numerically determine that the cost to achieve a given relative statistical uncertainty in an MC-MP2 energy increases as O(n(3)) or better with system size n, which may be compared with the O(n(5)) scaling of the conventional implementation of deterministic MP2. We thus establish the scalability of MC-MP2 with both system and computer sizes.
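    The paper's key recasting, evaluating a long sum as a high-dimensional integral sampled by independent walkers, is easiest to see on a generic Monte Carlo integrator: the error bar shrinks as 1/√n regardless of dimension, and samples parallelize trivially. (The integrand below is a simple stand-in, not the MP2 expression.)

```python
import numpy as np

def mc_integrate(f, dim, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0,1]^dim, with a
    1/sqrt(n) statistical error bar. Samples are independent, so they
    split trivially across processors (or GPU walkers)."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, dim))          # n independent sample points
    vals = f(x)
    return float(vals.mean()), float(vals.std(ddof=1) / np.sqrt(n))

# Stand-in integrand with a known answer:
# the integral of sum_i x_i^2 over [0,1]^6 equals 6 * (1/3) = 2.
est, err = mc_integrate(lambda x: (x**2).sum(axis=1), dim=6, n=100_000)
```

The redundant-walker idea then amounts to reusing expensive intermediate quantities (orbital amplitudes) across many such samples, which is what the GPU's shared, fine-grained parallelism accelerates.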

  8. Uniting Gradual and Abrupt set Processes in Resistive Switching Oxides

    Science.gov (United States)

    Fleck, Karsten; La Torre, Camilla; Aslam, Nabeel; Hoffmann-Eifert, Susanne; Böttger, Ulrich; Menzel, Stephan

    2016-12-01

    Identifying limiting factors is crucial for a better understanding of the dynamics of the resistive switching phenomenon in transition-metal oxides. This improved understanding is important for the design of fast-switching, energy-efficient, and long-term stable redox-based resistive random-access memory devices. Therefore, this work presents a detailed study of the set kinetics of valence change resistive switches on a time scale from 10 ns to 10⁴ s, taking Pt/SrTiO3/TiN nanocrossbars as a model material. The analysis of the transient currents reveals that the switching process can be subdivided into a linear-degradation process that is followed by a thermal runaway. The comparison with a dynamical electrothermal model of the memory cell allows the deduction of the physical origin of the degradation. The origin is an electric-field-induced increase of the oxygen-vacancy concentration near the Schottky barrier of the Pt/SrTiO3 interface that is accompanied by a steadily rising local temperature due to Joule heating. The positive feedback of the temperature increase on the oxygen-vacancy mobility, and thereby on the conductivity of the filament, leads to a self-acceleration of the set process.

  9. Integrated modelling in materials and process technology

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri

    2008-01-01

    Integrated modelling of entire process sequences and the subsequent in-service conditions, and multiphysics modelling of the single process steps are areas that increasingly support optimisation of manufactured parts. In the present paper, three different examples of modelling manufacturing...... processes from the viewpoint of combined materials and process modelling are presented: solidification of thin walled ductile cast iron, integrated modelling of spray forming and multiphysics modelling of friction stir welding. The fourth example describes integrated modelling applied to a failure analysis...

  10. Parallel direct solver for finite element modeling of manufacturing processes

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, P.A.F.

    2017-01-01

    The central processing unit (CPU) time is of paramount importance in finite element modeling of manufacturing processes. Because the most significant part of the CPU time is consumed in solving the main system of equations resulting from finite element assemblies, different approaches have been...

  11. Towards simplification of hydrologic modeling: Identification of dominant processes

    Science.gov (United States)

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many

  12. A Review of Process Modeling Language Paradigms

    Institute of Scientific and Technical Information of China (English)

    MA Qin-hai; GUAN Zhi-min; LI Ying; ZHAO Xi-nan

    2002-01-01

    Process representation or modeling plays an important role in business process engineering. Process modeling languages can be evaluated by the extent to which they provide constructs useful for representing and reasoning about the aspects of a process, and subsequently are chosen for a certain purpose. This paper reviews process modeling language paradigms and points out their advantages and disadvantages.

  13. Accelerating chemical database searching using graphics processing units.

    Science.gov (United States)

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097.
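
    The Elias gamma code mentioned above encodes a positive integer as a unary prefix giving its bit length, followed by the binary digits themselves; applied to the gaps between set bits of a fingerprint, this yields the kind of lossless compression the abstract describes. A minimal illustrative sketch in Python (not the authors' implementation):

```python
def elias_gamma_encode(n: int) -> str:
    # Encode a positive integer: (b-1) zeros, then the b-bit binary form,
    # where b is the bit length of n.
    if n < 1:
        raise ValueError("Elias gamma encodes positive integers only")
    b = n.bit_length()
    return "0" * (b - 1) + format(n, "b")

def elias_gamma_decode(bits: str) -> list[int]:
    # Decode a concatenated stream of gamma codes back into integers.
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return out

# Gap-encode the positions of set bits, e.g. gaps [1, 3, 2, 7].
stream = "".join(elias_gamma_encode(g) for g in [1, 3, 2, 7])
assert elias_gamma_decode(stream) == [1, 3, 2, 7]
```

    Because small gaps get short codes, sparse fingerprints compress well, which is consistent with the 0.097 ratio reported above.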

  14. Massively Parallel Latent Semantic Analyses using a Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, Joseph M [ORNL; Cui, Xiaohui [ORNL

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever expanding size of data sets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. The Graphics Processing Unit (GPU) can solve some highly parallel problems much faster than the traditional sequential processor (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a computer cluster. Due to the GPU's application-specific architecture, harnessing the GPU's computational prowess for LSA is a great challenge. We present a parallel LSA implementation on the GPU, using NVIDIA Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms. The performance of this implementation is compared to a traditional LSA implementation on CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1000x1000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits the GPU has for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
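
    The dimensionality reduction at the heart of LSA is a truncated Singular Value Decomposition: keep only the k largest singular values and the corresponding singular vectors. A small NumPy sketch of this step (the toy term-document matrix is hypothetical, not data from the paper):

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
A = np.array([[2., 0., 1.],
              [0., 3., 0.],
              [1., 0., 2.],
              [0., 1., 1.]])

# Full SVD, then keep the k largest singular values for a rank-k
# latent-semantic approximation of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem, A_k is the best rank-k approximation of A
# in the Frobenius norm; the error equals the first discarded singular value.
err_k = np.linalg.norm(A - A_k)
assert np.isclose(err_k, s[k])
```

    On a GPU, the heavy linear algebra in this step is what gets offloaded to CUBLAS-style libraries, which is the speedup the abstract reports.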

  15. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification of positive, negative, or neutral {P,N,Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and −1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered to be positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries; this amounts to a threefold reduction in the computing time of the UnifiedMetrics procedure.
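
    The unification step relies on the ordinary Pearson correlation coefficient between the score vectors of two lexicons. A plain-Python sketch of that computation (the sample scores are invented for illustration):

```python
def pearson(x, y):
    # Pearson correlation coefficient of two equal-length score vectors:
    # covariance divided by the product of the standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical polarity scores for the same four words in two lexicons.
lex_a = [0.9, -0.7, 0.1, 0.5]
lex_b = [0.8, -0.6, 0.0, 0.4]
r = pearson(lex_a, lex_b)
assert 0.99 < r <= 1.0   # strongly (near perfectly) correlated lexicons
```

    In the USL setting, each GPU core would evaluate this formula for a different subset of lexical entries in parallel.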

  16. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.
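
    MUMmerGPU matches queries against a reference stored as a suffix tree; a suffix array gives the same exact-match behavior in a few lines of code. The following Python sketch (not MUMmerGPU's algorithm, which runs a suffix-tree traversal per query on the GPU) shows the idea of locating all occurrences of a query in a reference:

```python
import bisect

def suffix_array(s: str) -> list[int]:
    # Naive O(n^2 log n) construction; fine for an illustration.
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_matches(reference: str, query: str) -> list[int]:
    # All suffixes starting with `query` are contiguous in the sorted
    # suffix array, so binary search finds every exact occurrence.
    sa = suffix_array(reference)
    suffixes = [reference[i:] for i in sa]
    lo = bisect.bisect_left(suffixes, query)
    hits = []
    while lo < len(sa) and suffixes[lo].startswith(query):
        hits.append(sa[lo])
        lo += 1
    return sorted(hits)

assert find_matches("ACGTACGT", "ACG") == [0, 4]
```

    In MUMmerGPU, thousands of such query lookups run in parallel, one per GPU thread, against a shared reference index.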

  17. Business Process Modelling for Measuring Quality

    NARCIS (Netherlands)

    Heidari, F.; Loucopoulos, P.; Brazier, F.M.

    2013-01-01

    Business process modelling languages facilitate presentation, communication and analysis of business processes with different stakeholders. This paper proposes an approach that drives specification and measurement of quality requirements and in doing so relies on business process models as

  19. Graphics processing unit-accelerated quantitative trait Loci detection.

    Science.gov (United States)

    Chapuis, Guillaume; Filangi, Olivier; Elsen, Jean-Michel; Lavenier, Dominique; Le Roy, Pascale

    2013-09-01

    Mapping quantitative trait loci (QTL) using genetic marker information is a time-consuming analysis that has interested the mapping community in recent decades. The increasing amount of genetic marker data allows one to consider ever more precise QTL analyses while increasing the demand for computation. Part of the difficulty of detecting QTLs resides in finding appropriate critical values or threshold values, above which a QTL effect is considered significant. Different approaches exist to determine these thresholds, using either empirical methods or algebraic approximations. In this article, we present a new implementation of existing software, QTLMap, which takes advantage of the data parallel nature of the problem by offsetting heavy computations to a graphics processing unit (GPU). Developments on the GPU were implemented using Cuda technology. This new implementation performs up to 75 times faster than the previous multicore implementation, while maintaining the same results and level of precision (Double Precision) and computing both QTL values and thresholds. This speedup allows one to perform more complex analyses, such as linkage disequilibrium linkage analyses (LDLA) and multiQTL analyses, in a reasonable time frame.

  20. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  1. Efficient graphics processing unit-based voxel carving for surveillance

    Science.gov (United States)

    Ober-Gecks, Antje; Zwicker, Marius; Henrich, Dominik

    2016-07-01

    A graphics processing unit (GPU)-based implementation of a space carving method for the reconstruction of the photo hull is presented. In particular, the generalized voxel coloring with item buffer approach is transferred to the GPU. The fast computation on the GPU is realized by an incrementally calculated standard deviation within the likelihood ratio test, which is applied as color consistency criterion. A fast and efficient computation of complete voxel-pixel projections is provided using volume rendering methods. This generates a speedup of the iterative carving procedure while considering all given pixel color information. Different volume rendering methods, such as texture mapping and raycasting, are examined. The termination of the voxel carving procedure is controlled through an anytime concept. The photo hull algorithm is examined for its applicability to real-world surveillance scenarios as an online reconstruction method. For this reason, a GPU-based redesign of a visual hull algorithm is provided that utilizes geometric knowledge about known static occluders of the scene in order to create a conservative and complete visual hull that includes all given objects. This visual hull approximation serves as input for the photo hull algorithm.

  2. Undergraduate Game Degree Programs in the United Kingdom and United States: A Comparison of the Curriculum Planning Process

    Science.gov (United States)

    McGill, Monica M.

    2010-01-01

    Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, post-secondary institutions in the United Kingdom (UK) and the United States (US) have started to create game degree programs. Though curriculum theorists provide insight into the process of…

  4. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge in the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of a remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA) and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental clean up and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  5. Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units

    Science.gov (United States)

    Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark

    2012-02-01

    We study the implementation of the classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPU) with hundreds of cores and high memory bandwidth. The key for optimizing the performance of the GPU computing is in the proper handling of the data structure. Utilizing the multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In the typical simulations, we find over two thousand times of speed-up over the single threaded CPU implementation.
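
    Multi-spin coding, as used in the abstract above, packs one Ising spin per bit of a machine word so that many spins are stored and flipped with single bitwise operations. A hypothetical Python sketch of the packing idea (real GPU codes operate on 32- or 64-bit words per thread):

```python
import random

# Pack 64 Ising spins (±1) into the bits of one integer: bit = 1 means spin up.
random.seed(1)
spins = [random.choice((-1, 1)) for _ in range(64)]
word = sum(1 << i for i, s in enumerate(spins) if s == 1)

# Flipping a set of spins is a single bitwise XOR with a mask,
# instead of 64 separate sign flips.
mask = (1 << 5) | (1 << 17)          # flip spins 5 and 17
word ^= mask

decoded = [1 if (word >> i) & 1 else -1 for i in range(64)]
assert decoded[5] == -spins[5] and decoded[17] == -spins[17]
assert all(decoded[i] == spins[i] for i in range(64) if i not in (5, 17))
```

    This is why the data structure matters so much on the GPU: one XOR updates 64 spins at once, and memory traffic per spin drops by the same factor.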

  6. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Antz, Hartwig [Karlsruhe Inst. of Technology (KIT) (Germany); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom); Heuveline, Vincent [Karlsruhe Inst. of Technology (KIT) (Germany)

    2011-11-30

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution. Analyzing the results, we observe that even for our most basic asynchronous relaxation scheme, despite its lower convergence rate compared to the Gauss-Seidel relaxation (which we expected), the asynchronous iteration running on GPUs is still able to provide solution approximations of certain accuracy in considerably shorter time than Gauss-Seidel running on CPUs. Hence, it overcompensates for the slower convergence by exploiting the scalability and the good fit of the asynchronous schemes for the highly parallel GPU architectures. Further, enhancing the most basic asynchronous approach with hybrid schemes - using multiple iterations within the "subdomain" handled by a GPU thread block and Jacobi-like asynchronous updates across the "boundaries", subject to tuning various parameters - we manage to not only recover the loss of global convergence but often accelerate convergence by up to two times (compared to the effective but difficult-to-parallelize Gauss-Seidel type of schemes), while keeping the execution time of a global iteration practically the same. This shows the high potential of the asynchronous methods not only as a stand-alone numerical solver for linear systems of equations fulfilling certain convergence conditions but more importantly as a smoother in multigrid methods. Due to the explosion of parallelism in today's architecture designs, the significance of and the need for asynchronous methods, as the ones described in this work, is expected to grow.
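
    The synchronous baseline of the relaxation schemes discussed above is the classical Jacobi iteration, in which every component of x is updated from the previous sweep's values; the asynchronous variants drop that synchronization and let components use whatever neighbor values are currently in memory. A NumPy sketch of the synchronous case on a small diagonally dominant system (an invented example, not one of the test matrices):

```python
import numpy as np

# Strictly diagonally dominant system, so the Jacobi iteration
# (and its asynchronous relatives) is guaranteed to converge.
A = np.array([[4., 1., 0.],
              [1., 5., 2.],
              [0., 2., 6.]])
b = np.array([1., 2., 3.])

x = np.zeros(3)
D = np.diag(A)                 # diagonal entries
R = A - np.diagflat(D)         # off-diagonal part
for _ in range(100):
    x = (b - R @ x) / D        # one synchronous Jacobi sweep

assert np.allclose(A @ x, b, atol=1e-8)
```

    Because every component update in a sweep is independent, this iteration maps naturally onto one GPU thread per unknown, which is the starting point for the block-asynchronous schemes in the paper.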

  7. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; ST Charles, Jesse Lee [ORNL

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.

  8. Building a multilevel modeling network for lipid processing systems

    DEFF Research Database (Denmark)

    Mustaffa, Azizul Azri; Díaz Tovar, Carlos Axel; Hukkerikar, Amol

    2011-01-01

    The aim of this work is to present the development of a computer aided multilevel modeling network for the systematic design and analysis of processes employing lipid technologies. This is achieved by decomposing the problem into four levels of modeling: i) pure component property modeling...... and a lipid-database of collected experimental data from industry and generated data from validated predictive property models, as well as modeling tools for fast adoption-analysis of property prediction models; ii) modeling of phase behavior of relevant lipid mixtures using the UNIFAC-CI model, development...... data collected from existing process plants, and application of validated models in design and analysis of unit operations; iv) the information and models developed are used as building blocks in the development of methods and tools for computer-aided synthesis and design of process flowsheets (CAFD)...

  9. Fast crustal deformation computing method for multiple computations accelerated by a graphics processing unit cluster

    Science.gov (United States)

    Yamaguchi, Takuma; Ichimura, Tsuyoshi; Yagi, Yuji; Agata, Ryoichiro; Hori, Takane; Hori, Muneo

    2017-08-01

    As high-resolution observational data become more common, the demand for numerical simulations of crustal deformation using 3-D high-fidelity modelling is increasing. To increase the efficiency of performing numerical simulations with high computation costs, we developed a fast solver using heterogeneous computing, with graphics processing units (GPUs) and central processing units, and then used the solver in crustal deformation computations. The solver was based on an iterative solver and was devised so that a large proportion of the computation was calculated more quickly using GPUs. To confirm the utility of the proposed solver, we demonstrated a numerical simulation of the coseismic slip distribution estimation, which requires 360 000 crustal deformation computations with 82 196 106 degrees of freedom.
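
    The paper builds on an iterative solver whose dominant cost is the repeated matrix-vector product, which is the natural candidate for GPU offloading. As a generic stand-in (the actual solver in the paper may differ in preconditioning and precision strategy), a textbook conjugate gradient iteration looks like this in NumPy:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Textbook CG for a symmetric positive-definite A. In a GPU solver,
    # the product A @ p is the step offloaded to the device each iteration.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:          # squared residual norm below tolerance
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
x = conjugate_gradient(A, b)
assert np.allclose(A @ x, b)
```

    At the scales cited above (tens of millions of degrees of freedom, hundreds of thousands of solves), accelerating this inner product-and-update loop is what makes the coseismic slip estimation tractable.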

  10. Developing engineering processes through integrated modelling of product and process

    DEFF Research Database (Denmark)

    Nielsen, Jeppe Bjerrum; Hvam, Lars

    2012-01-01

    This article aims at developing an operational tool for integrated modelling of product assortments and engineering processes in companies making customer specific products. Integrating a product model in the design of engineering processes will provide a deeper understanding of the engineering...... activities as well as insight into how product features affect the engineering processes. The article suggests possible ways of integrating models of products with models of engineering processes. The models have been tested and further developed in an action research study carried out in collaboration...

  11. Fast Pyrolysis Process Development Unit for Validating Bench Scale Data

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Robert C. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab.. Center for Sustainable Environmental Technologies. Bioeconomy Inst.; Jones, Samuel T. [Iowa State Univ., Ames, IA (United States). Biorenewables Research Lab.. Center for Sustainable Environmental Technologies. Bioeconomy Inst.

    2010-03-31

    The purpose of this project was to prepare and operate a fast pyrolysis process development unit (PDU) that can validate experimental data generated at the bench scale. In order to do this, a biomass preparation system, a modular fast pyrolysis fluidized bed reactor, modular gas clean-up systems, and modular bio-oil recovery systems were designed and constructed. Instrumentation for centralized data collection and process control were integrated. The bio-oil analysis laboratory was upgraded with the addition of analytical equipment needed to measure C, H, O, N, S, P, K, and Cl. To provide a consistent material for processing through the fluidized bed fast pyrolysis reactor, the existing biomass preparation capabilities of the ISU facility needed to be upgraded. A stationary grinder was installed to reduce biomass from bale form to 5-10 cm lengths. A 25 kg/hr rotary kiln drier was installed. It has the ability to lower moisture content to the desired level of less than 20% wt. An existing forage chopper was upgraded with new screens. It is used to reduce biomass to the desired particle size of 2-25 mm fiber length. To complete the material handling between these pieces of equipment, a bucket elevator and two belt conveyors must be installed. The bucket elevator has been installed. The conveyors are being procured using other funding sources. Fast pyrolysis bio-oil, char and non-condensable gases were produced from an 8 kg/hr fluidized bed reactor. The bio-oil was collected in a fractionating bio-oil collection system that produced multiple fractions of bio-oil. This bio-oil was fractionated through two separate, but equally important, mechanisms within the collection system. The aerosols and vapors were selectively collected by utilizing laminar flow conditions to prevent aerosol collection and electrostatic precipitators to collect the aerosols. The vapors were successfully collected through a selective condensation process. The combination of these two mechanisms

  12. The pediatric intensive care unit business model.

    Science.gov (United States)

    Schleien, Charles L

    2013-06-01

    All pediatric intensivists need a primer on ICU finance. The author describes potential alternate revenue sources for the division. Differentiating units by size or academic affiliation, the author describes drivers of expense. Strategies to manage the bottom line, including negotiations for hospital services, are covered. Current trends in physician productivity and its metrics are detailed, with particular focus on clinical FTE management. Methods of using these data to enhance revenue are discussed. Other current trends in the ICU business related to changes at the federal and state level, as well as in the insurance sector's move away from fee-for-service, are covered. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Modelling of an industrial NGL-Recovery unit considering environmental and economic impacts

    Energy Technology Data Exchange (ETDEWEB)

    Sharratt, P. N.; Hernandez-Enriquez, A.; Flores-Tlacuahuac, A.

    2009-07-01

    In this work, an integrated model is presented that identifies key areas in the operation of a cryogenic NGL-recovery unit. This methodology sets out to provide a deep understanding of the various interrelationships across multiple plant operating factors, including reliability, which could be essential for substantial improvement of process performance. The integrated model has been developed to predict the economic and environmental impacts of a real cryogenic unit (600 MMCUF/D) during normal operation, and has been built in Aspen™. (Author)

  14. Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET (HELAS Evaluation with GPU Enhanced Technology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general purpose computing on GPU) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final state jets we can evaluate on a GPU is limited to 4 for pure gluon processes (gg → 4g), or 5 for processes with one or more quark lines such as qq̄ → 5g and qq → qq + 3g. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the gg → 4g processes for which the GPU gain over the CPU is about 20.

  15. modeling grinding modeling grinding processes as micro processes ...

    African Journals Online (AJOL)

    eobe

    workpiece material dynamics thus allowing for process planning, optimization, and control. In spite of the .... arrangement of the grain vertices at the wheel active surface. ...... on Workpiece Roughness and Process Vibration” J. of the Braz.

  16. Design of the Laboratory-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Meier, David E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Casella, Amanda J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Delegard, Calvin H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edwards, Matthew K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Orton, Robert D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rapko, Brian M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Smart, John E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report describes a design for a laboratory-scale capability to produce plutonium oxide (PuO2) for use in identifying and validating nuclear forensics signatures associated with plutonium production, as well as for use as exercise and reference materials. This capability will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including PuO2 dissolution, purification of the Pu by ion exchange, precipitation, and re-conversion to PuO2 by calcination.

  17. Building a Multilevel Modeling Network for Lipid Processing Systems

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel; Mustaffa, Azizul Azri; Hukkerikar, Amol

    The world's fats and oils production has been growing rapidly over... in the upcoming years major challenges in terms of design and development of better products and more sustainable processes. Although the oleochemical industry is mature and based on well-established processes, the complex systems that lipid compounds form, the lack of accurate predictive models for their physical properties, and unit operation models for their processing have limited computer-aided methods and tools for process synthesis, modeling and simulation to be widely used for design, analysis, and optimization of these processes.

  18. Managing Analysis Models in the Design Process

    Science.gov (United States)

    Briggs, Clark

    2006-01-01

    Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.

  19. Modeled Thickness of the Overburden Geomodel Unit (obthk_f)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The obthk_f grid represents the modeled thickness of the Overburden geomodel unit at a 500 foot resolution. It is one grid of a geomodel that consists of eleven...

  20. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  1. Performance Recognition for Sulphur Flotation Process Based on Froth Texture Unit Distribution

    Directory of Open Access Journals (Sweden)

    Mingfang He

    2013-01-01

    Full Text Available As an important indicator of flotation performance, froth texture is believed to be related to the operational condition in the sulphur flotation process. A novel fault-detection method based on froth texture unit distribution (TUD) is proposed to recognize the fault condition of sulphur flotation in real time. The froth texture unit number is calculated based on the texture spectrum, and the probability density function (PDF) of the froth texture unit number is defined as the texture unit distribution, which can describe the actual textural feature more accurately than the grey-level dependence matrix approach. As the type of the froth TUD is unknown, a nonparametric kernel estimation method based on a fixed kernel basis is proposed, which overcomes the difficulty of comparing different TUDs under various conditions, impossible with the traditional varying kernel basis. Through transforming the nonparametric description into dynamic kernel weight vectors, a principal component analysis (PCA) model is established to reduce the dimensionality of the vectors. Then a threshold criterion determined by the TQ statistic based on the PCA model is proposed to realize the performance recognition. The industrial application results show that accurate performance recognition of froth flotation can be achieved by using the proposed method.
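As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below computes texture unit numbers from the classic texture spectrum, forms a fixed-bin TUD, fits a PCA model on frames from normal operation, and flags faults with a simple residual-based control limit. The image sizes, bin count, number of components and 3-sigma limit are all hypothetical choices:

```python
import numpy as np

def texture_unit_numbers(img):
    """Texture spectrum: compare each interior pixel's 8 neighbours to the
    centre (0 below, 1 equal, 2 above) and encode the result as a base-3
    number in [0, 3^8 - 1]."""
    h, w = img.shape
    offsets = [(-1,-1),(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1)]
    tus = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c, n = img[i, j], 0
            for k, (di, dj) in enumerate(offsets):
                v = img[i + di, j + dj]
                e = 0 if v < c else (1 if v == c else 2)
                n += e * 3**k
            tus.append(n)
    return np.array(tus)

def tud(tus, n_bins=64):
    """Texture unit distribution: normalised histogram over fixed bins."""
    hist, _ = np.histogram(tus, bins=n_bins, range=(0, 3**8 - 1))
    return hist / hist.sum()

# Fit a PCA model on TUDs from frames under (simulated) normal operation.
rng = np.random.default_rng(0)
normal = np.stack([tud(texture_unit_numbers(rng.integers(0, 8, (24, 24))))
                   for _ in range(30)])
mean = normal.mean(axis=0)
X = normal - mean
U, S, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:3]                                   # retain 3 principal components
resid = X - X @ P.T @ P
q_normal = (resid**2).sum(axis=1)            # squared reconstruction error
q_limit = q_normal.mean() + 3 * q_normal.std()   # simple 3-sigma limit

def is_fault(frame):
    """Flag a frame whose TUD reconstruction error exceeds the limit."""
    x = tud(texture_unit_numbers(frame)) - mean
    return ((x - x @ P.T @ P)**2).sum() > q_limit
```

In the paper the threshold comes from the TQ statistic of the PCA model; the 3-sigma residual limit here is a generic stand-in for that criterion.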

  2. Exponential Models of Legislative Turnover. [and] The Dynamics of Political Mobilization, I: A Model of the Mobilization Process, II: Deductive Consequences and Empirical Application of the Model. Applications of Calculus to American Politics. [and] Public Support for Presidents. Applications of Algebra to American Politics. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 296-300.

    Science.gov (United States)

    Casstevens, Thomas W.; And Others

    This document consists of five units, all dealing with applications of mathematics to American politics. The first three concern applications of calculus; the last two deal with applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…

  3. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  4. Process Correlation Analysis Model for Process Improvement Identification

    Directory of Open Access Journals (Sweden)

    Su-jin Choi

    2014-01-01

    software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  5. A Process Model for Establishing Business Process Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Nguyen Hoang Thuan

    2017-06-01

    Full Text Available Crowdsourcing can be an organisational strategy for distributing work to Internet users and harnessing innovation, information, capacities, and a variety of business endeavours. As crowdsourcing differs from other business strategies, organisations are often unsure how best to structure different crowdsourcing activities and integrate them with other organisational business processes. To manage this problem, we design a process model guiding how to establish business process crowdsourcing. The model consists of seven components covering the main activities of crowdsourcing processes, drawn from a knowledge base incorporating diverse knowledge sources in the domain. The built model is evaluated using case studies, suggesting its adequacy and utility.

  6. Personalised modelling of facial action unit intensity

    NARCIS (Netherlands)

    Yang, Shuang; Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2014-01-01

    Facial expressions depend greatly on facial morphology and expressiveness of the observed person. Recent studies have shown great improvement of the personalized over non-personalized models in variety of facial expression related tasks, such as face and emotion recognition. However, in the context

  7. Unit root modeling for trending stock market series

    Directory of Open Access Journals (Sweden)

    Afees A. Salisu

    2016-06-01

    Full Text Available In this paper, we examine how the unit root for stock market series should be modeled. We employ the Narayan and Liu (2015) trend GARCH-based unit root test and its variants in order to more carefully capture the inherent statistical behavior of the series. We utilize daily, weekly and monthly data covering nineteen countries across the regions of America, Asia and Europe. We find that the nature of data frequency matters for unit root testing when dealing with stock market data. Our evidence also suggests that stock market data are better modeled in the presence of structural breaks, conditional heteroscedasticity and a time trend.
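A minimal stand-in for such a test (a plain Dickey-Fuller regression with intercept and linear time trend, not the Narayan-Liu GARCH-based statistic) can illustrate the mechanics of trend-aware unit-root testing:

```python
import numpy as np

def df_test_with_trend(y):
    """Dickey-Fuller regression with intercept and linear time trend:
    dy_t = a + b*t + rho*y_{t-1} + e_t.
    Returns the t-statistic on rho; large negative values argue against
    the unit-root null."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    ylag = y[:-1]
    t = np.arange(1, len(y))
    X = np.column_stack([np.ones_like(ylag), t, ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[2] / np.sqrt(cov[2, 2])

# Synthetic examples: a random walk (unit root) versus a trend-stationary
# series; the latter should yield a far more negative statistic.
rng = np.random.default_rng(1)
random_walk = np.cumsum(rng.standard_normal(500))
trend_stationary = 0.05 * np.arange(500) + rng.standard_normal(500)
t_rw = df_test_with_trend(random_walk)
t_ts = df_test_with_trend(trend_stationary)
```

The GARCH-based variants in the paper additionally model conditional heteroscedasticity in the errors, which this ordinary-least-squares sketch ignores.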

  8. CONVERGENCE TO PROCESS ORGANIZATION BY MODEL OF PROCESS MATURITY

    Directory of Open Access Journals (Sweden)

    Blaženka Piuković Babičković

    2015-06-01

    Full Text Available Modern business process orientation is bound up primarily with process thinking and a process-based organizational structure. Although business processes are increasingly written and spoken about, a major problem persists in the business world, especially in countries in transition, where a lack of understanding of the concept of business process management has been found. The aim of this paper is to make a specific contribution to overcoming this problem by pointing out the significance of the concept of business process management, and by presenting a model for assessing process maturity together with tools recommended for use in process management.

  9. Simulation of abrasive water jet cutting process: Part 1. Unit event approach

    Science.gov (United States)

    Lebar, Andrej; Junkar, Mihael

    2004-11-01

    Abrasive water jet (AWJ) machined surfaces exhibit the texture typical of machining with high energy density beam processing technologies. It has a superior surface quality in the upper region and rough surface in the lower zone with pronounced texture marks called striations. The nature of the mechanisms involved in the domain of AWJ machining is still not well understood but is essential for AWJ control improvement. In this paper, the development of an AWJ machining simulation is reported on. It is based on an AWJ process unit event, which in this case represents the impact of a particular abrasive grain. The geometrical characteristics of the unit event are measured on a physical model of the AWJ process. The measured dependences and the proposed model relations are then implemented in the AWJ machining process simulation. The obtained results are in good agreement in the engraving regime of AWJ machining. To expand the validity of the simulation further, a cellular automata approach is explored in the second part of the paper.

  10. Event-driven process execution model for process virtual machine

    Institute of Scientific and Technical Information of China (English)

    WU Dong-yao; WEI Jun; GAO Chu-shu; DOU Wen-shen

    2012-01-01

    Current orchestration and choreography process engines serve only dedicated process languages. To solve this problem, an Event-driven Process Execution Model (EPEM) was developed. Formalization and mapping principles of the model were presented to guarantee the correctness and efficiency of process transformation. As a case study, EPEM descriptions of the Web Services Business Process Execution Language (WS-BPEL) were represented, and a Process Virtual Machine (PVM), OncePVM, was implemented in compliance with the EPEM.
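As an illustrative toy (not the OncePVM implementation), an event-driven process execution engine can be reduced to handlers subscribed to event names, with each completed activity emitting the event that triggers the next:

```python
from collections import defaultdict, deque

class ProcessVM:
    """Minimal event-driven process virtual machine: activities are handlers
    subscribed to event names; a handler may return a follow-up
    (event, payload) pair, which is queued and dispatched in order."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.log = []          # event trace, useful for auditing a run

    def on(self, event, handler):
        self.handlers[event].append(handler)

    def emit(self, event, payload=None):
        queue = deque([(event, payload)])
        while queue:
            ev, data = queue.popleft()
            for h in self.handlers[ev]:
                self.log.append(ev)
                result = h(data)
                if result is not None:
                    queue.append(result)

# A toy order process: receive -> validate -> ship (event names invented).
vm = ProcessVM()
vm.on("order.received", lambda d: ("order.validated", d))
vm.on("order.validated", lambda d: ("order.shipped", d))
vm.on("order.shipped", lambda d: None)
vm.emit("order.received", {"id": 42})
```

A real engine would add state persistence, parallel branches and compensation; the point here is only that control flow is driven entirely by the event queue rather than by a language-specific interpreter.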

  11. Analog modelling of obduction processes

    Science.gov (United States)

    Agard, P.; Zuo, X.; Funiciello, F.; Bellahsen, N.; Faccenna, C.; Savva, D.

    2012-04-01

    Obduction corresponds to one of plate tectonics oddities, whereby dense, oceanic rocks (ophiolites) are presumably 'thrust' on top of light, continental ones, as for the short-lived, almost synchronous Peri-Arabic obduction (which took place along thousands of km from Turkey to Oman in c. 5-10 Ma). Analog modelling experiments were performed to study the mechanisms of obduction initiation and test various triggering hypotheses (i.e., plate acceleration, slab hitting the 660 km discontinuity, ridge subduction; Agard et al., 2007). The experimental setup comprises (1) an upper mantle, modelled as a low-viscosity transparent Newtonian glucose syrup filling a rigid Plexiglas tank and (2) high-viscosity silicone plates (Rhodrosil Gomme with PDMS iron fillers to reproduce densities of continental or oceanic plates), located at the centre of the tank above the syrup to simulate the subducting and the overriding plates - and avoid friction on the sides of the tank. Convergence is simulated by pushing on a piston at one end of the model with velocities comparable to those of plate tectonics (i.e., in the range 1-10 cm/yr). The reference set-up includes, from one end to the other (~60 cm): (i) the piston, (ii) a continental margin containing a transition zone to the adjacent oceanic plate, (iii) a weakness zone with variable resistance and dip (W), (iv) an oceanic plate - with or without a spreading ridge, (v) a subduction zone (S) dipping away from the piston and (vi) an upper, active continental margin, below which the oceanic plate is being subducted at the start of the experiment (as is known to have been the case in Oman). Several configurations were tested and over thirty different parametric tests were performed. Special emphasis was placed on comparing different types of weakness zone (W) and the extent of mechanical coupling across them, particularly when plates were accelerated. Displacements, together with along-strike and across-strike internal deformation in all

  12. Business process modeling for processing classified documents using RFID technology

    Directory of Open Access Journals (Sweden)

    Koszela Jarosław

    2016-01-01

    Full Text Available The article outlines the application of the process approach to the functional description of a designed IT system supporting the operations of a secret office that processes classified documents. The article describes the application of the method of incremental modeling of business processes according to the BPMN model to the description of the processes currently implemented manually ("as is") and of the target processes ("to be"), which use RFID technology for the purpose of their automation. Additionally, examples of applying structural and dynamic analysis of the processes (process simulation) to verify their correctness and efficiency are presented. As an extension of the process analysis method, the possibility of applying a process warehouse and process mining methods is indicated.

  13. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in these process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort and expertise required. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  14. Modeling low impact development potential with hydrological response units.

    Science.gov (United States)

    Eric, Marija; Fan, Celia; Joksimovic, Darko; Li, James Y

    2013-01-01

    Evaluations of the benefits of implementing low impact development (LID) stormwater management techniques can extend up to a watershed scale. This presents a challenge for representing them in watershed models, since they are typically orders of magnitude smaller in size. This paper presents an approach focused on evaluating the benefits of implementing LIDs at the lot level. The methodology uses the concept of the urban hydrological response unit and results in developing and applying performance curves, functions of lot properties, to estimate the potential benefit of large-scale LID implementation. Lot properties are determined using a municipal geographic information system database and processed to determine groups of lots with similar properties. A representative lot from each group is modeled over a typical rainfall year using the USEPA Stormwater Management Model to develop performance functions that relate the lot properties to the change in annual runoff volume and corresponding phosphorus loading with different LIDs implemented. The results of applying the performance functions to all urban areas provide the potential locations, benefit and cost of implementation of all LID techniques, guiding future decisions for LID implementation by watershed-area municipalities.
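The performance-curve idea can be sketched as follows; all numbers below are invented placeholders for a single hypothetical LID, not results from the paper:

```python
import numpy as np

# Hypothetical performance curve for one LID type: simulated annual runoff
# reduction (%) as a function of lot imperviousness, obtained from a few
# representative-lot model runs (toy values, not SWMM output).
imperv = np.array([20.0, 40.0, 60.0, 80.0])      # % impervious area
reduction = np.array([55.0, 42.0, 30.0, 18.0])   # % runoff volume reduction

def runoff_reduction(pct_impervious):
    """Interpolate the performance curve to estimate the benefit for any
    lot, avoiding a full hydrologic simulation per lot."""
    return float(np.interp(pct_impervious, imperv, reduction))

# Scale up: apply the curve to every lot in a (toy) GIS extract.
lots = [{"id": 1, "imperv": 35, "area_m2": 600},
        {"id": 2, "imperv": 70, "area_m2": 450}]
benefit = {lot["id"]: runoff_reduction(lot["imperv"]) for lot in lots}
```

This is the key economy of the approach: the expensive continuous simulation is run once per representative lot, and every other lot in the municipal database is scored by table lookup.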

  15. Reproducibility of Mammography Units, Film Processing and Quality Imaging

    Science.gov (United States)

    Gaona, Enrique

    2003-09-01

    The purpose of this study was to carry out an exploratory survey of quality-control problems in mammography and processor units as a diagnosis of the current situation of mammography facilities. Measurements of reproducibility, optical density, optical difference and gamma index are included. Breast cancer is the most frequently diagnosed cancer and the second leading cause of cancer death among women in the Mexican Republic. Mammography is a radiographic examination specially designed for detecting breast pathology. We found that the reproducibility problems of the AEC are smaller than those of the processor units, because almost all processors fall outside the acceptable variation limits, which can affect mammography image quality and the dose to the breast. Only four mammography units met the minimum score established by the ACR and FDA for the phantom image.

  16. Fast Monte Carlo simulations of ultrasound-modulated light using a graphics processing unit.

    Science.gov (United States)

    Leung, Terence S; Powell, Samuel

    2010-01-01

    Ultrasound-modulated optical tomography (UOT) is based on "tagging" light in turbid media with focused ultrasound. In comparison to diffuse optical imaging, UOT can potentially offer a better spatial resolution. The existing Monte Carlo (MC) model for simulating ultrasound-modulated light is central processing unit (CPU) based and has been employed in several UOT-related studies. We reimplemented the MC model with a graphics processing unit (GPU; Nvidia GeForce 9800) that can execute the algorithm up to 125 times faster than its CPU (Intel Core Quad) counterpart for a particular set of optical and acoustic parameters. We also show that the incorporation of ultrasound propagation in photon migration modeling increases the computational time considerably, by a factor of at least 6 in one case, even with a GPU. With slight adjustment to the code, MC simulations were also performed to demonstrate the effect of ultrasonic modulation on the speckle pattern generated by the light model (available as animation). This was computed in 4 s with our GPU implementation as compared to 290 s using the CPU.
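The photon-migration Monte Carlo at the heart of such models is embarrassingly parallel, which is why it maps so well to the GPU. A minimal CPU sketch of the core random walk (homogeneous infinite medium, isotropic scattering, weight-based absorption; all parameter values hypothetical, and with no ultrasound coupling) might look like:

```python
import numpy as np

def mc_photon_paths(n_photons, mu_a=0.1, mu_s=10.0, n_steps=200, seed=0):
    """Vectorised Monte Carlo random walk: exponential step lengths with
    mean 1/mu_t, isotropic scattering directions, and absorption tracked
    through the photon weight (fraction mu_s/mu_t survives each step)."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s                       # total interaction coefficient
    pos = np.zeros((n_photons, 3))
    weight = np.ones(n_photons)
    for _ in range(n_steps):
        step = rng.exponential(1.0 / mu_t, n_photons)
        cos_t = rng.uniform(-1.0, 1.0, n_photons)     # isotropic direction
        phi = rng.uniform(0.0, 2.0 * np.pi, n_photons)
        sin_t = np.sqrt(1.0 - cos_t**2)
        d = np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        pos += step[:, None] * d
        weight *= mu_s / mu_t                # survival fraction per step
    return pos, weight

pos, w = mc_photon_paths(10_000)
mean_r = np.linalg.norm(pos, axis=1).mean()
```

On a GPU each photon becomes one thread running this loop independently; the speedups reported above come from that one-thread-per-photon mapping, with the ultrasound-tagging physics added on top.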

  17. Multi-Conditional Latent Variable Model for Joint Facial Action Unit Detection

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    We propose a novel multi-conditional latent variable model for simultaneous facial feature fusion and detection of facial action units. In our approach we exploit the structure-discovery capabilities of generative models such as Gaussian processes, and the discriminative power of classifiers such as

  18. Lipid Processing Technology: Building a Multilevel Modeling Network

    DEFF Research Database (Denmark)

    Díaz Tovar, Carlos Axel; Mustaffa, Azizul Azri; Mukkerikar, Amol

    2011-01-01

    The aim of this work is to present the development of a computer-aided multilevel modeling network for the systematic design and analysis of processes employing lipid technologies. This is achieved by decomposing the problem into four levels of modeling: i) pure-component property modeling and a lipid database of experimental data collected from industry and data generated from validated predictive property models, as well as modeling tools for fast adoption-analysis of property prediction models; ii) modeling of the phase behavior of relevant lipid mixtures using the UNIFAC-CI model, development... in design and analysis of unit operations; iv) the information and models developed are used as building blocks in the development of methods and tools for computer-aided synthesis and design of process flowsheets (CAFD). The applicability of this methodology is highlighted in each level of modeling through...

  19. Co-occurrence of Photochemical and Microbiological Transformation Processes in Open-Water Unit Process Wetlands.

    Science.gov (United States)

    Prasse, Carsten; Wenk, Jannis; Jasper, Justin T; Ternes, Thomas A; Sedlak, David L

    2015-12-15

    The fate of anthropogenic trace organic contaminants in surface waters can be complex due to the occurrence of multiple parallel and consecutive transformation processes. In this study, the removal of five antiviral drugs (abacavir, acyclovir, emtricitabine, lamivudine and zidovudine) via both bio- and phototransformation processes, was investigated in laboratory microcosm experiments simulating an open-water unit process wetland receiving municipal wastewater effluent. Phototransformation was the main removal mechanism for abacavir, zidovudine, and emtricitabine, with half-lives (t1/2,photo) in wetland water of 1.6, 7.6, and 25 h, respectively. In contrast, removal of acyclovir and lamivudine was mainly attributable to slower microbial processes (t1/2,bio = 74 and 120 h, respectively). Identification of transformation products revealed that bio- and phototransformation reactions took place at different moieties. For abacavir and zidovudine, rapid transformation was attributable to high reactivity of the cyclopropylamine and azido moieties, respectively. Despite substantial differences in kinetics of different antiviral drugs, biotransformation reactions mainly involved oxidation of hydroxyl groups to the corresponding carboxylic acids. Phototransformation rates of parent antiviral drugs and their biotransformation products were similar, indicating that prior exposure to microorganisms (e.g., in a wastewater treatment plant or a vegetated wetland) would not affect the rate of transformation of the part of the molecule susceptible to phototransformation. However, phototransformation strongly affected the rates of biotransformation of the hydroxyl groups, which in some cases resulted in greater persistence of phototransformation products.
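The reported half-lives imply simple first-order kinetics. As a small illustrative calculation (the rate constants below are derived only from the half-lives quoted above, and the parallel-pathway combination is a generic first-order textbook relation, not a result of the study):

```python
import math

# Half-lives reported in the study (hours)
t_half_photo = {"abacavir": 1.6, "zidovudine": 7.6, "emtricitabine": 25.0}
t_half_bio = {"acyclovir": 74.0, "lamivudine": 120.0}

def k_from_half_life(t_half_h):
    """First-order rate constant (1/h) from a half-life in hours."""
    return math.log(2) / t_half_h

def fraction_remaining(k, t_h):
    """Fraction of a compound left after t_h hours of first-order decay."""
    return math.exp(-k * t_h)

# If photo- and biotransformation acted in parallel on the same compound,
# the first-order rate constants would add, shortening the overall
# half-life (hypothetical combination of abacavir's photo rate with
# acyclovir's bio rate, purely for illustration):
k_combined = k_from_half_life(1.6) + k_from_half_life(74.0)
t_half_combined = math.log(2) / k_combined
```

The additivity of rate constants is why the slower biotransformation pathways barely shorten the overall half-life of the rapidly photolyzed compounds.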

  20. Unit Operation Experiment Linking Classroom with Industrial Processing

    Science.gov (United States)

    Benson, Tracy J.; Richmond, Peyton C.; LeBlanc, Weldon

    2013-01-01

    An industrial-type distillation column, including appropriate pumps, heat exchangers, and automation, was used as a unit operations experiment to provide a link between classroom teaching and real-world applications. Students were presented with an open-ended experiment where they defined the testing parameters to solve a generalized problem. The…

  1. Effect of energetic dissipation processes on the friction unit tribological

    Directory of Open Access Journals (Sweden)

    Moving V. V.

    2007-01-01

    Full Text Available The article presents the influence of temperature on the rheological and friction coefficients of cast-iron friction-unit elements. It has been found that the surface layer formed at the friction temperature has good resistance to rubbing off. Structural hardening of the surface layer and its capacity for stress relaxation take place.

  2. Dynamic simulation model for ultra supercritical 1 000 MW unit boilers

    Institute of Scientific and Technical Information of China (English)

    XU Hui; XU Ershu

    2013-01-01

    On the basis of the heat-transfer characteristics of the working fluid at different pressures inside the water-wall tubes and the structure of the ultra-supercritical 1 000 MW once-through boiler at Jianbi Power Plant, the varying phase-transformation-point method was adopted to establish a moving-boundary dynamic simulation model of the water wall in ultra-supercritical once-through boilers, in particular the length variation of the hot-water, evaporation and superheat sections as the load changes. On this basis, a real-time dynamic simulation model for the ultra-supercritical 1 000 MW unit boiler at Jianbi Power Plant was built on the STAR-90 simulation platform. Dynamic and static characteristic tests showed that this model can accurately simulate the unit's startup/shutdown process and some typical fault conditions, and has good dynamic and static performance.

  3. From Graphic Processing Unit to General Purpose Graphic Processing Unit

    Institute of Scientific and Technical Information of China (English)

    刘金硕; 刘天晓; 吴慧; 曾秋梅; 任梦菲; 顾宜淳

    2013-01-01

    This paper defines GPU (graphics processing unit), general-purpose computation on the GPU (GPGPU), and the programming model and environment for the GPU. It divides the development of the GPU into four stages and describes the evolution of the GPU architecture from the non-unified rendering architecture to the unified rendering architecture, and on to the new-generation Fermi architecture. It then compares the GPGPU architecture with multi-core CPU and distributed cluster architectures from both the software and hardware perspectives. The analysis shows that medium-grained, thread-level, data-intensive parallel computation is best served by multi-core, multi-threaded parallelism; coarse-grained, network-intensive parallel computation by cluster parallelism; and fine-grained, compute-intensive parallel computation by general-purpose GPU computing. Finally, the paper presents future research hotspots and directions for GPGPU, namely automatic parallelization of GPGPU, CUDA support for multiple languages, and CUDA performance optimization, and introduces some typical applications of GPGPU.

  4. Uncontracted Rys Quadrature Implementation of up to G Functions on Graphical Processing Units.

    Science.gov (United States)

    Asadchev, Andrey; Allada, Veerendra; Felder, Jacob; Bode, Brett M; Gordon, Mark S; Windus, Theresa L

    2010-03-09

    An implementation is presented of an uncontracted Rys quadrature algorithm for electron repulsion integrals, including up to g functions on graphical processing units (GPUs). The general GPU programming model, the challenges associated with implementing the Rys quadrature on these highly parallel emerging architectures, and a new approach to implementing the quadrature are outlined. The performance of the implementation is evaluated for single and double precision on two different types of GPU devices. The performance obtained is on par with the matrix-vector routine from the CUDA basic linear algebra subroutines (CUBLAS) library.
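Rys quadrature evaluates electron-repulsion integrals with specially constructed roots and weights. As a loose, simplified analogue (not the paper's GPU code), the sketch below evaluates the closely related Boys function F0(x), the basic building block of Gaussian-basis integrals, with ordinary Gauss-Legendre quadrature:

```python
import numpy as np
from math import erf, sqrt, pi

def boys0_quadrature(x, n_roots=8):
    """F0(x) = integral over t in [0, 1] of exp(-x * t^2) dt, evaluated by
    Gauss-Legendre quadrature mapped from [-1, 1] to [0, 1]. Rys quadrature
    handles such integrals with roots/weights tailored to the integrand."""
    t, w = np.polynomial.legendre.leggauss(n_roots)
    t = 0.5 * (t + 1.0)          # map nodes [-1, 1] -> [0, 1]
    w = 0.5 * w                  # rescale weights accordingly
    return float(np.sum(w * np.exp(-x * t**2)))

def boys0_exact(x):
    """Closed form for comparison: F0(x) = sqrt(pi/(4x)) * erf(sqrt(x))."""
    return 0.5 * sqrt(pi / x) * erf(sqrt(x)) if x > 0 else 1.0

approx = boys0_quadrature(2.5)
exact = boys0_exact(2.5)
```

The GPU implementation's challenge is not this arithmetic, which is trivially parallel, but organizing the huge number of such quadrature evaluations (one per primitive-integral batch) across threads with limited registers and shared memory.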

  5. Accelerated 3D Monte Carlo light dosimetry using a graphics processing unit (GPU) cluster

    Science.gov (United States)

    Lo, William Chun Yip; Lilge, Lothar

    2010-11-01

    This paper presents a basic computational framework for real-time, 3-D light dosimetry on graphics processing unit (GPU) clusters. The GPU-based approach offers a direct solution to overcome the long computation time preventing Monte Carlo simulations from being used in complex optimization problems such as treatment planning, particularly if simulated annealing is employed as the optimization algorithm. The current multi-GPU implementation is validated using commercial light-modelling software (ASAP from Breault Research Organization). It also supports the latest Fermi GPU architecture and features an interactive 3-D visualization interface. The software is available for download at http://code.google.com/p/gpu3d.

  6. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Walker, Anthony P. [Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA

    2017-04-01

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
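The variance-based idea can be sketched in a few lines: treat the choice of process model as a discrete random input alongside the model parameters, and compare the variance of the conditional means across process models with the total output variance. The toy model forms, parameter distributions, and equal model weights below are invented for illustration, not taken from the paper.

```python
import random
import statistics

# Hedged sketch of a variance-based process sensitivity index.
random.seed(0)

recharge_models = [lambda p: 0.2 * p, lambda p: 0.1 * p]   # two process models
geology_models = [lambda k: k, lambda k: 2.0 * k]          # two parameterizations

def output(r_model, g_model):
    p = random.gauss(50.0, 5.0)       # precipitation parameter
    k = random.gauss(10.0, 1.0)       # conductivity parameter
    return r_model(p) * g_model(k)    # toy model output

def process_sensitivity(models_under_study, other_models, n=2000):
    """Var over process models of E[output | model], divided by Var(output)."""
    cond_means, all_outputs = [], []
    for m in models_under_study:
        ys = [output(m, random.choice(other_models)) for _ in range(n)]
        cond_means.append(statistics.fmean(ys))
        all_outputs.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_outputs)

ps_recharge = process_sensitivity(recharge_models, geology_models)
```

The index lies in [0, 1]; a value near 1 would mean the choice of recharge model dominates the output variance.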

  7. Properties of spatial Cox process models

    DEFF Research Database (Denmark)

    Møller, Jesper

    Probabilistic properties of Cox processes of relevance for statistical modelling and inference are studied. Particularly, we study the most important classes of Cox processes, including log Gaussian Cox processes, shot noise Cox processes, and permanent Cox processes. We consider moment propertie...

  8. The United States Military Entrance Processing Command (USMEPCOM) Uses Six Sigma Process to Develop and Improve Data Quality

    Science.gov (United States)

    2007-06-01

    mecpom.army.mil Original title on 712 A/B: "The United States Military Entrance Processing Command (USMEPCOM) Uses Six Sigma Process to Develop and Improve Data Quality." The briefing outline covers: USMEPCOM overview/history; purpose; and Define: what is important.

  9. Modeling process flow using diagrams

    NARCIS (Netherlands)

    Kemper, B.; de Mast, J.; Mandjes, M.

    2010-01-01

    In the practice of process improvement, tools such as the flowchart, the value-stream map (VSM), and a variety of ad hoc variants of such diagrams are commonly used. The purpose of this paper is to present a clear, precise, and consistent framework for the use of such flow diagrams in process

  11. GRAPHICAL MODELS OF THE AIRCRAFT MAINTENANCE PROCESS

    Directory of Open Access Journals (Sweden)

    Stanislav Vladimirovich Daletskiy

    2017-01-01

    Full Text Available The aircraft maintenance is realized by a rapid sequence of maintenance organizational and technical states, and its research and analysis are carried out by statistical methods. The maintenance process comprises aircraft technical states connected with the objective patterns of change in the technical qualities of the aircraft as a maintenance object, and organizational states which determine the subjective organization and planning process of aircraft use. The objective maintenance process is realized in the Maintenance and Repair System, which does not include maintenance organization and planning and is a set of related elements: aircraft, Maintenance and Repair measures, executors and documentation that sets the rules of their interaction for maintaining the aircraft's reliability and readiness for flight. The aircraft organizational and technical states are considered; their characteristics and heuristic estimates of connection in knots and arcs of graphs, and of aircraft organizational states during regular maintenance and at technical state failure, are given. It is shown that in real conditions of aircraft maintenance, planned aircraft technical state control, and maintenance control through it, is only defined by Maintenance and Repair conditions at a given Maintenance and Repair type and form structures, and correspondingly by setting principles of Maintenance and Repair work types to the execution, due to maintenance, by aircraft and all its units maintenance and reconstruction strategies. The realization of the planned Maintenance and Repair process determines the constant maintenance component. The proposed graphical models allow quantitative correlations between graph knots to be revealed in order to improve maintenance processes by statistical research methods, which reduces manning, timetable and expenses for providing safe civil aviation aircraft maintenance.

  12. Context Based Reasoning in Business Process Models

    OpenAIRE

    Balabko, Pavel; Wegmann, Alain

    2003-01-01

    Modeling approaches often are not adapted to human reasoning: models are ambiguous and imprecise. The same model element may have multiple meanings in different functional roles of a system. Existing modeling approaches do not relate these functional roles explicitly to model elements. A principle that can solve this problem is that model elements should be defined in a context. We believe that the explicit modeling of context is especially useful in Business Process Modeling (BPM) where the ...

  13. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Indian Academy of Sciences (India)

    M. K. Griffiths; V. Fedun; R. Erdélyi

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1–3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  14. Modelling of Batch Process Operations

    DEFF Research Database (Denmark)

    2011-01-01

    Here a batch cooling crystalliser is modelled and simulated as is a batch distillation system. In the batch crystalliser four operational modes of the crystalliser are considered, namely: initial cooling, nucleation, crystal growth and product removal. A model generation procedure is shown that s...

  15. Birth/death process model

    Science.gov (United States)

    Solloway, C. B.; Wakeland, W.

    1976-01-01

    First-order Markov model developed on digital computer for population with specific characteristics. System is user interactive, self-documenting, and does not require user to have complete understanding of underlying model details. Contains thorough error-checking algorithms on input and default capabilities.
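The kind of first-order Markov population model this record describes can be sketched with a Gillespie-style simulation of a linear birth/death process; the rates and initial population below are assumed example values, not parameters from the original system.

```python
import random

# Gillespie-style simulation of a linear birth/death process: each
# individual gives birth at `birth_rate` and dies at `death_rate`.
def simulate_birth_death(n0, birth_rate, death_rate, t_end, seed=42):
    rng = random.Random(seed)
    t, n = 0.0, n0
    history = [(t, n)]
    while t < t_end and n > 0:
        total_rate = (birth_rate + death_rate) * n
        t += rng.expovariate(total_rate)          # time to next event
        if rng.random() < birth_rate / (birth_rate + death_rate):
            n += 1                                # birth
        else:
            n -= 1                                # death
        history.append((t, n))
    return history

hist = simulate_birth_death(n0=20, birth_rate=1.0, death_rate=0.9, t_end=5.0)
```

Because the process is first-order Markov, the next state depends only on the current population, which is what makes this event-by-event simulation valid.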

  16. A Ten-Step Process for Developing Teaching Units

    Science.gov (United States)

    Butler, Geoffrey; Heslup, Simon; Kurth, Lara

    2015-01-01

    Curriculum design and implementation can be a daunting process. Questions quickly arise, such as who is qualified to design the curriculum and how do these people begin the design process. According to Graves (2008), in many contexts the design of the curriculum and the implementation of the curricular product are considered to be two mutually…

  17. Modeling business processes: theoretical and practical aspects

    Directory of Open Access Journals (Sweden)

    V.V. Dubinina

    2015-06-01

    Full Text Available The essence of process-oriented enterprise management is examined in the article. The content and types of information technology are analyzed in view of the complexity and differentiation of existing methods, as well as the specificity of the language and terminology of enterprise business process modeling. The theoretical aspects of business process modeling are reviewed, and modern traditional modeling techniques that have found practical application in visualization models of retailers' activity are studied. The theoretical analysis of the modeling methods found that the UFO-toolkit method, developed by Ukrainian scientists, is the most suitable for structural and object analysis of retailers' business processes owing to its integrated systemology capabilities. A visualized simulation model of the retailers' business process "sales as-is" was designed using a combination of UFO elements, with the aim of further practical formalization and optimization of the given business process.

  18. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article reports research on the differences among business process modeling techniques. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. The treatment of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modeling techniques that are easy to use and that serve as the basis for evaluating further modelling techniques.

  19. Parametric time delay modeling for floating point units

    Science.gov (United States)

    Fahmy, Hossam A. H.; Liddicoat, Albert A.; Flynn, Michael J.

    2002-12-01

    A parametric time delay model to compare floating point unit implementations is proposed. This model is used to compare a previously proposed floating point adder using a redundant number representation with other high-performance implementations. The operand width, the fan-in of the logic gates and the radix of the redundant format are used as parameters to the model. The comparison is done over a range of operand widths, fan-in and radices to show the merits of each implementation.
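As a hedged illustration of how such a parametric model is used (the formulas below are invented stand-ins, not the paper's model), delay can be counted in gate levels as a function of operand width, gate fan-in, and the radix of the redundant format:

```python
import math

# Illustrative parametric delay model: delay in gate levels as a function
# of operand width w, gate fan-in f, and redundant-format radix r.
def cla_delay(width, fan_in):
    """Carry-lookahead style adder: O(log_f w) gate levels."""
    return 2 * math.ceil(math.log(width, fan_in)) + 2

def redundant_delay(width, fan_in, radix):
    """Redundant (carry-free) adder: constant levels plus digit handling."""
    return 4 + math.ceil(math.log(radix, fan_in))

# compare single, double, and quad precision significand widths
delays = {w: (cla_delay(w, 4), redundant_delay(w, 4, 8)) for w in (24, 53, 113)}
```

The point such a comparison makes visible: the redundant adder's level count is independent of operand width, while the conventional adder grows logarithmically with it.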

  20. 76 FR 13973 - United States Warehouse Act; Processed Agricultural Products Licensing Agreement

    Science.gov (United States)

    2011-03-15

    ... Farm Service Agency United States Warehouse Act; Processed Agricultural Products Licensing Agreement... warehouse licenses may be issued under the United States Warehouse Act (USWA). Through this notice, FSA is... processed agricultural products that are stored in climate controlled, cooler, and freezer warehouses....

  1. Modeling Events with Cascades of Poisson Processes

    CERN Document Server

    Simma, Aleksandr

    2012-01-01

    We present a probabilistic model of events in continuous time in which each event triggers a Poisson process of successor events. The ensemble of observed events is thereby modeled as a superposition of Poisson processes. Efficient inference is feasible under this model with an EM algorithm. Moreover, the EM algorithm can be implemented as a distributed algorithm, permitting the model to be applied to very large datasets. We apply these techniques to the modeling of Twitter messages and the revision history of Wikipedia.
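The generative side of this model can be sketched directly: each event triggers a Poisson number of successor events at later times, so the observed stream is a superposition of Poisson cascades. The base-event count, branching ratio, and delay scale below are assumed values, and the paper's EM inference is not shown.

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's method, adequate for small lambda."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_cascade(base_events, branching_ratio, delay_mean, t_end, seed=7):
    rng = random.Random(seed)
    # a fixed number of immigrant events, uniform on [0, t_end)
    queue = sorted(rng.uniform(0.0, t_end) for _ in range(base_events))
    events = []
    while queue:
        t = queue.pop(0)
        if t >= t_end:
            continue                  # children past the horizon are dropped
        events.append(t)
        # each event spawns Poisson(branching_ratio) successors
        for _ in range(poisson_sample(rng, branching_ratio)):
            queue.append(t + rng.expovariate(1.0 / delay_mean))
        queue.sort()
    return sorted(events)

events = simulate_cascade(base_events=10, branching_ratio=0.5,
                          delay_mean=0.3, t_end=5.0)
```

With a branching ratio below 1 the cascade is subcritical and terminates; the expected total count is base_events / (1 − branching_ratio).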

  2. The Computer-Aided Analytic Process Model. Operations Handbook for the Analytic Process Model Demonstration Package

    Science.gov (United States)

    1986-01-01

    Research Note 86-06, "The Computer-Aided Analytic Process Model: Operations Handbook for the Analytic Process Model Demonstration Package," Ronald G... Keywords: Analytic Process Model; Operations Handbook; Tutorial; Apple; Systems Taxonomy Model; Training System; Bradley Infantry Fighting Vehicle; BIFV... Item 20, Abstract (continued): companion volume, "The Analytic Process Model for...

  3. Object-Oriented Approach to Modeling Units of Pneumatic Systems

    Directory of Open Access Journals (Sweden)

    Yu. V. Kyurdzhiev

    2014-01-01

    Full Text Available The article shows the relevance of object-oriented programming approaches when modeling pneumatic units (PU). Based on the analysis of the calculation schemes of pneumatic system aggregates, two basic objects were highlighted, namely a flow cavity and a material point. Basic interactions of the objects are defined. Cavity-cavity interaction: exchange of matter and energy with mass flows. Cavity-point interaction: force interaction and exchange of energy in the form of work. Point-point interaction: force interaction, elastic interaction, inelastic interaction, and intervals of displacement. The authors have developed mathematical models of the basic objects and interactions. The models and interactions of elements are implemented with object-oriented programming. Mathematical models of the elements of the PU design scheme are implemented in classes derived from the base class. These classes implement the models of a flow cavity, piston, diaphragm, short channel, diaphragm opened by a given law, spring, bellows, elastic collision, inelastic collision, friction, PU stages with limited movement, etc. Numerical integration of the differential equations of the mathematical models of the PU design scheme elements is based on the fourth-order Runge-Kutta method. On request, each class performs one tact of integration, i.e. calculation of the method coefficients. The paper presents an integration algorithm for the system of differential equations. All objects of the PU design scheme are placed in a unidirectional class list. An iterator loop initiates the integration tact of all the objects in the list. Every fourth iteration makes the transition to the next step of integration. The calculation process stops when any object raises a shutdown flag. The proposed approach was tested in the calculation of a number of PU designs. Compared with traditional approaches to modeling, the authors' method features easy enhancement, code reuse, and high reliability
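The integration scheme this record describes can be sketched compactly: model elements derive from a common base class, live in one list, and a driver advances each of them with classic fourth-order Runge-Kutta. The cavity dynamics below (exponential pressure equalisation with assumed rates) are an invented placeholder, not the actual pneumatic model.

```python
# Object-oriented RK4 integration sketch: elements share a base class and
# are advanced together from a single list, as in the scheme described.
class Element:
    def __init__(self, state):
        self.state = state
    def derivative(self, state):
        raise NotImplementedError

class Cavity(Element):
    """Placeholder dynamics: pressure relaxes toward ambient."""
    def __init__(self, pressure, ambient=100.0, rate=0.5):
        super().__init__(pressure)
        self.ambient, self.rate = ambient, rate
    def derivative(self, state):
        return self.rate * (self.ambient - state)

def rk4_step(element, dt):
    y = element.state
    k1 = element.derivative(y)
    k2 = element.derivative(y + 0.5 * dt * k1)
    k3 = element.derivative(y + 0.5 * dt * k2)
    k4 = element.derivative(y + dt * k3)
    element.state = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

elements = [Cavity(pressure=200.0), Cavity(pressure=50.0)]
for _ in range(100):                 # integrate all objects in the list
    for e in elements:
        rk4_step(e, dt=0.1)
```

New element types (pistons, diaphragms, springs) would subclass `Element` and supply their own `derivative`, which is the code-reuse benefit the abstract claims.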

  4. Reactive-Separator Process Unit for Lunar Regolith Project

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's plans for a lunar habitation outpost call out for process technologies to separate hydrogen sulfide and sulfur dioxide gases from regolith product gas...

  5. Forecasting Models for Hydropower Unit Stability Using LS-SVM

    Directory of Open Access Journals (Sweden)

    Liangliang Qiao

    2015-01-01

    Full Text Available This paper discusses a least squares support vector machine (LS-SVM) approach for forecasting stability parameters of a Francis turbine unit. To obtain training and testing data for the models, four field tests were presented, focusing in particular on the vibration in the Y-direction of the lower generator bearing (LGB) and the pressure in the draft tube (DT). A heuristic method, a neural network using backpropagation (NNBP), is introduced as a comparison model to examine the feasibility of the forecasting performance. In the experimental results, LS-SVM showed forecasting accuracy and performance superior to the NNBP, which is of significant importance for better monitoring unit safety and diagnosing potential faults.
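LS-SVM training reduces to a single linear system, [[0, 1ᵀ], [1, K + I/γ]]·[b; α] = [0; y], which keeps a sketch short. The kernel width, γ, and the toy 1-D signal below are assumptions for illustration; the paper's inputs are field-test vibration and pressure series.

```python
import math

# Pure-Python LS-SVM regression sketch: RBF kernel, one linear solve.
def rbf(x, z, sigma=1.0):
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting (A, rhs modified in place)."""
    n = len(A)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (rhs[r] - s) / A[r][r]
    return x

def lssvm_fit(xs, ys, gamma=100.0):
    n = len(xs)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] * (n + 1)
    for i in range(n):
        A[0][i + 1] = A[i + 1][0] = 1.0
        rhs[i + 1] = ys[i]
        for j in range(n):
            A[i + 1][j + 1] = rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
    sol = solve(A, rhs)
    return sol[0], sol[1:]            # bias b, coefficients alpha

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.sin(x) for x in xs]
b, alpha = lssvm_fit(xs, ys)
pred = b + sum(a * rbf(2.0, x) for a, x in zip(alpha, xs))
```

Unlike a standard SVM, every training point carries a coefficient (no sparsity), which is the trade-off for the closed-form linear-system training.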

  6. Simulation of operational processes in hospital emergency units as lean healthcare tool

    Directory of Open Access Journals (Sweden)

    Andreia Macedo Gomes

    2017-07-01

    Full Text Available Recently, the Lean philosophy has been gaining importance due to a competitive environment, which increases the need to reduce costs. Lean practices and tools have been applied to manufacturing, services, supply chains and startups, and the next frontier is healthcare. Most lean techniques can be easily adapted to health organizations. Therefore, this paper intends to summarize Lean practices and tools that are already being applied in health organizations. Among the numerous techniques and lean tools used, this research highlights Simulation. Therefore, in order to understand the use of Simulation as a Lean Healthcare tool, this research aims to analyze, through the simulation technique, the operational dynamics of the service process of a fictitious hospital emergency unit. Initially, a systematic review of the literature on the practices and tools of Lean Healthcare was carried out in order to identify the main techniques practiced. The review highlighted Simulation as the sixth most cited tool in the literature. Subsequently, a simulation of a service model of an emergency unit was performed with the Arena software. As a main result, it can be highlighted that the attendants in the model presented a degree of idleness and are thus able to handle greater demand. Finally, it was verified that the emergency room is the process with the longest service time and the greatest overload.
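The simulation finding above (idle attendants imply spare capacity) can be reproduced with a few lines of queueing code; the arrival rate, service rate, and number of attendants below are invented, not taken from the paper's Arena model.

```python
import random

# M/M/c-style sketch of an emergency unit: Poisson arrivals, exponential
# service, `servers` attendants, FCFS dispatch to the earliest-free one.
def simulate_unit(arrival_rate, service_rate, servers, n_patients, seed=3):
    rng = random.Random(seed)
    t, busy_until = 0.0, [0.0] * servers
    waits, busy_time = [], 0.0
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)               # next arrival
        k = min(range(servers), key=lambda i: busy_until[i])
        start = max(t, busy_until[k])                    # wait if all busy
        service = rng.expovariate(service_rate)
        busy_until[k] = start + service
        waits.append(start - t)
        busy_time += service
    utilisation = busy_time / (t * servers)
    return sum(waits) / len(waits), utilisation

mean_wait, utilisation = simulate_unit(arrival_rate=1.0, service_rate=0.4,
                                       servers=4, n_patients=5000)
```

Here utilisation ≈ λ/(cμ) = 0.625, i.e. the attendants are idle roughly a third of the time and could absorb additional demand, mirroring the paper's conclusion.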

  7. Systematic approach for the identification of process reference models

    CSIR Research Space (South Africa)

    Van Der Merwe, A

    2009-02-01

    Full Text Available Process models are used in different application domains to capture knowledge on the process flow. Process reference models (PRM) are used to capture reusable process models, which should simplify the identification process of process models...

  8. Ada COCOMO and the Ada Process Model

    Science.gov (United States)

    1989-01-01

    language, the use of incremental development, and the use of the Ada process model, capitalizing on the strengths of Ada to improve the efficiency of software development. This paper presents the portions of the revised Ada COCOMO dealing with the effects of Ada and the Ada process model. The remainder of this section of the paper discusses the objectives of Ada COCOMO. Section 2 describes the Ada Process Model and its overall effects on software...

  9. Simulation Modeling of Software Development Processes

    Science.gov (United States)

    Calavaro, G. F.; Basili, V. R.; Iazeolla, G.

    1996-01-01

    A simulation modeling approach is proposed for the prediction of software process productivity indices, such as cost and time-to-market, and the sensitivity analysis of such indices to changes in the organization parameters and user requirements. The approach uses a timed Petri Net and Object Oriented top-down model specification. Results demonstrate the model representativeness, and its usefulness in verifying process conformance to expectations, and in performing continuous process improvement and optimization.

  10. Branching process models of cancer

    CERN Document Server

    Durrett, Richard

    2015-01-01

    This volume develops results on continuous time branching processes and applies them to study rate of tumor growth, extending classic work on the Luria-Delbruck distribution. As a consequence, the authors calculate the probability that mutations that confer resistance to treatment are present at detection and quantify the extent of tumor heterogeneity. As applications, the authors evaluate ovarian cancer screening strategies and give rigorous proofs for results of Heano and Michor concerning tumor metastasis. These notes should be accessible to students who are familiar with Poisson processes and continuous time. Richard Durrett is mathematics professor at Duke University, USA. He is the author of 8 books, over 200 journal articles, and has supervised more than 40 Ph.D. students. Most of his current research concerns the applications of probability to biology: ecology, genetics, and most recently cancer.
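The Luria-Delbruck-style question these notes address, namely the probability that a resistance mutation is already present when a tumor reaches detection size, can be estimated by naive Monte Carlo. Discrete synchronous generations and the mutation rate below are simplifying assumptions for the sketch, not the book's continuous-time model.

```python
import random

# Each generation every sensitive cell divides; each division founds a
# resistant lineage with probability `mutation_rate`; resistant cells
# simply double thereafter.
def p_resistant_at_detection(mutation_rate, generations, trials, seed=11):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sensitive, resistant = 1, 0
        for _ in range(generations):
            mutants = sum(1 for _ in range(sensitive)
                          if rng.random() < mutation_rate)
            sensitive = 2 * sensitive - mutants
            resistant = 2 * resistant + mutants
        if resistant > 0:
            hits += 1
    return hits / trials

p = p_resistant_at_detection(mutation_rate=1e-3, generations=10, trials=300)
```

With about 2¹⁰ total divisions and rate 10⁻³, the estimate lands near 1 − e⁻¹ ≈ 0.6; the branching-process machinery in the notes gives such probabilities analytically.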

  11. A Process Model for Establishing Business Process Crowdsourcing

    OpenAIRE

    Nguyen Hoang Thuan; Pedro Antunes; David Johnstone

    2017-01-01

    Crowdsourcing can be an organisational strategy to distribute work to Internet users and harness innovation, information, capacities, and variety of business endeavours. As crowdsourcing is different from other business strategies, organisations are often unsure as to how to best structure different crowdsourcing activities and integrate them with other organisational business processes. To manage this problem, we design a process model guiding how to establish business process crowdsourcing....

  12. Total Ship Design Process Modeling

    Science.gov (United States)

    2012-04-30

    Microsoft Project® or Primavera®, and perform process simulations that can investigate risk, cost, and schedule trade-offs. Prior efforts to capture... planning in the face of disruption, delay, and late-changing requirements. ADePT is interfaced with Primavera, the AEC industry's favorite program...

  13. An unit commitment model for hydrothermal systems; Um modelo de unit commitment para sistemas hidrotermicos

    Energy Technology Data Exchange (ETDEWEB)

    Franca, Thiago de Paula; Luciano, Edson Jose Rezende; Nepomuceno, Leonardo [Universidade Estadual Paulista (UNESP), Bauru, SP (Brazil). Dept. de Engenharia Eletrica], Emails: ra611191@feb.unesp.br, edson.joserl@uol.com.br, leo@feb.unesp.br

    2009-07-01

    A Unit Commitment model for hydrothermal systems that includes the start-up/shut-down costs of generators is proposed. These costs have been neglected in a good part of the scheduling models for the operation of hydrothermal systems (pre-dispatch). The impact of representing these costs on total production costs is evaluated. The proposed model is solved by a hybrid methodology, which combines genetic algorithms (to solve the integer part of the problem) with sequential quadratic programming methods. This methodology is applied to the solution of an IEEE test system. The results emphasize the importance of representing start-up/shut-down costs in the generation schedule.
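A toy numeric illustration of the modelling point (start-up costs change the total cost of a schedule): two thermal units, three periods, brute-force enumeration of commitments with merit-order dispatch. The capacities, costs, and demand profile are invented; a real pre-dispatch model would use the paper's GA/SQP hybrid or a MILP solver.

```python
from itertools import product

UNITS = [                       # capacity [MW], marginal cost, start-up cost
    {"cap": 100.0, "mc": 10.0, "start": 0.0},
    {"cap": 80.0,  "mc": 12.0, "start": 150.0},
]
DEMAND = [90.0, 160.0, 90.0]

def schedule_cost(on):
    """on[u][t] in {0,1}; total cost of a feasible schedule, else None."""
    cost, prev = 0.0, [0, 0]
    for t, d in enumerate(DEMAND):
        if sum(UNITS[u]["cap"] for u in (0, 1) if on[u][t]) < d:
            return None             # committed capacity cannot meet demand
        remaining = d
        for u in sorted((0, 1), key=lambda u: UNITS[u]["mc"]):
            if on[u][t]:            # dispatch committed units in merit order
                g = min(remaining, UNITS[u]["cap"])
                cost += g * UNITS[u]["mc"]
                remaining -= g
        for u in (0, 1):            # start-up cost on each off->on transition
            if on[u][t] and not prev[u]:
                cost += UNITS[u]["start"]
            prev[u] = on[u][t]
    return cost

feasible = [c for on in product(product((0, 1), repeat=3), repeat=2)
            if (c := schedule_cost(on)) is not None]
best_cost = min(feasible)
```

Dropping the start-up term from `schedule_cost` would lower every schedule containing a cold start, which is exactly the bias the abstract says pre-dispatch models incur by neglecting these costs.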

  14. Productivity Gap and Asymmetric Trade Relations: The Canada-United States of America Integration Process

    Directory of Open Access Journals (Sweden)

    Germán H. Gonzalez

    2014-08-01

    Full Text Available The usefulness of the European model of integration is currently subject to debate and the North American integration process has been largely ignored as a comparative framework. The asymmetrical relationship between Canada and the United States began a long time before NAFTA, and the study of this process could shed light on the usual problems faced by Latin American countries. This article attempts to encourage discussion about this topic. Particularly, there is evidence for a substantial and positive change in Canadian productivity at the time of the Canada-US Free Trade Agreement (CUFTA). However, the enactment of the North American Free Trade Agreement (NAFTA) does not seem to have had the same effect as the earlier treaty.

  15. Silicon-Carbide Power MOSFET Performance in High Efficiency Boost Power Processing Unit for Extreme Environments

    Science.gov (United States)

    Ikpe, Stanley A.; Lauenstein, Jean-Marie; Carr, Gregory A.; Hunter, Don; Ludwig, Lawrence L.; Wood, William; Del Castillo, Linda Y.; Fitzpatrick, Fred; Chen, Yuan

    2016-01-01

    Silicon-Carbide device technology has generated much interest in recent years. With superior thermal performance, power ratings and potential switching frequencies over its Silicon counterpart, Silicon-Carbide offers a greater possibility for high-power switching applications in extreme environments. In particular, the maturing process technology of Silicon-Carbide Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs) has produced a plethora of commercially available, power-dense, low on-state-resistance devices capable of switching at high frequencies. A novel hard-switched power processing unit (PPU) is implemented utilizing Silicon-Carbide power devices. Accelerated life data are captured and assessed in conjunction with a damage accumulation model of gate oxide and drain-source junction lifetime to evaluate potential system performance in high-temperature environments.

  16. Speedup for quantum optimal control from automatic differentiation based on graphics processing units

    Science.gov (United States)

    Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David

    2017-04-01

    We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speedup calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.

  17. Nitrogen deposition to the United States: distribution, sources, and processes

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-01-01

    Full Text Available We simulate nitrogen deposition over the US in 2006–2008 by using the GEOS-Chem global chemical transport model at 1/2° × 2/3° horizontal resolution over North America and adjacent oceans. US emissions of NOx and NH3 in the model are 6.7 and 2.9 Tg N a−1 respectively, including a 20% natural contribution for each. Ammonia emissions are a factor of 3 lower in winter than summer, providing a good match to US network observations of NHx (≡NH3 gas + ammonium aerosol) and ammonium wet deposition fluxes. Model comparisons to observed deposition fluxes and surface air concentrations of oxidized nitrogen species (NOy) show overall good agreement but excessive wintertime HNO3 production over the US Midwest and Northeast. This suggests that the model overestimates N2O5 hydrolysis in aerosols, and a possible factor is inhibition by aerosol nitrate. Model results indicate a total nitrogen deposition flux of 6.5 Tg N a−1 over the contiguous US, including 4.2 as NOy and 2.3 as NHx. Domestic anthropogenic, foreign anthropogenic, and natural sources contribute respectively 78%, 6%, and 16% of total nitrogen deposition over the contiguous US in the model. The domestic anthropogenic contribution generally exceeds 70% in the east and in populated areas of the west, and is typically 50–70% in remote areas of the west. Total nitrogen deposition in the model exceeds 10 kg N ha−1 a−1 over 35% of the contiguous US.

  19. Modeling, Learning, and Processing of Text Technological Data Structures

    CERN Document Server

    Kühnberger, Kai-Uwe; Lobin, Henning; Lüngen, Harald; Storrer, Angelika; Witt, Andreas

    2012-01-01

    Researchers in many disciplines have been concerned with modeling textual data in order to account for texts as the primary information unit of written communication. The book “Modelling, Learning and Processing of Text-Technological Data Structures” deals with this challenging information unit. It focuses on theoretical foundations of representing natural language texts as well as on concrete operations of automatic text processing. Following this integrated approach, the present volume includes contributions to a wide range of topics in the context of processing of textual data. This relates to the learning of ontologies from natural language texts, the annotation and automatic parsing of texts as well as the detection and tracking of topics in texts and hypertexts. In this way, the book brings together a wide range of approaches to procedural aspects of text technology as an emerging scientific discipline.

  20. Modeling and simulation of membrane process

    Science.gov (United States)

    Staszak, Maciej

    2017-06-01

    The article presents different approaches to the mathematical modeling of polymer membranes. Traditional models based on experimental physicochemical correlations, together with balance models, are presented in the first part. Quantum and molecular mechanics models are presented next, as they are increasingly popular for polymer membranes in fuel cells. The first part closes with neural network models, which have found use for different types of processes in polymer membranes. The second part is devoted to models of fluid dynamics. The computational fluid dynamics techniques can be divided into the solving of the Navier-Stokes equations and lattice Boltzmann models. Both approaches are presented with a focus on membrane processes.
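The balance-model family mentioned above can be illustrated with a tiny 1-D Fickian diffusion solver for the concentration profile across a membrane; the diffusivity, thickness, and boundary concentrations are assumed example values.

```python
# Explicit finite-difference solution of dc/dt = D d2c/dz2 with fixed
# boundary concentrations: a minimal "balance model" of membrane transport.
def diffuse(c_left, c_right, n_cells, d_coef, dx, dt, steps):
    c = [0.0] * n_cells
    for _ in range(steps):
        new = c[:]
        for i in range(n_cells):
            left = c_left if i == 0 else c[i - 1]
            right = c_right if i == n_cells - 1 else c[i + 1]
            new[i] = c[i] + d_coef * dt / dx**2 * (left - 2 * c[i] + right)
        c = new
    return c

# membrane of 9 cells (~0.1 mm total, dx = 1e-5 m), D = 1e-9 m^2/s
profile = diffuse(c_left=1.0, c_right=0.0, n_cells=9,
                  d_coef=1e-9, dx=1e-5, dt=0.02, steps=4000)
```

At steady state the profile is linear across the membrane; the explicit scheme is stable here because D·dt/dx² = 0.2 ≤ 1/2.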

  1. GPstuff: Bayesian Modeling with Gaussian Processes

    NARCIS (Netherlands)

    Vanhatalo, J.; Riihimaki, J.; Hartikainen, J.; Jylänki, P.P.; Tolvanen, V.; Vehtari, A.

    2013-01-01

    The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.
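
    As an illustration of the core computation behind GP toolboxes such as GPstuff, the following is a minimal NumPy sketch of Gaussian process posterior prediction with a squared-exponential kernel. All function names and hyperparameters are illustrative choices, not GPstuff's API:

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0, variance=1.0):
    """Squared-exponential covariance between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean of a zero-mean GP: K_*^T (K + sigma^2 I)^{-1} y."""
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_star = sq_exp_kernel(x_train, x_test)
    alpha = np.linalg.solve(K, y_train)
    return K_star.T @ alpha

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)
x_new = np.array([np.pi / 2])
print(gp_posterior_mean(x, y, x_new))  # close to sin(pi/2) = 1
```

    The posterior mean at a test point is a kernel-weighted combination of the training targets; the sparse approximations and inference methods mentioned in the abstract refine exactly this computation.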

  2. Online Rule Generation Software Process Model

    National Research Council Canada - National Science Library

    Sudeep Marwaha; Alka Aroa; Satma M C; Rajni Jain; R C Goyal

    2013-01-01

    .... The software process model for rule generation using decision tree classifier refers to the various steps required to be executed for the development of a web based software model for decision rule generation...

  3. Modeling pellet impact drilling process

    OpenAIRE

    Kovalev, Artem Vladimirovich; Ryabchikov, Sergey Yakovlevich; Isaev, Evgeniy Dmitrievich; Ulyanova, Oksana Sergeevna

    2016-01-01

    The paper describes pellet impact drilling which could be used to increase the drilling speed and the rate of penetration when drilling hard rocks. Pellet impact drilling implies rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the earth formation encountered. The pellets are circulated in the bottom hole by a high velocity fluid jet, which is the principle component of the ejector pellet impact drill bit. The experiments conducted have allowed modeling t...

  4. Accelerating Image Reconstruction in Three-Dimensional Optoacoustic Tomography on Graphics Processing Units

    CERN Document Server

    Wang, Kun; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A; 10.1118/1.4774361

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional (2D) imaging models. One important reason is that 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer-simulation and experimental studies are conducted to investigate the computational efficiency and numerical a...

  5. Harnessing graphics processing units for improved neuroimaging statistics.

    Science.gov (United States)

    Eklund, Anders; Villani, Mattias; Laconte, Stephen M

    2013-09-01

    Simple models and algorithms based on restrictive assumptions are often used in the field of neuroimaging for studies involving functional magnetic resonance imaging, voxel based morphometry, and diffusion tensor imaging. Nonparametric statistical methods or flexible Bayesian models can be applied rather easily to yield more trustworthy results. The spatial normalization step required for multisubject studies can also be improved by taking advantage of more robust algorithms for image registration. A common drawback of algorithms based on weaker assumptions, however, is the increase in computational complexity. In this short overview, we will therefore present some examples of how inexpensive PC graphics hardware, normally used for demanding computer games, can be used to enable practical use of more realistic models and accurate algorithms, such that the outcome of neuroimaging studies really can be trusted.

  6. Interpretive and Formal Models of Discourse Processing.

    Science.gov (United States)

    Bulcock, Jeffrey W.; Beebe, Mona J.

    Distinguishing between interpretive and formal models of discourse processing and between qualitative and quantitative research, this paper argues that formal models are the analogues of interpretive models, and that the two are complementary. It observes that interpretive models of reading are being increasingly derived from qualitative research…

  7. The Cilium: Cellular Antenna and Central Processing Unit

    OpenAIRE

    Malicki, Jarema J.; Johnson, Colin A.

    2017-01-01

    Cilia mediate an astonishing diversity of processes. Recent advances provide unexpected insights into the regulatory mechanisms of cilium formation, and reveal diverse regulatory inputs that are related to the cell cycle, cytoskeleton, proteostasis, and cilia-mediated signaling itself. Ciliogenesis and cilia maintenance are regulated by reciprocal antagonistic or synergistic influences, often acting in parallel to each other. By receiving parallel inputs, cilia appear to integrate multiple si...

  8. Modeled Top of the Overburden Geomodel Unit (obtop_f)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The obtop_f grid represents the modeled elevation of the top of the Overburden geomodel unit at a 500 foot resolution. It is one grid of a geomodel that consists of...

  9. Social Innovation using the Best Practice Unit model

    NARCIS (Netherlands)

    Wilken, Jean Pierre

    2013-01-01

    The model of the Best Practice Unit (BPU) is a specific form of practice based research. It is a variation of the Community of Practice (CoP) as developed by Wenger, McDermott and Snyder (2002) with the specific aim to innovate a professional practice by combining learning, development and research.

  10. Model United Nations and Deep Learning: Theoretical and Professional Learning

    Science.gov (United States)

    Engel, Susan; Pallas, Josh; Lambert, Sarah

    2017-01-01

    This article demonstrates that the purposeful subject design, incorporating a Model United Nations (MUN), facilitated deep learning and professional skills attainment in the field of International Relations. Deep learning was promoted in subject design by linking learning objectives to Anderson and Krathwohl's (2001) four levels of knowledge or…

  11. Modeling the effect of short stay units on patient admissions

    NARCIS (Netherlands)

    Zonderland, Maartje E.; Boucherie, Richard J.; Carter, Michael W.; Stanford, David A.

    2015-01-01

    Two purposes of Short Stay Units (SSUs) are the reduction of Emergency Department crowding and increased urgent patient admissions. At an SSU, urgent patients are temporarily held until they can either go home or be transferred to an inpatient ward. In this paper we present an overflow model to evaluate

  12. League of Our Own: Creating a Model United Nations Scrimmage Conference

    Science.gov (United States)

    Ripley, Brian; Carter, Neal; Grove, Andrea K.

    2009-01-01

    Model United Nations (MUN) provides a great forum for students to learn about global issues and political processes, while also practicing communication and negotiation skills that will serve them well for a lifetime. Intercollegiate MUN conferences can be problematic, however, in terms of logistics, budgets, and student participation. In order to…

  14. Modelling income processes with lots of heterogeneity

    DEFF Research Database (Denmark)

    Browning, Martin; Ejrnæs, Mette; Alvarez, Javier

    2010-01-01

    this observable homogeneity, we find more latent heterogeneity than previous investigators. We show that allowance for heterogeneity makes substantial differences to estimates of model parameters and to outcomes of interest. Additionally, we find strong evidence against the hypothesis that any worker has a unit...

  15. Graphics processing unit (GPU)-accelerated particle filter framework for positron emission tomography image reconstruction.

    Science.gov (United States)

    Yu, Fengchao; Liu, Huafeng; Hu, Zhenghui; Shi, Pengcheng

    2012-04-01

    As a consequence of the random nature of photon emissions and detections, the data collected by a positron emission tomography (PET) imaging system can be shown to be Poisson distributed. Meanwhile, there have been considerable efforts within the tracer kinetic modeling communities aimed at establishing the relationship between the PET data and physiological parameters that affect the uptake and metabolism of the tracer. Both statistical and physiological models are important to PET reconstruction. The majority of previous efforts are based on simplified, nonphysical mathematical expressions, such as Poisson modeling of the measured data, which is, on the whole, completed without consideration of the underlying physiology. In this paper, we proposed a graphics processing unit (GPU)-accelerated reconstruction strategy that can take both statistical model and physiological model into consideration with the aid of state-space evolution equations. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify these two constraints into a general framework. In order to accelerate reconstruction, GPU-based parallel computing is introduced. Experiments with Zubal-thorax-phantom data, Monte Carlo simulated phantom data, and real phantom data show the power of the method. Furthermore, thanks to the computing power of the GPU, the reconstruction time is practical for clinical application.
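
    The state-space formulation described above pairs an evolution equation for the activity with a Poisson observation equation. A minimal, illustrative bootstrap particle filter for a scalar version of such a model — the kinetics and all parameters are stand-ins, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=2000, drift=1.0, noise=0.5):
    """Bootstrap particle filter for a scalar activity level observed
    through Poisson photon counts (illustrative stand-in for the
    paper's state-space PET formulation)."""
    particles = rng.gamma(2.0, 2.0, n_particles)  # initial state guess
    estimates = []
    for y in observations:
        # State evolution: random-walk kinetics (hypothetical model).
        particles = np.abs(particles * drift + rng.normal(0, noise, n_particles))
        # Observation equation: counts ~ Poisson(activity); weight by likelihood.
        log_w = y * np.log(particles + 1e-12) - particles
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # Multinomial resampling.
        particles = rng.choice(particles, size=n_particles, p=w)
    return estimates

counts = [8, 9, 10, 11, 10]
estimates = particle_filter(counts)
print(estimates)  # estimates track the underlying activity level
```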

  16. Seismic interpretation using Support Vector Machines implemented on Graphics Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Kuzma, H A; Rector, J W; Bremer, D

    2006-06-22

    Support Vector Machines (SVMs) estimate lithologic properties of rock formations from seismic data by interpolating between known models using synthetically generated model/data pairs. SVMs are related to kriging and radial basis function neural networks. In our study, we train an SVM to approximate an inverse to the Zoeppritz equations. Training models are sampled from distributions constructed from well-log statistics. Training data is computed via a physically realistic forward modeling algorithm. In our experiments, each training data vector is a set of seismic traces similar to a 2-d image. The SVM returns a model given by a weighted comparison of the new data to each training data vector. The method of comparison is given by a kernel function which implicitly transforms data into a high-dimensional feature space and performs a dot-product. The feature space of a Gaussian kernel is made up of sines and cosines and so is appropriate for band-limited seismic problems. Training an SVM involves estimating a set of weights from the training model/data pairs. It is designed to be an easy problem; at worst it is a quadratic programming problem on the order of the size of the training set. By implementing the slowest part of our SVM algorithm on a graphics processing unit (GPU), we improve the speed of the algorithm by two orders of magnitude. Our SVM/GPU combination achieves results that are similar to those of conventional iterative inversion in fractions of the time.
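
    The prediction rule described above — a weighted comparison of new data against every training vector through a Gaussian kernel — can be sketched with regularized kernel least squares as a simple stand-in for the trained SVM. All data and parameters here are hypothetical:

```python
import numpy as np

def gaussian_kernel(A, B, gamma=0.5):
    """K(x, z) = exp(-gamma * ||x - z||^2): an implicit dot product
    in a high-dimensional feature space."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Hypothetical training pairs: data vectors -> a model parameter.
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_train = np.array([0.0, 1.0, 1.0, 2.0])

# Estimate weights from the training model/data pairs: (K + lam I) w = y.
K = gaussian_kernel(X_train, X_train)
w = np.linalg.solve(K + 1e-6 * np.eye(len(X_train)), y_train)

# Prediction = kernel-weighted comparison against every training vector.
x_new = np.array([[0.5, 0.5]])
pred = gaussian_kernel(x_new, X_train) @ w
print(pred)  # roughly interpolates the surrounding training targets
```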

  17. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase-interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints are considered, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows the implementation of an efficient solving strategy and a concise error diagnosis. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  18. Analysis of Unit Process Cost for an Engineering-Scale Pyroprocess Facility Using a Process Costing Method in Korea

    Directory of Open Access Journals (Sweden)

    Sungki Kim

    2015-08-01

    Pyroprocessing, a dry recycling method, converts spent nuclear fuel into U (Uranium)/TRU (TRansUranium) metal ingots in a high-temperature molten salt phase. This paper provides the unit process costs of a pyroprocess facility that can process up to 10 tons of pyroprocessing product per year, obtained by utilizing the process costing method. Toward this end, the pyroprocess was classified into four unit processes: pretreatment, electrochemical reduction, electrorefining and electrowinning. The unit process cost was calculated by classifying the cost consumed at each process into raw material and conversion costs. The unit process costs of pretreatment, electrochemical reduction, electrorefining and electrowinning were calculated as 195 US$/kgU-TRU, 310 US$/kgU-TRU, 215 US$/kgU-TRU and 231 US$/kgU-TRU, respectively. Finally, the total pyroprocess cost was calculated as 951 US$/kgU-TRU. In addition, the cost drivers for the raw material cost were identified as the cost of Li3PO4, needed for the LiCl-KCl purification process, and of platinum, used as an anode electrode in the electrochemical reduction process.
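
    The quoted total can be reproduced by summing the four unit process costs:

```python
# Total pyroprocess cost as the sum of the four unit process costs (US$/kgU-TRU).
unit_costs = {
    "pretreatment": 195,
    "electrochemical reduction": 310,
    "electrorefining": 215,
    "electrowinning": 231,
}
total = sum(unit_costs.values())
print(total)  # 951, matching the quoted total of 951 US$/kgU-TRU
```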

  19. COSTS AND PROFITABILITY IN FOOD PROCESSING: PASTRY TYPE UNITS

    Directory of Open Access Journals (Sweden)

    DUMITRANA MIHAELA

    2013-08-01

    For each company, profitability, product quality and customer satisfaction are the most important targets. To attain these targets, managers need to know all about the costs that are used in decision making. What kinds of costs? How are these costs calculated for a specific sector such as food processing? These are only a few of the questions answered in our paper. We consider that a case study for this sector may be relevant for all people who are interested in increasing the profitability of this specific activity sector.

  20. Ultra-processed food consumption in children from a Basic Health Unit.

    Science.gov (United States)

    Sparrenberger, Karen; Friedrich, Roberta Roggia; Schiffner, Mariana Dihl; Schuch, Ilaine; Wagner, Mário Bernardes

    2015-01-01

    To evaluate the contribution of ultra-processed food (UPF) on the dietary consumption of children treated at a Basic Health Unit and the associated factors. Cross-sectional study carried out with a convenience sample of 204 children, aged 2-10 years old, in Southern Brazil. Children's food intake was assessed using a 24-h recall questionnaire. Food items were classified as minimally processed, processed for culinary use, and ultra-processed. A semi-structured questionnaire was applied to collect socio-demographic and anthropometric variables. Overweight in children was classified using a Z score >2 for children younger than 5 and Z score >+1 for those aged between 5 and 10 years, using the body mass index for age. Overweight frequency was 34% (95% CI: 28-41%). Mean energy consumption was 1672.3 kcal/day, with 47% (95% CI: 45-49%) coming from ultra-processed food. In the multiple linear regression model, maternal education (r=0.23; p=0.001) and child age (r=0.40; pfactors associated with a greater percentage of UPF in the diet (r=0.42; pfactor for the consumption of such products. Copyright © 2015 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.

  1. An Extension to the Weibull Process Model

    Science.gov (United States)

    1981-11-01

    ...indicating its importance to applications. AN EXTENSION TO THE WEIBULL PROCESS MODEL. 1. INTRODUCTION. Recent papers by Bain and Engelhardt (1980) and Crow

  2. ENTREPRENEURIAL OPPORTUNITIES IN FOOD PROCESSING UNITS (WITH SPECIAL REFERENCES TO BYADGI RED CHILLI COLD STORAGE UNITS IN THE KARNATAKA STATE

    Directory of Open Access Journals (Sweden)

    P. ISHWARA

    2010-01-01

    After the green revolution, we are now ushering in the evergreen revolution in the country; food processing is an evergreen activity and the key to the agricultural sector. In this paper an attempt has been made to study the workings of food processing units, with special reference to Red Chilli cold storage units in the Byadgi district of Karnataka State. Byadgi has been famous for Red Chilli since the days of antiquity. The vast and extensive market yard in Byadagi taluk is famous as the second largest Red Chilli market in the country. However, the most common and recurring problem faced by the farmer is the inability to store enough red chilli from one harvest to another. Red chilli that was locally abundant for only a short period of time had to be stored against times of scarcity. In recent years, due to oleoresin, demand for Red Chilli has grown in other countries like Sri Lanka, Bangladesh, America, Europe, Nepal, Indonesia and Mexico. The study reveals that all the cold storage units of the study area use the vapour compression refrigeration method. All entrepreneurs are satisfied with their turnover and profit and are in a good economic position. Even though average turnover and profits have increased, a few units have shown a negligible decrease in turnover and profit, due to competition from the increasing number of cold storages and from earlier established units. The cold storages of the study area store red chilli, chilli seeds, chilli powder, tamarind, jeera, dania, turmeric, sunflower, zinger, channa, flower seeds, etc. But 80 per cent of each cold storage is filled by red chilli, owing to the existence of the vast and extensive red chilli market yard in Byadgi. There is no business without problems. In the same way, the entrepreneurs chosen for the study face a few problems in their business like skilled labour, technical and management

  3. Hybrid modelling of anaerobic wastewater treatment processes.

    Science.gov (United States)

    Karama, A; Bernard, O; Genovesi, A; Dochain, D; Benhammou, A; Steyer, J P

    2001-01-01

    This paper presents a hybrid approach for the modelling of an anaerobic digestion process. The hybrid model combines a feed-forward network, describing the bacterial kinetics, and the a priori knowledge based on the mass balances of the process components. We have considered an architecture which incorporates the neural network as a static model of unmeasured process parameters (kinetic growth rate) and an integrator for the dynamic representation of the process using a set of dynamic differential equations. The paper contains a description of the neural network component training procedure. The performance of this approach is illustrated with experimental data.
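
    The hybrid architecture — a data-driven kinetic term embedded in first-principles mass balances — can be sketched as follows. Here `mu_hat` stands in for the trained feed-forward network, and all parameter values are illustrative, not the paper's:

```python
def mu_hat(S):
    """Stand-in for the trained feed-forward network predicting the
    specific growth rate from the substrate level (Monod-like shape)."""
    return 0.4 * S / (0.5 + S)

def simulate(X0=0.1, S0=5.0, yield_coeff=0.5, dt=0.01, steps=2000):
    """Hybrid model: first-principles mass balances integrated in time
    with a black-box kinetic term (illustrative, not the paper's model)."""
    X, S = X0, S0
    for _ in range(steps):
        mu = mu_hat(S)                 # data-driven kinetics
        growth = dt * mu * X           # biomass balance: dX/dt = mu(S) X
        X += growth
        S = max(S - growth / yield_coeff, 0.0)  # substrate balance
    return X, S

print(simulate())  # biomass grows while substrate is consumed
```

    The differential equations carry the a priori mass-balance knowledge; only the unmeasured growth rate is left to the learned component, which is the essence of the hybrid approach described above.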

  4. Quantification of Cell-Free DNA in Red Blood Cell Units in Different Whole Blood Processing Methods

    Directory of Open Access Journals (Sweden)

    Andrew W. Shih

    2016-01-01

    Background. Whole blood donations in Canada are processed by either the red cell filtration (RCF) or whole blood filtration (WBF) method, where leukoreduction is potentially delayed in WBF. Fresh WBF red blood cells (RBCs) have been associated with increased in-hospital mortality after transfusion. Cell-free DNA (cfDNA) is released by neutrophils prior to leukoreduction, degraded during RBC storage, and is associated with adverse patient outcomes. We explored cfDNA levels in RBCs prepared by RCF and WBF at different storage durations. Methods. Equal numbers of fresh (stored ≤14 days) and older RBCs were sampled. cfDNA was quantified by spectrophotometry and PicoGreen. Separate regression models determined the association of processing method and storage duration, and their interaction, with cfDNA. Results. cfDNA in 120 RBC units (73 RCF, 47 WBF) was measured. Using PicoGreen, WBF units overall had higher cfDNA than RCF units (p=0.0010); fresh WBF units had higher cfDNA than fresh RCF units (p=0.0093). Using spectrophotometry, fresh RBC units overall had higher cfDNA than older units (p=0.0031); fresh WBF RBCs had higher cfDNA than older RCF RBCs (p=0.024). Conclusion. Higher cfDNA in fresh WBF blood was observed compared to older RCF blood. Further study is required on the association with patient outcomes.

  5. A Parallel Algebraic Multigrid Solver on Graphics Processing Units

    KAUST Repository

    Haase, Gundolf

    2010-01-01

    The paper presents a multi-GPU implementation of the preconditioned conjugate gradient algorithm with an algebraic multigrid preconditioner (PCG-AMG) for an elliptic model problem on a 3D unstructured grid. An efficient parallel sparse matrix-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen node Infiniband cluster and a multi-GPU configuration with eight GPUs is about 100 times faster than a typical server CPU core. © 2010 Springer-Verlag.
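
    The PCG iteration at the heart of the solver can be sketched in NumPy, with a Jacobi (diagonal) preconditioner standing in for the paper's AMG preconditioner, applied to a 1D version of the elliptic model problem:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient; the diagonal (Jacobi)
    preconditioner here is a simple stand-in for AMG."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r          # apply preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Poisson matrix (symmetric positive definite): a classic elliptic model problem.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))  # small residual after convergence
```

    The GPU speedups reported above come from parallelizing exactly the sparse matrix-vector products (`A @ p`) and vector updates that dominate each iteration.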

  6. The Cilium: Cellular Antenna and Central Processing Unit.

    Science.gov (United States)

    Malicki, Jarema J; Johnson, Colin A

    2017-02-01

    Cilia mediate an astonishing diversity of processes. Recent advances provide unexpected insights into the regulatory mechanisms of cilium formation, and reveal diverse regulatory inputs that are related to the cell cycle, cytoskeleton, proteostasis, and cilia-mediated signaling itself. Ciliogenesis and cilia maintenance are regulated by reciprocal antagonistic or synergistic influences, often acting in parallel to each other. By receiving parallel inputs, cilia appear to integrate multiple signals into specific outputs and may have functions similar to logic gates of digital systems. Some combinations of input signals appear to impose higher hierarchical control related to the cell cycle. An integrated view of these regulatory inputs will be necessary to understand ciliogenesis and its wider relevance to human biology. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  7. VARTM Process Modeling of Aerospace Composite Structures

    Science.gov (United States)

    Song, Xiao-Lan; Grimsley, Brian W.; Hubert, Pascal; Cano, Roberto J.; Loos, Alfred C.

    2003-01-01

    A three-dimensional model was developed to simulate the VARTM composite manufacturing process. The model considers the two important mechanisms that occur during the process: resin flow, and compaction and relaxation of the preform. The model was used to simulate infiltration of a carbon preform with an epoxy resin by the VARTM process. The predicted flow patterns and preform thickness changes agreed qualitatively with the measured values. However, the predicted total infiltration times were much longer than measured, most likely due to the inaccurate preform permeability values used in the simulation.

  8. Bioinspired decision architectures containing host and microbiome processing units.

    Science.gov (United States)

    Heyde, K C; Gallagher, P W; Ruder, W C

    2016-09-27

    Biomimetic robots have been used to explore and explain natural phenomena ranging from the coordination of ants to the locomotion of lizards. Here, we developed a series of decision architectures inspired by the information exchange between a host organism and its microbiome. We first modeled the biochemical exchanges of a population of synthetically engineered E. coli. We then built a physical, differential drive robot that contained an integrated, onboard computer vision system. A relay was established between the simulated population of cells and the robot's microcontroller. By placing the robot within a two-dimensional arena containing a target, we explored how different aspects of the simulated cells and the robot's microcontroller could be integrated to form hybrid decision architectures. We found that distinct decision architectures allow us to develop models of computation with specific strengths, such as runtime efficiency or minimal memory allocation. Taken together, our hybrid decision architectures provide a new strategy for developing bioinspired control systems that integrate both living and nonliving components.

  9. Declarative business process modelling: principles and modelling languages

    Science.gov (United States)

    Goedertier, Stijn; Vanthienen, Jan; Caron, Filip

    2015-02-01

    The business process literature has proposed a multitude of business process modelling approaches or paradigms, each in response to a different business process type with a unique set of requirements. Two polar paradigms, i.e. the imperative and the declarative paradigm, appear to define the extreme positions on the paradigm spectrum. While imperative approaches focus on explicitly defining how an organisational goal should be reached, the declarative approaches focus on the directives, policies and regulations restricting the potential ways to achieve the organisational goal. In between, a variety of hybrid paradigms can be distinguished, e.g. advanced and adaptive case management. This article focuses on the less-exposed declarative approach to process modelling. An outline of declarative process modelling and the modelling approaches is presented, followed by an overview of the observed declarative process modelling principles and an evaluation of the declarative process modelling approaches.

  10. Process and Context in Choice Models

    DEFF Research Database (Denmark)

    Ben-Akiva, Moshe; Palma, André de; McFadden, Daniel

    2012-01-01

    We develop a general framework that extends choice models by including an explicit representation of the process and context of decision making. Process refers to the steps involved in decision making. Context refers to factors affecting the process, focusing in this paper on social networks. The...

  11. Will Rule based BPM obliterate Process Models?

    NARCIS (Netherlands)

    Joosten, S.; Joosten, H.J.M.

    2007-01-01

    Business rules can be used directly for controlling business processes, without reference to a business process model. In this paper we propose to use business rules to specify both business processes and the software that supports them. Business rules expressed in smart mathematical notations bring

  12. Low cost solar array project production process and equipment task. A Module Experimental Process System Development Unit (MEPSDU)

    Science.gov (United States)

    1981-01-01

    Technical readiness for the production of photovoltaic modules using single crystal silicon dendritic web sheet material is demonstrated by: (1) selection, design and implementation of solar cell and photovoltaic module process sequence in a Module Experimental Process System Development Unit; (2) demonstration runs; (3) passing of acceptance and qualification tests; and (4) achievement of a cost effective module.

  13. A MELCOR model of Fukushima Daiichi Unit 3 accident

    Energy Technology Data Exchange (ETDEWEB)

    Sevón, Tuomo, E-mail: tuomo.sevon@vtt.fi

    2015-04-01

    Highlights: • A MELCOR model of the Fukushima Unit 3 accident was developed. • The MELCOR input file is published as electronic supplementary data with this paper. • Reactor pressure vessel lower head failed about 53 h after the earthquake. • 70% of fuel was discharged from reactor to containment. • 0.95% of cesium inventory was released to the environment. - Abstract: A MELCOR model of the Fukushima Daiichi Unit 3 accident was developed. The model is based on publicly available information, and the MELCOR input file is published as electronic supplementary data with this paper. According to the calculation, the reactor pressure vessel lower head failed about 53 h after the earthquake. At the end of the calculation, 30% of the fuel was still inside the reactor and 70% had been discharged to the containment. Almost all of the radioactive noble gases and 0.95% of the cesium inventory were released to the environment during the accident.

  14. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst's work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as an input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be mathematically described, and that it will therefore be possible to create a tool that helps process analysts design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse business process model design and quality measures. It was found that the mentioned area had already been the subject of research investigation in the past. Thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed scientific publications and existing quality measures do not reflect all important attributes of business process model clarity, simplicity and completeness. Therefore it would be appropriate to add new measures of quality.

  15. Development of Wolsong Unit 2 Containment Analysis Model

    Energy Technology Data Exchange (ETDEWEB)

    Hoon, Choi [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of); Jin, Ko Bong; Chan, Park Young [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    To be prepared for the full-scope safety analysis of Wolsong unit 2 with modified fuel, input decks for various objectives, which can be read by GOTHIC 7.2b(QA), were developed and tested in steady-state simulation. A detailed nodalization of 39 control volumes and 92 flow paths was constructed to determine the differential pressure across internal walls and the hydrogen concentration and distribution inside containment. A lumped model with 15 control volumes and 74 flow paths has also been developed to reduce computer run time for assessments whose results are not sensitive to the detailed thermal-hydraulic distribution inside containment, such as peak pressure, pressure-dependent signals and radionuclide release. The input data files provide simplified representations of the geometric layout of the containment building (volumes, dimensions, flow paths, doors, panels, etc.) and the performance characteristics of the various containment subsystems. The parameter values are based on best-estimate or design values; the analysis values are determined by conservatism depending on the analysis objective and may differ between analysis objectives. Basic input decks of Wolsong unit 2 were developed for the various analysis purposes with GOTHIC 7.2b(QA). Depending on the analysis objective, two types of models are prepared: the detailed model represents each confined room in the containment as a separate node, with all geometric data based on the drawings of Wolsong unit 2. The developed containment models simulate the steady state well at the designated initial conditions. These base models will be used for Wolsong unit 2 whenever a full-scope safety analysis is needed.

  16. Modeling of percolation process in hemicellulose hydrolysis.

    Science.gov (United States)

    Cahela, D R; Lee, Y Y; Chambers, R P

    1983-01-01

    A mathematical model was developed for a percolation reactor in connection with consecutive first-order reactions. The model was designed to simulate acid-catalyzed cellulose or hemicellulose hydrolysis. The modeling process resulted in an analytically derived reactor equation, including mass-transfer effects, which was found to be useful in process design and reactor optimization. The model was verified by experimental data obtained from hemicellulose hydrolysis.
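The consecutive first-order scheme underlying such hydrolysis models (polymer solubilizing to sugar, which then degrades) has a standard closed-form solution; a minimal sketch, with hypothetical function names and rate constants, is:

```python
import math

def consecutive_first_order(k1, k2, a0, t):
    """Analytic solution of A --k1--> B --k2--> C at time t, e.g.
    hemicellulose (A) hydrolyzing to sugar (B) that degrades to C."""
    a = a0 * math.exp(-k1 * t)
    if abs(k1 - k2) < 1e-12:
        b = a0 * k1 * t * math.exp(-k1 * t)  # degenerate equal-rate case
    else:
        b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    c = a0 - a - b  # mass balance
    return a, b, c

def t_peak(k1, k2):
    """Time at which the intermediate B (the recoverable sugar) peaks."""
    return math.log(k1 / k2) / (k1 - k2)
```

A percolation (flow-through) reactor improves on a batch reactor precisely because it removes B before it spends long times at risk of the second reaction; `t_peak` gives the batch-case optimum for comparison.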

  17. Hybrid Sludge Modeling in Water Treatment Processes

    OpenAIRE

    Brenda, Marian

    2015-01-01

    Sludge occurs in many wastewater and drinking water treatment processes. The numeric modeling of sludge is therefore crucial for developing and optimizing water treatment processes. Numeric single-phase sludge models mainly include settling and viscoplastic behavior. Even though many investigators emphasize the importance of modeling the rheology of sludge for good simulation results, the rheology is difficult to measure because of settling and the viscoplastic behavior. In this thesis, a new method ...

  18. On the computational modeling of FSW processes

    OpenAIRE

    Agelet de Saracibar Bosch, Carlos; Chiumenti, Michèle; Santiago, Diego de; Cervera Ruiz, Miguel; Dialami, Narges; Lombera, Guillermo

    2010-01-01

    This work deals with the computational modeling and numerical simulation of Friction Stir Welding (FSW) processes. Here a quasi-static, transient, mixed stabilized Eulerian formulation is used. Norton-Hoff and Sheppard-Wright rigid thermoplastic material models have been considered. A product formula algorithm, leading to a staggered solution scheme, has been used. The model has been implemented into the in-house developed FE code COMET. Results obtained in the simulation of FSW process are c...

  19. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    Science.gov (United States)

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
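The paper's finding that the line rate is transfer-bound rather than compute-bound can be captured in a toy throughput model; the function and numbers below are illustrative placeholders, not the paper's measurements:

```python
def achievable_line_rate(lines_per_batch, transfer_s, process_s, overlapped=False):
    """Toy throughput model for a batched GPU pipeline: each batch of OCT
    lines is copied host->device and then processed. Without overlap the
    batch time is the sum of the two stages; with overlapped copies
    (e.g. asynchronous streams) the slower stage dominates."""
    batch_time = max(transfer_s, process_s) if overlapped else transfer_s + process_s
    return lines_per_batch / batch_time
```

When `transfer_s > process_s`, the line rate stays transfer-bound even with perfect copy/compute overlap, which is the regime the abstract describes.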

  20. Generation unit selection via capital asset pricing model for generation planning

    Energy Technology Data Exchange (ETDEWEB)

    Romy Cahyadi; K. Jo Min; Chung-Hsiao Wang; Nick Abi-Samra [College of Engineering, Ames, IA (USA)

    2003-11-01

    The USA's electric power industry is undergoing substantial regulatory and organizational changes. Such changes introduce substantial financial risk into generation planning. In order to incorporate the financial risk into the capital investment decision process of generation planning, this paper develops and analyses a generation unit selection process via the capital asset pricing model (CAPM). In particular, utilizing realistic data on gas-fired, coal-fired, and wind power generation units, the authors show which concrete steps can be taken for generation planning purposes, and how. It is hoped that the generation unit selection process will help utilities in the area of effective and efficient generation planning when financial risks are considered. 20 refs., 14 tabs.
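The CAPM screening step reduces to the security market line. A minimal sketch of the comparison (the betas and rates below are hypothetical placeholders, not the paper's data):

```python
def capm_required_return(risk_free, beta, expected_market_return):
    """Security market line: E[r_i] = r_f + beta_i * (E[r_m] - r_f)."""
    return risk_free + beta * (expected_market_return - risk_free)

# Hypothetical betas for candidate generation unit types:
betas = {"gas-fired": 1.2, "coal-fired": 0.9, "wind": 0.6}

# Risk-free rate 4%, expected market return 10% (illustrative values):
required = {unit: capm_required_return(0.04, b, 0.10) for unit, b in betas.items()}
# A candidate unit passes the screen if its forecast return exceeds required[unit].
```

The design choice mirrored here is that riskier (higher-beta) technologies must clear a higher hurdle rate before entering the plan.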

  1. Generation unit selection via capital asset pricing model for generation planning

    Energy Technology Data Exchange (ETDEWEB)

    Cahyadi, Romy; Jo Min, K. [College of Engineering, Ames, IA (United States); Chunghsiao Wang [LG and E Energy Corp., Louisville, KY (United States); Abi-Samra, Nick [Electric Power Research Inst., Palo Alto, CA (United States)

    2003-07-01

    The electric power industry in many parts of the U.S.A. is undergoing substantial regulatory and organizational changes. Such changes introduce substantial financial risk into generation planning. In order to incorporate the financial risk into the capital investment decision process of generation planning, in this paper we develop and analyse a generation unit selection process via the capital asset pricing model (CAPM). In particular, utilizing realistic data on gas-fired, coal-fired, and wind power generation units, we show which concrete steps can be taken for generation planning purposes, and how. It is hoped that the generation unit selection process developed in this paper will help utilities in the area of effective and efficient generation planning when financial risks are considered. (Author)

  2. A software architecture for multi-cellular system simulations on graphics processing units.

    Science.gov (United States)

    Jeannin-Girardon, Anne; Ballet, Pascal; Rodin, Vincent

    2013-09-01

    The first aim of simulation in a virtual environment is to help biologists gain a better understanding of the simulated system. The cost of such simulation is significantly reduced compared to that of in vivo simulation. However, the inherent complexity of biological systems makes them hard to simulate on non-parallel architectures: models might be made of sub-models and take several scales into account, and the number of simulated entities may be quite large. Today, graphics cards are used for general-purpose computing, which has been made easier thanks to frameworks like CUDA or OpenCL. Parallelization of models may however not be easy: parallel programming skills are often required, and several hardware architectures may be used to execute models. In this paper, we present the software architecture we built in order to implement various models able to simulate multi-cellular systems. This architecture is modular and implements data structures adapted to graphics processing unit architectures. It allows efficient simulation of biological mechanisms.

  3. The Aluminum Deep Processing Project of North United Aluminum Landed in Qijiang

    Institute of Scientific and Technical Information of China (English)

    2014-01-01

    On April 10, North United Aluminum Company signed separate investment cooperation agreements with Qijiang Industrial Park and Qineng Electricity & Aluminum Co., Ltd, signifying the landing of North United Aluminum’s aluminum deep-processing project in Qijiang.

  4. Spatial resolution recovery utilizing multi-ray tracing and graphic processing unit in PET image reconstruction.

    Science.gov (United States)

    Liang, Yicheng; Peng, Hao

    2015-02-07

    Depth-of-interaction (DOI) poses a major challenge for a PET system to achieve uniform spatial resolution across the field-of-view, particularly for small animal and organ-dedicated PET systems. In this work, we implemented an analytical method to model system matrix for resolution recovery, which was then incorporated in PET image reconstruction on a graphical processing unit platform, due to its parallel processing capacity. The method utilizes the concepts of virtual DOI layers and multi-ray tracing to calculate the coincidence detection response function for a given line-of-response. The accuracy of the proposed method was validated for a small-bore PET insert to be used for simultaneous PET/MR breast imaging. In addition, the performance comparisons were studied among the following three cases: 1) no physical DOI and no resolution modeling; 2) two physical DOI layers and no resolution modeling; and 3) no physical DOI design but with a different number of virtual DOI layers. The image quality was quantitatively evaluated in terms of spatial resolution (full-width-half-maximum and position offset), contrast recovery coefficient and noise. The results indicate that the proposed method has the potential to be used as an alternative to other physical DOI designs and achieve comparable imaging performances, while reducing detector/system design cost and complexity.

  5. Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).

    Science.gov (United States)

    Yang, Owen; Choi, Bernard

    2013-01-01

    To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs in order to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
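The single-run rescaling idea can be illustrated on the absorption coefficient alone: run one baseline simulation with zero absorption, record each detected photon's total path length, and reweight by Beer-Lambert attenuation. A pure-Python sketch of this simplification (the paper's GPU code also rescales for scattering, which additionally scales the stored path lengths; names hypothetical):

```python
import math

def rescale_diffuse_reflectance(path_lengths_mm, mua_per_mm):
    """Reweight photons detected in a baseline absorption-free Monte Carlo
    run by exp(-mua * L) to obtain the diffuse reflectance for a new
    absorption coefficient mua, without re-running the simulation."""
    n = len(path_lengths_mm)
    return sum(math.exp(-mua_per_mm * L) for L in path_lengths_mm) / n
```

One stored run thus yields reflectance for any absorption value, which is why per-value evaluation can drop to sub-millisecond times.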

  6. Fast ray-tracing of human eye optics on Graphics Processing Units.

    Science.gov (United States)

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing with modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
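At each ocular interface (cornea, lens surfaces) such a ray tracer applies Snell's law in vector form. A minimal per-ray sketch, with hypothetical function names:

```python
import math

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incident side), going from refractive index n1 to n2, using
    the vector form of Snell's law. Returns None on total internal
    reflection."""
    cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection
    a = r * cos_i - math.sqrt(k)
    return tuple(r * d[i] + a * n[i] for i in range(3))
```

A GPU version runs this same arithmetic for millions of rays in parallel; with mesh-represented surfaces the normal `n` comes from the intersected triangle rather than an analytical formula.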

  7. Extending Model Checking to Object Process Validation

    NARCIS (Netherlands)

    Rein, van H.

    2002-01-01

    Object-oriented techniques allow the gathering and modelling of system requirements in terms of an application area. The expression of data and process models at that level is a great asset in communication with non-technical people in that area, but it does not necessarily lead to consistent models

  8. Landform Evolution Modeling of Specific Fluvially Eroded Physiographic Units on Titan

    Science.gov (United States)

    Moore, J. M.; Howard, A. D.; Schenk, P. M.

    2015-01-01

    Several recent studies have proposed certain terrain types (i.e., physiographic units) on Titan thought to be formed by fluvial processes acting on local uplands of bedrock or, in some cases, sediment. We have earlier used our landform evolution models to make general comparisons between Titan and other ice-world landscapes (principally those of the Galilean satellites) for which we have modeled the action of fluvial processes. Here we give examples of specific landscapes that, subsequent to modeled fluvial work acting on the surfaces, resemble mapped terrain types on Titan.

  9. Thermal/Heat Transfer Analysis Using a Graphic Processing Unit (GPU) Enabled Computing Environment Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project was to use GPU enabled computing to accelerate the analyses of heat transfer and thermal effects. Graphical processing unit (GPU)...

  10. Advanced In-Space Propulsion (AISP): High Temperature Boost Power Processing Unit (PPU) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The task is to investigate the technology path to develop a 10kW modular Silicon Carbide (SiC) based power processing unit (PPU). The PPU utilizes the high...

  11. Silicon Carbide (SiC) Power Processing Unit (PPU) for Hall Effect Thrusters Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this SBIR project, APEI, Inc. is proposing to develop a high efficiency, rad-hard 3.8 kW silicon carbide (SiC) Power Processing Unit (PPU) for Hall Effect...

  12. Preform Characterization in VARTM Process Model Development

    Science.gov (United States)

    Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal; Loos, Alfred C.; Kellen, Charles B.; Jensen, Brian J.

    2004-01-01

    Vacuum-Assisted Resin Transfer Molding (VARTM) is a Liquid Composite Molding (LCM) process where both resin injection and fiber compaction are achieved under pressures of 101.3 kPa or less. Originally developed over a decade ago for marine composite fabrication, VARTM is now considered a viable process for the fabrication of aerospace composites (1,2). In order to optimize and further improve the process, a finite element analysis (FEA) process model is being developed to include the coupled phenomenon of resin flow, preform compaction and resin cure. The model input parameters are obtained from resin and fiber-preform characterization tests. In this study, the compaction behavior and the Darcy permeability of a commercially available carbon fabric are characterized. The resulting empirical model equations are input to the 3-Dimensional Infiltration, version 5 (3DINFILv.5) process model to simulate infiltration of a composite panel.
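The measured permeability feeds a Darcy-flow infiltration model. For intuition about how it enters, the classic 1-D constant-pressure result can be sketched (a drastic simplification of the 3-D FEA model; names and numbers hypothetical):

```python
def fill_time_1d(length_m, permeability_m2, porosity, viscosity_pa_s, delta_p_pa):
    """1-D Darcy infiltration at constant driving pressure: the flow front
    obeys x(t)^2 = 2*K*dP*t / (phi*mu), so the time to fill a length L is
    t = phi * mu * L^2 / (2 * K * dP)."""
    return (porosity * viscosity_pa_s * length_m ** 2
            / (2.0 * permeability_m2 * delta_p_pa))
```

The quadratic dependence on length (doubling the infusion distance quadruples fill time) is why permeability characterization matters so much for scaling VARTM to large panels.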

  13. Baseline groundwater model update for p-area groundwater operable unit, NBN

    Energy Technology Data Exchange (ETDEWEB)

    Ross, J. [Savannah River Site (SRS), Aiken, SC (United States); Amidon, M. [Savannah River Site (SRS), Aiken, SC (United States)

    2015-09-01

    This report documents the development of a numerical groundwater flow and transport model of the hydrogeologic system of the P-Area Reactor Groundwater Operable Unit at the Savannah River Site (SRS) (Figure 1-1). The P-Area model provides a tool to aid in understanding the hydrologic and geochemical processes that control the development and migration of the current tritium, tetrachloroethene (PCE), and trichloroethene (TCE) plumes in this region.

  14. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  15. Research on the pyrolysis of hardwood in an entrained bed process development unit

    Energy Technology Data Exchange (ETDEWEB)

    Kovac, R.J.; Gorton, C.W.; Knight, J.A.; Newman, C.J.; O' Neil, D.J. (Georgia Inst. of Tech., Atlanta, GA (United States). Research Inst.)

    1991-08-01

    An atmospheric flash pyrolysis process, the Georgia Tech Entrained Flow Pyrolysis Process, for the production of liquid biofuels from oak hardwood is described. The development of the process began with bench-scale studies and a conceptual design in the 1978--1981 timeframe. Its development and successful demonstration through research on the pyrolysis of hardwood in an entrained bed process development unit (PDU), in the period of 1982--1989, is presented. Oil yields (dry basis) up to 60% were achieved in the 1.5 ton-per-day PDU, far exceeding the initial target/forecast of 40% oil yields. Experimental data, based on over forty runs under steady-state conditions, supported by material and energy balances of near-100% closures, have been used to establish a process model which indicates that oil yields well in excess of 60% (dry basis) can be achieved in a commercial reactor. Experimental results demonstrate a gross product thermal efficiency of 94% and a net product thermal efficiency of 72% or more; the highest values yet achieved with a large-scale biomass liquefaction process. A conceptual manufacturing process and an economic analysis for liquid biofuel production at 60% oil yield from a 200-TPD commercial plant is reported. The plant appears to be profitable at contemporary fuel costs of $21/barrel oil-equivalent. Total capital investment is estimated at under $2.5 million. A rate-of-return on investment of 39.4% and a pay-out period of 2.1 years has been estimated. The manufacturing cost of the combustible pyrolysis oil is $2.70 per gigajoule. 20 figs., 87 tabs.
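The quoted 39.4% rate of return and 2.1-year pay-out period come from the report's full economic analysis. As a rough sanity check on how such figures relate, a minimal undiscounted sketch (the cash-flow number is a hypothetical round value, not the report's):

```python
def simple_payback_years(capital, annual_net_cash_flow):
    """Undiscounted payback period in years."""
    return capital / annual_net_cash_flow

def simple_roi(capital, annual_net_cash_flow):
    """Undiscounted annual return on investment."""
    return annual_net_cash_flow / capital

# Illustration: $2.5M capital and a hypothetical $1.0M/yr net cash flow.
payback = simple_payback_years(2.5e6, 1.0e6)
roi = simple_roi(2.5e6, 1.0e6)
```

In this undiscounted form ROI is just the reciprocal of payback; the report's figures differ slightly from that identity because a fuller analysis accounts for taxes, depreciation and the time value of money.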

  16. A model-data based systems approach to process intensification

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    In recent years process intensification (PI) has attracted much interest as a potential means of process improvement to meet the demands, such as, for sustainable production. A variety of intensified equipment are being developed that potentially creates options to meet these demands...... for focused validation of only the promising candidates in the second-stage. This approach, however, would be limited to intensification based on “known” unit operations, unless the PI process synthesis/design is considered at a lower level of aggregation, namely the phenomena level. That is, the model....... Here, established procedures for computer aided molecular design is adopted since combination of phenomena to form unit operations with desired objectives is, in principle, similar to combining atoms to form molecules with desired properties. The concept of the phenomena-based synthesis/design method...

  17. Research on the railway construction process quality model based on the state space method

    Institute of Scientific and Technical Information of China (English)

    Lu Shoudong; Zhou Guohua; Chen Haifeng; Zhao Guotang

    2013-01-01

    In the ISO9000 “input-output” process model, the concept of process quality is difficult to interpret correctly. Starting from the “white box” theory of process, this paper puts forward the scientific meaning of the concept of process quality and a process quality model, taking the basic operating unit of 6M1E in a railway construction project as an example. The basic operating unit system consists of the technological natural process, the operation process and the management process; the process quality of the basic operating unit system depends on the interrelation and interaction among these three sub-processes, and is also subject to the impact of external disturbance input factors. Finally, the cast-in-situ prestressed concrete continuous box girder construction process is used to elaborate the specific application of this theory in the quality management of railway construction projects.

  18. Software-Engineering Process Simulation (SEPS) model

    Science.gov (United States)

    Lin, C. Y.; Abdel-Hamid, T.; Sherif, J. S.

    1992-01-01

    The Software Engineering Process Simulation (SEPS) model, developed at JPL, is described. SEPS is a dynamic simulation model of the software project development process. It uses the feedback principles of system dynamics to simulate the dynamic interactions among various software life-cycle development activities and management decision-making processes. The model is designed to be a planning tool to examine tradeoffs of cost, schedule, and functionality, and to test the implications of different managerial policies on a project's outcome. Furthermore, SEPS will enable software managers to gain a better understanding of the dynamics of software project development and perform post-mortem assessments.

  19. Job Aiding/Training Decision Process Model

    Science.gov (United States)

    1992-09-01

    AL-CR-1992-0004; AD-A256 947. Job Aiding/Training Decision Process Model. John P. Zenyuh; Phillip C... Period covered: March 1990 - April 1990. Contract F33615-86-C-0545; PE 62205F; PR 1121. [OCR fragments of the report documentation page and table of contents, including "Components to Process Model Decision and Selection Points" and "Summary of Subject Recommendations for Aiding Approaches".]

  20. AN APPROACH TO EFFICIENT FEM SIMULATIONS ON GRAPHICS PROCESSING UNITS USING CUDA

    Directory of Open Access Journals (Sweden)

    Björn Nutti

    2014-04-01

    Full Text Available The paper presents a highly efficient way of simulating the dynamic behavior of deformable objects by means of the finite element method (FEM) with computations performed on Graphics Processing Units (GPUs). The presented implementation reduces bottlenecks related to memory accesses by grouping the necessary data per node pair, in contrast to the classical per-element arrangement. This strategy avoids memory access patterns that are not suitable for the GPU memory architecture. Furthermore, the presented implementation takes advantage of the underlying sparse-block-matrix structure, and it has been demonstrated how to avoid potential bottlenecks in the algorithm. To achieve plausible deformational behavior for large local rotations, the objects are modeled by means of a simplified co-rotational FEM formulation.

  1. Fuel Conditioning Facility Electrorefiner Process Model

    Energy Technology Data Exchange (ETDEWEB)

    DeeEarl Vaden

    2005-10-01

    The Fuel Conditioning Facility at the Idaho National Laboratory processes spent nuclear fuel from the Experimental Breeder Reactor II using electro-metallurgical treatment. To process fuel without waiting for periodic sample analyses to assess process conditions, an electrorefiner process model predicts the composition of the electrorefiner inventory and effluent streams. For the chemical equilibrium portion of the model, the two common methods for solving chemical equilibrium problems, stoichiometric and non-stoichiometric, were investigated. In conclusion, the stoichiometric method produced equilibrium compositions close to the measured results, whereas the non-stoichiometric method did not.
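The stoichiometric method parameterizes compositions by reaction extents and solves for the extents that satisfy the equilibrium constants. A minimal single-reaction sketch with idealized activities (an illustration of the method, not the electrorefiner chemistry itself):

```python
def equilibrium_extent(K, a0, tol=1e-12):
    """Stoichiometric method for A <=> B + C (ideal, constant volume):
    write [A] = a0 - x, [B] = [C] = x, and solve K = x*x / (a0 - x)
    for the extent x in (0, a0) by bisection."""
    lo, hi = 0.0, a0
    while hi - lo > tol:
        x = 0.5 * (lo + hi)
        # f(x) = x^2 - K*(a0 - x) increases monotonically from -K*a0 to a0^2
        if x * x - K * (a0 - x) > 0.0:
            hi = x
        else:
            lo = x
    return 0.5 * (lo + hi)
```

With several simultaneous reactions the same idea yields a small nonlinear system in the extents, which is why the method stays well conditioned: mass balance is satisfied by construction at every iterate.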

  2. MULTI-SCALE GAUSSIAN PROCESSES MODEL

    Institute of Scientific and Technical Information of China (English)

    Zhou Yatong; Zhang Taiyi; Li Xiaohe

    2006-01-01

    A novel model named Multi-scale Gaussian Processes (MGP) is proposed. Motivated by the idea of multi-scale representation in wavelet theory, in the new model a Gaussian process is represented at a scale by a linear basis composed of a scale function and its different translations. Finally, the distribution of the targets of the given samples can be obtained at different scales. Compared with the standard Gaussian Processes (GP) model, the MGP model can control its complexity conveniently just by adjusting the scale parameter, so it can rapidly trade off generalization ability against empirical risk. Experiments verify the feasibility of the MGP model and show that its performance is superior to the GP model if appropriate scales are chosen.
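The role of a scale parameter as a complexity knob can already be seen in ordinary GP regression with an RBF kernel; a small pure-Python sketch (standard GP, not the MGP wavelet basis itself; names hypothetical):

```python
import math

def rbf(x1, x2, scale):
    """RBF kernel; `scale` controls how quickly correlation decays."""
    return math.exp(-0.5 * ((x1 - x2) / scale) ** 2)

def solve(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, scale, noise=1e-8):
    """GP posterior mean at x_star; the single `scale` parameter plays the
    complexity-control role discussed in the abstract."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], scale) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, ys)
    return sum(alpha[i] * rbf(xs[i], x_star, scale) for i in range(n))
```

Small scales give wiggly, low-bias fits (low empirical risk); large scales give smooth, strongly regularized fits. MGP's contribution is to combine several such scales in one model via a wavelet-style basis.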

  3. Hybrid modelling of a sugar boiling process

    CERN Document Server

    Lauret, Alfred Jean Philippe; Gatina, Jean Claude

    2012-01-01

    The first and maybe the most important step in designing a model-based predictive controller is to develop a model that is as accurate as possible and that is valid under a wide range of operating conditions. The sugar boiling process is a strongly nonlinear and nonstationary process. The main process nonlinearities are represented by the crystal growth rate. This paper addresses the development of the crystal growth rate model according to two approaches. The first approach is classical and consists of determining the parameters of the empirical expressions of the growth rate through the use of a nonlinear programming optimization technique. The second is a novel modeling strategy that combines an artificial neural network (ANN) as an approximator of the growth rate with prior knowledge represented by the mass balance of sucrose crystals. The first results show that the first type of model performs local fitting while the second offers a greater flexibility. The two models were developed with industrial data...

  4. Probabilistic models of language processing and acquisition.

    Science.gov (United States)

    Chater, Nick; Manning, Christopher D

    2006-07-01

    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online corpus creation has enabled large models to be tested, revealing probabilistic constraints in processing, undermining acquisition arguments based on a perceived poverty of the stimulus, and suggesting fruitful links with probabilistic theories of categorization and ambiguity resolution in perception.
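A maximum-likelihood bigram model is among the simplest probabilistic models defined over symbolic structures, and makes the review's point concrete; a sketch:

```python
from collections import Counter

def train_bigram(corpus_sentences):
    """Count unigrams and bigrams with sentence-boundary markers <s>/</s>."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus_sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(tokens[:-1])          # contexts only
        bigrams.update(zip(tokens[:-1], tokens[1:]))
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, prev, word):
    """Maximum-likelihood estimate P(word | prev) = c(prev, word) / c(prev)."""
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
```

Comprehension and production then correspond to inference in such a model (e.g. choosing the continuation with highest conditional probability), and the "poverty of the stimulus" debate turns on how well these probabilities can be estimated from realistic corpora.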

  5. Exploring Graphics Processing Unit (GPU) Resource Sharing Efficiency for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Teng Li

    2013-11-01

    Full Text Available The increasing incorporation of Graphics Processing Units (GPUs) as accelerators has been one of the forefront High Performance Computing (HPC) trends and provides unprecedented performance; however, the prevalent adoption of the Single-Program Multiple-Data (SPMD) programming model brings with it challenges of resource underutilization. In other words, under SPMD, every CPU needs GPU capability available to it. However, since CPUs generally outnumber GPUs, the asymmetric resource distribution gives rise to overall computing resource underutilization. In this paper, we propose to efficiently share the GPU under SPMD and formally define a series of GPU sharing scenarios. We provide performance-modeling analysis for each sharing scenario with accurate experimental validation. With this modeling basis, we further conduct experimental studies to explore potential GPU sharing efficiency improvements from multiple perspectives. Both further theoretical and experimental GPU sharing performance analyses and results are presented. Our results not only demonstrate the significant performance gain for SPMD programs with the proposed efficient GPU sharing, but also the further improved sharing efficiency with the optimization techniques based on our accurate modeling.
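The asymmetry argument can be captured in a toy cost model; the function below is a hypothetical illustration of the trade-off, not the paper's performance model:

```python
import math

def iteration_time(num_procs, num_gpus, t_gpu, t_cpu_fallback, share):
    """Toy model of one SPMD iteration (all parameters hypothetical).
    Without sharing, only num_gpus processes run their kernel on a GPU;
    the rest compute the same work on the CPU, and the step finishes when
    the slowest process finishes. With sharing, every process uses a GPU,
    but co-located kernels serialize: each GPU runs ceil(P/G) kernels."""
    if share:
        return math.ceil(num_procs / num_gpus) * t_gpu
    return t_cpu_fallback if num_procs > num_gpus else t_gpu
```

Sharing wins whenever `ceil(P/G) * t_gpu < t_cpu_fallback`, i.e. whenever serialized GPU contention is still cheaper than leaving most processes on the CPU, which is the regime the paper targets.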

  6. High-speed nonlinear finite element analysis for surgical simulation using graphics processing units.

    Science.gov (United States)

    Taylor, Z A; Cheng, M; Ourselin, S

    2008-05-01

    The use of biomechanical modelling, especially in conjunction with finite element analysis, has become common in many areas of medical image analysis and surgical simulation. Clinical employment of such techniques is hindered by conflicting requirements for high fidelity in the modelling approach, and fast solution speeds. We report the development of techniques for high-speed nonlinear finite element analysis for surgical simulation. We use a fully nonlinear total Lagrangian explicit finite element formulation which offers significant computational advantages for soft tissue simulation. However, the key contribution of the work is the presentation of a fast graphics processing unit (GPU) solution scheme for the finite element equations. To the best of our knowledge, this represents the first GPU implementation of a nonlinear finite element solver. We show that the present explicit finite element scheme is well suited to solution via highly parallel graphics hardware, and that even a midrange GPU allows significant solution speed gains (up to 16.8 x) compared with equivalent CPU implementations. For the models tested the scheme allows real-time solution of models with up to 16,000 tetrahedral elements. The use of GPUs for such purposes offers a cost-effective high-performance alternative to expensive multi-CPU machines, and may have important applications in medical image analysis and surgical simulation.
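
    The explicit formulation mentioned above advances nodal positions with a central-difference update that requires no system solve, which is what maps so well to highly parallel hardware. A minimal one-degree-of-freedom sketch (a linear spring, not the paper's nonlinear soft-tissue elements):

```python
def central_difference(m, k, x0, dt, steps):
    """Explicit central-difference stepping x_{n+1} = 2 x_n - x_{n-1} - dt^2 (k/m) x_n,
    the per-node update at the heart of explicit dynamic solvers. In a full FEM
    each node updates independently, so all nodes can be stepped concurrently."""
    x_prev, x = x0, x0  # x_{-1} = x_0 approximates a start from rest
    history = [x0]
    for _ in range(steps):
        x_next = 2 * x - x_prev - dt * dt * (k / m) * x
        x_prev, x = x, x_next
        history.append(x)
    return history

# Undamped oscillator; the scheme is conditionally stable for dt < 2*sqrt(m/k)
traj = central_difference(m=1.0, k=100.0, x0=0.01, dt=0.01, steps=200)
```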

  7. Towards a universal competitive intelligence process model

    Directory of Open Access Journals (Sweden)

    Rene Pellissier

    2013-07-01

    Full Text Available Background: Competitive intelligence (CI) provides actionable intelligence, which gives enterprises a competitive edge. However, without a proper process, it is difficult to develop actionable intelligence. There are disagreements about how the CI process should be structured. For CI professionals to focus on producing actionable intelligence, and to do so with simplicity, they need a common CI process model. Objectives: The purpose of this research is to review the current literature on CI with the aim of identifying and analysing CI process models, and finally to propose a universal CI process model. Method: The study was qualitative in nature and content analysis was conducted on all identified sources establishing and analysing CI process models. To identify relevant literature, academic databases and search engines were used. Moreover, a review of references in related studies led to more relevant sources, the references of which were further reviewed and analysed. To ensure reliability, only peer-reviewed articles were used. Results: The findings reveal that the majority of scholars view the CI process as a cycle of interrelated phases, in which the output of one phase is the input of the next. Conclusion: The CI process is a cycle of interrelated phases in which the output of one phase is the input of the next. These phases are influenced by the following factors: decision makers, process and structure, organisational awareness and culture, and feedback.

  9. Online Rule Generation Software Process Model

    Directory of Open Access Journals (Sweden)

    Sudeep Marwaha

    2013-07-01

    Full Text Available For production systems such as expert systems, rule generation software can facilitate faster deployment. The software process model for rule generation using a decision tree classifier refers to the various steps required to be executed for the development of a web-based software model for decision rule generation. Royce's final waterfall model has been used in this paper to explain the software development process. The paper presents the specific output of the various steps of the modified waterfall model for decision rule generation.
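
    A decision-tree classifier yields decision rules by enumerating root-to-leaf paths. The sketch below assumes a toy nested-dict tree representation, not the paper's web-based implementation:

```python
def tree_to_rules(node, conditions=None):
    """Walk a decision tree (nested dicts) and emit one IF-THEN rule per leaf."""
    conditions = conditions or []
    if not isinstance(node, dict):  # leaf: a class label
        return [(" AND ".join(conditions) or "TRUE", node)]
    rules = []
    for branch, child in node["branches"].items():
        cond = f"{node['attribute']} = {branch}"
        rules.extend(tree_to_rules(child, conditions + [cond]))
    return rules

# Hypothetical learned tree
tree = {"attribute": "outlook",
        "branches": {"sunny": {"attribute": "humidity",
                               "branches": {"high": "no", "normal": "yes"}},
                     "overcast": "yes"}}
rules = tree_to_rules(tree)
# e.g. ("outlook = sunny AND humidity = high", "no")
```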

  10. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    Science.gov (United States)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis research is not sufficiently accurate and is of limited reference value, because the underlying mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and no experimental verification is conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the closed-loop position control of the hydraulic drive unit, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is sufficiently accurate, as shown by comparing the experimental and simulated step-response curves under different constant loads. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time-history curves of the step response. The maximum displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their change rules are analysed. Then the sensitivity
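
    Sensitivity indexes of this kind can be approximated numerically by perturbing one parameter at a time and comparing step responses. A much-simplified sketch on a first-order position loop (the model, parameter values and 1% perturbation are illustrative assumptions, far simpler than the paper's nonlinear hydraulic model):

```python
def step_response(k, tau, dt=0.001, t_end=1.0, target=1.0):
    """Forward-Euler simulation of a first-order position loop dx/dt = (k/tau)(target - x)."""
    x, xs = 0.0, []
    for _ in range(int(t_end / dt)):
        x += dt * (k / tau) * (target - x)
        xs.append(x)
    return xs

def sensitivity_index(param, base, rel=0.01):
    """Maximum displacement variation, relative to peak nominal displacement,
    when a single parameter is perturbed by `rel` (here 1%)."""
    nominal = step_response(**base)
    perturbed = dict(base)
    perturbed[param] *= 1 + rel
    varied = step_response(**perturbed)
    return max(abs(a - b) for a, b in zip(varied, nominal)) / max(nominal)

base = {"k": 1.0, "tau": 0.1}
s_k = sensitivity_index("k", base)      # gain sensitivity
s_tau = sensitivity_index("tau", base)  # time-constant sensitivity
```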

  11. Divergent projections of future land use in the United States arising from different models and scenarios

    Science.gov (United States)

    Sohl, Terry L.; Wimberly, Michael; Radeloff, Volker C.; Theobald, David M.; Sleeter, Benjamin M.

    2016-01-01

    A variety of land-use and land-cover (LULC) models operating at scales from local to global have been developed in recent years, including a number of models that provide spatially explicit, multi-class LULC projections for the conterminous United States. This diversity of modeling approaches raises the question: how consistent are their projections of future land use? We compared projections from six LULC modeling applications for the United States and assessed quantitative, spatial, and conceptual inconsistencies. Each set of projections provided multiple scenarios covering a period from roughly 2000 to 2050. Given the unique spatial, thematic, and temporal characteristics of each set of projections, individual projections were aggregated to a common set of basic, generalized LULC classes (i.e., cropland, pasture, forest, range, and urban) and summarized at the county level across the conterminous United States. We found very little agreement in projected future LULC trends and patterns among the different models. Variability among scenarios for a given model was generally lower than variability among different models, in terms of both trends in the amounts of basic LULC classes and their projected spatial patterns. Even when different models assessed the same purported scenario, model projections varied substantially. Projections of agricultural trends were often far above the maximum historical amounts, raising concerns about the realism of the projections. Comparisons among models were hindered by major discrepancies in categorical definitions, and suggest a need for standardization of historical LULC data sources. To capture a broader range of uncertainties, ensemble modeling approaches are also recommended. However, the vast inconsistencies among LULC models raise questions about the theoretical and conceptual underpinnings of current modeling approaches. 
Given the substantial effects that land-use change can have on ecological and societal processes, there
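
    A county-level comparison of categorical projections, as performed in this study, reduces to counting class mismatches between models. A toy sketch with hypothetical model outputs and FIPS codes:

```python
from itertools import combinations

def disagreement(proj_a, proj_b):
    """Fraction of counties on which two projections assign different LULC classes."""
    assert proj_a.keys() == proj_b.keys()
    return sum(proj_a[c] != proj_b[c] for c in proj_a) / len(proj_a)

# Hypothetical dominant class per county (county FIPS code -> class)
projections = {
    "model_A": {"01001": "cropland", "01003": "forest", "01005": "urban"},
    "model_B": {"01001": "cropland", "01003": "range",  "01005": "urban"},
    "model_C": {"01001": "pasture",  "01003": "range",  "01005": "urban"},
}
pairwise = {(a, b): disagreement(projections[a], projections[b])
            for a, b in combinations(projections, 2)}
```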

  12. Modeling and design of a combined transverse and axial flow threshing unit for rice harvesters

    Directory of Open Access Journals (Sweden)

    Zhong Tang

    2014-11-01

    Full Text Available The thorough investigation of both grain threshing and grain separating processes is a crucial consideration for effective structural design and variable optimization of the tangential flow threshing cylinder and longitudinal axial flow threshing cylinder composite (TLFC) unit of small and medium-sized (SME) combine harvesters. The objective of this paper was to obtain the structural variables of a TLFC unit by theoretical modeling and experimentation on a tangential flow threshing cylinder (TFC) unit and a longitudinal axial flow threshing cylinder (LFC) unit. Threshing and separation equations for five types of threshing teeth (knife bar, trapezoidal tooth, spike tooth, rasp bar, and rectangular bar) were obtained using probability theory. Results demonstrate that the threshing and separation capacity of the knife bar TFC unit was stronger than that of the other threshing teeth. The length of the LFC unit was divided into four sections, with helical blades on the first section (0-0.17 m), the spike tooth on the second section (0.17-1.48 m), the trapezoidal tooth on the third section (1.48-2.91 m), and the discharge plate on the fourth section (2.91-3.35 m). Test results showed an un-threshed grain rate of 0.243%, an un-separated grain rate of 0.346%, and a broken grain rate of 0.184%. As evidenced by these results, threshing and separation performance is significantly improved by analyzing and optimizing the structure and variables of a TLFC unit. The results of this research can be used to successfully design the TLFC unit of small and medium-sized (SME) combine harvesters.
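
    Probability-theoretic separation models of this kind often take the surviving (un-separated) grain fraction to decay exponentially along the cylinder. The sketch below assumes a simple Poisson-type model with invented per-section intensities, not the paper's fitted equations:

```python
import math

def unseparated_fraction(segments):
    """Fraction of grain still unseparated after passing consecutive cylinder
    sections, each with its own separation intensity lam (per metre): the
    survival probability multiplies as exp(-lam * length) per section."""
    remaining = 1.0
    for lam, length in segments:
        remaining *= math.exp(-lam * length)
    return remaining

# Hypothetical intensities for a four-section longitudinal cylinder
sections = [(0.0, 0.17),  # helical feed blades: assumed no separation
            (2.5, 1.31),  # spike-tooth section
            (1.8, 1.43),  # trapezoidal-tooth section
            (0.0, 0.44)]  # discharge plate
loss = unseparated_fraction(sections)  # grain loss fraction at the outlet
```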

  13. Models and Modelling Tools for Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    2016-01-01

    The design, development and reliability of a chemical product and the process to manufacture it, need to be consistent with the end-use characteristics of the desired product. One of the common ways to match the desired product-process characteristics is through trial and error based experiments......-based framework is that in the design, development and/or manufacturing of a chemical product-process, the knowledge of the applied phenomena together with the product-process design details can be provided with diverse degrees of abstractions and details. This would allow the experimental resources......, are the needed models for such a framework available? Or, are modelling tools that can help to develop the needed models available? Can such a model-based framework provide the needed model-based work-flows matching the requirements of the specific chemical product-process design problems? What types of models...

  14. MODELLING PURCHASING PROCESSES FROM QUALITY ASPECTS

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2008-12-01

    Full Text Available Management has a fundamental task to identify and direct primary and specific processes within the purchasing function, applying up-to-date information infrastructure. ISO 9001:2000 defines a process as a set of interrelated or interacting activities that transforms inputs into outputs, and the "process approach" as the systematic identification and management of the processes employed within an organization, and particularly the interactions among such processes. To direct a quality management system using the process approach, the organization has to determine the map of its general (basic) processes. Primary processes are determined on the grounds of their interrelationship and impact on satisfying customers' needs. To make a proper choice of general business processes, it is necessary to determine the entire business flow, beginning with customer demand and ending with the delivery of the product or service provided. In the next step the process model is converted into a data model, which is essential for implementation of the information system enabling automation, monitoring, measuring, inspection, analysis and improvement of key purchasing processes. This paper presents the methodology and some results of an investigation into the development of an IS for the purchasing process from the aspect of quality.

  15. Mathematical modelling of the calcination process | Olayiwola ...

    African Journals Online (AJOL)

    Mathematical modelling of the calcination process. ... High quality lime is an essential raw material for Electric Arc Furnaces and Basic Oxygen Furnaces ... From the numerical simulation, it is observed that the gas temperature increases as the ...

  16. Three-dimensional model analysis and processing

    CERN Document Server

    Yu, Faxin; Luo, Hao; Wang, Pinghui

    2011-01-01

    This book focuses on five hot research directions in 3D model analysis and processing in computer science:  compression, feature extraction, content-based retrieval, irreversible watermarking and reversible watermarking.

  17. Facilitatory Effects of Multi-Word Units in Lexical Processing and Word Learning: A Computational Investigation.

    Science.gov (United States)

    Grimm, Robert; Cassani, Giovanni; Gillis, Steven; Daelemans, Walter

    2017-01-01

    Previous studies have suggested that children and adults form cognitive representations of co-occurring word sequences. We propose (1) that the formation of such multi-word unit (MWU) representations precedes and facilitates the formation of single-word representations in children and thus benefits word learning, and (2) that MWU representations facilitate adult word recognition and thus benefit lexical processing. Using a modified version of an existing computational model (McCauley and Christiansen, 2014), we extract MWUs from a corpus of child-directed speech (CDS) and a corpus of conversations among adults. We then correlate the number of MWUs within which each word appears with (1) age of first production and (2) adult reaction times on a word recognition task. In doing so, we take care to control for the effect of word frequency, as frequent words will naturally tend to occur in many MWUs. We also compare results to a baseline model which randomly groups words into sequences, and find that MWUs have a unique facilitatory effect on both response variables, suggesting that they benefit word learning in children and word recognition in adults. The effect is strongest on age of first production, implying that MWUs are comparatively more important for word learning than for adult lexical processing. We discuss possible underlying mechanisms and formulate testable predictions.
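
    The core quantity correlated in this study is, for each word, the number of MWUs in which it appears. A crude sketch using frequent bigrams as stand-in MWUs (the toy corpus and frequency threshold are assumptions; the paper itself uses the McCauley and Christiansen chunking model):

```python
from collections import Counter

def mwu_counts(corpus, threshold=2):
    """Count, for each word, the number of distinct frequent bigrams (a crude
    stand-in for multi-word units) in which it appears."""
    bigrams = Counter()
    for utterance in corpus:
        toks = utterance.split()
        bigrams.update(zip(toks, toks[1:]))
    frequent = [bg for bg, n in bigrams.items() if n >= threshold]
    per_word = Counter()
    for w1, w2 in frequent:
        per_word[w1] += 1
        per_word[w2] += 1
    return per_word

# Toy child-directed speech sample
cds = ["you want the ball", "do you want the cup", "you want more", "the ball rolls"]
counts = mwu_counts(cds)
# "want" appears in two frequent bigrams: (you, want) and (want, the)
```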

  18. Modeling of Dielectric Heating within Lyophilization Process

    Directory of Open Access Journals (Sweden)

    Jan Kyncl

    2014-01-01

    Full Text Available A process of lyophilization of paper books is modeled. The process of drying is controlled by a dielectric heating system. From the physical viewpoint, the task represents a 2D coupled problem described by two partial differential equations for the electric and temperature fields. The material parameters are supposed to be temperature-dependent functions. The continuous mathematical model is solved numerically. The methodology is illustrated with some examples whose results are discussed.

  19. A Process Model of Quantum Mechanics

    OpenAIRE

    Sulis, William

    2014-01-01

    A process model of quantum mechanics utilizes a combinatorial game to generate a discrete and finite causal space upon which can be defined a self-consistent quantum mechanics. An emergent space-time M and continuous wave function arise through a non-uniform interpolation process. Standard non-relativistic quantum mechanics emerges under the limit of infinite information (the causal space grows to infinity) and infinitesimal scale (the separation between points goes to zero). The model has th...

  20. A Procedural Model for Process Improvement Projects

    OpenAIRE

    Kreimeyer, Matthias; Daniilidis, Charampos; Lindemann, Udo

    2017-01-01

    Process improvement projects are of a complex nature. It is therefore necessary to use experience and knowledge gained in previous projects when executing a new project. Yet, there are few pragmatic planning aids, and transferring the institutional knowledge from one project to the next is difficult. This paper proposes a procedural model that extends common models for project planning to enable staff on a process improvement project to adequately plan their projects, enabling them to documen...

  1. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers

    DEFF Research Database (Denmark)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel

    2016-01-01

    the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension...

  2. Automated processing of whole blood units: operational value and in vitro quality of final blood components

    Science.gov (United States)

    Jurado, Marisa; Algora, Manuel; Garcia-Sanchez, Félix; Vico, Santiago; Rodriguez, Eva; Perez, Sonia; Barbolla, Luz

    2012-01-01

    Background The Community Transfusion Centre in Madrid currently processes whole blood using a conventional procedure (Compomat, Fresenius) followed by automated processing of buffy coats with the OrbiSac system (CaridianBCT). The Atreus 3C system (CaridianBCT) automates the production of red blood cells, plasma and an interim platelet unit from a whole blood unit. Interim platelet units are pooled to produce a transfusable platelet unit. In this study the Atreus 3C system was evaluated and compared to the routine method with regard to product quality and operational value. Materials and methods Over a 5-week period 810 whole blood units were processed using the Atreus 3C system. The attributes of the automated process were compared to those of the routine method by assessing productivity, space, equipment and staffing requirements. The data obtained were evaluated in order to estimate the impact of implementing the Atreus 3C system in the routine setting of the blood centre. Yield and in vitro quality of the final blood components processed with the two systems were evaluated and compared. Results The Atreus 3C system enabled higher throughput while requiring less space and employee time by decreasing the amount of equipment and processing time per unit of whole blood processed. Whole blood units processed on the Atreus 3C system gave a higher platelet yield, a similar amount of red blood cells and a smaller volume of plasma. Discussion These results support the conclusion that the Atreus 3C system produces blood components meeting quality requirements while providing a high operational efficiency. Implementation of the Atreus 3C system could result in a large organisational improvement. PMID:22044958

  3. Accelerating Wright-Fisher Forward Simulations on the Graphics Processing Unit.

    Science.gov (United States)

    Lawrie, David S

    2017-09-07

    Forward Wright-Fisher simulations are powerful in their ability to model complex demography and selection scenarios, but suffer from slow execution on the Central Processing Unit (CPU), thus limiting their usefulness. However, the single-locus Wright-Fisher forward algorithm is exceedingly parallelizable, with many steps that are so-called "embarrassingly parallel," consisting of a vast number of individual computations that are all independent of each other and thus capable of being performed concurrently. The rise of modern Graphics Processing Units (GPUs) and programming languages designed to leverage the inherent parallel nature of these processors have allowed researchers to dramatically speed up many programs that have such high arithmetic intensity and intrinsic concurrency. The presented GPU Optimized Wright-Fisher simulation, or "GO Fish" for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. With simulations so accelerated, one can not only do quick parametric bootstrapping of previously estimated parameters, but also use simulated results to calculate the likelihoods and summary statistics of demographic and selection models against real polymorphism data, all without restricting the demographic and selection scenarios that can be modeled or requiring approximations to the single-locus forward algorithm for efficiency. Further, as many of the parallel programming techniques used in this simulation can be applied to other computationally intensive algorithms important in population genetics, GO Fish serves as an exciting template for future research into accelerating computation in evolution. GO Fish is part of the Parallel PopGen Package available at: http://dl42.github.io/ParallelPopGen/. Copyright © 2017 Lawrie.
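
    The serial algorithm that GO Fish parallelizes is compact: each generation, allele copies are resampled binomially from a selection-weighted frequency. A pure-Python sketch of the single-locus case (parameter values are illustrative; GO Fish's GPU version batches many independent loci):

```python
import random

def wright_fisher(n_pop, p0, generations, s=0.0, seed=42):
    """Serial single-locus Wright-Fisher forward simulation: each generation,
    2N allele copies are drawn from the selection-adjusted frequency. Many
    independent loci could be stepped concurrently, which is what makes the
    algorithm so amenable to GPU execution."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        w = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection-adjusted frequency
        draws = sum(rng.random() < w for _ in range(2 * n_pop))  # binomial sample
        p = draws / (2 * n_pop)
        if p in (0.0, 1.0):  # fixation or loss: nothing further can change
            break
    return p

p_final = wright_fisher(n_pop=500, p0=0.5, generations=100, s=0.05)
```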

  4. Flux Analysis in Process Models via Causality

    CERN Document Server

    Kahramanoğullari, Ozan

    2010-01-01

    We present an approach for flux analysis in process algebra models of biological systems. We perceive flux as the flow of resources in stochastic simulations. We resort to an established correspondence between event structures, a broadly recognised model of concurrency, and state transitions of process models, seen as Petri nets. We show that in this way we can extract the causal resource dependencies between individual state transitions in simulations as partial orders of events. We propose transformations on the partial orders that provide means for further analysis, and introduce a software tool which implements these ideas. By means of an example of a published model of the Rho GTP-binding proteins, we argue that this approach can provide a substitute for flux analysis techniques on ordinary differential equation models within the stochastic setting of process algebras.

  5. Cost Models for MMC Manufacturing Processes

    Science.gov (United States)

    Elzey, Dana M.; Wadley, Haydn N. G.

    1996-01-01

    Processes for the manufacture of advanced metal matrix composites are rapidly approaching maturity in the research laboratory and there is growing interest in their transition to industrial production. However, research conducted to date has almost exclusively focused on overcoming the technical barriers to producing high-quality material and little attention has been given to the economical feasibility of these laboratory approaches and process cost issues. A quantitative cost modeling (QCM) approach was developed to address these issues. QCM are cost analysis tools based on predictive process models relating process conditions to the attributes of the final product. An important attribute, of the QCM approach is the ability to predict the sensitivity of material production costs to product quality and to quantitatively explore trade-offs between cost and quality. Applications of the cost models allow more efficient direction of future MMC process technology development and a more accurate assessment of MMC market potential. Cost models were developed for two state-of-the art metal matrix composite (MMC) manufacturing processes: tape casting and plasma spray deposition. Quality and Cost models are presented for both processes and the resulting predicted quality-cost curves are presented and discussed.
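
    A quantitative cost model in this spirit links a process quality variable to the effective cost of acceptable product. The sketch below is a toy formulation: the linear cost-vs-quality assumption and all numbers are invented, not taken from the tape-casting or plasma-spray models.

```python
def cost_per_good_unit(unit_cost, yield_fraction):
    """Effective cost of one defect-free unit when only a fraction passes QC."""
    if not 0 < yield_fraction <= 1:
        raise ValueError("yield must be in (0, 1]")
    return unit_cost / yield_fraction

def quality_cost_curve(base_cost, quality_levels):
    """Toy quality-cost trade-off: raising the yield raises the per-unit
    processing cost (assumed linear here), while scrap losses fall as 1/yield.
    With these invented numbers, higher quality wins overall."""
    return {q: cost_per_good_unit(base_cost * (1 + q), q) for q in quality_levels}

curve = quality_cost_curve(100.0, [0.5, 0.7, 0.9, 0.99])
# e.g. at 50% yield: 100 * 1.5 / 0.5 = 300.0 per good unit
```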

  6. Fast computation of MadGraph amplitudes on graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Li, Q; Okamura, N; Stelzer, T

    2013-01-01

    Continuing our previous studies on QED and QCD processes, we use the graphics processing unit (GPU) for fast calculations of helicity amplitudes for general Standard Model (SM) processes. Additional HEGET codes to handle all SM interactions are introduced, as well as the program MG2CUDA that converts arbitrary MadGraph-generated HELAS amplitudes (FORTRAN) into HEGET codes in CUDA. We test all the codes by comparing amplitudes and cross sections for multi-jet processes at the LHC associated with production of single and double weak bosons, a top-quark pair, a Higgs boson plus a weak boson or a top-quark pair, and multiple Higgs bosons via weak-boson fusion, where all the heavy particles are allowed to decay into light quarks and leptons with full spin correlations. All the helicity amplitudes computed by HEGET are found to agree with those computed by HELAS within the expected numerical accuracy, and the cross sections obtained by gBASES, a GPU version of the Monte Carlo integration program, agree with those obt...

  7. Physical and mathematical modelling of extrusion processes

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Gronostajski, Z.; Niechajowics, A.

    2000-01-01

    The main objective of the work is to study the extrusion process using physical modelling and to compare the findings of the study with finite element predictions. The possibilities and advantages of the simultaneous application of both of these methods for the analysis of metal forming processes...

  8. Business Process Modeling Notation - An Overview

    Directory of Open Access Journals (Sweden)

    Alexandra Fortiş

    2006-01-01

    Full Text Available BPMN represents an industrial standard created to offer a common and user-friendly notation to all participants in a business process. The present paper briefly presents the main features of this notation, as well as an interpretation of some of the main patterns characterizing a business process modelled through its workflows.

  10. [The model of adaptive primary image processing].

    Science.gov (United States)

    Dudkin, K N; Mironov, S V; Dudkin, A K; Chikhman, V N

    1998-07-01

    A computer model of adaptive segmentation of 2D visual objects was developed. Primary image descriptions are realised via spatial frequency filters and feature detectors performing as self-organised mechanisms. Simulation of the control processes related to attention and to lateral, frequency-selective and cross-orientation inhibition determines the adaptive image processing.

  11. Laser surface processing and model studies

    CERN Document Server

    Yilbas, Bekir Sami

    2013-01-01

    This book introduces model studies associated with laser surface processing such as conduction limited heating, surface re-melting, Marangoni flow and its effects on the temperature field, re-melting of multi-layered surfaces, laser shock processing, and practical applications. The book provides insight into the physical processes involved with laser surface heating and phase change in laser irradiated region. It is written for engineers and researchers working on laser surface engineering.

  12. Enzymatic corn wet milling: engineering process and cost model

    Directory of Open Access Journals (Sweden)

    McAloon Andrew J

    2009-01-01

    Full Text Available Abstract Background Enzymatic corn wet milling (E-milling) is a process derived from conventional wet milling for the recovery and purification of starch and co-products using proteases to eliminate the need for sulfites and decrease the steeping time. In 2006, the total starch production in the USA by conventional wet milling equaled 23 billion kilograms, including modified starches and starches used for sweeteners and ethanol production [1]. Process engineering and cost models for an E-milling process have been developed for a processing plant with a capacity of 2.54 million kg of corn per day (100,000 bu/day). These models are based on the previously published models for a traditional wet milling plant with the same capacity. The E-milling process includes grain cleaning, pretreatment, enzymatic treatment, germ separation and recovery, fiber separation and recovery, gluten separation and recovery and starch separation. Information for the development of the conventional models was obtained from a variety of technical sources including commercial wet milling companies, industry experts and equipment suppliers. Additional information for the present models was obtained from our own experience with the development of the E-milling process and trials in the laboratory and at the pilot plant scale. The models were developed using process and cost simulation software (SuperPro Designer®) and include processing information such as composition and flow rates of the various process streams, descriptions of the various unit operations and detailed breakdowns of the operating and capital cost of the facility. Results Based on the information from the model, we can estimate the cost of production per kilogram of starch using the input prices for corn, enzyme and other wet milling co-products. The work presented here describes the E-milling process and compares the process, the operation and costs with the conventional process. Conclusion The E-milling process
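
    The cost-per-kilogram estimate described in the Results reduces to a mass-and-money balance over one day of operation. A sketch with purely illustrative numbers, not values from the published SuperPro Designer® model:

```python
def starch_cost_per_kg(corn_price, corn_per_day, starch_yield,
                       coproduct_credit, operating_cost, enzyme_cost=0.0):
    """Net production cost per kg of starch for one day's operation:
    (input costs + operating costs - co-product credits) / starch output."""
    starch_out = corn_per_day * starch_yield
    total = corn_per_day * corn_price + enzyme_cost + operating_cost - coproduct_credit
    return total / starch_out

# Illustrative numbers only (assumed, not from the published model)
cost = starch_cost_per_kg(corn_price=0.15,         # $/kg corn
                          corn_per_day=2.54e6,     # kg corn processed per day
                          starch_yield=0.62,       # kg starch per kg corn
                          coproduct_credit=90_000, # $/day from germ, fiber, gluten
                          operating_cost=120_000,  # $/day utilities, labour, capital
                          enzyme_cost=25_000)      # $/day protease
```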

  13. The (mathematical) modelling process in biosciences

    Directory of Open Access Journals (Sweden)

    Nestor V. Torres

    2015-12-01

    Full Text Available In this communication we introduce a general framework for, and a discussion of, the role of models and the modelling process within scientific activity in the biosciences. The objective is to sum up the common procedure used during the formalization and analysis of a biological problem on the foundations of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. We then present the modelling and analysis process for biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model, and optimization and system management derived from the analysis of the mathematical model. Throughout this presentation the main features and shortcomings of the process are developed, together with a set of rules that could help in the modelling of any biological system. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modelling currently poses to biology.

  14. Modeling aerosol processes at the local scale

    Energy Technology Data Exchange (ETDEWEB)

    Lazaridis, M.; Isukapalli, S.S.; Georgopoulos, P.G. [Environmental and Occupational Health Sciences Inst., NJ (United States)

    1998-12-31

    This work presents an approach for modeling photochemical gaseous and aerosol phase processes in subgrid plumes from major localized (e.g. point) sources (plume-in-grid modeling), thus improving the ability to quantify the relationship between emission source activity and ambient air quality. This approach employs the Reactive Plume Model (RPM-AERO) which extends the regulatory model RPM-IV by incorporating aerosol processes and heterogeneous chemistry. The physics and chemistry of elemental carbon, organic carbon, sulfate, sodium, chloride and crustal material of aerosols are treated and attributed to the PM size distribution. A modified version of the Carbon Bond IV chemical mechanism is included to model the formation of organic aerosol, and the inorganic multicomponent atmospheric aerosol equilibrium model, SEQUILIB is used for calculating the amounts of inorganic species in particulate matter. Aerosol dynamics modeled include mechanisms of nucleation, condensation and gas/particle partitioning of organic matter. An integrated trajectory-in-grid modeling system, UAM/RPM-AERO, is under continuing development for extracting boundary and initial conditions from the mesoscale photochemical/aerosol model UAM-AERO. The RPM-AERO is applied here to case studies involving emissions from point sources to study sulfate particle formation in plumes. Model calculations show that homogeneous nucleation is an efficient process for new particle formation in plumes, in agreement with previous field studies and theoretical predictions.

  15. Analysis and evaluation of collaborative modeling processes

    NARCIS (Netherlands)

    Ssebuggwawo, D.

    2012-01-01

    Analysis and evaluation of collaborative modeling processes is confronted with many challenges. On the one hand, many systems design and re-engineering projects require collaborative modeling approaches that can enhance their productivity. But, such collaborative efforts, which often consist of the

  16. Model checking Quasi Birth Death processes

    NARCIS (Netherlands)

    Remke, A.K.I.

    2004-01-01

    Quasi-Birth Death processes (QBDs) are a special class of infinite state CTMCs that combines a large degree of modeling expressiveness with efficient solution methods. This work adapts the well-known stochastic logic CSL for use on QBDs as CSL and presents model checking algorithms for so-called lev

  17. FDA 2011 process validation guidance: lifecycle compliance model.

    Science.gov (United States)

    Campbell, Cliff

    2014-01-01

    This article has been written as a contribution to the industry's efforts in migrating from a document-driven to a data-driven compliance mindset. A combination of target product profile, control engineering, and general sum principle techniques is presented as the basis of a simple but scalable lifecycle compliance model in support of modernized process validation. Unit operations and significant variables occupy pole position within the model, documentation requirements being treated as a derivative or consequence of the modeling process. The quality system is repositioned as a subordinate of system quality, this being defined as the integral of related "system qualities". The article represents a structured interpretation of the U.S. Food and Drug Administration's 2011 Guidance for Industry on Process Validation and is based on the author's educational background and his manufacturing/consulting experience in the validation field. The U.S. Food and Drug Administration's Guidance for Industry on Process Validation (2011) provides a wide-ranging and rigorous outline of compliant drug manufacturing requirements relative to its 20th century predecessor (1987). Its declared focus is patient safety, and it identifies three inter-related (and obvious) stages of the compliance lifecycle. Firstly, processes must be designed, both from a technical and quality perspective. Secondly, processes must be qualified, providing evidence that the manufacturing facility is fully "roadworthy" and fit for its intended purpose. Thirdly, processes must be verified, meaning that commercial batches must be monitored to ensure that processes remain in a state of control throughout their lifetime.

  18. Bayesian Network Based XP Process Modelling

    Directory of Open Access Journals (Sweden)

    Mohamed Abouelela

    2010-07-01

    Full Text Available A Bayesian Network based mathematical model has been used for modelling the Extreme Programming software development process. The model is capable of predicting the expected finish time and the expected defect rate for each XP release. Therefore, it can be used to determine the success/failure of any XP project. The model takes into account the effect of three XP practices, namely: Pair Programming, Test Driven Development and Onsite Customer practices. The model's predictions were validated against two case studies. Results show the precision of our model, especially in predicting the project finish time.
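The flavour of such a Bayesian network can be sketched by direct enumeration over a small chain of nodes; the structure (pair programming influences defect rate, which influences on-time release) and all conditional probabilities below are invented for illustration, not taken from the paper.

```python
# Toy Bayesian network: PairProgramming -> LowDefectRate -> ReleaseOnTime.
# All probabilities are illustrative assumptions, not the paper's CPTs.
p_pair = 0.7                              # P(pair programming used)
p_low_defects = {True: 0.8, False: 0.5}   # P(low defect rate | pair programming)
p_on_time = {True: 0.9, False: 0.6}       # P(release on time | low defect rate)

# Marginal P(release on time), by enumerating the joint distribution.
p_release_on_time = 0.0
for pair in (True, False):
    p_p = p_pair if pair else 1 - p_pair
    for low in (True, False):
        p_l = p_low_defects[pair] if low else 1 - p_low_defects[pair]
        p_release_on_time += p_p * p_l * p_on_time[low]
```

Exact enumeration like this is feasible only for tiny networks, but it shows how practice choices propagate to release-level predictions.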

  19. Presence and numbers of Campylobacter, Escherichia coli and Salmonella determined in broiler carcass rinses from United States processing plants in the hazard analysis and critical control point-based inspection models project

    Science.gov (United States)

    In 1999, the USDA-Food Safety Inspection Service introduced an inspection system called The HACCP-Based Inspection Models Project (HIMP). HIMP varies from traditional inspection in that more emphasis is placed on system inspection and verification. Each carcass is still visually inspected but some...

  20. THE ASYMPTOTIC PROPERTIES OF SUPERCRITICAL BISEXUAL GALTON-WATSON BRANCHING PROCESSES WITH IMMIGRATION OF MATING UNITS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    In this article the supercritical bisexual Galton-Watson branching process with immigration of mating units is considered. A necessary condition for the almost sure convergence and a sufficient condition for the L1 convergence are given for the suitably normed process.

  1. A Framework for Smart Distribution of Bio-signal Processing Units in M-Health

    NARCIS (Netherlands)

    Mei, Hailiang; Widya, Ing; Broens, Tom; Pawar, Pravin; Halteren, van Aart; Shishkov, Boris; Sinderen, van Marten

    2007-01-01

    This paper introduces the Bio-Signal Processing Unit (BSPU) as a functional component that hosts (part of) the bio-signal information processing algorithms that are needed for an m-health application. With our approach, the BSPUs can be dynamically assigned to available nodes between the bio-signal

  2. Optimization models of the supply of power structures’ organizational units with centralized procurement

    Directory of Open Access Journals (Sweden)

    Sysoiev Volodymyr

    2013-01-01

    Full Text Available Management of the state power structures' organizational units for materiel and technical support requires effective decision-support tools, owing to the complexity, interdependence, and dynamism of supply in a market economy. The corporate nature of power structures makes centralized procurement management of particular interest, as it provides significant advantages through coordination, elimination of duplication, and economies of scale. This article presents optimization models of the supply of state power structures' organizational units with centralized procurement, for different levels of the modelled materiel and technical support processes. The models find the most profitable supply options for organizational units in a centre-oriented logistics system under changing needs, volumes of allocated funds, and logistics costs: they maximize the level of provision of organizational units with the necessary material and technical resources over the entire planning period while minimizing total logistics costs, taking into account the diverse nature and different priorities of the organizational units and of the material and technical resources.
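The flavour of such a centralized-supply optimization can be conveyed with a toy transportation problem solved by exhaustive search (a real model would use linear programming at scale); the suppliers, capacities, demands, and unit logistics costs below are all assumed.

```python
from itertools import product

# Toy centre-oriented supply problem: ship materiel from suppliers to
# organizational units at minimum total logistics cost. All data are
# illustrative assumptions.
capacity = {"S1": 4, "S2": 5}            # units of materiel available
demand = {"U1": 3, "U2": 4}              # units required by each org. unit
cost = {("S1", "U1"): 2, ("S1", "U2"): 5,
        ("S2", "U1"): 4, ("S2", "U2"): 3}

pairs = list(cost)
best = None
# Enumerate every integer shipment plan within per-supplier bounds.
for plan in product(*(range(capacity[s] + 1) for s, _ in pairs)):
    shipped = dict(zip(pairs, plan))
    if any(sum(q for (s, u), q in shipped.items() if s == s0) > capacity[s0]
           for s0 in capacity):
        continue                          # supplier capacity exceeded
    if any(sum(q for (s, u), q in shipped.items() if u == u0) != demand[u0]
           for u0 in demand):
        continue                          # unit demand not met exactly
    total = sum(q * cost[p] for p, q in shipped.items())
    if best is None or total < best[0]:
        best = (total, shipped)

best_cost, best_plan = best
```

Brute force is only viable for a handful of variables; it is used here purely to make the objective and constraints of the supply model concrete.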

  3. Process model patterns for collaborative work

    OpenAIRE

    Lonchamp, Jacques

    1998-01-01

    Peer-reviewed conference paper with published proceedings. As most real work is collaborative in nature, process model developers have to model collaborative situations. This paper defines generic collaborative patterns, i.e., pragmatic and abstract building blocks for modelling recurrent situations. The first part specifies the graphical notation for the solution description. The second part gives some current patterns for the collaborative production of a single document in isolation and for the synchronizat...

  4. Characterization of suspended bacteria from processing units in an advanced drinking water treatment plant of China.

    Science.gov (United States)

    Wang, Feng; Li, Weiying; Zhang, Junpeng; Qi, Wanqi; Zhou, Yanyan; Xiang, Yuan; Shi, Nuo

    2017-05-01

    For drinking water treatment plants (DWTPs), organic pollutant removal has been the primary focus, while suspended bacteria have often been neglected. In this study, the suspended bacteria from each processing unit in a DWTP employing an ozone-biological activated carbon process were characterized using heterotrophic plate counts (HPCs), flow cytometry, and 454-pyrosequencing. The results showed opposite trends in HPC and total cell counts in the sand filtration tank (SFT), where the cultivability of suspended bacteria increased to 34%. The cultivability level of the other units stayed below 3%, except for the ozone contact tank (OCT, 13.5%) and the activated carbon filtration tank (ACFT, 34.39%). Filtration processes thus markedly increased the cultivability of suspended bacteria, which indicates biodegrading capability. In the OCT, microbial diversity indexes declined drastically, and the dominant bacteria were affiliated with the phylum Proteobacteria (99.9%) and the class Betaproteobacteria (86.3%), which also dominated the effluent of the other units. The primary genus was Limnohabitans in the effluents of the SFT (17.4%) and the ACFT (25.6%), which is inferred to be a crucial contributor to the biodegradation function of the filtration units. Overall, this paper provides an overview of the community composition of each processing unit in a DWTP, as well as a reference for better developing microbial function for drinking water treatment in the future.

  5. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    Science.gov (United States)

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-01

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  6. Using Perspective to Model Complex Processes

    Energy Technology Data Exchange (ETDEWEB)

    Kelsey, R.L.; Bisset, K.R.

    1999-04-04

    The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.
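The historic-record idea, with perspective-dependent views over a sequence of process markers and backtracking/restart, can be sketched as follows; the class and attribute names are invented for illustration and are not from the paper's representation.

```python
# Sketch of a process record with perspectives and backtracking.
# Names and the dict-based marker format are illustrative assumptions.
class ProcessHistory:
    """Records the full progress of a process, not just its end result."""

    def __init__(self, initial_state):
        self.markers = [initial_state]          # sequence of process markers

    def advance(self, state):
        self.markers.append(state)

    def view(self, perspective):
        # A perspective selects only the attributes relevant to one observer.
        return [{k: m[k] for k in perspective if k in m} for m in self.markers]

    def restart_from(self, index):
        # Backtrack: discard markers after `index` and resume from there.
        self.markers = self.markers[:index + 1]
        return self.markers[-1]


h = ProcessHistory({"phase": "solid", "temp_C": 20})
h.advance({"phase": "liquid", "temp_C": 80})    # state/phase change recorded
h.advance({"phase": "gas", "temp_C": 120})

thermal_view = h.view(["temp_C"])               # thermal perspective only
state = h.restart_from(1)                       # re-start at the liquid stage
```

Each perspective is just a filtered projection of the same markers, which is the essence of offering multiple views of one underlying process record.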

  7. Filament winding cylinders. I - Process model

    Science.gov (United States)

    Lee, Soo-Yong; Springer, George S.

    1990-01-01

    A model was developed which describes the filament winding process of composite cylinders. The model relates the significant process variables such as winding speed, fiber tension, and applied temperature to the thermal, chemical and mechanical behavior of the composite cylinder and the mandrel. Based on the model, a user friendly code was written which can be used to calculate (1) the temperature in the cylinder and the mandrel, (2) the degree of cure and viscosity in the cylinder, (3) the fiber tensions and fiber positions, (4) the stresses and strains in the cylinder and in the mandrel, and (5) the void diameters in the cylinder.
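The cure submodel in such a process model is typically an Arrhenius-type rate law integrated over the applied temperature history; a minimal sketch, assuming an nth-order cure law and illustrative constants rather than the paper's values:

```python
import math

# Forward-Euler integration of an assumed nth-order cure law:
#   d(alpha)/dt = A * exp(-E / (R * T)) * (1 - alpha)**n
# All constants are illustrative, not the paper's material data.
A = 1.0e5          # pre-exponential factor, 1/s (assumed)
E = 6.0e4          # activation energy, J/mol (assumed)
R = 8.314          # gas constant, J/(mol K)
n = 1.5            # reaction order (assumed)
T = 400.0          # applied winding temperature, K (assumed isothermal)

alpha, dt = 0.0, 1.0                    # degree of cure, time step (s)
history = [alpha]
for _ in range(3600):                   # one hour of cure
    rate = A * math.exp(-E / (R * T)) * (1.0 - alpha) ** n
    alpha = min(1.0, alpha + rate * dt)
    history.append(alpha)
```

In the full model this step is coupled to the thermal solution, since the temperature field drives both the cure rate and the resin viscosity.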

  8. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the influence of the data type; and the influence of the binary operator.
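The tree-based reduction pattern that such CUDA kernels implement can be mimicked in plain Python: each pass halves the number of active elements, just as one synchronized step of GPU threads combining neighbours in shared memory would. The padding step assumes 0 is the operator's identity (true for addition), which is an assumption of this sketch.

```python
# Pure-Python illustration of the parallel (tree-based) reduction pattern.
def tree_reduce(values, op):
    data = list(values)
    while len(data) > 1:
        if len(data) % 2:                # pad odd lengths; assumes identity 0
            data.append(0)
        # One "parallel step": pairwise combination of the two halves,
        # mirroring the sequential-addressing scheme used on GPUs.
        half = len(data) // 2
        data = [op(data[i], data[i + half]) for i in range(half)]
    return data[0]


total = tree_reduce(range(10), lambda a, b: a + b)
```

On a GPU each list comprehension above corresponds to one kernel step executed by `half` threads, so an n-element reduction takes about log2(n) synchronized steps rather than n sequential ones.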

  9. Development and implementation of an interface control-process and of additional models in the simulator of combined cycle units; Desarrollo e implantacion de una interfaz control-proceso y de modelos adicionales en el simulador de unidades de ciclo combinado

    Energy Technology Data Exchange (ETDEWEB)

    Martinez R, Rogelio E; Ramirez G, Miguel; Melgar G, Jose L; Codero C, Juan C; Romero J, Guillermo [Instituto de Investigaciones Electricas, Cuernavaca, Morelos (Mexico)

    2001-07-01

    This article describes the design and implementation of a control-process interface and the formulation of the process models for simulating the vibration amplitudes of the gas and steam turbines and the gas emissions monitoring system, which are part of a full-scope simulator of combined cycle units. These three systems had to be developed and implemented in the simulator of combined cycle units that the Instituto de Investigaciones Electricas (IIE) developed for the Comision Federal de Electricidad (CFE), in order to solve various problems caused by the use of a commercial software platform for the construction of simulators. The problems presented by the software platform are briefly described, as are the solutions contributed with respect to the interconnection of control-process signals and to the missing models of the mechanical part of the gas and steam turbines and of the polluting emissions monitoring system.

  10. Modelling of aerosol processes in plumes

    Energy Technology Data Exchange (ETDEWEB)

    Lazaridis, M.; Isukapalli, S.S.; Georgopoulos, P.G. [Norwegian Institute of Air Research, Kjeller (Norway)

    2001-07-01

    A modelling platform for studying photochemical gaseous and aerosol phase processes from localized (e.g., point) sources has been presented. The current approach employs a reactive plume model which extends the regulatory model RPM-IV by incorporating aerosol processes and heterogeneous chemistry. The physics and chemistry of elemental carbon, organic carbon, sulfate, nitrate, ammonium material of aerosols are treated and attributed to the PM size distribution. A modified version of the carbon bond IV chemical mechanism is included to model the formation of organic aerosol. Aerosol dynamics modeled include mechanisms of nucleation, condensation, dry deposition and gas/particle partitioning of organic matter. The model is first applied to a number of case studies involving emissions from point sources and sulfate particle formation in plumes. Model calculations show that homogeneous nucleation is an efficient process for new particle formation in plumes, in agreement with previous field studies and theoretical predictions. In addition, the model is compared with field data from power plant plumes with satisfactory predictions against gaseous species and total sulphate mass measurements. Finally, the plume model is applied to study secondary organic matter formation due to various emission categories such as vehicles and the oil production sector.

  11. Modeling Multivariate Volatility Processes: Theory and Evidence

    Directory of Open Access Journals (Sweden)

    Jelena Z. Minovic

    2009-05-01

    Full Text Available This article presents theoretical and empirical methodology for the estimation and modeling of multivariate volatility processes. It surveys the model specifications and the estimation methods. The multivariate GARCH models covered are VEC (initially due to Bollerslev, Engle and Wooldridge, 1988), diagonal VEC (DVEC), BEKK (named after Baba, Engle, Kraft and Kroner, 1995), the Constant Conditional Correlation model (CCC; Bollerslev, 1990), and the Dynamic Conditional Correlation (DCC) models of Tse and Tsui (2002) and Engle (2002). I illustrate the approach by applying it to daily data from the Belgrade stock exchange: I examine two pairs of daily log returns for stocks and an index, report the results obtained, and compare them with the restricted versions of the BEKK, DVEC and CCC representations. The estimation methods used are maximum log-likelihood (in the BEKK and DVEC models) and a two-step approach (in the CCC model).
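The structure of the CCC model is simple enough to show in a minimal two-asset simulator: each conditional variance follows its own GARCH(1,1) recursion while the shocks share one constant correlation. All parameter values below are illustrative assumptions, not estimates from the Belgrade data.

```python
import math
import random

# Minimal two-asset CCC-GARCH(1,1) simulator (in the spirit of
# Bollerslev, 1990). Parameters are illustrative assumptions.
random.seed(1)
omega, a1, b1 = 0.05, 0.08, 0.90     # GARCH(1,1) parameters (assumed, shared)
rho = 0.4                            # constant conditional correlation

h1 = h2 = omega / (1 - a1 - b1)      # start at the unconditional variance
returns, variances = [], []
for _ in range(500):
    z1 = random.gauss(0, 1)          # correlated standard normal shocks
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    r1, r2 = math.sqrt(h1) * z1, math.sqrt(h2) * z2
    returns.append((r1, r2))
    variances.append((h1, h2))
    h1 = omega + a1 * r1 ** 2 + b1 * h1   # GARCH(1,1) variance recursions
    h2 = omega + a1 * r2 ** 2 + b1 * h2
```

Estimation reverses this generative story: the two-step CCC approach first fits each univariate GARCH, then estimates the constant correlation from the standardized residuals.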

  12. Modeling the VARTM Composite Manufacturing Process

    Science.gov (United States)

    Song, Xiao-Lan; Loos, Alfred C.; Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal

    2004-01-01

    A comprehensive simulation model of the Vacuum Assisted Resin Transfer Molding (VARTM) composite manufacturing process has been developed. For isothermal resin infiltration, the model incorporates submodels which describe cure of the resin and changes in resin viscosity due to cure, resin flow through the reinforcement preform and distribution medium and compaction of the preform during the infiltration. The accuracy of the model was validated by measuring the flow patterns during resin infiltration of flat preforms. The modeling software was used to evaluate the effects of the distribution medium on resin infiltration of a flat preform. Different distribution medium configurations were examined using the model and the results were compared with data collected during resin infiltration of a carbon fabric preform. The results of the simulations show that the approach used to model the distribution medium can significantly affect the predicted resin infiltration times. Resin infiltration into the preform can be accurately predicted only when the distribution medium is modeled correctly.
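For the idealized 1-D isothermal case, resin flow under Darcy's law gives a closed-form fill time, which hints at why the distribution medium (with its much higher permeability) dominates predicted infiltration times; every property value below is an assumption for illustration.

```python
# Back-of-the-envelope 1-D Darcy infiltration check: under constant
# pressure difference, the fill time of a preform of flow length L is
#   t = mu * phi * L**2 / (2 * K * dP).
# All property values are illustrative assumptions.
mu = 0.2        # resin viscosity, Pa*s (assumed)
phi = 0.5       # preform porosity (assumed)
K = 2.0e-10     # preform permeability, m^2 (assumed)
dP = 1.0e5      # vacuum-driven pressure difference, Pa (assumed)
L = 0.3         # flow length, m (assumed)

fill_time_s = mu * phi * L ** 2 / (2 * K * dP)
```

Because the fill time scales as 1/K, a distribution medium whose permeability is orders of magnitude above the preform's cuts the in-plane infiltration time dramatically, consistent with the sensitivity reported above.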

  13. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the subject-features development of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps are offered for singling out a particular stage, along with an algorithm for developing an integrative model of it. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and for realistic prediction of pedagogical phenomena.

  14. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of a hybrid GPU/central processing unit (CPU) implementation and of a full GPU implementation of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and of traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.

  15. Ancestral process and diffusion model with selection

    CERN Document Server

    Mano, Shuhei

    2008-01-01

    The ancestral selection graph in population genetics introduced by Krone and Neuhauser (1997) is an analogue of the coalescent genealogy. The number of ancestral particles, backward in time, of a sample of genes is an ancestral process, which is a birth and death process with quadratic death and linear birth rates. In this paper an explicit form of the number of ancestral particles is obtained, by using the density of the allele frequency in the corresponding diffusion model obtained by Kimura (1955). It is shown that fixation corresponds to convergence of the ancestral process to its stationary measure. The time to fixation of an allele is studied in terms of the ancestral process.
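A birth and death process with quadratic death and linear birth rates is easy to explore with a Gillespie scheme; the rate parameterization below (coalescence at rate k(k-1)/2, branching at rate sigma*k/2) follows the usual ancestral-selection-graph convention and should be read as an assumption of this sketch.

```python
import random

# Gillespie simulation of the ancestral birth-and-death process.
# Rates below are an assumed parameterization: from state k, death
# (coalescence) at k*(k-1)/2 and birth (selective branching) at sigma*k/2.
random.seed(42)
sigma = 1.0                      # scaled selection strength (assumed)


def simulate(k0, t_end):
    k, t, path = k0, 0.0, [k0]
    while t < t_end:
        death = k * (k - 1) / 2.0
        birth = sigma * k / 2.0
        total = death + birth            # always > 0, since k >= 1
        t += random.expovariate(total)   # time to the next event
        if t >= t_end:
            break
        k = k - 1 if random.random() < death / total else k + 1
        path.append(k)
    return path


path = simulate(k0=10, t_end=50.0)
```

Because the death rate is quadratic while the birth rate is only linear, the trajectory is pulled down from large states but never absorbs at zero, consistent with convergence to a stationary measure.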

  16. Mathematical modeling of biomass fuels formation process.

    Science.gov (United States)

    Gaska, Krzysztof; Wandrasz, Andrzej J

    2008-01-01

    The increasing demand for thermal and electric energy in many branches of industry and municipal management accounts for a drastic diminishing of natural resources (fossil fuels). Meanwhile, in numerous technical processes, a huge mass of wastes is produced. A segregated and converted combustible fraction of the wastes, with relatively high calorific value, may be used as a component of formed fuels. The utilization of the formed fuel components from segregated groups of waste in associated processes of co-combustion with conventional fuels causes significant savings resulting from partial replacement of fossil fuels, and reduction of environmental pollution resulting directly from the limitation of waste migration to the environment (soil, atmospheric air, surface and underground water). The realization of technological processes with the utilization of formed fuel in associated thermal systems should be qualified by technical criteria, which means that elementary processes as well as factors of sustainable development, from a global viewpoint, must not be disturbed. The utilization of post-process waste should be preceded by detailed technical, ecological and economic analyses. In order to optimize the mixing process of fuel components, a mathematical model of the forming process was created. The model is defined as a group of data structures which uniquely identify a real process, together with algorithms, based on a linear programming problem, that convert these data. The paper also presents the optimization of parameters in the process of forming fuels using a modified simplex algorithm with polynomial running time. This model is a datum-point in the numerical modeling of real processes, allowing a precise determination of the optimal elementary composition of formed fuel components, with assumed constraints and decision variables of the task.
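The core blending trade-off behind such a linear-programming formulation can be shown in a two-component toy: maximize the cheap waste-derived fraction subject to a minimum calorific value for the blend. All values below are illustrative assumptions, and a real formed-fuel model would add many more components and constraints (moisture, chlorine, ash, etc.).

```python
# Two-component fuel-blending toy in the spirit of the LP formulation.
# All calorific values, prices, and the target are assumed.
cv_waste, cv_coal = 16.0, 26.0       # calorific values, MJ/kg (assumed)
cost_waste, cost_coal = 10.0, 80.0   # $/tonne (assumed; waste is cheaper)
cv_target = 22.0                     # minimum calorific value of the blend

# Constraint:  x * cv_waste + (1 - x) * cv_coal >= cv_target.
# Since the waste-derived component is cheaper, cost is minimized at the
# largest feasible waste fraction:
#   x_max = (cv_coal - cv_target) / (cv_coal - cv_waste)
x_max = (cv_coal - cv_target) / (cv_coal - cv_waste)
blend_cost = x_max * cost_waste + (1 - x_max) * cost_coal
```

With more components and constraints the closed form disappears, which is exactly where the simplex-based solver described above takes over.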

  17. From Business Value Model to Coordination Process Model

    Science.gov (United States)

    Fatemi, Hassan; van Sinderen, Marten; Wieringa, Roel

    The increased complexity of business webs calls for modeling the collaboration of enterprises from different perspectives, in particular the business and process perspectives, and for mutually aligning these perspectives. Business value modeling and coordination process modeling both are necessary for a good e-business design, but these activities have different goals and use different concepts. Nevertheless, the resulting models should be consistent with each other because they refer to the same system from different perspectives. Hence, checking the consistency between these models or producing one based on the other would be of high value. In this paper we discuss the issue of achieving consistency in multi-level e-business design and give guidelines to produce consistent coordination process models from business value models in a stepwise manner.

  18. Command decoder unit. [performance tests of data processing terminals and data converters for space shuttle orbiters

    Science.gov (United States)

    1976-01-01

    The design and testing of laboratory hardware (a command decoder unit) used in evaluating space shuttle instrumentation, data processing, and ground check-out operations is described. The hardware was a modification of another similar instrumentation system. A data bus coupler was designed and tested to interface the equipment to a central bus controller (computer). A serial digital data transfer mechanism was also designed. Redundant power supplies and overhead modules were provided to minimize the probability of a single component failure causing a catastrophic failure. The command decoder unit is packaged in a modular configuration to allow maximum user flexibility in configuring a system. Test procedures and special test equipment for use in testing the hardware are described. Results indicate that the unit will allow NASA to evaluate future software systems for use in space shuttles. The units were delivered to NASA and appear to be adequately performing their intended function. Engineering sketches and photographs of the command decoder unit are included.

  19. Activation process in excitable systems with multiple noise sources: Large number of units

    CERN Document Server

    Franović, Igor; Todorović, Kristina; Kostić, Srđan; Burić, Nikola

    2015-01-01

    We study the activation process in large assemblies of type II excitable units whose dynamics is influenced by two independent noise terms. The mean-field approach is applied to explicitly demonstrate that the assembly of excitable units can itself exhibit macroscopic excitable behavior. In order to facilitate the comparison between the excitable dynamics of a single unit and an assembly, we introduce three distinct formulations of the assembly activation event. Each formulation treats different aspects of the relevant phenomena, including the threshold-like behavior and the role of coherence of individual spikes. Statistical properties of the assembly activation process, such as the mean time-to-first pulse and the associated coefficient of variation, are found to be qualitatively analogous for all three formulations, as well as to resemble the results for a single unit. These analogies are shown to derive from the fact that global variables undergo a stochastic bifurcation from the stochastically stable fix...
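The single-unit side of this comparison can be explored with a small Euler-Maruyama simulation of a type II (FitzHugh-Nagumo) excitable unit driven by two independent noise terms, estimating the mean time-to-first-pulse and its coefficient of variation; the model form, parameters, and spike threshold are assumptions for illustration, not the paper's exact system.

```python
import math
import random

# Stochastic FitzHugh-Nagumo unit with two independent noise sources,
# integrated by Euler-Maruyama. All parameters are illustrative assumptions.
random.seed(7)
eps, a = 0.05, 1.05            # time-scale separation, excitability (assumed)
D1, D2 = 0.01, 0.01            # intensities of the two noise terms (assumed)
dt = 0.01


def first_pulse_time(t_max=100.0):
    # Start at the deterministic stable fixed point (x*, y*) = (-a, -a + a^3/3).
    x, y, t = -a, -a + a ** 3 / 3.0, 0.0
    while t < t_max:
        dW1 = random.gauss(0, math.sqrt(dt))
        dW2 = random.gauss(0, math.sqrt(dt))
        x += (x - x ** 3 / 3.0 - y) * dt / eps + math.sqrt(2 * D1) * dW1
        y += (x + a) * dt + math.sqrt(2 * D2) * dW2
        t += dt
        if x > 1.0:                     # assumed threshold for spike onset
            return t
    return t_max                        # censored: no pulse before t_max


times = [first_pulse_time() for _ in range(30)]
mean_t = sum(times) / len(times)
cv = (sum((t - mean_t) ** 2 for t in times) / len(times)) ** 0.5 / mean_t
```

The mean-field treatment in the paper asks the analogous question for the assembly's global variables, which is why the same two statistics (mean time-to-first-pulse and its coefficient of variation) appear in both settings.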

  20. Unit Process Wetlands for Removal of Trace Organic Contaminants and Pathogens from Municipal Wastewater Effluents

    Science.gov (United States)

    Jasper, Justin T.; Nguyen, Mi T.; Jones, Zackary L.; Ismail, Niveen S.; Sedlak, David L.; Sharp, Jonathan O.; Luthy, Richard G.; Horne, Alex J.; Nelson, Kara L.

    2013-01-01

    Abstract Treatment wetlands have become an attractive option for the removal of nutrients from municipal wastewater effluents due to their low energy requirements and operational costs, as well as the ancillary benefits they provide, including creating aesthetically appealing spaces and wildlife habitats. Treatment wetlands also hold promise as a means of removing other wastewater-derived contaminants, such as trace organic contaminants and pathogens. However, concerns about variations in treatment efficacy of these pollutants, coupled with an incomplete mechanistic understanding of their removal in wetlands, hinder the widespread adoption of constructed wetlands for these two classes of contaminants. A better understanding is needed so that wetlands as a unit process can be designed for their removal, with individual wetland cells optimized for the removal of specific contaminants, and connected in series or integrated with other engineered or natural treatment processes. In this article, removal mechanisms of trace organic contaminants and pathogens are reviewed, including sorption and sedimentation, biotransformation and predation, photolysis and photoinactivation, and remaining knowledge gaps are identified. In addition, suggestions are provided for how these treatment mechanisms can be enhanced in commonly employed unit process wetland cells or how they might be harnessed in novel unit process cells. It is hoped that application of the unit process concept to a wider range of contaminants will lead to more widespread application of wetland treatment trains as components of urban water infrastructure in the United States and around the globe. PMID:23983451
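Treating each wetland cell as a unit process with first-order removal makes the series design concrete: the concentration leaving a cell is C_out = C_in * exp(-k * tau) for rate constant k and hydraulic residence time tau. The cell types, rate constants, and residence times below are illustrative assumptions.

```python
import math

# First-order removal across unit process wetland cells in series.
# Each tuple: (cell description, k in 1/day, tau in days) - all assumed.
cells = [
    ("open-water cell (photolysis dominant)", 0.8, 1.0),
    ("vegetated cell (sorption/biotransformation)", 0.3, 2.0),
]

c_in = 100.0                   # influent trace-organic concentration, ng/L
c = c_in
for name, k, tau in cells:
    c *= math.exp(-k * tau)    # removal in one unit process cell

overall_removal = 1 - c / c_in
```

Framing each cell this way is what lets individual cells be optimized for specific contaminants and then chained, which is the unit process concept the review advocates.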

  1. Applying the Extended Parallel Process Model to workplace safety messages.

    Science.gov (United States)

    Basil, Michael; Basil, Debra; Deshpande, Sameer; Lavack, Anne M

    2013-01-01

    The extended parallel process model (EPPM) proposes fear appeals are most effective when they combine threat and efficacy. Three studies conducted in the workplace safety context examine the use of various EPPM factors and their effects, especially multiplicative effects. Study 1 was a content analysis examining the use of EPPM factors in actual workplace safety messages. Study 2 experimentally tested these messages with 212 construction trainees. Study 3 replicated this experiment with 1,802 men across four English-speaking countries: Australia, Canada, the United Kingdom, and the United States. The results of these three studies (1) demonstrate the inconsistent use of EPPM components in real-world work safety communications, (2) support the necessity of self-efficacy for the effective use of threat, (3) show a multiplicative effect where communication effectiveness is maximized when all model components are present (severity, susceptibility, and efficacy), and (4) validate these findings with gory appeals across four English-speaking countries.

  2. A Shipping Container-Based Sterile Processing Unit for Low Resources Settings.

    Science.gov (United States)

    Boubour, Jean; Jenson, Katherine; Richter, Hannah; Yarbrough, Josiah; Oden, Z Maria; Schuler, Douglas A

    2016-01-01

    Deficiencies in the sterile processing of medical instruments contribute to poor outcomes for patients, such as surgical site infections, longer hospital stays, and deaths. In low resources settings, such as some rural and semi-rural areas and secondary and tertiary cities of developing countries, deficiencies in sterile processing are accentuated due to the lack of access to sterilization equipment, improperly maintained and malfunctioning equipment, lack of power to operate equipment, poor protocols, and inadequate quality control over inventory. Inspired by our sterile processing fieldwork at a district hospital in Sierra Leone in 2013, we built an autonomous, shipping-container-based sterile processing unit to address these deficiencies. The sterile processing unit, dubbed "the sterile box," is a full suite capable of handling instruments from the moment they leave the operating room to the point they are sterile and ready to be reused for the next surgery. The sterile processing unit is self-sufficient in power and water and features an intake for contaminated instruments, decontamination, sterilization via non-electric steam sterilizers, and secure inventory storage. To validate efficacy, we ran tests of decontamination and sterilization performance. Results of 61 trials validate convincingly that our sterile processing unit achieves satisfactory outcomes for decontamination and sterilization and as such holds promise to support healthcare facilities in low resources settings.

  3. A Shipping Container-Based Sterile Processing Unit for Low Resources Settings.

    Directory of Open Access Journals (Sweden)

    Jean Boubour

    Full Text Available Deficiencies in the sterile processing of medical instruments contribute to poor outcomes for patients, such as surgical site infections, longer hospital stays, and deaths. In low resources settings, such as some rural and semi-rural areas and secondary and tertiary cities of developing countries, deficiencies in sterile processing are accentuated due to the lack of access to sterilization equipment, improperly maintained and malfunctioning equipment, lack of power to operate equipment, poor protocols, and inadequate quality control over inventory. Inspired by our sterile processing fieldwork at a district hospital in Sierra Leone in 2013, we built an autonomous, shipping-container-based sterile processing unit to address these deficiencies. The sterile processing unit, dubbed "the sterile box," is a full suite capable of handling instruments from the moment they leave the operating room to the point they are sterile and ready to be reused for the next surgery. The sterile processing unit is self-sufficient in power and water and features an intake for contaminated instruments, decontamination, sterilization via non-electric steam sterilizers, and secure inventory storage. To validate efficacy, we ran tests of decontamination and sterilization performance. Results of 61 trials validate convincingly that our sterile processing unit achieves satisfactory outcomes for decontamination and sterilization and as such holds promise to support healthcare facilities in low resources settings.

  4. High Power Silicon Carbide (SiC) Power Processing Unit Development

    Science.gov (United States)

    Scheidegger, Robert J.; Santiago, Walter; Bozak, Karin E.; Pinero, Luis R.; Birchenough, Arthur G.

    2015-01-01

    NASA GRC successfully designed, built and tested a technology-push power processing unit for electric propulsion applications that utilizes high voltage silicon carbide (SiC) technology. The development specifically addresses the need for high power electronics to enable electric propulsion systems in the 100s of kilowatts. This unit demonstrated how high voltage combined with superior semiconductor components resulted in exceptional converter performance.

  5. Software Engineering Laboratory (SEL) cleanroom process model

    Science.gov (United States)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where ICs are assembled in dust-free 'clean rooms' to prevent the destructive effects of dust. When applying the cleanroom methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  6. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers

    DEFF Research Database (Denmark)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel

    2016-01-01

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension… … of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  7. Experience in design and startup of distillation towers in primary crude oil processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Lebedev, Y.N.; D'yakov, V.G.; Mamontov, G.V.; Sheinman, V.A.; Ukhin, V.V.

    1985-11-01

    This paper describes a refinery in the city of Mathura, India, with a capacity of 7 million metric tons of crude per year, designed and constructed to include the following units: AVT for primary crude oil processing; catalytic cracking; visbreaking; asphalt; and other units. A diagram of the atmospheric tower with stripping sections is shown, and the stabilizer tower is illustrated. The startup and operation of the AVT and visbreaking units are described, and they demonstrate the high reliability and efficiency of the equipment.

  8. Program note: applying the UN process indicators for emergency obstetric care to the United States.

    Science.gov (United States)

    Lobis, S; Fry, D; Paxton, A

    2005-02-01

    The United Nations Process Indicators for emergency obstetric care (EmOC) have been used extensively in countries with high maternal mortality ratios (MMR) to assess the availability, utilization and quality of EmOC services. To compare the situation in high MMR countries to that of a low MMR country, data from the United States were used to determine EmOC service availability, utilization and quality. As was expected, the United States was found to have an adequate amount of good-quality EmOC services that are used by the majority of women with life-threatening obstetric complications.

  9. Modeling Forest Succession among Ecological Land Units in Northern Minnesota

    Directory of Open Access Journals (Sweden)

    George Host

    1998-12-01

    Full Text Available Field and modeling studies were used to quantify potential successional pathways among fine-scale ecological classification units within two geomorphic regions of north-central Minnesota. Soil and overstory data were collected on plots stratified across low-relief ground moraines and undulating sand dunes. Each geomorphic feature was sampled across gradients of topography or soil texture. Overstory conditions were sampled using five variable-radius point samples per plot; soil samples were analyzed for carbon and nitrogen content. Climatic, forest composition, and soil data were used to parameterize the sample plots for use with LINKAGES, a forest growth model that simulates changes in composition and soil characteristics over time. Forest composition and soil properties varied within and among geomorphic features. LINKAGES simulations were run using "bare ground" and the current overstory as starting conditions. Northern hardwoods or pines dominated the late-successional communities of morainal and dune landforms, respectively. The morainal landforms were dominated by yellow birch and sugar maple; yellow birch reached its maximum abundance in intermediate landscape positions. On the dune sites, pine was most abundant in drier landscape positions, with white spruce increasing in abundance with increasing soil moisture and N content. The differences in measured soil properties and predicted late-successional composition indicate that ecological land units incorporate some of the key variables that govern forest composition and structure. They further show the value of ecological classification and modeling for developing forest management strategies that incorporate the spatial and temporal dynamics of forest ecosystems.

  10. Causally nonseparable processes admitting a causal model

    Science.gov (United States)

    Feix, Adrien; Araújo, Mateus; Brukner, Časlav

    2016-08-01

    A recent framework of quantum theory with no global causal order predicts the existence of ‘causally nonseparable’ processes. Some of these processes produce correlations incompatible with any causal order (they violate so-called ‘causal inequalities’ analogous to Bell inequalities) while others do not (they admit a ‘causal model’ analogous to a local model). Here we show for the first time that bipartite causally nonseparable processes with a causal model exist, and give evidence that they have no clear physical interpretation. We also provide an algorithm to generate processes of this kind and show that they have nonzero measure in the set of all processes. We demonstrate the existence of processes which stop violating causal inequalities but are still causally nonseparable when mixed with a certain amount of ‘white noise’. This is reminiscent of the behavior of Werner states in the context of entanglement and nonlocality. Finally, we provide numerical evidence for the existence of causally nonseparable processes which have a causal model even when extended with an entangled state shared among the parties.

  11. Stochastic differential equation model to Prendiville processes

    Energy Technology Data Exchange (ETDEWEB)

    Granita, E-mail: granitafc@gmail.com [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor Malaysia (Malaysia); Bahar, Arifah [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor Malaysia (Malaysia); UTM Center for Industrial & Applied Mathematics (UTM-CIAM) (Malaysia)

    2015-10-22

    The Prendiville process is another variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work started with the forward Kolmogorov equation for the continuous time Markov chain of the Prendiville process, which was then formulated as a central-difference approximation. The approximation was then used in the Fokker-Planck equation to obtain the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process was obtained from the stochastic differential equation. Therefore, the mean and variance functions of the Prendiville process could easily be found from the explicit solution.
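
The diffusion approximation described in this abstract can be illustrated with a short Euler-Maruyama simulation. The transition-rate parameterization below (birth rate α(N − x), death rate βx) is an assumed logistic-type form chosen for illustration, not necessarily the paper's exact specification.

```python
import numpy as np

def simulate_prendiville_sde(x0, N, alpha, beta, dt, steps, rng):
    """Euler-Maruyama simulation of a diffusion approximation to a
    Prendiville-type birth-death chain on {0, ..., N}.

    Assumed rates (hypothetical parameterization):
      birth rate b(x) = alpha * (N - x)   # growth rate decreases linearly
      death rate d(x) = beta * x
    Diffusion approximation: dX = (b - d) dt + sqrt(b + d) dW.
    """
    x = np.empty(steps + 1)
    x[0] = x0
    for t in range(steps):
        b = alpha * (N - x[t])
        d = beta * x[t]
        drift = b - d
        diffusion = np.sqrt(max(b + d, 0.0))
        x[t + 1] = x[t] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        x[t + 1] = min(max(x[t + 1], 0.0), N)  # keep the path in the finite interval
    return x

rng = np.random.default_rng(0)
path = simulate_prendiville_sde(x0=10.0, N=100, alpha=0.5, beta=0.5, dt=0.01, steps=2000, rng=rng)
```

For these rates the drift vanishes at x = αN/(α + β) = 50, so sample paths fluctuate around that level, consistent with the mean function one would obtain from the explicit solution.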

  12. Stochastic differential equation model to Prendiville processes

    Science.gov (United States)

    Granita; Bahar, Arifah

    2015-10-01

    The Prendiville process is another variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work started with the forward Kolmogorov equation for the continuous time Markov chain of the Prendiville process, which was then formulated as a central-difference approximation. The approximation was then used in the Fokker-Planck equation to obtain the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process was obtained from the stochastic differential equation. Therefore, the mean and variance functions of the Prendiville process could easily be found from the explicit solution.

  13. Lumped Parameter Modeling for Rapid Vibration Response Prototyping and Test Correlation for Electronic Units

    Science.gov (United States)

    Van Dyke, Michael B.

    2013-01-01

    Present preliminary work using lumped parameter models to approximate dynamic response of electronic units to random vibration; Derive a general N-DOF model for application to electronic units; Illustrate parametric influence of model parameters; Implication of coupled dynamics for unit/board design; Demonstrate use of model to infer printed wiring board (PWB) dynamics from external chassis test measurement.
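
A minimal 2-DOF version of a lumped chassis/board model can be written as a generalized eigenvalue problem; the masses and stiffnesses below are hypothetical placeholders, not values from the presentation.

```python
import numpy as np

# Hypothetical 2-DOF lumped model: a chassis (m1, stiffness k1 to the base)
# carrying a printed wiring board (m2, stiffness k2 to the chassis).
# Units: kg and N/m.
m1, m2 = 2.0, 0.2
k1, k2 = 4.0e6, 2.0e5

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Undamped natural frequencies from the eigenproblem K v = w^2 M v,
# solved here via the eigenvalues of M^-1 K.
eigvals = np.linalg.eigvals(np.linalg.inv(M) @ K)
freqs_hz = np.sort(np.sqrt(eigvals.real)) / (2.0 * np.pi)
```

With these values the coupled frequencies (roughly 152 Hz and 235 Hz) bracket the fixed-base board frequency sqrt(k2/m2)/2π ≈ 159 Hz, illustrating how chassis/board coupling shifts the PWB resonance away from its isolated value.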

  14. The Probability Model of Expectation Disconfirmation Process

    Directory of Open Access Journals (Sweden)

    Hui-Hsin HUANG

    2015-06-01

    Full Text Available This paper proposes a probability model to explore the dynamic process of customer satisfaction. Based on expectation disconfirmation theory, satisfaction is constructed from the customer's expectation before the buying behavior and the perceived performance after purchase. An experimental method is designed to measure expectation disconfirmation effects, and the collected data are used to estimate overall satisfaction and calibrate the model. The results show a good fit between the model and the real data. This model has applications in business marketing for managing relationship satisfaction.
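
As a generic illustration of expectation disconfirmation theory (not the paper's specific probability model), one can simulate satisfaction as a noisy function of the gap between perceived performance and prior expectation; all coefficients below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical simulation: satisfaction rises with positive disconfirmation
# (perceived performance exceeding pre-purchase expectation).
expectation = rng.normal(5.0, 1.0, n)      # expectation before buying
performance = rng.normal(5.5, 1.0, n)      # perceived performance after purchase
disconfirmation = performance - expectation
satisfaction = 5.0 + 0.8 * disconfirmation + rng.normal(0.0, 0.5, n)

# Disconfirmation and satisfaction are strongly positively correlated.
corr = np.corrcoef(disconfirmation, satisfaction)[0, 1]
```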

  15. Chain binomial models and binomial autoregressive processes.

    Science.gov (United States)

    Weiss, Christian H; Pollett, Philip K

    2012-09-01

    We establish a connection between a class of chain-binomial models of use in ecology and epidemiology and binomial autoregressive (AR) processes. New results are obtained for the latter, including expressions for the lag-conditional distribution and related quantities. We focus on two types of chain-binomial model, extinction-colonization and colonization-extinction models, and present two approaches to parameter estimation. The asymptotic distributions of the resulting estimators are studied, as well as their finite-sample performance, and we give an application to real data. A connection is made with standard AR models, which also has implications for parameter estimation.
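
The connection to binomial AR(1) processes can be sketched with a McKenzie-type binomial thinning recursion; the patch count and probabilities below are hypothetical illustration values.

```python
import numpy as np

def simulate_binomial_ar1(N, alpha, beta, steps, rng):
    """Binomial AR(1) process (McKenzie-type), as connected to
    extinction-colonization chain-binomial models:

        X_t = alpha ∘ X_{t-1} + beta ∘ (N - X_{t-1}),

    where '∘' denotes binomial thinning: each occupied patch stays
    occupied with probability alpha, and each empty patch is colonized
    with probability beta.
    """
    x = np.empty(steps + 1, dtype=int)
    x[0] = rng.binomial(N, beta / (1 - alpha + beta))  # start near stationarity
    for t in range(steps):
        survivors = rng.binomial(x[t], alpha)
        colonizers = rng.binomial(N - x[t], beta)
        x[t + 1] = survivors + colonizers
    return x

rng = np.random.default_rng(1)
x = simulate_binomial_ar1(N=50, alpha=0.7, beta=0.2, steps=5000, rng=rng)
```

For this parameterization the stationary mean is N·β/(1 − α + β) = 50·0.2/0.5 = 20 occupied patches, which the long-run sample average approaches.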

  16. 21st Century Parent-Child Sex Communication in the United States: A Process Review.

    Science.gov (United States)

    Flores, Dalmacio; Barroso, Julie

    2017-01-06

    Parent-child sex communication results in the transmission of family expectations, societal values, and role modeling of sexual health risk-reduction strategies. Parent-child sex communication's potential to curb negative sexual health outcomes has sustained a multidisciplinary effort to better understand the process and its impact on the development of healthy sexual attitudes and behaviors among adolescents. This review advances what is known about the process of sex communication in the United States by reviewing studies published from 2003 to 2015. We used the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, SocINDEX, and PubMed, and the key terms "parent child" AND "sex education" for the initial query; we included 116 original articles for analysis. Our review underscores long-established factors that prevent parents from effectively broaching and sustaining talks about sex with their children and has also identified emerging concerns unique to today's parenting landscape. Parental factors salient to sex communication are established long before individuals become parents and are acted upon by influences beyond the home. Child-focused communication factors likewise describe a maturing audience that is far from captive. The identification of both enduring and emerging factors that affect how sex communication occurs will inform subsequent work that will result in more positive sexual health outcomes for adolescents.

  17. The Modelling Of Basing Holes Machining Of Automatically Replaceable Cubical Units For Reconfigurable Manufacturing Systems With Low-Waste Production

    Science.gov (United States)

    Bobrovskij, N. M.; Levashkin, D. G.; Bobrovskij, I. N.; Melnikov, P. A.; Lukyanov, A. A.

    2017-01-01

    This article addresses the problem of machining accuracy for the basing holes of automatically replaceable cubical units (carriers) in reconfigurable manufacturing systems with low-waste production (RMS). Results of modelling the machining of the basing holes of automatically replaceable units, based on dimensional chain analysis, are presented. The influence of machining parameters on the accuracy of the center-to-center distances between basing holes is shown. A mathematical model of the machining accuracy of carrier basing holes is proposed.
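
Dimensional chain analysis of a center-to-center distance reduces to a tolerance stack-up; the chain links and tolerance values below are hypothetical and serve only to contrast the arithmetic and statistical stacks.

```python
import math

# Hypothetical dimensional chain for the center-to-center distance between
# two basing holes: each link contributes a nominal dimension and a
# symmetric tolerance (mm).
links = [
    (120.0, 0.02),   # fixture locating surface to first hole axis
    (200.0, 0.03),   # machine positioning between the two holes
    (0.0,   0.01),   # allowance for thermal / clamping displacement
]

nominal = sum(dim for dim, _ in links)
worst_case = sum(tol for _, tol in links)              # arithmetic (max-min) stack
rss = math.sqrt(sum(tol ** 2 for _, tol in links))     # statistical (RSS) stack
```

The RSS stack (≈0.037 mm here) is tighter than the worst-case stack (0.06 mm), which is why statistical dimensional chain analysis often permits looser per-operation tolerances for the same hole-spacing accuracy.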

  18. Modeling heterogeneous chemical processes on aerosol surface

    Institute of Scientific and Technical Information of China (English)

    Junjun Deng; Tijian Wang; Li Liu; Fei Jiang

    2010-01-01

    To explore the possible impact of heterogeneous chemical processes on atmospheric trace components, a coupled box model including gas-phase chemical processes, aerosol thermodynamic equilibrium processes, and heterogeneous chemical processes on the surface of dust, black carbon (BC) and sea salt is set up to simulate the effects of heterogeneous chemistry on the aerosol surface, and to analyze the primary factors affecting the heterogeneous processes. Results indicate that heterogeneous chemical processes on the aerosol surface in the atmosphere will affect the concentrations of trace gases such as H2O2, HO2, O3, NO2, NO3, HNO3 and SO2, and aerosols such as SO42-, NO3- and NH4+. Sensitivity tests suggest that the magnitude of the impact of heterogeneous processes strongly depends on aerosol concentration and the surface uptake coefficients used in the box model. However, the impact of temperature on heterogeneous chemical processes is considerably less. The "renoxification" of HNO3 will affect components of the troposphere such as nitrogen oxides and ozone.
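
The first-order loss rate commonly used to parameterize heterogeneous uptake in such box models is k = γ·v̄·A/4. The uptake coefficient and surface area density below are hypothetical, and this free-molecular expression neglects the gas-phase diffusion limitation.

```python
import math

def mean_molecular_speed(T, molar_mass):
    """Mean molecular speed v = sqrt(8RT / (pi * M)) in m/s."""
    R = 8.314  # J/(mol K)
    return math.sqrt(8.0 * R * T / (math.pi * molar_mass))

def heterogeneous_rate_constant(gamma, T, molar_mass, surface_area_density):
    """First-order loss rate k = gamma * v * A / 4 (1/s) for uptake on an
    aerosol surface.

    gamma                : uptake coefficient (dimensionless)
    surface_area_density : aerosol surface area per air volume (m^2/m^3)
    """
    v = mean_molecular_speed(T, molar_mass)
    return gamma * v * surface_area_density / 4.0

# Hypothetical values: HNO3 (M = 0.063 kg/mol) on dust at 298 K,
# A = 1e-4 m^2/m^3 and gamma = 1e-3.
k = heterogeneous_rate_constant(1e-3, 298.0, 0.063, 1e-4)
lifetime_hours = 1.0 / k / 3600.0
```

Because k scales linearly with both γ and A, the sensitivity of such box models to the assumed uptake coefficients and aerosol loading follows directly from this expression.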

  19. Incorporating evolutionary processes into population viability models.

    Science.gov (United States)

    Pierson, Jennifer C; Beissinger, Steven R; Bragg, Jason G; Coates, David J; Oostermeijer, J Gerard B; Sunnucks, Paul; Schumaker, Nathan H; Trotter, Meredith V; Young, Andrew G

    2015-06-01

    We examined how ecological and evolutionary (eco-evo) processes in population dynamics could be better integrated into population viability analysis (PVA). Complementary advances in computation and population genomics can be combined into an eco-evo PVA to offer powerful new approaches to understand the influence of evolutionary processes on population persistence. We developed the mechanistic basis of an eco-evo PVA using individual-based models with individual-level genotype tracking and dynamic genotype-phenotype mapping to model emergent population-level effects, such as local adaptation and genetic rescue. We then outline how genomics can allow or improve parameter estimation for PVA models by providing genotypic information at large numbers of loci for neutral and functional genome regions. As climate change and other threatening processes increase in rate and scale, eco-evo PVAs will become essential research tools to evaluate the effects of adaptive potential, evolutionary rescue, and locally adapted traits on persistence.
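
The individual-based, genotype-tracking mechanics described above can be caricatured in a few lines. Everything here (the scalar trait, the Gaussian selection function, and all parameter values) is a hypothetical sketch, not the authors' model.

```python
import numpy as np

def simulate_eco_evo(n0, generations, env, rng):
    """Minimal individual-based sketch of an eco-evo PVA: each individual
    carries a heritable trait; survival depends on the match between the
    trait and the environment, so the population can adapt locally.
    Illustrative only; parameter choices are hypothetical.
    """
    traits = rng.normal(0.0, 1.0, n0)
    sizes = [n0]
    for _ in range(generations):
        # Viability selection: survival probability declines with
        # trait-environment mismatch (max 0.9 at a perfect match).
        p_survive = 0.9 * np.exp(-0.5 * (traits - env) ** 2)
        traits = traits[rng.random(traits.size) < p_survive]
        if traits.size == 0:
            sizes.append(0)  # extinction
            break
        # Reproduction: offspring inherit a parent's trait plus
        # mutational noise; a ceiling caps population size.
        parents = rng.choice(traits, size=min(2 * traits.size, 500))
        traits = parents + rng.normal(0.0, 0.1, parents.size)
        sizes.append(traits.size)
    return traits, sizes

rng = np.random.default_rng(7)
traits, sizes = simulate_eco_evo(n0=200, generations=30, env=1.0, rng=rng)
```

Over generations the mean trait tracks the environmental optimum, the emergent local adaptation that a purely demographic PVA cannot represent.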

  20. Quantitative Modeling of Earth Surface Processes

    Science.gov (United States)

    Pelletier, Jon D.

    This textbook describes some of the most effective and straightforward quantitative techniques for modeling Earth surface processes. By emphasizing a core set of equations and solution techniques, the book presents state-of-the-art models currently employed in Earth surface process research, as well as a set of simple but practical research tools. Detailed case studies demonstrate application of the methods to a wide variety of processes including hillslope, fluvial, aeolian, glacial, tectonic, and climatic systems. Exercises at the end of each chapter begin with simple calculations and then progress to more sophisticated problems that require computer programming. All the necessary computer codes are available online at www.cambridge.org/9780521855976. Assuming some knowledge of calculus and basic programming experience, this quantitative textbook is designed for advanced geomorphology courses and as a reference book for professional researchers in Earth and planetary science looking for a quantitative approach to Earth surface processes.
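
One canonical equation in this area, the 1-D hillslope diffusion equation, can be solved with a simple explicit finite-difference scheme; the scarp geometry and diffusivity below are hypothetical illustration values.

```python
import numpy as np

def hillslope_diffusion(z, D, dx, dt, steps):
    """Explicit finite-difference solution of the 1-D hillslope diffusion
    equation dz/dt = D * d2z/dx2, a canonical Earth-surface process model.
    Boundary elevations are held fixed; the scheme is stable for
    D * dt / dx**2 <= 0.5.
    """
    z = z.astype(float).copy()
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this time step"
    for _ in range(steps):
        z[1:-1] += r * (z[2:] - 2.0 * z[1:-1] + z[:-2])
    return z

# Hypothetical initial scarp profile (m) relaxing by diffusion,
# with D in m^2/yr, dx in m, dt in yr.
x = np.linspace(0.0, 100.0, 101)
z0 = np.where(x < 50.0, 10.0, 0.0)
z = hillslope_diffusion(z0, D=0.01, dx=1.0, dt=10.0, steps=5000)
```

The sharp scarp smooths into a rounded, monotonically decreasing profile, the classic diffusive hillslope form used to date fault scarps.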

  1. A process algebra model of QED

    Science.gov (United States)

    Sulis, William

    2016-03-01

    The process algebra approach to quantum mechanics posits a finite, discrete, determinate ontology of primitive events which are generated by processes (in the sense of Whitehead). In this ontology, primitive events serve as elements of an emergent space-time and of emergent fundamental particles and fields. Each process generates a set of primitive elements, using only local information, causally propagated as a discrete wave, forming a causal space termed a causal tapestry. Each causal tapestry forms a discrete and finite sampling of an emergent causal manifold (space-time) M and emergent wave function. Interactions between processes are described by a process algebra which possesses 8 commutative operations (sums and products) together with a non-commutative concatenation operator (transitions). The process algebra possesses a representation via nondeterministic combinatorial games. The process algebra connects to quantum mechanics through the set valued process and configuration space covering maps, which associate each causal tapestry with sets of wave functions over M. Probabilities emerge from interactions between processes. The process algebra model has been shown to reproduce many features of the theory of non-relativistic scalar particles to a high degree of accuracy, without paradox or divergences. This paper extends the approach to a semi-classical form of quantum electrodynamics.

  2. Parallel design of JPEG-LS encoder on graphics processing units

    Science.gov (United States)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve the compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels, and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with 26.3x speedup over the original CPU code.
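
The parallel prefix sum mentioned among the CUDA techniques follows a simple data-parallel pattern. A sketch of the Hillis-Steele variant in numpy (each pass corresponds to one fully parallel GPU step):

```python
import numpy as np

def hillis_steele_scan(a):
    """Inclusive prefix sum via the Hillis-Steele pattern: ceil(log2(n))
    passes, each of which could run fully in parallel on a GPU (emulated
    here with vectorized numpy shifts).
    """
    x = np.asarray(a, dtype=np.int64).copy()
    shift = 1
    while shift < x.size:
        # Every element adds the value 'shift' positions to its left.
        shifted = np.concatenate([np.zeros(shift, dtype=x.dtype), x[:-shift]])
        x = x + shifted
        shift *= 2
    return x

data = np.array([3, 1, 7, 0, 4, 1, 6, 3])
scan = hillis_steele_scan(data)   # matches np.cumsum(data)
```

Replacing a sequential running sum with a logarithmic number of parallel passes is what makes prefix-sum-based steps (e.g. computing output offsets for variable-length codewords) practical on a GPU.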

  3. Development Status of Power Processing Unit for 250mN-Class Hall Thruster

    Science.gov (United States)

    Osuga, H.; Suzuki, K.; Ozaki, T.; Nakagawa, T.; Suga, I.; Tamida, T.; Akuzawa, Y.; Suzuki, H.; Soga, Y.; Furuichi, T.; Maki, S.; Matui, K.

    2008-09-01

    Institute for Unmanned Space Experiment Free Flyer (USEF) and Mitsubishi Electric Corporation (MELCO) are developing the next generation ion engine system under the sponsorship of the Ministry of Economy, Trade and Industry (METI) within six years. The system requirement specifications are a thrust level of over 250 mN and a specific impulse of over 1500 sec with a less than 5 kW electric power supply, and a lifetime of over 3,000 hours. These target specifications required the development of both a Hall Thruster and a Power Processing Unit (PPU). In the 2007 fiscal year, the PPU, called the Second Engineering Model (EM2) and consisting of all power supplies, was built as a model for the Hall Thruster system. The EM2 PPU showed a discharge efficiency of over 96.2% for 250 V and 350 V at output powers between 1.8 kW and 4.5 kW. The Hall Thruster could also start up quickly and smoothly while controlling the discharge voltage, the inner magnet current, the outer magnet current and the xenon flow rate. This paper reports on the design and test results of the EM2 PPU.

  4. Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units.

    Science.gov (United States)

    Wang, Kun; Huang, Chao; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A

    2013-02-01

    Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional imaging models. One important reason is because 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer simulation and experimental studies are conducted to investigate the computational efficiency and numerical accuracy of the developed algorithms. The GPU implementations improve the computational efficiency by factors of 1000, 125, and 250 for the FBP algorithm and the two pairs of projection/backprojection operators, respectively. Accurate images are reconstructed by use of the FBP and iterative image reconstruction algorithms from both computer-simulated and experimental data. Parallelization strategies for 3D OAT image reconstruction are proposed for the first time. These GPU-based implementations significantly reduce the computational time for 3D image reconstruction, complementing our earlier work on 3D OAT iterative image reconstruction.
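
When implementing a matched projection/backprojection pair for iterative reconstruction, a standard sanity check is the adjoint identity ⟨Hx, y⟩ = ⟨x, Hᵀy⟩. In the sketch below a small random matrix stands in for the (vastly larger) discrete OAT imaging model.

```python
import numpy as np

# A matched projection (H) and backprojection (H^T) operator pair should
# satisfy the adjoint identity <Hx, y> == <x, H^T y> for all x, y; an
# unmatched pair can prevent iterative algorithms from converging.
rng = np.random.default_rng(3)
H = rng.standard_normal((40, 25))   # stand-in for the discrete imaging operator

def project(x):
    return H @ x                    # "forward projection"

def backproject(y):
    return H.T @ y                  # "backprojection" (adjoint)

x = rng.standard_normal(25)
y = rng.standard_normal(40)
lhs = float(np.dot(project(x), y))
rhs = float(np.dot(x, backproject(y)))
# lhs and rhs agree to floating-point precision for a matched pair.
```

In a GPU implementation the same test is run on random vectors to verify that the independently parallelized projector and backprojector kernels really form an adjoint pair.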

  5. Model-Free Adaptive Heating Process Control

    OpenAIRE

    Lukáčová, Ivana; Piteľ, Ján

    2009-01-01

    The aim of this paper is to analyze the dynamic behaviour of a Model-Free Adaptive (MFA) heating process control. The MFA controller is designed as a three-layer neural network with a proportional element. The method of backward propagation of errors was used for neural network training. Visualization and training of the artificial neural network were executed with Netlab in the Matlab environment. Simulation of the MFA heating process control with outdoor temperature compensation has proved better resu...

  6. HYDROLOGICAL PROCESSES MODELLING USING ADVANCED HYDROINFORMATIC TOOLS

    Directory of Open Access Journals (Sweden)

    BEILICCI ERIKA

    2014-03-01

    Full Text Available Water has an essential role in the functioning of ecosystems, integrating the complex physical, chemical, and biological processes that sustain life. Water is a key factor in determining the productivity of ecosystems, biodiversity and species composition. Water is also essential for humanity: water supply systems for the population, agriculture, fisheries, industries, and hydroelectric power depend on water supplies. The modelling of hydrological processes is an important activity for water resources management, especially now, when climate change is one of the major challenges of our century, with a strong influence on the dynamics of hydrological processes. Climate change and the need for more knowledge of water resources require the use of advanced hydroinformatic tools in hydrological process modelling. The rationale and purpose of advanced hydroinformatic tools is to develop a new relationship between the stakeholders and the users and suppliers of the systems: to offer a basis of systems which supply usable results whose validity cannot be put in reasonable doubt by any of the stakeholders involved. Successful modelling of hydrological processes also needs well-trained specialists able to use advanced hydroinformatic tools. The results of modelling can be a useful tool for decision makers in taking efficient measures in the social, economic and ecological domains regarding water resources, for integrated water resources management.
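
As a minimal example of a hydrological process model, a single linear reservoir converts a rainfall series into a delayed, attenuated runoff series; the parameter values below are hypothetical.

```python
import numpy as np

def linear_reservoir(precip, k, dt=1.0):
    """Single linear-reservoir rainfall-runoff sketch, a minimal
    hydrological process model: dS/dt = P - Q with outflow Q = S / k.
    Discretized explicitly; k is the storage time constant in the same
    time units as dt.
    """
    storage, runoff = 0.0, []
    for p in precip:
        q = storage / k
        storage += (p - q) * dt
        runoff.append(q)
    return np.array(runoff)

# Hypothetical storm hyetograph (mm per time step) routed through the
# reservoir with a time constant of 3 steps.
precip = np.array([0, 10, 20, 5, 0, 0, 0, 0, 0, 0], dtype=float)
q = linear_reservoir(precip, k=3.0)
```

The runoff peak is both later and lower than the rainfall peak, the basic attenuation-and-lag behaviour that more elaborate hydroinformatic models refine.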

  7. SWOT Analysis of Software Development Process Models

    Directory of Open Access Journals (Sweden)

    Ashish B. Sasankar

    2011-09-01

    Full Text Available Software worth billions and trillions of dollars has gone to waste in the past due to the lack of proper techniques for developing software, resulting in software crises. Historically, the process of software development has played an important role in software engineering. A number of life cycle models have been developed in the last three decades. This paper is an attempt to analyze software process models using the SWOT method. The objective is to identify the Strengths, Weaknesses, Opportunities and Threats of the Waterfall, Spiral, Prototype and other models.

  8. Fundamentals of Numerical Modelling of Casting Processes

    DEFF Research Database (Denmark)

    Pryds, Nini; Thorborg, Jesper; Lipinski, Marek;

    Fundamentals of Numerical Modelling of Casting Processes comprises a thorough presentation of the basic phenomena that need to be addressed in numerical simulation of casting processes. The main philosophy of the book is to present the topics in view of their physical meaning, whenever possible......, rather than relying strictly on mathematical formalism. The book, aimed both at the researcher and the practicing engineer, as well as the student, is naturally divided into four parts. Part I (Chapters 1-3) introduces the fundamentals of modelling in a 1-dimensional framework. Part II (Chapter 4...

  9. The Best Practice Unit: a model for learning, research and development

    Directory of Open Access Journals (Sweden)

    Jean Pierre Wilken

    2013-06-01

    Full Text Available The Best Practice Unit (BPU) model constitutes a unique form of practice-based research. A variant of the Community of Practice model developed by Wenger, McDermott and Snyder (2002), the BPU has the specific aim of improving professional practice by combining innovation and research. The model is used as a way of working by a group of professionals, researchers and other relevant individuals, who over a period of one to two years work together towards a desired improvement. The model is characterized by interaction between individual and collective learning processes, the development of new or improved working methods, and the implementation of these methods in daily practice. Multiple knowledge resources are used, including experiential knowledge, professional knowledge and scientific knowledge. The research serves diverse purposes: articulating tacit knowledge, documenting learning and innovation processes, systematically describing the working methods that have been revealed or developed, and evaluating the efficacy of the new methods. Each BPU is supported by a facilitator, whose main task is to optimize learning processes. An analysis of ten different BPUs in different professional fields shows that this is a successful model. The article describes the methodology and results of this study.

  10. A Mathematical Model of Cigarette Smoldering Process

    Directory of Open Access Journals (Sweden)

    Chen P

    2014-12-01

    Full Text Available A mathematical model of a smoldering cigarette is proposed. In the analysis of the cigarette combustion and pyrolysis processes, a receding burning front is defined, which has a constant temperature (~450 °C) and divides the cigarette into two zones, the burning zone and the pyrolysis zone. The char combustion processes in the burning zone, and the pyrolysis of virgin tobacco and evaporation of water in the pyrolysis zone, are included in the model. The hot gases flowing out of the burning zone are assumed to leave as sidestream smoke during smoldering. Internal heat transport is characterized by effective thermal conductivities in each zone. Thermal conduction through the cigarette paper and convective and radiative heat transfer at the outer surface are also considered. The governing partial differential equations were solved using an integral method. Model predictions of smoldering speed as well as temperature and density profiles in the pyrolysis zone for different kinds of cigarettes were found to agree with the experimental data. The model also predicts the coal length and the maximum coal temperatures during smoldering conditions. The model provides a relatively fast and efficient way to simulate the cigarette burning processes. It offers a practical tool for exploring important parameters of cigarette smoldering, such as tobacco components, properties of the cigarette paper, and heat generation in the burning zone and its dependence on the mass burn rate.

  11. Processing and Modeling of Porous Copper Using Sintering Dissolution Process

    Science.gov (United States)

    Salih, Mustafa Abualgasim Abdalhakam

    The growth of porous metals has produced materials with improved properties as compared to non-metals and solid metals. Porous metal can be classified as either open cell or closed cell. An open-cell structure allows a fluid medium to pass through it, while a closed-cell structure is made up of adjacent sealed pores with shared cell walls. Metal foams offer higher strength-to-weight ratios, increased impact energy absorption, and a greater tolerance to high temperatures and adverse environmental conditions when compared to bulk materials. Copper and its alloys are examples of this, well known for high strength and good mechanical, thermal and electrical properties. In the present study, porous Cu was made by a powder metallurgy process using three different space holders: sodium chloride, sodium carbonate and potassium carbonate. Several different samples were produced using different volume fractions. The densities of the porous metals were measured and compared to the theoretical density calculated using an equation developed for these foams. The porous structure was obtained by removing the spacer materials during the sintering process; the sintering schedule for each spacer material depends on its melting point. Processing, characterization, and mechanical property testing were completed. These tests include density measurements, compression tests, computed tomography (CT) and scanning electron microscopy (SEM). The captured morphological images are used to generate object-oriented finite element (OOF) analyses of the porous copper. Porous copper was formed with porosities in the range of 40-66% and densities from 3 to 5.2 g/cm3. A study of two different methods to measure porosity was completed.
OOF (Object Oriented Finite Elements) is a desktop software application for studying the relationship between the microstructure of a material and its overall mechanical, dielectric, or thermal properties using finite element models based on

  12. Grace: a Cross-platform Micromagnetic Simulator On Graphics Processing Units

    CERN Document Server

    Zhu, Ru

    2014-01-01

    A micromagnetic simulator running on a graphics processing unit (GPU) is presented. It achieves a significant performance boost compared to previous central processing unit (CPU) simulators, up to two orders of magnitude for large input problems. Unlike the GPU implementations of other research groups, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware-platform compatible. It runs on GPUs from vendors including NVIDIA, AMD and Intel, which paves the way for fast micromagnetic simulation both on high-end workstations with dedicated graphics cards and on low-end personal computers with integrated graphics. A copy of the simulator software is publicly available.

  13. Fast extended focused imaging in digital holography using a graphics processing unit.

    Science.gov (United States)

    Wang, Le; Zhao, Jianlin; Di, Jianglei; Jiang, Hongzhen

    2011-05-01

    We present a simple and effective method for reconstructing extended focused images in digital holography using a graphics processing unit (GPU). The Fresnel transform method is simplified by an algorithm named fast Fourier transform pruning with frequency shift. Then the pixel size consistency problem is solved by coordinate transformation and combining the subpixel resampling and the fast Fourier transform pruning with frequency shift. With the assistance of the GPU, we implemented an improved parallel version of this method, which obtained about a 300-500-fold speedup compared with central processing unit codes.

  14. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
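    The dominant cost in point-based CGH computation, superposing a spherical wave from every object point at every hologram pixel, can be sketched in a few lines; the geometry, wavelength, and the omission of a reference wave below are simplifying assumptions, not the cluster system's actual algorithm:

```python
import math

def cgh_intensity(pixels, points, wavelength):
    """Toy point-source hologram: superpose spherical waves from each
    3D object point at every hologram pixel (reference wave omitted)."""
    k = 2 * math.pi / wavelength
    field = []
    for (px, py) in pixels:
        re = im = 0.0
        for (x, y, z) in points:
            r = math.sqrt((px - x) ** 2 + (py - y) ** 2 + z ** 2)
            re += math.cos(k * r)  # real part of the point's contribution
            im += math.sin(k * r)  # imaginary part
        field.append(re * re + im * im)
    return field

pixels = [(0.0, 0.0), (1.0, 0.0)]
points = [(0.0, 0.0, 100.0)]               # a single point source
print(cgh_intensity(pixels, points, 0.5))  # one source gives unit intensity everywhere
```

    Because every pixel-point pair is independent, the double loop parallelizes naturally, which is exactly what the multi-GPU cluster exploits.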

  15. Internet User Behaviour Model Discovery Process

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available The Academy of Economic Studies has more than 45000 students and about 5000 computers with Internet access which are connected to the AES network. Students can access the Internet on these computers through a proxy server, which stores information about the way the Internet is accessed. In this paper, we describe the process of discovering Internet user behaviour models by analyzing raw proxy server data, and we emphasize the importance of such models for the e-learning environment.
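    As a minimal illustration of the first step of such a discovery process, aggregating raw proxy logs into access counts might look like the sketch below; the three-field log format, field order, and values are hypothetical:

```python
from collections import Counter

def top_hosts(log_lines, n=3):
    """Count visited hosts in hypothetical proxy log lines of the
    form 'timestamp user host' and return the n most common."""
    hosts = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 3:          # skip malformed lines
            _, _, host = parts
            hosts[host] += 1
    return hosts.most_common(n)

logs = [
    "10:00 u1 example.com",
    "10:01 u2 example.com",
    "10:02 u1 docs.test",
]
print(top_hosts(logs, 2))  # [('example.com', 2), ('docs.test', 1)]
```

    Real behaviour-model discovery would then cluster such per-user counts over time, but the aggregation step is always of this shape.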

  16. Internet User Behaviour Model Discovery Process

    OpenAIRE

    Dragos Marcel VESPAN

    2007-01-01

    The Academy of Economic Studies has more than 45000 students and about 5000 computers with Internet access which are connected to the AES network. Students can access the Internet on these computers through a proxy server, which stores information about the way the Internet is accessed. In this paper, we describe the process of discovering Internet user behaviour models by analyzing raw proxy server data, and we emphasize the importance of such models for the e-learning environment.

  17. A convolution model of rock bed thermal storage units

    Science.gov (United States)

    Sowell, E. F.; Curry, R. L.

    1980-01-01

    A method is presented whereby a packed-bed thermal storage unit is dynamically modeled for bi-directional flow and arbitrary input flow stream temperature variations. The method is based on the principle of calculating the output temperature as the sum of earlier input temperatures, each multiplied by a predetermined 'response factor', i.e., discrete convolution. A computer implementation of the scheme, in the form of a subroutine for a widely used solar simulation program (TRNSYS) is described and numerical results compared with other models. Also, a method for efficient computation of the required response factors is described; this solution is for a triangular input pulse, previously unreported, although the solution method is also applicable for other input functions. This solution requires a single integration of a known function which is easily carried out numerically to the required precision.
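    The discrete convolution at the heart of the response-factor method can be sketched in a few lines; the factor values and temperatures below are illustrative, not taken from the paper:

```python
def outlet_temperature(inlet_history, response_factors):
    """Discrete convolution: the current outlet temperature is the sum of
    earlier inlet temperatures weighted by precomputed response factors.
    inlet_history[0] is the most recent inlet sample."""
    return sum(r * t for r, t in zip(response_factors, inlet_history))

# Hypothetical response factors that decay with age and sum to 1.
factors = [0.5, 0.3, 0.2]
history = [60.0, 55.0, 50.0]  # degrees C, newest first
print(outlet_temperature(history, factors))  # 0.5*60 + 0.3*55 + 0.2*50 = 56.5
```

    The storage dynamics are thus reduced to one dot product per time step, which is why the response-factor formulation is cheap inside a simulation program like TRNSYS.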

  18. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 250 ms of data from 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
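    The first stage of that signal chain, a spatial filter applied as a matrix-matrix product, can be sketched without any GPU; the filter weights and sample values below are hypothetical:

```python
def spatial_filter(weights, samples):
    """Apply a spatial filter as a matrix-matrix product:
    weights is (filters x channels), samples is (channels x timepoints)."""
    n_f, n_ch = len(weights), len(weights[0])
    n_t = len(samples[0])
    out = [[0.0] * n_t for _ in range(n_f)]
    for f in range(n_f):
        for t in range(n_t):
            out[f][t] = sum(weights[f][c] * samples[c][t] for c in range(n_ch))
    return out

# A hypothetical bipolar derivation on 2 channels
W = [[1.0, -1.0]]             # one output filter: channel 0 minus channel 1
X = [[1.0, 2.0], [0.5, 1.0]]  # 2 channels x 2 timepoints
print(spatial_filter(W, X))   # [[0.5, 1.0]]
```

    Each output element is independent of the others, which is what makes this step a natural fit for CUDA's thread-per-element execution model.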

  19. Improving the process of process modelling by the use of domain process patterns

    Science.gov (United States)

    Koschmider, Agnes; Reijers, Hajo A.

    2015-01-01

    The use of business process models has become prevalent in a wide area of enterprise applications. But while their popularity is expanding, concerns are growing with respect to their proper creation and maintenance. An obvious way to boost the efficiency of creating high-quality business process models would be to reuse relevant parts of existing models. At this point, however, limited support exists to guide process modellers towards the usage of appropriate model content. In this paper, a set of content-oriented patterns is presented, which is extracted from a large set of process models from the order management and manufacturing production domains. The patterns are derived using a newly proposed set of algorithms, which are being discussed in this paper. The authors demonstrate how such Domain Process Patterns, in combination with information on their historic usage, can support process modellers in generating new models. To support the wider dissemination and development of Domain Process Patterns within and beyond the studied domains, an accompanying website has been set up.

  20. Modeling stroke rehabilitation processes using the Unified Modeling Language (UML).

    Science.gov (United States)

    Ferrante, Simona; Bonacina, Stefano; Pinciroli, Francesco

    2013-10-01

    In organising and providing rehabilitation procedures for stroke patients, the usual need for many refinements makes it inappropriate to attempt rigid standardisation, but greater detail is required concerning workflow. The aim of this study was to build a model of the post-stroke rehabilitation process. The model, implemented in the Unified Modeling Language, was grounded on international guidelines and refined following the clinical pathway adopted at local level by a specialized rehabilitation centre. The model describes the organisation of the rehabilitation delivery and it facilitates the monitoring of recovery during the process. Indeed, a system software was developed and tested to support clinicians in the digital administration of clinical scales. The model flexibility assures easy updating after process evolution. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Process and structure: resource management and the development of sub-unit organisational structure.

    Science.gov (United States)

    Packwood, T; Keen, J; Buxton, M

    1992-03-01

    Resource Management (RM) requires hospital units to manage their work in new ways, and the new management processes affect, and are affected by, organisation structure. This paper is concerned with these effects, reporting on the basis of a three-year evaluation of the national RM experiment that was commissioned by the DH. After briefly indicating some of the major characteristics of the RM process, the two main types of unit structures existing in the pilot sites at the beginning of the experiment, unit disciplinary structure and clinical directorates, are analysed. At the end of the experiment, while clinical directorates had become more popular, another variant, clinical grouping, had replaced the unit disciplinary structure. Both types of structure represent a movement towards sub-unit organisation, bringing the work and interests of the service providers and unit managers closer together. Their properties are likewise analysed and their implications, particularly in terms of training and organisational development (OD), are then considered. The paper concludes by considering the causes for these structural changes, which, in the immediate time-scale, appear to owe as much to the NHS Review as to RM.

  2. In-Situ Statistical Analysis of Autotune Simulation Data using Graphical Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Ranjan, Niloo [ORNL; Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL

    2013-08-01

    Developing accurate building energy simulation models to assist energy efficiency at speed and scale is one of the research goals of the Whole-Building and Community Integration group, which is a part of the Building Technologies Research and Integration Center (BTRIC) at Oak Ridge National Laboratory (ORNL). The aim of the Autotune project is to speed up the automated calibration of building energy models to match measured utility or sensor data. The workflow of this project takes input parameters and runs EnergyPlus simulations on Oak Ridge Leadership Computing Facility's (OLCF) computing resources such as Titan, the world's second fastest supercomputer. Multiple simulations run in parallel on nodes having 16 processors each and a Graphics Processing Unit (GPU). Each node produces a 5.7 GB output file comprising 256 files from 64 simulations. Four types of output data, covering monthly, daily, hourly, and 15-minute time steps for each annual simulation, are produced. A total of 270 TB+ of data has been produced. In this project, the simulation data is statistically analyzed in situ using GPUs while annual simulations are being computed on the traditional processors. Titan, with its recent addition of 18,688 Compute Unified Device Architecture (CUDA) capable NVIDIA GPUs, has greatly extended its capability for massively parallel data processing. CUDA is used along with C/MPI to calculate statistical metrics such as sum, mean, variance, and standard deviation, leveraging GPU acceleration. The workflow developed in this project produces statistical summaries of the data, which reduces by multiple orders of magnitude the time and amount of data that needs to be stored. These statistical capabilities are anticipated to be useful for sensitivity analysis of EnergyPlus simulations.
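    The statistical metrics mentioned (count, mean, variance, standard deviation) can each be computed in a single pass over a stream; a minimal CPU sketch of such a summary, using Welford's online algorithm rather than the project's CUDA kernels, might look like:

```python
def running_stats(values):
    """One-pass (Welford) computation of count, mean and population
    variance, the kind of summary the in-situ analysis reduces each
    output stream to."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    variance = m2 / n if n else 0.0
    return n, mean, variance

n, mean, var = running_stats([2.0, 4.0, 6.0])
print(n, mean, var)  # n=3, mean=4.0, var = 8/3
```

    Keeping only (n, mean, m2) per stream is what allows terabytes of raw simulation output to be reduced to a summary many orders of magnitude smaller.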

  3. [Applying graphics processing unit in real-time signal processing and visualization of ophthalmic Fourier-domain OCT system].

    Science.gov (United States)

    Liu, Qiaoyan; Li, Yuejie; Xu, Qiujing; Zhao, Jincheng; Wang, Liwei; Gao, Yonghe

    2013-01-01

    This investigation introduces GPU (Graphics Processing Unit)-based CUDA (Compute Unified Device Architecture) technology into the signal processing of an ophthalmic FD-OCT (Fourier-Domain Optical Coherence Tomography) imaging system to realize parallel data processing, using CUDA to optimize the relevant operations and algorithms in order to remove the technical bottlenecks that currently prevent real-time ophthalmic imaging in OCT systems. Laboratory results showed that, with the GPU as a general-purpose parallel computing processor, imaging data processing in GPU+CPU mode is dozens of times faster than traditional CPU-based serial computing and imaging for the same data, which meets the clinical requirements for two-dimensional real-time imaging.

  4. Stochastic model of milk homogenization process using Markov's chain

    Directory of Open Access Journals (Sweden)

    A. A. Khvostov

    2016-01-01

    Full Text Available The development of a mathematical model of the homogenization of dairy products is considered in this work. The theory of Markov chains was used in developing the model: a Markov chain with discrete states and a continuous parameter, for which the homogenization pressure is taken, forms the basis of the model structure. The model is implemented in the structural modeling environment MathWorks Simulink™. Identification of the model parameters was carried out by minimizing the standard deviation from the experimental data for each fraction of the fat phase of the dairy products. The experimental data were obtained by processing micrographic images of the fat-globule distributions of whole milk samples homogenized at different pressures. The Pattern Search method with the Latin Hypercube search algorithm from the Global Optimization Toolbox library was used for optimization. The calculation error averaged 0.88% (relative share of units) over all fractions; the maximum relative error was 3.7% at a homogenization pressure of 30 MPa, which may be due to the very abrupt change of the particle size distribution away from that of the original milk at the beginning of homogenization, and to the lack of experimental data at homogenization pressures below that value. The proposed mathematical model allows the volume and mass distribution of the fat phase (fat globules) in the product to be calculated as a function of homogenization pressure, and can be used in laboratory research on dairy product composition as well as in the calculation, design and modeling of process equipment for dairy industry enterprises.
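    The underlying idea, propagating a distribution over discrete states through a Markov transition matrix, can be sketched as below; the two states and transition probabilities are invented for illustration, and the actual model uses homogenization pressure as a continuous parameter rather than discrete steps:

```python
def evolve(dist, P, steps):
    """Propagate a probability distribution over discrete states
    through `steps` applications of transition matrix P (rows sum to 1)."""
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(dist)))
                for j in range(len(P[0]))]
    return dist

# Hypothetical two-state chain: 'large globule' -> 'small globule'
P = [[0.7, 0.3],   # a large globule stays large with 0.7, breaks up with 0.3
     [0.0, 1.0]]   # small globules stay small
print(evolve([1.0, 0.0], P, 2))  # approximately [0.49, 0.51]
```

    In the paper's formulation, increasing the pressure plays the role of advancing the chain, shifting probability mass toward the smaller fat-globule fractions.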

  5. Model-based internal wave processing

    Energy Technology Data Exchange (ETDEWEB)

    Candy, J.V.; Chambers, D.H.

    1995-06-09

    A model-based approach is proposed to solve the oceanic internal wave signal processing problem, based on state-space representations of the normal-mode vertical velocity and plane-wave horizontal velocity propagation models. It is shown that these representations can be utilized to spatially propagate the modal (depth) vertical velocity functions given the basic parameters (wave numbers, Brunt-Vaisala frequency profile, etc.) developed from the solution of the associated boundary value problem, as well as the horizontal velocity components. Based on this framework, model-based solutions to the signal enhancement problem for internal waves are investigated.

  6. Counting Processes for Retail Default Modeling

    DEFF Research Database (Denmark)

    Kiefer, Nicholas Maximilian; Larson, C. Erik

    in a discrete state space. In a simple case, the states could be default/non-default; in other models relevant for credit modeling the states could be credit scores or payment status (30 dpd, 60 dpd, etc.). Here we focus on the use of stochastic counting processes for mortgage default modeling, using data...... on high LTV mortgages. Borrowers seeking to finance more than 80% of a house's value with a mortgage usually either purchase mortgage insurance, allowing a first mortgage greater than 80% from many lenders, or use second mortgages. Are there differences in performance between loans financed...
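    As a toy illustration of the kind of stochastic counting process used here, a homogeneous Poisson process can be simulated from exponential inter-arrival times; the rate and horizon below are arbitrary, and real default models would make the intensity depend on covariates:

```python
import random

def poisson_process_count(rate, horizon, rng):
    """Simulate a homogeneous Poisson counting process by drawing
    exponential inter-arrival times until the horizon is exceeded;
    returns the number of events (e.g. defaults) observed."""
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return count
        count += 1

rng = random.Random(42)
counts = [poisson_process_count(2.0, 1.0, rng) for _ in range(10000)]
print(sum(counts) / len(counts))  # close to rate * horizon = 2.0
```

    The empirical mean count matching rate times horizon is the defining property being checked; moving from this toy to a credit model amounts to letting the rate vary by state (payment status, credit score) and by borrower characteristics.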

  7. Integrated Process Model on Intercultural Competence

    Directory of Open Access Journals (Sweden)

    Diana Bebenova - Nikolova

    2016-08-01

    Full Text Available The paper proposes an integrated model of intercultural competence, which attempts to present intercultural communication and competence from the standpoint of the dialectical approach described by Martin and Nakayama (2010). The suggested concept builds on previously developed and accepted models, both structure-oriented and process-oriented. At the same time it complies with the principles of the “Theory of Models” as outlined by Balboni and Caon (2014). In the near future, the model will be applied to assess the intercultural competence of cross-border project teams working under the CBC program between Romania and Bulgaria 2007-2014.

  8. Hencky's model for elastomer forming process

    Science.gov (United States)

    Oleinikov, A. A.; Oleinikov, A. I.

    2016-08-01

    In the numerical simulation of elastomer forming processes, Hencky's isotropic hyperelastic material model can guarantee relatively accurate prediction of the strain range under large deformations. It is shown that this material model extends Hooke's law from the region of infinitesimal strains to that of moderate strains. A new representation of the fourth-order elasticity tensor for Hencky's hyperelastic isotropic material is obtained; it possesses both minor symmetries and the major symmetry. The constitutive relations of the considered model are implemented into the MSC.Marc code. By calculating and fitting curves, the polyurethane elastomer material constants are selected. Simulation of equipment for elastomer sheet forming is considered.
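    The defining feature of the Hencky model, logarithmic (true) strain, is easy to illustrate; the stretch ratios below are arbitrary:

```python
import math

def hencky_strain(stretch):
    """Hencky (logarithmic) strain for a principal stretch ratio."""
    return math.log(stretch)

# For small deformations the logarithmic strain approaches the engineering
# strain (stretch - 1), which is why the model reduces to Hooke's law in
# the infinitesimal limit while remaining usable at moderate strains.
for lam in (1.001, 1.05, 1.5):
    print(lam, hencky_strain(lam), lam - 1)
```

    At a stretch of 1.001 the two measures agree to about six decimal places, while at 1.5 the logarithmic strain (about 0.405) is already noticeably below the engineering strain of 0.5.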

  9. Fuzzy model for Laser Assisted Bending Process

    Directory of Open Access Journals (Sweden)

    Giannini Oliviero

    2016-01-01

    Full Text Available In the present study, a fuzzy model was developed to predict the residual bending in a conventional metal bending process assisted by a high-power diode laser. The study focused on AA6082T6 aluminium thin sheets. In most dynamic sheet metal forming operations, the highly nonlinear deformation processes cause large amounts of elastic strain energy to be stored in the formed material. The novel hybrid forming process was thus aimed at inducing local heating of the mechanically bent workpiece in order to reduce or eliminate the related springback phenomena. In particular, the influence of laser process parameters such as source power, scan speed and the starting elastic deformation of the mechanically bent sheets on the extent of springback was experimentally assessed. Consistent trends in the experimental response according to the operational parameters were found. Accordingly, 3D process maps of the extent of the springback phenomena as a function of the operational parameters were constructed. The effect of the inherent uncertainties on the predicted residual bending, caused by the approximation in the model parameters, was evaluated. In particular, a fuzzy-logic based approach was used to describe the model uncertainties, and the transformation method was applied to propagate their effect on the residual bending.
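    A fuzzy predictor of springback can be sketched with a deliberately tiny rule base; the membership functions, breakpoints and output angles below are invented for illustration and are not the paper's model:

```python
def predict_springback(power_w):
    """Toy two-rule fuzzy predictor of residual bending angle (degrees).
    Rule 1: low laser power  -> large springback (5 deg)
    Rule 2: high laser power -> small springback (1 deg)
    All memberships and consequents are hypothetical."""
    low = max(0.0, 1.0 - power_w / 1000.0)   # membership of 'low power'
    high = min(1.0, power_w / 1000.0)        # membership of 'high power'
    # Weighted-average defuzzification of the two rule consequents
    return (low * 5.0 + high * 1.0) / (low + high)

print(predict_springback(250.0))  # 0.75*5 + 0.25*1 = 4.0 degrees
```

    The output interpolates smoothly between the two rule consequents as laser power rises, mirroring the experimentally observed trend that more heat input leaves less springback.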

  10. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until, layer by layer, a very complex part can be built. This reduces cost and schedule by eliminating the very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost: many experiments can be run quickly in a model that would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  11. Model Identification of Integrated ARMA Processes

    Science.gov (United States)

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
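    The slowly decaying sample autocorrelation function that signals an integrated process, the basic ingredient behind identification tools such as ESACF, can be computed directly; the simulated random walk below is illustrative:

```python
import random

def sample_acf(x, max_lag):
    """Sample autocorrelation function of a series x at lags 1..max_lag."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n  # lag-0 autocovariance
    return [sum((x[t] - mean) * (x[t + k] - mean)
                for t in range(n - k)) / (n * c0)
            for k in range(1, max_lag + 1)]

# A random walk (an integrated process) shows slowly decaying autocorrelations,
# which is the pattern that flags the need for differencing in ARIMA modeling.
rng = random.Random(0)
walk, s = [], 0.0
for _ in range(500):
    s += rng.gauss(0, 1)
    walk.append(s)
print(sample_acf(walk, 3))  # values stay close to 1 at low lags
```

    Differencing the walk once would yield white noise, whose sample autocorrelations drop to near zero at all lags.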

  12. Aligning Grammatical Theories and Language Processing Models

    Science.gov (United States)

    Lewis, Shevaun; Phillips, Colin

    2015-01-01

    We address two important questions about the relationship between theoretical linguistics and psycholinguistics. First, do grammatical theories and language processing models describe separate cognitive systems, or are they accounts of different aspects of the same system? We argue that most evidence is consistent with the one-system view. Second,…

  14. Modeling of Reaction Processes Controlled by Diffusion

    CERN Document Server

    Revelli, J

    2003-01-01

    Stochastic modeling is quite powerful in science and technology. The techniques derived from this process have been used with great success in laser theory, biological systems and chemical reactions. Besides, they provide a theoretical framework for the analysis of experimental results in the field of particle diffusion in ordered and disordered materials. In this work we analyze transport processes in one-dimensional fluctuating media, that is, media that change their state in time. This fact induces changes in the movements of the particles, giving rise to different phenomena and dynamics that will be described and analyzed in this work. We present some random walk models to describe these fluctuating media. These models include state transitions governed by different dynamical processes. We also analyze the trapping problem in a lattice by means of a simple model which predicts a resonance-like phenomenon. Also we study effective diffusion processes over surfaces due to random walks in the bulk. We consider differe...

  15. Kinetics and modeling of anaerobic digestion process

    DEFF Research Database (Denmark)

    2003-01-01

    Anaerobic digestion modeling started in the early 1970s when the need for design and efficient operation of anaerobic systems became evident. At that time not only was the knowledge about the complex process of anaerobic digestion inadequate but also there were computational limitations. Thus...

  16. Dynamic Process of Money Transfer Models

    CERN Document Server

    Wang, Y; Wang, Yougui; Ding, Ning

    2005-01-01

    We have studied numerically the statistical mechanics of dynamic phenomena, including money circulation and economic mobility, in some transfer models. The models on which our investigations were performed are the basic model proposed by A. Dragulescu and V. Yakovenko [1], the model with uniform saving rate developed by A. Chakraborti and B.K. Chakrabarti [2], and its extended model with diverse saving rate [3]. The velocity of circulation is found to be inversely related to the average holding time of money. In order to check the nature of the money transfer process in these models, we examined the probability distributions of holding time. In the model with uniform saving rate, the distribution obeys an exponential law, which indicates that money transfer here is a kind of Poisson process. But when the saving rate is set diversely, the holding time distribution follows a power law. The velocity can also be deduced from a typical individual's optimal choice. In this way, an approach for building the micro-...
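
    The basic exchange rule of these transfer models is simple enough to sketch directly. The simulation below implements pairwise exchanges with a uniform saving rate, as in the Chakraborti-Chakrabarti model [2]; setting the saving rate to zero recovers the basic Dragulescu-Yakovenko model [1]. Agent count, step count, and initial endowment are illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n_agents=300, n_steps=50_000, saving_rate=0.0):
        """Pairwise money exchange with uniform saving rate `saving_rate`
        (0.0 recovers the basic Dragulescu-Yakovenko model)."""
        m = np.ones(n_agents)                  # every agent starts with 1 unit
        for _ in range(n_steps):
            i, j = rng.choice(n_agents, size=2, replace=False)
            pool = (1.0 - saving_rate) * (m[i] + m[j])
            eps = rng.random()                 # random split of the shared pool
            m[i] = saving_rate * m[i] + eps * pool
            m[j] = saving_rate * m[j] + (1.0 - eps) * pool
        return m

    m = simulate()
    # Every exchange conserves money, so the total stays fixed.
    print(round(float(m.sum()), 6))
    ```

    Recording, for each unit of money, the time between two successive transfers would give the holding-time distribution whose exponential or power-law form the abstract discusses.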

  17. Iron and steel industry process model

    Energy Technology Data Exchange (ETDEWEB)

    Sparrow, F.T.

    1978-07-01

    The model depicts expected energy-consumption characteristics of the iron and steel industry and ancillary industries for the next 25 years by means of a process model of the major steps in steelmaking, from ore mining and scrap recycling to the final finishing of carbon, alloy, and stainless steel into products such as structural steel, slabs, plates, tubes, and bars. Two plant types are modeled: fully integrated mills and minimills. User-determined inputs to the model are: (a) projected energy and materials prices over the horizon; (b) projected costs of capacity expansion and replacement; (c) energy-conserving options, both operating modes and investments; (d) the internal rate of return required on projects; and (e) growth in finished steel demand. Nominal input choices in the model are: DOE baseline projections for oil, gas, distillates, residuals, and electricity for energy, and actual 1975 prices for materials; actual 1975 costs; the addition of new technologies; 15% after taxes; and actual 1975 demand with 1.5% growth per year. Output of the model includes: energy use by type, by process, and by time period, both in total and in intensity (Btu/ton); the energy-conservation options chosen; utilization rates for existing capacity; and the capacity expansion decisions of the model.
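
    One central piece of such a model — selecting energy-conserving investments against a required internal rate of return — can be sketched with hypothetical data. Treating each option's annual energy-cost saving as a perpetuity, its IRR is simply the saving-to-capital ratio, compared here against the 15% hurdle rate mentioned in the abstract. The option names and figures below are invented for illustration.

    ```python
    # Hypothetical energy-conservation options: capital cost and annual
    # energy-cost saving, in arbitrary monetary units (invented figures).
    options = [
        {"name": "waste-heat recovery", "capital": 10.0, "annual_saving": 2.5},
        {"name": "scrap preheating",    "capital": 8.0,  "annual_saving": 0.9},
        {"name": "continuous casting",  "capital": 20.0, "annual_saving": 4.0},
    ]
    hurdle = 0.15  # required internal rate of return (15% after taxes)

    # Treating each saving stream as a perpetuity, IRR = annual_saving / capital,
    # so an option is adopted when that ratio meets the hurdle rate.
    chosen = [o["name"] for o in options
              if o["annual_saving"] / o["capital"] >= hurdle]
    print(chosen)  # → ['waste-heat recovery', 'continuous casting']
    ```

    The full model would make these choices jointly with capacity expansion and fuel-switching decisions rather than option by option, but the hurdle-rate screen is the same idea.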

  18. Process Model for Friction Stir Welding

    Science.gov (United States)

    Adams, Glynn

    1996-01-01

    Friction stir welding (FSW) is a relatively new process being applied for joining of metal alloys. The process was initially developed by The Welding Institute (TWI) in Cambridge, UK. The FSW process is being investigated at NASA/MSFC as a repair/initial weld procedure for fabrication of the super-lightweight aluminum-lithium shuttle external tank. The FSW investigations at MSFC were conducted on a horizontal mill to produce butt welds of flat plate material. The weldment plates are butted together and fixed to a backing plate on the mill bed. A pin tool is placed into the tool holder of the mill spindle and rotated at approximately 400 rpm. The pin tool is then plunged into the plates such that the center of the probe lies at one end of the line of contact between the plates and the shoulder of the pin tool penetrates the top surface of the weldment. The weld is produced by traversing the tool along the line of contact between the plates. A lead angle allows the leading edge of the shoulder to remain above the top surface of the plate. The work presented here is a first attempt at modeling a complex phenomenon. The mechanical aspects of conducting the weld process are easily defined, and the process itself is controlled by relatively few input parameters. However, in the region of the weld, plasticizing and forging of the parent material occur. These are difficult processes to model. The model presented here addresses only variations in the radial dimension outward from the pin tool axis. Examinations of the grain structure of the weld reveal that a considerable amount of material deformation also occurs in the direction parallel to the pin tool axis of rotation, through the material thickness. In addition, measurements of the axial load on the pin tool demonstrate that the forging effect of the pin tool shoulder is an important process phenomenon. Therefore, the model needs to be expanded to account for the deformations through the material thickness and the
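
    A first-order input to any such model is the frictional heat generated under the pin tool shoulder. A standard estimate (not taken from this work) for a flat circular shoulder under uniform contact pressure in the sliding condition is Q = (2/3)·π·μ·p·ω·R³. The sketch below evaluates it at the 400 rpm spindle speed quoted in the abstract; the friction coefficient, contact pressure, and shoulder radius are assumed values.

    ```python
    import math

    # Frictional heat input under a flat circular shoulder (sliding condition):
    #   Q = (2/3) * pi * mu * p * omega * R**3
    mu = 0.3           # friction coefficient (assumed)
    p = 50e6           # contact pressure in Pa (assumed)
    rpm = 400          # spindle speed quoted in the abstract
    omega = 2 * math.pi * rpm / 60.0   # angular velocity, rad/s
    R = 0.0125         # shoulder radius in m (assumed)

    Q = (2.0 / 3.0) * math.pi * mu * p * omega * R**3
    print(f"estimated heat input: {Q:.0f} W")
    ```

    The cubic dependence on shoulder radius is why the shoulder, rather than the probe, dominates heat generation in most FSW thermal models.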

  19. Modeling Low-temperature Geochemical Processes

    Science.gov (United States)

    Nordstrom, D. K.

    2003-12-01

    Geochemical modeling has become a popular and useful tool for a wide number of applications, from research on the fundamental processes of water-rock interactions to regulatory requirements and decisions regarding permits for industrial and hazardous wastes. In low-temperature environments, generally thought of as those in the temperature range of 0-100 °C and close to atmospheric pressure (1 atm = 1.01325 bar = 101,325 Pa), complex hydrobiogeochemical reactions participate in an array of interconnected processes that affect us, and that, in turn, we affect. Understanding these complex processes often requires tools that are sufficiently sophisticated to portray multicomponent, multiphase chemical reactions yet transparent enough to reveal the main driving forces. Geochemical models are such tools. The major processes that they are required to model include mineral dissolution and precipitation; aqueous inorganic speciation and complexation; solute adsorption and desorption; ion exchange; oxidation-reduction (redox) transformations; gas uptake or production; organic matter speciation and complexation; evaporation; dilution; water mixing; reaction during fluid flow; reactions involving biotic interactions; and photoreaction. These processes occur in rain, snow, fog, dry atmosphere, soils, bedrock weathering, streams, rivers, lakes, groundwaters, estuaries, brines, and diagenetic environments. Geochemical modeling attempts to understand the redistribution of elements and compounds, through anthropogenic and natural means, across a large range of scales, from nanometer to global. "Aqueous geochemistry" and "environmental geochemistry" are often used interchangeably with "low-temperature geochemistry" to emphasize hydrologic or environmental objectives. Recognition of the strategy or philosophy behind the use of geochemical modeling is not often discussed or explicitly described. Plummer (1984, 1992) and Parkhurst and Plummer (1993) compare and contrast two approaches for
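
    One of the simplest calculations behind mineral dissolution/precipitation modeling is the saturation index, SI = log10(IAP/Ksp), which indicates whether a water is undersaturated (SI < 0) or supersaturated (SI > 0) with respect to a mineral. The sketch below uses assumed ion activities for calcite; only the solubility product (log Ksp ≈ −8.48 at 25 °C) is a standard value.

    ```python
    import math

    # Saturation index: SI = log10(IAP / Ksp). SI < 0 means the water is
    # undersaturated (the mineral tends to dissolve); SI > 0, supersaturated.
    a_ca = 10 ** -3.2    # activity of Ca2+ (assumed)
    a_co3 = 10 ** -5.1   # activity of CO3^2- (assumed)
    log_ksp = -8.48      # calcite solubility product at 25 °C

    si = math.log10(a_ca * a_co3) - log_ksp
    print(f"SI = {si:.2f}")  # → SI = 0.18 (slightly supersaturated)
    ```

    Full speciation codes compute the activities themselves from total analytical concentrations, activity-coefficient models, and mass-action equations; the SI step above is the final comparison against equilibrium.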

  20. Managing risks in business model innovation processes

    DEFF Research Database (Denmark)

    Taran, Yariv; Boer, Harry; Lindgren, Peter

    2010-01-01

    Companies today, in some industries more than others, invest more capital and resources just to stay competitive, develop more diverse solutions, and increasingly start thinking more radically when considering their business models. However, despite the understanding that business model (BM) innovation is a risky enterprise, many companies are still choosing not to apply any risk management in the BM innovation process. The objective of this paper is to develop a better understanding of how risks are handled in the practice of BM innovation. An analysis of the BM innovation experiences of two industrial companies shows that both companies are experiencing high levels of uncertainty and complexity during their innovation processes and are, consequently, struggling to find new processes for handling the risks involved. Based on the two companies' experiences, various testable propositions are put...