Yang, Jin; Hlavacek, William S
2011-10-01
Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie's method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e. long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e. time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ...
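The rejection vs. rejection-free distinction described in this abstract can be sketched with a toy model (a minimal Python illustration, not the RuleMonkey or NFsim implementation; all names and parameters are ours). For a single decay rule A -> 0 with propensity k*n, the rejection variant draws event times at a fixed overestimated rate and discards some proposals as null events, while the exact variant always uses the current propensity:

```python
import random

def simulate_decay(n0, k, t_end, rate_bound=None, seed=1):
    """Stochastic simulation of the decay rule A -> 0 with propensity k*n.

    If rate_bound is given (an overestimate of the propensity), events are
    proposed at that fixed rate and rejected with probability
    1 - true_rate/rate_bound, producing null events (rejection method);
    otherwise the exact propensity is used (rejection-free method).
    """
    rng = random.Random(seed)
    n, t, null_events = n0, 0.0, 0
    while n > 0:
        a_true = k * n
        a = rate_bound if rate_bound is not None else a_true
        t += rng.expovariate(a)
        if t > t_end:
            break
        if rate_bound is not None and rng.random() > a_true / a:
            null_events += 1      # time advances, state is unchanged
            continue
        n -= 1
    return n, null_events
```

Both variants sample the same kinetics; the rejection variant trades null events for never having to recompute an exact total rate.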
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are covered. Detailed sections deal with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, and the optimization of simple movesets. The issue of introducing correlations into elementary MC moves and the applicability of such methods to simulations of biomacromolecules are discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
Detector Simulation: Data Treatment and Analysis Methods
Apostolakis, J
2011-01-01
Detector Simulation in 'Data Treatment and Analysis Methods', part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B1: Detectors for Particles and Radiation. Part 1: Principles and Methods'. This document is part of Part 1 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '4.1 Detector Simulation' of Chapter '4 Data Treatment and Analysis Methods' with the content: 4.1 Detector Simulation 4.1.1 Overview of simulation 4.1.1.1 Uses of detector simulation 4.1.2 Stages and types of simulation 4.1.2.1 Tools for event generation and detector simulation 4.1.2.2 Level of simulation and computation time 4.1.2.3 Radiation effects and background studies 4.1.3 Components of detector simulation 4.1.3.1 Geometry modeling 4.1.3.2 External fields 4.1.3.3 Intro...
Isogeometric methods for numerical simulation
Bordas, Stéphane
2015-01-01
The book presents the state of the art in isogeometric modeling and shows how the method has advanced. First, an introduction to geometric modeling with NURBS and T-splines is given, followed by its implementation in computer software. The implementation in both the FEM and the BEM is discussed.
Large-Eddy Simulation and Multigrid Methods
Energy Technology Data Exchange (ETDEWEB)
Falgout,R D; Naegle,S; Wittum,G
2001-06-18
A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. Thereby the behavior of the models is shown and additionally the feature of adaptive grid refinement is investigated. Furthermore the parallelization aspect is addressed.
2-d Simulations of Test Methods
DEFF Research Database (Denmark)
Thrane, Lars Nyholm
2004-01-01
One of the main obstacles for the further development of self-compacting concrete is to relate the fresh concrete properties to the form filling ability. Therefore, simulation of the form filling ability will provide a powerful tool in obtaining this goal. In this paper, a continuum mechanical approach is presented by showing initial results from 2-d simulations of the empirical test methods slump flow and L-box. This method assumes a homogeneous material, which is expected to correspond to particle suspensions, e.g. concrete, when it remains stable. The simulations have been carried out ... using both a Newton and a Bingham model for characterisation of the rheological properties of the concrete. From the results, it is expected that both the slump flow and L-box can be simulated quite accurately when the model is extended to 3-d and the concrete is characterised according to the Bingham ...
Rainfall Simulation: methods, research questions and challenges
Ries, J. B.; Iserloh, T.
2012-04-01
In erosion research, rainfall simulations are used for the improvement of process knowledge as well as in the field for the assessment of overland flow generation, infiltration, and erosion rates. In all these fields of research, rainfall experiments have become an indispensable part of the research methods. In this context, small portable rainfall simulators with small test-plot sizes of one square meter or even less, and devices of low weight and water consumption, are in demand. Accordingly, devices with manageable technical effort, such as nozzle-type simulators, seem to prevail over larger simulators. The reasons are obvious: lower costs and less time needed for mounting enable a higher repetition rate. Given the high number of research questions and fields of application, and not least the great technical creativity of the research staff involved, a large number of different experimental setups is available. Each of the devices produces a different rainfall, leading to different kinetic energy amounts influencing the soil surface and, accordingly, producing different erosion results. Hence, important questions concern the definition, comparability, measurement and simulation of natural rainfall, and the problem of comparability in general. Another important discussion topic is reaching agreement on an appropriate calibration method for the simulated rainfalls, in order to enable a comparison of the results of different rainfall simulator setups. In most publications, only the following "nice" sentence can be read: "Our rainfall simulator generates a rainfall spectrum that is similar to natural rainfall!" The most substantial and critical properties of a simulated rainfall are the drop-size distribution, the fall velocities of the drops, and the spatial distribution of the rainfall on the plot area. In a comparison of the most important methods, the Laser Distrometer turned out to be the most up ...
Matrix method for acoustic levitation simulation.
Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C
2011-08-01
A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
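The on-axis case of the Rayleigh integral mentioned above has a closed form, which makes a compact correctness check for a matrix-style numerical evaluation (an illustrative Python sketch under simplified assumptions: a single baffled circular piston, no reflector, and no multiple reflections, so this is not the paper's levitator model):

```python
import cmath
import math

def rayleigh_on_axis(a, k, z, n=1000):
    """Numerically evaluate I = integral over the piston of e^{ikr}/r dS
    for a circular piston of radius a, at an on-axis point at distance z,
    by summing ring elements (midpoint rule in the radial variable)."""
    total = 0.0 + 0.0j
    ds = a / n
    for i in range(n):
        sigma = (i + 0.5) * ds        # ring radius on the piston face
        r = math.hypot(z, sigma)      # distance from ring to field point
        total += cmath.exp(1j * k * r) / r * 2 * math.pi * sigma * ds
    return total

def rayleigh_on_axis_exact(a, k, z):
    """Closed form of the same integral:
    2*pi/(i*k) * (e^{ik*sqrt(z^2+a^2)} - e^{ikz})."""
    return (2 * math.pi / (1j * k)
            * (cmath.exp(1j * k * math.hypot(z, a)) - cmath.exp(1j * k * z)))
```

For a full levitator, the same elementary sums become the entries of the transfer matrices between transducer and reflector surfaces.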
A SIMULATION ON TEACHING VOLHARD METHOD
Directory of Open Access Journals (Sweden)
Celal BAYRAK
2009-07-01
Laboratories are important components of chemistry education. Virtual simulations allow students to repeat experiments as many times as they want and give them the opportunity to learn in their own ways. In this study, a computer-assisted teaching material has been developed for the tertiary level. The material is intended for use in the Analytical Chemistry course, in the subject of quantitative methods. It has been developed using the Flash program and consists of animations and simulations related to the quantitative determination of chloride by the Volhard method. Even though this determination could be conducted in a laboratory setting, the experiment has been prepared using simulations to give students the opportunity to repeat the experiment steps whenever they want, to control each step, and to observe the changes at the equivalence point better. The Volhard method is considered important in chemistry courses and laboratories, and it provides important practical experience for students in the laboratory. The presented simulation has also been prepared with the harmful effects of the chemicals used in the experiment and insufficient laboratory conditions in mind.
Hybrid Method Simulation of Slender Marine Structures
DEFF Research Database (Denmark)
Christiansen, Niels Hørbye
The present thesis consists of an extended summary and five appended papers concerning various aspects of the implementation of a hybrid method which combines classical simulation methods and artificial neural networks. The thesis covers three main topics. Common to all these topics is that they deal with time domain simulation of slender marine structures such as mooring lines and flexible risers used in deep sea offshore installations. The first part of the thesis describes how neural networks can be designed and trained to cover a large number of different sea states. Neural networks can ... to simulate the dynamic response of specific critical hot spots on a flexible riser. In the design of mooring lines, only top tension forces are considered; these forces can easily be determined by a single neural network. Riser design, depending on the applied configuration, requires detailed analysis of several ...
Simulation teaching method in Engineering Optics
Lu, Qieni; Wang, Yi; Li, Hongbin
2017-08-01
We introduce a pedagogical method of theoretical simulation as one major means of the teaching process of "Engineering Optics" in the course quality improvement action plan (Qc) in our school. Students, in groups of three to five, complete simulations of interference, diffraction, electromagnetism and polarization of light; each student is evaluated and scored in light of his or her performance in interviews between the teacher and the student, and each student can opt to be interviewed many times until satisfied with the score and learning. After three years of Qc practice, a remarkable teaching and learning effect has been obtained. Such theoretical simulation experiments are a valuable teaching method for physical optics, which is highly theoretical and abstruse. This teaching methodology works well in training students in how to ask questions and how to solve problems, and it can also stimulate their interest in research learning and their initiative to develop self-confidence and a sense of innovation.
Method for Constructing Standardized Simulated Root Canals.
Schulz-Bongert, Udo; Weine, Franklin S.
1990-01-01
The construction of visual and manipulative aids, clear resin blocks with root-canal-like spaces, for simulation of root canals is explained. Time, materials, and techniques are discussed. The method allows for comparison of canals, creation of any configuration of canals, and easy presentation during instruction. (MSE)
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D.M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper considers some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
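A minimal variational Monte Carlo run illustrates the simplest of the Metropolis generalizations listed here (illustrative Python for the 1D harmonic oscillator, not taken from the paper). With the trial wavefunction psi(x) = exp(-alpha*x^2/2) and hbar = m = omega = 1, Metropolis sampling of |psi|^2 gives the variational energy E(alpha) = alpha/4 + 1/(4*alpha), which is minimal and exact (E = 1/2) at alpha = 1:

```python
import math
import random

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=0):
    """Variational Monte Carlo for the 1D harmonic oscillator with trial
    wavefunction psi(x) = exp(-alpha*x^2/2).  Metropolis sampling of
    |psi|^2; the local energy is E_L(x) = alpha/2 + (1 - alpha^2)*x^2/2.
    Returns (mean local energy, acceptance ratio)."""
    rng = random.Random(seed)
    x, e_sum, accepted = 0.0, 0.0, 0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-alpha * (x_new**2 - x**2)):
            x, accepted = x_new, accepted + 1
        e_sum += alpha / 2 + (1 - alpha**2) * x**2 / 2
    return e_sum / n_steps, accepted / n_steps
```

At alpha = 1 the local energy is constant, so the variance of the estimator vanishes, a hallmark of an exact trial wavefunction.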
An Efficient Simulation Method for Rare Events
Rached, Nadhir B.
2015-01-07
Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
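Exponential twisting, the simplest instance of the twisting idea described in this abstract, can be demonstrated on a case with a known answer (an illustrative Python sketch, not the authors' hazard-rate-twisting algorithm): estimating P(X1 + ... + Xn > gamma) for iid Exp(1) variables, where the exact tail is the Gamma(n, 1) survival function.

```python
import math
import random

def is_tail_prob(n, gamma, n_samples=20000, seed=0):
    """Importance-sampling estimate of P(X1+...+Xn > gamma) for iid Exp(1)
    RVs via exponential twisting: sample Exp(1-theta), with theta chosen
    so the twisted mean equals the threshold (a simple stand-in for the
    paper's twisting-parameter selection)."""
    rng = random.Random(seed)
    theta = max(0.0, 1.0 - n / gamma)   # twisted rate is 1 - theta
    rate = 1.0 - theta
    est = 0.0
    for _ in range(n_samples):
        s = sum(rng.expovariate(rate) for _ in range(n))
        if s > gamma:
            # likelihood ratio f(x)/f_theta(x) = rate^{-n} * e^{-theta*s}
            est += math.exp(-theta * s) / rate**n
    return est / n_samples

def gamma_tail_exact(n, gamma):
    """P(Gamma(n,1) > gamma) = e^{-gamma} * sum_{k<n} gamma^k / k!"""
    return math.exp(-gamma) * sum(gamma**k / math.factorial(k)
                                  for k in range(n))
```

For n = 5 and gamma = 30 the target probability is of rare-event magnitude, where crude Monte Carlo with the same sample budget would almost never see a hit.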
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Twitter's tweet method modelling and simulation
Sarlis, Apostolos S.; Sakas, Damianos P.; Vlachos, D. S.
2015-02-01
This paper proposes a model of Twitter marketing methods. The tools that Twitter provides are modelled and simulated using iThink in the context of a Twitter media-marketing agency. The paper leverages the system dynamics paradigm to model Twitter marketing tools and methods, using the iThink™ system to implement them, and uses the design science research methodology for the proof of concept of the models and modelling processes. The models have been developed for a Twitter marketing agent/company and tested in real circumstances with real numbers, and were finalized through a number of revisions and iterations of design, development, simulation, testing and evaluation. The paper also addresses the methods that best suit organized promotion through targeting on the Twitter social media service. The validity and usefulness of these Twitter marketing method models for day-to-day decision making are authenticated by the management of the company organization. The Tweet method that Twitter provides can be adjusted, depending on the situation, in order to maximize the profit of the company/agent.
Multigrid methods with applications to reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Xiao, Shengyou [Univ. of Texas, Austin, TX (United States)
1994-05-01
Multigrid methods are studied for solving elliptic partial differential equations. The focus is on parallel multigrid methods and their use for reservoir simulation. Multicolor Fourier analysis is used to analyze the behavior of standard multigrid methods for problems in one and two dimensions, and the relation between multicolor and standard Fourier analysis is established. Multiple coarse grid methods for solving model problems in one and two dimensions are considered; at each coarse grid level, more than one coarse grid is used to improve convergence. For a given Dirichlet problem, a related extended problem is first constructed; a purification procedure can be used to obtain Moore-Penrose solutions of the singular systems encountered. For solving anisotropic equations, semicoarsening and line smoothing techniques are used with multiple coarse grid methods to improve convergence. Two-level convergence factors are estimated using multicolor Fourier analysis. In the case where each operator has the same stencil at each grid point on one level, exact multilevel convergence factors can be obtained. For solving partial differential equations with discontinuous coefficients, interpolation and restriction operators should include information about the equation coefficients. Matrix-dependent interpolation and restriction operators based on the Schur complement can be used in nonsymmetric cases. A semicoarsening multigrid solver with these operators is used in UTCOMP, a 3-D, multiphase, multicomponent, compositional reservoir simulator. The numerical experiments were carried out on different computing systems; the results indicate that the multigrid methods are promising.
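The coarse-grid-correction idea underlying all of these multigrid variants can be shown in a minimal two-grid cycle for the 1D Poisson problem -u'' = f with zero boundary values (an illustrative Python sketch, unrelated to the UTCOMP implementation): smooth with weighted Jacobi, restrict the residual, solve the coarse problem directly, interpolate the correction back, and smooth again.

```python
import math

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f, zero boundary values."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u[:] = new
    return u

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def solve_tridiag(f, h):
    """Direct solve of -u'' = f (Thomas algorithm); used on the coarse grid."""
    n = len(f)                 # grid points including the two boundaries
    m = n - 2
    a = [2.0/(h*h)] * m        # diagonal; both off-diagonals are -1/h^2
    d = f[1:-1][:]
    off = -1.0/(h*h)
    for i in range(1, m):      # forward elimination
        factor = off / a[i-1]
        a[i] -= factor * off
        d[i] -= factor * d[i-1]
    x = [0.0] * m
    x[-1] = d[-1] / a[-1]
    for i in range(m - 2, -1, -1):   # back substitution
        x[i] = (d[i] - off * x[i+1]) / a[i]
    u = [0.0] * n
    u[1:-1] = x
    return u

def two_grid_cycle(u, f, h):
    """One two-grid V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    jacobi(u, f, h, sweeps=2)
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2 + 1
    rc = [0.0] * nc
    for i in range(1, nc - 1):       # full-weighting restriction
        rc[i] = 0.25*r[2*i-1] + 0.5*r[2*i] + 0.25*r[2*i+1]
    ec = solve_tridiag(rc, 2*h)      # exact coarse-grid solve
    for i in range(1, nc - 1):       # linear interpolation of the correction
        u[2*i] += ec[i]
    for i in range(0, nc - 1):
        u[2*i+1] += 0.5 * (ec[i] + ec[i+1])
    jacobi(u, f, h, sweeps=2)
    return u
```

A full multigrid method replaces the direct coarse solve with a recursive application of the same cycle.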
Electromagnetic simulation using the FDTD method
Sullivan, Dennis M
2013-01-01
A straightforward, easy-to-read introduction to the finite-difference time-domain (FDTD) method. Finite-difference time-domain (FDTD) is one of the primary computational electrodynamics modeling techniques available. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run and treat nonlinear material properties in a natural way. Written in a tutorial fashion, starting with the simplest programs and guiding the reader up from one-dimensional to the more complex, three-dimensional programs, this book provides a simple, yet comprehensive ...
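The basic update loop such a tutorial builds up from is only a few lines (a hedged 1D sketch in Python with normalized units and a Courant number of 0.5; real solvers add material parameters, boundary conditions and absorbing layers):

```python
import math

def fdtd_1d(n_cells=200, n_steps=150, src=20):
    """Minimal 1D free-space FDTD (Yee scheme, normalized units).
    ez and hy are staggered in space and time; the 0.5 update
    coefficients fix the Courant number at 0.5, so a pulse travels
    half a cell per time step."""
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    for t in range(n_steps):
        for k in range(1, n_cells):
            ez[k] += 0.5 * (hy[k-1] - hy[k])
        # soft Gaussian source added to Ez at cell `src`
        ez[src] += math.exp(-((t - 30) / 10.0) ** 2)
        for k in range(n_cells - 1):
            hy[k] += 0.5 * (ez[k] - ez[k+1])
    return ez
```

Because the stencil can spread information at most one cell per step, the far end of the grid stays exactly zero until the pulse arrives.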
Spectral Methods in Numerical Plasma Simulation
DEFF Research Database (Denmark)
Coutsias, E.A.; Hansen, F.R.; Huld, T.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region and on an annulus are shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short-wavelength noise generated by gradient steepening are discussed.
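The core of a Fourier spectral solver for the doubly periodic case is division by -k^2 in transform space (an illustrative stdlib-only Python sketch, with a naive O(N^2) DFT standing in for an FFT; it is exact to roundoff for band-limited data):

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (stdlib-only stand-in
    for an FFT; fine for small N)."""
    n = len(x)
    s = 1 if inverse else -1
    out = []
    for k in range(n):
        acc = sum(x[j] * cmath.exp(s * 2j * math.pi * k * j / n)
                  for j in range(n))
        out.append(acc / n if inverse else acc)
    return out

def poisson_periodic(f):
    """Spectral solve of u'' = f on [0, 2*pi) with periodic boundary
    conditions: divide each Fourier mode by -k^2 (mean mode set to 0)."""
    n = len(f)
    fh = dft([complex(v) for v in f])
    uh = [0j] * n
    for k in range(1, n):
        kk = k if k <= n // 2 else k - n   # signed wavenumber
        uh[k] = fh[k] / (-(kk * kk))
    return [v.real for v in dft(uh, inverse=True)]
```

The annulus case in the paper replaces the Fourier basis in the radial direction with Chebyshev polynomials, but the mode-by-mode division survives in the periodic direction.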
Simulating marine propellers with vortex particle method
Wang, Youjiang; Abdel-Maksoud, Moustafa; Song, Baowei
2017-01-01
The vortex particle method is applied to compute the open water characteristics of marine propellers. It is based on the large-eddy simulation technique, and the Smagorinsky-Lilly sub-grid scale model is implemented for the eddy viscosity. The vortex particle method is combined with the boundary element method, in the sense that the body is modelled with boundary elements and the slipstream is modelled with vortex particles. Rotational periodic boundaries are adopted, which leads to a cylindrical sector domain for the slipstream. The particle redistribution scheme and the fast multipole method are modified to consider the rotational periodic boundaries. Open water characteristics of three propellers with different skew angles are calculated with the proposed method. The results are compared with the ones obtained with boundary element method and experiments. It is found that the proposed method predicts the open water characteristics more accurately than the boundary element method, especially for high loading condition and high skew propeller. The influence of the Smagorinsky constant is also studied, which shows the results have a low sensitivity to it.
Combining building thermal simulation methods and LCA methods
DEFF Research Database (Denmark)
Pedersen, Frank; Hansen, Klaus; Wittchen, Kim Bjarne
2008-01-01
This paper describes recent efforts made by the Danish Building Research Institute regarding the integration of a life cycle assessment (LCA) method into a whole-building hygro-thermal simulation tool. The motivation for the work is that the increased requirements to the energy performance of buildings (as expressed in EU Directive 2002/91/EC) may in the future be supplemented by requirements to the environmental impact of buildings. This can be seen from the fact that the EU has recently given a mandate to prepare standards for environmental assessment of buildings (CEN/TC 350).
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
Directory of Open Access Journals (Sweden)
Kaushikbhai C. Parmar
2017-04-01
Simulation gives different results when different methods are used for the same problem. Autodesk Moldflow Simulation software provides two different facilities for creating the mold for the simulation of an injection molding process: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in simulation results, such as mold temperature, part temperature, deflection in different directions, simulation time, and coolant temperature, between these two methods.
Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo
Energy Technology Data Exchange (ETDEWEB)
Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)
2011-07-01
This book presents some numerical probabilistic simulation methods together with their convergence speeds. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After a review of the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. They then develop a chapter on non-asymptotic estimates of Monte-Carlo method errors, which recalls the central limit theorem and quantifies its convergence speed, introduces the Log-Sobolev and concentration inequalities, a subject that has developed greatly in recent years, and ends with some variance reduction techniques. In order to demonstrate the simulation results for stochastic processes in a rigorous way, the authors introduce the basic notions of probability and stochastic calculus, in particular the essentials of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed, and the convergence speeds of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)
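As a small taste of the class of schemes whose convergence speeds such a book analyzes, here is the Euler-Maruyama method applied to an Ornstein-Uhlenbeck process, with a Monte-Carlo estimate of E[X_T] checked against the known mean x0*exp(-theta*T) (an illustrative Python sketch, not taken from the book; all parameter values are ours):

```python
import math
import random

def euler_maruyama_ou(x0, theta, sigma, t_end, n_steps, rng):
    """Euler-Maruyama scheme for the Ornstein-Uhlenbeck SDE
    dX = -theta*X dt + sigma dW; returns one sample of X(t_end)."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x

rng = random.Random(42)
# Monte-Carlo estimate of E[X_T]; the exact value is x0 * exp(-theta*T)
samples = [euler_maruyama_ou(1.0, 1.0, 0.3, 1.0, 100, rng)
           for _ in range(5000)]
mean = sum(samples) / len(samples)
```

The deviation of `mean` from exp(-1) combines the O(dt) weak discretization bias with the O(1/sqrt(M)) Monte-Carlo error, exactly the two error sources whose rates the book makes precise.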
IMPROVING MANUFACTURING PROCESSES USING SIMULATION METHODS
Directory of Open Access Journals (Sweden)
Sławomir KŁOS
2016-12-01
The paper presents the results of simulation research on how the buffer capacity allocated in a flow line and the operation times influence the throughput of a manufacturing system. The production line in the study consists of four stages and is based on a real machining system of a small production enterprise. Using Tecnomatix Plant Simulation software, a simulation model of the system was created and a set of experiments was planned. The simulation experiments took the capacities of the intermediate buffers located between manufacturing resources and the operation times as input parameters, and the throughput per hour and the average life span of products as outputs.
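The qualitative effect studied in the paper, throughput rising with intermediate buffer capacity, can be reproduced with a toy two-stage line (an illustrative Python sketch using standard tandem-queue recursions with blocking after service; machine count, rates and buffer sizes are ours, not those of the Tecnomatix model):

```python
import random

def line_throughput(buffer_cap, n_parts=20000, seed=0):
    """Toy two-stage flow line: machine 1 feeds an intermediate buffer of
    capacity buffer_cap, machine 2 drains it.  Both machines have
    exponential processing times with rate 1, machine 1 is never starved,
    and machine 1 blocks when the downstream space is full.
    Returns throughput (finished parts per unit time)."""
    rng = random.Random(seed)
    d1 = [0.0] * n_parts   # departure times from machine 1
    d2 = [0.0] * n_parts   # departure times from machine 2
    for i in range(n_parts):
        s1 = rng.expovariate(1.0)
        s2 = rng.expovariate(1.0)
        prev1 = d1[i-1] if i > 0 else 0.0
        prev2 = d2[i-1] if i > 0 else 0.0
        # machine 1 may release part i only once part i - buffer_cap - 1
        # has left machine 2 (blocking after service)
        j = i - buffer_cap - 1
        blocked_until = d2[j] if j >= 0 else 0.0
        d1[i] = max(prev1 + s1, blocked_until)
        d2[i] = max(d1[i], prev2) + s2
    return n_parts / d2[-1]
```

Running the same service-time sequence with increasing `buffer_cap` shows throughput climbing toward, but never reaching, the rate of an isolated machine.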
Methods and evaluations for simulation debriefing in nursing education.
Waznonis, Annette R
2014-08-01
Debriefing is the most important aspect of simulated learning, but actual debriefing practices are not evidence based or widely known. Expert opinions on effective simulation debriefing have been widely published and likely guide debriefing in nursing education. However, various terms are used to discuss simulation debriefing, making it difficult to distinguish debriefing methods, and the means for evaluating simulation debriefing are lacking. The purpose of this review is to identify and examine methods and evaluations for simulation debriefing in the educational setting. Twenty-two methods and seven evaluations for simulation debriefing were found, and four areas of difference among methods were demonstrated: suggested uses, design features, supplemental resources, and means for evaluation. This review offers nurse educators and researchers a comprehensive, practical examination of the methods and evaluations for simulation debriefing in the educational setting, clarifies terminology, and describes how the debriefing methods, phases, approaches, elements, and evaluations are interrelated. Copyright 2014, SLACK Incorporated.
Virtual Crowds Methods, Simulation, and Control
Pelechano, Nuria; Allbeck, Jan
2008-01-01
There are many applications of computer animation and simulation where it is necessary to model virtual crowds of autonomous agents. Some of these applications include site planning, education, entertainment, training, and human factors analysis for building evacuation. Other applications include simulations of scenarios where masses of people gather, flow, and disperse, such as transportation centers, sporting events, and concerts. Most crowd simulations include only basic locomotive behaviors, possibly coupled with a few stochastic actions. Our goal in this survey is to establish a baseline ...
Numerical methods in simulation of resistance welding
DEFF Research Database (Denmark)
Nielsen, Chris Valentin; Martins, Paulo A.F.; Zhang, Wenqi
2015-01-01
Finite element simulation of resistance welding requires coupling between mechanical, thermal and electrical models. This paper presents the numerical models and their couplings that are utilized in the computer program SORPAS. A mechanical model based on the irreducible flow formulation is utilized ... From a resistance welding point of view, the most essential coupling between the above-mentioned models is the heat generation by electrical current due to Joule heating. The interaction between multiple objects is another critical feature of the numerical simulation of resistance welding because it influences ...
Simulation methods for bumper system development
Isaksson, Erik
2006-01-01
In development of bumper systems for the automotive industry, iterative finite element (FE) simulations are normally used to find a bumper design that meets the requirements of crash performance. The crash performance of a bumper system is normally verified by results from standardized low-speed crash tests based on common crash situations. Consequently, these crash load cases are also used in the FE simulations during the development process. However, lack of data for the car under development ...
A method for ensemble wildland fire simulation
Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain
2011-01-01
An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...
Interactive methods for exploring particle simulation data
Energy Technology Data Exchange (ETDEWEB)
Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.
2004-05-01
In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to correspond new visual representations of the simulation data with traditional, well understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.
Hospital Registration Process Reengineering Using Simulation Method
Directory of Open Access Journals (Sweden)
Qiang Su
2010-01-01
With increasing competition, many healthcare organizations have undergone tremendous reform in the last decade, aiming to increase efficiency, decrease waste, and reshape the way that care is delivered. This study focuses on the operational efficiency improvement of a hospital's registration process. The factors related to operational efficiency, including the service process, queue strategy, and queue parameters, were explored systematically and illustrated with a case study. Guided by the principles of business process reengineering (BPR), a simulation approach was employed for process redesign and performance optimization. As a result, the queue strategy was changed from multiple queues with multiple servers to a single queue with multiple servers plus a prepare queue. Furthermore, through a series of simulation experiments, the length of the prepare queue and the corresponding registration process efficiency were quantitatively evaluated and optimized.
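The queue-strategy change at the heart of this study can be illustrated with a minimal discrete-event sketch: one shared FIFO queue feeding several registration desks versus one queue per desk. All rates and counts below are hypothetical illustrations, not the hospital's data.

```python
import random

def mean_wait_single_queue(arrival_rate, service_rate, servers, n=20000, seed=1):
    """Mean wait when one shared FIFO queue feeds all desks (M/M/c-style)."""
    rng = random.Random(seed)
    free_at = [0.0] * servers                 # time each desk becomes idle
    t = total_wait = 0.0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)    # next patient arrives
        k = min(range(servers), key=free_at.__getitem__)  # earliest-free desk
        start = max(t, free_at[k])
        total_wait += start - t
        free_at[k] = start + rng.expovariate(service_rate)
    return total_wait / n

def mean_wait_separate_queues(arrival_rate, service_rate, servers, n=20000, seed=1):
    """Mean wait when each patient picks a desk's queue at random and stays in it."""
    rng = random.Random(seed)
    free_at = [0.0] * servers
    t = total_wait = 0.0
    for _ in range(n):
        t += rng.expovariate(arrival_rate)
        k = rng.randrange(servers)            # random queue choice, no jockeying
        start = max(t, free_at[k])
        total_wait += start - t
        free_at[k] = start + rng.expovariate(service_rate)
    return total_wait / n

single = mean_wait_single_queue(2.5, 1.0, 3)
separate = mean_wait_separate_queues(2.5, 1.0, 3)
print(f"single queue: {single:.2f}  separate queues: {separate:.2f}")
```

Under the same total load, the shared queue should yield the shorter mean wait, which is the qualitative effect the BPR redesign exploits.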
TreePM Method for Two-Dimensional Cosmological Simulations
Indian Academy of Sciences (India)
Suryadeep Ray
Keywords: gravitation; methods: numerical; cosmology: large-scale structure of the universe. The 2d TreePM code is an accurate and efficient technique to carry out large two-dimensional N-body simulations in cosmology. This hybrid ...
Simulation of tunneling construction methods of the Cisumdawu toll road
Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.
2017-11-01
Simulation can be used as a tool for planning and analysis of a construction method. Using simulation techniques, a contractor could optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation, as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation could show the duration of the project from the duration model of each work task, which was based on literature review, machine productivity, and several assumptions. The results of the simulation could also show the total cost of the project, which was modelled based on construction and building unit-cost journals and the online websites of local and international suppliers. The analysis of the advantages and disadvantages of the method was conducted based on its productivity, waste, and cost. The simulation concluded that the total cost of this operation is about Rp. 900,437,004,599 and the total duration of the tunneling operation is 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the already selected tunneling operation.
Math-Based Simulation Tools and Methods
National Research Council Canada - National Science Library
Arepally, Sudhakar
2007-01-01
...: HMMWV 30-mph Rollover Test, Soldier Gear Effects, Occupant Performance in Blast Effects, Anthropomorphic Test Device, Human Models, Rigid Body Modeling, Finite Element Methods, Injury Criteria...
RuleMonkey: software for stochastic simulation of rule-based models.
Colvin, Joshua; Monine, Michael I; Gutenkunst, Ryan N; Hlavacek, William S; Von Hoff, Daniel D; Posner, Richard G
2010-07-30
The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. RuleMonkey enables the simulation of rule-based models for which the
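The direct-method scheme that RuleMonkey generalizes can be sketched in a few lines: recompute the rule rates after every firing, draw an exponential waiting time from their sum, and pick a rule in proportion to its rate, so no step is a null event. The two-rule ligand-receptor model and rate constants below are hypothetical illustrations, not part of RuleMonkey.

```python
import random

def gillespie(rates_fn, apply_fn, state, t_end, seed=0):
    """Direct-method SSA: rule rates are recomputed after each firing,
    so every step changes the system state (no null events)."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        rates = rates_fn(state)
        total = sum(rates)
        if total == 0.0:
            break                        # nothing can fire
        t += rng.expovariate(total)      # exponential waiting time
        r, acc = rng.random() * total, 0.0
        for i, a in enumerate(rates):    # pick a rule proportionally to its rate
            acc += a
            if r < acc:
                apply_fn(state, i)
                break
    return state

# toy binding model: free ligand L + free receptor R <-> bound complex C
def rates(s):
    kon, koff = 0.01, 0.1                # hypothetical rate constants
    return [kon * s["L"] * s["R"], koff * s["C"]]

def fire(s, rule):
    if rule == 0:                        # association
        s["L"] -= 1; s["R"] -= 1; s["C"] += 1
    else:                                # dissociation
        s["L"] += 1; s["R"] += 1; s["C"] -= 1

final = gillespie(rates, fire, {"L": 100, "R": 100, "C": 0}, t_end=50.0)
print(final)
```

Note that molecule counts are conserved by construction (L + C stays fixed), which is a useful sanity check on any stochastic simulator.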
RuleMonkey: software for stochastic simulation of rule-based models
Directory of Open Access Journals (Sweden)
Hlavacek William S
2010-07-01
Background: The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems. Results: Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods. Conclusions: Rule
Comparing three methods for participatory simulation of hospital work systems
DEFF Research Database (Denmark)
Broberg, Ole; Andersen, Simone Nyholm
Summative Statement: This study compared three participatory simulation methods using different simulation objects: a low-resolution table-top setup using Lego figures, full-scale mock-ups, and blueprints using Lego figures. It was concluded that the three objects, by differences in fidelity and affordance...... and why this influence took place. Research Objective / Question: How does the simulation object influence which elements of a work system are being evaluated in participatory simulation events? Methodology: Observation notes and video recordings of three types of simulation events using different objects...... simulation objects may to a certain degree influence what part of a work system is being addressed in participatory simulation events. For human factors practitioners in hospital design projects it is important to pay attention to this when planning and facilitating simulation events to evaluate different...
Using simulation methods for orthopaedic implant design.
Rickey, L
2009-09-01
New virtual test methods are being used to better understand the functional performance of implants within the musculoskeletal system. Developing validated virtual models and tests reduces prototyping costs and compresses product development cycles.
Cartesian Grid Method for Compressible Flow Simulation
Energy Technology Data Exchange (ETDEWEB)
Farooq, Muhammed Asif
2012-07-01
The Cartesian grid method is an alternative to existing methods for computationally solving a physical problem governed by partial differential equations (PDEs). Researchers are interested in this method because of its simple grid generation, lower computational effort, and ease of implementation in a computer code. Another option for solving a physical PDE problem is the body-fitted grid method, in which the boundary points are grid points. This is not the case with the Cartesian grid method, where the body wall is embedded as a boundary into a Cartesian grid, resulting in irregular cells near the embedded boundary. These irregular cells near the embedded boundary are known as cut cells. Instead of using special treatments of the cut cells or enforcing the presence of the embedded boundary by adding source terms at the Cartesian grid points near the boundary, the kinematic and other boundary conditions can be introduced in the Cartesian grid method via ghost points. Grid points which lie inside the embedded boundary and are also part of the computation are called ghost points. Inactive grid points inside the embedded boundary are referred to as solid points. In the present Cartesian grid method, based on a ghost point treatment, local symmetry conditions are imposed at the embedded wall boundary. The ghost point treatments available in the literature are difficult to implement due to complex procedures. We introduce a new approach to approximate the kinematics of the embedded boundary by a very simple ghost point treatment called the simplified ghost point treatment. In this approach, we consider the grid lines in the x- and y-directions as approximations of the lines normal to the embedded boundary, depending on which direction makes the smaller angle with the normal. For 1D hyperbolic nonlinear systems of conservation laws, we use the moving normal shock wave as a test case for the 1D compressible
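The ghost-point idea can be illustrated in one dimension: values at grid points inside the body are filled by mirroring the nearest fluid points, with the wall-normal velocity negated so the symmetry (no-penetration) condition holds at the embedded wall. This is a minimal sketch of the general idea, not the paper's simplified treatment.

```python
def fill_ghost_points(u, wall, n_ghost=2):
    """Reflect the flow field about a solid wall at index `wall`:
    points i < wall are fluid, points i >= wall lie inside the body.
    Density-like quantities are mirrored (even reflection) and the
    wall-normal velocity changes sign (odd reflection)."""
    rho, vel = u
    for g in range(n_ghost):             # n_ghost = stencil width of the scheme
        ghost = wall + g                 # point inside the body
        mirror = wall - 1 - g            # matching fluid point
        rho[ghost] = rho[mirror]         # even reflection
        vel[ghost] = -vel[mirror]        # odd reflection -> zero normal flow at wall
    return u

rho = [1.0, 1.1, 1.2, 0.0, 0.0]          # last two entries are ghost points
vel = [0.5, 0.4, 0.3, 0.0, 0.0]
fill_ghost_points((rho, vel), wall=3)
print(rho, vel)
```

With the ghost values filled this way, an unmodified interior stencil can be applied right up to the wall.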
IDEF method-based simulation model design and development framework
Directory of Open Access Journals (Sweden)
Ki-Young Jeong
2009-09-01
The purpose of this study is to provide an IDEF method-based integrated framework for business process simulation models that reduces model development time by increasing communication and knowledge reusability during a simulation project. In this framework, simulation requirements are collected by a function modeling method (IDEF0) and a process modeling method (IDEF3). Based on these requirements, a common data model is constructed using the IDEF1X method. From this reusable data model, multiple simulation models are automatically generated using a database-driven simulation model development approach. The framework is claimed to help both the requirement collection and experimentation phases of a simulation project by improving system knowledge, model reusability, and maintainability through the systematic use of three descriptive IDEF methods and the features of relational database technologies. A complex semiconductor fabrication case study was used as a testbed to evaluate and illustrate the concepts and the framework. Two different simulation software products were used to develop and control the semiconductor model from the same knowledge base. The case study empirically showed that this framework could help improve simulation project processes by using IDEF-based descriptive models and relational database technology. The authors also concluded that this framework could easily be applied to other analytical model generation by separating the logic from the data.
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
Multi-pass Monte Carlo simulation method in nuclear transmutations.
Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M
2016-12-01
Monte Carlo methods, in their direct brute-force simulation incarnation, give realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components, and the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural measure would be adjusting the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of order 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps to have any significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that leads to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the very branched subject of Monte Carlo simulations vis-à-vis nuclear reactors. Copyright © 2016 Elsevier Ltd. All rights reserved.
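The pass-wise idea — freezing the interaction probabilities during a batch of histories and refreshing the inventory from the tallies between batches — can be sketched for a toy one-way transmutation A → B. The cross-sections and the history-to-atom weight below are invented for illustration; this is not the authors' algorithm.

```python
import random

def multipass_transmutation(n_a, n_b, sigma_a, sigma_b, histories, passes, seed=0):
    """Toy pass-wise Monte Carlo: within one pass the capture probability is
    frozen; between passes the inventory (hence the probability) is refreshed
    from the pass tallies."""
    rng = random.Random(seed)
    per_pass = histories // passes
    for _ in range(passes):
        # capture probability on species A, frozen for the whole pass
        p_a = n_a * sigma_a / (n_a * sigma_a + n_b * sigma_b)
        captures = sum(rng.random() < p_a for _ in range(per_pass))
        # hypothetical history-to-atom weight: each tallied capture stands
        # for `scale` real atoms transmuted from A to B
        scale = n_a / (10 * per_pass)
        n_a -= captures * scale
        n_b += captures * scale
    return n_a, n_b

a, b = multipass_transmutation(1e6, 0.0, 5.0, 1.0, histories=100000, passes=10)
print(a, b)
```

Total inventory is conserved by construction, while the capture probability drifts downward pass by pass as A is depleted, which is exactly the effect a single frozen-probability run would miss.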
New method of fast simulation for a hadron calorimeter response
Kulchitskii, Yu A; Tokar, S; Zenis, T
2003-01-01
In this work we present a new method for fast Monte Carlo simulation of a hadron calorimeter response. It is based on a three-dimensional parameterization of the hadronic shower obtained from the ATLAS TILECAL test beam data and GEANT simulations. A new approach to including the longitudinal fluctuations of the hadronic shower is described. The results of the fast simulation are in good agreement with the TILECAL experimental data. (15 refs).
Advanced molecular dynamics simulation methods for kinase drug discovery.
Aci-Sèche, Samia; Ziada, Sonia; Braka, Abdennour; Arora, Rohit; Bonnet, Pascal
2016-04-01
Interest in the application of molecular dynamics (MD) simulations has increased in the field of protein kinase (PK) drug discovery. PKs belong to an important drug target class because they are directly involved in a number of diseases, including cancer. MD methods simulate dynamic biological and chemical events at an atomic level. This information can be combined with other in silico and experimental methods to efficiently target selected receptors. In this review, we present common and advanced methods of MD simulations and we focus on the recent applications of MD-based methodologies that provided significant insights into the elucidation of biological mechanisms involving PKs and into the discovery of novel kinase inhibitors.
A Software-Defined Radio Simulation Method using Observer Patterns
Moseley, N.A.; Slump, Cornelis H.
2005-01-01
A problem with object-oriented simulation models is that internal model states are hidden and cannot be monitored easily. Object-oriented models are essentially black-box models. This article describes a method to expose the internal states of an object-oriented simulation model. Exposure of the
A particle-based method for granular flow simulation
Chang, Yuanzhang
2012-03-16
We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformations can be handled easily and naturally. In addition, a signed distance field is employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a random statistical method which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.
Two Dimensional Lattice Boltzmann Method for Cavity Flow Simulation
Panjit MUSIK; Krisanadej JAROENSUTASINEE
2004-01-01
This paper presents a simulation of incompressible viscous flow within a two-dimensional square cavity. The objective is to develop a method originating from Lattice Gas (cellular) Automata (LGA), which utilises a discrete lattice as well as discrete time and can be parallelised easily. The Lattice Boltzmann Method (LBM), a discrete lattice kinetics approach that provides an alternative for solving the Navier–Stokes equations and is generally used for fluid simulation, is chosen for the study. A spec...
Motion simulation of hydraulic driven safety rod using FSI method
Energy Technology Data Exchange (ETDEWEB)
Jung, Jaeho; Kim, Sanghaun; Yoo, Yeonsik; Cho, Yeonggarp; Kim, Jong In [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-05-15
A hydraulic driven safety rod, one of the reactivity control mechanisms, is being developed by the Division for Reactor Mechanical Engineering, KAERI. In this paper, the motion of this rod is simulated by the fluid structure interaction (FSI) method before manufacturing, for design verification and pump sizing. The simulation is done in the CFD domain with a UDF. The pressure drop changes only slightly with flow rate, which means that the pressure drop is mainly determined by the weight of the moving part. The simulated velocity of the piston is linearly proportional to the flow rate, so the pump can be sized easily according to the rise and drop time requirements of the safety rod using the simulation results.
An introduction to computer simulation methods applications to physical systems
Gould, Harvey; Christian, Wolfgang
2007-01-01
Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...
A simulation based engineering method to support HAZOP studies
DEFF Research Database (Denmark)
Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge
2012-01-01
HAZOP is the most commonly used process hazard analysis tool in industry: a systematic yet tedious and time-consuming method. The aim of this study is to explore the feasibility of using dynamic process simulations to facilitate HAZOP studies. We propose a simulation-based methodology to complement...... the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of failure scenarios is then evaluated using dynamic simulations; in this study the K-Spice® software was used. The consequences of each failure...... model as case study....
A tool for simulating parallel branch-and-bound methods
Directory of Open Access Journals (Sweden)
Golubeva Yana
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but very resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; therefore, the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
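A stripped-down version of such a simulator is easy to sketch: replace the B&B tree by a branching process, advance logical time in ticks in which every busy worker expands one node, and optionally let idle workers steal work. Everything below (worker count, stealing rule) is a hypothetical illustration, not the simulator described in the paper.

```python
import random

def simulate(n_workers, p_branch, max_depth, steal, seed=3):
    """Toy stand-in for a parallel B&B search: a node branches into two
    children with probability p_branch until max_depth.  Each loop
    iteration is one tick of logical time; with `steal` on, idle workers
    grab a node from the most loaded worker before the tick.  Returns the
    number of ticks until all pools are empty."""
    rng = random.Random(seed)
    pools = [[] for _ in range(n_workers)]
    pools[0].append(0)                        # root node (depth 0) on worker 0
    steps = 0
    while any(pools):
        if steal:                             # idle workers grab work first
            for w in range(n_workers):
                if not pools[w]:
                    donor = max(range(n_workers), key=lambda d: len(pools[d]))
                    if len(pools[donor]) > 1:
                        pools[w].append(pools[donor].pop())
        for w in range(n_workers):            # each busy worker expands one node
            if pools[w]:
                depth = pools[w].pop()
                if depth < max_depth and rng.random() <= p_branch:
                    pools[w] += [depth + 1, depth + 1]
        steps += 1
    return steps

# a deterministic full tree (p_branch = 1) shows the benefit of load balancing
print(simulate(4, 1.0, 3, steal=True), simulate(4, 1.0, 3, steal=False))
```

Without stealing, all 15 nodes of the full depth-3 tree stay on worker 0 (15 ticks); with stealing, the work spreads across the four workers and finishes in far fewer ticks.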
Comparing Intravenous Insertion Instructional Methods with Haptic Simulators
Directory of Open Access Journals (Sweden)
Lenora A. McWilliams
2017-01-01
Objective. The objective of this review was to compare traditional intravenous (IV) insertion instructional methods with the use of haptic IV simulators. Design. An integrative research design was used to analyze the current literature. Data Sources. A search was conducted using the key words intravenous (IV) insertion, cannulation, venipuncture, and simulation from 2000 to 2015 in the English language. The databases included Academic Search Complete, CINAHL Complete, Education Resource Information Center, and Medline. Review Methods. Whittemore and Knafl's (2005) strategies were used to critique the articles for themes and similarities. Results. Comparisons of outcomes between traditional IV instructional methods and the use of haptic IV simulators continue to show various results. Positive results indicate that use of the haptic IV simulator decreases both band constriction and total procedure time. While students are satisfied with practicing on the haptic simulators, they still desire faculty involvement. Conclusion. Combining the haptic IV simulator with practical experience on the IV arm may be the best practice for learning IV insertion. Research employing active learning strategies while using a haptic IV simulator during the learning process may reduce cost and faculty time.
Comparison of EBSD patterns simulated by two multislice methods.
Liu, Q B; Cai, C Y; Zhou, G W; Wang, Y G
2016-10-01
The extraction of crystallography information from electron backscatter diffraction (EBSD) patterns can be facilitated by diffraction simulations based on the dynamical electron diffraction theory. In this work, the EBSD patterns are successfully simulated by two multislice methods, that is, the real space (RS) method and the revised real space (RRS) method. The calculation results by the two multislice methods are compared and analyzed in detail with respect to different accelerating voltages, Debye-Waller factors and aperture radii. It is found that the RRS method provides a larger view field of the EBSD patterns than that by the RS method under the same calculation conditions. Moreover, the Kikuchi bands of the EBSD patterns obtained by the RRS method have a better match with the experimental patterns than those by the RS method. Especially, the lattice parameters obtained by the RRS method are more accurate than those by the RS method. These results demonstrate that the RRS method is more accurate for simulating the EBSD patterns than the RS method within the accepted computation time. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Multilevel panel method for wind turbine rotor flow simulations
van Garrel, Arne
2016-01-01
Simulation methods of wind turbine aerodynamics currently in use mainly fall into two categories: the first is the group of traditional low-fidelity engineering models and the second is the group of computationally expensive CFD methods based on the Navier-Stokes equations. For an engineering
The afforestation problem: a heuristic method based on simulated annealing
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui
1992-01-01
This paper presents the afforestation problem, that is, the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented.
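The simulated-annealing core of such a heuristic is a short loop: accept any improving move, accept a worsening move with probability exp(-Δ/T), and cool T geometrically. The one-dimensional objective below is a stand-in for illustration, not the afforestation model.

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=10.0, cooling=0.995,
                        steps=5000, seed=0):
    """Generic simulated-annealing loop with geometric cooling and
    best-so-far tracking."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbour(x, rng)
        cy = cost(y)
        # always accept improvements; accept worsenings with prob exp(-delta/T)
        if cy < c or rng.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling                      # geometric cooling schedule
    return best, best_c

# hypothetical stand-in objective: place one compartment on a line
cost = lambda x: (x - 3.7) ** 2
neighbour = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x, c = simulated_annealing(cost, neighbour, x0=0.0)
print(round(x, 2), round(c, 4))
```

Early on the high temperature lets the search cross cost barriers; as T shrinks the loop degenerates into greedy descent, which matches the two-phase behaviour annealing heuristics rely on.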
Nonequilibrium relaxation method – An alternative simulation strategy
Indian Academy of Sciences (India)
One well-established simulation strategy to study the thermal phases and transitions of a given microscopic model system is the so-called equilibrium method, in which one first realizes the equilibrium ensemble of a finite system and then extrapolates the results to the infinite system. This equilibrium method traces ...
A simple method for potential flow simulation of cascades
Indian Academy of Sciences (India)
Department of Mechanical Engineering, Indian Institute of Science, Bangalore 560 012. MS received 21 April 2009; revised 1 February 2010; accepted 23 August 2010. A simple method using a combination of conformal mapping and the vortex panel method to simulate potential ...
Steel Fibre Reinforced Concrete Simulation with the SPH Method
Hušek, Martin; Kala, Jiří; Král, Petr; Hokeš, Filip
2017-10-01
Steel fibre reinforced concrete (SFRC) is very popular in many branches of civil engineering. Thanks to its increased ductility, it is able to resist various types of loading. When designing a structure, the mechanical behaviour of SFRC can be described by currently available material models (with an equivalent material, for example), and therefore no problems arise with numerical simulations. But in many scenarios, e.g. high-speed loading, it would be a mistake to use such an equivalent material. Physical modelling of the steel fibres used in concrete is usually problematic, though. It is necessary to consider the fact that mesh-based methods are very unsuitable for high-speed simulations because of the issues that arise from excessive mesh deformation. So-called meshfree methods are much more suitable for this purpose. The Smoothed Particle Hydrodynamics (SPH) method is currently the best choice, thanks to its advantages. However, a numerical defect known as tensile instability may appear when the SPH method is used. It causes the development of numerical (false) cracks, making simulations of ductile types of failure significantly more difficult to perform. The contribution therefore deals with the description of a procedure for avoiding this defect and successfully simulating the behaviour of SFRC with the SPH method. The essence of the problem lies in the choice of coordinates and the description of the integration domain derived from them: spatial (Eulerian kernel) or material coordinates (Lagrangian kernel). The contribution describes the behaviour of both formulations. Conclusions are drawn from the fundamental tasks, and the contribution additionally demonstrates the functionality of SFRC simulations. The random generation of steel fibres and their inclusion in simulations are also discussed. The functionality of the method is supported by the results of pressure test simulations which compare various levels of fibre reinforcement of SFRC.
A nondissipative simulation method for the drift kinetic equation
Energy Technology Data Exchange (ETDEWEB)
Watanabe, Tomo-Hiko; Sugama, Hideo; Sato, Tetsuya
2001-07-01
With the aim of studying the ion temperature gradient (ITG) driven turbulence, a nondissipative kinetic simulation scheme is developed and comprehensively benchmarked. The new simulation method, preserving the time-reversibility of basic kinetic equations, can successfully reproduce the analytical solutions of asymmetric three-mode ITG equations, which are extended to provide a more general reference for benchmarking than the previous work [T.-H. Watanabe, H. Sugama, and T. Sato: Phys. Plasmas 7 (2000) 984]. It is also applied to a dissipative three-mode system, and shows good agreement with the analytical solution. The nondissipative simulation result of the ITG turbulence accurately satisfies the entropy balance equation. The usefulness of the nondissipative method for drift kinetic simulations is confirmed in comparisons with other, dissipative schemes. (author)
Rejection-free stochastic simulation of BNGL-encoded models
Energy Technology Data Exchange (ETDEWEB)
Hlavacek, William S [Los Alamos National Laboratory; Monine, Michael I [Los Alamos National Laboratory; Colvin, Joshua [TRANSLATIONAL GENOM; Posner, Richard G [NORTHERN ARIZONA UNIV.; Von Hoff, Daniel D [TRANSLATIONAL GENOMICS RESEARCH INSTIT.
2009-01-01
Formal rules encoded using the BioNetGen language (BNGL) can be used to represent the system-level dynamics of molecular interactions. Rules allow one to compactly and implicitly specify the reaction network implied by a set of molecules and their interactions. Typically, the reaction network implied by a set of rules is large, which makes generation of the underlying rule-defined network expensive. Moreover, the cost of conventional simulation methods typically depends on network size. Together these factors have limited application of the rule-based modeling approach. To overcome this limitation, several methods have recently been developed for determining the reaction dynamics implied by rules while avoiding the expensive step of network generation. The cost of these 'network-free' simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is needed for the analysis of rule-based models of biochemical systems. Here, we present a software tool called RuleMonkey that implements a network-free stochastic simulation method for rule-based models. The method is rejection free, unlike other network-free methods that introduce null events (i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated), and the software is capable of simulating models encoded in BNGL, a general-purpose model-specification language. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant general-purpose simulator for rule-based models, as well as various problem-specific codes that implement network-free simulation methods. RuleMonkey enables the simulation of models defined by rule sets that imply large-scale reaction networks. It is faster than DYNSTOC for stiff problems, although it requires the use of more computer memory. RuleMonkey is freely available for non-commercial use as a stand
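The rejection-free selection loop at the heart of such a simulator can be sketched in a few lines. This is a minimal illustration of Gillespie's direct method organized around rule-level cumulative rates, not RuleMonkey's actual implementation; the dimerization rules, rate constants, and counts below are invented for the example.

```python
import math
import random

def simulate(rules, state, t_end):
    """Direct-method stochastic simulation driven by rule-level rates.

    Each rule is (rate_constant, propensity_fn, fire_fn); the propensity
    counts the distinct reactant combinations, so rate_constant * propensity
    is the rule's cumulative rate over all reactions it implies. Because
    the rates are tracked exactly, every step fires a real event (no
    null events, i.e. no rejections)."""
    t = 0.0
    while t < t_end:
        rates = [k * prop(state) for k, prop, _ in rules]
        total = sum(rates)
        if total == 0.0:
            break  # no reaction is possible
        t += -math.log(1.0 - random.random()) / total  # exponential waiting time
        r = random.uniform(0.0, total)
        for rate, (_, _, fire) in zip(rates, rules):
            if r < rate:
                fire(state)
                break
            r -= rate
    return state

# Invented example: dimerization A + A -> D and dissociation D -> A + A
def dimerize(s):
    s['A'] -= 2
    s['D'] += 1

def dissociate(s):
    s['A'] += 2
    s['D'] -= 1

rules = [
    (0.01, lambda s: s['A'] * (s['A'] - 1) / 2.0, dimerize),
    (0.1, lambda s: s['D'], dissociate),
]
final = simulate(rules, {'A': 100, 'D': 0}, t_end=10.0)
```

However the random draws fall, the sketch conserves mass: every firing converts two A into one D or back.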
Adaptive implicit method for thermal compositional reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)
2008-10-15
As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The technique currently most used for solving these equations is the fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation; however, it is computationally expensive. On the other hand, the method known as IMplicit pressure explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between the timestep size and computational cost, the thermal adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes: the stability criteria that dictate the maximum allowed timestep size for simulation, based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented, along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.
Geostatistic in Reservoir Characterization: from estimation to simulation methods
Mata Lima, H.
2005-01-01
This article reviews the different geostatistical methods available to estimate and simulate the petrophysical properties (porosity and permeability) of a reservoir. Different geostatistical techniques that allow the combination of hard and soft data are taken into account, and the main reasons for using geostatistical simulation rather than estimation are given. Uncertainty in reservoir characterization due to the variogram assumption, which is a strict mathematical equa...
Diagnostic method for induction motor using simplified motor simulator
Doumae, Yukihiro; Konishi, Masami; Imai, Jun; Asada, Hideki; Kitamura, Akira
2001-01-01
In this paper, an identification method of motor parameters for the diagnosis of rotor bar defects in the squirrel cage induction motor is proposed. It is difficult to distinguish the degree of deterioration by a conventional diagnostic method such as Fourier analysis. To overcome the difficulty, a motor simulator is used to identify the degree of deterioration of rotors in the squirrel cage induction motor. Using this method, the deterioration of rotor bars in the motor can be estimated quan...
Validation of Solution Methods for Building Energy Simulation
Crowley, Michael
2006-01-01
The most commonly applied mathematical solution techniques for building energy simulation are response function methods and finite difference methods. The accepted validation methodology in this domain has as its main elements empirical validation, analytical verification and inter-model comparison. Of these, only analytical verification tests the solution method exclusively; but the test examples used are too confined to be representative of the building energy problem. A discriminating and ...
Simulating Hair with the Loosely-Connected Particles Method
Soták, Šimon
2010-01-01
This thesis presents an implementation of the Loosely Connected Particles (LCP) method of hair animation proposed by Bando et al. We updated the method with several modern approaches. Firstly, we implemented two variations of parallel processing for the simulation which differ in work distribution among threads. The results indicate that work is distributed evenly, and thus dynamic distribution is not needed. Secondly, we applied the Deep Opacity Maps method of hair shadowing on the LCP and in...
Simulation of dry granular flows using discrete element methods
Martin, Hugo; Lefebvre, Aline; Maday, Yvon; Mangeney, Anne; Maury, Bertrand; Sainte-Marie, Jacques
2017-04-01
Granular flows are composed of interacting particles (for instance sand grains). While natural flow simulations at the field scale are generally based on continuum models, discrete element methods are very useful to get insight into the detailed contact interactions between the particles involved. We shall consider here both the well known molecular dynamics (MD) and contact dynamics (CD) methods to simulate granular particle interaction. The difference between these methods is the linearisation of contact forces in MD. We are interested in comparing these methods, and especially in the effects of the linearisation on simulations. In the present work, we introduce a new rigid bodies model at the scale of the particles and its resolution by contact dynamics. The interesting aspect of our CD method is that it treats all contacts in the material system in one step, without the iterative process required when the contacts are dealt with one after the other. All contacts are calculated here at the same time in just one iteration, and the normal and tangential constraints are treated simultaneously. The present model follows from a convex optimization problem presented in [1] by B. Maury, in which we add a frictional behaviour to the contact law between the particles. To analyse the behaviour of this model, we compare our results to analytical solutions when we can compute them, and otherwise to simulations with the molecular dynamics method. [1] A time-stepping scheme for inelastic collisions. Numerical handling of the nonoverlapping constraint, B. Maury, Numerische Mathematik, 17 January 2006.
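The linearised contact force that distinguishes MD from CD can be sketched for a normal contact between two equal spheres. This is a generic spring-dashpot illustration under assumed stiffness and damping values, not the model from the paper.

```python
import math

def md_contact_force(xi, xj, vi, vj, radius, k=1e4, c=5.0):
    """Linearised (MD-style) normal contact force between two equal
    spheres in 2D: a spring acting on the geometric overlap plus a
    dashpot acting on the normal relative velocity. The stiffness k
    and damping c are illustrative values, not from the paper."""
    dx = (xj[0] - xi[0], xj[1] - xi[1])
    dist = math.hypot(dx[0], dx[1])
    overlap = 2.0 * radius - dist
    if dist == 0.0 or overlap <= 0.0:
        return (0.0, 0.0)  # particles are not in contact
    n = (dx[0] / dist, dx[1] / dist)  # unit normal from i towards j
    vrel_n = (vj[0] - vi[0]) * n[0] + (vj[1] - vi[1]) * n[1]
    fmag = k * overlap - c * vrel_n  # elastic repulsion + viscous dissipation
    return (-fmag * n[0], -fmag * n[1])  # force acting on particle i
```

A CD formulation would instead impose the non-overlap constraint exactly (as in Maury's scheme) rather than penalising the overlap with a spring.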
Exact hybrid particle/population simulation of rule-based models of biochemical systems.
Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R
2014-04-01
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings
A General Simulation Method for Multiple Bodies in Proximate Flight
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
Full Text Available The article is an example of using the simulation software @Risk, designed for simulation in a Microsoft Excel spreadsheet, and demonstrates the possibility of its usage as a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating and optimizing, and afterwards applying the results to the real system. In general, a simulation model represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance, investment costs) and random inputs (for instance, demand), which the model transforms into outputs (for instance, the mean value of profit). In a simulation experiment, the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong to the quantitative tools which can be used as a support for decision making.
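The controlled-input / random-input / output pipeline described above can be sketched as a tiny Monte Carlo experiment. All distributions and figures below are invented for illustration and are not taken from the article.

```python
import random
import statistics

def monte_carlo_profit(n_runs=10000, price=12.0, unit_cost=7.0,
                       fixed_cost=2000.0, seed=1):
    """Toy Monte Carlo experiment in the spirit of a spreadsheet @Risk
    model: prices and costs are controlled inputs, demand is a random
    (stochastic) input, and profit is the transformed output."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n_runs):
        demand = rng.gauss(mu=1000.0, sigma=150.0)  # random demand draw
        profit = (price - unit_cost) * demand - fixed_cost
        profits.append(profit)
    # Summary statistics over all simulated experiments
    return statistics.mean(profits), statistics.stdev(profits)

mean_profit, sd_profit = monte_carlo_profit()
```

With these illustrative numbers the expected profit is about 5 x 1000 - 2000 = 3000, and the spread of the output inherits the spread of the demand input scaled by the unit margin.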
Energy Technology Data Exchange (ETDEWEB)
HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK
2000-04-01
Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.
Simulation methods with extended stability for stiff biochemical Kinetics
Directory of Open Access Journals (Sweden)
Rué Pau
2010-08-01
Full Text Available Abstract Background With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
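The basic Poisson τ-leap step that the RK extension builds on can be sketched directly. This is a generic illustration with an invented two-species system, not the extended RK scheme from the paper; Python's standard library has no Poisson sampler, so Knuth's method is used.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for the small means used here)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def tau_leap_step(state, reactions, tau, rng):
    """One Poisson tau-leap step: each reaction channel fires a
    Poisson(a_j * tau) number of times within the fixed step tau,
    instead of simulating every firing as a separate SSA event."""
    for propensity, stoichiometry in reactions:
        n = poisson(propensity(state) * tau, rng)
        for species, change in stoichiometry.items():
            state[species] += n * change
    return state

# Invented reversible system: A -> B at rate 0.1*A, B -> A at rate 0.05*B
reactions = [
    (lambda s: 0.1 * s['A'], {'A': -1, 'B': +1}),
    (lambda s: 0.05 * s['B'], {'A': +1, 'B': -1}),
]
state = {'A': 1000, 'B': 0}
rng = random.Random(42)
for _ in range(100):
    tau_leap_step(state, reactions, tau=0.1, rng=rng)
```

Because each firing moves one molecule between A and B, the total population is conserved exactly, whatever the Poisson draws are.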
High viscosity fluid simulation using particle-based method
Chang, Yuanzhang
2011-03-01
We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, which is derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of the high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be well handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods with complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient to handle the movements of highly viscous flows, and a large variety of different kinds of fluid behaviors can be well simulated by adjusting just one parameter. © 2011 IEEE.
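The Lagrangian particle summation that underlies SPH methods like this one can be sketched briefly. This is a generic density-summation illustration with a standard poly6 kernel, not the elastic-stress formulation of the paper; the particle positions and smoothing length are invented.

```python
import math

def w_poly6(r, h):
    """Poly6 smoothing kernel in 3D, a common SPH choice; it is zero
    beyond the smoothing length h."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h**9) * (h * h - r * r) ** 3

def sph_density(positions, mass, h):
    """SPH density at each particle: rho_i = sum_j m_j W(|x_i - x_j|, h).
    Brute-force O(N^2) neighbour loop, for illustration only; real codes
    use spatial hashing or neighbour lists."""
    rho = []
    for xi in positions:
        s = 0.0
        for xj in positions:
            s += mass * w_poly6(math.dist(xi, xj), h)
        rho.append(s)
    return rho

# Three particles on a line: the middle one sees both neighbours
pos = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]
rho = sph_density(pos, mass=1.0, h=1.0)
```

Pressure, viscous, and (in this paper's case) elastic stress terms are then evaluated from similar kernel-weighted sums over neighbouring particles.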
Event by event method for quantum interference simulation
Mutia Delina, M
2014-01-01
The event-by-event method is a simulation approach that is not based on knowledge of the Schrödinger equation. This approach uses classical wave theory and the particle concept: we use particles, not waves. The data are obtained by counting the events detected by the detector, just as in
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
Development of new deactivation method for simulation of fluid ...
Indian Academy of Sciences (India)
Selection of a good catalyst is the easiest way to increase the profitability of a fluid catalytic cracking (FCC) unit. During operation, these ... A new rapid deactivation method has been developed to simulate plant equilibrium catalyst (E-Cat) by modifying metal impregnation, steaming and oxidation/reduction procedures. The E-Cat ...
Space-time multiscale methods for Large Eddy Simulation
Munts, E.A.
2006-01-01
The Variational Multiscale (VMS) method has appeared as a promising new approach to the Large Eddy Simulation (LES) of turbulent flows. The key advantage of the VMS approach is that it allows different subgrid-scale (SGS) modeling assumptions to be made at different ranges of the resolved scales.
Practical considerations for incomplete factorization methods in reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Behie, A.; Forsyth, P.A.
1983-11-01
Various incomplete factorization (ILU) methods coupled with ORTHOMIN acceleration are discussed. These include natural, D2 and D4 orderings with several degrees of factorization, the modified factorization (MILU) and the COMBINATIVE method. These techniques can also be used with the bordered systems resulting from fully-coupled, fully-implicit multi-block wells. Test results are reported for fully implicit black oil and fully implicit thermal simulations. Some results are also reported for vector and scalar modes on the CRAY.
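A minimal sketch of the incomplete factorizations referred to above is ILU(0), which performs Gaussian elimination but discards any fill-in outside the original sparsity pattern. This pure-Python, dense-storage version is illustrative only; a production simulator would use sparse storage and couple the factors to an accelerator such as ORTHOMIN.

```python
def ilu0(a):
    """In-place ILU(0): LU elimination restricted to the nonzero
    pattern of the input matrix (no fill-in). The result holds U in
    the upper triangle and the strictly lower part of L below the
    diagonal (L's unit diagonal is implicit)."""
    n = len(a)
    nz = [[a[i][j] != 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(k + 1, n):
            if not nz[i][k]:
                continue  # respect the sparsity pattern
            a[i][k] /= a[k][k]
            for j in range(k + 1, n):
                if nz[i][j]:  # drop any fill outside the pattern
                    a[i][j] -= a[i][k] * a[k][j]
    return a

# For a tridiagonal matrix no fill arises, so ILU(0) equals the exact LU
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
F = ilu0([row[:] for row in A])
```

Degrees of factorization (ILU(1), ILU(2), ...) relax the pattern restriction, and orderings such as D2 and D4 change which entries the pattern contains.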
A method of outdoor simulation of infrared radiance of targets
Song, Jiang-tao; Shen, Xiang-heng; Zhao, Ying-jie
2009-07-01
Current research on infrared simulation often focuses mainly on infrared imaging simulation by computer and pays little attention to outdoor simulation of the infrared radiation characteristics of targets. In order to simulate the infrared radiance of targets outdoors, in this paper we propose a new outdoor simulation method based on heating a cloth by electricity. There are two major contributions in the paper. Firstly, the uneven distribution of the temperature field of the cloth surface is considered, and the long-wave thermal imager is used as a link of the temperature control system. On the basis of many experiments, an expression is derived relating the temperature obtained by the long-wave thermal imager, the temperature obtained by the temperature control system, and the environmental temperature at the experimental scene. Secondly, the influence of the environment at the experimental scene on the infrared radiance of the cloth surface is taken into account. Thanks to these two measures, the simulation precision of infrared radiance is much improved. The results of many outdoor experiments demonstrate the performance of the proposed approach.
System and Method for Finite Element Simulation of Helicopter Turbulence
McFarland, R. E. (Inventor); Dulsenberg, Ken (Inventor)
1999-01-01
The present invention provides a turbulence model that has been developed for blade-element helicopter simulation. This model uses an innovative temporal and geometrical distribution algorithm that preserves the statistical characteristics of the turbulence spectra over the rotor disc, while providing velocity components in real time to each of five blade-element stations along each of four blades, for a total of twenty blade-element stations. The simulator system includes a software implementation of flight dynamics that adheres to the guidelines for turbulence set forth in military specifications. One of the features of the present simulator system is that it applies simulated turbulence to the rotor blades of the helicopter, rather than to its center of gravity. The simulator system accurately models the rotor penetration into a gust field. It includes time correlation between the front and rear of the main rotor, as well as between the side forces felt at the center of gravity and at the tail rotor. It also includes features for added realism, such as patchy turbulence and vertical gusts into which the rotor disc penetrates. These features are realized by a unique real-time implementation of the turbulence filters. The new simulator system uses two arrays, one on either side of the main rotor, to record the turbulence field and to produce time correlation from the front to the rear of the rotor disc. The use of Gaussian interpolation between the two arrays maintains the statistical properties of the turbulence across the rotor disc. The present simulator system and method may be used in future and existing real-time helicopter simulations with minimal increase in computational workload.
IC space radiation effects experimental simulation and estimation methods
Chumakov, A I; Telets, V A; Gerasimov, V F; Yanenko, A V; Sogoyan, A V
1999-01-01
Laboratory test simulation methods are developed for predicting IC response to space radiation. A minimum set of radiation simulators is proposed to investigate IC failures and upsets under space radiation. An accelerated test technique for estimating MOS IC degradation under low-intensity irradiation is developed, taking into account temperature variations as well as latent degradation effects. Two-parameter cross-section functions are adapted to describe ion- and proton-induced single event upsets. Non-focused laser irradiation is found to be applicable for single event latchup threshold estimation.
Meshfree simulation of avalanches with the Finite Pointset Method (FPM)
Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios
2017-04-01
Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.
Numerical Simulation of Solitary Waves Using Smoothed Particle Hydrodynamics Method
Directory of Open Access Journals (Sweden)
Swapnadip De Chowdhury
2012-09-01
Full Text Available Understanding shallow water wave propagation is of major concern in any coastal mitigation effort. A solitary wave often replicates a shallow water wave in its extreme sense, which includes a tsunami wave, mainly due to the known physical characteristics of such waves. Therefore, the study of the propagation of solitary waves in near-shore waters is of equal importance in the context of nonlinear water waves. Owing to the significant growth in computational technologies in the last few decades, a significant number of numerical methods have emerged and been applied to simulate nonlinear solitary wave propagation. In this study, one such method, the Smoothed Particle Hydrodynamics (SPH) method, is described and used to simulate solitary waves. The split-up of a single solitary wave as it crosses a continental-type shelf has been simulated by the present model. The SPH model is then coupled with a Boussinesq model to predict the time interval between two successive solitary waves on landfall. It has also been shown to be equally efficient in simulating wave breaking while a solitary wave propagates over a mild slope.
Reduction Method for Real-Time Simulations in Hybrid Testing
DEFF Research Database (Denmark)
Andersen, Sebastian; Poulsen, Peter Noe
2014-01-01
Real-time hybrid testing combines testing of physical components with numerical simulations. The concept of the method requires that the numerical simulations be executed in real time. However, for large numerical models including nonlinear behavior a combination of computationally costly … to reformulate kinematic nonlinear equations of motion into a sum of constant matrices, each multiplied by a reduced coordinate, decreasing the assembling time. Furthermore, the method allows for cutting off some of the higher-frequency content not representing real physics, decreasing the stability requirement … of choosing a sufficient basis, a composite beam and a cantilever beam including kinematic nonlinearities and exposed to harmonic loadings are analyzed. To reduce locking, modes with higher-order terms are included. From the analysis it is concluded that the method exhibits encouraging potential with respect …
Directory of Open Access Journals (Sweden)
Kai Liu
2016-06-01
Full Text Available Signals in long-distance pipes are complex due to flow-induced noise generated in special structures, and the computation of these noise sources is difficult and time-consuming. To address this problem, a hybrid method based on computational fluid dynamics and Lighthill's acoustic analogy theory is proposed to simulate flow-induced noise, with the results showing that the method is sufficient for noise prediction. The proposed method computes the turbulent flow field using detached eddy simulation and then calculates turbulence-generated sound using the finite element acoustic analogy method, which treats acoustic sources as volume sources. The velocity field obtained in the detached eddy simulation provides the sound source through interpolation between the computational fluid dynamics and acoustic meshes. The hybrid method is validated and assessed by comparing data from a cavity in a pipe with large eddy simulation results. The peak value of flow-induced noise calculated at the monitor point is in good agreement with experimental data available in the literature.
Numerical simulation of explosive welding using Smoothed Particle Hydrodynamics method
Directory of Open Access Journals (Sweden)
J Feng
2017-09-01
Full Text Available In order to investigate the mechanism of explosive welding and the influence of explosive welding parameters on the welding quality, this paper presents a numerical simulation of the explosive welding of Al-Mg plates using the Smoothed Particle Hydrodynamics method. The multi-physical phenomena of explosive welding, including acceleration of the flyer plate driven by explosive detonation, oblique collision of the flyer and base plates, the jetting phenomenon and the formation of the wavy interface, can be reproduced in the simulation. The characteristics of explosive welding are analyzed based on the simulation results. The formation of the wavy interface is mainly due to oscillation of the collision point on the bonding surfaces. In addition, the impact velocity and collision angle increase with the welding parameters, such as explosive thickness and standoff distance, resulting in enlargement of the interfacial waves.
Unstructured spectral element methods for simulation of turbulent flows
Energy Technology Data Exchange (ETDEWEB)
Henderson, R.D. [California Inst. of Technology, Pasadena, CA (United States); Karniadakis, G.E. [Brown Univ., Providence, RI (United States)
1995-12-01
In this paper we present a spectral element-Fourier algorithm for simulating incompressible turbulent flows in complex geometries using unstructured quadrilateral meshes. To this end, we compare two different interface formulations for extending the conforming spectral element method to allow surgical mesh refinement while retaining spectral accuracy: the Zanolli iterative procedure and variational patching based on auxiliary "mortar" functions. We present an interpretation of the original mortar element method as a patching scheme and develop direct and iterative solution techniques that make the method efficient for simulations of turbulent flows. The properties of the new method are analyzed in detail by studying the eigenspectra of the advection and diffusion operators. We then present numerical results that illustrate the flexibility as well as the exponential convergence of the new algorithm for nonconforming discretizations. We conclude with simulation studies of the turbulent cylinder wake at Re = 1000 (external flow) and turbulent flow over riblets at Re = 3280 (internal flow).
Assessing numerical methods for molecular and particle simulation.
Shang, Xiaocheng; Kröger, Martin; Leimkuhler, Benedict
2017-11-22
We discuss the design of state-of-the-art numerical methods for molecular dynamics, focusing on the demands of soft matter simulation, where the purposes include sampling and dynamics calculations both in and out of equilibrium. We discuss the characteristics of different algorithms, including their essential conservation properties, the convergence of averages, and the accuracy of numerical discretizations. Formulations of the equations of motion which are suited to both equilibrium and nonequilibrium simulation include Langevin dynamics, dissipative particle dynamics (DPD), and the more recently proposed "pairwise adaptive Langevin" (PAdL) method, which, like DPD but unlike Langevin dynamics, conserves momentum and better matches the relaxation rate of orientational degrees of freedom. PAdL is easy to code and suitable for a variety of problems in nonequilibrium soft matter modeling; our simulations of polymer melts indicate that this method can also provide dramatic improvements in computational efficiency. Moreover we show that PAdL gives excellent control of the relaxation rate to equilibrium. In the nonequilibrium setting, we further demonstrate that while PAdL allows the recovery of accurate shear viscosities at higher shear rates than are possible using the DPD method at identical timestep, it also outperforms Langevin dynamics in terms of stability and accuracy at higher shear rates.
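As a point of reference for the Langevin dynamics discussed above, the following minimal sketch (not the PAdL method itself; the parameters and the harmonic test potential are chosen here purely for illustration) implements the widely used BAOAB splitting of Langevin dynamics for one degree of freedom and checks that it samples the correct configurational variance:

```python
import math, random

def baoab_step(q, p, force, dt, gamma=1.0, kT=1.0, m=1.0):
    """One step of the BAOAB splitting of Langevin dynamics."""
    p += 0.5 * dt * force(q)                  # B: half kick
    q += 0.5 * dt * p / m                     # A: half drift
    c1 = math.exp(-gamma * dt)                # O: exact Ornstein-Uhlenbeck solve
    c2 = math.sqrt((1.0 - c1 * c1) * kT * m)
    p = c1 * p + c2 * random.gauss(0.0, 1.0)
    q += 0.5 * dt * p / m                     # A: half drift
    p += 0.5 * dt * force(q)                  # B: half kick
    return q, p

random.seed(0)
q, p = 0.0, 0.0
acc = n = 0
for step in range(200000):
    q, p = baoab_step(q, p, lambda x: -x, dt=0.1)  # harmonic U(q) = q^2 / 2
    if step >= 10000:                              # discard burn-in
        acc += q * q
        n += 1
print(acc / n)  # <q^2> should be close to kT = 1.0 for this potential
```

BAOAB is a convenient baseline because, like the methods surveyed above, its quality can be judged directly by the convergence of such equilibrium averages.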
Modified network simulation model with token method of bus access
Directory of Open Access Journals (Sweden)
L.V. Stribulevich
2013-08-01
Purpose. To study the characteristics of a local network with token-passing bus access, a modified simulation model of the network was developed. Methodology. The network characteristics are obtained from the developed simulation model, which is based on a state diagram of a network station with a priority-handling mechanism, covering both steady-state operation and the control procedures: initiation of the logical ring, and the entry and exit of a network station to and from the logical ring. Findings. The simulation model yields the dependence of the maximum queue waiting time for different access classes, the reaction time, and the usable bandwidth on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects network operation both in the steady state and during the control procedures, including the priority ranking and handling mechanism. Practical value. The developed simulation model supports determination of network characteristics in real-time systems in railway transport.
SIMULATION OF PULSED BREAKDOWN IN HELIUM BY ADAPTIVE METHODS
Directory of Open Access Journals (Sweden)
S. I. Eliseev
2014-09-01
The paper deals with the processes occurring during electrical breakdown in gases, as well as numerical simulation of these processes using adaptive mesh refinement methods. A discharge between needle electrodes in helium at atmospheric pressure is selected for the test simulation. The physical model of the accompanying breakdown processes is based on a self-consistent system of continuity equations for the fluxes of charged particles (electrons and positive ions) and the Poisson equation for the electric potential. The sharp plasma heterogeneity in the streamer region requires adaptive algorithms for constructing the computational grids used in modeling. The method for adaptive grid construction is described, together with a justification of its effectiveness for simulating strongly unsteady gas breakdown at atmospheric pressure. An upgraded version of the Gerris package is used for the numerical simulation of electrical gas breakdown. This software package, originally focused on the solution of nonlinear problems in fluid dynamics, proves suitable for modeling processes in non-stationary plasma described by continuity equations. The use of adaptive grids makes it possible to obtain an adequate numerical model of breakdown development in a system of needle electrodes. The breakdown dynamics is illustrated by contour plots of electron density and electric field intensity obtained in the course of the solution. The formation mechanism of positive and negative (anode-directed) streamers is demonstrated and analyzed. A correspondence between the adaptive construction of the computational grid and the generated plasma gradients is shown. The obtained results can serve as a basis for full-scale numerical experiments on electric breakdown in gases.
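Adaptive mesh refinement of the kind described can be illustrated with a deliberately simplified 1D sketch (the paper's solver is 2D and built on Gerris; the indicator, tolerance, and test function below are invented for illustration): cells are split wherever a crude gradient indicator exceeds a tolerance, so resolution concentrates at a steep front such as a streamer head.

```python
import math

def refine(cells, f, tol):
    """Split any cell whose endpoint jump |f(b) - f(a)| exceeds tol
    (a crude gradient-based refinement indicator)."""
    out = []
    for a, b in cells:
        if abs(f(b) - f(a)) > tol:
            m = 0.5 * (a + b)
            out += [(a, m), (m, b)]
        else:
            out.append((a, b))
    return out

# A steep front at x = 0.5 stands in for a streamer head.
f = lambda x: math.tanh(50.0 * (x - 0.5))
cells = [(i / 10.0, (i + 1) / 10.0) for i in range(10)]
for _ in range(6):
    cells = refine(cells, f, tol=0.2)
widths = sorted(b - a for a, b in cells)
print(len(cells), widths[0], widths[-1])  # fine cells near the front, coarse elsewhere
```

Each refinement pass only touches cells flagged by the indicator, which is the essential economy that makes adaptive grids viable for strongly localized plasma gradients.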
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena
Directory of Open Access Journals (Sweden)
Erkai Watson
2017-04-01
In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present the results of a systematic numerical study of HVI on solids. To model the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration in which a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the debris cloud caused by impact and a comprehensive parameter study obtained by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. We find that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup in which a sphere strikes a thin plate at hypervelocity. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
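A minimal flavor of DEM-style dynamics — particles interacting via pairwise potentials, integrated with velocity Verlet — can be sketched in 1D. The toy spring potential and all parameters below are illustrative, not the paper's HVI interaction model; the check confirms the near-conservation of energy noted in the abstract.

```python
def forces(pos, k=1.0, r0=1.0):
    """Pairwise spring forces between all particle pairs in 1D
    (a stand-in for DEM interaction potentials)."""
    n = len(pos)
    f = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[j] - pos[i]
            mag = k * (abs(r) - r0)          # pulls the pair toward spacing r0
            fij = mag if r > 0 else -mag
            f[i] += fij
            f[j] -= fij                      # Newton's third law
    return f

def velocity_verlet(pos, vel, dt, steps, m=1.0):
    f = forces(pos)
    for _ in range(steps):
        vel = [v + 0.5 * dt * fi / m for v, fi in zip(vel, f)]
        pos = [x + dt * v for x, v in zip(pos, vel)]
        f = forces(pos)
        vel = [v + 0.5 * dt * fi / m for v, fi in zip(vel, f)]
    return pos, vel

def energy(pos, vel, k=1.0, r0=1.0, m=1.0):
    e = sum(0.5 * m * v * v for v in vel)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            e += 0.5 * k * (abs(pos[j] - pos[i]) - r0) ** 2
    return e

pos, vel = [0.0, 1.3], [0.0, 0.0]
e0 = energy(pos, vel)
pos, vel = velocity_verlet(pos, vel, dt=0.01, steps=5000)
print(abs(energy(pos, vel) - e0))  # symplectic integration keeps the drift tiny
```

The same structure (pairwise force loop plus a symplectic integrator) scales up to the millions of spheres used in fragmentation studies, with neighbor lists replacing the all-pairs loop.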
Electromagnetic metamaterial simulations using a GPU-accelerated FDTD method
Seok, Myung-Su; Lee, Min-Gon; Yoo, SeokJae; Park, Q.-Han
2015-12-01
Metamaterials composed of artificial subwavelength structures exhibit extraordinary properties that cannot be found in nature. Designing artificial structures having exceptional properties plays a pivotal role in current metamaterial research. We present a new numerical simulation scheme for metamaterial research. The scheme is based on a graphic processing unit (GPU)-accelerated finite-difference time-domain (FDTD) method. The FDTD computation can be significantly accelerated when GPUs are used instead of only central processing units (CPUs). We explain how the fast FDTD simulation of large-scale metamaterials can be achieved through communication optimization in a heterogeneous CPU/GPU-based computer cluster. Our method also includes various advanced FDTD techniques: the non-uniform grid technique, the total-field/scattered-field (TFSF) technique, the auxiliary field technique for dispersive materials, the running discrete Fourier transform, and the complex structure setting. We demonstrate the power of our new FDTD simulation scheme by simulating the negative refraction of light in a coaxial waveguide metamaterial.
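The core FDTD update that the abstract builds on can be shown in a minimal 1D vacuum sketch (normalized units; grid size, soft source, and Courant number are arbitrary choices here, and none of the advanced techniques such as TFSF, non-uniform grids, or dispersive media are included):

```python
import math

def fdtd_1d(nx=400, steps=200, src=200):
    """Minimal 1D Yee FDTD in vacuum, normalized units, Courant number 0.5."""
    ez = [0.0] * nx
    hy = [0.0] * nx
    c = 0.5
    for n in range(steps):
        for i in range(nx - 1):            # update H from the curl of E
            hy[i] += c * (ez[i + 1] - ez[i])
        for i in range(1, nx):             # update E from the curl of H
            ez[i] += c * (hy[i] - hy[i - 1])
        ez[src] += math.exp(-((n - 30) ** 2) / 100.0)  # soft Gaussian source
    return ez

ez = fdtd_1d()
peak = max(range(len(ez)), key=lambda i: abs(ez[i]))
print(peak)  # the pulse has split and travelled away from the source cell
```

The two interleaved loops are the leapfrog structure that GPUs accelerate so well: each cell's update reads only its immediate neighbors, so the grid maps naturally onto thousands of parallel threads.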
A New Parallel Method for Binary Black Hole Simulations
Directory of Open Access Journals (Sweden)
Quan Yang
2016-01-01
Simulating binary black hole (BBH) systems is a computationally intensive problem, and it can lead to great scientific discoveries. How to exploit more parallelism to take advantage of the large number of computing resources of modern supercomputers is the key to achieving high performance for BBH simulations. In this paper, we propose a scalable MPM (Mesh-based Parallel Method), which can exploit both inter- and intra-mesh parallelism to improve the performance of BBH simulation. We also leverage GPUs to accelerate the computation. Different kinds of performance tests were conducted on Blue Waters. Compared with the existing method, our MPM improves performance from a 5x speedup (relative to the normalized speed of 32 MPI processes) to an 8x speedup; for the GPU-accelerated version, our MPM improves performance from a 12x speedup to a 28x speedup. Experimental results also show that when only CPU computing resources or limited GPU computing resources are available, our MPM can employ two special scheduling mechanisms to achieve better performance. Furthermore, our scalable GPU-accelerated MPM achieves almost ideal weak scaling up to 2048 GPU computing nodes, which enables our software to handle even larger BBH simulations efficiently.
Calibration of three rainfall simulators with automatic measurement methods
Roldan, Margarita
2010-05-01
M. Roldán (1), I. Martín (2), F. Martín (2), S. de Alba (3), M. Alcázar (3), F.I. Cermeño (3). (1) Grupo de Investigación Ecología y Gestión Forestal Sostenible, ECOGESFOR-Universidad Politécnica de Madrid, E.U.I.T. Forestal, Avda. Ramiro de Maeztu s/n, Ciudad Universitaria, 28040 Madrid (margarita.roldan@upm.es); (2) E.U.I.T. Forestal, Avda. Ramiro de Maeztu s/n, Ciudad Universitaria, 28040 Madrid; (3) Facultad de Ciencias Geológicas, Universidad Complutense de Madrid, Ciudad Universitaria s/n, 28040 Madrid. Rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the need to study the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema and Riezebos, 1983). Rainfall simulators are among the most widely used tools for erosion studies and are used to determine fall velocity and drop size; they allow repeated and multiple measurements. The main reason for using rainfall simulation as a research tool is to reproduce, in a controlled way, the behaviour expected in the natural environment. But on many occasions when simulated rain is compared with natural rain, there is a lack of correspondence between the two, which can cast doubt on the validity of the data because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley, 2008). Rainfall simulations often use high rain rates that do not resemble natural rain events, and such measurements are not comparable. Moreover, the intensity is related to the kinetic energy which
A simulation study of three methods for detecting disease clusters
Directory of Open Access Journals (Sweden)
Samuelsen Sven O
2006-04-01
Abstract Background Cluster detection is an important part of spatial epidemiology because it can help identify environmental factors associated with disease and thus guide investigation of the aetiology of diseases. In this article we study three methods suitable for detecting local spatial clusters: (1) a spatial scan statistic (SaTScan), (2) generalized additive models (GAM), and (3) Bayesian disease mapping (BYM). We conducted a simulation study to compare the methods. Seven geographic clusters with different shapes were initially chosen as high-risk areas. Different scenarios for the magnitude of the relative risk of these areas compared to the normal-risk areas were considered. For each scenario the performance of the methods was assessed in terms of the sensitivity, specificity, and percentage correctly classified for each cluster. Results The performance depends on the relative risk, but all methods are in general suitable for identifying clusters with a relative risk larger than 1.5. However, it is difficult to detect clusters with lower relative risks. The GAM approach had the highest sensitivity but relatively low specificity, leading to an overestimation of the cluster area. Both the BYM and the SaTScan methods work well. Clusters with irregular shapes are more difficult to detect than more circular clusters. Conclusion Based on our simulations we conclude that the methods differ in their ability to detect spatial clusters. Different aspects should be considered for an appropriate choice of method, such as the size and shape of the assumed spatial clusters and the relative importance of sensitivity and specificity. In general, the BYM method seems preferable for local cluster detection with relatively high relative risks, whereas the SaTScan method appears preferable for lower relative risks. The GAM method needs to be tuned (using cross-validation) to give satisfactory results.
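The per-cluster performance measures used in the study can be computed from flagged area sets in a few lines (the area ids and counts below are invented for illustration):

```python
def sens_spec(truth, detected, n_areas):
    """Sensitivity and specificity of a detected cluster against the true
    high-risk areas, with areas treated as the classification units."""
    tp = len(truth & detected)            # truly high-risk and flagged
    fn = len(truth - detected)            # high-risk but missed
    fp = len(detected - truth)            # flagged but normal-risk
    tn = n_areas - tp - fn - fp           # correctly left unflagged
    return tp / (tp + fn), tn / (tn + fp)

truth = {3, 4, 5}                         # hypothetical true cluster
detected = {4, 5, 6, 7}                   # hypothetical method output
se, sp = sens_spec(truth, detected, n_areas=20)
print(se, sp)  # 2/3 sensitivity, 15/17 specificity
```

This example also shows the trade-off the abstract describes: the detected set overshoots the true cluster, trading specificity for sensitivity, exactly the behaviour reported for the GAM approach.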
Transformation-optics simulation method for stimulated Brillouin scattering
Zecca, Roberto; Bowen, Patrick T.; Smith, David R.; Larouche, Stéphane
2016-12-01
We develop an approach to enable the full-wave simulation of stimulated Brillouin scattering and related phenomena in a frequency-domain, finite-element environment. The method uses transformation-optics techniques to implement a time-harmonic coordinate transform that reconciles the different frames of reference used by electromagnetic and mechanical finite-element solvers. We show how this strategy can be successfully applied to bulk and guided systems, comparing the results with the predictions of established theory.
Numerical Simulation Method for Combustion in an Oxyhydrogen Rocket Motor
Taki, Shiro; Fujiwara, Toshitaka; 滝, 史郎; 藤原, 俊隆
1984-01-01
Numerical simulations of unsteady phenomena in the combustion chamber of an oxyhydrogen rocket motor were made in an attempt to develop a computer code for investigating such phenomena as vibrating combustion. Combustion in this system is controlled by diffusion, which acts much more slowly than sound or pressure waves, so the diffusion terms are usually solved with an implicit finite difference method, whose time step size is not limited by a stability criterion. However, the...
From fuel cells to batteries: Synergies, scales and simulation methods
Bessler, Wolfgang G.
2011-01-01
The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motion in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, which is useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. To investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for a realistic simulation. Parallel computing is therefore important for handling such huge computational costs. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially on distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over domains fixed in space, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. To allow flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our
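The slice-grid idea — placing cuts so that each process owns a near-equal share of particles even when the distribution is clustered — can be sketched along one axis as follows. This equal-count heuristic is a simplification of the paper's execution-time-based Newton iteration, and the data and names are illustrative:

```python
import random

def slice_cuts(xs, nproc):
    """Slice-grid decomposition along one axis: place cuts between order
    statistics so each process owns (nearly) the same particle count."""
    s = sorted(xs)
    n = len(s)
    return [0.5 * (s[k * n // nproc - 1] + s[k * n // nproc])
            for k in range(1, nproc)]

rng = random.Random(42)
# Clustered particles: a dense group near 0 and a smaller group near 10.
xs = [rng.gauss(0.0, 1.0) for _ in range(9000)] + \
     [rng.gauss(10.0, 1.0) for _ in range(3000)]
cuts = slice_cuts(xs, 4)
bounds = [float("-inf")] + cuts + [float("inf")]
counts = [sum(b0 <= x < b1 for x in xs) for b0, b1 in zip(bounds, bounds[1:])]
print(counts)  # each of the 4 slices holds ~3000 particles
```

In a real SPH/DEM run the cuts would be re-evaluated periodically as particles migrate, which is exactly the dynamic part of dynamic load balancing.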
Mixture and method for simulating soiling and weathering of surfaces
Energy Technology Data Exchange (ETDEWEB)
Sleiman, Mohamad; Kirchstetter, Thomas; Destaillats, Hugo; Levinson, Ronnen; Berdahl, Paul; Akbari, Hashem
2018-01-02
This disclosure provides systems, methods, and apparatus related to simulated soiling and weathering of materials. In one aspect, a soiling mixture may include an aqueous suspension of various amounts of salt, soot, dust, and humic acid. In another aspect, a method may include weathering a sample of material in a first exposure of the sample to ultraviolet light, water vapor, and elevated temperatures, depositing a soiling mixture on the sample, and weathering the sample in a second exposure of the sample to ultraviolet light, water vapor, and elevated temperatures.
The Simulation-Tabulation Method for Classical Diffusion Monte Carlo
Hwang, Chi-Ok; Given, James A.; Mascagni, Michael
2001-12-01
Many important classes of problems in materials science and biotechnology require the solution of the Laplace or Poisson equation in disordered two-phase domains in which the phase interface is extensive and convoluted. Green's function first-passage (GFFP) methods solve such problems efficiently by generalizing the “walk on spheres” (WOS) method to allow first-passage (FP) domains to be not just spheres but a wide variety of geometrical shapes. (In particular, this solves the difficulty of slow convergence with WOS by allowing FP domains that contain patches of the phase interface.) Previous studies accomplished this by using geometries for which the Green's function was available in quasi-analytic form. Here, we extend these studies by using the simulation-tabulation (ST) method. We simulate and then tabulate surface Green's functions that cannot be obtained analytically. The ST method is applied to the Solc-Stockmayer model with zero potential, to the mean trapping rate of a diffusing particle in a domain of nonoverlapping spherical traps, and to the effective conductivity for perfectly insulating, nonoverlapping spherical inclusions in a matrix of finite conductivity. In all cases, this class of algorithms provides the most efficient methods known to solve these problems to high accuracy.
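The plain walk-on-spheres method that GFFP generalizes can be sketched for the unit disk (the boundary data, tolerance, and walk count below are illustrative choices):

```python
import math, random

def wos_disk(x, y, g, eps=1e-3, walks=4000):
    """Walk on spheres for the Laplace equation on the unit disk:
    jump to a uniform point on the largest circle inside the domain
    until within eps of the boundary, then sample the boundary data g."""
    total = 0.0
    for _ in range(walks):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)   # distance to the unit circle
            if r < eps:
                break
            t = random.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(t)
            py += r * math.sin(t)
        d = math.hypot(px, py)
        total += g(px / d, py / d)         # project onto the boundary
    return total / walks

random.seed(1)
# g(x, y) = x is the trace of the harmonic function u(x, y) = x,
# so the estimate at (0.3, 0.4) should be near 0.3.
est = wos_disk(0.3, 0.4, lambda bx, by: bx)
print(est)
```

The slow convergence near the eps-shell is precisely what GFFP addresses: replacing spheres with first-passage domains whose surface Green's functions are known (or, in the ST method, simulated and tabulated) lets the walk terminate on the interface directly.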
Multigrid Methods for Fully Implicit Oil Reservoir Simulation
Molenaar, J.
1996-01-01
In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations, and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit (simultaneous solution) method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation, multigrid methods have become an accepted technique. The fully implicit method, on the other hand, is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations, usually by Newton's method. The resulting systems of linear equations are then solved either by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method, or we use a linear multigrid method to solve the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
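The Newton iteration at the heart of a fully implicit step can be illustrated on a toy 2x2 system, with the linear solve done directly where a reservoir simulator would use multigrid or a conjugate-gradient-type method (the system below is invented purely for illustration):

```python
def newton2(F, J, x, tol=1e-10, maxit=50):
    """Newton's method for a 2x2 nonlinear system F(x) = 0.
    J returns the Jacobian entries (a, b, c, d) row by row; the linear
    Newton step is solved directly via Cramer's rule."""
    for _ in range(maxit):
        f1, f2 = F(x)
        if abs(f1) + abs(f2) < tol:
            break
        a, b, c, d = J(x)
        det = a * d - b * c
        dx1 = (d * f1 - b * f2) / det
        dx2 = (a * f2 - c * f1) / det
        x = (x[0] - dx1, x[1] - dx2)      # full (undamped) Newton update
    return x

# Toy system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: (v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0)
J = lambda v: (2.0 * v[0], 2.0 * v[1], v[1], v[0])
x = newton2(F, J, (2.0, 0.0))
print(x)
```

In the fully implicit setting the unknown vector holds pressures and saturations for every grid cell, so the "solve the linear step" line is where almost all the runtime goes, and where multigrid competes with direct and Krylov solvers.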
The Riemann walk: A method for simulating complex actions
Gocksch, Andreas
1988-05-01
A new method to simulate systems with complex actions is discussed. It is based on the stochastic evaluation of a certain density of states which explicitly depends on the “imaginary energy” but also has an implicit dependence on the parameters of the real part of the action. Since expectation values are obtained by approximating an integral by a Riemann sum, the method can be considered to be a hybrid between Monte Carlo and Riemann integration. Indeed, for the simple case of a complex coupling the method reduces to what is known as “stratified sampling”. In this letter the method is applied to the SU(3) spin model at finite chemical potential.
Parallel discrete vortex methods for viscous flow simulation
Takeda, Kenji
In this thesis a parallel discrete vortex method is developed in order to investigate the long-time behaviour of bluff body wakes. The method is based on inviscid theory, and its extension to include viscous effects is a far from trivial problem. In this work four grid-free viscous models are directly compared to assess their accuracy and efficiency. The random walk, diffusion velocity, corrected core-spreading and vorticity redistribution methods are compared for simulating unbounded fluid flows, and for flows past an impulsively started cylinder at Reynolds numbers between 550 and 9500. The code uses a common core, so that the only free parameters are those directly related to the viscous models. The vorticity redistribution method encompasses all of the advantages of a purely Lagrangian method and incorporates a dynamic regridding scheme to maintain accurate discretisation of the vorticity field. This is used to simulate long-time flow past an impulsively started cylinder for Reynolds numbers 100, 150 and 1000. The code is fully parallel and achieves good speedup on both commodity and proprietary supercomputer systems. At Reynolds numbers below 150 the breakdown of the primary vortex street has been simulated. Results reveal a merging process, causing relaxation to a parallel shear flow. This itself sheds vortices, creating a secondary wake of increased wavelength. At Reynolds number 1000 the cylinder wake becomes chaotic, forming distinct vortex couples. These couples self-convect and can travel upstream. This has a destabilising effect on the vortex street, inducing merging, formation of tripolar and quadrupolar structures and, ultimately, spontaneous ejection of vortex couples upstream of the initial disturbance.
Numerical method for IR background and clutter simulation
Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio
1997-06-01
The paper describes a fast and accurate algorithm for generating IR background noise and clutter for use in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude obeys a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and also excellent fidelity to reality, as a comparison with images from IR sensors shows. The proposed method has advantages over methods based on filtering white noise in the time or frequency domain, as it requires a limited number of computations, and it is more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and by means of growing rules the process is extended to the whole scene at the required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
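A 1D analogue of the statistical model — Gaussian amplitudes with an exponential correlation function — can be generated exactly by an AR(1) recursion. This is a simplification for illustration only; the paper's method grows a 2D scene from a reticule of points. The parameter names are invented here:

```python
import math, random

def correlated_noise(n, corr_length, seed=0):
    """1D Gaussian sequence with correlation exp(-d / corr_length),
    generated by an AR(1) recursion (exact for exponential covariance)."""
    rho = math.exp(-1.0 / corr_length)      # one-step correlation
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0)]
    s = math.sqrt(1.0 - rho * rho)          # keeps unit variance
    for _ in range(n - 1):
        x.append(rho * x[-1] + s * rng.gauss(0.0, 1.0))
    return x

x = correlated_noise(200000, corr_length=10.0)
# Empirical lag-10 correlation should be close to exp(-1) ~ 0.368.
m = sum(x) / len(x)
num = sum((a - m) * (b - m) for a, b in zip(x, x[10:]))
den = sum((a - m) ** 2 for a in x)
corr = num / den
print(corr)
```

Because the recursion touches each sample once, the cost is linear in scene size, the same property that gives the paper's growing-rule construction its advantage over spectral filtering of white noise.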
A fast mollified impulse method for biomolecular atomistic simulations
Energy Technology Data Exchange (ETDEWEB)
Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)
2017-03-15
Classical integration methods for molecular dynamics are inherently limited by resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear constraint systems. In this work we follow a different approach, based on corotation, for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software without Hessians or constraint solvers. In simulations of multiple realistic examples, such as peptides, proteins, ice equilibrium and ice-ice friction, the new filter is shown to speed up the computation of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
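The impulse-style multiple-time-stepping that mollified methods build on can be sketched for one degree of freedom with a stiff fast force and a weak slow force (r-RESPA structure; the spring constants and step sizes are illustrative, and no mollification filter is applied in this sketch):

```python
def mts_verlet(q, p, f_fast, f_slow, dt, inner, steps):
    """Impulse (r-RESPA style) multiple time stepping: the slow force
    kicks at the outer step dt, while the fast force is integrated
    with velocity Verlet at the smaller step dt / inner."""
    h = dt / inner
    for _ in range(steps):
        p += 0.5 * dt * f_slow(q)          # slow half kick
        for _ in range(inner):             # fast velocity Verlet loop
            p += 0.5 * h * f_fast(q)
            q += h * p
            p += 0.5 * h * f_fast(q)
        p += 0.5 * dt * f_slow(q)          # slow half kick
    return q, p

# Stiff spring (k = 100) plus a weak slow spring (k = 0.1).
f_fast = lambda q: -100.0 * q
f_slow = lambda q: -0.1 * q
q, p = 1.0, 0.0
e0 = 0.5 * p * p + 0.5 * 100.1 * q * q
q, p = mts_verlet(q, p, f_fast, f_slow, dt=0.05, inner=10, steps=2000)
e1 = 0.5 * p * p + 0.5 * 100.1 * q * q
print(abs(e1 - e0) / e0)  # relative energy drift stays small
```

The resonance problem the abstract refers to appears when the outer step approaches half the fast period; mollified methods replace the bare slow kick with a filtered one so that larger outer steps remain stable.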
Multinomial tau-leaping method for stochastic kinetic simulations
Pettigrew, Michel F.; Resat, Haluk
2007-02-01
We introduce the multinomial tau-leaping (MτL) method for general reaction networks with multichannel reactant dependencies. The MτL method is an extension of the binomial tau-leaping method where efficiency is improved in several ways. First, τ-leaping steps are determined simply and efficiently using a priori information and Poisson distribution-based estimates of expectation values for reaction numbers over a tentative τ-leaping step. Second, networks are partitioned into closed groups of reactions and corresponding reactants in which no group reactant set is found in any other group. Third, product formation is factored into upper-bound estimation of the number of times a particular reaction occurs. Together, these features allow larger time steps where the numbers of reactions occurring simultaneously in a multichannel manner are estimated accurately using a multinomial distribution. Furthermore, we develop a simple procedure that places a specific upper bound on the total reaction number to ensure non-negativity of species populations over a single multiple-reaction step. Using two disparate test case problems involving cellular processes—epidermal growth factor receptor signaling and a lactose operon model—we show that the τ-leaping based methods such as the MτL algorithm can significantly reduce the number of simulation steps thus increasing the numerical efficiency over the exact stochastic simulation algorithm by orders of magnitude.
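Plain Poisson tau-leaping — the baseline that binomial and multinomial variants refine — can be sketched as follows. The birth-death network and rates are illustrative, and non-negativity is enforced here by simple clamping rather than the MτL upper bound:

```python
import math, random

def tau_leap(x, rates, stoich, tau, steps, seed=0):
    """Plain Poisson tau-leaping: fire each reaction channel a Poisson
    number of times per leap; clamp populations at zero afterwards."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method; adequate for the small leap rates used here
        limit, k, prod = math.exp(-lam), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= limit:
                return k
            k += 1

    for _ in range(steps):
        fires = [poisson(r(x) * tau) for r in rates]
        for f, v in zip(fires, stoich):
            for i, dv in enumerate(v):
                x[i] += f * dv
        x = [max(0, xi) for xi in x]       # crude non-negativity guard
    return x

# Birth-death process: 0 -> S at rate 10, S -> 0 at rate 0.1 * S;
# the stationary mean population is 10 / 0.1 = 100.
x = tau_leap([0], rates=[lambda s: 10.0, lambda s: 0.1 * s[0]],
             stoich=[[1], [-1]], tau=0.1, steps=5000)
print(x[0])
```

The clamping step is exactly where the MτL method improves on this sketch: by bounding the total reaction number per leap it guarantees non-negative populations instead of patching them up afterwards.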
Olson, Branden; Kleiber, William
2017-04-01
Stochastic precipitation generators (SPGs) produce synthetic precipitation data and are frequently used to generate inputs for physical models throughout many scientific disciplines. Especially for large data sets, statistical parameter estimation is difficult due to the high dimensionality of the likelihood function. We propose techniques to estimate SPG parameters for spatiotemporal precipitation occurrence based on an emerging set of methods called Approximate Bayesian computation (ABC), which bypass the evaluation of a likelihood function. Our statistical model employs a thresholded Gaussian process that reduces to a probit regression at single sites. We identify appropriate ABC penalization metrics for our model parameters to produce simulations whose statistical characteristics closely resemble those of the observations. Spell length metrics are appropriate for single sites, while a variogram-based metric is proposed for spatial simulations. We present numerical case studies at sites in Colorado and Iowa where the estimated statistical model adequately reproduces local and domain statistics.
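The ABC idea of replacing likelihood evaluation with a simulate-and-compare criterion can be shown for the simplest occurrence model, a single-site Bernoulli wet-day process (the uniform prior, frequency metric, and tolerance are illustrative, and far simpler than the paper's thresholded Gaussian process with spell-length and variogram metrics):

```python
import random

def abc_rejection(observed_freq, n_days, prior_draws, tol, seed=0):
    """ABC rejection for the wet-day probability p: keep prior draws
    whose simulated wet-day frequency falls within tol of the observed
    frequency, bypassing the likelihood entirely."""
    rng = random.Random(seed)
    kept = []
    for _ in range(prior_draws):
        p = rng.random()                   # uniform prior on [0, 1]
        wet = sum(rng.random() < p for _ in range(n_days))
        if abs(wet / n_days - observed_freq) < tol:
            kept.append(p)                 # accepted posterior sample
    return kept

post = abc_rejection(observed_freq=0.3, n_days=200, prior_draws=20000, tol=0.02)
print(len(post), sum(post) / len(post))  # posterior mean near 0.3
```

Scaling this up is mostly a matter of richer simulators and metrics: the accept/reject skeleton stays the same, which is what makes ABC attractive when the likelihood of a spatiotemporal occurrence field is intractable.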
Hardware-in-the-loop grid simulator system and method
Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos
2017-05-16
A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner to allow improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, therein creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or pass the zero-sequence current from a primary to a secondary winding.
Simulating condensation on microstructured surfaces using Lattice Boltzmann Method
Alexeev, Alexander; Vasyliv, Yaroslav
2017-11-01
We simulate a single component fluid condensing on 2D structured surfaces with different wettability. To simulate the two phase fluid, we use the athermal Lattice Boltzmann Method (LBM) driven by a pseudopotential force. The pseudopotential force results in a non-ideal equation of state (EOS) which permits liquid-vapor phase change. To account for thermal effects, the athermal LBM is coupled to a finite volume discretization of the temperature evolution equation obtained using a thermal energy rate balance for the specific internal energy. We use the developed model to probe the effect of surface structure and surface wettability on the condensation rate in order to identify microstructure topographies promoting condensation. Financial support is acknowledged from Kimberly-Clark.
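The pseudopotential force that drives liquid-vapor phase change in this family of LBM models is a weighted sum of an effective density ψ over lattice neighbours. A minimal D2Q9 sketch follows; the ψ form, the coupling strength G, and the periodic wrapping are common textbook choices (Shan-Chen style), not necessarily those of this work:

```python
import math

# D2Q9 lattice directions and weights
E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4 / 9] + [1 / 9] * 4 + [1 / 36] * 4

def psi(rho, rho0=1.0):
    # common Shan-Chen effective density (assumed form)
    return rho0 * (1.0 - math.exp(-rho / rho0))

def shan_chen_force(rho, x, y, G=-5.0):
    """Pseudopotential interaction force at node (x, y) on a periodic grid:
    F = -G psi(x) * sum_i w_i psi(x + e_i) e_i, the coupling that yields a
    non-ideal equation of state in pseudopotential LBM."""
    nx, ny = len(rho), len(rho[0])
    fx = fy = 0.0
    for (ex, ey), w in zip(E, W):
        p = psi(rho[(x + ex) % nx][(y + ey) % ny])
        fx += w * p * ex
        fy += w * p * ey
    s = -G * psi(rho[x][y])
    return (s * fx, s * fy)
```

With G < 0 the interaction is attractive: the force points toward denser regions, which is what allows droplets to condense on the structured surface.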
Simulation of secondary fault shear displacements - method and application
Fälth, Billy; Hökmark, Harald; Lund, Björn; Mai, P. Martin; Munier, Raymond
2014-05-01
We present an earthquake simulation method to calculate dynamically and statically induced shear displacements on faults near a large earthquake. Our results are aimed at improved safety assessment of underground waste storage facilities, e.g. a nuclear waste repository. For our simulations, we use the distinct element code 3DEC. We benchmark 3DEC by running an earthquake simulation and then compare the displacement waveforms at a number of surface receivers with the corresponding results obtained from the COMPSYN code package. The benchmark test shows a good agreement in terms of both phase and amplitude. In our application to a potential earthquake near a storage facility, we use a model with a pre-defined earthquake fault plane (primary fault) surrounded by numerous smaller discontinuities (target fractures) representing faults in which shear movements may be induced by the earthquake. The primary fault and the target fractures are embedded in an elastic medium. Initial stresses are applied and the fault rupture mechanism is simulated through a programmed reduction of the primary fault shear strength, which is initiated at a pre-defined hypocenter. The rupture is propagated at a typical rupture propagation speed and arrested when it reaches the fault plane boundaries. The primary fault residual strength properties are uniform over the fault plane. The method allows for calculation of target fracture shear movements induced by static stress redistribution as well as by dynamic effects. We apply the earthquake simulation method in a model of the Forsmark nuclear waste repository site in Sweden with rock mass properties, in situ stresses and fault geometries according to the description of the site established by the Swedish Nuclear Fuel and Waste Management Co (SKB). The target fracture orientations are based on the Discrete Fracture Network model developed for the site. With parameter values set to provide reasonable upper bound estimates of target fracture
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular, it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
Analysis of Simulation Methods for Far-end Crosstalk Cancellation
Directory of Open Access Journals (Sweden)
P. Lafata
2011-04-01
The information capacity of current digital subscriber lines is limited mainly by crosstalk in metallic cables. The influence of near-end crosstalk (NEXT) can be well cancelled by the frequency duplex method, but the elimination of far-end crosstalk (FEXT) is not so easy. Therefore, FEXT is the dominant source of disturbance in current digital subscriber lines (xDSL). One of the most promising solutions for far-end crosstalk cancellation is Vectored Discrete Multi-tone modulation (VDMT). For testing the efficiency of VDMT modulation, it is necessary to implement advanced methods for modeling far-end crosstalk in order to obtain the required predictions of crosstalk behavior in a cable. The current simple FEXT model is not very accurate and does not provide realistic results. That is why a new method for modeling far-end crosstalk was developed and is presented in this paper. This advanced model is based on the capacitive and inductive unbalances between pairs in a cable and also respects the cable’s internal structure. The results of the model are subsequently used for the simulation of VDMT modulation and its impact on FEXT cancellation. These simulations are based on estimations of the transmission speed of VDSL2 lines with VDMT modulation.
The parallel subdomain-levelset deflation method in reservoir simulation
van der Linden, J. H.; Jönsthövel, T. B.; Lukyanov, A. A.; Vuik, C.
2016-01-01
Extreme and isolated eigenvalues are known to be harmful to the convergence of an iterative solver. These eigenvalues can be produced by strong heterogeneity in the underlying physics. We can improve the quality of the spectrum by 'deflating' the harmful eigenvalues. In this work, deflation is applied to linear systems in reservoir simulation. In particular, large, sudden differences in the permeability produce extreme eigenvalues. The number and magnitude of these eigenvalues is linked to the number and magnitude of the permeability jumps. Two deflation methods are discussed. Firstly, we state that harmonic Ritz eigenvector deflation, which computes the deflation vectors from the information produced by the linear solver, is infeasible in modern reservoir simulation due to high costs and lack of parallelism. Secondly, we test a physics-based subdomain-levelset deflation algorithm that constructs the deflation vectors a priori. Numerical experiments show that both methods can improve the performance of the linear solver. We highlight the fact that subdomain-levelset deflation is particularly suitable for a parallel implementation. For cases with well-defined permeability jumps of a factor of 10⁴ or higher, parallel physics-based deflation has potential in commercial applications. In particular, the good scalability of parallel subdomain-levelset deflation combined with the robust parallel preconditioner for the deflated system suggests the use of this method as an alternative for AMG.
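The a priori character of subdomain-levelset deflation is what makes it parallel-friendly: the deflation vectors are simple indicator vectors built from the domain partition and a levelset of the permeability field, with no information needed from the solver. A 1D sketch follows; the threshold-based two-level split and the equal-size subdomains are illustrative assumptions, not the paper's construction:

```python
def subdomain_levelset_vectors(perm, threshold, n_subdomains):
    """Construct piecewise-constant deflation vectors a priori: one
    indicator vector per (subdomain, permeability level) pair. `perm` is a
    per-cell permeability list; cells are split into contiguous equal-size
    subdomains and thresholded into two permeability levels."""
    n = len(perm)
    size = n // n_subdomains
    groups = {}
    for i, k in enumerate(perm):
        sub = min(i // size, n_subdomains - 1)
        level = 0 if k < threshold else 1
        groups.setdefault((sub, level), [0.0] * n)[i] = 1.0
    return list(groups.values())
```

Because the vectors are indicators of disjoint cell sets, they partition unity over the grid, and each processor can build its own columns independently — the property the abstract credits for good parallel scalability.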
High-order finite element methods for cardiac monodomain simulations
Directory of Open Access Journals (Sweden)
Kevin P Vincent
2015-08-01
Computational modeling of tissue-scale cardiac electrophysiology requires numerically converged solutions to avoid spurious artifacts. The steep gradients inherent to cardiac action potential propagation necessitate fine spatial scales and therefore a substantial computational burden. The use of high-order interpolation methods has previously been proposed for these simulations due to their theoretical convergence advantage. In this study, we compare the convergence behavior of linear Lagrange, cubic Hermite, and the newly proposed cubic Hermite-style serendipity interpolation methods for finite element simulations of the cardiac monodomain equation. The high-order methods reach converged solutions with fewer degrees of freedom and longer element edge lengths than traditional linear elements. Additionally, we propose a dimensionless number, the cell Thiele modulus, as a more useful metric for determining solution convergence than element size alone. Finally, we use the cell Thiele modulus to examine convergence criteria for obtaining clinically useful activation patterns for applications such as patient-specific modeling where the total activation time is known a priori.
High-order finite element methods for cardiac monodomain simulations
Vincent, Kevin P.; Gonzales, Matthew J.; Gillette, Andrew K.; Villongco, Christopher T.; Pezzuto, Simone; Omens, Jeffrey H.; Holst, Michael J.; McCulloch, Andrew D.
2015-01-01
Computational modeling of tissue-scale cardiac electrophysiology requires numerically converged solutions to avoid spurious artifacts. The steep gradients inherent to cardiac action potential propagation necessitate fine spatial scales and therefore a substantial computational burden. The use of high-order interpolation methods has previously been proposed for these simulations due to their theoretical convergence advantage. In this study, we compare the convergence behavior of linear Lagrange, cubic Hermite, and the newly proposed cubic Hermite-style serendipity interpolation methods for finite element simulations of the cardiac monodomain equation. The high-order methods reach converged solutions with fewer degrees of freedom and longer element edge lengths than traditional linear elements. Additionally, we propose a dimensionless number, the cell Thiele modulus, as a more useful metric for determining solution convergence than element size alone. Finally, we use the cell Thiele modulus to examine convergence criteria for obtaining clinically useful activation patterns for applications such as patient-specific modeling where the total activation time is known a priori. PMID:26300783
Application of the quadrupole method for simulation of passive thermography
Winfree, William P.; Zalameda, Joseph N.; Gregory, Elizabeth D.
2017-05-01
Passive thermography has been shown to be an effective method for in situ and real time nondestructive evaluation (NDE) to measure damage growth in a composite structure during cyclic loading. The heat generated by a subsurface flaw results in a measurable thermal profile at the surface. This paper models the heat generation as a planar subsurface source and calculates the resultant temperature profile at the surface using a three dimensional quadrupole. The results of the model are compared to finite difference simulations of the same planar sources.
Application of the Quadrupole Method for Simulation of Passive Thermography
Winfree, William P.; Zalameda, Joseph N.; Gregory, Elizabeth D.
2017-01-01
Passive thermography has been shown to be an effective method for in-situ and real time nondestructive evaluation (NDE) to measure damage growth in a composite structure during cyclic loading. The heat generated by a subsurface flaw results in a measurable thermal profile at the surface. This paper models the heat generation as a planar subsurface source and calculates the resultant temperature profile at the surface using a three dimensional quadrupole. The results of the model are compared to finite element simulations of the same planar sources and experimental data acquired during cyclic loading of composite specimens.
The finite cell method for bone simulations: verification and validation.
Ruess, Martin; Tal, David; Trabelsi, Nir; Yosibash, Zohar; Rank, Ernst
2012-03-01
Standard methods for predicting bone's mechanical response from quantitative computer tomography (qCT) scans are mainly based on classical h-version finite element methods (FEMs). Due to their low-order polynomial approximation, the need for segmentation and the simplified approach of assigning a constant material property to each element, h-FE models often compromise the accuracy and efficiency of the solution. Herein, a non-standard method, the finite cell method (FCM), is proposed for predicting the mechanical response of the human femur. The FCM is free of the above limitations associated with h-FEMs and is orders of magnitude more efficient, allowing its use in the setting of computational steering. This non-standard method applies a fictitious domain approach to simplify the modeling of a complex bone geometry obtained directly from a qCT scan and easily takes into consideration the heterogeneous material distribution of the various bone regions of the femur. The fundamental principles and properties of the FCM are briefly described in relation to bone analysis, providing a theoretical basis for the comparison with the p-FEM as a reference analysis and simulation method of high quality. Both p-FEM and FCM results are validated by comparison with an in vitro experiment on a fresh-frozen femur.
Application of the Monte Carlo method to radiation damage simulation calculations
Energy Technology Data Exchange (ETDEWEB)
Aruga, Takeo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2001-01-01
Recent progress in Monte Carlo calculations for radiation damage simulation of structural materials to be used in fast breeder reactors or thermonuclear fusion reactors under energetic neutron or charged particle bombardment is reviewed. Specifically, the usefulness of employing Monte Carlo methods in molecular dynamics calculations to understand changes in mechanical properties such as dimensional change, strength, creep, fatigue, corrosion, and crack growth of materials under irradiation on the basis of atomic collision processes is stressed. The structure and spatial distribution of point defects in iron, gold, or copper as demonstrative examples at several hundred ps after the movement of the primary knock-on atom (PKA) takes place are calculated as a function of PKA energy. The results are compared with those obtained by the method developed by Norgett, Robinson and Torrens and the usefulness is discussed. (S. Ohno)
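The Norgett-Robinson-Torrens (NRT) model used for comparison above reduces to a short formula: below the displacement threshold energy E_d no stable displacement is produced, up to 2E_d/0.8 a single Frenkel pair, and beyond that N_d = 0.8·E_dam/(2E_d), where E_dam is the PKA damage energy. A sketch (the 40 eV default is a typical threshold for iron, assumed here for illustration):

```python
def nrt_displacements(e_damage_eV, e_d_eV=40.0):
    """Norgett-Robinson-Torrens (NRT) estimate of the number of stable
    displacements produced by a PKA with damage energy e_damage_eV.
    e_d_eV is the displacement threshold energy (40 eV is a typical,
    assumed value for iron)."""
    if e_damage_eV < e_d_eV:
        return 0  # not enough energy to displace an atom
    if e_damage_eV < 2.0 * e_d_eV / 0.8:
        return 1  # single Frenkel pair regime
    return int(0.8 * e_damage_eV / (2.0 * e_d_eV))
```

Molecular dynamics cascade simulations of the kind reviewed above typically find fewer surviving defects than this NRT count, because of in-cascade recombination.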
Simulation of FEL pulse length calculation with THz streaking method
Energy Technology Data Exchange (ETDEWEB)
Gorgisyan, I., E-mail: ishkhan.gorgisyan@psi.ch [Paul Scherrer Institut, 5232 Villigen PSI (Switzerland); École Polytechnique Fédérale de Lausanne, Route Cantonale, 1015 Lausanne (Switzerland); Ischebeck, R.; Prat, E.; Reiche, S. [Paul Scherrer Institut, 5232 Villigen PSI (Switzerland); Rivkin, L. [Paul Scherrer Institut, 5232 Villigen PSI (Switzerland); École Polytechnique Fédérale de Lausanne, Route Cantonale, 1015 Lausanne (Switzerland); Juranić, P., E-mail: ishkhan.gorgisyan@psi.ch [Paul Scherrer Institut, 5232 Villigen PSI (Switzerland)
2016-04-02
Simulation of THz streaking of photoelectrons created by X-ray pulses from a free-electron laser and reconstruction of the free-electron laser pulse lengths. Having accurate and comprehensive photon diagnostics for the X-ray pulses delivered by free-electron laser (FEL) facilities is of utmost importance. Along with various parameters of the photon beam (such as photon energy, beam intensity, etc.), the pulse length measurements are particularly useful both for the machine operators to measure the beam parameters and monitor the stability of the machine performance, and for the users carrying out pump–probe experiments at such facilities to better understand their measurement results. One of the most promising pulse length measurement techniques used for photon diagnostics is the THz streak camera, which is capable of simultaneously measuring the lengths of the photon pulses and their arrival times with respect to the pump laser. This work presents simulations of THz streak camera performance. The simulation procedure utilizes FEL pulses with two different photon energies, in the hard and soft X-ray regions, respectively. It recreates the energy spectra of the photoelectrons produced by the photon pulses and streaks them by a single-cycle THz pulse. Following the pulse-retrieval procedure of the THz streak camera, the pulse lengths were calculated from the streaked spectra. To validate the pulse length calculation procedure, the precision and the accuracy of the method were estimated for a streaking configuration corresponding to previously performed experiments. The obtained results show that for the discussed setup the method is capable of measuring FEL pulses with accuracy and precision of about a femtosecond.
A new tree code method for simulation of planetesimal dynamics
Richardson, D. C.
1993-03-01
A new tree code method for simulation of planetesimal dynamics is presented. A self-similarity argument is used to restrict the problem to a small patch of a ring of planetesimals at 1 AU from the sun. The code incorporates a sliding box model with periodic boundary conditions and surrounding ghost particles. The tree is self-repairing and exploits the flattened nature of Keplerian disks to maximize efficiency. The code uses a fourth-order force polynomial integration algorithm with individual particle time-steps. Collisions and mergers, which play an important role in planetesimal evolution, are treated in a comprehensive manner. In typical runs with a few hundred central particles, the tree code is approximately 2-3 times faster than a recent direct summation method and requires about 1 CPU day on a Sparc IPX workstation to simulate 100 yr of evolution. The average relative force error incurred in such runs is less than 0.2 per cent in magnitude. In general, the CPU time as a function of particle number varies in a way consistent with an O(N log N) algorithm. In order to take advantage of facilities available, the code was written in C in a Unix workstation environment. The unique aspects of the code are discussed in detail and the results of a number of performance tests - including a comparison with previous work - are presented.
Spectral element method implementation on GPU for Lamb wave simulation
Kudela, Pawel; Wandowski, Tomasz; Radzienski, Maciej; Ostachowicz, Wieslaw
2017-04-01
Parallel implementation of the time domain spectral element method on GPU (Graphics Processing Unit) is presented. The proposed spectral element method implementation is based on sparse matrix storage of local shape function derivatives calculated at Gauss-Lobatto-Legendre points. The algorithm utilizes two basic operations: multiplication of a sparse matrix by a vector and element-by-element vector multiplication. Parallel processing is performed on the degree of freedom level. The assembly of the resultant force is done with the aid of a mesh coloring algorithm. The implementation enables considerable computation speedup as well as simulation of complex structural health monitoring systems based on anomalies of propagating Lamb waves. Hence, models of various complexity can be tested and compared on modern computers in order to be as close to reality as possible. A comparative example is described: a composite laminate modeled by homogenizing the material properties into one layer of 3D brick spectral elements versus a model in which each ply is simulated by a separate layer of 3D brick spectral elements. The consequences of applying each technique are explained. Further analysis is performed for a composite laminate with delamination. In each case, the piezoelectric transducer as well as the glue layer between the actuator and the host structure is modeled.
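The two basic operations the abstract names — sparse matrix-vector multiplication and element-by-element vector multiplication — are worth seeing in isolation, since on a GPU each maps to one parallel kernel. A plain CSR sketch (serial reference semantics only; the GPU version would parallelize the loops over rows and entries):

```python
def csr_matvec(data, indices, indptr, x):
    """Sparse matrix (CSR format) times vector: for each row, accumulate
    data[idx] * x[indices[idx]] over that row's stored entries."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        s = 0.0
        for idx in range(indptr[row], indptr[row + 1]):
            s += data[idx] * x[indices[idx]]
        y[row] = s
    return y

def elementwise(a, b):
    """Element-by-element vector product, the second basic kernel."""
    return [ai * bi for ai, bi in zip(a, b)]
```

In the spectral element setting, `data`/`indices`/`indptr` would hold the shape-function derivative matrices at the Gauss-Lobatto-Legendre points; mesh coloring then lets force assembly run without write conflicts between elements.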
Fast integral methods for integrated optical systems simulations: a review
Kleemann, Bernd H.
2015-09-01
Boundary integral equation methods (BIM) or simply integral methods (IM) in the context of optical design and simulation are rigorous electromagnetic methods solving Helmholtz or Maxwell equations on the boundary (surface or interface of the structures between two materials) for scattering or/and diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases all of the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods belong to the first electromagnetic methods investigated for grating diffraction. The development started in the mid-1960s for gratings with infinite conductivity and it was mainly due to the good convergence of the integral methods, especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods by D. Maystre at Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods such as differential and modal methods suffered from unstable behaviour and slow convergence compared to BIMs for metallic gratings in TM polarization until the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at Karl-Weierstrass Institute in Berlin under a contract with Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author. Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: whether profiles with edges, overhanging non
Bluff Body Flow Simulation Using a Vortex Element Method
Energy Technology Data Exchange (ETDEWEB)
Anthony Leonard; Phillippe Chatelain; Michael Rebel
2004-09-30
Heavy ground vehicles, especially those involved in long-haul freight transportation, consume a significant part of our nation's energy supply. It is therefore of utmost importance to improve their efficiency, both to reduce emissions and to decrease reliance on imported oil. At highway speeds, more than half of the power consumed by a typical semi truck goes into overcoming aerodynamic drag, a fraction which increases with speed and crosswind. Thanks to better tools and increased awareness, recent years have seen substantial aerodynamic improvements by the truck industry, such as tractor/trailer height matching, radiator area reduction, and swept fairings. However, there remains substantial room for improvement as understanding of turbulent fluid dynamics grows. The group's research effort focused on vortex particle methods, a novel approach for computational fluid dynamics (CFD). Where common CFD methods solve or model the Navier-Stokes equations on a grid which stretches from the truck surface outward, vortex particle methods solve the vorticity equation on a Lagrangian basis of smooth particles and do not require a grid. They worked to advance the state of the art in vortex particle methods, improving their ability to handle the complicated, high Reynolds number flow around heavy vehicles. Specific challenges that they have addressed include finding strategies to accurately capture vorticity generation and resultant forces at the truck wall, handling the aerodynamics of spinning bodies such as tires, application of the method to the GTS model, computation time reduction through improved integration methods, a closest-point transform for particle methods in complex geometries, and work on large eddy simulation (LES) turbulence modeling.
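At the heart of any vortex particle method is the Biot-Savart evaluation: the velocity induced at a point by the set of vortex particles. A 2D regularized sketch follows; the smoothing radius `delta` and the particle tuples are illustrative assumptions, and production codes accelerate this N-body sum with fast summation methods rather than the direct loop shown:

```python
import math

def biot_savart_velocity(px, py, vortices, delta=0.05):
    """Velocity (u, v) induced at point (px, py) by 2D regularized point
    vortices given as (x_i, y_i, gamma_i) tuples; delta is an assumed
    smoothing core radius that removes the point-vortex singularity."""
    u = v = 0.0
    for (x, y, g) in vortices:
        dx, dy = px - x, py - y
        r2 = dx * dx + dy * dy + delta * delta
        u += -g * dy / (2.0 * math.pi * r2)
        v += g * dx / (2.0 * math.pi * r2)
    return u, v
```

A single vortex of circulation 2π at the origin induces a nearly unit counter-clockwise speed at distance 1, which gives a quick sanity check of the kernel.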
High accuracy mantle convection simulation through modern numerical methods
Kronbichler, Martin
2012-08-21
Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.
Simulating Space Capsule Water Landing with Explicit Finite Element Method
Wang, John T.; Lyle, Karen H.
2007-01-01
A study of using an explicit nonlinear dynamic finite element code for simulating the water landing of a space capsule was performed. The finite element model contains Lagrangian shell elements for the space capsule and Eulerian solid elements for the water and air. An Arbitrary Lagrangian Eulerian (ALE) solver and a penalty coupling method were used for predicting the fluid and structure interaction forces. The space capsule was first assumed to be rigid, so the numerical results could be correlated with closed form solutions. The water and air meshes were continuously refined until the solution was converged. The converged maximum deceleration predicted is bounded by the classical von Karman and Wagner solutions and is considered to be an adequate solution. The refined water and air meshes were then used in the models for simulating the water landing of a capsule model that has a flexible bottom. For small pitch angle cases, the maximum deceleration from the flexible capsule model was found to be significantly greater than the maximum deceleration obtained from the corresponding rigid model. For large pitch angle cases, the difference between the maximum deceleration of the flexible model and that of its corresponding rigid model is smaller. Test data of Apollo space capsules with a flexible heat shield qualitatively support the findings presented in this paper.
Smoothed particle hydrodynamics method from a large eddy simulation perspective
Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.
2017-03-01
The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
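The SPH smoothing that the paper reinterprets as a Lagrangian LES filter is, in its simplest form, a kernel-weighted sum over particles. A 1D density-summation sketch with the standard cubic spline kernel (normalization 2/(3h) in 1D); particle spacing and smoothing length below are illustrative:

```python
import math

def cubic_spline_w(r, h):
    """Standard 1D cubic spline SPH kernel with compact support 2h."""
    q = r / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization so the kernel integrates to 1
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q * (1.0 - 0.5 * q))
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(xs, masses, h):
    """Smoothed density at each particle: rho_i = sum_j m_j W(|x_i - x_j|, h).
    This kernel sum is the smoothing operation that can be read as a
    Lagrangian filter moving with the particles."""
    return [sum(m * cubic_spline_w(abs(xi - xj), h)
                for xj, m in zip(xs, masses))
            for xi in xs]
```

For uniformly spaced particles carrying mass ρ·dx, the summed density in the interior recovers ρ to within a few percent, which is the usual consistency check.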
An experiment teaching method based on the Optisystem simulation platform
Zhu, Jihua; Xiao, Xuanlu; Luo, Yuan
2017-08-01
Experimental teaching of optical communication systems is difficult to realize because of the expensive equipment involved. OptiSystem is optical communication system design software that can provide such a simulation platform. Based on the characteristics of OptiSystem, an approach to experiment teaching is put forward in this paper. It includes three gradual levels: the basics, the deeper looks and the practices. First, the basics introduce a brief overview of the technology; then the deeper looks include demos and example analyses; lastly, the practices are carried out through team seminars and comments. A variety of teaching forms are implemented in class. Practice shows that this method can not only compensate for the limitations of the laboratory but also motivate the students' learning interest and improve their practical abilities, cooperation abilities and creative spirit. On the whole, it greatly raises the teaching effect.
Discrete vortex method simulations of aerodynamic admittance in bridge aerodynamics
DEFF Research Database (Denmark)
Rasmussen, Johannes Tophøj; Hejlesen, Mads Mølholm; Larsen, Allan
, and to determine aerodynamic forces and the corresponding flutter limit. A simulation of the three-dimensional bridge response to turbulent wind is carried out by quasi-steady theory by modelling the bridge girder as a line-like structure [2], applying the aerodynamic load coefficients found from the current version … of DVMFLOW in a strip-wise fashion. Neglecting the aerodynamic admittance, i.e. the correlation of the instantaneous lift force to the turbulent fluctuations in the vertical velocities, leads to higher response to high-frequency atmospheric turbulence than would be obtained from wind tunnel tests … velocity spectra are found in good agreement with the target spectra. The aerodynamic admittance of the structure is measured by sampling vertical velocities immediately upstream of the structure and the lift forces on the structure. The method is validated against the analytic solution for the admittance …
Applying Simulation Method in Formulation of Gluten-Free Cookies
Directory of Open Access Journals (Sweden)
Nikitina Marina
2017-01-01
At present, a priority direction in the development of new food products is the development of products for special purposes. Such products include gluten-free confectionery intended for people with celiac disease. Gluten-free products are in demand among consumers, and there is a need to expand the assortment and improve quality indicators. This article presents the results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a method of simulating gluten-free confectionery recipes with a functional orientation in order to optimize their chemical composition. The resulting products will make it possible to diversify and supplement with necessary nutrients the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet.
A fast Chebyshev method for simulating flexible-wing propulsion
Moore, M. Nicholas J.
2017-09-01
We develop a highly efficient numerical method to simulate small-amplitude flapping propulsion by a flexible wing in a nearly inviscid fluid. We allow the wing's elastic modulus and mass density to vary arbitrarily, with an eye towards optimizing these distributions for propulsive performance. The method to determine the wing kinematics is based on Chebyshev collocation of the 1D beam equation as coupled to the surrounding 2D fluid flow. Through small-amplitude analysis of the Euler equations (with trailing-edge vortex shedding), the complete hydrodynamics can be represented by a nonlocal operator that acts on the 1D wing kinematics. A class of semi-analytical solutions permits fast evaluation of this operator with O(N log N) operations, where N is the number of collocation points on the wing. This is in contrast to the minimum O(N^2) cost of a direct 2D fluid solver. The coupled wing-fluid problem is thus recast as a PDE with a nonlocal operator, which we solve using a preconditioned iterative method. These techniques yield a solver of near-optimal complexity, O(N log N), allowing one to rapidly search the infinite-dimensional parameter space of all possible material distributions and even perform optimization over this space.
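The Chebyshev collocation machinery at the heart of such a solver is compact enough to sketch. The following is a minimal, hypothetical Python illustration (not the authors' code) of building the collocation points and differentiation matrix, and checking spectral accuracy by differentiating a smooth test function:

```python
import math

def cheb(N):
    """Chebyshev collocation points x_j = cos(j*pi/N) on [-1, 1] and the
    standard (N+1)x(N+1) differentiation matrix, with the diagonal set by
    the negative-sum trick (each row of D must annihilate constants)."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return x, D

# Differentiate sin(x); spectral convergence gives near machine accuracy
x, D = cheb(16)
f = [math.sin(xi) for xi in x]
df = [sum(D[i][j] * f[j] for j in range(len(x))) for i in range(len(x))]
err = max(abs(df[i] - math.cos(x[i])) for i in range(len(x)))
```

With only 17 nodes the derivative of a smooth function is already accurate to roughly machine precision, which is the property that makes collocation of the beam equation so efficient.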
Steam generator tube rupture simulation using extended finite element method
Energy Technology Data Exchange (ETDEWEB)
Mohanty, Subhasish, E-mail: smohanty@anl.gov; Majumdar, Saurin; Natesan, Ken
2016-08-15
Highlights: • Extended finite element method used for modeling the steam generator tube rupture. • Crack propagation is modeled in an arbitrary solution-dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident conditions. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of the extended finite element method capability of the commercially available ABAQUS software to model SG tubes with preexisting flaws and to estimate their rupture pressures. For this purpose, elastic–plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between the extended finite element model results and the experimental results.
Geostatistics in Reservoir Characterization: from estimation to simulation methods
Directory of Open Access Journals (Sweden)
Mata Lima, H.
2005-12-01
Full Text Available This article reviews the different geostatistical methods available to estimate and simulate petrophysical properties (porosity and permeability) of a reservoir. Different geostatistical techniques that allow the combination of hard and soft data are taken into account, and the main reason to use geostatistical simulation rather than estimation is discussed. Uncertainty in reservoir characterization due to the variogram assumption is also treated: the variogram is a strict mathematical equation and can lead to serious simplification in the description of the natural processes or phenomena under consideration. Multiple-point geostatistics methods based on the concept of training images, suggested by Strebelle (2000) and Caers (2003) owing to the limitation of the variogram in capturing complex heterogeneity, are another subject presented. This article intends to provide a review of geostatistical methods to serve the interest of students and researchers.
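As a concrete illustration of the variogram that the review builds on, an empirical semivariogram can be computed directly from scattered data by binning point pairs by separation distance. The sketch below is hypothetical pure-Python code, not taken from the article:

```python
def empirical_semivariogram(xs, zs, lags, tol):
    """Empirical semivariogram gamma(h): the average of 0.5*(z_i - z_j)^2
    over all point pairs whose separation lies within tol of each lag h."""
    gamma = []
    for h in lags:
        total, count = 0.0, 0
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                if abs(abs(xs[i] - xs[j]) - h) <= tol:
                    total += 0.5 * (zs[i] - zs[j]) ** 2
                    count += 1
        gamma.append(total / count if count else float("nan"))
    return gamma

# 1D sanity check: a pure linear trend z = x gives gamma(h) = h^2 / 2
xs = [float(i) for i in range(25)]
zs = [x for x in xs]
gamma = empirical_semivariogram(xs, zs, lags=[1.0, 2.0, 3.0], tol=0.1)
```

Fitting a parametric model (spherical, exponential, ...) to such an empirical curve is exactly the "strict mathematical equation" step whose simplifying assumptions the article cautions about.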
Quantum Simulations of Solvated Biomolecules Using Hybrid Methods
Hodak, Miroslav
2009-03-01
One of the most important challenges in quantum simulations on biomolecules is efficient and accurate inclusion of the solvent, because the solvent atoms usually outnumber those in the biomolecule of interest. We have developed a hybrid method that allows for explicit quantum-mechanical treatment of the solvent at low computational cost. In this method, Kohn-Sham (KS) density functional theory (DFT) is combined with an orbital-free (OF) DFT. Kohn-Sham (KS) DFT is used to describe the biomolecule and its first solvation shells, while the orbital-free (OF) DFT is employed for the rest of the solvent. The OF part is fully O(N) and capable of handling 10^5 solvent molecules on current parallel supercomputers, while taking only ˜ 10 % of the total time. The compatibility between the KS and OF DFT methods enables seamless integration between the two. In particular, the flow of solvent molecules across the KS/OF interface is allowed and the total energy is conserved. As the first large-scale applications, the hybrid method has been used to investigate the binding of copper ions to proteins involved in prion (PrP) and Parkinson's diseases. Our results for the PrP, which causes mad cow disease when misfolded, resolve a contradiction found in experiments, in which a stronger binding mode is replaced by a weaker one when concentration of copper ions is increased, and show how it can act as a copper buffer. Furthermore, incorporation of copper stabilizes the structure of the full-length PrP, suggesting its protective role in prion diseases. For alpha-synuclein, a Parkinson's disease (PD) protein, we show that Cu binding modifies the protein structurally, making it more susceptible to misfolding -- an initial step in the onset of PD. In collaboration with W. Lu, F. Rose and J. Bernholc.
Nursing Student Anxiety in Simulation Settings: A Mixed Methods Study
Cato, Mary Louise
2013-01-01
The use of simulation as a clinical learning activity is growing in nursing programs across the country. Using simulation, educators can provide students with a realistic patient situation using mannequins or actors as patients in a simulated environment. Students can practice multiple aspects of patient care without the risk of making mistakes…
Numerical simulation for cracks detection using the finite elements method
Directory of Open Access Journals (Sweden)
S Bennoud
2016-09-01
Full Text Available The means of detection must ensure controls either during initial construction or at the time of exploitation of all parts. Non-destructive testing (NDT) gathers the most widespread methods for detecting defects in a part or reviewing the integrity of a structure. In areas of advanced industry (aeronautics, aerospace, nuclear …), assessing the damage of materials is a key point in controlling the durability and reliability of parts and materials in service. In this context, it is necessary to quantify the damage and identify the different mechanisms responsible for its progress. It is therefore essential to characterize materials and identify the most sensitive indicators attached to damage, to prevent their destruction and to use them optimally. In this work, a simulation by the finite element method is realized with the aim of calculating the electromagnetic energy of interaction between probe and piece (with/without defect). From the calculated energy, we deduce the real and imaginary components of the impedance, which enables determination of the characteristic parameters of a crack in various metallic parts.
Simulating biofilm deformation and detachment with the immersed boundary method
Sudarsan, Rangarajan; Stockie, John M; Eberl, Hermann J
2015-01-01
We apply the immersed boundary (or IB) method to simulate deformation and detachment of a periodic array of wall-bounded biofilm colonies in response to a linear shear flow. The biofilm material is represented as a network of Hookean springs that are placed along the edges of a triangulation of the biofilm region. The interfacial shear stress, lift and drag forces acting on the biofilm colony are computed by using the fluid stress jump method developed by Williams, Fauci and Gaver [Disc. Contin. Dyn. Sys. B 11(2):519-540, 2009], with a modified version of their exclusion filter. Our detachment criterion is based on the novel concept of an averaged equivalent continuum stress tensor defined at each IB point in the biofilm, which is then used to determine a corresponding von Mises yield stress; wherever this yield stress exceeds a given critical threshold, the connections to that node are severed, thereby signalling the onset of a detachment event. In order to capture the deformation and detachment behaviour of a bio...
Tsunami Simulation using CIP Method with Characteristic Curve Equations and TVD-MacCormack Method
Fukazawa, Souki; Tosaka, Hiroyuki
2015-04-01
Since the beginning of the 21st century we have already experienced two major tsunami disasters associated with Mw 9 earthquakes, in Sumatra and Japan. To mitigate tsunami damage, numerical simulation technology combined with information technologies can provide reliable predictions for planning countermeasures that protect the social system, making safety maps, and issuing early evacuation information to residents. Shallow water equations are still solved in many tsunami simulators, not only for global-scale simulation of ocean tsunami propagation but also for local-scale simulation of overland inundation, although three-dimensional models are starting to be used thanks to improvements in CPU performance. The one-dimensional shallow water equations are $\partial\mathbf{Q}/\partial t + \partial\mathbf{E}/\partial x = \mathbf{S}$, in which $\mathbf{Q} = (D,\; M)^{T}$, $\mathbf{E} = (M,\; M^{2}/D + gD^{2}/2)^{T}$, $\mathbf{S} = (0,\; -gD\,\partial z/\partial x - gn^{2}M|M|/D^{7/3})^{T}$, where $D$ [m] is total water depth; $M$ [m$^2$/s] is water flux; $z$ [m] is topography; $g$ [m/s$^2$] is the gravitational acceleration; $n$ [s/m$^{1/3}$] is Manning's roughness coefficient. To solve these, the staggered leapfrog scheme is used in many wide-area tsunami simulators. But this scheme has the problem that a lagging phase error occurs when the Courant number is small; in some practical simulations, a kind of diffusion term is added. In this study, we developed two wide-area tsunami simulators with different schemes and compared the usual scheme and the other schemes in practicality and validity. One is a total variation diminishing modification of the MacCormack method (TVD-MacCormack method), which is well known for the simulation of compressible fluids. The other is the Cubic Interpolated Profile (CIP) method with characteristic curve equations transformed from the shallow water equations. The characteristic curve equations derived from the shallow water equations are $\partial R_{x}^{\pm}/\partial t + C_{x}^{\pm}\,\partial R_{x}^{\pm}/\partial x = \mp (g/2)\,\partial z/\partial x$, in which $R_{x}^{\pm} = \sqrt{gD} \pm u/2$ and $C_{x}^{\pm} = u \pm \sqrt{gD}$, where $u$
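For orientation, the conservative form of the shallow water equations in this abstract can be advanced with any explicit scheme. The sketch below is a hypothetical Python illustration using a simple Lax-Friedrichs step (not the leapfrog, CIP, or TVD-MacCormack schemes the paper compares) for the flat-bottom, frictionless case; being conservative on a periodic grid, it preserves total mass exactly:

```python
import math

def lax_friedrichs_step(D, M, dx, dt, g=9.81):
    """One Lax-Friedrichs step of the 1D shallow-water equations with a
    flat bottom (z = 0) and no friction, periodic boundaries.
    Q = (D, M), E = (M, M^2/D + g*D^2/2), S = 0."""
    n = len(D)
    E1 = [M[i] for i in range(n)]
    E2 = [M[i] ** 2 / D[i] + 0.5 * g * D[i] ** 2 for i in range(n)]
    Dn, Mn = [0.0] * n, [0.0] * n
    for i in range(n):
        l, r = (i - 1) % n, (i + 1) % n
        Dn[i] = 0.5 * (D[l] + D[r]) - dt / (2 * dx) * (E1[r] - E1[l])
        Mn[i] = 0.5 * (M[l] + M[r]) - dt / (2 * dx) * (E2[r] - E2[l])
    return Dn, Mn

# Small Gaussian hump of water, initially at rest (arbitrary parameters)
n, dx = 100, 100.0
D = [10.0 + math.exp(-(((i - n / 2) * dx) / 500.0) ** 2) for i in range(n)]
M = [0.0] * n
mass0 = sum(D) * dx
for _ in range(50):                 # CFL: dt * sqrt(g*D)/dx ~ 0.2
    D, M = lax_friedrichs_step(D, M, dx, dt=2.0)
mass = sum(D) * dx
```

The hump splits into left- and right-running waves at speed roughly sqrt(gD), while total water volume is conserved to roundoff; the dissipative character of Lax-Friedrichs is one of the artifacts the higher-order schemes in the paper are designed to avoid.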
Directory of Open Access Journals (Sweden)
H. Pirali
2012-01-01
Full Text Available In this paper a combined node searching algorithm for simulation of crack discontinuities in meshless methods called combined visibility and surrounding triangles (CVT is proposed. The element free Galerkin (EFG method is employed for stress analysis of cracked bodies. The proposed node searching algorithm is based on the combination of surrounding triangles and visibility methods; the surrounding triangles method is used for support domains of nodes and quadrature points generated at the vicinity of crack faces and the visibility method is used for points located on the crack faces. In comparison with the conventional methods, such as the visibility, the transparency, and the diffraction method, this method is simpler with reasonable efficiency. To show the performance of this method, linear elastic fracture mechanics analyses are performed on number of standard test specimens and stress intensity factors are calculated. It is shown that the results are in good agreement with the exact solution and with those generated by the finite element method (FEM.
Gradient augmented level set method for phase change simulations
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
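The core gradient-augmented idea, transporting the level set function together with its gradient and evaluating a cell-wise Hermite interpolant for subgrid accuracy, can be illustrated in 1D. The sketch below is hypothetical Python, not the authors' implementation: phi and psi = d(phi)/dx are advected semi-Lagrangianly in a constant velocity field, and for a linear level set the interface location is preserved to machine precision:

```python
import math

def hermite(phi0, phi1, d0, d1, h, t):
    """Cubic Hermite interpolant on a cell of width h: value and x-derivative
    at local coordinate t (t = 0 at the left node, t = 1 at the right)."""
    val = ((2 * t**3 - 3 * t**2 + 1) * phi0 + (t**3 - 2 * t**2 + t) * h * d0
           + (-2 * t**3 + 3 * t**2) * phi1 + (t**3 - t**2) * h * d1)
    dval = ((6 * t**2 - 6 * t) * phi0 / h + (3 * t**2 - 4 * t + 1) * d0
            + (-6 * t**2 + 6 * t) * phi1 / h + (3 * t**2 - 2 * t) * d1)
    return val, dval

def gals_advect(phi, psi, u, dt, dx, steps):
    """Gradient-augmented semi-Lagrangian advection: trace each node back to
    its departure point and evaluate the Hermite interpolant built from
    (phi, psi) at the enclosing cell's end points."""
    n = len(phi)
    for _ in range(steps):
        new_phi, new_psi = [0.0] * n, [0.0] * n
        for i in range(n):
            xd = i * dx - u * dt                     # departure point
            k = max(0, min(n - 2, int(math.floor(xd / dx))))
            t = xd / dx - k
            new_phi[i], new_psi[i] = hermite(phi[k], phi[k + 1],
                                             psi[k], psi[k + 1], dx, t)
        phi, psi = new_phi, new_psi
    return phi, psi

n, dx = 51, 0.02
phi = [i * dx - 0.3 for i in range(n)]   # interface (phi = 0) at x = 0.3
psi = [1.0] * n
phi, psi = gals_advect(phi, psi, u=1.0, dt=0.01, dx=dx, steps=20)
# after T = 0.2 at u = 1 the interface should sit exactly at x = 0.5
```

Because the cubic Hermite interpolant reproduces linear data exactly, both phi and its gradient remain exact here; the GALS paper's point is that the same construction retains subgrid interface information on genuinely curved level sets as well.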
A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation.
Breton, S-P; Sumner, J; Sørensen, J N; Hansen, K S; Sarmast, S; Ivanell, S
2017-04-13
Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple wake situations is also supplied. Some typical results for wind turbine and wind farm flows are presented to illustrate best practices for carrying out high-fidelity LES of wind farms under various atmospheric and terrain conditions.This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
Earthquake Source Simulations: A Coupled Numerical Method and Large Scale Simulations
Ely, G. P.; Xin, Q.; Faerman, M.; Day, S.; Minster, B.; Kremenek, G.; Moore, R.
2003-12-01
We investigate a scheme for interfacing Finite-Difference (FD) and Finite-Element (FE) models in order to simulate dynamic earthquake rupture. The more powerful but slower FE method allows for (1) unusual geometries (e.g. dipping and curved faults), (2) nonlinear physics, and (3) finite displacements. These capabilities are computationally expensive and limit the useful size of the problem that can be solved. Large efficiencies are gained by employing FE only where necessary in the near-source region and coupling this with an efficient FD solution for the surrounding medium. Coupling is achieved by setting up an overlapping buffer zone between the domains modeled by the two methods. The buffer zone is handled numerically as a set of mutual offset boundary conditions. This scheme eliminates the effect of the artificial boundaries at the interface and allows energy to propagate in both directions across the boundary. In general it is necessary to interpolate variables between the meshes and time discretizations used for each model, and this can create artifacts that must be controlled. A modular approach has been used in which either of the two component codes can be substituted with another code. We have successfully demonstrated coupling for a simulation between a second-order FD rupture dynamics code and a fourth-order staggered-grid FD code. To be useful, earthquake source models must capture a large range of length and time scales, which is very computationally demanding. This requires that (for current computer technology) codes must utilize parallel processing. Additionally, if large quantities of output data are to be saved, a high-performance data management system is desirable. We show results from a large-scale rupture dynamics simulation designed to test these capabilities. We use second-order FD with dimensions of 400 x 800 x 800 nodes, run for 3000 time steps. Data were saved for the entire volume for three components of velocity at every time
Laser method for simulating the transient radiation effects of semiconductors
Li, Mo; Sun, Peng; Tang, Ge; Wang, Xiaofeng; Wang, Jianwei; Zhang, Jian
2017-05-01
In this paper, we demonstrate the adequacy of laser simulation both by theoretical analysis and by experiments. We first explain the basic theory and physical mechanisms of laser simulation of transient radiation effects in semiconductors. Based on a simplified semiconductor structure, we describe the reflection, optical absorption and transmission of the laser beam. Considering the two cases of single-photon absorption when laser intensity is relatively low and two-photon absorption at higher laser intensity, we derive the laser-simulation equivalent dose-rate model. Then, with two types of BJT transistors, laser simulation experiments and gamma-ray radiation experiments are conducted. We found a good linear relationship between the laser simulation and gamma-ray results, which demonstrates the reliability of laser simulation.
Simulation and Verification of Flow in Test Methods
DEFF Research Database (Denmark)
Thrane, Lars Nyholm; Szabo, Peter; Geiker, Mette Rica
2005-01-01
Simulations and experimental results of L-box and slump flow test of a self-compacting mortar and a self-compacting concrete are compared. The simulations are based on a single fluid approach and assume an ideal Bingham behavior. It is possible to simulate the experimental results of both tests...... for a given set of rheological parameters. However, it is important to include boundary conditions related to the lifting procedure in the two tests....
Efficient extrapolation methods for electro- and magnetoquasistatic field simulations
Directory of Open Access Journals (Sweden)
M. Clemens
2003-01-01
Full Text Available In magneto- and electroquasistatic time-domain simulations with implicit time-stepping schemes, the iterative solvers applied to the large sparse (non-linear) systems of equations are observed to converge faster if more accurate start solutions are available. Different extrapolation techniques for such new time-step solutions are compared in combination with the preconditioned conjugate gradient algorithm. Simple extrapolation schemes based on Taylor series expansion are used, as well as schemes derived especially for multi-stage implicit Runge-Kutta time-stepping methods. With several initial guesses available, a new subspace projection extrapolation technique is proven to produce an optimal initial value vector. Numerical tests show the resulting improvements in terms of computational efficiency for several test problems.
Bayesian statistical methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs in the economic model.
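The kind of probabilistic Markov model the paper builds in WinBUGS can be sketched in plain Python: a cohort moves through health states each cycle, and the probabilistic simulation draws the transition probabilities from prior distributions. The state names and Beta parameters below are hypothetical illustration, not taken from the paper:

```python
import random

def markov_cohort(p_sick, p_dead, cycles, start=(1.0, 0.0, 0.0)):
    """Three-state Markov cohort model (healthy -> sick -> dead);
    returns the state occupancy fractions after the given cycles."""
    healthy, sick, dead = start
    for _ in range(cycles):
        healthy, sick, dead = (
            healthy * (1 - p_sick),
            healthy * p_sick + sick * (1 - p_dead),
            dead + sick * p_dead,
        )
    return healthy, sick, dead

def probabilistic_run(n_draws, cycles, seed=1):
    """Probabilistic simulation: sample transition probabilities from Beta
    priors (hypothetical parameters) and collect the death-state samples,
    from which a distribution of model outcomes can be summarized."""
    rng = random.Random(seed)
    deaths = []
    for _ in range(n_draws):
        p_sick = rng.betavariate(2, 18)    # prior mean 0.10
        p_dead = rng.betavariate(1, 19)    # prior mean 0.05
        deaths.append(markov_cohort(p_sick, p_dead, cycles)[2])
    return deaths

deaths = probabilistic_run(2000, cycles=20)
mean_dead = sum(deaths) / len(deaths)
```

In the Bayesian workflow the paper describes, those Beta priors would instead be posteriors inferred in WinBUGS from the available evidence and fed directly into the economic model.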
Aeroelastic large eddy simulations using vortex methods: unfrozen turbulent and sheared inflow
DEFF Research Database (Denmark)
Branlard, Emmanuel Simon Pierre; Papadakis, G.; Gaunaa, Mac
2015-01-01
Vortex particle methods are applied to the aeroelastic simulation of a wind turbine in sheared and turbulent inflow. The possibility of performing large-eddy simulations of turbulence with the effect of the shear vorticity is demonstrated for the first time in vortex method simulations. Most vorte...
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
A stochastic quasi Newton method for molecular simulations
Chau, Chun Dong
2010-01-01
In this thesis the Langevin equation with a space-dependent alternative mobility matrix has been considered. Simulations of a complex molecular system with many different length and time scales based on the fundamental equations of motion take a very long simulation time before capturing the
MDMS: Molecular dynamics meta-simulator for evaluating exchange type sampling methods
Smith, Daniel B.; Okur, Asim; Brooks, Bernard R.
2012-08-01
Replica exchange methods have become popular tools to explore conformational space for small proteins. For larger biological systems, even with enhanced sampling methods, exploring the free energy landscape remains computationally challenging. This problem has led to the development of many improved replica exchange methods. Unfortunately, testing these methods remains expensive. We propose a molecular dynamics meta-simulator (MDMS) based on transition state theory to emulate a replica exchange simulation, eliminating the need to run explicit dynamics between exchange attempts. MDMS simulations allow for rapid testing of new replica exchange based methods, greatly reducing the amount of time needed for new method development.
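The exchange step that such a meta-simulator accelerates is just the Metropolis criterion between neighbouring replicas. The sketch below is hypothetical Python, loosely in the meta-simulation spirit: instead of running dynamics between exchange attempts, each replica's energy is drawn from a temperature-dependent Gaussian (the model E ~ N(10T, 2) is an arbitrary assumption, not the MDMS transition-state-theory machinery):

```python
import math
import random

def attempt_exchange(beta_i, beta_j, E_i, E_j, rng):
    """Metropolis criterion for swapping configurations between replicas at
    inverse temperatures beta_i, beta_j:
    accept with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

def meta_remd(temps, sweeps, seed=0):
    """Toy meta-simulation of replica exchange: draw energies instead of
    simulating dynamics, then attempt nearest-neighbour swaps; returns the
    per-pair acceptance rates."""
    rng = random.Random(seed)
    betas = [1.0 / t for t in temps]
    accepted = [0] * (len(temps) - 1)
    for _ in range(sweeps):
        E = [rng.gauss(10.0 * t, 2.0) for t in temps]   # assumed energy model
        for k in range(len(temps) - 1):
            if attempt_exchange(betas[k], betas[k + 1], E[k], E[k + 1], rng):
                accepted[k] += 1
    return [a / sweeps for a in accepted]

rates = meta_remd([1.0, 1.2, 1.5, 2.0], sweeps=5000)
```

Measuring how acceptance rates fall off with temperature spacing, without paying for any molecular dynamics, is exactly the kind of cheap method-development experiment a meta-simulator enables.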
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that, with an appropriate combination of two optical simulation techniques, classical ray tracing and the finite-difference time-domain method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
DEFF Research Database (Denmark)
Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J
2012-01-01
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable...... performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used....
Numerical simulations of multicomponent ecological models with adaptive methods.
Owolabi, Kolade M; Patidar, Kailash C
2016-01-08
The study of the dynamic relationships in multi-species models has gained a huge amount of scientific interest over the years and will continue to maintain its dominance in both ecology and mathematical ecology in the years to come, due to its practical relevance and universal existence. Some of its emergent phenomena include spatiotemporal patterns, oscillating solutions, multiple steady states and spatial pattern formation. Many time-dependent partial differential equations are found combining low-order nonlinear terms with higher-order linear terms. In an attempt to obtain reliable results for such problems, it is desirable to use higher-order methods in both space and time. Most computations heretofore are restricted to second order in time due to some difficulties introduced by the combination of stiffness and nonlinearity. Hence, the dynamics of the reaction-diffusion models considered in this paper permit the use of two classic mathematical ideas. As a result, we introduce a higher-order finite difference approximation for the spatial discretization, and advance the resulting system of ODEs with a family of exponential time differencing schemes. We present the stability properties of these methods along with extensive numerical simulations for a number of multi-species models. When the diffusivity is small, many of the models considered in this paper are found to exhibit a form of localized spatiotemporal patterns. Such patterns are correctly captured in the local analysis of the model equations. Extended 2D results that are in agreement with typical Turing patterns, such as stripes and spots, as well as irregular snakelike structures, are presented. We finally show that the designed schemes are dynamically consistent. The dynamic complexities of some ecological models are studied by considering their linear stability analysis. Based on the choices of parameters in transforming the system into a dimensionless form, we were able to obtain a well-balanced system that
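The exponential time differencing idea used for the temporal advancement is easiest to see on a scalar problem u' = c*u + F(u), where the linear (stiff) part is integrated exactly. A minimal first-order (ETD1) sketch in Python, with a logistic test problem chosen purely for illustration:

```python
import math

def etd1(u0, c, F, h, steps):
    """First-order exponential time differencing for u' = c*u + F(u):
        u_{n+1} = u_n * exp(c h) + F(u_n) * (exp(c h) - 1) / c.
    The linear term is treated exactly, so stiffness in c costs nothing;
    only the nonlinear part F is frozen over each step."""
    u = u0
    ech = math.exp(c * h)
    phi1 = (ech - 1.0) / c
    for _ in range(steps):
        u = u * ech + F(u) * phi1
    return u

# Logistic growth u' = u - u^2, split as c = 1, F(u) = -u^2
u = etd1(0.1, 1.0, lambda v: -v * v, h=0.05, steps=100)
exact = 0.1 * math.exp(5.0) / (1.0 + 0.1 * (math.exp(5.0) - 1.0))

# A very stiff pure linear decay is reproduced without any step restriction
decay = etd1(1.0, -500.0, lambda v: 0.0, h=0.1, steps=10)
```

The paper's schemes apply the same construction after spatial discretization, where c becomes the (stiff) discrete diffusion operator and higher-order variants replace the simple phi1 weight.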
Sakamoto, Shinichi; Otsuru, Toru
2014-01-01
This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.
Mitsutake, Ayori; Okamoto, Yuko
2009-04-01
We discuss multidimensional generalizations of the multicanonical algorithm, simulated tempering, and the replica-exchange method. We generalize the original potential-energy function E0 by adding any physical quantity V of interest as a new energy term with a coupling constant lambda. We then perform a multidimensional multicanonical simulation where a random walk in E0 and V spaces is realized. We can alternatively perform a multidimensional simulated-tempering simulation where a random walk in temperature T and parameter lambda is realized. The results of the multidimensional replica-exchange simulations can be used to determine the weight factors for these multidimensional multicanonical and simulated-tempering simulations.
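As a toy version of the random walk in temperature, the sketch below runs one-dimensional simulated tempering for a single harmonic degree of freedom, E(x) = x^2/2, for which the ideal weight factors are known analytically (Z_m is proportional to sqrt(T_m)); with these weights all temperature levels should be visited with similar frequency. This is a hypothetical illustration, not the authors' implementation:

```python
import math
import random

def simulated_tempering(temps, sweeps, seed=3):
    """Simulated tempering for E(x) = x^2/2.  The configuration x performs
    Metropolis moves at the current temperature; a move to a neighbouring
    temperature level is accepted with probability
        min(1, exp(-(beta_new - beta_old) * E(x) + g_new - g_old)),
    where g_m = -ln Z_m = -0.5 * ln T_m (up to an irrelevant constant)."""
    rng = random.Random(seed)
    betas = [1.0 / t for t in temps]
    g = [-0.5 * math.log(t) for t in temps]
    m, x = 0, 0.0
    visits = [0] * len(temps)
    for _ in range(sweeps):
        # Metropolis move in x at the current temperature
        xn = x + rng.uniform(-1.5, 1.5)
        if rng.random() < math.exp(min(0.0, -betas[m] * (xn * xn - x * x) / 2.0)):
            x = xn
        # attempt a move to a neighbouring temperature level
        mn = m + rng.choice((-1, 1))
        if 0 <= mn < len(temps):
            dlog = -(betas[mn] - betas[m]) * x * x / 2.0 + g[mn] - g[m]
            if rng.random() < math.exp(min(0.0, dlog)):
                m = mn
        visits[m] += 1
    return [v / sweeps for v in visits]

fractions = simulated_tempering([1.0, 1.3, 1.7, 2.2], sweeps=20000)
```

For realistic systems the weights g_m are not known in advance; as the abstract notes, one way to obtain them is from a preliminary (multidimensional) replica-exchange run.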
Modelling of dusty plasma properties by computer simulation methods
Energy Technology Data Exchange (ETDEWEB)
Baimbetov, F B [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Ramazanov, T S [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Dzhumagulova, K N [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Kadyrsizov, E R [Institute for High Energy Densities of RAS, Izhorskaya 13/19, Moscow 125412 (Russian Federation); Petrov, O F [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan); Gavrikov, A V [IETP, Al Farabi Kazakh National University, 96a, Tole bi St, Almaty 050012 (Kazakhstan)
2006-04-28
Computer simulation of dusty plasma properties is performed. The radial distribution functions and the diffusion coefficient are calculated on the basis of Langevin dynamics. A comparison with the experimental data is made.
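As a minimal illustration of the Langevin route to a diffusion coefficient (greatly simplified: free Brownian particles with no interparticle forces, and arbitrary parameter values rather than anything taken from the paper), the diffusion coefficient can be recovered from the mean squared displacement:

```python
import math
import random

def simulate_msd(n_particles=2000, n_steps=500, dt=0.01, diff=1.0, seed=1):
    """Overdamped Langevin (Brownian) dynamics in 1D without forces:
    each step is x += sqrt(2*D*dt) * N(0, 1).  Returns the mean squared
    displacement after n_steps; for free diffusion MSD(t) = 2*D*t."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * diff * dt)
    msd = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += sigma * rng.gauss(0.0, 1.0)
        msd += x * x
    return msd / n_particles

# Recover D from MSD = 2*D*t; should be close to the input diff = 1.0.
t_total = 500 * 0.01
d_estimate = simulate_msd() / (2.0 * t_total)
```

The actual study evaluates interparticle (screened Coulomb) forces at every step before adding friction and noise; this sketch keeps only the stochastic part to show how MSD data yield a diffusion coefficient.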
Directory of Open Access Journals (Sweden)
GHAREHPETIAN, G. B.
2009-06-01
Full Text Available The analysis of the risk of partial and total blackouts plays a crucial role in determining safe limits in power system design, operation and upgrade. Due to the huge cost of blackouts, it is very important to improve risk assessment methods. In this paper, Monte Carlo simulation (MCS) was used to analyze the risk and the Gaussian Mixture Method (GMM) has been used to estimate the probability density function (PDF) of the load curtailment, in order to improve the power system risk assessment method. In this improved method, the PDF and a suggested index have been used to analyze the risk of loss of load. The effect of considering the number of generation units of power plants in the risk analysis has been studied too. The improved risk assessment method has been applied to the IEEE 118-bus system and the network of Khorasan Regional Electric Company (KREC), and the PDF of the load curtailment has been determined for both systems. The effect of various network loadings, transmission unavailability, transmission capacity and generation unavailability conditions on blackout risk has also been investigated.
NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.
Energy Technology Data Exchange (ETDEWEB)
LUCCIO, A.; D'IMPERIO, N.; MALITSKY, N.
2005-09-12
Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulation presented here is SIMBAD, which can be run stand-alone or as part of the UAL (Unified Accelerator Libraries) package.
Development of a simulation method for the subsea production system
Directory of Open Access Journals (Sweden)
Jong Hun Woo
2014-07-01
Full Text Available The failure of a subsea production plant could induce fatal hazards and enormous losses to human lives, the environment, and property. Thus, securing integrated design safety requires core source technologies, including subsea system integration with high safety and reliability and a technique for the flow assurance of subsea production plant and subsea pipeline network fluids. The evaluation of subsea flow assurance needs to consider the performance of the subsea production plant, reservoir production characteristics, and the flow characteristics of multiphase fluids. A subsea production plant is installed in the deep sea and is thus exposed to a high-pressure/low-temperature environment. Accordingly, hydrates can form inside a subsea production plant or within a subsea pipeline network, and these hydrates can cause serious damage by blocking the flow of subsea fluids. In this study, a simulation technology that can visualize the system configuration of subsea production processes and simulate stable flow of fluids is introduced. Most existing subsea simulations have analyzed either the dynamic behaviors during the installation of subsea facilities or multiphase flow within pipes; these studies occupy extensive research areas of the subsea field. In contrast to existing studies, this study aims at simulating the configuration of an entire deep-sea production system: a DES-based simulation technology that can logically simulate oil production processes in the deep sea is analyzed, and an implementation example of a simplified case is introduced.
Simulated-tempering replica-exchange method for the multidimensional version.
Mitsutake, Ayori
2009-09-07
In this article, the general formulation of the multidimensional simulated-tempering replica-exchange method is described. In previous works, the one-dimensional replica-exchange simulated-tempering and simulated-tempering replica-exchange methods were developed. For the former method, the weight factor of the one-dimensional simulated tempering is determined by a short replica-exchange simulation and multiple-histogram reweighting techniques. For the latter method, the production run is a replica-exchange simulation with a few replicas, not in the canonical ensembles but in the simulated-tempering ensembles, which cover wide ranges of temperature. Recently, the general formulation of the multidimensional replica-exchange simulated tempering was presented. In this article, the extension of the simulated-tempering replica-exchange method to the multidimensional version is given. As an example application of the algorithm, a two-dimensional replica-exchange simulation and two simulated-tempering replica-exchange simulations have been performed. Here, an alpha-helical peptide system with a model solvent has been used for the applications.
Discrete vortex method simulations of the aerodynamic admittance in bridge aerodynamics
DEFF Research Database (Denmark)
Rasmussen, Johannes Tophøj; Hejlesen, Mads Mølholm; Larsen, Allan
2010-01-01
We present a novel method for the simulation of the aerodynamic admittance in bluff body aerodynamics. The method introduces a model for describing oncoming turbulence in two-dimensional discrete vortex method simulations by seeding the upstream flow with vortex particles. The turbulence...
The Simulation and Analysis of the Closed Die Hot Forging Process by A Computer Simulation Method
Directory of Open Access Journals (Sweden)
Dipakkumar Gohil
2012-06-01
Full Text Available The objective of this research work is to study the variation of various parameters such as stress, strain, temperature, force, etc. during the closed die hot forging process. A computer simulation modeling approach has been adopted to transform the theoretical aspects into a computer algorithm which can be used to simulate and analyze the closed die hot forging process. For the purpose of process study, the entire deformation process has been divided into a finite number of steps appropriately, and the output values have been computed at each deformation step. The results of the simulation have been graphically represented, and suitable corrective measures are also recommended if the simulation results do not agree with the theoretical values. This computer simulation approach would significantly improve the productivity and reduce the energy consumption of the overall process for components manufactured by the closed die forging process, and contribute towards the efforts in reducing global warming.
Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design
Ang, Chee Siang; Zaphiris, Panayiotis
We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc.) could potentially influence the characteristics of the social networks.
Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators
National Research Council Canada - National Science Library
Ling, Hao
2000-01-01
This report summarizes the scientific progress on the research grant "Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators" during the period 1 December...
National Research Council Canada - National Science Library
Ling, Hao
1998-01-01
This report summarizes the scientific progress on the research grant "Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators" during the period 1 December...
Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators
National Research Council Canada - National Science Library
Ling, Hao
1999-01-01
This report summarizes the scientific progress on the research grant "Application of Model-Based Signal Processing Methods to Computational Electromagnetics Simulators" during the period 1 December...
A Ten-Step Design Method for Simulation Games in Logistics Management
Fumarola, M.; Van Staalduinen, J.P.; Verbraeck, A.
2011-01-01
Simulation games have often been found useful as a method of inquiry to gain insight into complex system behavior and as aids for design, engineering simulation and visualization, and education. Designing simulation games is the result of creative thinking and planning, but often not the result of a
Adaptive Multiscale Finite Element Method for Subsurface Flow Simulation
Van Esch, J.M.
2010-01-01
Natural geological formations generally show multiscale structural and functional heterogeneity evolving over many orders of magnitude in space and time. In subsurface hydrological simulations the geological model focuses on the structural hierarchy of physical sub units and the flow model addresses
USE OF ENERGY METHOD TO SIMULATE THE PERFORMANCE ...
African Journals Online (AJOL)
Crop canopy BRDF simulation and analysis using Monte Carlo method
Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.
2006-01-01
The authors design the random process between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of a crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and
The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.
Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin
2016-09-10
A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieving a low-cost, convenient and safe method for recharging implantable biosensors.
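A toy version of such a layered Monte Carlo light-transport calculation might look as follows. To be clear about assumptions: the layer names are standard skin anatomy, but the absorption coefficients and thicknesses are placeholders of our own, scattering and oblique incidence are ignored, and none of this code or its parameters comes from the paper.

```python
import math
import random

# Hypothetical absorption coefficients mu_a (1/mm) and thicknesses (mm);
# placeholder values only, not the optical properties used in the paper.
LAYERS = [("epidermis", 0.2, 0.1), ("dermis", 0.3, 1.0), ("subcutis", 0.1, 3.0)]

def absorbed_fractions(n_photons=100000, seed=7):
    """Send photons straight down through the layer stack; each photon is
    absorbed in a layer with probability 1 - exp(-mu_a * thickness)
    (Beer-Lambert), otherwise it continues to the next layer."""
    rng = random.Random(seed)
    counts = {name: 0 for name, _, _ in LAYERS}
    transmitted = 0
    for _ in range(n_photons):
        alive = True
        for name, mu_a, d in LAYERS:
            if rng.random() < 1.0 - math.exp(-mu_a * d):
                counts[name] += 1
                alive = False
                break
        if alive:
            transmitted += 1
    total = sum(counts.values()) + transmitted
    return counts, transmitted, total

counts, transmitted, total = absorbed_fractions()
```

The per-layer counts converge to the analytic Beer-Lambert fractions, e.g. the dermis absorbs exp(-0.02)*(1 - exp(-0.3)) of the photons in this toy setup; a real tissue model would add scattering and angular sampling.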
Numerical simulation of GEW equation using RBF collocation method
Directory of Open Access Journals (Sweden)
Hamid Panahipour
2012-08-01
Full Text Available The generalized equal width (GEW) equation is solved numerically by a meshless method based on a global collocation with standard types of radial basis functions (RBFs). Test problems including propagation of single solitons, interaction of two and three solitons, development of the Maxwellian initial condition pulses, wave undulation and wave generation are used to indicate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and some other published numerical methods.
Accurate numerical methods for micromagnetics simulations with general geometries
García-Cervera, C J
2003-01-01
In current FFT-based algorithms for micromagnetics simulations, the boundary is typically replaced by a staircase approximation along the grid lines, either eliminating the incomplete cells or replacing them by complete cells. Sometimes the magnetizations at the boundary cells are weighted by the volume of the sample in the corresponding cell. We show that this leads to large errors in the computed exchange and stray fields. One consequence of this is that the predicted switching mechanism depends sensitively on the orientation of the numerical grid. We present a boundary-corrected algorithm to efficiently and accurately handle the incomplete cells at the boundary. We show that this boundary-corrected algorithm greatly improves the accuracy in micromagnetics simulations. We demonstrate by using A. Arrott's example of a hexagonal element that the switching mechanism is predicted independently of the grid orientation.
[Numerical flow simulation : A new method for assessing nasal breathing].
Hildebrandt, T; Osman, J; Goubergrits, L
2016-08-01
The current options for objective assessment of nasal breathing are limited. The maximum they can determine is the total nasal resistance. Possibilities to analyze the endonasal airstream are lacking. In contrast, numerical flow simulation is able to provide detailed information of the flow field within the nasal cavity. Thus, it has the potential to analyze the nasal airstream of an individual patient in a comprehensive manner and only a computed tomography (CT) scan of the paranasal sinuses is required. The clinical application is still limited due to the necessary technical and personnel resources. In particular, a statistically based referential characterization of normal nasal breathing does not yet exist in order to be able to compare and classify the simulation results.
Simulation methods for multiperiodic and aperiodic nanostructured dielectric waveguides
DEFF Research Database (Denmark)
Paulsen, Moritz; Neustock, Lars Thorben; Jahns, Sabrina
2017-01-01
on Rudin–Shapiro, Fibonacci, and Thue–Morse binary sequences. The near-field and far-field properties are computed employing the finite-element method (FEM), the finite-difference time-domain (FDTD) method as well as a rigorous coupled wave algorithm (RCWA). The results show that all three methods...
Fast Multilevel Panel Method for Wind Turbine Rotor Flow Simulations
van Garrel, Arne; Venner, Cornelis H.; Hoeijmakers, Hendrik Willem Marie
2017-01-01
A fast multilevel integral transform method has been developed that enables the rapid analysis of unsteady inviscid flows around wind turbine rotors. A low order panel method is used and the new multi-level multi-integration cluster (MLMIC) method reduces the computational complexity for
A Framework for Simulation Validation & Verification Method Selection
Roungas, V.; Meijer, S.A.; Verbraeck, A.; Ramezani, Arash; Williams, Edward; Bauer, Marek
2017-01-01
Thirty years of research on validation and verification (V&V) has returned a plethora of methods, statistical techniques, and reported case studies. It is that abundance of methods that poses a major challenge. Because of overlap between methods and time and budget constraints, it is impossible
Changing the Paradigm: Simulation, a Method of First Resort
2011-09-01
Geometry optimization of zirconium sulfophenylphosphonate layers by molecular simulation methods
Czech Academy of Sciences Publication Activity Database
Škoda, J.; Pospíšil, M.; Kovář, P.; Melánová, Klára; Svoboda, J.; Beneš, L.; Zima, Vítězslav
2018-01-01
Roč. 24, č. 1 (2018), s. 1-12, č. článku 10. ISSN 1610-2940 R&D Projects: GA ČR(CZ) GA14-13368S; GA ČR(CZ) GA17-10639S Institutional support: RVO:61389013 Keywords : zirconium sulfophenylphosphonate * intercalation * molecular simulation Subject RIV: CA - Inorganic Chemistry OBOR OECD: Inorganic and nuclear chemistry Impact factor: 1.425, year: 2016
Simulation Method of Cumulative Flow without of Axial Stagnation Point
Directory of Open Access Journals (Sweden)
I. V. Minin
2015-01-01
Full Text Available The paper describes a developed analytical model of the non-stationary formation of a cumulative jet without an axial stagnation point. It shows that it is possible to control the weight, size, speed, and momentum of the jet with parameters that are not achievable in the classical mode of jet formation. The considered jet-formation principle can be used for laboratory simulation of astro-like plasma jets.
Bulliman, B T; Kuchel, P W
1990-01-01
Comparisons are made between some traditional numerical integrators and integration using "Adomian" power series solutions to the ordinary differential equations. These are initial investigations to determine the viability of their application to the simulation of large complex metabolic pathways. A small set of test equations was employed to represent the types of problems encountered in biochemical applications. It was found that the "Adomian" method is as accurate as the numerical methods and, for 'nonstiff' equations or for small simulation times, the "Adomian" method is often more efficient. The results suggest that it may be worthwhile refining this method for biochemical simulations for situations where the traditional numerical methods fail.
Exact hybrid particle/population simulation of rule-based models of biochemical systems.
Directory of Open Access Journals (Sweden)
Justin S Hogg
2014-04-01
Full Text Available Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run time costs increase with the number of particles, limiting the size of the system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that
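For reference, the network-based Gillespie algorithm mentioned above (the "direct method") fits in a few lines. This is a generic textbook sketch with an invented reversible-binding example; it is not BioNetGen or NFsim code, and the rate constants are arbitrary.

```python
import math
import random

def gillespie(x, propensities, stoich, t_end, seed=0):
    """Gillespie's direct method.  x: list of species counts; propensities:
    functions mapping the state to reaction rates; stoich: per-reaction
    state-change vectors."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:
            return x  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        if t > t_end:
            return x
        u, r = rng.random() * a0, 0  # pick a reaction proportional to its rate
        while r < len(a) - 1 and u > a[r]:
            u -= a[r]
            r += 1
        x = [xi + si for xi, si in zip(x, stoich[r])]

# Reversible binding A + B <-> C with mass-action propensities.
k_on, k_off = 0.001, 0.1
props = [lambda x: k_on * x[0] * x[1], lambda x: k_off * x[2]]
stoich = [(-1, -1, +1), (+1, +1, -1)]
final = gillespie([100, 80, 0], props, stoich, t_end=50.0)
```

The cost of evaluating the propensity list grows with the number of reactions, which is exactly the scaling that network-free and hybrid methods are designed to avoid when rules imply huge networks.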
Development and Analysis of Train Brake Curve Calculation Methods with Complex Simulation
Directory of Open Access Journals (Sweden)
Geza Tarnai
2006-01-01
Full Text Available This paper describes an efficient method using simulation for developing and analyzing train brake curve calculation methods for the on-board computer of the ETCS system. An application example with actual measurements is also presented.
2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method
Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)
2000-01-01
The objectives summarized in this viewgraph presentation include: (1) the development of a quantum mechanical simulator for ultra-short channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploring physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) studying other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.
Digital system verification a combined formal methods and simulation framework
Li, Lun
2010-01-01
Integrated circuit capacity follows Moore's law, and chips are commonly produced at the time of this writing with over 70 million gates per device. Ensuring correct functional behavior of such large designs before fabrication poses an extremely challenging problem. Formal verification validates the correctness of the implementation of a design with respect to its specification through mathematical proof techniques. Formal techniques have been emerging as commercialized EDA tools in the past decade. Simulation remains a predominantly used tool to validate a design in industry. After more than 5
Modelling and simulation of diffusive processes methods and applications
Basu, SK
2014-01-01
This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport
Discrete vortex method simulations of aerodynamic admittance in bridge aerodynamics
DEFF Research Database (Denmark)
Rasmussen, Johannes Tophøj; Hejlesen, Mads Mølholm; Larsen, Allan
… and to determine aerodynamic forces and the corresponding flutter limit. A simulation of the three-dimensional bridge response to turbulent wind is carried out by quasi steady theory by modelling the bridge girder as a line-like structure [2], applying the aerodynamic load coefficients found from the current version … of DVMFLOW in a strip-wise fashion. Neglecting the aerodynamic admittance, i.e. the correlation of the instantaneous lift force to the turbulent fluctuations in the vertical velocities, leads to higher response to high frequency atmospheric turbulence than would be obtained from wind tunnel tests.
Simulation Methods in the Contact with Impact of Rigid Bodies
Directory of Open Access Journals (Sweden)
Cristina Basarabă-Opritescu
2007-10-01
Full Text Available The analysis of impacts of elastic bodies is topical and has many applications, both practical and theoretical. The elastic character of a collision is evidenced especially by the velocities of some parts of a particular body, named a "ring". In the presented paper, elastic collisions are illustrated by simulation with the ANSYS program, for the particular case of the ring with the mechanical characteristics given in the paper.
Energy Technology Data Exchange (ETDEWEB)
Noack, Ruediger
2016-07-01
In this work a general method is developed for reproducing, on a coarse grid, two-phase flow fields calculated by generic detailed CFD simulations. Numerical models for only a few sections of the geometry and sets of flow parameters, each of representative character, allow a complete description of the large component considered. Thus a comprehensive three-dimensional CFD simulation, meeting industrial needs for low computational costs, is generated.
Simulation Methods for Multiperiodic and Aperiodic Nanostructured Dielectric Waveguides
DEFF Research Database (Denmark)
Paulsen, Moritz; Neustock, Lars Thorben; Jahns, Sabrina
on Rudin-Shapiro, Fibonacci, and Thue-Morse binary sequences. The near-field and far-field properties are calculated employing the finite-element method (FEM), the finite-difference time-domain (FDTD) method as well as a rigorous coupled wave algorithm (RCWA).
Toward a practical method for adaptive QM/MM simulations
Bulo, R.E.; Ensing, B.; Sikkema, J.; Visscher, L.
2009-01-01
We present an accurate adaptive multiscale molecular dynamics method that will enable the detailed study of large molecular systems that mimic experiment. The method treats the reactive regions at the quantum mechanical level and the inactive environment regions at lower levels of accuracy, while at
Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation
Energy Technology Data Exchange (ETDEWEB)
Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua
2016-02-15
When the plant is modeled in detail for high precision, it is hard for a single RELAP5 instance to achieve real-time calculation in a large-scale simulation. To improve the speed and ensure the precision of simulation at the same time, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. A compromise on the synchronization frequency was carefully considered to improve the precision of simulation while guaranteeing real-time simulation. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of SGTR was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes and coupling between RELAPSim codes and other types of simulation codes. However, the coupling methods are also applicable in other simulators, for example, a simulator employing ATHLETE instead of RELAP5 or another logic code instead of SIMULINK. It is believed the coupling method is generally applicable to NPP simulators regardless of the specific codes chosen in this paper.
Stochastic linear multistep methods for the simulation of chemical kinetics.
Barrio, Manuel; Burrage, Kevin; Burrage, Pamela
2015-02-14
In this paper, we introduce the Stochastic Adams-Bashforth (SAB) and Stochastic Adams-Moulton (SAM) methods as an extension of the τ-leaping framework to past information. Using the Θ-trapezoidal τ-leap method of weak order two as a starting procedure, we show that the k-step SAB method with k ≥ 3 is order three in the mean and correlation, while a predictor-corrector implementation of the SAM method is weak order three in the mean but only order one in the correlation. These convergence results have been derived analytically for linear problems and successfully tested numerically for both linear and non-linear systems. A series of additional examples have been implemented in order to demonstrate the efficacy of this approach.
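A bare-bones explicit τ-leap for a single decay reaction illustrates the framework that these multistep methods extend: fire a Poisson-distributed number of reaction events per fixed time step instead of simulating each event. This generic sketch is ours, not the SAB/SAM scheme from the paper, and the rate constants are arbitrary.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplicative method; adequate for the modest means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap_decay(x0=1000, c=0.1, tau=0.1, t_end=3.0, seed=0):
    """Explicit tau-leaping for the decay A -> 0 with propensity a(x) = c*x:
    in each leap, the number of firings is drawn from Poisson(a(x)*tau)."""
    rng = random.Random(seed)
    x, t = x0, 0.0
    while t < t_end - 1e-12 and x > 0:
        x = max(0, x - poisson(rng, c * x * tau))
        t += tau
    return x

# Averaged over runs, the result should approach x0*exp(-c*t_end), about 741.
mean_x = sum(tau_leap_decay(seed=s) for s in range(200)) / 200.0
```

Higher-order schemes like the stochastic Adams methods of the paper improve on this one-step leap by reusing propensity information from past steps, much as deterministic multistep methods do.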
Reduction Methods for Real-time Simulations in Hybrid Testing
DEFF Research Database (Denmark)
Andersen, Sebastian
2016-01-01
Hybrid testing constitutes a cost-effective experimental full-scale testing method. The method was introduced in the 1960s by Japanese researchers as an alternative to conventional full-scale testing and small-scale material testing, such as shake-table tests. The principle of the method is to divide a structure into a physical substructure and a numerical substructure and to couple these in a test. If the test is conducted in real time, it is referred to as real-time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960s, both with respect … and complexity of kinematic nonlinear numerical substructures are presented, with special emphasis on the use of basis reduction methods. Three elements that can help to improve the accuracy are presented and illustrated. In kinematic nonlinear systems, various deformation modes are coupled through a nonlinear …
Validation and Continued Development of Methods for Spheromak Simulation
Benedett, Thomas
2017-10-01
The HIT-SI experiment has demonstrated stable sustainment of spheromaks. Determining how the underlying physics extrapolate to larger, higher-temperature regimes is of prime importance in determining the viability of the inductively-driven spheromak. It is thus prudent to develop and validate a computational model that can be used to study current results and study the effect of possible design choices on plasma behavior. An extended MHD model has shown good agreement with experimental data at 14 kHz injector operation. Efforts to extend the existing validation to a range of higher frequencies (36, 53, 68 kHz) using the PSI-Tet 3D extended MHD code will be presented, along with simulations of potential combinations of flux conserver features and helicity injector configurations and their impact on current drive performance, density control, and temperature for future SIHI experiments. Work supported by USDoE.
Phase portrait methods for verifying fluid dynamic simulations
Energy Technology Data Exchange (ETDEWEB)
Stewart, H.B.
1989-01-01
As computing resources become more powerful and accessible, engineers more frequently face the difficult and challenging engineering problem of accurately simulating nonlinear dynamic phenomena. Although mathematical models are usually available, in the form of initial value problems for differential equations, the behavior of the solutions of nonlinear models is often poorly understood. A notable example is fluid dynamics: while the Navier-Stokes equations are believed to correctly describe turbulent flow, no exact mathematical solution of these equations in the turbulent regime is known. Differential equations can of course be solved numerically, but how are we to assess numerical solutions of complex phenomena without some understanding of the mathematical problem and its solutions to guide us
Method of Modeling and Simulation of Shaped External Occulters
Lyon, Richard G. (Inventor); Clampin, Mark (Inventor); Petrone, Peter, III (Inventor)
2016-01-01
The present invention relates to modeling an external occulter including: providing at least one processor executing program code to implement a simulation system, the program code including: providing an external occulter having a plurality of petals, the occulter being coupled to a telescope; and propagating light from the occulter to a telescope aperture of the telescope by scalar Fresnel propagation, by: obtaining an incident field strength at a predetermined wavelength at an occulter surface; obtaining a field propagation from the occulter to the telescope aperture using a Fresnel integral; modeling a celestial object at differing field angles by shifting a location of a shadow cast by the occulter on the telescope aperture; calculating an intensity of the occulter shadow on the telescope aperture; and applying a telescope aperture mask to a field of the occulter shadow, and propagating the light to a focal plane of the telescope via FFT techniques.
Simulation of 3D tumor cell growth using nonlinear finite element method.
Dong, Shoubing; Yan, Yannan; Tang, Liqun; Meng, Junping; Jiang, Yi
2016-01-01
We propose a novel parallel computing framework for a nonlinear finite element method (FEM)-based cell model and apply it to simulate avascular tumor growth. We derive computation formulas to simplify the simulation and design the basic algorithms. As the proliferation generations of tumor cells increase, the FEM elements may become larger and more distorted. We therefore describe a remeshing and refinement process for distorted or overly large finite elements, together with a parallel implementation based on the Message Passing Interface, to improve the accuracy and efficiency of the simulation. We demonstrate the feasibility and effectiveness of the FEM model and the parallelization methods in simulations of early tumor growth.
Modeling and Simulation of DC Power Electronics Systems Using Harmonic State Space (HSS) Method
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth
2015-01-01
based on the state-space averaging and generalized averaging, these also have limitations in reproducing the results of non-linear time domain simulations. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling....... Through this method, the required computation time and CPU memory for large dc power electronics systems can be reduced. Besides, the achieved results match those of the non-linear time domain simulation, but with a faster simulation time, which is beneficial in a large network....
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Blaabjerg, Frede
2017-01-01
. Through this method, the required computation time and CPU memory can be reduced, and this faster simulation is an advantage in large network simulations. Besides, the achieved results match those of the non-linear time-domain simulation. Furthermore, the HSS modeling can describe how...... with different switching frequencies or harmonics from ac-dc converters means that harmonics and frequency coupling are both problems of the ac system and challenges of the dc system. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling...
Human swallowing simulation based on videofluorography images using Hamiltonian MPS method
Kikuchi, Takahiro; Michiwaki, Yukihiro; Kamiya, Tetsu; Toyama, Yoshio; Tamai, Tasuku; Koshizuka, Seiichi
2015-09-01
In developed nations, swallowing disorders and aspiration pneumonia have become serious problems. We developed a method to simulate the behavior of the organs involved in swallowing to clarify the mechanisms of swallowing and aspiration. The shape model is based on anatomically realistic geometry, and the motion model utilizes forced displacements based on realistic dynamic images to reflect the mechanisms of human swallowing. The soft tissue organs are modeled as nonlinear elastic material using the Hamiltonian MPS method. This method allows for stable simulation of the complex swallowing movement. A penalty method using metaballs is employed to simulate contact between organ walls and smooth sliding along the walls. We performed four numerical simulations under different analysis conditions to represent four cases of swallowing, including a healthy volunteer and a patient with a swallowing disorder. The simulation results were compared to examine the epiglottic downfolding mechanism, which strongly influences the risk of aspiration.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images
Energy Technology Data Exchange (ETDEWEB)
De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)
2017-06-15
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.
An efficient hybrid explicit/implicit solvent method for biomolecular simulations.
Lee, Michael S; Salsbury, Freddie R; Olson, Mark A
2004-12-01
We present a new hybrid explicit/implicit solvent method for dynamics simulations of macromolecular systems. The method models explicitly the hydration of the solute by either a layer or sphere of water molecules, and the generalized Born (GB) theory is used to treat the bulk continuum solvent outside the explicit simulation volume. To reduce the computational cost, we implemented a multigrid method for evaluating the pairwise electrostatic and GB terms. It is shown that for typical ion and protein simulations our method achieves similar equilibrium and dynamical observables as the conventional particle mesh Ewald (PME) method. Simulation timings are reported, which indicate that the hybrid method is much faster than PME, primarily due to a significant reduction in the number of explicit water molecules required to model hydration effects. (c) 2004 Wiley Periodicals, Inc.
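The pairwise GB term this abstract refers to is commonly written in the Still et al. form; the sketch below assumes that variant (an assumption; it is not the authors' multigrid implementation) with illustrative dielectric constants.

```python
import numpy as np

def gb_energy(q, r, R, eps_in=1.0, eps_out=78.5):
    """Total generalized Born solvation energy in the Still et al. form.
    q: charges, r: NxN distance matrix, R: effective Born radii.
    Diagonal (i == j) terms reduce to the Born self-energies."""
    RiRj = np.outer(R, R)
    f = np.sqrt(r**2 + RiRj * np.exp(-r**2 / (4.0 * RiRj)))  # f_GB
    tau = 1.0 / eps_in - 1.0 / eps_out
    return -0.5 * tau * np.sum(np.outer(q, q) / f)
```

For a single charge the expression collapses to the Born self-energy -0.5 (1/eps_in - 1/eps_out) q^2 / R, a useful sanity check.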
Marshall, Deborah A; Burgos-Liz, Lina; IJzerman, Maarten J; Osgood, Nathaniel D; Padula, William V; Higashi, Mitchell K; Wong, Peter K; Pasupathy, Kalyan S; Crown, William
2015-01-01
Health care delivery systems are inherently complex, consisting of multiple tiers of interdependent subsystems and processes that are adaptive to changes in the environment and behave in a nonlinear fashion. Traditional health technology assessment and modeling methods often neglect the wider health system impacts that can be critical for achieving desired health system goals and are often of limited usefulness when applied to complex health systems. Researchers and health care decision makers can either underestimate or fail to consider the interactions among the people, processes, technology, and facility designs. Health care delivery system interventions need to incorporate the dynamics and complexities of the health care system context in which the intervention is delivered. This report provides an overview of common dynamic simulation modeling methods and examples of health care system interventions in which such methods could be useful. Three dynamic simulation modeling methods are presented to evaluate system interventions for health care delivery: system dynamics, discrete event simulation, and agent-based modeling. In contrast to conventional evaluations, a dynamic systems approach incorporates the complexity of the system and anticipates the upstream and downstream consequences of changes in complex health care delivery systems. This report assists researchers and decision makers in deciding whether these simulation methods are appropriate to address specific health system problems through an eight-point checklist referred to as the SIMULATE (System, Interactions, Multilevel, Understanding, Loops, Agents, Time, Emergence) tool. It is a primer for researchers and decision makers working in health care delivery and implementation sciences who face complex challenges in delivering effective and efficient care that can be addressed with system interventions. On reviewing this report, the readers should be able to identify whether these simulation modeling
Application of Conjugate Gradient methods to tidal simulation
Barragy, E.; Carey, G.F.; Walters, R.A.
1993-01-01
A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several Incomplete LU type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver. © 1993.
Simple method for any planar wiggler field simulation
Directory of Open Access Journals (Sweden)
M. N. Smolyakov
2001-04-01
Full Text Available This paper deals with a nonstandard method for calculating the magnetic field of planar wigglers and undulators consisting of pure permanent magnets. This method of calculation is based on certain properties of the Fourier transform. It allows the analytical expression of the Fourier transform for the planar magnetic fields through the wiggler's geometry and magnetization of its blocks. The upper theoretical limit for the amplitude of the magnetic field is derived and matched with the field amplitude of planar wigglers with standard designs. The property of universality for planar wigglers is also taken into consideration as it may greatly simplify the analysis of magnetic fields for wigglers with different designs.
Directory of Open Access Journals (Sweden)
Uma Devi Kumaravelu
2012-01-01
Full Text Available A method of simulation and modeling outer rotor permanent magnet brushless DC (ORPMBLDC motor under dynamic conditions using finite element method by FEMM 4.2 software package is presented. In the proposed simulation, the torque developed at various positions of the rotor, under a complete cycle of excitation of the stator, is analysed. A novel method of sinusoidal excitation is proposed to enhance the overall torque development of ORPMBLDC motor.
A Fast Finite-Difference Time Domain Simulation Method for the Source-Stirring Reverberation Chamber
Wenxing Li; Chongyi Yue; Atef Elsherbeni
2017-01-01
Numerical analysis methods are often employed to improve the efficiency of the design and application of the source-stirring reverberation chamber. However, the state of equilibrium of the field inside the chamber is hard to reach. In this paper, we present a fast simulation method, which is able to significantly decrease the simulation time of the source-stirring reverberation chamber. The mathematical model of this method is given in detail and home-made FDTD code is employed to conduct the...
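For readers unfamiliar with the method being accelerated, the core FDTD update loop looks like the 1D Yee sketch below (a textbook illustration with arbitrary grid, source, and Courant parameters, not the authors' home-made code).

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=400):
    """Bare-bones 1D FDTD (Yee) leapfrog update with a soft Gaussian
    source at the center and PEC (field-zero) boundaries. The Courant
    number of 0.5 keeps the scheme stable in 1D."""
    ez = np.zeros(n_cells)       # electric field on integer grid points
    hy = np.zeros(n_cells - 1)   # magnetic field on half-grid points
    for t in range(n_steps):
        hy += 0.5 * (ez[1:] - ez[:-1])
        ez[1:-1] += 0.5 * (hy[1:] - hy[:-1])
        ez[n_cells // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft source
    return ez
```

In a reverberation-chamber simulation this update runs in 3D with many source positions, which is why reducing the time to reach field equilibrium matters.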
Uma Devi Kumaravelu; Sanavullah Mohamed Yakub
2012-01-01
A method of simulation and modeling outer rotor permanent magnet brushless DC (ORPMBLDC) motor under dynamic conditions using finite element method by FEMM 4.2 software package is presented. In the proposed simulation, the torque developed at various positions of the rotor, under a complete cycle of excitation of the stator, is analysed. A novel method of sinusoidal excitation is proposed to enhance the overall torque development of ORPMBLDC motor.
Pardo D.; Nam M.J.; Torres-Verdín C.; Hoversten M.G.; Garay Iñ.
2011-01-01
We introduce a new numerical method to simulate geophysical marine controlled source electromagnetic (CSEM) measurements for the case of 2D structures and finite 3D sources of electromagnetic (EM) excitation. The method of solution is based on a spatial discretization that combines a 1D Fourier transform with a 2D self-adaptive, goal-oriented, hp-Finite element method. It enables fast and accurate simulations for a variety of important, challenging and practical cases of marine CSEM acquisiti...
Clinical simulation as an evaluation method in health informatics
DEFF Research Database (Denmark)
Jensen, Sanne
2016-01-01
Safe work processes and information systems are vital in health care. Methods for design of health IT focusing on patient safety are one of many initiatives trying to prevent adverse events. Possible patient safety hazards need to be investigated before health IT is integrated with local clinical...
Development of new deactivation method for simulation of fluid ...
Indian Academy of Sciences (India)
processing unit in the petroleum industry for converting gas oil streams into high octane gasoline, cycle oils, and .... [table fragment; legend: Z/M = ratio of zeolite to matrix; MM = Mitchell method equilibrium catalyst; numbers in brackets indicate percent E-Cat metals; TPR unit]. In an experiment, 250 mg of the catalyst.
A Simulator to Enhance Teaching and Learning of Mining Methods ...
African Journals Online (AJOL)
Audio visual education that incorporates devices and materials which involve sight, sound, or both has become a sine qua non in recent times in the teaching and learning process. An automated physical model of mining methods aided with video instructions was designed and constructed by harnessing locally available ...
Modified enthalpy method for the simulation of melting and ...
Indian Academy of Sciences (India)
face obtained is compared satisfactorily with the experimental results available in literature. Keywords: melting; enthalpy method; wavy interface; mushy zone constant. The study of melting and solidification offers insights in the design of casting, welding, latent thermal energy storage systems, etc., and in the ...
Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations
Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.
2018-02-01
The method called "PVI" (Partial Variance of Increments) has been increasingly used in analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper will summarize key features of the method and provide a synopsis of the main results obtained by various groups using the method. This will enable new users or those considering methods of this type to find details and background collected in one place.
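The PVI statistic itself is simple to compute: the magnitude of a field increment at a given lag, normalized by the rms increment. A minimal sketch (variable names and the vector-magnitude normalization are illustrative conventions):

```python
import numpy as np

def pvi(b, lag):
    """Partial Variance of Increments of a vector time series.
    b: (N, 3) array of field samples; lag: increment separation in samples.
    Returns |db(t)| / sqrt(<|db|^2>), where <.> is a time average."""
    db = b[lag:] - b[:-lag]            # vector increments at the given lag
    mag = np.linalg.norm(db, axis=1)
    return mag / np.sqrt(np.mean(mag**2))
```

Samples where PVI exceeds a threshold (values around 3 to 4 are common in the literature) are typically flagged as candidate coherent structures.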
Mathematical modeling and simulation methods in energy systems
Energy Technology Data Exchange (ETDEWEB)
Bahn, O. [Montreal Univ., PQ (Canada). Ecole des Hautes Etudes Commerciales; Haurie, A. [Montreal Univ., PQ (Canada). Groupe d' etudes et de recherche en analyse des decisions]|[Geneva Univ., (Switzerland); Zachary, D.S. [American Univ. of Sharjah (United Arab Emirates). Dept. of Physics
2004-05-01
This study reviewed a modeling approach that evaluates the interactions between the economy of a region, its energy production/consumption system and the environmental impacts of these activities. Various linear programming models have been used to examine how an economy can adapt to abrupt changes in crude oil supply as a primary energy source. Although the models capture the complex interactions between technologies, energy options, economic development and social acceptance of energy policies, they should be modified to clarify the fact that energy demand is a derived demand that is related to technology choices. The models should also clarify that energy is a fundamental resource for the economy. As such, energy demand is influenced by macro-economic adjustments occurring in other economic sectors. This paper presents a first account of the general structure and potential use of mathematical and simulation models of energy systems. A first taxonomy of energy-economy-environment (E3) models was provided along with their main modeling approach. An alternative classification was proposed and technology ranking was discussed along with some issues in energy modeling. 16 refs., 1 tab., 2 figs.
Simulation As a Method To Support Complex Organizational Transformations in Healthcare
Rothengatter, D.C.F.; Katsma, Christiaan; Van Hillegersberg, Jos
2010-01-01
In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that is in a major transition. The simulation represents the future situation of the hospital and enables the healthcare professionals to analyze and reflect on processes, planning, staffing and collabo...
Energy Technology Data Exchange (ETDEWEB)
BEEBE - WANG,J.; LUCCIO,A.U.; D IMPERIO,N.; MACHIDA,S.
2002-06-03
Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As a part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.
[Application of power band graph method to the modeling and simulation of cardiovascular system].
Feng, Y; Feng, Y; Tian, S; Ling, H; Chen, S
1999-09-01
This paper presents a computer simulation model of the cardiovascular circulation system, which describes the dynamics of blood flow in the cardiovascular system by a state equation. The model can be used in physiological study and computer-aided medical education. In this paper, the Power Band Graph (PBG) modeling method is used to model the human circulation system and to conduct a simulation study on a simplified physiological system model. The results demonstrate that the PBG method, as an understandable and unified modeling method, is quite effective and practicable and can be widely used in the field of physiological system simulation.
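As an illustration of the kind of state equation such lumped cardiovascular models integrate, here is a two-element Windkessel sketch: compliance C and peripheral resistance R driven by a pulsatile inflow. It is a generic stand-in with illustrative parameter values, not the paper's PBG model.

```python
import numpy as np

def windkessel(C=1.1, R=1.0, T=0.8, steps=8000):
    """Two-element Windkessel, C dP/dt = Q_in(t) - P/R, by forward Euler.
    C: arterial compliance, R: peripheral resistance, T: heart period.
    Parameter values and the inflow waveform are illustrative only."""
    dt = 5 * T / steps                    # simulate five heartbeats
    P = np.empty(steps)
    P[0] = 80.0                           # initial pressure (arbitrary units)
    for k in range(steps - 1):
        t = k * dt
        q = 400.0 * max(np.sin(2 * np.pi * t / T), 0.0)  # half-sine inflow
        P[k + 1] = P[k] + dt * (q - P[k] / R) / C
    return P
```

The trace shows the familiar behavior: pressure rises during inflow and decays exponentially with time constant RC during diastole.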
Methods and Simulations of Muon Tomography and Reconstruction
Schreiner, Henry Fredrick, III
This dissertation investigates imaging with cosmic ray muons using scintillator-based portable particle detectors, and covers a variety of the elements required for the detectors to operate and take data, from the detector internal communications and software algorithms to a measurement to allow accurate predictions of the attenuation of physical targets. A discussion of the tracking process for the three layer helical design developed at UT Austin is presented, with details of the data acquisition system and the highly efficient data format. Upgrades to this system provide a stable system for taking images in harsh or inaccessible environments, such as in a remote jungle in Belize. A Geant4 Monte Carlo simulation was used to develop our understanding of the efficiency of the system, as well as to make predictions for a variety of different targets. The projection process is discussed, with a high-speed algorithm for sweeping a plane through data in near real time, to be used in applications requiring a search through space for target discovery. Several other projections and a foundation of high fidelity 3D reconstructions are covered. A variable binning scheme for rapidly varying statistics over portions of an image plane is also presented and used. A discrepancy between our predictions and the observed attenuation through smaller targets is shown, and it is resolved with a new measurement of the low-energy spectrum, using a specially designed enclosure to make a series of measurements underwater. This provides a better basis for understanding the images of small amounts of materials, such as thin cover materials.
Fang, Suping; Wang, Leijie; Komori, Masaharu; Kubo, Aizoh
2010-11-20
We present a ray-tracing-based method for simulation of interference fringe patterns (IFPs) for measuring gear tooth flanks with a two-path interferometer. This simulation method involves two steps. In the first step, the profile of an IFP is achieved by means of ray tracing within the object path of the interferometer. In the second step, the profile of an IFP is filled with interference fringes, according to a set of functions from an optical path length to a fringe gray level. To examine the correctness of this simulation method, simulations are performed for two spur involute gears, and the simulated IFPs are verified by experiments using the actual two-path interferometer built on an optical platform.
The Monte Carlo Simulation Method for System Reliability and Risk Analysis
Zio, Enrico
2013-01-01
Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling. Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
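The basic sampling idea such reliability analyses build on can be shown in a few lines: estimate a system failure probability by drawing component states at random. The example below is a toy series system (it fails if any component fails) with illustrative probabilities, not one of the book's case studies.

```python
import random

def series_system_unreliability(p_fail, n_samples=100_000, seed=1):
    """Monte Carlo estimate of the failure probability of a series system:
    the system fails if ANY component fails.
    p_fail: list of per-component failure probabilities."""
    rng = random.Random(seed)
    failures = sum(
        any(rng.random() < p for p in p_fail)   # one trial: sample each component
        for _ in range(n_samples)
    )
    return failures / n_samples
```

For this simple structure the exact answer, 1 - prod(1 - p_i), is available, which makes it a convenient convergence check; the value of the Monte Carlo approach is that it still works when no closed form exists.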
Zhang, Xiaotian; Liu, Tao; Qiu, Xinming
2017-11-01
This paper reports a finite element modeling approach to simulate the hypervelocity impact (HVI) response of composite laminate. A node-separation finite element (NSFE) method based on a scalar-element-fracture technique for isotropic materials in HVI simulation was presented in a previous study. To extend NSFE to composite materials, an orthotropic node-separation finite element (ONSFE) method is developed. This approach employs an orthotropic continuum material model and a corresponding orthotropic-element-fracture technique to represent the HVI behavior/damage of composite laminate. A series of HVI simulations are conducted and the developed ONSFE method is validated by comparison with the experimental data. The simulation results show that ONSFE can successfully capture the HVI phenomena of composite laminate, such as the orthotropic property, nonlinear shock response, perforation, fiber breakage and delamination. Finally, an HVI event of a Whipple shield is simulated and the computational capability of ONSFE for predicting the damage state of the composite bumper is further evaluated.

A virtual source method for Monte Carlo simulation of Gamma Knife Model C
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)
2016-05-15
The Monte Carlo simulation method has been used for dosimetry of radiation treatment. Monte Carlo simulation is a method that determines particle paths and dosimetry using random numbers. Recently, owing to the fast processing ability of computers, it is possible to treat a patient more precisely. However, longer simulation times are needed to reduce the statistical uncertainty. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulation takes time. For efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with the simulations using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code and there was no statistically significant difference in simulated results.
Energy Technology Data Exchange (ETDEWEB)
Sentis, R. [CEA Bruyeres-le-Chatel, Dept. de Conception et Simulation des Armes, 91 (France); Golse, F. [CEA Saclay, Dept. de Modelisation des Systemes et Structures, 91 - Gif-sur-Yvette (France); Lafitte, O. [Paris-7 Univ., 75 (France)]|[Ecole Normale Superieure, 75 - Paris (France)
2001-07-01
For the simulation of laser absorption in a plasma hydrodynamic code, one generally uses a ray tracing method. We show here the main difficulties related to a numerical solution of the eikonal equation by an alternative method, called Eulerian. We also indicate which approaches are considered to clear up these difficulties. One of the main assets of the Eulerian method is that it gives a more regular estimate of the energy absorbed in each elementary volume than the ray-tracing method.
Energy Technology Data Exchange (ETDEWEB)
Crestaux, Th. [CEA Saclay, Dept. Modelisation de Systemes et Structures (DEN/DANS/DM2S/SFME), 91 - Gif sur Yvette (France)
2008-07-01
The context of this thesis is the development of numerical simulation in industrial processes. It aims to study and develop methods that reduce the numerical cost of computing polynomial chaos expansions. The implementation concerns problems of high stochastic dimension, and more particularly the transport model of radionuclides in radioactive waste disposal. (A.L.B.)
Székely, Tamás; Burrage, Kevin; Zygalakis, Konstantinos C; Barrio, Manuel
2014-06-18
Biochemical systems with relatively low numbers of components must be simulated stochastically in order to capture their inherent noise. Although there has recently been considerable work on discrete stochastic solvers, there is still a need for numerical methods that are both fast and accurate. The Bulirsch-Stoer method is an established method for solving ordinary differential equations that possesses both of these qualities. In this paper, we present the Stochastic Bulirsch-Stoer method, a new numerical method for simulating discrete chemical reaction systems, inspired by its deterministic counterpart. It is able to achieve an excellent efficiency due to the fact that it is based on an approach with high deterministic order, allowing for larger stepsizes and leading to fast simulations. We compare it to the Euler τ-leap, as well as two more recent τ-leap methods, on a number of example problems, and find that as well as being very accurate, our method is the most robust, in terms of efficiency, of all the methods considered in this paper. The problems it is most suited for are those with increased populations that would be too slow to simulate using Gillespie's stochastic simulation algorithm. For such problems, it is likely to achieve higher weak order in the moments. The Stochastic Bulirsch-Stoer method is a novel stochastic solver that can be used for fast and accurate simulations. Crucially, compared to other similar methods, it better retains its high accuracy when the timesteps are increased. Thus the Stochastic Bulirsch-Stoer method is both computationally efficient and robust. These are key properties for any stochastic numerical method, as they must typically run many thousands of simulations.
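For context, the Euler τ-leap baseline the authors compare against fires each reaction channel a Poisson number of times per fixed step. For a single decay reaction X → 0 it reduces to the sketch below (illustrative parameters, not one of the paper's benchmark problems):

```python
import numpy as np

def euler_tau_leap(x0, k, tau, t_end, seed=0):
    """Euler tau-leap for the decay reaction X -> 0 with propensity k*X.
    Each step fires the reaction Poisson(k*X*tau) times; the min() guard
    keeps the population from going negative."""
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    while t < t_end:
        x -= min(x, rng.poisson(k * x * tau))
        t += tau
    return x
```

Averaging many runs approaches the exact mean x0*exp(-k*t_end), with a bias that grows with τ; methods like the Stochastic Bulirsch-Stoer aim to keep accuracy while allowing larger steps.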
Simulation of XPS C1s Spectra of Organic Monolayers by Quantum Chemical Methods
Giesbers, M.; Marcelis, A.T.M.; Zuilhof, H.
2013-01-01
Several simple methods are presented and evaluated to simulate the X-ray photoelectron spectra (XPS) of organic monolayers and polymeric layers by density functional theory (DFT) and second-order Møller–Plesset theory (MP2) in combination with a series of basis sets. The simulated carbon (C1s) XPS
An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments
Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram
2018-01-01
Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
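One standard way to draw many cheap realizations directly from a model correlation, in the spirit of the method described, is a Cholesky factorization of the covariance: factor once, then each realization costs only a matrix-vector product. The sketch below is real-valued and generic; the paper works with complex visibilities and a specific covariance model.

```python
import numpy as np

def simulate_realizations(cov, n_real, seed=0):
    """Draw n_real zero-mean Gaussian realizations of a signal vector
    with the given covariance matrix, via Cholesky factorization.
    Returns an array of shape (dim, n_real)."""
    L = np.linalg.cholesky(cov)                      # cov = L @ L.T
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((cov.shape[0], n_real))  # unit-variance draws
    return L @ z                                     # correlated realizations
```

The sample covariance of the output converges to the input model, which is exactly the property needed when validating power spectrum estimators against many signal realizations.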
Rozema, Wybe
2015-01-01
More and more often computer simulations of airflow are used as a tool in aircraft design. In this research project, which is a collaboration of the University of Groningen and the National Aerospace Laboratory NLR, new computer methods and models for the accurate simulation of airflow around
Corrected momentum exchange method for lattice Boltzmann simulations of suspension flow
Lorenz, E.; Caiazzo, A.; Hoekstra, A.G.
2009-01-01
Standard methods for lattice Boltzmann simulations of suspended particles, based on the momentum exchange algorithm, might lack accuracy or violate Galilean invariance in some particular situations. Aiming at simulations of dense suspensions in high-shear flows, we motivate and investigate necessary
Simulation As a Method To Support Complex Organizational Transformations in Healthcare
Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos
2010-01-01
In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that
Discrete event simulation of crop operations in sweet pepper in support of work method innovation
Ooster, van 't Bert; Aantjes, Wiger; Melamed, Z.
2017-01-01
Greenhouse Work Simulation, GWorkS, is a model that simulates crop operations in greenhouses for the purpose of analysing work methods. GWorkS is a discrete event model that approaches reality as a discrete stochastic dynamic system. GWorkS was developed and validated using cut-rose as a case
Multi-class continuum traffic flow models : Analysis and simulation methods
Van Wageningen-Kessels, F.L.M.
2013-01-01
How to model and simulate traffic flow including different vehicles such as cars and trucks? This dissertation answers this question by analyzing existing models and simulation methods and by developing new ones. The new model (Fastlane) describes traffic as a continuum flow while accounting for
DEFF Research Database (Denmark)
Völcker, Carsten; Jørgensen, John Bagterp; Thomsen, Per Grove
2010-01-01
The implicit Euler method, normally referred to as the fully implicit (FIM) method, and the implicit pressure explicit saturation (IMPES) method are the traditional choices for temporal discretization in reservoir simulation. The FIM method offers unconditional stability in the sense of discrete approximations, while the IMPES scheme benefits from the explicit treatment of the saturation. However, in terms of controlling the integration error, the low order of the FIM method leads to small integration steps, while the explicit treatment of the saturation may restrict the stepsizes for the IMPES scheme. Current reservoir simulators apply timestepping algorithms that are based on safeguarded heuristics, and can neither guarantee convergence in the underlying equation solver, nor provide estimates of the relations between convergence, integration error and stepsizes. We establish predictive stepsize control applied to high order methods for temporal discretization in reservoir simulation. The family of Runge-Kutta methods is presented, and in particular the explicit singly diagonally implicit Runge-Kutta (ESDIRK) method with an embedded error estimate is described. A predictive stepsize adjustment...
Fernandez, Pablo; Roca, Xevi; Peraire, Jaime
2016-01-01
We present a high-order implicit large-eddy simulation (ILES) approach for simulating transitional turbulent flows. The approach consists of an Interior Embedded Discontinuous Galerkin (IEDG) method for the discretization of the compressible Navier-Stokes equations and a parallel preconditioned Newton-GMRES solver for the resulting nonlinear system of equations. The IEDG method arises from the marriage of the Embedded Discontinuous Galerkin (EDG) method and the Hybridizable Discontinuous Galerkin (HDG) method. As such, the IEDG method inherits the advantages of both the EDG method and the HDG method to make itself well-suited for turbulence simulations. We propose a minimal residual Newton algorithm for solving the nonlinear system arising from the IEDG discretization of the Navier-Stokes equations. The preconditioned GMRES algorithm is based on a restricted additive Schwarz (RAS) preconditioner in conjunction with a block incomplete LU factorization at the subdomain level. The proposed approach is applied to...
Energy Technology Data Exchange (ETDEWEB)
Berthiau, G.
1995-10-01
The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) that allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized over a hyper-rectangular domain, and equality constraints may also be specified. A similar problem consists in fitting component models; there, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, borrowed from the combinatorial optimization domain, has been adapted to continuous-variable problems and compared with other global optimization methods. An efficient strategy for discretizing the variables and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we propose a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we adapted three further methods from the combinatorial optimization domain: the threshold method, a genetic algorithm, and the Tabu search method. The tests were performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
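The core of the approach described above is the Metropolis acceptance rule with a cooling schedule. The following is a minimal sketch of simulated annealing over a hyper-rectangular domain; the geometric cooling schedule, step size, and function names are illustrative choices, not the scheme used in the thesis:

```python
import math
import random

def simulated_annealing(f, bounds, n_iter=20000, t0=1.0, seed=0):
    """Minimize f over the box given by bounds = [(lo, hi), ...].

    Illustrative sketch: geometric cooling, Gaussian proposals clipped to
    the box, Metropolis acceptance of uphill moves with prob exp(-df/t).
    """
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = f(x)
    best, fbest = list(x), fx
    for k in range(n_iter):
        t = t0 * 0.999 ** k                      # geometric cooling schedule
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = f(cand)
        # always accept improvements; accept uphill moves with prob exp(-df/t)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# demo on a convex test function whose minimum (the origin) is known
best, fbest = simulated_annealing(lambda x: sum(v * v for v in x),
                                  [(-5.12, 5.12)] * 3)
# fbest ends up close to the global minimum 0
```

On the multimodal benchmark functions mentioned in the abstract, the temperature schedule and stopping criteria matter far more than they do on this convex demo.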
Energy Technology Data Exchange (ETDEWEB)
Morillon, B.
1996-12-31
With most traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation the natural (analog) simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has used such techniques successfully for a long time with various approximate adjoint solutions; these methods require the user to find suitable parameters, and if these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; we then show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique in which the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic scattering and for multigroup problems with anisotropic scattering. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without the splitting and Russian roulette techniques, whereas for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.
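The inefficiency of analog simulation under strong attenuation, and the fix by biasing with likelihood-ratio weights, can be seen on a toy problem. The sketch below estimates the transmission probability through a purely absorbing slab (exact answer exp(-sigma*L)); it is a generic importance-sampling illustration, not Tripoli's adjoint-based scheme, and all names are made up for the example:

```python
import math
import random

def transmission_estimate(sigma, L, n, sigma_b=None, seed=0):
    """Estimate the probability that a particle crosses an absorbing slab
    of thickness L, exact value exp(-sigma * L).

    sigma_b=None gives the analog (natural) simulation. With a biased cross
    section 0 < sigma_b < sigma, path lengths are drawn from the biased
    exponential density and each transmitted particle carries the
    likelihood-ratio weight, so the estimator stays unbiased.
    """
    rng = random.Random(seed)
    s = sigma_b if sigma_b is not None else sigma
    acc = 0.0
    for _ in range(n):
        x = rng.expovariate(s)                   # free path from (biased) pdf
        if x > L:                                # particle transmitted
            # weight = true pdf / biased pdf at the sampled path length
            acc += (sigma / s) * math.exp(-(sigma - s) * x)
    return acc / n

sigma, L = 1.0, 10.0          # exact transmission exp(-10) ~ 4.5e-5
biased = transmission_estimate(sigma, L, 20000, sigma_b=0.2)
```

With these numbers an analog run of 20000 histories transmits about one particle on average, while the biased run transmits roughly 13% of them, which is precisely the figure-of-merit gap the abstract refers to.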
Directory of Open Access Journals (Sweden)
Xueshang eFeng
2016-03-01
This paper presents a comparative study of methods for cleaning the divergence of the magnetic field in three-dimensional numerical simulation of the solar corona. For this purpose, the diffusive method, the projection method, the generalized Lagrange multiplier method and the constrained-transport method are used. All these methods are combined with a finite-volume scheme based on a six-component grid system in spherical coordinates. In order to compare the performance of the four divergence cleaning methods, a solar coronal numerical simulation for Carrington rotation 2056 has been studied. Numerical results show that the average relative divergence error is around $10^{-4.5}$ for the constrained-transport method, and about $10^{-3.1}$-$10^{-3.6}$ for the other three methods. Although there are some differences in the average relative divergence errors of the four methods, our tests show that they can all reproduce the basic structure of the solar wind.
A Fast Finite-Difference Time Domain Simulation Method for the Source-Stirring Reverberation Chamber
Directory of Open Access Journals (Sweden)
Wenxing Li
2017-01-01
Numerical analysis methods are often employed to improve the efficiency of the design and application of the source-stirring reverberation chamber. However, the state of equilibrium of the field inside the chamber is hard to reach. In this paper, we present a fast simulation method which significantly decreases the simulation time for the source-stirring reverberation chamber. The mathematical model of this method is given in detail, and in-house FDTD code is employed to conduct the simulations and optimizations. The results show that the method yields the accurate frequency response of the source-stirring chamber and makes its simulation more efficient.
Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Shaoyun Ge
2014-01-01
In this paper we treat the reliability assessment of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a simulation of low penetration and a simulation of high penetration. The load-shedding strategy and the simulation process are described in detail for each FMEA step. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
Simulation embedded artificial intelligence search method for supplier trading portfolio decision
DEFF Research Database (Denmark)
Feng, Donghan; Yan, Z.; Østergaard, Jacob
2010-01-01
...The simulation results also reveal an accumulation effect along the trading period, which improves the normality of the supplier trading portfolios. The authors believe the proposed method is a useful complement to the MV method and conditional value at risk (CVaR)-based methods in supplier trading...
Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study
Suero, Manuel; Privado, Jesús; Botella, Juan
2017-01-01
A simulation study is presented to evaluate and compare three methods for estimating the variance of the estimates of the parameters d' and C of signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, "d'" and "c". Those methods have been mostly assessed by...
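A simulation study of this kind boils down to: fix true SDT parameters, draw binomial hit and false-alarm counts, recompute the estimators, and take their sample variance. The sketch below does this for d' under the equal-variance Gaussian model; the (k+0.5)/(n+1) rate correction is one common convention among several, chosen here so the estimators are always finite:

```python
import random
from statistics import NormalDist

def dprime_c(hits, fas, n_signal, n_noise):
    """Point estimates of d' and c from hit / false-alarm counts."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (n_signal + 1)    # keep rates away from 0 and 1
    f = (fas + 0.5) / (n_noise + 1)
    return z(h) - z(f), -0.5 * (z(h) + z(f))

def mc_variance(d_true, c_true, n, reps=2000, seed=0):
    """Monte Carlo estimate of Var(d'-hat): simulate binomial counts under
    the equal-variance model and return the sample variance of d'-hat."""
    rng = random.Random(seed)
    nd = NormalDist()
    ph = nd.cdf(d_true / 2.0 - c_true)    # true hit rate
    pf = nd.cdf(-d_true / 2.0 - c_true)   # true false-alarm rate
    ds = []
    for _ in range(reps):
        hits = sum(rng.random() < ph for _ in range(n))
        fas = sum(rng.random() < pf for _ in range(n))
        ds.append(dprime_c(hits, fas, n, n)[0])
    m = sum(ds) / reps
    return sum((d - m) ** 2 for d in ds) / (reps - 1)

# this MC variance can be checked against the delta-method approximation
# H(1-H)/(n*pdf(z(H))^2) + F(1-F)/(n*pdf(z(F))^2)
```

Comparing the Monte Carlo variance against such closed-form approximations is exactly the kind of assessment the abstract describes.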
Methods of Simulation on the Map of Ethnogeographical Knowledge
Directory of Open Access Journals (Sweden)
A. M. Saraeva
2017-01-01
The article deals with the features of the spatial representation of the location of objects and phenomena on the Earth. One type of "cartographic representation" is modeling on the contour map, and the advantages of this method are revealed. Modeling techniques make it possible to include ethnogeographic data in the characterization of a territory and to reflect them on the contour map. The basis of ethnogeographic modeling is the identification and depiction of elements of the material and spiritual culture of peoples by means of conventional signs. Comparing these elements, superimposing them on one another, and relating them to geographic maps allows one to determine interrelations and dependencies between phenomena. Modeling on contour maps is a basic method of learning in geography: on the one hand it creates a cartographic image of the studied territory, and on the other hand it facilitates the creation of "visual supports" on the map. When modeling on contour maps, students first enter the basic geographical names, which serve as base knowledge. Then, by purposefully analyzing and comparing the thematic maps of the atlas or textbook, the students reflect specific ethnogeographical knowledge on the contour maps. As a result, the contour maps acquire "their own face" rather than becoming simple copies of maps from an atlas or textbook. The article also analyzes how this technique affects the formation of spatial representations of the studied object. Thanks to the cartographic model, one can maintain a constant cognitive interest in the material studied, and modeling on the contour map presents the structure of the links between the elements of the ethnogeographical material. The basis of ethnogeographic modeling on the contour map is the identification and mapping of elements of the material and spiritual culture of...
A stable cutting method for finite elements based virtual surgery simulation.
Jerábková, Lenka; Jerábek, Jakub; Chudoba, Rostislav; Kuhlen, Torsten
2007-01-01
In this paper we present a novel approach for stable interactive cutting of deformable objects in virtual environments. Our method is based on the extended finite element method, which allows discontinuities to be modeled without remeshing. As no new elements are created, the impact on simulation performance is minimized. We also propose an appropriate mass lumping technique to guarantee the stability of the simulation regardless of the position of the cut.
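Mass lumping, which the abstract leans on for stability, replaces the consistent finite element mass matrix by a diagonal one. The textbook row-sum variant is shown below; the paper's lumping for XFEM cut elements is more involved, so this is only the baseline idea:

```python
import numpy as np

def lump_mass(m):
    """Row-sum mass lumping: a diagonal matrix with the same row sums as m,
    so total mass is conserved and explicit stepping needs no linear solve."""
    return np.diag(m.sum(axis=1))

# consistent mass matrix of one linear 1D element (length h, density rho)
rho, h = 1.0, 1.0
m_consistent = rho * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
m_lumped = lump_mass(m_consistent)   # diag(rho*h/2, rho*h/2)
```

Lumping matters for cut elements because a tiny sub-element on one side of the cut would otherwise produce a nearly singular consistent mass matrix and a vanishing stable time step.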
A Study of Different Modeling Choices For Simulating Platelets Within the Immersed Boundary Method
Shankar, Varun; Wright, Grady B.; Fogelson, Aaron L.; Kirby, Robert M.
2012-01-01
The Immersed Boundary (IB) method is a widely-used numerical methodology for the simulation of fluid–structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hooke...
Computer Simulation Methods for Crushing Process in a Jaw Crusher
Il'ich Beloglazov, Ilia; Andreevich Ikonnikov, Dmitrii
2016-08-01
One of the trends at modern mining enterprises is the application of combined systems for extraction and transportation of the rock mass, with conveyor lines as the continuous link of the combined technology. The use of conveyor transport provides a significant reduction in energy costs, an increase in labor productivity, and process automation. However, it also imposes certain requirements on the quality of the transported material, of which the maximum piece size of the rock mass is one of the basic parameters. Crushing plants therefore perform coarse crushing followed by crushing of the material down to the maximum piece size usable on a conveyor, a duty often carried out by jaw crushers. Modeling the crushing process in jaw crushers makes it possible to optimize the workflow and increase the efficiency of the equipment in the subsequent transportation and processing of rock. In this paper we studied the interaction between the walls of a jaw crusher and the bulk material using the discrete element method (DEM). The article examines the modeling process in stages, including the design of the crusher construction in a solid and surface modeling system. The modeling of the crushing process is based on experimental data obtained with the BOYD crushing unit, and the fragmentation process and the resulting particle size distributions were studied. Analysis of the results shows the comparability of the actual experiment and the modeled process.
Directory of Open Access Journals (Sweden)
Wei, Fei; Westerdale, John; McMahon, Eileen M; Belohlavek, Marek; Heys, Jeffrey J
2012-01-01
As both fluid flow measurement techniques and computer simulation methods continue to improve, there is a growing need for numerical simulation approaches that can assimilate experimental data into the simulation in a flexible and mathematically consistent manner. The problem of interest here is the simulation of blood flow in the left ventricle with the assimilation of experimental data provided by ultrasound imaging of microbubbles in the blood. The weighted least-squares finite element method is used because it allows data to be assimilated in a very flexible manner so that accurate measurements are more closely matched with the numerical solution than less accurate data. This approach is applied to two different test problems: a flexible flap that is displaced by a jet of fluid and blood flow in the porcine left ventricle. By adjusting how closely the simulation matches the experimental data, one can observe potential inaccuracies in the model because the simulation without experimental data differs significantly from the simulation with the data. Additionally, the assimilation of experimental data can help the simulation capture certain small effects that are present in the experiment, but not modeled directly in the simulation.
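The key mechanism in the abstract above, matching accurate measurements more closely than noisy ones, is a weighted least-squares solve. The toy below stands in for the full weighted least-squares finite element formulation; the function name and the example system are made up for illustration:

```python
import numpy as np

def weighted_lsq(a, b, w):
    """Solve min_x || W^(1/2) (A x - b) ||_2, where w holds per-equation
    confidence weights: equations with large w are matched more closely."""
    sw = np.sqrt(np.asarray(w, dtype=float))[:, None]
    x, *_ = np.linalg.lstsq(sw * a, sw.ravel() * b, rcond=None)
    return x

# two conflicting "measurements" of the same quantity: trusting the second
# one a million times more drives the solution to (almost) match it
x = weighted_lsq(np.array([[1.0], [1.0]]),
                 np.array([0.0, 1.0]),
                 np.array([1.0, 1e6]))
```

In the assimilation setting, the PDE residual equations and the measurement equations simply share one such system, with the weights encoding measurement accuracy.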
Evaluation of a proposed optimization method for discrete-event simulation models
Directory of Open Access Journals (Sweden)
Alexandre Ferreira de Pinho
2012-12-01
Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, current methods exhibit low performance, being able to manipulate only a single decision variable at a time. The objective of this article is therefore to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time than software packages on the market. It should be emphasized that the quality of the response is not altered; that is, the proposed method maintains the effectiveness of the solutions. The study draws a comparison between the proposed method and a simulation tool already available on the market and examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.
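A genetic algorithm wrapped around a simulation treats each fitness evaluation as one (expensive) simulation run. The sketch below is a minimal real-coded GA with tournament selection, blend crossover, and Gaussian mutation; these operator choices and all names are generic illustrations, not the article's method:

```python
import random

def ga_minimize(f, bounds, pop=30, gens=60, seed=0):
    """Minimize f (stand-in for a simulation response) over a box domain."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    dim = len(bounds)

    def clip(x):
        return [min(hi[i], max(lo[i], x[i])) for i in range(dim)]

    popn = [[rng.uniform(lo[i], hi[i]) for i in range(dim)] for _ in range(pop)]
    fit = [f(x) for x in popn]
    best, fbest = min(zip(popn, fit), key=lambda p: p[1])
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            # tournament selection of two parents (best of 3 random picks)
            p1 = min(rng.sample(range(pop), 3), key=lambda i: fit[i])
            p2 = min(rng.sample(range(pop), 3), key=lambda i: fit[i])
            a = rng.random()  # blend (arithmetic) crossover
            child = [a * popn[p1][i] + (1 - a) * popn[p2][i]
                     for i in range(dim)]
            if rng.random() < 0.3:  # Gaussian mutation of one gene
                j = rng.randrange(dim)
                child[j] += rng.gauss(0.0, 0.1 * (hi[j] - lo[j]))
            nxt.append(clip(child))
        popn = nxt
        fit = [f(x) for x in popn]
        gb, gf = min(zip(popn, fit), key=lambda p: p[1])
        if gf < fbest:
            best, fbest = gb, gf
    return best, fbest
```

Because every individual in a generation can be evaluated independently, simulation runs parallelize naturally, which is one source of the computational-time advantage the article claims.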
A Simulation Method for High-Cycle Fatigue-Driven Delamination using a Cohesive Zone Model
DEFF Research Database (Denmark)
Bak, Brian Lau Verndal; Turon, A.; Lindgaard, Esben
2016-01-01
A novel computational method for simulating fatigue-driven mixed-mode delamination cracks in laminated structures under cyclic loading is presented. The proposed fatigue method is based on linking a cohesive zone model for quasi-static crack growth with a Paris' law-like model described... ...on parameter fitting of any kind. The method has been implemented as a zero-thickness eight-node interface element for Abaqus and as a spring element for a simple finite element model in MATLAB. The method has been validated in simulations of mode I, mode II, and mixed-mode crack loading for both self-similar and non-self-similar crack propagation. It produces highly accurate results compared with currently available methods and is capable of simulating general mixed-mode non-self-similar crack growth problems.
Simulation of thermal behavior of residential buildings using fuzzy active learning method
Directory of Open Access Journals (Sweden)
Masoud Taheri Shahraein
2015-01-01
In this paper, a fuzzy modeling technique called the Modified Active Learning Method (MALM) is introduced and utilized for fuzzy simulation of indoor and inner-surface temperatures in residential buildings using meteorological data; its capability for fuzzy simulation is compared with other studies. The case studies for the simulations were two residential apartments in the Fakouri and Rezashahr neighborhoods of Mashhad, Iran. Hourly inner-surface and indoor temperature data were accumulated during measurements taken in 2010 and 2011 in different rooms of the apartments under heating and natural-ventilation conditions. Hourly meteorological data (dry-bulb temperature, wind speed and direction, and solar radiation) were measured by a meteorological station and used, with lags of zero to three hours, as input variables for the simulation of inner-surface and indoor temperatures. The results of the simulations demonstrated the capability of MALM for nonlinear fuzzy simulation of inner-surface and indoor temperatures in residential apartments.
Integrating Data Analytics and Simulation Methods to Support Manufacturing Decision Making
Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong
2017-01-01
Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of the streaming data collected are beyond simulation analysis alone, since simulation models must be run with well-prepared data. Novel approaches combining different methods are needed to use this data for making guided decisions. This paper proposes a methodology whereby the parameters that most affect system performance are extracted from the data using data analytics methods; these parameters are used to develop scenarios for simulation inputs, and system optimizations are performed on the simulation outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363
Core Spreading Vortex Method for Simulating 3D Flows Around Bluff Bodies
Directory of Open Access Journals (Sweden)
Lavi R. Zuhal
2014-12-01
This paper presents the development of the core spreading vortex element method, a mesh-free method for simulating 3D viscous flow over bluff bodies. The developed method simulates external flow around complex geometry by tracking the local velocities and vorticities of particles introduced within the fluid domain. The viscous effect is modeled using the core spreading method, coupled with a splitting spatial adaptation scheme to handle the core-overlapping issue and a smoothing interpolation scheme for population control. The particle velocity is calculated using the Biot-Savart formulation, and the Fast Multipole Method (FMM) is employed to accelerate the computation. The solver is validated on a number of benchmark problems, for both unbounded and bounded flows at low Reynolds numbers. For the unbounded case, a simulation of the collision of two vortex rings was performed; to test the performance of the method on a bounded flow problem, a simulation of flow around a sphere was carried out. The results are found to be in good agreement with those reported in the literature and with simulations using other diffusion models.
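The Biot-Savart summation at the heart of such a solver evaluates, at each point, the velocity induced by every vortex particle. The sketch below uses the singular kernel with one common sign convention (conventions differ between references); an actual core spreading method replaces it with a regularized finite-core kernel, and FMM replaces the O(N^2) double loop:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def induced_velocity(x, particles):
    """Velocity induced at point x by vortex particles (x_p, alpha_p):
        u(x) = (1 / 4 pi) * sum_p alpha_p x (x - x_p) / |x - x_p|^3
    (singular Biot-Savart kernel; no core regularization here)."""
    u = [0.0, 0.0, 0.0]
    for xp, alpha in particles:
        r = [x[i] - xp[i] for i in range(3)]
        r3 = math.sqrt(sum(c * c for c in r)) ** 3
        w = cross(alpha, r)
        for i in range(3):
            u[i] += w[i] / (4.0 * math.pi * r3)
    return u

# a unit-strength z-directed particle at the origin induces counterclockwise
# swirl in the x-y plane: u((1,0,0)) = (0, 1/(4*pi), 0)
```

Core spreading then models viscosity by growing each particle's core radius in time, which is exactly why the splitting scheme is needed to keep cores overlapping correctly.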
Yiluan, Guo; Guilei, Wang; Chao, Zhao; Jun, Luo
2015-08-01
A new simulation method and a test instrument have been adopted to verify the traditional stress simulation in FinFETs. First, a new algorithm named lattice kinetic Monte Carlo (LKMC) is used to simulate the SiGe epitaxy in the source/drain regions, and the stress distribution is extracted after the LKMC simulation. A systematic comparison between the traditional polyhedron method and the LKMC method is carried out; the results confirm that the stress extracted by the two methods is consistent, which verifies the validity of the traditional polyhedron method for simulating stress in FinFETs. In the subsequent experiment, p-type FinFETs with SiGe stressors in the source/drain regions are fabricated, and the nano-beam diffraction (NBD) method is employed to characterize the strain in the Si fin. The strain value from the NBD test agrees well with the value extracted from the traditional polyhedron simulation. Project supported by the "National S&T Major Project 02", the Opening Project of Microelectronics Devices & Bulk Si FinFET Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences (No. 2013ZX02303007-001).
Ngada, Narcisse
2015-06-15
The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding of how systems really operate. This paper helps the reader gain insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, together with their leading tools, are presented: analog and numerical simulation. Some practical applications of each simulation type are also considered. The conclusion summarizes the main items to keep in mind before opting for a simulation tool or performing a simulation.
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Geant4 based Monte Carlo simulation for verifying the modified sum-peak method.
Aso, Tsukasa; Ogata, Yoshimune; Makino, Ryuta
2017-09-14
The modified sum-peak method can estimate radioactivity practically, using solely the peak and sum-peak count rates. In order to verify the method efficiently under various experimental conditions, a Geant4-based Monte Carlo simulation of a high-purity germanium detector system was applied. The energy spectra in the detector were simulated for a 60Co point source at various source-to-detector distances. The calculated radioactivity shows good agreement with the number of decays in the simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
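For context, the classic (unmodified) sum-peak relation that the method above refines can be derived in a few lines. For a two-photon cascade such as 60Co, with full-energy-peak efficiencies e1, e2 and total efficiencies t1, t2: n1 = A·e1·(1-t2), n2 = A·e2·(1-t1), n12 = A·e1·e2, and the total rate t = A·(t1+t2-t1·t2), so n1·n2/n12 = A·(1-t1)(1-t2) = A - t. Angular correlation, background, and dead time are ignored in this textbook form:

```python
def sum_peak_activity(n1, n2, n12, t):
    """Classic sum-peak estimate of the decay rate A of a two-photon
    cascade from peak rates n1, n2, sum-peak rate n12 and total rate t:
        A = n1 * n2 / n12 + t
    (textbook relation; the paper's 'modified' method refines it)."""
    return n1 * n2 / n12 + t

# self-consistency check with assumed efficiencies (illustrative values):
# A = 1000, e1 = 0.05, e2 = 0.04, t1 = 0.20, t2 = 0.15
# n1 = 42.5, n2 = 32.0, n12 = 2.0, t = 320.0 -> recovers A = 1000 exactly
```

Its appeal, and the reason the simulation study can verify it purely from count rates, is that no absolute efficiency calibration enters the formula.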
A method for data handling numerical results in parallel OpenFOAM simulations
Energy Technology Data Exchange (ETDEWEB)
Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania); Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania)
2015-12-31
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating-point data. Our method is most efficient on large simulation meshes and is much better suited to compressing large-scale simulation results than the regular algorithms.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
Shock waves simulated using the dual domain material point method combined with molecular dynamics
Zhang, Duan Z.; Dhakal, Tilak R.
2017-04-01
In this work we combine the dual domain material point method with molecular dynamics in an attempt to create a multiscale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically nonequilibrium state, and conventional constitutive relations or equations of state are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from a molecular dynamics simulation of a group of atoms surrounding the material point. Rather than restricting the multiscale simulation to a small spatial region, such as a phase interface or a crack tip, this multiscale method can be used to consider nonequilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points communicate only with mesh nodes, not among themselves; molecular dynamics simulations for the material points can therefore be performed independently in parallel. The dual domain material point method is chosen for this multiscale scheme because it can be used in history-dependent problems with large deformation without generating numerical noise as material points move across cells, and because of its convergence and conservation properties. To demonstrate the feasibility and accuracy of the method, we compare the results for a shock wave propagating in a cerium crystal calculated using direct molecular dynamics simulation with the results of the combined multiscale calculation.
Optimized Mooring Line Simulation Using a Hybrid Method Time Domain Scheme
DEFF Research Database (Denmark)
Christiansen, Niels Hørbye; Voie, Per Erlend Torbergsen; Høgsberg, Jan
2014-01-01
Dynamic analyses of slender marine structures are computationally expensive. Recently it has been shown how a hybrid method which combines FEM models and artificial neural networks (ANN) can be used to reduce the computation time spent on the time domain simulations associated with fatigue analysis of mooring lines by two orders of magnitude. The present study shows how an ANN trained to perform nonlinear dynamic response simulation can be optimized using a method known as optimal brain damage (OBD) and thereby be used to rank the importance of all analysis inputs. Both the training and the optimization of the ANN are based on one short time domain simulation sequence generated by a FEM model of the structure. This means that it is possible to evaluate the importance of input parameters based on this single simulation only. The method is tested on a numerical model of mooring lines on a floating offshore...
DEFF Research Database (Denmark)
Petersen, Steffen; Svendsen, Svend
2011-01-01
A method for simulating predictive control of building systems operation in the early stages of building design is presented. The method uses building simulation based on weather forecasts to predict whether there is a future heating or cooling requirement. This information enables the thermal control systems of the building to respond proactively to keep the operational temperature within the thermal comfort range with the minimum use of energy. The method is implemented in an existing building simulation tool designed to inform decisions in the early stages of building design through parametric analysis. This enables building designers to predict the performance of the method and include it as a part of the solution space. The method furthermore facilitates the task of configuring appropriate building systems control schemes in the tool, and it eliminates time-consuming manual...
Borštnik, Urban; Miller, Benjamin T; Brooks, Bernard R; Janežič, Dušanka
2011-11-15
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. Copyright © 2011 Wiley Periodicals, Inc.
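As background for the record above, the basic force-decomposition idea (not the distributed-diagonal variant itself, whose details the abstract does not give) partitions the N x N matrix of pairwise interactions into blocks, each assigned to one processor. A serial Python sketch showing that a 2x2 block decomposition reproduces the full force evaluation; the Lennard-Jones setup and particle positions are illustrative assumptions:

```python
import numpy as np

def lj_pair_force(ri, rj, eps=1.0, sigma=1.0):
    """Lennard-Jones force exerted on particle i by particle j."""
    d = ri - rj
    r2 = d @ d
    s6 = (sigma**2 / r2)**3
    return (24*eps/r2) * (2*s6**2 - s6) * d

def forces_for_block(pos, rows, cols):
    """Forces contributed by one block of the pair-interaction matrix.
    In a parallel code each processor would evaluate one such block."""
    F = np.zeros_like(pos)
    for i in rows:
        for j in cols:
            if i < j:                      # handle each pair exactly once
                fij = lj_pair_force(pos[i], pos[j])
                F[i] += fij
                F[j] -= fij                # Newton's third law
    return F

# 12 particles on a jittered grid (spacing > sigma, so forces stay moderate)
pos = 1.5*np.array([[i, j, k] for i in range(3)
                    for j in range(2) for k in range(2)], float)
pos += np.random.default_rng(0).uniform(-0.2, 0.2, pos.shape)

idx = np.arange(12)
halves = [idx[:6], idx[6:]]
# sum the four blocks of the decomposition (two "processors" per dimension)
F_decomp = sum(forces_for_block(pos, r, c) for r in halves for c in halves)
F_full = forces_for_block(pos, idx, idx)   # reference: single full evaluation
```

Because each unordered pair falls into exactly one block, the summed block forces match the full evaluation; load balancing and communication cost are then what distinguish schemes such as the distributed-diagonal method.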
Transient Response of a Projectile in Gun Launch Simulation Using Lagrangian and Ale Methods
Directory of Open Access Journals (Sweden)
A Tabiei
2016-09-01
This paper describes the usefulness of Lagrangian and arbitrary Lagrangian/Eulerian (ALE) methods in simulating the gun launch dynamics of a generic artillery component subjected to launch simulation in an air gun test. Lagrangian and ALE methods are used to simulate the impact mitigation environment in which the kinetic energy of a projectile is absorbed by the crushing of an aluminum honeycomb mitigator. In order to solve the problem of high-impact penetration, a new fluid-structure coupling algorithm is developed and implemented in LS-DYNA, a three-dimensional FEM code. The fluid-structure coupling algorithm, combined with the ALE formulation for the aluminum honeycomb mitigator, makes it possible to solve problems for which the contact algorithm in the Lagrangian calculation fails due to high mesh distortion. The numerical method used for the fluid and for fluid-structure coupling is discussed. A new coupling method is used in order to prevent mesh distortion. Issues related to the effectiveness of these methods in simulating a high degree of distortion of the aluminum honeycomb mitigator with the commonly used material models (metallic honeycomb and crushable foam) are discussed. Both computational methods lead to the same prediction for the deceleration of the test projectile and are able to simulate its behavior. Good agreement between the test results and the predicted projectile response is achieved with the presented models and methods.
Petascale molecular dynamics simulation using the fast multipole method on K computer
Ohno, Yousuke
2014-10-01
In this paper, we report all-atom simulations of molecular crowding - a result from the full node simulation on the "K computer", which is a 10-PFLOPS supercomputer in Japan. The capability of this machine enables us to perform simulation of crowded cellular environments, which are more realistic compared to conventional MD simulations where proteins are simulated in isolation. Living cells are "crowded" because macromolecules comprise ∼30% of their molecular weight. Recently, the effects of crowded cellular environments on protein stability have been revealed through in-cell NMR spectroscopy. To measure the performance of the "K computer", we performed all-atom classical molecular dynamics simulations of two systems: target proteins in a solvent, and target proteins in an environment of molecular crowders that mimic the conditions of a living cell. Using the full system, we achieved 4.4 PFLOPS during a 520 million-atom simulation with cutoff of 28 Å. Furthermore, we discuss the performance and scaling of fast multipole methods for molecular dynamics simulations on the "K computer", as well as comparisons with Ewald summation methods. © 2014 Elsevier B.V. All rights reserved.
van der Stelt, A.A.; Bor, Teunis Cornelis; Geijselaers, Hubertus J.M.; Quak, W.; Akkerman, Remko; Huetink, Han; Menary, G
2011-01-01
In this paper, the material flow around the pin during friction stir welding (FSW) is simulated using a 2D plane strain model. A pin rotates without translation in a disc with elasto-viscoplastic material properties and the outer boundary of the disc is clamped. Two numerical methods are used to
Innovative teaching methods in the professional training of nurses – simulation education
Directory of Open Access Journals (Sweden)
Michaela Miertová
2013-12-01
Introduction: The article aims to highlight the use of innovative teaching methods within simulation education in the professional training of nurses abroad and to present our experience from an intensive study programme at the School of Nursing, Midwifery and Social Work, University of Salford (United Kingdom) within the Intensive EU Lifelong Learning Programme (LLP) Erasmus EU RADAR 2013. Methods: Implementation of simulation methods such as role-play, case studies, simulation scenarios, practical workshops and clinical skills workstations within the structured ABCDE approach (AIM© Assessment and Management Tool) was aimed at promoting the development of the theoretical knowledge and skills needed to recognize and manage acutely deteriorating patients. The structured SBAR approach (Acute SBAR Communication Tool) was used for training communication and information sharing among the members of the multidisciplinary health care team. The OSCE approach (Objective Structured Clinical Examination) was used for individual formative assessment of students. Results: Simulation education has proved to have many benefits in the professional training of nurses. It is held in safe, controlled and realistic conditions (in simulation laboratories reflecting real hospital and community care environments) with no risk of harming real patients, and is accompanied by debriefing, discussion and analysis of all activities students have performed within the simulated scenario. Such a learning environment is supportive, challenging, constructive, motivating, engaging, flexible, inspiring and respectful. Simulation education is thus an effective, interactive, interesting, efficient and modern form of nursing education. Conclusion: Critical thinking and the clinical competences of nurses are crucial for early recognition of, and appropriate response to, acute deterioration of a patient's condition. These competences are important to ensure the provision of high-quality nursing care. Methods of
Directory of Open Access Journals (Sweden)
Song Fujian
2012-09-01
Background: Indirect treatment comparison (ITC) and mixed treatment comparisons (MTC) have been increasingly used in network meta-analyses. This simulation study comprehensively investigated the statistical properties and performance of commonly used ITC and MTC methods, including simple ITC (the Bucher method) and frequentist and Bayesian MTC methods. Methods: A simple network of three sets of two-arm trials with a closed loop was simulated. Different simulation scenarios were based on different numbers of trials, assumed treatment effects, and extents of heterogeneity, bias and inconsistency. The performance of the ITC and MTC methods was measured by the type I error, statistical power, observed bias and mean squared error (MSE). Results: When there are no biases in primary studies, all ITC and MTC methods investigated are on average unbiased. Depending on the extent and direction of biases in different sets of studies, ITC and MTC methods may be more or less biased than direct treatment comparisons (DTC). Of the methods investigated, the simple ITC method has the largest MSE. The DTC is superior to the ITC in terms of statistical power and MSE. Under the simulated circumstances in which there are no systematic biases and inconsistencies, the performance of the MTC methods is generally better than that of the corresponding DTC methods. For inconsistency detection in network meta-analysis, the methods evaluated are on average unbiased. The statistical power of commonly used methods for detecting inconsistency is very low. Conclusions: The available methods for indirect and mixed treatment comparisons have different advantages and limitations, depending on whether the data analysed satisfy the underlying assumptions. To choose the most valid statistical methods for research synthesis, an appropriate assessment of the primary studies included in the evidence network is required.
A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4
Park, Young-Keun; Fahrenthold, Eric P.
2004-01-01
An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.
Analysis on the influence of supply method on a workstation with the help of dynamic simulation
Directory of Open Access Journals (Sweden)
Gavriluță Alin
2017-01-01
Considering the need for flexibility in any manufacturing process, the choice of the supply method for an assembly workstation is a decision with significant influence on its performance. Using dynamic simulation, this article compares the effect on workstation cycle time of three different supply methods: supply from stock, supply in the "Strike Zone" and synchronous supply. This study is part of an extended work that aims to compare, by 3D layout design and dynamic simulation, the effect of different supply methods on assembly line performance.
Simulating ligand-induced conformational changes in proteins using a mechanical disassembly method.
Cortés, Juan; Le, Duc Thanh; Iehl, Romain; Siméon, Thierry
2010-08-01
Simulating protein conformational changes induced or required by the internal diffusion of a ligand is important for understanding their interaction mechanisms. Such simulations are challenging for currently available computational methods. In this paper, the problem is formulated as a mechanical disassembly problem in which the protein and the ligand are modeled as articulated mechanisms, and an efficient method for computing molecular disassembly paths is described. The method extends recent techniques developed in the framework of robot motion planning. Results illustrating the capabilities of the approach are presented for two biologically interesting systems involving ligand-induced conformational changes: lactose permease (LacY) and the beta(2)-adrenergic receptor.
DEFF Research Database (Denmark)
Cook, Gerald; Lin, Ching-Fang
1980-01-01
The local linearization algorithm is presented as a possible numerical integration scheme for use in real-time simulation. A second-order nonlinear example problem is solved using different methods. The local linearization approach is shown to require less computing time and give significant improvement in accuracy over the classical second-order integration methods.
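The abstract does not spell out the scheme, but a standard local linearization step for x' = f(x) freezes the Jacobian at the current state and integrates the resulting linear system exactly over the step. A minimal Python sketch using the augmented-matrix exponential trick; the pendulum example and step size are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.linalg import expm

def local_linearization_step(f, jac, x, h):
    """One local linearization step for x' = f(x).

    Linearize the RHS about the current state and integrate the linear
    ODE exactly over h via the augmented matrix M = [[J, f(x)], [0, 0]]:
    expm(M*h)[:n, n] equals the exact increment of the linearized system,
    with no explicit inversion of the Jacobian required.
    """
    n = x.size
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = jac(x)
    M[:n, n] = f(x)
    return x + expm(M * h)[:n, n]

# Second-order nonlinear example: the pendulum theta'' = -sin(theta),
# written as a first-order system x = (theta, theta').
f = lambda x: np.array([x[1], -np.sin(x[0])])
jac = lambda x: np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

x = np.array([0.1, 0.0])          # small initial angle, at rest
for _ in range(100):              # integrate to t = 1 with h = 0.01
    x = local_linearization_step(f, jac, x, 0.01)
```

For small amplitudes the trajectory stays very close to the linear solution 0.1*cos(t), which is one way to sanity-check the integrator.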
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés [Departamento de Física, Facultad de Ciencias, Universidad de Chile (Chile)
2016-07-07
Monte Carlo simulation of gamma spectroscopy systems is common practice nowadays, the most popular software packages being the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it had previously been demonstrated experimentally only for cylindrical sources. Given the difficulty of preparing sources of arbitrary shape, the simplest way to proceed is to simulate the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. Matrix effects (self-attenuation) are not considered in the simulation, so these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the full energy peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The results show complete agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method, with a relative bias of less than 1% in all cases.
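The "traditional method" named above, the statistical count of the FEP area, reduces to dividing the net counts recorded in the full energy peak by the number of photons the source emitted during the measurement. A small sketch with made-up numbers; the source activity, live time, and gamma yield are illustrative assumptions, not data from the study:

```python
def fep_efficiency(net_counts, activity_bq, live_time_s, gamma_yield):
    """Absolute full-energy-peak efficiency: counts recorded in the
    peak divided by photons of that energy emitted by the source."""
    emitted = activity_bq * live_time_s * gamma_yield
    return net_counts / emitted

# Hypothetical Cs-137 measurement (661.7 keV line, yield ~0.851)
eff = fep_efficiency(net_counts=52_000,
                     activity_bq=37_000,    # ~1 uCi source, assumed
                     live_time_s=600,
                     gamma_yield=0.851)
```

The intrinsic spatial efficiency method instead builds the absolute efficiency from a position-dependent efficiency map, which is what makes it applicable to extended sources of arbitrary shape.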
Application of a perturbation method for realistic dynamic simulation of industrial robots
Waiboer, R.R.; Aarts, Ronald G.K.M.; Jonker, Jan B.
2005-01-01
This paper presents the application of a perturbation method for the closed-loop dynamic simulation of a rigid-link manipulator with joint friction. In this method the perturbed motion of the manipulator is modelled as a first-order perturbation of the nominal manipulator motion. A non-linear finite
Comparison of Two Methods for Speeding Up Flash Calculations in Compositional Simulations
DEFF Research Database (Denmark)
Belkadi, Abdelkrim; Yan, Wei; Michelsen, Michael Locht
2011-01-01
Flash calculation is the most time consuming part in compositional reservoir simulations and several approaches have been proposed to speed it up. Two recent approaches proposed in the literature are the shadow region method and the Compositional Space Adaptive Tabulation (CSAT) method. The shado...
Directory of Open Access Journals (Sweden)
V. A. Bazhenov
2013-01-01
It is shown that both methods are applicable and give coinciding results for a system with elastic rigid impact under periodic external loading. Loading curves built by the parameter continuation method confirm this result. Impact simulation by the second method is also performed for a vibroimpact system with rigid impact under random external loading. For a vibroimpact system with soft impact, the simulation of impact by the second method gives a better result. The application of a linear elastic contact force is also possible, but the use of Hertz's contact force is preferable. The authors consider that impact simulation by the Hertz contact interaction force gives good results for nonlinear vibroimpact systems with impacts of any kind, provided the limitations of Hertz's law are observed.
A New Method to Simulate Free Surface Flows for Viscoelastic Fluid
Directory of Open Access Journals (Sweden)
Yu Cao
2015-01-01
Free surface flows arise in a variety of engineering applications. To predict the dynamic characteristics of such problems, specific numerical methods are required to accurately capture the shape of the free surface. This paper proposes a new method which combines the Arbitrary Lagrangian-Eulerian (ALE) technique with the Finite Volume Method (FVM) to simulate time-dependent viscoelastic free surface flows. Based on an open source CFD toolbox called OpenFOAM, we designed an ALE-FVM free surface simulation platform. The die-swell flow was then investigated with the proposed platform to further analyze the free surface phenomenon. The results validate the correctness and effectiveness of the proposed method for free surface simulation of both Newtonian and viscoelastic fluids.
Directory of Open Access Journals (Sweden)
Cristina Portalés
2017-06-01
The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them being based on planar homographies and some requiring an extended calibration process. The aim of our research work is to design a fast and user-friendly method to provide multi-projector calibration on analytically defined screens, where a sample is shown for a virtual reality Formula 1 simulator that has a cylindrical screen. The proposed method results from the combination of surveying, photogrammetry and image processing approaches, and has been designed by considering the spatial restrictions of virtual reality simulators. The method has been validated from a mathematical point of view, and the complete system—which is currently installed in a shopping mall in Spain—has been tested by different users.
Accelerated molecular dynamics and equation-free methods for simulating diffusion in solids.
Energy Technology Data Exchange (ETDEWEB)
Deng, Jie; Zimmerman, Jonathan A.; Thompson, Aidan Patrick; Brown, William Michael (Oak Ridge National Laboratories, Oak Ridge, TN); Plimpton, Steven James; Zhou, Xiao Wang; Wagner, Gregory John; Erickson, Lindsay Crowl
2011-09-01
Many of the most important and hardest-to-solve problems related to the synthesis, performance, and aging of materials involve diffusion through the material or along surfaces and interfaces. These diffusion processes are driven by motions at the atomic scale, but traditional atomistic simulation methods such as molecular dynamics are limited to very short timescales on the order of the atomic vibration period (less than a picosecond), while macroscale diffusion takes place over timescales many orders of magnitude larger. We have completed an LDRD project with the goal of developing and implementing new simulation tools to overcome this timescale problem. In particular, we have focused on two main classes of methods: accelerated molecular dynamics methods that seek to extend the timescale attainable in atomistic simulations, and so-called 'equation-free' methods that combine a fine scale atomistic description of a system with a slower, coarse scale description in order to project the system forward over long times.
Directory of Open Access Journals (Sweden)
Peng Chen
2016-12-01
Urban waterlogging seriously threatens the safety of urban residents and property. Wargame simulation research on resident emergency evacuation from waterlogged areas can determine the effectiveness of emergency response plans for high-risk events at low cost. Based on wargame theory and emergency evacuation plans, we used a wargame exercise method, incorporating qualitative and quantitative aspects, to build a wargame exercise and evaluation model for urban waterlogging disaster emergency sheltering. The simulation was empirically tested in the Daoli District of Harbin. The wargame simulation scored 96.40 points, evaluated as good. The simulation results show that wargame simulation of urban waterlogging emergency procedures for disaster response can improve the flexibility and capacity for command, management and decision-making in emergency management departments.
Chen, Peng; Zhang, Jiquan; Sun, Yingyue; Liu, Xiaojing
2016-12-21
Urban waterlogging seriously threatens the safety of urban residents and properties. Wargame simulation research on resident emergency evacuation from waterlogged areas can determine the effectiveness of emergency response plans for high risk events at low cost. Based on wargame theory and emergency evacuation plans, we used a wargame exercise method, incorporating qualitative and quantitative aspects, to build an urban waterlogging disaster emergency shelter using a wargame exercise and evaluation model. The simulation was empirically tested in Daoli District of Harbin. The results showed that the wargame simulation scored 96.40 points, evaluated as good. From the simulation results, wargame simulation of urban waterlogging emergency procedures for disaster response can improve the flexibility and capacity for command, management and decision-making in emergency management departments.
Jamali, J; Moini, R; Sadeghi, H
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.
Multiscale Lattice Boltzmann method for flow simulations in highly heterogenous porous media
Li, Jun
2013-01-01
A lattice Boltzmann method (LBM) for flow simulations in highly heterogeneous porous media at both pore and Darcy scales is proposed in the paper. In the pore scale simulations, flow of two phases (e.g., oil and gas) or two immiscible fluids (e.g., water and oil) are modeled using cohesive or repulsive forces, respectively. The relative permeability can be computed using pore-scale simulations and seamlessly applied for intermediate and Darcy-scale simulations. A multiscale LBM that can reduce the computational complexity of existing LBM and transfer the information between different scales is implemented. The results of coarse-grid, reduced-order, simulations agree very well with the averaged results obtained using fine grid.
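As a concrete illustration of the single-phase LBM building block on which multiscale schemes like the one above rest, here is a minimal D2Q9 BGK simulation of body-force-driven channel flow with bounce-back walls. This is a generic textbook sketch, not the paper's multiscale code, and all parameter values are assumptions:

```python
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])   # D2Q9 velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)             # lattice weights
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])          # opposite directions

def feq(rho, ux, uy):
    """Second-order equilibrium distribution."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

nx, ny, tau, g = 8, 17, 0.8, 1e-5        # grid, relaxation time, body force
solid = np.zeros((nx, ny), bool)
solid[:, 0] = solid[:, -1] = True        # channel walls (bounce-back nodes)
f = feq(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(6000):
    for i in range(9):                   # streaming, periodic in x
        f[i] = np.roll(f[i], (c[i, 0], c[i, 1]), axis=(0, 1))
    f[:, solid] = f[opp][:, solid]       # full-way bounce-back at walls
    rho = f.sum(0)
    ux = (f*c[:, 0, None, None]).sum(0)/rho + tau*g  # body-force shift
    uy = (f*c[:, 1, None, None]).sum(0)/rho
    post = f + (feq(rho, ux, uy) - f)/tau            # BGK collision
    f[:, ~solid] = post[:, ~solid]       # collide fluid nodes only

profile = ux[0, 1:-1]                    # streamwise velocity across channel
```

At steady state the profile is the expected Poiseuille parabola, symmetric about the channel centerline; relative permeability computations at the pore scale amount to running this kind of solver in a resolved pore geometry instead of a plain channel.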
Apparatus and method for interaction phenomena with world modules in data-flow-based simulation
Xavier, Patrick G [Albuquerque, NM; Gottlieb, Eric J [Corrales, NM; McDonald, Michael J [Albuquerque, NM; Oppel, III, Fred J.
2006-08-01
A method and apparatus accommodate interaction phenomena in a data-flow-based simulation of a system of elements, by establishing meta-modules to simulate system elements and by establishing world modules associated with interaction phenomena. World modules are associated with proxy modules from a group of meta-modules associated with one of the interaction phenomena. The world modules include a communication world, a sensor world, a mobility world, and a contact world. World modules can be further associated with other world modules if necessary. Interaction phenomena are simulated in corresponding world modules by accessing member functions in the associated group of proxy modules. Proxy modules can be dynamically allocated at a desired point in the simulation to accommodate the addition of elements in the system of elements, such as a system of robots, a system of communication terminals, or a system of vehicles, being simulated.
Song, Fujian; Clark, Allan; Bachmann, Max O; Maas, Jim
2012-09-12
Indirect treatment comparison (ITC) and mixed treatment comparisons (MTC) have been increasingly used in network meta-analyses. This simulation study comprehensively investigated statistical properties and performances of commonly used ITC and MTC methods, including simple ITC (the Bucher method), frequentist and Bayesian MTC methods. A simple network of three sets of two-arm trials with a closed loop was simulated. Different simulation scenarios were based on different number of trials, assumed treatment effects, extent of heterogeneity, bias and inconsistency. The performance of the ITC and MTC methods was measured by the type I error, statistical power, observed bias and mean squared error (MSE). When there are no biases in primary studies, all ITC and MTC methods investigated are on average unbiased. Depending on the extent and direction of biases in different sets of studies, ITC and MTC methods may be more or less biased than direct treatment comparisons (DTC). Of the methods investigated, the simple ITC method has the largest mean squared error (MSE). The DTC is superior to the ITC in terms of statistical power and MSE. Under the simulated circumstances in which there are no systematic biases and inconsistencies, the performances of MTC methods are generally better than the performance of the corresponding DTC methods. For inconsistency detection in network meta-analysis, the methods evaluated are on average unbiased. The statistical power of commonly used methods for detecting inconsistency is very low. The available methods for indirect and mixed treatment comparisons have different advantages and limitations, depending on whether data analysed satisfies underlying assumptions. To choose the most valid statistical methods for research synthesis, an appropriate assessment of primary studies included in evidence network is required.
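The simple ITC (Bucher method) mentioned above has a closed form: the indirect estimate of A versus B via a common comparator C is the difference of the two direct estimates, with their variances adding. A sketch with illustrative numbers; the effect sizes and standard errors are made up, not taken from the study:

```python
from math import sqrt

def bucher_itc(d_ac, se_ac, d_bc, se_bc):
    """Bucher indirect comparison of A vs B via common comparator C.

    d_ac, d_bc: direct treatment effects (e.g., log odds ratios) of
    A vs C and B vs C; the indirect effect is their difference, and
    the variances add, which is why the ITC has a larger MSE than the
    corresponding direct comparison.
    """
    d_ab = d_ac - d_bc
    se_ab = sqrt(se_ac**2 + se_bc**2)
    ci95 = (d_ab - 1.96*se_ab, d_ab + 1.96*se_ab)
    return d_ab, se_ab, ci95

# Illustrative inputs: A vs C and B vs C log odds ratios with SEs
d_ab, se_ab, ci95 = bucher_itc(d_ac=-0.5, se_ac=0.15,
                               d_bc=-0.2, se_bc=0.10)
```

The widened standard error relative to either direct estimate is the mechanism behind the ITC's low statistical power reported in the simulation study.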
Peter, Emanuel; Dick, Bernhard; Baeurle, Stephan A.
2012-03-01
Signal proteins are able to adapt their response to a change in the environment, governing in this way a broad variety of important cellular processes in living systems. While conventional molecular-dynamics (MD) techniques can be used to explore the early signaling pathway of these protein systems at atomistic resolution, the high computational costs limit their usefulness for elucidating the multiscale transduction dynamics of most signaling processes, which occur on experimental timescales. To cope with this problem, we present in this paper a novel multiscale-modeling method based on a combination of the kinetic Monte Carlo and MD techniques, and demonstrate its suitability for investigating the signaling behavior of the photoswitch light-oxygen-voltage-2-Jα domain from Avena sativa (AsLOV2-Jα) and an AsLOV2-Jα-regulated photoactivable Rac1-GTPase (PA-Rac1), recently employed to control the motility of cancer cells through light stimulus. More specifically, we show that their signaling pathways begin with a rearrangement of residues and subsequent H-bond formation among amino acids near the flavin mononucleotide chromophore, causing a coupling between β-strands and subsequent detachment of a peripheral α-helix from the AsLOV2 domain. In the case of the PA-Rac1 system we find that this latter process induces the release of the AsLOV2 inhibitor from the switch II activation site of the GTPase, enabling signal activation through effector-protein binding. These applications demonstrate that our approach reliably reproduces the signaling pathways of complex signal proteins, ranging from nanoseconds up to seconds, at affordable computational costs.
Intercomparison of 3D pore-scale flow and solute transport simulation methods
Energy Technology Data Exchange (ETDEWEB)
Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.; Pasquali, Andrea; Schönherr, Martin; Kim, Kyungjoo; Perego, Mauro; Parks, Michael L.; Trask, Nathaniel; Balhoff, Matthew T.; Richmond, Marshall C.; Geier, Martin; Krafczyk, Manfred; Luo, Li-Shi; Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2016-09-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence
Methods for simulation-based analysis of fluid-structure interaction.
Energy Technology Data Exchange (ETDEWEB)
Barone, Matthew Franklin; Payne, Jeffrey L.
2005-10-01
Methods for analysis of fluid-structure interaction using high-fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
Lattice Boltzmann method used to simulate particle motion in a conduit
Directory of Open Access Journals (Sweden)
Dolanský Jindřich
2017-06-01
A three-dimensional numerical simulation of particle motion in a pipe with a rough bed is presented. The simulation, based on the lattice Boltzmann method (LBM), employs the hybrid diffuse bounce-back approach to model moving boundaries. The bed of the pipe is formed by stationary spherical particles of the same size as the moving particles. Particle movements are induced by gravitational and hydrodynamic forces. To evaluate the hydrodynamic forces, the Momentum Exchange Algorithm is used. The unified computational framework of the LBM makes it possible to simulate both the particle motion and the fluid flow and to study mutual interactions of the carrier liquid flow and particles, as well as particle-bed and particle-particle collisions. The Particle Tracking method is used to track particle motion, and the trajectories of simulated and experimental particles are compared to assess the correctness of the applied approach.
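The Momentum Exchange Algorithm mentioned above can be sketched for a D2Q9 lattice: the force on a solid is accumulated link by link from the populations that cross its surface. The data layout and names below are assumptions for illustration, not the code used in the paper.

```python
# D2Q9 lattice velocities and a momentum-exchange sketch
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, 1), (-1, -1), (1, -1)]
OPP = [0, 3, 4, 1, 2, 7, 8, 5, 6]   # index of the opposite direction

def momentum_exchange(boundary_links, f):
    """Hydrodynamic force on a solid from populations crossing its surface.

    Each link is (fluid_node, i), where direction i points from the fluid
    node into the solid; f[node][i] is the post-collision population there.
    The momentum transferred along a link is (f_i + f_opp) * c_i."""
    fx = fy = 0.0
    for node, i in boundary_links:
        transfer = f[node][i] + f[node][OPP[i]]
        fx += transfer * C[i][0]
        fy += transfer * C[i][1]
    return fx, fy

# Example: one boundary link from fluid node (0, 0) pointing in +x (i = 1)
f = {(0, 0): [0.0] * 9}
f[(0, 0)][1] = 2.0      # population streaming toward the solid
f[(0, 0)][3] = 1.0      # bounced-back population
force = momentum_exchange([((0, 0), 1)], f)
```

Summing such link contributions over a particle's whole surface gives the hydrodynamic force used to advance its rigid-body motion.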
Multi-Scale Bridge Wash Out Simulation During Tsunami by Using a Particle Method
Directory of Open Access Journals (Sweden)
Miyagawa Yoshiya
2016-01-01
In 2011, the huge tsunami caused by the Great East Japan Earthquake devastated much infrastructure along the Pacific coast of northeastern Japan. In particular, the collapse of bridges disrupted traffic, which delayed recovery after the disaster. In this study, the washout of a bridge is selected as the target problem and is reproduced by numerical simulation. For this purpose, the Smoothed Particle Hydrodynamics (SPH) method, a purely meshfree method, is utilized for the rigid body motion simulation, and rigid body motion is coupled with the fluid to capture the fluid-rigid interaction during bridge washout. In the numerical analysis, the upper bridge structure is washed away by the impact force of the fluid. The washout simulations of two types of bridge girder showed good agreement with the real accident during the Great East Japan Earthquake tsunami.
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
Shin, J.; Perl, J.; Schümann, J.; Paganetti, H.; Faddegon, B. A.
2012-06-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method.
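The separation the abstract describes, a Sequence that samples time values plus independent "Time Features" evaluated at each sampled time, can be sketched in a few lines. The function names and example quantities (a spinning wheel angle, a water column height) are hypothetical, not TOPAS code.

```python
import math
import random

def sequential_times(t_end, dt):
    """Sequence mode 1: sample times at equal increments."""
    n = int(round(t_end / dt))
    return [i * dt for i in range(n + 1)]

def random_times(t_end, n, rng):
    """Sequence mode 2: sample times uniformly at random in [0, t_end]."""
    return [rng.uniform(0.0, t_end) for _ in range(n)]

# Each time-dependent quantity is just a function of time (a "Time Feature").
def wheel_angle(t, period=0.1):
    """Range modulator wheel angle in degrees (hypothetical 0.1 s period)."""
    return (360.0 * t / period) % 360.0

def water_column(t):
    """Variable water column height in cm (hypothetical modulation)."""
    return 5.0 + 2.0 * math.sin(2.0 * math.pi * t)

# One loop drives any number of time-dependent quantities at any resolution.
history = [{"t": t, "angle": wheel_angle(t), "column": water_column(t)}
           for t in sequential_times(t_end=0.2, dt=0.05)]
```

Because each quantity only reads the sampled time, adding another time-dependent quantity never requires a second simulation pass, which mirrors the modularity claimed above.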
Directory of Open Access Journals (Sweden)
Fan Yuxin
2014-12-01
A fluid–structure interaction method combining a nonlinear finite element algorithm with a preconditioning finite volume method is proposed in this paper to simulate parachute transient dynamics. This method uses a three-dimensional membrane–cable fabric model to represent a parachute system in a highly folded configuration. The large shape change during parachute inflation is computed by nonlinear Newton–Raphson iteration, and the linear system equation is solved by the generalized minimal residual (GMRES) method. A membrane wrinkling algorithm is also utilized to evaluate the special uniaxial tension state of membrane elements on the parachute canopy. To avoid a large time expense during the nonlinear structural iteration, the implicit Hilber–Hughes–Taylor (HHT) time integration method is employed. For the fluid dynamic simulations, the Roe and HLLC (Harten–Lax–van Leer contact) schemes have been modified and extended to compute flow problems at all speeds. The lower–upper symmetric Gauss–Seidel (LU-SGS) approximate factorization is applied to accelerate numerical convergence. Finally, the test model of a highly folded C-9 parachute is simulated at a prescribed speed and the results show characteristics similar to experimental results and previous literature.
Directory of Open Access Journals (Sweden)
Ilaria Iaconeta
2017-09-01
The simulation of large deformation problems, involving complex history-dependent constitutive laws, is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique such that reliable results can be obtained. In this paper, a Material Point Method (MPM) and a Galerkin Meshfree Method (GMM) are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, both in static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literature are based on some sort of explicit time integration. The techniques proposed in the current work, on the contrary, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to, rather than the differences from, “standard” Updated Lagrangian (UL) approaches commonly employed by the Finite Element (FE) community. Although both methods are able to give a good prediction, it is observed that, under very large deformation of the medium, GMM lacks robustness due to its meshfree nature, which makes the definition of the meshless shape functions more difficult and expensive than in MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.
Solar panel thermal cycling testing by solar simulation and infrared radiation methods
Nuss, H. E.
1980-01-01
For the solar panels of the European Space Agency (ESA) satellites OTS/MAROTS and ECS/MARECS, the thermal cycling tests were performed using solar simulation methods. The performance data of the two different solar simulators used and the thermal test results are described. The solar simulation thermal cycling tests for the ECS/MARECS solar panels were carried out with the aid of a rotatable multipanel test rig, by which simultaneous testing of three solar panels was possible. As an alternative thermal test method, the capability of an infrared radiation method was studied, and infrared simulation tests for the ultralight panel and the INTELSAT 5 solar panels were performed. The setup and characteristics of the infrared radiation unit, which uses a quartz lamp array of approx. 15 sq and an LN2-cooled shutter, and the thermal test results are presented. The irradiation uniformity, solar panel temperature distribution, and temperature change rates for both test methods are compared. Results indicate that infrared simulation is an effective method for solar panel thermal testing.
Feitosa, V P; Gotti, V B; Grohmann, C V; Abuná, G; Correr-Sobrinho, L; Sinhoreti, M A C; Correr, A B
2014-09-01
To evaluate the effects of two methods of simulating physiological pulpal pressure on the dentine bonding performance of two all-in-one adhesives and a two-step self-etch silorane-based adhesive by means of microtensile bond strength (μTBS) and nanoleakage surveys. The self-etch adhesives [G-Bond Plus (GB), Adper Easy Bond (EB) and silorane adhesive (SIL)] were applied to flat deep dentine surfaces from extracted human molars. The restorations were constructed using resin composites Filtek Silorane or Filtek Z350 (3M ESPE). After 24 h under the two methods of simulated pulpal pressure or no pulpal pressure (control groups), the bonded teeth were cut into specimens and submitted to μTBS and silver uptake examination. Results were analysed with two-way ANOVA and Tukey's test (P < 0.05). No difference between control and pulpal pressure groups was found for SIL and GB. EB showed a significant drop (P = 0.002) in bond strength under pulpal pressure. Silver impregnation increased after both methods of simulated pulpal pressure for all adhesives, and it was similar between the simulated pulpal pressure methods. The innovative method of simulating pulpal pressure behaved similarly to the classic one and could be used as an alternative. The HEMA-free one-step and the two-step self-etch adhesives had acceptable resistance against pulpal pressure, unlike the HEMA-rich adhesive. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Development of Human Posture Simulation Method for Assessing Posture Angles and Spinal Loads
Lu, Ming-Lun; Waters, Thomas; Werren, Dwight
2015-01-01
Video-based posture analysis employing a biomechanical model is gaining popularity for ergonomic assessments. A human posture simulation method for estimating multiple body postural angles and spinal loads from a video record was developed to expedite ergonomic assessments. The method was evaluated by a repeated measures study design with three trunk flexion levels, two lift asymmetry levels, three viewing angles and three trial repetitions as experimental factors. The study comprised two phases evaluating the accuracy of simulating one's own and other people's lifting postures via a proxy of a computer-generated humanoid. The mean accuracies of simulating self and humanoid postures were 12° and 15°, respectively. The repeatability of the method for the same lifting condition was excellent (~2°). The least simulation error was associated with the side viewing angle. The estimated back compressive force and moment, calculated by a three-dimensional biomechanical model, exhibited underestimation within a range of 5%. The posture simulation method enables researchers to simultaneously quantify body posture angles and spinal loading variables with accuracy and precision comparable to on-screen posture matching methods. PMID:26361435
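As a rough illustration of the kind of spinal loading quantities mentioned above, a static sagittal-plane estimate of the L5/S1 moment and compression can be computed from posture-derived moment arms. This two-dimensional sketch with hypothetical segment parameters is far simpler than the three-dimensional biomechanical model used in the study.

```python
G = 9.81  # gravitational acceleration, m/s^2

def l5s1_moment(load_kg, load_arm_m, torso_kg=35.0, torso_arm_m=0.25):
    """Static extension moment (N*m) at L5/S1 for a sagittal lift.

    torso_kg and torso_arm_m are hypothetical upper-body parameters that a
    posture-matching step would normally derive from the estimated angles."""
    return G * (load_kg * load_arm_m + torso_kg * torso_arm_m)

def compressive_force(moment_nm, muscle_arm_m=0.05):
    """Spine compression (N) if the erector spinae alone balances the moment."""
    return moment_nm / muscle_arm_m

# Hypothetical lift: 10 kg load held 0.4 m in front of L5/S1.
moment = l5s1_moment(load_kg=10.0, load_arm_m=0.4)
compression = compressive_force(moment)
```

The sketch makes the sensitivity visible: errors in the posture-derived horizontal distances propagate linearly into the moment and, through the small muscle moment arm, are amplified roughly twenty-fold in the compression estimate.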
Directory of Open Access Journals (Sweden)
Sebastian Grundstein
2015-01-01
Production planning and control faces increasing uncertainty, dynamics and complexity. Autonomous control methods have proved to be a promising approach for coping with these challenges. However, there is a lack of knowledge regarding the interaction between autonomous control and preceding functions of production planning and control. In particular, previous research has paid no attention to the influence of order release methods on the efficiency of autonomous control methods, even though many researchers over the last decades have provided evidence that the order release function has great influence on logistic objective achievement in conventional production systems. Therefore, this paper examines the influence of order release methods on the efficiency of autonomous control methods by both theoretical evaluation and discrete event simulation. The simulation results indicate an overall high influence. Moreover, the logistic performance differs considerably depending on the implemented order release methods and the combinations of order release methods with autonomous control methods. The findings highlight demand for further research in this field.
Energy Technology Data Exchange (ETDEWEB)
Herbold, E. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walton, O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Homel, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-10-26
This document serves as a final report on a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers to investigate powder-bed formation problems for additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE split amongst two staff scientists and one post-doc. The DEM simulations emulated, as much as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The DEM simulations utilized truncated size distributions spanning realistic size ranges, with a size distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was utilized in these scoping studies in order to evaluate the potential effects of size segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated. Several simulations were performed to show the viability of the approach. Future investigations will focus on running various simulations investigating powder particle sizing and screed geometries.
Directory of Open Access Journals (Sweden)
R. Rabenstein
2004-06-01
The functional transformation method (FTM) is a well-established mathematical method for accurate simulations of multidimensional physical systems from various fields of science, including optics, heat and mass transfer, electrical engineering, and acoustics. This paper applies the FTM to real-time simulations of transversal vibrating strings. First, a physical model of a transversal vibrating lossy and dispersive string is derived. Afterwards, this model is solved with the FTM for two cases: the ideally linearly vibrating string and the string interacting nonlinearly with the frets. It is shown that accurate and stable simulations can be achieved with the discretization of the continuous solution at audio rate. Both simulations can also be performed with a multirate approach with only minor degradation of the simulation accuracy but with preservation of stability. This saves almost 80% of the computational cost for the simulation of a six-string guitar, bringing it into the range of the computational cost of digital waveguide simulations.
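The linear case, a continuous modal solution discretized at audio rate, can be sketched as a sum of damped sinusoids, one per transversal mode. This ignores the dispersion and fret interaction handled by the full FTM model, and all physical parameters below are hypothetical.

```python
import math

def string_modes(n_modes, length, tension, density):
    """Modal frequencies (Hz) of an ideal string: f_k = k*c/(2L), c = sqrt(T/rho)."""
    c = math.sqrt(tension / density)
    return [k * c / (2.0 * length) for k in range(1, n_modes + 1)]

def synthesize(freqs, weights, damping, fs, n_samples):
    """Discretize the continuous modal solution at audio rate fs.

    Each mode contributes w_k * exp(-d*t) * sin(2*pi*f_k*t); a uniform decay
    constant stands in for the frequency-dependent losses of the FTM model."""
    out = []
    for n in range(n_samples):
        t = n / fs
        out.append(sum(w * math.exp(-damping * t) * math.sin(2.0 * math.pi * f * t)
                       for f, w in zip(freqs, weights)))
    return out

# Hypothetical guitar-like string: 0.65 m, 60 N tension, 5 g/m linear density
freqs = string_modes(n_modes=8, length=0.65, tension=60.0, density=0.005)
samples = synthesize(freqs, weights=[1.0 / k for k in range(1, 9)],
                     damping=1.5, fs=44100, n_samples=64)
```

Evaluating closed-form modal solutions sample by sample is what makes stability easy to preserve here: there is no feedback loop whose coefficients could push poles outside the unit circle.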
More Than One Way to Debrief: A Critical Review of Healthcare Simulation Debriefing Methods.
Sawyer, Taylor; Eppich, Walter; Brett-Fleegler, Marisa; Grant, Vincent; Cheng, Adam
2016-06-01
Debriefing is a critical component in the process of learning through healthcare simulation. This critical review examines the timing, facilitation, conversational structures, and process elements used in healthcare simulation debriefing. Debriefing occurs either after (postevent) or during (within-event) the simulation. The debriefing conversation can be guided by either a facilitator (facilitator-guided) or the simulation participants themselves (self-guided). Postevent facilitator-guided debriefing may incorporate several conversational structures. These conversational structures break the debriefing discussion into a series of 3 or more phases to help organize the debriefing and ensure the conversation proceeds in an orderly manner. Debriefing process elements are an array of techniques to optimize reflective experience and maximize the impact of debriefing. These are divided here into the following 3 categories: essential elements, conversational techniques/educational strategies, and debriefing adjuncts. This review provides both novice and advanced simulation educators with an overview of various methods of conducting healthcare simulation debriefing. Future research will investigate which debriefing methods are best for which contexts and for whom, and also explore how lessons from simulation debriefing translate to debriefing in clinical practice.
Ilya Y. Zhbannikov; Konstantin G. Arbeev; Anatoliy I. Yashin
2017-01-01
Simulation is important in evaluating novel methods when input data is not easily obtainable or specific assumptions are needed. We present cophesim, a software to add the phenotype to generated genotype data prepared with a genetic simulator. The output of cophesim can be used as a direct input for different genome wide association study tools. cophesim is available from https://bitbucket.org/izhbannikov/cophesim.
Evaluation of Constant Potential Method in Simulating Electric Double-Layer Capacitors
Wang, Zhenxing; Yang, Yang; Olmsted, David L.; Asta, Mark; Laird, Brian B.
2014-01-01
A major challenge in the molecular simulation of electric double layer capacitors (EDLCs) is the choice of an appropriate model for the electrode. Typically, in such simulations the electrode surface is modeled using a uniform fixed charge on each of the electrode atoms, which ignores the electrode response to local charge fluctuations induced by charge fluctuations in the electrolyte. In this work, we evaluate and compare this Fixed Charge Method (FCM) with the more realistic Constant Potent...
Simulation of external flows using a hybrid particle mesh vortex method
DEFF Research Database (Denmark)
Spietz, Henrik; Hejlesen, Mads Mølholm; Walther, Jens Honore
The long-term goal of this project is to develop and apply state-of-the-art simulation software to enable accurate prediction of fluid-structure interaction, specifically vortex-induced vibration and flutter of long-span suspension bridges, to avoid error-prone structural designs. In the following, a hybrid particle mesh vortex method is applied for the simulation of uniform flow past stationary solid obstacles of arbitrary shapes.
Fast and accurate simulations of transmission-line metamaterials using transmission-matrix method
Ma, Hui Feng; Cui, Tie Jun; Chin, Jessie Yao; Cheng, Qiang
2009-01-01
Recently, two-dimensional (2D) periodically L and C loaded transmission-line (TL) networks have been applied to represent metamaterials. The commercial Agilent's Advanced Design System (ADS) is a commonly-used tool to simulate the TL metamaterials. However, it takes a lot of time to set up the TL network and perform numerical simulations using ADS, making the metamaterial analysis inefficient, especially for large-scale TL networks. In this paper, we propose transmission-matrix method (TMM) t...
GPU implementation of the Rosenbluth generation method for static Monte Carlo simulations
Guo, Yachong; Baulin, Vladimir A.
2017-07-01
We present a parallel version of the Rosenbluth self-avoiding walk generation method implemented on Graphics Processing Units (GPUs) using CUDA libraries. The method scales almost linearly with the number of CUDA cores, and its efficiency is limited only by hardware. The method is introduced in two realizations: on a cubic lattice and in real space. We find good agreement between serial and parallel implementations and consistent results between lattice and real space realizations of the method for linear chain statistics. The developed GPU implementations of the Rosenbluth algorithm can be used in Monte Carlo simulations and other computational methods that require large sampling of molecular conformations.
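The lattice realization of the Rosenbluth method is compact enough to sketch serially; the GPU version parallelizes the generation of many such independent chains. Names and parameters below are illustrative assumptions.

```python
import random

# The six nearest-neighbor steps on the cubic lattice
STEPS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def rosenbluth_walk(n_steps, rng):
    """Grow one self-avoiding walk, returning its Rosenbluth weight.

    At every step the walk chooses uniformly among unoccupied neighbors and
    multiplies its weight by (free neighbors)/(all neighbors); a trapped walk
    gets weight 0. Weight-averaging over many walks corrects the bias of
    only ever choosing self-avoiding continuations."""
    pos, occupied, weight = (0, 0, 0), {(0, 0, 0)}, 1.0
    for _ in range(n_steps):
        free = [(pos[0] + dx, pos[1] + dy, pos[2] + dz)
                for dx, dy, dz in STEPS
                if (pos[0] + dx, pos[1] + dy, pos[2] + dz) not in occupied]
        if not free:
            return 0.0  # dead end: the chain is trapped
        weight *= len(free) / len(STEPS)
        pos = rng.choice(free)
        occupied.add(pos)
    return weight

rng = random.Random(42)
weights = [rosenbluth_walk(20, rng) for _ in range(200)]
mean_weight = sum(weights) / len(weights)
```

Each walk touches only its own `occupied` set and weight, which is exactly why the method maps so cleanly onto one-chain-per-thread GPU execution.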
Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods
Chamis, Christos C.; Coroneos, Rula M.
2007-01-01
Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models, based on fundamental considerations, that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.
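The least squares half of the comparison can be illustrated with the closed-form fit of a straight line. The data below are made-up (void size, divot weight) pairs, not values from the report.

```python
def linear_least_squares(xs, ys):
    """Closed-form coefficients (a, b) minimizing sum of (y - (a + b*x))^2."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope from normal equations
    a = (sy - b * sx) / n                          # intercept
    return a, b

# Hypothetical (void size, divot weight) observations
void_sizes = [1.0, 2.0, 3.0, 4.0]
divot_weights = [2.1, 4.0, 6.1, 7.9]
a, b = linear_least_squares(void_sizes, divot_weights)
predicted = [a + b * x for x in void_sizes]
```

A quadratic neural network, by contrast, learns second-order feature interactions; when the underlying response is nearly linear, as the abstract's "identical results" suggests, both models collapse to the same prediction.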
A Cartesian cut cell method for rarefied flow simulations around moving obstacles
Energy Technology Data Exchange (ETDEWEB)
Dechristé, G., E-mail: Guillaume.Dechriste@math.u-bordeaux1.fr [Univ. Bordeaux, IMB, UMR 5251, F-33400 Talence (France); CNRS, IMB, UMR 5251, F-33400 Talence (France); Mieussens, L., E-mail: Luc.Mieussens@math.u-bordeaux1.fr [Univ. Bordeaux, IMB, UMR 5251, F-33400 Talence (France); CNRS, IMB, UMR 5251, F-33400 Talence (France); Bordeaux INP, IMB, UMR 5251, F-33400 Talence (France); INRIA, F-33400 Talence (France)
2016-06-01
For accurate simulations of rarefied gas flows around moving obstacles, we propose a cut cell method on Cartesian grids: it allows exact conservation and accurate treatment of boundary conditions. Our approach is designed to treat Cartesian cells and various kinds of cut cells by the same algorithm, with no need to identify the specific shape of each cut cell. This makes the implementation quite simple, and allows a direct extension to 3D problems. Such simulations are also made possible by using an adaptive mesh refinement technique and a hybrid parallel implementation. This is illustrated by several test cases, including a 3D unsteady simulation of the Crookes radiometer.
Comparison of multiple-criteria decision-making methods - results of simulation study
Directory of Open Access Journals (Sweden)
Michał Adamczak
2016-12-01
Background: Today, both researchers and practitioners have many methods for supporting the decision-making process. Due to the conditions in which supply chains function, the most interesting are multi-criteria methods. The use of sophisticated methods for supporting decisions requires parameterization and the execution of calculations that are often complex. So is it efficient to use sophisticated methods? Methods: The authors of the publication compared two popular multi-criteria decision-making methods: the Weighted Sum Model (WSM) and the Analytic Hierarchy Process (AHP). A simulation study reflects these two decision-making methods. Input data for this study was a set of criteria weights and the value of each alternative in terms of each criterion. Results: The iGrafx Process for Six Sigma simulation software recreated how both multiple-criteria decision-making methods (WSM and AHP) function. The result of the simulation was a numerical value defining the preference of each of the alternatives according to the WSM and AHP methods. The alternative producing a result of higher numerical value was considered preferred, according to the selected method. In the analysis of the results, the relationship between the values of the parameters and the difference in the results presented by both methods was investigated. Statistical methods, including hypothesis testing, were used for this purpose. Conclusions: The simulation study findings prove that the results obtained with the use of the two multiple-criteria decision-making methods are very similar. Differences occurred more frequently in lower-value parameters from the "value of each alternative" group and higher-value parameters from the "weight of criteria" group.
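Both methods under comparison are easy to sketch. WSM scores each alternative directly from the weight vector, while AHP derives priorities from pairwise comparison matrices; below, the AHP priority vector is approximated by the common normalized-column-average shortcut rather than the principal eigenvector. All numbers are hypothetical.

```python
def wsm_scores(weights, alternatives):
    """Weighted Sum Model: score_i = sum over j of w_j * value_ij."""
    return [sum(w * v for w, v in zip(weights, row)) for row in alternatives]

def ahp_priorities(pairwise):
    """Approximate AHP priority vector by averaging normalized columns."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

criteria_weights = [0.5, 0.3, 0.2]           # hypothetical criteria weights
alternatives = [[0.8, 0.6, 0.9],             # alternative A, value per criterion
                [0.7, 0.9, 0.6]]             # alternative B
scores = wsm_scores(criteria_weights, alternatives)
best = max(range(len(scores)), key=scores.__getitem__)

# AHP pairwise judgment: "criterion 1 is twice as important as criterion 2"
priorities = ahp_priorities([[1.0, 2.0],
                             [0.5, 1.0]])
```

With consistent judgments the two routes agree, which is one intuition for why the simulation study found the methods' rankings so similar.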
Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines
Directory of Open Access Journals (Sweden)
Ivo Prah
2016-09-01
The paper outlines a procedure for the computer-controlled calibration of a combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine the input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. Therein, physically based methods were used for steering the division of the integral ICE into several sub-models and for determining parameters of selected components considering their governing equations. The innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established methods that rely only on optimization techniques, successful calibration of a large number of input parameters with low time consumption. Therefore, the proposed method is suitable for efficient calibration of simulation models of advanced ICEs.
Directory of Open Access Journals (Sweden)
Ahmed Kibria
2015-01-01
The reliability modeling of a module in a turbine engine requires knowledge of its failure rate, which can be estimated by identifying statistical distributions describing the percentage of failure per component within the turbine module. The correct definition of the failure statistical behavior per component is highly dependent on the engineer's skills and may present significant discrepancies with respect to the historical data. There is no formal methodology to approach this problem, and a large number of labor hours are spent trying to reduce the discrepancy by manually adjusting the distributions' parameters. This paper addresses this problem and provides a simulation-based optimization method for the minimization of the discrepancy between the simulated and the historical percentage of failures for turbine engine components. The proposed methodology optimizes the parameter values of the components' failure statistical distributions within the components' likelihood confidence bounds. A complete test of the proposed method is performed on a turbine engine case study. The method can be considered a decision-making tool for maintenance, repair, and overhaul companies and will potentially reduce the cost of labor associated with finding the appropriate values of the distribution parameters for each component/failure mode in the model and increase the accuracy in the prediction of the mean time to failure (MTTF).
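The core idea, tuning a failure distribution's parameters until the simulated failure percentage matches history, can be sketched with a Weibull distribution and a brute-force search over candidate scale values. The distribution choice, the numbers, and the function names are all hypothetical simplifications of the paper's constrained optimization.

```python
import math
import random

def weibull_mttf(shape, scale):
    """Mean time to failure of Weibull(shape, scale): scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

def simulated_failure_fraction(shape, scale, horizon, n, rng):
    """Fraction of n simulated components failing before `horizon` hours."""
    return sum(rng.weibullvariate(scale, shape) < horizon for _ in range(n)) / n

def calibrate_scale(target, shape, horizon, candidates, n=20000, seed=3):
    """Pick the candidate scale whose simulated failure fraction best
    matches the historical fraction `target` (brute-force search stand-in
    for a proper optimizer over likelihood confidence bounds)."""
    rng = random.Random(seed)
    return min(candidates,
               key=lambda s: abs(simulated_failure_fraction(shape, s, horizon, n, rng)
                                 - target))

# Hypothetical history: 30% of components fail before 1000 h of operation.
best_scale = calibrate_scale(target=0.30, shape=2.0, horizon=1000.0,
                             candidates=[600.0, 1200.0, 1800.0, 2400.0, 3000.0])
mttf = weibull_mttf(2.0, best_scale)
```

In practice each candidate evaluation would be a full module-level reliability simulation, which is why the paper frames this as simulation-based optimization rather than a direct fit.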
A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks
Moraes, Alvaro
2016-07-07
In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reaction channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named the level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable to coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL^-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.
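The activity-based splitting can be sketched on a toy birth-death network: channels whose expected number of firings a_j * tau over the step is large are tau-leaped with Poisson increments, while the rest are handled one event at a time. The threshold, the rates, and the crude single-event treatment of slow channels below are simplifications for illustration, not the paper's algorithm.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for the modest rates used here)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def split_channels(propensities, tau, threshold=10.0):
    """Level of activity ~ expected firings a_j * tau; split fast vs. slow."""
    fast = [j for j, a in enumerate(propensities) if a * tau >= threshold]
    slow = [j for j, a in enumerate(propensities) if a * tau < threshold]
    return fast, slow

def hybrid_step(x, rates, stoich, tau, rng):
    """Advance state x over tau: tau-leap fast channels; fire at most one
    slow event, a crude stand-in for an exact method over the step."""
    props = [r(x) for r in rates]
    fast, slow = split_channels(props, tau)
    for j in fast:
        x += stoich[j] * poisson(props[j] * tau, rng)
    a_slow = sum(props[j] for j in slow)
    if slow and rng.random() < 1.0 - math.exp(-a_slow * tau):
        j = rng.choices(slow, weights=[props[j] for j in slow])[0]
        x += stoich[j]
    return max(x, 0)

# Toy network: fast production (rate 500/h), slow decay (0.01*x per h).
rng = random.Random(7)
x = 100
for _ in range(50):
    x = hybrid_step(x, [lambda s: 500.0, lambda s: 0.01 * s], [+1, -1], 0.05, rng)
```

Because the split is recomputed from the current propensities every step, a channel can migrate between the fast and slow subsets as the state evolves, which is the adaptivity the abstract describes.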
Method for simulating dose reduction in digital mammography using the Anscombe transformation.
Borges, Lucas R; Oliveira, Helder C R de; Nunes, Polyana F; Bakic, Predrag R; Maidment, Andrew D A; Vieira, Marcelo A C
2016-06-01
This work proposes an accurate method for simulating dose reduction in digital mammography starting from a clinical image acquired with a standard dose. The method developed in this work consists of scaling a mammogram acquired at the standard radiation dose and adding signal-dependent noise. The algorithm accounts for specific issues relevant in digital mammography images, such as anisotropic noise, spatial variations in pixel gain, and the effect of dose reduction on the detective quantum efficiency. The scaling process takes into account the linearity of the system and the offset of the detector elements. The inserted noise is obtained by acquiring images of a flat-field phantom at the standard radiation dose and at the simulated dose. Using the Anscombe transformation, a relationship is created between the calculated noise mask and the scaled image, resulting in a clinical mammogram with the same noise and gray level characteristics as an image acquired at the lower radiation dose. The performance of the proposed algorithm was validated using uniform images and real images acquired with an anthropomorphic breast phantom at four different doses, with five exposures for each dose and 256 nonoverlapping ROIs extracted from each image. The authors simulated lower-dose images, compared these with the real images, and evaluated the similarity between the normalized noise power spectrum (NNPS) and power spectrum (PS) of simulated and real images acquired with the same dose. The maximum relative error was less than 2.5% for every ROI. The added noise was also evaluated by measuring the local variance in the real and simulated images. The relative average error for the local variance was smaller than 1%. A new method is proposed for simulating dose reduction in clinical mammograms. In this method, the dependency between image noise and image signal is addressed using a novel application of the Anscombe transformation. NNPS, PS, and local noise
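The scale-then-add-noise idea can be sketched for the Poisson-dominated part of the signal. Scaling a count x by a dose ratio r shrinks its variance to r^2 * x, while a true low-dose image would have variance r * x, so zero-mean noise of variance (r - r^2) * x is injected; the Anscombe pair below is the variance-stabilizing transformation the paper builds its noise mask with. Detector offset, gain maps, and DQE effects from the full method are omitted here.

```python
import math
import random

def anscombe(x):
    """Variance-stabilizing transformation for Poisson data."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

def simulate_lower_dose(pixels, dose_ratio, rng):
    """Scale full-dose counts and inject the missing quantum noise."""
    out = []
    for x in pixels:
        scaled = dose_ratio * x
        extra_var = (dose_ratio - dose_ratio ** 2) * x  # r*x - r^2*x
        out.append(max(scaled + rng.gauss(0.0, math.sqrt(extra_var)), 0.0))
    return out

rng = random.Random(0)
full_dose = [1000.0] * 10000          # flat-field region, hypothetical counts
half_dose = simulate_lower_dose(full_dose, 0.5, rng)
mean = sum(half_dose) / len(half_dose)
var = sum((p - mean) ** 2 for p in half_dose) / (len(half_dose) - 1)
```

For r = 0.5 and 1000-count pixels the simulated region should keep its mean near 500 while its variance rises toward the 250 expected of a genuine half-dose Poisson signal, which is the property the NNPS comparison in the study verifies.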
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning-based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, at considerably lower computational cost than the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and general enough to predict synapse behavior under experimental conditions different from the ones it has been trained on. Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is
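As a toy illustration of the corpus-to-predictor idea (not the paper's five-stage pipeline), a nearest-neighbor regressor over previously simulated synapses might look like this; the feature and target layouts are assumptions for the sketch.

```python
import numpy as np

def knn_predict(X_train, Y_train, x, k=3):
    """Predict a receptor-activation curve for a new synapse configuration
    x by averaging the curves of the k nearest previously simulated
    synapses.  Rows of X_train are synapse parameters (e.g. cleft width,
    receptor count); rows of Y_train are the corresponding Monte Carlo
    open-receptor time courses."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Y_train[nearest].mean(axis=0)
```

A learned regression model, as in the paper, generalizes better than raw averaging, but the interface is the same: parameters in, predicted time course out, with no new Monte Carlo run.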
Benchmark Study of 3D Pore-scale Flow and Solute Transport Simulation Methods
Scheibe, T. D.; Yang, X.; Mehmani, Y.; Perkins, W. A.; Pasquali, A.; Schoenherr, M.; Kim, K.; Perego, M.; Parks, M. L.; Trask, N.; Balhoff, M.; Richmond, M. C.; Geier, M.; Krafczyk, M.; Luo, L. S.; Tartakovsky, A. M.
2015-12-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that benchmark study to include additional models of the first type based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs has not been fully determined. We apply all five approaches (FVM-based CFD, IMB, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The benchmark study was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods, and motivates further development and application of pore-scale simulation methods.
Use of simulated data sets to evaluate the fidelity of metagenomic processing methods
Energy Technology Data Exchange (ETDEWEB)
Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C.
2006-12-01
Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
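The construction of such a simulated data set, drawing reads at random from a pool of isolate genomes, can be sketched as follows; function names and the length-weighted sampling scheme are illustrative assumptions, not the authors' exact procedure.

```python
import random

def sample_reads(genomes, n_reads, read_len, seed=0):
    """Draw fixed-length reads uniformly from a dict of isolate genome
    sequences, weighting genomes by their length; per-genome abundance
    weights could be substituted here to model staggered communities."""
    rng = random.Random(seed)
    names = list(genomes)
    weights = [len(genomes[n]) for n in names]
    reads = []
    for _ in range(n_reads):
        name = rng.choices(names, weights=weights)[0]
        g = genomes[name]
        start = rng.randrange(len(g) - read_len + 1)  # random start position
        reads.append((name, g[start:start + read_len]))
    return reads
```

Keeping the source genome name with each read is what later allows assembly and binning results to be scored against the known truth.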
Directory of Open Access Journals (Sweden)
Luo Yuanxiang
2015-01-01
Full Text Available To address the insufficient safety distance for live working in 500 kV substations, a magnetic field analysis method for the overhead line bus is presented based on the charge simulation method. In this method, the fictitious charges are calculated first, and the spatial field intensity distribution is then obtained by superposing the contributions of those charges. From analysis of these results, the spatial field intensity distribution rule of the substation is obtained. Then, according to the formula for induced current, the current induced in a human body under a substation busbar is simulated in MATLAB. The simulation results provide practical guidance for actual live working.
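The superposition step at the heart of the charge simulation method can be sketched in a few lines. This is a generic illustration with point charges and illustrative values, not the paper's busbar model, which would fit line charges to the conductor's boundary potential.

```python
import numpy as np

def field_at(point, charges, positions):
    """Charge simulation method in a nutshell: the field of an energized
    conductor is reproduced by a set of fictitious charges, and the field
    at any point in space is the superposition of their contributions."""
    k = 1.0 / (4.0 * np.pi * 8.854187817e-12)   # Coulomb constant, SI units
    E = np.zeros(3)
    for q, p in zip(charges, positions):
        r = point - p
        E += k * q * r / np.linalg.norm(r) ** 3  # point-charge field
    return E
```

Once the field along a path under the busbar is known, the induced body current follows from the induced-current formula the paper applies.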
Computer Simulation of Nonuniform MTLs via Implicit Wendroff and State-Variable Methods
Directory of Open Access Journals (Sweden)
L. Brancik
2011-04-01
Full Text Available The paper deals with techniques for the computer simulation of nonuniform multiconductor transmission lines (MTLs) based on the implicit Wendroff and the state-variable methods. The techniques fall into the class of finite-difference time-domain (FDTD) methods useful for solving various electromagnetic systems. Their basic variants are extended and modified to enable solving both voltage and current distributions along nonuniform MTL wires and their sensitivities with respect to lumped and distributed parameters. An experimental error analysis is performed based on the Thomson cable, whose analytical solutions are known, and some examples of the simulation of both uniform and nonuniform MTLs are presented. Based on Matlab programs, CPU times are analyzed to compare the efficiency of the methods. Some results for nonlinear MTL simulation are presented as well.
Finite element method for one-dimensional rill erosion simulation on a curved slope
Directory of Open Access Journals (Sweden)
Lijuan Yan
2015-03-01
Full Text Available Rill erosion models are important to hillslope soil erosion prediction and to land use planning. The development and use of rill erosion models have become of increasingly great concern. The purpose of this research was to develop mathematical models with computer simulation procedures to simulate and predict rill erosion. The finite element method is known as an efficient tool in many applications other than rill soil erosion. In this study, the hydrodynamic and sediment continuity model equations for a rill erosion system were solved by the Galerkin finite element method and Visual C++ procedures. The simulated results are compared with spatially and temporally measured rill erosion processes under different conditions. The results indicate that the one-dimensional linear finite element method produced excellent predictions of rill erosion processes. Therefore, this study supplies a tool for further development of a dynamic soil erosion prediction model.
Simulation of cryolipolysis as a novel method for noninvasive fat layer reduction.
Majdabadi, Abbas; Abazari, Mohammad
2016-12-20
Given the problems of conventional liposuction methods, the need to develop new fat removal operations has been appreciated. In this study we simulate one such novel method, cryolipolysis, which aims to tackle those drawbacks. We believe that simulation of clinical procedures contributes considerably to the efficacious performance of the operations. To do this we have simulated the temperature distribution in a sample of human body fat. Using Abaqus software, we present a graphical display of temperature-time variations within the medium. The findings of our simulation indicate that tissue temperature decreases over a cold exposure of about 30 min. The minimum temperature occurs in the shallow layers of the sample, while the temperature in the deeper layers remains nearly unchanged. Cold exposure beyond this specific time (t > 30 min) does not produce considerable changes. Numerous clinical studies have proved the efficacy of cryolipolysis. This noninvasive technique has eliminated some of the drawbacks of conventional methods. The findings of our simulation clearly demonstrate the efficiency of this method, especially for superficial fat layers.
Nonlinear simulation of arch dam cracking with mixed finite element method
Directory of Open Access Journals (Sweden)
Ren Hao
2008-06-01
Full Text Available This paper proposes a new, simple and efficient method for nonlinear simulation of arch dam cracking from the construction period to the operation period, which takes into account the arch dam construction process and temperature loads. In the calculation mesh, the contact surface of pair nodes is located at places on the arch dam where cracking is possible. A new effective iterative method, the mixed finite element method for friction-contact problems, is improved and used for nonlinear simulation of the cracking process. The forces acting on the structure are divided into two parts: external forces and contact forces. The displacement of the structure is chosen as the basic variable and the nodal contact force in the possible contact region of the local coordinate system is chosen as the iterative variable, so that the nonlinear iterative process is only limited within the possible contact surface and is much more economical. This method was used to simulate the cracking process of the Shuanghe Arch Dam in Southwest China. In order to prove the validity and accuracy of this method and to study the effect of thermal stress on arch dam cracking, three schemes were designed for calculation. Numerical results agree with actual measured data, proving that it is feasible to use this method to simulate the entire process of nonlinear arch dam cracking.
Simulation of anisotropic diffusion by means of a diffusion velocity method
Beaudoin, A; Rivoalen, E
2003-01-01
An alternative method to the Particle Strength Exchange method for solving the advection-diffusion equation in the general case of a non-isotropic and non-uniform diffusion is proposed. This method is an extension of the diffusion velocity method. It is shown that this extension is quite straightforward due to the explicit use of the diffusion flux in the expression of the diffusion velocity. This approach is used to simulate pollutant transport in groundwater and the results are compared to those of the PSE method presented in an earlier study by Zimmermann et al.
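The core of the diffusion velocity method is to advect particles with a velocity proportional to the concentration gradient. A minimal 1D sketch, assuming the concentration field is a Gaussian fitted to the particle cloud (the general method instead reconstructs the field and its gradient from the particles):

```python
import numpy as np

def diffusion_velocity_step(x, D, dt):
    """One explicit Euler step of the diffusion velocity method in 1D.
    For a Gaussian concentration c with variance var, the diffusion
    velocity u_d = -D * grad(c)/c reduces to D*x/var, so diffusion is
    represented purely by deterministic particle advection."""
    var = np.var(x)
    return x + D * x / var * dt

# Pure diffusion of an initially Gaussian particle cloud: the ensemble
# variance should grow by approximately 2*D*t, as for the heat equation.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 20000)
v0 = np.var(x)
D, dt = 0.5, 1e-3
for _ in range(1000):
    x = diffusion_velocity_step(x, D, dt)
```

Anisotropic, non-uniform diffusion, as treated in the paper, replaces the scalar D with a position-dependent tensor inside the diffusion flux.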
Datema, C P; Eijk, C W E
2002-01-01
Experiments were carried out to investigate the possible use of neutron backscattering for the detection of landmines buried in the soil. Several landmines, buried in a sand-pit, were positively identified. A series of Monte Carlo simulations were performed to study the complexity of the neutron backscattering process and to optimize the geometry of a future prototype. The results of these simulations indicate that this method shows great potential for the detection of non-metallic landmines (with a plastic casing), for which so far no reliable method has been found.
Parameter Studies, time-dependent simulations and design with automated Cartesian methods
Aftosmis, Michael
2005-01-01
Over the past decade, NASA has made a substantial investment in developing adaptive Cartesian grid methods for aerodynamic simulation. Cartesian-based methods played a key role in both the Space Shuttle Accident Investigation and in NASA's return-to-flight activities. The talk will provide an overview of recent technological developments, focusing on the generation of large-scale aerodynamic databases, automated CAD-based design, and time-dependent simulations of bodies in relative motion. Automation, scalability and robustness underlie all of these applications, and research in each of these topics will be presented.
An innovative exercise method to simulate orbital EVA work - Applications to PLSS automatic controls
Lantz, Renee; Vykukal, H.; Webbon, Bruce
1987-01-01
An exercise method has been proposed which may satisfy the current need for a laboratory simulation representative of muscular, cardiovascular, respiratory, and thermoregulatory responses to work during orbital extravehicular activity (EVA). The simulation incorporates arm crank ergometry with a unique body support mechanism that allows all body position stabilization forces to be reacted at the feet. By instituting this exercise method in laboratory experimentation, an advanced portable life support system (PLSS) thermoregulatory control system can be designed to more accurately reflect the specific work requirements of orbital EVA.
A Lagrangian finite element method for the simulation of flow of non-Newtonian liquids
DEFF Research Database (Denmark)
Hassager, Ole; Bisgaard, C
1983-01-01
A Lagrangian method for the simulation of the flow of non-Newtonian liquids is implemented. The fluid mechanical equations are formulated in the form of a variational principle, and a discretization is performed by finite elements. The method is applied to the flow of a contravariant convected Maxwell...... liquid around a sphere moving axially in a cylinder. The simulations show that the friction factor for a sphere in a narrow cylinder is a rapidly decreasing function of the Deborah number, while the friction factor for a sphere in a very wide cylinder is not significantly affected by fluid elasticity...
A method to simulate frictional heating at defects in ultrasonic infrared thermography
Energy Technology Data Exchange (ETDEWEB)
Choi, Won Jar; Choi, Man Yong; Park, Jeong Hak [Center for Safety Measurement, KRISS, Daejeon(Korea, Republic of)
2015-12-15
Ultrasonic infrared thermography is an active thermography method in which mechanical energy is introduced into a structure and converted into heat at defects, and an infrared camera detects the heat for inspection. The heat generation mechanisms depend on many factors, such as structural characteristics, defect type, excitation method and contact conditions, which makes it difficult to predict the heat distribution in ultrasonic infrared thermography. In this paper, a method to simulate frictional heating, known to be one of the main heat generation mechanisms at closed defects in metal structures, is proposed for ultrasonic infrared thermography. This method uses linear vibration analysis results without considering the contact boundary condition at the defect, so it is intuitive and simple to implement. Its advantages and disadvantages are also discussed. The simulation results show good agreement with the modal analysis and experimental results.
Hydrodynamic Force Evaluation by Momentum Exchange Method in Lattice Boltzmann Simulations
Directory of Open Access Journals (Sweden)
Binghai Wen
2015-12-01
Full Text Available As a native scheme to evaluate hydrodynamic force in the lattice Boltzmann method, the momentum exchange method has some excellent features, such as simplicity, accuracy, high efficiency and easy parallelization. In particular, it is independent of boundary geometry, avoiding the need to solve the Navier–Stokes equations on complex boundary geometries as in the boundary-integral methods. We review the origin and main developments of the momentum exchange method in lattice Boltzmann simulations. Several practical techniques to fill newborn fluid nodes are then discussed for simulations of fluid-structure interactions. Finally, some representative applications show the wide applicability of the momentum exchange method, such as movements of rigid particles, interactions of deformable particles, particle suspensions in turbulent flow and multiphase flow, etc.
A novel energy conversion based method for velocity correction in molecular dynamics simulations
Energy Technology Data Exchange (ETDEWEB)
Jin, Hanhui [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Collaborative Innovation Center of Advanced Aero-Engine, Hangzhou 310027 (China); Liu, Ningning [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Ku, Xiaoke, E-mail: xiaokeku@zju.edu.cn [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Fan, Jianren [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)
2017-05-01
Molecular dynamics (MD) simulation has become an important tool for studying micro- and nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike the traditional systematic correction based on macroscopic parameters, the ECBC method is developed strictly from the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied to EMD and NEMD directly. When MD is used with this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. With this method, many limitations on the use of MD are lifted, and the application scope of MD is greatly extended.
Simulation on the Self-Compacting Concrete by an Enhanced Lagrangian Particle Method
Directory of Open Access Journals (Sweden)
Jun Wu
2016-01-01
Full Text Available The industry has embraced self-compacting concrete (SCC) to overcome deficiencies related to consolidation, improve productivity, and enhance safety and quality. Because SCC undergoes large deformation while flowing, this study develops an enhanced Lagrangian particle-based method, the Smoothed Particle Hydrodynamics (SPH) method, to simulate the flow of SCC as a non-Newtonian fluid and to achieve stable results with satisfactory convergence properties. Although SPH was first developed to study astrophysics problems, it has exceptional advantages in solving problems involving fragmentation, coalescence, and violent free-surface deformation. The Navier-Stokes equations and the incompressible mass conservation equation are solved as the basic equations. The Cross rheological model is used to simulate the shear stress and strain relationship of SCC. The mirror particle method is used for wall boundaries. The improved SPH method is tested on a typical 2D slump flow problem and also applied to the L-box test. The capability of and results obtained from this method are discussed.
Numerical study of MPS method with large eddy simulation for fluid solid coupling problem
YANG, Chao; ZHANG, Huaixin; YAO, Huilan
2017-02-01
The Moving-Particle Semi-implicit (MPS) method is a meshless Lagrangian calculation method that uses particles instead of a mesh, which makes preprocessing simple and convenient and gives high computational efficiency. In practical engineering, many fluid problems involve turbulent flows, and large eddy simulation (LES) is a major means of studying turbulence. Fluid-structure coupling is an independent branch of mechanics combining fluid dynamics and solid mechanics, and it is currently a hot and difficult area of research in many fields. In this paper, a modified MPS-LES method is applied to the two-dimensional dam-break problem to numerically simulate turbulent flow with fluid-structure interaction. The results prove that the MPS-LES method can be extended to fluid-solid coupled problems.
A fast exact simulation method for a class of Markov jump processes.
Li, Yao; Hu, Lili
2015-11-14
A new method of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes, is presented in this paper. The HLM has a conditional constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step with length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large scale problems.
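The bucket-sort idea behind the HLM can be illustrated with a toy version for independent exponential clocks. This is a hedged sketch, not the published algorithm: the real HLM also handles rates that change when clocks fire, which this version omits, and event ordering across leap windows is only approximate here.

```python
import random

def hashing_leaping(rates, T, seed=0):
    """Toy sketch of the hashing-leaping idea: leap forward in windows of
    length tau and bucket-sort ("hash") the firing times inside each
    window, instead of scanning all clocks for the global minimum at every
    event as the direct SSA method does.  Assumes clock i fires at a fixed
    rate rates[i] and is immediately resampled."""
    rng = random.Random(seed)
    n = len(rates)
    tau = 1.0 / max(rates)                      # leap window length
    nb = 4 * n                                  # buckets per window
    next_t = [rng.expovariate(r) for r in rates]
    t, events = 0.0, []
    while t < T:
        buckets = [[] for _ in range(nb)]
        for i in range(n):                      # O(n) hashing per window
            if next_t[i] < t + tau:
                b = min(max(int((next_t[i] - t) / tau * nb), 0), nb - 1)
                buckets[b].append((next_t[i], i))
        for b in buckets:                       # each bucket holds few events
            for ft, i in sorted(b):
                events.append((ft, i))
                next_t[i] = ft + rng.expovariate(rates[i])
        t += tau
    return [e for e in events if e[0] < T]
```

Because each window costs O(n) hashing regardless of how many events it contains, the per-event cost stays roughly constant as the number of clocks grows, which is the property the paper emphasizes.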
Kadoura, Ahmad
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the critical temperature of methane. The results were collected and compared to experimental data in the literature; both models showed excellent agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence, further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determining the solubility conditions of elemental sulfur helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate the phase behavior of elemental sulfur in sour natural gas mixtures.
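The canonical-ensemble workhorse behind such isotherms is Metropolis Monte Carlo with a pairwise potential. A minimal sketch in reduced L-J units (epsilon = sigma = 1, no cutoff or tail corrections, so not the production setup used for the methane results):

```python
import numpy as np

def lj_energy(pos, L):
    """Total Lennard-Jones energy in reduced units with minimum-image
    periodic boundaries; no cutoff, for clarity rather than speed."""
    E = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= L * np.round(d / L)                 # minimum image convention
        inv6 = 1.0 / np.sum(d * d, axis=1) ** 3  # (sigma/r)^6 per pair
        E += np.sum(4.0 * (inv6 * inv6 - inv6))
    return E

def metropolis_sweep(pos, L, T, dmax, rng):
    """One canonical-ensemble Metropolis sweep: displace each particle in
    turn and accept with probability min(1, exp(-dE/T)).  Recomputing the
    full energy per trial is O(N^2) and only sensible for tiny sketches."""
    E = lj_energy(pos, L)
    for i in range(len(pos)):
        trial = pos.copy()
        trial[i] = (trial[i] + rng.uniform(-dmax, dmax, 3)) % L
        dE = lj_energy(trial, L) - E
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            pos, E = trial, E + dE
    return pos, E
```

Pressures then follow from the virial average over sampled configurations; the Gibbs-ensemble runs below the critical temperature additionally swap particles and volume between two coupled boxes.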
Directory of Open Access Journals (Sweden)
Syarizal Fonna
2016-01-01
Full Text Available Many studies have suggested that the corrosion detection of reinforced concrete (RC based on electrical potential on concrete surface was an ill-posed problem, and thus it may present an inaccurate interpretation of corrosion. However, it is difficult to prove the ill-posed problem of the RC corrosion detection by experiment. One promising technique is using a numerical method. The objective of this study is to simulate the ill-posed problem of RC corrosion detection based on electrical potential on a concrete surface using the Boundary Element Method (BEM. BEM simulates electrical potential within a concrete domain. In order to simulate the electrical potential, the domain is assumed to be governed by Laplace’s equation. The boundary conditions for the corrosion area and the noncorrosion area of rebar were selected from its polarization curve. A rectangular reinforced concrete model with a single rebar was chosen to be simulated using BEM. The numerical simulation results using BEM showed that the same electrical potential distribution on the concrete surface could be generated from different combinations of parameters. Corresponding to such a phenomenon, this problem can be categorized as an ill-posed problem since it has many solutions. Therefore, BEM successfully simulates the ill-posed problem of reinforced concrete corrosion detection.
Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves
Directory of Open Access Journals (Sweden)
Shukui Liu
2011-03-01
Full Text Available Typical results obtained by a newly developed, nonlinear time domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in the way of combining a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency domain 3D panel method (NEWDRIFT of NTUA-SDL and available experimental data and good agreement has been observed for all studied cases between the results of the present method and comparable other data.
Simulation of granular and gas-solid flows using discrete element method
Boyalakuntla, Dhanunjay S.
2003-10-01
In recent years there has been increased research activity in the experimental and numerical study of gas-solid flows. Flows of this type have numerous applications in the energy, pharmaceuticals, and chemicals process industries. Typical applications include pulverized coal combustion, flow and heat transfer in bubbling and circulating fluidized beds, hopper and chute flows, pneumatic transport of pharmaceutical powders and pellets, and many more. The present work addresses the study of gas-solid flows using computational fluid dynamics (CFD) techniques and discrete element simulation methods (DES) combined. Many previous studies of coupled gas-solid flows have been performed assuming the solid phase as a continuum with averaged properties and treating the gas-solid flow as constituting of interpenetrating continua. Instead, in the present work, the gas phase flow is simulated using continuum theory and the solid phase flow is simulated using DES. DES treats each solid particle individually, thus accounting for its dynamics due to particle-particle interactions, particle-wall interactions as well as fluid drag and buoyancy. The present work involves developing efficient DES methods for dense granular flow and coupling this simulation to continuum simulations of the gas phase flow. Simulations have been performed to observe pure granular behavior in vibrating beds. Benchmark cases have been simulated and the results obtained match the published literature. The dimensionless acceleration amplitude and the bed height are the parameters governing bed behavior. Various interesting behaviors such as heaping, round and cusp surface standing waves, as well as kinks, have been observed for different values of the acceleration amplitude for a given bed height. Furthermore, binary granular mixtures (granular mixtures with two particle sizes) in a vibrated bed have also been studied. Gas-solid flow simulations have been performed to study fluidized beds. Benchmark 2D
Miao, Linling; Young, Charles D.; Sing, Charles E.
2017-07-01
Brownian Dynamics (BD) simulations are a standard tool for understanding the dynamics of polymers in and out of equilibrium. Quantitative comparison can be made to rheological measurements of dilute polymer solutions, as well as direct visual observations of fluorescently labeled DNA. The primary computational challenge with BD is the expensive calculation of hydrodynamic interactions (HI), which are necessary to capture physically realistic dynamics. The full HI calculation, performed via a Cholesky decomposition every time step, scales with the length of the polymer as O(N^3). This limits the calculation to a few hundred simulated particles. A number of approximations in the literature can lower this scaling to O(N^2) - O(N^2.25), and explicit solvent methods scale as O(N); however both incur a significant constant per-time step computational cost. Despite this progress, there remains a need for new or alternative methods of calculating hydrodynamic interactions; large polymer chains or semidilute polymer solutions remain computationally expensive. In this paper, we introduce an alternative method for calculating approximate hydrodynamic interactions. Our method relies on an iterative scheme to establish self-consistency between a hydrodynamic matrix that is averaged over simulation and the hydrodynamic matrix used to run the simulation. Comparison to standard BD simulation and polymer theory results demonstrates that this method quantitatively captures both equilibrium and steady-state dynamics after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation calculations. We also investigate limitations of this conformational averaging approach in ring polymers.
Simulation methods to estimate design power: an overview for applied research
2011-01-01
Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447
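The basic simulation recipe above (generate data under the alternative, apply the planned test, count rejections) can be sketched as follows. A z-test with known standard deviation is used for simplicity; a real analysis would use a t-test or the full design model. The closed-form power of the same test is included so the simulation can be checked against it.

```python
import numpy as np
from math import erf, sqrt

Z_CRIT = 1.959964  # two-sided 5% critical value of the standard normal

def simulated_power(n_per_arm, effect, sd=1.0, reps=2000, seed=1):
    """Estimate power of a two-arm comparison of means by Monte Carlo simulation."""
    rng = np.random.default_rng(seed)
    se = sd * np.sqrt(2.0 / n_per_arm)
    rejections = 0
    for _ in range(reps):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        z = (treated.mean() - control.mean()) / se
        rejections += abs(z) > Z_CRIT
    return rejections / reps

def analytic_power(n_per_arm, effect, sd=1.0):
    """Closed-form power of the same z-test, for comparison."""
    z = effect / (sd * sqrt(2.0 / n_per_arm)) - Z_CRIT
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p_sim = simulated_power(50, 0.5)    # 50 per arm, standardized effect 0.5
p_exact = analytic_power(50, 0.5)   # ~0.70 for this design
```

The same loop extends to cluster-randomized or otherwise complex designs simply by replacing the data-generating lines with the full outcome model, which is exactly where simulation outperforms closed-form equations.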
Direct simulation Monte Carlo method for gas cluster ion beam technology
Insepov, Z
2003-01-01
A direct simulation Monte Carlo method has been developed and applied to the simulation of a supersonic Ar gas expansion through a converging-diverging nozzle, at stagnation pressures of P_0 = 0.1-10 atm and various temperatures. A body-fitted coordinate system has been developed that allows modeling of nozzles of arbitrary shape. A wide selection of nozzle sizes and apex angles, with diffuse and specular atomic reflection laws at the nozzle walls, has been studied. The results of the nozzle simulations were used to obtain a scaling law, P_0 T_0^(19/8) d^alpha L_n^beta = const., for the constant mean cluster sizes formed in conical nozzles. Hagena's formula, valid for conical nozzles of constant length, has thus been extended to conical nozzles of variable length, based on our simulation results.
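The kind of cluster-size scaling referred to here can be illustrated with the commonly quoted Hagena condensation parameter for conical nozzles. The constants below (k of roughly 1646 for argon, the 2.2875 temperature exponent, and the 0.74 d / tan(theta) equivalent diameter) are values often cited in the cluster-beam literature, not taken from this paper, and are included only as a sketch.

```python
import math

def hagena_gamma(p0_mbar, T0_K, d_um, half_angle_deg, k=1646.0):
    """Hagena condensation parameter for a conical nozzle (illustrative sketch).

    Gamma* = k * d_eq^0.85 * p0 / T0^2.2875, where the equivalent diameter
    d_eq = 0.74 * d / tan(half_angle) accounts for the conical geometry.
    Larger Gamma* corresponds to larger mean cluster sizes.
    """
    d_eq = 0.74 * d_um / math.tan(math.radians(half_angle_deg))
    return k * d_eq**0.85 * p0_mbar / T0_K**2.2875

# mean cluster size grows with stagnation pressure and shrinks with temperature
g = hagena_gamma(p0_mbar=2000.0, T0_K=300.0, d_um=100.0, half_angle_deg=10.0)
```

The qualitative behavior, clusters growing with pressure and nozzle size and shrinking with stagnation temperature, is what a scaling law of the kind fitted in the abstract captures.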
A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.
Ling, Hong; Luo, Ercang; Dai, Wei
2006-12-22
Thermoacoustic prime movers can generate pressure oscillations without any moving parts, based on the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in this paper. First, a four-port network method is used to build a transcendental equation in the complex frequency, which serves as a criterion for judging whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. It is shown that the numerical simulation code runs robustly and outputs the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE). The numerical simulation agrees with the experimental results with acceptable accuracy.
Energy Technology Data Exchange (ETDEWEB)
Katya L Le Blanc; Ronald L Boring; David I Gertman
2001-11-01
With the increased use of digital systems in Nuclear Power Plant (NPP) control rooms comes a need to thoroughly understand the human performance issues associated with digital systems. A common way to evaluate human performance is to test operators and crews in NPP control room simulators. However, it is often challenging to characterize human performance in meaningful ways when measuring performance in NPP control room simulations. A review of the literature in NPP simulator studies reveals a variety of ways to measure human performance in NPP control room simulations including direct observation, automated computer logging, recordings from physiological equipment, self-report techniques, protocol analysis and structured debriefs, and application of model-based evaluation. These methods and the particular measures used are summarized and evaluated.
Three-dimensional implementation of the Low Diffusion method for continuum flow simulations
Mirza, A.; Nizenkov, P.; Pfeiffer, M.; Fasoulas, S.
2017-11-01
Concepts of a particle-based continuum method have existed for many years. The ultimate goal is to couple such a method with the Direct Simulation Monte Carlo (DSMC) in order to bridge the gap of numerical tools in the treatment of the transitional flow regime between near-equilibrium and rarefied gas flows. For this purpose, the Low Diffusion (LD) method, introduced first by Burt and Boyd, offers a promising solution. In this paper, the LD method is revisited and the implementation in a modern particle solver named PICLas is given. The modifications of the LD routines enable three-dimensional continuum flow simulations. The implementation is successfully verified through a series of test cases: simple stationary shock, oblique shock simulation and thermal Couette flow. Additionally, the capability of this method is demonstrated by the simulation of a hypersonic nitrogen flow around a 70°-blunted cone. Overall results are in very good agreement with experimental data. Finally, the scalability of PICLas using LD on a high performance cluster is presented.
Directory of Open Access Journals (Sweden)
Y. Zhao
2017-06-01
Local line rolling forming is a common forming approach for the complex curvature plates of ships. However, a processing mode based on artificial experience is still applied at present, because it is difficult to integrally determine the relational data for the forming shape, processing path, and process parameters used to drive automation equipment. Numerical simulation is currently the major approach for generating such complex relational data. Therefore, a highly precise and effective numerical computation method becomes crucial in the development of an automated local line rolling forming system for producing the complex curvature plates used in ships. In this study, a three-dimensional elastoplastic finite element method was first employed to perform numerical computations for local line rolling forming, and the corresponding deformation and strain distribution features were acquired. In addition, according to the characteristics of the strain distributions, a simplified deformation simulation method, based on the deformation obtained by applying the strain, was presented. Compared to the results of the three-dimensional elastoplastic finite element method, this simplified deformation simulation method was verified to provide high computational accuracy, with a substantial reduction in calculation time. The application of the simplified deformation simulation method was then further explored in the case of multiple rolling loading paths, and it was also utilized to calculate the local line rolling forming of a typical complex curvature plate of a ship. The research findings indicated that the simplified deformation simulation method is an effective tool for rapidly obtaining the relationships between the forming shape, processing path, and process parameters.
Merrikh-Bayat, Farshad
2011-04-01
One main approach for time-domain simulation of linear output-feedback systems containing fractional-order controllers is to approximate the transfer function of the controller with an integer-order transfer function and then perform the simulation. In general, this approach suffers from two main disadvantages: first, the internal stability of the resulting feedback system is not guaranteed, and second, the amount of error caused by this approximation is not exactly known. The aim of this paper is to propose an efficient method for time-domain simulation of such systems without facing the above-mentioned drawbacks. For this purpose, the fractional-order controller is approximated with an integer-order transfer function (possibly in combination with a delay term) such that the internal stability of the closed-loop system is guaranteed, and then the simulation is performed. It is also shown that the resulting approximate controller can effectively be realized by using the proposed method. Some formulas for estimating and correcting the simulation error, when the feedback system under consideration is subjected to the unit step command or the unit step disturbance, are also presented. Finally, three numerical examples are studied and the results are compared with the Oustaloup continuous approximation method. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
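Since the paper compares against the Oustaloup continuous approximation, a minimal sketch of that method may be useful. The standard recursive formulas below place 2n+1 zero/pole pairs log-uniformly in a fit band [wb, wh]; evaluating the response at the band's geometric center shows how closely the rational filter mimics s^alpha. Band edges and order here are illustrative choices.

```python
import numpy as np

def oustaloup(alpha, wb=1e-2, wh=1e2, n=4):
    """Zeros, poles, and gain of the Oustaloup recursive approximation of s^alpha.

    The rational filter K * prod_k (s + z_k) / (s + p_k) approximates s^alpha
    inside the frequency band [wb, wh], with 2n+1 zero/pole pairs.
    """
    ks = np.arange(-n, n + 1)
    ratio = wh / wb
    zeros = wb * ratio ** ((ks + n + 0.5 * (1 - alpha)) / (2 * n + 1))
    poles = wb * ratio ** ((ks + n + 0.5 * (1 + alpha)) / (2 * n + 1))
    gain = wh ** alpha
    return zeros, poles, gain

def freq_response(zeros, poles, gain, w):
    """Evaluate the rational approximation at s = j*w."""
    s = 1j * w
    return gain * np.prod((s + zeros) / (s + poles))

z, p, k = oustaloup(0.5)
h = freq_response(z, p, k, 1.0)  # compare with (j*1)^0.5: magnitude 1, phase 45 deg
```

This is exactly the baseline the paper benchmarks against; its known weakness, which the paper addresses, is that substituting such an approximation into a feedback loop does not by itself guarantee internal stability of the closed loop.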
Application of CLEAR-VOF method to wave and flow simulations
Directory of Open Access Journals (Sweden)
Ying-wei SUN
2012-03-01
A two-dimensional numerical model based on the Navier-Stokes equations and the computational Lagrangian-Eulerian advection remap volume of fluid (CLEAR-VOF) method was developed to simulate wave and flow problems. The Navier-Stokes equations were discretized with a three-step finite element method that has third-order accuracy. In the CLEAR-VOF method, the VOF function F was calculated in the Lagrangian manner, which allowed the complicated free surface to be accurately captured. The propagation of regular waves and solitary waves over a flat bottom, and the shoaling and breaking of solitary waves on two different slopes, were simulated with this model, and the numerical results agreed with experimental data and theoretical solutions. A benchmark test of dam-collapse flow was also simulated with an unstructured mesh, and the capability of the present model for wave and flow simulations with unstructured meshes was verified. The results show that the model is effective for the numerical simulation of wave and flow problems with both structured and unstructured meshes.
A general method for closed-loop inverse simulation of helicopter maneuver flight
Directory of Open Access Journals (Sweden)
Wei WU
2017-12-01
Maneuverability is a key factor in determining whether a helicopter can complete certain flight missions successfully. Inverse simulation is commonly used to calculate the pilot controls of a helicopter required to complete a certain kind of maneuver flight and to assess its maneuverability. A general method for inverse simulation of maneuver flight for helicopters with the flight control system online is developed in this paper. A general mathematical describing function is established to provide mathematical descriptions of different kinds of maneuvers. A comprehensive control solver based on optimal linear quadratic regulator theory is developed to calculate the pilot controls for different maneuvers. The coupling problem between pilot controls and flight control system outputs is solved by incorporating the flight control system model into the control solver. Inverse simulation of three kinds of maneuvers with different agility requirements defined in ADS-33E-PRF is implemented based on the developed method for a UH-60 helicopter. The results show that the method developed in this paper can solve the closed-loop inverse simulation problem of helicopter maneuver flight with high reliability and efficiency. Keywords: Closed-loop, Flying quality, Helicopters, Inverse simulation, Maneuver flight
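The linear-quadratic core of such a control solver can be sketched with a discrete-time LQR gain computed by fixed-point iteration of the Riccati equation. The double-integrator plant below is a toy stand-in, not the UH-60 model, and a production code would use a dedicated Riccati solver (e.g. scipy's solve_discrete_are).

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation.

    Iterates P <- Q + A^T P (A - B K) with K = (R + B^T P B)^{-1} B^T P A,
    which converges for stabilizable/detectable (A, B, Q, R).
    """
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # double integrator: toy stand-in for
B = np.array([[0.0], [dt]])             # linearized vehicle dynamics
Q = np.eye(2)
R = np.array([[1.0]])
K = dlqr(A, B, Q, R)
closed_loop = A - B @ K
rho = max(abs(np.linalg.eigvals(closed_loop)))  # spectral radius < 1 => stable
```

In the paper's setting, the plant matrices would additionally contain the flight control system model, which is what resolves the coupling between pilot controls and flight-control-system outputs.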
Creation and Delphi-method refinement of pediatric disaster triage simulations.
Cicero, Mark X; Brown, Linda; Overly, Frank; Yarzebski, Jorge; Meckler, Garth; Fuchs, Susan; Tomassoni, Anthony; Aghababian, Richard; Chung, Sarita; Garrett, Andrew; Fagbuyi, Daniel; Adelgais, Kathleen; Goldman, Ran; Parker, James; Auerbach, Marc; Riera, Antonio; Cone, David; Baum, Carl R
2014-01-01
There is a need for rigorously designed pediatric disaster triage (PDT) training simulations for paramedics. First, we sought to design three multiple patient incidents for EMS provider training simulations. Our second objective was to determine the appropriate interventions and triage level for each victim in each of the simulations and develop evaluation instruments for each simulation. The final objective was to ensure that each simulation and evaluation tool was free of bias toward any specific PDT strategy. We created mixed-methods disaster simulation scenarios with pediatric victims: a school shooting, a school bus crash, and a multiple-victim house fire. Standardized patients, high-fidelity manikins, and low-fidelity manikins were used to portray the victims. Each simulation had similar acuity of injuries and 10 victims. Examples include children with special health-care needs, gunshot wounds, and smoke inhalation. Checklist-based evaluation tools and behaviorally anchored global assessments of function were created for each simulation. Eight physicians and paramedics from areas with differing PDT strategies were recruited as Subject Matter Experts (SMEs) for a modified Delphi iterative critique of the simulations and evaluation tools. The modified Delphi was managed with an online survey tool. The SMEs provided an expected triage category for each patient. The target for modified Delphi consensus was ≥85%. Using Likert scales and free text, the SMEs assessed the validity of the simulations, including instances of bias toward a specific PDT strategy, clarity of learning objectives, and the correlation of the evaluation tools to the learning objectives and scenarios. After two rounds of the modified Delphi, consensus for expected triage level was >85% for 28 of 30 victims, with the remaining two achieving >85% consensus after three Delphi iterations. To achieve consensus, we amended 11 instances of bias toward a specific PDT strategy and corrected 10
A Multi-Stage Method for Connecting Participatory Sensing and Noise Simulations
Directory of Open Access Journals (Sweden)
Mingyuan Hu
2015-01-01
Most simulation-based noise maps are important for official noise assessment but lack local noise characteristics. The main reasons for this lack of information are that official noise simulations provide only expected noise levels, are limited by the use of large-scale monitoring of noise sources, and are updated infrequently. With the emergence of smart cities and ubiquitous sensing, the improvements enabled by sensing technologies provide the possibility of resolving this problem. This study proposed an integrated methodology to propel participatory sensing from its current random and distributed sampling origins toward professional noise simulation. The aims of this study were to effectively organize the participatory noise data, to dynamically refine the granularity of the noise features on road segments (e.g., different portions of a road segment), and then to provide a reasonable spatio-temporal data foundation to support noise simulations, which can help researchers understand how participatory sensing can play a role in smart cities. This study first discusses the potential limitations of current participatory sensing and simulation-based official noise maps. Next, we explain how participatory noise data can contribute to a simulation-based noise map by providing (1) spatial matching of the participatory noise data to virtual partitions at a more microscopic level of road networks; (2) multi-temporal-scale noise estimations at the spatial level of virtual partitions; and (3) dynamic aggregation of virtual partitions, by comparing the noise values at the relevant temporal scale, to form a dynamic segmentation of each road segment that supports multiple spatio-temporal noise simulations. In this case study, we demonstrate how this method could play a significant role in a simulation-based noise map. Together, these results demonstrate the potential benefits of participatory noise data as dynamic
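One detail worth making concrete is how participatory noise samples should be pooled within a partition: sound levels are logarithmic, so averaging must be done on the energy scale rather than arithmetically on the dB values. A minimal sketch:

```python
import math

def leq(levels_db):
    """Energetic (equivalent) average of sound pressure levels in dB.

    Each level is converted to relative energy 10^(L/10), the energies are
    averaged, and the result is converted back to dB. Arithmetic averaging
    of dB values would understate the contribution of loud samples.
    """
    energies = [10.0 ** (level / 10.0) for level in levels_db]
    return 10.0 * math.log10(sum(energies) / len(energies))
```

For example, pooling samples of 50 dB and 70 dB gives about 67 dB, dominated by the louder measurement, whereas a naive arithmetic mean would report 60 dB.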
Science classroom inquiry (SCI) simulations: a novel method to scaffold science learning.
Peffer, Melanie E; Beckler, Matthew L; Schunn, Christian; Renken, Maggie; Revak, Amanda
2015-01-01
Science education is progressively more focused on employing inquiry-based learning methods in the classroom and increasing scientific literacy among students. However, due to time and resource constraints, many classroom science activities and laboratory experiments focus on simple inquiry, with a step-by-step approach to reach predetermined outcomes. The science classroom inquiry (SCI) simulations were designed to give students real life, authentic science experiences within the confines of a typical classroom. The SCI simulations allow students to engage with a science problem in a meaningful, inquiry-based manner. Three discrete SCI simulations were created as website applications for use with middle school and high school students. For each simulation, students were tasked with solving a scientific problem through investigation and hypothesis testing. After completion of the simulation, 67% of students reported a change in how they perceived authentic science practices, specifically related to the complex and dynamic nature of scientific research and how scientists approach problems. Moreover, 80% of the students who did not report a change in how they viewed the practice of science indicated that the simulation confirmed or strengthened their prior understanding. Additionally, we found a statistically significant positive correlation between students' self-reported changes in understanding of authentic science practices and the degree to which each simulation benefitted learning. Since SCI simulations were effective in promoting both student learning and student understanding of authentic science practices with both middle and high school students, we propose that SCI simulations are a valuable and versatile technology that can be used to educate and inspire a wide range of science students on the real-world complexities inherent in scientific study.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation, using a database of statistical impedance boundary conditions which incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields, from which predictions of communications capability may be made.
Kerr, Rex A; Bartol, Thomas M; Kaminsky, Boris; Dittrich, Markus; Chang, Jen-Chien Jack; Baden, Scott B; Sejnowski, Terrence J; Stiles, Joel R
2008-10-13
Many important physiological processes operate at time and space scales far beyond those accessible to atom-realistic simulations, and yet discrete stochastic rather than continuum methods may best represent finite numbers of molecules interacting in complex cellular spaces. We describe and validate new tools and algorithms developed for a new version of the MCell simulation program (MCell3), which supports generalized Monte Carlo modeling of diffusion and chemical reaction in solution, on surfaces representing membranes, and combinations thereof. A new syntax for describing the spatial directionality of surface reactions is introduced, along with optimizations and algorithms that can substantially reduce computational costs (e.g., event scheduling, variable time and space steps). Examples for simple reactions in simple spaces are validated by comparison to analytic solutions. Thus we show how spatially realistic Monte Carlo simulations of biological systems can be far more cost-effective than often is assumed, and provide a level of accuracy and insight beyond that of continuum methods.
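The validation-against-analytic-solutions step mentioned above can be illustrated with the simplest possible case: free diffusion simulated as a Gaussian random walk, whose mean squared displacement must match the analytic 6Dt in 3D. This is a generic Monte Carlo check, not MCell's actual algorithm.

```python
import numpy as np

def brownian_msd(n_particles=20000, n_steps=50, D=1.0, dt=0.01, seed=0):
    """Simulate free 3D diffusion as a Gaussian random walk.

    Each step displaces every particle by N(0, sqrt(2 D dt)) in each
    coordinate; returns the mean squared displacement at the final time.
    """
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt),
                       size=(n_steps, n_particles, 3))
    final = steps.sum(axis=0)                    # net displacement per particle
    return np.mean(np.sum(final**2, axis=1))

msd = brownian_msd()
expected = 6.0 * 1.0 * 50 * 0.01   # analytic 6*D*t = 3.0
```

Agreement of the simulated MSD with 6Dt to within statistical error is the discrete-stochastic analogue of the "simple reactions in simple spaces" validations described in the abstract.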
[Pollen information of airborne Japanese cedar pollen using a simulation method].
Takahashi, Y; Kawashima, S; Aikawa, S
1996-04-01
We have developed a simulation method that displays the distribution of airborne Cryptomeria japonica pollen visually on a map on a TV screen, so that each patient can obtain information for the place where he or she lives. During the 1995 pollen season, we provided information on the airborne pollen distribution and on C. japonica flowering areas, both displayed on maps, to local residents through TV broadcasting. To verify the simulation method, the results of actual pollen counting were compared with those of the simulation; the two agreed reasonably well on a daily basis. Compatibility problems among personal computers were solved by rewriting the display program in Visual Basic for MS-Windows and creating image files that can be read continuously by animation software. We believe this information can be offered to local residents, local clinicians, and patients waiting at clinics through computer networks.
Simulations of Micro Gas Flows by the DS-BGK Method
Li, Jun
2011-01-01
For gas flows in micro devices, the molecular mean free path is of the same order as the characteristic scale, making the Navier-Stokes equations invalid. Recently, some micro gas flows have been simulated by the DS-BGK method, which is convergent to the BGK equation and very efficient for low-velocity cases. As molecular reflection at the boundary is the dominant effect compared to intermolecular collisions in micro gas flows, a more realistic boundary condition, namely the CLL reflection model, is employed in the DS-BGK simulation, and the influence of the accommodation coefficients used in the molecular reflection model on the results is discussed. The simulation results are verified by comparison with those of the DSMC method as criteria. Copyright © 2011 by ASME.
Integrated Building Energy Design of a Danish Office Building Based on Monte Carlo Simulation Method
DEFF Research Database (Denmark)
Sørensen, Mathias Juul; Myhre, Sindre Hammer; Hansen, Kasper Kingo
2017-01-01
This study is based on a … office building located in Aarhus, Denmark. Building geometry, floor plans, and employee schedules were obtained from the architects, which is the basis for this study. This study aims to simplify the iterative design process that is based on the traditional trial-and-error method in the late design phases … and improve the collaboration efficiency. The Monte Carlo simulation method is adopted to simulate both the energy performance and indoor climate of the building. Building physics parameters, including characteristics of facades, walls, windows, etc., are taken into consideration, and thousands of combinations … fulfil the requirements and leave additional design freedom for the architects. This study utilizes global design exploration with Monte Carlo simulations in order to form feasible solutions for architects and improve the collaboration efficiency between architects and engineers.
DSMC calculations for the double ellipse. [direct simulation Monte Carlo method
Moss, James N.; Price, Joseph M.; Celenligil, M. Cevdet
1990-01-01
The direct simulation Monte Carlo (DSMC) method involves the simultaneous computation of the trajectories of thousands of simulated molecules in simulated physical space. Rarefied flow about the double ellipse for test case 6.4.1 has been calculated with the DSMC method of Bird. The gas is assumed to be nonreacting nitrogen flowing at a 30 degree incidence with respect to the body axis, and for the surface boundary conditions, the wall is assumed to be diffuse with full thermal accommodation and at a constant wall temperature of 620 K. A parametric study is presented that considers the effect of variations of computational domain, gas model, cell size, and freestream density on surface quantities.
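The core of a DSMC collision step can be sketched for hard spheres: a selected pair keeps its center-of-mass velocity and relative speed, while the relative velocity direction is scattered isotropically, so momentum and energy are conserved exactly. The sketch below omits the cell machinery, trajectory moves, and NTC pair-selection statistics of a full DSMC code such as Bird's.

```python
import numpy as np

def hard_sphere_collide(v1, v2, rng):
    """Post-collision velocities for a hard-sphere pair (equal masses).

    The center-of-mass velocity and the relative speed are preserved; the
    relative velocity is redirected uniformly over the unit sphere.
    """
    vcm = 0.5 * (v1 + v2)
    g = np.linalg.norm(v1 - v2)
    cos_t = 2.0 * rng.random() - 1.0          # uniform direction on the sphere
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = 2.0 * np.pi * rng.random()
    g_new = g * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return vcm + 0.5 * g_new, vcm - 0.5 * g_new

rng = np.random.default_rng(3)
v = rng.normal(0.0, 300.0, size=(100, 3))     # Maxwellian-like cell sample (m/s)
p0, e0 = v.sum(axis=0), (v**2).sum()          # momentum and kinetic energy
for _ in range(200):                          # collide random distinct pairs
    i, j = rng.choice(100, size=2, replace=False)
    v[i], v[j] = hard_sphere_collide(v[i], v[j], rng)
p1, e1 = v.sum(axis=0), (v**2).sum()
```

Exact conservation per collision is what lets DSMC relax an arbitrary velocity distribution toward the Maxwellian without accumulating drift, and it is a useful unit test for any DSMC implementation.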
A Modified SPH Method for Dynamic Failure Simulation of Heterogeneous Material
Directory of Open Access Journals (Sweden)
G. W. Ma
2014-01-01
A modified smoothed particle hydrodynamics (SPH) method is applied to simulate the failure process of heterogeneous materials. An elastoplastic damage model based on an extended form of the unified twin shear strength (UTSS) criterion is adopted. Polycrystalline modeling is introduced to generate the artificial microstructure of the specimen for dynamic simulation of the Brazilian splitting test and the uniaxial compression test. The strain-rate effect on the predicted dynamic tensile and compressive strength is discussed. The final failure patterns and the dynamic strength increments demonstrate good agreement with experimental results. It is illustrated that the polycrystalline modeling approach combined with the SPH method is promising for simulating more complex failure processes of heterogeneous materials.
[COMPARATIVE CHARACTERISTIC OF VARIOUS METHODS OF SIMULATION OF BILIARY PERITONITIS IN EXPERIMENT].
Nichitaylo, M Yu; Furmanov, Yu O; Gutsulyak, A I; Savytska, I M; Zagriychuk, M S; Goman, A V
2016-02-01
In an experiment on rabbits, a comparative analysis of various methods of simulating biliary peritonitis was conducted. In 6 animals, biliary peritonitis was simulated by perforation of the gallbladder; local serous-fibrinous peritonitis occurred in 50% of them. In 7 animals, biliary peritonitis was simulated by intraabdominal injection of sterile medical bile in a volume of 5-40 ml; diffuse peritonitis with exudate and fibrin deposition did not occur. The most effective method proved to be intraabdominal injection of bile together with an E. coli culture at a rate of 0.33 McFarland microbial bodies (1.0 x 10^8 CFU/ml) per 1 kg of animal body mass. Diffuse biliary peritonitis occurred in all 23 animals: serous-fibrinous in 17 (76%) and purulent-fibrinous in 6 (24%).
Work in process level definition: a method based on computer simulation and Electre TRI
Directory of Open Access Journals (Sweden)
Isaac Pergher
2014-09-01
This paper proposes a method for defining the levels of work in process (WIP) in productive environments managed by constant work in process (CONWIP) policies. The proposed method combines the approaches of computer simulation and Electre TRI to support estimation of the adequate level of WIP and is presented in eighteen steps. The paper also presents an application example, performed at a metalworking company. The research method is based on computer simulation, supported by quantitative data analysis. The main contribution of the paper is its provision of a structured way to define inventories according to demand. With this method, the authors hope to contribute to the establishment of better capacity plans in production environments.
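The quantity such a simulation step estimates, throughput as a function of the WIP cap, can be sketched with a tandem-queue recursion under a CONWIP release rule: a new job is released only when a job leaves the last station, so the line never holds more than `wip` jobs. This generic sketch is not the paper's eighteen-step method; station counts and service distributions are illustrative.

```python
import numpy as np

def conwip_throughput(wip, n_stations=3, n_jobs=5000, mean_service=1.0, seed=7):
    """Throughput of a serial line under a CONWIP release policy (sketch).

    Completion times follow the tandem-queue recursion
        C[k, m] = max(C[k, m-1], C[k-1, m]) + S[k, m],
    with job k released no earlier than job k-wip leaving the last station.
    """
    rng = np.random.default_rng(seed)
    S = rng.exponential(mean_service, size=(n_jobs, n_stations))
    C = np.zeros((n_jobs, n_stations))
    for k in range(n_jobs):
        release = C[k - wip, -1] if k >= wip else 0.0  # CONWIP card returns
        prev = release
        for m in range(n_stations):
            ahead = C[k - 1, m] if k > 0 else 0.0      # station still busy?
            prev = max(prev, ahead) + S[k, m]
            C[k, m] = prev
    return n_jobs / C[-1, -1]

th_low = conwip_throughput(wip=1)    # one job at a time: roughly 1/3 here
th_high = conwip_throughput(wip=8)   # higher, but below the bottleneck rate
```

Sweeping `wip` over candidate levels produces the throughput-versus-WIP curve whose diminishing returns an Electre TRI sorting step can then classify against demand requirements.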
The simulation of Lamb waves in a cracked plate using the scaled boundary finite element method.
Gravenkamp, Hauke; Prager, Jens; Saputra, Albert A; Song, Chongmin
2012-09-01
The scaled boundary finite element method is applied to the simulation of Lamb waves for ultrasonic testing applications. With this method, the general elastodynamic problem is solved, while only the boundary of the domain under consideration has to be discretized. The reflection of the fundamental Lamb wave modes from cracks of different geometry in a steel plate is modeled. A test problem is compared with commercial finite element software, showing the efficiency and convergence of the scaled boundary finite element method. A special formulation of this method is utilized to calculate dispersion relations for plate structures. For the discretization of the boundary, higher-order elements are employed to improve the efficiency of the simulations. The simplicity of mesh generation of a cracked plate for a scaled boundary finite element analysis is illustrated.
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Spietz, Henrik J.; Walther, Jens Honore
2014-01-01
In recent work we have developed a new FFT-based Poisson solver, which uses regularized Green's functions to obtain arbitrary high-order convergence to the unbounded Poisson equation. The high-order Poisson solver has been implemented in an unbounded particle-mesh based vortex method, which uses a re-meshing of the vortex particles to ensure the convergence of the method. Furthermore, we use a re-projection of the vorticity field to include the constraint of a divergence-free stream function, which is essential for the underlying Helmholtz decomposition and ensures a divergence-free vorticity field. The high-order, unbounded particle-mesh based vortex method is used to simulate the instability, transition to turbulence, and eventual destruction of a single vortex ring. From the simulation data, a novel method for analyzing the dynamics of the enstrophy is presented, based on the alignment of the vorticity vector.
Parallel simulation of dam-break flow by OpenMP-based SPH method
Luo, Zhao; Wu, Qihe; Zhang, Lei
2017-10-01
Smoothed particle hydrodynamics (SPH), a Lagrangian mesh-free particle numerical method, is suitable for simulating strong-impact and large-deformation problems. In the method, large numbers of particles ensure high precision; however, as the particle count increases, computational efficiency becomes a challenge for applying the method to engineering practice. OpenMP, a portable shared-memory parallel programming model, is an effective way to improve the efficiency of the SPH algorithm. In this paper, dam-break flow is simulated by the SPH method; the vortex centre is identified, and the role of the numerical technique is examined. Two parallel schemes for the SPH algorithm are also introduced, and the speedup ratios with respect to the number of particles and threads are reported.
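The parallelism OpenMP exploits in an SPH code is visible in the summation-density loop: each particle's density is an independent reduction over its neighbors, so the outer loop has no write conflicts. A minimal 1D sketch follows (cubic spline kernel, uniform particles); in a compiled implementation, the outer loop is exactly what an OpenMP `parallel for` pragma would distribute across threads.

```python
import numpy as np

def cubic_spline_1d(q, h):
    """Standard 1D cubic spline SPH kernel, normalization 2/(3h)."""
    sigma = 2.0 / (3.0 * h)
    w = np.zeros_like(q)
    inner = q < 1.0
    outer = (q >= 1.0) & (q < 2.0)
    w[inner] = 1.0 - 1.5 * q[inner]**2 + 0.75 * q[inner]**3
    w[outer] = 0.25 * (2.0 - q[outer])**3
    return sigma * w

def sph_density(x, mass, h):
    """Summation density rho_i = sum_j m W(|x_i - x_j| / h).

    Each iteration writes only rho[i], so the loop is trivially
    parallelizable (the OpenMP case in a compiled code).
    """
    rho = np.empty_like(x)
    for i in range(len(x)):                 # independent per-particle reduction
        q = np.abs(x - x[i]) / h
        rho[i] = mass * cubic_spline_1d(q, h).sum()
    return rho

dx = 0.1
x = np.arange(100) * dx                     # uniform particle lattice
rho = sph_density(x, mass=1.0 * dx, h=1.2 * dx)
# interior particles should recover the reference density mass/dx = 1.0
```

Force loops are harder to parallelize than this one because symmetric pairwise updates create write conflicts, which is why SPH codes often choose between a duplicated-computation scheme and a reduction scheme, mirroring the two parallel schemes compared in the paper.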
Development of a Hybrid RANS/LES Method for Compressible Mixing Layer Simulations
Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli
2001-01-01
A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in exhaust systems of modern-day aircraft and also those of hypersonic vehicles currently under development. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS/LES method on stretched, non-Cartesian grids. The hybrid RANS/LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. The actual LES calculations, performed in three spatial directions, indicated an initial vortex shedding followed by rapid transition to turbulence, in agreement with experimental observations.
Sapriza, Gonzalo; Jodar, Jorge; Carrera, Jesús; Gupta, Hoshin V.
2013-04-01
Climate Change Impact Studies (CCIS) for Water Resources Management (WRM) are of crucial importance for the human community, especially for water-scarce Mediterranean-like regions, where the available water is expected to decrease due to climate change. General Circulation Models (GCMs) are among the most valuable tools available to perform CCIS. However, they cannot be applied directly to water resources evaluations due to their coarse spatial resolution and the bias in their simulation of certain outputs, especially precipitation. Downscaling methods have been developed to address this problem by defining statistical relationships between the variables simulated by GCMs and local observations. Once these relationships are defined and tested via post-evaluation during a control period, they are used to generate synthetic time series for the future, based on the different future climate scenarios simulated by the GCMs. For CCIS in WRM, synthetic time series of precipitation and temperature are applied as input variables to run hydrological models and obtain future projections of hydrological response. The main drawbacks of this procedure are: (1) we inevitably have to assume time stationarity of the downscaling parameters (which in principle can vary with climate change), and (2) the downscaling parameterizations are another source of model uncertainty that must be quantified and communicated. Here, we evaluate the sensitivity of hydrological model simulations to assumptions underlying a downscaling method based on a Stochastic Rainfall Generating Process (SRGP). The method is used to demonstrate that exact daily rainfall sequences are not necessary for climate impact assessment, and that the "stochastically equivalent" rainfall sequence simulations provided by the model are both sufficient and provide important added value in terms of realistic assessments of uncertainty. The method also establishes which parameters of the rainfall generating
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods, considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior, including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
Parallel shooting methods for finding steady state solutions to engine simulation models
DEFF Research Database (Denmark)
Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik
2007-01-01
Parallel single- and multiple-shooting methods were tested for finding periodic steady-state solutions to a Stirling engine model. The model was used to illustrate features of the methods and possibilities for optimisation. Performance was measured using simulation of an experimental data set as the test case. A parallel speedup factor of 23 on 33 processors was achieved with multiple shooting, but fast transients at the beginnings of sub-intervals caused significant overhead for the multiple-shooting methods and limited the best speedup to 3.8 relative to the fastest sequential method: single shooting with reduced dimension of the boundary value problem.
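The shooting idea in the abstract can be illustrated on a toy problem, entirely separate from the authors' Stirling engine model: integrate a forced ODE over one period and iterate on the initial state until the trajectory closes on itself, x(T) = x(0). The sketch below uses forward Euler and a secant iteration on the periodicity residual; the ODE, step counts and tolerances are illustrative assumptions.

```python
import math

def integrate(x0, T, steps=20000):
    """Forward-Euler integration of the toy ODE x' = -x + sin(t) over [0, T]."""
    dt = T / steps
    x, t = x0, 0.0
    for _ in range(steps):
        x += dt * (-x + math.sin(t))
        t += dt
    return x

def single_shooting(T, x_guess=0.0, tol=1e-9, max_iter=50):
    """Secant iteration on the periodicity residual r(x0) = x(T; x0) - x0.
    A root of r is an initial state the trajectory returns to after one
    period, i.e. the periodic steady state."""
    x0, x1 = x_guess, x_guess + 1.0
    r0, r1 = integrate(x0, T) - x0, integrate(x1, T) - x1
    for _ in range(max_iter):
        if abs(r1) < tol or r1 == r0:
            break
        x0, x1 = x1, x1 - r1 * (x1 - x0) / (r1 - r0)
        r0, r1 = r1, integrate(x1, T) - x1
    return x1

# The exact periodic solution of x' = -x + sin t is x(t) = (sin t - cos t)/2,
# so the periodic initial state is x(0) = -0.5.
x_star = single_shooting(2.0 * math.pi)
```

Multiple shooting, as tested in the paper, splits [0, T] into sub-intervals with their own unknown initial states (which is what enables the parallelism, and what makes the sub-interval transients costly).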
Directory of Open Access Journals (Sweden)
Latimer Nicholas
2011-01-01
Background: We investigate methods used to analyse the results of clinical trials with survival outcomes in which some patients switch from their allocated treatment to another trial treatment. These included simple methods, commonly used in the medical literature, which may be subject to selection bias if switching patients are not typical of the population as a whole. Methods that attempt to adjust the estimated treatment effect, either through adjustment to the hazard ratio or via accelerated failure time (AFT) models, were also considered. A simulation study was conducted to assess the performance of each method in a number of different scenarios. Results: 16 different scenarios were identified, differing by the proportion of patients switching, the underlying prognosis of switchers and the size of the true treatment effect. 1000 datasets were simulated for each scenario and all methods applied. Selection bias was observed in the simple methods when the difference in survival between switchers and non-switchers was large. A number of methods, particularly the AFT method of Branson and Whitehead, were found to give less biased estimates of the true treatment effect in these situations. Conclusions: Simple methods are often not appropriate to deal with treatment switching. Alternative approaches, such as the Branson and Whitehead method to adjust for switching, should be considered.
Flow simulation of a Pelton bucket using finite volume particle method
Vessaz, C.; Jahanbakhsh, E.; Avellan, F.
2014-03-01
The objective of the present paper is to perform an accurate numerical simulation of the high-speed water jet impinging on a Pelton bucket. To reach this goal, the Finite Volume Particle Method (FVPM) is used to discretize the governing equations. FVPM is an arbitrary Lagrangian-Eulerian method, which combines attractive features of Smoothed Particle Hydrodynamics and the conventional mesh-based Finite Volume Method. This method is able to satisfy free-surface and no-slip wall boundary conditions precisely. The fluid flow is assumed weakly compressible and the wall boundary is represented by one layer of particles located on the bucket surface. In the present study, simulations of the flow in a stationary bucket are investigated for three different impinging angles: 72°, 90° and 108°. The particle resolution is first validated by a convergence study. Then, the FVPM results are validated against available experimental data and conventional grid-based Volume Of Fluid simulations. It is shown that the wall pressure field is in good agreement with the experimental and numerical data. Finally, the torque evolution and water sheet location are presented for a simulation of five rotating Pelton buckets.
Development of Simulation Methods in the Gibbs Ensemble to Predict Polymer-Solvent Phase Equilibria
Gartner, Thomas; Epps, Thomas; Jayaraman, Arthi
Solvent vapor annealing (SVA) of polymer thin films is a promising method for post-deposition control of polymer film morphology. The large number of important parameters relevant to SVA (polymer, solvent, and substrate chemistries; incoming film condition; annealing and solvent evaporation conditions) makes systematic experimental study of SVA a time-consuming endeavor, motivating the application of simulation and theory to the SVA system to provide both mechanistic insight and scans of this wide parameter space. However, to rigorously treat the phase equilibrium between polymer film and solvent vapor while still probing the dynamics of SVA, new simulation methods must be developed. In this presentation, we compare two methods to study polymer-solvent phase equilibrium: Gibbs Ensemble Molecular Dynamics (GEMD) and Hybrid Monte Carlo/Molecular Dynamics (Hybrid MC/MD). Liquid-vapor equilibrium results are presented for the Lennard-Jones fluid and for coarse-grained polymer-solvent systems relevant to SVA. We found that the Hybrid MC/MD method is more stable and consistent than GEMD, but GEMD has significant advantages in computational efficiency. We propose that Hybrid MC/MD simulations be used for unfamiliar systems at a few selected conditions, followed by much faster GEMD simulations to map out the remainder of the phase window.
Numerical simulation of hydrodynamic wave loading by a compressible two-phase flow method
Wemmenhove, Rik; Luppes, Roelf; Veldman, Arthur; Bunnik, Tim
2015-01-01
Hydrodynamic wave loading on and in offshore structures is studied by carrying out numerical simulations. Particular attention is paid to complex hydrodynamic phenomena such as wave breaking and air entrapment. The applied CFD method, ComFLOW, solves the Navier–Stokes equations with an improved
DEFF Research Database (Denmark)
Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław
2017-01-01
We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process exhibits little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially for the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. A 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
Simulation of Experimental Parameters of RC Beams by Employing the Polynomial Regression Method
Sayin, B.; Sevgen, S.; Samli, R.
2016-07-01
A numerical model based on the polynomial regression method is developed to simulate the mechanical behavior of reinforced concrete beams strengthened with a carbon-fiber-reinforced polymer and subjected to four-point bending. The results obtained are in good agreement with data from laboratory tests.
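The fitting step behind such a model, least-squares polynomial regression via the normal equations, can be sketched in a few lines. This is not the authors' beam model; the load-deflection pairs below are hypothetical numbers for illustration only.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (V^T V) c = V^T y,
    solved by Gaussian elimination; returns c with y ~ c[0] + c[1]*x + ..."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

def predict(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

# Hypothetical load (kN) vs. midspan deflection (mm) pairs, for illustration.
load = [0, 10, 20, 30, 40, 50]
defl = [0.0, 1.1, 2.5, 4.2, 6.3, 8.8]
c = polyfit(load, defl, 2)
```

For higher degrees the normal equations become ill-conditioned; a QR-based fit would be the more robust choice in practice.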
Simulations of the Yawed MEXICO Rotor Using a Viscous-Inviscid Panel Method
DEFF Research Database (Denmark)
Ramos García, Néstor; Sørensen, Jens Nørkær; Shen, Wen Zhong
2014-01-01
In the present work the viscous-inviscid interactive model MIRAS is used to simulate flows past the MEXICO rotor in yawed conditions. The solver is based on an unsteady three-dimensional free wake panel method which uses a strong viscous-inviscid interaction technique to account for the viscous...
New method of processing heat treatment experiments with numerical simulation support
Kik, T.; Moravec, J.; Novakova, I.
2017-08-01
In this work, the benefits of combining modern software for numerical simulation of welding processes with laboratory research are described. A new method of processing heat-treatment experiments is proposed that yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable with the cooling of bigger parts. Results from this method of testing make the boundary conditions used for the real cooling process more accurate, and can also be used to improve software databases and to optimize computational models. The aim is a more precise computation of temperature fields for large-scale hardened parts, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximal thickness of the processed part, and given cooling conditions. The paper also presents a comparison of standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results, showing how even small changes mainly influence the distributions of temperature, metallurgical phases, hardness and stress. The experiment also yields not only input data and data enabling optimization of the computational model but, at the same time, verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.
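Extracting a heat transfer coefficient from a measured cooling curve can be illustrated with the textbook lumped-capacitance model, a drastic simplification of the authors' procedure: for Newtonian cooling, ln(T - T_inf) decays linearly in time, so h follows from a least-squares slope. All numbers below are synthetic.

```python
import math

def estimate_h(times, temps, t_inf, c_per_area):
    """Lumped-capacitance sketch: Newtonian cooling obeys
    dT/dt = -(h / C) * (T - T_inf) with C = rho*c*V/A, so ln(T - T_inf) is
    linear in time and h = -slope * C from a least-squares fit."""
    ys = [math.log(T - t_inf) for T in temps]
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(times, ys))
             / sum((x - xbar) ** 2 for x in times))
    return -slope * c_per_area

# Synthetic cooling curve with assumed h = 50 W/(m^2 K) and C = 1000 J/(m^2 K).
h_true, C, t_inf = 50.0, 1000.0, 20.0
times = [2.0 * i for i in range(10)]            # seconds
temps = [t_inf + 800.0 * math.exp(-h_true / C * t) for t in times]
h_est = estimate_h(times, temps, t_inf, C)
```

Real quench media give a strongly temperature-dependent h (boiling regimes), which is precisely why the paper determines h as a function of temperature rather than a single constant.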
Simulation of a soil loosening process by means of the modified distinct element method
Momuzu, M.; Oida, A.; Yamazaki, M.; Koolen, A.J.
2002-01-01
We apply the Distinct Element Method (DEM) to analyze the dynamic behavior of soil. However, the conventional DEM model for calculation of contact forces between elements has some problems; for example, the movement of elements is too discrete to simulate real soil particle movement. Therefore, we
Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water
Gergely, John Robert
2009-01-01
Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups of 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedups we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
Pustejovsky, James E.; Runyon, Christopher
2014-01-01
Direct observation recording procedures produce reductive summary measurements of an underlying stream of behavior. Previous methodological studies of these recording procedures have employed simulation methods for generating random behavior streams, many of which amount to special cases of a statistical model known as the alternating renewal…
Grover, Anita; Lam, Tai Ning; Hunt, C. Anthony
2008-01-01
We present a simulation tool to aid the study of basic pharmacology principles. By taking advantage of the properties of agent-based modeling, the tool facilitates taking a mechanistic approach to learning basic concepts, in contrast to the traditional empirical methods. Pharmacodynamics is a particular aspect of pharmacology that can benefit from…
Jensen, T.B.; Billiet, H.A.H.; Van der Wielen, L.A.M.
2000-01-01
The present invention relates to a method of separating a first solute A and a second solute B using (simulated) moving bed chromatography. According to the present invention at least one of a) a feedstream; and b) a desorbent stream comprises an organic solvent. The use of different solvent liquids
Directory of Open Access Journals (Sweden)
Zijia Wang
2012-11-01
The emergency evacuation test method for a rail transit station not only affects the operational safety of the station, but also has a significant influence on the scale and cost of the station. A reasonable test method should guarantee the safety of evacuation while ensuring that the investment is neither excessive nor too conservative. The paper compares and analyzes the differences among the existing emergency evacuation test methods for rail stations in China and other regions with respect to evacuation load, evacuation time calculation, the capacity of egress components, etc. Based on a field survey, the desired-velocity distributions of pedestrians in various station facilities and the capacities of egress components were obtained, and the parameters of the pedestrian simulation tool were then calibrated. Selecting a station as a case study, an evacuation simulation model was established in which five evacuation scenarios were set according to different specifications, and the simulation results were analyzed. Based on this analysis, several modifications to the current emergency evacuation test method in the design manual are proposed, including taking into account the section passenger volume and the walking time on escalators and stairs of the platform, and treating the escalator most critical to evacuation as out of service.
An Improved Ghost-cell Immersed Boundary Method for Compressible Inviscid Flow Simulations
Chi, Cheng
2015-05-01
This study presents an improved ghost-cell immersed boundary approach to represent a solid body in compressible flow simulations. In contrast to commonly used approaches, in the present work ghost cells are mirrored through the boundary, described using a level-set method, to farther image points, incorporating a higher-order extra-/interpolation scheme for the ghost-cell values. In addition, a shock sensor is introduced to deal with image points near discontinuities in the flow field. Adaptive mesh refinement (AMR) is used to improve the representation of the geometry efficiently. The improved ghost-cell method is validated against five test cases: (a) double Mach reflections on a ramp, (b) supersonic flow in a wind tunnel with a forward-facing step, (c) supersonic flow over a circular cylinder, (d) smooth Prandtl-Meyer expansion flows, and (e) steady shock-induced combustion over a wedge. It is demonstrated that the improved ghost-cell method achieves second-order accuracy in the L1 norm and better than first-order accuracy in the L∞ norm. Direct comparisons against the cut-cell method demonstrate that the improved ghost-cell method is almost equally accurate, with better efficiency for boundary representation, in high-fidelity compressible flow simulations. Implementation of the improved ghost-cell method in reacting Euler flows further validates its general applicability for compressible flow simulations.
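The mirroring step can be illustrated in one dimension, a drastic simplification of the thesis's level-set, higher-order scheme: reflect the ghost-cell centre through the wall to an image point, interpolate the field there from neighbouring cells, and set the ghost value so a Dirichlet condition holds at the wall. All grid values below are hypothetical.

```python
def ghost_cell_value(u, dx, wall, i_ghost, u_wall=0.0):
    """Set a ghost-cell value by mirroring: reflect the ghost-cell centre
    through the wall to an image point, linearly interpolate the field there,
    and choose the ghost value so that the Dirichlet condition
    u(wall) = u_wall holds midway between ghost and image points."""
    x_ghost = (i_ghost + 0.5) * dx
    x_image = 2.0 * wall - x_ghost          # reflection through the wall
    j = int(x_image / dx - 0.5)             # cell whose centre lies left of image
    w = (x_image - (j + 0.5) * dx) / dx     # linear interpolation weight
    u_image = (1.0 - w) * u[j] + w * u[j + 1]
    return 2.0 * u_wall - u_image

# Cell centres at (i + 0.5)*dx; the wall sits at x = 0.3 (a cell face), so
# cell 2 is the solid-side cell adjacent to the wall.  Fluid field u(x) = x - 0.3.
dx = 0.1
u = [0.0, 0.0, 0.0, 0.05, 0.15, 0.25, 0.35]
g = ghost_cell_value(u, dx, wall=0.3, i_ghost=2)  # linear extension gives -0.05
```

The thesis's scheme additionally uses higher-order interpolation and a shock sensor to avoid interpolating across discontinuities; the linear stencil here is the simplest stand-in.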
A vascular image registration method based on network structure and circuit simulation.
Chen, Li; Lian, Yuxi; Guo, Yi; Wang, Yuanyuan; Hatsukami, Thomas S; Pimentel, Kristi; Balu, Niranjan; Yuan, Chun
2017-05-02
Image registration is an important research topic in the field of image processing. Applying image registration to vascular images allows multiple images to be strengthened and fused, which has practical value in disease detection, clinically assisted therapy, etc. However, it is hard to register vascular structures that exhibit high noise and large differences with an efficient and effective method. Unlike common image registration methods based on area or features, which are sensitive to distortion and uncertainty in vascular structure, we propose a novel registration method based on network structure and circuit simulation. Vessel images are transformed into graph networks and segmented into branches to reduce the computational complexity. Weighted graph networks are then converted into circuits, in which the node voltages of the circuit, reflecting the vessel structure, are used for node registration. Experiments on two-dimensional and three-dimensional simulated and clinical image sets showed the success of the proposed method in registration. The proposed vascular image registration method based on network structure and circuit simulation is stable, fault-tolerant and efficient, and is a useful complement to current mainstream image registration methods.
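The core idea, treating a weighted vessel graph as a resistor network and reading node voltages off a Laplacian solve, can be sketched independently of the authors' pipeline. The graph, conductances and injection sites below are illustrative assumptions, not the paper's data.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, small systems)."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def node_voltages(n, edges, source, ground):
    """Treat a weighted graph as a resistor network (edge weight = conductance).
    Inject unit current at `source`, hold `ground` at 0 V, and solve the
    Laplacian system L v = i for the node voltages."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, g in edges:
        L[i][i] += g
        L[j][j] += g
        L[i][j] -= g
        L[j][i] -= g
    rhs = [0.0] * n
    rhs[source] = 1.0
    # Ground one node so the otherwise-singular Laplacian becomes invertible.
    L[ground] = [1.0 if k == ground else 0.0 for k in range(n)]
    rhs[ground] = 0.0
    return solve(L, rhs)

# Toy 3-node chain: 0 --(g=1)-- 1 --(g=1)-- 2, current in at 0, grounded at 2.
v = node_voltages(3, [(0, 1, 1.0), (1, 2, 1.0)], source=0, ground=2)
```

The resulting voltage profile is a structural signature of the graph, which is what makes it usable for matching nodes across two vessel networks.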
Kofke, David A.; Cummings, Peter T.
The precision of several methods for computing the chemical potential by molecular simulation is investigated. The study does not apply molecular simulation to the analysis but instead works with models of the simulation process. These models enable the variance of the chemical potential to be computed accurately and very quickly, and thereby permit the methods (free-energy perturbation, expanded ensembles, thermodynamic integration, and histogram-distribution methods) to be optimized and compared over a range of densities. The study focuses exclusively on the hard-sphere model. This model is simple and well characterized, yet it exhibits the essential features that make the chemical potential calculation difficult; arguments are presented to support the broader applicability of the study. The severe asymmetry of particle insertion against particle deletion is highlighted, and it is shown that any staged free-energy perturbation method with a 'deletion' component is highly prone to systematic error. More generally this implies that such methods should always be staged in the direction of decreasing entropy. Other findings show that uniform sampling is not optimal for umbrella-sampling and expanded-ensemble applications, although it remains a good rule of thumb for tuning these approaches. Among the techniques we study, optimally staged insertion and the distribution-histogram methods are the most efficient and precise. The latter is effective only when used in an interpolative fashion, and we identify it as the most likely route to further progress in the field.
Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C
2017-10-01
Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the American Council of Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3
Upscaled Lattice Boltzmann Method for Simulations of Flows in Heterogeneous Porous Media
Li, Jun
2017-02-16
An upscaled Lattice Boltzmann Method (LBM) for flow simulations in heterogeneous porous media at the Darcy scale is proposed in this paper. In the Darcy-scale simulations, the Shan-Chen force model is used to simplify the algorithm. The proposed upscaled LBM uses coarser grids to represent the average effects of fine-grid simulations. Each coarse cell represents a subdomain of the fine-grid discretization, and an effective permeability based on reduced-order models is introduced as the grid is coarsened. The effective permeability is computed using solutions of local problems (e.g., by performing local LBM simulations on the fine grids using the original permeability distribution) and is used on the coarse grids in the upscaled simulations. The upscaled LBM reduces the computational cost of the existing LBM and transfers information between the different scales. The results of coarse-grid, reduced-order simulations agree very well with averaged results obtained using a fine grid.
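The coarsening step can be sketched in one dimension. The paper derives each coarse-cell value from a local fine-grid LBM solve; as a stand-in here, series flow across a 1D block gives the harmonic mean of its cells' permeabilities. The permeability field below is hypothetical.

```python
def upscale_permeability(k_fine, block):
    """Coarsen a 1D fine-grid permeability field by blockwise averaging.
    Stand-in for the paper's local LBM solves: for flow in series across a
    block, the effective permeability is the harmonic mean of its cells."""
    n_blocks = len(k_fine) // block
    coarse = []
    for b in range(n_blocks):
        cells = k_fine[b * block:(b + 1) * block]
        coarse.append(len(cells) / sum(1.0 / k for k in cells))
    return coarse

# Hypothetical fine-grid permeabilities, coarsened 4-to-1.
k_fine = [1.0, 1.0, 4.0, 4.0, 2.0, 2.0, 2.0, 2.0]
k_coarse = upscale_permeability(k_fine, block=4)
```

For flow parallel to layering the arithmetic mean would apply instead; the local-solve approach in the paper handles general heterogeneity without assuming either limit.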
Itakura, Kota; Hatakeyama, Go; Akiyoshi, Masanori; Komoda, Norihisa
Recently, various tools for multi-agent simulation have been proposed. However, with such simulation tools, analysts who do not have programming skills spend a lot of time developing programs, because the notation of simulation models is not defined sufficiently and the programming language varies between tools. To solve this problem, a programming environment that defines the notation of the simulation model has been proposed. In this environment, analysts can design a simulation with a graph representation and obtain the program code without writing programs. However, it is difficult to find errors that cause unintended behavior in a simulation. Therefore, we propose a support method, in the form of a model debugger, which helps users find errors. The debugger generates candidate errors using a user's report of unintended behavior based on "typical report patterns". Candidate errors are extracted from a "tree structure of error-inducing factors" that consists of source patterns of errors. In this paper, we describe experiments that compare the time needed for examinees to find errors. The experimental results show that the time to find errors is shortened by using our model debugger.
GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method
Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu
2011-07-01
Graphics Processing Units (GPUs), originally developed for real-time, high-definition 3D graphics in computer games, now provide great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
Alapati, Suresh; Che, Woo Seong; Mannoor, Madhusoodanan; Suh, Yong Kweon
2016-06-01
In this paper, we present the results obtained from the simulation of particle motion induced by the fluid flow driven by an array of beating artificial cilia inside a micro-channel. A worm-like-chain model is used to simulate the elastic cilia, and the lattice Boltzmann equation is used to compute the fluid flow. We employ a harmonic force at the extreme tip of each cilium to actuate it. Our simulation methods are first validated by applying them to the motion of a single cilium and a freely falling sphere. After validation, we simulate the fluid flow generated by an array of beating cilia and find that a maximum flow rate is achieved at an optimum sperm number. Next, we simulate the motion of a neutrally buoyant spherical particle at this optimum sperm number by tracking the particle motion with a smoothed profile method. We address the effect of the following parameters on the particle velocity: the gap between cilia and particle, the particle size, the cilia density, and the presence of an array of intermediate particles.
Adelman, Joshua L; Grabe, Michael
2015-04-14
Ion channels are responsible for a myriad of fundamental biological processes via their role in controlling the flow of ions through water-filled membrane-spanning pores in response to environmental cues. Molecular simulation has played an important role in elucidating the mechanism of ion conduction, but connecting atomistically detailed structural models of the protein to electrophysiological measurements remains a broad challenge due to the computational cost of reaching the necessary time scales. Here, we introduce an enhanced sampling method for simulating the conduction properties of narrow ion channels using the weighted ensemble (WE) sampling approach. We demonstrate the application of this method to calculate the current–voltage relationship as well as the nonequilibrium ion distribution at steady state of a simple model ion channel. By direct comparisons with long brute-force simulations, we show that the WE simulations rigorously reproduce the correct long-time-scale kinetics of the system and are capable of determining these quantities using significantly less aggregate simulation time under conditions where permeation events are rare.
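The core WE bookkeeping step, splitting and merging weighted walkers within bins so that rare regions stay populated, can be sketched as follows. The bin width, walker count per bin, and the weighted-resampling merge scheme here are illustrative choices, not the authors' implementation.

```python
import random

def we_resample(walkers, n_per_bin=4, bin_width=1.0):
    """One weighted-ensemble resampling step (illustrative sketch).

    walkers: list of (position, weight) pairs. Within each occupied bin,
    high-weight walkers are split and low-weight walkers merged so the bin
    holds exactly n_per_bin walkers, while total weight is conserved.
    """
    bins = {}
    for x, w in walkers:
        bins.setdefault(int(x // bin_width), []).append((x, w))
    out = []
    for members in bins.values():
        total = sum(w for _, w in members)
        xs = [x for x, _ in members]
        ws = [w for _, w in members]
        # pick n_per_bin survivors with probability proportional to weight,
        # then give each survivor an equal share of the bin's total weight
        survivors = random.choices(xs, weights=ws, k=n_per_bin)
        out.extend((x, total / n_per_bin) for x in survivors)
    return out
```

Between resampling steps each walker would be propagated by ordinary dynamics; the resampling itself never alters the statistical weight carried by each bin, which is why WE reproduces unbiased kinetics.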
ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method
Inampudi, Ravi
2016-01-01
This paper presents an evolutionary approach to simulating a cluster of four Control Moment Gyros (CMGs) on the International Space Station (ISS) using a common-sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of the Training Systems for the 21st Century simulator, which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next, different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM yields the special-case EOM for the ISS's double-gimbaled, fixed-speed CMGs. CMG simulation development using the agile development method is presented, in which the customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing, and acceptance testing. At the end of each iteration, the set of features implemented in that iteration is demonstrated to the flight controllers, thus creating a short feedback loop and helping create adaptive development cycles. Unified Modeling Language (UML) tools are used to illustrate the user stories, class designs, and sequence diagrams. This incremental approach to mathematical modeling and simulation of the CMG subsystem involved the development team and the customer early on, improving the quality of the working CMG system in each iteration and helping the team accurately predict the cost, schedule, and delivery of the software.
Numerical methods for simulating blood flow at macro, micro, and multi scales.
Imai, Yohsuke; Omori, Toshihiro; Shimogonya, Yuji; Yamaguchi, Takami; Ishikawa, Takuji
2016-07-26
In the past decade, numerical methods for the computational biomechanics of blood flow have progressed to overcome difficulties in diverse applications from cellular to organ scales. Such numerical methods may be classified by the type of computational mesh used for the fluid domain into fixed-mesh methods, moving-mesh (boundary-fitted mesh) methods, and mesh-free methods. The type of computational mesh used is closely related to the characteristics of each method. We herein provide an overview of numerical methods recently used to simulate blood flow at macro and micro scales, with a focus on computational meshes. We also discuss recent progress in the multi-scale modeling of blood flow.
Heat transfer simulation of motorcycle fins under varying velocity using CFD method
Shahril, K.; Mohd Kasim, Nurhayati Binti; Sabri, M.
2013-12-01
A motorcycle engine releases heat to the atmosphere through forced convection. To enhance this heat rejection, fins are provided on the outer surface of the cylinder. The heat transfer rate depends on the vehicle velocity, the fin geometry, and the ambient temperature. Increasing the temperature difference between the object and the environment, increasing the convection heat transfer coefficient, or increasing the surface area of the object increases the heat transfer. Many experimental methods are available in the literature to analyze the effect of these factors on the heat transfer rate. Here, however, CFD analysis is used to simulate the heat transfer of the engine block; ANSYS software is selected to run the simulation.
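The dependence described here can be made concrete with the standard straight-fin result for an adiabatic tip, a textbook formula rather than output of the paper's CFD model; all input values below are hypothetical.

```python
import math

def fin_heat_rate(h, perimeter, area_c, k, length, t_base, t_amb):
    """Heat rate [W] from a straight fin with an adiabatic tip:

        q = sqrt(h P k A_c) * (T_b - T_amb) * tanh(m L),
        m = sqrt(h P / (k A_c))

    h rises with vehicle velocity, so q rises with speed, as the text notes.
    """
    m = math.sqrt(h * perimeter / (k * area_c))
    return (math.sqrt(h * perimeter * k * area_c)
            * (t_base - t_amb) * math.tanh(m * length))
```

Doubling the convection coefficient (e.g., at higher vehicle speed) increases q, but by less than a factor of two, because fin efficiency drops as h grows.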
DEFF Research Database (Denmark)
Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław
2017-01-01
We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short bursts of path simulations with extrapolation of a number of macroscopic state variables forward in time. The new microscopic state, consistent with the extrapolated variables, is obtained by a matching operator that minimises the perturbation caused by the extrapolation. We provide a proof...
Cho, G. S.
2017-09-01
For performance optimization of refrigerated warehouses, design parameters are selected on the basis of physical parameters, such as the number of equipment units and aisles and the forklift speeds, for ease of modification. This paper provides a comprehensive framework for the system design of refrigerated warehouses. We propose a modeling approach aimed at simulation optimization to meet the required design specifications using Design of Experiments (DOE), and we analyze the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.
Shuttle vertical fin flowfield by the direct simulation Monte Carlo method
Hueser, J. E.; Brock, F. J.; Melfi, L. T.
1985-01-01
The flow properties in a model flowfield simulating the shuttle vertical fin were determined using the Direct Simulation Monte Carlo method. The case analyzed corresponds to an orbit height of 225 km with the freestream velocity vector orthogonal to the fin surface. Contour plots of the flowfield distributions of density, temperature, velocity, and flow angle are presented. The results also include the mean molecular collision frequency (which reaches 1/60 sec near the surface), the collision frequency density (which approaches 7 × 10^18 per m^3·s at the surface), and the mean free path (19 m at the surface).
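The collision quantities reported above are tied together by kinetic theory. A hard-sphere sketch of the relevant formulas follows; the molecular diameter, temperature, mass, and number density used in the example are made-up stand-ins, not the paper's flowfield data.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def mean_free_path(n, d):
    """Hard-sphere mean free path: lambda = 1 / (sqrt(2) * n * pi * d^2),
    where n is number density [1/m^3] and d the molecular diameter [m]."""
    return 1.0 / (math.sqrt(2.0) * n * math.pi * d * d)

def collision_frequency(n, d, temp, mass):
    """Mean collision frequency: nu = v_mean / lambda,
    with mean thermal speed v_mean = sqrt(8 k T / (pi m))."""
    v_mean = math.sqrt(8.0 * K_B * temp / (math.pi * mass))
    return v_mean / mean_free_path(n, d)
```

At orbital altitudes the number density is so low that the mean free path reaches metres to tens of metres, which is why a continuum solver fails there and DSMC is the method of choice.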
Three-dimensional hypersonic rarefied flow calculations using direct simulation Monte Carlo method
Celenligil, M. Cevdet; Moss, James N.
1993-01-01
A summary of three-dimensional simulations of hypersonic rarefied flows is presented, undertaken to understand the highly nonequilibrium flows about space vehicles entering the Earth's atmosphere and to realistically estimate the aerothermal loads. Calculations are performed using the direct simulation Monte Carlo method with a five-species reacting gas model, which accounts for rotational and vibrational internal energies. Results are obtained for the external flows about various bodies in the transitional flow regime. For the cases considered, convective heating, flowfield structure, and overall aerodynamic coefficients are presented, and comparisons are made with the available experimental data. The agreement between the calculated and measured results is very good.
Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald
2017-12-01
An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Deterministic approaches and random field methods for modelling rock mass heterogeneity are known to be limited in simulating the spatial variation and spatial pattern of geomechanical properties. Although applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models of the spatial variability of rock mass geomechanical properties using a geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of the uncertainty in the spatial variability of rock mass properties in different areas of the pit.
Simulation of regimes of convection and plume dynamics by the thermal Lattice Boltzmann Method
Mora, Peter; Yuen, David A.
2018-02-01
We present 2D simulations using the Lattice Boltzmann Method (LBM) of a fluid in a rectangular box being heated from below and cooled from above. We observe plumes, hot narrow upwellings from the base, and down-going cold chutes from the top. We have varied both the Rayleigh numbers and the Prandtl numbers, respectively from Ra = 10^3 to Ra = 10^10, and Pr = 1 through Pr = 5 × 10^4, leading to Rayleigh-Bénard convection cells at low Rayleigh numbers through to vigorous convection and unstable plumes with pronounced vortices and eddies at high Rayleigh numbers. We conduct simulations with high Prandtl numbers up to Pr = 50,000 to simulate in the inertial regime. We find for cases when Pr ⩾ 100 that we obtain a series of narrow plumes of upwelling fluid with mushroom heads and chutes of downwelling fluid. We also present simulations at a Prandtl number of 0.7 for Rayleigh numbers varying from Ra = 10^4 through Ra = 10^7.5. We demonstrate that the Nusselt number follows a power-law scaling of the form Nu ∼ Ra^γ, where γ = 0.279 ± 0.002, which is consistent with the published result of γ = 0.281 in the literature. These results show that the LBM is capable of reproducing results obtained with classical macroscopic methods such as spectral methods, and demonstrate the great potential of the LBM for studying thermal convection and plume dynamics relevant to geodynamics.
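The scaling exponent quoted above is the slope of a log-log least-squares fit of Nu against Ra. A minimal version of such a fit, run here on synthetic data rather than the paper's simulation output:

```python
import math

def fit_power_law(ra, nu):
    """Least-squares fit of log10(Nu) = gamma * log10(Ra) + c.

    Returns the exponent gamma of Nu ~ Ra^gamma (illustrative fit only,
    not the authors' analysis pipeline)."""
    xs = [math.log10(r) for r in ra]
    ys = [math.log10(v) for v in nu]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

On exact power-law data the fit recovers the exponent to machine precision; on noisy simulation data the residual scatter yields the quoted uncertainty on gamma.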
Validation of population-based disease simulation models: a review of concepts and methods
Directory of Open Access Journals (Sweden)
Sharif Behnam
2010-11-01
Background: Computer simulation models are used increasingly to support public health research and policy, but questions about their quality persist. The purpose of this article is to review the principles and methods for validation of population-based disease simulation models. Methods: We developed a comprehensive framework for validating population-based chronic disease simulation models and used this framework in a review of published model validation guidelines. Based on the review, we formulated a set of recommendations for gathering evidence of model credibility. Results: Evidence of model credibility derives from examining (1) the process of model development, (2) the performance of a model, and (3) the quality of decisions based on the model. Many important issues in model validation are insufficiently addressed by current guidelines. These issues include a detailed evaluation of different data sources, graphical representation of models, computer programming, model calibration, between-model comparisons, sensitivity analysis, and predictive validity. The role of external data in model validation depends on the purpose of the model (e.g., decision analysis versus prediction). More research is needed on the methods of comparing the quality of decisions based on different models. Conclusion: As the role of simulation modeling in population health is increasing and models are becoming more complex, there is a need for further improvements in model validation methodology and common standards for evaluating model credibility.
Matsumoto, Hiroki; Iyono, Atsushi; Yamamoto, Isao; Kohata, Masaki; Okei, Kazuhide; Tsuji, Shuhei; Nakatsuka, Takao; Ochi, Nobuaki
2010-03-01
A compact extensive air shower (EAS) array of eight plastic scintillators, viewed by HAMAMATSU H7195 photomultiplier tubes and covering a total area of 2 m^2, was built on the rooftop of the Faculty of Technology building, Okayama University of Science, and has been operated since April 2006. We have installed a shift register system in our EAS array to record EAS particle arrival times within 5 μs. We have also performed detector simulations based on the database obtained from the AIRES simulator and developed procedures to estimate the primary cosmic ray energy from Linsley's method. Applying this method to our EAS data and the simulation results, we derived the energy spectrum from 10^16 to 10^19.5 eV. Consequently, we obtained a power-law index of -3.2 (+0.46/-0.8) in the primary energy range of 10^16 to 10^18.5 eV, and found that a change around 10^18 eV appeared if the zenith angle distribution of primary cosmic rays was not taken into account. We also showed the improvement in energy resolution obtained by restricting the zenith angle of primary cosmic rays in our simulation, as well as the potential of Linsley's method with a mini array.
A Non-Cut Cell Immersed Boundary Method for Use in Icing Simulations
Sarofeen, Christian M.; Noack, Ralph W.; Kreeger, Richard E.
2013-01-01
This paper describes a computational fluid dynamics method for modelling changes in aircraft geometry due to icing. While an aircraft undergoes icing, the accumulated ice results in a geometric alteration of the aerodynamic surfaces. In computational simulations of icing, it is necessary that the corresponding geometric change is taken into consideration. The method used herein to represent the geometric change due to icing is a non-cut cell Immersed Boundary Method (IBM). Computational cells in a body-fitted grid of a clean aerodynamic geometry that lie inside a predicted ice formation are identified. An IBM is then used to change these cells from active computational cells to cells with the properties of viscous solid bodies. This method has been implemented in FUN3D, the NASA-developed node-centered, finite volume computational fluid dynamics code. The presented capability is tested for two-dimensional airfoils, including a clean airfoil, an iced airfoil, and an airfoil in harmonic pitching motion about its quarter chord. For these simulations, velocity contours, pressure distributions, coefficients of lift, coefficients of drag, and coefficients of pitching moment about the airfoil's quarter chord are computed and compared against experimental results, a higher-order panel method code with viscous effects (XFOIL), and the results from FUN3D's original solution process. The results of the IBM simulations show that the accuracy of the IBM compares satisfactorily with the experimental results, the XFOIL results, and the results from FUN3D's original solution process.
Matin, Rastin; Hernandez, Anier; Misztal, Marek; Mathiesen, Joachim
2015-04-01
Many hydrodynamic phenomena, ranging from micron-scale flows in porous media to high-Reynolds-number, non-Newtonian, and multiphase flows, have been simulated on computers using the lattice Boltzmann (LB) method. By solving the lattice Boltzmann equation on unstructured meshes in three dimensions, we have developed methods to efficiently model fluid flow in real rock samples. We use this model to study the spatio-temporal statistics of the velocity field inside three-dimensional real geometries and investigate its relation to the, in general, anomalous transport of passive tracers for a wide range of Peclet and Reynolds numbers. We extend this model with a free-energy-based method, which allows us to simulate binary systems with large density ratios in a thermodynamically consistent way and to track the interface explicitly. In this presentation we will present our recent results on both anomalous transport and multiphase segregation.
Hybrid Lattice Boltzmann Method for the Simulation of Blending Process in Static Mixers
Latt, Jonas; Kontaxakis, Dimitrios; Chatagny, Laurent; Muggli, Felix; Chopard, Bastien
2013-12-01
A lattice Boltzmann method is proposed to simulate the blending of two fluids in static, laminar mixers. The method uses a mesh-based algorithm to solve for the fluid flow, and a meshless technique to trace the interface between the blended fluids. This hybrid approach is highly accurate, because the position of the interface can be traced beyond the resolution of the grid. The numerical diffusion is negligible in this model, and it is possible to reproduce mixing patterns that contain more than one hundred striations with high fidelity. The implementation of this method in the massively parallel library Palabos is presented, and simulation results are compared with experimental data to emphasize the accuracy of the results.
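The mesh-based LBM half of such a solver reduces, in its simplest form, to a collide-and-stream loop. The 1D pure-diffusion toy below is our own sketch of those two building blocks (D1Q3 lattice, BGK collision, periodic streaming); it is unrelated to the Palabos internals or the meshless interface tracer.

```python
import math

def lbm_diffusion_1d(nx=64, tau=0.9, steps=200):
    """Minimal D1Q3 lattice Boltzmann solver for pure diffusion.

    BGK collision toward f_eq = w_q * rho, then streaming along the
    lattice velocities (0, +1, -1) with periodic boundaries. The
    diffusivity is D = (1/3) * (tau - 1/2) in lattice units."""
    w = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]   # lattice weights
    rho = [1.0 + 0.1 * math.sin(2 * math.pi * i / nx) for i in range(nx)]
    f = [[w[q] * rho[i] for i in range(nx)] for q in range(3)]
    for _ in range(steps):
        rho = [f[0][i] + f[1][i] + f[2][i] for i in range(nx)]
        # collide: relax every population toward local equilibrium
        for q in range(3):
            f[q] = [f[q][i] + (w[q] * rho[i] - f[q][i]) / tau for i in range(nx)]
        # stream: q=1 moves +1 cell, q=2 moves -1 cell (periodic)
        f[1] = [f[1][(i - 1) % nx] for i in range(nx)]
        f[2] = [f[2][(i + 1) % nx] for i in range(nx)]
    return [f[0][i] + f[1][i] + f[2][i] for i in range(nx)]
```

Because collisions conserve the local density exactly and streaming only permutes populations, total mass is conserved to round-off while the initial sine perturbation decays diffusively.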
Energy Technology Data Exchange (ETDEWEB)
Cannamela, C
2007-09-15
This work is devoted to the evaluation of mathematical expectations in the context of structural reliability. We seek a failure probability estimate (assumed to be low), taking into account the uncertainty of influential parameters of the system. Our goal is to reach a good compromise between the accuracy of the estimate and the associated computational cost. This approach is used to estimate the failure probability of fuel particles from an HTR-type nuclear reactor. The estimate is obtained by means of costly numerical simulations. We consider different probabilistic methods to tackle the problem. First, we consider a variance-reducing Monte Carlo method: importance sampling. For the parametric case, we propose adaptive algorithms to build a series of probability densities that eventually converge to the optimal importance density. We then present several estimates of the mathematical expectation based on this series of densities. Next, we consider a multi-level method using a Markov chain Monte Carlo algorithm. Finally, we turn our attention to the related problem of (non-extreme) quantile estimation for the physical output of a large-scale numerical code. We propose a controlled stratification method in which the random input parameters are sampled in specific regions obtained from a surrogate of the response, and the quantile is then estimated from this sample.
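The importance-sampling idea can be shown on a scalar toy problem, estimating a standard normal tail probability with a proposal density shifted toward the failure region. The problem and the shift value are illustrative only, not the HTR application.

```python
import math
import random

def failure_prob_is(threshold=3.5, shift=3.5, n=20000, seed=1):
    """Estimate p = P(X > threshold) for X ~ N(0, 1) by importance sampling.

    Samples are drawn from the shifted proposal N(shift, 1), which places
    most draws near the rare-event region; each hit is reweighted by the
    likelihood ratio phi(y) / phi(y - shift) to keep the estimate unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1.0)            # draw from the proposal
        if y > threshold:
            # likelihood ratio of target over proposal densities
            total += math.exp(-0.5 * y * y + 0.5 * (y - shift) ** 2)
    return total / n
```

A crude Monte Carlo estimate of the same probability (about 2.3 × 10^-4) would need millions of samples for comparable accuracy; the shifted proposal turns almost every draw into a useful one.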
Energy Technology Data Exchange (ETDEWEB)
Rieben, Robert N. [Univ. of California, Davis, CA (United States)
2004-01-01
The goal of this dissertation is two-fold. The first part concerns the development of a numerical method for solving Maxwell's equations on unstructured hexahedral grids that employs both high order spatial and high order temporal discretizations. The second part involves the use of this method as a computational tool to perform high fidelity simulations of various electromagnetic devices, such as optical transmission lines and photonic crystal structures, at a level of accuracy that has previously been computationally cost prohibitive. This work is based on the initial research of Daniel White, who developed a provably stable, charge and energy conserving method for solving Maxwell's equations in the time domain that is second order accurate in both space and time. The research presented here generalizes this procedure to higher order methods. High order methods are capable of yielding far more accurate numerical results for certain problems than corresponding h-refined first order methods, often at a significant reduction in total computational cost. The first half of this dissertation presents the method as well as the mathematics required for its derivation. The second half addresses the implementation of the method in a parallel computational environment, its validation using benchmark problems, and finally its use in large scale numerical simulations of electromagnetic transmission devices.
Directory of Open Access Journals (Sweden)
Sean Zeiger
2017-06-01
Accurate mean areal precipitation (MAP) estimates are essential input forcings for hydrologic models. However, selecting the most accurate method to estimate MAP can be daunting because there are numerous methods to choose from (e.g., proximate gauge, direct weighted average, surface-fitting, and remotely sensed methods). Multiple methods (n = 19) were used to estimate MAP with precipitation data from 11 distributed monitoring sites and 4 remotely sensed data sets. Each method was validated against the hydrologic model simulated stream flow using the Soil and Water Assessment Tool (SWAT). SWAT was validated using a split-site method and the observed stream flow data from five nested-scale gauging sites in a mixed-land-use watershed of the central USA. Cross-validation results showed the error associated with surface-fitting and remotely sensed methods ranging from −4.5 to −5.1% and −9.8 to −14.7%, respectively. Split-site validation results showed percent bias (PBIAS) values that ranged from −4.5 to −160%. Second-order polynomial functions especially overestimated precipitation and subsequent stream flow simulations (PBIAS = −160% in the headwaters). The results indicated that using an inverse-distance weighted, linear polynomial interpolation or multiquadric function method to estimate MAP may improve SWAT model simulations. Collectively, the results highlight the importance of spatially distributed observed hydroclimate data for precipitation and subsequent stream flow estimation. The MAP methods demonstrated in the current work can be used to reduce hydrologic model uncertainty caused by watershed physiographic differences.
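One of the better-performing method families named above, inverse-distance weighting, is simple to sketch: interpolate each grid point from the gauges, then average over the watershed area. The gauge coordinates and values below are hypothetical, not the study's data.

```python
def idw_map(gauges, grid_points, power=2.0):
    """Mean areal precipitation by inverse-distance weighting (sketch).

    gauges: list of (x, y, value) tuples for the rain gauges.
    grid_points: list of (x, y) points covering the watershed area.
    Returns the area-averaged interpolated precipitation.
    """
    def interp(px, py):
        num = den = 0.0
        for gx, gy, val in gauges:
            d2 = (px - gx) ** 2 + (py - gy) ** 2
            if d2 == 0.0:
                return val                # grid point sits on a gauge
            w = 1.0 / d2 ** (power / 2)   # weight = 1 / distance^power
            num += w * val
            den += w
        return num / den

    vals = [interp(px, py) for px, py in grid_points]
    return sum(vals) / len(vals)
```

Because the interpolant is a convex combination of gauge values, every MAP estimate is bounded by the minimum and maximum observed gauge values, one reason IDW avoids the overshoot seen with second-order polynomial surfaces.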
Intercomparison of 3D pore-scale flow and solute transport simulation methods
Energy Technology Data Exchange (ETDEWEB)
Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.; Pasquali, Andrea; Schönherr, Martin; Kim, Kyungjoo; Perego, Mauro; Parks, Michael L.; Trask, Nathaniel; Balhoff, Matthew T.; Richmond, Marshall C.; Geier, Martin; Krafczyk, Manfred; Luo, Li-Shi; Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2016-09-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include methods that (1) explicitly model the three-dimensional geometry of pore spaces and (2) conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of class 1, based on direct numerical simulation using computational fluid dynamics (CFD) codes, against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of class 1 based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of class 2 (a pore-network model or PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results with previously reported experimental observations. Experimental observations are limited to measured pore-scale velocities, so solute transport comparisons are made only among the various models. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations).
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Directory of Open Access Journals (Sweden)
Danilo ePezo
2014-11-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC, which is both the most accurate and fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.
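The exact MC (Gillespie) baseline that the DA implementations approximate is short to write down for a homogeneous population of two-state channels. This is a toy model with made-up rates, not any of the benchmarked implementations.

```python
import random

def gillespie_two_state(n_channels=100, alpha=0.5, beta=1.0,
                        t_end=50.0, seed=2):
    """Gillespie simulation of N independent two-state channels.

    Transitions: closed --alpha--> open, open --beta--> closed.
    Returns the time-averaged number of open channels; the analytic
    steady state is N * alpha / (alpha + beta).
    """
    rng = random.Random(seed)
    n_open = 0
    t = 0.0
    t_open_sum = 0.0                  # time-weighted open count
    while t < t_end:
        rate_open = alpha * (n_channels - n_open)   # closed -> open
        rate_close = beta * n_open                  # open -> closed
        total = rate_open + rate_close
        dt = rng.expovariate(total)                 # waiting time
        t_open_sum += n_open * min(dt, t_end - t)
        t += dt
        if t >= t_end:
            break
        # choose which reaction fires, proportional to its rate
        n_open += 1 if rng.random() * total < rate_open else -1
    return t_open_sum / t_end
```

Every channel opening and closing is resolved individually, which is why the cost grows with the number of channels and why DA schemes become attractive above roughly a thousand channels per compartment.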
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
Simulating the proton transfer in gramicidin A by a sequential dynamical Monte Carlo method.
Till, Mirco S; Essigke, Timm; Becker, Torsten; Ullmann, G Matthias
2008-10-23
The large interest in long-range proton transfer in biomolecules is triggered by its importance for many biochemical processes such as biological energy transduction and drug detoxification. Since long-range proton transfer occurs on a microsecond time scale, simulating this process on a molecular level is still a challenging task and not possible with standard simulation methods. In general, the dynamics of a reactive system can be described by a master equation. A natural way to describe long-range charge transfer in biomolecules is to decompose the process into elementary steps which are transitions between microstates. Each microstate has a defined protonation pattern. Although such a master equation can in principle be solved analytically, it is often too demanding to solve this equation because of the large number of microstates. In this paper, we describe a new method which solves the master equation by a sequential dynamical Monte Carlo algorithm. Starting from one microstate, the evolution of the system is simulated as a stochastic process. The energetic parameters required for these simulations are determined by continuum electrostatic calculations. We apply this method to simulate the proton transfer through gramicidin A, a transmembrane proton channel, as a function of the applied membrane potential and the pH value of the solution. As elementary steps in our reaction, we consider proton uptake and release, proton transfer along a hydrogen bond, and rotations of water molecules that constitute a proton wire through the channel. A simulation of 8 μs length took about 5 min on an Intel Pentium 4 CPU at 3.2 GHz. We obtained good agreement with experimental data for the proton flux through gramicidin A over a wide range of pH values and membrane potentials. We find that proton desolvation as well as water rotations are equally important for the proton transfer through gramicidin A at physiological membrane potentials. Our method allows us to simulate long...
Simulation of flow past two tandem cylinders using deterministic vortex method
Directory of Open Access Journals (Sweden)
Huang Guo
2012-01-01
The vortex method is a direct numerical simulation method for solving the Navier-Stokes equations. To reveal the influence of the Reynolds number and the distance between the cylinders, the incompressible flow past a pair of tandem cylinders is solved on the basis of the vortex method. The results show that for the flow past two tandem cylinders there is a critical spacing of the cylinders. Beyond the critical distance, the flow field undergoes a sudden change, and the drag coefficient, lift coefficient, and Strouhal number also change dramatically. The critical distance diminishes as the Reynolds number rises.
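The core update in any vortex method is the mutual advection of discrete vortices by the Biot-Savart law. The sketch below uses inviscid point vortices and forward-Euler stepping; the paper's deterministic scheme additionally handles vorticity diffusion and the cylinder boundaries, which are not shown.

```python
import math

def point_vortex_step(pos, gamma, dt):
    """One explicit Euler step of 2D point-vortex dynamics.

    pos: list of (x, y) vortex positions; gamma: list of circulations.
    Each vortex moves with the velocity induced by all the others:
        u = -Gamma_j * (y_i - y_j) / (2 pi r^2)
        v =  Gamma_j * (x_i - x_j) / (2 pi r^2)
    """
    vel = []
    for i, (xi, yi) in enumerate(pos):
        u = v = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue                     # no self-induction
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy
            u += -gamma[j] * dy / (2.0 * math.pi * r2)
            v += gamma[j] * dx / (2.0 * math.pi * r2)
        vel.append((u, v))
    return [(x + u * dt, y + v * dt)
            for (x, y), (u, v) in zip(pos, vel)]
```

Two equal-strength vortices co-rotate about their midpoint at constant separation, a standard sanity check for any vortex-method implementation.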
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, quick-calculation methodology for obtaining a solar panel model from the manufacturer's datasheet, in order to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, under realistic ambient conditions.
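One of the MPPT algorithms typically compared in such studies is perturb-and-observe. A generic sketch over an arbitrary P-V curve follows; the panel model itself, built from datasheet values, is the paper's contribution and is not reproduced here, so `power_at` is a hypothetical stand-in.

```python
def perturb_and_observe(power_at, v0=30.0, dv=0.5, steps=60):
    """Perturb-and-observe MPPT sketch.

    power_at: callable P(V) returning panel power at operating voltage V.
    Nudge the voltage by dv each step; keep the direction if power rose,
    reverse it if power fell. The operating point oscillates around the
    maximum power point with amplitude ~dv.
    """
    v, direction = v0, 1.0
    p = power_at(v)
    for _ in range(steps):
        v_new = v + direction * dv
        p_new = power_at(v_new)
        if p_new < p:
            direction = -direction   # power fell: reverse the perturbation
        v, p = v_new, p_new
    return v
```

The steady-state oscillation around the maximum is the well-known trade-off of P&O: a smaller dv reduces the ripple but slows tracking when irradiation changes.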
An Ellipsoidal Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 1
Shivarama, Ravishankar; Fahrenthold, Eric P.
2004-01-01
A number of coupled particle-element and hybrid particle-element methods have been developed for the simulation of hypervelocity impact problems, to avoid certain disadvantages associated with the use of pure continuum-based or pure particle-based methods. To date, these methods have employed spherical particles. In recent work a hybrid formulation has been extended to the ellipsoidal particle case. A model formulation approach based on Lagrange's equations, with particle entropies serving as generalized coordinates, avoids the angular momentum conservation problems which have been reported with ellipsoidal smooth particle hydrodynamics models.
Strychalski, Wanda; Adalsteinsson, David; Elston, Timothy C
2010-01-01
Cells use signaling networks consisting of multiple interacting proteins to respond to changes in their environment. In many situations, such as chemotaxis, spatial and temporal information must be transmitted through the network. Recent computational studies have emphasized the importance of cellular geometry in signal transduction, but have been limited in their ability to accurately represent complex cell morphologies. We present a finite volume method that addresses this problem. Our method uses Cartesian cut cells and is second order in space and time. We use our method to simulate several models of signaling systems in realistic cell morphologies obtained from live cell images and examine the effects of geometry on signal transduction.
Wang, Jianmin; Bai, Rumeng; Zhou, Ye; Zhao, Guang
2017-06-01
In this paper, an arbitrary-distance optical transmission simulation method for free-space optical communication systems is presented. Based on this method, direct point-to-point performance tests between two optical terminals can be realized in the laboratory, and the test results are equivalent to those on orbit. A theoretical analysis of this method is presented in this paper. Verification experiments showed that there is good linearity between the incoming power density and the output photocurrent of the carbon nanotube (CNT); the relative power difference between the CNT and the charge-coupled device (CCD) camera is 4.75%, which can be ignored compared with the link redundancy.
Simulation of vesicle using level set method solved by high order finite element
Directory of Open Access Journals (Sweden)
Doyeux Vincent
2013-01-01
We present a numerical method to simulate vesicles in fluid flows. This method consists of writing all the properties of the membrane as interfacial forces between two fluids. The main advantage of this approach is that the vesicle and fluid models may be decoupled easily. A level set method has been implemented to track the interface. Finite element discretization has been used with arbitrarily high order polynomial approximation. Several polynomial orders have been tested in order to obtain better accuracy. A validation on equilibrium shapes and the “tank treading” motion of a vesicle has been presented.
Directory of Open Access Journals (Sweden)
A. J. Komkoua Mbienda
2013-01-01
The Lee-Kesler (LK) and Ambrose-Walton (AW) methods for estimating vapor pressures are tested against experimental data for a set of volatile organic compounds (VOC). The vapor pressure required to determine gas-particle partitioning of such organic compounds is used as a parameter for simulating the dynamics of atmospheric aerosols. Here, we use the structure-property relationships of VOC to estimate vapor pressures. The accuracy of each of the aforementioned methods is also assessed for each class of compounds (hydrocarbons, monofunctionalized, difunctionalized, and tri- and more functionalized volatile organic species). It is found that the best method for each VOC depends on its functionality.
Chen, Jiefu; Zeng, Shubin; Dong, Qiuzhao; Huang, Yueqin
2017-02-01
An axisymmetric semianalytical finite element method is proposed and employed for rapid simulations of electromagnetic telemetry in layered underground formations. In this method, the layered medium is decomposed into several subdomains and the interfaces between subdomains are discretized by conventional finite elements. Then a Riccati-equation-based high-precision integration scheme is applied to exploit the homogeneity along the vertical direction in each layer. This semianalytical finite element scheme is very efficient in modeling electromagnetic telemetry in layered formations. Numerical examples, as well as a field case with water-based mud as the drilling fluid, are given to demonstrate the validity and effectiveness of this method.
Spiking neural network simulation: numerical integration with the Parker-Sochacki method.
Stewart, Robert D; Bair, Wyeth
2009-08-01
Mathematical neuronal models are normally expressed using differential equations. The Parker-Sochacki method is a new technique for the numerical integration of differential equations applicable to many neuronal models. Using this method, the solution order can be adapted according to the local conditions at each time step, enabling adaptive error control without changing the integration timestep. The method has been limited to polynomial equations, but we present division and power operations that expand its scope. We apply the Parker-Sochacki method to the Izhikevich 'simple' model and a Hodgkin-Huxley type neuron, comparing the results with those obtained using the Runge-Kutta and Bulirsch-Stoer methods. Benchmark simulations demonstrate an improved speed/accuracy trade-off for the method relative to these established techniques.
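The adaptive-order idea can be sketched on a scalar polynomial ODE. For y' = -y², the Maclaurin coefficients of the solution obey a Cauchy-product recurrence, a_{n+1} = -(Σ_{k=0..n} a_k a_{n-k})/(n+1), and the series order can be grown at each step until the last term falls below a tolerance. This is a toy example of the Parker-Sochacki approach, not the Izhikevich or Hodgkin-Huxley models treated in the paper:

```python
def ps_step(y0, dt, tol=1e-12, max_order=30):
    """One Parker-Sochacki step for y' = -y**2: build Maclaurin
    coefficients adaptively, then evaluate the series at t = dt."""
    a = [y0]
    for n in range(max_order):
        # coefficient of t**n in y**2 (Cauchy product)
        sq = sum(a[k] * a[n - k] for k in range(n + 1))
        a.append(-sq / (n + 1))
        if abs(a[-1]) * dt ** (n + 1) < tol:   # adaptive order control
            break
    # evaluate the truncated series at t = dt (Horner form)
    y = 0.0
    for c in reversed(a):
        y = y * dt + c
    return y

def integrate(y0, t_end, dt):
    y, t = y0, 0.0
    while t < t_end - 1e-15:
        y = ps_step(y, dt)
        t += dt
    return y
```

The exact solution of y' = -y² with y(0) = 1 is y(t) = 1/(1+t), so the integrator can be checked directly against y(1) = 0.5.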
Application of volume of fluid method for simulation of a droplet impacting a fiber
Directory of Open Access Journals (Sweden)
M. Khalili
2016-06-01
In the present work, the impact of a Newtonian drop on horizontal thin fibers with circular cross section is simulated in 2D. The numerical simulations of the phenomenon are carried out using the volume of fluid (VOF) method for tracking the free surface motion. The impact of a Newtonian droplet on a circular thin fiber (350 μm radius) is investigated numerically. The main focus of this simulation is to determine the threshold radius and velocity of a drop which is entirely captured by the fiber. The model agrees well with the experiments and demonstrates that the threshold radius generally decreases with increasing impact velocity. In other words, for velocities larger than the threshold velocity of capture, only a small portion of the fluid sticks to the solid and the rest of the drop is ejected, whereas for impact velocities smaller than the critical velocity the drop is totally captured. This threshold velocity has been determined for the case of a centered impact.
A Coupling Simulation Between Soil Scour and Seepage Flow by Using a Stabilized ISPH Method
Directory of Open Access Journals (Sweden)
Nogami Tomotaka
2016-01-01
In 2011, breakwaters were reported to have collapsed because of destabilization of the foundation ground during the Tohoku-Kanto earthquake tsunami. A fluid-structure-soil coupling simulation is desired for a systematic understanding of the breakwater collapse mechanism, and it may help to develop the next generation of disaster prevention methods. In this study, a particle simulation tool based on SPH has been modified and improved to analyze seepage flow and soil scouring. In the seepage flow analysis, as a first step, the simulation treats surface flow and seepage flow interactions through the governing equations. In the scouring analysis, soil scour is judged by an empirical criterion based on a quicksand quantity formula.
A Multi Level Multi Domain Method for Particle In Cell Plasma Simulations
Innocenti, M E; Markidis, S; Beck, A; Vapirev, A
2012-01-01
A novel adaptive technique for electromagnetic Particle In Cell (PIC) plasma simulations is presented here. Two main issues are identified in designing adaptive techniques for PIC simulation: first, the choice of the size of the particle shape function in progressively refined grids, with the need to avoid the exertion of self-forces on particles, and, second, the necessity to comply with the strict stability constraints of the explicit PIC algorithm. The adaptive implementation presented responds to these demands with the introduction of a Multi Level Multi Domain (MLMD) system (where a cloud of self-similar domains is fully simulated with both fields and particles) and the use of an Implicit Moment PIC method as baseline algorithm for the adaptive evolution. Information is exchanged between the levels with the projection of the field information from the refined to the coarser levels and the interpolation of the boundary conditions for the refined levels from the coarser level fields. Particles are bound to...
Menshutkin, V V; Kazanskiĭ, A B; Levchenko, V F
2010-01-01
The history of the rise and development of evolutionary methods in the Saint Petersburg school of biological modelling is traced and analyzed. Some pioneering works in the simulation of ecological and evolutionary processes, performed in the St. Petersburg school, became exemplars for many followers in Russia and abroad. The individual-based approach became the crucial point in the history of the school, as an adequate instrument for constructing models of biological evolution. This approach is natural for simulating the evolution of life-history parameters and adaptive processes in populations and communities. In some cases the simulated evolutionary process was used for solving an inverse problem, i.e., for estimating uncertain life-history parameters of a population. Evolutionary computation is one more aspect of the application of this approach in a great many fields. The problems and prospects of ecological and evolutionary modelling in general are discussed.
Simulation of the Flow past a Circular Cylinder Using an Unsteady Panel Method
DEFF Research Database (Denmark)
Ramos García, Néstor; Sarlak Chivaee, Hamid; Andersen, Søren Juhl
2017-01-01
In the present work, an in-house UnSteady Double Wake Model (USDWM) is developed for simulating general flow problems behind bodies. The model is presented and used to simulate flows past a circular cylinder at subcritical, supercritical, and transcritical flows. The flow model is a two-dimensional panel method which uses the unsteady double wake technique to model flow separation and its dynamics. In the present work the separation location is obtained from experimental data and fixed in time. The highly unsteady flow field behind the cylinder is analyzed in detail. The results are compared with experiments and Unsteady Reynolds-Averaged Navier Stokes (URANS) simulations and show good agreement in terms of the vortex shedding characteristics, drag, and pressure coefficients for the different flow regimes.
Poikela, Paula; Ruokamo, Heli; Teräs, Marianne
2015-02-01
Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how two different teaching methods supported students' meaningful learning in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in the simulated nursing practice of two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used. The data were collected using video recordings and analyzed by videography. The students who used a computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students are able to receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.
Plante, Ianik; Devroye, Luc
2017-10-01
Ionizing radiation interacts with the water molecules of the tissues mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H•, •OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium. Therefore radiation chemistry is of great importance in radiation biology. As the chemical species are not distributed homogeneously, the use of conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. Actually, many simulations of radiation chemistry are done using the Independent Reaction Time (IRT) method, which is a very fast technique to calculate radiochemical yields but which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling of biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
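The step-by-step idea of propagating each species with exact draws from the Green's function of the diffusion equation can be sketched for free diffusion (no reactions): over a time step dt, the free-diffusion Green's function is a Gaussian with per-axis variance 2·D·dt. All parameter values below are arbitrary, and the reaction handling of a real SBS code is omitted:

```python
import random

def sbs_diffusion(n_particles, d_coef, dt, n_steps, seed=0):
    """Step-by-step (SBS) free diffusion: each step is an exact draw from
    the Gaussian Green's function of the diffusion equation, with per-axis
    standard deviation sqrt(2*D*dt).  Returns the final positions."""
    rng = random.Random(seed)
    sigma = (2.0 * d_coef * dt) ** 0.5
    pos = [[0.0, 0.0, 0.0] for _ in range(n_particles)]
    for _ in range(n_steps):
        for p in pos:
            for axis in range(3):
                p[axis] += rng.gauss(0.0, sigma)   # exact GFDE sample
    return pos
```

A quick consistency check is the mean square displacement, which for 3D free diffusion must satisfy MSD = 6·D·t regardless of the step size, precisely because each step samples the exact Green's function.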
Statistical methods for elimination of guarantee-time bias in cohort studies: a simulation study
Directory of Open Access Journals (Sweden)
In Sung Cho
2017-08-01
Background: Aspirin has been considered to be beneficial in preventing cardiovascular diseases and cancer. Several pharmaco-epidemiology cohort studies have shown protective effects of aspirin on diseases using various statistical methods, with the Cox regression model being the most commonly used approach. However, there are some inherent limitations to the conventional Cox regression approach, such as guarantee-time bias, resulting in an overestimation of the drug effect. To overcome such limitations, alternative approaches, such as the time-dependent Cox model and landmark methods, have been proposed. This study aimed to compare the performance of three methods: Cox regression, the time-dependent Cox model and the landmark method with different landmark times, in order to address the problem of guarantee-time bias. Methods: Through statistical modeling and simulation studies, the performance of the above three methods was assessed in terms of type I error, bias, power, and mean squared error (MSE). In addition, the three statistical approaches were applied to a real-data example from the Korean National Health Insurance Database. The effect of cumulative rosiglitazone dose on the risk of hepatocellular carcinoma was used as an example for illustration. Results: In the simulated data, time-dependent Cox regression outperformed the landmark method in terms of bias and mean squared error, but the type I error rates were similar. The results from the real-data example showed the same patterns as the simulation findings. Conclusions: While both the time-dependent Cox regression model and landmark analysis are useful in resolving the problem of guarantee-time bias, time-dependent Cox regression is the most appropriate method for analyzing cumulative dose effects in pharmaco-epidemiological studies.
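Guarantee-time (immortal-time) bias is easy to reproduce in a toy simulation: under a null scenario in which the drug has no effect, classifying all person-time of ever-treated patients as "treated" makes the drug look protective, while a time-dependent classification recovers the null. The hazard, follow-up and drug-start distributions below are invented for illustration, and simple event-rate ratios stand in for the paper's Cox models:

```python
import random

def simulate(n=20000, seed=1):
    """Null scenario: drug has no effect.  Death hazard 0.1/yr; the drug is
    started at a time uniform on [0, 5] yr if the patient is still alive.
    Returns (naive rate ratio, time-dependent rate ratio)."""
    rng = random.Random(seed)
    naive = {"treated": [0, 0.0], "untreated": [0, 0.0]}   # [events, person-time]
    tdep = {"treated": [0, 0.0], "untreated": [0, 0.0]}
    for _ in range(n):
        death = rng.expovariate(0.1)
        start = rng.uniform(0.0, 5.0)
        follow = min(death, 10.0)              # censor at 10 years
        event = 1 if death <= 10.0 else 0
        # naive: all person-time of an ever-treated patient counts as treated
        group = "treated" if start < follow else "untreated"
        naive[group][0] += event
        naive[group][1] += follow
        # time-dependent: person-time before `start` counts as untreated
        if start < follow:
            tdep["untreated"][1] += start
            tdep["treated"][0] += event
            tdep["treated"][1] += follow - start
        else:
            tdep["untreated"][0] += event
            tdep["untreated"][1] += follow
    rate = lambda g, d: d[g][0] / d[g][1]
    return (rate("treated", naive) / rate("untreated", naive),
            rate("treated", tdep) / rate("untreated", tdep))
```

The naive ratio falls well below 1 (a spurious protective effect), while the time-dependent ratio stays near 1, mirroring why the time-dependent Cox model removes the bias.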
WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method
Crevoisier, David; Voltz, Marc
2013-04-01
To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and data-assimilation techniques are now used in many application fields to improve simulation accuracy. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run over large temporal and spatial domains, which requires a large number of model runs. However, despite the regular increase in computing capacity, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J 95:1352-1361) proposed a method, solving the 1D Richards and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while assuring satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. the standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable when using standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages for WATSFAR, i) robustness: even on fine-textured soil or high water and solute
Seigneurin, A; Labarère, J; Duffy, S W; Colonna, M
2015-12-01
Estimates of overdiagnosis associated with breast cancer screening may be based on annual cancer incidence rates. We simulated populations invited to screening programmes to assess two lead-time adjustment methods. Overdiagnosis estimates were computed using the compensatory drop method, which considers the decrease in incidence of cancers among older age groups no longer offered screening, and the method based on the decrease in incidence of late-stage cancers. The true value of overdiagnosis was 0% in all the simulated data sets. The compensatory drop method yielded an overdiagnosis estimate of -0.1% (95% credibility interval -0.5% to 0.5%) when participation rates among the population and the risk of cancer were constant. However, if participation rates increased with calendar year and the risk of cancer with birth cohort, the estimated overdiagnosis was 11.0% (10.5-11.6%). Using the method based on the incidence of early- and late-stage cancers, overdiagnosis estimates were 8.9% (8.5-9.3%) and 17.6% (17.4-17.9%) when participation rates and risks of cancer were constant or increased with time, respectively. Adjustment for lead time based on the compensatory drop method is accurate only when participation rates and risks of cancer remain constant, whereas the adjustment method based on the incidence of early- and late-stage cancers overestimates overdiagnosis regardless of the stability of participation rates and breast cancer risk. Copyright © 2015 Elsevier Ltd. All rights reserved.
Simulation of two-phase flow in horizontal fracture networks with numerical manifold method
Ma, G. W.; Wang, H. D.; Fan, L. F.; Wang, B.
2017-10-01
This paper presents the simulation of two-phase flow in discrete fracture networks with the numerical manifold method (NMM). Each phase of the fluids is considered to be confined within assumed discrete interfaces in the present method. The homogeneous model is modified to approach the mixed fluids. A new mathematical cover formation for fracture intersections is proposed to satisfy mass conservation. NMM simulations of two-phase flow in a single fracture, an intersection, and a fracture network are illustrated graphically and validated against the analytical method or the finite element method. Results show that the motion of the discrete interface depends significantly on the ratio of the mobilities of the two fluids rather than on the value of the mobility itself. The variation of the fluid velocity in each fracture segment and the driven fluid content are also influenced by the mobility ratio. The advantages of NMM in the simulation of two-phase flow in a fracture network are demonstrated in the present study, and the method can be further developed for practical engineering applications.
A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data
Directory of Open Access Journals (Sweden)
Jingjing He
2017-09-01
This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and the phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0 mode wave packet. The parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions.
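The Bayesian-updating step can be sketched with a grid posterior over candidate crack lengths. The linear feature model and noise level below are invented stand-ins for the paper's response-surface model, assuming a Gaussian measurement error:

```python
import math

def bayes_update_crack(prior, lengths, model, measured, noise_sd):
    """Grid Bayesian update: posterior over candidate crack lengths given
    one damage-feature measurement and a baseline feature model.
    prior[i] is the prior probability of crack length lengths[i]."""
    post = [p * math.exp(-0.5 * ((measured - model(a)) / noise_sd) ** 2)
            for p, a in zip(prior, lengths)]
    z = sum(post)                      # normalizing constant
    return [p / z for p in post]

# Hypothetical linear baseline model: feature = 0.1 * crack_length
lengths = [i * 0.5 for i in range(21)]          # candidate lengths 0..10 mm
prior = [1.0 / 21] * 21                          # uniform prior
post = bayes_update_crack(prior, lengths, lambda a: 0.1 * a, 0.4, 0.05)
```

With a measured feature of 0.4 the posterior peaks at the crack length whose predicted feature matches, here 4.0; additional measurements would be folded in by repeating the update with the posterior as the new prior.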
Simulation of Oxy-Fuel Pulse Detonation using a Space-Time CESE Method
Karra, Shashank; Hauth, Jeremiah; Apte, Sourabh
2017-11-01
Pulse detonation systems using oxy-fuel combustion can be used for direct power extraction, especially when combined with magnetohydrodynamics (MHD). In the present work, we investigate the use of a space-time conservation element-solution element (CE/SE) method for the simulation of oxy-methane pulse detonation waves. The CE/SE method results in a consistent multi-dimensional formulation for unstructured tetrahedral meshes by providing flux conservation in space and time, and eliminating the need for complex Riemann solvers to capture shocks. As a first step, a CE/SE method solving the Euler equations is implemented and verified on the standard Sod shock-tube problem, showing very good predictive capability. The Euler solver is extended to account for single-step as well as reduced reaction mechanisms for oxy-fuel combustion. A revised Jones-Lindstedt (JL-R) reaction mechanism accounting for radicals such as O, OH, and H is used as a reduced mechanism to simulate detonation waves from methane-oxygen combustion. Detailed verification and validation are conducted to evaluate the effectiveness of the CE/SE method. The approach is being further developed for the simulation of compressible reacting flows on unstructured grids. The authors gratefully acknowledge NETL, DOE for funding this project.
A New Method for Urban Storm Flood Inundation Simulation with Fine CD-TIN Surface
Directory of Open Access Journals (Sweden)
Zhifeng Li
2014-05-01
Urban storm inundation, which frequently has dramatic impacts on city safety and social life, is an urgent and difficult issue. Due to the complexity of urban surfaces and the variety of spatial modeling elements, the lack of detailed hydrological data and accurate urban surface models compromises the study and implementation of urban storm inundation simulations. This paper introduces a Constrained Delaunay Triangulated Irregular Network (CD-TIN) to model fine urban surfaces based on detailed ground sampling data, and subsequently employs a depression division method that refers to Fine Constrained Features (FCFs) to construct computational urban water depressions. Storm-runoff yield is treated through mass conservation to calculate the volumes of rainfall, runoff and drainage. Water confluence between neighboring depressions occurs when the water level exceeds the outlet of a depression. A numerical solution based on a dichotomy (bisection) is introduced to obtain the water level. The continuous inundation process can thus be divided into different time intervals to obtain a series of inundation scenarios. The main campus of Beijing Normal University (BNU) was used as a case study, simulating the “7.21” storm inundation event to validate the usability and suitability of the proposed methods. Comparing the simulation results with in-situ observations shows that the proposed method is accurate and effective while requiring significantly less drainage data. The proposed methods will also be useful for urban drainage design and city inundation emergency preparations.
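The dichotomy (bisection) solution for the water level in a depression can be sketched as follows, with the depression represented by a few hypothetical (ground elevation, area) cells. The stage-volume relation is monotone in the level, so bisection is guaranteed to converge:

```python
def stored_volume(level, cells):
    """Volume of water held in a depression when the free surface is at
    `level`; `cells` is a list of (ground_elevation, area) pairs."""
    return sum(max(0.0, level - z) * a for z, a in cells)

def water_level(volume, cells, tol=1e-9):
    """Find the free-surface elevation holding `volume` by bisection
    (the 'dichotomy'): stored_volume is monotone increasing in level."""
    lo = min(z for z, _ in cells)                       # empty depression
    hi = lo + volume / min(a for _, a in cells) + 1.0   # safe upper bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if stored_volume(mid, cells) < volume:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, two unit-area cells at elevations 0 and 1 holding a volume of 2 give a level of 1.5, since the lower cell stores 1.5 and the upper cell 0.5.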
Method and system for simulating heat and mass transfer in cooling towers
Bharathan, Desikan; Hassani, A. Vahab
1997-01-01
The present invention is a system and method for simulating the performance of a cooling tower. More precisely, the simulator of the present invention predicts values related to the heat and mass transfer from a liquid (e.g., water) to a gas (e.g., air) when provided with input data related to a cooling tower design. In particular, the simulator accepts input data regarding: (a) cooling tower site environmental characteristics; (b) cooling tower operational characteristics; and (c) geometric characteristics of the packing used to increase the surface area within the cooling tower upon which the heat and mass transfer interactions occur. In providing such performance predictions, the simulator performs computations related to the physics of heat and mass transfer within the packing. Thus, instead of relying solely on trial and error wherein various packing geometries are tested during construction of the cooling tower, the packing geometries for a proposed cooling tower can be simulated for use in selecting a desired packing geometry for the cooling tower.
Bernal, Javier; Torres-Jimenez, Jose
2015-01-01
SAGRAD (Simulated Annealing GRADient), a Fortran 77 program for computing neural networks for classification using batch learning, is discussed. Neural network training in SAGRAD is based on a combination of simulated annealing and Møller's scaled conjugate gradient algorithm, the latter a variation of the traditional conjugate gradient method better suited to the nonquadratic nature of neural networks. Different aspects of the implementation of the training process in SAGRAD are discussed, such as the efficient computation of gradients and of the products of vectors with the Hessian matrices required by Møller's algorithm; the (re)initialization of weights with simulated annealing, required to (re)start Møller's algorithm the first time and each time thereafter that it shows insufficient progress in reaching a (possibly local) minimum; and the use of simulated annealing when Møller's algorithm, after possibly making considerable progress, becomes stuck at a local minimum or flat area of weight space. Outlines of the scaled conjugate gradient algorithm, the simulated annealing procedure and the training process used in SAGRAD are presented, together with results from running SAGRAD on two examples of training data.
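The simulated-annealing component can be sketched as a generic minimizer with Metropolis acceptance and geometric cooling. The quadratic objective below is a toy stand-in for a network's training error, not SAGRAD's actual error function or schedule:

```python
import math
import random

def anneal(f, x0, step=0.5, t0=1.0, cooling=0.999, iters=20000, seed=0):
    """Minimal simulated annealing: random perturbations are always
    accepted when downhill, and accepted with Metropolis probability
    exp(-delta/T) when uphill, while T is cooled geometrically."""
    rng = random.Random(seed)
    x, fx, temp = list(x0), f(x0), t0
    best, fbest = list(x), fx
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx   # track best point ever seen
        temp *= cooling
    return best, fbest

def sphere(x):          # toy stand-in for a network's training error
    return sum(xi * xi for xi in x)
```

In a SAGRAD-like scheme, the `best` point returned here would seed the gradient-based phase (Møller's algorithm in the paper), with annealing re-invoked whenever that phase stalls.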
Lamdjaya, T.; Jobiliong, E.
2017-01-01
PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution system for the products must be considered, because it needs to be more efficient in order to reduce shipment cost. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. Firstly, the distribution route is observed in order to calculate the average speed, time capacity and shipping costs. Then the model is built using AIMMS software. A few things required to simulate the model are customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. We conclude that the company can reduce the shipping cost by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate can be made more optimal. The current value for the first vehicle is 104.6%, and after the simulation it becomes 88.6%. Meanwhile, the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route under time restrictions, vehicle capacity, and the available number of vehicles.
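A capacitated vehicle routing model of the kind solved in AIMMS can be approximated by a simple greedy construction heuristic: serve the nearest still-feasible customer, and open a new route whenever the vehicle capacity would be exceeded. This is a sketch of the problem structure, not the paper's optimization model, and the coordinates and demands in the example are invented:

```python
import math

def cvrp_routes(depot, customers, demand, capacity):
    """Greedy nearest-neighbour construction for the capacitated vehicle
    routing problem.  customers: list of (x, y); demand: dict keyed by
    customer; returns a list of routes (each a list of customers)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unserved = set(customers)
    routes = []
    while unserved:
        load, here, route = 0.0, depot, []
        while True:
            feasible = [c for c in unserved if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(here, c))
            route.append(nxt)
            load += demand[nxt]
            unserved.discard(nxt)
            here = nxt
        if not route:
            raise ValueError("customer demand exceeds vehicle capacity")
        routes.append(route)
    return routes

# Invented example: three customers, vehicle capacity 8
customers = [(1.0, 0.0), (2.0, 0.0), (0.0, 3.0)]
demand = {customers[0]: 4, customers[1]: 4, customers[2]: 4}
routes = cvrp_routes((0.0, 0.0), customers, demand, 8)
```

A production model would add the time restrictions and vehicle count mentioned in the abstract and then improve the greedy routes with local search or an exact solver.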
Directory of Open Access Journals (Sweden)
Kłos Sławomir
2015-12-01
This paper proposes the application of computer simulation methods to support decision making regarding intermediate buffer allocation in a series-parallel production line. The simulation model of the production system is based on a real example of a manufacturing company working in the automotive industry. Simulation experiments were conducted for different allocations of buffer capacities and different numbers of employees. The production system consists of three technological operations with intermediate buffers between each operation. The technological operations are carried out using machines, and every machine can be operated by one worker. Multi-machine work is possible in the production system (one operator can operate several machines). On the basis of the simulation experiments, the relationship between system throughput, buffer allocation and the number of employees is analyzed. Increasing the buffer capacity results in an increase in the average product life span. Therefore, a new index is proposed in the article that combines the throughput of the manufacturing system and the product life span. Simulation experiments were performed for different configurations of technological operations.
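The qualitative effect of buffer capacity on throughput can be reproduced with a toy discrete-time simulation of a three-machine serial line. The Bernoulli completion probability and step counts below are invented, and this sketch models only blocking, not the multi-machine operator assignment studied in the paper:

```python
import random

def line_throughput(buf_cap, p=0.5, steps=200_000, seed=0):
    """Discrete-time simulation of a three-machine serial line with two
    intermediate buffers of capacity `buf_cap`.  Each busy machine
    finishes its part in a given step with probability p; a finished part
    moves on only if the downstream buffer has space (blocking)."""
    rng = random.Random(seed)
    buf = [0, 0]                  # parts waiting before M2 and M3
    busy = [True, False, False]   # M1 starts loaded; raw material unlimited
    done = [False, False, False]  # done and not moved == blocked
    out = 0
    for _ in range(steps):
        for i in range(3):
            if busy[i] and not done[i] and rng.random() < p:
                done[i] = True
        # move finished parts, downstream first, to free space upstream
        if done[2]:
            out += 1
            busy[2] = done[2] = False
        for i in (1, 0):
            if done[i] and buf[i] < buf_cap:
                buf[i] += 1
                busy[i] = done[i] = False
        # idle machines pull from their upstream buffer
        for i in (2, 1):
            if not busy[i] and buf[i - 1] > 0:
                buf[i - 1] -= 1
                busy[i] = True
        if not busy[0]:
            busy[0] = True        # M1 always has raw material
    return out / steps
```

Running it with growing buffer capacities shows the throughput climbing toward, but never reaching, the isolated-machine rate p, which is the relationship the simulation experiments in the paper analyze.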
Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method.
Wu, Fengyuan; Fan, Yunyun; Liang, Li; Wang, Chao
2016-01-01
This paper presents a clump model based on the Discrete Element Method. The clump model is closer to real particles than a spherical particle is. Numerical simulations of several tests of dry granular flow impacting a rigid wall while flowing down an inclined chute have been carried out. Five clump models with different sphericity have been used in the simulations. By comparing the simulation results with the experimental results for the normal force on the rigid wall, a clump model with better sphericity was selected for the subsequent numerical simulation analysis and discussion. The calculated normal forces showed good agreement with the experimental results, which verifies the effectiveness of the clump model. Then, the total normal force and bending moment on the rigid wall and the motion of the granular flow were further analyzed. Finally, a comparative analysis of numerical simulations using the clump model with different grain compositions was performed. By observing the normal force on the rigid wall and the distribution of particle sizes at the front of the rigid wall in the final state, the effect of grain composition on the force on the rigid wall has been revealed: with an increase in particle size, the peak force at the retaining wall also increases. The results can provide a basis for the study of related disasters and the design of protective structures.
Radiation-transport method to simulate noncontinuum gas flows for MEMS devices.
Energy Technology Data Exchange (ETDEWEB)
Gallis, Michail A.; Torczynski, John Robert
2004-01-01
A Micro Electro Mechanical System (MEMS) typically consists of micron-scale parts that move through a gas at atmospheric or reduced pressure. In this situation, the gas-molecule mean free path is comparable to the geometric features of the microsystem, so the gas flow is noncontinuum. When mean-free-path effects cannot be neglected, the Boltzmann equation must be used to describe the gas flow. Solution of the Boltzmann equation is difficult even for the simplest case because of its sevenfold dimensionality (one temporal dimension, three spatial dimensions, and three velocity dimensions) and because of the integral nature of the collision term. The Direct Simulation Monte Carlo (DSMC) method is the method of choice to simulate high-speed noncontinuum flows. However, since DSMC uses computational molecules to represent the gas, the inherent statistical noise must be minimized by sampling large numbers of molecules. Since typical microsystem velocities are low (<1 m/s) compared to molecular velocities (≈400 m/s), the number of molecular samples required to achieve 1% precision can exceed 10^10 per cell. The Discrete Velocity Gas (DVG) method, an approach motivated by radiation transport, provides another way to simulate noncontinuum gas flows. Unlike DSMC, the DVG method restricts molecular velocities to certain discrete values. The transport of the number density of a velocity state is governed by a discrete Boltzmann equation that has one temporal dimension, three spatial dimensions, and a polynomial collision term. Specification and implementation of DVG models are discussed, and DVG models are applied to Couette flow and to Fourier flow. While the DVG results for these benchmark problems are qualitatively correct, the errors in the shear stress and the heat flux can be of order unity even for DVG models with 88 velocity states. It is concluded that the DVG method, as described herein, is not sufficiently accurate to simulate the low-speed gas flows.
Standardization is superior to traditional methods of teaching open vascular simulation.
Bath, Jonathan; Lawrence, Peter; Chandra, Ankur; O'Connell, Jessica; Uijtdehaage, Sebastian; Jimenez, Juan Carlos; Davis, Gavin; Hiatt, Jonathan
2011-01-01
Standardizing surgical skills teaching has been proposed as a method to rapidly attain technical competence. This study compared acquisition of vascular skills by standardized vs traditional teaching methods. The study randomized 18 first-year surgical residents to a standardized or traditional group. Participants were taught technical aspects of vascular anastomosis using femoral anastomosis simulation (Limbs & Things, Savannah, Ga), supplemented with factual information. One expert instructor taught a standardized anastomosis technique using the same method each time to one group over four sessions, while, similar to current vascular training, four different expert instructors each taught one session to the other (traditional) group. Knowledge and technical skill were assessed at study completion by an independent vascular expert using Objective Structured Assessment of Technical Skill (OSATS) performance metrics. Participants also provided a written evaluation of the study experience. The standardized group had significantly higher mean overall technical (95.7% vs 75.8%; P = .038) and global skill scores (83.4% vs 67%; P = .006). Tissue handling, efficiency of motion, overall technical skill, and flow of operation were rated significantly higher in the standardized group (mean range, 88%-96% vs 67.6%-77.6%). Participants' written evaluations likewise reflected the effect of teaching methods on performance outcome. Findings from this report suggest that for simulation training, standardized methods may be more effective than traditional methods of teaching. Transferability of simulator-acquired skills to the clinical setting will need to be demonstrated before open simulation can be unequivocally recommended as a major component of resident technical skill training. Copyright © 2011 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
A Local Order Parameter-Based Method for Simulation of Free Energy Barriers in Crystal Nucleation.
Eslami, Hossein; Khanjari, Neda; Müller-Plathe, Florian
2017-03-14
While global order parameters have been widely used as reaction coordinates in nucleation and crystallization studies, their use in nucleation studies is claimed to have a serious drawback. In this work, a local order parameter is introduced as a local reaction coordinate to drive the simulation from the liquid phase to the solid phase and vice versa. This local order parameter holds information regarding the order in the first- and second-shell neighbors of a particle and has different well-defined values for local crystallites and disordered neighborhoods but is insensitive to the type of the crystal structure. The order parameter is employed in metadynamics simulations to calculate the solid-liquid phase equilibria and free energy barrier to nucleation. Our results for repulsive soft spheres and the Lennard-Jones potential, LJ(12-6), reveal better-resolved solid and liquid basins compared with the case in which a global order parameter is used. It is also shown that the configuration space is sampled more efficiently in the present method, allowing a more accurate calculation of the free energy barrier and the solid-liquid interfacial free energy. Another feature of the present local order parameter-based method is that it is possible to apply the bias potential to regions of interest in the order parameter space, for example, on the largest nucleus in the case of nucleation studies. In the present scheme for metadynamics simulation of the nucleation in supercooled LJ(12-6) particles, unlike the cases in which global order parameters are employed, there is no need to have an estimate of the size of the critical nucleus and to refine the results with the results of umbrella sampling simulations. The barrier heights and the nucleation pathway obtained from this method agree very well with the results of former umbrella sampling simulations.
Energy Technology Data Exchange (ETDEWEB)
Le Ber, L.; Calmon, P. [CEA/Saclay, STA, 91 - Gif-sur-Yvette (France); Abittan, E. [Electricite de France (EDF-GDL), 93 - Saint-Denis (France)
2001-07-01
The CEA and EDF have started a study of the value of simulation in qualifying ultrasonic inspection methods for nuclear components. In this framework, CEA simulation tools such as CIVA have been tested against real inspections. The method and the results obtained on some examples are presented. (A.L.B.)
Directory of Open Access Journals (Sweden)
Chiara Biscarini
2013-01-01
The numerical simulation of fast-moving fronts originating from dam or levee breaches is a challenging task for small-scale engineering projects. In this work, the use of the fully three-dimensional Navier-Stokes (NS) equations and the lattice Boltzmann method (LBM) is proposed for testing the validity of, respectively, macroscopic and mesoscopic mathematical models. Macroscopic simulations are performed employing an open-source computational fluid dynamics (CFD) code that solves the NS equations combined with the volume of fluid (VOF) multiphase method to represent free-surface flows. The mesoscopic model is a front-tracking experimental variant of the LBM. In the proposed LBM, the liquid-gas interface is represented as a surface of zero thickness that handles the passage of the density field from the light phase to the dense phase and vice versa. A single set of LBM equations represents the liquid phase, while the free surface is characterized by an additional variable, the liquid volume fraction. Case studies show the advantages and disadvantages of the proposed LBM and NS approaches, with specific regard to computational efficiency and accuracy in dealing with the simulation of flows through complex geometries. In particular, the model is validated by simulating the flow propagating through a synthetic urban setting and comparing the results with analytical solutions and experimental laboratory measurements.
Bahçecitapar, Melike Kaya
2017-07-01
Determining the sample size necessary for valid results is a crucial step in the design of longitudinal studies. Simulation-based statistical power calculation is a flexible approach to determining the number of subjects and repeated measures of longitudinal studies, especially in complex designs. Several papers have provided sample size/statistical power calculations for longitudinal studies incorporating data analysis by linear mixed effects models (LMMs). In this study, different estimation methods (methods based on maximum likelihood (ML) and restricted ML) with different iterative algorithms (quasi-Newton and ridge-stabilized Newton-Raphson) for fitting LMMs to generated longitudinal data for simulation-based power calculation are compared. This study examines the statistical power of the F-test statistic for the parameter representing the difference in responses over time between two treatment groups in an LMM with a longitudinal covariate. The most common procedures in SAS, PROC GLIMMIX using the quasi-Newton algorithm and PROC MIXED using the ridge-stabilized algorithm, are used for analyzing the generated longitudinal data in simulation. Both procedures are seen to present similar results. Moreover, it is found that the magnitude of the parameter of interest in the model substantially affects the statistical power calculations in both procedures.
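The core mechanism of simulation-based power calculation can be sketched in a few lines. The example below is a deliberately simplified stand-in: a two-sided, two-sample z-test on independent Gaussian data rather than an F-test in a fitted LMM, but it shows the same procedure the abstract describes, including how the magnitude of the effect parameter drives the estimated power. All sample sizes and effect sizes are illustrative.

```python
import random
import statistics

def simulated_power(effect, n_per_group, n_sims=2000, seed=42):
    """Estimate the power of a two-sided, two-sample test by simulation:
    generate data under the alternative many times and count rejections."""
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value (normal approximation)
    rejections = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        se = (statistics.variance(a) / n_per_group
              + statistics.variance(b) / n_per_group) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_sims
```

For an effect of 0.5 standard deviations and 64 subjects per group, the estimate lands near the textbook 80% power, while the rejection rate under a zero effect stays near the nominal 5% level.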
The Fractional Step Method Applied to Simulations of Natural Convective Flows
Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)
2002-01-01
This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, permeable media, and a directionally solidified metal alloy casting. The Fractional Step Method has been applied commonly to high-Reynolds-number flow simulations, but is less common for low-Reynolds-number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. The Fractional Step Method has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers due to the penalty term. The Fractional Step Method allows iterative solution of the finite-element stiffness matrices, thereby allowing more efficient solution of the matrices. The Fractional Step Method also lends itself to parallel processing, since the velocity-component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the finite-element simulations predict the existence of 'channels' within the mushy zone during processing and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained.
An Implicit Monte Carlo Method for Simulation of Impurity Transport in Divertor Plasma
Suzuki, Akiko; Takizuka, Tomonori; Shimizu, Katsuhiro; Hayashi, Nobuhiko; Hatayama, Akiyoshi; Ogasawara, Masatada
1997-02-01
A new "implicit" Monte Carlo (IMC) method has been developed to simulate ionization and recombination processes of impurity ions in divertor plasmas. The IMC method takes into account many ionization and recombination processes during a time step Δt. The time step is not limited by the condition Δt ≪ τ_min (τ_min: the minimum characteristic time of the atomic processes), which must be adopted in conventional Monte Carlo methods. We incorporate this method into a one-dimensional impurity transport model. In this transport calculation, impurity ions are followed with a time step about 10 times larger than that used in conventional methods. The average charge state of the impurities and the radiative cooling rate, L(Te), are calculated at the electron temperature Te in divertor plasmas. These results are compared with those obtained from a simple noncoronal model.
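The key idea, taking time steps much larger than the fastest atomic-process time by using transition probabilities integrated over the step, can be illustrated with a hypothetical two-state (neutral/ionized) impurity model. This is not the authors' IMC code; the rates, step size, and ensemble size below are illustrative.

```python
import math
import random

def step_exact(state, S, R, dt, rng):
    """Advance one ion over a (possibly large) time step dt using the exact
    two-state transition probabilities, so dt need not resolve 1/(S+R).
    S = ionization rate (0 -> 1), R = recombination rate (1 -> 0)."""
    total = S + R
    p_ion_eq = S / total                     # equilibrium ionized fraction
    decay = math.exp(-total * dt)
    if state == 0:                           # neutral -> maybe ionized
        p = p_ion_eq * (1.0 - decay)
        return 1 if rng.random() < p else 0
    else:                                    # ionized -> maybe recombined
        p = (1.0 - p_ion_eq) * (1.0 - decay)
        return 0 if rng.random() < p else 1

def ionized_fraction(n=20_000, S=3.0, R=1.0, dt=10.0, steps=5, seed=7):
    """Monte Carlo ensemble with dt far larger than 1/(S+R)."""
    rng = random.Random(seed)
    states = [0] * n                         # all ions start neutral
    for _ in range(steps):
        states = [step_exact(s, S, R, dt, rng) for s in states]
    return sum(states) / n
```

Even though dt = 10 is 40 times the relaxation time 1/(S+R), the ensemble relaxes to the analytic equilibrium fraction S/(S+R) = 0.75, which is the property that lets an implicit scheme take large steps safely.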
Furuichi, Mikito; Kameyama, Masanori; Kageyama, Akira
2008-05-01
Toward the unified simulation of the large deformation of a rigid viscoelastic material (plate) and the convection of a viscous fluid (mantle), an Eulerian scheme with a semi-Lagrangian method is developed. The scheme adopts the CIP-CSLR method for the advection terms on a three-dimensional staggered grid system. Positivity of the transported profile of a positive quantity is assured by flux corrections in the dimensional splitting method. The Jaumann co-rotational effect of the stress tensor is also integrated into the semi-Lagrangian treatment. This co-rotated semi-Lagrangian method is combined with an exponential time differencing method in the time development of the Maxwell constitutive model. A large time step, comparable to or larger than the Maxwell relaxation time, is successfully realized. Validation tests are performed for the three-dimensional Rayleigh-Taylor instability of a viscoelastic material with a jump discontinuity in the mass density and other material properties.
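The large-time-step property of semi-Lagrangian advection is easy to demonstrate in one dimension. The sketch below uses plain linear interpolation rather than the CIP-CSLR reconstruction of the paper, so it is more diffusive, but like the paper's scheme it remains stable at Courant numbers well above one and, on a periodic grid, conserves the transported quantity.

```python
import math

def semi_lagrangian_step(f, u_cfl):
    """One semi-Lagrangian advection step on a periodic 1-D grid.
    u_cfl = u*dt/dx may exceed 1: each node traces its departure point
    back along the flow and interpolates, so stability is not CFL-limited."""
    n = len(f)
    out = [0.0] * n
    for i in range(n):
        x_dep = i - u_cfl                    # departure point, index units
        j = math.floor(x_dep)
        w = x_dep - j                        # linear interpolation weight
        out[i] = (1.0 - w) * f[j % n] + w * f[(j + 1) % n]
    return out
```

With u*dt/dx = 2.7 an explicit Eulerian upwind scheme would be unstable; the semi-Lagrangian step simply advects the profile, conserving its sum and introducing no over- or undershoots (linear interpolation is a convex combination).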
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
Energy Technology Data Exchange (ETDEWEB)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents a first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell-average spectral element method, a highly accurate deterministic method that is utilized to provide reference solutions. Several numerical tests involving the time-dependent evolution of a quantum wave packet are performed and discussed in detail. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve satisfactory accuracy.
Truss Structure Optimization with Subset Simulation and Augmented Lagrangian Multiplier Method
Directory of Open Access Journals (Sweden)
Feng Du
2017-11-01
This paper presents a global optimization method for structural design optimization, which integrates subset simulation optimization (SSO) and the dynamic augmented Lagrangian multiplier method (DALMM). The proposed method formulates the structural design optimization as a series of unconstrained optimization sub-problems using DALMM and makes use of SSO to find the global optimum. The combined strategy guarantees that the proposed method can automatically detect active constraints and provide global optimal solutions with finite penalty parameters. The accuracy and robustness of the proposed method are demonstrated by four classical truss sizing problems. The results are compared with those reported in the literature and show remarkable statistical performance based on 30 independent runs.
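The augmented Lagrangian half of the approach can be sketched on a toy problem. In the snippet below, plain gradient descent stands in for subset simulation as the inner unconstrained solver, and the multiplier update lam <- lam + mu*h(x) is the dynamic part; the quadratic objective and single equality constraint are illustrative, not one of the paper's truss problems.

```python
def augmented_lagrangian(f_grad, h, h_grad, x, lam=0.0, mu=10.0,
                         outer=20, inner=200, lr=0.05):
    """Augmented Lagrangian loop: minimise
    L(x) = f(x) + lam*h(x) + (mu/2)*h(x)**2 with gradient descent,
    then update the multiplier lam <- lam + mu*h(x)."""
    for _ in range(outer):
        for _ in range(inner):
            hx = h(x)
            g = [fg + (lam + mu * hx) * hg
                 for fg, hg in zip(f_grad(x), h_grad(x))]
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        lam += mu * h(x)                 # dynamic multiplier update
    return x, lam
```

On the toy problem min x1^2 + x2^2 subject to x1 + x2 = 1, the iterates approach the optimum (0.5, 0.5) with multiplier near -1, and the penalty parameter mu stays finite, which is the property the abstract emphasizes.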
On simulating flow with multiple time scales using a method of averages
Energy Technology Data Exchange (ETDEWEB)
Margolin, L.G. [Los Alamos National Lab., NM (United States)
1997-12-31
The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
Efficient effective-energy method for lattice-Green's-function simulations of fracture
Canel, L. M.; Carlsson, A. E.; Thomson, Robb
1995-07-01
This paper discusses a method for finding equilibria within the lattice-Green's-function formulation. The method involves the creation of an energy functional expressed in terms of only a small subset of the (>10^6) total number of degrees of freedom. It is much more efficient and robust numerically than former methods of solution of the Green's-function equations, particularly when the subset grows to O(10^3). The energy functional may be used in conjunction with state-of-the-art conjugate gradient, quasi-Newton, or simulated annealing methods to find minimum-energy configurations and compare their energies. In addition, if constraints are placed on the allowed relations between a few of the degrees of freedom, then the method may be used to find the energies of unstable equilibria and hence activation energies.
Matin, Rastin; Misztal, Marek K.; Hernandez-Garcia, Anier; Mathiesen, Joachim
2015-11-01
Many hydrodynamic phenomena, such as flows at the micron scale in porous media, large-Reynolds-number flows, and non-Newtonian and multiphase flows, have been simulated numerically using the lattice Boltzmann method. By solving the lattice Boltzmann equation on three-dimensional unstructured meshes, we efficiently model single-phase fluid flow in real rock samples. We use the flow field to estimate the permeability and further investigate the anomalous dispersion of passive tracers in porous media. By extending our single-phase model with a free-energy based method, we are able to simulate binary systems with moderate density ratios in a thermodynamically consistent way. In this presentation we will present our recent results on both anomalous transport and multiphase segregation.
Large eddy simulation of turbulent mixing by using 3D decomposition method
Energy Technology Data Exchange (ETDEWEB)
Issakhov, Alibek, E-mail: aliisahov@mail.ru [al-Farabi Kazakh National University, Almaty (Kazakhstan)
2011-12-22
A parallel implementation of an algorithm for the numerical solution of the Navier-Stokes equations for large eddy simulation (LES) of turbulence is presented in this research. The dynamic Smagorinsky model is applied for sub-grid simulation of turbulence. The numerical algorithm was constructed using a scheme of splitting on physical parameters. At the first stage, it is assumed that momentum transport takes place only through convection and diffusion; an intermediate velocity field is determined by the method of fractional steps using the Thomas algorithm (tridiagonal matrix algorithm). At the second stage, the intermediate velocity field is used to determine the pressure field: a three-dimensional Poisson equation for the pressure field is solved using the over-relaxation method.
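The tridiagonal (Thomas) solver mentioned in the abstract is a standard O(n) forward-elimination/back-substitution routine; a minimal sketch follows. The interface (three diagonals plus right-hand side) is a common convention, not necessarily the one used in the paper's code.

```python
def thomas(a, b, c, d):
    """Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i].
    a[0] and c[-1] are unused. Forward elimination, then back substitution."""
    n = len(d)
    cp = [0.0] * n                       # modified upper diagonal
    dp = [0.0] * n                       # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In a fractional-step solver, each implicit sweep reduces to many independent solves of such systems along grid lines, which is also what makes the stage easy to parallelize.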
Simulation tests of the optimization method of Hopfield and Tank using neural networks
Paielli, Russell A.
1988-01-01
The method proposed by Hopfield and Tank for using the Hopfield neural network with continuous valued neurons to solve the traveling salesman problem is tested by simulation. Several researchers have apparently been unable to successfully repeat the numerical simulation documented by Hopfield and Tank. However, as suggested to the author by Adams, it appears that the reason for those difficulties is that a key parameter value is reported erroneously (by four orders of magnitude) in the original paper. When a reasonable value is used for that parameter, the network performs generally as claimed. Additionally, a new method of using feedback to control the input bias currents to the amplifiers is proposed and successfully tested. This eliminates the need to set the input currents by trial and error.
Large-eddy simulations of a S826 airfoil with the Discontinuous Galerkin Method
DEFF Research Database (Denmark)
Frère, A.; Chivaee, Hamid Sarlak; Mikkelsen, Robert Flemming
2014-01-01
The aim of the present work is to improve the understanding of low Reynolds number flow physics by performing Large-Eddy Simulations (LES) of the NREL S826 airfoil. The paper compares the results obtained with a novel high-order code based on the Discontinuous Galerkin Method (ArgoDG) and a recent experiment performed at the Technical University of Denmark. Chordwise pressure evolutions and integrated lift and drag forces are compared at Reynolds number 4×10^4 and angles of attack (AoA) of 10 and 12 degrees. Important differences are observed between the simulations and the experiment. These differences are, however, partially explained by the strong sensitivity to the tunnel environment. To overcome this source of error, the ArgoDG LES results are also compared to LES performed with the Finite Volume Method (FVM) code EllipSys3D, a well-established wind turbine Computational Fluid Dynamics (CFD) code.
Electrostatic plasma simulation by Particle-In-Cell method using ANACONDA package
Blandón, J. S.; Grisales, J. P.; Riascos, H.
2017-06-01
Electrostatic plasma is the most representative and basic case in the field of plasma physics. One of its main characteristics is its ideal behavior, since it is assumed to be in a thermal equilibrium state. Through this assumption, it is possible to study various complex phenomena such as plasma oscillations, waves, instabilities, or damping. Likewise, computational simulation of this specific plasma is the first step to analyzing the physical mechanisms of plasmas that are not at equilibrium and hence not ideal. The Particle-In-Cell (PIC) method is widely used because of its precision for this kind of case. This work presents a PIC method implementation to simulate electrostatic plasma in Python, using ANACONDA packages. The code has been corroborated by comparison with previous theoretical results for three specific phenomena in cold plasmas: oscillations, the Two-Stream Instability (TSI), and Landau Damping (LD). Finally, parameters and results are discussed.
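Of the three validation cases mentioned, the plasma oscillation is the simplest to reproduce without a full PIC code. The sketch below integrates the textbook cold-plasma sheet model, xi'' = -wp^2 * xi, with a leapfrog scheme (the time integrator typically used in PIC); the electron density and displacement are illustrative values, and no ANACONDA packages are assumed.

```python
import math

def plasma_oscillation(xi0=1e-3, n_e=1e18, steps=2000):
    """Leapfrog integration of a displaced electron sheet in a uniform ion
    background: xi'' = -wp**2 * xi, the textbook cold-plasma oscillation."""
    e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12   # SI constants
    wp = math.sqrt(n_e * e * e / (eps0 * m_e))       # plasma frequency, rad/s
    dt = (2.0 * math.pi / wp) / 200                  # 200 steps per period
    xi, v = xi0, 0.0
    v -= 0.5 * dt * wp * wp * xi                     # half kick to stagger v
    traj = []
    for _ in range(steps):
        xi += dt * v                                 # drift
        v -= dt * wp * wp * xi                       # kick
        traj.append(xi)
    return wp, dt, traj
```

For n_e = 1e18 m^-3 the plasma frequency comes out near 5.6e10 rad/s, and the displacement oscillates at that frequency with no secular amplitude growth, which is the symplectic property that makes leapfrog the standard choice in PIC codes.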
DEFF Research Database (Denmark)
Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław
2017-01-01
We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short bursts of path simulations with extrapolation of a number of macroscopic state variables forward in time. The new microscopic state, consistent with the extrapolated variables, is obtained by a matching operator that minimises the perturbation caused by the extrapolation. We provide a proof of the convergence of this method, in the absence of statistical error, and we analyse various strategies for matching, as an operator on probability measures. Finally, we present numerical experiments that illustrate the effects of the different approximations on the resulting error in macroscopic predictions.
Integrated Building Energy Design of a Danish Office Building Based on Monte Carlo Simulation Method
DEFF Research Database (Denmark)
Sørensen, Mathias Juul; Myhre, Sindre Hammer; Hansen, Kasper Kingo
2017-01-01
The focus on reducing buildings' energy consumption is gradually increasing, and optimizing a building's performance and maximizing its potential leads to great challenges between architects and engineers. In this study, we collaborate with a group of architects on a design project for a new office building located in Aarhus, Denmark. Building geometry, floor plans, and employee schedules were obtained from the architects and form the basis for this study. The study aims to simplify the iterative design process that is based on the traditional trial-and-error method in the late design phases, and to improve collaboration efficiency. The Monte Carlo simulation method is adopted to simulate both the energy performance and the indoor climate of the building. Building physics parameters, including characteristics of facades, walls, windows, etc., are taken into consideration, spanning thousands of combinations.
Application of a SPH Coupled FEM Method for Simulation of Trimming of Aluminum Autobody Sheet
Directory of Open Access Journals (Sweden)
Bohdal Łukasz
2016-03-01
In this paper, the application of the mesh-free SPH (Smoothed Particle Hydrodynamics) continuum method to the simulation and analysis of the trimming process is presented. For shearing simulations, for example of blanking, piercing, or slitting, the existing literature applies the finite element method (FEM) to the analysis of these processes. The approach presented in this work, and its application to the trimming of aluminum autobody sheet, allows for a complex analysis of the physical phenomena occurring during the process without significant deterioration in the quality of a finite element mesh under large deformation. This allows for an accurate representation of the loss of cohesion of the material under the influence of the cutting tools. An analysis of the state of stress, the state of strain, and the fracture mechanisms of the material is presented. In the experimental studies, an advanced vision-based technology based on digital image correlation (DIC) is used for monitoring the cutting process.
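The building blocks of any SPH calculation, a compactly supported smoothing kernel and the summation density, can be sketched independently of the trimming application. The snippet uses the standard 2-D cubic spline kernel (normalisation 10/(7*pi*h^2)); the particle arrangement and smoothing length are illustrative and unrelated to the paper's aluminum-sheet model.

```python
import math

def cubic_spline_w(r, h):
    """Standard cubic spline SPH kernel in 2-D, support radius 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, mass, h):
    """Summation density: rho_i = sum_j m_j * W(|r_i - r_j|, h)."""
    rho = []
    for (xi, yi) in positions:
        s = 0.0
        for (xj, yj) in positions:
            r = math.hypot(xi - xj, yi - yj)
            s += mass * cubic_spline_w(r, h)
        rho.append(s)
    return rho
```

On a uniform unit-spacing grid with h = 1.2, the summation density of an interior particle reproduces the nominal density to within about 2%, while edge particles show the well-known free-surface density deficit, one reason SPH handles material separation naturally.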
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator, and the transition rate is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
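The heat-bath relaxation test mentioned above has a simple continuum analogue: the Landau-Teller rate equation dE_v/dt = (E_eq - E_v)/tau, which a DSMC vibrational-exchange model reproduces stochastically. A sketch of that deterministic reference curve follows, with illustrative (non-dimensional) energies and tau = 1.

```python
def landau_teller_relax(e0, e_eq, tau, dt, steps):
    """Explicit integration of the Landau-Teller equation
    dE_v/dt = (E_eq - E_v) / tau for a cell-averaged vibrational energy."""
    e = e0
    history = [e]
    for _ in range(steps):
        e += dt * (e_eq - e) / tau
        history.append(e)
    return history
```

The energy relaxes exponentially toward E_eq as E(t) = E_eq + (E_0 - E_eq) * exp(-t/tau); matching this curve in the heat-bath limit is the minimum requirement a DSMC energy-exchange model must satisfy, alongside detailed balance.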
Method to simulate and analyse induced stresses for laser crystal packaging technologies.
Ribes-Pleguezuelo, Pol; Zhang, Site; Beckert, Erik; Eberhardt, Ramona; Wyrowski, Frank; Tünnermann, Andreas
2017-03-20
A method to simulate the stresses induced by a laser crystal packaging technique, and consequently to study birefringent effects inside laser cavities, has been developed. The method relies on thermo-mechanical simulations performed with ANSYS 17.0. The ANSYS results were later imported into the VirtualLab Fusion software, where the input/output beams were analysed in terms of wavelength and polarization. The study was carried out in the context of a low-stress soldering technique for packaging glass or crystal optics, called the solderjet bumping technique. The outcome of the analysis showed almost no difference between the input and output laser beams for a laser cavity constructed with an yttrium aluminum garnet active laser crystal, a beta-barium borate second-harmonic generator, and an output laser mirror made of fused silica, assembled by the low-stress solderjet bumping technique.
Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods
Energy Technology Data Exchange (ETDEWEB)
Deschamps, T; Schwartz, P; Trebotich, D; Colella, P; Saloner, D; Malladi, R
2004-12-09
In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Sets methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood-flow inside the extracted surface without losing any complicated details and without building additional grids.
Directory of Open Access Journals (Sweden)
G Boroni
2017-03-01
The Lattice Boltzmann Method (LBM) has shown great potential in fluid simulations, but performance issues and difficulties in managing complex boundary conditions have hindered wider application. The advent of Graphics Processing Unit (GPU) computing offered a possible solution to the performance issue, and methods like the Immersed Boundary (IB) algorithm proved to be a flexible solution for boundaries. Unfortunately, the implicit IB algorithm makes the LBM implementation on a GPU a non-trivial task. This work presents a fully parallel GPU implementation of LBM in combination with IB. The fluid-boundary interaction is implemented via GPU kernels, using execution configurations and data structures specifically designed to accelerate each code execution. Simulations were validated against experimental and analytical data, showing good agreement. Substantial reductions in computation time were achieved, lowering the time required to execute the same model on a CPU by about two orders of magnitude.
Modeling and simulation of ocean wave propagation using lattice Boltzmann method
Nuraiman, Dian
2017-10-01
In this paper, we present the modeling and simulation of ocean wave propagation from the deep sea to the shoreline, which requires a high computational cost for simulations over large domains. We propose to couple a 1D shallow water equations (SWE) model with a 2D incompressible Navier-Stokes equations (NSE) model in order to reduce the computational cost. The coupled model is solved using the lattice Boltzmann method (LBM) with the lattice Bhatnagar-Gross-Krook (BGK) scheme. Additionally, a special method is implemented to treat the complex behavior of the free surface close to the shoreline. The results show that the coupled model can reduce the computational cost significantly compared to the full NSE model.
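A minimal lattice BGK solver shows the collide-and-stream structure underlying such models, though on a far simpler problem: pure diffusion of a density pulse on a periodic 1-D (D1Q3) lattice rather than shallow-water or Navier-Stokes dynamics. The weights, relaxation time, and lattice size below are illustrative.

```python
def lbm_d1q3_diffusion(rho0, tau=1.0, steps=200):
    """Minimal D1Q3 lattice Boltzmann solver with BGK collision for pure
    diffusion on a periodic 1-D lattice; D = (tau - 0.5)/3 in lattice units."""
    w = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]   # weights for velocities -1, 0, +1
    n = len(rho0)
    f = [[w[k] * rho0[i] for i in range(n)] for k in range(3)]
    for _ in range(steps):
        rho = [f[0][i] + f[1][i] + f[2][i] for i in range(n)]
        # BGK collision: relax each population toward equilibrium w[k]*rho
        for k in range(3):
            for i in range(n):
                f[k][i] += (w[k] * rho[i] - f[k][i]) / tau
        # streaming: shift the moving populations on the periodic lattice
        f[0] = f[0][1:] + f[0][:1]           # velocity -1 moves left
        f[2] = f[2][-1:] + f[2][:-1]         # velocity +1 moves right
    return [f[0][i] + f[1][i] + f[2][i] for i in range(n)]
```

Mass is conserved to round-off and an initial delta pulse spreads symmetrically; the same collide-and-stream loop, with momentum-carrying equilibria, underlies SWE and NSE lattice Boltzmann solvers.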
Comparison of texture synthesis methods for content generation in ultrasound simulation for training
Mattausch, Oliver; Ren, Elizabeth; Bajka, Michael; Vanhoey, Kenneth; Goksel, Orcun
2017-03-01
Navigation and interpretation of ultrasound (US) images require substantial expertise, the training of which can be aided by virtual-reality simulators. However, a major challenge in creating plausible simulated US images is the generation of realistic ultrasound speckle. Since typical ultrasound speckle exhibits many properties of Markov Random Fields, it is conceivable to use texture synthesis for generating plausible US appearance. In this work, we investigate popular classes of texture synthesis methods for generating realistic US content. In a user study, we evaluate their performance for reproducing homogeneous tissue regions in B-mode US images from small image samples of similar tissue and report the best-performing synthesis methods. We further show that regression trees can be used on speckle texture features to learn a predictor for US realism.
Développement d'une méthode de simulation de pompage au sein d'un compresseur multi-étage [Development of a surge simulation method for a multi-stage compressor]
Dumas, Martial
Surge is an unsteady phenomenon which appears when a compressor operates at a mass flow that is too low relative to its design point. This aerodynamic instability is characterized by large oscillations in pressure and mass flow, resulting in a sudden drop in power delivered by a gas turbine engine and possibly significant damage to engine components. The methodology developed in this thesis allows for the simulation of the flow behavior inside a multi-stage compressor during surge and, by extension, the prediction at the design phase of the time variation of aerodynamic forces on the blades, and of the pressure and temperature at bleed locations inside the compressors for turbine cooling. While the compressor is the component of interest and the trigger for surge, the flow behavior during this event also depends on other engine components (combustion chamber, turbine, ducts). However, the simulation of the entire gas turbine engine cannot be carried out in a practical manner with existing computational technologies. The approach taken consists of coupling 3-D RANS CFD simulations of the compressor with 1-D equations modeling the behavior of the other components, applied as dynamic boundary conditions. The method was put into practice in a commercial RANS CFD code (ANSYS CFX) whose integrated options facilitated the implementation of the 1-D equations into the dynamic boundary conditions of the computational domain. In addition, in order to limit computational time, only one blade passage was simulated per blade row to capture surge, which is essentially a one-dimensional phenomenon. This methodology was applied to several compressor geometries with distinct features. Simulations on a low-speed (incompressible) three-stage axial compressor allowed for a validation with experimental data, which showed that the pressure and mass flow oscillations are captured well. This comparison also highlighted the strong dependence of the oscillation frequency on the volume of the
A mass conserving level set method for detailed numerical simulation of liquid atomization
Energy Technology Data Exchange (ETDEWEB)
Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)
2015-10-01
An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface and, in principle, can ensure absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and binary drop head-on collision, are simulated to validate the present method, and excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and swirling liquid sheet atomization, which again demonstrates the advantages of mass conservation and the capability to represent the interface accurately.
IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD
Directory of Open Access Journals (Sweden)
A. S. Potapov
2014-09-01
The subject of this research is deep learning methods, in which feature transforms are constructed automatically in pattern recognition tasks. Multilayer autoencoders were taken as the considered type of deep learning network. The autoencoders perform a nonlinear feature transform, with logistic regression as the upper classification layer. To verify the hypothesis that recognition rates can be improved by global optimization of the parameters of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, a new method was designed and implemented. The method applies simulated annealing to tune the connection weights of the autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database showed a decrease in recognition error rate by a factor of 1.1 to 1.5 for the modified method compared to the traditional method based on local optimization. Thus, no overfitting effect appears, and the possibility of improving learning in deep networks by global optimization methods (in terms of increased recognition probability) is confirmed. The results can be applied to improve the probability of pattern recognition in fields that require automatic construction of nonlinear feature transforms, in particular image recognition. Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing.
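The core simulated-annealing loop used for weight tuning can be sketched generically: propose a random perturbation, always accept improvements, accept deteriorations with probability exp(-ΔE/T), and cool the temperature. The sketch below minimizes a toy quadratic "loss" in place of an autoencoder objective; all parameter values (step size, schedule) are illustrative assumptions.

```python
import math
import random

def simulated_annealing(loss, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Generic simulated annealing: Metropolis acceptance with geometric cooling."""
    rng = random.Random(seed)
    x, e = list(x0), loss(x0)
    best_x, best_e = list(x), e
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0, step) for xi in x]   # random perturbation
        ec = loss(cand)
        # accept if better, or with Boltzmann probability if worse
        if ec < e or rng.random() < math.exp(-(ec - e) / max(t, 1e-12)):
            x, e = cand, ec
            if e < best_e:
                best_x, best_e = list(x), e
        t *= cooling                                    # geometric cooling schedule
    return best_x, best_e

# toy stand-in for a weight vector: minimize a shifted quadratic
w, err = simulated_annealing(lambda v: sum((vi - 1.0) ** 2 for vi in v), [5.0, -3.0])
```

In the paper's setting the `loss` would be the network's classification error while the regression layer trains concurrently; here it is just a smooth test function.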
Convergence of methods for coupling of microscopic and mesoscopic reaction–diffusion simulations
Flegg, Mark B.
2015-05-01
© 2015 Elsevier Inc. In this paper, three multiscale methods for coupling mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method, introduced and analysed in this paper, is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of the sources of error is presented. The convergence of this error is studied as the time step δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) δt → 0 with h fixed; (ii) δt → 0 and h → 0 such that δt/h is fixed. The error of the previously developed approaches (the TRM and CPM) converges to zero only in limiting case (ii), not in case (i). It is shown that the error of the GCM converges in limiting case (i). Thus the GCM is superior to previous coupling techniques when the mesoscopic description is much coarser than the microscopic part of the model.
New method for qualitative simulations of water resources systems. 2. Applications
Energy Technology Data Exchange (ETDEWEB)
Antunes, M.P.; Seixas, M.J.; Camara, A.S.; Pinheiro, M.
1987-11-01
SLIN (Simulacao Linguistica) is a new method for qualitative dynamic simulation. As was presented previously, SLIN relies upon a categorical representation of variables which are manipulated by logical rules. Two applications to water resources systems are included to illustrate SLIN's potential usefulness: the environmental impact evaluation of a hydropower plant and the assessment of oil dispersion in the sea after a tanker wreck.
Method for Lumped Parameter simulation of Digital Displacement pumps/motors based on CFD
DEFF Research Database (Denmark)
Rømer, Daniel; Johansen, Per; Pedersen, Henrik C.
2013-01-01
For the design and control of digital displacement machines, there is a need for simulation models, preferably models with low computational cost. Therefore, a low-computational-cost generic lumped parameter model of a digital displacement machine is presented, including a method for determining the needed model parameters based on steady CFD results, in order to take detailed geometry information into account. The response of the lumped parameter model is compared to a computationally expensive transient CFD model for an example geometry.
Energy Technology Data Exchange (ETDEWEB)
Castillo, Victor Manuel [Univ. of California, Davis, CA (United States)
1999-01-01
A collocation method using cubic splines is developed and applied to simulate steady and time-dependent, including turbulent, thermally convecting flows for two-dimensional compressible fluids. The state variables and the fluxes of the conserved quantities are approximated by cubic splines in both space directions. This method is shown to be numerically conservative and to have a local truncation error proportional to the fourth power of the grid spacing. A "dual-staggered" Cartesian grid, where energy and momentum are updated on one grid and mass density on the other, is used to discretize the flux form of the compressible Navier-Stokes equations. Each grid line is staggered so that the fluxes, in each direction, are calculated at the grid midpoints. This numerical method is validated by simulating thermally convecting flows, from steady to turbulent, reproducing known results. Once validated, the method is used to investigate many aspects of thermal convection with high numerical accuracy. Simulations demonstrate that multiple steady solutions can coexist at the same Rayleigh number for compressible convection. As a system is driven further from equilibrium, a drop in the time-averaged dimensionless heat flux (and the dimensionless internal entropy production rate) occurs at the transition from laminar-periodic to chaotic flow. This observation is consistent with experiments on real convecting fluids. Near this transition, both harmonic and chaotic solutions may exist for the same Rayleigh number. The chaotic flow loses phase-space information at a greater rate, while the periodic flow transports heat (produces entropy) more effectively. A linear sum of the dimensionless forms of these rates connects the two flow morphologies over the entire range for which they coexist. For simulations of systems with higher Rayleigh numbers, a scaling relation exists relating the dimensionless heat flux to the two-sevenths power of the Rayleigh number.
Field simulation of axisymmetric plasma screw pinches by alternating-direction-implicit methods
Energy Technology Data Exchange (ETDEWEB)
Lambert, Michael Allen [Univ. of California, Davis, CA (United States)
1996-06-01
An axisymmetric plasma screw pinch is an axisymmetric column of ionized gaseous plasma radially confined by forces from axial and azimuthal currents driven in the plasma and its surroundings. This dissertation is a contribution to detailed, high resolution computer simulation of dynamic plasma screw pinches in 2-d rz-coordinates. The simulation algorithm combines electron fluid and particle-in-cell (PIC) ion models to represent the plasma in a hybrid fashion. The plasma is assumed to be quasineutral; along with the Darwin approximation to the Maxwell equations, this implies application of Ampère's law without displacement current. Electron inertia is assumed negligible, so advective terms in the electron momentum equation are ignored. Electrons and ions have separate scalar temperatures, and a scalar plasma electrical resistivity is assumed. Alternating-direction-implicit (ADI) methods are used to advance the electron fluid drift velocity and the magnetic fields in the simulation. The ADI methods allow time steps larger than allowed by explicit methods. Spatial regions where vacuum field equations have validity are determined by a cutoff density that invokes the quasineutral vacuum Maxwell equations (Darwin approximation). In this dissertation, the algorithm was first checked against ideal MHD stability theory, and agreement was nicely demonstrated. However, such agreement is not a new contribution to the research field. Contributions to the research field include new treatments of the fields in vacuum regions of the pinch simulation. The new treatments predict a level of magnetohydrodynamic turbulence near the bulk plasma surface that is higher than predicted by other methods.
Two Methods For Simulating the Strong-Strong Beam-Beam Interaction in Hadron Colliders
Energy Technology Data Exchange (ETDEWEB)
Warnock, Robert L.
2002-11-11
We present and compare the method of weighted macro particle tracking and the Perron-Frobenius operator technique for simulating the time evolution of two beams coupled via the collective beam-beam interaction in 2-D and 4-D (transverse) phase space. The coherent dipole modes, with and without lattice nonlinearities and external excitation, are studied by means of the Vlasov-Poisson system.
Akhmatskaya, Elena; Fernández-Pendás, Mario; Radivojević, Tijana; Sanz-Serna, J M
2017-10-24
The modified Hamiltonian Monte Carlo (MHMC) methods, i.e., importance sampling methods that use modified Hamiltonians within a Hybrid Monte Carlo (HMC) framework, often outperform standard techniques such as molecular dynamics (MD) and HMC in sampling efficiency. The performance of MHMC may be enhanced further through the rational choice of the simulation parameters and by replacing the standard Verlet integrator with more sophisticated splitting algorithms. Unfortunately, it is not easy to identify appropriate values for the parameters that appear in those algorithms. We propose a technique, called MAIA (Modified Adaptive Integration Approach), which, for a given simulation system and a given time step, automatically selects the optimal integrator within a useful family of two-stage splitting formulas. Extended MAIA (or e-MAIA) is an enhanced version of MAIA, which additionally supplies a value of the method-specific parameter that, for the problem under consideration, keeps the momentum acceptance rate at a user-desired level. The MAIA and e-MAIA algorithms have been implemented, with no computational overhead during simulations, in MultiHMC-GROMACS, a modified version of the popular software package GROMACS. Tests performed on well-known molecular models demonstrate the superiority of the suggested approaches over a range of integrators (both standard and recently developed), as well as their capacity to improve the sampling efficiency of GSHMC, a notable method for molecular simulation in the MHMC family. GSHMC combined with e-MAIA shows remarkably good performance when compared to MD and HMC coupled with the appropriate adaptive integrators.
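A two-stage splitting integrator of the kind MAIA selects among can be written as the composition B(b·h) A(h/2) B((1−2b)·h) A(h/2) B(b·h), where A drifts positions and B kicks momenta; b = 1/4 recovers two concatenated velocity-Verlet half-steps. The sketch below applies one such step to a harmonic oscillator as a stand-in test system; the values of b and h are illustrative, not MAIA's optimized output.

```python
def two_stage_step(q, p, h, b, force):
    """One B(b h) A(h/2) B((1-2b) h) A(h/2) B(b h) splitting step (unit mass)."""
    p += b * h * force(q)          # first momentum kick
    q += 0.5 * h * p               # half drift
    p += (1 - 2 * b) * h * force(q)  # central momentum kick
    q += 0.5 * h * p               # half drift
    p += b * h * force(q)          # final momentum kick
    return q, p

force = lambda q: -q               # harmonic oscillator, k = m = 1
q, p, h, b = 1.0, 0.0, 0.1, 0.25   # b = 1/4: two velocity-Verlet half-steps
e0 = 0.5 * (q * q + p * p)         # initial energy
for _ in range(1000):
    q, p = two_stage_step(q, p, h, b, force)
e1 = 0.5 * (q * q + p * p)         # energy after 1000 steps
```

Because the splitting is symplectic, the energy error stays bounded over long runs instead of drifting; MAIA's job, roughly, is to pick the b that minimizes the relevant error measure for the system at hand.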
Optimal design of a DC MHD pump by simulated annealing method
Directory of Open Access Journals (Sweden)
Bouali Khadidja
2014-01-01
In this paper, a design methodology for a magnetohydrodynamic pump is proposed. The methodology is based on direct interpretation of the design problem as an optimization problem. The simulated annealing method is used for the optimal design of a DC MHD pump. The optimization procedure uses an objective function, here the minimization of the mass. The constraints are both geometric and electromagnetic in type. The obtained results are reported.
Simulation of Corrosion Process for Structure with the Cellular Automata Method
Chen, M. C.; Wen, Q. Q.
2017-06-01
In this paper, from the mesoscopic point of view and under the assumption that metal corrosion damage evolution is a diffusive process, the cellular automata (CA) method is proposed to numerically simulate the uniform corrosion damage evolution of the outer steel tube of concrete filled steel tubular columns subjected to a corrosive environment. The effects of corrosive agent concentration, dissolution probability, and elapsed etching time on the corrosion damage evolution were also investigated. It was shown that corrosion damage increases nonlinearly with elapsed etching time: the longer the etching time, the more serious the corrosion damage. Different concentrations of corrosive agents had different impacts on the corrosion damage degree of the outer steel tube, though the differences between these impacts were small; the higher the concentration, the more serious the damage. The greater the dissolution probability, the more serious the corrosion damage of the outer steel tube, but as the dissolution probability increased, the differences between its impacts on the corrosion damage became smaller and smaller. To validate the present method, corrosion damage measurements were conducted on concrete filled square steel tubular columns (CFSSTCs) sealed at both ends and fully immersed in a simulated acid rain solution, and Faraday's law was used to predict their theoretical values. Meanwhile, the proposed CA model was applied to simulate the corrosion damage evolution of the CFSSTCs. Comparisons of the results from the three methods showed good agreement, implying that the proposed method for simulating the corrosion damage evolution of concrete filled steel tubular columns is feasible and effective. It opens a new approach for further study and evaluation of corrosion damage, loading capacity, and lifetime prediction of concrete filled steel tubular structures.
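A minimal CA corrosion rule of the kind described above can be sketched in a few lines: a cell of intact metal that is exposed (on the surface, or adjacent to an already-corroded cell) dissolves with a fixed probability each step. This toy rule set is a hypothetical illustration of the mechanism, not the paper's calibrated model.

```python
import random

def corrode(grid, p_dissolve, steps, seed=1):
    """Toy CA: intact cells (1) exposed to the boundary or to a corroded
    neighbor (0) dissolve with probability p_dissolve per step."""
    rng = random.Random(seed)
    n = len(grid)
    for _ in range(steps):
        nxt = [row[:] for row in grid]           # synchronous update
        for i in range(n):
            for j in range(n):
                if grid[i][j] == 1:
                    exposed = (i == 0 or i == n - 1 or j == 0 or j == n - 1 or
                               0 in (grid[i - 1][j], grid[i + 1][j],
                                     grid[i][j - 1], grid[i][j + 1]))
                    if exposed and rng.random() < p_dissolve:
                        nxt[i][j] = 0            # cell dissolves
        grid = nxt
    return grid

g = [[1] * 10 for _ in range(10)]                # intact 10x10 metal section
out = corrode(g, p_dissolve=0.5, steps=5)
damage = sum(row.count(0) for row in out)        # number of corroded cells
```

Running the rule for more steps or with a larger `p_dissolve` produces more damage on average, mirroring the qualitative trends the abstract reports for etching time and dissolution probability.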
Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G
2013-01-01
In biomedicine, Monte Carlo (MC) simulation is commonly used to simulate light diffusion in tissue. However, most previous studies did not consider a radial-beam LED as the light source. Therefore, we considered the characteristics of a radial-beam LED and applied them in MC simulation as the light source. In this paper, we consider three characteristics of the radial-beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial photon launch area. The third is the refraction effect according to the contact area between the LED and the turbid medium. To verify the MC simulation, we compared simulation and experimental results. The average correlation coefficient between simulation and experimental results is 0.9954. Through this study, we show an effective method to simulate light diffusion in tissue with the characteristics of a radial-beam LED based on MC simulation.
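The three source characteristics listed above map naturally onto the photon-launch step of an MC code: sample a position on the emitting face, sample an emission angle, then refract at the LED/tissue interface via Snell's law. The sketch below is a hypothetical launch routine; the emitter radius, half-angle, and refractive indices are illustrative values, not those of the paper.

```python
import math
import random

def launch_photon(rng, r_led=1.0, half_angle=math.radians(60),
                  n_led=1.5, n_tissue=1.37):
    """Sample one photon's launch position and refracted polar angle."""
    # (1) uniform position on the disk-shaped emitting face
    r = r_led * math.sqrt(rng.random())
    phi = 2 * math.pi * rng.random()
    x, y = r * math.cos(phi), r * math.sin(phi)
    # (2) polar emission angle within the LED's half-angle (toy distribution)
    theta = half_angle * rng.random()
    # (3) Snell's law at the contact interface: n_led sin θ = n_tissue sin θ_t
    s = min(1.0, n_led * math.sin(theta) / n_tissue)
    theta_t = math.asin(s)
    return (x, y, theta_t)

rng = random.Random(0)
photons = [launch_photon(rng) for _ in range(1000)]
```

Each launched photon would then be propagated through the turbid medium by the usual MC absorption/scattering loop, which is unchanged by the choice of source.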
Bhattacharya, Amitabh; Kesarkar, Tejas
2016-10-01
A combination of finite difference (FD) and boundary integral (BI) methods is used to formulate an efficient solver for simulating unsteady Stokes flow around particles. The two-dimensional (2D) unsteady Stokes equation is solved on a Cartesian grid using a second-order FD method, while the 2D steady Stokes equation is solved near the particle using the BI method. The two methods are coupled within the viscous boundary layer, a few FD grid cells away from the particle, where solutions from both FD and BI methods are valid. We demonstrate that this hybrid method can be used to accurately solve for the flow around particles with irregular shapes, even though the radius of curvature of the particle surface is not resolved by the FD grid. For dilute particle concentrations, we construct a virtual envelope around each particle and solve the BI problem for the flow field located between the envelope and the particle. The BI solver provides the velocity boundary condition to the FD solver at "boundary" nodes located on the FD grid, adjacent to the particles, while the FD solver provides the velocity boundary condition to the BI solver at points located on the envelope. The coupling between the FD method and the BI method is implicit at every time step. This method allows us to formulate an O(N) scheme for dilute suspensions, where N is the number of particles. For semidilute suspensions, where particles may cluster, an envelope formation method has been formulated and implemented, which enables solving the BI problem for each individual particle cluster, allowing efficient simulation of hydrodynamic interaction between particles even when they are in close proximity. The method has been validated against analytical results for flow around a periodic array of cylinders and for the Jeffrey orbit of a moving ellipse in shear flow. Simulation of multiple force-free irregular shaped particles in the presence of shear in a 2D slit flow has been conducted to demonstrate the robustness of
Study on the Growth of Holes in Cold Spraying via Numerical Simulation and Experimental Methods
Directory of Open Access Journals (Sweden)
Guosheng Huang
2016-12-01
Cold spraying is a promising method for rapid prototyping due to its high deposition efficiency and high-quality bonding characteristics. However, many researchers have noticed that holes cannot be replenished and grow larger and larger once formed, which significantly decreases the deposition efficiency. No work has yet been done on this problem. In this paper, a computational simulation method was used to investigate the origins of these holes and the reasons for their growth. A thick copper coating was deposited around pre-drilled, micro-sized holes on a copper substrate using cold spraying to verify the simulation results. The results indicate that the deposition efficiency inside a hole decreases as the hole becomes deeper and narrower. The repellent force between particles, perpendicular to the impact direction, will lead to porosity if the particles are too close. Successive particles arriving too close to the same location show a much lower flattening ratio, because their momentum energy contributes to the deformation of the preceding particle. There is a high probability that these two phenomena, resulting from a high powder-feeding rate, will form the original hole, which then grows larger and larger. It is therefore very important to control the powder-feeding rate, but its upper limit remains to be determined by further simulation and experimental investigation.
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.
2014-05-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable real-time forecasting of tumor ablation treatment simulations, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method to forecast the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers a very fast numerical solution to bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In such a way, tumor ablation treatment planning is feasible using just a personal computer, thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by medical image modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
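The DMD forecasting idea is independent of the bioheat application: given snapshots X = [x₀ … x_m], fit a low-rank linear operator with X′ ≈ A X via the SVD, and forecast by propagating its eigenmodes. The sketch below is a bare-bones exact-DMD implementation on synthetic decaying/oscillating data; it illustrates the algorithm, not the paper's meshless solver pipeline.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD. X: (n, m+1) snapshots; r: truncation rank. Returns modes,
    eigenvalues of the fitted operator, and mode amplitudes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r]
    Atilde = U.conj().T @ X2 @ Vh.conj().T / S        # operator projected onto POD basis
    eigs, W = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1 / S) @ W       # DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]  # amplitudes fitting x_0
    return Phi, eigs, b

def forecast(Phi, eigs, b, k):
    """State after k steps: Phi diag(eigs**k) b."""
    return (Phi * eigs**k) @ b

# synthetic data with exactly 3 linear modes (decay rate 0.9, frequency 0.5)
t = np.arange(20)
X = np.array([0.9**t * np.cos(0.5 * t), 0.9**t * np.sin(0.5 * t), 0.9**t])
Phi, eigs, b = dmd(X, r=3)
pred = forecast(Phi, eigs, b, 19).real                # forecast the last snapshot
```

Because the synthetic dynamics are exactly linear of rank 3, DMD recovers them to machine precision; for nonlinear bioheat fields the forecast is approximate, which is why the paper validates against the full solver.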
Finite-Element Methods for Real-Time Simulation of Surgery
Basdogan, Cagatay
2003-01-01
Two finite-element methods have been developed for mathematical modeling of the time-dependent behaviors of deformable objects and, more specifically, the mechanical responses of soft tissues and organs in contact with surgical tools. These methods may afford the computational efficiency needed to satisfy the requirement to obtain computational results in real time for simulating surgical procedures as described in Simulation System for Training in Laparoscopic Surgery (NPO-21192) on page 31 in this issue of NASA Tech Briefs. Simulation of the behavior of soft tissue in real time is a challenging problem because of the complexity of soft-tissue mechanics. The responses of soft tissues are characterized by nonlinearities and by spatial inhomogeneities and rate and time dependences of material properties. Finite-element methods seem promising for integrating these characteristics of tissues into computational models of organs, but they demand much central-processing-unit (CPU) time and memory, and the demand increases with the number of nodes and degrees of freedom in a given finite-element model. Hence, as finite-element models become more realistic, it becomes more difficult to compute solutions in real time. In both of the present methods, one uses approximate mathematical models, trading some accuracy for computational efficiency and thereby increasing the feasibility of attaining real-time update rates. The first of these methods is based on modal analysis. In this method, one reduces the number of differential equations by selecting only the most significant vibration modes of an object (typically, a suitable number of the lowest-frequency modes) for computing deformations of the object in response to applied forces.
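The modal-reduction step described above amounts to solving the eigenproblem of the stiffness/mass pair, keeping the lowest-frequency modes, and projecting the equations of motion onto them. The sketch below does this for a chain-of-springs stiffness matrix standing in for a soft-tissue FE model (with a lumped identity mass matrix, an assumption made for simplicity).

```python
import numpy as np

# Stiffness matrix of a fixed-ended chain of unit springs (toy FE model)
n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# With M = I, the generalized eigenproblem K v = w^2 M v reduces to eigh(K);
# eigenvalues come back sorted, so the first columns are the lowest modes.
w2, V = np.linalg.eigh(K)

r = 4                      # keep only the r most significant (lowest) modes
Vr = V[:, :r]              # reduced modal basis
Kr = Vr.T @ K @ Vr         # projected (r x r) stiffness: diagonal, entries w2[:r]
```

Integrating the r reduced equations instead of the n full ones is what makes real-time update rates attainable; accuracy degrades gracefully as high-frequency modes, which contribute little to visible deformation, are discarded.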
Zhou, Yuhong; de By, Rolf; Augustijn, Ellen-Wien
2006-10-01
Geographic Information Science (GIS) has provided the methodological and technical supports for modeling and simulation in the geographical domain. However, research methods on building complex simulations in which agents behave and interact in discrete time and space are lacking. The existing simulation systems/software are application-oriented and do not provide a theoretical (conceptual) view. The simulation theories and methods that exist do not incorporate spatial issues, which are the key to linking GIS with simulation theory and practice. This paper introduces a method for developing a conceptual theoretical framework for a spatial simulation system which can potentially be integrated with GIS. Firstly, based on classical discrete event simulation and fresh agent technology, a simulation theory is proposed, which is represented by a conceptual simulation model using UML-based visual syntax. In this theoretical framework, spatial issues including spatial setting, spatial constraints, spatial effects and spatial awareness are emphasized. Next, a testing scenario in the microscopic traffic simulation domain is set up to examine the feasibility of the simulation philosophy. Finally, the method is evaluated from the aspects of feasibility, uncertainty and applicability.
Mino, Yasushi; Shinto, Hiroyuki; Sakai, Shohei; Matsuyama, Hideto
2017-04-01
A computational method for the simulation of particulate flows that can efficiently treat the particle-fluid boundary in systems containing many particles was developed based on the smoothed-profile lattice Boltzmann method (SPLBM). In our proposed method, which we call the improved SPLBM (iSPLBM), for an accurate and stable simulation of particulate flows, the hydrodynamic force on a moving solid particle is exactly formulated with consideration of the effect of internal fluid mass. To validate the accuracy and stability of iSPLBM, we conducted numerical simulations of several particulate flow systems and compared our results with those of other simulations and some experiments. In addition, we performed simulations on flotation of many lightweight particles with a wide range of particle size distribution, the results of which demonstrated the effectiveness of iSPLBM. Our proposed model is a promising method to accurately and stably simulate extensive particulate flows.
Anupindi, Kameswararao; Delorme, Yann; Shetty, Dinesh A.; Frankel, Steven H.
2013-12-01
Computational fluid dynamics (CFD) simulations are becoming a reliable tool to understand hemodynamics and disease progression in pathological blood vessels, and to predict medical device performance. The immersed boundary method (IBM) emerged as an attractive methodology because of its ability to efficiently handle complex moving and rotating geometries on structured grids. However, its application to the study of blood flow in complex, branching, patient-specific anatomies is scarce. This is because grid nodes in the exterior of the fluid domain dominate over the useful grid nodes in the interior, rendering an inevitable memory and computational overhead. In order to alleviate this problem, we propose a novel multiblock-based IBM that preserves the simplicity and effectiveness of the IBM on structured Cartesian meshes and enables handling of complex, anatomical geometries at a reduced memory overhead by minimizing the grid nodes in the exterior of the fluid domain. As pathological and medical device hemodynamics often involve complex, unsteady transitional or turbulent flow fields, a scale-resolving turbulence model such as large eddy simulation (LES) is used in the present work. The proposed solver (hereafter referred to as WenoHemo) is developed by enhancing an existing in-house high-order incompressible flow solver that was previously validated for its numerics and several LES models by Shetty et al. (2010) [33]. In the present work, WenoHemo is systematically validated for the additional numerics introduced, such as the IBM and the multiblock approach, by simulating laminar flow over a sphere and laminar flow over a backward-facing step, respectively. Then, we validate the entire solver methodology by simulating laminar and transitional flow in an abdominal aortic aneurysm (AAA). Finally, we perform blood flow simulations in the challenging clinically relevant thoracic aortic aneurysm (TAA), to gain insights into the type of fluid flow patterns that exist in pathological
Long-time atomistic simulations with the Parallel Replica Dynamics method
Perez, Danny
Molecular Dynamics (MD) -- the numerical integration of atomistic equations of motion -- is a workhorse of computational materials science. Indeed, MD can in principle be used to obtain any thermodynamic or kinetic quantity, without introducing any approximation or assumptions beyond the adequacy of the interaction potential. It is therefore an extremely powerful and flexible tool to study materials with atomistic spatio-temporal resolution. These enviable qualities however come at a steep computational price, hence limiting the system sizes and simulation times that can be achieved in practice. While the size limitation can be efficiently addressed with massively parallel implementations of MD based on spatial decomposition strategies, allowing for the simulation of trillions of atoms, the same approach usually cannot extend the timescales much beyond microseconds. In this article, we discuss an alternative parallel-in-time approach, the Parallel Replica Dynamics (ParRep) method, that aims at addressing the timescale limitation of MD for systems that evolve through rare state-to-state transitions. We review the formal underpinnings of the method and demonstrate that it can provide arbitrarily accurate results for any definition of the states. When an adequate definition of the states is available, ParRep can simulate trajectories with a parallel speedup approaching the number of replicas used. We demonstrate the usefulness of ParRep by presenting different examples of materials simulations where access to long timescales was essential to access the physical regime of interest and discuss practical considerations that must be addressed to carry out these simulations. Work supported by the United States Department of Energy (U.S. DOE), Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division.
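The time-parallel bookkeeping behind ParRep can be illustrated with a toy rare-event model in which escape times are exponentially distributed: run N independent replicas, take the first escape, and credit the sum of all replica clocks (N times the first-escape time) as simulated time. This is a stripped-down illustration of the statistical idea, under the stated exponential assumption, not an MD implementation.

```python
import math
import random

def escape_time(rng, rate):
    """Draw one exponential escape time with the given rate."""
    return -math.log(rng.random()) / rate

def parrep_escape(rng, rate, n_replicas):
    """ParRep-style accounting: first escape among n replicas, with the
    accumulated simulated time n * t_first credited to the trajectory."""
    t_first = min(escape_time(rng, rate) for _ in range(n_replicas))
    return n_replicas * t_first

rng = random.Random(42)
rate, n = 0.01, 8
samples = [parrep_escape(rng, rate, n) for _ in range(4000)]
mean = sum(samples) / len(samples)   # should approach 1/rate = 100
```

The statistics work out because the minimum of N i.i.d. exponentials with rate λ is exponential with rate Nλ, so N times that minimum is again exponential with rate λ: the parallel run reproduces the correct escape-time distribution while each replica only integrates a fraction of the time, which is the source of the near-N speedup quoted in the abstract.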
An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems
Kuwahara, Hiroyuki
2011-01-01
Gene therapy has great potential to become an effective treatment for a wide variety of diseases. One of the main challenges in making gene therapy practical in clinical settings is the development of efficient and safe mechanisms for delivering foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelization of the SSA remains an open problem in the general setting, the proposed parallel simulation method substantially accelerates both the next-reaction selection scheme and the reaction update scheme of the SSA by exploiting and decomposing the structure of stochastic gene delivery models. This makes computationally intensive analyses, such as parameter optimization and gene dosage control for specific cell types, gene vectors, and transgene expression stability, substantially more practical than they would otherwise be with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method increases the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.
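The serial SSA that the paper parallelizes can be sketched as follows. The two-step first-order chain below (endosomal escape followed by nuclear import) is a hypothetical stand-in for the multi-step Varga-style delivery model, with made-up rate constants; it shows only the standard Gillespie direct method, not the authors' parallel decomposition.

```python
import random

def gillespie(x0, rates, stoich, t_end, rng):
    """Gillespie direct-method SSA for first-order mass-action reactions.

    x0     : initial copy numbers (list of ints)
    rates  : rate constant c_j per reaction
    stoich : (reactant_index, state_change_vector) per reaction
    """
    x = list(x0)
    t = 0.0
    while True:
        # propensities for first-order reactions: a_j = c_j * x[reactant]
        a = [c * x[i] for c, (i, _) in zip(rates, stoich)]
        a0 = sum(a)
        if a0 == 0.0:
            break
        t += rng.expovariate(a0)  # exponential waiting time to next reaction
        if t >= t_end:
            break
        # select the next reaction with probability proportional to a_j
        r, acc = rng.random() * a0, 0.0
        for a_j, (_, dx) in zip(a, stoich):
            acc += a_j
            if r < acc:
                x = [xi + d for xi, d in zip(x, dx)]
                break
    return x

# Hypothetical chain: endosomal DNA -> cytosolic DNA -> nuclear DNA
rng = random.Random(0)
x0 = [100, 0, 0]
stoich = [(0, [-1, 1, 0]),   # endosomal escape
          (1, [0, -1, 1])]   # nuclear import
final = gillespie(x0, rates=[0.5, 0.2], stoich=stoich, t_end=100.0, rng=rng)
```

Because each step here consumes one molecule of a single reactant, an update touches only two species, which is the kind of sparse, chain-like structure the paper's decomposition exploits to parallelize reaction selection and update.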
Snyder, Christopher W; Vandromme, Marianne J; Tyra, Sharon L; Porterfield, John R; Clements, Ronald H; Hawn, Mary T
2011-02-01
Virtual reality (VR) simulators and Web-based instructional videos are valuable supplemental training resources in surgical programs, but how best to integrate them into minimally invasive surgical training remains unclear. Medical students were randomized to proficiency-based training on VR laparoscopy and endoscopy simulators by one of two methods: proctored training (automated simulator feedback plus human expert feedback) or independent training (simulator feedback alone). After achieving simulator proficiency, trainees performed a series of laparoscopic and endoscopic tasks in a live porcine model. Prior to entering the animal lab, all trainees watched an instructional video of the procedure and were randomly assigned to either observe or not observe the actual procedure before performing it themselves. The joint effects of VR training method and procedure observation on time to successful task completion were evaluated with Cox regression models. Thirty-two students (16 proctored, 16 independent) completed VR training. Cox regression modeling with adjustment for relevant covariates demonstrated no significant difference in the likelihood of successful task completion for independent versus proctored training [Hazard Ratio (HR) 1.28; 95% Confidence Interval (CI) 0.96-1.72; p=0.09]. Trainees who observed the actual procedure were more likely to be successful than those who watched the instructional video alone (HR 1.47; 95% CI 1.09-1.98; p=0.01). Proctored VR training is no more effective than independent training with respect to surgical performance; therefore, time-consuming human expert feedback during VR training may be unnecessary. Instructional videos, while useful, may not be adequate substitutes for actual observation when trainees are learning minimally invasive surgical procedures.