Math-Based Simulation Tools and Methods
National Research Council Canada - National Science Library
Arepally, Sudhakar
2007-01-01
.... The following methods are reviewed: matrix operations, ordinary and partial differential system of equations, Lagrangian operations, Fourier transforms, Taylor Series, Finite Difference Methods, implicit and explicit finite element...
...: HMMWV 30-mph Rollover Test, Soldier Gear Effects, Occupant Performance in Blast Effects, Anthropomorphic Test Device, Human Models, Rigid Body Modeling, Finite Element Methods, Injury Criteria...
The afforestation problem: a heuristic method based on simulated annealing
DEFF Research Database (Denmark)
Vidal, Rene Victor Valqui
1992-01-01
This paper presents the afforestation problem, that is, the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented....
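The simulated-annealing core of such a heuristic can be sketched generically. The tiny site-selection objective below is invented for illustration and is not the paper's afforestation model:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=3000, seed=1):
    """Generic SA loop: always accept improving moves, accept worsening
    moves with Boltzmann probability exp(-delta/T), cool geometrically."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy stand-in for a compartment-location problem: pick 3 of 10 candidate
# sites so that the selected sites are spread out (cost penalizes closeness).
SITES = list(range(10))

def cost(sel):
    return sum(1.0 / (1 + abs(a - b)) for i, a in enumerate(sel) for b in sel[i + 1:])

def neighbor(sel, rng):
    out = list(sel)
    out[rng.randrange(len(out))] = rng.choice([s for s in SITES if s not in out])
    return out

best, fbest = simulated_annealing(cost, neighbor, [0, 1, 2])
```

The two-step structure of the paper (location, then design) would wrap such a loop; only the annealing kernel is shown here.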
A particle-based method for granular flow simulation
Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua
2012-01-01
We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformation can be handled easily and naturally. In addition, a signed distance field is employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.
High viscosity fluid simulation using particle-based method
Chang, Yuanzhang
2011-03-01
We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods, which require complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient for handling the movements of highly viscous flows, and a large variety of fluid behaviors can be simulated by adjusting just one parameter. © 2011 IEEE.
Study of Flapping Flight Using Discrete Vortex Method Based Simulations
Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.
2013-12-01
In recent times, research in the area of flapping flight has attracted renewed interest with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For sustained, high-endurance flight with a larger payload-carrying capacity, a simple and efficient flapping kinematics needs to be identified. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations to study flapping flight. Our results highlight that a simple flapping kinematics with a down-stroke period (tD) shorter than the upstroke period (tU) produces a sustained lift. We have identified the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
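At the heart of any DVM code is the velocity induced by the shed point vortices. A minimal regularized 2D Biot-Savart kernel (illustrative only, not the authors' flapping-wing solver) can be written as:

```python
import numpy as np

def induced_velocity(targets, vortices, gamma, eps=1e-2):
    """Velocity induced at `targets` by 2D point vortices of strengths
    `gamma` (Biot-Savart kernel, regularized with core radius `eps`)."""
    dx = targets[:, None, 0] - vortices[None, :, 0]
    dy = targets[:, None, 1] - vortices[None, :, 1]
    r2 = dx**2 + dy**2 + eps**2
    u = (-gamma[None, :] * dy / (2.0 * np.pi * r2)).sum(axis=1)
    v = ( gamma[None, :] * dx / (2.0 * np.pi * r2)).sum(axis=1)
    return np.stack([u, v], axis=1)

# One counterclockwise vortex at the origin: flow at (1, 0) points in +y.
vel = induced_velocity(np.array([[1.0, 0.0]]),
                       np.array([[0.0, 0.0]]),
                       np.array([1.0]))
```

A flapping-wing simulation would advect wake vortices with this induced field each time step and shed new circulation from the wing edges.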
Discrete simulation system based on artificial intelligence methods
Energy Technology Data Exchange (ETDEWEB)
Futo, I; Szeredi, J
1982-01-01
A discrete event simulation system based on the AI language Prolog is presented. The system, called t-Prolog, extends the traditional possibilities of simulation languages toward automatic problem solving by backtracking in time and automatically modifying the model depending on logical deductions. As t-Prolog is an interactive tool, the user can interrupt the simulation run to modify the model or force it to return to a previous state in order to try possible alternatives. It admits the construction of goal-oriented or goal-seeking models with variable structure. Models are defined in a restricted version of the first-order predicate calculus using Horn clauses. 21 references.
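The "backtrack in time" idea can be illustrated outside Prolog with a toy discrete-event simulator that snapshots its state before each event. This is a loose Python analogy, not t-Prolog's actual deduction-driven semantics:

```python
import copy
import heapq

class RewindableSim:
    """Minimal discrete-event simulator that snapshots (time, state, pending
    events) before each event, so a run can be rewound to an earlier point
    and continued with changes."""

    def __init__(self, state):
        self.t = 0.0
        self.state = state
        self.events = []      # heap of (time, seq, action)
        self.history = []     # snapshots taken before each processed event
        self._seq = 0

    def schedule(self, time, action):
        heapq.heappush(self.events, (time, self._seq, action))
        self._seq += 1

    def step(self):
        # snapshot before advancing, so this event can be undone later
        self.history.append((self.t, copy.deepcopy(self.state), list(self.events)))
        self.t, _, action = heapq.heappop(self.events)
        action(self.state)

    def run(self):
        while self.events:
            self.step()

    def rewind(self, n_events):
        """Undo the last n_events processed events."""
        for _ in range(n_events):
            self.t, self.state, self.events = self.history.pop()

sim = RewindableSim({"count": 0})
for when in (1.0, 2.0, 3.0):
    sim.schedule(when, lambda s: s.update(count=s["count"] + 1))
sim.run()          # three events processed
sim.rewind(2)      # state restored to just after the t == 1.0 event
```

After the rewind, the user could modify the model (reschedule different events) before running forward again, mimicking the "try possible alternatives" workflow.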
A simulation based engineering method to support HAZOP studies
DEFF Research Database (Denmark)
Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge
2012-01-01
... the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of each failure scenario is then evaluated using dynamic simulations (in this study, the K-Spice® software is used). The consequences of each failure...
Limitations in simulator time-based human reliability analysis methods
International Nuclear Information System (INIS)
Wreathall, J.
1989-01-01
Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical.
A calculation method for RF couplers design based on numerical simulation by microwave studio
International Nuclear Information System (INIS)
Wang Rong; Pei Yuanji; Jin Kai
2006-01-01
A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method, and the simulation data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)
Mock ECHO: A Simulation-Based Medical Education Method.
Fowler, Rebecca C; Katzman, Joanna G; Comerci, George D; Shelley, Brian M; Duhigg, Daniel; Olivas, Cynthia; Arnold, Thomas; Kalishman, Summers; Monnette, Rebecca; Arora, Sanjeev
2018-04-16
This study was designed to develop a deeper understanding of the learning and social processes that take place during the simulation-based medical education for practicing providers as part of the Project ECHO® model, known as Mock ECHO training. The ECHO model is utilized to expand access to care of common and complex diseases by supporting the education of primary care providers with an interprofessional team of specialists via videoconferencing networks. Mock ECHO trainings are conducted through a train the trainer model targeted at leaders replicating the ECHO model at their organizations. Trainers conduct simulated teleECHO clinics while participants gain skills to improve communication and self-efficacy. Three focus groups, conducted between May 2015 and January 2016 with a total of 26 participants, were deductively analyzed to identify common themes related to simulation-based medical education and interdisciplinary education. Principal themes generated from the analysis included (a) the role of empathy in community development, (b) the value of training tools as guides for learning, (c) Mock ECHO design components to optimize learning, (d) the role of interdisciplinary education to build community and improve care delivery, (e) improving care integration through collaboration, and (f) development of soft skills to facilitate learning. Mock ECHO trainings offer clinicians the freedom to learn in a noncritical environment while emphasizing real-time multidirectional feedback and encouraging knowledge and skill transfer. The success of the ECHO model depends on training interprofessional healthcare providers in behaviors needed to lead a teleECHO clinic and to collaborate in the educational process. While building a community of practice, Mock ECHO provides a safe opportunity for a diverse group of clinician experts to practice learned skills and receive feedback from coparticipants and facilitators.
An experiment teaching method based on the Optisystem simulation platform
Zhu, Jihua; Xiao, Xuanlu; Luo, Yuan
2017-08-01
Experiment-based teaching of optical communication systems is difficult to implement because of expensive equipment. Optisystem, an optical communication system design software package, can provide such a simulation platform. Based on the characteristics of OptiSystem, an approach to experiment teaching is put forward in this paper. It includes three gradual levels: the basics, the deeper looks, and the practices. Firstly, the basics give a brief overview of the technology; then the deeper looks include demos and example analyses; lastly, the practices are carried out through team seminars and comments. A variety of teaching forms are implemented in class. Experience shows that this method can not only compensate for the lack of laboratory equipment but also motivate the students' interest in learning and improve their practical abilities, cooperation abilities, and creative spirit. On the whole, it greatly improves the teaching effect.
The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.
Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin
2016-09-10
A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method contributes to a low-cost, convenient, and safe way of recharging implantable biosensors.
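A stripped-down version of such a Monte Carlo tally over skin layers might look like the following. The layer parameters are invented for illustration, and real tissue-optics codes also sample scattering and refraction, which are omitted here:

```python
import math
import random

def mc_layer_absorption(mu_a, thickness, n_photons=20000, seed=0):
    """1D toy Monte Carlo: photons enter layered tissue at normal incidence,
    free paths drawn from each layer's absorption coefficient mu_a (1/mm);
    tally the layer in which each photon is absorbed."""
    rng = random.Random(seed)
    bounds = [0.0]
    for d in thickness:
        bounds.append(bounds[-1] + d)
    absorbed = [0] * len(thickness)
    transmitted = 0
    for _ in range(n_photons):
        z, layer = 0.0, 0
        while layer < len(thickness):
            step = -math.log(rng.random()) / mu_a[layer]  # exponential free path
            if z + step < bounds[layer + 1]:
                absorbed[layer] += 1
                break
            z = bounds[layer + 1]   # cross into the next layer
            layer += 1
        else:
            transmitted += 1
    return absorbed, transmitted

# Two layers: a strongly absorbing 0.1 mm "epidermis" over a 1 mm "dermis".
absorbed, transmitted = mc_layer_absorption([10.0, 1.0], [0.1, 1.0])
```

With these numbers the fraction absorbed in the first layer should approach 1 - exp(-mu_a * d) = 1 - e^-1, which gives a quick sanity check on the sampler.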
Hybrid method based on embedded coupled simulation of vortex particles in grid based solution
Kornev, Nikolai
2017-09-01
The paper presents a novel hybrid approach developed to improve the resolution of concentrated vortices in computational fluid mechanics. The method is based on a combination of a grid-based method and the grid-free computational vortex method (CVM). The large-scale flow structures are simulated on the grid, whereas the concentrated structures are modeled using CVM. Due to this combination, the advantages of both methods are strengthened and the disadvantages diminished. The procedure for separating small concentrated vortices from the large-scale ones is based on the LES filtering idea. The flow dynamics is governed by two coupled transport equations taking the two-way interaction between large and fine structures into account. The fine structures are mapped back to the grid if their size grows due to diffusion. Algorithmic aspects of the hybrid method are discussed. Advantages of the new approach are illustrated on some simple two-dimensional canonical flows containing concentrated vortices.
Sakamoto, Shinichi; Otsuru, Toru
2014-01-01
This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.
Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C
2017-10-01
Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the Accreditation Council for Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3...
To improve training methods in an engine room simulator-based training
Lin, Chingshin
2016-01-01
Simulator-based training is widely used in both industry and school education to reduce accidents. This study aims to suggest improved training methods to increase the effectiveness of engine room simulator training. The effectiveness of engine room training is measured by performance indicators and self-evaluation by participants. In the first phase of observation, the aim is to find out the possible shortcomings of current training methods based on train...
Hybrid statistics-simulations based method for atom-counting from ADF STEM images
Energy Technology Data Exchange (ETDEWEB)
De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)
2017-06-15
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials.
Highlights:
• A hybrid method for atom-counting from ADF STEM images is introduced.
• Image simulations are incorporated into a statistical framework in a reliable manner.
• Limits of the existing methods for atom-counting are far exceeded.
• Reliable counting results from an experimental low dose image are obtained.
• Progress towards reliable quantitative analysis of beam-sensitive materials is made.
Biasing transition rate method based on direct MC simulation for probabilistic safety assessment
Institute of Scientific and Technical Information of China (English)
Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang
2017-01-01
Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method for accounting for the dynamics of the system. But it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve the problem. This method biases the transition rates of the components by adding virtual components to them in series to increase the occurrence probability of the rare event, hence decreasing the variance of the MC estimator. Several cases are used to benchmark this method. The results show that the method is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation. The performance is greatly improved by the biasing transition rate method.
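The idea of biasing a transition rate and reweighting can be shown on a single exponentially failing component, where the exact answer is known. This is a toy one-component analogue, not the paper's virtual-component scheme:

```python
import math
import random

def failure_prob_biased(lam, t_mission, lam_bias, n=50000, seed=0):
    """Importance-sampled estimate of P(failure before t_mission) for an
    exponential failure rate `lam`: sample failure times from a biased,
    larger rate `lam_bias` and reweight each hit by the likelihood ratio
    of the true to the biased exponential density."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = -math.log(rng.random()) / lam_bias        # biased failure time
        if t < t_mission:
            total += (lam / lam_bias) * math.exp((lam_bias - lam) * t)
    return total / n

# Rare event: lam = 1e-4 per hour over a 1-hour mission; bias rate to 1.0
# so that most samples actually hit the event instead of almost none.
p_hat = failure_prob_biased(lam=1e-4, t_mission=1.0, lam_bias=1.0)
exact = 1.0 - math.exp(-1e-4)
```

Unbiased direct MC would need on the order of millions of histories to see this event at all; the biased estimator hits it in roughly 63% of samples and corrects with the weights.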
3D simulation of friction stir welding based on movable cellular automaton method
Eremina, Galina M.
2017-12-01
The paper is devoted to a 3D computer simulation of the peculiarities of material flow taking place in friction stir welding (FSW). The simulation was performed by the movable cellular automaton (MCA) method, a representative of particle methods in mechanics. Commonly, the flow of material in FSW is simulated based on computational fluid mechanics, treating the material as a continuum and ignoring its structure. The MCA method considers a material as an ensemble of bonded particles. The rupture of interparticle bonds and the formation of new bonds enable simulation of crack nucleation and healing as well as mass mixing and microwelding. The simulation results showed that using pins of simple shape (cylinder, cone, and pyramid) without a shoulder results in small displacements of plasticized material in the workpiece thickness direction. Nevertheless, the optimal ratio of longitudinal velocity to rotational speed makes it possible to transport the welded material around the pin several times and to produce a joint of good quality.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
Estimation of functional failure probability of passive systems based on subset simulation method
International Nuclear Information System (INIS)
Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing
2012-01-01
In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
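A compact sketch of subset simulation on a standard-normal tail probability, where the exact answer is known, illustrates the product-of-conditional-probabilities idea. This is an illustrative stand-in in the spirit of Au and Beck's algorithm, not the AP1000 model:

```python
import math
import random

def phi_tail_subset(b, p0=0.1, n=2000, seed=1):
    """Estimate P[X >= b] for X ~ N(0,1) by subset simulation: the rare
    event {X >= b} is reached through nested intermediate levels, each of
    conditional probability p0, with conditional samples generated by
    level-constrained Metropolis moves."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for _ in range(30):                          # cap on number of levels
        xs.sort(reverse=True)
        n_keep = int(p0 * n)
        level = xs[n_keep - 1]                   # intermediate threshold
        if level >= b:                           # final level reached
            return prob * sum(1 for x in xs if x >= b) / n
        prob *= p0
        seeds = xs[:n_keep]
        xs = []
        for i in range(n):
            x = seeds[i % n_keep]
            for _ in range(5):                   # short Metropolis chain
                y = x + rng.gauss(0.0, 1.0)
                accept = math.exp(min(0.0, (x * x - y * y) / 2.0))
                if y >= level and rng.random() < accept:
                    x = y
            xs.append(x)
    return prob

p_hat = phi_tail_subset(3.5)   # exact tail: 1 - Phi(3.5), about 2.3e-4
```

Each level multiplies the estimate by roughly p0 = 0.1, so a probability near 1e-4 is reached with only a few thousand samples per level instead of millions of raw Monte Carlo trials.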
Sultan, A. Z.; Hamzah, N.; Rusdi, M.
2018-01-01
The concept attainment method based on simulation was implemented to increase students' interest in the Engineering Mechanics course in the second semester of academic year 2016/2017 in the Manufacturing Engineering Program, Department of Mechanical Engineering, PNUP. The results show an increase in the students' interest in the lecture material, which is summarized in interactive simulation CDs and in teaching materials in the form of printed and electronic books. After implementation of this simulation-based concept attainment method, student participation in presentations and discussions, as well as submission of individual assignments, increased significantly. With this learning method, average student participation reached 89%, compared with an average of only 76% before. Under the previous learning method, fewer than 5% of students achieved an A grade on exams and more than 8% received a D grade; after implementation of the new method, more than 30% achieved an A grade and fewer than 1% a D grade.
A novel energy conversion based method for velocity correction in molecular dynamics simulations
Energy Technology Data Exchange (ETDEWEB)
Jin, Hanhui [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Collaborative Innovation Center of Advanced Aero-Engine, Hangzhou 310027 (China); Liu, Ningning [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Ku, Xiaoke, E-mail: xiaokeku@zju.edu.cn [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Fan, Jianren [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)
2017-05-01
Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike traditional systematic corrections based on macroscopic parameters, the ECBC method is developed strictly based on the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied directly to both EMD and NEMD. With this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. Many limits on the use of MD are thereby lifted, and the application scope of MD is greatly extended.
A novel energy conversion based method for velocity correction in molecular dynamics simulations
International Nuclear Information System (INIS)
Jin, Hanhui; Liu, Ningning; Ku, Xiaoke; Fan, Jianren
2017-01-01
Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike traditional systematic corrections based on macroscopic parameters, the ECBC method is developed strictly based on the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied directly to both EMD and NEMD. With this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. Many limits on the use of MD are thereby lifted, and the application scope of MD is greatly extended.
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Spietz, Henrik J.; Walther, Jens Honore
2014-01-01
... unbounded particle-mesh based vortex method is used to simulate the instability, transition to turbulence, and eventual destruction of a single vortex ring. From the simulation data, a novel method for analyzing the dynamics of the enstrophy is presented, based on the alignment of the vorticity vector with the principal axis of the strain rate tensor. We find that the dynamics of the enstrophy density is dominated by the local flow deformation and axis of rotation, which is used to infer some concrete tendencies related to the topology of the vorticity field.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Directory of Open Access Journals (Sweden)
Hong Yan
2018-03-01
This research presents a novel method that uses 3D simulation to design customized garments for physically disabled people with scoliosis (PDPS). The proposed method is based on a virtual human model created from 3D scanning, permitting simulation of the consumer's morphological shape with its atypical physical deformations. Customized 2D and 3D virtual garment prototyping tools are then used to create products interactively. The proposed 3D garment design method follows the concept of knowledge-based design, reusing the design knowledge and process already applied successfully to normal body shapes. The characteristics of the PDPS and the relationship between human body and garment are considered in the prototyping process. As a visualized collaborative design process, communication between designer and consumer is ensured, permitting the finished product to be adapted to disabled people afflicted with severe scoliosis.
Methods for simulation-based analysis of fluid-structure interaction.
Energy Technology Data Exchange (ETDEWEB)
Barone, Matthew Franklin; Payne, Jeffrey L.
2005-10-01
Methods for analysis of fluid-structure interaction using high fidelity simulations are critically reviewed. First, a literature review of modern numerical techniques for simulation of aeroelastic phenomena is presented. The review focuses on methods contained within the arbitrary Lagrangian-Eulerian (ALE) framework for coupling computational fluid dynamics codes to computational structural mechanics codes. The review treats mesh movement algorithms, the role of the geometric conservation law, time advancement schemes, wetted surface interface strategies, and some representative applications. The complexity and computational expense of coupled Navier-Stokes/structural dynamics simulations point to the need for reduced order modeling to facilitate parametric analysis. The proper orthogonal decomposition (POD)/Galerkin projection approach for building a reduced order model (ROM) is presented, along with ideas for extension of the methodology to allow construction of ROMs based on data generated from ALE simulations.
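The POD step can be sketched with a plain SVD of a snapshot matrix. The snapshot data below are a synthetic rank-2 field, invented for illustration, not output of an ALE simulation:

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """POD via SVD of the snapshot matrix (columns are flow states):
    returns the orthonormal modes needed to capture `energy` of the
    total variance, plus the full singular-value spectrum."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r], s

# Toy snapshot matrix: exactly two spatial patterns modulated in time.
x = np.linspace(0.0, 1.0, 100)
t = np.linspace(0.0, 1.0, 40)
S = (np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * t))
     + 0.5 * np.outer(np.cos(2 * np.pi * x), np.sin(2 * np.pi * t)))
modes, s = pod_basis(S)
```

A Galerkin ROM would then project the governing equations onto these modes, reducing thousands of grid unknowns to a handful of modal coordinates.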
Work in process level definition: a method based on computer simulation and electre tri
Directory of Open Access Journals (Sweden)
Isaac Pergher
2014-09-01
Full Text Available This paper proposes a method for defining the levels of work in process (WIP) in productive environments managed by constant work in process (CONWIP) policies. The proposed method combines Computer Simulation and Electre TRI to support estimation of an adequate WIP level and is presented in eighteen steps. The paper also presents an application example, carried out at a metalworking company. The research method is based on computer simulation, supported by quantitative data analysis. The main contribution of the paper is a structured way to define inventories according to demand. With this method, the authors hope to contribute to the establishment of better capacity plans in production environments.
Simulation Research on Vehicle Active Suspension Controller Based on G1 Method
Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui
2017-09-01
Based on the order relation analysis method (G1 method), an optimal linear controller for a vehicle active suspension is designed. First, the active and passive suspension system of a single-wheel vehicle is modeled and the system input signal model is determined. Secondly, the state space equation of the system motion is established from the dynamics, and the optimal linear controller design is completed with optimal control theory. The weighting coefficients of the performance index of the suspension are determined by the G1 order relation analysis method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the order relation analysis method under the given road conditions, the vehicle body acceleration, suspension stroke and tire displacement are optimized, the comprehensive performance of the vehicle is improved, and the active control is kept within the requirements.
Landsgesell, Jonas; Holm, Christian; Smiatek, Jens
2017-02-14
We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
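The Wang-Landau flat-histogram rule at the core of this combination is compact enough to sketch. The following Python toy is not the authors' reaction-ensemble code; the "system" here is just the total of two dice, and all parameters are chosen for illustration. It estimates a density of states by penalizing every visited energy level until the visit histogram is flat:

```python
import math
import random

def wang_landau_dos(n_moves=10000, ln_f_final=1e-3, flatness=0.7, seed=1):
    """Flat-histogram Wang-Landau estimate of the density of states g(E)
    for the toy system 'total E of two six-sided dice' (E = 2..12)."""
    rng = random.Random(seed)
    energies = list(range(2, 13))        # possible totals of two dice
    ln_g = {e: 0.0 for e in energies}    # running estimate of ln g(E)
    state = [1, 1]
    ln_f = 1.0                           # modification factor ln f
    while ln_f > ln_f_final:
        hist = {e: 0 for e in energies}
        for _ in range(n_moves):
            i = rng.randrange(2)
            proposal = state[:]
            proposal[i] = rng.randint(1, 6)
            e_old, e_new = sum(state), sum(proposal)
            # accept with min(1, g(E_old)/g(E_new)): rare energies win
            if rng.random() < math.exp(min(0.0, ln_g[e_old] - ln_g[e_new])):
                state = proposal
            e = sum(state)
            ln_g[e] += ln_f              # penalize the visited level
            hist[e] += 1
        # refine ln f once every energy level is visited roughly equally
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            ln_f /= 2.0
    return ln_g

ln_g = wang_landau_dos()
ratio = math.exp(ln_g[7] - ln_g[2])  # exact ratio for two dice is 6/1
```

With the modification factor halved whenever the histogram is flat, ln g(E) converges toward the true density of states up to an additive constant, so the estimated ratio g(7)/g(2) approaches the exact value 6.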
A general parallelization strategy for random path based geostatistical simulation methods
Mariethoz, Grégoire
2010-07-01
The size of simulation grids used for numerical models has increased by many orders of magnitude in the past years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy that distributes grid nodes among all available processors while minimizing communication and latency times. It consists in centralizing the simulation on a master processor that calls the other, slave processors as if they were functions simulating one node at a time. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows a conflict management system ensuring that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and versatile enough to be applicable to all random path based simulation methods.
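The decoupled send/receive pattern described above can be sketched in a few lines. In the following Python sketch, threads and a 1-D grid stand in for the cluster processors and a real simulation grid, and `simulate_node` is a placeholder rather than a kriging routine; only the master's dispatch/collect logic follows the strategy in the abstract:

```python
import queue
import threading

def simulate_node(node, neighbor_values):
    # stand-in for the geostatistical draw at one node: a real slave
    # would krige from the neighborhood and sample the local law
    if not neighbor_values:
        return 0.5
    return sum(neighbor_values) / len(neighbor_values)

def master(grid_size=100, n_workers=4, radius=2):
    """Master/slave sketch: slaves are called as if they were functions
    simulating one node at a time; sends and receives are decoupled, and
    no two in-flight nodes are ever within `radius` of each other."""
    tasks, results = queue.Queue(), queue.Queue()
    values, in_flight = {}, set()

    def worker():
        while True:
            item = tasks.get()
            if item is None:
                break
            node, neigh = item
            results.put((node, simulate_node(node, neigh)))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()

    pending = list(range(grid_size))   # a random path would shuffle this
    while len(values) < grid_size:
        # send phase: dispatch every pending node with no in-flight conflict
        i = 0
        while i < len(pending):
            node = pending[i]
            if all(abs(node - busy) > radius for busy in in_flight):
                neigh = [values[j] for j in range(node - radius, node + radius + 1)
                         if j in values]
                in_flight.add(node)
                tasks.put((node, neigh))
                pending.pop(i)
            else:
                i += 1
        # receive phase: collect one finished node, then loop to send again
        node, val = results.get()
        in_flight.discard(node)
        values[node] = val

    for _ in threads:
        tasks.put(None)
    for th in threads:
        th.join()
    return values

values = master()
```

The conflict test in the send phase is the sketch's version of the neighborhood conflict management system: a node is dispatched only while no node within its neighborhood radius is being simulated.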
A simulation training evaluation method for distribution network fault based on radar chart
Directory of Open Access Journals (Sweden)
Yuhang Xu
2018-01-01
Full Text Available In order to automate the evaluation of dispatcher fault simulation training in distribution networks, a radar chart based evaluation method for distribution network fault simulation training is proposed. A fault handling information matrix is established to record the dispatcher's fault handling operation sequence and operation information. Four situations of dispatcher fault isolation operations are analyzed. A fault handling anti-misoperation rule set is established to describe the operations the dispatcher is prohibited from performing. Based on the idea of artificial intelligence reasoning, the feasibility of the dispatcher's fault handling is described by a feasibility index. The relevant factors and evaluation methods are discussed from three aspects: the feasibility of the fault handling result, correctness with respect to the anti-misoperation rules, and the conciseness of the operation process; detailed calculation formulas are given. Combining the independence of and correlation between the three evaluation angles, a comprehensive radar chart based evaluation method for distribution network fault simulation training is proposed. The method comprehensively reflects the dispatchers' fault handling process and evaluates it from several angles, which gives it good practical value.
A novel method for energy harvesting simulation based on scenario generation
Wang, Zhe; Li, Taoshen; Xiao, Nan; Ye, Jin; Wu, Min
2018-06-01
Energy harvesting networks (EHN) are a new form of computer network. An EHN converts ambient energy into usable electric energy and supplies that energy as a primary or secondary power source to the communication devices. However, most EHN studies use an analytical probability distribution function to describe the energy harvesting process, which cannot accurately reflect the actual situation because it lacks authenticity. We propose an EHN simulation method based on scenario generation in this paper. Firstly, instead of assuming a probability distribution in advance, it uses optimal scenario reduction technology to generate representative single-period scenarios based on historical data of the harvested energy. Secondly, it uses a homogeneous simulated annealing algorithm to generate optimal daily energy harvesting scenario sequences, giving a more accurate simulation of the random characteristics of the energy harvesting network. Then, taking actual wind power data as an example, the accuracy and stability of the method are verified by comparison with the real data. Finally, we apply the method to optimize network throughput; the optimal solution and data analysis indicate the feasibility and effectiveness of the proposed method for energy harvesting simulation.
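The scenario-reduction step can be illustrated with a small simulated-annealing sketch. The following Python is a generic k-medoids-style reduction on scalar samples; the paper's actual reduction and daily-sequence generation are more elaborate, and all sample values here are invented for illustration. It selects representative harvested-energy scenarios from historical samples:

```python
import math
import random

def reduce_scenarios(samples, k=3, steps=4000, t0=1.0, seed=7):
    """Pick k representative scenarios by simulated annealing on the
    scenario-reduction cost: total distance of every historical sample
    to its nearest representative."""
    rng = random.Random(seed)

    def cost(medoids):
        return sum(min(abs(s - samples[m]) for m in medoids) for s in samples)

    current = rng.sample(range(len(samples)), k)
    best, best_cost = current[:], cost(current)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9    # linear cooling schedule
        cand = current[:]
        cand[rng.randrange(k)] = rng.randrange(len(samples))
        delta = cost(cand) - cost(current)
        # accept improvements always, uphill moves with Metropolis probability
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = cand
            c = cost(current)
            if c < best_cost:
                best, best_cost = current[:], c
    return sorted(samples[m] for m in best), best_cost

# three clusters of historical hourly harvest values (arbitrary units)
samples = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1, 8.9]
reps, total_dist = reduce_scenarios(samples)
```

With these clustered samples the annealer should settle near one representative per cluster, which is the sense in which the reduced scenario set is "representative" of the historical data.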
Comparison of HRA methods based on WWER-1000 NPP real and simulated accident scenarios
International Nuclear Information System (INIS)
Petkov, Gueorgui
2010-01-01
Full text: Adequate treatment of human interactions in probabilistic safety analysis (PSA) studies is key to understanding accident sequences and their relative importance in overall risk. Human interactions with machines have long been recognized as important contributors to the safe operation of nuclear power plants (NPP). Human interactions affect the ordering of dominant accident sequences and hence have a significant effect on NPP risk. By virtue of their ability to combine the treatment of both human and hardware reliability in real accidents, NPP full-scope, multifunctional and computer-based simulators provide a unique way of developing an understanding of the importance of specific human actions for overall plant safety. Context-dependent human reliability assessment (HRA) models, such as the holistic decision tree (HDT) and performance evaluation of teamwork (PET) methods, are so-called second generation HRA techniques. The HDT model has been used for a number of PSA studies. The PET method shows promising prospects for dealing with dynamic aspects of human performance. The paper presents a comparison of the two HRA techniques for calculation of post-accident human error probability in the PSA. The real and simulated event training scenario 'turbine stop after loss of feedwater', based on standard PSA model assumptions, is designed for a WWER-1000 computer simulator, and its detailed boundary conditions are described and analyzed. The error probability of post-accident individual actions will be calculated by means of each investigated technique based on students' computer simulator training archives
Face-based smoothed finite element method for real-time simulation of soft tissue
Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane
2017-03-01
In soft tissue surgery, a tumor and other anatomical structures are usually located using preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases the method allows for reducing the number of degrees of freedom while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in the bending scenario and that it has similar accuracy to the standard FEM in the simulations of brain shift and of the kidney's deformation.
Application of State Quantization-Based Methods in HEP Particle Transport Simulation
Santi, Lucio; Ponieman, Nicolás; Jun, Soon Yung; Genser, Krzysztof; Elvira, Daniel; Castro, Rodrigo
2017-10-01
Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. An essential aspect of it is accurate and efficient particle transportation in a non-uniform magnetic field, which includes the handling of volume crossings within a predefined 3D geometry. Quantized State Systems (QSS) is a family of numerical methods that provides attractive features for particle transportation processes, such as dense output (sequences of polynomial segments changing only according to accuracy-driven discrete events) and lightweight detection and handling of volume crossings (based on simple root-finding of polynomial functions). In this work we present a proof-of-concept performance comparison between a QSS-based standalone numerical solver and an application based on the Geant4 simulation toolkit, with its default Runge-Kutta based adaptive step method. In a case study with a charged particle circulating in a vacuum (with interactions with matter turned off), in a uniform magnetic field, and crossing up to 200 volume boundaries twice per turn, simulation results showed speedups of up to 6 times in favor of QSS, while QSS was 10 times slower in the case with zero volume boundaries.
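The core of a QSS integrator is easy to sketch for a scalar problem. The following Python implements first-order QSS (QSS1) for x' = f(q) with a fixed quantum, applied to exponential decay rather than to magnetic-field transport (an illustrative toy, not the solver from the paper). It shows how the solution advances by quantum-crossing events and is exactly piecewise linear between them, which is what turns boundary-crossing detection into simple polynomial root-finding:

```python
import math

def qss1(f, x0, t_end, quantum=0.01):
    """First-order Quantized State System solver for the scalar ODE
    x' = f(q), where q is the quantized version of the state x."""
    t, x, q = 0.0, x0, x0
    events = [(t, x)]
    while t < t_end:
        slope = f(q)
        if slope == 0.0:
            break                      # equilibrium: no further events
        dt = quantum / abs(slope)      # time until x drifts one quantum
        t += dt
        x += slope * dt                # dense output: linear in between
        q = x                          # requantize, triggering next event
        events.append((t, x))
    return events

# decay x' = -x from x0 = 1; each event moves x by exactly one quantum
traj = qss1(lambda q: -q, 1.0, 5.0)
max_err = max(abs(x - math.exp(-t)) for t, x in traj)
```

For this stable linear problem the event trajectory stays within a few quanta of the exact solution exp(-t), with the event spacing automatically widening as the dynamics slow down.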
A rule based method for context sensitive threshold segmentation in SPECT using simulation
International Nuclear Information System (INIS)
Fleming, John S.; Alaamer, Abdulaziz S.
1998-01-01
Robust techniques for automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still the subject of development. This paper describes a threshold based method which uses empirical rules derived from analysis of computer simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold which correctly segmented objects to be investigated systematically. Rules could then be derived from these data to define the threshold in any particular context. The technique operated iteratively and calculated local context sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated in a further series of simulated objects and in human studies, and compared to the use of a global fixed threshold. The method was capable of improving accuracy of segmentation and volume assessment compared to the global threshold technique. The improvements were greater for small volumes, shapes with large surface area to volume ratio, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered to be a significant advance on global fixed threshold techniques. (author)
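The iterative, context-sensitive thresholding idea can be sketched on a single radial profile. In the following Python toy the threshold fraction is fixed at 0.5 of the peak above background, whereas the paper derives it from simulation-trained rules that depend on the local context; the profile values are invented:

```python
def segment_profile(profile, background, frac=0.5, iters=20):
    """Iterative threshold segmentation of one radial count profile:
    the cut-off sits a fraction `frac` of the way from the local
    background up to the object peak, and the edge index is
    re-estimated until it stops moving."""
    edge = len(profile) - 1        # start assuming the whole profile is object
    for _ in range(iters):
        peak = max(profile[: edge + 1])
        thr = background + frac * (peak - background)
        # edge = first sample that falls below the current threshold
        new_edge = next((i for i, v in enumerate(profile) if v < thr),
                        len(profile) - 1)
        if new_edge == edge:
            break                  # edge position converged
        edge = new_edge
    return edge

# synthetic profile: hot object (counts ~100) falling to background ~10
profile = [100, 98, 95, 60, 30, 12, 10, 10]
edge = segment_profile(profile, background=10)
```

For this synthetic profile the threshold is 55 counts, so the edge converges at index 4, the first sample below the threshold.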
International Nuclear Information System (INIS)
Koch, Stephan
2009-01-01
This thesis is concerned with the numerical simulation of electromagnetic fields in the quasi-static approximation, which is applicable in many practical cases. The main emphasis is put on higher-order finite element methods. Quasi-static applications can be found, e.g., in accelerator physics in the design of magnets required for beam guidance, in power engineering, and in high-voltage engineering. Especially during the first design and optimization phase of such devices, numerical models offer a cheap alternative to the often costly assembly of prototypes. However, large differences in the magnitude of the material parameters and the geometric dimensions, as well as in the time scales of the electromagnetic phenomena involved, lead to an unacceptably long simulation time or an inadequately large memory requirement. Under certain circumstances the simulation itself, and in turn the desired design improvement, becomes impossible. In the context of this thesis, two strategies aiming at extending the range of application of numerical simulations based on the finite element method are pursued. The first strategy consists in parallelizing existing methods such that the computation can be distributed over several computers or cores of a processor. As a consequence, it becomes feasible to simulate a larger range of devices featuring more degrees of freedom in the numerical model than before. This is illustrated for the calculation of the electromagnetic fields, in particular of the eddy-current losses, inside a superconducting dipole magnet developed at the GSI Helmholtzzentrum fuer Schwerionenforschung as a part of the FAIR project. As the second strategy to improve the efficiency of numerical simulations, a hybrid discretization scheme exploiting certain geometrical symmetries is established. Using this method, a significant reduction of the numerical effort in terms of required degrees of freedom for a given accuracy is achieved. The
2014-01-01
Background Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to mining non-tree-like subgraphs, whose probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology structure with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs, and so avoids the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results Experiments on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions The algorithm of probability graph isomorphism
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Directory of Open Access Journals (Sweden)
Danilo ePezo
2014-11-01
Full Text Available To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed up simulation using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC, which is both the most accurate and the fastest method. For higher channel numbers, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels.
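The trade-off the authors quantify can be reproduced on the simplest possible system. The following Python sketch (a single population of two-state channels with made-up rates, not any of the reviewed implementations) contrasts an exact Gillespie MC simulation with a naively bounded Euler-Maruyama diffusion approximation; both should recover the equilibrium open fraction α/(α+β):

```python
import math
import random

def gillespie_fraction_open(n=100, alpha=1.0, beta=1.0, t_end=50.0, seed=3):
    """Exact Markov chain (Gillespie) simulation of n two-state channels;
    returns the time-averaged open fraction."""
    rng = random.Random(seed)
    t, n_open, acc = 0.0, n // 2, 0.0
    while t < t_end:
        r_open = alpha * (n - n_open)     # closed -> open propensity
        r_close = beta * n_open           # open -> closed propensity
        total = r_open + r_close
        dt = -math.log(1.0 - rng.random()) / total
        acc += n_open * min(dt, t_end - t)
        t += dt
        if rng.random() * total < r_open:
            n_open += 1
        else:
            n_open -= 1
    return acc / (t_end * n)

def langevin_fraction_open(n=100, alpha=1.0, beta=1.0, t_end=50.0,
                           dt=0.01, seed=3):
    """Euler-Maruyama diffusion approximation of the same system; the
    open count is naively clipped to [0, n], the bounding issue whose
    handling distinguishes the reviewed DA implementations."""
    rng = random.Random(seed)
    x, acc = n / 2.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        drift = alpha * (n - x) - beta * x
        noise = math.sqrt(max(alpha * (n - x) + beta * x, 0.0) * dt)
        x += drift * dt + noise * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), float(n))
        acc += x
    return acc / (steps * n)

mc = gillespie_fraction_open()
da = langevin_fraction_open()
```

With α = β both estimates should sit near 0.5; the differences the review actually measures appear at low channel counts and near the [0, 1] bounds, where the naive clipping above distorts the statistics.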
Diffusion approximation-based simulation of stochastic ion channels: which method to use?
Pezo, Danilo; Soudry, Daniel; Orio, Patricio
2014-01-01
To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired by granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
Directory of Open Access Journals (Sweden)
Ahmed Kibria
2015-01-01
Full Text Available The reliability modeling of a module in a turbine engine requires knowledge of its failure rate, which can be estimated by identifying statistical distributions describing the percentage of failures per component within the turbine module. The correct definition of the statistical failure behavior per component is highly dependent on the engineer's skill and may present significant discrepancies with respect to the historical data. There is no formal methodology for approaching this problem, and a large number of labor hours are spent trying to reduce the discrepancy by manually adjusting the distributions' parameters. This paper addresses this problem and provides a simulation-based optimization method for minimizing the discrepancy between the simulated and the historical percentage of failures for turbine engine components. The proposed methodology optimizes the parameter values of each component's failure statistical distribution within the component's likelihood confidence bounds. A complete test of the proposed method is performed on a turbine engine case study. The method can be considered a decision-making tool for maintenance, repair, and overhaul companies and will potentially reduce the cost of labor associated with finding the appropriate values of the distribution parameters for each component/failure mode in the model, while increasing the accuracy of the prediction of the mean time to failure (MTTF).
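The discrepancy-minimization loop can be sketched in one dimension. The following Python uses an exponential failure time with a single rate parameter and an invented historical percentage; real turbine modules would need multi-parameter distributions per failure mode and the likelihood-bound constraint mentioned in the abstract. It bisects on the rate until the simulated failure percentage matches the historical one:

```python
import math
import random

def simulated_failure_fraction(rate, t_obs, n=10000, seed=11):
    """Monte Carlo stand-in for the engine simulation: fraction of
    components failing before t_obs under an exponential failure time
    with the candidate rate."""
    rng = random.Random(seed)
    failed = sum(1 for _ in range(n)
                 if -math.log(1.0 - rng.random()) / rate <= t_obs)
    return failed / n

def fit_rate(target_fraction, t_obs, lo=1e-4, hi=1.0, iters=30):
    """Bisection on the rate parameter until the simulated failure
    percentage matches the historical one. The seed is fixed, so the
    simulated fraction is monotone in the rate and bisection is valid."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if simulated_failure_fraction(mid, t_obs) < target_fraction:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical historical datum: 30% of components failed within 1000 hours
rate = fit_rate(0.30, t_obs=1000.0)
```

The fitted rate should land near the analytic value -ln(0.7)/1000, about 3.6e-4 per hour, up to Monte Carlo noise.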
A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data
Directory of Open Access Journals (Sweden)
Jingjing He
2017-09-01
Full Text Available This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of the construction of a baseline quantification model from finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, namely the normalized amplitude and the phase change, with the crack length through a response surface model. The two damage-sensitive features are extracted from the first received S0 mode wave package. The parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties in numerical modeling, geometry, material and manufacturing between the baseline model and the target structure, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions.
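The Bayesian-updating step can be sketched with a one-parameter grid posterior. The following Python updates a multiplicative bias on an invented linear baseline model with made-up measurement values; the paper updates response-surface coefficients rather than a single bias, so this is only the shape of the computation:

```python
import math

def bayes_update_bias(baseline, measurements, sigma=0.05):
    """Grid posterior over a multiplicative bias b between the
    simulation-built baseline feature model and the few measurements
    available from the target structure, under a Gaussian error model
    and a flat prior; returns the posterior mean of b."""
    grid = [0.5 + 0.001 * i for i in range(1001)]     # b in [0.5, 1.5]
    log_post = [0.0] * len(grid)                      # flat prior
    for crack_len, feature in measurements:
        pred = baseline(crack_len)
        for j, b in enumerate(grid):
            log_post[j] += -0.5 * ((feature - b * pred) / sigma) ** 2
    peak = max(log_post)                              # stabilize the exp
    weights = [math.exp(lp - peak) for lp in log_post]
    norm = sum(weights)
    return sum(b * w for b, w in zip(grid, weights)) / norm

# hypothetical baseline from FE simulations: normalized amplitude vs crack (mm)
def baseline(length):
    return 1.0 - 0.05 * length

# two hypothetical measurements from the target structure, 10% below baseline
data = [(2.0, 0.81), (4.0, 0.72)]
bias = bayes_update_bias(baseline, data)
```

Both synthetic measurements sit 10% below the baseline prediction, so the posterior mean bias settles near 0.9; with more target-structure data the posterior tightens around the model-to-structure discrepancy.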
Comparing methods of targeting obesity interventions in populations: An agent-based simulation.
Beheshti, Rahmatollah; Jalalpour, Mehdi; Glass, Thomas A
2017-12-01
Social networks as well as neighborhood environments have been shown to affect obesity-related behaviors, including energy intake and physical activity. Accordingly, harnessing social networks to improve the targeting of obesity interventions may be promising, to the extent that this leads to social multiplier effects and wider diffusion of intervention impact on populations. However, the literature evaluating network-based interventions has been inconsistent. Computational methods such as agent-based models (ABM) provide researchers with tools to experiment in a simulated environment. We develop an ABM to compare conventional targeting methods (random selection, selection based on individual obesity risk, and selection of vulnerable areas) with network-based targeting methods. We adapt a previously published and validated model of network diffusion of obesity-related behavior, build the social networks among agents using a more realistic approach, and calibrate our model against national-level data. Our results show that network-based targeting may lead to greater population impact. We also present a new targeting method that outperforms the other methods in terms of intervention effectiveness at the population level.
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
Directory of Open Access Journals (Sweden)
Ye. S. Sherina
2014-01-01
Full Text Available This research studies the peculiarities that arise in numerical simulation of the electrical impedance tomography (EIT) problem. Static EIT image reconstruction is sensitive to measurement noise and approximation error. Special consideration is given to reducing the approximation error, which originates from drawbacks of the numerical implementation. This paper presents in detail two numerical approaches for solving the EIT forward problem. The finite volume method (FVM) on an unstructured triangular mesh is introduced. For comparison, a forward solver based on the finite element method (FEM), the most popular approach among researchers, was implemented. The potential distribution calculated with the assumed initial conductivity distribution has been compared to the analytical solution of a test Neumann boundary problem and to the results of problem simulation by means of the ANSYS FLUENT commercial software. Two approaches to linearized EIT image reconstruction are discussed. Reconstruction of the conductivity distribution is an ill-posed problem, typically requiring a large amount of computation, and is resolved by minimization techniques. The objective function to be minimized is constructed from the measured voltages and the calculated boundary voltages on the electrodes. A classical modified Newton type iterative method and the stochastic differential evolution method are employed. A software package has been developed for the problem under investigation. Numerical tests were conducted on simulated data. The obtained results could be helpful to researchers tackling the hardware and software issues for medical applications of EIT.
Models and Methods for Adaptive Management of Individual and Team-Based Training Using a Simulator
Lisitsyna, L. S.; Smetyuh, N. P.; Golikov, S. P.
2017-05-01
An analysis of research on adaptive individual and team-based training shows that, both in Russia and abroad, individual and team-based training and retraining of AASTM operators usually includes: production training; training of general computer and office equipment skills; simulator training, including virtual simulators which use computers to simulate real-world manufacturing situations; and, as a rule, evaluation of AASTM operators' knowledge, determined by the completeness and adequacy of their actions under the simulated conditions. Such an approach to training and retraining of AASTM operators covers only the technical training of operators and the testing of their knowledge by assessing their actions in a simulated environment.
Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle
2017-10-01
Single-phase porous materials contain multiple components that intermingle down to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (the hybrid agent) and a new category of rules (intra-agent rules) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they too change. As an example, the hybrid agent under a one-dimensional cellular automata formalism in a two-dimensional domain is used to generate patterns that demonstrate striking morphological and characteristic similarities with porous saturated single-phase structures, where each agent of the 'structure' carries a semi-permeability property and consists of both fluid and solid in space and at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol for simulating complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.
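The hybrid-agent idea can be illustrated with a minimal 1-D cellular automaton. In the following Python sketch the rules are invented only to show the mechanism, not the paper's rule set: each cell is a hybrid agent holding a continuous solid fraction, updated simultaneously by an inter-agent rule (respond to neighbours) and an intra-agent rule (change within itself):

```python
def step(cells, alpha=0.2, beta=0.5):
    """One synchronous update of a ring of hybrid agents. Each cell holds
    a continuous solid fraction in [0, 1]; the inter-agent rule relaxes
    it toward the neighbourhood mean, while the intra-agent rule makes
    the cell drift toward its own nearest pure phase."""
    n = len(cells)
    out = []
    for i in range(n):
        left, right = cells[(i - 1) % n], cells[(i + 1) % n]
        v = cells[i] + beta * (0.5 * (left + right) - cells[i])  # inter-agent
        v += alpha * (round(v) - v)                              # intra-agent
        out.append(min(max(v, 0.0), 1.0))
    return out

cells = [0.9] * 8 + [0.1] * 8   # a solid-rich block beside a fluid-rich block
for _ in range(30):
    cells = step(cells)
```

Because every cell keeps a continuous fraction in [0, 1] rather than a solid-or-fluid label, the emergent pattern has no fully distinguishable phases, which is the point of the hybrid agent.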
Evaluation of FTIR-based analytical methods for the analysis of simulated wastes
International Nuclear Information System (INIS)
Rebagay, T.V.; Cash, R.J.; Dodd, D.A.; Lockrem, L.L.; Meacham, J.E.; Winkelman, W.D.
1994-01-01
Three FTIR-based analytical methods with the potential to characterize simulated waste tank materials have been evaluated. These include: (1) fiber optics, (2) modular transfer optics using light guides equipped with non-contact sampling peripherals, and (3) photoacoustic spectroscopy. Pertinent instrumentation and experimental procedures for each method are described. The results show that the near-infrared (NIR) region of the infrared spectrum is the region of choice for the measurement of moisture in waste simulants. Differentiation of the NIR spectrum, as a preprocessing step, improves the analytical result. Preliminary data indicate that prominent combination bands of water and the first overtone band of the ferrocyanide stretching vibration may be used to measure water and ferrocyanide species simultaneously. Both near-infrared and mid-infrared spectra must be collected, however, to measure ferrocyanide species unambiguously and accurately. For ease of sample handling and the potential for field or waste tank deployment, the FTIR fiber optic method is preferred over the other two methods. Modular transfer optics using light guides and photoacoustic spectroscopy may be used as backup systems and for the validation of the fiber optic data.
Timetable-based simulation method for choice set generation in large-scale public transport networks
DEFF Research Database (Denmark)
Rasmussen, Thomas Kjær; Anderson, Marie Karen; Nielsen, Otto Anker
2016-01-01
The composition and size of the choice sets are key to the correct estimation of and prediction by route choice models. While the existing literature has paid a great deal of attention to the generation of path choice sets for private transport problems, the same does not apply to public...... transport problems. This study proposes a timetable-based simulation method for generating path choice sets in a multimodal public transport network. Moreover, this study illustrates the feasibility of its implementation by applying the method to reproduce 5131 real-life trips in the Greater Copenhagen Area...... and to assess the choice set quality in a complex multimodal transport network. Results illustrate the applicability of the algorithm and the relevance of the utility specification chosen for the reproduction of real-life path choices. Moreover, results show that the level of stochasticity used in choice set...
Method for distributed agent-based non-expert simulation of manufacturing process behavior
Ivezic, Nenad; Potok, Thomas E.
2004-11-30
A method for distributed agent based non-expert simulation of manufacturing process behavior on a single-processor computer comprises the steps of: object modeling a manufacturing technique having a plurality of processes; associating a distributed agent with each process; and programming each agent to respond to discrete events corresponding to the manufacturing technique, wherein each discrete event triggers a programmed response. The method can further comprise the step of transmitting the discrete events to each agent in a message loop. In addition, the programming step comprises the step of conditioning each agent to respond to a discrete event selected from the group consisting of a clock tick message, a resources received message, and a request for output production message.
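The message loop described in the abstract can be sketched roughly as follows; this is a toy single-processor rendition with invented process names, not the patent's actual implementation. Each agent is conditioned to respond to the three event kinds named above.

```python
import collections

class ProcessAgent:
    """Agent for one manufacturing process; responds to discrete events."""
    def __init__(self, name):
        self.name = name
        self.resources = 0
        self.produced = 0

    def on_event(self, event):
        # programmed responses to the three event kinds in the abstract
        if event == "clock_tick":
            pass                       # advance internal timers here
        elif event == "resources_received":
            self.resources += 1
        elif event == "request_output" and self.resources > 0:
            self.resources -= 1
            self.produced += 1

agents = [ProcessAgent(n) for n in ("cut", "weld", "paint")]
message_loop = collections.deque(
    ["resources_received", "clock_tick", "request_output"] * 2)
while message_loop:
    event = message_loop.popleft()
    for agent in agents:               # broadcast each event to every agent
        agent.on_event(event)
```

The single-processor constraint is what makes the broadcast loop sufficient: agents are "distributed" logically, but events are delivered sequentially through one queue.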
Dynamic RCS Simulation of a Missile Target Group Based on the High-frequency Asymptotic Method
Directory of Open Access Journals (Sweden)
Zhao Tao
2014-04-01
Full Text Available To simulate the dynamic Radar Cross Section (RCS) of a missile target group, an efficient RCS prediction approach based on high-frequency asymptotic theory is proposed. The minimal energy trajectory and coordinate transformations are used to obtain the trajectories of the missile, decoys, and roll booster and to establish the dynamic scene for the separation procedure of the target group; the dynamic RCS, including specular reflection, edge diffraction, and multiple reflections from the target group, is then obtained by the Physical Optics (PO), Equivalent Edge Currents (EEC), and Shooting-and-Bouncing Ray (SBR) methods. The proposed method is consistent with the common interpolation method when the targets in the scene are far from each other and no target is sheltered by another in the incident direction. When the target group is densely distributed and the shelter effect cannot be neglected, the interpolation method is extremely difficult to apply, whereas the proposed method remains successful.
Numerical Simulation of Recycled Concrete Using Convex Aggregate Model and Base Force Element Method
Directory of Open Access Journals (Sweden)
Yijiang Peng
2016-01-01
Full Text Available By using the Base Force Element Method (BFEM) on the potential energy principle, a new numerical concrete model, the random convex aggregate model, is presented in this paper to simulate uniaxial compression experiments on recycled aggregate concrete (RAC), also referred to as recycled concrete. The model treats RAC as a heterogeneous composite composed of five media: natural coarse aggregate, old mortar, new mortar, new interfacial transition zone (ITZ), and old ITZ. To simulate the damage processes of RAC, a curve damage model was adopted as the damage constitutive model and the maximum tensile strain theory was used as the failure criterion in the mesomechanical BFEM. The numerical results obtained in this paper, covering uniaxial compressive strengths, size effects on strength, and damage processes of RAC, are in agreement with experimental observations. This research shows that the random convex aggregate model and the BFEM with the curve damage model can be used to simulate the relationship between the microstructure and mechanical properties of RAC.
The Seepage Simulation of Single Hole and Composite Gas Drainage Based on LB Method
Chen, Yanhao; Zhong, Qiu; Gong, Zhenzhao
2018-01-01
Gas drainage is the most effective method for preventing and controlling coal mine gas power disasters, so it is important to study the seepage law of gas in fissured coal. The LB (lattice Boltzmann) method is a simplified micro-scale computational model that is especially suitable for seepage problems. Based on a fracture-seepage mathematical model of single-hole coal gas drainage, this paper uses the LB method to simulate gas flow during drainage and maps the gas pressure clouds, flow path diagrams, and flow velocity vector diagrams for single-hole drainage, symmetric and asymmetric slots, and combined drainage with slots of different widths. The influence of these working conditions on the gas seepage field is analysed, an effective drainage arrangement with a centre hole and slots on both sides is discussed, and a preliminary exploration of combined gas drainage is carried out as well.
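The LB method referred to above can be illustrated with a generic D2Q9 BGK lattice Boltzmann step; this is the textbook scheme, not the paper's specific seepage model, and the grid size, relaxation time, and initial pressure pulse are arbitrary.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.8                                  # BGK relaxation time (sets viscosity)

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium distribution."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

nx = ny = 16
rho = np.ones((nx, ny))
rho[8, 8] = 1.1                            # small pressure pulse
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(20):
    rho = f.sum(axis=0)                    # density moment
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau      # BGK collision
    for i, (cx, cy) in enumerate(c):               # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
```

The collide-and-stream structure conserves mass exactly, which is why the method is convenient for pressure-driven seepage fields like those mapped in the paper.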
Network reliability analysis of complex systems using a non-simulation-based method
International Nuclear Information System (INIS)
Kim, Youngsuk; Kang, Won-Hee
2013-01-01
Civil infrastructures such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often form highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to support proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN, to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
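The factoring idea behind recursive decomposition can be sketched for a single source-terminal pair. This is a deliberately simplified toy, not the paper's full RDA with multiple node pairs and intersection/union decomposition, and it assumes every edge works independently with the same probability p.

```python
def connected(edges, s, t):
    """Depth-first check that s reaches t over the given working edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def reliability(edges, p, s, t, known=()):
    """Two-terminal reliability by recursive factoring on the first
    undecided edge: R = p * R(edge up) + (1 - p) * R(edge down)."""
    if connected(known, s, t):
        return 1.0                 # already connected by decided-up edges
    if not edges:
        return 0.0                 # no undecided edges left, still cut
    e, rest = edges[0], edges[1:]
    return (p * reliability(rest, p, s, t, known + (e,))
            + (1 - p) * reliability(rest, p, s, t, known))

# two parallel links: R = 1 - (1 - 0.9)^2 = 0.99
r_parallel = reliability([(0, 1), (0, 1)], 0.9, 0, 1)
# two links in series: R = 0.9 * 0.9 = 0.81
r_series = reliability([(0, 1), (1, 2)], 0.9, 0, 2)
```

The real RDA prunes this exponential recursion by identifying disjoint path and cut sets at each step; the toy keeps only the branching rule to show the decomposition structure.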
Simulation on Temperature Field of Radiofrequency Lesions System Based on Finite Element Method
International Nuclear Information System (INIS)
Xiao, D; Qian, Z; Li, W; Qian, L
2011-01-01
This paper describes how to obtain a volume model of the damaged region from finite-element simulations of the temperature field of a radiofrequency ablation lesion system used to treat Parkinson's disease. The volume model reflects, to some degree, the shape and size of the damaged tissue during treatment and its evolution with time and core temperature. Using the Pennes equation as the heat conduction equation for radiofrequency ablation of biological tissue, the authors obtain the temperature distribution field of the tissue by solving the equations with the finite element method. To establish damage models at temperatures of 60 °C, 65 °C, 70 °C, 75 °C, 80 °C, 85 °C, and 90 °C, and at times of 30 s, 60 s, 90 s, and 120 s, the Parkinson's disease model of the nuclei is reduced to a uniform, infinite model with the RF pin at the origin. Theoretical simulations of these models are presented, focusing on the effective lesion size in the horizontal and vertical directions under a variety of conditions. The results yield complete quadratic nonlinear joint temperature-time models of the maximum damage diameter and maximum height. These models comprehensively reflect the degeneration of the target tissue caused by radiofrequency temperature and duration, and lay the foundation for accurate monitoring of clinical RF treatment of Parkinson's disease in the future.
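For reference, the Pennes bioheat equation used above has the standard form (written here with the conventional symbols and an added RF source term; the notation is not taken from this abstract):

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \omega_b \left( T_a - T \right)
  + Q_m + Q_{rf}
```

where \rho, c, and k are the density, specific heat, and thermal conductivity of the tissue, \omega_b is the blood perfusion rate, \rho_b and c_b are blood properties, T_a is the arterial temperature, Q_m is metabolic heat generation, and Q_{rf} is the radiofrequency heat source.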
Simulation-based investigation of the paired-gear method in cod-end selectivity studies
DEFF Research Database (Denmark)
Herrmann, Bent; Frandsen, Rikke; Holst, René
2007-01-01
In this paper, the paired-gear and covered cod-end methods for estimating the selectivity of trawl cod-ends are compared. A modified version of the cod-end selectivity simulator PRESEMO is used to simulate the data that would be collected from a paired-gear experiment where the test cod-end also ...
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.; Ghommem, Mehdi; Kagadis, George C.; Katsanos, Konstantinos H.; Loukopoulos, Vassilios C.; Burganos, Vasilis N.; Nikiforidis, George C.
2014-01-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable real-time forecasting of tumor ablation treatment simulations, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must
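Exact DMD itself is compact enough to sketch. The following toy implements standard rank-r DMD (not the paper's bioheat application) and recovers the eigenvalues of a known linear system from snapshot data:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD of rank r: eigenpairs of the best-fit linear operator A
    with Y ~= A X, where columns of X, Y are successive snapshots."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ Y @ Vh.conj().T / s   # A projected onto POD modes
    evals, W = np.linalg.eig(Atilde)
    modes = (Y @ Vh.conj().T / s) @ W           # exact DMD modes
    return evals, modes

# toy data: snapshots of a known linear system x_{k+1} = A_true x_k
A_true = np.array([[0.9, 0.2],
                   [0.0, 0.8]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(9):
    x = A_true @ x
    snaps.append(x)
data = np.array(snaps).T                         # 2 x 10 snapshot matrix
evals, modes = dmd(data[:, :-1], data[:, 1:], r=2)
```

Once the eigenvalues and modes are known, future states follow from the eigendecomposition alone, which is what makes DMD attractive for real-time forecasting of an expensive simulation.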
Method for Lumped Parameter simulation of Digital Displacement pumps/motors based on CFD
DEFF Research Database (Denmark)
Rømer, Daniel; Johansen, Per; Pedersen, Henrik C.
2013-01-01
Digital displacement fluid power pumps/motors offer improved efficiency and performance compared to traditional variable displacement pumps/motors. These improvements are made possible by using efficient electronically controlled seat valves and careful design of the flow geometry. To optimize...... the design and control of digital displacement machines, there is a need for simulation models, preferably models with low computational cost. Therefore, a low computational cost generic lumped parameter model of a digital displacement machine is presented, including a method for determining the needed model...... parameters based on steady CFD results, in order to take detailed geometry information into account. The response of the lumped parameter model is compared to a computationally expensive transient CFD model for an example geometry....
An evolutionary programming based simulated annealing method for solving the unit commitment problem
Energy Technology Data Exchange (ETDEWEB)
Christober Asir Rajan, C. [Department of EEE, Pondicherry Engineering College, Pondicherry 605014 (India); Mohan, M.R. [Department of EEE, Anna University, Chennai 600 025 (India)
2007-09-15
This paper presents a new approach to solving the short-term unit commitment problem using an evolutionary programming based simulated annealing method. The objective is to find a generation schedule that minimizes the total operating cost subject to a variety of constraints; that is, to find the optimal commitment of generating units in the power system for the next H hours. Evolutionary programming, a global optimisation technique for solving the unit commitment problem, operates on a system designed to encode each unit's operating schedule with regard to its minimum up/down time. The unit commitment schedule is coded as a string of symbols, and an initial population of parent solutions is generated at random. Each schedule is formed by committing all the units according to their initial status (a "flat start"). The parents are obtained from a pre-defined set of solutions, i.e. each solution is adjusted to meet the requirements. Then a random recommitment is carried out with respect to the units' minimum down times, and simulated annealing improves the schedules. The best population is selected by an evolutionary strategy. The Neyveli Thermal Power Station (NTPS) Unit-II in India demonstrates the effectiveness of the proposed approach; extensive studies have also been performed for different power systems consisting of 10, 26, and 34 generating units. Numerical results compare the cost solutions and computation times obtained using the evolutionary programming method with those of conventional methods such as dynamic programming, Lagrangian relaxation, simulated annealing, and tabu search in reaching a proper unit commitment. (author)
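The simulated annealing core of such a scheme, random perturbation of a binary schedule accepted with the Metropolis criterion under geometric cooling, can be sketched on a toy commitment problem. Capacities, costs, and demands below are invented; minimum up/down times and the evolutionary programming step are omitted.

```python
import math
import random

random.seed(1)
caps   = [100, 80, 50, 30]        # unit capacities (MW), toy data
costs  = [10, 12, 15, 20]         # running cost per unit per hour
demand = [120, 150, 90]           # demand per hour

def cost(schedule):
    total = 0.0
    for h, d in enumerate(demand):
        on = [u for u in range(len(caps)) if schedule[h][u]]
        total += sum(costs[u] for u in on)
        cap = sum(caps[u] for u in on)
        if cap < d:
            total += 1000 * (d - cap)     # heavy penalty for unserved load
    return total

def anneal(steps=5000, T0=50.0):
    sched = [[1] * len(caps) for _ in demand]   # "flat start": all units on
    cur = best = cost(sched)
    best_sched = [row[:] for row in sched]
    T = T0
    for _ in range(steps):
        h = random.randrange(len(demand))
        u = random.randrange(len(caps))
        sched[h][u] ^= 1                        # flip one unit's status
        cand = cost(sched)
        if cand <= cur or random.random() < math.exp((cur - cand) / T):
            cur = cand                          # Metropolis acceptance
            if cur < best:
                best, best_sched = cur, [row[:] for row in sched]
        else:
            sched[h][u] ^= 1                    # reject: undo the flip
        T *= 0.999                              # geometric cooling
    return best, best_sched

best_cost, best_sched = anneal()
```

Starting from the feasible flat start and penalizing shortfalls heavily keeps the best-so-far schedule feasible while annealing drives the running cost down.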
Poikela, Paula; Ruokamo, Heli; Teräs, Marianne
2015-02-01
Nursing educators must ensure that nursing students acquire the necessary competencies; finding the most purposeful teaching methods and encouraging learning through meaningful learning opportunities is necessary to meet this goal. We investigated student learning in a simulated nursing practice using videography. The purpose of this paper is to examine how students' meaningful learning was manifested under two different teaching methods in a simulated nursing experience. The 6-hour study was divided into three parts: part I, general information; part II, training; and part III, simulated nursing practice. Part II was delivered by two different methods: a computer-based simulation and a lecture. The study was carried out in the simulated nursing practice of two universities of applied sciences in Northern Finland. The participants in parts I and II were 40 first-year nursing students; 12 student volunteers continued to part III. A qualitative analysis method was used. The data were collected using video recordings and analyzed by videography. The students who used the computer-based simulation program were more likely to report meaningful learning themes than those who were first exposed to the lecture method. Educators should be encouraged to use computer-based simulation teaching in conjunction with other teaching methods to ensure that nursing students receive the greatest educational benefits. Copyright © 2014 Elsevier Ltd. All rights reserved.
Berti, Claudio; Gillespie, Dirk; Bardhan, Jaydeep P; Eisenberg, Robert S; Fiegna, Claudio
2012-07-01
Particle-based simulation represents a powerful approach to modeling physical systems in electronics, molecular biology, and chemical physics. Accounting for the interactions occurring among charged particles requires an accurate and efficient solution of Poisson's equation. For a system of discrete charges with inhomogeneous dielectrics, i.e., a system with discontinuities in the permittivity, the boundary element method (BEM) is frequently adopted. It provides the solution of Poisson's equation, accounting for polarization effects due to the discontinuity in the permittivity by computing the induced charges at the dielectric boundaries. In this framework, the total electrostatic potential is then found by superimposing the elemental contributions from both source and induced charges. In this paper, we present a comparison between two BEMs to solve a boundary-integral formulation of Poisson's equation, with emphasis on the BEMs' suitability for particle-based simulations in terms of solution accuracy and computation speed. The two approaches are the collocation and qualocation methods. Collocation is implemented following the induced-charge computation method of D. Boda et al. [J. Chem. Phys. 125, 034901 (2006)]. The qualocation method is described by J. Tausch et al. [IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 20, 1398 (2001)]. These approaches are studied using both flat and curved surface elements to discretize the dielectric boundary, using two challenging test cases: a dielectric sphere embedded in a different dielectric medium and a toy model of an ion channel. Earlier comparisons of the two BEM approaches did not address curved surface elements or semiatomistic models of ion channels. Our results support the earlier findings that for flat-element calculations, qualocation is always significantly more accurate than collocation. On the other hand, when the dielectric boundary is discretized with curved surface elements, the
Directory of Open Access Journals (Sweden)
Jia-Cheng Yu
2018-02-01
Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles are employed to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator will serve as an accurate prediction tool for MEMS fabrication.
Energy Technology Data Exchange (ETDEWEB)
Wu Hong [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, Beihang University, Beijing 100191 (China); Wang Jiao, E-mail: wangjiao@sjp.buaa.edu.cn [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, Beihang University, Beijing 100191 (China); Tao Zhi [National Key Laboratory of Science and Technology on Aero-Engine Aero-Thermodynamics, Beihang University, Beijing 100191 (China)
2011-12-15
Highlights: • A double MRT-LBM is used to study heat transfer in turbulent channel flow. • Turbulent Pr is modeled by a dynamic subgrid-scale model. • Temperature gradients are calculated by the non-equilibrium temperature distribution moments. - Abstract: In this paper, a large eddy simulation based on the lattice Boltzmann framework is carried out to simulate the heat transfer in a turbulent channel flow, in which the temperature can be regarded as a passive scalar. A double multiple relaxation time (DMRT) thermal lattice Boltzmann model is employed. While applying DMRT, a multiple relaxation time D3Q19 model is used to simulate the flow field, and a multiple relaxation time D3Q7 model is used to simulate the temperature field. The dynamic subgrid stress model, in which the turbulent eddy viscosity and the turbulent Prandtl number are dynamically computed, is integrated to describe the subgrid effect. Not only the strain rate but also the temperature gradient is calculated locally by the non-equilibrium moments. The Reynolds number based on the shear velocity and channel half height is 180. The molecular Prandtl numbers are set to 0.025 and 0.71. Statistical quantities, such as the average velocity, average temperature, Reynolds stress, root mean square (RMS) velocity fluctuations, RMS temperature, and turbulent heat flux, are obtained and compared with the available data. The results demonstrate great reliability of DMRT-LES in studying turbulence.
Zhang, Xue-Ying; Wen, Zong-Guo
2014-11-01
To evaluate the reduction potential of industrial water pollutant emissions and to study the application of technology simulation in pollutant control and environment management, an Industrial Reduction Potential Analysis and Environment Management (IRPAEM) model was developed based on coupling of "material-process-technology-product". The model integrated bottom-up modeling and scenario analysis methods, and was applied to China's paper industry. Results showed that under the CM scenario, the reduction potentials of wastewater, COD, and ammonia nitrogen would reach 7 × 10^8 t, 39 × 10^4 t, and 0.3 × 10^4 t, respectively, in 2015, and 13.8 × 10^8 t, 56 × 10^4 t, and 0.5 × 10^4 t in 2020. Strengthening end treatment would still be the key method to reduce emissions during 2010-2020, while the reduction effect of structure adjustment would be more obvious during 2015-2020. Pollution production could basically reach the domestic or international advanced level of clean production in 2015 and 2020; wastewater and ammonia nitrogen would basically meet the emission standards in 2015 and 2020, while COD would not.
Multiscale Simulation of Porous Ceramics Based on Movable Cellular Automaton Method
Smolin, A.; Smolin, I.; Eremina, G.; Smolina, I.
2017-10-01
The paper presents a model for simulating the mechanical behaviour of multiscale porous ceramics based on the movable cellular automaton method, a novel particle method in the computational mechanics of solids. The initial scale of the proposed approach corresponds to the characteristic size of the smallest pores in the ceramics. At this scale, we model uniaxial compression of several representative samples with an explicit account of pores of the same size but with random unique positions in space. As a result, we obtain the average values of Young's modulus and strength, as well as the parameters of the Weibull distribution of these properties at the current scale level. These data allow us to describe the material behaviour at the next scale level, where only the larger pores are considered explicitly, while the influence of small pores enters via the effective properties determined at the previous scale level. If the pore size distribution function of the material has N maxima, we need to perform computations for N - 1 levels in order to obtain the properties from the lowest scale up to the macroscale step by step. The proposed approach was applied to modelling zirconia ceramics with a bimodal pore size distribution. The obtained results show correct behaviour of the model sample at the macroscale.
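The scale-passing step, fit a Weibull distribution to the properties computed at one level and draw the effective properties of the next level from it, can be sketched as follows. The rank-regression fit and all parameter values are illustrative, not the paper's ceramic data.

```python
import numpy as np

rng = np.random.default_rng(3)

def weibull_fit(strengths):
    """Fit Weibull modulus m and scale s0 by rank regression:
    ln(-ln(1 - F_i)) = m ln(sigma_i) - m ln(s0)."""
    x = np.sort(strengths)
    n = len(x)
    F = (np.arange(1, n + 1) - 0.5) / n        # median-rank estimate of CDF
    X = np.log(x)
    Y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(X, Y, 1)
    return m, np.exp(-c / m)

# level 1: "computed" strengths of samples containing the smallest pores
level1 = rng.weibull(8.0, 500) * 300.0         # toy: m = 8, s0 = 300 MPa
m1, s1 = weibull_fit(level1)

# level 2: draw effective strengths of the coarser-scale samples from the
# distribution identified at level 1 (small pores enter only implicitly)
level2 = rng.weibull(m1, 500) * s1
m2, s2 = weibull_fit(level2)
```

Repeating this fit-and-sample loop once per pore-size mode realizes the N - 1 level procedure described in the abstract.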
Mathematical Modeling and Simulation of SWRO Process Based on Simultaneous Method
Directory of Open Access Journals (Sweden)
Aipeng Jiang
2014-01-01
Full Text Available Reverse osmosis (RO) is one of the most efficient techniques for seawater desalination to address the shortage of freshwater. For prediction and analysis of the performance of the seawater reverse osmosis (SWRO) process, an accurate and detailed model based on solution-diffusion and mass-transfer theory is established. Since an accurate formulation of the model involves many differential equations and strongly nonlinear algebraic equations (together, differential and algebraic equations, DAEs), the simultaneous method, with orthogonal collocation on finite elements and a large-scale solver, is used to obtain solutions efficiently. The model is fully discretized into an NLP (nonlinear programming problem) with a large number of variables and equations, and the NLP is then solved with the large-scale solver IPOPT. The formulated model and solution method are validated by a case study on a SWRO plant. Simulation and analysis are then carried out to demonstrate the performance of the reverse osmosis process; operational conditions such as feed pressure, feed flow rate, and feed temperature are also analyzed. This work is significant for a detailed understanding of the RO process and for future energy saving through operational optimization.
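The solution-diffusion core of such a model can be illustrated with the standard water-flux relation Jw = A(ΔP - ΔΠ), using a van 't Hoff estimate for osmotic pressure. All numbers are toy values for a rough seawater case, not the paper's full DAE model.

```python
def osmotic_pressure(c_molar, T=298.15, i=2):
    """van 't Hoff estimate Pi = i c R T in bar (R in L·bar/(mol·K));
    i = 2 approximates full dissociation of NaCl."""
    R = 0.083145
    return i * c_molar * R * T

def water_flux(A, dP, pi_feed, pi_perm):
    """Solution-diffusion water flux: Jw = A (dP - dPi)."""
    return A * (dP - (pi_feed - pi_perm))

pi_f = osmotic_pressure(0.6)        # roughly seawater NaCl molarity
pi_p = osmotic_pressure(0.006)      # high-rejection permeate
# A is a toy permeability in m3/(m2 h bar); dP = 60 bar feed pressure
Jw = water_flux(A=1.0e-3, dP=60.0, pi_feed=pi_f, pi_perm=pi_p)
```

Even this one-line flux law shows why feed pressure and temperature appear in the paper's sensitivity analysis: both the driving force ΔP - ΔΠ and Π itself depend on them.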
A method to solve the aircraft magnetic field model basing on geomagnetic environment simulation
International Nuclear Information System (INIS)
Lin, Chunsheng; Zhou, Jian-jun; Yang, Zhen-yu
2015-01-01
In aeromagnetic surveys, it is difficult to solve the aircraft magnetic field model in flight for some unmanned or disposable aircraft, so a ground-based model-solving method is proposed. The method simulates the geomagnetic environment in which the aircraft flies and creates background magnetic field samples identical to the magnetic field produced by the aircraft's manoeuvring. The aircraft magnetic field model can then be solved by collecting these magnetic field samples. The method for simulating the magnetic environment and the method for controlling the errors are presented as well. Finally, an experiment is performed for verification. The results show that the precision and stability of the model solution obtained by the method are good, and model parameters calculated by the method in one district can be used in districts worldwide as well. - Highlights: • A method to solve the aircraft magnetic field model on the ground is proposed. • The method solves the model by simulating the dynamic geomagnetic environment as in real flight. • The way to control the error of the method is analysed. • An experiment is done for verification.
International Nuclear Information System (INIS)
Lee, Sumi; Song, Doosam
2010-01-01
Drastic urbanization and manhattanization are causing various problems in the wind environment. This study suggests a CFD simulation method to evaluate the wind environment in the early design stage of high-rise buildings. The CFD simulation of this study is not a traditional in-depth simulation, but a method to immediately evaluate the wind environment for each design alternative and provide guidelines for design modification. Thus, the CFD simulation of this study uses BIM-based CFD tools so that building models from the design stage can be utilized. This study examined previous criteria for evaluating the wind environment for pedestrians around buildings and selected evaluation criteria applicable to the CFD simulation method of this study. Furthermore, proper mesh generation methods and CPU times were reviewed to obtain meaningful CFD simulation results for determining the optimal design alternative from the perspective of the wind environment in the design stage. In addition, a wind environment evaluation method based on BIM-based CFD simulation is suggested.
Energy Technology Data Exchange (ETDEWEB)
Marcondes, Francisco [Federal University of Ceara, Fortaleza (Brazil). Dept. of Metallurgical Engineering and Material Science], e-mail: marcondes@ufc.br; Varavei, Abdoljalil; Sepehrnoori, Kamy [The University of Texas at Austin (United States). Petroleum and Geosystems Engineering Dept.], e-mails: varavei@mail.utexas.edu, kamys@mail.utexas.edu
2010-07-01
An element-based finite-volume approach in conjunction with unstructured grids for naturally fractured compositional reservoir simulation is presented. In this approach, both the discrete fracture and the matrix mass balances are taken into account without any additional models to couple the matrix and discrete fractures. The mesh, for two-dimensional domains, can be built of triangles, quadrilaterals, or a mix of these elements. However, owing to the capabilities of the available mesh generator for handling both matrix and discrete fractures, only results using triangular elements are presented. The discrete fractures are located along the edges of each element. To obtain the approximated matrix equation, each element is divided into three sub-elements, and the mass balance equations for each component are then integrated along each interface of the sub-elements. The finite-volume conservation equations are assembled from the contributions of all the elements that share a vertex, creating a cell-vertex approach. The discrete fracture equations are discretized only along the edges of each element and then summed with the matrix equations in order to obtain a conservative equation for both matrix and discrete fractures. In order to mimic real field simulations, capillary pressure is included in both matrix and discrete fracture media. In the implemented model, the saturation fields in the matrix and discrete fractures can be different, but the potential of each phase at the matrix and discrete fracture interface needs to be the same. Results for several naturally fractured reservoirs are presented to demonstrate the applicability of the method. (author)
GEM simulation methods development
International Nuclear Information System (INIS)
Tikhonov, V.; Veenhof, R.
2002-01-01
A review of methods used in the simulation of processes in gas electron multipliers (GEMs) and in the accurate calculation of detector characteristics is presented. Detector characteristics such as effective gas gain, transparency, and charge collection and losses have been calculated and optimized for a number of GEM geometries and compared with experiment. A method and a new special program for the calculation of detector macro-characteristics, such as the signal response in a real detector readout structure and the spatial and time resolution of detectors, have been developed and used for detector optimization. A detailed treatment of signal induction on the readout electrodes and of the electronics characteristics is included in the new program. A method for the simulation of charging-up effects in GEM detectors is described. All methods show good agreement with experiment.
Mixed finite element-based fully conservative methods for simulating wormhole propagation
Kou, Jisheng; Sun, Shuyu; Wu, Yuanqing
2015-01-01
Wormhole propagation during reactive dissolution of carbonates plays a very important role in product enhancement of oil and gas reservoirs. Because of high velocity and nonuniform porosity, the Darcy–Forchheimer model is applicable for this problem instead of the conventional Darcy framework. We develop a mixed finite element scheme for numerical simulation of this problem, in which mixed finite element methods are used not only for the Darcy–Forchheimer flow equations but also for the solute transport equation, by introducing an auxiliary flux variable to guarantee full mass conservation. In the theoretical analysis, based on the cut-off operator of the solute concentration, we construct an analytical function to control and handle the change of porosity with time; we treat the auxiliary flux variable as a function of velocity and establish its properties; and we employ a coupled analysis approach to deal with the full coupling of the multiple variables. From this, the stability analysis and a priori error estimates for velocity, pressure, concentration, and porosity are established in different norms. Numerical results are also given to verify the theoretical analysis and the effectiveness of the proposed scheme.
Extracting dimer structures from simulations of organic-based materials using QM/MM methods
Energy Technology Data Exchange (ETDEWEB)
Pérez-Jiménez, A.J., E-mail: aj.perez@ua.es; Sancho-García, J.C., E-mail: jc.sancho@ua.es
2015-09-28
Highlights: • DFT geometries of isolated dimers in organic crystals differ from experimental ones. • This can be corrected using QM/MM geometry optimizations. • The QM = B3LYP–D3(ZD)/cc-pVDZ and MM = GAFF combination works reasonably well. - Abstract: The functionality of weakly bound organic materials, whether in nanoelectronics or in materials science, is known to be strongly affected by their morphology. Theoretical predictions of the underlying structure–property relationships are frequently based on calculations performed on isolated dimers, but the optimized structure of the latter may differ significantly from experimental data even when dispersion-corrected methods are used. Here, we address this problem for two organic crystals, namely coronene and 5,6,11,12-tetrachlorotetracene, concluding that it is caused by the absence of the surrounding monomers present in the crystal, and that it can be efficiently cured when the dimer is embedded into a general Quantum Mechanics/Molecular Mechanics (QM/MM) geometry optimization scheme. We also investigate how the size of the MM region affects the results. These findings may be helpful for the simulation of the morphology of active materials in crystalline or glassy samples.
Reduction of very large reaction mechanisms using methods based on simulation error minimization
Energy Technology Data Exchange (ETDEWEB)
Nagy, Tibor; Turanyi, Tamas [Institute of Chemistry, Eoetvoes University (ELTE), P.O. Box 32, H-1518 Budapest (Hungary)
2009-02-15
A new species reduction method called the Simulation Error Minimization Connectivity Method (SEM-CM) was developed. According to the SEM-CM algorithm, a mechanism building procedure is started from the important species. Strongly connected sets of species, identified on the basis of the normalized Jacobian, are added and several consistent mechanisms are produced. The combustion model is simulated with each of these mechanisms, and the mechanism causing the smallest error (i.e. deviation from the model that uses the full mechanism), considering the important species only, is selected. Then, in several steps, other strongly connected sets of species are added, the size of the mechanism is gradually increased, and the procedure is terminated when the error becomes smaller than the required threshold. A new method for the elimination of redundant reactions is also presented, called Principal Component Analysis of Matrix F with Simulation Error Minimization (SEM-PCAF). According to this method, several reduced mechanisms are produced by using various PCAF thresholds. The reduced mechanism with the lowest CPU time requirement among those with almost the smallest error is selected. Applying SEM-CM and SEM-PCAF together provides a very efficient way to eliminate redundant species and reactions from large mechanisms. The suggested approach was tested on a mechanism containing 6874 irreversible reactions of 345 species that describes methane partial oxidation to high conversion. The aim was to accurately reproduce the concentration-time profiles of 12 major species with less than 5% error at the conditions of an industrial application. The reduced mechanism consists of 246 reactions of 47 species, and its simulation is 116 times faster than using the full mechanism. SEM-CM was found to be more effective than the classic Connectivity Method, as well as the DRG, two-stage DRG, DRGASA, basic DRGEP and extended DRGEP methods. (author)
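The greedy grow-until-below-threshold loop at the core of SEM-CM can be illustrated with a deliberately simplified sketch. Here the "simulation" is just a weighted sum over retained species, standing in for a full kinetic integration, and all species names, weights and the tolerance are invented for illustration; the real method uses Jacobian-based strongly connected species sets.

```python
# Hedged sketch of simulation-error-minimization species reduction: keep the
# important species, then add candidates (largest contribution first) until
# the relative error of the reduced "simulation" drops below tol.

def reduce_species(weights, important, tol):
    """Greedily retain species until the relative error versus the full
    'simulation' (sum of all weights) is below tol."""
    full = sum(weights.values())
    kept = set(important)                      # important species always stay

    def error(ks):
        return abs(full - sum(weights[s] for s in ks)) / abs(full)

    # add candidates in order of decreasing contribution
    for sp in sorted(set(weights) - kept, key=lambda s: -weights[s]):
        if error(kept) < tol:
            break
        kept.add(sp)
    return kept, error(kept)

# hypothetical species weights, not a real mechanism
weights = {"CH4": 5.0, "O2": 4.0, "CO2": 3.0, "H2O": 2.5,
           "CO": 1.0, "H2": 0.6, "OH": 0.3, "CH3": 0.1}
kept, err = reduce_species(weights, important=["CH4", "O2"], tol=0.05)
```

In the real method each candidate set would trigger a re-simulation of the combustion model; here the cheap surrogate keeps the control flow visible.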
Energy Technology Data Exchange (ETDEWEB)
Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)
2008-07-01
The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments, which are often very expensive and time-consuming. Digital image analysis techniques therefore offer a fast, low-cost methodology for predicting physical properties from geometrical parameters measured on thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the relaxation method simulated annealing. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy; the 3D model preserves the porosity, spatial correlation, chord-size distribution and d3-4 distance-transform distribution for a pixel-based reconstruction, and the spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is at an early stage, only the 2D results are presented. (author)
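A pixel-based simulated-annealing reconstruction can be sketched in miniature: a binary image (1 = pore) is evolved by porosity-preserving pixel swaps so that one of its statistics approaches a measured target. The grid size, the single statistic used (horizontal two-point probability at lag 1) and the cooling schedule are illustrative only; the actual method matches several correlation and chord-size statistics in 3D.

```python
import math
import random

def s2_lag1(img):
    """Two-point pore probability at horizontal lag 1 (periodic)."""
    rows, cols = len(img), len(img[0])
    hits = sum(img[i][j] * img[i][(j + 1) % cols]
               for i in range(rows) for j in range(cols))
    return hits / (rows * cols)

def anneal(img, s2_target, steps=3000, t0=0.05, seed=1):
    """Simulated annealing with porosity-preserving pixel swaps."""
    rng = random.Random(seed)
    rows, cols = len(img), len(img[0])
    energy = lambda im: abs(s2_lag1(im) - s2_target)
    e = e_best = energy(img)
    best = [row[:] for row in img]
    for k in range(steps):
        t = max(t0 * (1.0 - k / steps), 1e-9)        # linear cooling
        i1, j1 = rng.randrange(rows), rng.randrange(cols)
        i2, j2 = rng.randrange(rows), rng.randrange(cols)
        if img[i1][j1] == img[i2][j2]:
            continue                                  # swap changes nothing
        img[i1][j1], img[i2][j2] = img[i2][j2], img[i1][j1]
        e_new = energy(img)
        if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
            e = e_new                                 # accept (Metropolis)
            if e < e_best:
                e_best, best = e, [row[:] for row in img]
        else:
            img[i1][j1], img[i2][j2] = img[i2][j2], img[i1][j1]  # undo
    return best, e_best

rng0 = random.Random(0)
img = [[1 if rng0.random() < 0.3 else 0 for _ in range(16)] for _ in range(16)]
n_pore = sum(sum(r) for r in img)
e_start = abs(s2_lag1(img) - 0.15)
best, e_best = anneal(img, s2_target=0.15)
```

Because swaps exchange a pore pixel with a solid pixel, the porosity is held fixed while only the correlation statistic is relaxed toward its target.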
Strong source heat transfer simulations based on a Galerkin/gradient-least-squares method
International Nuclear Information System (INIS)
Franca, L.P.; Carmo, E.G.D. do.
1989-05-01
Heat conduction problems with temperature-dependent strong sources are modeled by an equation with a Laplacian term, a linear term and a given source distribution term. When the linear temperature-dependent source term is much larger than the Laplacian term, we have a singular perturbation problem, and boundary layers are formed to satisfy the Dirichlet boundary conditions. Although this is an elliptic equation, the standard Galerkin solution is contaminated by spurious oscillations in the neighborhood of the boundary layers. Herein we employ a Galerkin/gradient-least-squares method which eliminates all pathological phenomena of the Galerkin method. The method is constructed by adding to the Galerkin method a mesh-dependent term obtained from the least-squares form of the gradient of the Euler-Lagrange equation. Error estimates and numerical simulations in one and several dimensions are given that attest to the good stability and accuracy properties of the method.
Directory of Open Access Journals (Sweden)
Fan Yuxin
2014-12-01
Full Text Available A fluid–structure interaction method combining a nonlinear finite element algorithm with a preconditioning finite volume method is proposed in this paper to simulate parachute transient dynamics. This method uses a three-dimensional membrane–cable fabric model to represent a parachute system in a highly folded configuration. The large shape change during parachute inflation is computed by nonlinear Newton–Raphson iteration, and the linear system equation is solved by the generalized minimal residual (GMRES) method. A membrane wrinkling algorithm is also utilized to evaluate the special uniaxial tension state of membrane elements on the parachute canopy. In order to avoid the large time expense of the structural nonlinear iteration, the implicit Hilber–Hughes–Taylor (HHT) time integration method is employed. For the fluid dynamic simulations, the Roe and HLLC (Harten–Lax–van Leer contact) schemes have been modified and extended to compute flow problems at all speeds. The lower–upper symmetric Gauss–Seidel (LU-SGS) approximate factorization is applied to accelerate numerical convergence. Finally, the test model of a highly folded C-9 parachute is simulated at a prescribed speed, and the results show characteristics similar to experimental results and the previous literature.
Song, Jinling; Qu, Yonghua; Wang, Jindi; Wan, Huawei; Liu, Xiaoqing
2007-06-01
The radiosity method is based on computer simulation of the 3D structure of real vegetation, such as leaves, branches and stems, which is composed of many facets. Using this method we can simulate the canopy reflectance and its bidirectional distribution in the visible and NIR regions. However, as vegetation becomes more complex, more facets are needed to represent it, so large memory and long view-factor computation times are required; these are the bottlenecks of using the radiosity method to calculate the canopy BRF of large-scale vegetation scenes. We derived a new method to solve this problem; the main idea is to abstract vegetation crown shapes and simplify their structures, which reduces the number of facets. The facets are assigned optical properties according to the reflectance, transmission and absorption of the real-structure canopy. On this basis, we can simulate the canopy BRF of mixed scenes with different vegetation species at large scale. In this study, taking broadleaf trees as an example and based on their structural characteristics, we abstracted their crowns as ellipsoid shells and simulated the visible and NIR canopy BRF of a large-scale scene with ellipsoids of different crown shapes and heights. From this study we conclude that LAI, LAD, the gap probability, and the sunlit and shaded surfaces are the most important parameters for simulating the simplified vegetation canopy BRF, and that the radiosity method can supply canopy BRF data under arbitrary conditions for our research.
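The core radiosity balance on a facetized scene is B_i = E_i + rho_i * sum_j F_ij B_j, with F the view-factor matrix whose computation is exactly the cost that crown simplification reduces. The sketch below solves that balance by fixed-point iteration on an invented 3-facet scene; all emissions, reflectances and view factors are illustrative numbers, not vegetation data.

```python
# Hedged sketch of the radiosity fixed-point iteration. Convergence is
# guaranteed here because reflectances < 1 and view-factor rows sum to 1.

def solve_radiosity(emission, reflectance, F, iters=200):
    n = len(emission)
    B = emission[:]
    for _ in range(iters):                    # Jacobi-style fixed point
        B = [emission[i] + reflectance[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

E = [1.0, 0.0, 0.0]                  # only facet 0 emits (e.g. sunlit)
rho = [0.5, 0.5, 0.5]                # facet reflectances (invented)
F = [[0.0, 0.5, 0.5],                # toy view factors, rows sum to 1
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]
B = solve_radiosity(E, rho, F)
residual = max(abs(B[i] - (E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(3))))
               for i in range(3))
```

For N facets the view-factor matrix costs O(N^2) storage and far more to assemble, which is why reducing the facet count via crown abstraction pays off so directly.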
[Simulation of water and carbon fluxes in harvard forest area based on data assimilation method].
Zhang, Ting-Long; Sun, Rui; Zhang, Rong-Hua; Zhang, Lei
2013-10-01
Model simulation and in situ observation are the two most important means of studying the water and carbon cycles of terrestrial ecosystems, but each has its own advantages and shortcomings. Combining the two helps reflect the dynamic changes of ecosystem water and carbon fluxes more accurately, and data assimilation provides an effective way to integrate model simulation with in situ observation. Based on observation data from the Harvard Forest Environmental Monitoring Site (EMS), and using the ensemble Kalman filter algorithm, this paper assimilated field-measured LAI and remote sensing LAI into the Biome-BGC model to simulate the water and carbon fluxes in the Harvard Forest area. Compared with the original model simulation without data assimilation, the improved Biome-BGC model with assimilation of the field-measured LAI in 1998, 1999, and 2006 increased the coefficient of determination R2 between model simulation and flux observation for net ecosystem exchange (NEE) and evapotranspiration by 8.4% and 10.6%, decreased the sum of absolute error (SAE) and root mean square error (RMSE) of NEE by 17.7% and 21.2%, and decreased the SAE and RMSE of evapotranspiration by 26.8% and 28.3%, respectively. After assimilating the MODIS LAI products of 2000-2004 into the improved Biome-BGC model, the R2 between simulated and observed NEE and evapotranspiration increased by 7.8% and 4.7%, the SAE and RMSE of NEE decreased by 21.9% and 26.3%, and the SAE and RMSE of evapotranspiration decreased by 24.5% and 25.5%, respectively. This suggests that the simulation accuracy of ecosystem water and carbon fluxes can be effectively improved by integrating field-measured or remote sensing LAI into the model.
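The update at the heart of such assimilation is the ensemble Kalman filter analysis step, which nudges every state element toward an observation in proportion to its sampled covariance with the observed quantity. The sketch below performs one stochastic EnKF update for a tiny 2-element state [LAI, NEE]; the ensemble values, observation and error levels are invented, and Biome-BGC itself is of course not modeled.

```python
import random

# Hedged sketch of one stochastic EnKF analysis step with a single
# observed state element (index h_idx).

def enkf_update(ensemble, obs, obs_err, h_idx, seed=0):
    rng = random.Random(seed)
    n, dim = len(ensemble), len(ensemble[0])
    mean = [sum(m[k] for m in ensemble) / n for k in range(dim)]
    # sample covariance of each state element with the observed element
    var_h = sum((m[h_idx] - mean[h_idx]) ** 2 for m in ensemble) / (n - 1)
    cov = [sum((m[k] - mean[k]) * (m[h_idx] - mean[h_idx]) for m in ensemble) / (n - 1)
           for k in range(dim)]
    gain = [c / (var_h + obs_err ** 2) for c in cov]        # Kalman gain
    analysis = []
    for m in ensemble:
        y = obs + rng.gauss(0.0, obs_err)                   # perturbed obs
        analysis.append([m[k] + gain[k] * (y - m[h_idx]) for k in range(dim)])
    return analysis

# invented prior ensemble: LAI around 3.0, NEE correlated with LAI
prior = [[3.0 + d, 1.0 + 0.5 * d] for d in (-0.6, -0.2, 0.2, 0.6)]
post = enkf_update(prior, obs=4.0, obs_err=0.1, h_idx=0)
lai_post = sum(m[0] for m in post) / len(post)
nee_post = sum(m[1] for m in post) / len(post)
```

Because NEE covaries with LAI in the prior ensemble, observing LAI alone also shifts the unobserved NEE, which is exactly how assimilating LAI improves simulated fluxes.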
Real-time tumor ablation simulation based on the dynamic mode decomposition method
Bourantas, George C.
2014-05-01
Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable real-time forecasting of tumor ablation treatment simulation, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method to forecast the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers a very fast numerical solution to bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for both linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders the use of DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In this way, tumor ablation treatment planning is feasible using just a personal computer, thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by the medical imaging modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
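DMD itself is a small, self-contained algorithm: from a sequence of solver snapshots it extracts a linear operator whose eigenvalues and modes can then be used for cheap forecasting. The sketch below runs exact DMD on a synthetic 2-state linear system with known decay rates (not the bioheat solver of the abstract) and recovers those rates.

```python
import numpy as np

# Hedged sketch of exact DMD on a snapshot matrix X whose columns are
# successive states x_k, assumed to evolve as x_{k+1} ~ A x_k.

def dmd(X, r=None):
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                                # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    Sinv = np.diag(1.0 / s)
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ Sinv    # reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vh.conj().T @ Sinv @ W              # exact DMD modes
    return eigvals, modes

# synthetic snapshots with known per-step decay rates 0.9 and 0.5
A = np.array([[0.9, 0.0], [0.0, 0.5]])
X = np.empty((2, 20))
X[:, 0] = [1.0, 1.0]
for k in range(19):
    X[:, k + 1] = A @ X[:, k]
eigvals, modes = dmd(X)
```

Once the eigenvalues and modes are known, future states follow by multiplying mode amplitudes by powers of the eigenvalues, which is far cheaper than advancing the full solver.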
Energy Technology Data Exchange (ETDEWEB)
Papadopoulos, Alessandro Vittorio, E-mail: alessandro.papadopoulos@control.lth.se [Lund University, Department of Automatic Control (Sweden); Leva, Alberto, E-mail: alberto.leva@polimi.it [Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria (Italy)
2015-06-15
The presence of different time scales in a dynamic model significantly hampers the efficiency of its simulation. In multibody systems this fact is particularly relevant, as the time scales involved may be very different, owing, for example, to the coexistence of mechanical components controlled by electronic drive units, and may also appear in conjunction with significant nonlinearities. This paper proposes a systematic technique, based on the principles of dynamic decoupling, to partition a model according to the time scales that are relevant for the particular simulation studies to be performed, and to do so as transparently as possible for the user. In accordance with this purpose, the technique is neatly separated into two parts: a structural analysis of the model, which is general with respect to any possible simulation scenario, and a subsequent decoupled integration, which can conversely be (easily) tailored to the study at hand. Also, since the technique aims at partitioning rather than reducing the model, the state space and the physical interpretation of the dynamic variables are inherently preserved. Moreover, the proposed analysis allows us to define some novel indices relating to the separability of the system, thereby extending the idea of "stiffness" in a way that is particularly suited to improving simulation efficiency, whether the envisaged integration scheme is monolithic, parallel, or even based on cosimulation. Finally, thanks to the way the analysis phase is conceived, the technique is naturally applicable to both linear and nonlinear models. The paper contains a methodological presentation of the proposed technique, which is compared with alternatives available in the literature so as to highlight its peculiarities, and some application examples illustrating the achieved advantages and motivating the major design choices from an operational viewpoint.
A numerical simulation of wheel spray for simplified vehicle model based on discrete phase method
Directory of Open Access Journals (Sweden)
Xingjun Hu
2015-07-01
Full Text Available Road spray greatly affects vehicle body soiling and driving safety, and its study has attracted increasing attention. In this article, computational fluid dynamics software with a widely used finite volume method code was employed to investigate the numerical simulation of spray induced by a simplified wheel model and a modified square-back model proposed by the Motor Industry Research Association. The shear stress transport k-omega turbulence model, discrete phase model, and Eulerian wall-film model were selected. In the simulation process, the phenomena of drop breakup and coalescence were considered, and the continuous and discrete phases were treated as two-way coupled in momentum and turbulent motion. The relationship between the vehicle external flow structure and body soiling was also discussed.
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine
In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions 'learned' from a training image, sequential simulation has proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...
A New Hybrid Viscoelastic Soft Tissue Model based on Meshless Method for Haptic Surgical Simulation
Bao, Yidong; Wu, Dongmei; Yan, Zhiyuan; Du, Zhijiang
2013-01-01
This paper proposes a hybrid soft tissue model for a meshless-based surgical simulation system, consisting of a multilayer structure and many spheres. To improve the accuracy of the model, tension is added to the three-parameter viscoelastic structure that connects the two spheres. By using a haptic device, the three-parameter viscoelastic model (TPM) produces accurate deformation and also has better stress-strain, stress relaxation and creep properties. Stress relaxation and creep formulas have been obtained by mathematical derivation. Compared with the experimental results on real pig liver reported by Evren et al. and Amy et al., the stress-strain, stress relaxation and creep curves of TPM are close to the experimental data of the real liver. Simulated results show that TPM has good real-time performance, stability and accuracy. PMID:24339837
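The three-parameter viscoelastic element referred to here is commonly modeled as the standard linear (Zener) solid: a spring E_inf in parallel with a Maxwell arm (spring E1, relaxation time tau). Its stress-relaxation and creep responses have the closed forms sketched below; the demo parameter values are illustrative, not fitted liver data, and the paper's added tension term is not reproduced.

```python
import math

# Hedged sketch of the standard linear solid's two canonical responses.

def relaxation_modulus(t, e_inf, e1, tau):
    """Stress response to a unit step strain: E(t) = E_inf + E1*exp(-t/tau)."""
    return e_inf + e1 * math.exp(-t / tau)

def creep_compliance(t, e_inf, e1, tau):
    """Strain response to a unit step stress for the same model."""
    e0 = e_inf + e1                       # instantaneous modulus
    tau_c = tau * e0 / e_inf              # retardation time
    return 1.0 / e_inf - (1.0 / e_inf - 1.0 / e0) * math.exp(-t / tau_c)

E_INF, E1, TAU = 2.0, 3.0, 0.5            # illustrative units (e.g. kPa, s)
```

Note the asymmetry that makes the model useful for tissue: stress relaxes with time constant tau, while creep proceeds with the longer retardation time tau*(E_inf+E1)/E_inf.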
Full wave simulation of waves in ECRIS plasmas based on the finite element method
Energy Technology Data Exchange (ETDEWEB)
Torrisi, G. [INFN - Laboratori Nazionali del Sud, via S. Sofia 62, 95123, Catania, Italy and Università Mediterranea di Reggio Calabria, Dipartimento di Ingegneria dell' Informazione, delle Infrastrutture e dell' Energia Sostenibile (DIIES), Via Graziella, I (Italy); Mascali, D.; Neri, L.; Castro, G.; Patti, G.; Celona, L.; Gammino, S.; Ciavola, G. [INFN - Laboratori Nazionali del Sud, via S. Sofia 62, 95123, Catania (Italy); Di Donato, L. [Università degli Studi di Catania, Dipartimento di Ingegneria Elettrica Elettronica ed Informatica (DIEEI), Viale Andrea Doria 6, 95125 Catania (Italy); Sorbello, G. [INFN - Laboratori Nazionali del Sud, via S. Sofia 62, 95123, Catania, Italy and Università degli Studi di Catania, Dipartimento di Ingegneria Elettrica Elettronica ed Informatica (DIEEI), Viale Andrea Doria 6, 95125 Catania (Italy); Isernia, T. [Università Mediterranea di Reggio Calabria, Dipartimento di Ingegneria dell' Informazione, delle Infrastrutture e dell' Energia Sostenibile (DIIES), Via Graziella, I-89100 Reggio Calabria (Italy)
2014-02-12
This paper describes the modeling and the full wave numerical simulation of electromagnetic wave propagation and absorption in an anisotropic magnetized plasma filling the resonant cavity of an electron cyclotron resonance ion source (ECRIS). The model assumes inhomogeneous, dispersive and tensorial constitutive relations. Maxwell's equations are solved by the finite element method (FEM), using the COMSOL Multiphysics® suite. All the relevant details have been considered in the model, including the nonuniform external magnetostatic field used for plasma confinement and the local electron density profile, resulting in a fully 3D nonuniform magnetized-plasma complex dielectric tensor. The more accurate plasma simulations clearly show the importance of the cavity effect on wave propagation and the effects of a resonant surface. These studies are the pillars of an improved ECRIS plasma modeling, which is mandatory to optimize the ion source output (beam intensity distribution and charge state, especially). Any new project concerning advanced ECRIS design will benefit from adequate modeling of self-consistent wave absorption simulations.
Energy Technology Data Exchange (ETDEWEB)
Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)
2010-07-01
Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, refrigeration systems and air conditioning, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural fuel sources and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important. Hence the need to understand the mechanisms that degrade energy, to improve the use of energy sources, to reduce environmental impacts, and to reduce project, operation and maintenance costs. In recent years, a consistent development of procedures and techniques for the computational design of thermal systems has occurred. In this context, the fundamental objective of this study is a comparative analysis of the performance of structural and parametric optimization of a cogeneration system using two stochastic methods: genetic algorithm and simulated annealing. This work uses a superstructure, modelled in a process simulator (SimTech's IPSEpro), in which the design options appropriate to the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined as a result of the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic so as to couple directly to the process simulator. At the end of the optimization process, the optimal configuration of the system, given the characteristics of each specific problem, is defined. (author)
Numerical methods and inversion algorithms in reservoir simulation based on front tracking
Energy Technology Data Exchange (ETDEWEB)
Haugse, Vidar
1999-04-01
This thesis uses front tracking to analyse laboratory experiments on multiphase flow in porous media. New methods for parameter estimation in two- and three-phase relative permeability experiments have been developed. Upscaling of heterogeneous and stochastic porous media is analysed. Numerical methods based on front tracking are developed and analysed; such methods are efficient for problems involving steep changes in the physical quantities. Multi-dimensional problems are solved by combining front tracking with dimensional splitting. A method for adaptive grid refinement is also developed.
FEM simulation of friction testing method based on combined forward rod-backward can extrusion
DEFF Research Database (Denmark)
Nakamura, T; Bay, Niels; Zhang, Z. L
1997-01-01
A new friction testing method by combined forward rod-backward can extrusion is proposed in order to evaluate frictional characteristics of lubricants in forging processes. By this method the friction coefficient mu and the friction factor m can be estimated along the container wall and the conical...... curves are obtained by rigid-plastic FEM simulations in a combined forward rod-backward can extrusion process for a reduction in area R-b = 25, 50 and 70 percent in the backward can extrusion. It is confirmed that the friction factor m(p) on the punch nose in the backward cart extrusion has almost...... in a mechanical press with aluminium alloy A6061 as the workpiece material and different kinds of lubricants. They confirm the analysis resulting in reasonable values for the friction coefficient and the friction factor....
Entropy in biomolecular simulations: A comprehensive review of atomic fluctuations-based methods.
Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H
2015-11-01
Entropy of binding constitutes a major, and in many cases detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods have focused on developing a reliable estimate of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool for understanding these methods and realizing the practical issues that may arise in such calculations. Copyright © 2015 Elsevier Inc. All rights reserved.
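One widely cited fluctuations-based estimator is the Schlitter upper bound, S <= (kB/2) ln det(1 + kB*T*e^2/hbar^2 * M * sigma), where sigma is the Cartesian covariance matrix of the atomic fluctuations and M the diagonal mass matrix. The sketch below evaluates it for a single carbon-like coordinate with an invented positional variance; no trajectory or real system is modeled.

```python
import numpy as np

# Hedged sketch of the Schlitter entropy bound from a covariance matrix.
KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J*s

def schlitter_entropy(cov, masses, temp):
    """Schlitter bound (J/K) from covariance (m^2) and per-coordinate
    masses (kg); slogdet keeps the determinant numerically safe."""
    M = np.diag(masses)
    arg = np.eye(len(masses)) + (KB * temp * np.e ** 2 / HBAR ** 2) * (M @ cov)
    sign, logdet = np.linalg.slogdet(arg)
    return 0.5 * KB * logdet

m_c = 12.0 * 1.66053906660e-27          # one carbon-like coordinate, kg
sigma = np.array([[1.0e-21]])           # ~ (0.3 A)^2 positional variance, m^2
s300 = schlitter_entropy(sigma, [m_c], 300.0)
s600 = schlitter_entropy(sigma, [m_c], 600.0)
```

In practice sigma comes from a (mass-weighted, fitted) simulation trajectory, and the quality of the bound hinges on how well the sampled fluctuations cover the accessible phase space, which is the central caveat the review discusses.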
DEFF Research Database (Denmark)
Yan, Wei; Belkadi, Abdelkrim; Michelsen, Michael Locht
2013-01-01
Flash calculation can be a time-consuming part of compositional reservoir simulations, and several approaches have been proposed to speed it up. One recent approach is the shadow-region method, which reduces the computation time mainly by skipping stability analysis for a large portion of the compositions in the single-phase region. In the two-phase region, a highly efficient Newton-Raphson algorithm can be used with the initial estimates from the previous step. Another approach is the compositional-space adaptive-tabulation (CSAT) approach, which is based on tie-line table look-up (TTL). It saves ... be made. Comparison between the shadow-region approach and the approximation approach, including TTL and TDBA, has been made with a slimtube simulator in which the simulation temperature and pressure are held constant. It is shown that TDBA can significantly improve the speed in the two...
Directory of Open Access Journals (Sweden)
Xueli Chen
2010-01-01
Full Text Available During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating photon transport inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, covering the simulation of photon transport both in tissues and in free space. Specifically, lens-system simplification theory is utilized to model the camera lens of the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered to establish the correspondence between points on the tissue surface and on the CCD camera. Furthermore, a parallel version of the framework is implemented, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.
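The tissue-side part of such simulations follows a standard recipe: draw free path lengths from an exponential distribution with the total interaction coefficient, decide absorption versus scattering by the single-scattering albedo, and redirect scattered photons. The 1-D slab sketch below uses isotropic scattering and invented coefficients; the free-space/lens half of the framework is not modeled.

```python
import math
import random

# Hedged sketch of Monte Carlo photon transport through a 1-D slab.

def slab_transmission(mu_a, mu_s, thickness, n_photons=20000, seed=7):
    """Fraction of photons leaving the slab through the far side."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t if mu_t > 0 else 0.0
    out = 0
    for _ in range(n_photons):
        z, mu_z = 0.0, 1.0                        # depth, direction cosine
        while True:
            step = -math.log(1.0 - rng.random()) / mu_t   # exponential path
            z += mu_z * step
            if z >= thickness:
                out += 1
                break
            if z <= 0.0:
                break                              # back-scattered out
            if rng.random() > albedo:
                break                              # absorbed
            mu_z = 2.0 * rng.random() - 1.0        # isotropic redirection
    return out / n_photons

t_clear = slab_transmission(mu_a=1.0, mu_s=0.0, thickness=1.0)
t_turbid = slab_transmission(mu_a=1.0, mu_s=5.0, thickness=1.0)
```

With pure absorption the estimate should reproduce Beer-Lambert attenuation exp(-mu_a * L), a convenient sanity check before adding anisotropic phase functions or refractive boundaries.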
Methods of channeling simulation
International Nuclear Information System (INIS)
Barrett, J.H.
1989-06-01
Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed. 30 refs., 3 figs
A physics based method for combining multiple anatomy models with application to medical simulation.
Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David
2009-01-01
We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.
International Nuclear Information System (INIS)
Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.
2009-01-01
The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum whose calculated HVLs approximately equal those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: one using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme, and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types
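The HVL arithmetic underlying such equivalent-spectrum fitting can be shown in miniature: for a polyenergetic beam, transmission is a weighted sum of exponentials, the first HVL is the thickness that halves it, and the second HVL of the hardened beam is larger. The two-component "spectrum" and its attenuation coefficients below are hypothetical; real equivalent-source construction fits a full spectrum plus bowtie filtration, which is not reproduced here.

```python
import math

# Hedged sketch: HVL of a polyenergetic beam by bisection, plus the
# beam-hardening effect (HVL2 > HVL1).

def transmitted_fraction(spectrum, t):
    """Remaining intensity fraction after filter thickness t (cm);
    spectrum is a list of (weight, mu) pairs with mu in 1/cm."""
    total = sum(w for w, mu in spectrum)
    return sum(w * math.exp(-mu * t) for w, mu in spectrum) / total

def half_value_layer(spectrum, hi=50.0):
    """Bisection for the thickness that halves transmission."""
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if transmitted_fraction(spectrum, mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mono = [(1.0, 0.5)]                            # monoenergetic check case
hvl_mono = half_value_layer(mono)              # should equal ln(2)/mu

poly = [(0.5, 1.0), (0.5, 0.2)]                # hypothetical soft + hard parts
hvl1 = half_value_layer(poly)
hardened = [(w * math.exp(-mu * hvl1), mu) for w, mu in poly]
hvl2 = half_value_layer(hardened)              # hardening: hvl2 > hvl1
```

Fitting an equivalent spectrum then amounts to adjusting the weights until both computed HVLs match the measured ones, which is the inverse of the forward calculation sketched here.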
Energy Technology Data Exchange (ETDEWEB)
Prasanth, P S; Kakkassery, Jose K; Vijayakumar, R, E-mail: y3df07@nitc.ac.in, E-mail: josekkakkassery@nitc.ac.in, E-mail: vijay@nitc.ac.in [Department of Mechanical Engineering, National Institute of Technology Calicut, Kozhikode - 673 601, Kerala (India)
2012-04-01
A modified phenomenological model is constructed for the simulation of rarefied flows of polyatomic non-polar gas molecules by the direct simulation Monte Carlo (DSMC) method. This variable hard sphere-based model employs a constant rotational collision number, but all its collisions are inelastic in nature and at the same time the correct macroscopic relaxation rate is maintained. In equilibrium conditions, there is equi-partition of energy between the rotational and translational modes and it satisfies the principle of reciprocity or detailed balancing. The present model is applicable for moderate temperatures at which the molecules are in their vibrational ground state. For verification, the model is applied to the DSMC simulations of the translational and rotational energy distributions in nitrogen gas at equilibrium and the results are compared with their corresponding Maxwellian distributions. Next, the Couette flow, the temperature jump and the Rayleigh flow are simulated; the viscosity and thermal conductivity coefficients of nitrogen are numerically estimated and compared with experimentally measured values. The model is further applied to the simulation of the rotational relaxation of nitrogen through low- and high-Mach-number normal shock waves in a novel way. In all cases, the results are found to be in good agreement with theoretically expected and experimentally observed values. It is concluded that the inelastic collision of polyatomic molecules can be predicted well by employing the constructed variable hard sphere (VHS)-based collision model.
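For context, a standard Larsen-Borgnakke exchange step for a colliding pair with two rotational degrees of freedom — the baseline scheme that the paper's model modifies — can be sketched as follows. The VHS exponent `omega` and rotational collision number `z_rot` are illustrative defaults, not the paper's values.

```python
import random


def larsen_borgnakke_2dof(e_trans, e_rot, omega=0.74, z_rot=5.0,
                          rng=random.Random(0)):
    """One translational-rotational exchange step (Larsen-Borgnakke)
    for a colliding pair with 2 rotational degrees of freedom.

    With probability 1/z_rot the total collision energy is
    redistributed; for 2 rotational dof the rotational fraction f
    has CDF 1 - (1 - f)**(5/2 - omega), inverted here by sampling
    f = 1 - R**(1/(5/2 - omega)). Total energy is conserved.
    """
    if rng.random() > 1.0 / z_rot:
        return e_trans, e_rot  # treated as elastic this collision
    e_total = e_trans + e_rot
    f = 1.0 - rng.random() ** (1.0 / (2.5 - omega))
    return e_total * (1.0 - f), e_total * f


print(larsen_borgnakke_2dof(1.0, 0.5))
```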
An extensive study on a simple method estimating response spectrum based on a simulated spectrum
International Nuclear Information System (INIS)
Sato, H.; Komazaki, M.; Ohori, M.
1977-01-01
The basic procedure is briefly described in the paper. For each peak of the response spectrum of the earthquake motion, the component at the corresponding predominant ground period was taken. The acceleration amplification factor of a building structure for each such predominant period was obtained from the spectrum of a simulated earthquake with a single predominant period. The weight of each component when summing these amplification factors was chosen to satisfy the ratio among the magnitudes of the spectral peaks, and the summation was performed by the principle of the square root of the sum of squares. The procedure was easily applied to estimating the spectrum of a building appendage structure. The method is then extended to a multi-storey building structure and its appendage. A two-storey structure is analyzed whose first-mode amplitude ratio of the upper mass to the lower is 2 to 1, so that the mode shape is an inverted triangle. The behavior of the system is treated in normal coordinates. The amplification factors due to two predominant ground periods are estimated for the first natural frequency; here the method developed for the single-degree-of-freedom system is directly applicable. The same method is used for the second natural frequency. The amplification factors thus estimated for each mode, after multiplication by the corresponding excitation coefficient, are again combined by the square-root-of-sum-of-squares principle
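The combination rule at the heart of the procedure, the square root of the sum of squares (SRSS), is simple to state in code; the numbers below are arbitrary illustrative amplification factors, not values from the paper.

```python
import math


def srss(factors):
    """Combine component amplification factors by the square root
    of the sum of squares (SRSS)."""
    return math.sqrt(sum(a * a for a in factors))


# e.g. amplification factors from two ground predominant periods
print(srss([3.0, 4.0]))  # -> 5.0
```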
Simulation based sequential Monte Carlo methods for discretely observed Markov processes
Neal, Peter
2014-01-01
Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...
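The simulation building block the authors exploit, the Gillespie algorithm, is easy to sketch. The immigration-death process below (X -> X+1 at rate `lam`, X -> X-1 at rate `mu*X`) is a placeholder model chosen for illustration, not one from the paper.

```python
import random


def gillespie(x0, lam, mu, t_end, rng=random.Random(42)):
    """Exact stochastic simulation (Gillespie) of an
    immigration-death process: X -> X+1 at rate lam,
    X -> X-1 at rate mu * X. Returns the state at t_end.
    """
    t, x = 0.0, x0
    while True:
        total_rate = lam + mu * x
        t += rng.expovariate(total_rate)  # time to next event
        if t > t_end:
            return x
        # choose which event fires, proportionally to its rate
        x += 1 if rng.random() * total_rate < lam else -1


# One trajectory; the stationary distribution is Poisson(lam/mu)
print(gillespie(0, 10.0, 1.0, 20.0))
```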
Integrated Building Energy Design of a Danish Office Building Based on Monte Carlo Simulation Method
DEFF Research Database (Denmark)
Sørensen, Mathias Juul; Myhre, Sindre Hammer; Hansen, Kasper Kingo
2017-01-01
The focus on reducing buildings' energy consumption is gradually increasing, and optimizing a building's performance to maximize its potential poses great challenges between architects and engineers. In this study, we collaborate with a group of architects on a design project of a new...... office building located in Aarhus, Denmark. Building geometry, floor plans and employee schedules were obtained from the architects and form the basis for this study. The study aims to simplify the iterative design process, which is based on the traditional trial-and-error method, in the late design phases...
Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel
2013-06-01
To assess the on-orbit servicing (OOS) paradigm and optimize its utility by taking advantage of its inherent flexibility and responsiveness, OOS system assessment and optimization methods based on lifecycle simulation under uncertainty are studied. The uncertainty sources considered in this paper include both aleatory types (random launch/OOS operation failures and on-orbit component failures) and an epistemic type (the unknown trend of the end-user market price). First, the lifecycle simulation under uncertainty is discussed and its chronological flowchart is presented. The cost and benefit models are established, and their uncertainties are modeled. The dynamic programming method for making optimal decisions in the face of uncertain events is introduced. Second, a method to analyze how the uncertainties propagate to the OOS utility is studied. Combining probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, on which an OOS utility assessment tool under mixed uncertainty is built. Third, to further optimize the OOS system under mixed uncertainty, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of traditional RBO, which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourth, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool, and the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.
Directory of Open Access Journals (Sweden)
Zhengang Guo
2017-03-01
Complex and customized manufacturing requires a high level of collaboration between production and logistics in a flexible production system. With the widespread use of Internet of Things technology in manufacturing, a great amount of real-time, multi-source manufacturing and logistics data is created that can be used to support production-logistics collaboration. To address this need, this paper proposes a timed colored Petri net simulation-based self-adaptive collaboration method for Internet of Things-enabled production-logistics systems. The method combines the schedule of token sequences in the timed colored Petri net with the real-time status of key production and logistics equipment. The key equipment is made 'smart' so that it actively publishes or requests logistics tasks. An integrated framework based on a cloud service platform is introduced to provide the basis for self-adaptive collaboration of production-logistics systems. A simulation experiment is conducted with colored Petri nets (CPN Tools) to validate the performance and applicability of the proposed method. Computational experiments demonstrate that the proposed method outperforms the event-driven method in terms of reduced waiting time, makespan, and electricity consumption. The proposed method is also applicable to other manufacturing systems for implementing production-logistics collaboration.
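A timed Petri net executes by consuming and producing tokens while advancing a clock. The minimal (uncolored) executor below is an illustrative sketch, not the paper's CPN model; the place and transition names are invented.

```python
def run_timed_pn(marking, transitions, sequence):
    """Fire a sequence of transitions in a minimal timed Petri net.

    `transitions` maps a name to (consume, produce, duration),
    where consume/produce map places to token counts. Returns the
    final marking and the accumulated firing time; raises if a
    transition in the sequence is not enabled.
    """
    clock = 0.0
    m = dict(marking)
    for name in sequence:
        consume, produce, duration = transitions[name]
        for place, k in consume.items():
            if m.get(place, 0) < k:
                raise ValueError(f"{name} is not enabled at {place}")
            m[place] -= k
        for place, k in produce.items():
            m[place] = m.get(place, 0) + k
        clock += duration
    return m, clock


# Toy production-logistics net: transport a part, then process it
net = {
    "transport": ({"buffer": 1}, {"machine": 1}, 2.0),
    "process": ({"machine": 1}, {"done": 1}, 3.0),
}
print(run_timed_pn({"buffer": 1}, net, ["transport", "process"]))
```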
Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.
2015-04-01
Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improving image quality or reducing radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 down to 25% of D0. A lesion of fixed size and contrast was inserted at different locations in the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal-known-exactly (SKE), background-known-exactly-but-variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal-performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of the different reconstruction methods between CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than with RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low-dose protocol, lower than the standard dose because IR methods were in use. At 75% of D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.
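The empirical AUC summary measure used here is, for a linear observer such as the CHO, simply the Mann-Whitney statistic of the observer's test statistics on signal-present versus signal-absent images. A minimal sketch (the score lists are invented):

```python
def auc_mann_whitney(signal_scores, noise_scores):
    """Empirical AUC: the fraction of (signal, noise) score pairs
    ranked correctly by the observer, counting ties as 1/2."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            wins += 1.0 if s > n else (0.5 if s == n else 0.0)
    return wins / (len(signal_scores) * len(noise_scores))


print(auc_mann_whitney([1.2, 0.8, 1.5], [0.3, 0.9, 0.1]))
```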
Directory of Open Access Journals (Sweden)
Xia Xiaozhou
2013-01-01
Within the framework of the extended finite element method, an exponential discontinuity function is introduced to represent the discontinuity across the crack, and a crack-tip enrichment function built from a triangular basis function and a linear polar-radius function is adopted to describe the displacement field around the elastoplastic crack tip. The linear polar-radius form is chosen to weaken the singularity induced by the plastic yield zone at the crack tip, while the triangular basis function describes how the displacement varies with the polar angle around the tip. Based on a displacement model containing the above enrichment functions, the incremental iterative form of the elastoplastic extended finite element method is derived from the virtual work principle. For non-uniformly hardening materials such as concrete, a plastic flow rule containing a cross term, based on the least energy dissipation principle, is adopted to avoid the asymmetric stiffness matrix induced by non-associated plastic flow. Finally, numerical examples show that the elastoplastic X-FEM constructed in this paper is valid.
Risk-based transfer responses to climate change, simulated through autocorrelated stochastic methods
Kirsch, B.; Characklis, G. W.
2009-12-01
Maintaining municipal water supply reliability despite growing demands can be achieved through a variety of mechanisms, including supply strategies such as temporary transfers. However, much of the attention on transfers has been focused on market-based transfers in the western United States, largely ignoring the potential for transfers in the eastern U.S. The different legal frameworks of the eastern and western U.S. lead to characteristic differences between their respective transfers. Western transfers tend to be agricultural-to-urban and involve raw, untreated water, with the transfer often involving a simple change in the location and/or timing of withdrawals. Eastern transfers tend to be contractually established urban-to-urban transfers of treated water, thereby requiring infrastructure to transfer water between utilities. Utilities require tools to evaluate transfer decision rules and the resulting expected future transfer behavior. Given the long-term planning horizons of utilities, potential changes in hydrologic patterns due to climate change must be considered. In response, this research develops a method for generating a stochastic time series that reproduces the historic autocorrelation and can be adapted to accommodate future climate scenarios. While analogous in operation to an autoregressive model, this method reproduces the seasonal autocorrelation structure, as opposed to assuming the strict stationarity produced by an autoregressive model. Such urban-to-urban transfers are designed to be rare, transient events used primarily during times of severe drought, and incorporating Monte Carlo techniques allows for the development of probability distributions of likely outcomes. This research evaluates a risk-based, urban-to-urban transfer agreement between three utilities in the Triangle region of North Carolina. Two utilities maintain their own surface water supplies in adjoining watersheds and look to obtain transfers via
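The idea of reproducing a seasonal (periodic) lag-1 autocorrelation structure, rather than assuming stationarity, can be sketched with a Thomas-Fiering-style generator. All seasonal means, standard deviations, and correlations below are invented placeholders, not calibrated values from the study.

```python
import math
import random


def seasonal_ar1(means, stds, rhos, n_years, rng=random.Random(0)):
    """Thomas-Fiering-style seasonal lag-1 generator.

    Each season s has its own mean, standard deviation, and lag-1
    correlation rhos[s] with the preceding season, so the seasonal
    autocorrelation structure is reproduced instead of assuming a
    strictly stationary autoregressive model.
    """
    series, z_prev = [], 0.0
    for _ in range(n_years):
        for s in range(len(means)):
            z = (rhos[s] * z_prev
                 + math.sqrt(1.0 - rhos[s] ** 2) * rng.gauss(0.0, 1.0))
            series.append(means[s] + stds[s] * z)
            z_prev = z
    return series


# Two illustrative "seasons" (wet/dry); all numbers are placeholders
flows = seasonal_ar1([5.0, 10.0], [1.0, 2.0], [0.3, 0.6], n_years=3)
print(flows)
```

Monte Carlo evaluation of a transfer agreement would then run many such synthetic traces through the decision rules and tabulate the distribution of outcomes.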
Directory of Open Access Journals (Sweden)
Guanchao Jiang
2016-02-01
Background: The National Clinical Skills Competition has been held in China for 5 consecutive years since 2010 to promote undergraduate education reform and improve teaching quality. The effects of the simulation-based competition are analyzed in this study. Methods: Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires in medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. Results: The results show that approximately 450 students from more than 110 colleges (accounting for 81% of colleges providing undergraduate clinical medical education in China) participated in the competition each year. The knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty copies of the questionnaires were distributed to 110 participating medical schools in 2015. In total, 752 valid responses were received across 95 schools. The majority of the interviewees agreed or strongly agreed that the competition promoted the adoption of advanced educational principles (76.8%), updated the curriculum model and instructional methods (79.8%), strengthened faculty development (84.0%), improved educational resources (82.1%), and benefited all students (53.4%). Conclusions: The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China.
The simulation of electrostatic coupling intra-body communication based on the finite-element method
Institute of Scientific and Technical Information of China (English)
Song Yong; Zhang Kai; Yang Guang; Zhu Kang; Hao Qun
2011-01-01
In this paper, the electrostatic coupling intra-body communication (IBC) is investigated by computer simulation using the developed finite-element models: (a) the incidence and reflection of the electric signal in the upper-arm model were analyzed using the theory of electromagnetic waves; (b) the finite-element models of electrostatic coupling IBC were developed with the electromagnetic analysis package of the ANSYS software; and (c) the signal attenuation of electrostatic coupling IBC was simulated under different signal frequencies, electrode directions, electrode sizes and transmission distances. Finally, some important conclusions are drawn on the basis of the simulation results.
Peng, Di; Saravia, Florencia; Abbt-Braun, Gudrun; Horn, Harald
2016-01-01
Trihalomethanes (THM) are the most typical disinfection by-products (DBPs) found in public swimming pool water. DBPs are produced when organic and inorganic matter in water reacts with chemical disinfectants. The irregular contribution of substances from pool visitors and the long contact time with disinfectant make forecasting THM in pool water a challenge. In this work the occurrence of THM in a public indoor swimming pool was investigated and correlated with the dissolved organic carbon (DOC). Daily sampling of pool water for 26 days showed a positive correlation between DOC and THM with a time delay of about two days, while THM and DOC did not correlate directly with the number of visitors. Based on these results and a mass balance of the pool water, a simple simulation model for estimating the THM concentration in indoor swimming pool water was proposed. Formation of THM from DOC, volatilization into air, and elimination by pool water treatment were included in the simulation. The THM formation ratio obtained from laboratory analysis of native pool water, together with information from a field study in an indoor swimming pool, reduced the uncertainty of the simulation. The simulation was validated by measurements in the swimming pool over 50 days, and the simulated results were in good agreement with the measured results. This work provides a useful and simple method for predicting the THM concentration and its long-term accumulation trend in indoor swimming pool water. Copyright © 2015 Elsevier Ltd. All rights reserved.
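The mass-balance structure described — formation from DOC, volatilization into air, and removal by treatment — reduces to a one-compartment ODE. The sketch below uses invented rate constants purely for illustration; the paper's calibrated formation ratio and field data are not reproduced here.

```python
def simulate_thm(days, doc, k_form, k_vol, k_treat, c0=0.0, dt=0.01):
    """Forward-Euler integration of a single-compartment THM mass
    balance: dC/dt = k_form * DOC - (k_vol + k_treat) * C, i.e.
    formation from DOC minus volatilization and treatment losses.
    """
    c = c0
    for _ in range(int(days / dt)):
        c += dt * (k_form * doc - (k_vol + k_treat) * c)
    return c


# Illustrative constants: C approaches k_form*DOC / (k_vol + k_treat)
print(simulate_thm(days=100.0, doc=2.0, k_form=0.1,
                   k_vol=0.3, k_treat=0.2))
```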
Jiang, Guanchao; Chen, Hong; Wang, Qiming; Chi, Baorong; He, Qingnan; Xiao, Haipeng; Zhou, Qinghuan; Liu, Jing; Wang, Shan
2016-01-01
The National Clinical Skills Competition has been held in China for 5 consecutive years since 2010 to promote undergraduate education reform and improve the teaching quality. The effects of the simulation-based competition will be analyzed in this study. Participation in the competitions and the compilation of the questions used in the competition finals are summarized, and the influence and guidance quality are further analyzed. Through the nationwide distribution of questionnaires in medical colleges, the effects of the simulation-based competition on promoting undergraduate medical education reform were evaluated. The results show that approximately 450 students from more than 110 colleges (accounting for 81% of colleges providing undergraduate clinical medical education in China) participated in the competition each year. The knowledge, skills, and attitudes were comprehensively evaluated by simulation-based assessment. Eight hundred and eighty copies of the questionnaires were distributed to 110 participating medical schools in 2015. In total, 752 valid responses were received across 95 schools. The majority of the interviewees agreed or strongly agreed that competition promoted the adoption of advanced educational principles (76.8%), updated the curriculum model and instructional methods (79.8%), strengthened faculty development (84.0%), improved educational resources (82.1%), and benefited all students (53.4%). The National Clinical Skills Competition is widely accepted in China. It has effectively promoted the reform and development of undergraduate medical education in China.
A Wigner-based ray-tracing method for imaging simulations
Mout, B.M.; Wick, M.; Bociort, F.; Urbach, H.P.
2015-01-01
The Wigner Distribution Function (WDF) forms an alternative representation of the optical field. It can be a valuable tool for understanding and classifying optical systems. Furthermore, it possesses properties that make it suitable for optical simulations: both the intensity and the angular
International Nuclear Information System (INIS)
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-01-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedies the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
Energy Technology Data Exchange (ETDEWEB)
Densmore, J.D., E-mail: jeffery.densmore@unnpp.gov [Bettis Atomic Power Laboratory, P.O. Box 79, West Mifflin, PA 15122 (United States); Park, H., E-mail: hkpark@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Wollaber, A.B., E-mail: wollaber@lanl.gov [Computational Physics and Methods Group, Los Alamos National Laboratory, P.O. Box 1663, MS D409, Los Alamos, NM 87545 (United States); Rauenzahn, R.M., E-mail: rick@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States); Knoll, D.A., E-mail: nol@lanl.gov [Fluid Dynamics and Solid Mechanics Group, Los Alamos National Laboratory, P.O. Box 1663, MS B216, Los Alamos, NM 87545 (United States)
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedies the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
Directory of Open Access Journals (Sweden)
Jintao Song
2015-01-01
The foundation boundaries of numerical simulation models of hydraulic structures dominated by a vertical load are investigated. The method is based on the stress formula for the fundamental solution of a semi-infinite elastic space under a vertical concentrated force. The limit method is introduced into the original formula, which is then partitioned and analyzed along the direction of the depth extension of the foundation. The point load is replaced by a linear load of length 2a. Inverse-proportion assumptions relating the parameter a and the depth l of the calculation points are proposed to resolve the singularity of the elastic stress in a semi-infinite space near the ground surface. Compared with the original formula, replacing the point load with a linear load of length 2a is more reasonable. Finally, the boundary depth criterion of a hydraulic numerical simulation model is derived and applied to determine the depth boundary formula for gravity dam numerical simulations.
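The fundamental solution referred to is the Boussinesq formula for the vertical stress in a semi-infinite elastic body under a vertical concentrated force; a direct transcription is shown below (the load and coordinates in the example are arbitrary).

```python
import math


def boussinesq_sigma_z(p, r, z):
    """Boussinesq vertical stress at depth z and horizontal offset r
    under a vertical point load p on a semi-infinite elastic body:
    sigma_z = 3 p z^3 / (2 pi R^5), with R^2 = r^2 + z^2.
    """
    big_r = math.hypot(r, z)
    return 3.0 * p * z ** 3 / (2.0 * math.pi * big_r ** 5)


# Stress decays with depth along the load axis (r = 0)
print(boussinesq_sigma_z(1000.0, 0.0, 1.0))
print(boussinesq_sigma_z(1000.0, 0.0, 5.0))
```

Integrating this kernel over a line of length 2a gives the linear-load stress that the paper substitutes for the point load near the surface.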
Directory of Open Access Journals (Sweden)
Zhao Dawei
2016-01-01
In recent years, a significant number of large-scale solar photovoltaic (PV) plants have been put into operation or planned around the world. Model accuracy is the key factor in investigating the mutual influences between solar PV plants and a power grid; however, this problem has not been well solved, especially regarding how to use real measurements to validate the models of solar PV plants. Taking the fast-responding generator method as an example, this paper presents a model validation methodology for solar PV plants via hybrid data dynamic simulation. First, an implementation scheme of hybrid data dynamic simulation suitable for the DIgSILENT PowerFactory software is proposed, and an analysis model of solar PV plant integration based on the IEEE 9-bus system is established. Finally, model validation of the solar PV plant is achieved by employing hybrid data dynamic simulation. The results illustrate the effectiveness of the proposed method for solar PV plant model validation.
Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.
2017-12-01
There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are setup in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is setup to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation
Continuous surface force based lattice Boltzmann equation method for simulating thermocapillary flow
International Nuclear Information System (INIS)
Zheng, Lin; Zheng, Song; Zhai, Qinglan
2016-01-01
In this paper, we extend a lattice Boltzmann equation (LBE) method with a continuous surface force (CSF) to simulate thermocapillary flows. The model is built on our previous CSF LBE for athermal two-phase flow, in which the interfacial tension forces and the Marangoni stresses arising from the interface interactions between different phases are described by the CSF concept. In this model, the sharp interfaces between different phases are replaced by narrow transition layers, and the kinetics and morphological evolution of phase separation are characterized by an order parameter via the Cahn–Hilliard equation, which is solved within the framework of the LBE. The scalar convection–diffusion equation for the temperature field is solved by a thermal LBE. The model is validated against thermal two-layered Poiseuille flow and against two superimposed planar fluids at negligibly small Reynolds and Marangoni numbers for thermocapillary-driven convection, both of which have analytical solutions for the velocity and temperature. Thermocapillary migration of two- and three-dimensional deformable droplets is then simulated. Numerical results show that the predictions of the present LBE agree with the analytical solutions and other numerical results. - Highlights: • A CSF-based LBE for thermocapillary flows. • Thermal two-layered Poiseuille flows. • Thermocapillary migration.
Continuous surface force based lattice Boltzmann equation method for simulating thermocapillary flow
Energy Technology Data Exchange (ETDEWEB)
Zheng, Lin, E-mail: lz@njust.edu.cn [School of Energy and Power Engineering, Nanjing University of Science and Technology, Nanjing 210094 (China); Zheng, Song [School of Mathematics and Statistics, Zhejiang University of Finance and Economics, Hangzhou 310018 (China); Zhai, Qinglan [School of Economics Management and Law, Chaohu University, Chaohu 238000 (China)
2016-02-05
In this paper, we extend a lattice Boltzmann equation (LBE) method with a continuous surface force (CSF) to simulate thermocapillary flows. The model is built on our previous CSF LBE for athermal two-phase flow, in which the interfacial tension forces and the Marangoni stresses arising from the interface interactions between different phases are described by the CSF concept. In this model, the sharp interfaces between different phases are replaced by narrow transition layers, and the kinetics and morphological evolution of phase separation are characterized by an order parameter via the Cahn–Hilliard equation, which is solved within the framework of the LBE. The scalar convection–diffusion equation for the temperature field is solved by a thermal LBE. The model is validated against thermal two-layered Poiseuille flow and against two superimposed planar fluids at negligibly small Reynolds and Marangoni numbers for thermocapillary-driven convection, both of which have analytical solutions for the velocity and temperature. Thermocapillary migration of two- and three-dimensional deformable droplets is then simulated. Numerical results show that the predictions of the present LBE agree with the analytical solutions and other numerical results. - Highlights: • A CSF-based LBE for thermocapillary flows. • Thermal two-layered Poiseuille flows. • Thermocapillary migration.
Czech Academy of Sciences Publication Activity Database
Marek, Pavel; Guštar, M.; Permaul, K.
1999-01-01
Roč. 14, č. 1 (1999), s. 105-118 ISSN 0266-8920 R&D Projects: GA ČR GA103/94/0562; GA ČR GV103/96/K034 Keywords : reliability * safety * failure * durability * Monte Carlo method Subject RIV: JM - Building Engineering Impact factor: 0.522, year: 1999
Directory of Open Access Journals (Sweden)
Polat Sendur
2017-01-01
Current analytical and test methods for the analysis, testing and improvement of vehicle vibrations are reviewed. The methods are illustrated on the determination and improvement of powertrain-induced steering wheel vibration in a heavy commercial truck. More specifically, the transmissibility of powertrain idle vibration to the cabin is investigated with respect to the powertrain rigid-body modes, and the modal alignment of the steering column/wheel system is considered. It is found that the roll mode of the powertrain is not separated from the idle excitation well enough for effective vibration isolation, and that the steering column/wheel mode is close to the 3rd engine excitation frequency order, which results in high vibration levels. The powertrain roll mode is optimized by tuning the powertrain mount stiffness to improve the performance. The steering column mode is also separated from the 3rd engine excitation order by the application of a mass absorber. It is concluded that the use of analytical and test methods to address the complex relation between design parameters and powertrain idle response is effective for optimizing system performance and evaluating trade-offs in vehicle design, such as vibration performance versus weight. Reference Number: www.asrongo.org/doi:4.2017.2.1.10
An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation
Directory of Open Access Journals (Sweden)
Wuming Zhang
2016-06-01
Full Text Available Separating point clouds into ground and non-ground measurements is an essential step in generating digital terrain models (DTMs) from airborne LiDAR (light detection and ranging) data. However, most filtering algorithms require a number of complicated parameters to be set carefully to achieve high accuracy. In this paper, we present a new filtering method which needs only a few easy-to-set integer and Boolean parameters. In the proposed approach, the LiDAR point cloud is inverted, and a rigid cloth is used to cover the inverted surface. By analyzing the interactions between the cloth nodes and the corresponding LiDAR points, the locations of the cloth nodes can be determined, generating an approximation of the ground surface. Finally, the ground points can be extracted from the LiDAR point cloud by comparing the original LiDAR points with the generated surface. Benchmark datasets provided by the ISPRS (International Society for Photogrammetry and Remote Sensing) Working Group III/3 are used to validate the proposed filtering method, and the experimental results yield an average total error of 4.58%, which is comparable with most state-of-the-art filtering algorithms. The proposed easy-to-use filtering method may help users without much experience apply LiDAR data and related technology in their own applications more easily.
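The core idea of the cloth-simulation filter can be illustrated with a toy one-dimensional version: invert the point cloud, let a stiff "cloth" settle onto it under gravity, and classify points lying near the settled cloth as ground. This is a hedged sketch only; the function name, the parameters, and the simple gravity/smoothing updates are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cloth_filter_1d(z, n_iters=200, step=0.1, threshold=0.5):
    """Toy 1-D sketch of the cloth-simulation filter (CSF) idea.

    z: heights of LiDAR points along a transect (regular spacing assumed).
    Returns a boolean mask, True where a point is classified as ground.
    """
    inv = -np.asarray(z, dtype=float)            # 1) invert the point cloud
    cloth = np.full_like(inv, inv.max() + 1.0)   # 2) cloth starts above it
    for _ in range(n_iters):
        cloth = np.maximum(cloth - step, inv)    # gravity, blocked by points
        neigh = cloth.copy()
        neigh[1:-1] = 0.5 * (cloth[:-2] + cloth[2:])
        cloth = np.maximum(0.5 * (cloth + neigh), inv)  # internal stiffness
    return np.abs(cloth - inv) < threshold       # 3) points near settled cloth
```

In the real method the cloth is a two-dimensional grid with tunable rigidness; here a single neighbor-averaging pass per step stands in for the internal forces. A point far above its surroundings (e.g. vegetation) leaves the settled cloth far away in the inverted view and is classified as non-ground.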
Amador, Julie M.
2017-01-01
The purpose of this study was to implement a Video Simulation Task in a mathematics methods teacher education course to engage preservice teachers in considering both the teaching and learning aspects of mathematics lesson delivery. Participants anticipated student and teacher thinking and created simulations, in which they acted out scenes on a…
Müller, C.; Seeger, M.; Schneider, R.; Johst, M.; Casper, M.
2009-04-01
Land use and land management changes affect runoff and erosion dynamics, so measures in this area are often directed towards the mitigation of natural hazards such as floods and landslides. However, the effects of these changes (e.g. in soil physics after reforestation or after less intensive agriculture) are i) only detectable many years later or ii) hardly observable with conventional methods. Therefore, sprinkling experiments are frequently used for process-based investigations of the near-surface hydrological response as well as of rill and interrill erosion. In this study, two different sprinkling systems were applied under different land uses and at different scales to elucidate and quantify the dominant processes of runoff generation, and to relate them to the detachment and transport of solids. The studies took place in the micro-scale basins Zemmer and Frankelbach in Germany. In the Zemmer basin the sprinkling experiments were performed on agricultural land, while the experiments in Frankelbach were performed at reforested sites. The experiments were carried out i) with a small mobile rainfall simulator producing high rainfall intensities (40 mm h-1) and ii) with a larger one covering a slope segment and simulating high rainfall amounts (120 mm in 3 days). Both methods show basically comparable results. On the agricultural sites, clear differences could be observed between soil management types: in contrast to conventionally tilled soils, deeply loosened soils (in combination with conservation tillage) do not produce overland flow, but tend to transfer more water by interflow processes, retaining large amounts in the subsoil. For the forested sites, runoff shows a high variability, as determined by both the larger and the smaller rainfall simulations. This variability is due to the different forest and soil types rather than to methodological differences between the sprinkling systems. Both rainfall simulation systems characterized the runoff behavior in a
Energy Technology Data Exchange (ETDEWEB)
Kim, Song Hyun; Kim, Do Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jea Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
Owing to its high computational efficiency and user convenience, the implicit method has received attention; however, the implicit methods of previous studies have low accuracy at high packing fractions. In this study, a new implicit method, which can be used at any packing fraction with high accuracy, is proposed. An implicit modeling method for spherical-particle-distributed media in MC simulation is presented. A new concept for spherical particle sampling was developed to solve the problems of the previous implicit methods. The sampling method was verified by simulation in infinite and finite media. The results show that implicit particle modeling with the proposed method was performed accurately at all packing fractions. It is expected that the proposed method can be efficiently utilized for spherical-particle-distributed media such as fusion reactor blankets, VHTR reactors, and shielding analyses.
Directory of Open Access Journals (Sweden)
V. V. Zelentsov
2017-01-01
Full Text Available The significant amount of space debris in near-Earth space makes it necessary to protect space vehicles from debris fragments. Existing empirical calculation methods do not allow the quality of a developed protection to be estimated. Experimental verification of protection requires complex and expensive installations, which still do not reach the desired impact velocities. The article proposes to use the ANSYS AUTODYN software environment, a software package for nonlinear dynamic analysis, to evaluate the quality of the developed protection. The ANSYS AUTODYN environment is based on integration methods for the system of equations of continuum mechanics. The smoothed particle hydrodynamics (SPH) method is used as a solver. The SPH method is based on sampling the computational domain with a finite set of Lagrangian particles that can be regarded as elementary volumes of the medium. In the modeling, the targets were impacted by spheres of 2 and 3 mm and by cylinders with a bottom diameter of 2 mm and a generatrix of 2 and 3 mm. The spheres and cylinders were both solid and hollow, with a wall thickness of 0.5 mm. The impact velocity of the particles with the target was assumed to be 7.5 km/s. The number of integration cycles in all calculations was assumed to be 1000. The velocity of debris fragments flying off the target material is obtained as a function of the h/d ratio (h is the target thickness, d the diameter of the sphere or cylinder end). The simulated damage pattern coincides both with the results of an experimental study carried out at Tomsk State Technical University and with results described in the literature.
Directory of Open Access Journals (Sweden)
Carlos Morcillo-Herrera
2015-01-01
Full Text Available This paper presents a practical method for calculating the electrical energy (kWh) generated by a PV panel through MATLAB simulations based on the mathematical model of the cell, which obtains the "Mean Maximum Power Point" (MMPP) on the characteristic V-P curve in response to historical climate data at a specific location. This five-step method calculates, through the MMPP per day, month, or year, the power yield per unit area, the electrical energy generated by the PV panel, and its real conversion efficiency. To validate the method, it was applied to a sewage treatment plant of the Drinking Water and Sewerage Group of Yucatan (JAPAY), México, testing 250 Wp photovoltaic panels from five different manufacturers. As a result, the performance, real conversion efficiency, and electricity generated by the five PV panels under evaluation were obtained, identifying the best technical-economic option for developing the PV generation project.
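The maximum-power-point step underlying such a calculation can be sketched with a textbook single-diode cell model and a scan of the V-P curve. All parameter values and function names below are illustrative assumptions, not the paper's MATLAB model or its five-step MMPP procedure:

```python
import math

def pv_power(v, i_ph=8.0, i_0=1e-9, n=1.3, t=298.15, n_cells=60):
    """Power of a single-diode PV panel model at voltage v (illustrative parameters).

    i_ph: photo-generated current [A]; i_0: diode saturation current [A];
    n: ideality factor; t: cell temperature [K]; n_cells: cells in series.
    """
    k_b, q = 1.380649e-23, 1.602176634e-19
    v_t = n_cells * n * k_b * t / q                 # thermal voltage of the string
    i = i_ph - i_0 * (math.exp(v / v_t) - 1.0)      # diode equation
    return v * max(i, 0.0)

def maximum_power_point(v_max=46.0, steps=2000):
    """Scan the V-P curve and return the voltage of the maximum power point."""
    vs = [v_max * k / steps for k in range(steps + 1)]
    return max(vs, key=pv_power)
```

Averaging such maximum power points over historical irradiance and temperature data would give an MMPP in the sense described in the abstract; the scan above is only the per-condition building block.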
Directory of Open Access Journals (Sweden)
Liu Bing
2014-10-01
Full Text Available Earthquake action is the main external factor influencing the long-term safe operation of civil structures, especially high-rise buildings. Applying the time-history method to simulate the earthquake response of the foundation and surrounding rock is an effective approach to the seismic analysis of civil buildings. Therefore, this paper develops a three-dimensional dynamic finite element numerical simulation system for civil building earthquake disasters. The system adopts the explicit central difference method. The strengthening of materials under high strain rates and the damage of surrounding rock under cyclic loading are considered. A dynamic constitutive model of rock mass suitable for the seismic analysis of civil buildings is then put forward. Finally, through a time-history simulation of the earthquake response of the Shenzhen Children's Palace, the reliability and practicality of the system are verified on a practical engineering problem.
International Nuclear Information System (INIS)
Huang, Jingying; Qin, Datong; Peng, Zhiyuan
2015-01-01
Highlights: • A two-degree-of-freedom lumped thermal model is developed for the battery. • The battery thermal model is integrated with a vehicle driving model. • Real-time battery thermal responses are obtained. • Active control of current by regenerative braking ratio adjustment is proposed. • More energy is recovered with a smaller battery temperature rise. - Abstract: Battery thermal management is important for the safety and reliability of electric vehicles. Based on parameters obtained from a battery hybrid pulse power characterization test, a two-degree-of-freedom lumped thermal model is established. The battery model is then integrated with a vehicle driving model to simulate real-time battery thermal responses. An active control method is proposed to reduce heat generation due to regenerative braking. The proposed control method not only complies with braking safety regulations, but also adjusts the regenerative braking ratio through a fuzzy controller. By comparison with other regenerative braking scenarios, the effectiveness of the proposed strategy is validated. According to the results, the proposed control strategy suppresses battery temperature rise by modifying the charge current due to regenerative braking: the overlarge components of the current are filtered out, whereas the small ones are magnified. Therefore, more energy is recovered with a smaller battery temperature rise. Compared to traditional passive heat dissipation, the proposed active methodology is feasible and provides a novel solution for electric vehicle battery thermal management.
New methods in plasma simulation
International Nuclear Information System (INIS)
Mason, R.J.
1990-01-01
The development of implicit methods for particle-in-cell (PIC) computer simulation in recent years, and their merger with older hybrid methods, have created a new arsenal of simulation techniques for the treatment of complex practical problems in plasma physics. The new implicit hybrid codes are aimed at transitional problems that lie somewhere between the long-time-scale, high-density regime associated with MHD modeling and the short-time-scale, low-density regime appropriate to conventional PIC techniques. This transitional regime arises in ICF coronal plasmas, in pulsed-power plasma switches, in Z-pinches, and in foil implosions. Here, we outline how such a merger of implicit and hybrid methods has been carried out, specifically in the ANTHEM computer code, and demonstrate the utility of implicit hybrid simulation in applications. 25 refs., 5 figs
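Why implicit schemes matter for long-time-scale problems can be illustrated on a single harmonic oscillator standing in for a plasma oscillation: an explicit push blows up once ω·Δt exceeds its stability limit, while an implicit (midpoint) push stays bounded at any step size. This is a generic numerical-analysis sketch, not code from ANTHEM:

```python
def explicit_step(x, v, dt, omega2=1.0):
    """Explicit (symplectic Euler) push for dv/dt = -omega2*x.

    Stable only for omega*dt < 2, like a conventional PIC push that must
    resolve the plasma frequency.
    """
    v = v - omega2 * x * dt
    x = x + v * dt
    return x, v

def implicit_step(x, v, dt, omega2=1.0):
    """Implicit midpoint push: fields evaluated at the half step, solved exactly.

    From v' = v - omega2*(dt/2)*(x + x') and x' = x + (dt/2)*(v + v'),
    eliminating x' gives a linear equation for v'. Stable for any dt.
    """
    a = 0.5 * dt
    v_new = (v - omega2 * a * (2.0 * x + a * v)) / (1.0 + omega2 * a * a)
    x_new = x + a * (v + v_new)
    return x_new, v_new
```

With ω = 1 and Δt = 2.5 (beyond the explicit limit of 2), the explicit trajectory grows without bound while the implicit one oscillates with bounded amplitude, which is the behavior implicit PIC exploits to step over unresolved high-frequency motion.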
Qi, Shouliang; Zhang, Baihua; Yue, Yong; Shen, Jing; Teng, Yueyang; Qian, Wei; Wu, Jianlin
2018-03-01
Tracheal bronchus (TB) is a rare congenital anomaly characterized by the presence of an abnormal bronchus originating from the trachea or main bronchi and directed toward the upper lobe. The airflow pattern in the tracheobronchial trees of TB subjects is critical, but has not been systematically studied. This study proposes to simulate the airflow using CT-image-based models and the computational fluid dynamics (CFD) method. Six TB subjects and three healthy controls (HC) are included. After the geometric model of the tracheobronchial tree is extracted from CT images, the spatial distributions of velocity, wall pressure, and wall shear stress (WSS) are obtained through CFD simulation, and the lobar distribution of air, the flow pattern, and the global pressure drop are investigated. Compared with HC subjects, the main bronchus angle of TB subjects and the variation of volume are large, while the cross-sectional growth rate is small. High airflow velocity, wall pressure, and WSS are observed locally at the tracheal bronchus, but the global patterns of these measures are still similar to those of HC. The ratio of airflow into the tracheal bronchus accounts for 6.6-15.6% of the inhaled airflow, decreasing the ratio to the right upper lobe from 15.7-21.4% (HC) to 4.9-13.6%. The air entering the tracheal bronchus originates from the right dorsal near-wall region of the trachea. The tracheal bronchus does not change the global pressure drop, which depends on multiple variables. Though the tracheobronchial trees of TB subjects present individualized features, several commonalities in structural and airflow characteristics can be revealed. The observed local alterations might provide new insight into the reasons for recurrent local infections, cough, and acute respiratory distress related to TB.
International Nuclear Information System (INIS)
Cruz, C. M.; Pinera, I; Abreu, Y.; Leyva, A.
2007-01-01
The present work concerns the implementation of a Monte Carlo based calculation algorithm describing the occurrence of atom displacements induced by gamma radiation interactions in a given target material. The atom displacement processes were considered only on the basis of single elastic scattering interactions between fast secondary electrons and matrix atoms, which are ejected from their crystalline sites at recoil energies higher than a given threshold energy. The secondary electron transport was described following typical approaches in this field, where consecutive small-angle scattering and very-low-energy-transfer events are treated as continuous quasi-classical changes of the electron state along a path length delimited by two discrete large-angle, large-energy-loss events occurring at random. A limiting scattering angle was introduced and calculated according to Moliere-Bethe-Goudsmit-Saunderson electron multiple scattering theory, which allows single scattering of secondary electrons to be separated from multiple scattering, from which a modified McKinley-Feshbach electron elastic scattering cross section arises. This distribution was statistically sampled and simulated in the framework of the Monte Carlo method to perform discrete single electron scattering processes, particularly those leading to atom displacement events. The possibility of adding this algorithm to existing open Monte Carlo code systems is analyzed, in order to improve their capabilities. (Author)
International Nuclear Information System (INIS)
Capeluto, I. Guedi; Ochoa, Carlos E.
2014-01-01
Vast amounts of the European residential stock were built with limited consideration for energy efficiency, yet its refurbishment can help reach national energy reduction goals, decreasing environmental impact. Short-term retrofits with reduced interference to inhabitants can be achieved by upgrading facades with elements that enhance energy efficiency and user comfort. The European Union-funded Meefs Retrofitting (Multifunctional Energy Efficient Façade System) project aims to develop an adaptable mass-produced facade system for energy improvement in existing residential buildings throughout the continent. This article presents a simplified methodology to identify preferred strategies and combinations for the early design stages of such system. This was derived from studying weather characteristics of European regions and outlining climatic energy-saving strategies based on human thermal comfort. Strategies were matched with conceptual technologies like glazing, shading and insulation. The typical building stock was characterized from statistics of previous European projects. Six improvements and combinations were modelled using a simulation model, identifying and ranking preferred configurations. The methodology is summarized in a synoptic scheme identifying the energy rankings of each improvement and combination for the studied climates and façade orientations. - Highlights: • First results of EU project for new energy efficient façade retrofit system. • System consists of prefabricated elements with multiple options for flexibility. • Modular strategies were determined that adapt to different climates. • Technologies matching the strategies were identified. • Presents a method for use and application in different climates across Europe
Directory of Open Access Journals (Sweden)
Jong-Ho Nam
2013-06-01
Full Text Available Ever since the Arctic region opened its mysterious passage to mankind, continuous attempts have been made to take advantage of the fast routes across it. The Arctic region is still covered by thick ice, so finding a feasible navigation route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that enables every possible route to be simulated in advance. In this work, an enhanced algorithm to determine the optimal route in the Arctic region is introduced. A transit model is developed based on numerically modeled sea ice and environmental data for the Arctic. By integrating the simulated data into the transit model, further applications such as route simulation, cost estimation or hindcasting can easily be performed. An interactive simulation system that determines the optimal Arctic route using the transit model is developed. Simulations of optimal routes are carried out and the validity of the results is discussed.
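A transit-model route search of this kind can be sketched as a shortest-time search over a grid of attainable speeds derived from the ice data. The grid-based cost model and Dijkstra search below are an illustrative assumption, not the paper's algorithm:

```python
import heapq

def optimal_route(speed, start, goal):
    """Dijkstra shortest-time search over a grid of attainable speeds.

    speed[r][c]: attainable ship speed in a cell (0 means impassable ice);
    the start cell is assumed passable. Edge cost is the travel time for
    one cell length at the mean of the two cell speeds. Returns minimal
    total time, or None if the goal is unreachable.
    """
    rows, cols = len(speed), len(speed[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        t, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return t
        if t > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and speed[rr][cc] > 0:
                nt = t + 2.0 / (speed[r][c] + speed[rr][cc])
                if nt < dist.get((rr, cc), float("inf")):
                    dist[(rr, cc)] = nt
                    heapq.heappush(pq, (nt, (rr, cc)))
    return None
```

Replacing travel time with fuel or charter cost in the edge weight turns the same search into the cost-estimation application mentioned in the abstract.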
International Nuclear Information System (INIS)
Lu, Pengyu; Gao, Qing; Wang, Yan
2016-01-01
Highlights: • A 1D/3D collaborative computing simulation method for vehicle thermal management. • Analysis of the influence of the thermodynamic systems and the engine compartment geometry on vehicle performance. • A basis for matching the energy consumption of the thermodynamic systems in the underhood. - Abstract: Vehicle integrated thermal management, covering the engine cooling circuit, the air-conditioning circuit, the turbocharged intercooler circuit, the engine lubrication circuit, etc., is an important means of enhancing power performance, improving economy, saving energy and reducing emissions. In this study, a 1D/3D collaborative simulation method is proposed, taking the engine cooling circuit and the air-conditioning circuit as the research objects. The mathematical characterization of the multiple thermodynamic systems is achieved by 1D calculation, and the underhood structure is described by 3D simulation. By analyzing the integrated heat transfer process in the engine compartment, a model of the integrated thermal management system is formed by coupling the cooling circuit and the air-conditioning circuit. This collaborative simulation method establishes a structured correlation between engine cooling and air-conditioning heat dissipation in the engine compartment, comprehensively analyzing the engine working process and the air-conditioning operating process in order to study their interaction. In the calculation examples, by describing the influence of system thermomechanical parameters and operating duty on the underhood heat transfer process, performance evaluations of the engine cooling circuit and the air-conditioning circuit are realized, supporting the integrated design optimization and performance prediction of multiple thermal systems.
Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng
2015-01-01
The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimal function expression that describes the behavior of the time series. In order to deal with the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, which has strong local search ability, into the genetic algorithm to enhance optimization performance; the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and that the forecasting precision under a certain amount of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
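The hybridization idea (a genetic algorithm whose offspring are admitted through a simulated-annealing acceptance test) can be sketched on a toy continuous minimization. This is an illustrative hybrid only; the paper's IGSA additionally evolves function expressions and customizes the fitness function and genetic operators:

```python
import math
import random

def igsa_minimize(f, dim=2, pop=20, iters=300, t0=1.0, cooling=0.98, seed=0):
    """Toy genetic algorithm with a simulated-annealing acceptance step.

    Minimizes f over R^dim starting from a random population in [-5, 5]^dim.
    All parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    t = t0
    for _ in range(iters):
        xs.sort(key=f)                               # selection pressure
        children = []
        for _ in range(pop):
            # crossover of two parents from the better half, plus mutation
            a, b = rng.sample(xs[:pop // 2], 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.3)
                             for ai, bi in zip(a, b)])
        new = []
        for parent, child in zip(xs, children):
            # SA acceptance: keep a worse child with probability exp(-delta/t)
            delta = f(child) - f(parent)
            if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
                new.append(child)
            else:
                new.append(parent)
        xs = new
        t *= cooling                                 # cooling schedule
    return min(xs, key=f)
```

Early on, the high temperature lets worse offspring survive (global exploration); as the temperature cools, acceptance becomes greedy and the population performs local refinement, which is the role the SA operation plays inside the GA.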
Directory of Open Access Journals (Sweden)
A. Yu. Bykov
2015-01-01
Full Text Available Modern practical techniques for designing information security systems in automated systems of different purposes involve solving optimization tasks when choosing the elements of a security system. Mathematical programming formulations are often used, but in practical tasks it is not always possible to set the target function and (or) the restrictions analytically in explicit form. Sometimes, calculating the target function value or checking the restrictions for a candidate solution can be reduced to carrying out experiments on a simulation model of the system. Such tasks are considered within the optimization-simulation approach and require ad hoc optimization methods that take into account the possibly high computational cost of simulation. The article offers a modified recession vector method, used in discrete optimization, to solve such problems. The method is applied to the task of minimizing the cost of the selected information security tools under a restriction on the maximum possible damage. The cost index is a linear function of the Boolean variables that specify the selected security tools, with the restriction set as an "example simulator"; restrictions can thus be set implicitly, and the validity of a candidate solution is checked using a simulation model of the system. The offered algorithm takes the features of the objective into account. Its main advantage is that it requires at most m+1 steps, where m is the dimensionality of the required vector of Boolean variables. The algorithm finds a local minimum in the Hamming metric on the discrete space, with a neighborhood radius equal to 1; these statements are proved. The paper presents the results of choosing security tools for specified input data.
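The setting (linear cost over Boolean selection variables, feasibility checked by a black box, local minimum in the Hamming metric with radius 1) can be sketched with a generic bit-flip descent. This is a plain local-search sketch, not the paper's recession vector algorithm with its m+1-step bound; the cost and feasibility functions below are hypothetical stand-ins for the simulation model:

```python
def local_search_boolean(cost, feasible, x0):
    """Hamming-radius-1 descent for Boolean tool selection.

    cost(x): linear cost of the selected security tools.
    feasible(x): black-box constraint check (stand-in for a run of the
    simulation model); returns True if the damage is acceptable.
    Starting from a feasible x0, flips one bit at a time, moving to the
    cheapest feasible neighbor until no neighbor improves; the result is
    a local minimum in the Hamming metric with neighborhood radius 1.
    """
    x = list(x0)
    assert feasible(x), "x0 must be feasible"
    improved = True
    while improved:
        improved = False
        best = (cost(x), x)
        for i in range(len(x)):
            y = list(x)
            y[i] = 1 - y[i]                 # flip one selection bit
            if feasible(y) and cost(y) < best[0]:
                best = (cost(y), y)
                improved = True
        x = best[1]
    return x
```

Each improving step needs at most m feasibility checks (simulation runs), which is the kind of budget accounting that motivates the specialized method in the article.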
Simulation Package based on Placet
D'Amico, T E; Leros, Nicolas; Schulte, Daniel
2001-01-01
The program PLACET is used to simulate transverse and longitudinal beam effects in the main linac, the drive-beam accelerator and the drive-beam decelerators of CLIC, as well as in the linac of CTF3. It provides different models of accelerating and decelerating structures, linear optics and thin multipoles. Several methods of beam-based alignment, including emittance tuning bumps and feedback, as well as different failure modes, can be simulated. An interface to the beam-beam simulation code GUINEA-PIG exists. Currently, interfaces to MAD and TRANSPORT are under development, and an extension to transfer lines and bunch compressors is also being made. In the future, the simulations will need to be performed by many users, which requires a simplified user interface. The paper describes the status of PLACET and plans for the future.
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPUs). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with a serial implementation on CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of the constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
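The sequential-update Metropolis scheme the speedup is measured against can be sketched as a serial reference: one particle at a time proposes a displacement and accepts it with the Metropolis probability on the exact pairwise Coulomb energy. This is a minimal CPU sketch with free boundaries and unit Coulomb constant, not the paper's GPU code (which also handles periodic boundaries and large N):

```python
import math
import random

def coulomb_energy(pos, charges):
    """Total pairwise Coulomb energy, unscreened, free boundaries, k=1."""
    e = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            e += charges[i] * charges[j] / math.dist(pos[i], pos[j])
    return e

def metropolis_sweep(pos, charges, beta=1.0, step=0.1, rng=random):
    """One sequential Metropolis sweep: trial-move each particle in turn."""
    n = len(pos)
    for i in range(n):
        old = pos[i]
        trial = tuple(c + rng.uniform(-step, step) for c in old)
        # only particle i's interactions change, so compare those sums
        e_old = sum(charges[i] * charges[j] / math.dist(old, pos[j])
                    for j in range(n) if j != i)
        e_new = sum(charges[i] * charges[j] / math.dist(trial, pos[j])
                    for j in range(n) if j != i)
        if e_new <= e_old or rng.random() < math.exp(-beta * (e_new - e_old)):
            pos[i] = trial  # accept the move
    return pos
```

Because each acceptance depends on the positions already updated in the same sweep, the update is inherently sequential; the GPU implementation described in the abstract parallelizes the energy evaluation within each move rather than the moves themselves.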
Liebert, Cara A; Mazer, Laura; Bereknyei Merrell, Sylvia; Lin, Dana T; Lau, James N
2016-09-01
The flipped classroom, a blended learning paradigm that uses pre-session online videos reinforced with interactive sessions, has been proposed as an alternative to traditional lectures. This article investigates medical students' perceptions of a simulation-based, flipped classroom for the surgery clerkship and suggests best practices for implementation in this setting. A prospective cohort of students (n = 89), who were enrolled in the surgery clerkship during a 1-year period, was taught via a simulation-based, flipped classroom approach. Students completed an anonymous, end-of-clerkship survey regarding their perceptions of the curriculum. Quantitative analysis of Likert responses and qualitative analysis of narrative responses were performed. Students' perceptions of the curriculum were positive, with 90% rating it excellent or outstanding. The majority reported the curriculum should be continued (95%) and applied to other clerkships (84%). The component received most favorably by the students was the simulation-based skill sessions. Students rated the effectiveness of the Khan Academy-style videos the highest compared with other video formats. Perceptions of the simulation-based, flipped classroom in the surgery clerkship were overwhelmingly positive. The flipped classroom approach can be applied successfully in a surgery clerkship setting and may offer additional benefits compared with traditional lecture-based curricula. Copyright © 2016 Elsevier Inc. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]
2015-08-15
Chord length sampling in Monte Carlo simulations is a method used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, a correction method for the boundary effect in finite media is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, the local packing fraction results show that the proposed method successfully solves the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
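The basic kernel that chord length sampling replaces explicit geometry with can be sketched in a few lines: the flight distance through the matrix to the next sphere surface is drawn from an exponential distribution whose mean follows from the packing fraction. This is the standard uncorrected CLS assumption (which suffers the boundary effect the abstract addresses), not the authors' corrected method:

```python
import math
import random

def sample_to_next_sphere(r, packing_fraction, rng=random.random):
    """Sample the matrix chord length to the next sphere surface (basic CLS).

    Classic CLS assumption: in an infinite medium of spheres of radius r
    with volume packing fraction f, matrix chords are exponentially
    distributed with mean 4r(1-f)/(3f) (from the 4V/S mean-chord rule).
    """
    f = packing_fraction
    mean_chord = 4.0 * r * (1.0 - f) / (3.0 * f)
    return -mean_chord * math.log(1.0 - rng())   # inverse-CDF sampling
```

Near a boundary of a finite medium the exponential assumption over-counts long chords, which is the boundary effect motivating the correction proposed in the paper.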
Zhengang Guo; Yingfeng Zhang; Xibin Zhao; Xiaoyu Song
2017-01-01
Complex and customized manufacturing requires a high level of collaboration between production and logistics in a flexible production system. With the widespread use of Internet of Things technology in manufacturing, a great amount of real-time and multi-source manufacturing and logistics data is created that can be used to perform production-logistics collaboration. To solve the aforementioned problems, this paper proposes a timed colored Petri net simulation-based self-adaptive colla...
Chen, X.; Huang, G.
2017-12-01
In recent years, distributed hydrological models have been widely used in storm water management, water resources protection, and related fields, so how to evaluate model uncertainty reasonably and efficiently has become a topic of great interest. In this paper, the Soil and Water Assessment Tool (SWAT) model is constructed for the Feilaixia watershed in China, and the uncertainty of the runoff simulation is analyzed in depth with the GLUE method. Taking the initial parameter range of the GLUE method as the research focus, the influence of different initial parameter ranges on model uncertainty is studied. Two sets of parameter ranges are chosen as the object of study: the first (range 1) is recommended by SWAT-CUP and the second (range 2) is calibrated by SUFI-2. The results show that, under the same number of simulations (10,000), the overall uncertainty obtained with range 2 is less than with range 1. Specifically, the number of "behavioral" parameter sets is 10,000 for range 2 and 4,448 for range 1. In the calibration and the validation, the ratio of P-factor to R-factor is 1.387 and 1.391 for range 1, and 1.405 and 1.462 for range 2, respectively. In addition, the simulation results of range 2 are better, with NS and R2 slightly higher than for range 1. It can therefore be concluded that using the parameter range calibrated by SUFI-2 as the initial parameter range for GLUE is an effective way to capture and evaluate simulation uncertainty.
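The GLUE loop itself (Monte Carlo sampling of parameter sets from an initial range, then keeping the "behavioral" sets that exceed a likelihood threshold) can be sketched generically. The Nash-Sutcliffe likelihood measure and the 0.6 threshold below are illustrative assumptions, and the toy `model` in the usage stands in for a SWAT run:

```python
import random

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency of a simulated series against observations."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def glue(model, obs, param_ranges, n=2000, threshold=0.6, seed=0):
    """Minimal GLUE loop: uniform Monte Carlo sampling + behavioral threshold.

    model(params) -> simulated series; param_ranges: list of (lo, hi) pairs,
    the initial parameter ranges whose choice the abstract investigates.
    Returns (likelihood, params) for every behavioral parameter set.
    """
    rng = random.Random(seed)
    behavioral = []
    for _ in range(n):
        p = [rng.uniform(lo, hi) for lo, hi in param_ranges]
        ns = nash_sutcliffe(model(p), obs)
        if ns > threshold:                # keep only "behavioral" runs
            behavioral.append((ns, p))
    return behavioral
```

Narrowing `param_ranges` (as SUFI-2 calibration does) raises the fraction of samples that pass the threshold, which is exactly the effect reported in the abstract (10,000 vs 4,448 behavioral sets).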
International Nuclear Information System (INIS)
Avilov, A.A.; Grigorevskij, A.V.; Dudnik, S.F.; Kiryukhin, N.M.; Klyukovich, V.A.; Sagalovich, V.V.
1989-01-01
A computational algorithm is developed for calculating the thickness of films deposited by physical methods onto a backing of any shape moving along a given trajectory. The suggested algorithm makes it possible to carry out direct simulation of the film deposition process and to optimize the source arrangement for obtaining films with a required degree of uniformity. The condensate distribution on a rotating sphere was calculated and is presented here. Satisfactory agreement of the calculated values with experimental data on metal films obtained by electron-arc spraying was established.
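The building block of such a thickness calculation can be sketched with a textbook small-area source under a cosine emission law: the local deposition rate scales as cos(θ_source)·cos(θ_substrate)/r², and the thickness at a moving substrate point is the rate integrated along its trajectory. The normalization and function names are illustrative assumptions, not the paper's algorithm:

```python
import math

def deposition_rate(source, src_normal, point, surf_normal):
    """Relative rate at a substrate point: cos(th_src)*cos(th_sub)/r^2.

    source/point: 3-D positions; src_normal/surf_normal: unit normals of
    the emitting surface and of the substrate at that point.
    """
    dx = [p - s for p, s in zip(point, source)]
    r2 = sum(d * d for d in dx)
    r = math.sqrt(r2)
    cos_src = sum(d * n for d, n in zip(dx, src_normal)) / r
    cos_sub = -sum(d * n for d, n in zip(dx, surf_normal)) / r
    if cos_src <= 0 or cos_sub <= 0:
        return 0.0               # point faces away from or is behind the source
    return cos_src * cos_sub / r2

def accumulate_thickness(source, src_normal, trajectory):
    """Integrate the rate over a point's motion (equal-time trajectory samples).

    trajectory: list of (position, surface_normal) pairs along the motion.
    """
    return sum(deposition_rate(source, src_normal, p, n) for p, n in trajectory)
```

Summing this rate over the sampled trajectory of each point on a rotating backing, and over several sources, is the kind of direct simulation the abstract describes; uniformity optimization then amounts to varying the source positions and re-evaluating the accumulated thickness map.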
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-12-01
To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images, and to evaluate the performance of this new method by comparing it with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF resulting from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missed by conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that best represent the main breathing patterns of the patient, and then to reconstruct a set of 4D images for each of the identified main breathing cycles. The method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles; if a group contains more than 10% of all breathing cycles in a breathing signal, it is determined to be a main breathing pattern group and is represented by the average of the individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility for improving the target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom. Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured
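Step (2), grouping breathing cycles by amplitude and period and keeping groups that hold more than 10% of all cycles, can be sketched as follows. The bin widths and the synthetic two-pattern signal are invented for illustration; the paper does not specify these values.

```python
import numpy as np

def main_breathing_cycles(cycles, amp_bin=2.0, per_bin=0.5, min_frac=0.10):
    """Group (amplitude, period) pairs by quantized bins; a group holding
    more than min_frac of all cycles is a main breathing pattern and is
    represented by the mean amplitude/period of its members."""
    groups = {}
    for amp, per in cycles:
        key = (round(amp / amp_bin), round(per / per_bin))
        groups.setdefault(key, []).append((amp, per))
    main = []
    for members in groups.values():
        if len(members) / len(cycles) > min_frac:
            arr = np.array(members)
            weight = len(members) / len(cycles)
            main.append((arr[:, 0].mean(), arr[:, 1].mean(), weight))
    return main

# synthetic signal: 80% regular cycles (~10 mm, 4 s), 20% deep slow cycles
rng = np.random.default_rng(1)
regular = np.column_stack([rng.normal(10, 0.3, 80), rng.normal(4, 0.05, 80)])
deep = np.column_stack([rng.normal(16, 0.3, 20), rng.normal(6, 0.05, 20)])
patterns = main_breathing_cycles(np.vstack([regular, deep]).tolist())
```

Each returned (amplitude, period, weighting) triple would then seed the reconstruction of one 4D image set in step (3).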
International Nuclear Information System (INIS)
Sun, Zhi-xue; Zhang, Xu; Xu, Yi; Yao, Jun; Wang, Hao-xuan; Lv, Shuhuan; Sun, Zhi-lei; Huang, Yong; Cai, Ming-yu; Huang, Xiaoxue
2017-01-01
The Enhanced Geothermal System (EGS) creates an artificial geothermal reservoir by hydraulic fracturing, which allows heat transmission through the fractures by circulating fluids as they extract heat from Hot Dry Rock (HDR). The technique involves a complex thermal–hydraulic–mechanical (THM) coupling process. A numerical approach is presented in this paper to simulate and analyze the heat extraction process in EGS. The reservoir is regarded as a fractured porous medium consisting of rock matrix blocks and discrete fracture networks. Based on thermal non-equilibrium theory, a mathematical model of the THM coupling process in fractured rock mass is used. The proposed model is validated by comparison with several analytical solutions. An EGS case from the Cooper Basin, Australia is simulated with a 2D stochastically generated fracture model to study the characteristics of fluid flow, heat transfer and mechanical response in the geothermal reservoir. The main parameters controlling the outlet temperature of the EGS are also studied by sensitivity analysis. The results show the significance of taking the THM coupling effects into account when investigating the efficiency and performance of EGS. - Highlights: • EGS reservoir comprising discrete fracture networks and matrix rock is modeled. • A THM coupling model is proposed for simulating the heat extraction in EGS. • The numerical model is validated by comparing with several analytical solutions. • A case study is presented for understanding the main characteristics of EGS. • The THM coupling effects are shown to be significant factors in EGS operating performance.
International Nuclear Information System (INIS)
Xu, Peng; Wang, Jianye; Yang, Minghan; Wang, Weitian; Bai, Yunqing; Song, Yong
2017-01-01
Highlights: • We develop an operator support method based on intelligent dynamic interlock. • We offer an integrated aid system to reduce the workload of operators. • The method can help operators avoid dangerous, irreversible operations. • This method can be used in fusion research reactors in the future. - Abstract: In nuclear systems, operators have to carry out corrective actions when abnormal situations occur. However, operators might make mistakes under pressure. In order to avoid the serious consequences of human errors, a new method for operator support based on intelligent dynamic interlock is proposed. The new method, based on a fully digital instrumentation and control system, contains a real-time alarm analysis process, a decision support process and an automatic safety interlock process. Once abnormal conditions occur, the necessary safety interlock parameters, based on the analysis of real-time alarms and the decision support process, can be loaded into human-machine interfaces and controllers automatically, avoiding human errors effectively. Furthermore, recommendations are made for further use and development of this technique in nuclear power plants or fusion research reactors.
International Nuclear Information System (INIS)
Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji
2009-01-01
The authors develop a numerical code based on the local discontinuous Galerkin method for transient groundwater flow and reactive solute transport problems, in order to make three-dimensional performance assessment of radioactive waste repositories possible at the earliest stage. The local discontinuous Galerkin method is a mixed finite element method that is more accurate than standard finite element methods. In this paper, the developed code is applied to several problems with known analytical solutions in order to examine its accuracy and flexibility. The simulation results show that the new code gives highly accurate numerical solutions. (author)
Matrix method for acoustic levitation simulation.
Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C
2011-08-01
A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
DEFF Research Database (Denmark)
Behrens, Tim
As the rotor diameter of wind turbines increases, turbine blades with distributed aerodynamic control surfaces promise significant load reductions. Therefore, they are coming into focus in research in academia and industry. Trailing edge flaps are of particular interest in terms... Simulations demonstrated the feasibility and robustness of the approach. The hybrid immersed boundary approach proved able to handle 3D airfoil sections with span-wise flap gaps. The flow around and in the wake of a deflected flap at a Reynolds number of 1.63 million was investigated for steady inflow conditions. A control for two span-wise independent flaps was implemented and first load reductions could be achieved. The hybrid method has proved to be a versatile tool in the research of moving trailing edge flaps. The results shall serve as the basis for future investigations of the unsteady flow...
Energy Technology Data Exchange (ETDEWEB)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
2017-07-01
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow the formulation of a solution framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named 'stochastic resolution' in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented within the scope of a constant-number scheme: low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named 'random removal' in this paper). Both concepts are combined into a single GPU-based simulation method, which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in (1) the free-molecular regime or (2) the continuum regime are simulated for this purpose.
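One ingredient of a constant-number scheme can be sketched concretely: when nucleation adds particles, the population is trimmed back to capacity by merging low-weight simulation particles. The merge rule below (pair the two lowest-weight particles, conserving total number concentration and total mass) is a plausible simplification for illustration, not necessarily the authors' exact scheme.

```python
import numpy as np

def merge_low_weight(particles, capacity):
    """Trim a weighted-particle population to `capacity` by repeatedly
    merging the two lowest-weight particles into one, conserving total
    weight (number concentration) and total mass (weight * volume)."""
    particles = sorted(particles, key=lambda p: p[0])   # (weight, volume)
    while len(particles) > capacity:
        (w1, v1), (w2, v2) = particles[0], particles[1]
        w = w1 + w2
        v = (w1 * v1 + w2 * v2) / w        # mass-weighted mean volume
        particles = sorted(particles[2:] + [(w, v)], key=lambda p: p[0])
    return particles

# ten weighted particles plus five freshly nucleated low-weight monomers
rng = np.random.default_rng(2)
pop = [(rng.uniform(1.0, 2.0), rng.uniform(0.5, 1.5)) for _ in range(10)]
pop += [(0.01, 0.001)] * 5
merged = merge_low_weight(pop, capacity=10)
```

Because only the low-weight (freshly nucleated) particles are touched, the well-resolved part of the size distribution is left undisturbed, which is consistent with the reduced statistical noise the abstract reports relative to random removal.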
An Efficient Simulation Method for Rare Events
Rached, Nadhir B.
2015-01-07
Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. Crude Monte Carlo (MC) simulation is the standard technique for estimating this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, which has the advantage of being asymptotically optimal for arbitrary RVs. The wide applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. This feature is rarely satisfied by variance reduction algorithms, whose performance has typically only been proven under restrictive assumptions. The method also achieves good efficiency, as illustrated by selected simulation results comparing its performance with that of an algorithm based on a conditional MC technique.
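The twisting idea can be illustrated for a case where the tail is known in closed form: a sum of two iid Exp(1) variables, whose survival function is (1 + gamma) * exp(-gamma). The exponential-twisting form and the choice of twisting parameter below are textbook ones, not the paper's particular hazard-rate selection.

```python
import numpy as np

def is_estimate(n_rv, lam, gamma, theta, n_samples=200_000, seed=3):
    """Importance-sampling estimate of P(X1+...+Xn > gamma) for iid
    Exp(lam) RVs via exponential twisting: sample from Exp(lam - theta)
    and reweight each sample by the likelihood ratio."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(1.0 / (lam - theta), size=(n_samples, n_rv))
    s = x.sum(axis=1)
    lr = (lam / (lam - theta)) ** n_rv * np.exp(-theta * s)
    return np.mean((s > gamma) * lr)

def crude_mc(n_rv, lam, gamma, n_samples=200_000, seed=3):
    """Naive MC: almost never observes the rare event."""
    rng = np.random.default_rng(seed)
    s = rng.exponential(1.0 / lam, size=(n_samples, n_rv)).sum(axis=1)
    return np.mean(s > gamma)

# rare event: sum of 2 Exp(1) exceeding 20; exact tail = (1+gamma)*exp(-gamma)
gamma = 20.0
exact = (1 + gamma) * np.exp(-gamma)            # ~4.3e-8
est = is_estimate(n_rv=2, lam=1.0, gamma=gamma, theta=0.9)
naive = crude_mc(n_rv=2, lam=1.0, gamma=gamma)
```

With theta chosen so that the twisted mean of the sum sits near the threshold, the IS estimate lands within a few percent of the exact tail, while crude MC with the same budget typically returns zero.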
An Efficient Simulation Method for Rare Events
Rached, Nadhir B.; Benkhelifa, Fatma; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul
2015-01-01
He, An; Gong, Jiaming; Shikazono, Naoki
2018-05-01
In the present study, a model is introduced to correlate the electrochemical performance of a solid oxide fuel cell (SOFC) with the 3D microstructure reconstructed by focused ion beam scanning electron microscopy (FIB-SEM), in which the solid surface is modeled by the marching cubes (MC) method. The lattice Boltzmann method (LBM) is used to solve the governing equations. In order to preserve the geometries reconstructed by the MC method, local effective diffusivities and conductivities computed from the MC geometries are applied in each grid cell, and a partial bounce-back scheme is applied according to the boundary predicted by the MC method. From the tortuosity factor and overpotential calculation results, it is concluded that the MC geometry drastically improves the computational accuracy by providing more precise topology information.
AlAmrani, Mashael-Hasan; AlAmmar, Kamila-Ahmad; AlQahtani, Sarah-Saad; Salem, Olfat A
2017-10-10
Critical thinking and self-confidence are imperative to success in clinical practice. Educators should use teaching strategies that help students enhance their critical thinking and self-confidence in complex content such as electrocardiogram interpretation. Teaching electrocardiogram interpretation is therefore important for nurse educators. This study compares the effect of simulation-based and traditional teaching methods on the critical thinking and self-confidence of students during electrocardiogram interpretation sessions. Thirty undergraduate nursing students volunteered to participate in this study. The participants were divided into intervention and control groups, which were taught using the simulation-based and traditional teaching programs, respectively. All of the participants were asked to complete the study instrument (pretest and posttest) to measure their critical thinking and self-confidence. Improvement was observed in both the control and experimental groups with respect to critical thinking and self-confidence, as evidenced by the results of the paired-samples t test and the Wilcoxon signed-rank test (p < .05). This study evaluated an innovative simulation-based teaching method for nurses. No significant differences in outcomes were identified between the simulation-based and traditional teaching methods, indicating that well-implemented educational programs using either teaching method effectively promote critical thinking and self-confidence in nursing students. Nurse educators are encouraged to design educational plans with clear objectives to improve the critical thinking and self-confidence of their students. Future research should compare the effects of several teaching sessions using each method in a larger sample.
Numerical methods used in simulation
International Nuclear Information System (INIS)
Caseau, Paul; Perrin, Michel; Planchard, Jacques
1978-01-01
The fundamental numerical problem posed by simulation is the stability of the resolution scheme. The system of equations most often used is defined, since there is a family of models of increasing complexity with 3, 4 or 5 equations, although only the models with 3 and 4 equations have been used extensively. After defining what is meant by explicit and implicit, the best established stability results are given, first for one-dimensional problems and then for two-dimensional problems. It is shown that two types of discretisation may be defined: four- and eight-point schemes (in one or two dimensions) and six- and ten-point schemes (in one or two dimensions). Finally, some results are given on problems that are rarely treated, i.e. non-asymptotic stability and the stability of schemes based on finite elements [fr
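The explicit-versus-implicit stability issue the abstract refers to can be demonstrated in a few lines. This is a generic FTCS (forward-time, centered-space) sketch for the 1-D heat equation, not the report's own schemes: the explicit scheme is stable only when the diffusion number r = alpha*dt/dx**2 does not exceed 1/2.

```python
import numpy as np

def ftcs_step(u, r):
    """One explicit (FTCS) step of the 1-D heat equation u_t = alpha*u_xx
    with fixed (zero) boundary values; r = alpha*dt/dx**2."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

def max_amplitude(r, steps=200, n=51):
    """Evolve a unit point disturbance and report the final peak amplitude."""
    u = np.zeros(n)
    u[n // 2] = 1.0
    for _ in range(steps):
        u = ftcs_step(u, r)
    return np.abs(u).max()

stable = max_amplitude(0.4)       # r <= 0.5: the disturbance diffuses away
unstable = max_amplitude(0.6)     # r > 0.5: grid-scale oscillations blow up
```

An implicit (backward-time) version of the same scheme is unconditionally stable, which is why the explicit/implicit distinction drives the stability analysis discussed above.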
Czech Academy of Sciences Publication Activity Database
Valentini, F.; Trávníček, Pavel; Califano, F.; Hellinger, Petr; Mangeney, A.
2007-01-01
Roč. 225, č. 1 (2007), s. 753-770 ISSN 0021-9991 Institutional research plan: CEZ:AV0Z30420517 Keywords : numerical simulations * hybrid simulations * Vlasov simulations Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 2.372, year: 2007
DEFF Research Database (Denmark)
Nakamura, T; Bay, Niels
1998-01-01
A new friction testing method based on combined forward conical can-backward straight can extrusion is proposed in order to evaluate friction characteristics in severe metal forming operations. By this method the friction coefficient along the conical punch surface is determined knowing the friction coefficient along the die wall. The latter is determined by a combined forward and backward extrusion of straight cans. Calibration curves determining the relationship between punch travel, can heights, and friction coefficient for the two tests are calculated based on a rigid-plastic FEM analysis. Experimental friction tests are carried out in a mechanical press with aluminium alloy A6061 as the workpiece material and different kinds of lubricants. They confirm that the theoretical analysis results in reasonable values for the friction coefficient.
Dattoli, Giuseppe
2005-01-01
Coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. A code devoted to the analysis of this type of problem should be fast and reliable: conditions that are usually hard to achieve at the same time. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treating CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique, using exponential operators implemented numerically in C++. We show that the integration procedure is capable of reproducing the onset of an instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, parametric studies a...
International Nuclear Information System (INIS)
Miyamoto, Akira; Sato, Etsuko; Sato, Ryo; Inaba, Kenji; Hatakeyama, Nozomu
2014-01-01
In collaboration with experimental experts we have reported in the present conference (Hatakeyama, N. et al., "Experiment-integrated multi-scale, multi-physics computational chemistry simulation applied to corrosion behaviour of BWR structural materials") the results of multi-scale, multi-physics computational chemistry simulations applied to the corrosion behaviour of BWR structural materials. At the macro scale, a macroscopic simulator of the anode polarization curve was developed to solve the spatially one-dimensional electrochemical equations on the material surface at the continuum level, in order to understand the corrosion behaviour of a typical BWR structural material, SUS304. The experimental anode polarization behaviours of each pure metal were reproduced by fitting all the rates of the electrochemical reactions; the anode polarization curve of SUS304 was then calculated using the same parameters and found to reproduce the experimental behaviour successfully. At the meso scale, a kinetic Monte Carlo (KMC) simulator was applied to an actual-time simulation of the morphological corrosion behaviour under the influence of an applied voltage. At the micro scale, an ultra-accelerated quantum chemical molecular dynamics (UA-QCMD) code was applied to various metallic oxide surfaces of Fe2O3, Fe3O4 and Cr2O3, modelled together with water molecules and dissolved metallic ions on the surfaces; the dissolution and segregation behaviours were then successfully simulated dynamically using UA-QCMD. In this paper we describe details of the multi-scale, multi-physics computational chemistry method, especially the UA-QCMD method. This method is approximately 10,000,000 times faster than conventional first-principles molecular dynamics methods based on density-functional theory (DFT), and its accuracy was also validated for various metals and metal oxides against DFT results. To assure multi-scale, multi-physics computational chemistry simulation based on the UA-QCMD method for
Yang, Qidong; Zuo, Hongchao; Li, Weidong
2016-01-01
Improving the capability of land-surface process models to simulate soil moisture assists in better understanding the atmosphere-land interaction. In semi-arid regions, due to limited near-surface observational data and large errors in large-scale parameters obtained by remote sensing, there are uncertainties in land-surface parameters, which can cause large offsets between the simulated results of land-surface process models and the observational data for soil moisture. In this study, observational data from the Semi-Arid Climate Observatory and Laboratory (SACOL) station in the semi-arid loess plateau of China were divided into three datasets: summer, autumn, and summer-autumn. By combining the particle swarm optimization (PSO) algorithm with the land-surface process model SHAW (Simultaneous Heat and Water), the soil and vegetation parameters that are related to soil moisture but difficult to obtain by observation were optimized using the three datasets. On this basis, the SHAW model was run with the optimized parameters to simulate the characteristics of the land-surface process in the semi-arid loess plateau. Simultaneously, the default SHAW model was run with the same atmospheric forcing as a comparison test. Simulation results revealed the following: parameters optimized by the particle swarm optimization algorithm improved the simulation of soil moisture and latent heat flux in all tests, and differences between simulated results and observational data were clearly reduced; however, the tests adopting optimized parameters could not simultaneously improve the simulation results for net radiation, sensible heat flux, and soil temperature. Optimized soil and vegetation parameters based on the different datasets have the same order of magnitude but are not identical; the soil parameters vary only slightly, but the variation range of the vegetation parameters is large.
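The PSO calibration loop described above can be sketched minimally. The toy two-parameter exponential model below stands in for SHAW, and the swarm meta-parameters (inertia w, accelerations c1/c2) are common defaults, not the study's settings.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=4):
    """Minimal particle swarm optimizer over box bounds; returns the best
    position and objective value found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)           # keep particles inside bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# calibrate two "soil parameters" of a toy model against observations
t = np.linspace(0, 2, 20)
obs = 0.35 * np.exp(-1.8 * t)                # synthetic truth: (0.35, 1.8)
def sse(p):
    return np.sum((obs - p[0] * np.exp(-p[1] * t)) ** 2)
best, err = pso(sse, bounds=[(0.0, 1.0), (0.0, 5.0)])
```

In the study's setting, f(p) would be the misfit between SHAW output and SACOL observations over one of the three datasets, which is why different datasets yield different optimized parameters.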
Directory of Open Access Journals (Sweden)
Farzad Amirkhani
2017-03-01
The proposed method is applied to classical job-shop problems with a makespan objective, and the results are compared with a mixed integer programming model. Moreover, appropriate dispatching priorities are derived for a dynamic job-shop problem minimizing multi-objective criteria. The results show that simulation-based optimization is highly capable of capturing the main characteristics of the shop and produces optimal or near-optimal solutions with a high degree of credibility.
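Dispatching priorities of the kind evaluated here can be compared by simulation in a few lines. This toy single-machine model (not the paper's job-shop) contrasts first-in-first-out with the shortest-processing-time rule, which is provably optimal for mean flow time on one machine:

```python
def mean_flow_time(processing_times, rule):
    """Dispatch all jobs on one machine in the priority order given by
    `rule` and return the mean flow (completion) time."""
    order = sorted(range(len(processing_times)), key=rule)
    t, total = 0, 0
    for j in order:
        t += processing_times[j]     # machine finishes job j at time t
        total += t
    return total / len(processing_times)

p = [7, 2, 9, 4, 1]
fifo = mean_flow_time(p, rule=lambda j: j)      # arrival order
spt = mean_flow_time(p, rule=lambda j: p[j])    # shortest processing time
```

A simulation-based optimizer of the kind the abstract describes searches over such priority rules (and their parameters) while the simulation supplies the objective values.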
Numerical Simulation of Horizontal Axis Wind Turbine Based on Panel Method
Institute of Scientific and Technical Information of China (English)
仇永兴; 康顺
2012-01-01
The aerodynamic loads and the near-wake flow fields of a wind turbine, NREL Phase VI, are simulated numerically using a velocity-based panel method for potential flow. The results of the potential-flow simulations are analyzed and compared with those of blade element momentum theory, computational fluid dynamics (CFD) and experimental data. It is shown that the velocity-based panel method computes the flow around the rotor with high accuracy and solution efficiency. The conclusions can serve as technical support for flow field calculation and performance prediction of wind turbine groups.
Simulation-based surgical education.
Evgeniou, Evgenios; Loizou, Peter
2013-09-01
The reduction in time for training at the workplace has created a challenge for the traditional apprenticeship model of training. Simulation offers the opportunity for repeated practice in a safe and controlled environment, focusing on trainees and tailored to their needs. Recent technological advances have led to the development of various simulators, which have already been introduced in surgical training. The complexity and fidelity of the available simulators vary; depending on our resources, we should therefore select the appropriate simulator for the task or skill we want to teach. Educational theory informs us about the importance of context in professional learning. Simulation should therefore recreate the clinical environment and its complexity. Contemporary approaches to simulation have introduced novel ideas for teaching teamwork, communication skills and professionalism. In order for simulation-based training to be successful, simulators have to be validated appropriately and integrated into a training curriculum. Within a surgical curriculum, trainees should have protected time for simulation-based training, under appropriate supervision. Simulation-based surgical education should allow the appropriate practice of technical skills without ignoring the clinical context, and must strike an adequate balance between the simulation environment and the simulators. © 2012 The Authors. ANZ Journal of Surgery © 2012 Royal Australasian College of Surgeons.
Energy Technology Data Exchange (ETDEWEB)
Brunetti, Antonio; Golosio, Bruno [Universita degli Studi di Sassari, Dipartimento di Scienze Politiche, Scienze della Comunicazione e Ingegneria dell' Informazione, Sassari (Italy); Melis, Maria Grazia [Universita degli Studi di Sassari, Dipartimento di Storia, Scienze dell' Uomo e della Formazione, Sassari (Italy); Mura, Stefania [Universita degli Studi di Sassari, Dipartimento di Agraria e Nucleo di Ricerca sulla Desertificazione, Sassari (Italy)
2014-11-08
X-ray fluorescence (XRF) is a well-known nondestructive technique. It is also applied to multilayer characterization, since it can estimate both the composition and the thickness of the layers. Several kinds of cultural heritage samples can be considered complex multilayers, such as paintings, decorated objects or some types of metallic samples. Furthermore, they often have rough surfaces, which makes a precise determination of the structure and composition harder. The standard quantitative XRF approach does not take this aspect into account. In this paper, we propose a novel approach based on the combined use of X-ray measurements performed with a polychromatic beam and Monte Carlo simulations. All the information contained in an X-ray spectrum is used. This approach yields a very good estimate of the sample contents, both in terms of chemical elements and material thickness, and in this sense represents an improvement in the possibilities of XRF measurement. Some examples are examined and discussed. (orig.)
International Nuclear Information System (INIS)
Brunetti, Antonio; Golosio, Bruno; Melis, Maria Grazia; Mura, Stefania
2015-01-01
Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng
2015-03-01
Taking the soil organic matter of eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the optimization results, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimal configuration was capable of capturing soil-landscape knowledge exactly, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
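Simulated annealing applied to sampling-scheme optimization can be sketched as follows. The objective here (maximize the minimum pairwise distance among selected sites, a simple spatial-coverage criterion) is a stand-in for the paper's soil-landscape criterion, and the cooling schedule is generic.

```python
import math
import random

def anneal_sampling(candidates, k, iters=5000, t0=1.0, cooling=0.999, seed=5):
    """Pick k sample sites from candidate (x, y) locations by simulated
    annealing, maximizing the minimum pairwise distance (coverage)."""
    rng = random.Random(seed)

    def score(sel):
        pts = [candidates[i] for i in sel]
        return min(math.dist(a, b) for i, a in enumerate(pts)
                   for b in pts[i + 1:])

    current = rng.sample(range(len(candidates)), k)
    cur_s = score(current)
    best, best_s, t = list(current), cur_s, t0
    for _ in range(iters):
        nxt = list(current)                       # swap one selected site
        nxt[rng.randrange(k)] = rng.choice(
            [i for i in range(len(candidates)) if i not in current])
        s = score(nxt)
        # accept improvements always, worsenings with Boltzmann probability
        if s > cur_s or rng.random() < math.exp((s - cur_s) / t):
            current, cur_s = nxt, s
            if s > best_s:
                best, best_s = list(nxt), s
        t *= cooling                              # geometric cooling
    return best, best_s

random.seed(5)
sites = [(random.random(), random.random()) for _ in range(60)]
chosen, spread = anneal_sampling(sites, k=8)
```

In the study's setting, the candidate locations would follow the road network and the score would reward configurations that capture the terrain-attribute variability.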
Directory of Open Access Journals (Sweden)
Navid Hooshangi
2018-01-01
Agent-based modeling is a promising approach for developing simulation tools for natural hazards in different areas, such as urban search and rescue (USAR) operations. The present study aimed to develop a dynamic agent-based simulation model of post-earthquake USAR operations using a geospatial information system (GIS) and multi-agent systems (MAS). We also propose an approach for dynamic task allocation and for establishing collaboration among agents, based on the contract net protocol (CNP) and an interval-based Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), which considers uncertainty in natural-hazard information during agents' decision-making. The decision-making weights were calculated by the analytic hierarchy process (AHP). In order to implement the system, the earthquake environment was simulated and building damage and the number of injuries were calculated for Tehran's District 3: 23%, 37%, 24% and 16% of buildings were in the slight, moderate, extensive and complete vulnerability classes, respectively. The number of injured persons was calculated to be 17,238. Numerical results in 27 scenarios showed that the proposed method is more accurate than the CNP method in terms of USAR operational time (at least a 13% decrease) and the number of human fatalities (at least a 9% decrease). In an interval uncertainty analysis of the proposed simulated system, the lower and upper bounds of the uncertain responses were evaluated. The overall results showed that considering uncertainty in task allocation can be highly advantageous in a disaster environment. Such systems can be used to manage and prepare for natural hazards.
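The TOPSIS ranking step can be sketched in its crisp (non-interval) form; the paper's interval extension propagates lower/upper bounds through the same computation. The alternatives, criteria and weights below are invented for illustration.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: score each alternative by its relative closeness to
    the ideal solution (1 = ideal, 0 = anti-ideal)."""
    m = np.asarray(matrix, float)
    m = m / np.linalg.norm(m, axis=0)          # vector-normalize each criterion
    m = m * np.asarray(weights, float)         # apply (e.g. AHP-derived) weights
    ideal = np.where(benefit, m.max(axis=0), m.min(axis=0))
    worst = np.where(benefit, m.min(axis=0), m.max(axis=0))
    d_pos = np.linalg.norm(m - ideal, axis=1)  # distance to ideal
    d_neg = np.linalg.norm(m - worst, axis=1)  # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# rank three rescue teams by (capability: benefit, travel distance: cost)
scores = topsis([[8, 2.0],
                 [6, 0.5],
                 [9, 4.0]],
                weights=[0.6, 0.4],
                benefit=[True, False])
best_team = int(np.argmax(scores))
```

In the proposed system, each task announcement would be scored this way against the bidding agents, with AHP supplying the criterion weights.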
Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye
2018-06-01
Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and the traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a larger error, whereas the cohesive force derived from the improved method was consistent with the suggested values. This comparison indirectly verified the effectiveness and validity of the improved method.
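The Monte Carlo flavor of a connectivity-rate computation can be reduced to one dimension for illustration: drop random joint traces on a scanline and measure the jointed length fraction. Everything here (Poisson intensity, exponential trace lengths, the 100 m line) is a hypothetical toy, far simpler than the paper's polyhedral 3D DFN.

```python
import random

random.seed(5)

# Toy Monte Carlo sketch of a joint connectivity rate on a test section:
# joint traces with Poisson-distributed positions and exponentially
# distributed lengths are dropped on a 100 m scanline, and the connected
# (jointed) fraction of the line is measured.
LINE = 100.0     # scanline length, m
DENSITY = 0.08   # joints per metre (Poisson intensity), hypothetical
TRACE = 2.5      # mean trace length on the section, m, hypothetical

def connectivity_rate():
    t, intervals = 0.0, []
    while True:
        t += random.expovariate(DENSITY)        # next joint position
        if t >= LINE:
            break
        length = random.expovariate(1.0 / TRACE)
        intervals.append((t, min(LINE, t + length)))
    # merge overlapping traces and sum the covered length
    intervals.sort()
    covered, cur_a, cur_b = 0.0, None, None
    for a, b in intervals:
        if cur_b is None or a > cur_b:
            if cur_b is not None:
                covered += cur_b - cur_a
            cur_a, cur_b = a, b
        else:
            cur_b = max(cur_b, b)
    if cur_b is not None:
        covered += cur_b - cur_a
    return covered / LINE

rates = [connectivity_rate() for _ in range(300)]
avg = sum(rates) / len(rates)
```

Repeating the draw, as above, gives the minimum/maximum/average connectivity statistics the abstract describes, here per realization rather than per elevation.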
International Nuclear Information System (INIS)
Mutihac, R.; Mutihac, R.C.; Cicuttin, A.
2001-09-01
Parameter-search methods are problem-sensitive. All methods depend on some meta-parameters of their own, which must be determined experimentally in advance. A better choice of these intrinsic parameters for a certain parameter-search method may improve its performance. Moreover, there are various implementations of the same method, which may also affect its performance. The choice of the matching (error) function has a great impact on the search process in terms of finding the optimal parameter set and minimizing the computational cost. An initial assessment of the matching function's ability to distinguish between good and bad models is recommended before launching exhaustive computations. However, different runs of a parameter-search method may result in the same optimal parameter set or in different parameter sets (the model is insufficiently constrained to accurately characterize the real system). Robustness of the parameter set is expressed by the extent to which small perturbations in the parameter values do not affect the best solution. A parameter set that is not robust is unlikely to be physiologically relevant. Robustness can also be defined as the stability of the optimal parameter set to small variations of the inputs. When trying to estimate quantities such as the minimum, or the least-squares optimal parameters of a nonlinear system, the existence of multiple local minima can cause problems with the determination of the global optimum. Techniques such as Newton's method, the Simplex method and the least-squares linear Taylor differential correction technique can be useful provided that one is lucky enough to start sufficiently close to the global minimum. All these methods suffer from the inability to distinguish a local minimum from a global one because they follow the local gradients towards the minimum, even if some methods reset the search direction when the search is likely to be stuck in a presumably local minimum. Deterministic methods based on
Directory of Open Access Journals (Sweden)
J. R. Santillan
2016-09-01
Full Text Available In this paper, we investigated how survey configuration and the type of interpolation method can affect the accuracy of river flow simulations that utilize a LIDAR DTM integrated with an interpolated river bed as its main source of topographic information. Aside from determining the accuracy of the individually-generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation where it was used also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best choices for producing satisfactory river flow simulation outputs.
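Of the two interpolators compared, IDW is the simpler to sketch (Ordinary Kriging additionally needs a fitted variogram). The sample points below are hypothetical river-bed elevations, not data from the paper.

```python
# Minimal inverse-distance-weighted (IDW) interpolation sketch.
# Each sample is a hypothetical river-bed elevation point (x, y, z).
samples = [(0.0, 0.0, 10.0), (1.0, 0.0, 12.0),
           (0.0, 1.0, 11.0), (1.0, 1.0, 13.0)]

def idw(x, y, samples, power=2.0):
    # Weighted mean of sample elevations, weights = 1 / distance**power.
    num = den = 0.0
    for sx, sy, sz in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return sz              # exact hit: return the sample value itself
        w = d2 ** (-power / 2.0)
        num += w * sz
        den += w
    return num / den

z_center = idw(0.5, 0.5, samples)  # equidistant from all four samples
```

At the cell center all four weights are equal, so the result is the plain average of the elevations (11.5). The `power` parameter controls how quickly influence decays with distance.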
Pavlov, Al. A.; Shevchenko, A. M.; Khotyanovsky, D. V.; Pavlov, A. A.; Shmakov, A. S.; Golubev, M. P.
2017-10-01
We present a method for and results of determination of the field of integral density in the structure of flow corresponding to the Mach interaction of shock waves at Mach number M = 3. The optical diagnostics of flow was performed using an interference technique based on self-adjusting Zernike filters (SA-AVT method). Numerical simulations were carried out using the CFS3D program package for solving the Euler and Navier-Stokes equations. Quantitative data on the distribution of integral density on the path of probing radiation in one direction of 3D flow transillumination in the region of Mach interaction of shock waves were obtained for the first time.
James, Andrew I.; Jawitz, James W.; Munoz-Carpena, Rafael
2009-01-01
A model to simulate transport of materials in surface water and ground water has been developed to numerically approximate solutions to the advection-dispersion equation. This model, known as the Transport and Reaction Simulation Engine (TaRSE), uses an algorithm that incorporates a time-splitting technique where the advective part of the equation is solved separately from the dispersive part. An explicit finite-volume Godunov method is used to approximate the advective part, while a mixed-finite element technique is used to approximate the dispersive part. The dispersive part uses an implicit discretization, which allows it to run stably with a larger time step than the explicit advective step. The potential exists to develop algorithms that run several advective steps, and then one dispersive step that encompasses the time interval of the advective steps. Because the dispersive step is computationally most expensive, schemes can be implemented that are more computationally efficient than non-time-split algorithms. This technique enables scientists to solve problems with high grid Peclet numbers, such as transport problems with sharp solute fronts, without spurious oscillations in the numerical approximation to the solution and with virtually no artificial diffusion.
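The time-splitting idea, an explicit upwind advection step followed by an implicit diffusion step, can be shown on a 1D toy problem. This is a first-order sketch of the splitting strategy only: TaRSE's Godunov finite-volume and mixed finite-element discretizations are not reproduced, and all parameters are illustrative.

```python
# 1D advection-dispersion by operator splitting: explicit first-order
# upwind advection, then implicit (backward Euler) diffusion solved with
# the Thomas tridiagonal algorithm. Parameters are illustrative.
N, L = 100, 1.0
dx = L / N
u, D = 1.0, 1e-3              # velocity, dispersion coefficient
dt = 0.5 * dx / u             # respects the advective CFL limit
c = [1.0 if i < N // 4 else 0.0 for i in range(N)]   # sharp solute front

def advect(c):
    # upwind differencing for u > 0; inflow cell held at its current value
    return [c[i] - u * dt / dx * (c[i] - c[i - 1]) if i > 0 else c[0]
            for i in range(N)]

def diffuse(rhs):
    # solve (I - dt*D*Lap) c_new = c_old with zero-flux ends (Thomas algorithm)
    r = dt * D / dx ** 2
    a = [-r] * N
    b = [1 + 2 * r] * N
    cc = [-r] * N
    b[0] = b[-1] = 1 + r       # zero-flux boundary rows
    for i in range(1, N):      # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * cc[i - 1]
        rhs[i] -= m * rhs[i - 1]
    x = [0.0] * N
    x[-1] = rhs[-1] / b[-1]
    for i in range(N - 2, -1, -1):   # back substitution
        x[i] = (rhs[i] - cc[i] * x[i + 1]) / b[i]
    return x

for _ in range(40):
    c = diffuse(advect(c))     # one split step: advect, then diffuse
```

Because the diffusion solve is unconditionally stable, several advective sub-steps per diffusive step (as the abstract suggests) only require changing `dt` in `diffuse`.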
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Zhong, Bei-Jing; Dang, Shuai; Song, Ya-Na; Gong, Jing-Song
2012-02-01
Here, we propose both a comprehensive chemical mechanism and a reduced mechanism for a three-dimensional combustion simulation, describing the formation of polycyclic aromatic hydrocarbons (PAHs), in a direct-injection diesel engine. A soot model based on the reduced mechanism and a method of moments is also presented. The turbulent diffusion flame and PAH formation in the diesel engine were modelled using the reduced mechanism based on the detailed mechanism using a fixed wall temperature as a boundary condition. The spatial distribution of PAH concentrations and the characteristic parameters for soot formation in the engine cylinder were obtained by coupling a detailed chemical kinetic model with the three-dimensional computational fluid dynamic (CFD) model. Comparison of the simulated results with limited experimental data shows that the chemical mechanisms and soot model are realistic and correctly describe the basic physics of diesel combustion but require further development to improve their accuracy.
Simulation and Non-Simulation Based Human Reliability Analysis Approaches
Energy Technology Data Exchange (ETDEWEB)
Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Shirley, Rachel Elizabeth [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2014-12-01
Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.
Simulation-based medical teaching and learning
Directory of Open Access Journals (Sweden)
Abdulmohsen H Al-Elq
2010-01-01
Full Text Available One of the most important steps in curriculum development is the introduction of simulation-based medical teaching and learning. Simulation is a generic term that refers to an artificial representation of a real-world process used to achieve educational goals through experiential learning. Simulation-based medical education is defined as any educational activity that utilizes simulation aides to replicate clinical scenarios. Although medical simulation is relatively new, simulation has been used for a long time in other high-risk professions such as aviation. Medical simulation allows the acquisition of clinical skills through deliberate practice rather than an apprentice style of learning. Simulation tools serve as an alternative to real patients. A trainee can make mistakes and learn from them without the fear of harming the patient. There are different types and classifications of simulators, and their costs vary according to the degree of their resemblance to reality, or 'fidelity'. Simulation-based learning is expensive. However, it is cost-effective if utilized properly. Medical simulation has been found to enhance clinical competence at the undergraduate and postgraduate levels. It has also been found to have many advantages that can improve patient safety and reduce health care costs through the improvement of the medical provider's competencies. The objective of this narrative review article is to highlight the importance of simulation as a new teaching method in undergraduate and postgraduate education.
Scenario-based table top simulations
DEFF Research Database (Denmark)
Broberg, Ole; Edwards, Kasper; Nielsen, J.
2012-01-01
This study developed and tested a scenario-based table top simulation method in a user-driven innovation setting. A team of researchers worked together with a user group of five medical staff members from the existing clinic. Table top simulations of a new clinic were carried out in a simple model...
Wang, Yi
2017-09-12
Reduced-order modeling approaches for gas flow in dual-porosity dual-permeability porous media are studied based on the proper orthogonal decomposition (POD) method combined with Galerkin projection. The typical modeling approach for non-porous-medium liquid flow problems is not appropriate for this compressible gas flow in dual-continuum porous media. The reason is that non-zero mass transfer for the dual-continuum system can be generated artificially via the typical POD projection, violating the mass-conservation nature and causing the failure of the POD modeling. A new POD modeling approach is proposed that considers the mass conservation of the whole matrix-fracture system. Computation can be accelerated as much as 720 times with high precision (reconstruction errors as low as 7.69 × 10−4%~3.87% for the matrix and 8.27 × 10−4%~2.84% for the fracture).
Wang, Yi; Sun, Shuyu; Yu, Bo
2017-01-01
Reduced-order modeling approaches for gas flow in dual-porosity dual-permeability porous media are studied based on the proper orthogonal decomposition (POD) method combined with Galerkin projection. The typical modeling approach for non-porous-medium liquid flow problems is not appropriate for this compressible gas flow in dual-continuum porous media. The reason is that non-zero mass transfer for the dual-continuum system can be generated artificially via the typical POD projection, violating the mass-conservation nature and causing the failure of the POD modeling. A new POD modeling approach is proposed that considers the mass conservation of the whole matrix-fracture system. Computation can be accelerated as much as 720 times with high precision (reconstruction errors as low as 7.69 × 10−4%~3.87% for the matrix and 8.27 × 10−4%~2.84% for the fracture).
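The core POD mechanics, extract a reduced basis from snapshots via SVD and project onto it, can be shown on a synthetic field. The toy snapshot data below is illustrative; the mass-conservation correction that is the paper's actual contribution is not reproduced here.

```python
import numpy as np

np.random.seed(0)

# Synthetic snapshot matrix: a 1D "pressure" field sampled at 50 times,
# built from two coherent structures plus small noise (purely illustrative).
x = np.linspace(0.0, 1.0, 200)
snapshots = np.stack(
    [np.sin(np.pi * x) * np.exp(-0.05 * t)
     + 0.3 * np.sin(3 * np.pi * x) * np.cos(0.2 * t)
     + 0.001 * np.random.randn(x.size)
     for t in range(50)],
    axis=1)                                # shape (200 points, 50 snapshots)

# POD basis = left singular vectors of the snapshot matrix
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                      # keep the two dominant modes
basis = U[:, :r]                           # reduced basis, shape (200, 2)

# Galerkin-style projection onto the basis and reconstruction
coeffs = basis.T @ snapshots               # reduced coordinates, shape (2, 50)
recon = basis @ coeffs

rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
```

Because the synthetic field is essentially rank 2, two modes reconstruct it to roughly the noise floor; in a full reduced-order model the governing equations, not the snapshots, are projected onto `basis`.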
International Nuclear Information System (INIS)
Noh, Yeelyong; Chang, Kwangpil; Seo, Yutaek; Chang, Daejun
2014-01-01
This study proposes a new methodology that combines dynamic process simulation (DPS) and Monte Carlo simulation (MCS) to determine the design pressure of fuel storage tanks on LNG-fueled ships. Because the pressure of such tanks varies with time, DPS is employed to predict the pressure profile. Though equipment failure and subsequent repair affect transient pressure development, it is difficult to implement these features directly in the process simulation due to the randomness of the failure. To predict the pressure behavior realistically, MCS is combined with DPS. In MCS, discrete events are generated to create a lifetime scenario for a system. The combination of MCS with long-term DPS reveals the frequency of the exceedance pressure. The exceedance curve of the pressure provides risk-based information for determining the design pressure based on risk acceptance criteria, which may vary with different points of view. - Highlights: • The realistic operation scenario of the LNG FGS system is estimated by MCS. • In repeated MCS trials, the availability of the FGS system is evaluated. • The realistic pressure profile is obtained by the proposed methodology. • The exceedance curve provides risk-based information for determining design pressure
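The Monte Carlo half of the methodology, sample random failure/repair events over a lifetime and build an exceedance curve of peak pressures, can be sketched as below. The failure and repair rates, base pressure, and pressure ramp are hypothetical placeholders for what DPS would supply.

```python
import random

random.seed(1)

# Toy lifetime simulation: during each random equipment outage the tank
# pressure ramps up from a base value; we log the peak of every outage and
# read exceedance frequencies off the collected peaks. All rates and the
# linear ramp are hypothetical stand-ins for the dynamic process simulation.
MTTF, MTTR = 2000.0, 24.0   # mean time to failure / to repair, hours
BASE, RAMP = 4.0, 0.05      # base pressure (bar), rise per outage hour
LIFETIME = 20 * 8760.0      # 20-year ship lifetime, hours

def one_life():
    t, peaks = 0.0, []
    while True:
        t += random.expovariate(1.0 / MTTF)      # time to next failure
        if t >= LIFETIME:
            return peaks
        outage = random.expovariate(1.0 / MTTR)  # repair duration
        peaks.append(BASE + RAMP * outage)       # peak pressure of this outage
        t += outage

# Repeated MCS trials: 200 simulated lifetimes
peaks = [p for _ in range(200) for p in one_life()]

def exceedance_freq(p_design):
    # fraction of outage events whose peak exceeded a candidate design pressure
    return sum(1 for p in peaks if p > p_design) / len(peaks)

f_5bar = exceedance_freq(5.0)
```

Evaluating `exceedance_freq` over a range of candidate pressures traces the exceedance curve; the design pressure is then the value whose frequency meets the chosen risk acceptance criterion.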
A heterogeneous graph-based recommendation simulator
Energy Technology Data Exchange (ETDEWEB)
Yeonchan, Ahn [Seoul National University; Sungchan, Park [Seoul National University; Lee, Matt Sangkeun [ORNL; Sang-goo, Lee [Seoul National University
2013-01-01
Heterogeneous graph-based recommendation frameworks have flexibility in that they can incorporate various recommendation algorithms and various kinds of information to produce better results. In this demonstration, we present a heterogeneous graph-based recommendation simulator which enables participants to experience the flexibility of a heterogeneous graph-based recommendation method. With our system, participants can simulate various recommendation semantics by expressing the semantics via meaningful paths like User-Movie-User-Movie. The simulator then returns the recommendation results on the fly based on the user-customized semantics using a fast Monte Carlo algorithm.
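A Monte Carlo meta-path recommender of the kind described can be sketched with random walks that follow the User-Movie-User-Movie pattern. The tiny like-graph below is invented for illustration and is unrelated to the demonstration system's data.

```python
import random

random.seed(2)

# Toy heterogeneous graph: users connected to movies they like. The
# User-Movie-User-Movie meta-path recommends movies liked by users with
# overlapping taste. Data is hypothetical.
likes = {
    "u1": ["m1", "m2"],
    "u2": ["m2", "m3"],
    "u3": ["m1", "m3", "m4"],
}
liked_by = {}
for u, ms in likes.items():
    for m in ms:
        liked_by.setdefault(m, []).append(u)

def recommend(user, n_walks=5000):
    # Monte Carlo walks along User -> Movie -> User -> Movie; the visit
    # count of the terminal movie approximates the meta-path score.
    hits = {}
    for _ in range(n_walks):
        m1 = random.choice(likes[user])
        u2 = random.choice(liked_by[m1])
        m2 = random.choice(likes[u2])
        if m2 not in likes[user]:          # skip movies the user has seen
            hits[m2] = hits.get(m2, 0) + 1
    return sorted(hits, key=hits.get, reverse=True)

recs = recommend("u1")
```

Changing the meta-path (say, User-Movie-Genre-Movie) changes the recommendation semantics without changing the walk machinery, which is the flexibility the simulator demonstrates.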
Buschbaum, Jan; Fremd, Rainer; Pohlemann, Tim; Kristen, Alexander
2017-08-01
Reduction is a crucial step in the surgical treatment of bone fractures. Finding an optimal path for restoring anatomical alignment is considered technically demanding because collisions, as well as high forces caused by surrounding soft tissues, can prevent the desired reduction movements. The repetition of reduction movements leads to a trial-and-error process that prolongs the duration of surgery. By planning an appropriate reduction path-an optimal sequence of target-directed movements-these problems should be overcome. For this purpose, a computer-based method has been developed. Using the example of simple femoral shaft fractures, 3D models are generated from CT images. A reposition algorithm aligns both fragments by reconstructing their broken edges. According to the criteria of a deduced planning strategy, a modified A*-algorithm searches for a collision-free route of minimal force from the dislocated into the computed target position. Muscular forces are considered using a musculoskeletal reduction model (OpenSim model), and bone collisions are detected by an appropriate method. Five femoral SYNBONE models were broken into different fracture classification types and were automatically reduced from ten randomly selected displaced positions. The highest mean translational and rotational errors for achieving target alignment are [Formula: see text] and [Formula: see text]. The mean value and standard deviation of the occurring forces are [Formula: see text] for M. tensor fasciae latae and [Formula: see text] for M. semitendinosus over all trials. These pathways are precise and collision-free, the required forces are minimized, and they are thus regarded as optimal paths. A novel method for planning reduction paths under consideration of collisions and muscular forces is introduced. The results deliver additional knowledge for an appropriate tactical reduction procedure and can provide a basis for further navigated or robot-assisted developments.
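The modified A* idea, search for a minimal-force, collision-free route, can be illustrated on a 2D grid standing in for the displacement space. The weight field, collision cells, and start/goal below are entirely invented; the authors' 3D musculoskeletal cost model is not reproduced.

```python
import heapq

# A*-style search on a toy grid: cell weights mimic soft-tissue "force"
# penalties, -1 cells are collisions. Purely illustrative of the modified
# A* idea, not the paper's reduction model.
force = [
    [1, 1, 1, 9, 1],
    [1, -1, 1, 9, 1],
    [1, -1, 1, 1, 1],
    [1, 1, 1, -1, 1],
    [9, 9, 1, 1, 1],
]
start, goal = (0, 0), (4, 4)

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    # Manhattan distance is admissible here because every step costs >= 1
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path, g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] != -1:
                ng = g + grid[r][c]          # step cost = force penalty
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None, float("inf")

path, cost = astar(force, start, goal)
```

With step costs encoding forces and forbidden cells encoding collisions, the returned path is the discrete analogue of a collision-free route of minimal force.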
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the server parameter de/activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
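Since VIKOR is the method the study recommends, a compact sketch of its scoring is useful. The 3 × 3 decision matrix and weights below are hypothetical (all criteria treated as benefits); `v` is VIKOR's usual trade-off between group utility and individual regret.

```python
# Minimal VIKOR sketch on a hypothetical 3-alternative, 3-criterion problem.
X = [[7.0, 3.0, 5.0],
     [8.0, 2.0, 6.0],
     [6.0, 4.0, 7.0]]
w = [0.4, 0.3, 0.3]   # criterion weights
v = 0.5               # weight of group utility vs. individual regret

def vikor(X, w, v=0.5):
    n = len(X[0])
    f_best = [max(row[j] for row in X) for j in range(n)]
    f_worst = [min(row[j] for row in X) for j in range(n)]
    S, R = [], []
    for row in X:
        # weighted normalized distance to the best value, per criterion
        d = [w[j] * (f_best[j] - row[j]) / (f_best[j] - f_worst[j])
             for j in range(n)]
        S.append(sum(d))   # group utility
        R.append(max(d))   # individual (worst-criterion) regret
    s_min, s_max = min(S), max(S)
    r_min, r_max = min(R), max(R)
    # compromise index: lower Q is better
    return [v * (S[i] - s_min) / (s_max - s_min)
            + (1 - v) * (R[i] - r_min) / (r_max - r_min)
            for i in range(len(X))]

Q = vikor(X, w, v)
ranking = sorted(range(len(Q)), key=lambda i: Q[i])
```

In an ACE model each agent would run this scoring over its own local alternatives; the method's low per-call cost is what makes it attractive at the 10 000-agent scale the study targets.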
Directory of Open Access Journals (Sweden)
Mahsa Noori Asl
2013-01-01
Full Text Available Compton-scattered photons included within the photopeak pulse-height window result in the degradation of SPECT images both qualitatively and quantitatively. The purpose of this study is to evaluate and compare six scatter correction methods based on setting the energy windows in the 99mTc spectrum. SIMIND Monte Carlo simulation is used to generate the projection images from a cold-sphere hot-background phantom. For the evaluation of the different scatter correction methods, three assessment criteria are considered: image contrast, signal-to-noise ratio (SNR) and relative noise of the background (RNB). Except for the dual-photopeak window (DPW) method, the image contrast of the five cold spheres is improved in the range of 2.7-26%. Among the methods considered, two methods show a nonuniform correction performance. The RNB for all of the scatter correction methods ranges from a minimum of 0.03 for the DPW method to a maximum of 0.0727 for the triple-energy-window (TEW) method using trapezoidal approximation. Because of its ease of implementation, good improvement of the image contrast and the SNR for the five cold spheres, and low noise level, the TEW method using triangular approximation is proposed as the most appropriate correction method.
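The TEW estimate itself is a small formula: the scatter inside the photopeak window is approximated by a trapezoid (or triangle) whose edge heights come from two narrow flanking windows. The counts and window widths below are hypothetical per-pixel values, not the study's data.

```python
# Triple-energy-window (TEW) scatter estimate, trapezoidal vs. triangular
# approximation. Counts and widths are hypothetical per-pixel values.
C_peak = 1000.0              # counts in the photopeak window
C_low, C_up = 120.0, 30.0    # counts in the two narrow flanking sub-windows
W_peak, W_sub = 28.0, 3.0    # window widths in keV (e.g. a 20% window at 140 keV)

def tew_trapezoidal(c_low, c_up, w_sub, w_peak):
    # trapezoid spanning the photopeak window; edge heights estimated
    # from the count densities of the two flanking windows
    return (c_low / w_sub + c_up / w_sub) * w_peak / 2.0

def tew_triangular(c_low, w_sub, w_peak):
    # upper-window height assumed ~0: a triangle instead of a trapezoid
    return (c_low / w_sub) * w_peak / 2.0

scatter_trap = tew_trapezoidal(C_low, C_up, W_sub, W_peak)
scatter_tri = tew_triangular(C_low, W_sub, W_peak)
primary_trap = C_peak - scatter_trap   # scatter-corrected photopeak counts
primary_tri = C_peak - scatter_tri
```

The triangular variant always estimates less scatter than the trapezoidal one (it drops the upper-window term), which is consistent with its lower noise level reported in the abstract.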
Meshless Method for Simulation of Compressible Flow
Nabizadeh Shahrebabak, Ebrahim
In the present age, rapid development in computing technology and high-speed supercomputers has made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques. Mesh generation is an essential preprocessing step to discretize the computational domain for these conventional methods. However, when dealing with complex geometries, these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust, yet simple numerical approach is used to simulate problems in an easier manner, even for complex problems. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to help make this method more popular and understandable for everyone. These algorithms have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge was considerable. The most fundamental issue is lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high speed compressible flow
Jong-Ho Nam; Inha Park; Ho Jin Lee; Mi Ok Kwon; Kyungsik Choi; Young-Kyo Seo
2013-01-01
Ever since the Arctic region opened its mysterious passage to mankind, continuous attempts to take advantage of the fastest route across the region have been made. The Arctic region is still covered by thick ice, and thus finding a feasible navigation route is essential for an economical voyage. To find the optimal route, it is necessary to establish an efficient transit model that enables us to simulate every possible route in advance. In this work, an enhanced algorithm to determine the o...
Collaborative simulation method with spatiotemporal synchronization process control
Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian
2016-10-01
When designing a complex mechatronics system, such as a high-speed train, it is relatively difficult to effectively simulate the entire system's dynamic behaviors because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for the coupled simulation of a complex mechatronics system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can certainly be used to simulate the subsystems' interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high-speed train design and development processes, demonstrating that it can be applied to a wide range of engineering systems design and simulation with improved efficiency and effectiveness.
International Nuclear Information System (INIS)
Ishimoto, Takayoshi; Koyama, Michihisa
2012-01-01
Graphical abstract: A molecular dynamics method based on the multi-component molecular orbital method was applied to a basic hydrogen bonding system, H5O2+, and its isotopomers (D5O2+ and T5O2+). Highlights: ► A molecular dynamics method with the nuclear quantum effect was developed. ► The multi-component molecular orbital method was used for the ab initio MO calculation. ► The developed method was applied to a basic hydrogen bonding system, H5O2+, and its isotopomers. ► The O⋯O stretching vibration is reflected in the distribution of the protonic wavefunctions. ► The H/D/T isotope effect was also analyzed. - Abstract: We propose a molecular dynamics (MD) method based on the multi-component molecular orbital (MC_MO) method, which directly takes into account the quantum effect of the proton, for detailed analyses of proton transfer in hydrogen bonding systems. The MC_MO-based MD (MC_MO-MD) method is applied to the basic structure H5O2+ (the "Zundel ion") and its isotopomers (D5O2+ and T5O2+). We clearly demonstrate the geometrical difference in the hydrogen-bonded O⋯O distance induced by the H/D/T isotope effect: the O⋯O distance in the H-compound is longer than that in the D- or T-compound. We also find a strong relation between the O⋯O stretching vibration and the distribution of the hydrogen-bonded protonic wavefunction, because the protonic wavefunction tends to delocalize when the O⋯O distance becomes short during the dynamics. The proposed MC_MO-MD simulation is expected to be a powerful tool for analyzing proton dynamics in hydrogen bonding systems.
Simulation of tunneling construction methods of the Cisumdawu toll road
Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.
2017-11-01
Simulation can be used as a tool for the planning and analysis of a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation as well as problems that may be faced by the contractor. The method was modeled using CYCLONE and then simulated using WebCYCLONE. The simulation could show the duration of the project from the duration model of each work task, which is based on a literature review, machine productivity, and several assumptions. The results of the simulation could also show the total cost of the project, which was modeled based on construction and building unit-cost journals and the online websites of local and international suppliers. The analysis of the advantages and disadvantages of the method was conducted based on its productivity, waste, and cost. The simulation put the total cost of this operation at about Rp. 900,437,004,599 and the total duration of the tunneling operation at 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the already selected tunneling operation.
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
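The elementary canonical-ensemble MC machinery the review surveys can be illustrated with a single degree of freedom. The two-well torsional potential, temperature, and move width below are toy assumptions, not a biomacromolecular force field.

```python
import math, random

random.seed(3)

# Metropolis Monte Carlo in the canonical ensemble for one torsion angle
# with a toy two-well potential; the moveset is a random pivot of the angle.
def energy(phi):
    # E = 1.5*cos(phi) + 0.5*cos(2*phi): two symmetric minima near phi = ±2.42
    return 1.5 * math.cos(phi) + 0.5 * math.cos(2 * phi)

kT = 0.6                      # thermal energy, arbitrary units
phi, e = 0.0, energy(0.0)
samples = []
for step in range(20000):
    trial = phi + random.uniform(-0.5, 0.5)    # elementary move
    e_trial = energy(trial)
    # Metropolis criterion: accept downhill moves always, uphill moves
    # with Boltzmann probability exp(-dE/kT)
    if e_trial <= e or random.random() < math.exp(-(e_trial - e) / kT):
        phi, e = trial, e_trial
    if step >= 5000:                            # discard equilibration
        samples.append(phi)

mean_e = sum(energy(p) for p in samples) / len(samples)
```

Real biomacromolecular movesets replace the pivot with concerted backbone or sidechain moves, but the acceptance rule, the part that guarantees canonical sampling, is exactly this test.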
Methods for simulating turbulent phase screen
International Nuclear Information System (INIS)
Zhang Jianzhu; Zhang Feizhou; Wu Yi
2012-01-01
Several methods for simulating turbulent phase screens are summarized, and their characteristics are analyzed by calculating the phase structure function, decomposing phase screens into Zernike polynomials, and simulating laser propagation in the atmosphere. The analysis shows that phase screens simulated by the FFT method represent the high-frequency turbulent components well but contain little of the low-frequency components, while screens simulated by the Zernike method represent the low-frequency components well but not enough of the high-frequency components. The high-frequency content can be improved by increasing the order of the Zernike polynomials, but it mainly lies in the edge area. Compared with these two methods, the fractal method is a better way to simulate turbulent phase screens. Judging by the radius of the focal spot and the variance of the focal spot jitter, all of the methods except the fractal method have limitations. Combining the FFT method with the Zernike method, or combining the FFT method with self-similar theory, is an effective and appropriate way to simulate turbulent phase screens. In general, the fractal method is probably the best. (authors)
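The FFT method compared above can be sketched in a few lines: filter complex Gaussian noise with the square root of a turbulence power spectrum and inverse-transform. This is a generic sketch assuming a pure Kolmogorov phase spectrum (0.023 r0^(-5/3) f^(-11/3)); the grid size, Fried parameter r0, and spacing are illustrative, and the low-frequency deficiency noted in the abstract is inherent to this construction.

```python
import numpy as np

def fft_phase_screen(n=256, r0=0.1, dx=0.01, seed=0):
    """Turbulent phase screen via the FFT (spectral) method.

    n  : grid size in pixels
    r0 : Fried parameter [m] (assumed value)
    dx : grid spacing [m]
    Returns an n x n array of phase values in radians.
    """
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                      # spatial-frequency grid spacing [1/m]
    fx = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(fx, fx)
    f = np.hypot(fx, fy)
    f[0, 0] = np.inf                         # suppress the undefined DC component
    # Kolmogorov phase power spectral density
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return screen.real                       # imaginary part is a second screen

phi = fft_phase_screen()
```

Because the DC term is zeroed, the screen has zero mean; the Zernike or fractal methods mentioned in the abstract would be used where the missing low frequencies matter.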
Debernardi, Alberto; Fanciulli, Marco
Within the framework of the envelope function approximation we have computed - without adjustable parameters and with a reduced computational effort due to analytical expression of relevant Hamiltonian terms - the energy levels of the shallow P impurity in silicon and the hyperfine and superhyperfine splitting of the ground state. We have studied the dependence of these quantities on the applied external electric field along the [001] direction. Our results reproduce correctly the experimental splitting of the impurity ground states detected at zero electric field and provide reliable predictions for values of the field where experimental data are lacking. Further, we have studied the effect of confinement of a shallow state of a P atom at the center of a spherical Si-nanocrystal embedded in a SiO2 matrix. In our simulations the valley-orbit interaction of a realistically screened Coulomb potential and of the core potential are included exactly, within the numerical accuracy due to the use of a finite basis set, while band-anisotropy effects are taken into account within the effective-mass approximation.
Detector Simulation: Data Treatment and Analysis Methods
Apostolakis, J
2011-01-01
Detector Simulation in 'Data Treatment and Analysis Methods', part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B1: Detectors for Particles and Radiation. Part 1: Principles and Methods'. This document is part of Part 1 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '4.1 Detector Simulation' of Chapter '4 Data Treatment and Analysis Methods' with the content: 4.1 Detector Simulation 4.1.1 Overview of simulation 4.1.1.1 Uses of detector simulation 4.1.2 Stages and types of simulation 4.1.2.1 Tools for event generation and detector simulation 4.1.2.2 Level of simulation and computation time 4.1.2.3 Radiation effects and background studies 4.1.3 Components of detector simulation 4.1.3.1 Geometry modeling 4.1.3.2 External fields 4.1.3.3 Intro...
Isogeometric methods for numerical simulation
Bordas, Stéphane
2015-01-01
The book presents the state of the art in isogeometric modeling and shows how the method has advanced. First, an introduction to geometric modeling with NURBS and T-splines is given, followed by the implementation into computer software. The implementation in both the FEM and BEM is discussed.
International Nuclear Information System (INIS)
Xu, Kai-Jiang; Pan, Xiao-Min; Li, Ren-Xian; Sheng, Xin-Qing
2017-01-01
In optical trapping applications, the optical force should be investigated within a wide range of the parameter space of beam configurations to reach the desired performance. A simple but reliable way of conducting the related investigation is to evaluate the optical forces corresponding to all possible beam configurations. Although the optical force exerted on arbitrarily shaped particles can be well predicted by the boundary element method (BEM), such an investigation is time-consuming because it involves many repetitions of expensive computations, where the forces are calculated from the equivalent surface currents. An algorithm is proposed to alleviate the difficulty by exploiting our previously developed skeletonization framework. The proposed algorithm succeeds in reducing the number of repetitions. Since the number of skeleton beams is always much less than the number of beams in question, the computation can be very efficient. The proposed algorithm is accurate because the skeletonization is accuracy controllable. - Highlights: • A fast and accurate algorithm is proposed in terms of the boundary element method to reduce the number of repetitions of computing the optical forces from the equivalent currents. • The algorithm is accuracy controllable because the accuracy of the associated rank-revealing process is well controlled. • The acceleration rate can reach over one thousand because the number of skeleton beams can be very small. • The algorithm can be applied to other methods, e.g., FE-BI.
Directory of Open Access Journals (Sweden)
John A. Lees
2018-03-01
Background: Phylogenetic reconstruction is a necessary first step in many analyses which use whole genome sequence data from bacterial populations. There are many available methods to infer phylogenies, and these have various advantages and disadvantages, but few unbiased comparisons of the range of approaches have been made. Methods: We simulated data from a defined “true tree” using a realistic evolutionary model. We built phylogenies from these data using a range of methods, and compared the reconstructed trees to the true tree using two measures, noting the computational time needed for different phylogenetic reconstructions. We also used real data from Streptococcus pneumoniae alignments to compare individual core gene trees to a core genome tree. Results: We found that, as expected, maximum likelihood trees from good quality alignments were the most accurate, but also the most computationally intensive. Using less accurate phylogenetic reconstruction methods, we were able to obtain results of comparable accuracy; we found that approximate results can rapidly be obtained using genetic distance based methods. In real data we found that highly conserved core genes, such as those involved in translation, gave an inaccurate tree topology, whereas genes involved in recombination events gave inaccurate branch lengths. We also show a tree-of-trees, relating the results of different phylogenetic reconstructions to each other. Conclusions: We recommend three approaches, depending on requirements for accuracy and computational time. Quicker approaches that do not perform full maximum likelihood optimisation may be useful for many analyses requiring a phylogeny, as generating a high quality input alignment is likely to be the major limiting factor of accurate tree topology. We have publicly released our simulated data and code to enable further comparisons.
Evaluation of full-scope simulator testing methods
Energy Technology Data Exchange (ETDEWEB)
Feher, M P; Moray, N; Senders, J W; Biron, K [Human Factors North Inc., Toronto, ON (Canada)
1995-03-01
This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs.
Evaluation of full-scope simulator testing methods
International Nuclear Information System (INIS)
Feher, M.P.; Moray, N.; Senders, J.W.; Biron, K.
1995-03-01
This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs
International Nuclear Information System (INIS)
Ajzatulin, A.I.
2007-01-01
The factors affecting the design of full-scale simulation facilities, the simulation of the design database, and the application of digital computerized process control systems are studied. The paper describes problems caused by errors in the process system design data, as well as methodological problems of algorithm simulation. Based on the experience of designing the full-scale simulation facilities for the Tianwan NPP and the Kudankulam NPP, a procedure is put forward for developing new tools to simulate and elaborate algorithms for computerized process control systems based on the process system design data. The paper lists the basic components of the program system under development for simulation and design and describes their functions. The results of its introduction are briefly described. [ru
Pantelidis, Panteleimon; Staikoglou, Nikolaos; Paparoidamis, Georgios; Drosos, Christos; Karamaroudis, Stefanos; Samara, Athina; Keskinis, Christodoulos; Sideris, Michail; Giannakoulas, George; Tsoulfas, Georgios; Karagiannis, Asterios
2016-01-01
The integration of simulation-based learning (SBL) methods holds promise for improving the medical education system in Greece. The Applied Basic Clinical Seminar with Scenarios for Students (ABCS3) is a novel two-day SBL course that was designed by the Scientific Society of Hellenic Medical Students. The ABCS3 targeted undergraduate medical students and consisted of three core components: the case-based lectures, the ABCDE hands-on station, and the simulation-based clinical scenarios. The purpose of this study was to evaluate the general educational environment of the course, as well as the skills and knowledge acquired by the participants. Two sets of questions were distributed to the participants: the Dundee Ready Educational Environment Measure (DREEM) questionnaire and an internally designed feedback questionnaire (InEv). A multiple-choice examination was also distributed prior to the course and following its completion. A total of 176 participants answered the DREEM questionnaire, 56 the InEv, and 60 the MCQs. The overall DREEM score was 144.61 (±28.05) out of 200. Delegates who participated in both the case-based lectures and the interactive scenarios core components scored higher than those who only completed the case-based lecture session (P=0.038). The mean overall feedback score was 4.12 (±0.56) out of 5. Students scored significantly higher on the post-test than on the pre-test. The medical students reported positive opinions about their experiences and exhibited improvements in their clinical knowledge and skills.
Wang, Han; Dong, Xiao-Xi; Yang, Ji-Chun; Huang, He; Li, Ying-Xin; Zhang, Hai-Xia
2017-07-01
For predicting the temperature distribution within skin tissue in 980-nm laser-evoked potentials (LEPs) experiments, a five-layer finite element model (FEM-5) was constructed based on Pennes bio-heat conduction equation and the Lambert-Beer law. The prediction results of the FEM-5 model were verified by ex vivo pig skin and in vivo rat experiments. Thirty ex vivo pig skin samples were used to verify the temperature distribution predicted by the model. The output energy of the laser was 1.8, 3, and 4.4 J. The laser spot radius was 1 mm. The experiment time was 30 s. The laser stimulated the surface of the ex vivo pig skin beginning at 10 s and lasted for 40 ms. A thermocouple thermometer was used to measure the temperature of the surface and internal layers of the ex vivo pig skin, and the sampling frequency was set to 60 Hz. For the in vivo experiments, nine adult male Wistar rats weighing 180 ± 10 g were used to verify the prediction results of the model by tail-flick latency. The output energy of the laser was 1.4 and 2.08 J. The pulsed width was 40 ms. The laser spot radius was 1 mm. The Pearson product-moment correlation and Kruskal-Wallis test were used to analyze the correlation and the difference of data. The results of all experiments showed that the measured and predicted data had no significant difference (P > 0.05) and good correlation (r > 0.9). The safe laser output energy range (1.8-3 J) was also predicted. Using the FEM-5 model prediction, the effective pain depth could be accurately controlled, and the nociceptors could be selectively activated. The FEM-5 model can be extended to guide experimental research and clinical applications for humans.
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
Factorization method for simulating QCD at finite density
International Nuclear Information System (INIS)
Nishimura, Jun
2003-01-01
We propose a new method for simulating QCD at finite density. The method is based on a general factorization property of distribution functions of observables, and it is therefore applicable to any system with a complex action. The so-called overlap problem is completely eliminated by the use of constrained simulations. We test this method in a Random Matrix Theory for finite density QCD, where we are able to reproduce the exact results for the quark number density. (author)
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp 46-54.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
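The three sampling methods compared in the study can be reproduced in a few lines: generate a behavior record at fine resolution, split it into intervals, and score each interval per method. This is a generic sketch, not the authors' program; the observation length, interval size, and per-second event probability are illustrative assumptions.

```python
import numpy as np

def simulate_interval_sampling(duration=600, interval=10, p_event=0.2, seed=1):
    """Compare momentary time sampling (MTS), partial-interval recording (PIR),
    and whole-interval recording (WIR) on a random event stream.

    duration : observation period in seconds (must be divisible by interval)
    interval : sampling interval length in seconds
    p_event  : per-second probability that the target behavior is occurring
    """
    rng = np.random.default_rng(seed)
    stream = rng.random(duration) < p_event       # 1-second-resolution record
    true_prop = stream.mean()                     # true proportion of time
    ints = stream.reshape(-1, interval)           # one row per interval
    mts = ints[:, -1].mean()                      # MTS: score the last instant
    pir = ints.any(axis=1).mean()                 # PIR: any occurrence in interval
    wir = ints.all(axis=1).mean()                 # WIR: occupied the whole interval
    return {"true": true_prop, "MTS": mts, "PIR": pir, "WIR": wir}

est = simulate_interval_sampling()
```

The well-known biases the paper quantifies show up immediately: partial-interval recording overestimates the true proportion, whole-interval recording underestimates it, and momentary time sampling sits in between.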
Constraint methods that accelerate free-energy simulations of biomolecules.
Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A
2015-12-28
Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
Image based SAR product simulation for analysis
Domik, G.; Leberl, F.
1987-01-01
SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new product simulation method is described that also employs a real SAR input image for the image simulation; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used to verify the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.
DEFF Research Database (Denmark)
Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe
2016-01-01
It is important not only to collect epidemiologic data on HIV but also to fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population...
New method of fast simulation for a hadron calorimeter response
International Nuclear Information System (INIS)
Kul'chitskij, Yu.; Sutiak, J.; Tokar, S.; Zenis, T.
2003-01-01
In this work we present a new method for fast Monte Carlo simulation of a hadron calorimeter response. It is based on a three-dimensional parameterization of the hadronic shower obtained from ATLAS TILECAL test-beam data and GEANT simulations. A new approach to including the longitudinal fluctuations of the hadronic shower is described. The results of the fast simulation are in good agreement with the TILECAL experimental data.
International Nuclear Information System (INIS)
Zhang, Y X; Su, M; Hou, H C; Song, P F
2013-01-01
This research adopts a quasi-three-dimensional hydraulic design method for the impeller of a high specific speed mixed-flow pump, with the aim of verifying the hydraulic design method and improving hydraulic performance. Based on the theory of two families of stream surfaces, the direct problem is completed when the meridional flow field of the impeller is obtained by iteratively solving the continuity and momentum equations of the fluid. The inverse problem is completed using the meridional flow field calculated in the direct problem. After several iterations of the direct and inverse problems, the shape of the impeller and the flow field information are finally obtained once the result of the iteration satisfies the convergence criteria. Subsequently, the internal flow field of the designed pump is simulated using the RANS equations with the RNG k-ε two-equation turbulence model. The static pressure and streamline distributions at the symmetrical cross-section, the velocity vector distribution around the blades, and the reflux phenomenon are analyzed. The numerical results show that the quasi-three-dimensional hydraulic design method for a high specific speed mixed-flow pump improves the hydraulic performance and reveals the main characteristics of the internal flow of the mixed-flow pump, as well as providing a basis for judging the rationality of the hydraulic design and for improvement and optimization of the hydraulic model.
Agent-based simulation of animal behaviour
C.M. Jonker (Catholijn); J. Treur
1998-01-01
In this paper it is shown how animal behaviour can be simulated in an agent-based manner. Different models are shown for different types of behaviour, varying from purely reactive behaviour to pro-active, social and adaptive behaviour. The compositional development method for
2D PIM Simulation Based on COMSOL
DEFF Research Database (Denmark)
Wang, Xinbo; Cui, Wanzhao; Wang, Jingyu
2011-01-01
Passive intermodulation (PIM) is a problematic type of nonlinear distortion encountered in many communication systems. To analyze the PIM distortion resulting from material nonlinearity, a 2D PIM simulation method based on COMSOL is proposed in this paper. As an example, a rectangular wavegui...
Spectral Methods in Numerical Plasma Simulation
DEFF Research Database (Denmark)
Coutsias, E.A.; Hansen, F.R.; Huld, T.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded...
Evaluation of structural reliability using simulation methods
Directory of Open Access Journals (Sweden)
Baballëku Markel
2015-01-01
Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. Reliability concepts are rarely used explicitly in the design of structures, but the problems of structural engineering are better understood through them. Some of the main methods for estimating the probability of failure are exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo simulation is used in this paper because it offers a very good tool for estimating probability in multivariate functions. Complicated probability and statistics problems are solved through computer-aided simulation of a large number of tests. The procedure of structural reliability assessment for the bridge pier and its comparison with the partial factor method of the Eurocodes are demonstrated in this paper.
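The link between the probability of failure and the reliability index can be sketched with the simplest possible limit state, g = R - S (resistance minus load effect) with both variables normal. This is a textbook-style illustration, not the bridge pier model of the paper; all distribution parameters are assumed for the example, and the closed-form result pf = Φ(-β) is used as a check on the Monte Carlo estimate.

```python
import numpy as np
from math import erf, sqrt

def mc_failure_probability(mu_r=30.0, sig_r=3.0, mu_s=20.0, sig_s=2.0,
                           n=1_000_000, seed=2):
    """Monte Carlo estimate of pf = P(R - S < 0) for normal R and S.

    mu_r, sig_r : mean and std of resistance R (illustrative units)
    mu_s, sig_s : mean and std of load effect S
    Returns (pf_mc, pf_exact, beta).
    """
    rng = np.random.default_rng(seed)
    r = rng.normal(mu_r, sig_r, n)
    s = rng.normal(mu_s, sig_s, n)
    pf = np.mean(r - s < 0.0)                           # fraction of failed samples
    # exact result for this linear normal case, for comparison
    beta = (mu_r - mu_s) / sqrt(sig_r**2 + sig_s**2)    # reliability index
    pf_exact = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))     # pf = Phi(-beta)
    return pf, pf_exact, beta

pf, pf_exact, beta = mc_failure_probability()
```

For nonlinear, multivariate limit states no such closed form exists, which is exactly why the paper turns to Monte Carlo simulation.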
Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.
2017-05-01
Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors of safe EVA is proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of gestures and the cosmonaut's poses. When travelling in the "Follow Me" (master/slave) mode, a robot uses onboard tools to track the cosmonaut's position and movements, and builds its itinerary on the basis of these data. Interaction in the cosmonaut-robot system on the lunar surface differs significantly from that on the Earth's surface. For example, a man dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human being performs movements less accurately and makes mistakes more often. All this leads to new requirements for the convenient use of a man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication, it is necessary to provide options for duplicating commands at the task stages and for gesture recognition. New tools and techniques for space missions must be examined first in laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes methods for detecting and tracking the movements and recognizing the gestures of the cosmonaut during EVA, which can be used in the design of the human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. The simulation involves environment visualization and modeling of the use of the robot's "vision" to track a moving cosmonaut dressed in a spacesuit.
Natural tracer test simulation by stochastic particle tracking method
International Nuclear Information System (INIS)
Ackerer, P.; Mose, R.; Semra, K.
1990-01-01
Stochastic particle tracking methods are well adapted to 3D transport simulations where the discretization requirements of other methods usually cannot be satisfied. They do need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric head and velocity field. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over FD or FE are the simultaneous calculation of pressure and velocity, which are considered as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well adapted to particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
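The random-walk transport step itself is compact: each particle takes a deterministic advective drift plus a random dispersive jump per time step. The sketch below is a generic 1D advection-dispersion version, not the MHFEM-coupled 3D code of the paper; velocity, dispersion coefficient, and particle count are illustrative assumptions, and the known moments of the analytical solution (mean v·t, variance 2·D·t for a point source) serve as a check.

```python
import numpy as np

def random_walk_transport(n_particles=50_000, v=1.0, D=0.5, t_end=1.0,
                          dt=0.01, seed=3):
    """Random-walk particle tracking for 1D advection-dispersion.

    v  : (uniform) pore velocity, illustrative
    D  : dispersion coefficient, illustrative
    Each step: x += v*dt + sqrt(2*D*dt) * Z, with Z standard normal.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)                 # instantaneous point source at x = 0
    for _ in range(int(round(t_end / dt))):
        x += v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    return x

x = random_walk_transport()
# analytical moments at t_end: mean = v*t = 1.0, variance = 2*D*t = 1.0
```

In the paper's setting the uniform drift v·dt is replaced by the MHFEM velocity interpolated at each particle's position, which is why the continuity of the normal velocity component across elements matters.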
Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T
2018-02-01
The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBV_full), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Y_c), (v) correlation from method iv divided by the square root of the heritability (Y_ch) and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Y_cs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Y_ch approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBV_full performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
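Method (v) above, the Y_ch statistic, works because the correlation between predictions and phenotypes is attenuated by the square root of the heritability, so dividing by sqrt(h2) recovers the accuracy. The toy simulation below checks this on synthetic data; the heritability, true accuracy, and sample size are illustrative assumptions, not values from the study.

```python
import numpy as np

def validation_accuracy(n=2000, h2=0.3, true_acc=0.44, seed=4):
    """Sketch of method (v): corr(EBV, corrected phenotype) / sqrt(h2).

    All variables are simulated with unit genetic variance so that
    var(phenotype) = 1/h2 and corr(EBV, phenotype) = true_acc * sqrt(h2).
    """
    rng = np.random.default_rng(seed)
    tbv = rng.standard_normal(n)                       # true breeding values
    # predictions constructed to have the chosen true accuracy
    ebv = true_acc * tbv + np.sqrt(1.0 - true_acc**2) * rng.standard_normal(n)
    # corrected phenotype = genetic value + residual with var (1 - h2)/h2
    y = tbv + np.sqrt((1.0 - h2) / h2) * rng.standard_normal(n)
    return np.corrcoef(ebv, y)[0, 1] / np.sqrt(h2)     # Y_ch-style estimate

acc = validation_accuracy()
```

With a large validation set the estimate lands near the true accuracy, which is the paper's point that Y_ch is an adequate substitute when the PEV-based theoretical accuracy is unavailable.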
2-d Simulations of Test Methods
DEFF Research Database (Denmark)
Thrane, Lars Nyholm
2004-01-01
One of the main obstacles to the further development of self-compacting concrete is relating the fresh concrete properties to the form filling ability. Simulation of the form filling ability will therefore provide a powerful tool for obtaining this goal. In this paper, a continuum mechanical approach is presented by showing initial results from 2-d simulations of the empirical test methods slump flow and L-box. This method assumes a homogeneous material, which is expected to correspond to particle suspensions, e.g. concrete, when they remain stable. The simulations have been carried out using both a Newton and a Bingham model for characterisation of the rheological properties of the concrete. From the results, it is expected that both the slump flow and the L-box can be simulated quite accurately when the model is extended to 3-d and the concrete is characterised according to the Bingham...
Inversion based on computational simulations
International Nuclear Information System (INIS)
Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.
1998-01-01
A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal
Novel Methods for Electromagnetic Simulation and Design
2016-08-03
We developed the basis for high fidelity modeling software that can handle complicated, electrically large objects in a manner that is sufficiently fast to allow design by simulation. We also developed new methods for scattering from cavities in a...
Numerical methods in simulation of resistance welding
DEFF Research Database (Denmark)
Nielsen, Chris Valentin; Martins, Paulo A.F.; Zhang, Wenqi
2015-01-01
Finite element simulation of resistance welding requires coupling between mechanical, thermal and electrical models. This paper presents the numerical models and their couplings that are utilized in the computer program SORPAS. A mechanical model based on the irreducible flow formulation is utilized...... a resistance welding point of view, the most essential coupling between the above mentioned models is the heat generation by electrical current due to Joule heating. The interaction between multiple objects is another critical feature of the numerical simulation of resistance welding because it influences...... the contact area and the distribution of contact pressure. The numerical simulation of resistance welding is illustrated by a spot welding example that includes subsequent tensile shear testing...
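The essential electro-thermal coupling named above — Joule heating feeding back into a temperature-dependent resistance — can be sketched with a lumped, zero-dimensional model. This is a didactic sketch with invented material numbers, not the SORPAS formulation:

```python
def simulate_spot_weld(current_a, steps, dt, mass_kg, c_p, r0_ohm, alpha, t0_c=20.0):
    """Staggered electro-thermal coupling: the electrical model gives the
    Joule heat for the step, the thermal model updates temperature, and
    the updated temperature feeds back into the resistance.
    All parameter values are hypothetical."""
    t = t0_c
    history = []
    for _ in range(steps):
        r = r0_ohm * (1.0 + alpha * (t - t0_c))  # temperature-dependent resistance
        q = current_a**2 * r * dt                # Joule heat this step [J]
        t += q / (mass_kg * c_p)                 # lumped thermal update
        history.append(t)
    return history
```

With a positive temperature coefficient alpha, the resistance rises as the weld nugget heats, so each step deposits more heat than the last — the feedback the abstract identifies as the most essential coupling.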
Comparing three methods for participatory simulation of hospital work systems
DEFF Research Database (Denmark)
Broberg, Ole; Andersen, Simone Nyholm
Summative Statement: This study compared three participatory simulation methods using different simulation objects: a low-resolution table-top setup using Lego figures, full-scale mock-ups, and blueprints using Lego figures. It was concluded that the three objects differ in fidelity and affordance...... scenarios using the objects. Results: Full-scale mock-ups significantly addressed the local space and technology/tool elements of a work system. In contrast, the table-top simulation object addressed the organizational issues of the future work system. The blueprint-based simulation addressed...
Simulation-based summative assessments in surgery.
Szasz, Peter; Grantcharov, Teodor P; Sweet, Robert M; Korndorffer, James R; Pedowitz, Robert A; Roberts, Patricia L; Sachdeva, Ajit K
2016-09-01
The American College of Surgeons-Accredited Education Institutes (ACS-AEI) Consortium aims to enhance patient safety and advance surgical education through the use of cutting-edge simulation-based training and assessment methods. The annual ACS-AEI Consortium meeting provides a forum to discuss the latest simulation-based training and assessment methods and includes special panel presentations on key topics. During the 8th annual Consortium, there was a panel presentation on simulation-based summative assessments, during which experiences from across surgical disciplines were presented. The formal presentations were followed by a robust discussion between the conference attendees and the panelists. This report summarizes the panelists' presentations and their ensuing discussion with attendees. The focus of this report is on the basis for and advances in simulation-based summative assessments, the current practices employed across various surgical disciplines, and future directions that may be pursued by the ACS-AEI Consortium. Copyright © 2016 Elsevier Inc. All rights reserved.
Simulation teaching method in Engineering Optics
Lu, Qieni; Wang, Yi; Li, Hongbin
2017-08-01
We here introduce a pedagogical method of theoretical simulation as one major means of the teaching process of "Engineering Optics" in the course quality improvement action plan (Qc) in our school. Students, in groups of three to five, complete simulations of interference, diffraction, electromagnetism and polarization of light; each student is evaluated and scored in light of his or her performance in interviews between the teacher and the student, and each student can opt to be interviewed many times until satisfied with the score and learning. After three years of Qc practice, a remarkable teaching and learning effect has been obtained. Such theoretical simulation experiments are a valuable teaching method for physical optics, which is highly theoretical and abstruse. This teaching methodology works well in training students in how to ask questions and how to solve problems, which can also stimulate their interest in research learning and their initiative to develop self-confidence and a sense of innovation.
Bridging the gap: simulations meet knowledge bases
King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.
2003-09-01
Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Actions (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers making it an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.
Hybrid Method Simulation of Slender Marine Structures
DEFF Research Database (Denmark)
Christiansen, Niels Hørbye
The present thesis consists of an extended summary and five appended papers concerning various aspects of the implementation of a hybrid method which combines classical simulation methods and artificial neural networks. The thesis covers three main topics. Common to all these topics...... only recognize patterns similar to those comprised in the data used to train the network. Fatigue life evaluation of marine structures often considers simulations of more than a hundred different sea states. Hence, in order for this method to be useful, the training data must be arranged so...... that a single neural network can cover all relevant sea states. The applicability and performance of the present hybrid method is demonstrated on a numerical model of a mooring line attached to a floating offshore platform. The second part of the thesis demonstrates how sequential neural networks can be used...
A Simulation Method Measuring Psychomotor Nursing Skills.
McBride, Helena; And Others
1981-01-01
The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…
Plasma simulations using the Car-Parrinello method
International Nuclear Information System (INIS)
Clerouin, J.; Zerah, G.; Benisti, D.; Hansen, J.P.
1990-01-01
A simplified version of the Car-Parrinello method, based on the Thomas-Fermi (local density) functional for the electrons, is adapted to the simulation of the ionic dynamics in dense plasmas. The method is illustrated by an explicit application to a degenerate one-dimensional hydrogen plasma
A direct simulation method for flows with suspended paramagnetic particles
Kang, T.G.; Hulsen, M.A.; Toonder, den J.M.J.; Anderson, P.D.; Meijer, H.E.H.
2008-01-01
A direct numerical simulation method based on the Maxwell stress tensor and a fictitious domain method has been developed to solve flows with suspended paramagnetic particles. The numerical scheme enables us to take into account both hydrodynamic and magnetic interactions between particles in a
Multilevel panel method for wind turbine rotor flow simulations
van Garrel, Arne
2016-01-01
Simulation methods of wind turbine aerodynamics currently in use mainly fall into two categories: the first is the group of traditional low-fidelity engineering models and the second is the group of computationally expensive CFD methods based on the Navier-Stokes equations. For an engineering
A simulation method for lightning surge response of switching power
International Nuclear Information System (INIS)
Wei, Ming; Chen, Xiang
2013-01-01
In order to meet the need of protection design against lightning surge, a prediction method for lightning electromagnetic pulse (LEMP) response based on system identification is presented. Surge injection experiments on a switching power supply were conducted, and the input and output data were sampled, de-noised and de-trended. In addition, the model of the energy coupling transfer function was obtained by the system identification method. Simulation results show that the system identification method can predict the surge response of a linear circuit well. The method proposed in the paper provides a convenient and effective technique for simulation of lightning effects.
The simulation of CAMAC system based on Windows API
International Nuclear Information System (INIS)
Li Lei; Song Yushou; Xi Yinyin; Yan Qiang; Liu Huilan; Li Taosheng
2012-01-01
Based on the Windows API, a design method to simulate the CAMAC system, which is commonly used in nuclear physics experiments, is developed. Using C++ object-oriented programming, the simulation is carried out in the Visual Studio 2010 environment, and the interfaces, the dataway, the control commands and the modules are simulated with functions either user-defined or from the Windows API. Applying this method, the amplifier module AMP575A produced by ORTEC is simulated and performance experiments are studied for this simulation module. The results indicate that the simulation module can fulfill the function of pole-zero adjustment, which means this method is competent for the simulation of the CAMAC system. Compared with simulation based on LabVIEW, this approach is more flexible and closer to the bottom layer of the system. The work above paves the way for a virtual instrument platform based on the CAMAC system. (authors)
International Nuclear Information System (INIS)
Chijimatsu, Masakazu; Koyama, Tomofumi; Shimizu, Hiroyuki; Nakama, Shigeo; Fujita, Tomoo
2013-01-01
DECOVALEX-2011 is an international cooperation project for enhancing the numerical models of radioactive waste repositories. In the DECOVALEX-2011 project, the failure mechanism during the excavation and heating processes observed in the Aespoe pillar stability experiment, which was carried out at the Aespoe Hard Rock Laboratory by the Swedish Nuclear Fuel and Waste Management Company, was simulated using the Finite Element Method. When the calibrated parameters were used, the simulation results agreed qualitatively well with the experimental results. It can therefore be said that the spalling phenomenon can be reproduced even with a continuum model when suitable parameters are used. (author)
A method for ensemble wildland fire simulation
Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain
2011-01-01
An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...
Gu, Yu; Zhang, Xu; Chen, Yan-Kun; Zhao, Bo-Wen; Zhang, Yan-Ling
2017-12-01
5-lipoxygenase (5-LOX) and leukotriene A4 hydrolase (LTA4H), as the major targets of the 5-LOX branch of the arachidonic acid (AA) metabolic pathway, play an important role in the treatment of inflammation. Rhei Radix et Rhizoma, Notopterygii Rhizoma et Radix and Gentianae Macrophyllae Radix have clear anti-inflammation activities. In this paper, the targets 5-LOX and LTA4H were used as the research carrier, and the HipHop module in Discovery Studio 4.0 (DS 4.0) was used to construct an ingredient database for preliminary screening of the three traditional Chinese medicines based on target-inhibitor pharmacophores, so as to obtain potential 5-LOX and LTA4H active ingredients. The ingredients obtained in the initial pharmacophore screening were further screened by using the CDOCKER module, and screening rules were established based on the score of the initial compound and the key amino acids, to obtain 12 potential 5-LOX inhibitors and 7 potential LTA4H inhibitors. More specifically, the potential 5-LOX inhibitors included 6 ingredients in Rhei Radix et Rhizoma, such as procyanidins B2-3,3'-O-double gallate and revandchinone 2, and four ingredients in Notopterygii Rhizoma et Radix, such as dodecanoic acid. The potential LTA4H inhibitors included revandchinone 1 and revandchinone 4 in Rhei Radix et Rhizoma; tridecanoic acid, tetracosanoic acid and methyl eicosanoate in Notopterygii Rhizoma et Radix; and montanic acid methyl ester and N-docosanoyl-O-aminobenzoate in Gentianae Macrophyllae Radix. The molecular simulation methods were highly efficient and time-saving in obtaining the potential inhibitors of 5-LOX and LTA4H, which could assist in discovering the chemical quality indicators of the anti-inflammatory efficacy of the three Chinese herbs, and may help promote whole-process quality control of the three herbs. Copyright© by the Chinese Pharmaceutical Association.
Simulation methods for nuclear production scheduling
International Nuclear Information System (INIS)
Miles, W.T.; Markel, L.C.
1975-01-01
Recent developments and applications of simulation methods for use in nuclear production scheduling and fuel management are reviewed. The unique characteristics of the nuclear fuel cycle, as they relate to the overall optimization of a mixed nuclear-fossil system in both the short- and mid-range time frames, are described. Emphasis is placed on the various formulations of and approaches to the mid-range planning problem, whose objective is the determination of an optimal (least-cost) system operation strategy over a multi-year planning horizon. The decomposition of the mid-range problem into power system simulation, reactor core simulation and nuclear fuel management optimization, and system integration models is discussed. Present utility practices, requirements, and research trends are described. 37 references
Architecture oriented modeling and simulation method for combat mission profile
Directory of Open Access Journals (Sweden)
CHEN Xia
2017-05-01
Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definition from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form the mission profile executable model. Finally, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides methodological guidance for combat mission profile design.
International Nuclear Information System (INIS)
Titt, U.; Newhauser, W. D.
2005-01-01
Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility by using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results, for which solutions have not converged, e.g. owing to too few particle histories, can be excluded, and deterministic models are being used at these locations. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of Monte Carlo simulation to be taken into account, and reject all solutions with uncertainties larger than the design safety margins. In this study, the optimum rejection criterion of 10% was found. The mean ratio was 26, 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)
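The 10% rejection criterion described above reduces to a simple rule: keep a Monte Carlo tally only when its relative standard error is below the threshold, and fall back to the deterministic model otherwise. A minimal sketch with invented sample data (not the study's shielding code):

```python
import math

def tally_stats(samples):
    """Mean and relative standard error of a Monte Carlo tally."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean)**2 for x in samples) / (n - 1)
    sem = math.sqrt(var / n)
    rel = sem / mean if mean != 0 else float('inf')
    return mean, rel

def dose_estimate(mc_samples, analytical_value, max_rel_err=0.10):
    """Use the Monte Carlo mean where it has converged (relative error
    within the rejection criterion); otherwise fall back to the
    deterministic model, as the shielding study suggests."""
    mean, rel = tally_stats(mc_samples)
    if rel <= max_rel_err:
        return mean, 'monte-carlo'
    return analytical_value, 'deterministic'
```

A converged tally (tight spread) is accepted; a tally with only a few scattered histories exceeds the 10% criterion and is rejected in favor of the deterministic estimate.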
Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.
2008-10-01
The β-microprobe is a simple and versatile technique complementary to small animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of the 511 keV gamma-ray background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters which are supposed to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG; and blood flow measurements with H215O in the somatosensory cortex), we have calculated the detection efficiencies and the corresponding contribution of 511 keV gammas from accumulation in peripheral organs. We confirmed that the 511 keV gamma background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is
Lagrangian numerical methods for ocean biogeochemical simulations
Paparella, Francesco; Popolizio, Marina
2018-05-01
We propose two closely-related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible. This is commonplace in ocean flows. Our methods consist in augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion, or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow one to tune the strength of the diffusive terms down to zero, while avoiding unwanted numerical dissipation effects.
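The core of the approach — characteristics for advection plus pairwise couplings that mimic diffusion — can be sketched in one dimension. This is an illustrative reconstruction, not the authors' scheme; the antisymmetric pair fluxes conserve mass exactly and vanish when the diffusivity is set to zero, matching the properties claimed in the abstract:

```python
def step(positions, conc, velocity, dt, kappa, radius):
    """One step of a Lagrangian advection-diffusion sketch:
    advect particles along characteristics, then exchange tracer
    between nearby particle pairs with antisymmetric fluxes.
    All parameters here are hypothetical."""
    n = len(positions)
    # 1. advection: move each particle with the local flow velocity
    new_pos = [positions[i] + dt * velocity(positions[i]) for i in range(n)]
    # 2. diffusion: pairwise exchange between particles closer than `radius`
    flux = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if abs(new_pos[i] - new_pos[j]) < radius:
                f = kappa * dt * (conc[j] - conc[i])  # antisymmetric pair flux
                flux[i] += f
                flux[j] -= f
    new_conc = [conc[i] + flux[i] for i in range(n)]
    return new_pos, new_conc
```

Because every exchange adds +f to one particle and -f to its partner, total mass is conserved to rounding error, and a particle at the local maximum only loses tracer (the discrete maximum principle), provided kappa*dt is small enough.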
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Simulating colloid hydrodynamics with lattice Boltzmann methods
International Nuclear Information System (INIS)
Cates, M E; Stratford, K; Adhikari, R; Stansell, P; Desplat, J-C; Pagonabarraga, I; Wagner, A J
2004-01-01
We present a progress report on our work on lattice Boltzmann methods for colloidal suspensions. We focus on the treatment of colloidal particles in binary solvents and on the inclusion of thermal noise. For a benchmark problem of colloids sedimenting and becoming trapped by capillary forces at a horizontal interface between two fluids, we discuss the criteria for parameter selection, and address the inevitable compromise between computational resources and simulation accuracy
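Stripped of colloids and binary solvents, the lattice Boltzmann machinery reduces to a collide-and-stream loop. A minimal one-dimensional (D1Q3) sketch for pure diffusion with BGK relaxation — illustrative only; the colloid work above uses far richer models:

```python
def lbm_diffusion(rho0, tau, steps):
    """Minimal D1Q3 lattice Boltzmann sketch for pure diffusion on a
    periodic 1D lattice: BGK collision toward f_eq = w_i * rho, then
    streaming. The diffusivity is cs^2*(tau - 0.5) with cs^2 = 1/3."""
    n = len(rho0)
    w = [2.0/3.0, 1.0/6.0, 1.0/6.0]          # weights: rest, +1, -1 directions
    f = [[w[i]*r for r in rho0] for i in range(3)]
    for _ in range(steps):
        rho = [f[0][k] + f[1][k] + f[2][k] for k in range(n)]
        # BGK collision: relax each population toward equilibrium
        for i in range(3):
            for k in range(n):
                f[i][k] += (w[i]*rho[k] - f[i][k]) / tau
        # streaming with periodic wrap-around
        f[1] = [f[1][(k-1) % n] for k in range(n)]
        f[2] = [f[2][(k+1) % n] for k in range(n)]
    return [f[0][k] + f[1][k] + f[2][k] for k in range(n)]
```

Collision conserves the density at each site and streaming merely moves populations, so total mass is conserved; an initial spike spreads out symmetrically, as diffusion should.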
An improved method for simulating radiographs
International Nuclear Information System (INIS)
Laguna, G.W.
1986-01-01
The parameters involved in generating actual radiographs and what can and cannot be modeled are examined in this report. Using the spectral distribution of the radiation source and the mass absorption curve for the material comprising the part to be modeled, the actual amount of radiation that would pass through the part and reach the film is determined. This method increases confidence in the results of the simulation and enables the modeling of parts made of multiple materials
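The combination of source spectrum and attenuation data described above is, at its core, a polychromatic Beer-Lambert sum. A sketch with invented spectra and coefficients (a real simulation would use tabulated attenuation data for each material):

```python
import math

def transmitted_intensity(spectrum, mu, thickness_cm):
    """Polychromatic Beer-Lambert transmission: attenuate each spectral
    bin by its own coefficient, then sum over the spectrum.
    `spectrum` maps energy [keV] -> relative fluence; `mu` maps
    energy [keV] -> linear attenuation coefficient [1/cm]."""
    return sum(s * math.exp(-mu[e] * thickness_cm) for e, s in spectrum.items())

def transmitted_multimaterial(spectrum, layers):
    """`layers` is a list of (mu_dict, thickness_cm), one per material
    in the beam path; the exponents add, so the layers multiply.
    This is how parts made of multiple materials can be handled."""
    out = 0.0
    for e, s in spectrum.items():
        out += s * math.exp(-sum(mu[e] * t for mu, t in layers))
    return out
```

A side effect the model captures is beam hardening: with an energy-dependent mu, the transmitted fraction through 2 cm exceeds the square of the fraction through 1 cm, because the softer part of the spectrum is preferentially absorbed first.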
Simulation of the acoustic wave propagation using a meshless method
Directory of Open Access Journals (Sweden)
Bajko J.
2017-01-01
Full Text Available This paper presents numerical simulations of the acoustic wave propagation phenomenon modelled via the Linearized Euler equations. A meshless method based on collocation of the strong form of the equation system is adopted. Moreover, the Weighted Least Squares method is used for local approximation of derivatives, as well as for a stabilization technique in the form of spatial filtering. The accuracy and robustness of the method are examined on several benchmark problems.
Numerical simulation methods for wave propagation through optical waveguides
International Nuclear Information System (INIS)
Sharma, A.
1993-01-01
The simulation of the field propagation through waveguides requires numerical solutions of the Helmholtz equation. For this purpose a method based on the principle of orthogonal collocation was recently developed. The method is also applicable to nonlinear pulse propagation through optical fibers. Some of the salient features of this method and its application to both linear and nonlinear wave propagation through optical waveguides are discussed in this report. 51 refs, 8 figs, 2 tabs
Spectral methods in numerical plasma simulation
International Nuclear Information System (INIS)
Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.
1989-01-01
An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
Electromagnetic simulation using the FDTD method
Sullivan, Dennis M
2013-01-01
A straightforward, easy-to-read introduction to the finite-difference time-domain (FDTD) method Finite-difference time-domain (FDTD) is one of the primary computational electrodynamics modeling techniques available. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run and treat nonlinear material properties in a natural way. Written in a tutorial fashion, starting with the simplest programs and guiding the reader up from one-dimensional to the more complex, three-dimensional programs, this book provides a simple, yet comp
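In the tutorial spirit of the book, the simplest one-dimensional FDTD program fits in a dozen lines: a leapfrog update of E and H on a staggered grid with a Gaussian source. This sketch uses normalized units, a Courant factor of 0.5, free space and no absorbing boundaries (the grid sizes and pulse parameters are our choices, not the book's):

```python
import math

def fdtd_1d(nsteps, size=200, src=100):
    """Minimal 1D FDTD in free space, normalized units: leapfrog
    update of Ex and Hy on a staggered grid, Gaussian soft source."""
    ex = [0.0] * size
    hy = [0.0] * size
    for n in range(nsteps):
        for k in range(1, size):                 # E update from old H
            ex[k] += 0.5 * (hy[k-1] - hy[k])
        ex[src] += math.exp(-0.5 * ((n - 30) / 8.0) ** 2)   # Gaussian pulse
        for k in range(size - 1):                # H update from new E
            hy[k] += 0.5 * (ex[k] - ex[k+1])
    return ex, hy
```

Because the time-domain pulse contains a whole band of frequencies, this single run characterizes the system over that band — the wide-frequency-range property the abstract highlights.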
Method of simulating dose reduction for digital radiographic systems
International Nuclear Information System (INIS)
Baath, M.; Haakansson, M.; Tingberg, A.; Maansson, L. G.
2005-01-01
The optimisation of image quality vs. radiation dose is an important task in medical imaging. To obtain maximum validity of the optimisation, it must be based on clinical images. Images at different dose levels can then be obtained either by collecting patient images at the different dose levels under investigation - requiring additional exposures and permission from an ethics committee - or by manipulating images to simulate different dose levels. The aim of the present work was to develop a method of simulating dose reduction for digital radiographic systems. The method uses information about the detective quantum efficiency and noise power spectrum at the original and simulated dose levels to create an image containing filtered noise. When added to the original image, this results in an image with noise which, in terms of frequency content, agrees with the noise present in an image collected at the simulated dose level. To increase the validity, the method takes local dose variations in the original image into account. The method was tested on a computed radiography system and was shown to produce images with noise behaviour similar to that of images actually collected at the simulated dose levels. The method can, therefore, be used to modify an image collected at one dose level so that it simulates an image of the same object collected at any lower dose level. (authors)
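Dropping the frequency dependence (DQE and noise power spectrum) for clarity, the essence of the method is additive noise whose variance makes up the difference between the two dose levels: if pixel noise variance scales as 1/dose, simulating a fraction r of the dose needs added variance sigma^2*(1/r - 1). A white-noise simplification of the paper's filtered-noise method:

```python
import random

def simulate_dose_reduction(image, sigma_full, dose_fraction, seed=0):
    """White-noise sketch of dose-reduction simulation: the full-dose
    image already carries noise of std `sigma_full`; an image at
    `dose_fraction` of the dose needs added zero-mean noise of
    variance sigma_full**2 * (1/dose_fraction - 1).
    (The published method additionally filters this noise to match
    the measured noise power spectrum.)"""
    rng = random.Random(seed)
    extra = sigma_full * ((1.0 / dose_fraction) - 1.0) ** 0.5
    return [p + rng.gauss(0.0, extra) for p in image]
```

For a quarter-dose simulation with sigma_full = 2, the added noise has std 2*sqrt(3) ≈ 3.46, so the combined noise std becomes 2/sqrt(0.25) = 4, as expected at a quarter of the dose.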
Performance evaluation of sea surface simulation methods for target detection
Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi
2017-11-01
With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve the detection performance. Many features can be learned from training images by machines automatically. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key point for achieving high fidelity. In this paper, two spectrum-based height field generation methods are evaluated. A comparison between the linear superposition and linear filter methods is made quantitatively with a statistical model. 3D ocean scene simulation results show the differences between the methods, which can serve as a reference for synthesizing sea surface target images under different ocean conditions.
Simulation of quantum systems by the tomography Monte Carlo method
International Nuclear Information System (INIS)
Bogdanov, Yu I
2007-01-01
A new method of statistical simulation of quantum systems is presented which is based on the generation of data by the Monte Carlo method and their purposeful tomography with energy minimisation. The numerical solution of the problem is based on the optimisation of a target functional providing a compromise between maximisation of the statistical likelihood function and energy minimisation. The method does not involve complicated and ill-posed multidimensional computational procedures and can be used to calculate the wave functions and energies of the ground and excited stationary states of complex quantum systems. Applications of the method are illustrated. (Fifth Seminar in Memory of D.N. Klyshko)
Dynamical simulation of heavy ion collisions; VUU and QMD method
International Nuclear Information System (INIS)
Niita, Koji
1992-01-01
We review two simulation methods based on the Vlasov-Uehling-Uhlenbeck (VUU) equation and Quantum Molecular Dynamics (QMD), which are the most widely accepted theoretical framework for the description of intermediate-energy heavy-ion reactions. We show some results of the calculations and compare them with the experimental data. (author)
Interactive methods for exploring particle simulation data
Energy Technology Data Exchange (ETDEWEB)
Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.
2004-05-01
In this work, we visualize high-dimensional particle simulation data using a suite of scatter plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to connect new visual representations of the simulation data with traditional, well-understood visualizations. This approach supports the interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.
Physics-Based Simulations of Natural Hazards
Schultz, Kasey William
Earthquakes and tsunamis are some of the most damaging natural disasters that we face. Just two recent events, the 2004 Indian Ocean earthquake and tsunami and the 2010 Haiti earthquake, claimed more than 400,000 lives. Despite their catastrophic impacts on society, our ability to predict these natural disasters is still very limited. The main challenge in studying the earthquake cycle is the non-linear and multi-scale properties of fault networks. Earthquakes are governed by physics across many orders of magnitude of spatial and temporal scales; from the scale of tectonic plates and their evolution over millions of years, down to the scale of rock fracturing over milliseconds to minutes at the sub-centimeter scale during an earthquake. Despite these challenges, there are useful patterns in earthquake occurrence. One such pattern, the frequency-magnitude relation, relates the number of large earthquakes to small earthquakes and forms the basis for assessing earthquake hazard. However, the utility of these relations is proportional to the length of our earthquake records, and typical records span at most a few hundred years. Utilizing physics-based interactions and techniques from statistical physics, earthquake simulations provide rich earthquake catalogs allowing us to measure otherwise unobservable statistics. In this dissertation I will discuss five applications of physics-based simulations of natural hazards, utilizing an earthquake simulator called Virtual Quake. The first is an overview of computing earthquake probabilities from simulations, focusing on the California fault system. The second uses simulations to help guide satellite-based earthquake monitoring methods. The third presents a new friction model for Virtual Quake and describes how we tune simulations to match reality. The fourth describes the process of turning Virtual Quake into an open source research tool. This section then focuses on a resulting collaboration using Virtual Quake for a detailed
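The frequency-magnitude relation mentioned above is the Gutenberg-Richter law, log10 N(>=M) = a - b*M, and its b-value is commonly estimated from a catalog with the Aki maximum-likelihood formula. A self-contained sketch on a synthetic catalog (unrelated to the Virtual Quake code itself; the catalog parameters are invented):

```python
import math
import random

def b_value(magnitudes, m_min):
    """Aki maximum-likelihood estimate of the Gutenberg-Richter b-value:
    log10 N(>=M) = a - b*M  =>  b ≈ log10(e) / (mean(M) - m_min),
    for a catalog complete above the cutoff magnitude m_min."""
    ms = [m for m in magnitudes if m >= m_min]
    return math.log10(math.e) / (sum(ms) / len(ms) - m_min)

def synthetic_catalog(n, b, m_min, seed=0):
    """Inverse-transform sampling of the G-R magnitude distribution:
    P(M >= m) = 10**(-b*(m - m_min)), so M = m_min - log10(U)/b
    for U uniform on (0, 1]."""
    rng = random.Random(seed)
    return [m_min - math.log10(1.0 - rng.random()) / b for _ in range(n)]
```

A simulator-generated catalog, being far longer than the few hundred years of instrumental records, pins the b-value down much more tightly — which is the statistical payoff of simulated catalogs that the dissertation describes.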
A new method for simulating human emotions
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
How to make machines express emotions would be instrumental in establishing a completely new paradigm for man-machine interaction. A new method for simulating and assessing artificial psychology has been developed for research on emotion robots. The human psychological activity is regarded as a Markov process. An emotion space and a psychology model are constructed based on the Markov process. The concept of emotion entropy is presented to assess the complexity of artificial emotion. The simulation results accord well with human psychological activity. This model can also be applied to consumer-friendly human-computer interfaces, interactive video, etc.
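A Markov-process emotion model of the kind described can be sketched as a small transition matrix over an emotion space, with the Shannon entropy of a transition row serving as one possible "emotion entropy" complexity measure. The states and probabilities below are invented for illustration:

```python
import math
import random

EMOTIONS = ['calm', 'happy', 'sad', 'angry']   # hypothetical emotion space
# Row-stochastic transition matrix: P[i][j] = P(next = j | current = i)
P = [[0.70, 0.15, 0.10, 0.05],
     [0.20, 0.70, 0.05, 0.05],
     [0.25, 0.05, 0.60, 0.10],
     [0.30, 0.05, 0.15, 0.50]]

def next_state(i, rng):
    """Sample the next emotion from row i of the transition matrix."""
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[i]):
        acc += p
        if u < acc:
            return j
    return len(P[i]) - 1

def emotion_entropy(i):
    """Shannon entropy (bits) of the transition row for state i: one way
    to quantify the complexity of the artificial emotion dynamics."""
    return -sum(p * math.log2(p) for p in P[i] if p > 0)
```

A chain simulated from this matrix wanders through all four emotions; a row entropy of 0 would mean a fully predictable emotional response, while 2 bits (for four states) would mean maximal unpredictability.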
RELAP5 based engineering simulator
International Nuclear Information System (INIS)
Charlton, T.R.; Laats, E.T.; Burtt, J.D.
1990-01-01
The INEL Engineering Simulation Center was established in 1988 to provide a modern, flexible, state-of-the-art simulation facility. This facility and two of the major projects which are part of the simulation center, the Advanced Test Reactor (ATR) engineering simulator project and the Experimental Breeder Reactor II (EBR-II) advanced reactor control system, have been the subject of several papers in the past few years. Two components of the ATR engineering simulator project, RELAP5 and the Nuclear Plant Analyzer (NPA), have recently been improved significantly. This paper presents an overview of the INEL Engineering Simulation Center and discusses the RELAP5/MOD3 and NPA/MOD1 codes, specifically how they are being used at the center. It provides an update on the modifications to these two codes and their application to the ATR engineering simulator project, as well as a discussion of the reactor system representation, control system modeling, and two-phase flow and heat transfer modeling. It also discusses how these two codes are providing desktop, stand-alone reactor simulation. 12 refs., 2 figs
Comparison of validation methods for forming simulations
Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus
2018-05-01
The forming simulation of fibre-reinforced thermoplastics can reduce development time and improve forming results. But to exploit the full potential of the simulations, it has to be ensured that the predictions of material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are, for example, the outer contour, the occurrence of defects and the fibre paths. Various methods are available to measure these features. Most relevant, and also most difficult to measure, are the emerging fibre orientations; for that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and to select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system, with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass-fibre and carbon-fibre reinforced thermoplastics were measured, revealing the advantages and disadvantages of the tested systems. Optical measurement systems are easy to use but are limited to the surface plies. With an eddy current system, lower plies can also be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system, all plies can be measured, but the system is limited to small parts and challenging to evaluate.
Adaptive implicit method for thermal compositional reservoir simulation
Energy Technology Data Exchange (ETDEWEB)
Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)
2008-10-15
As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam-assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The most commonly used technique for solving these equations is the fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation. However, it is computationally expensive. On the other hand, the method known as IMplicit pressure, explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between timestep size and computational cost, the thermal adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, whereby some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes: the stability criteria that dictate the maximum allowed timestep size, based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented, along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.
DEFF Research Database (Denmark)
Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G
2016-01-01
a homeostatic set point that follows a normal (Gaussian) distribution. This set point (or baseline in steady-state) should be estimated from a set of previous samples, but, in practice, decisions based on reference change value are often based on only two consecutive results. The original reference change value......-positive results. The aim of this study was to investigate false-positive results using five different published methods for calculation of reference change value. METHODS: The five reference change value methods were examined using normally and ln-normally distributed simulated data. RESULTS: One method performed...... best in approaching the theoretical false-positive percentages on normally distributed data and another method performed best on ln-normally distributed data. The commonly used reference change value method based on two results (without use of estimated set point) performed worst both on normally...
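The classic reference change value formula that this entry builds on is RCV = sqrt(2) * Z * sqrt(CV_A^2 + CV_I^2), combining analytical and within-subject variation. A minimal simulation, with assumed CVs and a two-sided 95% Z, shows why roughly 5% of steady-state result pairs exceed the RCV purely by chance (the false positives the study quantifies):

```python
import numpy as np

rng = np.random.default_rng(0)
cv_a, cv_i = 0.03, 0.05          # analytical and within-subject CVs (assumed values)
z = 1.96                         # two-sided 95% coverage
rcv = np.sqrt(2) * z * np.sqrt(cv_a**2 + cv_i**2)   # classic RCV formula

# Simulate many subjects in steady state: each result varies around a
# homeostatic set point of 100 with the combined CV; then compare two
# consecutive results per subject, as routine practice does.
set_point = 100.0
sd = set_point * np.sqrt(cv_a**2 + cv_i**2)
x1 = rng.normal(set_point, sd, 100_000)
x2 = rng.normal(set_point, sd, 100_000)
pct_change = np.abs(x2 - x1) / set_point
false_positive_rate = (pct_change > rcv).mean()
print(round(false_positive_rate, 3))   # near the nominal 5%
```

The study's point is that methods estimating the set point from several previous samples, or assuming ln-normal data, change how closely the realized false-positive rate matches this nominal figure.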
Efficient method for transport simulations in quantum cascade lasers
Directory of Open Access Journals (Sweden)
Maczka Mariusz
2017-01-01
Full Text Available An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine selected transport parameters in a typical structure of a terahertz laser. Results were compared with those obtained for an infinite model as well as with other methods described in the literature.
Preview-based sampling for controlling gaseous simulations
Huang, Ruoguan
2011-01-01
In this work, we describe an automated method for directing the control of a high resolution gaseous fluid simulation based on the results of a lower resolution preview simulation. Small variations in accuracy between low and high resolution grids can lead to divergent simulations, which is problematic for those wanting to achieve a desired behavior. Our goal is to provide a simple method for ensuring that the high resolution simulation matches key properties from the lower resolution simulation. We first let a user specify a fast, coarse simulation that will be used for guidance. Our automated method samples the data to be matched at various positions and scales in the simulation, or allows the user to identify key portions of the simulation to maintain. During the high resolution simulation, a matching process ensures that the properties sampled from the low resolution simulation are maintained. This matching process keeps the different resolution simulations aligned even for complex systems, and can ensure consistency of not only the velocity field, but also advected scalar values. Because the final simulation is naturally similar to the preview simulation, only minor controlling adjustments are needed, allowing a simpler control method than that used in prior keyframing approaches. Copyright © 2011 by the Association for Computing Machinery, Inc.
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for ABM to obtain an appropriate estimation of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method incorporating the experimental data for the simulator ABM is proposed. First, we employ the ABM as a simulator to simulate the immune system. Then, the dimension-reduced type generalized additive model (GAM) is employed to train a statistical regression model using the input and output data of the ABM, playing the role of an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausibility measure to discard the implausible input values. At last, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experimental data among the non-implausible input values. The real Influenza A Virus (IAV) data set is employed to demonstrate the performance of the proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy, but also favorable computational efficiency. PMID:29194393
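The history-matching loop described above (simulator runs at a design of inputs, a cheap emulator, an implausibility cutoff, then optimization among non-implausible inputs) can be sketched in miniature. The toy one-parameter "simulator", the quadratic-regression emulator (standing in for the GAM), and the final grid search (standing in for PSO) are all simplifying assumptions:

```python
import numpy as np

# Toy "simulator": a cheap stand-in for the ABM, which is expensive to run.
def simulator(theta):
    return np.exp(-theta) + 0.5 * theta          # some nonlinear response

z_obs, obs_var = 1.35, 0.01**2                   # observed data and its variance (assumed)

# 1) Run the simulator at a small design of input values.
design = np.linspace(0.0, 3.0, 12)
runs = np.array([simulator(t) for t in design])

# 2) Train a cheap emulator (quadratic regression stands in for the GAM).
coeffs = np.polyfit(design, runs, deg=2)
emulate = lambda t: np.polyval(coeffs, t)
emu_var = np.mean((emulate(design) - runs) ** 2)  # crude emulator variance

# 3) Implausibility measure over a dense candidate grid; discard inputs
#    whose emulated output is too far from the observation.
candidates = np.linspace(0.0, 3.0, 1000)
implaus = np.abs(z_obs - emulate(candidates)) / np.sqrt(emu_var + obs_var)
non_implausible = candidates[implaus < 3.0]       # conventional 3-sigma cutoff

# 4) Among non-implausible inputs, pick the best fit to the data
#    (grid search here; the paper uses PSO at this step).
best = min(non_implausible, key=lambda t: abs(simulator(t) - z_obs))
print(round(best, 3))
```

The payoff of the emulator is that step 3 evaluates a regression formula a thousand times instead of re-running the expensive simulator; only the final refinement touches the simulator again.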
Physiological Based Simulator Fidelity Design Guidance
Schnell, Thomas; Hamel, Nancy; Postnikov, Alex; Hoke, Jaclyn; McLean, Angus L. M. Thom, III
2012-01-01
The evolution of the role of flight simulation has reinforced assumptions in aviation that the degree of realism in a simulation system directly correlates to the training benefit, i.e., more fidelity is always better. The construct of fidelity has several dimensions, including physical fidelity, functional fidelity, and cognitive fidelity. Interaction of different fidelity dimensions has an impact on trainee immersion, presence, and transfer of training. This paper discusses research results of a recent study that investigated whether physiological-based methods could be used to determine the required level of simulator fidelity. Pilots performed a relatively complex flight task consisting of mission task elements of various levels of difficulty in a fixed-base flight simulator and a real fighter jet trainer aircraft. Flight runs were performed using one forward visual channel of 40 deg. field of view for the lowest level of fidelity, 120 deg. field of view for the middle level of fidelity, and unrestricted field of view and full dynamic acceleration in the real airplane. Neuro-cognitive and physiological measures were collected under these conditions using the Cognitive Avionics Tool Set (CATS), and nonlinear closed-form models for workload prediction were generated from these data for the various mission task elements. One finding of the work described herein is that simple heart rate is a relatively good predictor of cognitive workload, even for short tasks with dynamic changes in cognitive loading. Additionally, we found that models using a wide range of physiological and neuro-cognitive measures can further boost the accuracy of the workload prediction.
Determining procedures for simulation-based training in radiology
DEFF Research Database (Denmark)
Nayahangan, Leizl Joy; Nielsen, Kristina Rue; Albrecht-Beste, Elisabeth
2018-01-01
OBJECTIVES: New training modalities such as simulation are widely accepted in radiology; however, development of effective simulation-based training programs is challenging. They are often unstructured and based on convenience or coincidence. The study objective was to perform a nationwide needs assessment to identify and prioritize technical procedures that should be included in a simulation-based curriculum. METHODS: A needs assessment using the Delphi method was completed among 91 key leaders in radiology. Round 1 identified technical procedures that radiologists should learn. Round 2 explored ... and basic abdominal ultrasound. CONCLUSION: A needs assessment identified and prioritized 13 technical procedures to include in a simulation-based curriculum. The list may be used as a guide for development of training programs. KEY POINTS: • Simulation-based training can supplement training on patients...
Clark, Joseph Warren
2012-01-01
In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, based on the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented, along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
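Importance sampling, one of the two tools the book presents, can be illustrated on a textbook rare event: estimating P(X > 4) for a standard normal by drawing from a proposal shifted onto the rare region and reweighting by the likelihood ratio. The threshold and proposal here are illustrative choices, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
threshold = 4.0                       # rare event: X > 4 for X ~ N(0, 1)

# Plain Monte Carlo: almost no samples hit the rare region, so the
# estimate is dominated by a handful of lucky draws (or is exactly zero).
plain = (rng.normal(0.0, 1.0, n) > threshold).mean()

# Importance sampling: draw from a proposal centered on the rare region,
# N(4, 1), and reweight by the likelihood ratio phi(y) / phi(y - 4),
# which simplifies to exp(threshold**2 / 2 - threshold * y).
y = rng.normal(threshold, 1.0, n)
weights = np.exp(threshold**2 / 2 - threshold * y)
is_estimate = (weights * (y > threshold)).mean()

exact = 3.167e-5                      # true tail probability, for reference
print(plain, is_estimate, exact)
```

With the same sample budget, the shifted proposal places about half its draws in the rare region, so the reweighted estimator has a relative error of well under a percent where plain Monte Carlo sees only a few hits in total.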
MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
Full Text Available The article is an example of using the simulation software @Risk, designed for simulation in a Microsoft Excel spreadsheet, and demonstrates the possibility of its usage in order to show a universal method of solving problems. Simulation is experimenting with computer models based on the real production process in order to optimize the production processes or the system. The simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by using mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance, investment costs) and random inputs (for instance, demand), which are transformed by the model into outputs (for instance, the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong to the quantitative tools which can be used as a support for decision making.
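A minimal spreadsheet-style Monte Carlo of the kind the article runs in @Risk: a controlled input (investment cost and capacity), a random input (demand), and a profit output whose simulated distribution supports the decision. All figures below are invented for illustration and do not come from the article.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000                  # number of simulation trials

# Controlled inputs (decision variables).
investment = 50_000.0        # fixed investment cost
unit_price, unit_cost = 25.0, 14.0
capacity = 8_000             # units that can be produced

# Random input: demand, here modelled as normally distributed.
demand = rng.normal(6_000, 1_500, n).clip(min=0)

# Transform inputs into the output of interest: profit.
sold = np.minimum(demand, capacity)
profit = sold * (unit_price - unit_cost) - investment

print(round(profit.mean(), 1), round(profit.std(), 1),
      round((profit < 0).mean(), 3))   # mean profit, spread, probability of loss
```

The decision-support value lies in the last number: a single-point estimate with average demand would report a healthy profit, while the simulation also reveals how often the investment loses money.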
Computerized simulation methods for dose reduction, in radiodiagnosis
International Nuclear Information System (INIS)
Brochi, M.A.C.
1990-01-01
The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Parameters of radiographic techniques that yield a previously chosen standard radiographic image are studied, so that the radiation doses absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after verifying its validity experimentally, it was applied to breast and arm fracture radiographs. It was observed that the choice of the filter material is not an important factor, because analogous behaviours were presented by aluminium, iron, copper, gadolinium and other filters. A method of comparison of materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium in terms of exposure time, but not of dose. (author)
Computational Simulations and the Scientific Method
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
Rapid Development of Scenario-Based Simulations and Tutoring Systems
National Research Council Canada - National Science Library
Mohammed, John L; Sorensen, Barbara; Ong, James C; Li, Jian
2005-01-01
.... Scenario-based training, in which trainees practice handling specific situations using faithful simulations of the equipment they will use on the job has proven to be an extremely effective method...
International Nuclear Information System (INIS)
Kajii, Shin-ichirou; Yasuda, Chiaki; Yamashita, Toshio; Abe, Hiroshi; Kanki, Hiroshi
2004-01-01
In the seismic design of nuclear power plants, probabilistic methods have recently been considered in addition to deterministic methods. The former approach is called seismic probabilistic safety assessment (seismic PSA). In a seismic PSA of some components of a nuclear power plant using a shaking table, test conditions with acceleration levels as high as actual conditions are necessary. However, it can be difficult to achieve such test conditions with a conventional shaking table based on a hydraulic power system. Therefore, we have been planning a test method in which both a conventional shaking table and an additional shaking table, called a booster device, are applied. This paper describes the verification test of synchronized control between a conventional shaking table and a booster device. (author)
Component-based framework for subsurface simulations
International Nuclear Information System (INIS)
Palmer, B J; Fang, Yilin; Hammond, Glenn; Gurumoorthi, Vidhya
2007-01-01
Simulations in the subsurface environment represent a broad range of phenomena covering an equally broad range of scales. Developing modelling capabilities that can integrate models representing different phenomena acting at different scales presents formidable challenges, both from the algorithmic and the computer science perspective. This paper describes the development of an integrated framework that will be used to combine different models into a single simulation. Initial work has focused on creating two frameworks: one for performing smoothed particle hydrodynamics (SPH) simulations of fluid systems, the other for performing grid-based continuum simulations of reactive subsurface flow. The SPH framework is based on a parallel code developed for doing pore-scale simulations; the continuum grid-based framework is based on the STOMP (Subsurface Transport Over Multiple Phases) code developed at PNNL. Future work will focus on combining the frameworks to perform multiscale, multiphysics simulations of reactive subsurface flow.
Agent-Based Simulations for Project Management
White, J. Chris; Sholtes, Robert M.
2011-01-01
Currently, the most common approach used in project planning tools is the Critical Path Method (CPM). While this method was a great improvement over the basic Gantt chart technique being used at the time, it now suffers from three primary flaws: (1) task duration is an input, (2) productivity impacts are not considered, and (3) management corrective actions are not included. Today, computers have exceptional computational power to handle complex simulations of task execution and project management activities (e.g., dynamically changing the number of resources assigned to a task when it is behind schedule). Through research under a Department of Defense contract, the author and the ViaSim team have developed a project simulation tool that enables more realistic cost and schedule estimates by using a resource-based model that literally turns the current duration-based CPM approach "on its head." The approach represents a fundamental paradigm shift in estimating projects, managing schedules, and reducing risk through innovative predictive techniques.
An Example-Based Brain MRI Simulation Framework.
He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L
2015-02-21
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to the lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition, these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example-based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
Directory of Open Access Journals (Sweden)
Kaushikbhai C. Parmar
2017-04-01
Full Text Available Simulation gives different results when different methods are used for the same simulation. Autodesk Moldflow Simulation software provides two different facilities for creating the mold for the simulation of the injection molding process: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in simulation results, such as mold temperature, part temperature, deflection in different directions, simulation time and coolant temperature, between these two methods.
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research in different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
Simulation-based optimization of thermal systems
International Nuclear Information System (INIS)
Jaluria, Yogesh
2009-01-01
This paper considers the design and optimization of thermal systems on the basis of the mathematical and numerical modeling of the system. Many complexities are often encountered in practical thermal processes and systems, making the modeling challenging and involved. These include property variations, complicated regions, combined transport mechanisms, chemical reactions, and intricate boundary conditions. The paper briefly presents approaches that may be used to accurately simulate these systems. Validation of the numerical model is a particularly critical aspect and is discussed. It is important to couple the modeling with the system performance, design, control and optimization. This aspect, which has often been ignored in the literature, is considered in this paper. Design of thermal systems based on concurrent simulation and experimentation is also discussed in terms of dynamic data-driven optimization methods. Optimization of the system and of the operating conditions is needed to minimize costs and improve product quality and system performance. Different optimization strategies that are currently used for thermal systems are outlined, focusing on new and emerging strategies. Of particular interest is multi-objective optimization, since most thermal systems involve several important objective functions, such as heat transfer rate and pressure in electronic cooling systems. A few practical thermal systems are considered in greater detail to illustrate these approaches and to present typical simulation, design and optimization results
Simulating individual-based models of epidemics in hierarchical networks
Quax, R.; Bader, D.A.; Sloot, P.M.A.
2009-01-01
Current mathematical modeling methods for the spreading of infectious diseases are too simplified and do not scale well. We present the Simulator of Epidemic Evolution in Complex Networks (SEECN), an efficient simulator of detailed individual-based models by parameterizing separate dynamics
Thomas P. Holmes; Wiktor L. Adamowicz
2003-01-01
Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...
From fuel cells to batteries: Synergies, scales and simulation methods
Bessler, Wolfgang G.
2011-01-01
The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...
Haptic Feedback for the GPU-based Surgical Simulator
DEFF Research Database (Denmark)
Sørensen, Thomas Sangild; Mosegaard, Jesper
2006-01-01
The GPU has proven to be a powerful processor to compute spring-mass based surgical simulations. It has not previously been shown however, how to effectively implement haptic interaction with a simulation running entirely on the GPU. This paper describes a method to calculate haptic feedback...... with limited performance cost. It allows easy balancing of the GPU workload between calculations of simulation, visualisation, and the haptic feedback....
Modified network simulation model with token method of bus access
Directory of Open Access Journals (Sweden)
L.V. Stribulevich
2013-08-01
Full Text Available Purpose. To study the characteristics of a local network with the token method of bus access, a modified simulation model of the network was developed. Methodology. The characteristics of the network are determined with the developed simulation model, which is based on the state diagram of a network station with a priority-processing mechanism, both in the steady state and during control procedures: initiation of the logical ring, and the entrance to and exit of a station from the logical ring. Findings. A simulation model was developed from which one can obtain the dependences of the maximum waiting time of a request in the queue for different access classes, and of the reaction time and usable bandwidth, on the data rate, the number of network stations, the request generation rate, the number of frames transmitted per token-holding time and the frame length. Originality. A technique of network simulation was proposed that reflects the network's operation in the steady state and during control procedures, including the priority ranking and handling mechanism. Practical value. The developed simulation model can be used to determine network characteristics in real-time systems in railway transport.
Directory of Open Access Journals (Sweden)
C. Albergel
2008-12-01
Full Text Available A long-term data acquisition effort for profile soil moisture is under way in southwestern France at 13 automated weather stations. This ground network was developed in order to validate remote sensing and model soil moisture estimates. In this paper, both those in situ observations and a synthetic data set covering continental France are used to test a simple method to retrieve root-zone soil moisture from a time series of surface soil moisture information. A recursive exponential filter equation using a time constant, T, is used to compute a soil water index. The Nash–Sutcliffe coefficient is used as a criterion to optimise the T parameter for each ground station and for each model pixel of the synthetic data set. In general, the soil water indices derived from the surface soil moisture observations and simulations agree well with the reference root-zone soil moisture. Overall, the results show the potential of the exponential filter equation and of its recursive formulation to derive a soil water index from surface soil moisture estimates. This paper further investigates the correlation of the time scale parameter T with soil properties and climate conditions. While no significant relationship could be determined between T and the main soil properties (clay and sand fractions, bulk density and organic matter content), the modelled spatial variability and the observed inter-annual variability of T suggest that a weak climate effect may exist.
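The recursive exponential filter referred to above updates a soil water index with a gain that shrinks as observations accumulate. A minimal sketch of the recursive form (our own; observation values and units are hypothetical):

```python
import math

def swi_recursive(times, sm_surface, T):
    """Recursive exponential filter for a soil water index (SWI).
    times: observation times (days); sm_surface: surface soil moisture
    observations; T: filter time constant (days).
    Gain update: K_n = K_{n-1} / (K_{n-1} + exp(-(t_n - t_{n-1}) / T)),
    state update: SWI_n = SWI_{n-1} + K_n * (ms_n - SWI_{n-1})."""
    swi = [sm_surface[0]]
    K = 1.0
    for i in range(1, len(times)):
        K = K / (K + math.exp(-(times[i] - times[i - 1]) / T))
        swi.append(swi[-1] + K * (sm_surface[i] - swi[-1]))
    return swi
```

Because each update is a convex combination of the previous index and the new observation, the SWI always stays within the range of the surface-moisture series; larger T yields a smoother, slower-responding index, which is the parameter the paper optimises per station.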
Reddy, M Rami; Erion, Mark D
2009-12-01
Molecular dynamics (MD) simulations in conjunction with the thermodynamic perturbation approach were used to calculate relative solvation free energies of five pairs of small molecules, namely: (1) methanol to ethane, (2) acetone to acetamide, (3) phenol to benzene, (4) 1,1,1-trichloroethane to ethane, and (5) phenylalanine to isoleucine. Two studies were performed to evaluate the dependence of the convergence of these calculations on MD simulation length and starting configuration. In the first study, each transformation started from the same well-equilibrated configuration and the simulation length was varied from 230 to 2,540 ps. The results indicated that for transformations involving small structural changes, a simulation length of 860 ps is sufficient to obtain satisfactory convergence. In contrast, transformations involving relatively large structural changes, such as phenylalanine to isoleucine, require a significantly longer simulation length (>2,540 ps) to obtain satisfactory convergence. In the second study, the transformation was completed starting from three different configurations, using in each case 860 ps of MD simulation. The results from this study suggest that performing one long simulation may be better than averaging the results of three shorter simulations from different starting configurations.
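Free-energy perturbation of the kind used here rests on the Zwanzig relation, ΔG = −kT ln⟨exp(−ΔU/kT)⟩₀, averaged over configurations of the reference state. A minimal estimator sketch (ours, not the authors' code; the sample values and kT are illustrative):

```python
import math

def fep_delta_g(delta_u_samples, kT=0.596):
    """Zwanzig free-energy perturbation estimator:
    dG = -kT * ln < exp(-dU / kT) >_0,
    where dU are energy differences sampled in the reference ensemble.
    kT defaults to ~0.596 kcal/mol (about 300 K)."""
    avg = sum(math.exp(-du / kT) for du in delta_u_samples) / len(delta_u_samples)
    return -kT * math.log(avg)
```

The estimator's exponential averaging is also why convergence degrades for large structural changes, as the abstract reports: rarely sampled low-ΔU configurations dominate the average, demanding longer simulations.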
SIMULATION OF SUBGRADE EMBANKMENT ON WEAK BASE
Directory of Open Access Journals (Sweden)
V. D. Petrenko
2015-08-01
Full Text Available Purpose. The stability of the subgrade on a weak base is considered, and the use of the jet-grouting method is proposed. The aims are to investigate how a weak base affects the overall deformation of the subgrade, and to identify and optimize the subgrade parameters through numerical simulation. Methodology. Theoretical studies of the stress-strain state of the base and the subgrade embankment were carried out by modelling in the LIRA software package. Findings. After the necessary calculations, the fields of settlement, the boundaries of the compressed thickness, and the Pasternak and Winkler bed coefficients were constructed. Diagrams of vertical stress can be constructed at any point of load application. The software also allows an assessment of the settlement and tilt of railroad tracks on the natural and the consolidated base. Originality. For weak soils, the most appropriate model is a nonlinear base model with both elastic and limit-equilibrium zones, i.e. the mixed problem of the theory of elasticity and plasticity. Practical value. When the load on a weak base increases, e.g. through construction of a second track, raising of the embankment, or higher axle loads from new rolling stock, the processes of sedimentation and consolidation may resume. Therefore, one feasible and promising option for the design and reconstruction of embankments on weak bases is to strengthen the bases by jet grouting. With the expansion of railway infrastructure and increases in the speed and weight of rolling stock, the stability of the subgrade on weak bases must be ensured. The LIRA software package allows all the calculations necessary for selecting a proper way of strengthening weak bases.
Meshfree simulation of avalanches with the Finite Pointset Method (FPM)
Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios
2017-04-01
Meshfree methods are the numerical methods of choice for applications characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past, the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently, the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow, as well as the geometry. We will discuss their influence for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.
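The Drucker-Prager yield criterion mentioned above can be evaluated as a scalar yield function of the stress invariants. A minimal sketch (ours, not FPM code; sign conventions and the mapping of alpha and k to friction angle and cohesion vary between formulations):

```python
import math

def drucker_prager_f(I1, J2, alpha, k):
    """Drucker-Prager yield function f = sqrt(J2) + alpha * I1 - k,
    with I1 the first stress invariant and J2 the second deviatoric
    invariant.  The material yields (flows) when f >= 0; alpha and k
    are pressure-sensitivity and strength parameters."""
    return math.sqrt(J2) + alpha * I1 - k
```

In a granular-flow solver, this function is typically checked at every particle each step; stresses with f > 0 are returned to the yield surface, which is what produces the plastic, avalanche-like behavior.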
Desebbe, Olivier; Joosten, Alexandre; Suehiro, Koichi; Lahham, Sari; Essiet, Mfonobong; Rinehart, Joseph; Cannesson, Maxime
2016-07-01
Pulse pressure variation (PPV) can be used to assess fluid status in the operating room. This measurement, however, is time-consuming when done manually and unreliable through visual assessment. Moreover, its continuous monitoring requires the use of expensive devices. Capstesia™ is a novel Android™/iOS™ application that calculates PPV from a digital picture of the arterial pressure waveform obtained from any monitor. The application identifies the peaks and troughs of the arterial curve, determines the maximum and minimum pulse pressures, and computes PPV. In this study, we compared the accuracy of PPV generated with the smartphone application Capstesia (PPVapp) against the reference method, the manual determination of PPV (PPVman). The Capstesia application was loaded onto a Samsung Galaxy S4 phone. A physiologic simulator including PPV was used to display arterial waveforms on a computer screen. Data were obtained with different sweep speeds (6 and 12 mm/s) and randomly generated PPV values (from 2% to 24%), pulse pressures (30, 45, and 60 mm Hg), heart rates (60-80 bpm), and respiratory rates (10-15 breaths/min) on the simulator. Each metric was recorded 5 times at an arterial height scale X1 (PPV5appX1) and 5 times at an arterial height scale X3 (PPV5appX3). Reproducibility of PPVapp and PPVman was determined from the 5 pictures of the same hemodynamic profile. The effects of sweep speed, arterial waveform scale (X1 or X3), and number of images captured were assessed by a Bland-Altman analysis. The measurement error (ME) was calculated for each pair of data. A receiver operating characteristic curve analysis determined the ability of PPVapp to discriminate a PPVman > 13%. Four hundred eight pairs of PPVapp and PPVman were analyzed. The reproducibility of PPVapp and PPVman was 10% (interquartile range, 7%-14%) and 6% (interquartile range, 3%-10%), respectively, allowing a threshold ME of 12%. The overall mean bias for PPVappX1 was 1.1% within limits of
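Once the maximum and minimum pulse pressures over a respiratory cycle are known, PPV reduces to a simple formula. A sketch of the standard definition (ours, not Capstesia's implementation; the input values are hypothetical):

```python
def ppv_percent(pulse_pressures):
    """Pulse pressure variation over one respiratory cycle:
    PPV (%) = 100 * (PPmax - PPmin) / mean(PPmax, PPmin),
    where pulse_pressures are beat-by-beat pulse pressures (mm Hg)."""
    pp_max, pp_min = max(pulse_pressures), min(pulse_pressures)
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)
```

For example, beat-by-beat pulse pressures of 50, 45, and 40 mm Hg give a PPV of about 22%, well above the 13% fluid-responsiveness threshold used in the study's ROC analysis.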
Energy Technology Data Exchange (ETDEWEB)
Koch, Stephan
2009-03-30
This thesis is concerned with the numerical simulation of electromagnetic fields in the quasi-static approximation which is applicable in many practical cases. Main emphasis is put on higher-order finite element methods. Quasi-static applications can be found, e.g., in accelerator physics in terms of the design of magnets required for beam guidance, in power engineering as well as in high-voltage engineering. Especially during the first design and optimization phase of respective devices, numerical models offer a cheap alternative to the often costly assembly of prototypes. However, large differences in the magnitude of the material parameters and the geometric dimensions as well as in the time-scales of the electromagnetic phenomena involved lead to an unacceptably long simulation time or to an inadequately large memory requirement. Under certain circumstances, the simulation itself and, in turn, the desired design improvement becomes even impossible. In the context of this thesis, two strategies aiming at the extension of the range of application for numerical simulations based on the finite element method are pursued. The first strategy consists in parallelizing existing methods such that the computation can be distributed over several computers or cores of a processor. As a consequence, it becomes feasible to simulate a larger range of devices featuring more degrees of freedom in the numerical model than before. This is illustrated for the calculation of the electromagnetic fields, in particular of the eddy-current losses, inside a superconducting dipole magnet developed at the GSI Helmholtzzentrum fuer Schwerionenforschung as a part of the FAIR project. As the second strategy to improve the efficiency of numerical simulations, a hybrid discretization scheme exploiting certain geometrical symmetries is established. Using this method, a significant reduction of the numerical effort in terms of required degrees of freedom for a given accuracy is achieved. The
International Nuclear Information System (INIS)
Yang, Zhensheng; Wu, Haixi; Yu, Zhonghua; Huang, Youfang
2014-01-01
Grinding is usually done in the final finishing of a component. As a result, surface-quality properties of finished products, e.g. surface roughness, hardness and residual stress, are affected by the grinding procedure. However, the lack of methods for monitoring grinding makes it difficult to control the quality of the process. This paper focuses on monitoring approaches for the surface-burn phenomenon in grinding. A non-destructive burn detection method based on acoustic emission (AE) and ensemble empirical mode decomposition (EEMD) is proposed for this purpose. To precisely extract the AE features caused by phase transformation during burn formation, artificial burn was produced to mimic grinding burn by means of laser irradiation, since laser-induced burn involves less mechanical and electrical noise. The burn formation process was monitored by an AE sensor. The frequency band from 150 to 400 kHz was believed to be related to surface-burn formation in the laser irradiation process. This burn-sensitive frequency band was then used to guide feature extraction during the grinding process based on EEMD. Linear classification results showed a distinct margin between samples with and without surface burn. This work provides a practical means for grinding-burn detection.
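A crude stand-in for the burn-sensitive feature is the relative spectral energy in the 150-400 kHz band. The paper's features are EEMD-based; this band-energy simplification, and all values below, are ours:

```python
def band_energy(spectrum_mags, freqs_khz, lo=150.0, hi=400.0):
    """Fraction of AE spectral energy in the burn-sensitive band.
    spectrum_mags: magnitude spectrum of an AE segment;
    freqs_khz: the corresponding frequency axis in kHz.
    Returns band energy / total energy in [0, 1]."""
    total = sum(m * m for m in spectrum_mags)
    band = sum(m * m for m, f in zip(spectrum_mags, freqs_khz) if lo <= f <= hi)
    return band / total if total else 0.0
```

Feeding such a feature per AE segment into a linear classifier mirrors, in spirit, the paper's finding that burned and unburned samples separate with a distinct margin.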
International Nuclear Information System (INIS)
Dattoli, G.; Schiavi, A.; Migliorati, M.
2006-03-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of this type of problem should be fast and reliable, conditions that are usually hardly achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique using exponential operators. We show that the integration procedure is capable of reproducing the onset of an instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed.
International Nuclear Information System (INIS)
Dattoli, G.; Migliorati, M.; Schiavi, A.
2007-01-01
The coherent synchrotron radiation (CSR) is one of the main problems limiting the performance of high-intensity electron accelerators. The complexity of the physical mechanisms underlying the onset of instabilities due to CSR demands accurate descriptions, capable of including the large number of features of an actual accelerating device. A code devoted to the analysis of these types of problems should be fast and reliable, conditions that are usually hardly achieved at the same time. In the past, codes based on Lie algebraic techniques have been very efficient in treating transport problems in accelerators. The extension of these methods to the non-linear case is ideally suited to treat CSR instability problems. We report on the development of a numerical code, based on the solution of the Vlasov equation, with the inclusion of the non-linear contribution due to wake field effects. The proposed solution method exploits an algebraic technique using exponential operators. We show that the integration procedure is capable of reproducing the onset of instability and the effects associated with bunching mechanisms leading to the growth of the instability itself. In addition, considerations on the threshold of the instability are also developed
Simulation-based certification for cataract surgery
DEFF Research Database (Denmark)
Thomsen, Ann Sofia Skou; Kiilgaard, Jens Folke; Kjaerbo, Hadi
2015-01-01
PURPOSE: To evaluate the EyeSi(™) simulator in regard to assessing competence in cataract surgery. The primary objective was to explore all simulator metrics to establish a proficiency-based test with solid evidence. The secondary objective was to evaluate whether the skill assessment was specific...
Same Content, Different Methods: Comparing Lecture, Engaged Classroom, and Simulation.
Raleigh, Meghan F; Wilson, Garland Anthony; Moss, David Alan; Reineke-Piper, Kristen A; Walden, Jeffrey; Fisher, Daniel J; Williams, Tracy; Alexander, Christienne; Niceler, Brock; Viera, Anthony J; Zakrajsek, Todd
2018-02-01
There is a push to use classroom technology and active teaching methods to replace didactic lectures as the most prevalent format for resident education. This multisite collaborative cohort study involving nine residency programs across the United States compared a standard slide-based didactic lecture, a facilitated group discussion via an engaged classroom, and a high-fidelity, hands-on simulation scenario for teaching the topic of acute dyspnea. The primary outcome was knowledge retention at 2 to 4 weeks. Each teaching method was assigned to three different residency programs in the collaborative according to local resources. Learning objectives were determined by faculty. Pre- and posttest questions were validated and utilized as a measurement of knowledge retention. Each site administered the pretest, taught the topic of acute dyspnea utilizing their assigned method, and administered a posttest 2 to 4 weeks later. Differences between the groups were compared using paired t-tests. A total of 146 residents completed the posttest, and scores increased from baseline across all groups. The average score increased 6% in the standard lecture group (n=47), 11% in the engaged classroom (n=53), and 9% in the simulation group (n=56). The differences in improvement between engaged classroom and simulation were not statistically significant. Compared to standard lecture, both engaged classroom and high-fidelity simulation were associated with a statistically significant improvement in knowledge retention. Knowledge retention after engaged classroom and high-fidelity simulation did not significantly differ. More research is necessary to determine if different teaching methods result in different levels of comfort and skill with actual patient care.
Simulation of Rossi-α method with analog Monte-Carlo method
International Nuclear Information System (INIS)
Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang
2012-01-01
An analog Monte Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constant α of six metal uranium configurations at Oak Ridge National Laboratory was calculated. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. There is a difference between the results of the analog Monte Carlo simulation and the experiment; the reason is the gaps between the uranium layers. The influence of the gaps decreases as the subcriticality deepens: the relative difference between simulation and experiment changes from 19% to 0.19%.
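The Rossi-α method extracts α from the exponential decay of time-correlated counts, counts(t) ≈ background + B·exp(−αt). A minimal fitting sketch (ours, unrelated to the Geant4 code; it assumes the uncorrelated background is known, whereas real analyses fit it too):

```python
import math

def estimate_alpha(bin_centers, counts, background):
    """Estimate the prompt neutron decay constant alpha from a
    Rossi-alpha histogram, counts(t) ~ background + B * exp(-alpha * t),
    via a log-linear least-squares fit of the background-subtracted
    counts.  Returns alpha (inverse time units of bin_centers)."""
    xs, ys = [], []
    for t, c in zip(bin_centers, counts):
        excess = c - background
        if excess > 0:                    # log requires positive excess
            xs.append(t)
            ys.append(math.log(excess))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope                         # decay slope is -alpha
```

On a synthetic histogram with a known α this recovers the decay constant exactly, which makes it a convenient cross-check when comparing simulated and measured correlation curves.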
Simulation-based optimization parametric optimization techniques and reinforcement learning
Gosavi, Abhijit
2003-01-01
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...
Virtual Crowds Methods, Simulation, and Control
Pelechano, Nuria; Allbeck, Jan
2008-01-01
There are many applications of computer animation and simulation where it is necessary to model virtual crowds of autonomous agents. Some of these applications include site planning, education, entertainment, training, and human factors analysis for building evacuation. Other applications include simulations of scenarios where masses of people gather, flow, and disperse, such as transportation centers, sporting events, and concerts. Most crowd simulations include only basic locomotive behaviors possibly coupled with a few stochastic actions. Our goal in this survey is to establish a baseline o
Nuno David; Jaime Simão Sichman; Helder Coelho
2005-01-01
WOS:000235217900009 (Web of Science accession number) The classical theory of computation does not represent an adequate model of reality for simulation in the social sciences. The aim of this paper is to construct a methodological perspective that is able to reconcile the formal and empirical logic of program verification in computer science with the interpretative and multiparadigmatic logic of the social sciences. We attempt to evaluate whether social simulation implies an additional pers...
Spectrum estimation method based on marginal spectrum
International Nuclear Information System (INIS)
Cai Jianhua; Hu Weiwen; Wang Xianchun
2011-01-01
The FFT method cannot meet the basic requirements of power-spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method is given, and the linearity of the marginal spectrum is demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum are further analyzed. The Hilbert spectrum estimation algorithm is then discussed in detail, and simulation results are given. Theory and simulation show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method.
Discrete Particle Method for Simulating Hypervelocity Impact Phenomena
Directory of Open Access Journals (Sweden)
Erkai Watson
2017-04-01
Full Text Available In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration, where a sphere strikes a thin plate, and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. We find that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that match the experimental setup in which a sphere strikes a thin plate at hypervelocity. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
Water simulation for cell based sandbox games
Lundell, Christian
2014-01-01
This thesis work presents a new algorithm for simulating fluid based on the Navier-Stokes equations. The algorithm is designed for cell based sandbox games where interactivity and performance are the main priorities. The algorithm enforces mass conservation conservatively instead of enforcing a divergence free velocity field. A global scale pressure model that simulates hydrostatic pressure is used where the pressure propagates between neighboring cells. A prefix sum algorithm is used to only...
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
GENERAL ARTICLE. Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low-cost software packages and tools. They can serve as useful learning experiences through student projects. Models are .... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...
Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)
Enayatpour, Saeid; van Oort, Eric; Patzek, Tadeusz
2018-01-01
Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.
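A common ingredient of CZM simulations is a traction-separation law for the cohesive interface elements; a bilinear law is the textbook choice. The sketch below is ours (parameter names are hypothetical, and the paper's cohesive law may differ):

```python
def bilinear_traction(delta, delta0, deltaf, t_max):
    """Bilinear cohesive traction-separation law:
    traction ramps linearly to t_max at opening delta0 (damage
    initiation), then softens linearly to zero at deltaf (complete
    fracture).  The area under the curve is the fracture energy."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0          # elastic loading branch
    if delta < deltaf:
        return t_max * (deltaf - delta) / (deltaf - delta0)  # softening
    return 0.0                                 # fully separated
```

Because fracture is confined to these interface elements, the solver avoids the remeshing or enrichment bookkeeping of XFEM, which is one source of the speed advantage the abstract claims.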
Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)
Enayatpour, Saeid
2018-05-17
Hybrid numerical methods for multiscale simulations of subsurface biogeochemical processes
International Nuclear Information System (INIS)
Scheibe, T D; Tartakovsky, A M; Tartakovsky, D M; Redden, G D; Meakin, P
2007-01-01
Many subsurface flow and transport problems of importance today involve coupled non-linear flow, transport, and reaction in media exhibiting complex heterogeneity. In particular, problems involving biological mediation of reactions fall into this class of problems. Recent experimental research has revealed important details about the physical, chemical, and biological mechanisms involved in these processes at a variety of scales ranging from molecular to laboratory scales. However, it has not been practical or possible to translate detailed knowledge at small scales into reliable predictions of field-scale phenomena important for environmental management applications. A large assortment of numerical simulation tools has been developed, each with its own characteristic scale. Important examples include 1. molecular simulations (e.g., molecular dynamics); 2. simulation of microbial processes at the cell level (e.g., cellular automata or particle individual-based models); 3. pore-scale simulations (e.g., lattice-Boltzmann, pore network models, and discrete particle methods such as smoothed particle hydrodynamics); and 4. macroscopic continuum-scale simulations (e.g., traditional partial differential equations solved by finite difference or finite element methods). While many problems can be effectively addressed by one of these models at a single scale, some problems may require explicit integration of models across multiple scales. We are developing a hybrid multi-scale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry and biology at different scales (sub-pore, pore and continuum). The modeling framework is being designed to take advantage of advanced computational technologies including parallel code components using the Common Component Architecture, parallel solvers, gridding, data and workflow management, and visualization. This paper describes the specific methods/codes being used at each
A fast mollified impulse method for biomolecular atomistic simulations
Energy Technology Data Exchange (ETDEWEB)
Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)
2017-03-15
Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach, based on corotation, for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software, without Hessians or constraint solves. By simulating multiple realistic examples such as peptides, proteins, ice equilibrium and ice-ice friction, the new filter is shown to speed up the computation of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
Numerical method for IR background and clutter simulation
Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio
1997-06-01
The paper describes a fast and accurate algorithm for generating IR background noise and clutter for use in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude follows a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and excellent fidelity to reality, as a comparison with images from IR sensors shows. The proposed method has advantages over methods based on filtering white noise in the time or frequency domain, as it requires a limited number of computations and is more accurate than quasi-random processes. The background generation starts from a reticule of a few points, and growing rules extend the process to the whole scene at the required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
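A one-dimensional analog of the assumed statistics, Gaussian amplitudes with an exponentially decaying correlation, is a first-order autoregressive process, whose autocorrelation falls off as rho**lag. The sketch below is a simplified stand-in for intuition, not the paper's reticule-growing algorithm:

```python
import math
import random

def ar1_noise(n, rho, seed=0):
    """Gaussian noise whose autocorrelation decays as rho**lag (exponential form)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0)]
    scale = math.sqrt(1.0 - rho * rho)  # keeps the variance at 1
    for _ in range(n - 1):
        x.append(rho * x[-1] + scale * rng.gauss(0.0, 1.0))
    return x

def lag1_autocorr(x):
    """Sample autocorrelation at lag 1."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var
```

The paper's reticule approach extends the same idea to two dimensions, refining a coarse grid while preserving these statistics.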
A review of computer-based simulators for ultrasound training.
Blum, Tobias; Rieger, Andreas; Navab, Nassir; Friess, Helmut; Martignoni, Marc
2013-04-01
Computer-based simulators for ultrasound training are a topic of recent interest. During the last 15 years, many different systems and methods have been proposed. This article provides an overview and classification of systems in this domain and a discussion of their advantages. Systems are classified and discussed according to the image simulation method, user interactions and medical applications. Computer simulation of ultrasound has one key advantage over traditional training. It enables novel training concepts, for example, through advanced visualization, case databases, and automatically generated feedback. Qualitative evaluations have mainly shown positive learning effects. However, few quantitative evaluations have been performed and long-term effects have to be examined.
Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam
2016-07-01
Inventory has been a major concern in supply chains, and much research has lately been devoted to inventory control, yielding a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for multi-product, single-supplier situations for chemical raw materials in the textile industries of Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum basic cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy is used; it is suggested that indirect grouping outperforms direct grouping when the major ordering cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides a basic cycle time (T) for replenishment and an integer multiplier (ki) for each individual item, so the replenishment cycle time for each product is T×ki. Firstly, based on the data, a comparison between the currently prevailing (individual) process and RAND using actual demands shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected using Holt's method; however, demand can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, applying RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
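The cost structure that RAND searches can be sketched as follows: with a major ordering cost S per family cycle T, minor cost s_i per order of item i, annual demand d_i and holding cost h_i, the annual cost is S/T + Σ_i [s_i/(k_i·T) + h_i·d_i·k_i·T/2]. A minimal RAND-style iteration (illustrative, not the exact Kaspi-Rosenblatt enumeration over candidate T values) alternates between rounding the best integer multipliers and re-optimizing T:

```python
import math

def total_cost(T, k, S, s, d, h):
    """Annual cost: major ordering + per-item ordering + holding."""
    return S / T + sum(s[i] / (k[i] * T) + h[i] * d[i] * k[i] * T / 2
                       for i in range(len(s)))

def rand_heuristic(S, s, d, h, iters=20):
    n = len(s)
    # start from the joint-EOQ cycle with all k_i = 1
    T = math.sqrt(2 * (S + sum(s)) / sum(h[i] * d[i] for i in range(n)))
    for _ in range(iters):
        # best integer multiplier for each item at the current T
        k = [max(1, round(math.sqrt(2 * s[i] / (h[i] * d[i])) / T))
             for i in range(n)]
        # re-optimize the basic cycle for these multipliers
        T = math.sqrt(2 * (S + sum(s[i] / k[i] for i in range(n)))
                      / sum(h[i] * d[i] * k[i] for i in range(n)))
    return T, k
```

Each item i is then replenished every k_i·T years, exactly the T×ki structure described above.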
Identifying content for simulation-based curricula in urology
DEFF Research Database (Denmark)
Nayahangan, Leizl Joy; Hansen, Rikke Bolling; Lindorff-Larsen, Karen Gilboe
2017-01-01
to identify technical procedures in urology that should be included in a simulation-based curriculum for residency training. MATERIALS AND METHODS: A national needs assessment was performed using the Delphi method involving 56 experts with significant roles in the education of urologists. Round 1 identified...
DEFF Research Database (Denmark)
Stock, Andreas; Neudorfer, Jonathan; Riedlinger, Marc
2012-01-01
Fast design codes for the simulation of the particle–field interaction in the interior of gyrotron resonators are available. They procure their rapidity by making strong physical simplifications and approximations, which are not known to be valid for many variations of the geometry and the operat...
Wang, Jianling; Xiao, Xiaofeng; Chen, Tong; Liu, Tingfei; Tao, Huaming; He, Jun
2016-06-17
The glyceride in oil food simulant usually causes serious interference with target analytes and leads to failure of the normal function of the RP-HPLC column. In this work, a convenient HPLC-UV method for determining the total specific migration of nine ultraviolet (UV) absorbers in food simulants was developed based on 1,1,3,3-tetramethylguanidine (TMG) and organic phase anion exchange (OPAE) SPE to efficiently remove glyceride from the olive oil simulant. In contrast to normal ion exchange, which is carried out in an aqueous environment, the OPAE SPE was performed in the organic phase, so the time-consuming and challenging extraction of the nine UV absorbers from vegetable oil into an aqueous solution could be omitted. The method was shown to have good linearity (r≥0.99992), precision (intra-day RSD≤3.3%), and accuracy (91.0%≤recoveries≤107%); furthermore, low limits of quantification (0.05-0.2 mg/kg) were observed in five types of food simulants (10% ethanol, 3% acetic acid, 20% ethanol, 50% ethanol and olive oil). The method was found to be well suited for quantitative determination of the total specific migration of the nine UV absorbers in both aqueous and vegetable oil simulants according to Commission Regulation (EU) No. 10/2011. Migration levels of the nine UV absorbers were determined in 31 plastic samples; UV-24, UV-531, HHBP and UV-326 were frequently detected, especially UV-326 in PE samples in the olive oil simulant. In addition, the OPAE SPE procedure has also been applied to efficiently enrich and purify seven antioxidants in the olive oil simulant. The results indicate that this procedure will find wider application in the enrichment and purification of extremely weak acidic compounds with phenolic hydroxyl groups that are relatively stable in TMG n-hexane solution and can barely be extracted from vegetable oil. Copyright © 2016 Elsevier B.V. All rights reserved.
Yang, Eunjoo; Park, Hyun Woo; Choi, Yeon Hwa; Kim, Jusim; Munkhdalai, Lkhagvadorj; Musa, Ibrahim; Ryu, Keun Ho
2018-05-11
Early detection of infectious disease outbreaks is one of the important and significant issues in syndromic surveillance systems. It helps to provide a rapid epidemiological response and reduce morbidity and mortality. In order to upgrade the current system at the Korea Centers for Disease Control and Prevention (KCDC), a comparative study of state-of-the-art techniques is required. We compared four different temporal outbreak detection algorithms: the CUmulative SUM (CUSUM), the Early Aberration Reporting System (EARS), the autoregressive integrated moving average (ARIMA), and the Holt-Winters algorithm. The comparison was performed on 42 different time series generated taking into account trends, seasonality, and randomly occurring outbreaks, as well as on real-world daily and weekly data related to diarrheal infection. The algorithms were evaluated using several metrics: sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), F1 score, symmetric mean absolute percent error (sMAPE), root-mean-square error (RMSE), and mean absolute deviation (MAD). Although the comparison showed better overall performance for the EARS C3 method regardless of the characteristics of the underlying time series, Holt-Winters performed better when the baseline frequency was below 1.5 and the dispersion parameter was below 2.
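Of the four algorithms, CUSUM is the simplest to sketch: it accumulates standardized excesses over the baseline and raises an alarm once the running sum crosses a threshold. The baseline mean, standard deviation and thresholds below are illustrative, not the KCDC settings:

```python
def cusum_alarms(counts, mean, std, k=0.5, h=4.0):
    """One-sided CUSUM: S_t = max(0, S_{t-1} + (x_t - mean)/std - k).

    Returns the indices of days where S_t exceeds the alarm threshold h.
    """
    s, alarms = 0.0, []
    for t, x in enumerate(counts):
        s = max(0.0, s + (x - mean) / std - k)
        if s > h:
            alarms.append(t)
    return alarms

# 20 baseline days at the mean, then a 5-day outbreak of doubled counts
daily_counts = [10] * 20 + [20] * 5 + [10] * 5
```

With these settings the statistic stays at zero through the baseline and first crosses the threshold on the first outbreak day.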
Cutting Method of the CAD model of the Nuclear facility for Dismantling Simulation
Energy Technology Data Exchange (ETDEWEB)
Kim, Ikjune; Choi, ByungSeon; Hyun, Dongjun; Jeong, KwanSeong; Kim, GeunHo; Lee, Jonghwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-05-15
Current methods for process simulation cannot simulate cutting operations flexibly. As things stand, to simulate a cutting operation the user must prepare the result models of the cutting operation in advance, based on a pre-defined cutting path, depth and thickness for a given dismantling scenario, and those preparations must be rebuilt whenever the scenario changes. With the proposed approach, the user can change parameters and scenarios dynamically within the simulation configuration process, saving the time and effort of re-preparing cutting operations. This study presents a cutting-operation methodology that can be applied to every procedure in the simulation of dismantling nuclear facilities. We defined the requirements of the model-cutting methodology based on the requirements of nuclear facility dismantling, developed a cutting simulation module for cutting operations based on the proposed methodology, and implemented the module using the API of a commercial CAD system.
Simulation-based medical education in pediatrics.
Lopreiato, Joseph O; Sawyer, Taylor
2015-01-01
The use of simulation-based medical education (SBME) in pediatrics has grown rapidly over the past 2 decades and is expected to continue to grow. Similar to other instructional formats used in medical education, SBME is an instructional methodology that facilitates learning. Successful use of SBME in pediatrics requires attention to basic educational principles, including the incorporation of clear learning objectives. To facilitate learning during simulation the psychological safety of the participants must be ensured, and when done correctly, SBME is a powerful tool to enhance patient safety in pediatrics. Here we provide an overview of SBME in pediatrics and review key topics in the field. We first review the tools of the trade and examine various types of simulators used in pediatric SBME, including human patient simulators, task trainers, standardized patients, and virtual reality simulation. Then we explore several uses of simulation that have been shown to lead to effective learning, including curriculum integration, feedback and debriefing, deliberate practice, mastery learning, and range of difficulty and clinical variation. Examples of how these practices have been successfully used in pediatrics are provided. Finally, we discuss the future of pediatric SBME. As a community, pediatric simulation educators and researchers have been a leading force in the advancement of simulation in medicine. As the use of SBME in pediatrics expands, we hope this perspective will serve as a guide for those interested in improving the state of pediatric SBME. Published by Elsevier Inc.
Computational steering of GEM based detector simulations
Sheharyar, Ali; Bouhali, Othmane
2017-10-01
Gas-based detector R&D relies heavily on full simulation of detectors and their optimization before final prototypes can be built and tested. These simulations, in particular those with complex scenarios such as high detector voltages or gases with larger gains, are computationally intensive and may take several days or weeks to complete. Such long-running simulations usually run on high-performance computers in batch mode. If the results reveal unexpected behavior, the simulation might be rerun with different parameters. However, the simulations (or jobs) may have to wait in a queue until they get a chance to run again, because the supercomputer is a shared resource that maintains a queue of other user programs as well and executes them as time and priorities permit. This can result in inefficient resource utilization and an increased turnaround time for the scientific experiment. To overcome this issue, monitoring the behavior of a simulation while it is running (live) is essential. In this work, we employ the computational steering technique by coupling the detector simulations with a visualization package named VisIt to enable exploration of the live data as it is produced by the simulation.
Methods in Logic Based Control
DEFF Research Database (Denmark)
Christensen, Georg Kronborg
1999-01-01
Design and theory of logic-based control systems: Boolean algebra, Karnaugh maps, the Quine-McCluskey algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC- and Soft-PLC implementation. PLC...
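As one concrete example from the list above, the core of the Quine-McCluskey algorithm repeatedly merges implicants that differ in exactly one literal until only prime implicants remain. A minimal sketch (prime implicant generation only, without the covering-table step):

```python
def combine(a, b):
    """Merge two implicants that differ in exactly one specified bit, else None."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    """Quine-McCluskey combining passes; '-' marks an eliminated variable."""
    terms = {format(m, '0%db' % nbits) for m in minterms}
    primes = set()
    while terms:
        nxt, used = set(), set()
        for a in terms:
            for b in terms:
                c = combine(a, b)
                if c:
                    nxt.add(c)
                    used.update({a, b})
        primes |= terms - used     # anything that could not merge is prime
        terms = nxt
    return primes
```

For example, the minterms {0, 1, 2, 3} of a two-variable function collapse to the single implicant '--' (constant true), while {1, 2} cannot merge at all.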
Hospital Registration Process Reengineering Using Simulation Method
Directory of Open Access Journals (Sweden)
Qiang Su
2010-01-01
With increasing competition, many healthcare organizations have undergone tremendous reform in the last decade, aiming to increase efficiency, decrease waste, and reshape the way that care is delivered. This study focuses on improving the operational efficiency of a hospital's registration process. Factors related to operational efficiency, including the service process, queue strategy, and queue parameters, were explored systematically and illustrated with a case study. Guided by the principle of business process reengineering (BPR), a simulation approach was employed for process redesign and performance optimization. As a result, the queue strategy was changed from multiple queues with multiple servers to a single queue with multiple servers plus a prepare queue. Furthermore, through a series of simulation experiments, the length of the prepare queue and the corresponding registration process efficiency were quantitatively evaluated and optimized.
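The effect of the redesigned queue strategy can be illustrated with a toy arrival model: under the same load, one shared FIFO queue feeding several servers yields shorter waits than letting each arrival pick a dedicated queue at random. The rates below are illustrative, not the hospital's data:

```python
import random

def mean_wait(n_customers, n_servers, arrival_rate, service_rate,
              single_queue, seed=42):
    """Average waiting time with exponential interarrival and service times."""
    rng = random.Random(seed)
    t, waits = 0.0, []
    free = [0.0] * n_servers          # next instant each server becomes free
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)
        service = rng.expovariate(service_rate)
        if single_queue:
            s = min(range(n_servers), key=free.__getitem__)  # next free server
        else:
            s = rng.randrange(n_servers)                     # pick a queue blindly
        start = max(t, free[s])
        waits.append(start - t)
        free[s] = start + service
    return sum(waits) / len(waits)
```

With three servers near 90% utilization, pooling the queue typically cuts the mean wait severalfold, which is the intuition behind the redesign described above.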
Numerical simulation methods for electron and ion optics
International Nuclear Information System (INIS)
Munro, Eric
2011-01-01
This paper summarizes currently used techniques for simulation and computer-aided design in electron and ion beam optics. Topics covered include: field computation, methods for computing optical properties (including Paraxial Rays and Aberration Integrals, Differential Algebra and Direct Ray Tracing), simulation of Coulomb interactions, space charge effects in electron and ion sources, tolerancing, wave optical simulations and optimization. Simulation examples are presented for multipole aberration correctors, Wien filter monochromators, imaging energy filters, magnetic prisms, general curved axis systems and electron mirrors.
Improving the performance of a filling line based on simulation
Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.
2016-08-01
The paper describes a method of improving the performance of a filling line based on simulation. The study concerns a production line located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and approximated using continuous statistical distributions. The model was validated against line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations formed the basis of a financial analysis: NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
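The financial screening step generalizes readily; with illustrative cash flows (not the paper's figures), NPV discounts each year's net cash flow while a simple ROI relates the total gain to the initial outlay:

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def roi(cashflows):
    """Simple (undiscounted) return on the initial investment."""
    invest = -cashflows[0]
    return (sum(cashflows[1:]) - invest) / invest

# hypothetical improvement scenario: 1000 invested, 400 saved per year for 4 years
flows = [-1000.0, 400.0, 400.0, 400.0, 400.0]
```

A positive NPV at the chosen discount rate is the usual accept criterion for an improvement scenario.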
Simulation and Verificaiton of Flow in Test Methods
DEFF Research Database (Denmark)
Thrane, Lars Nyholm; Szabo, Peter; Geiker, Mette Rica
2005-01-01
Simulations and experimental results of L-box and slump flow test of a self-compacting mortar and a self-compacting concrete are compared. The simulations are based on a single fluid approach and assume an ideal Bingham behavior. It is possible to simulate the experimental results of both tests...
Computer Animation Based on Particle Methods
Directory of Open Access Journals (Sweden)
Rafal Wcislo
1999-01-01
The paper presents the main issues of computer animation of a set of elastic macroscopic objects based on the particle method. The main aim of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions, and interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (particle method and cellular automata rules) and the method of combining them in a single animation program. In order to speed up execution, a parallel version based on a network of workstations was developed. The paper describes the parallelization methods and considers problems of load balancing, collision detection, process synchronization and distributed control of the animation.
Activity based costing (ABC) method
Directory of Open Access Journals (Sweden)
Prof. Ph.D. Saveta Tudorache
2008-05-01
In the present paper the need for and advantages of using the Activity Based Costing method are presented, a need arising from the problem of information pertinence. This problem has occurred due to the limitations of classic methods in this field, limitations also reflected in the disadvantages of such classic methods in establishing complete costs.
Energy Technology Data Exchange (ETDEWEB)
Rinkel, J.; Dinten, J.M.; Tabary, J
2004-07-01
The use of focused anti-scatter grids on digital radiographic systems with two-dimensional detectors produces acquisitions with a decreased scatter-to-primary ratio and thus improved contrast and resolution. Simulation software is of great interest for optimizing grid configuration for a specific application. Classical simulators are based on complete, detailed geometric descriptions of the grid. They are accurate but very time consuming, since they use Monte Carlo code to simulate scatter within the high-frequency grids. We propose a new practical method that couples an analytical simulation of the grid interaction with a radiographic system simulation program. First, a two-dimensional probability matrix depending on the grid is created offline, in which the first dimension represents the angle of impact with respect to the normal to the grid lines and the second the energy of the photon. This probability matrix is then used by the Monte Carlo simulation software to provide the final scattered flux image. To evaluate the gain in CPU time, we define the increasing factor as the increase in CPU time of the simulation with, as opposed to without, the grid. Increasing factors were calculated with the new model and with classical methods representing the grid by its CAD model as part of the object. With the new method, increasing factors are smaller by one to two orders of magnitude, with a difference in calculated scatter of less than five percent between the new and the classical method. (authors)
Simulation-Based Training for Thoracoscopy
DEFF Research Database (Denmark)
Bjurström, Johanna Margareta; Konge, Lars; Lehnert, Per
2013-01-01
An increasing proportion of thoracic procedures are performed using video-assisted thoracic surgery. This minimally invasive technique places special demands on the surgeons. Using simulation-based training on artificial models or animals has been proposed to overcome the initial part of the learning curve. This study aimed to investigate the effect of simulation-based training and to compare self-guided and educator-guided training.
Benchmarking HRA methods against different NPP simulator data
International Nuclear Information System (INIS)
Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta
2008-01-01
The paper presents both international and Bulgarian experience in assessing HRA methods and their underlying models, and approaches for their validation and verification by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology and outlook of the studies are described.
Particle-transport simulation with the Monte Carlo method
International Nuclear Information System (INIS)
Carter, L.L.; Cashwell, E.D.
1975-01-01
Attention is focused on the application of the Monte Carlo method to particle transport problems, with emphasis on neutron and photon transport. Topics covered include sampling methods, mathematical prescriptions for simulating particle transport, mechanics of simulating particle transport, neutron transport, and photon transport. A literature survey of 204 references is included. (GMT)
Knowledge-based simulation using object-oriented programming
Sidoran, Karen M.
1993-01-01
Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
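The factual/behavioral split described above maps naturally onto classes: attributes hold factual knowledge, methods encode behavioral knowledge, and a scheduler dispatches events to entities. A minimal discrete-event sketch (hypothetical entity types, not the RASE design):

```python
import heapq

class Entity:
    """Factual knowledge lives in attributes; behavioral knowledge in methods."""
    def __init__(self, name, speed):
        self.name, self.speed, self.position = name, speed, 0.0   # facts

    def handle(self, event, sim):                                  # behavior
        if event == "move":
            self.position += self.speed
            sim.schedule(sim.now + 1.0, self, "move")              # keep moving

class Simulation:
    def __init__(self):
        self.now, self.queue, self.counter = 0.0, [], 0

    def schedule(self, time, entity, event):
        self.counter += 1                      # tie-breaker so the heap never compares entities
        heapq.heappush(self.queue, (time, self.counter, entity, event))

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, entity, event = heapq.heappop(self.queue)
            entity.handle(event, self)

# usage: one entity scheduling its own future behavior
sim = Simulation()
scout = Entity("scout", speed=2.0)
sim.schedule(0.0, scout, "move")
sim.run(until=5.0)   # events fire at t = 0, 1, 2, 3, 4, 5 -> six moves
```

Because each entity carries its own behavior, new entity types can be added without touching the scheduler, which is the extensibility argument made for the object-oriented paradigm above.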
Real-time hybrid simulation using the convolution integral method
International Nuclear Information System (INIS)
Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A
2011-01-01
This paper proposes a real-time hybrid simulation method that allows complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation: it allows real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model, and it ensures numerical stability in the presence of high-frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results.
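The core computation of the CI method, convolving the numerical substructure's impulse response with the interface force history, can be sketched for a single-degree-of-freedom oscillator (illustrative parameters, not the two-story test frame):

```python
import math

def impulse_response(m, c, k, dt, n):
    """Unit-impulse displacement response of an underdamped SDOF system."""
    wn = math.sqrt(k / m)
    zeta = c / (2.0 * math.sqrt(k * m))
    wd = wn * math.sqrt(1.0 - zeta ** 2)
    return [math.exp(-zeta * wn * i * dt) * math.sin(wd * i * dt) / (m * wd)
            for i in range(n)]

def convolve_response(h, force, dt):
    """Duhamel's integral as a discrete convolution: x[i] = sum_j h[i-j]*f[j]*dt."""
    return [dt * sum(h[i - j] * force[j] for j in range(i + 1))
            for i in range(len(force))]
```

Convolving with a discrete unit impulse (f[0] = 1/dt) simply reproduces the impulse response, a quick sanity check on the scheme.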
Hockey lines for simulation-based learning.
Topps, David; Ellaway, Rachel; Kupsh, Christine
2015-06-01
Simulation-based health professional education is often limited in accommodating large numbers of students. Most organisations do not have enough simulation suites or staff to support growing demands. We needed to find ways to make simulation sessions more accommodating for larger groups of learners, so that more than a few individuals could be active in a simulation scenario at any one time. Moreover, we needed to make the experience meaningful for all participating learners. We used the metaphor of (ice) hockey lines and substitution 'on the fly' to effectively double the number of learners that can be actively engaged at once. Team players must communicate clearly and observe keenly, so that currently playing members understand what is happening from moment to moment and incoming substitutes can take over their roles seamlessly. We found that this hockey lines approach to simulation-based team scenarios raises learners' levels of engagement, reinforces good crew resource management (CRM) practices, enhances closed-loop communication, and helps learners to understand their cognitive biases and limitations when working in high-pressure situations. During our continuing refinement of the hockey lines approach, we developed a number of variations on the basic activity model, with various benefits and applications. Both students and teachers have been enthusiastically positive about this approach when it was introduced at our various courses and participating institutions. © 2015 John Wiley & Sons Ltd.
DEFF Research Database (Denmark)
Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J
2012-01-01
Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.
Module-based Simulation System for efficient development of nuclear simulation programs
International Nuclear Information System (INIS)
Yoshikawa, Hidekazu; Wakabayashi, Jiro
1990-01-01
Module-based Simulation System (MSS) has been developed to realize a new software environment enabling versatile, flexible dynamic simulation of complex nuclear power plant systems. Described in the paper are (i) the fundamental methods utilized in MSS and its software systemization, (ii) the development of a human interface system to help users generate integrated simulation programs automatically, and (iii) the development of an intelligent user support system to help users in two phases: automatic semantic diagnosis and consultation for automatic input data setup for the MSS-generated programs. (author)
Simulation-based Testing of Control Software
Energy Technology Data Exchange (ETDEWEB)
Ozmen, Ozgur [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nutaro, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sanyal, Jibonananda [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Olama, Mohammed M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-02-10
It is impossible to adequately test complex software by examining its operation in a physical prototype of the monitored system. Adequate test coverage can require millions of test cases, and the cost of equipment prototypes combined with the real-time constraints of testing with them makes it infeasible to sample more than a small number of these tests. Model-based testing seeks to avoid this problem by allowing for large numbers of relatively inexpensive virtual prototypes that operate in simulation time at a speed limited only by the available computing resources. In this report, we describe how a computer system emulator can be used as part of a model-based testing environment; specifically, we show that a complete software stack, including operating system and application software, can be deployed within a simulated environment, and that these simulations can proceed as fast as possible. To illustrate this approach to model-based testing, we describe how it is being used to test several building control systems that act to coordinate air conditioning loads for the purpose of reducing peak demand. These tests involve the use of ADEVS (A Discrete Event System Simulator) and QEMU (Quick Emulator) to host the operational software within the simulation, and a building model developed with the MODELICA programming language using the Buildings Library and packaged as an FMU (Functional Mock-up Unit) that serves as the virtual test environment.
The frontal method in hydrodynamics simulations
Walters, R.A.
1980-01-01
The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency for several hydrodynamics problems. Three basic modifications were shown to be of value: 1. elimination of equations with boundary conditions beforehand, 2. modification of the pivoting procedures to allow dynamic management of the equation size, and 3. storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems. © 1980.
Viscoelastic Earthquake Cycle Simulation with Memory Variable Method
Hirahara, K.; Ohtani, M.
2017-12-01
There have so far been no EQ (earthquake) cycle simulations, based on RSF (rate and state friction) laws, in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and leads to huge computational costs. This is one reason why almost no simulations have been performed in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st-order differential equations, no hereditary integrals are needed in the stress calculation and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, we first introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull, at a constant rate, the block obeying the RSF law. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value; smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer of 40 km thickness overriding a Maxwell viscoelastic half
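The payoff of the memory variable method can be seen in a scalar toy model: for a single Maxwell relaxation mechanism with relaxation function G(t) = G0·exp(-t/τ), the O(N²) hereditary integral and an O(N) memory variable recursion give identical stresses (a sketch, not the 2-D fault code):

```python
import math

def hereditary_stress(strain_rate, dt, G0, tau):
    """Direct hereditary integral: sigma_i = sum_j G0*exp(-(i-j)*dt/tau)*edot_j*dt."""
    out = []
    for i in range(len(strain_rate)):
        out.append(sum(G0 * math.exp(-(i - j) * dt / tau) * strain_rate[j] * dt
                       for j in range(i + 1)))   # needs the whole past history
    return out

def memory_stress(strain_rate, dt, G0, tau):
    """Equivalent recursion: m_i = m_{i-1}*exp(-dt/tau) + G0*edot_i*dt."""
    m, out = 0.0, []
    decay = math.exp(-dt / tau)
    for edot in strain_rate:
        m = m * decay + G0 * edot * dt   # only the current memory variable is stored
        out.append(m)
    return out
```

Unrolling the recursion shows it reproduces the convolution sum term by term, so the cost per step is constant instead of growing with the number of past steps.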
Multiple time-scale methods in particle simulations of plasmas
International Nuclear Information System (INIS)
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
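Subcycling, one of the techniques surveyed, advances fast dynamics with small substeps inside each large outer timestep. A minimal sketch, with arbitrary units and a uniform magnetic field (not any specific scheme from the survey), uses the energy-conserving Boris rotation for the fast gyro-motion:

```python
import math

# Toy subcycling: fast gyration is advanced with NSUB substeps per
# large outer step; in a real code the slowly varying fields would be
# updated only in the outer loop.  Units and values are illustrative.
qm = 1.0          # charge-to-mass ratio
B = 1.0           # magnetic field along z
DT = 1.0          # large (outer) timestep, resolves slow physics
NSUB = 20         # substeps per outer step for the fast gyration

vx, vy = 1.0, 0.0
x, y = 0.0, 0.0
dt = DT / NSUB
for outer in range(50):           # slow loop (field updates would go here)
    for _ in range(NSUB):         # subcycled fast loop
        t = qm * B * dt / 2.0     # Boris rotation for the v x B force
        s = 2.0 * t / (1.0 + t * t)
        vxp = vx + vy * t
        vyp = vy - vx * t
        vx = vx + vyp * s
        vy = vy - vxp * s
        x += vx * dt
        y += vy * dt

speed = math.hypot(vx, vy)
print(speed)   # the Boris rotation preserves the speed exactly
```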
Tong, Xuming; Chen, Jinghang; Miao, Hongyu; Li, Tingting; Zhang, Le
2015-01-01
Agent-based models (ABM) and differential equations (DE) are two commonly used methods for immune system simulation. However, it is difficult for ABM to estimate key parameters of the model by incorporating experimental data, whereas the differential equation model is incapable of describing the complicated immune system in detail. To overcome these problems, we developed an integrated ABM regression model (IABMR). It combines the advantages of ABM and DE by employing ABM to mimic the multi-scale immune system with various phenotypes and types of cells, and by using the input and output of the ABM to build up a LOESS regression for key parameter estimation. Next, we employed a greedy algorithm to estimate the key parameters of the ABM with respect to the same experimental data set, and used the ABM to describe a 3D immune system similar to previous studies that employed the DE model. These results indicate that IABMR not only has the potential to simulate the immune system at various scales, phenotypes and cell types, but can also accurately infer the key parameters, as the DE model does. Therefore, this study developed a complex-system modelling approach that can simulate the complicated immune system in detail, like ABM, and validate the reliability and efficiency of the model, like DE, by fitting the experimental data. PMID:26535589
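The general workflow, sweeping an ABM over a parameter grid, regressing its output on the parameter, and inverting the regression against observed data, can be shown with a deliberately tiny model. Everything here is invented for illustration (the paper uses LOESS and an immune-system ABM; this sketch uses ordinary least squares and a one-parameter toy ABM):

```python
import random

# Toy version of the ABM-plus-regression idea: run a stochastic
# agent-based model over a parameter grid, regress its output on the
# parameter, then invert the regression to estimate the parameter that
# matches "experimental" data.
random.seed(1)
N_AGENTS = 500

def abm(p):
    """Each agent independently becomes 'active' with probability p."""
    return sum(1 for _ in range(N_AGENTS) if random.random() < p)

# 1. Sweep the parameter and record ABM input/output pairs.
grid = [i / 20 for i in range(2, 19)]            # p = 0.10 .. 0.90
runs = [(p, abm(p)) for p in grid for _ in range(20)]

# 2. Ordinary least-squares line through (p, output); a stand-in for LOESS.
n = len(runs)
sx = sum(p for p, _ in runs); sy = sum(y for _, y in runs)
sxx = sum(p * p for p, _ in runs); sxy = sum(p * y for p, y in runs)
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

# 3. Invert the regression against an "observed" output (true p = 0.45).
observed = abm(0.45)
p_hat = (observed - a) / b
print(round(p_hat, 2))   # close to the true value 0.45
```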
Agent Based Modelling for Social Simulation
Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.
2013-01-01
This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course
Directory of Open Access Journals (Sweden)
Odile Sauzet
2017-07-01
Background: The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations and a limited number of clusters of size two (twins) and maybe of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants, we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes, but very little is known about their reliability when only a limited number of small clusters are present. Methods: Using simulated data based on a dataset of preterm infants, we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for logistic random intercept models, and generalised estimating equations were compared. Results: The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters, while a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will provide estimates similar to logistic regression. The method which seems to provide the best balance between estimation of the standard error and of the parameter, for any percentage of twins, is generalised estimating equations. Conclusions: This study has shown that the number of covariates or the level-two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins, but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.
Simulation of the 2-dimensional Drude’s model using molecular dynamics method
Energy Technology Data Exchange (ETDEWEB)
Naa, Christian Fredy; Amin, Aisyah; Ramli,; Suprijadi,; Djamal, Mitra [Theoretical High Energy Physics and Instrumentation Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Wahyoedi, Seramika Ari; Viridi, Sparisoma, E-mail: viridi@cphys.fi.itb.ac.id [Nuclear and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)
2015-04-16
In this paper, we report the results of a simulation of electronic conduction in solids. The simulation is based on the Drude model, using the molecular dynamics (MD) method with a fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, τ(L, d), can be obtained empirically from the simulation results.
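The Drude picture underlying the paper, free acceleration in an applied field interrupted by randomizing collisions, can be sketched with a stochastic toy model. This is far simpler than the predictor-corrector MD of the paper, and all values are illustrative; the expected drift velocity is q·E·τ/m:

```python
import random

# Monte Carlo sketch of Drude-style conduction: electrons accelerate
# freely in a field and lose their drift memory in random collisions
# occurring with probability dt/tau per step.
random.seed(7)
q_m = 1.0        # charge-to-mass ratio
E = 0.5          # applied field
tau = 2.0        # mean free time between collisions
dt = 0.01
steps = 400_000

v, v_sum = 0.0, 0.0
for _ in range(steps):
    if random.random() < dt / tau:      # collision: drift memory lost
        v = random.gauss(0.0, 1.0)      # thermal velocity, zero mean
    v += q_m * E * dt                   # free flight in the field
    v_sum += v

v_drift = v_sum / steps
print(v_drift)   # close to q_m * E * tau = 1.0
```

The measured drift velocity divided by the field gives a mobility, from which a Drude conductivity follows once a carrier density is assumed.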
A Table-Based Random Sampling Simulation for Bioluminescence Tomography
Directory of Open Access Journals (Sweden)
Xiaomeng Zhang
2006-01-01
Monte Carlo (MC) simulation is a popular method for modeling photon propagation in turbid media, but its main drawback is cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering into a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast version of the conventional MC simulation of photon propagation. It retains the flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach, based on trial and error, to estimate the position of the fluorescent source as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
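The table-querying mechanism can be shown in one variable: the inverse CDF of the exponential free-path distribution is tabulated once, and each sample is then a cheap lookup instead of a per-photon logarithm. The real TBRS tables collapse whole multi-step scattering sequences; this sketch, with invented values, only demonstrates the mechanism:

```python
import math, random

# Table-based sampling of photon free paths in a medium with
# attenuation coefficient MU: precompute the inverse CDF once, then
# sample by indexing with a uniform random number.
random.seed(3)
MU = 5.0                 # attenuation coefficient (1 / mean free path)
N_TABLE = 4096

# Entry i holds F^{-1}((i + 0.5) / N_TABLE) for the exponential CDF.
table = [-math.log(1.0 - (i + 0.5) / N_TABLE) / MU for i in range(N_TABLE)]

def sample_path():
    return table[int(random.random() * N_TABLE)]

paths = [sample_path() for _ in range(200_000)]
mean_path = sum(paths) / len(paths)
print(mean_path)   # close to the mean free path 1 / MU = 0.2
```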
International Nuclear Information System (INIS)
Antonenko, V.G.; Blau, D.S.
2006-01-01
After all lead tungstate crystals had been fabricated and transferred for assembly of the PHOS gamma-spectrometer within the ALICE experiment at the Large Hadron Collider, a simulation of the light collection in a single scintillation module was performed, taking into account the realistic properties of the entire crystal batch.
DEFF Research Database (Denmark)
Christensen, Steen; Moore, C.; Doherty, J.
2006-01-01
For a synthetic case we computed three types of individual prediction intervals for the location of the aquifer entry point of a particle that moves through a heterogeneous aquifer and ends up in a pumping well. (a) The nonlinear regression-based interval (Cooley, 2004) was found to be nearly accurate and required a few hundred model calls to be computed. (b) The linearized regression-based interval (Cooley, 2004) required just over a hundred model calls and also appeared to be nearly correct. (c) The calibration-constrained Monte-Carlo interval (Doherty, 2003) was found to be narrower than the regression-based intervals but required about half a million model calls. It is unclear whether or not this type of prediction interval is accurate.
Applying Simulation Method in Formulation of Gluten-Free Cookies
Directory of Open Access Journals (Sweden)
Nikitina Marina
2017-01-01
At present, a priority direction in the development of new food products is technology for special-purpose products. These include gluten-free confectionery products intended for people with celiac disease. Gluten-free products are in demand among consumers, and there is a need to expand the assortment and improve quality indicators. This article presents the results of studies on the development of pastry products based on amaranth flour that do not contain gluten. The study is based on a method of simulating gluten-free confectionery recipes with a functional orientation in order to optimize their chemical composition. The resulting products will diversify and supplement the diet with the nutrients needed by people with gluten intolerance, as well as by those who follow a gluten-free diet.
An introduction to statistical computing a simulation-based approach
Voss, Jochen
2014-01-01
A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems. Sampling-based simulation techniques are now an invaluable tool for exploring statistical models. This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods. It also includes some advanced met
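Two of the classical topics the book introduces, random number generation and Monte Carlo methods, fit in a few lines. This sketch is a generic illustration of those techniques, not an excerpt from the book:

```python
import random, math

# Inverse-transform random number generation plus plain Monte Carlo
# integration, two staples of sampling-based statistical computing.
random.seed(42)

# Inverse transform: turn uniforms into Exp(lambda) samples.
lam = 2.0
exp_samples = [-math.log(1.0 - random.random()) / lam for _ in range(100_000)]
mean_exp = sum(exp_samples) / len(exp_samples)
print(mean_exp)   # close to 1 / lam = 0.5

# Monte Carlo integration: estimate pi from the unit quarter-circle area.
n = 200_000
hits = sum(1 for _ in range(n)
           if random.random() ** 2 + random.random() ** 2 < 1.0)
pi_est = 4.0 * hits / n
print(pi_est)     # close to 3.14
```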
Enriching Triangle Mesh Animations with Physically Based Simulation.
Li, Yijing; Xu, Hongyi; Barbic, Jernej
2017-10-01
We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.
Simulation-based training for thoracoscopic lobectomy
DEFF Research Database (Denmark)
Jensen, Katrine; Ringsted, Charlotte; Hansen, Henrik Jessen
2014-01-01
overcome the first part of the learning curve, but no virtual-reality simulators for thoracoscopy are commercially available. This study aimed to investigate whether training on a laparoscopic simulator enables trainees to perform a thoracoscopic lobectomy. METHODS: Twenty-eight surgical residents were randomized to either virtual-reality training on a nephrectomy module or traditional black-box simulator training. After a retention period they performed a thoracoscopic lobectomy on a porcine model and their performance was scored using a previously validated assessment tool. RESULTS: The groups did not differ in age or gender. All participants were able to complete the lobectomy. The performance of the black-box group was significantly faster during the test scenario than the virtual-reality group: 26.6 min (SD 6.7 min) versus 32.7 min (SD 7.5 min). No difference existed between the two groups when...
Aleshin, V. I.; Raevskiĭ, I. P.; Sitalo, E. I.
2008-11-01
A complete set of dielectric, piezoelectric, and elastic parameters for the textured ceramic material 0.67PMN-0.33PT is calculated by the self-consistency method with due regard for the anisotropy and piezoelectric activity of the medium. It is shown that the best piezoelectric properties corresponding to those of a single crystal are observed for the ceramic material with a texture in which all crystallites are oriented parallel to the [001] direction of the parent perovskite cubic cell. The simplest models of the polarization of an untextured ceramic material with a random initial orientation of crystallites are considered. The results obtained are compared with experimental data.
Solution of partial differential equations by agent-based simulation
International Nuclear Information System (INIS)
Szilagyi, Miklos N
2014-01-01
The purpose of this short note is to demonstrate that partial differential equations can be quickly solved by agent-based simulation with high accuracy. There is no need for the solution of large systems of algebraic equations. This method is especially useful for quick determination of potential distributions and demonstration purposes in teaching electromagnetism. (letters and comments)
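A standard way to realize the idea in the note, solving a PDE with many simple agents, is the random-walk solution of Laplace's equation: the potential at a point equals the expected boundary value where a walker started there first exits. This toy domain (a square with the top edge at potential 1, the others at 0) is invented for illustration; by symmetry the exact value at the centre is 0.25:

```python
import random

# Agent-based (random walk) estimate of a solution of Laplace's
# equation on a square grid.
random.seed(11)
N = 20                      # interior of an (N+1) x (N+1) grid
START = (N // 2, N // 2)

def boundary_value(x, y):
    return 1.0 if y == N else 0.0      # top edge at 1, others at 0

def walk(x, y):
    while 0 < x < N and 0 < y < N:     # wander until a boundary is hit
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return boundary_value(x, y)

n_walkers = 20_000
phi = sum(walk(*START) for _ in range(n_walkers)) / n_walkers
print(phi)   # close to the exact centre potential 0.25
```

No linear system is assembled or solved, which is the point the letter makes.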
Daylighting simulation: methods, algorithms, and resources
Energy Technology Data Exchange (ETDEWEB)
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, driven in part by other forces: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources with direct applicability to the small daylighting analysis community, much of which is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and a printed form is produced only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations; this allows the report to be very concise, as the links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but
Simulation of bubble motion under gravity by lattice Boltzmann method
International Nuclear Information System (INIS)
Takada, Naoki; Misawa, Masaki; Tomiyama, Akio; Hosokawa, Shigeo
2001-01-01
We describe numerical simulation results for bubble motion under gravity by the lattice Boltzmann method (LBM), which assumes that a fluid consists of mesoscopic fluid particles repeating collision and translation, and that a multiphase interface is reproduced in a self-organizing way by repulsive interaction between different kinds of particles. The purposes of this study are to examine the applicability of LBM to the numerical analysis of bubble motion, and to develop a three-dimensional version of the binary fluid model that introduces a free energy function. We included buoyancy terms due to the density difference in the lattice Boltzmann equations, and simulated single- and two-bubble motions, setting flow conditions according to the Eötvös and Morton numbers. The two-dimensional results by LBM agree with those by the Volume of Fluid method based on the Navier-Stokes equations. The three-dimensional model possesses a surface tension satisfying Laplace's law, and reproduces the motion of a single bubble as well as the two-bubble interaction of approach and coalescence in a circular tube. These results prove that the buoyancy terms and the 3D model proposed here are suitable, and that LBM is useful for the numerical analysis of bubble motion under gravity. (author)
Research methods for simulating digital compensators and autonomous control systems
Directory of Open Access Journals (Sweden)
V. S. Kudryashov
2016-01-01
A peculiarity of the present stage of development of production is the need to control and regulate a large number of process parameters that mutually influence each other; when single-circuit systems are used, this significantly reduces the quality of the transition process, resulting in significant costs of raw materials and energy and reduced product quality. Using a stand-alone (autonomous) digital control system eliminates the correlation of technological parameters, gives the system the desired dynamic and static properties, and improves the quality of regulation. However, the complexity of the configuration and implementation procedures (modeling the compensators of autonomous systems of this type), associated with the need to perform a significant amount of complex analytic transformations, significantly limits the scope of their application. In this regard, an approach based on decomposition is proposed for calculation and simulation (realization), consisting in representing the elements of the autonomous control part of the digital control system as a series-parallel connection. The theoretical study is carried out in a general way for systems of any dimension. The results of computational experiments obtained during the simulation of four autonomous control systems are given, together with a comparative analysis and conclusions on the effectiveness of each of the methods. The results obtained can be used in the development of multi-dimensional process control systems.
Sauzet, Odile; Peacock, Janet L
2017-07-20
The analysis of perinatal outcomes often involves datasets with some multiple births. These are datasets mostly formed of independent observations and a limited number of clusters of size two (twins) and maybe of size three or more. This non-independence needs to be accounted for in the statistical analysis. Using simulated data based on a dataset of preterm infants we have previously investigated the performance of several approaches to the analysis of continuous outcomes in the presence of some clusters of size two. Mixed models have been developed for binomial outcomes but very little is known about their reliability when only a limited number of small clusters are present. Using simulated data based on a dataset of preterm infants we investigated the performance of several approaches to the analysis of binomial outcomes in the presence of some clusters of size two. Logistic models, several methods of estimation for the logistic random intercept models and generalised estimating equations were compared. The presence of even a small percentage of twins means that a logistic regression model will underestimate all parameters but a logistic random intercept model fails to estimate the correlation between siblings if the percentage of twins is too small and will provide similar estimates to logistic regression. The method which seems to provide the best balance between estimation of the standard error and the parameter for any percentage of twins is the generalised estimating equations. This study has shown that the number of covariates or the level two variance do not necessarily affect the performance of the various methods used to analyse datasets containing twins but when the percentage of small clusters is too small, mixed models cannot capture the dependence between siblings.
Algebraic Verification Method for SEREs Properties via Groebner Bases Approaches
Directory of Open Access Journals (Sweden)
Ning Zhou
2013-01-01
This work presents an efficient solution using a computer algebra system to perform linear temporal property verification for synchronous digital systems. The method is based on both Groebner bases approaches and symbolic simulation. A mechanism for constructing canonical polynomial-set-based symbolic representations for both circuit descriptions and assertions is studied. We then present a complete checking algorithm framework based on these algebraic representations using Groebner bases. The computational experience reported in this work shows that the algebraic approach is a quite competitive checking method and will be a useful supplement to existing verification methods based on simulation.
DEFF Research Database (Denmark)
Sørensen, Jette Led; Østergaard, Doris; LeBlanc, Vicki
2017-01-01
BACKGROUND: Simulation-based medical education (SBME) has traditionally been conducted as off-site simulation in simulation centres. Some hospital departments also provide off-site simulation using in-house training room(s) set up for simulation away from the clinical setting, and these activities ... simulations. DISCUSSION: Non-randomised studies argue that in situ simulation is more effective for educational purposes than other types of simulation settings. Conversely, the few comparison studies that exist, either randomised or retrospective, show that choice of setting does not seem to influence ... that choice of setting for simulations does not seem to influence individual and team learning. Department-based local simulation, such as simulation in-house and especially in situ simulation, leads to gains in organisational learning. The overall objectives of simulation-based education and factors ...
A web-based virtual lighting simulator
Energy Technology Data Exchange (ETDEWEB)
Papamichael, Konstantinos; Lai, Judy; Fuller, Daniel; Tariq, Tara
2002-05-06
This paper describes a web-based "virtual lighting simulator," intended to allow architects and lighting designers to quickly assess the effect of key parameters on daylighting and lighting performance in various space types. The virtual lighting simulator consists of a web-based interface that allows navigation through a large database of images and data generated through parametric lighting simulations. In its current form, the virtual lighting simulator has two main modules, one for daylighting and one for electric lighting. The daylighting module includes images and data for a small office space, varying most key daylighting parameters, such as window size and orientation, glazing type, surface reflectance, sky conditions, time of the year, etc. The electric lighting module includes images and data for five space types (classroom, small office, large open office, warehouse and small retail), varying key lighting parameters, such as the electric lighting system, surface reflectance, dimming/switching, etc. The computed images include perspectives and plans and are displayed in various formats to support qualitative as well as quantitative assessment. The quantitative information is in the form of iso-contour lines superimposed on the images, as well as false-color images and statistical information on work-plane illuminance. The qualitative information includes images that are adjusted to account for the sensitivity and adaptation of the human eye. The paper also includes a section on the major technical issues and their resolution.
Research on Monte Carlo simulation method of industry CT system
International Nuclear Information System (INIS)
Li Junli; Zeng Zhi; Qui Rui; Wu Zhen; Li Chunyan
2010-01-01
There are a series of radiation physics problems in the design and production of industrial CT systems (ICTS), including limit quality index analysis and the effects of scattering, detector efficiency and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of the events involved have very low probability, so direct simulation is very difficult, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-important sampling (PFPAIS) is introduced on the basis of auto-important sampling. Then, on the basis of PFPAIS, a dedicated ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is shown to simulate ICTS more exactly and efficiently. Furthermore, the effects of various disturbances on ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide research on the radiation physics problems in ICTS. (author)
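The difficulty named above, events of very low probability, is exactly what importance sampling addresses: sample from a biased proposal concentrated on the rare event, then re-weight by the likelihood ratio. A one-dimensional sketch of that principle (not the paper's PFPAIS itself) estimates the tail probability P(X > 4) of a standard normal:

```python
import math, random

# Importance sampling of a rare event: the proposal N(4, 1) is centred
# on the tail, and each hit is weighted by p(x)/q(x) = exp(8 - 4x).
random.seed(5)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # about 3.17e-5

n = 100_000
total = 0.0
for _ in range(n):
    x = random.gauss(4.0, 1.0)                  # biased proposal
    if x > 4.0:
        total += math.exp(8.0 - 4.0 * x)        # likelihood-ratio weight
estimate = total / n

print(exact, estimate)   # the two agree to within a few percent
```

Naive sampling would need hundreds of millions of draws for comparable accuracy, which is why such schemes matter for low-probability CT physics.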
A simple method for potential flow simulation of cascades
Indian Academy of Sciences (India)
vortex panel method to simulate potential flow in cascades is presented. The cascade ... The fluid loading on the blades, such as the normal force and pitching moment, may ... of such discrete infinite array singularities along the blade surface.
Lin, Z; Gehring, R; Mochel, J P; Lavé, T; Riviere, J E
2016-10-01
This review provides a tutorial for individuals interested in quantitative veterinary pharmacology and toxicology and offers a basis for establishing guidelines for physiologically based pharmacokinetic (PBPK) model development and application in veterinary medicine. This is important as the application of PBPK modeling in veterinary medicine has evolved over the past two decades. PBPK models can be used to predict drug tissue residues and withdrawal times in food-producing animals, to estimate chemical concentrations at the site of action and target organ toxicity to aid risk assessment of environmental contaminants and/or drugs in both domestic animals and wildlife, as well as to help design therapeutic regimens for veterinary drugs. This review provides a comprehensive summary of PBPK modeling principles, model development methodology, and the current applications in veterinary medicine, with a focus on predictions of drug tissue residues and withdrawal times in food-producing animals. The advantages and disadvantages of PBPK modeling compared to other pharmacokinetic modeling approaches (i.e., classical compartmental/noncompartmental modeling, nonlinear mixed-effects modeling, and interspecies allometric scaling) are further presented. The review finally discusses contemporary challenges and our perspectives on model documentation, evaluation criteria, quality improvement, and offers solutions to increase model acceptance and applications in veterinary pharmacology and toxicology. © 2016 John Wiley & Sons Ltd.
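The withdrawal-time prediction described in the review can be caricatured with a two-compartment ODE model: simulate plasma and tissue drug amounts after a dose, then find when the tissue residue falls below a tolerance. Real PBPK models are physiologically parameterised per organ; every number here is invented for illustration:

```python
# Minimal two-compartment "PBPK-flavoured" sketch of withdrawal-time
# estimation, integrated with explicit Euler steps.
dose = 100.0              # mg, into plasma at t = 0
k_pt, k_tp = 0.4, 0.2     # plasma<->tissue transfer rates (1/day)
k_el = 0.3                # elimination from plasma (1/day)
tolerance = 0.1           # mg residue limit in tissue
dt = 0.001                # days

plasma, tissue, t = dose, 0.0, 0.0
withdrawal = None
while t < 200.0:
    dp = (-k_el * plasma - k_pt * plasma + k_tp * tissue) * dt
    dtis = (k_pt * plasma - k_tp * tissue) * dt
    plasma += dp
    tissue += dtis
    t += dt
    if withdrawal is None and t > 1.0 and tissue < tolerance:
        withdrawal = t    # first time the residue is below the limit
        break

print(round(withdrawal, 1))   # estimated withdrawal time in days
```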
Traffic simulation based ship collision probability modeling
Energy Technology Data Exchange (ETDEWEB)
Goerlandt, Floris, E-mail: floris.goerlandt@tkk.f [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland); Kujala, Pentti [Aalto University, School of Science and Technology, Department of Applied Mechanics, Marine Technology, P.O. Box 15300, FI-00076 AALTO, Espoo (Finland)
2011-01-15
Maritime traffic poses various risks in terms of human, environmental and economic loss. In a risk analysis of ship collisions, it is important to get a reasonable estimate for the probability of such accidents and the consequences they lead to. In this paper, a method is proposed to assess the probability of vessels colliding with each other. The method is capable of determining the expected number of accidents, the locations where and the times when they are most likely to occur, while providing input for models concerned with the expected consequences. At the basis of the collision detection algorithm lies an extensive time-domain micro-simulation of vessel traffic in the given area. The Monte Carlo simulation technique is applied to obtain a meaningful prediction of the relevant factors of the collision events. Data obtained through the Automatic Identification System are analyzed in detail to obtain realistic input data for the traffic simulation: traffic routes, the number of vessels on each route, ship departure times, main dimensions and sailing speed. The results obtained by the proposed method for the studied case of the Gulf of Finland are presented, showing reasonable agreement with registered accident and near-miss data.
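A stripped-down Monte Carlo sketch of the collision-candidate idea: two vessels sail crossing straight routes through the same waypoint with random departure times, and we count how often they come closer than a safety distance. The geometry, speeds and rates are invented; the paper drives its simulation with real AIS traffic data:

```python
import random

# Toy time-domain micro-simulation of two crossing vessels.
random.seed(9)
SPEED = 10.0        # knots, both vessels
SAFE = 0.5          # nautical miles: closer than this = close encounter
LEG = 20.0          # each vessel starts LEG nm from the crossing point

def min_distance(delay):
    """Minimum distance between an east-bound and a north-bound vessel;
    the second departs `delay` hours after the first (sketch: a vessel
    that has not yet departed is simply extrapolated behind its start)."""
    best = float("inf")
    t = 0.0
    while t < 2 * LEG / SPEED + abs(delay):
        a = -LEG + SPEED * t                    # vessel A along x-axis
        b = -LEG + SPEED * (t - delay)          # vessel B along y-axis
        best = min(best, (a * a + b * b) ** 0.5)
        t += 0.002
    return best

n = 1000
hits = sum(1 for _ in range(n)
           if min_distance(random.uniform(-1.0, 1.0)) < SAFE)
prob = hits / n
print(prob)   # empirical probability of a close encounter
```

For this geometry the encounter condition is |delay| < SAFE·√2/SPEED, so the analytic probability is about 0.07, which the simulation reproduces.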
International Nuclear Information System (INIS)
Liang, Hongbo; Fan, Man; You, Shijun; Zheng, Wandong; Zhang, Huan; Ye, Tianzhen; Zheng, Xuejing
2017-01-01
Highlights: •Four optical models for parabolic trough solar collectors were compared in detail. •Characteristics of the Monte Carlo Method and the Finite Volume Method were discussed. •A novel method was presented combining the advantages of the different models. •The method is suited to optical analysis of collectors with different geometries. •A new kind of cavity receiver was simulated using the novel method. -- Abstract: The PTC (parabolic trough solar collector) is widely used for space heating, heat-driven refrigeration, solar power, etc. The concentrated solar radiation is the only energy source for a PTC, so its optical performance significantly affects collector efficiency. In this study, four different optical models were constructed, validated and compared in detail. On this basis, a novel coupled method was presented that combines the advantages of these models and is suited to carrying out a large number of optical simulations of collectors with different geometrical parameters rapidly and accurately. Based on these simulation results, the optimal configuration of a collector with the highest efficiency can be determined; thus, the method is useful for collector optimization and design. In the four models, MCM (Monte Carlo Method) and FVM (Finite Volume Method) were used to initialize the photon distribution, while CPEM (Change Photon Energy Method) and MCM were adopted to describe the processes of reflection, transmission and absorption. For simulating reflection, transmission and absorption, CPEM was more efficient than MCM, so it was utilized in the coupled method. For photon distribution initialization, FVM saved running time and computational effort, but needed a suitable grid configuration. MCM only required a total number of rays for simulation, but needed higher computing cost, and its results fluctuated across multiple runs. In the novel coupled method, the grid configuration for FVM was optimized according to the "true values" from MCM of
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a statistical method based on random sampling, which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.
Simulation methods of nuclear electromagnetic pulse effects in integrated circuits
International Nuclear Information System (INIS)
Cheng Jili; Liu Yuan; En Yunfei; Fang Wenxiao; Wei Aixiang; Yang Yuanzhen
2013-01-01
In this paper, methods to compute the response of a transmission line (TL) illuminated by an electromagnetic pulse (EMP) are first introduced, including the finite-difference time-domain (FDTD) method and the transmission line matrix (TLM) method. Then the feasibility of electromagnetic topology (EMT) for simulating nuclear electromagnetic pulse (NEMP) effects in ICs is discussed. Finally, combined with the methods for computing the TL response, a new method to simulate transmission lines in ICs illuminated by NEMP is put forward. (authors)
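A one-dimensional FDTD scheme for a lossless transmission line (the telegrapher equations, with voltage and current on staggered half-cells) gives the flavour of the TL response computations named above. All values are illustrative and the excitation is a simple voltage pulse, not an EMP coupling model:

```python
import math

# 1-D FDTD for a lossless transmission line: leapfrog update of node
# voltages V and staggered branch currents I.
NX = 200
L_per = 1.0        # inductance per unit length
C_per = 1.0        # capacitance per unit length  -> wave speed 1
dx = 1.0
dt = 0.5 * dx * math.sqrt(L_per * C_per)    # CFL-stable timestep

V = [0.0] * NX          # node voltages
I = [0.0] * (NX - 1)    # branch currents (staggered half-cells)

# Gaussian voltage pulse launched near the left end.
for i in range(NX):
    V[i] = math.exp(-((i - 20) / 5.0) ** 2)

for _ in range(200):
    for i in range(NX - 1):                  # dI/dt = -(1/L) dV/dx
        I[i] -= dt / (L_per * dx) * (V[i + 1] - V[i])
    for i in range(1, NX - 1):               # dV/dt = -(1/C) dI/dx
        V[i] -= dt / (C_per * dx) * (I[i] - I[i - 1])

peak = max(range(NX), key=lambda i: abs(V[i]))
print(peak)   # the dominant pulse has propagated away from its start at i = 20
```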
Simulation-based assessment for construction helmets.
Long, James; Yang, James; Lei, Zhipeng; Liang, Daan
2015-01-01
In recent years, there has been a concerted effort toward greater job safety in all industries. Personal protective equipment (PPE) has been developed to help mitigate the risk of injury to humans exposed to hazardous situations. The human head is the most vulnerable to impact, as an impact of even moderate magnitude can cause serious injury or death, which is why industries have required the use of an industrial hard hat or helmet. Only a few articles published to date have focused on the risk of head injury when wearing an industrial helmet, and a full understanding of the effectiveness of construction helmets in reducing injury is lacking. This paper presents a simulation-based method to determine the threshold at which a human will sustain injury when wearing a construction helmet, and assesses the risk of injury for wearers of construction helmets or hard hats. Advanced finite element (FE) models were developed to study impacts on construction helmets. The FE model consists of two parts: the helmet and the human model. The human model consists of a brain enclosed by a skull and an outer layer of skin. The level and probability of head injury were determined using both the head injury criterion (HIC) and the tolerance limits set by Deck and Willinger. The HIC has been widely used to assess the likelihood of head injury in vehicles. The tolerance levels proposed by Deck and Willinger are more suited to finite element models but lack wide-scale validation. Different impact cases were studied using LSTC's LS-DYNA.
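The head injury criterion referenced above has a standard closed form: HIC = max over windows [t1, t2] of (t2 − t1)·((1/(t2 − t1)) ∫ a(t) dt)^2.5, with a(t) the resultant head acceleration in g. A minimal sketch (the sampling rate, window cap and direct O(n²) search are illustrative choices, not from the paper):

```python
def hic(accel_g, dt, max_window=0.036):
    """Head Injury Criterion from a resultant acceleration trace
    (accel_g in g's sampled every dt seconds), direct O(n^2) search."""
    n = len(accel_g)
    cum = [0.0]                      # trapezoidal cumulative integral
    for i in range(1, n):
        cum.append(cum[-1] + 0.5 * (accel_g[i] + accel_g[i - 1]) * dt)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            T = (j - i) * dt
            if T > max_window:       # HIC36-style window cap
                break
            avg = (cum[j] - cum[i]) / T
            if avg > 0.0:
                best = max(best, T * avg ** 2.5)
    return best

# A constant 100 g sustained for 10 ms gives HIC = 0.01 * 100**2.5 = 1000.
```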
Simulation and case-based learning
DEFF Research Database (Denmark)
Ørngreen, Rikke; Guralnick, David
2008-01-01
Abstract- This paper has its origin in the authors' reflection on years of practical experience, combined with literature readings, in preparation for a workshop on learn-by-doing simulation and case-based learning held at the ICELW 2008 conference (the International Conference on E-Learning in the Workplace). The purpose of this paper is to describe the two online learning methodologies and to raise questions for future discussion. In the workshop, the organizers and participants work with and discuss differences and similarities within the two pedagogical methodologies, focusing on how they are applied in workplace-related and e-learning contexts. In addition to the organizers, a small number of invited presenters will attend, giving demonstrations of their work within learn-by-doing simulation and case-based learning, while still leaving ample time for discussion among all participants.
Sorensen, J.L.; Ostergaard, D.; Leblanc, V.; Ottesen, B.; Konge, L.; Dieckmann, P.; Vleuten, C. van der
2017-01-01
BACKGROUND: Simulation-based medical education (SBME) has traditionally been conducted as off-site simulation in simulation centres. Some hospital departments also provide off-site simulation using in-house training room(s) set up for simulation away from the clinical setting, and these activities
Fast spot-based multiscale simulations of granular drainage
Energy Technology Data Exchange (ETDEWEB)
Rycroft, Chris H.; Wong, Yee Lok; Bazant, Martin Z.
2009-05-22
We develop a multiscale simulation method for dense granular drainage, based on the recently proposed spot model, in which the particle packing flows by local collective displacements in response to diffusing "spots" of interstitial free volume. By comparing with discrete-element method (DEM) simulations of 55,000 spheres in a rectangular silo, we show that the spot simulation is able to approximately capture many features of drainage, such as packing statistics, particle mixing, and flow profiles. The spot simulation runs two to three orders of magnitude faster than DEM, making it an appropriate method for real-time control or optimization. We demonstrate extensions for modeling particle heaping and avalanching at the free surface, and for simulating the boundary layers of slower flow near walls. We show that the spot simulations are robust and flexible by demonstrating that they can be used in both event-driven and fixed-timestep approaches, and by showing that the elastic relaxation step used in the model can be applied much less frequently and still produce good results.
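The core spot idea (free volume entering at the orifice and random-walking upward while particles shuffle down into the vacated space) can be caricatured in a few lines. This is a deliberately crude lattice toy under assumed parameters, not the authors' spot model with its elastic relaxation step:

```python
import random

def spot_drainage(width=11, height=30, n_spots=200, seed=1):
    """Lattice caricature of the spot model: a 'spot' of free volume
    enters at the bottom-centre orifice and random-walks upward; each
    upward move drops the material above into the vacated cell, so the
    packing drains toward the orifice.  All parameters are illustrative."""
    rng = random.Random(seed)
    grid = [[1] * width for _ in range(height)]  # 1 = particle, 0 = void
    drained = 0
    for _ in range(n_spots):
        x, y = width // 2, 0
        if grid[y][x]:
            drained += 1                  # the orifice particle exits
        while y < height - 1:
            nx = min(width - 1, max(0, x + rng.choice((-1, 0, 1))))
            grid[y][x] = grid[y + 1][nx]  # material above falls into the void
            x, y = nx, y + 1
        grid[y][x] = 0                    # the void surfaces at the top
    return grid, drained
```

Because each spot pass shifts material down along its path and discards only the orifice cell's content, the particle count in the grid plus the drained count is conserved exactly.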
An introduction to computer simulation methods applications to physical systems
Gould, Harvey; Christian, Wolfgang
2007-01-01
Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...
Motion simulation of hydraulic driven safety rod using FSI method
International Nuclear Information System (INIS)
Jung, Jaeho; Kim, Sanghaun; Yoo, Yeonsik; Cho, Yeonggarp; Kim, Jong In
2013-01-01
A hydraulically driven safety rod, one of the reactivity control mechanisms, is being developed by the Division for Reactor Mechanical Engineering, KAERI. In this paper, the motion of this rod is simulated by the fluid-structure interaction (FSI) method before manufacturing, for design verification and pump sizing. The simulation is carried out in the CFD domain with a user-defined function (UDF). The pressure drop changes only slightly with flow rate, which means that it is mainly determined by the weight of the moving part. The simulated piston velocity is linearly proportional to the flow rate, so the pump can be sized easily according to the rise- and drop-time requirements of the safety rod using the simulation results
Agent Based Modelling for Social Simulation
Smit, S.K.; Ubink, E.M.; Vecht, B. van der; Langley, D.J.
2013-01-01
This document is the result of an exploratory project looking into the status of, and opportunities for Agent Based Modelling (ABM) at TNO. The project focussed on ABM applications containing social interactions and human factors, which we termed ABM for social simulation (ABM4SS). During the course of this project two workshops were organized. At these workshops, a wide range of experts, both ABM experts and domain experts, worked on several potential applications of ABM. The results and ins...
International Nuclear Information System (INIS)
Sekimura, Naoto; Okita, Taira
2006-01-01
Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed based on processes such as binary collision approximation, molecular dynamics, kinetic Monte Carlo method, reaction rate method and dislocation dynamics. (T. Tanaka)
Interactive physically-based sound simulation
Raghuvanshi, Nikunj
The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes ten times less memory than a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation
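Sound synthesis from elastic surface vibrations is commonly approximated by modal synthesis: the object's response is a sum of exponentially damped sinusoids. The sketch below shows that generic model, not the dissertation's perceptually accelerated implementation; the mode frequencies, amplitudes and damping rates are made up:

```python
import math

def modal_impact(freqs, amps, damps, sr=44100, dur=0.5):
    """Sum of exponentially damped sinusoids - the classic modal model
    for impact sounds.  freqs (Hz), amps and damps (1/s) are per mode."""
    n = int(sr * dur)
    out = [0.0] * n
    for f, a, d in zip(freqs, amps, damps):
        w = 2.0 * math.pi * f
        for i in range(n):
            t = i / sr
            out[i] += a * math.exp(-d * t) * math.sin(w * t)
    return out

# A small struck-bar-like click from three hypothetical modes:
click = modal_impact([440.0, 1210.0, 2380.0], [1.0, 0.5, 0.25], [8.0, 12.0, 20.0])
```

In a real system the (f, a, d) triples come from a modal analysis of the object's geometry and material, and the excitation depends on the contact event.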
Entropy-based benchmarking methods
Temurshoev, Umed
2012-01-01
We argue that benchmarking sign-volatile series should be based on the principle of movement and sign preservation, which states that a benchmarked series should reproduce the movement and signs of the original series. We show that the widely used variants of the Denton (1971) method and the growth
Ocean Wave Simulation Based on Wind Field.
Directory of Open Access Journals (Sweden)
Zhongyi Li
Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development in recent years in the field of computer graphics, few of them consider constructing the ocean surface height field from the perspective of wind force driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continuous and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates.
Real time simulation method for fast breeder reactors dynamics
International Nuclear Information System (INIS)
Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.
1985-01-01
Multi-purpose real-time simulator models with suitable plant dynamics were developed; these models can be used not only in training operators but also in designing control systems, operation sequences and many other items which must be studied for the development of new types of reactors. The prototype fast breeder reactor ''Monju'' is taken as an example. Various factors affecting the accuracy and computational load of its dynamic simulation are analyzed. A method is presented which determines the optimum number of nodes in distributed systems and the optimum time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real-time dynamics models of fast breeder reactors. (author)
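The oscillations mentioned above are a classic symptom of an explicit scheme pushed past its stability limit. A standard textbook illustration (not the Monju evaporator model) is explicit Euler on dy/dt = −λy, which decays smoothly for λΔt < 1, oscillates for 1 < λΔt < 2, and diverges beyond 2:

```python
def euler_decay(lmbda, dt, steps, y0=1.0):
    """Explicit Euler on dy/dt = -lambda*y: y_{n+1} = (1 - lambda*dt) * y_n."""
    ys = [y0]
    for _ in range(steps):
        ys.append(ys[-1] * (1.0 - lmbda * dt))
    return ys

stable     = euler_decay(10.0, 0.05, 50)  # 1 - l*dt =  0.5 -> smooth decay
oscillates = euler_decay(10.0, 0.15, 50)  # 1 - l*dt = -0.5 -> damped oscillation
diverges   = euler_decay(10.0, 0.25, 50)  # 1 - l*dt = -1.5 -> grows without bound
```

Coarsening the node count of a distributed-parameter model effectively raises the fastest λ per node relative to the time step, which is one way such oscillations appear in practice.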
Simulation of plume dynamics by the Lattice Boltzmann Method
Mora, Peter; Yuen, David A.
2017-09-01
The Lattice Boltzmann Method (LBM) is a semi-microscopic method that simulates fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D simulations using the LBM of a fluid in a rectangular box heated from below and cooled from above, with a Rayleigh number of Ra = 10^8, similar to current estimates for the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number, the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid, as expected of a flow in the non-inertial regime. The method developed demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.
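The workhorse of most 2-D LBM codes (the paper does not state its exact lattice, so this is a generic sketch) is the D2Q9 BGK scheme, whose equilibrium distributions are f_i^eq = w_i ρ (1 + 3 e_i·u + 4.5 (e_i·u)² − 1.5 u·u). The moments of f^eq recover the macroscopic density and momentum exactly:

```python
# D2Q9 lattice: discrete velocities e_i and weights w_i
E = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4 / 9] + [1 / 9] * 4 + [1 / 36] * 4

def feq(rho, ux, uy):
    """Standard D2Q9 BGK equilibrium distributions:
    f_i^eq = w_i * rho * (1 + 3 e.u + 4.5 (e.u)^2 - 1.5 u.u)."""
    uu = ux * ux + uy * uy
    out = []
    for (ex, ey), w in zip(E, W):
        eu = ex * ux + ey * uy
        out.append(w * rho * (1.0 + 3.0 * eu + 4.5 * eu * eu - 1.5 * uu))
    return out

# Zeroth and first moments give back density and momentum:
f = feq(1.2, 0.05, -0.02)
rho = sum(f)
jx = sum(fi * ex for fi, (ex, ey) in zip(f, E))
jy = sum(fi * ey for fi, (ex, ey) in zip(f, E))
```

A full convection code adds streaming, BGK collision toward f^eq, and a buoyancy forcing term; the moment identities above are what make the scheme recover the Navier-Stokes equations in the hydrodynamic limit.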
Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold
International Nuclear Information System (INIS)
Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang
2016-01-01
Melting simulation methods are of crucial importance to determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we use only 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. (paper)
Adaptive and dynamic meshing methods for numerical simulations
Acikgoz, Nazmiye
For the numerical simulation of many problems of engineering interest, it is desirable to have an automated mesh adaption tool capable of producing high quality meshes with an affordably low number of mesh points. This is especially important for problems that are characterized by anisotropic features of the solution and require mesh clustering in the direction of high gradients. Another significant issue in meshing arises in the area of unsteady simulations with moving boundaries or interfaces, where the motion of the boundary has to be accommodated by deforming the computational grid. Similarly, there exist problems where the current mesh needs to be adapted to obtain more accurate solutions, because the high-gradient regions are either predicted inaccurately at the outset or change location throughout the simulation. To solve these problems, we propose three novel procedures. For this purpose, in the first part of this work, we present an optimization procedure for three-dimensional anisotropic tetrahedral grids based on metric-driven h-adaptation. The desired anisotropy in the grid is dictated by a metric that defines the size, shape, and orientation of the grid elements throughout the computational domain. Through the use of topological and geometrical operators, the mesh is iteratively adapted until the final mesh minimizes a given objective function. In this work, the objective function measures the distance between the metric of each simplex and a target metric, which can be either user-defined (a priori) or the result of a posteriori error analysis. During the adaptation process, one tries to decrease the metric-based objective function until the final mesh is compliant with the target within a given tolerance. However, in regions such as corners and complex face intersections, the compliance condition was found to be very difficult or sometimes impossible to satisfy. In order to address this issue, we propose an optimization process based on an ad
A tool for simulating parallel branch-and-bound methods
Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail
2016-01-01
The Branch-and-Bound method is known as one of the most powerful but most resource-consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution; the design and study of load balancing algorithms is therefore a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, search-tree sizes, and supercomputer interconnect characteristics, thereby fostering deep study of load distribution strategies. The process of solving the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
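For readers unfamiliar with the method being simulated, a minimal serial Branch-and-Bound (here applied to the 0/1 knapsack problem with the fractional relaxation as the bound, an illustrative choice rather than the simulator's workload) looks like this; a parallel version distributes the open subproblems across processors, which is exactly the load-balancing problem the simulator studies:

```python
def knapsack_bb(values, weights, cap):
    """Minimal serial branch-and-bound for the 0/1 knapsack problem.
    Bound: fractional (LP) relaxation of the remaining items."""
    items = sorted(range(len(values)), key=lambda i: -values[i] / weights[i])
    best = 0

    def bound(k, cap_left):
        """Optimistic value of items[k:] allowing a fractional last item."""
        b, c = 0.0, cap_left
        for i in items[k:]:
            if weights[i] <= c:
                c -= weights[i]
                b += values[i]
            else:
                return b + values[i] * c / weights[i]  # fractional fill
        return b

    def branch(k, cap_left, value):
        nonlocal best
        best = max(best, value)
        if k == len(items) or value + bound(k, cap_left) <= best:
            return                                     # leaf or pruned
        i = items[k]
        if weights[i] <= cap_left:                     # take item i
            branch(k + 1, cap_left - weights[i], value + values[i])
        branch(k + 1, cap_left, value)                 # skip item i

    branch(0, cap, 0)
    return best
```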
Based on Penalty Function Method
Directory of Open Access Journals (Sweden)
Ishaq Baba
2015-01-01
The dual response surface approach for simultaneously optimizing the mean and variance models as separate functions suffers from some deficiencies in handling the tradeoffs between the bias and variance components of the mean squared error (MSE). In this paper, the accuracy of the predicted response is given serious attention in the determination of the optimum setting conditions. We consider four different objective functions for the dual response surface optimization approach. The essence of the proposed method is to reduce the influence of the variance of the predicted response by minimizing the variability relative to the quality characteristics of interest while at the same time achieving the specific target output. The basic idea is to convert the constrained optimization function into an unconstrained problem by adding the constraint to the original objective function. Numerical examples and a simulation study are carried out to compare the performance of the proposed method with some existing procedures. Numerical results show that the performance of the proposed method is encouraging and exhibits clear improvement over the existing approaches.
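The conversion described above is the classic penalty idea: min f(x) subject to g(x) ≤ 0 becomes the unconstrained min of f(x) + r·max(0, g(x))², with the penalty weight r driven upward. A one-dimensional sketch (the crude pattern-search inner solver and all constants are illustrative, not the paper's four objective functions):

```python
def penalty_minimize(f, g, x0, r=1.0, iters=60):
    """Quadratic penalty method: turn  min f(x) s.t. g(x) <= 0  into the
    unconstrained  min f(x) + r * max(0, g(x))**2,  then increase r.
    Inner solver: crude shrinking pattern search (1-D, illustrative)."""
    x = x0
    for _ in range(8):                 # outer loop: grow the penalty weight
        F = lambda x: f(x) + r * max(0.0, g(x)) ** 2
        step = 1.0
        for _ in range(iters):         # inner loop: try steps both ways
            for cand in (x - step, x + step):
                if F(cand) < F(x):
                    x = cand
            step *= 0.7
        r *= 10.0
    return x

# min x^2 subject to x >= 1 (i.e. g(x) = 1 - x <= 0); the optimum is x = 1
x_opt = penalty_minimize(lambda x: x * x, lambda x: 1.0 - x, x0=3.0)
```

As r grows, the unconstrained minimizer r/(1+r) of the penalized function approaches the constrained optimum x = 1 from inside the infeasible region, which is the hallmark of an exterior penalty method.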
Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations
Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.
2018-02-01
The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes the key features of the method and provides a synopsis of the main results obtained by various groups using it. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
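The PVI index itself is simple: for a series b(t), PVI_i = |Δb_i| / √⟨|Δb|²⟩, where Δb_i = b(t_i + τ) − b(t_i) and the average runs over the interval studied. A sketch on a scalar series (the spike series and the threshold of 3 are illustrative; in practice Δb is usually the vector magnetic-field increment):

```python
import math

def pvi(signal, lag=1):
    """Partial Variance of Increments: PVI_i = |db_i| / sqrt(<|db|^2>),
    where db_i = signal[i + lag] - signal[i] and <.> is the series mean."""
    inc = [signal[i + lag] - signal[i] for i in range(len(signal) - lag)]
    rms = math.sqrt(sum(d * d for d in inc) / len(inc))
    return [abs(d) / rms for d in inc]

# A single sharp jump embedded in a quiet series stands out as PVI > 3,
# the kind of event flagged as a candidate coherent structure:
series = [0.0] * 10 + [5.0] + [0.0] * 10
spikes = [i for i, v in enumerate(pvi(series)) if v > 3.0]
```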
Simulation-based instruction of technical skills
Towne, Douglas M.; Munro, Allen
1991-01-01
A rapid intelligent tutoring development system (RAPIDS) was developed to facilitate the production of interactive, real-time graphical device models for use in instructing the operation and maintenance of complex systems. The tools allowed subject matter experts to produce device models by creating instances of previously defined objects and positioning them in the emerging device model. These simulation authoring functions, as well as those associated with demonstrating procedures and functional effects on the completed model, required no previous programming experience or use of frame-based instructional languages. Three large simulations were developed in RAPIDS, each involving more than a dozen screen-sized sections. Seven small, single-view applications were developed to explore the range of applicability. Three workshops were conducted to train others in the use of the authoring tools. Participants learned to employ the authoring tools in three to four days and were able to produce small working device models on the fifth day.
Energy Technology Data Exchange (ETDEWEB)
Richard C. Martineau; Ray A. Berry
2003-04-01
A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flows on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE-Finite Element Method) scheme, is composed of three computational phases: an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure-correction of the flow variables. The PCICE-FEM scheme is capable of second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic flow regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), the mass conservation equation, and the total energy equation. An operator splitting process is performed along explicit and implicit operators of the semi-implicit governing equations to render the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme is cast in this form: an explicit predictor phase and a semi-implicit pressure-correction phase, with the elliptic pressure Poisson solution coupling the predictor-corrector phases. The result of this predictor-corrector formulation is that the pressure Poisson
A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.
Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L
2018-05-16
During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical actions of these tasks. The aim of this work was to describe a method for determining whether a simulator has sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate the simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included in the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of the specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. The unexpected positive FTA ratings for functional deficits suggest that further revision of the survey method is required.
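Lawshe's method, used above to reduce 39 candidate items to 13, rates each item by how many panelists call it "essential" and keeps items whose content validity ratio CVR = (n_e − N/2)/(N/2) exceeds a critical value for the panel size. A sketch (the 9-of-11 example is hypothetical, not the paper's data):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1 and is
    positive when more than half the panel rates the item essential."""
    half = n_panelists / 2.0
    return (n_essential - half) / half

# e.g. 9 of 11 experts rate a checklist item "essential"; items whose
# CVR falls below the critical value for an 11-member panel are dropped.
cvr = content_validity_ratio(9, 11)
```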
Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.
Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo
2016-12-13
The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing the costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However, these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner, thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and tool life prediction under multi-cycle loading conditions.
Optimizing a Water Simulation based on Wavefront Parameter Optimization
Lundgren, Martin
2017-01-01
DICE, a Swedish game company, wanted a more realistic water simulation. Currently, most large scale water simulations used in games are based upon ocean simulation technology. These techniques falter when used in other scenarios, such as coastlines. In order to produce a more realistic simulation, a new one was created based upon the water simulation technique "Wavefront Parameter Interpolation". This technique involves a rather extensive preprocess that enables ocean simulations to have inte...
Nonequilibrium relaxation method – An alternative simulation strategy
Indian Academy of Sciences (India)
One well-established simulation strategy for studying the thermal phases and transitions of a given microscopic model system is the so-called equilibrium method, in which one first realizes the equilibrium ensemble of a finite system and then extrapolates the results to the infinite system. This equilibrium method traces over the ...
DRK methods for time-domain oscillator simulation
Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.
2006-01-01
This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.
LOMEGA: a low frequency, field implicit method for plasma simulation
International Nuclear Information System (INIS)
Barnes, D.C.; Kamimura, T.
1982-04-01
Field implicit methods for low frequency plasma simulation by the LOMEGA (Low OMEGA) codes are described. These implicit field methods may be combined with particle pushing algorithms using either Lorentz force or guiding center force models to study two-dimensional, magnetized, electrostatic plasmas. Numerical results for ω_e Δt ≫ 1 are described. (author)
Simulation based virtual learning environment in medical genetics counseling
DEFF Research Database (Denmark)
Makransky, Guido; Bonde, Mads T.; Wulff, Julie S. G.
2016-01-01
BACKGROUND: Simulation based learning environments are designed to improve the quality of medical education by allowing students to interact with patients, diagnostic laboratory procedures, and patient data in a virtual environment. However, few studies have evaluated whether simulation based learning environments increase students' knowledge, intrinsic motivation, and self-efficacy, and help them generalize from laboratory analyses to clinical practice and health decision-making. METHODS: An entire class of 300 University of Copenhagen first-year undergraduate students, most with a major ... the perceived relevance of medical educational activities. The results suggest that simulations can help future generations of doctors transfer new understanding of disease mechanisms gained in virtual laboratory settings into everyday clinical practice.
Comparison of GPU-Based Numerous Particles Simulation and Experiment
International Nuclear Information System (INIS)
Park, Sang Wook; Jun, Chul Woong; Sohn, Jeong Hyun; Lee, Jae Wook
2014-01-01
The dynamic behavior of numerous grains interacting with each other can be easily observed. In this study, this dynamic behavior was analyzed based on the contact between numerous grains. The discrete element method was used for analyzing the dynamic behavior of each particle and the neighboring-cell algorithm was employed for detecting their contact. The Hertzian and tangential sliding friction contact models were used for calculating the contact force acting between the particles. A GPU-based parallel program was developed for conducting the computer simulation and calculating the numerous contacts. The dam break experiment was performed to verify the simulation results. The reliability of the program was verified by comparing the results of the simulation with those of the experiment
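The contact handling described above can be sketched in its two standard pieces: the Hertzian normal force between overlapping spheres, and the cell binning used by the neighboring-cell broad phase. The stiffness k and cell size are illustrative values, not taken from the paper's GPU implementation:

```python
import math

def hertz_normal_force(p1, r1, p2, r2, k=1e5):
    """Hertzian normal contact: |F| = k * delta**1.5 along the centre
    line, where delta is the overlap of the two spheres (k illustrative).
    Returns the force acting on particle 1 (repulsive, away from 2)."""
    dx = [a - b for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(c * c for c in dx))
    delta = (r1 + r2) - dist
    if delta <= 0.0 or dist == 0.0:
        return (0.0, 0.0, 0.0)            # spheres not in contact
    mag = k * delta ** 1.5 / dist         # scale unit vector dx / dist
    return tuple(mag * c for c in dx)

def cell_index(pos, cell_size):
    """Neighbouring-cell broad phase: bin particles into cells of about
    one particle diameter, then test contacts only in adjacent cells."""
    return tuple(int(math.floor(c / cell_size)) for c in pos)
```

A full DEM step would add the tangential sliding-friction force, integrate Newton's equations per particle, and rebuild the cell bins each step; the binning reduces the pair search from O(N²) to near O(N).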
Clinical simulation as an evaluation method in health informatics
DEFF Research Database (Denmark)
Jensen, Sanne
2016-01-01
Safe work processes and information systems are vital in health care. Methods for the design of health IT focusing on patient safety are one of many initiatives trying to prevent adverse events. Possible patient safety hazards need to be investigated before health IT is integrated with local clinical work practice, including other technology and organizational structures. Clinical simulation is ideal for the proactive evaluation of new technology for clinical work practice. Clinical simulations involve real end-users as they simulate the use of technology in realistic environments performing realistic tasks. A clinical simulation study assesses effects on clinical workflow and enables the identification and evaluation of patient safety hazards before implementation at a hospital. Clinical simulation also offers an opportunity to create a space in which healthcare professionals working in different
Simulation-based MDP verification for leading-edge masks
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki
2017-07-01
For IC design starts below the 20 nm technology node, the assist features on photomasks shrink well below 60 nm, and the printed patterns of those features on masks written by VSB eBeam writers start to show large deviations from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and been adopted, such as rule-based mask process correction (MPC), model-based MPC and, eventually, model-based MDP. The new MDP methods may place shot edges slightly differently from the target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for the assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification, which became a necessity for OPC a decade ago, we see the same trend in MDP today. A simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field-tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask checks, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose-margin-related hotspots can also be detected by setting a correct detection threshold. In this paper, we demonstrate GPU acceleration for geometry processing and give examples of mask check results and performance data. GPU acceleration is necessary to make simulation-based mask MDP verification…
Dynamic Garment Simulation based on Hybrid Bounding Volume Hierarchy
Directory of Open Access Journals (Sweden)
Zhu Dongyong
2016-12-01
In order to solve the computing speed and efficiency problems of existing dynamic clothing simulation, this paper presents dynamic garment simulation based on a hybrid bounding volume hierarchy. It first uses MCASG graph theory to perform a primary segmentation of a given three-dimensional human body model, and then applies K-means clustering in a secondary segmentation to extract the body's upper arms, lower arms, upper legs, lower legs, trunk, hip and, for a woman, chest as the elementary units of dynamic clothing simulation. According to the different shapes of these elementary units, the closest and most efficient hybrid bounding boxes, such as cylinder and elliptic-cylinder bounding boxes, are chosen to enclose these units. During the construction of these bounding boxes, the least-squares method and slices of the human body are used to obtain the related parameters. This approach makes it possible to use the smallest number of bounding boxes to create tight collision detection regions around the surface of the human body. A spring-mass model based on a triangular mesh of the clothing model is finally constructed for dynamic simulation. The simulation results show the feasibility and advantages of the described method.
Current concepts in simulation-based trauma education.
Cherry, Robert A; Ali, Jameel
2008-11-01
The use of simulation-based technology in trauma education has focused on providing a safe and effective alternative to the more traditional methods that are used to teach technical skills and critical concepts in trauma resuscitation. Trauma team training using simulation-based technology is also being used to develop skills in leadership, team-information sharing, communication, and decision-making. The integration of simulators into medical student curriculum, residency training, and continuing medical education has been strongly recommended by the American College of Surgeons as an innovative means of enhancing patient safety, reducing medical errors, and performing a systematic evaluation of various competencies. Advanced human patient simulators are increasingly being used in trauma as an evaluation tool to assess clinical performance and to teach and reinforce essential knowledge, skills, and abilities. A number of specialty simulators in trauma and critical care have also been designed to meet these educational objectives. Ongoing educational research is still needed to validate long-term retention of knowledge and skills, provide reliable methods to evaluate teaching effectiveness and performance, and to demonstrate improvement in patient safety and overall quality of care.
A nondissipative simulation method for the drift kinetic equation
International Nuclear Information System (INIS)
Watanabe, Tomo-Hiko; Sugama, Hideo; Sato, Tetsuya
2001-07-01
With the aim of studying ion temperature gradient (ITG) driven turbulence, a nondissipative kinetic simulation scheme is developed and comprehensively benchmarked. The new simulation method, which preserves the time-reversibility of the basic kinetic equations, successfully reproduces the analytical solutions of the asymmetric three-mode ITG equations, which are extended here to provide a more general benchmarking reference than the previous work [T.-H. Watanabe, H. Sugama, and T. Sato: Phys. Plasmas 7 (2000) 984]. It is also applied to a dissipative three-mode system and shows good agreement with the analytical solution. The nondissipative simulation of ITG turbulence accurately satisfies the entropy balance equation. The usefulness of the nondissipative method for drift kinetic simulations is confirmed in comparisons with other, dissipative schemes. (author)
The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline
Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji
2018-02-01
The elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its compliance with practical industrial cases. The elastic pipeline model introduces nonlinear complexity into the discretized equations, so the Newton-Raphson method cannot achieve fast convergence on this kind of problem. Therefore, a new Newton-based method with the Powell-Wolfe condition for simulating isothermal elastic pipeline flow is presented. Results obtained by the new method are given for the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces computational cost.
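The idea of globalizing Newton's method with a line-search acceptance test can be illustrated on a scalar residual. The backtracking rule below is a simple Armijo-style stand-in for the Powell-Wolfe condition, not the authors' solver:

```python
def newton_line_search(f, df, x0, tol=1e-10, max_iter=50):
    """Damped Newton iteration: the full Newton step is backtracked
    until the residual norm decreases sufficiently, which keeps the
    iteration stable on nonlinear problems where an undamped step
    may overshoot or diverge."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = -fx / df(x)                    # full Newton direction
        t = 1.0
        # accept the step only if |f| shrinks by a sufficient fraction
        while abs(f(x + t * step)) > (1 - 1e-4 * t) * abs(fx) and t > 1e-8:
            t *= 0.5                          # backtracking
        x += t * step
    return x
```

On a well-behaved residual the test accepts the full step and plain quadratic Newton convergence is recovered; the damping only activates when the full step would increase the residual.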
Nuclear Power Reactor simulator - based training program
International Nuclear Information System (INIS)
Abdelwahab, S.A.S.
2009-01-01
Nuclear power stations will continue to play a major role as an energy source for electricity generation and heat production in the world. In this paper, a nuclear power reactor simulator-based training program is presented. The program is designed to aid in training reactor operators on the principles of plant operation. It can also help researchers and designers analyze and estimate the performance of nuclear reactors and facilitate further studies on the selection and optimization of a proper controller, since it is difficult and time consuming to do all experiments in a real nuclear environment. The program is written in MATLAB, as MATLAB provides sophisticated tools, comparable to those in other software such as Visual Basic, for the creation of graphical user interfaces (GUIs); moreover, MATLAB is available for all major operating systems. The SIMULINK reactor model used can represent different reactor types by adopting appropriate parameters. The model of each reactor component is based on physical laws rather than on look-up tables or curve fitting. This simulator-based training program will improve knowledge acquisition and retention; trainees will also learn faster and have a better attitude.
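As a minimal sketch of the physics-law-based reactor modelling described (the paper itself uses a SIMULINK model), a one-delayed-group point-kinetics equation can be integrated directly; all parameter values here are illustrative assumptions, not the paper's:

```python
def point_kinetics(rho, beta=0.0065, Lam=1e-4, lam=0.08, dt=1e-4, t_end=1.0):
    """One-delayed-group point-kinetics model, explicit Euler:
    n is the normalized neutron density, c the delayed-neutron
    precursor concentration. rho is the reactivity."""
    n = 1.0
    c = beta / (Lam * lam)            # equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam) * n + lam * c
        dc = (beta / Lam) * n - lam * c
        n += dt * dn
        c += dt * dc
    return n
```

At zero reactivity the equilibrium initial condition makes both derivatives vanish, so the power stays flat; a small positive reactivity produces the familiar prompt jump followed by a delayed-neutron-governed rise.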
Simulation-based disassembly systems design
Ohlendorf, Martin; Herrmann, Christoph; Hesselbach, Juergen
2004-02-01
Recycling of Waste Electrical and Electronic Equipment (WEEE) is a matter of current concern, driven by economic, ecological and legislative reasons. Disassembly, as the first step of the treatment process, plays a key role. To achieve sustainable progress in WEEE disassembly, the key is not to limit analysis and planning to disassembly processes in a narrow sense, but to consider entire disassembly plants, including additional aspects such as internal logistics, storage and sorting. In this regard, the paper presents ways of designing, dimensioning, structuring and modeling different disassembly systems. The goal is to achieve efficient and economic disassembly systems that allow recycling processes to comply with legal requirements. Moreover, the advantages of applying simulation software tools that are widespread and successfully utilized in conventional industry sectors are addressed. They support systematic disassembly planning by means of simulation experiments with subsequent efficiency evaluation. Consequently, anticipatory recycling planning considering various scenarios is enabled, and decisions about which types of disassembly systems are appropriate for specific circumstances, such as product spectrum, throughput and disassembly depth, are supported. Furthermore, the integration of simulation-based disassembly planning into a holistic concept, with configuration of interfaces and data utilization including cost aspects, is described.
Activity coefficients from molecular simulations using the OPAS method
Kohns, Maximilian; Horsch, Martin; Hasse, Hans
2017-10-01
A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.
Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design
Ang, Chee Siang; Zaphiris, Panayiotis
We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc) could potentially influence the characteristic of the social networks.
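A toy version of such an agent-based interaction model can be sketched as follows. The interaction rules and parameters are invented for illustration; they are not the rules derived from the guild data in the paper:

```python
import random

def simulate_guild(n_agents=50, steps=2000, p_new=0.1, seed=42):
    """Toy who-interacts-with-whom model: at each step a random agent
    either reinforces a tie with an existing neighbour or, with
    probability p_new (or if it has no ties yet), interacts with a
    random stranger. Returns the undirected interaction network."""
    random.seed(seed)
    edges = {i: set() for i in range(n_agents)}
    for _ in range(steps):
        a = random.randrange(n_agents)
        if edges[a] and random.random() > p_new:
            b = random.choice(sorted(edges[a]))      # reinforce a tie
        else:
            b = random.choice([x for x in range(n_agents) if x != a])
        edges[a].add(b)
        edges[b].add(a)
    return edges

edges = simulate_guild()
degrees = [len(v) for v in edges.values()]
```

Validation in the spirit of the paper would then compare statistics of the simulated network (degree distribution, clustering, etc.) against the observed guild network before using the model to vary community parameters.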
Research on facial expression simulation based on depth image
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expressions are captured with a Kinect camera. The AAM algorithm, based on statistical information, is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are completed under the constraint of Bézier curves. Thus the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method can accurately simulate facial expressions. Finally, our method is compared with the previous method; actual data show that it greatly improves implementation efficiency.
Evaluation of a proposed optimization method for discrete-event simulation models
Directory of Open Access Journals (Sweden)
Alexandre Ferreira de Pinho
2012-12-01
Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, current methods exhibit low performance, being able to manipulate only a single decision variable at a time. The objective of this article is therefore to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, that is more efficient in computational time than software packages on the market. The quality of the response variables is not altered; that is, the proposed method maintains the effectiveness of the solutions. The study draws a comparison between the proposed method and a simulation tool already available on the market and examined in the academic literature. Conclusions are presented confirming the proposed optimization method's efficiency.
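A minimal real-coded genetic algorithm of the kind the article evaluates might look like this. The operators and parameters are generic assumptions, and the quadratic objective stands in for an expensive simulation response; note that, unlike a single-variable search, all decision variables evolve simultaneously:

```python
import random

def genetic_search(objective, n_vars, bounds, pop_size=30, gens=60, seed=1):
    """Minimal real-coded GA: truncation selection of the elite half,
    uniform crossover, gaussian mutation, elitism. Minimizes objective
    over [lo, hi]^n_vars."""
    random.seed(seed)
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=objective)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = [a if random.random() < 0.5 else b
                     for a, b in zip(p1, p2)]       # uniform crossover
            if random.random() < 0.3:               # gaussian mutation
                k = random.randrange(n_vars)
                child[k] = min(hi, max(lo, child[k]
                               + random.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children                      # elitism
    return min(pop, key=objective)

# stand-in for a simulation response surface with optimum at (3, 3, 3, 3)
best = genetic_search(lambda x: sum((xi - 3) ** 2 for xi in x),
                      n_vars=4, bounds=(0, 10))
```

In a real application each call to `objective` would run a discrete-event simulation replication, which is why limiting the number of evaluations (population size times generations) dominates the computational cost.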
Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation
Energy Technology Data Exchange (ETDEWEB)
Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua
2016-02-15
When a plant is modeled in detail for high precision, it is hard for a single RELAP5 instance to achieve real-time calculation in a large-scale simulation. To improve speed while preserving precision, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise to improve the precision of the simulation while guaranteeing real-time execution. The coupling methods were assessed using both single-phase and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of a steam generator tube rupture (SGTR) was performed as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models improved the simulation speed significantly and made real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim and other types of simulation codes. The coupling methods are, however, also applicable to other simulators, for example one employing ATHLET instead of RELAP5, or other logic code instead of SIMULINK. It is believed the coupling method applies generally to NPP simulators regardless of the specific codes chosen in this paper.
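The explicit boundary-exchange coupling idea can be illustrated on a toy 1-D conduction problem split into two subdomain solvers. This is an assumed sketch, not the RELAPSim coupling environment:

```python
def coupled_heat(n=20, steps=400, r=0.2):
    """Explicit coupling via boundary exchange: a 1-D heat conduction
    domain is split into two subdomain solvers that swap interface
    temperatures once per synchronization step. With explicit time
    stepping this reproduces the monolithic scheme exactly, since each
    update only needs the previous step's neighbour values.
    r = alpha*dt/dx**2 must stay <= 0.5 for stability."""
    def step(u, ghost_left, ghost_right):
        ext = [ghost_left] + u + [ghost_right]
        return [u[i] + r * (ext[i] - 2.0 * ext[i + 1] + ext[i + 2])
                for i in range(len(u))]

    left, right = [1.0] * n, [0.0] * n      # hot / cold subdomains
    for _ in range(steps):
        gl, gr = left[-1], right[0]         # exchanged interface values
        left = step(left, 1.0, gr)          # fixed hot wall at x = 0
        right = step(right, gl, 0.0)        # fixed cold wall at x = L
    return left + right
```

With implicit solvers like RELAP5 the exchange introduces a coupling lag, which is why the abstract's compromise on synchronization frequency matters: exchanging more often reduces the lag error but costs wall-clock time.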
Methods employed to speed up Cathare for simulation uses
International Nuclear Information System (INIS)
Agator, J.M.
1992-01-01
This paper describes the main methods used to speed up the French advanced thermal-hydraulic computer code CATHARE and to build a fast version, called CATHARE-SIMU, adapted to real-time calculations and a simulation environment. Since CATHARE-SIMU, like CATHARE, uses a numerical scheme based on a fully implicit Newton iterative method, and therefore a variable time step, two ways have been explored to reduce the computing time: avoiding short time steps, and thus minimizing the number of iterations per time step; and reducing the computing time needed per iteration. CATHARE-SIMU uses the same physical laws and correlations as CATHARE, with only minor simplifications; this was considered the only way to be sure of maintaining CATHARE's level of physical relevance. Finally, it is indicated that the validation programme of CATHARE-SIMU includes a set of 33 transient calculations, referring either to CATHARE for two-phase transients or to measurements on real plants for operational transients.
Hybrid vortex simulations of wind turbines using a three-dimensional viscous-inviscid panel method
DEFF Research Database (Denmark)
Ramos García, Néstor; Hejlesen, Mads Mølholm; Sørensen, Jens Nørkær
2017-01-01
A hybrid filament-mesh vortex method is proposed and validated to predict the aerodynamic performance of wind turbine rotors and to simulate the resulting wake. Its novelty consists of using a hybrid method to accurately simulate the wake downstream of the wind turbine while reducing… a direct calculation, whereas the contribution from the large downstream wake is calculated using a mesh-based method. The hybrid method is first validated in detail against the well-known MEXICO experiment, using the direct filament method as a comparison. The second part of the validation includes a study…
Remote collaboration system based on large scale simulation
International Nuclear Information System (INIS)
Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.
2008-01-01
Large-scale simulation using supercomputers, which generally requires long CPU time and produces large amounts of data, has been extensively studied as a third pillar of various advanced science fields, in parallel to theory and experiment. Such simulation is expected to lead to new scientific discoveries through the elucidation of complex phenomena that can hardly be identified by conventional theoretical and experimental approaches alone. To assist such large simulation studies, in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on client-server control and introduces the idea of up-date processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method that transmits requests for up-date processing from the simulation (client) running on a supercomputer to a workstation (server): the simulation actively controls the timing of the up-date processing. The server, having received requests from the ongoing simulation, such as data transfer, data analyses and visualizations, starts the corresponding operations during the simulation. The server makes the latest results available to web browsers, so that collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project on laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another.
Sørensen, Jette Led; Østergaard, Doris; LeBlanc, Vicki; Ottesen, Bent; Konge, Lars; Dieckmann, Peter; Van der Vleuten, Cees
2017-01-21
Simulation-based medical education (SBME) has traditionally been conducted as off-site simulation in simulation centres. Some hospital departments also provide off-site simulation using in-house training rooms set up for simulation away from the clinical setting, and these activities are called in-house training. In-house training facilities can be part of hospital departments and resemble simulation centres to some extent, but often have less technical equipment. In situ simulation, introduced over the past decade, mainly comprises team-based activities and occurs in patient care units with healthcare professionals in their own working environment. This intentional blend of simulation and real working environments means that in situ simulation brings simulation to the real working environment and provides training where people work. In situ simulation can be either announced or unannounced, the latter also known as a drill. This article presents and discusses the design of SBME and the advantages and disadvantages of the different simulation settings, such as training in simulation centres, in-house simulation in hospital departments, and announced or unannounced in situ simulation. Non-randomised studies argue that in situ simulation is more effective for educational purposes than other simulation settings. Conversely, the few comparison studies that exist, either randomised or retrospective, show that the choice of setting does not seem to influence individual or team learning. However, hospital-department-based simulations, such as in-house and in situ simulation, lead to a gain in organisational learning. To our knowledge no studies have compared announced and unannounced in situ simulation. The literature suggests some improved organisational learning from unannounced in situ simulation; however, unannounced in situ simulation was also found to be challenging to plan and conduct, and more stressful for participants. The importance of…
Activity based costing method
Directory of Open Access Journals (Sweden)
Èuchranová Katarína
2001-06-01
Activity-based costing is a method of identifying and tracking the operating costs directly associated with processing items. It is the practice of focusing on some unit of output, such as a purchase order or an assembled automobile, and attempting to determine its total cost as precisely as possible, based on the fixed and variable costs of the inputs. ABC is used to identify, quantify and analyze the various cost drivers (such as labor, materials, administrative overhead and rework) and to determine which of them are candidates for reduction. A process is any activity that accepts inputs, adds value to these inputs for customers and produces outputs for those customers. The customer may be either internal or external to the organization. Every activity within an organization comprises one or more processes; inputs, controls and resources are all supplied to the process. A process owner is the person responsible for performing and/or controlling the activity. Tracing costs through their connection to individual activities and processes is a modern theme today, and the introduction of this method is connected with significant changes in a firm's processes. The ABC method is an instrument that brings competitive advantages to the firm.
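The cost-driver logic described above can be sketched numerically: each activity pool's cost divided by its total driver volume gives a driver rate, and each product is charged that rate times its own driver consumption. The pools, volumes and product names below are invented for illustration:

```python
def abc_allocate(activity_pools, products):
    """Activity-based costing: compute a cost rate per driver unit for
    each activity pool, then charge each product rate * driver usage."""
    rates = {a: cost / sum(p[a] for p in products.values())
             for a, cost in activity_pools.items()}
    return {name: sum(rates[a] * use for a, use in drivers.items())
            for name, drivers in products.items()}

# hypothetical pools and driver volumes
activity_pools = {"setup": 6000.0, "inspection": 2000.0}
products = {
    "A": {"setup": 10, "inspection": 40},   # driver units consumed
    "B": {"setup": 20, "inspection": 60},
}
costs = abc_allocate(activity_pools, products)
```

Here the setup rate is 6000 / 30 = 200 per setup and the inspection rate 2000 / 100 = 20 per inspection, so product A absorbs 2800 and product B 5200; the allocation always exhausts the pools exactly.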
Advance in research on aerosol deposition simulation methods
International Nuclear Information System (INIS)
Liu Keyang; Li Jingsong
2011-01-01
A comprehensive analysis of the health effects of inhaled toxic aerosols requires exact data on airway deposition. Knowledge of the effect of inhaled drugs is essential to the optimization of aerosol drug delivery. Sophisticated analytical deposition models can be used to compute total, regional and generation-specific deposition efficiencies. Continuously increasing computer power allows us to study particle transport and deposition in ever more realistic airway geometries with the help of computational fluid dynamics (CFD) simulation. In this article, trends in aerosol deposition models and lung models, and the methods for carrying out deposition simulations, are reviewed. (authors)
Finite element method for simulation of the semiconductor devices
International Nuclear Information System (INIS)
Zikatanov, L.T.; Kaschiev, M.S.
1991-01-01
An iterative method is worked out for solving the system of nonlinear equations of the drift-diffusion representation for the simulation of semiconductor devices. The Petrov-Galerkin method is used for the discretization of these equations with bilinear finite elements. It is shown that the numerical scheme is monotone and that the solutions show no oscillations in the region of the p-n transition. Numerical calculations for the simulation of one semiconductor device are presented. 13 refs.; 3 figs
Reliability analysis of neutron transport simulation using Monte Carlo method
International Nuclear Information System (INIS)
Souza, Bismarck A. de; Borges, Jose C.
1995-01-01
This work presents a statistical and reliability analysis of data obtained by computer simulation of the neutron transport process using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been accomplished. The influence of the physical dimensions of the materials and of the sample size on the reliability of the results was investigated. The objective was to optimize the sample size in order to obtain reliable results while optimizing computation time. (author). 5 refs, 8 figs
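The sample-size/reliability trade-off the abstract studies can be sketched with a crude Monte Carlo shielding estimate: the standard error of the estimated transmission probability shrinks roughly as one over the square root of the sample size. The slab thickness and mean free path below are assumed for illustration:

```python
import math
import random

def transmission_estimate(n_samples, mfp=1.0, thickness=2.0, seed=7):
    """Crude Monte Carlo slab shielding estimate: sample exponentially
    distributed free paths and count particles whose first flight
    exceeds the slab thickness (no scattering is modelled). Returns
    the estimated transmission probability and its standard error."""
    random.seed(seed)
    hits = sum(1 for _ in range(n_samples)
               if random.expovariate(1.0 / mfp) > thickness)
    p = hits / n_samples
    se = math.sqrt(p * (1 - p) / n_samples)     # binomial standard error
    return p, se
```

For this absorption-only toy model the exact answer is exp(-thickness/mfp), so the estimate can be checked directly while the reported standard error quantifies the reliability gained from a larger sample.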
Simulation-Based Abdominal Ultrasound Training – A Systematic Review
DEFF Research Database (Denmark)
Østergaard, Mia L.; Ewertsen, Caroline; Konge, Lars
2016-01-01
PURPOSE: The aim is to provide a complete overview of the different simulation-based training options for abdominal ultrasound and to explore the evidence of their effect. MATERIALS AND METHODS: This systematic review was performed according to the PRISMA guidelines, and Medline, Embase, Web of Science, and the Cochrane Library were searched. Articles were divided into three categories based on study design (randomized controlled trials, before-and-after studies and descriptive studies) and assessed for level of evidence using the Oxford Centre for Evidence-Based Medicine (OCEBM) system…
Simulation based optimization on automated fibre placement process
Lei, Shi
2018-02-01
In this paper, a software-simulation-based method (Autodesk TruPlan & TruFiber) is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared with respect to geometrically different parts. Major manufacturing data are taken into consideration prior to tool-path generation to achieve a high success rate in manufacturing.
An Agent-Based Monetary Production Simulation Model
DEFF Research Database (Denmark)
Bruun, Charlotte
2006-01-01
An agent-based simulation model programmed in Objective Borland Pascal. Program and source code are downloadable.
Cost Effective Community Based Dementia Screening: A Markov Model Simulation
Directory of Open Access Journals (Sweden)
Erin Saito
2014-01-01
Background. Given the dementia epidemic and the increasing cost of healthcare, there is a need to assess the economic benefit of community-based dementia screening programs. Materials and Methods. Markov model simulations were generated using data obtained from a community-based dementia screening program over a one-year period. The models simulated the yearly costs of caring for patients based on clinical transitions, beginning in pre-dementia and extending for 10 years. Results. A total of 93 individuals (74 female, 19 male) were screened for dementia, and 12 meeting clinical criteria for either mild cognitive impairment (n=7) or dementia (n=5) were identified. Assuming early therapeutic intervention beginning during the year of dementia detection, Markov model simulations demonstrated a 9.8% reduction in the cost of dementia care over a ten-year simulation period, primarily through increased duration in mild stages and reduced time in the more costly moderate and severe stages. Discussion. Community-based dementia screening can reduce healthcare costs associated with caring for demented individuals through earlier detection and treatment, resulting in proportionately reduced time in more costly advanced stages.
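The Markov cohort mechanism described can be sketched as follows: a state-occupancy vector is propagated through a yearly transition matrix and multiplied by per-state annual care costs. The transition probabilities and costs below are invented for illustration and are not the study's data:

```python
def simulate_costs(P, state_cost, start, years=10):
    """Markov cohort simulation: propagate the state-occupancy vector
    through the yearly transition matrix P and accumulate the expected
    annual care cost for each year."""
    states = list(P.keys())
    occ = {s: 1.0 if s == start else 0.0 for s in states}
    total = 0.0
    for _ in range(years):
        total += sum(occ[s] * state_cost[s] for s in states)
        occ = {t: sum(occ[s] * P[s][t] for s in states) for t in states}
    return total

# hypothetical transition probabilities and yearly costs
P_base = {
    "mild":     {"mild": 0.7, "moderate": 0.3, "severe": 0.0},
    "moderate": {"mild": 0.0, "moderate": 0.6, "severe": 0.4},
    "severe":   {"mild": 0.0, "moderate": 0.0, "severe": 1.0},
}
# e.g. early intervention keeps patients in the mild stage longer
P_slowed = {**P_base, "mild": {"mild": 0.8, "moderate": 0.2, "severe": 0.0}}
state_cost = {"mild": 10_000.0, "moderate": 30_000.0, "severe": 60_000.0}

baseline = simulate_costs(P_base, state_cost, "mild")
with_treatment = simulate_costs(P_slowed, state_cost, "mild")
```

Because the slowed matrix shifts occupancy toward the cheapest (mild) state in every year, the cumulative cost under treatment is necessarily lower, which mirrors the mechanism behind the study's reported 9.8% reduction.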
An electromechanical based deformable model for soft tissue simulation.
Zhong, Yongmin; Shirinzadeh, Bijan; Smith, Julian; Gu, Chengfan
2009-11-01
Soft tissue deformation is of great importance to surgery simulation. Although significant research effort has been dedicated to simulating the behaviours of soft tissues, modelling soft tissue deformation remains a challenging problem. This paper presents a new deformable model for the simulation of soft tissue deformation from the electromechanical viewpoint of soft tissues. Soft tissue deformation is formulated as a reaction-diffusion process coupled with a mechanical load. The mechanical load applied to a soft tissue to cause a deformation is incorporated into the reaction-diffusion system and consequently distributed among the mass points of the soft tissue. Reaction-diffusion of the mechanical load and non-rigid mechanics of motion are combined to govern the simulation dynamics of soft tissue deformation. An improved reaction-diffusion model is developed to describe the distribution of the mechanical load in soft tissues. A three-layer artificial cellular neural network is constructed to solve the reaction-diffusion model for real-time simulation of soft tissue deformation. A gradient-based method is established to derive internal forces from the distribution of the mechanical load. Integration with a haptic device has also been achieved to simulate soft tissue deformation with haptic feedback. The proposed methodology not only predicts the typical behaviours of living tissues but also accepts both local and large-range deformations. It also accommodates isotropic, anisotropic and inhomogeneous deformations by simple modification of the diffusion coefficients.
Energy Technology Data Exchange (ETDEWEB)
HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK
2000-04-01
Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.
Simulation methods with extended stability for stiff biochemical kinetics
Directory of Open Access Journals (Sweden)
Rué Pau
2010-08-01
Full Text Available Abstract Background With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
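The basic Poisson τ-leap that the paper generalizes can be sketched for a toy birth-death system; the rates, step size, and negativity guard below are illustrative assumptions, not the authors' RK extension.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_leap(x0, k_prod, k_decay, tau, t_end):
    """Poisson tau-leap for the birth-death system 0 -> X (rate k_prod),
    X -> 0 (rate k_decay * x). Each step fires every channel at once with a
    Poisson-distributed count instead of simulating one event at a time."""
    x, t, traj = x0, 0.0, [(0.0, x0)]
    while t < t_end:
        a_prod = k_prod             # propensity of the birth channel
        a_decay = k_decay * x       # propensity of the decay channel
        n_prod = rng.poisson(a_prod * tau)
        n_decay = rng.poisson(a_decay * tau)
        x = max(x + n_prod - n_decay, 0)   # crude guard against negative counts
        t += tau
        traj.append((t, x))
    return traj

traj = tau_leap(x0=0, k_prod=10.0, k_decay=0.1, tau=0.1, t_end=200.0)
# steady-state mean should sit near k_prod / k_decay = 100
tail = [x for t, x in traj if t > 100.0]
print(sum(tail) / len(tail))
```

The variance inflation the abstract warns about appears when `tau` is pushed much larger relative to 1/`k_decay`; the RK τ-leap family is designed to keep that variance well-behaved.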
AUV-Based Plume Tracking: A Simulation Study
Directory of Open Access Journals (Sweden)
Awantha Jayasiri
2016-01-01
Full Text Available This paper presents a simulation study of an autonomous underwater vehicle (AUV) navigation system operating in a GPS-denied environment. The AUV navigation method makes use of underwater transponder positioning and requires only one transponder. A multirate unscented Kalman filter is used to determine the AUV orientation and position by fusing high-rate sensor data and low-rate information. The paper also proposes a novel gradient-based, efficient, and adaptive algorithm for plume boundary tracking missions. The algorithm follows a centralized approach and includes path optimization features based on gradient information. The proposed algorithm is implemented in simulation on the AUV-based navigation system, and successful boundary tracking results are obtained.
Research on neutron noise analysis stochastic simulation method for α calculation
International Nuclear Information System (INIS)
Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang
2014-01-01
The prompt decay constant α has significant application in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of α value calculation with the Monte-Carlo method, and to improve the precision, a new method based on neutron noise analysis technology was presented. This method employs stochastic simulation and the theory of neutron noise analysis. Firstly, the evolution of the stochastic neutron population was simulated by a discrete-event Monte-Carlo method based on the theory of generalized semi-Markov processes; the neutron noise in the detectors was then extracted from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, the Feynman-α method, the zero-probability method, and the cross-correlation method were used to calculate the α value. All of the parameters used in the neutron noise analysis methods were calculated with auto-adaptive algorithms. The α values from these methods accord with each other, with a largest relative deviation of 7.9%, which demonstrates the feasibility of the α calculation method based on neutron noise analysis stochastic simulation. (authors)
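For intuition, the Feynman-α family of statistics mentioned above is built on the variance-to-mean ratio of gated detector counts. The sketch below computes that statistic from a list of detection times; the synthetic Poisson source is only a placeholder for a simulated neutron signal (a real fission chain would show excess variance growing with gate width).

```python
import numpy as np

def feynman_y(event_times, gate_width, t_end):
    """Variance-to-mean ratio minus one (Feynman-Y) of detector counts
    binned into consecutive gates of width `gate_width`."""
    edges = np.arange(0.0, t_end, gate_width)
    counts, _ = np.histogram(event_times, bins=edges)
    return counts.var() / counts.mean() - 1.0

# For an uncorrelated (Poisson) source, Y stays near zero at every gate
# width; correlated fission chains would make Y rise with gate width toward
# an asymptote whose shape encodes the prompt decay constant alpha.
rng = np.random.default_rng(1)
t_end = 1000.0
events = np.sort(rng.uniform(0.0, t_end, size=50000))
print(round(feynman_y(events, 1.0, t_end), 3))
```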
Gradient augmented level set method for phase change simulations
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ(t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ(t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ(t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ(t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.
Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson
2008-01-01
We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...
GPU based numerical simulation of core shooting process
Directory of Open Access Journals (Sweden)
Yi-zhong Zhang
2017-11-01
Full Text Available Core shooting process is the most widely used technique to make sand cores and it plays an important role in the quality of sand cores. Although numerical simulation can hopefully optimize the core shooting process, research on numerical simulation of the core shooting process is very limited. Based on a two-fluid model (TFM) and a kinetic-friction constitutive correlation, a program for 3D numerical simulation of the core shooting process has been developed and achieved good agreement with in-situ experiments. To match the needs of engineering applications, a graphics processing unit (GPU) has also been used to improve the calculation efficiency. The parallel algorithm based on the Compute Unified Device Architecture (CUDA) platform can significantly decrease computing time by multi-threaded GPU. In this work, the program accelerated by the CUDA parallelization method was developed and the accuracy of the calculations was ensured by comparing with in-situ experimental results photographed by a high-speed camera. The design and optimization of the parallel algorithm were discussed. The simulation result of a sand core test-piece indicated the improvement of the calculation efficiency by GPU. The developed program has also been validated by in-situ experiments with a transparent core-box, a high-speed camera, and a pressure measuring system. The computing time of the parallel program was reduced by nearly 95% while the simulation result was still quite consistent with experimental data. The GPU parallelization method can successfully solve the problem of low computational efficiency of the 3D sand shooting simulation program, and thus the developed GPU program is appropriate for engineering applications.
Acidity constants from DFT-based molecular dynamics simulations
International Nuclear Information System (INIS)
Sulpizi, Marialore; Sprik, Michiel
2010-01-01
In this contribution we review our recently developed method for the calculation of acidity constants from density functional theory based molecular dynamics simulations. The method is based on a half reaction scheme in which protons are formally transferred from solution to the gas phase. The corresponding deprotonation free energies are computed from the vertical energy gaps for insertion or removal of protons. Combined with full proton transfer reactions, the deprotonation energies can be used to estimate relative acidity constants, and also the Brønsted pKa when the deprotonation free energy of a hydronium ion is used as a reference. We verified the method by investigating a series of organic and inorganic acids and bases spanning a wide range of pKa values (20 units). The thermochemical corrections for the biasing potentials assisting and directing the insertion are discussed in some detail.
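The final step of such a half-reaction scheme, converting a difference of deprotonation free energies into a relative pKa, is simple arithmetic: pKa1 - pKa2 = (ΔA1 - ΔA2) / (kB T ln 10). The free-energy values below are hypothetical, chosen only to show the conversion.

```python
import math

def relative_pka(dA1_eV, dA2_eV, T=298.15):
    """Relative pKa of acid 1 vs acid 2 from their deprotonation free
    energies (in eV): pKa1 - pKa2 = (dA1 - dA2) / (kB * T * ln 10)."""
    kB_eV = 8.617333262e-5  # Boltzmann constant in eV/K
    return (dA1_eV - dA2_eV) / (kB_eV * T * math.log(10.0))

# Hypothetical numbers: a 0.118 eV difference in deprotonation free energy
# corresponds to roughly 2 pKa units at room temperature.
print(round(relative_pka(15.318, 15.200), 2))  # → 1.99
```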
Ozcan, Aydin; Perego, Claudio; Salvalaglio, Matteo; Parrinello, Michele; Yazaydin, Ozgur
2017-05-01
In this study, we introduce a new non-equilibrium molecular dynamics simulation method to perform simulations of concentration driven membrane permeation processes. The methodology is based on the application of a non-conservative bias force controlling the concentration of species at the inlet and outlet of a membrane. We demonstrate our method for pure methane, ethane and ethylene permeation and for ethane/ethylene separation through a flexible ZIF-8 membrane. Results show that a stationary concentration gradient is maintained across the membrane, realistically simulating an out-of-equilibrium diffusive process, and the computed permeabilities and selectivity are in good agreement with experimental results.
Vectorization of a particle simulation method for hypersonic rarefied flow
Mcdonald, Jeffrey D.; Baganoff, Donald
1988-01-01
An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.
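The dependency-free, fine-grained update style the abstract describes maps naturally onto whole-array operations. Below is a minimal sketch of a vectorized free-flight step with specular wall reflection, with NumPy standing in for vector hardware; the box geometry and particle count are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Positions and velocities of N particles stored as contiguous arrays so a
# whole time step is a handful of vector operations with no per-particle
# data dependency -- the property the algorithms above are designed around.
n = 100000
pos = rng.uniform(0.0, 1.0, size=(n, 3))
vel = rng.normal(0.0, 1.0, size=(n, 3))
dt = 1e-3

def move(pos, vel, dt):
    """Free-flight convection step with specular reflection at x=0 and x=1."""
    pos = pos + vel * dt
    # reflect particles that left the unit box in x (vectorized masks, no loop)
    low, high = pos[:, 0] < 0.0, pos[:, 0] > 1.0
    pos[low, 0] = -pos[low, 0]
    pos[high, 0] = 2.0 - pos[high, 0]
    vel[low | high, 0] *= -1.0
    return pos, vel

pos, vel = move(pos, vel, dt)
print(bool((pos[:, 0] >= 0.0).all() and (pos[:, 0] <= 1.0).all()))  # → True
```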
A mixed finite element method for particle simulation in lasertron
International Nuclear Information System (INIS)
Le Meur, G.
1987-03-01
A particle simulation code is being developed with the aim to treat the motion of charged particles in electromagnetic devices, such as the Lasertron. The paper describes the use of mixed finite element methods in computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown
Correction of measured multiplicity distributions by the simulated annealing method
International Nuclear Information System (INIS)
Hafidouni, M.
1993-01-01
Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
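A generic simulated-annealing loop of the kind used for such combinatorial corrections can be sketched as follows; the toy cost function, neighbour move, and cooling schedule are assumptions for illustration, not the authors' unfolding setup.

```python
import math
import random

random.seed(3)

def anneal(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing: always accept improving moves, accept
    worsening moves with probability exp(-delta/T), and geometrically cool T."""
    x, c, t = x0, cost(x0), t0
    best_x, best_c = x, c
    for _ in range(steps):
        y = neighbour(x)
        cy = cost(y)
        if cy <= c or random.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best_x, best_c = x, c
        t *= cooling
    return best_x, best_c

# Toy combinatorial problem: order 20 integers to minimise the sum of
# adjacent differences (sorted order achieves the minimum, 19).
cost = lambda p: sum(abs(a - b) for a, b in zip(p, p[1:]))

def neighbour(p):
    q = list(p)
    i, j = random.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]    # swap two random elements
    return q

best, c = anneal(cost, neighbour, random.sample(range(20), 20))
print(c)
```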
Kinematics and simulation methods to determine the target thickness
International Nuclear Information System (INIS)
Rosales, P.; Aguilar, E.F.; Martinez Q, E.
2001-01-01
Making use of kinematics and of the energy loss of particles, two methods for calculating the thickness of a target are described: one through a computer program and another through simulation, in both of which experimentally obtained parameters are used. Several values for the thickness of a 12 C target were obtained. A comparison of the values obtained with each of the programs used is presented. (Author)
Simulating water hammer with corrective smoothed particle method
Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.
2012-01-01
The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in
STUDY ON SIMULATION METHOD OF AVALANCHE: FLOW ANALYSIS OF AVALANCHE USING PARTICLE METHOD
塩澤, 孝哉
2015-01-01
In this paper, modeling for the simulation of avalanches by a particle method is discussed. There are two kinds of snow avalanche: the surface avalanche, which shows a smoke-like flow, and the total-layer avalanche, which flows like a Bingham fluid. In the simulation of the surface avalanche, a particle method incorporating a rotation-resistance model is used. A particle method with a Bingham-fluid model is used in the simulation of the total-layer avalanche. At t...
A method of simulating and visualizing nuclear reactions
International Nuclear Information System (INIS)
Atwood, C.H.; Paul, K.M.
1994-01-01
Teaching nuclear reactions to students is difficult because the mechanisms are complex and directly visualizing them is impossible. As a teaching tool, the authors have developed a method of simulating nuclear reactions using colliding water droplets. Videotaping of the collisions, taken with a high shutter speed camera and run frame-by-frame, shows details of the collisions that are analogous to nuclear reactions. The method for colliding the water drops and videotaping the collisions are shown
Towards an entropy-based detached-eddy simulation
Zhao, Rui; Yan, Chao; Li, XinLiang; Kong, WeiXuan
2013-10-01
A concept of entropy increment ratio (s̄) is introduced for compressible turbulence simulation through a series of direct numerical simulations (DNS). s̄ represents the dissipation rate per unit mechanical energy, with the benefit of independence of freestream Mach numbers. Based on this feature, we construct the shielding function f_s to describe the boundary layer region and propose an entropy-based detached-eddy simulation method (SDES). This approach follows the spirit of delayed detached-eddy simulation (DDES) proposed by Spalart et al. in 2005, but it exhibits much better behavior when their performances are compared in the following flows, namely, pure attached flow with a thick boundary layer (a supersonic flat-plate flow with high Reynolds number), fully separated flow (the supersonic base flow), and separated-reattached flow (the supersonic cavity-ramp flow). The Reynolds-averaged Navier-Stokes (RANS) resolved region is reliably preserved and the modeled stress depletion (MSD) phenomenon which is inherent in DES and DDES is partly alleviated. Moreover, this new hybrid strategy is simple and general, making it applicable to other models related to boundary layer predictions.
Directory of Open Access Journals (Sweden)
Yan Zhang
2018-03-01
Full Text Available In this paper, an improved delayed detached eddy simulation method combined with the shear-stress transport (SST) model was used to study the three-dimensional turbulent characteristics in a small rotary engine with a peripheral port. The turbulent characteristics, including instantaneous velocity, turbulent fluctuation, coherent structure and velocity circulation, were analysed based on a dynamic model of the small rotary engine. Three sets of conclusions were obtained on the basis of the computational results. First, it was found that large-scale vortex structures with high intensity were distributed in the center of the chamber in the intake process and broke into many small vortex structures in the compression process. Second, flow stability in the X direction decreased from the leading to the trailing in the small rotary engine. The fluctuation velocity of the Y direction showed a paraboloid feature and its peak position moved from the mid-back to the middle of the chamber during the operation process. Third, during the intake process, two vortices occurred in the cross section parallel to the covers and were located at the leading and trailing of the cross section, respectively. Compared to the intake process, more vortices occurred at cross sections far away from the central section during the compression process.
Designing solar thermal experiments based on simulation
International Nuclear Information System (INIS)
Huleihil, Mahmoud; Mazor, Gedalya
2013-01-01
In this study three different models to describe the temperature distribution inside a cylindrical solid body subjected to high solar irradiation were examined, beginning with the simplest approach, the single-dimension lumped system (time), progressing through the two-dimensional distributed system approach (time and vertical direction), and ending with the three-dimensional distributed system approach with azimuthal symmetry (time, vertical direction, and radial direction). The three models were introduced and solved analytically and numerically. The importance of the models and their solution was addressed. The simulations based on them might be considered as a powerful tool in designing experiments, as they make it possible to estimate the different effects of the parameters involved in these models
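The first of the three models, the lumped (time-only) system, reduces to a single energy-balance ODE with convective and radiative losses. The sketch below integrates it with explicit Euler; all material and loss parameters are illustrative assumptions, not values from the paper.

```python
# Minimal lumped (time-only) model: a solid receiver with absorbed solar
# power q_in, convective loss h*A*(T - T_amb), and radiative loss
# eps*sigma*A*(T^4 - T_amb^4), integrated by explicit Euler.
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def lumped_temperature(q_in, m=1.0, c=900.0, h=10.0, area=0.01,
                       eps=0.9, t_amb=300.0, dt=0.1, t_end=600.0):
    t, temp = 0.0, t_amb
    while t < t_end:
        loss = (h * area * (temp - t_amb)
                + eps * sigma * area * (temp**4 - t_amb**4))
        temp += dt * (q_in - loss) / (m * c)   # dT/dt = (Q_in - Q_loss)/(m c)
        t += dt
    return temp

# Body temperature after 600 s of 100 W absorbed irradiation (illustrative).
print(round(lumped_temperature(q_in=100.0), 1))
```

Running the model across a grid of `q_in` and `h` values is exactly the kind of pre-experiment parameter scan the abstract advocates.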
Simulation based engineering in solid mechanics
Rao, J S
2017-01-01
This book begins with a brief historical perspective of the advent of rotating machinery in 20th century Solid Mechanics and the development of the discipline of the Strength of Materials. High Performance Computing (HPC) and Simulation Based Engineering Science (SBES) have gradually replaced the conventional approach in Design, bringing science directly into engineering without approximations. A recap of the required mathematical principles is given. The science of deformation, strain and stress at a point under the application of external traction loads is next presented. Only one-dimensional structures, classified as Bars (axial loads), Rods (twisting loads) and Beams (bending loads), are considered in this book. The principal stresses and strains and the von Mises stress and strain that are used in the design of structures are next presented. A Lagrangian solution was used to derive the governing differential equations consistent with the assumed deformation field, and solutions for deformations, strains and stresses were obtai...
Internet-based system for simulation-based medical planning for cardiovascular disease.
Steele, Brooke N; Draney, Mary T; Ku, Joy P; Taylor, Charles A
2003-06-01
Current practice in vascular surgery utilizes only diagnostic and empirical data to plan treatments, which does not enable quantitative a priori prediction of the outcomes of interventions. We have previously described simulation-based medical planning methods to model blood flow in arteries and plan medical treatments based on physiologic models. An important consideration for the design of these patient-specific modeling systems is the accessibility to physicians with modest computational resources. We describe a simulation-based medical planning environment developed for the World Wide Web (WWW) using the Virtual Reality Modeling Language (VRML) and the Java programming language.
Simulation-based education for transfusion medicine.
Morgan, Shanna; Rioux-Masse, Benjamin; Oancea, Cristina; Cohn, Claudia; Harmon, James; Konia, Mojca
2015-04-01
The administration of blood products is frequently determined by physicians without subspecialty training in transfusion medicine (TM). Education in TM is necessary for appropriate utilization of resources and maintaining patient safety. Our institution developed an efficient simulation-based TM course with the goal of identifying key topics that could be individualized to learners of all levels in various environments while also allowing for practice in an environment where the patient is not placed at risk. A 2.5-hour simulation-based educational activity was designed and taught to undergraduate medical students rotating through anesthesiology and TM elective rotations and to all Clinical Anesthesia Year 1 (CA-1) residents. Content and process evaluation of the activity consisted of multiple-choice tests and course evaluations. Seventy medical students and seven CA-1 residents were enrolled in the course. There was no significant difference on pretest results between medical students and CA-1 residents. The posttest results for both medical students and CA-1 residents were significantly higher than pretest results. The results of the posttest between medical students and CA-1 residents were not significantly different. The TM knowledge gap is not a trivial problem as transfusion of blood products is associated with significant risks. Innovative educational techniques are needed to address the ongoing challenges with knowledge acquisition and retention in already full curricula. Our institution developed a feasible and effective way to integrate TM into the curriculum. Educational activities, such as this, might be a way to improve the safety of transfusions. © 2014 AABB.
Bayesian statistical methods and their application in probabilistic simulation models
Directory of Open Access Journals (Sweden)
Sergio Iannazzo
2007-03-01
Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing for decision analysis. To this should be added modern progress in information technology, which has produced flexible and powerful statistical software frameworks. Among them, one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
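A minimal probabilistic Markov cohort model of the kind described can be sketched in plain Python rather than BUGS: the cohort moves through well/sick/dead states, and a probabilistic sensitivity analysis draws the transition probabilities from beta distributions. All states, rates, and distribution parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def markov_cohort(p_sick, p_die, cycles=20, cohort=1000.0):
    """Deterministic cohort run of a 3-state (well/sick/dead) Markov model;
    returns mean life-years per person accumulated over the horizon."""
    well, sick, dead = cohort, 0.0, 0.0
    life_years = 0.0
    for _ in range(cycles):
        new_sick = well * p_sick     # well -> sick transitions this cycle
        new_dead = sick * p_die      # sick -> dead transitions this cycle
        well -= new_sick
        sick += new_sick - new_dead
        dead += new_dead
        life_years += well + sick    # everyone still alive earns one cycle
    return life_years / cohort

# Probabilistic sensitivity analysis: draw the transition probabilities from
# beta distributions (hypothetical parameters) and summarise the spread.
draws = [markov_cohort(rng.beta(2, 18), rng.beta(1, 9)) for _ in range(2000)]
print(round(float(np.mean(draws)), 1), round(float(np.std(draws)), 1))
```

In a Bayesian workflow the beta parameters would themselves be posteriors fitted to the available evidence, which is the "direct input" integration the abstract highlights.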
An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments
Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram
2018-01-01
Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
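The core of such an analytical approach, drawing Gaussian realizations of the visibility signal consistent with a prescribed correlation, can be sketched with a Cholesky factor of the visibility covariance matrix. The 3x3 correlation matrix below is hypothetical; in the paper's setting it would come from the theoretical visibility correlation including the light-cone and chromatic effects.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_visibilities(cov, n_real):
    """Draw Gaussian realizations of complex visibilities with a prescribed
    covariance via a Cholesky factor: v = L z, so E[v v^H] = L L^H = cov."""
    L = np.linalg.cholesky(cov)
    z = (rng.standard_normal((cov.shape[0], n_real))
         + 1j * rng.standard_normal((cov.shape[0], n_real))) / np.sqrt(2.0)
    return L @ z

# Hypothetical correlation between three neighbouring baselines/channels.
cov = np.array([[1.0, 0.6, 0.2],
                [0.6, 1.0, 0.6],
                [0.2, 0.6, 1.0]])
v = simulate_visibilities(cov, 200000)
est = (v @ v.conj().T).real / v.shape[1]   # sample covariance of the draws
print(np.allclose(est, cov, atol=0.03))     # → True
```

Because each realization is just one matrix-vector product, generating many independent realizations is cheap, which is the efficiency argument made above.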
Comparison of Two Methods for Speeding Up Flash Calculations in Compositional Simulations
DEFF Research Database (Denmark)
Belkadi, Abdelkrim; Yan, Wei; Michelsen, Michael Locht
2011-01-01
Flash calculation is the most time consuming part in compositional reservoir simulations and several approaches have been proposed to speed it up. Two recent approaches proposed in the literature are the shadow region method and the Compositional Space Adaptive Tabulation (CSAT) method. The shadow region method reduces the computation time mainly by skipping stability analysis for a large portion of compositions in the single phase region. In the two-phase region, a highly efficient Newton-Raphson algorithm can be employed with initial estimates from the previous step. The CSAT method saves… and the tolerance set for accepting the feed composition are the key parameters in this method since they will influence the simulation speed and the accuracy of simulation results. Inspired by CSAT, we proposed a Tieline Distance Based Approximation (TDBA) method to get approximate flash results in the two-phase…
International Nuclear Information System (INIS)
Kim, A.R.; Kim, G.H.; Kim, K.M.; Kim, D.W.; Park, M.; Yu, I.K.; Kim, S.H.; Sim, K.; Sohn, M.H.; Seong, K.C.
2010-01-01
This paper analyzes the operational characteristics of conduction-cooled Superconducting Magnetic Energy Storage (SMES) through a real-hardware-based simulation. To analyze the operational characteristics, the authors manufactured a small-scale toroidal-type SMES and implemented a Real Time Digital Simulator (RTDS) based power quality enhancement simulation. The method can consider not only electrical characteristics, such as inductance and current, but also temperature characteristics, by using the real SMES system. In order to prove the effectiveness of the proposed method, a voltage sag compensation simulation has been implemented using the RTDS connected with the High Temperature Superconducting (HTS) model coil and DC/DC converter system, and the simulation results are discussed in detail.
Model-based microwave image reconstruction: simulations and experiments
International Nuclear Information System (INIS)
Ciocan, Razvan; Jiang Huabei
2004-01-01
We describe an integrated microwave imaging system that can provide spatial maps of the dielectric properties of heterogeneous media from tomographically collected data. The hardware system (800-1200 MHz) was built based on a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularizations. System performance was evaluated using heterogeneous media mimicking human breast tissue. The finite element method, coupled with the Bayliss and Turkel radiation boundary conditions, was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76-diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion of 14 mm in diameter is the smallest object that can be fully characterized presently using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data
Innovative Calibration Method for System Level Simulation Models of Internal Combustion Engines
Directory of Open Access Journals (Sweden)
Ivo Prah
2016-09-01
Full Text Available The paper outlines a procedure for the computer-controlled calibration of a combined zero-dimensional (0D) and one-dimensional (1D) thermodynamic simulation model of a turbocharged internal combustion engine (ICE). The main purpose of the calibration is to determine the input parameters of the simulation model in such a way as to achieve the smallest difference between the results of the measurements and the results of the numerical simulations with minimum consumption of computing time. The innovative calibration methodology is based on a novel interaction between optimization methods and physically based methods for the selected ICE sub-systems. Therein, physically based methods were used for steering the division of the integral ICE into several sub-models and for determining the parameters of selected components from their governing equations. This innovative multistage interaction between optimization methods and physically based methods allows, unlike well-established approaches that rely only on optimization techniques, for the successful calibration of a large number of input parameters with low time consumption. Therefore, the proposed method is suitable for efficient calibration of simulation models of advanced ICEs.
Computer Simulation of Nonuniform MTLs via Implicit Wendroff and State-Variable Methods
Directory of Open Access Journals (Sweden)
L. Brancik
2011-04-01
Full Text Available The paper deals with techniques for the computer simulation of nonuniform multiconductor transmission lines (MTLs) based on the implicit Wendroff and the state-variable methods. The techniques fall into a class of finite-difference time-domain (FDTD) methods useful for solving various electromagnetic systems. Their basic variants are extended and modified to enable solving both voltage and current distributions along the wires of nonuniform MTLs, as well as their sensitivities with respect to lumped and distributed parameters. An experimental error analysis is performed based on the Thomson cable, whose analytical solutions are known, and some examples of simulation of both uniform and nonuniform MTLs are presented. Based on a MATLAB implementation, CPU times are analyzed to compare the efficiency of the methods. Some results for nonlinear MTL simulation are presented as well.
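For orientation, the simplest member of the FDTD family applied to a transmission line can be sketched in a few lines: an explicit leapfrog scheme for the lossless telegrapher's equations on a uniform line. This is not the implicit Wendroff or state-variable scheme of the paper, and the line parameters, pulse, and boundary handling are illustrative assumptions.

```python
import numpy as np

def fdtd_line(nz=400, nt=300, L=250e-9, C=100e-12, dz=0.005):
    """Explicit leapfrog FDTD for the lossless telegrapher's equations
    v_t = -i_z / C, i_t = -v_z / L on a staggered 1-D grid (currents live
    between voltage nodes; end voltages are held fixed)."""
    c = 1.0 / np.sqrt(L * C)          # wave speed of the line
    dt = dz / c                        # Courant number 1: dispersionless in 1-D
    x = np.arange(nz)
    v = np.exp(-(((x - 60) * dz) / 0.02) ** 2)  # Gaussian voltage pulse
    i = np.zeros(nz - 1)
    for _ in range(nt):
        i -= dt / (L * dz) * (v[1:] - v[:-1])
        v[1:-1] -= dt / (C * dz) * (i[1:] - i[:-1])
    return v

# The initial pulse splits in two; the right-going half travels nt cells,
# so its peak should sit near cell 60 + 300 = 360 after the run.
v = fdtd_line()
print(int(np.argmax(v)))
```

The implicit Wendroff scheme in the paper trades this explicit update for an unconditionally stable implicit one, which also makes sensitivity computation along nonuniform lines more convenient.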
Simulation methods supporting homologation of Electronic Stability Control in vehicle variants
Lutz, Albert; Schick, Bernhard; Holzmann, Henning; Kochem, Michael; Meyer-Tuve, Harald; Lange, Olav; Mao, Yiqin; Tosolin, Guido
2017-10-01
Vehicle simulation has a long tradition in the automotive industry as a powerful supplement to physical vehicle testing. In the field of Electronic Stability Control (ESC) systems, the simulation process has been well established to support ESC development and application by suppliers and Original Equipment Manufacturers (OEMs). The latest regulation of the United Nations Economic Commission for Europe UN/ECE-R 13 also allows for simulation-based homologation. This extends the usage of simulation from ESC development to homologation. This paper gives an overview of simulation methods, as well as processes and tools used for the homologation of ESC in vehicle variants. The paper first describes the generic homologation process according to the European Regulation (UN/ECE-R 13H, UN/ECE-R 13/11) and U.S. Federal Motor Vehicle Safety Standard (FMVSS 126). Subsequently the ESC system is explained, as well as the generic application and release process at the supplier and OEM side. To apply the simulation methods, the ESC development and application process needs to be adapted to virtual vehicles. The simulation environment, consisting of vehicle model, ESC model and simulation platform, is explained in detail with some exemplary use-cases. In the final section, examples of simulation-based ESC homologation in vehicle variants are shown for passenger cars, light trucks, heavy trucks and trailers. This paper aims to give a state-of-the-art account of the simulation methods supporting the homologation of ESC systems in vehicle variants. However, the described approach and the lessons learned can be used as a reference in the future for an extended usage of simulation-supported releases of the ESC system, up to the development and release of driver assistance systems.
Simulating the operation of photosensor-based lighting controls
International Nuclear Information System (INIS)
Ehrlich, Charles; Papamichael, Konstantinos; Lai, Judy; Revzan, Kenneth
2001-01-01
Energy savings from the use of daylighting in commercial buildings are realized through implementation of photoelectric lighting controls that dim electric lights when sufficient daylight is available to provide adequate workplane illumination. The dimming level of electric lighting is based on the signal of a photosensor. Current simulation approaches for such systems are based on the questionable assumption that the signal of the photosensor is proportional to the task illuminance. This paper presents a method that simulates the performance of photosensor controls considering the acceptance angle, angular sensitivity, placement of the photosensor within a space, and color correction filter. The method is based on the multiplication of two fisheye images: one generated from the angular sensitivity of the photosensor and the other from a 180- or 360-degree fisheye image of the space as "seen" by the photosensor. The paper includes a detailed description of the method and its implementation, example applications, and validation results based on comparison with measurements in an actual office space.
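On a discrete angular grid, the "multiplication of two fisheye images" reduces to a pixel-wise product of a sensitivity map and a luminance map; a minimal sketch with hypothetical 16x16 maps (the cosine falloff, cone angle and luminances are invented for illustration):

```python
import math

# Hypothetical 16x16 angular grids standing in for the two fisheye images.
n = 16

def off_axis(i, j):
    # Toy mapping from grid cell to off-axis angle (0 at center, pi/2 at edge).
    r = math.hypot(i - (n - 1) / 2, j - (n - 1) / 2) / (n / 2)
    return min(r, 1.0) * math.pi / 2

# Image 1: photosensor angular sensitivity, cosine falloff in a 60-degree cone.
sens = [[math.cos(off_axis(i, j)) if off_axis(i, j) < math.radians(60) else 0.0
         for j in range(n)] for i in range(n)]
# Image 2: scene luminance as "seen" by the sensor (bright ceiling, dim floor).
lum = [[100.0 if i < n // 2 else 20.0 for j in range(n)] for i in range(n)]

# Photosensor signal: pixel-wise product of the two images, normalized by the
# total sensitivity so the result is a weighted average scene luminance.
signal = (sum(sens[i][j] * lum[i][j] for i in range(n) for j in range(n))
          / sum(sens[i][j] for i in range(n) for j in range(n)))
```

Because the sensitivity map here is radially symmetric and the scene is half bright, half dim, the weighted signal lands midway between the two luminances.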
A virtual reality based simulator for learning nasogastric tube placement.
Choi, Kup-Sze; He, Xuejian; Chiang, Vico Chung-Lim; Deng, Zhaohong
2015-02-01
Nasogastric tube (NGT) placement is a common clinical procedure where a plastic tube is inserted into the stomach through the nostril for feeding or drainage. However, the placement is a blind process in which the tube may be mistakenly inserted into other locations, leading to unexpected complications or fatal incidents. The placement techniques are conventionally acquired by practising on unrealistic rubber mannequins or on humans. In this paper, a virtual reality based training simulation system is proposed to facilitate the training of NGT placement. It focuses on the simulation of tube insertion and the rendering of the feedback forces with a haptic device. A hybrid force model is developed to compute the forces analytically or numerically under different conditions, including the situations when the patient is swallowing or when the tube is buckled at the nostril. To ensure real-time interactive simulations, an offline simulation approach is adopted to obtain the relationship between the insertion depth and insertion force using a non-linear finite element method. The offline dataset is then used to generate real-time feedback forces by interpolation. The virtual training process is logged quantitatively with metrics that can be used for assessing objective performance and tracking progress. The system has been evaluated by nursing professionals. They found that the haptic feeling produced by the simulated forces is similar to their experience during real NGT insertion. The proposed system provides a new educational tool to enhance conventional training in NGT placement. Copyright © 2014 Elsevier Ltd. All rights reserved.
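The offline-dataset idea (precompute depth-force pairs with a nonlinear FEM run, then interpolate at haptic rates) can be sketched as follows; the table values are invented for illustration, not FEM output:

```python
# Hypothetical offline dataset, standing in for the nonlinear-FEM precomputation:
# insertion depth (cm) versus resistance force (N). Values are illustrative only.
depths = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]
forces = [0.0, 0.4, 0.9, 2.1, 2.3, 2.6]

def feedback_force(depth):
    """Real-time haptic lookup: linear interpolation into the offline dataset."""
    if depth <= depths[0]:
        return forces[0]
    if depth >= depths[-1]:
        return forces[-1]
    for k in range(1, len(depths)):
        if depth <= depths[k]:
            t = (depth - depths[k - 1]) / (depths[k] - depths[k - 1])
            return forces[k - 1] + t * (forces[k] - forces[k - 1])
```

Table lookup costs microseconds per query, which is what makes the offline FEM approach compatible with the kilohertz update rates haptic devices require.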
Simulation-based interpersonal communication skills training for neurosurgical residents.
Harnof, Sagi; Hadani, Moshe; Ziv, Amitai; Berkenstadt, Haim
2013-09-01
Communication skills are an important component of the neurosurgery residency training program. We developed a simulation-based training module for neurosurgery residents in which medical, communication and ethical dilemmas are presented by role-playing actors. The aim was to assess the first national simulation-based communication skills training for neurosurgical residents. Eight scenarios covering different aspects of neurosurgery were developed by our team: (1) obtaining informed consent for an elective surgery, (2) discharge of a patient following elective surgery, (3) dealing with an unsatisfied patient, (4) delivering news of intraoperative complications, (5) delivering news of a brain tumor to parents of a 5-year-old boy, (6) delivering news of brain death to a family member, (7) obtaining informed consent for urgent surgery from the grandfather of a 7-year-old boy with an epidural hematoma, and (8) dealing with a case of child abuse. Fifteen neurosurgery residents from all major medical centers in Israel participated in the training. The session was recorded on video and was followed by videotaped debriefing by a senior neurosurgeon and communication expert and by feedback questionnaires. All trainees participated in two scenarios and observed another two. Participants largely agreed that the actors simulating patients represented real patients and family members and that the videotaped debriefing contributed to the teaching of professional skills. Simulation-based communication skills training is effective, and together with thorough debriefing is an excellent learning and practical method for imparting communication skills to neurosurgery residents. Such simulation-based training will ultimately be part of the national residency program.
Forced Ignition Study Based On Wavelet Method
Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.
2011-05-01
The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets, is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.
Power quality events recognition using a SVM-based method
Energy Technology Data Exchange (ETDEWEB)
Cerqueira, Augusto Santiago; Ferreira, Danton Diego; Ribeiro, Moises Vidal; Duque, Carlos Augusto [Department of Electrical Circuits, Federal University of Juiz de Fora, Campus Universitario, 36036 900, Juiz de Fora MG (Brazil)
2008-09-15
In this paper, a novel SVM-based method for power quality event classification is proposed. A simple approach for feature extraction is introduced, based on the subtraction of the fundamental component from the acquired voltage signal. The resulting signal is presented to a support vector machine for event classification. Results from simulation are presented and compared with two other methods, the OTFR and the LCEC. The proposed method showed improved performance at a reasonable computational cost. (author)
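A minimal sketch of the feature-extraction step (subtracting an estimated fundamental from the acquired signal), here done with a one-bin DFT projection over a full fundamental period; the sampling rate, window length and harmonic disturbance are hypothetical:

```python
import math

fs, f0, n = 3840.0, 60.0, 64      # sampling rate (Hz), fundamental (Hz), window
t = [k / fs for k in range(n)]    # exactly one fundamental period (64 samples)

# Synthetic "acquired" voltage: fundamental plus a small 5th-harmonic event.
x = [math.sin(2 * math.pi * f0 * tk) + 0.1 * math.sin(2 * math.pi * 5 * f0 * tk)
     for tk in t]

# Estimate the fundamental by projecting onto sin/cos at f0 (a one-bin DFT)...
c = [math.cos(2 * math.pi * f0 * tk) for tk in t]
s = [math.sin(2 * math.pi * f0 * tk) for tk in t]
a = 2.0 / n * sum(xk * ck for xk, ck in zip(x, c))
b = 2.0 / n * sum(xk * sk for xk, sk in zip(x, s))
# ...then subtract it; the residual is what gets handed to the classifier.
residual = [xk - a * ck - b * sk for xk, ck, sk in zip(x, c, s)]
residual_energy = sum(r * r for r in residual) / n
```

On an integer-period window the projection removes the fundamental exactly, so only the disturbance energy survives as a feature for the SVM stage.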
Siegfried, Robert
2014-01-01
Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation, the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore, he presents parallel and distributed simulation approaches for the execution of agent-based models - from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard
Multibus-based parallel processor for simulation
Ogrady, E. P.; Wang, C.-H.
1983-01-01
A Multibus-based parallel processor simulation system is described. The system is intended to serve as a vehicle for gaining hands-on experience, testing system and application software, and evaluating parallel processor performance during development of a larger system based on the horizontal/vertical-bus interprocessor communication mechanism. The prototype system consists of up to seven Intel iSBC 86/12A single-board computers which serve as processing elements, a multiple transmission controller (MTC) designed to support system operation, and an Intel Model 225 Microcomputer Development System which serves as the user interface and input/output processor. All components are interconnected by a Multibus/IEEE 796 bus. An important characteristic of the system is that it provides a mechanism for a processing element to broadcast data to other selected processing elements. This parallel transfer capability is provided through the design of the MTC and a minor modification to the iSBC 86/12A board. The operation of the MTC, the basic hardware-level operation of the system, and pertinent details about the iSBC 86/12A and the Multibus are described.
Modeling and Simulation of DC Power Electronics Systems Using Harmonic State Space (HSS) Method
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth
2015-01-01
based on the state-space averaging and generalized averaging, these also have limitations to show the same results as with the non-linear time domain simulations. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling......For the efficiency and simplicity of electric systems, dc-based power electronics systems are widely used in a variety of applications such as electric vehicles, ships, aircraft and homes. In these systems, there could be a number of dynamic interactions between loads and other dc-dc....... Through this method, the required computation time and CPU memory for large dc power electronics systems can be reduced. Besides, the achieved results show the same results as with the non-linear time domain simulation, but with a faster simulation time, which is beneficial in a large network....
Agent Programming Languages and Logics in Agent-Based Simulation
DEFF Research Database (Denmark)
Larsen, John
2018-01-01
and social behavior, and work on verification. Agent-based simulation is an approach for simulation that also uses the notion of agents. Although agent programming languages and logics are much less used in agent-based simulation, there are successful examples with agents designed according to the BDI...
Computer-Based Simulation Games in Public Administration Education
Kutergina Evgeniia
2017-01-01
Computer simulation, an active learning technique, is now one of the advanced pedagogical technologies. The use of simulation games in the educational process allows students to gain a firsthand understanding of the processes of real life. Public-administration, public-policy and political-science courses increasingly adopt simulation games in universities worldwide. Besides person-to-person simulation games, there are computer-based simulations in public-administration education. Currently...
MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow
Samani, N.; Kompani-Zare, M.; Barry, D. A.
2004-01-01
Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
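The cylindrical-to-Cartesian mapping rests on the substitution u = ln(r), under which steady radial flow becomes a one-dimensional Laplace problem on a uniform grid; a sketch with illustrative radii and heads, checked against the analytical Thiem solution:

```python
import math

# Steady radial flow to a fully penetrating well obeys (1/r) d/dr(r dh/dr) = 0.
# The mapping u = ln(r) turns this into d2h/du2 = 0, which a rectilinear-grid
# finite-difference code (MODFLOW-like) handles with no special well treatment.
# All radii and heads below are illustrative, not from the paper.
rw, R = 0.1, 100.0            # well radius and outer boundary radius (m)
hw, hR = 5.0, 10.0            # prescribed heads at rw and R (m)
n = 51
du = (math.log(R) - math.log(rw)) / (n - 1)
u = [math.log(rw) + k * du for k in range(n)]

# Gauss-Seidel sweeps for the mapped Laplace equation with Dirichlet ends.
h = [hw] + [0.0] * (n - 2) + [hR]
for _ in range(5000):
    for k in range(1, n - 1):
        h[k] = 0.5 * (h[k - 1] + h[k + 1])

def thiem(r):
    """Analytical Thiem solution for steady confined radial flow."""
    return hw + (hR - hw) * math.log(r / rw) / math.log(R / rw)
```

Mapping each node back with r = exp(u) reproduces the logarithmic Thiem profile, i.e. the uniform grid in u is exactly the near-well refinement a cylindrical solver needs.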
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
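The Eulerian-Lagrangian coupling at the heart of such solvers (fluid velocity stored on a fixed grid, interpolated to particle positions) can be sketched in one dimension; the linear velocity field and particle positions are illustrative only:

```python
# One-dimensional toy of Eulerian-Lagrangian coupling: an Eulerian grid holds
# the gas velocity; Lagrangian particles interpolate it and advance in time.
nx, dx, dt = 11, 1.0, 0.1
grid_u = [0.5 * k * dx for k in range(nx)]       # gas velocity u(x) = 0.5 x

def u_at(x):
    """Linearly interpolate the grid velocity at a particle position."""
    k = min(int(x / dx), nx - 2)
    t = x / dx - k
    return (1 - t) * grid_u[k] + t * grid_u[k + 1]

particles = [1.0, 2.0, 4.0]
for _ in range(10):                              # forward-Euler particle push
    particles = [x + dt * u_at(x) for x in particles]
```

The scaling problem the abstract addresses arises because, in parallel runs, each particle must find and communicate with the grid cells owning its neighborhood; the interpolation itself stays this simple.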
NMR diffusion simulation based on conditional random walk.
Gudbjartsson, H; Patz, S
1995-01-01
The authors introduce here a new, very fast, simulation method for free diffusion in a linear magnetic field gradient, which is an extension of the conventional Monte Carlo (MC) method or the convolution method described by Wong et al. (in 12th SMRM, New York, 1993, p.10). In earlier NMR-diffusion simulation methods, such as the finite difference method (FD), the Monte Carlo method, and the deterministic convolution method, the outcome of the calculations depends on the simulation time step. In the authors' method, however, the results are independent of the time step, although in the convolution method the step size has to be adequate for spins to diffuse to adjacent grid points. By always selecting the largest possible time step the computation time can therefore be reduced. Finally the authors point out that in simple geometric configurations their simulation algorithm can be used to reduce computation time in the simulation of restricted diffusion.
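As context, the conventional fixed-time-step Monte Carlo simulation that the authors improve upon can be sketched for free diffusion in a linear gradient, where the ensemble signal should approach the analytical attenuation exp(-(gamma*G)^2 D T^3 / 3); all parameter values are illustrative:

```python
import math, random

random.seed(1)
D, gG, T = 1.0, 1.0, 1.0     # diffusivity, gamma*gradient, total evolution time
n, N = 50, 2000              # time steps per walker, number of walkers
dt = T / n

phases = []
for _ in range(N):
    x, phi = 0.0, 0.0
    for _ in range(n):       # unrestricted Brownian step, then phase accrual
        x += random.gauss(0.0, math.sqrt(2.0 * D * dt))
        phi += gG * x * dt
    phases.append(phi)

signal = sum(math.cos(p) for p in phases) / N      # ensemble NMR signal
analytic = math.exp(-gG ** 2 * D * T ** 3 / 3.0)   # free-diffusion attenuation
```

The result carries both statistical noise (finite N) and a discretization bias that shrinks with the step count n, which is exactly the time-step dependence the paper's method eliminates.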
Application of the finite volume method in the simulation of saturated flows of binary mixtures
International Nuclear Information System (INIS)
Murad, M.A.; Gama, R.M.S. da; Sampaio, R.
1989-12-01
This work presents the simulation of saturated flows of an incompressible Newtonian fluid through a rigid, homogeneous and isotropic porous medium. The employed mathematical model is derived from the Continuum Theory of Mixtures and generalizes the classical one which is based on Darcy's Law form of the momentum equation. In this approach fluid and porous matrix are regarded as continuous constituents of a binary mixture. The finite volume method is employed in the simulation. (author) [pt
DEFF Research Database (Denmark)
Petersen, Steffen; Svendsen, Svend
2011-01-01
A method for simulating predictive control of building systems operation in the early stages of building design is presented. The method uses building simulation based on weather forecasts to predict whether there is a future heating or cooling requirement. This information enables the thermal...... control systems of the building to respond proactively to keep the operational temperature within the thermal comfort range with the minimum use of energy. The method is implemented in an existing building simulation tool designed to inform decisions in the early stages of building design through...... parametric analysis. This enables building designers to predict the performance of the method and include it as a part of the solution space. The method furthermore facilitates the task of configuring appropriate building systems control schemes in the tool, and it eliminates time consuming manual...
Study on simulation methods of atrium building cooling load in hot and humid regions
Energy Technology Data Exchange (ETDEWEB)
Pan, Yiqun; Li, Yuming; Huang, Zhizhong [Institute of Building Performance and Technology, Sino-German College of Applied Sciences, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Wu, Gang [Weldtech Technology (Shanghai) Co. Ltd. (China)
2010-10-15
In recent years, highly glazed atria have become popular because of their architectural aesthetics and the advantage of admitting daylight. However, cooling load estimation of such atrium buildings is difficult due to the complex thermal phenomena that occur in the atrium space. The study aims to find a simplified method of estimating cooling loads through simulations for various types of atria in hot and humid regions. Atrium buildings are divided into different types. For every type of atrium building, both CFD and energy models are developed. A standard method and a simplified one are proposed to simulate the cooling load of atria in EnergyPlus based on different room air temperature patterns resulting from CFD simulation. The standard method incorporates CFD results as input into non-dimensional height room air models in EnergyPlus, and its simulation results are defined as a baseline in order to compare with the results from the simplified method for every category of atrium buildings. In order to further validate the simplified method, an actual atrium office building is tested on site on a typical summer day and measured results are compared with simulation results using the simplified method. Finally, appropriate methods of simulating different types of atrium buildings are proposed. (author)
A Simulation-Based Investigation of High Latency Space Systems Operations
Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; Moore, Michael
2017-01-01
NASA's human space program has developed considerable experience with near Earth space operations. Although NASA has experience with deep space robotic missions, NASA has little substantive experience with human deep space operations. Even in the Apollo program, the missions lasted only a few weeks and the communication latencies were on the order of seconds. Human missions beyond the relatively close confines of the Earth-Moon system will involve missions with durations measured in months and communications latencies measured in minutes. To minimize crew risk and to maximize mission success, NASA needs to develop a better understanding of the implications of these types of mission durations and communication latencies on vehicle design, mission design and flight controller interaction with the crew. To begin to address these needs, NASA performed a study using a physics-based subsystem simulation to investigate the interactions between spacecraft crew and a ground-based mission control center for vehicle subsystem operations across long communication delays. The simulation, built with a subsystem modeling tool developed at NASA's Johnson Space Center, models the life support system of a Mars transit vehicle. The simulation contains models of the cabin atmosphere and pressure control system, electrical power system, drinking and waste water systems, internal and external thermal control systems, and crew metabolic functions. The simulation has three interfaces: 1) a real-time crew interface that can be used to monitor and control the vehicle subsystems; 2) a mission control center interface with data transport delays up to 15 minutes each way; 3) a real-time simulation test conductor interface that can be used to insert subsystem malfunctions and observe the interactions between the crew, ground, and simulated vehicle. The study was conducted during the 21st NASA Extreme Environment Mission Operations (NEEMO) mission between July 18 and August 3, 2016. The NEEMO
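The ground-link transport delay of up to 15 minutes each way can be modeled with a simple release-time queue; the message text and times below are hypothetical:

```python
from collections import deque

class DelayedLink:
    """One-way communication link with a fixed transport delay, in seconds."""

    def __init__(self, delay):
        self.delay = delay
        self.queue = deque()         # (arrival_time, message) in send order

    def send(self, t, msg):
        self.queue.append((t + self.delay, msg))

    def receive(self, t):
        out = []
        while self.queue and self.queue[0][0] <= t:
            out.append(self.queue.popleft()[1])
        return out

# Mars-transit-like 15-minute (900 s) one-way delay; message text hypothetical.
link = DelayedLink(900.0)
link.send(0.0, "caution: cabin ppO2 low")
early = link.receive(600.0)          # 10 min after send: nothing has arrived
late = link.receive(900.0)           # at 15 min the caution finally arrives
```

Two such links back-to-back give the 30-minute round trip that forces crews toward the autonomous operations the study investigates.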
A Hybrid Positioning Method Based on Hypothesis Testing
DEFF Research Database (Denmark)
Amiot, Nicolas; Pedersen, Troels; Laaraiedh, Mohamed
2012-01-01
maxima. We propose to first estimate the support region of the two peaks of the likelihood function using a set membership method, and then decide between the two regions using a rule based on the less reliable observations. Monte Carlo simulations show that the performance of the proposed method...
A regularized vortex-particle mesh method for large eddy simulation
DEFF Research Database (Denmark)
Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function...... solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy...
International Nuclear Information System (INIS)
Xi Li-Ying; Chen Huan-Ming; Zheng Fu; Gao Hua; Tong Yang; Ma Zhi
2015-01-01
Three-dimensional simulations of ferroelectric hysteresis and butterfly loops are carried out based on solving the time-dependent Ginzburg–Landau equations using a finite volume method. The influence of externally mechanical loadings with a tensile strain and a compressive strain on the hysteresis and butterfly loops is studied numerically. Different from the traditional finite element and finite difference methods, the finite volume method is applicable to simulate the ferroelectric phase transitions and properties of ferroelectric materials even for more realistic and physical problems. (paper)
Experiences using DAKOTA stochastic expansion methods in computational simulations.
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan; Ruthruff, Joseph R.
2012-01-01
Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
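A stochastic expansion in the DAKOTA sense can be illustrated by a tiny polynomial chaos expansion of a response with one Gaussian input, using a 3-point Gauss-Hermite rule; the response function is invented for illustration:

```python
import math

# 3-point Gauss-Hermite rule for a standard normal input (exact to degree 5).
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def f(x):
    # Invented model response depending on one Gaussian random input.
    return x ** 2 + 0.5 * x

# Probabilists' Hermite polynomials He0..He2 and their norms E[He_k(X)^2] = k!.
He = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]
norms = [1.0, 1.0, 2.0]

# Spectral projection: c_k = E[f(X) He_k(X)] / E[He_k(X)^2], via quadrature.
c = [sum(w * f(x) * He[k](x) for x, w in zip(nodes, weights)) / norms[k]
     for k in range(3)]

mean = c[0]                                     # the expansion gives E[f] directly
variance = sum(c[k] ** 2 * norms[k] for k in (1, 2))
```

Once the coefficients are in hand, moments and probability levels come almost for free, which is the economy that makes stochastic expansions attractive relative to brute-force sampling.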
Quantum control with NMR methods: Application to quantum simulations
International Nuclear Information System (INIS)
Negrevergne, Camille
2002-01-01
Manipulating information according to quantum laws allows improvements in the efficiency of the way we treat certain problems. Liquid state Nuclear Magnetic Resonance methods allow us to initialize, manipulate and read the quantum state of a system of coupled spins. These methods have been used to realize an experimental small Quantum Information Processor (QIP) able to process information through around one hundred elementary operations. One of the main themes of this work was to design, optimize and validate reliable RF-pulse sequences used to 'program' the QIP. Such techniques have been used to run a quantum simulation algorithm for fermionic systems. Some experimental results have been obtained on the determination of eigenenergies and correlation functions for a toy problem consisting of fermions on a lattice, showing an experimental proof of principle for such quantum simulations. (author) [fr
Simulation-Based System Design Laboratory
Federal Laboratory Consortium — The research objective is to develop, test, and implement effective and efficient simulation techniques for modeling, evaluating, and optimizing systems in order to...
Simulation-Based Testing of Distributed Systems
National Research Council Canada - National Science Library
Rutherford, Matthew J; Carzaniga, Antonio; Wolf, Alexander L
2006-01-01
.... Typically written using an imperative programming language, these simulations capture basic algorithmic functionality at the same time as they focus attention on properties critical to distribution...
Simulation-based training for colonoscopy
DEFF Research Database (Denmark)
Preisler, Louise; Svendsen, Morten Bo Søndergaard; Nerup, Nikolaj
2015-01-01
in colonoscopy before practicing on patients. Twenty-five physicians (10 consultants with endoscopic experience and 15 fellows with very little endoscopic experience) were tested on 2 different simulator models: a virtual-reality simulator and a physical model. Tests were repeated twice on each simulator model...... on both the models (P virtual-reality and the physical model, respectively. The established pass/fail standards failed one of the consultants (virtual-reality simulator) and allowed one fellow to pass (physical model). The 2 tested...
Miller, Daniel J.; Zhang, Zhibo; Ackerman, Andrew S.; Platnick, Steven; Baum, Bryan A.
2018-01-01
Passive optical retrievals of cloud liquid water path (LWP), like those implemented for Moderate Resolution Imaging Spectroradiometer (MODIS), rely on cloud vertical profile assumptions to relate optical thickness (τ) and effective radius (re) retrievals to LWP. These techniques typically assume that shallow clouds are vertically homogeneous; however, an adiabatic cloud model is plausibly more realistic for shallow marine boundary layer cloud regimes. In this study a satellite retrieval simulator is used to perform MODIS-like satellite retrievals, which in turn are compared directly to the large-eddy simulation (LES) output. This satellite simulator creates a framework for rigorous quantification of the impact that vertical profile features have on LWP retrievals, and it accomplishes this while also avoiding sources of bias present in previous observational studies. The cloud vertical profiles from the LES are often more complex than either of the two standard assumptions, and the favored assumption was found to be sensitive to cloud regime (cumuliform/stratiform). Confirming previous studies, drizzle and cloud top entrainment of dry air are identified as physical features that bias LWP retrievals away from adiabatic and toward homogeneous assumptions. The mean bias induced by drizzle-influenced profiles was shown to be on the order of 5–10 g/m2. In contrast, the influence of cloud top entrainment was found to be smaller by about a factor of 2. A theoretical framework is developed to explain variability in LWP retrievals by introducing modifications to the adiabatic re profile. In addition to analyzing bispectral retrievals, we also compare results with the vertical profile sensitivity of passive polarimetric retrieval techniques. PMID:29637042
Application of subset simulation methods to dynamic fault tree analysis
International Nuclear Information System (INIS)
Liu Mengyun; Liu Jingquan; She Ding
2015-01-01
Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it was recently criticized for the inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, a DFT is able to describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFTs has attracted rising attention, because it can model the authentic behaviors of systems and avoid the limitations of the analytical method. This paper provides an overview of MCS for DFT analysis, including the sampling of basic events and the propagation rules for logic gates. When calculating rare-event probabilities, standard MCS requires a large number of simulations. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov Chain Monte Carlo (MCMC) technique, the SS method is able to accelerate the exploration of the failure region. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
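A minimal sketch of the subset simulation idea (conditional levels linked by Metropolis chains) for a scalar rare event, estimating P(X > 3) for a standard normal, whose exact value is about 1.35e-3; the limit-state function and tuning are illustrative, far simpler than a DFT:

```python
import math, random

random.seed(7)

def g(x):
    """Limit-state function; 'failure' is the rare event g(x) > b."""
    return x

N, p0, b = 1000, 0.1, 3.0        # samples per level, level probability, threshold
samples = [random.gauss(0.0, 1.0) for _ in range(N)]   # level 0: plain MCS
prob = 1.0
for _ in range(10):                                    # at most 10 levels
    samples.sort(key=g, reverse=True)
    if g(samples[int(p0 * N) - 1]) >= b:               # final level reached
        prob *= sum(1 for x in samples if g(x) > b) / N
        break
    thresh = g(samples[int(p0 * N) - 1])               # intermediate threshold
    prob *= p0
    seeds, samples = samples[: int(p0 * N)], []
    for s in seeds:              # Metropolis chains seeded inside the subset
        x = s
        for _ in range(int(1 / p0)):
            cand = x + random.gauss(0.0, 1.0)
            if random.random() < math.exp((x * x - cand * cand) / 2) and g(cand) > thresh:
                x = cand
            samples.append(x)
```

Each level only has to estimate a probability near p0 = 0.1, so a few thousand samples resolve an event that plain MCS would need millions of samples to see.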
A computer method for simulating the decay of radon daughters
International Nuclear Information System (INIS)
Hartley, B.M.
1988-01-01
The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms that decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is said to be random, but this random behaviour is such that a single species forms an ensemble in which the times of disintegration follow a geometric distribution. Numbers with a geometric distribution can be generated by computer and used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon-222 and the emission of alpha particles from polonium-218 and polonium-214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for measuring exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically, since the time of decay of an atom of polonium-218 is not independent of the time of decay of the subsequent polonium-214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for counting alpha particles from radon daughters and for calculating exposure.
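A continuous-time variant of this simulation idea can be sketched by drawing a random decay time for each atom and counting how many fall inside an observation window, then repeating the experiment to see the statistical spread. The atom count, window length, and repetition count below are illustrative assumptions (the half-life is the approximate 3.05 min value for polonium-218).

```python
import math
import random

def simulate_decays(n_atoms, half_life, t_obs, rng):
    """Count how many of n_atoms decay within t_obs.

    Each atom's decay time is drawn from an exponential distribution
    (the continuous analogue of the geometric distribution in the text).
    """
    lam = math.log(2) / half_life        # decay constant
    return sum(rng.expovariate(lam) <= t_obs for _ in range(n_atoms))

# Repeat the simulated counting experiment to gauge its statistical spread:
rng = random.Random(42)
counts = [simulate_decays(1000, 3.05, 3.05, rng) for _ in range(200)]
mean = sum(counts) / len(counts)         # ~500: half the atoms per half-life
```

The run-to-run scatter in `counts` is exactly the kind of statistical uncertainty that is awkward to obtain analytically once correlated daughter decays are added to the chain.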
Agent-based simulation of electricity markets : a literature review
International Nuclear Information System (INIS)
Sensfuss, F.; Genoese, M.; Genoese, M.; Most, D.
2007-01-01
The electricity sector in Europe and North America is undergoing considerable changes as a result of deregulation, issues related to climate change, and the integration of renewable resources within the electricity grid. This article reviewed agent-based simulation methods of analyzing electricity markets. The paper provided an analysis of research currently being conducted on electricity market designs and examined methods of modelling agent decisions. Methods of coupling long term and short term decisions were also reviewed. Issues related to single and multiple market analysis methods were discussed, as well as different approaches to integrating agent-based models with models of other commodities. The integration of transmission constraints within agent-based models was also discussed, and methods of measuring market efficiency were evaluated. Other topics examined in the paper included approaches to integrating investment decisions, carbon dioxide (CO₂) trading, and renewable support schemes. It was concluded that agent-based models serve as a test bed for the electricity sector, and will help to provide insights for future policy decisions. 74 refs., 6 figs.
Use of simulated data sets to evaluate the fidelity of metagenomic processing methods
Energy Technology Data Exchange (ETDEWEB)
Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C.
2006-12-01
Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
Use of simulated data sets to evaluate the fidelity of metagenomic processing methods
Energy Technology Data Exchange (ETDEWEB)
Mavromatis, K [U.S. Department of Energy, Joint Genome Institute; Ivanova, N [U.S. Department of Energy, Joint Genome Institute; Barry, Kerrie [U.S. Department of Energy, Joint Genome Institute; Shapiro, Harris [U.S. Department of Energy, Joint Genome Institute; Goltsman, Eugene [U.S. Department of Energy, Joint Genome Institute; McHardy, Alice C. [IBM T. J. Watson Research Center; Rigoutsos, Isidore [IBM T. J. Watson Research Center; Salamov, Asaf [U.S. Department of Energy, Joint Genome Institute; Korzeniewski, Frank [U.S. Department of Energy, Joint Genome Institute; Land, Miriam L [ORNL; Lapidus, Alla L. [U.S. Department of Energy, Joint Genome Institute; Grigoriev, Igor [U.S. Department of Energy, Joint Genome Institute; Hugenholtz, Philip [U.S. Department of Energy, Joint Genome Institute; Kyrpides, Nikos C [U.S. Department of Energy, Joint Genome Institute
2007-01-01
Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.
Numerical simulation of GEW equation using RBF collocation method
Directory of Open Access Journals (Sweden)
Hamid Panahipour
2012-08-01
The generalized equal width (GEW) equation is solved numerically by a meshless method based on global collocation with standard types of radial basis functions (RBFs). Test problems, including the propagation of single solitons, the interaction of two and three solitons, the development of Maxwellian initial condition pulses, wave undulation and wave generation, are used to demonstrate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and other published numerical methods.
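The core ingredient of such a method, assembling the RBF collocation system at a set of centres and solving for the weights, can be sketched on a pure interpolation problem. The Gaussian basis, the shape parameter eps = 2, and the sech² test pulse (the shape of a single solitary wave) below are illustrative assumptions, not the paper's setup.

```python
import math

def rbf_interpolate(xs, ys, eps=2.0):
    """Build a global Gaussian-RBF interpolant by solving the collocation
    system A w = y with naive Gaussian elimination (fine for tiny systems)."""
    n = len(xs)
    phi = lambda r: math.exp(-(eps * r) ** 2)          # Gaussian RBF
    # Augmented collocation matrix [A | y], A[i][j] = phi(|x_i - x_j|)
    a = [[phi(abs(xs[i] - xs[j])) for j in range(n)] + [ys[i]] for i in range(n)]
    for col in range(n):                                # elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):                      # back substitution
        w[r] = (a[r][n] - sum(a[r][c] * w[c] for c in range(r + 1, n))) / a[r][r]
    return lambda x: sum(w[j] * phi(abs(x - xs[j])) for j in range(n))

# Interpolate a sech^2 pulse on 15 equally spaced centres:
xs = [i * 0.5 - 3.5 for i in range(15)]
ys = [1.0 / math.cosh(x) ** 2 for x in xs]
u = rbf_interpolate(xs, ys)
```

A full GEW solver would additionally collocate the spatial derivatives of the RBFs and march the weights in time; the linear-algebra core, however, is the same solve shown here.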
Atmosphere Re-Entry Simulation Using the Direct Simulation Monte Carlo (DSMC) Method
Directory of Open Access Journals (Sweden)
Francesco Pellicani
2016-05-01
Aerothermodynamic investigations of hypersonic re-entry vehicles provide fundamental information to other important disciplines, such as materials and structures, assisting the development of thermal protection systems (TPS) that are efficient and lightweight. For the transitional flow regime, where thermal and chemical equilibrium is almost absent, a dedicated numerical method has been introduced for such studies: the direct simulation Monte Carlo (DSMC) technique. The acceptance and applicability of the DSMC method have increased significantly in the 50 years since its invention, thanks to increases in computer speed and to parallel computing. Nevertheless, further verification and validation efforts are needed for its wider acceptance. In this study, the DSMC simulators OpenFOAM and SPARTA were studied and benchmarked against numerical and theoretical data for inert and chemically reactive flows; the same will be done against experimental data in the near future. The results show the validity of the DSMC data. The best settings of the fundamental parameters used by a DSMC simulator are presented for each code and compared with the guidelines derived from the theory behind the Monte Carlo method. In particular, the number of particles per cell was found to be the most relevant parameter for achieving valid and optimized results. It is shown that a simulation with a mean value of one particle per cell gives sufficiently good results with very low computational resources. This finding motivates reconsidering the appropriate investigation method in the transitional regime, where both the direct simulation Monte Carlo (DSMC) method and computational fluid dynamics (CFD) can work, but with different computational effort.
A particle finite element method for machining simulations
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. The kinematics and a suitable finite element formulation are also introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation, it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.
Xue, Zhong; Shen, Dinggang; Karacali, Bilge; Stern, Joshua; Rottenberg, David; Davatzikos, Christos
2006-01-01
Simulated deformations and images can act as the gold standard for evaluating various template-based image segmentation and registration algorithms. Traditional deformable simulation methods, such as the use of analytic deformation fields or the displacement of landmarks followed by some form of interpolation, are often unable to construct rich (complex) and/or realistic deformations of anatomical organs. This paper presents new methods aiming to automatically simulate realistic inter- and in...
Simulated Annealing-Based Krill Herd Algorithm for Global Optimization
Directory of Open Access Journals (Sweden)
Gai-Ge Wang
2013-01-01
Recently, Gandomi and Alavi proposed a novel swarm-intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, this paper proposes a new improved meta-heuristic, the simulated annealing-based krill herd (SKH) method, for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating the krill's position, so as to enhance reliability and robustness in dealing with optimization problems. The introduced KS operator combines a greedy strategy with accepting a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, a kind of elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and experimental results show that, in most cases, the performance of the improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
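The two ingredients borrowed from SA, accepting a few worse solutions with a temperature-dependent probability and keeping an elite best-so-far individual, can be sketched in isolation. The code below is plain simulated annealing on a sphere benchmark, not the full SKH algorithm; the temperature schedule, step size, and test function are illustrative choices.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, steps=2000, seed=7):
    """Minimise f with SA: accept worse moves with prob exp(-delta/T)
    (the 'accept a few not-so-good solutions' rule) and keep an elite."""
    rng = random.Random(seed)
    x, fx = x0[:], f(x0)
    best, fbest = x[:], fx              # elitism: never lose the best individual
    t = t0
    for _ in range(steps):
        cand = [xi + rng.gauss(0, 0.1) for xi in x]   # random-walk move
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc            # accept (possibly worse) candidate
        if fx < fbest:
            best, fbest = x[:], fx      # update the elite
        t *= cooling                    # geometric cooling schedule
    return best, fbest

sphere = lambda v: sum(xi * xi for xi in v)   # classic benchmark function
best, fbest = simulated_annealing(sphere, [2.0, -3.0])
```

Early on, the high temperature lets the search escape poor regions; as T shrinks, the acceptance rule becomes effectively greedy, which is the same trade-off the KS operator exploits inside SKH.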
Multigrid Methods for Fully Implicit Oil Reservoir Simulation
Molenaar, J.
1996-01-01
In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES (implicit pressure, explicit saturation) approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time-stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation, multigrid methods have become an accepted technique. The fully implicit method, on the other hand, is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method, or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
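The fully implicit step described above, solving a nonlinear system with Newton's method at every time level, can be illustrated on a scalar stand-in problem. The ODE u' = -u², the time-step size, and the tolerance below are illustrative assumptions, not the black-oil system itself.

```python
def newton(f, df, x0, tol=1e-12, maxit=50):
    """Plain Newton iteration, the workhorse of a fully implicit time step."""
    x = x0
    for _ in range(maxit):
        dx = f(x) / df(x)       # Newton correction from the linearised system
        x -= dx
        if abs(dx) < tol:       # converged when the update is negligible
            break
    return x

# One backward-Euler step for u' = -u**2 (a scalar stand-in for the
# nonlinear balance equations): solve g(u) = u - u_old + dt*u**2 = 0.
u_old, dt = 1.0, 0.1
g = lambda u: u - u_old + dt * u * u
dg = lambda u: 1.0 + 2.0 * dt * u     # Jacobian of g
u_new = newton(g, dg, u_old)
```

In a reservoir simulator the scalar division `f(x) / df(x)` becomes a large sparse linear solve, and it is exactly that solve which multigrid (linear or nonlinear FAS) is meant to accelerate.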
Radon movement simulation in overburden by the 'Scattered Packet Method'
International Nuclear Information System (INIS)
Marah, H.; Sabir, A.; Hlou, L.; Tayebi, M.
1998-01-01
The analysis of radon (²²²Rn) movement in overburden requires solving the general transport equation in a porous medium, involving diffusion and convection. Generally, this equation has been derived and solved analytically. The 'Scattered Packet Method' is a recent mathematical method of solution, initially developed for studies of electron movement in semiconductors. In this paper, we have adapted this method to simulate radon emanation in a porous medium. The key parameters are the radon concentration at the source, the diffusion coefficient, and the geometry. To show the efficiency of this method, several cases of increasing complexity are considered. This model allows one to follow the migration, in time and space, of the radon produced, as a function of the characteristics of the studied site. Forty soil radon measurements were taken across a North Moroccan fault. Forward modeling of the radon anomalies produces satisfactory fits to the observed data and allows determination of the overburden thickness. (author)
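The diffusive part of the transport equation, with radioactive decay as a sink term, can be sketched with a conventional explicit finite-difference scheme. This is not the Scattered Packet Method itself, and the diffusion coefficient, depth, and boundary conditions below are illustrative values (only the decay constant is the physical ²²²Rn value, about 2.1e-6 s⁻¹).

```python
def radon_profile(depth=5.0, nx=51, d_coef=5e-6, lam=2.1e-6,
                  c_source=1.0, t_end=4e6, dt=500.0):
    """Explicit FD solution of dC/dt = D d²C/dx² - lam*C.

    A fixed source concentration is held at the bottom boundary (x = 0)
    and a zero-flux condition is imposed at the surface (x = depth).
    """
    dx = depth / (nx - 1)
    assert dt * d_coef / dx**2 <= 0.5      # explicit stability limit
    c = [0.0] * nx
    c[0] = c_source                        # source boundary
    for _ in range(int(t_end / dt)):
        new = c[:]
        for i in range(1, nx - 1):
            new[i] = c[i] + dt * (d_coef * (c[i+1] - 2*c[i] + c[i-1]) / dx**2
                                  - lam * c[i])
        new[-1] = new[-2]                  # zero-flux top boundary
        c = new
    return c

profile = radon_profile()
```

At steady state the profile decays over the diffusion length sqrt(D/λ) (about 1.5 m for these illustrative values), which is the kind of depth-dependence that forward modeling fits against measured soil-radon anomalies.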
Simulated BRDF based on measured surface topography of metal
Yang, Haiyue; Haist, Tobias; Gronle, Marc; Osten, Wolfgang
2017-06-01
The radiative reflective properties of a calibration-standard rough surface were simulated by ray tracing and the finite-difference time-domain (FDTD) method. The simulation results were used to compute the bidirectional reflectance distribution functions (BRDF) of metal surfaces and were compared with experimental measurements. The experimental and simulated results are in good agreement.
Evaluation of null-point detection methods on simulation data
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating through a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as they are for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.
Energy Technology Data Exchange (ETDEWEB)
Terano, Takao [Univ. of Tsukuba, Tokyo (Japan); Ishino, Yoko [Univ. of Tokyo (Japan)
1996-12-31
This paper describes a novel method to acquire efficient decision rules from questionnaire data using both simulated breeding and inductive learning techniques. The basic ideas of the method are that simulated breeding is used to extract the effective features from the questionnaire data and that inductive learning is used to acquire simple decision rules from the data. Simulated breeding is one of the genetic algorithm (GA) based techniques used to subjectively or interactively evaluate the qualities of offspring generated by genetic operations. In this paper, we show a basic interactive version of the method and two variations: one with semi-automated GA phases and one with a relative evaluation phase via the Analytic Hierarchy Process (AHP). The proposed method has been qualitatively and quantitatively validated by a case study on consumer product questionnaire data.
Directory of Open Access Journals (Sweden)
Cristina Portalés
2017-06-01
The geometric calibration of projectors is a demanding task, particularly for the virtual reality simulator industry. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them based on planar homographies and some requiring an extended calibration process. The aim of our research is to design a fast and user-friendly method for multi-projector calibration on analytically defined screens; an example is shown for a virtual reality Formula 1 simulator with a cylindrical screen. The proposed method combines surveying, photogrammetry and image processing approaches, and has been designed with the spatial restrictions of virtual reality simulators in mind. The method has been validated from a mathematical point of view, and the complete system, which is currently installed in a shopping mall in Spain, has been tested by different users.
A New Method to Simulate Free Surface Flows for Viscoelastic Fluid
Directory of Open Access Journals (Sweden)
Yu Cao
2015-01-01
Free surface flows arise in a variety of engineering applications. To predict the dynamic characteristics of such problems, specific numerical methods are required to accurately capture the shape of the free surface. This paper proposes a new method that combines the Arbitrary Lagrangian-Eulerian (ALE) technique with the Finite Volume Method (FVM) to simulate time-dependent viscoelastic free surface flows. Based on the open-source CFD toolbox OpenFOAM, we designed an ALE-FVM free surface simulation platform. The die-swell flow was then investigated with the proposed platform to further analyze the free surface phenomenon. The results validate the correctness and effectiveness of the proposed method for free surface simulation of both Newtonian and viscoelastic fluids.
Directory of Open Access Journals (Sweden)
Ilaria Iaconeta
2017-09-01
The simulation of large deformation problems, involving complex history-dependent constitutive laws, is of paramount importance in several engineering fields. Particular attention has to be paid to the choice of a suitable numerical technique, such that reliable results can be obtained. In this paper, a Material Point Method (MPM) and a Galerkin Meshfree Method (GMM) are presented and verified against classical benchmarks in solid mechanics. The aim is to demonstrate the good behavior of the methods in the simulation of cohesive-frictional materials, both in static and dynamic regimes and in problems dealing with large deformations. The vast majority of MPM techniques in the literature are based on some sort of explicit time integration. The techniques proposed in the current work, on the contrary, are based on implicit approaches, which can also be easily adapted to the simulation of static cases. The two methods are presented so as to highlight the similarities to, rather than the differences from, “standard” Updated Lagrangian (UL) approaches commonly employed by the Finite Element (FE) community. Although both methods are able to give good predictions, it is observed that, under very large deformation of the medium, GMM lacks robustness due to its meshfree nature, which makes the definition of the meshless shape functions more difficult and expensive than in MPM. On the other hand, the mesh-based MPM is demonstrated to be more robust and reliable for extremely large deformation cases.
Simulation based virtual learning environment in medical genetics counseling
DEFF Research Database (Denmark)
Makransky, Guido; Bonde, Mads T; Wulff, Julie S G
2016-01-01
learning environments increase students' knowledge, intrinsic motivation, and self-efficacy, and help them generalize from laboratory analyses to clinical practice and health decision-making. METHODS: An entire class of 300 University of Copenhagen first-year undergraduate students, most with a major...... in medicine, received a 2-h training session in a simulation based learning environment. The main outcomes were pre- to post- changes in knowledge, intrinsic motivation, and self-efficacy, together with post-intervention evaluation of the effect of the simulation on student understanding of everyday clinical...... practice were demonstrated. RESULTS: Knowledge (Cohen's d = 0.73), intrinsic motivation (d = 0.24), and self-efficacy (d = 0.46) significantly increased from the pre- to post-test. Low knowledge students showed the greatest increases in knowledge (d = 3.35) and self-efficacy (d = 0.61), but a non...
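The effect sizes reported above are Cohen's d values. A minimal sketch of the pooled-standard-deviation computation is shown below; the pre/post scores are made-up illustrative data, not the study's measurements.

```python
import math

def cohens_d(pre, post):
    """Cohen's d for pre/post scores using the pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    m1 = sum(pre) / n1
    m2 = sum(post) / n2
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)    # sample variances
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled                          # standardized difference

# Hypothetical knowledge scores before and after a training session:
pre = [10, 12, 11, 9, 13, 10, 11, 12]
post = [13, 14, 13, 12, 15, 13, 14, 13]
d = cohens_d(pre, post)
```

By the usual rule of thumb, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, which is how values such as the d = 0.73 knowledge gain above are interpreted.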
Ergonomics and simulation-based approach in improving facility layout
Abad, Jocelyn D.
2018-02-01
The use of simulation-based techniques in facility layout has been a popular choice in industry due to their convenience and efficient generation of results. Nevertheless, the solutions generated are not capable of addressing delays due to workers' health and safety, which significantly impact overall operational efficiency. It is, therefore, critical to incorporate ergonomics in facility design. In this study, workstation analysis was incorporated into a ProModel simulation to improve the facility layout of a garment manufacturing plant. To test the effectiveness of the method, the existing and improved facility designs were measured using comprehensive risk level, efficiency, and productivity. Results indicated that the improved facility layout yielded a decrease in comprehensive risk level and rapid upper limb assessment (RULA) score, a 78% increase in efficiency, and a 194% increase in productivity compared to the existing design, and thus proved that the approach is effective in attaining overall facility design improvement.