WorldWideScience

Sample records for mc simulation method

  1. Biasing transition rate method based on direct MC simulation for probabilistic safety assessment

    Institute of Scientific and Technical Information of China (English)

    Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang

    2017-01-01

    Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method that accounts for the dynamics of the system, but it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve this problem. The method biases the transition rates of components by adding virtual components to them in series, which increases the occurrence probability of the rare event and hence decreases the variance of the MC estimator. Several cases are used to benchmark the method. The results show that it is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation; performance is greatly improved by the biasing transition rate method.
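
The mechanism, inflating a component's transition rate so the rare event occurs often and then correcting each realization with a likelihood-ratio weight so the estimator stays unbiased, can be sketched generically (illustrative rates and mission time; not the authors' virtual-component formulation):

```python
import math
import random

def direct_mc(lam, t_mission, n, rng):
    """Crude MC estimate of P(failure before t_mission), failure time ~ Exp(lam)."""
    hits = sum(1 for _ in range(n) if rng.expovariate(lam) < t_mission)
    return hits / n

def biased_mc(lam, lam_biased, t_mission, n, rng):
    """Sample failure times at an inflated rate lam_biased and reweight each
    failing sample by the likelihood ratio of the true vs. biased density,
    keeping the estimator unbiased while hitting the rare event often."""
    total = 0.0
    for _ in range(n):
        t = rng.expovariate(lam_biased)
        if t < t_mission:
            # likelihood ratio: [lam*exp(-lam*t)] / [lam_biased*exp(-lam_biased*t)]
            total += (lam / lam_biased) * math.exp((lam_biased - lam) * t)
    return total / n

rng = random.Random(1)
lam, t_mission = 1e-4, 1.0                # rare event: P(fail) ~ 1e-4
exact = 1.0 - math.exp(-lam * t_mission)
est = biased_mc(lam, 1.0, t_mission, 20000, rng)  # biased rate makes ~63% of runs fail
```

With the true rate, 20,000 direct runs would typically see only one or two failures; under the biased rate most runs fail, and the weights shrink the variance of the estimate accordingly.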

  2. Validation of the intrinsic spatial efficiency method for non cylindrical homogeneous sources using MC simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés [Departamento de Física, Facultad de Ciencias, Universidad de Chile (Chile)

    2016-07-07

    Monte Carlo simulation of gamma spectroscopy systems is now common practice, most often using the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general, absolute method for determining the efficiency of a spectroscopy system for any extended source, but it had only been demonstrated experimentally for cylindrical sources. Given the difficulty of preparing sources of arbitrary shape, the simplest way to extend the validation is to simulate the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. Matrix effects (self-attenuation) are not considered in the simulation, so these results are only preliminary. The MC simulation is carried out with the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The results show total agreement between the absolute efficiencies determined by the two methods, with a relative bias of less than 1% in all cases.

  3. Integration of OpenMC methods into MAMMOTH and Serpent

    Energy Technology Data Exchange (ETDEWEB)

    Kerby, Leslie [Idaho National Lab. (INL), Idaho Falls, ID (United States); Idaho State Univ., Idaho Falls, ID (United States); DeHart, Mark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tumulak, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States); Univ. of Michigan, Ann Arbor, MI (United States)

    2016-09-01

    OpenMC, a Monte Carlo particle transport simulation code focused on neutron criticality calculations, contains several methods we wish to emulate in MAMMOTH and Serpent. First, research coupling OpenMC and the Multiphysics Object-Oriented Simulation Environment (MOOSE) has shown promising results. Second, the utilization of Functional Expansion Tallies (FETs) allows for a more efficient passing of multiphysics data between OpenMC and MOOSE. Both of these capabilities have been preliminarily implemented into Serpent. Results are discussed and future work recommended.
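
Functional Expansion Tallies replace binned tallies with coefficients of an orthogonal-polynomial expansion, which is what makes the multiphysics data transfer compact. A minimal, hypothetical sketch (not OpenMC's implementation) estimating Legendre coefficients of a sampled distribution on [-1, 1]:

```python
import random

def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def fet_coefficients(samples, order):
    """FET sketch: for a density f on [-1,1], the expansion coefficients are
    a_n = (2n+1)/2 * E[P_n(X)], estimated here by the sample mean over MC
    sample positions (e.g. collision or deposition sites)."""
    m = len(samples)
    return [(2 * n + 1) / 2.0 * sum(legendre(n, x) for x in samples) / m
            for n in range(order + 1)]

rng = random.Random(2)
xs = [rng.uniform(-1.0, 1.0) for _ in range(100000)]
coeffs = fet_coefficients(xs, 3)   # uniform density: a0 = 0.5, higher orders ~ 0
```

Passing a handful of coefficients instead of a fine spatial mesh is what makes this coupling channel cheap.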

  4. Developing an interface between MCNP and McStas for simulation of neutron moderators

    DEFF Research Database (Denmark)

    Klinkby, Esben Bryndt; Lauritzen, Bent; Nonbøl, Erik

    2012-01-01

    Simulations of the target-moderator-reflector system at spallation sources are conventionally carried out using MCNP/X, whereas simulations of neutron transport and instrument performance are carried out by neutron ray-tracing codes such as McStas. The coupling between the two simulation suites typically consists of providing analytical fits of MCNP/X neutron spectra to McStas. This method is generally successful, but as discussed in this paper, it has limitations, and a more direct coupling between MCNP/X and McStas could allow for more accurate simulations of e.g. complex moderator geometries, interference between beamlines, as well as shielding requirements along the neutron guides. In this paper, different possible interfaces between McStas and MCNP/X are discussed and first preliminary performance results are shown.

  5. CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC

    International Nuclear Information System (INIS)

    Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • The newly developed CAD-based Monte Carlo program SuperMC for integrated simulation of nuclear systems makes use of hybrid MC-deterministic methods and advanced computer technologies. SuperMC is designed to perform transport calculations of various types of particles; depletion and activation calculations including isotope burn-up, material activation and shutdown dose; and multi-physics coupling calculations including thermo-hydraulics, fuel performance and structural mechanics. Bi-directional automatic conversion between general CAD models and calculation models with physical settings is well supported. Results and the simulation process can be visualized with dynamic 3D datasets and geometry models. Continuous-energy cross section, burnup, activation, irradiation damage and material data are used to support the multi-process simulation. An advanced cloud computing framework makes computation- and storage-intensive simulation available as a network service to support design optimization and assessment. The modular design and generic interfaces promote flexible manipulation and coupling of external solvers. • The newly developed and incorporated advanced methods in SuperMC are introduced, including the hybrid MC-deterministic transport method, particle physical interaction treatment, multi-physics coupling calculation, automatic geometry modeling and processing, intelligent data analysis and visualization, elastic cloud computing and parallel calculation. • The functions of SuperMC2.1, integrating automatic modeling, neutron and photon transport calculation, and results and process visualization, are introduced. It has been validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. - Abstract: The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine

  6. CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC

    International Nuclear Information System (INIS)

    Wu, Y.; Song, J.; Zheng, H.; Sun, G.; Hao, L.; Long, P.; Hu, L.

    2013-01-01

    SuperMC is a Computer-Aided Design (CAD)-based Monte Carlo (MC) program for integrated simulation of nuclear systems developed by the FDS Team (China), making use of hybrid MC-deterministic methods and advanced computer technologies. The design aims, architecture and main methodology of SuperMC are presented in this paper. The treatment of multi-physics processes and the use of advanced computer technologies such as automatic geometry modeling, intelligent data analysis and visualization, high-performance parallel computing and cloud computing contribute to the efficiency of the code. SuperMC2.1, the latest version of the code for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model.

  7. New developments in the McStas neutron instrument simulation package

    International Nuclear Information System (INIS)

    Willendrup, P K; Knudsen, E B; Klinkby, E; Nielsen, T; Farhi, E; Filges, U; Lefmann, K

    2014-01-01

    The McStas neutron ray-tracing software package is a versatile tool for building accurate simulators of neutron scattering instruments at reactors, short- and long-pulsed spallation sources such as the European Spallation Source. McStas is extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project at its current state and gives an overview of the main new developments in McStas 2.0 (December 2012) and McStas 2.1 (expected fall 2013), including many new components, component parameter uniformisation, partial loss of backward compatibility, updated source brilliance descriptions, developments toward new tools and user interfaces, web interfaces and a new method for estimating beam losses and background from neutron optics.

  8. Virtual reality-based simulation system for nuclear and radiation safety SuperMC/RVIS

    Energy Technology Data Exchange (ETDEWEB)

    He, T.; Hu, L.; Long, P.; Shang, L.; Zhou, S.; Yang, Q.; Zhao, J.; Song, J.; Yu, S.; Cheng, M.; Hao, L., E-mail: liqin.hu@fds.org.cn [Chinese Academy of Sciences, Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Hefei, Anhu (China)

    2015-07-01

    Suggested work scenarios in radiation environments need to be iteratively optimized according to the ALARA principle. Based on Virtual Reality (VR) technology and a high-precision whole-body computational voxel phantom, a virtual reality-based simulation system for nuclear and radiation safety named SuperMC/RVIS has been developed for organ dose assessment and ALARA evaluation of work scenarios in radiation environments. The system architecture, ALARA evaluation strategy, advanced visualization methods and virtual reality technology used in SuperMC/RVIS are described. A case is presented to show its dose assessment and interactive simulation capabilities. (author)

  9. Virtual reality-based simulation system for nuclear and radiation safety SuperMC/RVIS

    International Nuclear Information System (INIS)

    He, T.; Hu, L.; Long, P.; Shang, L.; Zhou, S.; Yang, Q.; Zhao, J.; Song, J.; Yu, S.; Cheng, M.; Hao, L.

    2015-01-01

    Suggested work scenarios in radiation environments need to be iteratively optimized according to the ALARA principle. Based on Virtual Reality (VR) technology and a high-precision whole-body computational voxel phantom, a virtual reality-based simulation system for nuclear and radiation safety named SuperMC/RVIS has been developed for organ dose assessment and ALARA evaluation of work scenarios in radiation environments. The system architecture, ALARA evaluation strategy, advanced visualization methods and virtual reality technology used in SuperMC/RVIS are described. A case is presented to show its dose assessment and interactive simulation capabilities. (author)

  10. Monte Carlo simulations of neutron-scattering instruments using McStas

    DEFF Research Database (Denmark)

    Nielsen, K.; Lefmann, K.

    2000-01-01

    Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in instrument design is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes...

  11. Methods for Monte Carlo simulations of biomacromolecules.

    Science.gov (United States)

    Vitalis, Andreas; Pappu, Rohit V

    2009-01-01

    The state of the art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections deal with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
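
The moveset and acceptance machinery the review surveys reduces, in its simplest form, to the Metropolis criterion. A generic single-coordinate sketch (illustrative only, using a harmonic toy potential rather than any package's actual moveset):

```python
import math
import random

def metropolis(energy, x0, step, n, kT, rng):
    """Generic Metropolis MC: propose a symmetric random displacement and
    accept it with probability min(1, exp(-dE/kT)), which samples the
    Boltzmann distribution of `energy` in the canonical ensemble."""
    x, e = x0, energy(x0)
    traj = []
    for _ in range(n):
        x_new = x + rng.uniform(-step, step)
        e_new = energy(x_new)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / kT):
            x, e = x_new, e_new        # accept the move
        traj.append(x)                 # rejected moves repeat the old state
    return traj

rng = random.Random(0)
# harmonic toy potential U(x) = k x^2 / 2 with k = 1; at kT = 1 the sampled
# variance of x should approach kT/k = 1
traj = metropolis(lambda x: 0.5 * x * x, 0.0, 1.0, 200000, 1.0, rng)
var = sum(x * x for x in traj) / len(traj)
```

Real biomacromolecular movesets act on torsions or rigid-body coordinates rather than a single Cartesian variable, but the accept/reject core is the same.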

  12. Interfacing MCNPX and McStas for simulation of neutron transport

    DEFF Research Database (Denmark)

    Klinkby, Esben Bryndt; Lauritzen, Bent; Nonbøl, Erik

    2013-01-01

    Simulations of the target-moderator-reflector system at spallation sources are conventionally carried out using Monte Carlo codes such as MCNPX[1] or FLUKA[2, 3], whereas simulations of neutron transport from the moderator and the instrument response are performed by neutron ray-tracing codes such as McStas[4, 5, 6, 7]. The coupling between the two simulation suites typically consists of providing analytical fits of MCNPX neutron spectra to McStas. This method is generally successful but has limitations, as it e.g. does not allow for re-entry of neutrons into the MCNPX regime. Previous work to resolve ... geometries, backgrounds, interference between beam-lines as well as shielding requirements along the neutron guides.

  13. An improved Monte Carlo (MC) dose simulation for charged particle cancer therapy

    Energy Technology Data Exchange (ETDEWEB)

    Ying, C. K. [Advanced Medical and Dental Institute, AMDI, Universiti Sains Malaysia, Penang, Malaysia and School of Medical Sciences, Universiti Sains Malaysia, Kota Bharu (Malaysia); Kamil, W. A. [Advanced Medical and Dental Institute, AMDI, Universiti Sains Malaysia, Penang, Malaysia and Radiology Department, Hospital USM, Kota Bharu (Malaysia); Shuaib, I. L. [Advanced Medical and Dental Institute, AMDI, Universiti Sains Malaysia, Penang (Malaysia); Matsufuji, Naruhiro [Research Centre of Charged Particle Therapy, National Institute of Radiological Sciences, NIRS, Chiba (Japan)

    2014-02-12

    Heavy-particle therapies such as carbon ion therapy have become more popular because of the characteristics of charged particles and their minimal side effects for patients. Effective treatment requires high-precision dose calculation; in this work, a Geant4-based Monte Carlo simulation method was used to calculate radiation transport and dose distribution. The simulation used the same settings as the treatment room of the Heavy Ion Medical Accelerator in Chiba (HIMAC). The carbon ion beam at the isocentric gantry nozzle was simulated for the therapeutic energy of 290 MeV/u, and experimental work was carried out at the National Institute of Radiological Sciences (NIRS), Chiba, Japan, using HIMAC to confirm the accuracy and quality of the dose distributions obtained by MC methods. The Geant4-based simulated dose distributions were verified against measurements for the Bragg peak and the spread-out Bragg peak (SOBP), respectively. The verification shows that the simulated Bragg peak depth-dose and SOBP distributions are in good agreement with measurements. Overall, the study showed that Geant4 can be applied in the heavy-ion therapy field for simulation; further work is needed to refine and improve the Geant4 MC simulations.

  14. An improved Monte Carlo (MC) dose simulation for charged particle cancer therapy

    International Nuclear Information System (INIS)

    Ying, C. K.; Kamil, W. A.; Shuaib, I. L.; Matsufuji, Naruhiro

    2014-01-01

    Heavy-particle therapies such as carbon ion therapy have become more popular because of the characteristics of charged particles and their minimal side effects for patients. Effective treatment requires high-precision dose calculation; in this work, a Geant4-based Monte Carlo simulation method was used to calculate radiation transport and dose distribution. The simulation used the same settings as the treatment room of the Heavy Ion Medical Accelerator in Chiba (HIMAC). The carbon ion beam at the isocentric gantry nozzle was simulated for the therapeutic energy of 290 MeV/u, and experimental work was carried out at the National Institute of Radiological Sciences (NIRS), Chiba, Japan, using HIMAC to confirm the accuracy and quality of the dose distributions obtained by MC methods. The Geant4-based simulated dose distributions were verified against measurements for the Bragg peak and the spread-out Bragg peak (SOBP), respectively. The verification shows that the simulated Bragg peak depth-dose and SOBP distributions are in good agreement with measurements. Overall, the study showed that Geant4 can be applied in the heavy-ion therapy field for simulation; further work is needed to refine and improve the Geant4 MC simulations.

  15. An improved Monte Carlo (MC) dose simulation for charged particle cancer therapy

    International Nuclear Information System (INIS)

    Ying, C.K.; Kamil, W.A.; Shuaib, I.L.; Ying, C.K.; Kamil, W.A.

    2013-01-01

    Heavy-particle therapies such as carbon ion therapy have become more popular because of the characteristics of charged particles and their minimal side effects for patients. Effective treatment requires high-precision dose calculation; in this work, a Geant4-based Monte Carlo simulation method was used to calculate radiation transport and dose distribution. The simulation used the same settings as the treatment room of the Heavy Ion Medical Accelerator in Chiba (HIMAC). The carbon ion beam at the isocentric gantry nozzle was simulated for the therapeutic energy of 290 MeV/u, and experimental work was carried out at the National Institute of Radiological Sciences (NIRS), Chiba, Japan, using HIMAC to confirm the accuracy and quality of the dose distributions obtained by MC methods. The Geant4-based simulated dose distributions were verified against measurements for the Bragg peak and the spread-out Bragg peak (SOBP), respectively. The verification shows that the simulated Bragg peak depth-dose and SOBP distributions are in good agreement with measurements. Overall, the study showed that Geant4 can be applied in the heavy-ion therapy field for simulation; further work is needed to refine and improve the Geant4 MC simulations. (author)

  16. McStas 1.1: A tool for building neutron Monte Carlo simulations

    DEFF Research Database (Denmark)

    Lefmann, K.; Nielsen, K.; Tennant, D.A.

    2000-01-01

    McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron...

  17. GPM GROUND VALIDATION SATELLITE SIMULATED ORBITS MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation Satellite Simulated Orbits MC3E dataset is available in the Orbital database, which takes into account the atmospheric profiles, the...

  18. New developments in the McStas neutron instrument simulation package

    DEFF Research Database (Denmark)

    Willendrup, Peter Kjær; Bergbäck Knudsen, Erik; Klinkby, Esben Bryndt

    2014-01-01

    ... virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project at its current state and gives an overview of the main new developments in McStas 2.0 (December 2012) and McStas 2.1 (expected fall 2013), including many new components, component parameter uniformisation, partial loss of backward compatibility, updated source brilliance descriptions, developments toward new tools and user interfaces, web interfaces and a new method for estimating beam losses and background from neutron optics.

  19. Qualification test of few group constants generated from an MC method by the two-step neutronics analysis system McCARD/MASTER

    International Nuclear Information System (INIS)

    Park, Ho Jin; Shim, Hyung Jin; Joo, Han Gyu; Kim, Chang Hyo

    2011-01-01

    The purpose of this paper is to examine the qualification of few-group constants estimated by the Seoul National University Monte Carlo particle transport analysis code McCARD in terms of core neutronics analyses, and thus to validate the McCARD method as a few-group constant generator. Two-step core neutronics analyses are conducted for a mini and a realistic PWR by the McCARD/MASTER code system, in which McCARD is used as an MC group constant generation code and MASTER as a diffusion core analysis code. The two-step calculations of the effective multiplication factors and assembly power distributions of the two PWR cores by McCARD/MASTER are compared with reference McCARD calculations. The excellent agreement between McCARD/MASTER and the reference MC core neutronics analyses for the two PWRs leads to the conclusion that the MC method implemented in McCARD can generate few-group constants well qualified for high-accuracy two-step core neutronics calculations. (author)

  20. CloudMC: a cloud computing application for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-01-01

    This work presents CloudMC, a cloud computing application—developed in Windows Azure®, the platform of the Microsoft® cloud—for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based—the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with PENELOPE were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with an increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice. (note)
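
The reported numbers are internally consistent with Amdahl's law: a 30 h job finishing in 48.6 min on 64 instances is a speedup of about 37×, which implies a non-parallelizable fraction of roughly 1.2%. A small worked check:

```python
def amdahl_speedup(serial_frac, n):
    """Amdahl's law: speedup on n workers given a fixed serial fraction."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n)

def implied_serial_fraction(speedup, n):
    """Invert Amdahl's law to recover the serial fraction from an observed speedup."""
    return (1.0 / speedup - 1.0 / n) / (1.0 - 1.0 / n)

observed = (30 * 60) / 48.6              # 30 h on one instance vs 48.6 min on 64
s = implied_serial_fraction(observed, 64)  # non-parallelizable fraction, ~0.012
```

Note the quoted "slight deviation" means this fraction itself grows with the instance count, so a single fixed value is only an approximation.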

  1. CloudMC: a cloud computing application for Monte Carlo simulation.

    Science.gov (United States)

    Miras, H; Jiménez, R; Miras, C; Gomà, C

    2013-04-21

    This work presents CloudMC, a cloud computing application—developed in Windows Azure®, the platform of the Microsoft® cloud—for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based—the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with PENELOPE were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with an increasing number of instances. A simulation that would have required 30 h of CPU on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.

  2. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.

    2015-01-07

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
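
The paper's hazard-rate-twisting rule for choosing the twisting parameter is not spelled out in this abstract; as a stand-in, the generic importance-sampling mechanics can be sketched with ordinary exponential twisting for a sum of i.i.d. Exp(1) variables, a toy case where the exact tail probability is known for checking:

```python
import math
import random

def is_tail_prob(n, gamma, samples, rng):
    """Importance-sampling estimate of P(X1+...+Xn > gamma), Xi ~ iid Exp(1),
    via exponential twisting: sample each Xi from Exp(1-theta) and carry the
    likelihood-ratio weight exp(-theta*S) / (1-theta)^n."""
    theta = 1.0 - n / gamma          # standard twist that centres the sum at gamma
    rate = 1.0 - theta               # twisted sampling rate
    total = 0.0
    for _ in range(samples):
        s = sum(rng.expovariate(rate) for _ in range(n))
        if s > gamma:
            total += math.exp(-theta * s) / rate**n
    return total / samples

rng = random.Random(7)
est = is_tail_prob(5, 30.0, 50000, rng)
# exact value: Gamma(5,1) survival function at 30 = e^-30 * sum_{k<5} 30^k / k!
exact = math.exp(-30.0) * sum(30.0**k / math.factorial(k) for k in range(5))
```

Here the target probability is of order 1e-9; a crude MC run of the same size would almost certainly return zero, while the twisted sampler hits the event on most draws.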

  3. An Efficient Simulation Method for Rare Events

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.

  4. MC Sensor—A Novel Method for Measurement of Muscle Tension

    Directory of Open Access Journals (Sweden)

    Sašo Tomažič

    2011-09-01

    This paper presents a new muscle contraction (MC) sensor. The MC sensor is based on a novel principle whereby muscle tension is measured during muscle contraction. During the measurement, the sensor is fixed on the skin surface above the muscle, while the sensor tip applies pressure and causes an indentation of the skin, the intermediate layer directly above the muscle, and the muscle itself. The force on the sensor tip is then measured; this force is roughly proportional to the tension of the muscle. The measurement is non-invasive and selective. Selectivity of MC measurement refers to the specific muscle, or part of the muscle, being measured, and is limited by the size of the sensor tip. The sensor is relatively small and light, so measurements can be performed while the subject performs different activities. Test measurements with the MC sensor on the biceps brachii muscle under isometric conditions (elbow angle 90°) showed a high individual linear correlation between the isometric force and MC signal amplitudes (0.97 ≤ r ≤ 1). The measurements also revealed a strong correlation between the MC and electromyogram (EMG) signals, as well as good dynamic behaviour of the MC sensor. We believe that this MC sensor, when fully tested, will be a useful device for muscle mechanics diagnostics, complementary to existing methods.
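
The reported 0.97 ≤ r ≤ 1 is the sample Pearson correlation between isometric force and MC signal amplitude. A minimal sketch on synthetic data (the linear gain and noise level below are invented for illustration):

```python
import math
import random

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# synthetic stand-in data: MC amplitude roughly proportional to isometric force,
# with additive measurement noise (gain 0.5 and noise sd 2.0 are assumptions)
rng = random.Random(3)
force = [10.0 * i for i in range(1, 21)]
mc_amp = [0.5 * f + rng.gauss(0.0, 2.0) for f in force]
r = pearson_r(force, mc_amp)
```

With a nearly linear sensor response, r lands in the same high range the authors report.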

  5. McStas 1.1. A freeware package for neutron Monte Carlo ray-tracing simulations

    International Nuclear Information System (INIS)

    Lefmann, K.; Nielsen, K.

    1999-01-01

    Neutron simulation is becoming an indispensable tool for neutron instrument design. At Risø National Laboratory, a user-friendly, versatile, and fast simulation package, McStas, has been developed, which may be freely downloaded from our website. An instrument is described in the McStas meta-language and is composed of elements from the McStas component library, which is under constant development and debugging by both the users and us. The McStas front- and back-ends take care of performing the simulations and displaying their results, respectively. McStas 1.1 facilitates detailed simulations of complicated triple-axis instruments like the Risø RITA spectrometer, and it is equally well equipped for time-of-flight spectrometers. At ECNS'99, a brief tutorial of McStas including a few on-line demonstrations is presented. Further, results from the latest simulation work in the growing McStas user group are presented, and the future of the project is discussed. (author)

  6. Simulating Controlled Radical Polymerizations with mcPolymer—A Monte Carlo Approach

    Directory of Open Access Journals (Sweden)

    Georg Drache

    2012-07-01

    Utilizing model calculations may lead to a better understanding of the complex kinetics of controlled radical polymerization. We developed a universal simulation tool (mcPolymer) based on the widely used Monte Carlo simulation technique. This article focuses on the software architecture of the program, including its data management and optimization approaches. We are able to simulate polymer chains as individual objects, allowing us to gain more detailed microstructural information about the polymeric products. For all given examples of controlled radical polymerization (nitroxide-mediated radical polymerization (NMRP) homo- and copolymerization, atom transfer radical polymerization (ATRP), reversible addition-fragmentation chain transfer (RAFT) polymerization), we present detailed performance analyses demonstrating the influence of the system size, the concentrations of reactants, and the peculiarities of the data. Different possibilities are illustrated for finding an adequate balance between precision, memory consumption, and computation time of the simulation. Owing to its flexible software architecture, the application of mcPolymer is not limited to controlled radical polymerization but can be adjusted in a straightforward manner to further polymerization models.
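
The chains-as-individual-objects bookkeeping can be illustrated with a stripped-down kinetic Monte Carlo loop in which propagation and combination termination compete according to their propensities (a toy free-radical scheme with invented rate constants, not mcPolymer's architecture):

```python
import random

def kmc_polymerization(n_radicals, n_monomer, kp, kt, rng):
    """Minimal kinetic MC sketch: live chains grow by propagation and die
    pairwise by combination termination; each chain is stored as an
    individual object (here simply its length), so the full chain-length
    distribution is available at the end."""
    chains = [1] * n_radicals          # live chains, each starting at length 1
    dead = []                          # lengths of terminated chains
    monomer = n_monomer
    while chains and monomer > 0:
        n = len(chains)
        a_prop = kp * n * monomer          # propagation propensity
        a_term = kt * n * (n - 1) / 2      # pairwise termination propensity
        # choose the next reaction in proportion to its propensity
        if rng.random() < a_prop / (a_prop + a_term):
            chains[rng.randrange(n)] += 1  # one chain adds one monomer unit
            monomer -= 1
        else:
            i, j = rng.sample(range(n), 2)
            dead.append(chains[i] + chains[j])   # combination termination
            for k in sorted((i, j), reverse=True):
                chains.pop(k)
    return chains, dead, monomer

rng = random.Random(5)
live, dead, left = kmc_polymerization(50, 5000, kp=1.0, kt=0.02, rng=rng)
```

Because every chain is an object, quantities like the full length distribution or copolymer microstructure fall out of the bookkeeping for free, which is the design point the abstract emphasizes.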

  7. MC/DC and Toggle Coverage Measurement Tool for FBD Program Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eui Sub; Jung, Se Jin; Kim, Jae Yeob; Yoo, Jun Beom [Konkuk University, Seoul (Korea, Republic of)

    2016-05-15

    The functional verification of an FBD program can be implemented with various techniques such as testing and simulation. Simulation is preferable for verifying an FBD program, because it also replicates the operation of the PLC. The PLC executes repeatedly, based on its scan time, as long as the controlled system is running; likewise, the simulation technique operates continuously and sequentially. Although engineers try to verify the functionality exhaustively, it is difficult to find residual errors in the design. Even if 100% functional coverage is accomplished, code coverage may reach only 50%, which might indicate that the scenario is missing some key features of the design. Unfortunately, errors and bugs are often found at the missing points. To assure a high quality of functional verification, code coverage is as important as functional coverage. We developed a pair of tools, 'FBDSim' and 'FBDCover', for FBD simulation and coverage measurement. 'FBDSim' automatically runs a set of FBD simulation scenarios. While 'FBDSim' simulates the FBD program, it calculates the MC/DC and toggle coverage and identifies unstimulated points. After the FBD simulation is done, 'FBDCover' reads the coverage results and shows the coverages with a graphical feature and the uncovered points with a tree feature. The coverages and uncovered points can help engineers improve the quality of the simulation. We have dealt with both coverages only briefly here; they should be treated in a more concrete and rigorous manner.
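
Toggle coverage, one of the two metrics mentioned above, simply asks whether every signal has been observed both rising and falling during simulation; a minimal sketch (the trace format is a hypothetical simplification, not the tool's actual interface):

```python
def toggle_coverage(traces):
    """Toggle coverage: a signal counts as covered once it has been observed
    both rising (0 -> 1) and falling (1 -> 0) over the simulated scan cycles.
    `traces` maps a signal name to its list of sampled values."""
    covered = {}
    for name, samples in traces.items():
        rose = fell = False
        for prev, cur in zip(samples, samples[1:]):
            prev, cur = bool(prev), bool(cur)
            rose = rose or (not prev and cur)
            fell = fell or (prev and not cur)
        covered[name] = rose and fell
    n = len(covered)
    return covered, (sum(covered.values()) / n if n else 0.0)
```

Uncovered signals (the unstimulated points) are exactly the entries mapped to False.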

  8. Simulation of streamflow in the McTier Creek watershed, South Carolina

    Science.gov (United States)

    Feaster, Toby D.; Golden, Heather E.; Odom, Kenneth R.; Lowery, Mark A.; Conrads, Paul; Bradley, Paul M.

    2010-01-01

    The McTier Creek watershed is located in the Sand Hills ecoregion of South Carolina and is a small catchment within the Edisto River Basin. Two watershed hydrology models were applied to the McTier Creek watershed as part of a larger scientific investigation to expand the understanding of relations among hydrologic, geochemical, and ecological processes that affect fish-tissue mercury concentrations within the Edisto River Basin. The two models are the topography-based hydrological model (TOPMODEL) and the grid-based mercury model (GBMM). TOPMODEL uses the variable-source area concept for simulating streamflow, and GBMM uses a spatially explicit modified curve-number approach for simulating streamflow. The hydrologic output from TOPMODEL can be used explicitly to simulate the transport of mercury in separate applications, whereas the hydrology output from GBMM is used implicitly in the simulation of mercury fate and transport in GBMM. The modeling efforts were a collaboration between the U.S. Geological Survey and the U.S. Environmental Protection Agency, National Exposure Research Laboratory. Calibrations of TOPMODEL and GBMM were done independently while using the same meteorological data and the same period of record of observed data. Two U.S. Geological Survey streamflow-gaging stations were available for comparison of observed daily mean flow with simulated daily mean flow: station 02172300, McTier Creek near Monetta, South Carolina, and station 02172305, McTier Creek near New Holland, South Carolina. The period of record at the Monetta gage covers a broad range of hydrologic conditions, including a drought and a significant wet period. Calibrating the models under these extreme conditions along with the normal flow conditions included in the record enhances the robustness of the two models. Several quantitative assessments of the goodness of fit between model simulations and the observed daily mean flows were done. These included the Nash-Sutcliffe coefficient
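
The Nash-Sutcliffe coefficient used as a goodness-of-fit measure here has a standard definition, easily computed from paired observed/simulated series (a generic sketch, not the authors' code):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0 means the model does no better than simply
    predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var
```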

  9. Research on Monte Carlo simulation method of industry CT system

    International Nuclear Information System (INIS)

    Li Junli; Zeng Zhi; Qui Rui; Wu Zhen; Li Chunyan

    2010-01-01

    There are a series of radiation physics problems in the design and production of industry CT systems (ICTS), including limit quality index analysis and the effects of scattering, detector efficiency, and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of the relevant events are of very low probability, so direct simulation is very difficult, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-important sampling (PFPAIS) is introduced on the basis of auto-important sampling. Then, on the basis of PFPAIS, a dedicated ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is shown to simulate the ICTS more exactly and efficiently. Furthermore, the effects of various disturbances on the ICTS are simulated and analyzed with MCCT. To some extent, MCCT can guide research on the radiation physics problems in ICTS. (author)
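
The general idea behind variance-reduction schemes of this kind — sampling low-probability events under a biased distribution and reweighting each score by the likelihood ratio so the estimator stays unbiased — can be illustrated in a few lines (a generic importance-sampling sketch; the probabilities are made up and this is not the PFPAIS algorithm itself):

```python
import random

def biased_rare_event_estimate(p_true=1e-4, p_bias=1e-2, n=20000, seed=2):
    """Estimate the probability of a rare event by sampling it under an
    inflated probability p_bias and crediting each hit with the likelihood
    ratio p_true / p_bias, which keeps the estimator unbiased while
    producing far more scoring events than direct simulation would."""
    rng = random.Random(seed)
    weight = p_true / p_bias
    score = sum(weight for _ in range(n) if rng.random() < p_bias)
    return score / n
```

With direct simulation, n = 20000 trials of a 1e-4 event would typically score only a couple of hits; the biased run scores about two hundred, cutting the variance of the estimate accordingly.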

  10. GEM simulation methods development

    International Nuclear Information System (INIS)

    Tikhonov, V.; Veenhof, R.

    2002-01-01

    A review of methods used in the simulation of processes in gas electron multipliers (GEMs) and in the accurate calculation of detector characteristics is presented. Such detector characteristics as effective gas gain, transparency, charge collection and losses have been calculated and optimized for a number of GEM geometries and compared with experiment. A method and a new special program for calculations of detector macro-characteristics, such as signal response in a real detector readout structure, and spatial and time resolution of detectors, have been developed and used for detector optimization. A detailed treatment of signal induction on readout electrodes and of electronics characteristics is included in the new program. A method for the simulation of charging-up effects in GEM detectors is described. All methods show good agreement with experiment.

  11. Crew-MC communication and characteristics of crewmembers' sleep under conditions of simulated prolonged space flight

    Science.gov (United States)

    Shved, Dmitry; Gushin, Vadim; Yusupova, Anna; Ehmann, Bea; Balazs, Laszlo; Zavalko, Irina

    Characteristics of crew-MC communication and the psychophysiological state of the crewmembers were studied in a simulation experiment with 520-day isolation. We used the method of computerized quantitative content analysis to investigate psychologically relevant characteristics of the content of the crew's messages. Content analysis is a systematic, reproducible method of reducing a text array to a limited number of categories by means of preset, scientifically substantiated coding rules (Berelson, 1971; Krippendorff, 2004). All statements in the crew's messages to MC were coded with certain psychologically relevant content analysis categories (e.g. 'Needs', 'Negativism', 'Time'). To the 'Needs' category we attributed statements (semantic units) containing words related to the subjects' needs and their satisfaction, e.g. "necessary, need, wish, want, demand". To the 'Negativism' category we referred critical statements containing such words as "mistakes, faults, deficit, shortage". The 'Time' category embodies statements related to time perception, e.g. "hour, day, always, never, constantly". The sleep study was conducted with EEG and actigraphy techniques to assess characteristics of the crewmembers' night sleep, reflecting the crew's adaptation to the experimental conditions. The overall amount of communication (quantity of messages and their length) positively correlated with sleep effectiveness (time of sleep related to time in bed) and with delta sleep latency. Occurrences of semantic units in the categories 'Time' and 'Negativism' correlated negatively with sleep latency, and positively with delta sleep latency and sleep effectiveness. The frequency of time-related semantic units in the crew's messages increased significantly during or before the key events of the experiment (beginning of high autonomy, planetary landing simulation, etc.). It is known that subjective importance of time
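
The coding step of such a quantitative content analysis reduces, in essence, to counting occurrences of category vocabularies in each message; a minimal sketch using only the category examples quoted above (the word lists are the quoted examples, not the study's full coding scheme):

```python
import re

CATEGORIES = {
    "Needs": {"necessary", "need", "wish", "want", "demand"},
    "Negativism": {"mistakes", "faults", "deficit", "shortage"},
    "Time": {"hour", "day", "always", "never", "constantly"},
}

def count_categories(message, categories=CATEGORIES):
    """Count how many words of a message fall into each predefined category;
    fixing the coding rules in advance is what makes the reduction of a
    text array to category counts reproducible."""
    words = re.findall(r"[a-z']+", message.lower())
    return {cat: sum(w in vocab for w in words)
            for cat, vocab in categories.items()}
```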

  12. Steady-state molecular dynamics simulation of vapor to liquid nucleation with McDonald's demon

    International Nuclear Information System (INIS)

    Horsch, M.; Miroshnichenko, S.; Vrabec, J.

    2009-01-01

    Grand canonical MD with McDonald's demon is discussed in the present contribution and applied for sampling both nucleation kinetics and steady-state properties of a supersaturated vapor. The idea behind the new approach is to simulate the production of clusters up to a given size for a specified supersaturation. The classical nucleation theory is found to overestimate the free energy of cluster formation and deviate by two orders of magnitude from the nucleation rate below the triple point at high supersaturations.
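
For reference, the classical-nucleation-theory quantities the simulation is compared against follow from two textbook formulas (standard CNT in its capillarity form; the symbols are generic, not the paper's notation):

```python
import math

def cnt_critical_cluster(sigma, rho_liq, dmu):
    """Classical nucleation theory: critical radius r* = 2*sigma/(rho*dmu)
    and free-energy barrier dG* = 16*pi*sigma^3 / (3*(rho*dmu)^2), where
    sigma is the surface tension, rho_liq the liquid number density and
    dmu the chemical-potential difference per particle (supersaturation)."""
    r_star = 2.0 * sigma / (rho_liq * dmu)
    dg_star = 16.0 * math.pi * sigma**3 / (3.0 * (rho_liq * dmu) ** 2)
    return r_star, dg_star
```

The two are linked by dG* = (4*pi/3)*sigma*(r*)^2, a handy consistency check; overestimating dG*, as the abstract reports for CNT, exponentially suppresses the predicted nucleation rate.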

  13. Methods of channeling simulation

    International Nuclear Information System (INIS)

    Barrett, J.H.

    1989-06-01

    Many computer simulation programs have been used to interpret experiments almost since the first channeling measurements were made. Certain aspects of these programs are important in how accurately they simulate ions in crystals; among these are the manner in which the structure of the crystal is incorporated, how any quantity of interest is computed, what ion-atom potential is used, how deflections are computed from the potential, incorporation of thermal vibrations of the lattice atoms, correlations of thermal vibrations, and form of stopping power. Other aspects of the programs are included to improve the speed; among these are table lookup, importance sampling, and the multiparameter method. It is desirable for programs to facilitate incorporation of special features of interest in special situations; examples are relaxations and enhanced vibrations of surface atoms, easy substitution of an alternate potential for comparison, change of row directions from layer to layer in strained-layer lattices, and different vibration amplitudes for substitutional solute or impurity atoms. Ways of implementing all of these aspects and features and the consequences of them will be discussed. 30 refs., 3 figs

  14. The McGill simulator for endoscopic sinus surgery (MSESS): a validation study.

    Science.gov (United States)

    Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Saad, Elias; Funnell, W Robert J; Tewfik, Marc A

    2014-10-24

    Endoscopic sinus surgery (ESS) is a technically challenging procedure, associated with a significant risk of complications. Virtual reality simulation has demonstrated benefit in many disciplines as an important educational tool for surgical training. Within the field of rhinology, there is a lack of ESS simulators with appropriate validity evidence supporting their integration into residency education. The objectives of this study were to evaluate the acceptability, perceived realism and benefit of the McGill Simulator for Endoscopic Sinus Surgery (MSESS) among medical students, otolaryngology residents and faculty, and to present evidence supporting its ability to differentiate users based on their level of training through the performance metrics. 10 medical students, 10 junior residents, 10 senior residents and 3 expert sinus surgeons performed anterior ethmoidectomies, posterior ethmoidectomies and wide sphenoidotomies on the MSESS. Performance metrics related to quality (e.g. percentage of tissue removed), efficiency (e.g. time, path length, bimanual dexterity, etc.) and safety (e.g. contact with no-go zones, maximum applied force, etc.) were calculated. All users completed a post-simulation questionnaire related to realism, usefulness and perceived benefits of training on the MSESS. The MSESS was found to be realistic and useful for training surgical skills, with scores of 7.97 ± 0.29 and 8.57 ± 0.69, respectively, on a 10-point rating scale. Most students and residents (29/30) believed that it should be incorporated into their curriculum. There were significant differences between novice surgeons (10 medical students and 10 junior residents) and senior surgeons (10 senior residents and 3 sinus surgeons) in performance metrics related to quality. This simulator may be a potential resource to help fill the void in endoscopic sinus surgery training.

  15. MC 93 - Proceedings of the International Conference on Monte Carlo Simulation in High Energy and Nuclear Physics

    Science.gov (United States)

    Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi

    1994-01-01

    Calorimeter Geometry * Simulations with EGS4/PRESTA for Thin Si Sampling Calorimeter * SIBERIA -- Monte Carlo Code for Simulation of Hadron-Nuclei Interactions * CALOR89 Predictions for the Hanging File Test Configurations * Estimation of the Multiple Coulomb Scattering Error for Various Numbers of Radiation Lengths * Monte Carlo Generator for Nuclear Fragmentation Induced by Pion Capture * Calculation and Randomization of Hadron-Nucleus Reaction Cross Section * Developments in GEANT Physics * Status of the MC++ Event Generator Toolkit * Theoretical Overview of QCD Event Generators * Random Numbers? * Simulation of the GEM LKr Barrel Calorimeter Using CALOR89 * Recent Improvement of the EGS4 Code, Implementation of Linearly Polarized Photon Scattering * Interior-Flux Simulation in Enclosures with Electron-Emitting Walls * Some Recent Developments in Global Determinations of Parton Distributions * Summary of the Workshop on Simulating Accelerator Radiation Environments * Simulating the SDC Radiation Background and Activation * Applications of Cluster Monte Carlo Method to Lattice Spin Models * PDFLIB: A Library of All Available Parton Density Functions of the Nucleon, the Pion and the Photon and the Corresponding αs Calculations * DTUJET92: Sampling Hadron Production at Supercolliders * A New Model for Hadronic Interactions at Intermediate Energies for the FLUKA Code * Matrix Generator of Pseudo-Random Numbers * The OPAL Monte Carlo Production System * Monte Carlo Simulation of the Microstrip Gas Counter * Inner Detector Simulations in ATLAS * Simulation and Reconstruction in H1 Liquid Argon Calorimetry * Polarization Decomposition of Fluxes and Kinematics in ep Reactions * Towards Object-Oriented GEANT -- ProdiG Project * Parallel Processing of AMY Detector Simulation on Fujitsu AP1000 * Enigma: An Event Generator for Electron-Photon- or Pion-Induced Events in the ~1 GeV Region * SSCSIM: Development and Use by the Fermilab SDC Group * The GEANT-CALOR Interface

  16. On the development of a comprehensive MC simulation model for the Gamma Knife Perfexion radiosurgery unit

    Science.gov (United States)

    Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.

    2016-02-01

    This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked in terms of relative dose profiles (RDPs) and output factors (OFs), against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with the comprehensive PFX model MC simulations, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists in directional biasing of the 60Co photon emission while the second refers to the implementation of simplified source geometric models. The effect of the dose scoring volume dimensions in OF calculations accuracy was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819  ±  0.004 and 0.8941  ±  0.0013 were calculated for the 4 mm and 8 mm collimator, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of photon emission resulted in a time efficiency gain factor of up to 210 with respect to the isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources’ stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit considerable dosimetric effect. In conclusion, the results of this work suggest that any PFX

  17. Simulation tools for scattering corrections in spectrally resolved x-ray computed tomography using McXtrace

    Science.gov (United States)

    Busi, Matteo; Olsen, Ulrik L.; Knudsen, Erik B.; Frisvad, Jeppe R.; Kehres, Jan; Dreier, Erik S.; Khalil, Mohamad; Haldrup, Kristoffer

    2018-03-01

    Spectral computed tomography is an emerging imaging method that involves using recently developed energy discriminating photon-counting detectors (PCDs). This technique enables measurements at isolated high-energy ranges, in which the dominating undergoing interaction between the x-ray and the sample is the incoherent scattering. The scattered radiation causes a loss of contrast in the results, and its correction has proven to be a complex problem, due to its dependence on energy, material composition, and geometry. Monte Carlo simulations can utilize a physical model to estimate the scattering contribution to the signal, at the cost of high computational time. We present a fast Monte Carlo simulation tool, based on McXtrace, to predict the energy resolved radiation being scattered and absorbed by objects of complex shapes. We validate the tool through measurements using a CdTe single PCD (Multix ME-100) and use it for scattering correction in a simulation of a spectral CT. We found the correction to account for up to 7% relative amplification in the reconstructed linear attenuation. It is a useful tool for x-ray CT to obtain a more accurate material discrimination, especially in the high-energy range, where the incoherent scattering interactions become prevailing (>50 keV).

  18. One-dimensional simulation of stratification and dissolved oxygen in McCook Reservoir, Illinois

    Science.gov (United States)

    Robertson, Dale M.

    2000-01-01

    As part of the Chicagoland Underflow Plan/Tunnel and Reservoir Plan, the U.S. Army Corps of Engineers, Chicago District, plans to build McCook Reservoir, a flood-control reservoir to store combined stormwater and raw sewage (combined sewage). To prevent the combined sewage in the reservoir from becoming anoxic and producing hydrogen sulfide gas, a coarse-bubble aeration system will be designed and installed on the basis of results from CUP 0-D, a zero-dimensional model, and MAC3D, a three-dimensional model. Two inherent assumptions in the application of MAC3D are that density stratification in the simulated water body is minimal or not present and that surface heat transfers are unimportant and, therefore, may be neglected. To test these assumptions, the previously tested, one-dimensional Dynamic Lake Model (DLM) was used to simulate changes in temperature and dissolved oxygen in the reservoir after a 1-in-100-year event. Results from model simulations indicate that the assumptions made in MAC3D application are valid as long as the aeration system, with an air-flow rate of 1.2 cubic meters per second or more, is operated while the combined sewage is stored in the reservoir. Results also indicate that the high biochemical oxygen demand of the combined sewage will quickly consume the dissolved oxygen stored in the reservoir and the dissolved oxygen transferred through the surface of the reservoir; therefore, oxygen must be supplied by either the rising bubbles of the aeration system (a process not incorporated in DLM) or some other technique to prevent anoxia.

  19. Evaluation of the McMahon Competence Assessment Instrument for Use with Midwifery Students During a Simulated Shoulder Dystocia.

    Science.gov (United States)

    McMahon, Erin; Jevitt, Cecilia; Aronson, Barbara

    2018-03-01

    Intrapartum emergencies occur infrequently but require a prompt and competent response from the midwife to prevent morbidity and mortality of the woman, fetus, and newborn. Simulation provides the opportunity for student midwives to develop competence in a safe environment. The purpose of this study was to determine the inter-rater reliability of the McMahon Competence Assessment Instrument (MCAI) for use with student midwives during a simulated shoulder dystocia scenario. A pilot study using a nonprobability convenience sample was used to evaluate the MCAI. Content validity indices were calculated for the individual items and the overall instrument using data from a panel of expert reviewers. Fourteen student midwives consented to be video recorded while participating in a simulated shoulder dystocia scenario. Three faculty raters used the MCAI to evaluate the student performance. These quantitative data were used to determine the inter-rater reliability of the MCAI. The intraclass correlation coefficient (ICC) was used to assess the inter-rater reliability of MCAI scores between 2 or more raters. The ICC was 0.86 (95% confidence interval, 0.60-0.96). Fleiss's kappa was calculated to determine the inter-rater reliability for individual items. Twenty-three of the 42 items corresponded to excellent strength of agreement. This study demonstrates a method to determine the inter-rater reliability of a competence assessment instrument to be used with student midwives. Data produced by this study were used to revise and improve the instrument. Additional research will further document the inter-rater reliability and can be used to determine changes in student competence. Valid and reliable methods of assessment will encourage the use of simulation to efficiently develop the competence of student midwives. © 2018 by the American College of Nurse-Midwives.
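
Fleiss's kappa, used here for item-level agreement among the three raters, has a compact closed form; a generic sketch (the input layout is assumed for illustration, not taken from the study):

```python
def fleiss_kappa(ratings):
    """Fleiss's kappa for agreement among a fixed number of raters.
    ratings[i][j] = number of raters who assigned item i to category j;
    every row must sum to the same rater count n."""
    N = len(ratings)                 # number of items
    n = sum(ratings[0])              # raters per item
    k = len(ratings[0])              # number of categories
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N             # mean observed agreement
    P_e = sum(p * p for p in p_j)    # agreement expected by chance
    return (P_bar - P_e) / (1.0 - P_e)
```

Kappa is 1 for perfect agreement and 0 when observed agreement equals chance; the per-item terms P_i also show which items the raters disagreed on.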

  20. Precipitation kinetics in binary Fe–Cu and ternary Fe–Cu–Ni alloys via kMC method

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2017-08-01

    Full Text Available The precipitation kinetics of coherent Cu-rich precipitates (CRPs) in binary Fe–Cu and ternary Fe–Cu–Ni alloys during thermal aging was modelled by the kinetic Monte Carlo (kMC) method. Good agreement in the precipitation kinetics of Fe–Cu was found between the simulation and experimental results, as observed by means of the advancement factor and cluster number density. This agreement was obtained owing to the correct description of the fast cluster mobility. The simulation results indicate that the effects of Ni are two-fold: Ni promotes the nucleation of Cu clusters, while the precipitation kinetics appears to be delayed by Ni addition during the coarsening stage. The apparent delay in precipitation kinetics is revealed to be related to the cluster mobility, which is reduced by Ni addition. This reduction effect weakens as the CRP sizes increase. The results provide a perspective on the effects of solute elements on Cu precipitation kinetics through the consideration of the non-conventional cluster growth mechanism, and kMC is verified to be a powerful approach for this purpose.
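
The elementary step of any rejection-free kMC (residence-time, or BKL) simulation of this kind picks the next event in proportion to its rate and draws an exponentially distributed time increment; a generic sketch (not the authors' implementation; the uniform deviates are passed in explicitly so a single step is reproducible):

```python
import math

def kmc_step(rates, u1, u2):
    """One rejection-free kMC step. Given event rates and two uniform (0,1)
    deviates, select event k with probability rates[k]/R and advance the
    clock by dt = -ln(u2)/R, where R = sum(rates)."""
    total = sum(rates)
    target = u1 * total
    acc = 0.0
    for k, rate in enumerate(rates):
        acc += rate
        if target < acc:
            break
    return k, -math.log(u2) / total
```

In a precipitation model the event list would hold, for example, vacancy-atom exchange rates on the lattice; the same two-line selection/clock-advance logic applies regardless of the event catalogue.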

  1. Simulating vegetation response to climate change in the Blue Mountains with MC2 dynamic global vegetation model

    Directory of Open Access Journals (Sweden)

    John B. Kim

    2018-04-01

    Full Text Available Warming temperatures are projected to greatly alter many forests in the Pacific Northwest. MC2 is a dynamic global vegetation model: a climate-aware, process-based, gridded vegetation model. We calibrated and ran MC2 simulations for the Blue Mountains Ecoregion, Oregon, USA, at 30 arc-second spatial resolution. We calibrated MC2 using the best available spatial datasets from land managers. We ran future simulations using climate projections from four global circulation models (GCMs) under representative concentration pathway 8.5. Under this scenario, forest productivity is projected to increase as the growing season lengthens, and fire occurrence is projected to increase steeply throughout the century, with burned area peaking early- to mid-century. Subalpine forests are projected to disappear, and the coniferous forests to contract by 32.8%. Large portions of the dry and mesic forests are projected to convert to woodlands, unless precipitation were to increase. Low levels of change are projected for the Umatilla National Forest consistently across the four GCMs. For the Wallowa-Whitman and the Malheur National Forests, forest conversions are projected to vary more across the four GCM-based simulations, reflecting high levels of uncertainty arising from climate. For simulations based on three of the four GCMs, sharply increased fire activity results in decreases in forest carbon stocks by mid-century, and the fire activity catalyzes widespread biome shift across the study area. We document the full cycle of a structured approach to calibrating and running MC2 for transparency and to serve as a template for applications of MC2. Keywords: Climate change, Regional change, Simulation, Calibration, Forests, Fire, Dynamic global vegetation model

  2. McGill wetland model: evaluation of a peatland carbon simulator developed for global assessments

    Directory of Open Access Journals (Sweden)

    F. St-Hilaire

    2010-11-01

    Full Text Available We developed the McGill Wetland Model (MWM) based on the general structure of the Peatland Carbon Simulator (PCARS) and the Canadian Terrestrial Ecosystem Model. Three major changes were made to PCARS: (1) the light use efficiency model of photosynthesis was replaced with a biogeochemical description of photosynthesis; (2) the description of autotrophic respiration was changed to be consistent with the formulation of photosynthesis; and (3) the cohort, multilayer soil respiration model was changed to a simple one-box peat decomposition model, divided into oxic and anoxic zones by an effective water table, plus a one-year residence time litter pool. MWM was then evaluated by comparing its output to the estimates of net ecosystem production (NEP), gross primary production (GPP) and ecosystem respiration (ER) from 8 years of continuous measurements at the Mer Bleue peatland, a raised ombrotrophic bog located in southern Ontario, Canada (index of agreement [dimensionless]: NEP = 0.80, GPP = 0.97, ER = 0.97; systematic RMSE [g C m−2 d−1]: NEP = 0.12, GPP = 0.07, ER = 0.14; unsystematic RMSE: NEP = 0.15, GPP = 0.27, ER = 0.23). Simulated moss NPP approximates what would be expected for a bog peatland, but shrub NPP appears to be underestimated. Sensitivity analysis revealed that the model output did not change greatly due to variations in water table because of offsetting responses in production and respiration, but that even a modest temperature increase could lead to converting the bog from a sink to a source of CO2. General weaknesses and further developments of MWM are discussed.

  3. M.C. simulation of GEM neutron beam monitor with 10B

    International Nuclear Information System (INIS)

    Wang Yanfeng; Sun Zhijia; Liu Ben; Zhou Jianrong; Yang Guian; Dong Jing; Xu Hong; Zhou Liang; Huang Guangming; Yang Lei; Li Yi

    2010-01-01

    The neutron beam monitor based on a GEM detector has been carefully studied with the Monte Carlo method in this article. The simulation framework includes ANSYS and Garfield, which were used to compute the electric field of the GEM foils and to simulate the movement of electrons in the gas mixture, respectively. The focusing and extraction coefficients of the GEM foils have been obtained. Based on these preliminary results, the performance of the monitor was improved. (authors)

  4. BER Performance Simulation of Generalized MC DS-CDMA System with Time-Limited Blackman Chip Waveform

    Directory of Open Access Journals (Sweden)

    I. Develi

    2010-09-01

    Full Text Available Multiple access interference encountered in multicarrier direct-sequence code-division multiple access (MC DS-CDMA) is the most important difficulty; it depends mainly on the correlation properties of the spreading sequences as well as the shape of the chip waveforms employed. In this paper, the bit error rate (BER) performance of the generalized MC DS-CDMA system that employs a time-limited Blackman chip waveform is presented for Nakagami-m fading channels. Simulation results show that the use of the Blackman chip waveform can improve the BER performance of the generalized MC DS-CDMA system, as compared to the performances achieved by using the time-limited chip waveforms in the literature.
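
The time-limited Blackman chip waveform in question is the standard Blackman window shape confined to one chip interval Tc; a sketch of sampling it (illustrative, with an assumed sample count per chip):

```python
import math

def blackman_chip(n_samples=64):
    """Sample the Blackman chip waveform over one chip interval:
    w(t) = 0.42 - 0.5*cos(2*pi*t/Tc) + 0.08*cos(4*pi*t/Tc), 0 <= t < Tc.
    The smooth roll-off toward the chip edges reduces spectral sidelobes,
    which is what lowers the cross-correlation-driven multiple access
    interference relative to a rectangular chip."""
    return [0.42
            - 0.5 * math.cos(2.0 * math.pi * i / n_samples)
            + 0.08 * math.cos(4.0 * math.pi * i / n_samples)
            for i in range(n_samples)]
```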

  5. Shaping ability of NT Engine and McXim rotary nickel-titanium instruments in simulated root canals. Part 1.

    Science.gov (United States)

    Thompson, S A; Dummer, P M

    1997-07-01

    The aim of this study was to determine the shaping ability of NT Engine and McXim nickel-titanium rotary instruments in simulated root canals. In all, 40 canals consisting of four different shapes in terms of angle and position of curvature were prepared by a combination of NT Engine and McXim instruments using the technique recommended by the manufacturer. Part 1 of this two-part report describes the efficacy of the instruments in terms of preparation time, instrument failure, canal blockages, loss of canal length and three-dimensional canal form. Overall, the mean preparation time for all canals was 6.01 min, with canal shape having a significant effect. NT Engine and McXim instruments prepared canals rapidly, with few deformations, no canal blockages and with minimal change in working length. The three-dimensional form of the canals demonstrated good flow and taper characteristics.

  6. McStas

    DEFF Research Database (Denmark)

    Willendrup, Peter Kjær; Farhi, Emmanuel; Bergbäck Knudsen, Erik

    2014-01-01

    The McStas neutron ray-tracing simulation package is a collaboration between Risø DTU, ILL, University of Copenhagen and the PSI. During its lifetime, McStas has evolved to become the world leading software in the area of neutron scattering simulations for instrument design, optimisation and virtual experiments. McStas is being actively used for the design-update of the European Spallation Source (ESS) in Lund. This paper includes an introduction to the McStas package and recent and ongoing simulation projects. Further, new features in releases McStas 1.12c and 2.0 are discussed.

  7. Anatomic and histological characteristics of vagina reconstructed by McIndoe method

    Directory of Open Access Journals (Sweden)

    Kozarski Jefta

    2009-01-01

    Full Text Available Background/Aim. Congenital absence of the vagina has been known since the times of ancient Greece. According to the literature, the incidence is 1/4 000 to 1/20 000. Treatment of this anomaly includes non-operative and operative procedures. The McIndoe procedure uses a split skin graft by Thiersch. The aim of this study was to establish the anatomic and histological characteristics of a vagina reconstructed by the McIndoe method in Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome and compare them with a normal vagina. Methods. The study included 21 patients aged 18 and over with the congenital anomaly known as aplasia vaginae within MRKH syndrome. The patients were operated on by a plastic surgeon using the McIndoe method. The study was a retrospective review of the data from the history of the disease, objective and gynecological examination, and cytological analysis of native preparations of vaginal smears (Papanicolaou). Comparatively, 21 females aged 18 and over with normal vaginas were also studied. All the subjects were divided into groups R (reconstructed) and C (control), and into subgroups according to age: up to 30 years (1R, 1C), from 30 to 50 (2R, 2C), and over 50 (3R, 3C). Statistical data processing was performed using Student's t-test and the Mann-Whitney U-test. A value of p < 0.05 was considered statistically significant. Results. The results show that there are differences in the depth and width of a reconstructed vagina, but the obtained values are still in the range of normal ones. Cytological differences between a reconstructed and a normal vagina were found. Conclusion. A reconstructed vagina is smaller than a normal one regarding depth and width, but within the range of normal values. A split skin graft used in the reconstruction keeps its own cytological, i.e. histological and, thus, biological characteristics.

  8. A single-column particle-resolved model for simulating the vertical distribution of aerosol mixing state: WRF-PartMC-MOSAIC-SCM v1.0

    Science.gov (United States)

    Curtis, Jeffrey H.; Riemer, Nicole; West, Matthew

    2017-11-01

    The PartMC-MOSAIC particle-resolved aerosol model was previously developed to predict the aerosol mixing state as it evolves in the atmosphere. However, the modeling framework was limited to a zero-dimensional box model approach without resolving spatial gradients in aerosol concentrations. This paper presents the development of stochastic particle methods to simulate turbulent diffusion and dry deposition of aerosol particles in a vertical column within the planetary boundary layer. The new model, WRF-PartMC-MOSAIC-SCM, resolves the vertical distribution of aerosol mixing state. We verified the new algorithms with analytical solutions for idealized test cases and illustrate the capabilities with results from a 2-day urban scenario that shows the evolution of black carbon mixing state in a vertical column.
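The stochastic particle treatment of turbulent diffusion and dry deposition described above can be illustrated with a toy column random walk. The column height, eddy diffusivity, time step and boundary treatment below are illustrative assumptions, not the WRF-PartMC-MOSAIC-SCM algorithms:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical column parameters; not taken from WRF-PartMC-MOSAIC-SCM.
H, K, dt = 1000.0, 10.0, 1.0   # height (m), eddy diffusivity (m^2/s), step (s)
N = 50_000

z = rng.uniform(0.0, H, N)      # initial particle heights
deposited = 0

for _ in range(3600):           # one hour of simulated time
    z += rng.normal(0.0, np.sqrt(2.0 * K * dt), z.size)  # turbulent diffusion
    z = np.where(z > H, 2.0 * H - z, z)                  # reflect at the top
    hit = z < 0.0                                        # reached the surface
    deposited += int(hit.sum())
    z = z[~hit]                 # dry deposition modelled as a perfect sink

print(f"deposited fraction after 1 h: {deposited / N:.3f}")
```

Treating the surface as a perfect sink is the crudest deposition model; a deposition-velocity-based absorption probability would be the next refinement.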

  9. iFit: a new data analysis framework. Applications for data reduction and optimization of neutron scattering instrument simulations with McStas

    DEFF Research Database (Denmark)

Farhi, E.; Debab, Y.; Willendrup, Peter Kjær

    2014-01-01

    and noisy problems. These optimizers can then be used to fit models onto data objects, and optimize McStas instrument simulations. As an application, we propose a methodology to analyse neutron scattering measurements in a pure Monte Carlo optimization procedure using McStas and iFit. As opposed...

  10. Advanced sources and optical components for the McStas neutron scattering instrument simulation package

    DEFF Research Database (Denmark)

    Farhi, E.; Monzat, C.; Arnerin, R.

    2014-01-01

    -up, including lenses and prisms. A new library for McStas adds the ability to describe any geometrical arrangement as a set of polygons. This feature has been implemented in most sample scattering components such as Single_crystal, Incoherent, Isotropic_Sqw (liquids/amorphous/powder), PowderN as well...

  11. Is Mc Leod's Patent Pending Naturoptic Method for Restoring Healthy Vision Easy and Verifiable?

    Science.gov (United States)

    Niemi, Paul; McLeod, David; McLeod, Roger

    2006-10-01

RDM asserts that he and people he has trained can assign visual tasks from standard vision assessment charts, or better replacements, proceeding through incremental changes and such rapid improvements that healthy vision can be restored. Mc Leod predicts that in visual tasks with pupil diameter changes, wavelengths change proportionally. A longer, quasimonochromatic wavelength interval is coincident with foveal cones, and rods. A shorter, partially overlapping interval separately aligns with extrafoveal cones. Wavelengths follow the Airy disk radius formula. Niemi can evaluate if it is true that visual health merely requires triggering and facilitating the demands of possibly overridden feedback signals. The method and process are designed so that potential Naturopathic and other select graduate students should be able to self-fund their higher-level educations from preferential franchising arrangements of earnings while they are in certain programs.

  12. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    Science.gov (United States)

    Bieda, Bogusław

    2014-05-15

    The purpose of the paper is to present the results of application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) data of Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP was analyzed and used for MC simulation of the LCI model. In order to describe random nature of all main products used in this study, normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in the steel production management. Copyright © 2013 Elsevier B.V. All rights reserved.
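The record above describes drawing each inventory flow from a normal distribution and aggregating results over 10,000 trials. A minimal sketch of that workflow in Python, using NumPy in place of the Crystal Ball® spreadsheet add-in; the production figures and relative standard deviations are invented placeholders, not the MSP 2005 data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual production figures (tonnes) and assumed relative
# standard deviations; the real MSP 2005 inventory values are not shown here.
products = {
    "steel":    (5_000_000, 0.05),
    "coke":     (1_200_000, 0.05),
    "pig_iron": (4_400_000, 0.05),
    "sinter":   (6_100_000, 0.05),
}

TRIALS = 10_000  # matches the 10,000 trials reported in the study

# One normal sample vector per product, as the paper assumes normality.
samples = {
    name: rng.normal(mean, mean * rel_sd, TRIALS)
    for name, (mean, rel_sd) in products.items()
}

total = sum(samples.values())  # simple aggregate inventory flow per trial

print(f"mean total = {total.mean():,.0f} t")
print(f"95% interval = [{np.percentile(total, 2.5):,.0f}, "
      f"{np.percentile(total, 97.5):,.0f}] t")
```

The percentile interval plays the role of the frequency charts produced by Crystal Ball; correlated flows would need a joint (multivariate) sampling step instead of independent draws.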

  13. New methods in plasma simulation

    International Nuclear Information System (INIS)

    Mason, R.J.

    1990-01-01

The development of implicit methods of particle-in-cell (PIC) computer simulation in recent years, and their merger with older hybrid methods, has created a new arsenal of simulation techniques for the treatment of complex practical problems in plasma physics. The new implicit hybrid codes are aimed at transitional problems that lie somewhere between the long time scale, high density regime associated with MHD modeling, and the short time scale, low density regime appropriate to PIC techniques. This transitional regime arises in ICF coronal plasmas, in pulsed power plasma switches, in Z-pinches, and in foil implosions. Here, we outline how such a merger of implicit and hybrid methods has been carried out, specifically in the ANTHEM computer code, and demonstrate the utility of implicit hybrid simulation in applications. 25 refs., 5 figs

  14. Shaping ability of NT Engine and McXim rotary nickel-titanium instruments in simulated root canals. Part 2.

    Science.gov (United States)

    Thompson, S A; Dummer, P M

    1997-07-01

The aim of this laboratory-based study was to determine the shaping ability of NT Engine and McXim nickel-titanium rotary instruments in simulated root canals. A total of 40 canals with four different shapes in terms of angle and position of curve were prepared with NT Engine and McXim instruments, using the technique recommended by the manufacturer. Part 2 of this report describes the efficacy of the instruments in terms of the prevalence of canal aberrations, the amount and direction of canal transportation and overall postoperative shape. Pre- and postoperative images of the canals were taken using a video camera attached to a computer with image analysis software. The pre- and postoperative views were superimposed to highlight the amount and position of material removed during preparation. No zips, elbows, perforations or danger zones were created during preparation. Forty-two per cent of canals had ledges on the outer aspect of the curve, the majority of which (16 out of 17) occurred in canals with short acute curves. There were significant differences. NT Engine and McXim rotary nickel-titanium instruments created no aberrations other than ledges and produced only minimal transportation. The overall shape of canals was good.

  15. Multilevel and Multi-index Monte Carlo methods for the McKean–Vlasov equation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-09-12

We address the approximation of functionals depending on a system of particles, described by stochastic differential equations (SDEs), in the mean-field limit when the number of particles approaches infinity. This problem is equivalent to estimating the weak solution of the limiting McKean–Vlasov SDE. To that end, our approach uses systems with finite numbers of particles and a time-stepping scheme. In this case, there are two discretization parameters: the number of time steps and the number of particles. Based on these two parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that, in the best case, the optimal work complexity of MLMC, to estimate the functional in one typical setting with an error tolerance of TOL, is O(TOL⁻³) when using the partitioning estimator and the Milstein time-stepping scheme. We also consider a method that uses the recent Multi-index Monte Carlo method and show an improved work complexity of O(TOL⁻² log(TOL⁻¹)²) in the same typical setting. Our numerical experiments are carried out on the so-called Kuramoto model, a system of coupled oscillators.
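A bare-bones multilevel Monte Carlo estimator of the kind discussed can be sketched for a plain geometric Brownian motion with Euler stepping (not the McKean–Vlasov particle system or the partitioning estimator of the paper); the key idea is coupling coarse and fine paths through shared Brownian increments:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlmc_level(level, n_paths, T=1.0, mu=0.05, sigma=0.2, x0=1.0, M=2):
    """One MLMC level for E[X_T] of a GBM, using coupled Euler paths.

    Level 0 returns a plain MC mean; higher levels return the mean of the
    fine-minus-coarse correction, both paths driven by the same increments.
    """
    n_fine = M ** level
    dt_f = T / n_fine
    dw = rng.normal(0.0, np.sqrt(dt_f), (n_fine, n_paths))

    x_f = np.full(n_paths, x0)
    for k in range(n_fine):
        x_f += mu * x_f * dt_f + sigma * x_f * dw[k]

    if level == 0:
        return x_f.mean()

    n_coarse = n_fine // M
    dt_c = T / n_coarse
    x_c = np.full(n_paths, x0)
    for k in range(n_coarse):
        dw_c = dw[M * k:M * (k + 1)].sum(axis=0)  # coarsened increments
        x_c += mu * x_c * dt_c + sigma * x_c * dw_c
    return (x_f - x_c).mean()

# Geometrically decreasing sample sizes across levels (a typical allocation)
estimate = sum(mlmc_level(l, n) for l, n in enumerate([40_000, 10_000, 2_500, 625]))
print(f"MLMC estimate: {estimate:.4f}  (exact E[X_T] = {np.exp(0.05):.4f})")
```

For the mean-field setting of the paper, the number of particles becomes a second discretization parameter alongside the number of time steps, which is exactly what motivates the Multi-index extension.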

  16. Human reliability-based MC and A methods for evaluating the effectiveness of protecting nuclear material - 59379

    International Nuclear Information System (INIS)

    Duran, Felicia A.; Wyss, Gregory D.

    2012-01-01

    Material control and accountability (MC and A) operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. MC and A activities, from monitoring to inventory measurements, provide critical information about target materials and define security elements that are useful against insider threats. However, these activities have been difficult to characterize in ways that are compatible with the path analysis methods that are used to systematically evaluate the effectiveness of a site's protection system. The path analysis methodology focuses on a systematic, quantitative evaluation of the physical protection component of the system for potential external threats, and often calculates the probability that the physical protection system (PPS) is effective (PE) in defeating an adversary who uses that attack pathway. In previous work, Dawson and Hester observed that many MC and A activities can be considered a type of sensor system with alarm and assessment capabilities that provide recurring opportunities for 'detecting' the status of critical items. This work has extended that characterization of MC and A activities as probabilistic sensors that are interwoven within each protection layer of the PPS. In addition, MC and A activities have similar characteristics to operator tasks performed in a nuclear power plant (NPP) in that the reliability of these activities depends significantly on human performance. Many of the procedures involve human performance in checking for anomalous conditions. Further characterization of MC and A activities as operational procedures that check the status of critical assets provides a basis for applying human reliability analysis (HRA) models and methods to determine probabilities of detection for MC and A protection elements. 
This paper will discuss the application of HRA methods used in nuclear power plant probabilistic risk assessments to define detection

  17. Simulation for developing new pulse neutron spectrometers I. Creation of new McStas components of moderators of JSNS

    CERN Document Server

    Tamura, I; Arai, M; Harada, M; Maekawa, F; Shibata, K; Soyama, K

    2003-01-01

Moderator components for the McStas code have been created for the design of JSNS instruments. Three cryogenic moderators are adopted in JSNS: a coupled H₂ moderator for high-intensity experiments, and two decoupled H₂ moderators, poisoned or unpoisoned, for high-resolution experiments. Since the characteristics of the neutron beams generated by the moderators influence the performance of pulsed neutron spectrometers, it is important to perform the Monte Carlo simulation with precisely written neutron source components. The neutron spectrum and time structure were calculated using the NMTC/JAERI97 and MCNP4a codes. The simulation parameters, which describe the pulse shape over the entire spectrum as a function of time, were optimized. In this paper, the creation of neutron source components for port No. 16, which views the coupled H₂ moderator, and for port No. 11, which views the decoupled H₂ moderator of JSNS, is reported.

  18. GPM GROUND VALIDATION SATELLITE SIMULATED ORBITS MC3E V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The Satellite Simulator database is available for several campaigns: Light Precipitation Evaluation Experiment (LPVEX), Midlatitude Continental Convective Clouds...

  19. Development of the McGill simulator for endoscopic sinus surgery: a new high-fidelity virtual reality simulator for endoscopic sinus surgery.

    Science.gov (United States)

    Varshney, Rickul; Frenkiel, Saul; Nguyen, Lily H P; Young, Meredith; Del Maestro, Rolando; Zeitouni, Anthony; Tewfik, Marc A

    2014-01-01

    The technical challenges of endoscopic sinus surgery (ESS) and the high risk of complications support the development of alternative modalities to train residents in these procedures. Virtual reality simulation is becoming a useful tool for training the skills necessary for minimally invasive surgery; however, there are currently no ESS virtual reality simulators available with valid evidence supporting their use in resident education. Our aim was to develop a new rhinology simulator, as well as to define potential performance metrics for trainee assessment. The McGill simulator for endoscopic sinus surgery (MSESS), a new sinus surgery virtual reality simulator with haptic feedback, was developed (a collaboration between the McGill University Department of Otolaryngology-Head and Neck Surgery, the Montreal Neurologic Institute Simulation Lab, and the National Research Council of Canada). A panel of experts in education, performance assessment, rhinology, and skull base surgery convened to identify core technical abilities that would need to be taught by the simulator, as well as performance metrics to be developed and captured. The MSESS allows the user to perform basic sinus surgery skills, such as an ethmoidectomy and sphenoidotomy, through the use of endoscopic tools in a virtual nasal model. The performance metrics were developed by an expert panel and include measurements of safety, quality, and efficiency of the procedure. The MSESS incorporates novel technological advancements to create a realistic platform for trainees. To our knowledge, this is the first simulator to combine novel tools such as the endonasal wash and elaborate anatomic deformity with advanced performance metrics for ESS.

  20. A simple method to predict regional fish abundance: an example in the McKenzie River Basin, Oregon

    Science.gov (United States)

    D.J. McGarvey; J.M. Johnston

    2011-01-01

    Regional assessments of fisheries resources are increasingly called for, but tools with which to perform them are limited. We present a simple method that can be used to estimate regional carrying capacity and apply it to the McKenzie River Basin, Oregon. First, we use a macroecological model to predict trout densities within small, medium, and large streams in the...

  1. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: Case study

    Energy Technology Data Exchange (ETDEWEB)

    Bieda, Bogusław

    2014-05-01

    The purpose of the paper is to present the results of application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) data of Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP was analyzed and used for MC simulation of the LCI model. In order to describe random nature of all main products used in this study, normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in the steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • LCI data on Mittal Steel Poland (MSP) complex in Kraków, Poland dates back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry.

  2. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    Science.gov (United States)

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-09-10

A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method can contribute to a low-cost, convenient and safe way of recharging implantable biosensors.
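The skin-layer energy deposition described can be illustrated with a toy single-photon absorption walk. The layer thicknesses and absorption coefficients below are invented placeholders, not the paper's tissue model, which would also need wavelength dependence and scattering:

```python
import math, random

random.seed(1)

# Hypothetical layer thicknesses (cm) and absorption coefficients (1/cm);
# real skin optical properties are wavelength-dependent and not from the paper.
layers = [("epidermis", 0.01, 35.0),
          ("dermis",    0.20,  2.7),
          ("subcutis",  0.30,  1.0)]

N = 100_000
absorbed = {name: 0 for name, _, _ in layers}
transmitted = 0

for _ in range(N):
    fate = None
    for name, thickness, mu_a in layers:
        # free path sampled from the Beer-Lambert exponential law
        step = -math.log(random.random()) / mu_a
        if step < thickness:          # photon absorbed inside this layer
            fate = name
            break
    if fate is None:
        transmitted += 1
    else:
        absorbed[fate] += 1

for name, count in absorbed.items():
    print(f"{name:10s} absorbs {100 * count / N:5.1f}% of photons")
print(f"transmitted: {100 * transmitted / N:5.1f}%")
```

Because this sketch has absorption only, each layer's capture probability is simply 1 − exp(−μₐ·thickness) applied to the photons that survive the layers above it.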

  3. Sensibility of vagina reconstructed by McIndoe method in Mayer-Küster-Rokitansky-Hauser syndrome

    Directory of Open Access Journals (Sweden)

    Vesanović Svetlana

    2008-01-01

Full Text Available Background/Aim. Congenital absence of the vagina is a failure present in Mayer-Küster-Rokitansky-Hauser syndrome. Treatment of this anomaly includes nonoperative and operative procedures. The McIndoe procedure uses a split skin graft by Thiersch. The aim of this study was to determine the sensibility (touch, warmth, cold) of a vagina reconstructed by the McIndoe method in Mayer-Küster-Rokitansky-Hauser syndrome and compare it with the normal vagina. Methods. A total of 21 females with a vagina reconstructed by the McIndoe method and 21 females with a normal vagina were observed. All subjects were divided into groups and subgroups according to age. Sensibility to touch, warmth and cold was examined by applying Von Frey's esthesiometer and a thermoesthesiometer in three regions of the vagina (entrance, middle wall, bottom). The number of positive answers on touching the mucosa regions for five seconds, five times, was recorded. Results. The obtained results showed that patients with a vagina reconstructed by the McIndoe method felt touch at the middle part of the wall and at the bottom of the vagina better than those with a normal one. The former also felt warmth at the middle part of the wall and cold at the bottom of the vagina better than the patients with a normal vagina. Other results showed no difference in sensibility between reconstructed and normal vaginas. Conclusion. Various types of sensibility (touch, warmth, cold) are better or the same in vaginas reconstructed by the McIndoe method in comparison with normal ones. This could be explained by the fact that skin grafts are capable of recovering sensibility.

  4. Study of new scaling of direct photon production in pp collisions at high energies using MC simulation

    International Nuclear Information System (INIS)

    Tokarev, M.V.; Potrebenikova, E.V.

    1998-01-01

The new scaling, z scaling, of prompt photon production in pp collisions at high energies is studied. The scaling function H(z) is expressed via the inclusive cross section of photon production, Ed³σ/dq³, and the multiplicity density of charged particles, ρ(s), at pseudorapidity η = 0. Monte Carlo (MC) simulation based on the PYTHIA code is used to calculate the cross section and to verify the scaling. The MC technique used to construct the scaling function is described. The dependence of H(z) on the scaling variable z and on the center-of-mass energy √s at a production angle of θ = 90° is investigated. Predictions of the dependence of Ed³σ/dq³ on transverse momentum q at colliding energies of √s = 0.5, 5.0 and 14.0 TeV are made. The obtained results are compared with the experimental data and can be of interest for future experiments at RHIC (BNL), LHC (CERN), HERA (DESY) and the Tevatron (Batavia)

  5. MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes

    Science.gov (United States)

    Fonseca, T. C. F.; Mendes, B. M.; Lacerda, M. A. S.; Silva, L. A. C.; Paixão, L.; Bastos, F. M.; Ramirez, J. V.; Junior, J. P. R.

    2017-11-01

The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to the radiation protection and dosimetry research field. For its first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm². This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPx, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results.
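The two dosimetric parameters named above reduce to simple ratios of central-axis doses. A sketch with invented dose readings (not the INCA measurements), using the empirical PDD-to-TPR conversion given in IAEA TRS-398:

```python
# Hypothetical sampled central-axis doses (arbitrary units) for a 6 MV beam;
# the INCA measurement values themselves are not reproduced here.
depth_cm = [1.5, 5.0, 10.0, 20.0]
dose     = [100.0, 86.0, 66.7, 38.8]

d10 = dose[depth_cm.index(10.0)]
d20 = dose[depth_cm.index(20.0)]

pdd_20_10 = d20 / d10                     # ratio of depth doses at 20 and 10 cm
# Empirical TRS-398 relation between PDD20,10 and the beam quality
# index TPR20,10 for clinical photon beams:
tpr_20_10 = 1.2661 * pdd_20_10 - 0.0595

print(f"PDD20,10 = {pdd_20_10:.3f}")
print(f"TPR20,10 = {tpr_20_10:.3f}")
```

With these placeholder doses the quality index comes out near 0.68, in the usual range for a 6 MV beam; a measured TPR20,10 is obtained directly from doses at fixed source-detector distance rather than via this conversion.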

  6. Advances in time series methods and applications the A. Ian McLeod festschrift

    CERN Document Server

    Stanford, David; Yu, Hao

    2016-01-01

    This volume reviews and summarizes some of A. I. McLeod's significant contributions to time series analysis. It also contains original contributions to the field and to related areas by participants of the festschrift held in June 2014 and friends of Dr. McLeod. Covering a diverse range of state-of-the-art topics, this volume well balances applied and theoretical research across fourteen contributions by experts in the field. It will be of interest to researchers and practitioners in time series, econometricians, and graduate students in time series or econometrics, as well as environmental statisticians, data scientists, statisticians interested in graphical models, and researchers in quantitative risk management.

  7. MCMEG: Simulations of both PDD and TPR for 6 MV LINAC photon beam using different MC codes

    International Nuclear Information System (INIS)

    Fonseca, T.C.F.; Mendes, B.M.; Lacerda, M.A.S.; Silva, L.A.C.; Paixão, L.

    2017-01-01

The Monte Carlo Modelling Expert Group (MCMEG) is an expert network specializing in Monte Carlo radiation transport and in modelling and simulation applied to the radiation protection and dosimetry research field. For its first inter-comparison task the group launched an exercise to model and simulate a 6 MV LINAC photon beam using the Monte Carlo codes available within their laboratories and to validate the simulated results by comparing them with experimental measurements carried out at the National Cancer Institute (INCA) in Rio de Janeiro, Brazil. The experimental measurements were performed using an ionization chamber with calibration traceable to a Secondary Standard Dosimetry Laboratory (SSDL). The detector was immersed in a water phantom at different depths and was irradiated with a radiation field size of 10×10 cm². This exposure setup was used to determine the dosimetric parameters Percentage Depth Dose (PDD) and Tissue Phantom Ratio (TPR). The validation process compares the MC calculated results to the experimentally measured PDD20,10 and TPR20,10. Simulations were performed reproducing the experimental TPR20,10 quality index, which provides a satisfactory description of both the PDD curve and the transverse profiles at the two depths measured. This paper reports in detail the modelling process using the MCNPx, MCNP6, EGSnrc and Penelope Monte Carlo codes, the source and tally descriptions, the validation processes and the results. - Highlights: • MCMEG is an expert network specializing in Monte Carlo radiation transport. • The MCNPx, MCNP6, EGSnrc and Penelope Monte Carlo codes are used. • An exercise to model and simulate a 6 MV LINAC photon beam using Monte Carlo codes. • The PDD20,10 and TPR20,10 dosimetric parameters were compared with real data. • The paper reports in detail the modelling process using different Monte Carlo codes.

  8. Comparison of McMaster and FECPAKG2 methods for counting nematode eggs in the faeces of alpacas.

    Science.gov (United States)

    Rashid, Mohammed H; Stevenson, Mark A; Waenga, Shea; Mirams, Greg; Campbell, Angus J D; Vaughan, Jane L; Jabbar, Abdul

    2018-05-02

This study aimed to compare the FECPAKG2 and McMaster techniques for counting gastrointestinal nematode eggs in the faeces of alpacas using two floatation solutions (saturated sodium chloride and sucrose solutions). Faecal egg counts from both techniques were compared using Lin's concordance correlation coefficient and Bland and Altman statistics. Results showed moderate to good agreement between the two methods, with better agreement achieved when saturated sugar is used as the floatation fluid, particularly when faecal egg counts are less than 1000 eggs per gram of faeces. To the best of our knowledge, this is the first study to assess agreement of measurements between the McMaster and FECPAKG2 methods for estimating faecal eggs in South American camelids.
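Lin's concordance correlation coefficient used for the method comparison can be computed directly from paired counts; the egg-count pairs below are invented for illustration, not data from the study:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # population variances, as Lin defined
    sxy = ((x - mx) * (y - my)).mean()  # covariance
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

# Hypothetical paired faecal egg counts (eggs per gram) from the two techniques
mcmaster = [150, 425, 900, 1250, 50, 300, 775, 1100]
fecpak   = [175, 400, 850, 1300, 75, 325, 700, 1050]

print(f"CCC = {lins_ccc(mcmaster, fecpak):.3f}")
```

Unlike Pearson's r, the CCC penalizes both location and scale shifts between methods, which is why it is preferred for agreement studies.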

  9. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) of the rare earth elements (REEs) in beneficiation rare earth waste from the gold processing: case study

    Science.gov (United States)

    Bieda, Bogusław; Grzesik, Katarzyna

    2017-11-01

The study proposes a stochastic approach based on Monte Carlo (MC) simulation for the life cycle assessment (LCA) method, limited to a life cycle inventory (LCI) study, for rare earth element (REE) recovery from secondary materials, applied to the New Krankberg Mine in Sweden. The MC method is recognized as an important tool in science and can be considered the most effective quantification approach for uncertainties. The use of a stochastic approach helps to characterize uncertainties better than a deterministic method. Uncertainty of data can be expressed through a definition of the probability distribution of those data (e.g. through standard deviation or variance). The data used in this study are obtained from: (i) site-specific measured or calculated data, (ii) values based on literature, (iii) the ecoinvent process "rare earth concentrate, 70% REO, from bastnäsite, at beneficiation". Environmental emissions (e.g. particulates, uranium-238, thorium-232), energy and REEs (La, Ce, Nd, Pr, Sm, Dy, Eu, Tb, Y, Sc, Yb, Lu, Tm, Gd) have been inventoried. The study is based on a reference case for the year 2016. The combination of MC analysis with sensitivity analysis is the best way to quantify uncertainty in LCI/LCA. The reliability of LCA results may be uncertain to a certain degree, but this uncertainty can be quantified with the help of the MC method.

  10. Rapid resolution of chronic shoulder pain classified as derangement using the McKenzie method: a case series

    Science.gov (United States)

    Aytona, Maria Corazon; Dudley, Karlene

    2013-01-01

The McKenzie method, also known as Mechanical Diagnosis and Therapy (MDT), is primarily recognized as an evaluation and treatment method for the spine. However, McKenzie suggested that this method could also be applied to the extremities. Derangement is an MDT classification defined as an anatomical disturbance in the normal resting position of the joint, and McKenzie proposed that repeated movements could be applied to reduce internal joint displacement and rapidly reduce derangement symptoms. However, the current literature on MDT application to shoulder disorders is limited. Here, we present a case series involving four patients with chronic shoulder pain of 2–18 months' duration classified as derangement and treated using MDT principles. Each patient underwent mechanical assessment and was treated with repeated movements based on their directional preference. All patients demonstrated rapid and clinically significant improvement in baseline measures and in Disabilities of the Arm, Shoulder and Hand (QuickDASH) scores, from an average of 38% at initial evaluation to 5% at discharge within 3–5 visits. Our findings suggest that MDT may be an effective treatment approach for shoulder pain. PMID:24421633

  11. Picosecond Water Radiolysis at High Temperature. Br- Oxidation - Experiments and MC-Simulations

    International Nuclear Information System (INIS)

    Baldacchino, G.; Saffre, D.; Jeunesse, J.P.; Schmidhammer, U.; Larbre, J.P.; Mostafavi, M.; Beuve, M.; Gervais, B.

    2012-09-01

Acidic solutions of hydrobromic acid were irradiated with picosecond pulses of 7 MeV electrons provided by the ELYSE accelerator (LCP, Orsay). At elevated temperatures up to 350 °C, salts like NaBr or KBr usually precipitate and organic compounds decompose. Another choice of OH scavenger may be acidic halides like HBr or HCl. In this situation, the processes involving H⁺ and Br⁻ must be considered: while hydrated electrons are scavenged by H⁺, •OH reacts with Br⁻. The formation of BrOH• and Br₂•⁻ has then been investigated using a dedicated picosecond pump-probe setup. A dedicated small-size high-temperature optical flow cell has been developed to fit the picosecond duration of the electron pulses. This cell replaces the one also used with nanosecond resolution. The picosecond time resolution remains roughly unaffected by the material crossed by the electrons (0.4 mm of Inconel 718) and by the white-light continuum (20 mm of sapphire windows and 6 mm of liquid solution). Depending on the concentration of HBr, the growth of the signal can be attributed mainly to BrOH• or Br₂•⁻. With a relatively low scavenging power ([HBr] = 25 mM), Br₂•⁻ is formed by a reaction between Br• and Br⁻, which delays the appearance of Br₂•⁻ by around 4 ns. In this particular case we therefore assume the absorbance is due to BrOH•. As the temperature rises from 100 °C to 300 °C, the rate constant of this formation decreases slightly. This observation must be associated with the fact that the formation of BrOH• is actually an equilibrium, whose equilibrium constant decreases as the temperature is increased. This presentation tries to explain this fact in detail, also considering Monte Carlo simulations, which allow all transient species to be followed from ps to μs. (authors)

  12. Comparison between McMaster and Mini-FLOTAC methods for the enumeration of Eimeria maxima oocysts in poultry excreta.

    Science.gov (United States)

    Bortoluzzi, C; Paras, K L; Applegate, T J; Verocai, G G

    2018-04-30

    Monitoring Eimeria shedding has become more important due to the recent restrictions on the use of antibiotics within the poultry industry. Therefore, there is a need for the implementation of more precise and accurate quantitative diagnostic techniques. The objective of this study was to compare the precision and accuracy of the Mini-FLOTAC and McMaster techniques for the quantitative diagnosis of Eimeria maxima oocysts in poultry. Twelve pools of excreta samples from broiler chickens experimentally infected with E. maxima were analyzed for the comparison between the Mini-FLOTAC and McMaster techniques, using detection limits (dl) of 23 and 25, respectively. Additionally, six excreta samples were used to compare the precision of different dl (5, 10, 23, and 46) using the Mini-FLOTAC technique. For precision comparisons, five technical replicates of each sample (five replicate slides on one excreta slurry) were read to calculate the mean oocysts per gram of excreta (OPG) count, standard deviation (SD), coefficient of variation (CV), and precision of both aforementioned comparisons. To compare accuracy between the methods (McMaster, and Mini-FLOTAC dl 5 and 23), excreta from uninfected chickens was spiked with 100, 500, 1,000, 5,000, or 10,000 OPG; additional samples remained unspiked (negative control). For each spiking level, three samples were read in triplicate, totaling nine reads per spiking level per technique. Data were transformed using log10 to obtain normality and homogeneity of variances. A significant correlation (R = 0.74; p = 0.006) was observed between the mean OPG of the McMaster dl 25 and the Mini-FLOTAC dl 23. Mean OPG, CV, SD, and precision were not statistically different between the McMaster dl 25 and Mini-FLOTAC dl 23. Despite the absence of statistical difference (p > 0.05), Mini-FLOTAC dl 5 showed a numerically lower SD and CV than Mini-FLOTAC dl 23. The Pearson correlation coefficient revealed significant and positive

  13. Stochastic approach to municipal solid waste landfill life based on the contaminant transit time modeling using the Monte Carlo (MC) simulation

    International Nuclear Information System (INIS)

    Bieda, Bogusław

    2013-01-01

    The paper is concerned with the application and benefits of MC simulation proposed for estimating the life of a modern municipal solid waste (MSW) landfill. The software Crystal Ball® (CB), a simulation program that helps analyze the uncertainties associated with Microsoft® Excel models by MC simulation, was used to calculate the transit time of contaminants in porous media. The transport of contaminants in soil is represented by the one-dimensional (1D) form of the advection–dispersion equation (ADE). The computer program CONTRANS, written in the MATLAB language, is the foundation for simulating and estimating the thickness of the landfill's compacted clay liner. In order to simplify the task of determining the uncertainty of the parameters by MC simulation, the parameters corresponding to the expression Z2 taken from this program were used for the study. The tested parameters are: hydraulic gradient (HG), hydraulic conductivity (HC), porosity (POROS), linear thickness (TH) and diffusion coefficient (EDC). The principal output report provided by CB and presented in the study consists of a frequency chart, a percentiles summary and a statistics summary. Additional CB options provide a sensitivity analysis with tornado diagrams. The data used include available published figures as well as data concerning Mittal Steel Poland (MSP) S.A. in Kraków, Poland. The paper discusses the results and shows that the presented approach is applicable to the design of any MSW landfill compacted clay liner thickness. -- Highlights: ► Numerical simulation of waste in porous media is proposed. ► Statistic outputs based on correct assumptions about probability distribution are presented. ► The benefits of a MC simulation are examined. ► The uniform probability distribution is studied. ► I report a useful tool applied to determine the life of a modern MSW landfill.
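As a rough illustration of the approach, the sketch below samples uniformly distributed liner parameters and propagates them through the standard advective transit-time expression t = d·n/(K·i) to obtain the kind of percentile summary CB reports. All parameter ranges are invented for illustration and are not taken from the paper or from CONTRANS.

```python
import random

def transit_time_years(d, n, K, i):
    """Advective transit time through a liner: t = d*n/(K*i), with
    thickness d [m], porosity n [-], hydraulic conductivity K [m/s]
    and hydraulic gradient i [-]; converted from seconds to years."""
    return d * n / (K * i) / (365.25 * 24 * 3600)

rng = random.Random(1)
samples = sorted(
    transit_time_years(
        rng.uniform(1.0, 2.0),      # liner thickness [m]
        rng.uniform(0.3, 0.5),      # porosity [-]
        rng.uniform(1e-10, 1e-9),   # hydraulic conductivity [m/s]
        rng.uniform(0.1, 0.3),      # hydraulic gradient [-]
    )
    for _ in range(10_000)
)
p5, p50, p95 = (samples[int(q * len(samples))] for q in (0.05, 0.50, 0.95))
print(f"transit time: median {p50:.0f} a, 5th-95th percentile {p5:.0f}-{p95:.0f} a")
```

The sorted sample vector directly yields the frequency chart and percentile summary; a sensitivity (tornado) analysis would rank the inputs by their correlation with the output.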

  15. Monte Carlo simulation of the Tomotherapy treatment unit in the static mode using MC HAMMER, a Monte Carlo tool dedicated to Tomotherapy

    International Nuclear Information System (INIS)

    Sterpin, E; Tomsej, M; Cravens, B; Salvat, F; Ruchala, K; Olivera, G H; Vynckier, S

    2007-01-01

    Helical tomotherapy (HT) is designed to deliver highly modulated IMRT treatments. The concept of HT poses new challenges for MC simulation, because the simultaneous movement of the gantry, the couch and the multi-leaf collimator (MLC) must be simulated accurately. However, before accounting for gantry and couch movement and MLC configurations, high accuracy must be achieved in simulating open static fields (1 x 40, 2.5 x 40 and 5 x 40 cm²). This is performed using MC HAMMER, a graphical user interface that allows MC simulation with PENELOPE for various configurations of HT. Since the geometry of the different elements and the materials involved in beam generation are precisely known and defined, the only parameters that need to be tuned are the electron source spot size and the electron energy. Beyond the build-up region, good agreement (2%/1 mm) is achieved for all field sizes between measurements (ion chamber) and simulations with the electron source energy set to 5.5 MeV. The electron source spot size is modelled as a Gaussian distribution with a full width at half maximum of 1.4 mm, a value chosen to match measured and calculated penumbras in the longitudinal direction

  16. Comparison of tracheal intubation using the Airtraq® and McCoy laryngoscope in the presence of a rigid cervical collar simulating cervical immobilisation for traumatic cervical spine injury

    Directory of Open Access Journals (Sweden)

    Padmaja Durga

    2012-01-01

    Full Text Available Background: It is difficult to visualise the larynx using conventional laryngoscopy in the presence of cervical spine immobilisation. The Airtraq® provides for easy and successful intubation in the neutral neck position. Objective: To evaluate the effectiveness of the Airtraq in comparison with the McCoy laryngoscope when performing tracheal intubation in patients with neck immobilisation using a hard cervical collar and manual in-line axial cervical spine stabilisation. Methods: A randomised, cross-over, open-labelled study was undertaken in 60 ASA I and II patients aged between 20 and 50 years, of either gender, scheduled to undergo elective surgical procedures. Following induction and adequate muscle relaxation, they were intubated using either of the techniques first, followed by the other. Intubation time and Intubation Difficulty Score (IDS) were noted using the McCoy laryngoscope and the Airtraq. The anaesthesiologist was asked to grade the ease of intubation on a Visual Analogue Scale (VAS) of 1-10. The Chi-square test was used for comparison of categorical data between the groups and the paired-sample t-test for comparison of continuous data. IDS and VAS were compared using the Wilcoxon signed-rank test. Results: The mean intubation time was 33.27 sec (13.25) for laryngoscopy and 28.95 sec (18.53) for the Airtraq (P=0.32). The median IDS values were 4 (interquartile range (IQR) 1-6) and 0 (IQR 0-1) for laryngoscopy and Airtraq, respectively (P=0.007). The median Cormack-Lehane glottic view grade was 3 (IQR 2-4) and 1 (IQR 1-1) for laryngoscopy and Airtraq, respectively (P=0.003). The ease of intubation on VAS was graded as 4 (IQR 3-5) for laryngoscopy and 2 (IQR 2-2) for the Airtraq (P=0.033). There were two failures to intubate with the Airtraq. Conclusion: The Airtraq improves the ease of intubation significantly when compared to the McCoy blade in patients immobilised with a cervical collar and manual in-line stabilisation simulating cervical spine injury.

  17. Numerical methods used in simulation

    International Nuclear Information System (INIS)

    Caseau, Paul; Perrin, Michel; Planchard, Jacques

    1978-01-01

    The fundamental numerical problem posed by simulation is the stability of the resolution scheme. The system of equations most often used is defined: there is a family of models of increasing complexity with 3, 4 or 5 equations, although only the models with 3 and 4 equations have been used extensively. After defining what is meant by explicit or implicit, the best-established stability results are given, first for one-dimensional problems and then for two-dimensional problems. It is shown that two types of discretisation may be defined: four- and eight-point schemes (in one or two dimensions) and six- and ten-point schemes (in one or two dimensions). Finally, some results are given on problems that are usually treated less often, i.e. non-asymptotic stability and the stability of schemes based on finite elements [fr

  18. Methods for simulating turbulent phase screen

    International Nuclear Information System (INIS)

    Zhang Jianzhu; Zhang Feizhou; Wu Yi

    2012-01-01

    Several methods for simulating turbulent phase screens are summarized, and their characteristics are analyzed by computing the phase structure function, decomposing the screens into Zernike polynomials, and simulating laser propagation through the atmosphere. The analysis shows that phase screens produced by the FFT method represent the turbulent high-frequency components well but contain little of the low-frequency content, whereas screens produced by the Zernike method capture the low-frequency components well but not enough of the high-frequency ones. The high-frequency content can be improved by increasing the order of the Zernike polynomials, but it then lies mainly in the edge area. Compared with these two methods, the fractal method is a better way to simulate turbulent phase screens. Judged by the radius of the focal spot and the variance of the focal-spot jitter, all the methods except the fractal method have limitations. Combining the FFT method with the Zernike method, or with self-similar theory, is an effective and appropriate way to simulate turbulent phase screens; overall, the fractal method is probably the best choice. (authors)
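A minimal sketch of the FFT method discussed above: complex white noise is filtered by the square root of a Kolmogorov spectrum and inverse-transformed. The overall normalization and the parameter values are illustrative only, and the low-frequency deficiency noted in the abstract applies to this screen as well.

```python
import numpy as np

def fft_phase_screen(N, delta, r0, seed=0):
    """Turbulent phase screen [rad] on an N x N grid (spacing delta [m]):
    white Gaussian noise is filtered by the square root of a Kolmogorov
    spectrum ~ 0.023 r0^(-5/3) k^(-11/3) and inverse-FFT'd. Normalization
    is approximate; low-order (tilt-scale) content is under-represented."""
    rng = np.random.default_rng(seed)
    kx = 2 * np.pi * np.fft.fftfreq(N, d=delta)      # spatial frequencies [rad/m]
    kxx, kyy = np.meshgrid(kx, kx)
    k = np.hypot(kxx, kyy)
    k[0, 0] = 1.0                                    # dummy value at the piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * k ** (-11.0 / 3.0)
    psd[0, 0] = 0.0                                  # remove piston
    noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    dk = 2 * np.pi / (N * delta)                     # frequency grid spacing
    return (np.fft.ifft2(noise * np.sqrt(psd)) * N * dk).real

screen = fft_phase_screen(256, 0.01, r0=0.1)
print(screen.shape, float(screen.std()))
```

The subharmonic or fractal extensions mentioned in the abstract add the missing low-frequency power on top of a screen like this one.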

  19. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    Science.gov (United States)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2017-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the probability density functions (PDFs) of RE. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy-limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which significantly reduces the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulation. This work is supported by DOE FES and ASCR under Contract Numbers ERKJ320 and ERAT377.
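The backward BSDE machinery itself is beyond a short example, but the quantity it targets can be illustrated with a plain forward MC estimate of a runaway probability for a toy one-dimensional drift-diffusion model. All coefficients and the threshold below are invented; the paper's point is precisely that the backward formulation converges faster than this brute-force approach.

```python
import math
import random

def runaway_probability(x0, T=2.0, n_paths=5000, n_steps=100, x_run=2.0, seed=7):
    """Forward Euler-Maruyama MC estimate of P(X_T > x_run | X_0 = x0)
    for the toy SDE dX = 0.5*X dt + 0.3 dW (all coefficients invented)."""
    rng = random.Random(seed)
    dt = T / n_steps
    sqdt = math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += 0.5 * x * dt + 0.3 * sqdt * rng.gauss(0.0, 1.0)
        hits += x > x_run                      # did this path "run away"?
    return hits / n_paths

p_low = runaway_probability(0.1)   # start far below the runaway threshold
p_high = runaway_probability(1.0)  # start close to it
print(p_low, p_high)
```

Note the inefficiency for rare events: when the probability is small, most of the forward paths contribute nothing, which is exactly what simulating backward from the runaway state avoids.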

  20. Detector Simulation: Data Treatment and Analysis Methods

    CERN Document Server

    Apostolakis, J

    2011-01-01

    Detector Simulation in 'Data Treatment and Analysis Methods', part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B1: Detectors for Particles and Radiation. Part 1: Principles and Methods'. This document is part of Part 1 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '4.1 Detector Simulation' of Chapter '4 Data Treatment and Analysis Methods' with the content: 4.1 Detector Simulation 4.1.1 Overview of simulation 4.1.1.1 Uses of detector simulation 4.1.2 Stages and types of simulation 4.1.2.1 Tools for event generation and detector simulation 4.1.2.2 Level of simulation and computation time 4.1.2.3 Radiation effects and background studies 4.1.3 Components of detector simulation 4.1.3.1 Geometry modeling 4.1.3.2 External fields 4.1.3.3 Intro...

  1. Isogeometric methods for numerical simulation

    CERN Document Server

    Bordas, Stéphane

    2015-01-01

    The book presents the state of the art in isogeometric modeling and shows how the method has advanced. First, an introduction to geometric modeling with NURBS and T-splines is given, followed by the implementation in computer software. The implementation in both the FEM and BEM is discussed.
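A small sketch of the machinery underlying NURBS and T-spline modeling: the Cox-de Boor recursion for B-spline basis functions (NURBS add rational weights on top of this), shown here verifying the partition-of-unity property on an open knot vector. The knot vector and degree are illustrative choices.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value at parameter u of the i-th B-spline
    basis function of degree p over the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    value = 0.0
    if knots[i + p] != knots[i]:
        value += (u - knots[i]) / (knots[i + p] - knots[i]) \
                 * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        value += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                 * bspline_basis(i + 1, p - 1, u, knots)
    return value

# Quadratic basis on an open knot vector: the five basis functions sum
# to 1 everywhere inside the parameter range (partition of unity).
knots = [0.0, 0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 3.0]
total = sum(bspline_basis(i, 2, 1.5, knots) for i in range(5))
print(total)  # -> 1.0 (up to rounding)
```

In isogeometric analysis these same basis functions serve both to describe the geometry and to span the approximation space of the FEM or BEM discretization.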

  2. Tracheal intubation with a flexible fibreoptic scope or the McGrath videolaryngoscope in simulated difficult airway scenarios

    DEFF Research Database (Denmark)

    Jepsen, Cecilie H; Gätke, Mona R; Thøgersen, Bente

    2014-01-01

    Grath videolaryngoscope and FFE. The participants then performed tracheal intubation on a SimMan manikin once with the McGrath videolaryngoscope and once with the FFE in three difficult airway scenarios: (1) pharyngeal obstruction; (2) pharyngeal obstruction and cervical rigidity; (3) tongue oedema. MAIN OUTCOME MEASURES...

  3. Molecular Simulation towards Efficient and Representative Subsurface Reservoirs Modeling

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-01-01

    This dissertation focuses on the application of Monte Carlo (MC) molecular simulation and Molecular Dynamics (MD) in modeling thermodynamics and flow of subsurface reservoir fluids. At first, MC molecular simulation is proposed as a promising method

  4. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: McDonnell-Douglas Helicopter Company achievements

    Science.gov (United States)

    Toossi, Mostafa; Weisenburger, Richard; Hashemi-Kia, Mostafa

    1993-01-01

    This paper presents a summary of some of the work performed by McDonnell Douglas Helicopter Company under the NASA Langley-sponsored rotorcraft structural dynamics program known as DAMVIBS (Design Analysis Methods for VIBrationS). A set of guidelines is presented that is applicable to dynamic modeling, analysis, testing, and correlation of both helicopter airframes and a large variety of structural finite element models. Utilization of these guidelines and the key features of their application to vibration modeling of helicopter airframes are discussed. Correlation studies with test data, together with the development and application of a set of efficient finite element model checkout procedures, are demonstrated on a large helicopter airframe finite element model. Finally, the lessons learned and the benefits resulting from this program are summarized.

  5. McXtrace

    DEFF Research Database (Denmark)

    Bergbäck Knudsen, Erik; Prodi, Andrea; Baltser, Jana

    2013-01-01

    to the standard X-ray simulation software SHADOW. McXtrace is open source, licensed under the General Public License, and does not require the user to have access to any proprietary software for its operation. The structure of the software is described in detail, and various examples are given to showcase...

  6. Multilevel and Multi-index Monte Carlo methods for the McKean–Vlasov equation

    KAUST Repository

    Haji Ali, Abdul Lateef; Tempone, Raul

    2017-01-01

    of particles. Based on these two parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that, in the best case, the optimal work complexity of MLMC, to estimate the functional in one typical setting

  7. A multigroup analysis from a continuous energy spectrum approach by an MC method

    International Nuclear Information System (INIS)

    Camargo, Dayana Q. de; Bodmann, Bardo E.J.; Vilhena, Marco T. de

    2009-01-01

    In this work, the Monte Carlo method is applied to the energy-dependent three-dimensional neutron transport equation in order to analyze how the energy spectrum changes with the Monte Carlo step. The present work is a first step in a new direction, in which the influence of the spectrum on criticality may be analyzed. The method is based on monitoring a large number of individual neutron histories (i.e. microscopic interaction sequences), where the average behavior of the neutrons yields an approximate solution of the neutron transport equation. The Monte Carlo method is implemented using functions that are continuous in energy for the material cross sections, obtained by parametrization of the cross sections. The type of interaction that a neutron undergoes and the characteristics of its displacement in the element are sampled randomly from the relevant probability distributions. (author)
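The sampling loop described above can be sketched as follows, with invented continuous-in-energy cross-section parametrizations standing in for the fitted ones used in the paper.

```python
import math
import random

# Invented continuous-in-energy macroscopic cross sections [1/cm] with a
# 1/v-like shape, standing in for the parametrized fits used in the paper.
def sigma_scatter(E): return 0.4 / math.sqrt(E)
def sigma_capture(E): return 0.1 / math.sqrt(E)

def one_history(E0, rng):
    """Follow one neutron: sample an exponential free flight from the total
    cross section, then the interaction type, until capture. Returns the
    number of scatterings and the total track length [cm]."""
    E, n_scatter, track = E0, 0, 0.0
    while True:
        sig_s, sig_c = sigma_scatter(E), sigma_capture(E)
        sig_t = sig_s + sig_c
        track += -math.log(1.0 - rng.random()) / sig_t   # free-flight distance
        if rng.random() < sig_c / sig_t:
            return n_scatter, track                      # neutron absorbed
        E *= rng.uniform(0.5, 1.0)       # toy elastic energy loss per scattering
        n_scatter += 1

rng = random.Random(3)
histories = [one_history(1.0, rng) for _ in range(5000)]
mean_scatters = sum(h[0] for h in histories) / len(histories)
print(f"mean scatterings before capture: {mean_scatters:.2f}")
```

With these toy cross sections the capture probability per collision is a constant 0.2, so the average history scatters about four times before absorption; averaging many such histories is what yields the approximate transport solution.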

  8. An efficient method for in vitro callus induction in Myrciaria dubia (Kunth) McVaugh "Camu Camu"

    Directory of Open Access Journals (Sweden)

    Ana M. Córdova

    2014-03-01

    Full Text Available Due to the high variability in vitamin C production in Myrciaria dubia "camu camu", biotechnological procedures are necessary for mass clonal propagation of promising genotypes of this species. The aim was to establish an efficient method for in vitro callus induction from explants of M. dubia. Leaf and knot explants were obtained from branches grown in the laboratory, and pulp from fruit collected in the field. These were disinfected and sown on Murashige-Skoog (1962) medium supplemented with 2,4-dichlorophenoxyacetic acid (2,4-D), benzylaminopurine (BAP) and kinetin (Kin). The cultures were maintained at 25±2°C in darkness for 2 weeks and subsequently with a photoperiod of 16 hours light and 8 hours dark for 6 weeks. Treatment with 2 mg/L 2,4-D and 0.1 mg/L BAP gave the greatest callus formation in the three types of explants. Calluses were generated from the first week (knots), fourth week (leaves) and sixth week (pulp), and were friable (leaves and knots) or non-friable (pulp). In conclusion, the described method is efficient for in vitro callus induction in leaves, knots and pulp of M. dubia, with leaf and knot explants being the more suitable for callus production

  9. Spectral Methods in Numerical Plasma Simulation

    DEFF Research Database (Denmark)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.

    1989-01-01

    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded...
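The accuracy that motivates spectral methods can be seen in a few lines: on a periodic grid, differentiating by multiplying Fourier coefficients by ik is exact (to rounding) for band-limited fields, in contrast to the algebraic error decay of finite differences. A sketch with numpy, using an illustrative test field:

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N           # periodic grid on [0, 2*pi)
u = np.sin(3 * x)                          # band-limited test field
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)     # integer wavenumbers times i
du = np.fft.ifft(ik * np.fft.fft(u)).real  # spectral derivative
err = np.max(np.abs(du - 3 * np.cos(3 * x)))
print(f"max pointwise error: {err:.2e}")
```

The same diagonal multiply-in-Fourier-space operation is the building block of spectral solvers for the 2D Euler equations on doubly periodic domains; an annulus additionally requires a non-periodic (e.g. Chebyshev) expansion in the radial direction.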

  10. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Full Text Available Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts for design of structures, but the problems of structural engineering are better known through them. Some of the main methods for the estimation of the probability of failure are the exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper, because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer aided simulations of a large number of tests. The procedures of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes have been demonstrated in this paper.
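A minimal sketch of the idea, assuming an illustrative limit state g = R − S with independent normal resistance and load (values invented, not those of the bridge pier studied in the paper): the probability of failure is estimated by counting samples with g < 0, and the index of reliability follows as beta = −Φ⁻¹(pf).

```python
import random
from statistics import NormalDist

def mc_failure_probability(n_samples=200_000, seed=11):
    """Crude MC estimate of P(g < 0) for the limit state g = R - S with
    independent R ~ N(5.0, 0.8) and S ~ N(3.0, 0.6) (illustrative values)."""
    rng = random.Random(seed)
    failures = sum(
        rng.gauss(5.0, 0.8) - rng.gauss(3.0, 0.6) < 0.0
        for _ in range(n_samples)
    )
    return failures / n_samples

pf = mc_failure_probability()
beta = -NormalDist().inv_cdf(pf)   # index of reliability implied by the estimate
# Exact solution for comparison: beta = 2 / sqrt(0.8**2 + 0.6**2) = 2.0,
# so pf = Phi(-2.0), about 0.0228.
print(f"pf = {pf:.4f}, beta = {beta:.2f}")
```

For the small failure probabilities typical of structural design, plain sampling like this needs very many samples; variance-reduction techniques such as importance sampling are then used on top of the same scheme.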

  11. 2-d Simulations of Test Methods

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm

    2004-01-01

    One of the main obstacles to the further development of self-compacting concrete is relating the fresh concrete properties to the form filling ability. Simulation of the form filling ability will therefore provide a powerful tool for reaching this goal. In this paper, a continuum mechanical approach is presented by showing initial results from 2-d simulations of the empirical test methods slump flow and L-box. This method assumes a homogeneous material, which is expected to correspond to particle suspensions, e.g. concrete, when it remains stable. The simulations have been carried out using both a Newton and a Bingham model for characterisation of the rheological properties of the concrete. From the results, it is expected that both the slump flow and the L-box can be simulated quite accurately when the model is extended to 3-d and the concrete is characterised according to the Bingham

  12. Novel Methods for Electromagnetic Simulation and Design

    Science.gov (United States)

    2016-08-03

    We developed the basis for high-fidelity modeling software that can handle complicated, electrically large objects in a manner that is sufficiently fast to allow design by simulation. We also developed new methods for scattering from cavities.

  13. Matrix method for acoustic levitation simulation.

    Science.gov (United States)

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
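The building block of the matrix method, evaluating a Rayleigh-integral sum over discretized source elements, can be sketched for a baffled circular piston on its axis. This is a single pass only (the full method additionally chains transducer-reflector reflection matrices), and all parameter values are merely plausible choices.

```python
import cmath
import math

def piston_axial_pressure(z, radius=0.01, freq=37_900.0, c=343.0, n_rings=200):
    """Relative complex pressure on the axis of a baffled circular piston,
    as a Rayleigh-integral sum over annular source elements: the same
    exp(ikR)/R propagation kernel that fills the matrices of the method."""
    k = 2 * math.pi * freq / c
    dr = radius / n_rings
    p = 0.0 + 0.0j
    for m in range(n_rings):
        r = (m + 0.5) * dr
        area = 2 * math.pi * r * dr          # annular element area
        R = math.hypot(r, z)                 # element-to-field-point distance
        p += area * cmath.exp(1j * k * R) / R
    return p

near, far = abs(piston_axial_pressure(0.05)), abs(piston_axial_pressure(0.50))
print(near, far)
```

On the axis this sum has the closed form (2*pi/k)·|exp(ikR_a) − exp(ikz)| with R_a = sqrt(z² + a²), which the discretized sum reproduces to well under a percent; off-axis field points and reflector contributions turn the same kernel into full transfer matrices.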

  14. Numerical simulation studies of gas production scenarios from hydrate accumulations at the Mallik Site, Mackenzie Delta, Canada

    International Nuclear Information System (INIS)

    Moridis, George J.; Collett, Timothy S.; Dallimore, Scott R.; Satoh, Tohru; Hancock, Stephen; Weatherill, Brian

    2002-01-01

    The Mallik site represents an onshore permafrost-associated gas hydrate accumulation in the Mackenzie Delta, Northwest Territories, Canada. An 1150 m deep gas hydrate research well was drilled at the site in 1998. The objective of this study is the analysis of various gas production scenarios from several gas-hydrate-bearing zones at the Mallik site. The TOUGH2 general-purpose simulator with the EOSHYDR2 module was used for the analysis. EOSHYDR2 is designed to model the non-isothermal release, phase behavior and flow of CH4 (methane) under conditions typical of methane-hydrate deposits by solving the coupled equations of mass and heat balance, and can describe any combination of gas hydrate dissociation mechanisms. Numerical simulations indicated that significant gas production from hydrate at the Mallik site was possible by drawing down the pressure on a thin free-gas zone at the base of the hydrate stability field. Gas hydrate zones with underlying aquifers yielded significant gas production entirely from dissociated gas hydrate, but large amounts of produced water. Lithologically isolated gas-hydrate-bearing reservoirs with no underlying free-gas or water zones, and gas-hydrate saturations of at least 50%, were also studied. In these cases, it was assumed that thermal stimulation by circulating hot water in the well was the method used to induce dissociation. Sensitivity studies indicated that the methane release from the hydrate accumulations increases with gas-hydrate saturation, the initial formation temperature, the temperature of the circulating water in the well, and the formation thermal conductivity. Methane production appears to be less sensitive to the rock and hydrate specific heat and the permeability of the formation

  15. Analysis of the accuracy and precision of the McMaster method in detection of the eggs of Toxocara and Trichuris species (Nematoda) in dog faeces.

    Science.gov (United States)

    Kochanowski, Maciej; Dabrowska, Joanna; Karamon, Jacek; Cencek, Tomasz; Osiński, Zbigniew

    2013-07-01

    The aim of this study was to determine the accuracy and precision of the McMaster method with Raynaud's modification in the detection of the eggs of the nematodes Toxocara canis (Werner, 1782) and Trichuris ovis (Abildgaard, 1795) in the faeces of dogs. Four variants of the McMaster method were used for counting: one grid, two grids, the whole McMaster chamber, and flotation in the tube. One hundred sixty samples were prepared from dog faeces (20 repetitions for each egg quantity) containing 15, 25, 50, 100, 150, 200, 250 and 300 eggs of T. canis and T. ovis per 1 g of faeces. To assess the influence of the kind of faeces on the results, samples of dog faeces were enriched at the same levels with the eggs of another nematode, Ascaris suum Goeze, 1782. In addition, 160 samples of pig faeces were prepared and enriched with A. suum eggs only, in the same way. The highest limit of detection (the lowest number of eggs detected in at least 50% of repetitions) in all McMaster chamber variants was obtained for T. canis eggs (25-250 eggs/g faeces). In the variant with flotation in the tube, the highest limit of detection was obtained for T. ovis eggs (100 eggs/g). The best limit of detection and sensitivity, and the lowest coefficients of variation, were obtained with the whole McMaster chamber variant. There was no significant impact of the properties of the faeces on the results. Multiplication factors for the whole chamber were calculated from the transformed equation of the regression line relating the number of detected eggs to the number of eggs added to the sample. The multiplication factors calculated for T. canis and T. ovis eggs were higher than those expected using the McMaster method with Raynaud's modification.
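The arithmetic behind such counting comparisons is simple enough to sketch: a raw chamber count is scaled by the technique's detection limit (multiplication factor) to give eggs per gram, and replicate reads yield the SD and CV used as precision measures. The counts below are invented for illustration.

```python
import statistics

def eggs_per_gram(raw_count, detection_limit):
    """Eggs (or oocysts) per gram of faeces: the raw chamber count scaled
    by the technique's detection limit, e.g. 25 for a classical McMaster
    chamber read (the study shows the true factor can differ)."""
    return raw_count * detection_limit

# Five hypothetical replicate chamber reads of one sample, dl = 25.
reads = [eggs_per_gram(n, 25) for n in (4, 5, 3, 5, 4)]
mean_epg = statistics.mean(reads)
sd = statistics.stdev(reads)
cv = 100 * sd / mean_epg          # coefficient of variation [%]
print(f"mean EPG = {mean_epg:.0f}, SD = {sd:.1f}, CV = {cv:.1f}%")
```

Calibrating the multiplication factor, as done in the study, amounts to regressing detected counts on spiked counts and inverting the fitted line instead of assuming the nominal detection limit.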

  16. Simulation teaching method in Engineering Optics

    Science.gov (United States)

    Lu, Qieni; Wang, Yi; Li, Hongbin

    2017-08-01

    We introduce theoretical simulation as a major pedagogical method in the teaching of "Engineering Optics" under the course quality improvement action plan (Qc) at our school. Students, in groups of three to five, complete simulations of interference, diffraction, electromagnetism and polarization of light; each student is evaluated and scored in light of his performance in interviews between the teacher and the student, and each student can opt to be interviewed several times until he is satisfied with his score and learning. After three years of Qc practice, a remarkable teaching and learning effect has been obtained. Such theoretical simulation is a very valuable teaching method for physical optics, which is highly theoretical and abstruse. The methodology works well in training students in how to ask questions and how to solve problems, and it also stimulates their interest in research-based learning and their initiative, developing their self-confidence and sense of innovation.

  17. Hybrid Method Simulation of Slender Marine Structures

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye

    This thesis consists of an extended summary and five appended papers concerning various aspects of the implementation of a hybrid method which combines classical simulation methods and artificial neural networks. The thesis covers three main topics. Common for all these topics... only recognize patterns similar to those comprised in the data used to train the network. Fatigue life evaluation of marine structures often considers simulations of more than a hundred different sea states. Hence, in order for this method to be useful, the training data must be arranged so that a single neural network can cover all relevant sea states. The applicability and performance of the present hybrid method is demonstrated on a numerical model of a mooring line attached to a floating offshore platform. The second part of the thesis demonstrates how sequential neural networks can be used

  18. A Simulation Method Measuring Psychomotor Nursing Skills.

    Science.gov (United States)

    McBride, Helena; And Others

    1981-01-01

    The development of a simulation technique to evaluate performance of psychomotor skills in an undergraduate nursing program is described. This method is used as one admission requirement to an alternate route nursing program. With modifications, any health profession could use this technique where psychomotor skills performance is important.…

  19. VERA Pin and Fuel Assembly Depletion Benchmark Calculations by McCARD and DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    Monte Carlo (MC) codes have been developed and used to simulate neutron transport ever since the MC method was devised in the Manhattan project. Solving the neutron transport problem with the MC method is simple and straightforward to understand. Because MC calculations involve few essential approximations to the six-dimensional phase space of a neutron (location, energy, and direction), highly accurate solutions can be obtained. In this work, the VERA pin and fuel assembly (FA) depletion benchmark calculations are performed to examine the depletion capability of the newly generated DeCART multi-group cross section library. To obtain the reference solutions, MC depletion calculations are conducted using McCARD. Moreover, to scrutinize the effect of stochastic uncertainty propagation, uncertainty propagation analyses are performed using a sensitivity and uncertainty (S/U) analysis method and a stochastic sampling (S.S.) method. A depletion analysis with an MC code is still expensive and challenging; nevertheless, many studies have been conducted to exploit the benefits of the MC method. In this study, McCARD MC and DeCART MOC transport calculations are performed for the VERA pin and FA depletion benchmarks. The DeCART depletion calculations, conducted to examine the depletion capability of the newly generated multi-group cross section library, show excellent agreement with the McCARD reference results. From the McCARD results, it is observed that the MC depletion results depend on how the burnup interval is split. First, to quantify the effect of stochastic uncertainty propagation alone at 40 DTS, the uncertainty propagation analyses are performed using the S/U and S.S. methods.

  20. Simulation methods for nuclear production scheduling

    International Nuclear Information System (INIS)

    Miles, W.T.; Markel, L.C.

    1975-01-01

    Recent developments and applications of simulation methods for use in nuclear production scheduling and fuel management are reviewed. The unique characteristics of the nuclear fuel cycle as they relate to the overall optimization of a mixed nuclear-fossil system in both the short-and mid-range time frame are described. Emphasis is placed on the various formulations and approaches to the mid-range planning problem, whose objective is the determination of an optimal (least cost) system operation strategy over a multi-year planning horizon. The decomposition of the mid-range problem into power system simulation, reactor core simulation and nuclear fuel management optimization, and system integration models is discussed. Present utility practices, requirements, and research trends are described. 37 references

  1. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Directory of Open Access Journals (Sweden)

    Danilo ePezo

    2014-11-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov chain (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of high channel numbers. Many recent works aim to speed up simulation using the Langevin-based diffusion approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties, such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Dangerfield et al., 2012; Linaro et al., 2011; Huang et al., 2013a; Orio and Soudry, 2012; Schmandt and Galán, 2012; Goldwyn et al., 2011; Güler, 2013), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: the original Hodgkin and Huxley model, a model with faster sodium channels, and a multi-compartmental model inspired by granular cells. We conclude that for low channel numbers (usually below 1000 per simulated compartment) one should use MC, which is both the most accurate and the fastest method. For higher channel numbers, we recommend the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012), for increased speed and slightly reduced accuracy. Consequently, MC modelling may be the best method for detailed multicompartment neuron models, in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels each.
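
    As a concrete illustration of the Gillespie-type MC approach discussed above, the following sketch simulates a population of independent two-state (closed/open) channels. The rate constants and channel count are illustrative placeholders, not parameters from any of the cited models.

```python
import numpy as np

def gillespie_two_state(n_channels=100, k_open=2.0, k_close=1.0,
                        t_end=1.0, rng=None):
    """Exact (Gillespie) simulation of N independent two-state channels.

    Returns event times and the number of open channels after each event.
    """
    rng = np.random.default_rng() if rng is None else rng
    t, n_open = 0.0, 0
    times, opens = [0.0], [0]
    while t < t_end:
        a_open = k_open * (n_channels - n_open)   # propensity closed -> open
        a_close = k_close * n_open                # propensity open -> closed
        a_total = a_open + a_close
        t += rng.exponential(1.0 / a_total)       # waiting time to next event
        if t >= t_end:
            break
        if rng.random() < a_open / a_total:       # pick which reaction fires
            n_open += 1
        else:
            n_open -= 1
        times.append(t)
        opens.append(n_open)
    return np.array(times), np.array(opens)
```

    The cost per event is constant, but the event rate grows with the channel count, which is why the abstract recommends MC only below roughly a thousand channels per compartment.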

  2. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    Science.gov (United States)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the number of iterations progresses, starting from the coefficients corresponding to the highest ranked basis and moving to those of the least informative basis. We found this gradual increment in the sampling window to be more stable than resampling all the coefficients right from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced dimensionality space while accounting for the
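
    The POD parameterization described above can be sketched with a plain SVD of the training images. The array shapes and the variance-based truncation rule below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def pod_basis(training_images, energy=0.95):
    """Truncated POD basis from Monte Carlo training images (one TI per row)."""
    mean = training_images.mean(axis=0)
    X = training_images - mean                      # center the snapshots
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)        # fraction of TI variance
    r = int(np.searchsorted(cum, energy)) + 1       # smallest r reaching it
    return Vt[:r], mean                             # r orthonormal basis rows

def to_coefficients(model, basis, mean):
    """Project a (starting) model into the reduced POD coefficient space."""
    return basis @ (model - mean)
```

    McMC proposals then perturb only the r coefficients (few tens) instead of the full pixel field, which is the source of the dimensionality reduction the abstract refers to.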

  3. Idealized Simulations of a Squall Line from the MC3E Field Campaign Applying Three Bin Microphysics Schemes: Dynamic and Thermodynamic Structure

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Lulin [National Center for Atmospheric Research, Boulder, Colorado; Fan, Jiwen [Pacific Northwest National Laboratory, Richland, Washington; Lebo, Zachary J. [University of Wyoming, Laramie, Wyoming; Wu, Wei [National Center for Atmospheric Research, Boulder, Colorado; University of Illinois at Urbana–Champaign, Urbana, Illinois; Morrison, Hugh [National Center for Atmospheric Research, Boulder, Colorado; Grabowski, Wojciech W. [National Center for Atmospheric Research, Boulder, Colorado; Chu, Xia [University of Wyoming, Laramie, Wyoming; Geresdi, István [University of Pécs, Pécs, Hungary; North, Kirk [McGill University, Montréal, Québec, Canada; Stenz, Ronald [University of North Dakota, Grand Forks, North Dakota; Gao, Yang [Pacific Northwest National Laboratory, Richland, Washington; Lou, Xiaofeng [Chinese Academy of Meteorological Sciences, Beijing, China; Bansemer, Aaron [National Center for Atmospheric Research, Boulder, Colorado; Heymsfield, Andrew J. [National Center for Atmospheric Research, Boulder, Colorado; McFarquhar, Greg M. [National Center for Atmospheric Research, Boulder, Colorado; University of Illinois at Urbana–Champaign, Urbana, Illinois; Rasmussen, Roy M. [National Center for Atmospheric Research, Boulder, Colorado

    2017-12-01

    The squall line event on May 20, 2011, during the Midlatitude Continental Convective Clouds (MC3E) field campaign has been simulated by three bin (spectral) microphysics schemes coupled into the Weather Research and Forecasting (WRF) model. Semi-idealized three-dimensional simulations, driven by temperature and moisture profiles acquired by a radiosonde released in the pre-convection environment at 1200 UTC in Morris, Oklahoma, show that each scheme produced a squall line with features broadly consistent with the observed storm characteristics. However, substantial differences in the details of the simulated dynamic and thermodynamic structure are evident. These differences are attributed to different algorithms and numerical representations of microphysical processes, assumptions about hydrometeor processes and properties, especially ice particle mass, density, and terminal velocity relationships with size, and the resulting interactions between the microphysics, cold pool, and dynamics. This study shows that different bin microphysics schemes, designed to be conceptually more realistic and thus arguably more accurate than bulk microphysics schemes, still simulate a wide spread of microphysical, thermodynamic, and dynamic characteristics of a squall line, qualitatively similar to the spread of squall line characteristics obtained with various bulk schemes. Future work may focus on improving the representation of ice particle properties in bin schemes to reduce this uncertainty and on using similar assumptions for all schemes to isolate the impact of physics from numerics.

  4. Evaluation of accuracy and precision of a smartphone based automated parasite egg counting system in comparison to the McMaster and Mini-FLOTAC methods.

    Science.gov (United States)

    Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K

    2017-11-30

    Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image analysis software for performing fecal egg counts is promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove the operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster, and ImageJ; and (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg count negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using the second smartphone system prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. Regarding the first aim, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ, while the smartphone and ImageJ performed with equal variance. Regarding the second aim, the second smartphone system prototype had significantly better precision than the McMaster. For the smartphone system, the respective values were 64.51%, 21.67%, and 32.53%. The Mini-FLOTAC was significantly more accurate than both the McMaster and the smartphone system; smartphone and McMaster counts did not have statistically different accuracies.

  5. Lagrangian numerical methods for ocean biogeochemical simulations

    Science.gov (United States)

    Paparella, Francesco; Popolizio, Marina

    2018-05-01

    We propose two closely related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended for settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible, as is commonplace in ocean flows. Our methods augment the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero while avoiding unwanted numerical dissipation effects.
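
    A toy version of the particle-coupling idea, pairwise antisymmetric exchanges of the carried scalar that conserve mass by construction, might look as follows. The interaction radius and flux formula are invented for illustration and are not the authors' scheme.

```python
import numpy as np

def diffusive_exchange(x, c, kappa, dt, h):
    """One step of pairwise scalar exchange mimicking diffusion in 1D.

    Particles closer than h exchange an amount of the carried scalar c
    proportional to their concentration difference. Because each transfer
    is antisymmetric (+flux to one particle, -flux to the other), the
    total mass sum(c) is conserved exactly, as in the methods above.
    """
    n = len(x)
    dc = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(x[i] - x[j]) < h:
                flux = kappa * dt * (c[j] - c[i]) / h ** 2
                dc[i] += flux
                dc[j] -= flux          # antisymmetric: mass conserved
    return c + dc
```

    Setting kappa to zero switches the diffusive coupling off entirely, leaving the pure method of characteristics, which mirrors the tunability claimed in the abstract.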

  6. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  7. OpenMC In Situ Source Convergence Detection

    Energy Technology Data Exchange (ETDEWEB)

    Aldrich, Garrett Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Univ. of California, Davis, CA (United States); Dutta, Soumya [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); The Ohio State Univ., Columbus, OH (United States); Woodring, Jonathan Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-07

    We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, the simulation is iterated for a user-settable, fixed number of steps, and convergence is assumed to have been achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect the proper source distribution and begin tallying results for the full simulation once it has been confirmed. Our method ensures that the simulation is not started too early, by a user setting overly optimistic parameters, or too late, by setting overly conservative ones.
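
    A minimal sketch of Shannon-entropy-based source convergence detection: bin the source sites on a mesh, compute the entropy of the binned distribution each batch, and flag convergence when the entropy stops trending. The windowed plateau test below is a simple stand-in for the stochastic-oscillator criterion the authors use; the window size and tolerance are arbitrary.

```python
import numpy as np

def source_entropy(positions, edges):
    """Shannon entropy (bits) of source sites binned on a spatial mesh."""
    counts, _ = np.histogramdd(positions, bins=edges)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                       # 0 * log(0) contributes nothing
    return -np.sum(p * np.log2(p))

def converged(entropies, window=10, tol=0.01):
    """Flag convergence once the entropy trace plateaus within the window."""
    if len(entropies) < window:
        return False
    recent = np.asarray(entropies[-window:])
    return np.ptp(recent) < tol * abs(recent.mean())
```

    In a criticality run, `source_entropy` would be evaluated on each batch's fission bank, and tallies would be activated on the first batch for which `converged` returns True.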

  8. Simulating colloid hydrodynamics with lattice Boltzmann methods

    International Nuclear Information System (INIS)

    Cates, M E; Stratford, K; Adhikari, R; Stansell, P; Desplat, J-C; Pagonabarraga, I; Wagner, A J

    2004-01-01

    We present a progress report on our work on lattice Boltzmann methods for colloidal suspensions. We focus on the treatment of colloidal particles in binary solvents and on the inclusion of thermal noise. For a benchmark problem of colloids sedimenting and becoming trapped by capillary forces at a horizontal interface between two fluids, we discuss the criteria for parameter selection, and address the inevitable compromise between computational resources and simulation accuracy

  9. An improved method for simulating radiographs

    International Nuclear Information System (INIS)

    Laguna, G.W.

    1986-01-01

    The parameters involved in generating actual radiographs and what can and cannot be modeled are examined in this report. Using the spectral distribution of the radiation source and the mass absorption curve for the material comprising the part to be modeled, the actual amount of radiation that would pass through the part and reach the film is determined. This method increases confidence in the results of the simulation and enables the modeling of parts made of multiple materials

  10. The greening of the McGill Paleoclimate Model. Part II: Simulation of Holocene millennial-scale natural climate changes

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yi; Mysak, Lawrence A.; Wang, Zhaomin [McGill University, Department of Atmospheric and Oceanic Sciences and Global Environmental and Climate Change Centre (GEC3), Montreal, Quebec (Canada); Brovkin, Victor [Potsdam Institute for Climate Impact Research (PIK), 601203, Potsdam (Germany)

    2005-04-01

    Various proxy data reveal that in many regions of the Northern Hemisphere (NH), the middle Holocene (6 kyr BP) was warmer than the early Holocene (8 kyr BP) as well as the later Holocene, up to the end of the pre-industrial period (1800 AD). This pattern of warming and then cooling in the NH represents the response of the climate system to changes in orbital forcing, vegetation cover and the Laurentide Ice Sheet (LIS) during the Holocene. In an attempt to better understand these changes in the climate system, the McGill Paleoclimate Model (MPM) has been coupled to the dynamic global vegetation model known as VECODE (see Part I of this two-part paper), and a number of sensitivity experiments have been performed with the "green" MPM. The model results illustrate the following: (1) the orbital forcing together with the vegetation-albedo feedback result in the gradual cooling of global SAT from about 6 kyr BP to the end of the pre-industrial period; (2) the disappearance of the LIS over the period 8-6 kyr BP, associated with vegetation-albedo feedback, allows the global SAT to increase and reach its maximum at around 6 kyr BP; (3) the northern limit of the boreal forest moves northward during the period 8-6.4 kyr BP due to the LIS retreat; (4) during the period 6.4-0 kyr BP, the northern limit of the boreal forest moves southward about 120 km in response to the decreasing summer insolation in the NH; and (5) the desertification of northern Africa during the period 8-2.6 kyr BP is mainly explained by the decreasing summer monsoon precipitation. (orig.)

  11. Spectral methods in numerical plasma simulation

    International Nuclear Information System (INIS)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.

    1989-01-01

    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
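
    For the doubly periodic case mentioned above, the Poisson solve reduces to a division by -|k|^2 in Fourier space. A minimal sketch on the unit square follows (the annulus solver in the paper uses a Chebyshev-Fourier expansion instead, which this does not attempt):

```python
import numpy as np

def poisson_periodic(f):
    """Solve laplacian(u) = f on a doubly periodic unit square via FFT.

    f must have zero mean (the periodic problem is otherwise insoluble);
    the returned u is the zero-mean solution.
    """
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n) * n          # integer wavenumbers * 2pi
    kx, ky = np.meshgrid(k, k, indexing='ij')
    k2 = kx ** 2 + ky ** 2
    fh = np.fft.fft2(f)
    fh[0, 0] = 0.0                                 # fix the mean (gauge)
    with np.errstate(divide='ignore', invalid='ignore'):
        uh = np.where(k2 > 0, -fh / k2, 0.0)       # u_hat = -f_hat / |k|^2
    return np.real(np.fft.ifft2(uh))
```

    For band-limited data the solve is exact to round-off, which is the spectral accuracy the abstract alludes to.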

  12. Electromagnetic simulation using the FDTD method

    CERN Document Server

    Sullivan, Dennis M

    2013-01-01

    A straightforward, easy-to-read introduction to the finite-difference time-domain (FDTD) method Finite-difference time-domain (FDTD) is one of the primary computational electrodynamics modeling techniques available. Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run and treat nonlinear material properties in a natural way. Written in a tutorial fashion, starting with the simplest programs and guiding the reader up from one-dimensional to the more complex, three-dimensional programs, this book provides a simple, yet comp
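
    The one-dimensional starting point of such a tutorial is typically a free-space Yee update with normalized fields and a Courant number of 0.5; the grid size and Gaussian source parameters below are arbitrary choices for illustration.

```python
import numpy as np

def fdtd_1d(nsteps=200, nz=200, source_pos=100):
    """Bare-bones 1D free-space FDTD (Yee) loop with a Gaussian hard source."""
    ex = np.zeros(nz)                  # E field on integer grid points
    hy = np.zeros(nz)                  # H field on half-integer grid points
    for n in range(nsteps):
        ex[1:] += 0.5 * (hy[:-1] - hy[1:])                      # E from curl H
        ex[source_pos] = np.exp(-0.5 * ((n - 40) / 12.0) ** 2)  # hard source
        hy[:-1] += 0.5 * (ex[:-1] - ex[1:])                     # H from curl E
    return ex
```

    Because the pulse is broadband in time, a single run like this already samples a wide frequency range, which is the key practical advantage of time-domain methods noted above.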

  13. A Randomized Controlled Trial Comparing the McKenzie Method to Motor Control Exercises in People With Chronic Low Back Pain and a Directional Preference.

    Science.gov (United States)

    Halliday, Mark H; Pappas, Evangelos; Hancock, Mark J; Clare, Helen A; Pinto, Rafael Z; Robertson, Gavin; Ferreira, Paulo H

    2016-07-01

    Study Design Randomized clinical trial. Background Motor control exercises are believed to improve coordination of the trunk muscles. It is unclear whether increases in trunk muscle thickness can be facilitated by approaches such as the McKenzie method. Furthermore, it is unclear which approach may have superior clinical outcomes. Objectives The primary aim was to compare the effects of the McKenzie method and motor control exercises on trunk muscle recruitment in people with chronic low back pain classified with a directional preference. The secondary aim was to conduct a between-group comparison of outcomes for pain, function, and global perceived effect. Methods Seventy people with chronic low back pain who demonstrated a directional preference using the McKenzie assessment were randomized to receive 12 treatments over 8 weeks with the McKenzie method or with motor control approaches. All outcomes were collected at baseline and at 8-week follow-up by blinded assessors. Results No significant between-group difference was found for trunk muscle thickness of the transversus abdominis (-5.8%; 95% confidence interval [CI]: -15.2%, 3.7%), obliquus internus (-0.7%; 95% CI: -6.6%, 5.2%), and obliquus externus (1.2%; 95% CI: -4.3%, 6.8%). Perceived recovery was slightly superior in the McKenzie group (-0.8; 95% CI: -1.5, -0.1) on a -5 to +5 scale. No significant between-group differences were found for pain or function (P = .99 and P = .26, respectively). Conclusion We found no significant effect of treatment group for trunk muscle thickness. Participants reported a slightly greater sense of perceived recovery with the McKenzie method than with the motor control approach. Level of Evidence Therapy, level 1b-. Registered September 7, 2011 at www.anzctr.org.au (ACTRN12611000971932). J Orthop Sports Phys Ther 2016;46(7):514-522. Epub 12 May 2016. doi:10.2519/jospt.2016.6379.

  14. Hybrid method coupling molecular dynamics and Monte Carlo simulations to study the properties of gases in microchannels and nanochannels

    NARCIS (Netherlands)

    Nedea, S.V.; Frijns, A.J.H.; Steenhoven, van A.A.; Markvoort, Albert. J.; Hilbers, P.A.J.

    2005-01-01

    We combine molecular dynamics (MD) and Monte Carlo (MC) simulations to study the properties of gas molecules confined between two hard walls of a microchannel or nanochannel. The coupling between MD and MC simulations is introduced by performing MD near the boundaries for accuracy and MC in the bulk

  15. Study on influences of TiN capping layer on time-dependent dielectric breakdown characteristic of ultra-thin EOT high-k metal gate NMOSFET with kMC TDDB simulations

    International Nuclear Information System (INIS)

    Xu Hao; Yang Hong; Luo Wei-Chun; Xu Ye-Feng; Wang Yan-Rong; Tang Bo; Wang Wen-Wu; Qi Lu-Wei; Li Jun-Feng; Yan Jiang; Zhu Hui-Long; Zhao Chao; Chen Da-Peng; Ye Tian-Chun

    2016-01-01

    The effect of the TiN capping layer thickness on the time-dependent dielectric breakdown (TDDB) characteristic of ultra-thin EOT high-k metal gate NMOSFETs is investigated in this paper. Based on experimental results, it is found that a device with a thicker TiN layer has a more promising reliability characteristic than one with a thinner TiN layer. Charge pumping measurements and secondary ion mass spectroscopy (SIMS) analysis indicate that the sample with the thicker TiN layer introduces more Cl passivation at the IL/Si interface and exhibits a lower interface trap density. In addition, the influence of the interface-to-bulk trap density ratio N_it/N_ot is studied by TDDB simulations combining percolation theory and the kinetic Monte Carlo (kMC) method. The lifetime reduction and Weibull slope lowering are explained by interface trap effects for TiN capping layers with different thicknesses. (paper)

  16. Simulations of ex-vessel fuel coolant interactions in a Nordic BWR using MC3D code

    International Nuclear Information System (INIS)

    Thakre, S.; Ma, W.

    2013-08-01

    Nordic Boiling Water Reactors (BWRs) employ drywell cavity flooding as a severe accident management strategy. In a core melt accident in which the reactor pressure vessel fails, the melt will be ejected from the lower head and fall into a water pool, possibly in the form of a continuous jet. It is assumed that the melt jet will fragment, quench, and form a coolable debris bed in the water pool. The melt interaction with the water pool may cause an energetic steam explosion, which poses a potential risk to the integrity of the containment and could lead to the release of fission products into the atmosphere. The results of the APRI-7 project suggest that significant damage to containment structures by steam explosion cannot be ruled out according to the state-of-the-art knowledge of the corresponding accident scenario. In the follow-up project APRI-8 (2012-2016), one of the goals of the KTH research is to resolve the steam explosion energetics (SEE) issue by developing a risk-oriented framework for quantifying conditional threats to containment integrity for a Nordic-type BWR. The present study deals with premixing and explosion phase calculations for a Nordic BWR dry cavity, using MC3D, a multiphase CFD code for fuel coolant interactions. The main goal of the study is the assessment of the pressure buildup in the cavity and the impact loading on the side walls. The conditions for the calculations are taken from the SERENA-II BWR case exercise. A further objective was a sensitivity analysis of the parameters in the modeling of fuel coolant interactions, which can help reduce uncertainty in the assessment of steam explosion energetics. The results show that the amount of liquid melt droplets in the water (region of void<0.6) reaches its maximum even before the jet reaches the bottom. In the explosion phase, the maximum pressure is attained at the bottom, and the maximum impulse on the wall occurs at the bottom of the wall. The analysis is carried out using two different

  17. Simulations of ex-vessel fuel coolant interactions in a Nordic BWR using MC3D code

    Energy Technology Data Exchange (ETDEWEB)

    Thakre, S.; Ma, W. [Royal Institute of Technology, KTH. Div. of Nuclear Power Safety, Stockholm (Sweden)

    2013-08-15

    Nordic Boiling Water Reactors (BWRs) employ drywell cavity flooding as a severe accident management strategy. In a core melt accident in which the reactor pressure vessel fails, the melt will be ejected from the lower head and fall into a water pool, possibly in the form of a continuous jet. It is assumed that the melt jet will fragment, quench, and form a coolable debris bed in the water pool. The melt interaction with the water pool may cause an energetic steam explosion, which poses a potential risk to the integrity of the containment and could lead to the release of fission products into the atmosphere. The results of the APRI-7 project suggest that significant damage to containment structures by steam explosion cannot be ruled out according to the state-of-the-art knowledge of the corresponding accident scenario. In the follow-up project APRI-8 (2012-2016), one of the goals of the KTH research is to resolve the steam explosion energetics (SEE) issue by developing a risk-oriented framework for quantifying conditional threats to containment integrity for a Nordic-type BWR. The present study deals with premixing and explosion phase calculations for a Nordic BWR dry cavity, using MC3D, a multiphase CFD code for fuel coolant interactions. The main goal of the study is the assessment of the pressure buildup in the cavity and the impact loading on the side walls. The conditions for the calculations are taken from the SERENA-II BWR case exercise. A further objective was a sensitivity analysis of the parameters in the modeling of fuel coolant interactions, which can help reduce uncertainty in the assessment of steam explosion energetics. The results show that the amount of liquid melt droplets in the water (region of void<0.6) reaches its maximum even before the jet reaches the bottom. In the explosion phase, the maximum pressure is attained at the bottom, and the maximum impulse on the wall occurs at the bottom of the wall. The analysis is carried out using two different

  18. Meshless Method for Simulation of Compressible Flow

    Science.gov (United States)

    Nabizadeh Shahrebabak, Ebrahim

    In the present age, rapid developments in computing technology and high-speed supercomputers have made numerical analysis and computational simulation more practical than ever before for large and complex cases. Numerical simulations have also become an essential means of analyzing engineering problems and cases where experimental analysis is not practical. Many sophisticated and accurate numerical schemes exist to perform these simulations. The finite difference method (FDM) has been used to solve differential equation systems for decades. Additional numerical methods based on finite volume and finite element techniques are widely used in solving problems with complex geometry. All of these methods are mesh-based techniques, and mesh generation is an essential preprocessing step to discretize the computational domain. However, when dealing with complex geometries these conventional mesh-based techniques can become troublesome, difficult to implement, and prone to inaccuracies. In this study, a more robust yet simple numerical approach is used to simulate problems more easily, even for complex cases. The meshless, or meshfree, method is one such development that has become the focus of much research in recent years. The biggest advantage of meshfree methods is that they circumvent mesh generation. Many algorithms have now been developed to make this method more popular and accessible, and they have been employed over a wide range of problems in computational analysis with various levels of success. Since there is no connectivity between the nodes in this method, the challenge is considerable. The most fundamental issue is the lack of conservation, which can be a source of unpredictable errors in the solution process. This problem is particularly evident in the presence of steep gradient regions and discontinuities, such as shocks that frequently occur in high-speed compressible flow
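As a toy illustration of the meshless idea (not the scheme developed in this work), the sketch below estimates a derivative on scattered 1D nodes with a moving-least-squares quadratic fit, a common meshless building block; all names and parameters are illustrative.

```python
import numpy as np

def mls_derivative(nodes, values, x0, radius=0.4):
    """Estimate df/dx at x0 from scattered 1D nodes via a weighted
    least-squares quadratic fit -- no mesh connectivity required."""
    d = nodes - x0
    mask = np.abs(d) < radius
    d, f = d[mask], values[mask]
    w = np.exp(-(d / radius) ** 2)                     # Gaussian weight kernel
    A = np.column_stack([np.ones_like(d), d, d ** 2])  # quadratic basis
    W = np.diag(w)
    # coeffs = [f(x0), f'(x0), f''(x0)/2] from the weighted normal equations
    coeffs = np.linalg.solve(A.T @ W @ A, A.T @ W @ f)
    return coeffs[1]

rng = np.random.default_rng(0)
nodes = np.sort(rng.uniform(0.0, 2.0, 200))   # scattered nodes, no mesh
values = np.sin(nodes)
print(mls_derivative(nodes, values, 1.0))     # close to cos(1.0) ~ 0.5403
```

Note that such local fits give no conservation guarantee, which is exactly the difficulty the abstract highlights for shock-dominated flows.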

  19. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    Science.gov (United States)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
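The scaling step described above can be sketched as follows; the layer absorption coefficients, per-layer time fractions, and the zero-absorption curve below are made-up stand-ins, not values from the paper.

```python
import numpy as np

# Hypothetical two-layer example: scale a zero-absorption time-resolved
# reflectance curve R0(t) by a weighted Beer-Lambert factor, where frac[i]
# is the fraction of the photon path spent in layer i (taken as constant
# here; the paper derives it from the average classical photon path).
c_tissue = 0.214                        # speed of light in tissue [mm/ps], n ~ 1.4
t = np.linspace(1.0, 500.0, 100)        # time bins [ps]
R0 = t ** -1.5 * np.exp(-t / 200.0)     # stand-in zero-absorption curve
mu_a = np.array([0.002, 0.01])          # absorption per layer [1/mm]
frac = np.array([0.7, 0.3])             # time fraction per layer

# Weighted Beer-Lambert scaling: R(t) = R0(t) * exp(-sum_i mu_a[i]*frac[i]*c*t)
R = R0 * np.exp(-(mu_a * frac).sum() * c_tissue * t)
print(R[0] / R0[0])                     # ~1 at early times; attenuation grows with t
```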

  20. A new method for simulating human emotions

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    How to make machines express emotions would be instrumental in establishing a completely new paradigm for man-machine interaction. A new method for simulating and assessing artificial psychology has been developed for research on emotion robots. Human psychological activity is regarded as a Markov process, and an emotion space and psychology model is constructed based on the Markov process. The concept of emotion entropy is presented to assess artificial emotion complexity. The simulation results accord well with human psychological activity. This model can also be applied to consumer-friendly human-computer interfaces, interactive video, etc.
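A minimal sketch of the idea, with an assumed 4-state emotion space and transition matrix (not the paper's model): simulate the Markov process and score its complexity with a Shannon-style "emotion entropy".

```python
import numpy as np

# Assumed toy emotion space and transition matrix (rows sum to 1).
states = ["calm", "happy", "sad", "angry"]
P = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.2, 0.6, 0.1, 0.1],
              [0.2, 0.1, 0.6, 0.1],
              [0.3, 0.1, 0.1, 0.5]])

rng = np.random.default_rng(42)
s, visits = 0, np.zeros(4)
for _ in range(10_000):                # simulate the Markov emotion process
    s = rng.choice(4, p=P[s])
    visits[s] += 1

p = visits / visits.sum()              # empirical state distribution
entropy = -(p * np.log2(p)).sum()      # "emotion entropy" in bits
print(f"state distribution: {np.round(p, 2)}, entropy: {entropy:.2f} bits")
```

A uniform distribution over four states would give the maximum of 2 bits; lower entropy indicates a more predictable (less complex) emotional dynamic.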

  1. Comparison of validation methods for forming simulations

    Science.gov (United States)

    Schug, Alexander; Kapphan, Gabriel; Bardl, Georg; Hinterhölzl, Roland; Drechsler, Klaus

    2018-05-01

    The forming simulation of fibre reinforced thermoplastics could reduce the development time and improve the forming results. But to take advantage of the full potential of the simulations it has to be ensured that the predictions for material behaviour are correct. For that reason, a thorough validation of the material model has to be conducted after characterising the material. Relevant aspects for the validation of the simulation are for example the outer contour, the occurrence of defects and the fibre paths. To measure these features various methods are available. Most relevant and also most difficult to measure are the emerging fibre orientations. For that reason, the focus of this study was on measuring this feature. The aim was to give an overview of the properties of different measuring systems and select the most promising systems for a comparison survey. Selected were an optical, an eddy current and a computer-assisted tomography system with the focus on measuring the fibre orientations. Different formed 3D parts made of unidirectional glass fibre and carbon fibre reinforced thermoplastics were measured. Advantages and disadvantages of the tested systems were revealed. Optical measurement systems are easy to use, but are limited to the surface plies. With an eddy current system also lower plies can be measured, but it is only suitable for carbon fibres. Using a computer-assisted tomography system all plies can be measured, but the system is limited to small parts and challenging to evaluate.

  2. An improved method for high precision measurement of chromium isotopes using double spike MC-ICP-MS

    Science.gov (United States)

    Zhu, J.; Wu, G. L.; Wang, X.; Zhang, L. X.; Han, G.

    2017-12-01

    Chromium (Cr) isotopes have been used to trace pollution processes and to reconstruct paleo-redox conditions. However, the precise determination of Cr isotopes remains challenging due to difficulties in purifying Cr from samples with low Cr content and complex matrices. Here we report an improved four-step column chromatographic procedure to separate Cr from matrix elements. Firstly, Cr in the sample solution, mixed with a 50Cr-54Cr double spike (optimized 54Crspike/52Crsample = 0.4 and (50Cr/54Cr)spike = 1.3:1), was completely converted into Cr(III) in 8.5 mol/L HCl and loaded onto 2 ml of AG50W-X8 (200-400 mesh) resin conditioned with 11 mol/L HCl. The 2.65 ml of eluent was adjusted to 4.5 ml of 6 mol/L HCl and immediately loaded onto a Bio-Rad column filled with 2 ml of AG1-X8 anion resin (100-200 mesh). These two steps can remove at least 99% of Ca, Fe and most matrix elements. Secondly, the 7.5 ml of eluent was dried down and dissolved in 0.1 ml of 0.5 mol/L HNO3 before adding 2 ml of 4 mol/L HF, and the solution was then loaded onto 1 ml of AG1-X8 anion resin (100-200 mesh) to remove Ti and V. Finally, the sample was dissolved in 0.1 ml of 0.5 mol/L HNO3 and oxidized by 0.5 ml of 0.2 mol/L (NH4)2S2O8 and 4.4 ml of H2O; the solution was then centrifuged to remove Mn oxide, and the supernatant was loaded onto AG1-X8 resin to remove SO42-, Ni, Al, Na and some Mg using 8 ml of H2O and 3 ml of 2 mol/L HCl. Cr was eluted with 2 mol/L HNO3 containing 5% H2O2, and the dried Cr was dissolved in 3% HNO3 for isotopic analysis. The total yield of Cr is greater than 80% even for samples with low Cr content. Chromium isotopes were measured on a Neptune Plus MC-ICP-MS at the China University of Geosciences (Beijing). Using our improved method, the δ53/52CrSRM979 values of USGS reference materials BHVO-2, BCR-2 and SGR-1b are -0.12±0.06‰ (n=15), -0.09±0.06‰ (n=5), and 0.30±0.06‰ (n=12), respectively, which agree well with previously reported values. The δ53/52CrSRM979 of carbonaceous shales CP0-1 and CP0-12 collected from Hubei, China are 2.05

  3. Novel MC/BZY Proton Conductor: Materials Development, Device Evaluation, and Theoretical Exploration using CI and DFT Methods

    Science.gov (United States)

    2017-09-05

    the protons produced by surface defect reactions were transferred to the neighboring carbonate ions (CO3 2-) at the BZY/MC interface to form HCO3... static DFT study of the proton transfer in the crystal structure of lithium carbonate 33. The calculated energy barrier was 0.34 eV along the... However, the value is only 20.5 kcal/mol in Ref. 33, which was calculated based on a single molecule of HCO3-. To eliminate possible uncertainty in the

  4. Dynamic bounds coupled with Monte Carlo simulations

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.

    2011-01-01

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper

  5. The Americleft Project: A Modification of Asher-McDade Method for Rating Nasolabial Esthetics in Patients With Unilateral Cleft Lip and Palate Using Q-sort.

    Science.gov (United States)

    Stoutland, Alicia; Long, Ross E; Mercado, Ana; Daskalogiannakis, John; Hathaway, Ronald R; Russell, Kathleen A; Singer, Emily; Semb, Gunvor; Shaw, William C

    2017-11-01

    The purpose of this study was to investigate ways to improve rater reliability and satisfaction in nasolabial esthetic evaluations of patients with complete unilateral cleft lip and palate (UCLP), by modifying the Asher-McDade method with use of Q-sort methodology. Blinded ratings of cropped photographs of one hundred forty-nine 5- to 7-year-old consecutively treated patients with complete UCLP from 4 different centers were used in a rating of frontal and profile nasolabial esthetic outcomes by 6 judges involved in the Americleft Project's intercenter outcome comparisons. Four judges rated in previous studies using the original Asher-McDade approach. For the Q-sort modification, rather than projection of images, each judge had cards with frontal and profile photographs of each patient and rated them on a scale of 1 to 5 for vermillion border, nasolabial frontal, and profile, using the Q-sort method with placement of cards into categories 1 to 5. Inter- and intrarater reliabilities were calculated using the Weighted Kappa (95% confidence interval). For 4 raters, the reliabilities were compared with those in previous studies. There was no significant improvement in inter-rater reliabilities using the new method. Intrarater reliability consistently improved. All raters preferred the Q-sort method with rating cards rather than a PowerPoint of photos, which improved internal consistency in rating compared to previous studies using the original Asher-McDade method. All raters preferred this method because of the ability to continuously compare photos and adjust relative ratings between patients.

  6. Sensitivity of Cirrus and Mixed-phase Clouds to the Ice Nuclei Spectra in McRAS-AC: Single Column Model Simulations

    Science.gov (United States)

    Betancourt, R. Morales; Lee, D.; Oreopoulos, L.; Sud, Y. C.; Barahona, D.; Nenes, A.

    2012-01-01

    The salient features of mixed-phase and ice clouds in a GCM cloud scheme are examined using the ice formation parameterizations of Liu and Penner (LP) and Barahona and Nenes (BN). The performance of the LP and BN ice nucleation parameterizations was assessed in the GEOS-5 AGCM using the McRAS-AC cloud microphysics framework in single-column mode. Four-dimensional assimilated data from the intensive observation period of the ARM TWP-ICE campaign were used to drive the fluxes and lateral forcing. Simulation experiments were established to test the impact of each parameterization on the resulting cloud fields. Three commonly used IN spectra were utilized in the BN parameterization to describe the availability of IN for heterogeneous ice nucleation. The results show large similarities in the cirrus cloud regime between all the schemes tested, in which ice crystal concentrations were within a factor of 10 regardless of the parameterization used. In mixed-phase clouds there are some persistent differences in cloud particle number concentration and size, as well as in cloud fraction, ice water mixing ratio, and ice water path. Contact freezing in the simulated mixed-phase clouds contributed to transferring liquid to ice efficiently, so that on average the clouds were fully glaciated at T approximately 260 K, irrespective of the ice nucleation parameterization used. Comparisons of simulated ice water path to available satellite-derived observations were also performed, finding that all the schemes tested with the BN parameterization predicted average values of IWP within plus or minus 15% of the observations.

  7. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...

  8. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.

    2015-06-08

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.

  9. A fast simulation method for the Log-normal sum distribution using a hazard rate twisting technique

    KAUST Repository

    Rached, Nadhir B.; Benkhelifa, Fatma; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The probability density function of the sum of Log-normally distributed random variables (RVs) is a well-known challenging problem. For instance, an analytical closed-form expression of the Log-normal sum distribution does not exist and is still an open problem. A crude Monte Carlo (MC) simulation is of course an alternative approach. However, this technique is computationally expensive especially when dealing with rare events (i.e. events with very small probabilities). Importance Sampling (IS) is a method that improves the computational efficiency of MC simulations. In this paper, we develop an efficient IS method for the estimation of the Complementary Cumulative Distribution Function (CCDF) of the sum of independent and not identically distributed Log-normal RVs. This technique is based on constructing a sampling distribution via twisting the hazard rate of the original probability measure. Our main result is that the estimation of the CCDF is asymptotically optimal using the proposed IS hazard rate twisting technique. We also offer some selected simulation results illustrating the considerable computational gain of the IS method compared to the naive MC simulation approach.
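As a toy illustration of why importance sampling helps here, the sketch below estimates the CCDF of a sum of lognormals with naive MC and with a simple mean shift (exponential tilting) of the underlying Gaussian. Note this is deliberately not the paper's hazard-rate twisting, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu, sigma, gamma, N = 3, 0.0, 1.0, 15.0, 200_000

# Naive MC estimate of P(sum of n lognormals > gamma)
z = rng.normal(mu, sigma, size=(N, n))
p_naive = np.mean(np.exp(z).sum(axis=1) > gamma)

# IS: sample Z ~ N(mu + delta, sigma^2) so the "rare" event becomes common,
# and reweight each sample by the likelihood ratio f(z)/g(z).
delta = 1.1
z = rng.normal(mu + delta, sigma, size=(N, n))
log_w = ((delta**2 - 2 * delta * (z - mu)) / (2 * sigma**2)).sum(axis=1)
p_is = np.mean(np.exp(log_w) * (np.exp(z).sum(axis=1) > gamma))

print(p_naive, p_is)   # the two estimates should agree closely
```

For genuinely rare thresholds (probabilities of 1e-6 and below) the naive estimator produces almost no hits at this sample size, while the tilted estimator still works; that is the computational gain the abstract refers to.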

  10. Computational Simulations and the Scientific Method

    Science.gov (United States)

    Kleb, Bil; Wood, Bill

    2005-01-01

    As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.

  11. Application of Monte Carlo perturbation methods to a neutron porosity logging tool, using DUCKPOND/McBEND

    International Nuclear Information System (INIS)

    Kemshell, P.B.; Wright, W.V.; Sanders, L.G.

    1984-01-01

    DUCKPOND, the sensitivity option of the Monte Carlo code McBEND, is being used to study the effect of environmental perturbations on the response of a dual detector neutron porosity logging tool. Using a detailed model of an actual tool, calculations have been performed for a 19% porosity limestone rock sample in the API Test Pit. Within a single computer run, the tool response, or near-to-far detector count ratio, and the sensitivity of this response to the concentration of each isotope present in the formation have been estimated. The calculated tool response underestimates the measured value by about 10%, which is equal to 1.5 ''standard errors'', but this apparent discrepancy is shown to be within the spread of calculated values arising from uncertainties on the rock composition

  12. Diffusion approximation-based simulation of stochastic ion channels: which method to use?

    Science.gov (United States)

    Pezo, Danilo; Soudry, Daniel; Orio, Patricio

    2014-01-01

    To study the effects of stochastic ion channel fluctuations on neural dynamics, several numerical implementation methods have been proposed. Gillespie's method for Markov Chains (MC) simulation is highly accurate, yet it becomes computationally intensive in the regime of a high number of channels. Many recent works aim to speed simulation time using the Langevin-based Diffusion Approximation (DA). Under this common theoretical approach, each implementation differs in how it handles various numerical difficulties—such as bounding of state variables to [0,1]. Here we review and test a set of the most recently published DA implementations (Goldwyn et al., 2011; Linaro et al., 2011; Dangerfield et al., 2012; Orio and Soudry, 2012; Schmandt and Galán, 2012; Güler, 2013; Huang et al., 2013a), comparing all of them in a set of numerical simulations that assess numerical accuracy and computational efficiency on three different models: (1) the original Hodgkin and Huxley model, (2) a model with faster sodium channels, and (3) a multi-compartmental model inspired in granular cells. We conclude that for a low number of channels (usually below 1000 per simulated compartment) one should use MC—which is the fastest and most accurate method. For a high number of channels, we recommend using the method by Orio and Soudry (2012), possibly combined with the method by Schmandt and Galán (2012) for increased speed and slightly reduced accuracy. Consequently, MC modeling may be the best method for detailed multicompartment neuron models—in which a model neuron with many thousands of channels is segmented into many compartments with a few hundred channels. PMID:25404914
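A minimal sketch of the diffusion approximation for a single population of two-state channels, using Euler-Maruyama with crude clipping of the state variable to [0,1] (one of the boundary treatments the reviewed implementations differ on); rates and sizes are illustrative, not taken from any of the cited models.

```python
import numpy as np

# dx = (alpha*(1-x) - beta*x) dt + sqrt((alpha*(1-x) + beta*x)/N) dW
rng = np.random.default_rng(7)
alpha, beta, N_ch, dt, steps = 2.0, 3.0, 500, 0.01, 20_000

x = alpha / (alpha + beta)            # start at the deterministic fixed point
trace = np.empty(steps)
for i in range(steps):
    drift = alpha * (1 - x) - beta * x
    diff = np.sqrt(max(alpha * (1 - x) + beta * x, 0.0) / N_ch)
    x = x + drift * dt + diff * np.sqrt(dt) * rng.normal()
    x = min(max(x, 0.0), 1.0)         # crude bounding of the state variable
    trace[i] = x

print(trace.mean())                   # fluctuates around alpha/(alpha+beta) = 0.4
```

The noise term scales as 1/sqrt(N_ch), which is why the DA is attractive for large channel counts, while for small counts the discrete Markov chain (Gillespie) simulation remains the accurate reference.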

  13. Comparison Of Simulation Results When Using Two Different Methods For Mold Creation In Moldflow Simulation

    Directory of Open Access Journals (Sweden)

    Kaushikbhai C. Parmar

    2017-04-01

    Full Text Available Simulation gives different results when different methods are used for the same simulation. The Autodesk Moldflow Simulation software provides two different facilities for creating the mold for the simulation of the injection molding process: the mold can be created inside Moldflow, or it can be imported as a CAD file. The aim of this paper is to study the differences in the simulation results, such as mold temperature, part temperature, deflection in different directions, time for the simulation, and coolant temperature, for these two different methods.

  14. A comparison of modifications of the McMaster method for the enumeration of Ascaris suum eggs in pig faecal samples.

    Science.gov (United States)

    Pereckiene, A; Kaziūnaite, V; Vysniauskas, A; Petkevicius, S; Malakauskas, A; Sarkūnas, M; Taylor, M A

    2007-10-21

    The comparative efficacies of seven published McMaster method modifications for faecal egg counting were evaluated on pig faecal samples containing Ascaris suum eggs. Comparisons were made as to the number of samples found to be positive by each of the methods, the total egg counts per gram (EPG) of faeces, the variations in EPG obtained in the samples examined, and the ease of use of each of the methods. Each method was evaluated after the examination of 30 samples of faeces. The positive samples were identified by counting A. suum eggs in one, two and three sections of a newly designed McMaster chamber. The methods compared in the present study were those reported by: I-Henriksen and Aagaard [Henriksen, S.A., Aagaard, K.A., 1976. A simple flotation and McMaster method. Nord. Vet. Med. 28, 392-397]; II-Kassai [Kassai, T., 1999. Veterinary Helminthology. Butterworth-Heinemann, Oxford, 260 pp.]; III and IV-Urquhart et al. [Urquhart, G.M., Armour, J., Duncan, J.L., Dunn, A.M., Jennings, F.W., 1996. Veterinary Parasitology, 2nd ed. Blackwell Science Ltd., Oxford, UK, 307 pp.] (centrifugation and non-centrifugation methods); V and VI-Grønvold [Grønvold, J., 1991. Laboratory diagnoses of helminths common routine methods used in Denmark. In: Nansen, P., Grønvold, J., Bjørn, H. (Eds.), Seminars on Parasitic Problems in Farm Animals Related to Fodder Production and Management. The Estonian Academy of Sciences, Tartu, Estonia, pp. 47-48] (salt solution, and salt and glucose solution); VII-Thienpont et al. [Thienpont, D., Rochette, F., Vanparijs, O.F.J., 1986. Diagnosing Helminthiasis by Coprological Examination. Coprological Examination, 2nd ed. Janssen Research Foundation, Beerse, Belgium, 205 pp.]. The proportion of positive samples when examining a single section ranged from 98.9% (method I) to 51.1% (method VII). Only with methods I and II was there 100% positivity in two out of three of the chambers examined, and FEC obtained using these methods were significantly (pcoefficient

  15. A hybrid method for flood simulation in small catchments combining hydrodynamic and hydrological techniques

    Science.gov (United States)

    Bellos, Vasilis; Tsakiris, George

    2016-09-01

    The study presents a new hybrid method for the simulation of flood events in small catchments. It combines a physically-based two-dimensional hydrodynamic model and the hydrological unit hydrograph theory. Unit hydrographs are derived using the FLOW-R2D model, which is based on the full form of the two-dimensional Shallow Water Equations, solved by a modified McCormack numerical scheme. The method is tested at a small catchment in a suburb of Athens, Greece, for a storm event which occurred in February 2013. The catchment is divided into three friction zones and unit hydrographs of 15 and 30 min are produced. The infiltration process is simulated by the empirical Kostiakov equation and the Green-Ampt model. The results from the implementation of the proposed hybrid method are compared with recorded data at the hydrometric station at the outlet of the catchment and with the results derived from the fully hydrodynamic model FLOW-R2D. It is concluded that, for the case studied, the proposed hybrid method produces results close to those of the fully hydrodynamic simulation at substantially shorter computational time. This finding, if further verified in a variety of case studies, can be useful in devising effective hybrid tools for two-dimensional flood simulations, which lead to accurate and considerably faster results than those achieved by fully hydrodynamic simulations.
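The hydrological half of such a hybrid can be sketched as a discrete convolution of rainfall excess with a unit hydrograph; the UH ordinates and rainfall values below are invented for illustration (the paper derives the UH with the 2D hydrodynamic FLOW-R2D model).

```python
import numpy as np

uh = np.array([0.05, 0.20, 0.35, 0.25, 0.10, 0.05])  # unit hydrograph, m^3/s per mm of excess
rain_excess = np.array([0.0, 2.0, 5.0, 3.0, 0.0])    # rainfall excess, mm per 15-min step

# Outlet hydrograph = discrete convolution of rainfall excess with the UH
q = np.convolve(rain_excess, uh)
print(q.round(2))
```

Because the UH ordinates sum to 1, the total simulated runoff equals the total rainfall excess, which is the mass-conservation property that makes the convolution step so cheap compared with a full 2D simulation.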

  16. The Kreek-McHugh-Schluger-Kellogg scale: a new, rapid method for quantifying substance abuse and its possible applications.

    Science.gov (United States)

    Kellogg, Scott H; McHugh, Pauline F; Bell, Kathy; Schluger, James H; Schluger, Rosemary P; LaForge, K Steven; Ho, Ann; Kreek, Mary Jeanne

    2003-03-01

    The new Kreek-McHugh-Schluger-Kellogg scale ('KMSK scale') is designed to quantify self-exposure to opiates, cocaine, alcohol, and/or tobacco. Each section of the KMSK scale assesses the frequency, amount, and duration of use of a particular substance during the individual's period of greatest consumption. The scale also assesses the mode of use, whether the substance use is current or past, and whether each substance is the substance of choice. The administration time is under 5 min. In an initial validation study of this scale, 100 human subjects were administered the KMSK scale concurrently with the Structured Clinical Interview for DSM-IV (SCID-I DSM-IV version). The sensitivity and specificity were very good for opiates, cocaine, and alcohol use. In addition, the correlations between KMSK scores and the number of SCID-I criteria items met were excellent for opiates and cocaine and good for alcohol use. Nicotine dependence was not assessed in this study as there is no SCID-I nicotine criteria. These preliminary results show that the KMSK scale may have both construct validity similar to that of other established self-report measures and the potential to be an effective screening instrument for the assessment of a lifetime diagnosis of alcohol, opiate, or cocaine dependence. Copyright 2002 Elsevier Science Ireland Ltd.

  17. MVP/GMVP II, MC Codes for Neutron and Photon Transport Calc. based on Continuous Energy and Multigroup Methods

    International Nuclear Information System (INIS)

    2005-01-01

    specified. (6) Variance reduction techniques: The basic variance reduction techniques Russian roulette kill and splitting are implemented. In addition, importance and weight window based on them are available. Path stretching and source biasing can be also used. (7) Estimator: The track length, collision, point and surface crossing estimators are available. The eigenvalue is estimated by the track length, collision and analog estimators for neutron production and neutron balance methods. In the final estimation, the most probable value and its variance are calculated by the maximum likelihood method with the combination of the estimators. (8) Tallies: GMVP calculates the eigenvalue, the particle flux and reaction rates in each spatial region, each energy group and each time bin for each material, each nuclide and each type of reactions, and their variances as the basic statistical parameters. In addition to these physical quantities, MVP calculates the effective microscopic and macroscopic cross sections and the corresponding reaction rates in the specified regions. These quantities are basically tallied for each spatial region but can be tallied for the arbitrary combination of the regions with options. Furthermore, the calculated quantities are output to files and can be then used for the input data of a drawing program mentioned later or a burnup calculation code MVP-BURN. (9) Drawing geometry: The CGVIEW code draws the cross-sectional view on an arbitrary plane and output it on a display or in the postscript or encapsulated postscript form. These functions are useful for checking the calculation geometry. (10) Burnup calculation: The auxiliary code MVP-BURN implemented in the MVP/GMVP system is available for burnup calculations. (11) Parallelism: Parallel calculations can be performed with standard libraries MPI and PVM. (12) Other capabilities: MVP/GMVP has a capability of reactor noise analysis based on simulation of Feynman-alpha experiments. 
B - Methods: MVP and

  18. McStas and Mantid integration

    DEFF Research Database (Denmark)

    Nielsen, T. R.; Markvardsen, A. J.; Willendrup, Peter Kjær

    2015-01-01

    McStas and Mantid are two well-established software frameworks within the neutron scattering community. McStas has been primarily used for simulating the neutron transport mechanisms in instruments, while Mantid has been primarily used for data reduction. We report here the status of our work don...

  19. McGET: A rapid image-based method to determine the morphological characteristics of gravels on the Gobi desert surface

    Science.gov (United States)

    Mu, Yue; Wang, Feng; Zheng, Bangyou; Guo, Wei; Feng, Yiming

    2018-03-01

    The relationship between morphological characteristics (e.g. gravel size, coverage, angularity and orientation) and local geomorphic features (e.g. slope gradient and aspect) of desert has been used to explore the evolution process of Gobi desert. Conventional quantification methods are time-consuming, inefficient and even prove impossible to determine the characteristics of large numbers of gravels. We propose a rapid image-based method to obtain the morphological characteristics of gravels on the Gobi desert surface, which is called the "morphological characteristics gained effectively technique" (McGET). The image of the Gobi desert surface was classified into gravel clusters and background by a machine-learning "classification and regression tree" (CART) algorithm. Then gravel clusters were segmented into individual gravel clasts by separating objects in images using a "watershed segmentation" algorithm. Thirdly, gravel coverage, diameter, aspect ratio and orientation were calculated based on the basic principles of 2D computer graphics. We validated this method with two independent datasets in which the gravel morphological characteristics were obtained from 2728 gravels measured in the field and 7422 gravels measured by manual digitization. Finally, we applied McGET to derive the spatial variation of gravel morphology on the Gobi desert along an alluvial-proluvial fan located in Hami, Xinjiang, China. The validated results show that the mean gravel diameter measured in the field agreed well with that calculated by McGET for large gravels (R2 = 0.89, P < 0.001). Compared to manual digitization, the McGET accuracies for gravel coverage, gravel diameter and aspect ratio were 97%, 83% and 96%, respectively. The orientation distributions calculated were consistent across two different methods. More importantly, McGET significantly shortens the time cost in obtaining gravel morphological characteristics in the field and laboratory. The spatial variation results
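A rough sketch of the measurement step, assuming a binary gravel mask has already been produced by the classification stage: simple 4-connected component labelling (a pure-NumPy stand-in for McGET's watershed segmentation) yields gravel coverage and per-clast equivalent diameters.

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a binary mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        current += 1
        labels[i, j] = current
        queue = deque([(i, j)])
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# Synthetic "gravel" mask: two separated clasts on a 20x20 surface patch.
mask = np.zeros((20, 20), dtype=bool)
mask[2:6, 2:6] = True        # 16-pixel clast
mask[10:14, 12:18] = True    # 24-pixel clast

labels, n = label_regions(mask)
coverage = mask.mean()                                  # gravel coverage
areas = np.array([(labels == k).sum() for k in range(1, n + 1)])
eq_diam = 2 * np.sqrt(areas / np.pi)                    # equivalent diameter [px]
print(n, round(coverage, 3), eq_diam.round(2))
```

Aspect ratio and orientation would additionally require fitting each labelled region (e.g. via its second moments), which this sketch omits.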

  20. The OpenMC Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit

    2013-01-01

    Highlights: ► An open source Monte Carlo particle transport code, OpenMC, has been developed. ► Solid geometry and continuous-energy physics allow high-fidelity simulations. ► Development has focused on high performance and modern I/O techniques. ► OpenMC is capable of scaling up to hundreds of thousands of processors. ► Results on a variety of benchmark problems agree with MCNP5. -- Abstract: A new Monte Carlo code called OpenMC is currently under development at the Massachusetts Institute of Technology as a tool for simulation on high-performance computing platforms. Given that many legacy codes do not scale well on existing and future parallel computer architectures, OpenMC has been developed from scratch with a focus on high performance scalable algorithms as well as modern software design practices. The present work describes the methods used in the OpenMC code and demonstrates the performance and accuracy of the code on a variety of problems.

  1. A Proposal of New Spherical Particle Modeling Method Based on Stochastic Sampling of Particle Locations in Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Kim, Do Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jea Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to its high computational efficiency and user convenience, the implicit method has received attention; however, the implicit methods of previous studies have low accuracy at high packing fractions. In this study, a new implicit method for modeling spherical-particle-distributed media in MC simulation is proposed, which can be used at any packing fraction with high accuracy. A new concept in spherical particle sampling was developed to solve the problems of the previous implicit methods. The sampling method was verified by simulation in infinite and finite media. The results show that particle implicit modeling with the proposed method was performed accurately at all packing fractions. It is expected that the proposed method can be efficiently utilized for spherical-particle-distributed media such as fusion reactor blankets, VHTR reactors, and shielding analyses.
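For contrast with the implicit approach, the sketch below shows a minimal explicit, rejection-based random sequential addition of equal spheres; its acceptance rate collapses as the packing fraction rises, which is the cost the proposed sampling method avoids. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
r, target_n, max_tries = 0.05, 200, 200_000

centers = []
tries = 0
while len(centers) < target_n and tries < max_tries:
    tries += 1
    c = rng.uniform(r, 1 - r, size=3)               # keep sphere inside the unit box
    if all(np.linalg.norm(c - p) >= 2 * r for p in centers):
        centers.append(c)                           # accept: no overlap

packing_fraction = len(centers) * (4 / 3) * np.pi * r**3
print(len(centers), round(packing_fraction, 3))
```

At the modest packing fraction reached here (~0.1) rejections are cheap, but random sequential addition jams well below the packing fractions of interest for pebble-bed and particle-fuel problems, motivating on-the-fly stochastic location sampling instead.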

  2. Numerical methods in simulation of resistance welding

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, Paulo A.F.; Zhang, Wenqi

    2015-01-01

Finite element simulation of resistance welding requires coupling between mechanical, thermal and electrical models. This paper presents the numerical models and their couplings that are utilized in the computer program SORPAS. A mechanical model based on the irreducible flow formulation is utilized...... From a resistance welding point of view, the most essential coupling between the above mentioned models is the heat generation by electrical current due to Joule heating. The interaction between multiple objects is another critical feature of the numerical simulation of resistance welding because it influences...... the contact area and the distribution of contact pressure. The numerical simulation of resistance welding is illustrated by a spot welding example that includes subsequent tensile shear testing...

  3. Virtual Crowds Methods, Simulation, and Control

    CERN Document Server

    Pelechano, Nuria; Allbeck, Jan

    2008-01-01

    There are many applications of computer animation and simulation where it is necessary to model virtual crowds of autonomous agents. Some of these applications include site planning, education, entertainment, training, and human factors analysis for building evacuation. Other applications include simulations of scenarios where masses of people gather, flow, and disperse, such as transportation centers, sporting events, and concerts. Most crowd simulations include only basic locomotive behaviors possibly coupled with a few stochastic actions. Our goal in this survey is to establish a baseline o

  4. Technical note: Comparison of metal-on-metal hip simulator wear measured by gravimetric, CMM and optical profiling methods

    Science.gov (United States)

    Alberts, L. Russell; Martinez-Nogues, Vanesa; Baker Cook, Richard; Maul, Christian; Bills, Paul; Racasan, R.; Stolz, Martin; Wood, Robert J. K.

    2018-03-01

Simulation of wear in artificial joint implants is critical for evaluating implant designs and materials. Traditional protocols employ the gravimetric method to determine the loss of material by weighing the implant components before and after various test intervals and after the completed test. However, the gravimetric method cannot identify the location, area coverage or maximum depth of the wear, and it struggles with proportionally small weight changes in relatively heavy implants. In this study, we compare the gravimetric method with two geometric surface methods: an optical light method (RedLux) and a coordinate measuring method (CMM). We tested ten Adept hips in a simulator for 2 million cycles (MC). Gravimetric and optical measurements were performed at 0.33, 0.66, 1.00, 1.33 and 2 MC; CMM measurements were done before and after the test. A high correlation was found between the gravimetric and optical methods for both heads (R² = 0.997) and cups (R² = 0.96). Both geometric methods (optical and CMM) measured more volume loss than the gravimetric method (for the heads, p = 0.004 (optical) and p = 0.08 (CMM); for the cups, p = 0.01 (optical) and p = 0.003 (CMM)). Two cups recorded negative wear at 2 MC by the gravimetric method, but none did by either the optical method or the CMM. The geometric methods were prone to confounding factors such as surface deformation, while the gravimetric method could be confounded by protein absorption and backside wear. Both geometric methods were able to show the location, area covered and depth of the wear on the bearing surfaces, and to track their changes during the test run, providing significant advantages over solely using the gravimetric method.
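The R² values quoted above come from comparing paired wear measurements across test intervals. As a hedged illustration of that computation (the numbers below are invented placeholders, not the study's data):

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy * sxy / (sxx * syy)

# Hypothetical paired wear volumes (mm^3) at five intervals -- not the study's data.
gravimetric = [0.11, 0.25, 0.38, 0.52, 0.80]
optical = [0.13, 0.27, 0.41, 0.55, 0.84]
r2 = r_squared(gravimetric, optical)
```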

  5. Multi-time scale Climate Informed Stochastic Hybrid Simulation-Optimization Model (McISH model) for Multi-Purpose Reservoir System

    Science.gov (United States)

    Lu, M.; Lall, U.

    2013-12-01

decadal flow simulations are re-initialized every year with updated climate projections to improve the reliability of the operation rules for the next year, within which the seasonal operation strategies are nested. The multi-level structure can be repeated for monthly operation with weekly subperiods to take advantage of evolving weather forecasts and seasonal climate forecasts. As a result of the hierarchical structure, updates and adjustments at sub-seasonal and even weather time scales can be achieved. Given an ensemble of these scenarios, the McISH reservoir simulation-optimization model is able to derive the desired reservoir storage levels, including minimum and maximum, as a function of calendar date, and the associated release patterns. The multi-time scale approach allows adaptive management of water supplies acknowledging the changing risks, meeting the decadal objectives in expected value while controlling near-term and planning-period risk through probabilistic reliability constraints. For the applications presented, the target season is the monsoon season from June to September. The model also includes a monthly flood volume forecast model, based on a Copula density fit to the monthly flow and the flood volume. This is used to guide dynamic allocation of the flood control volume given the forecasts.

  6. Deliverable 6.2 - Software: upgraded MC simulation tools capable of simulating a complete in-beam ET experiment, from the beam to the detected events. Report with the description of one (or few) reference clinical case(s), including the complete patient model and beam characteristics

    CERN Document Server

    The ENVISION Collaboration

    2014-01-01

    Deliverable 6.2 - Software: upgraded MC simulation tools capable of simulating a complete in-beam ET experiment, from the beam to the detected events. Report with the description of one (or few) reference clinical case(s), including the complete patient model and beam characteristics

  7. Collaborative simulation method with spatiotemporal synchronization process control

    Science.gov (United States)

    Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian

    2016-10-01

When designing a complex mechatronic system, such as a high-speed train, it is relatively difficult to effectively simulate the entire system's dynamic behavior because it involves multi-disciplinary subsystems. Currently, the most practical approach to multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupled simulation of a complex mechatronic system across multiple subsystems on different platforms. The method consists of (1) a coupler-based coupling mechanism that defines the interfacing and interaction among subsystems, and (2) a simulation process control algorithm that realizes the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method (1) can be used to simulate subsystem interactions under different simulation conditions in an engineering system, and (2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. The method has been successfully applied in the design and development of China's high-speed trains, demonstrating that it can be applied to a wide range of engineering system design and simulation tasks with improved efficiency and effectiveness.
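The paper's coupler and control algorithm are not specified in this abstract; the sketch below shows only the generic idea behind interface-based coupling with synchronization points: two toy subsystems integrate independently within each macro step, each seeing the partner's value frozen (zero-order hold) since the last exchange. All names and the toy dynamics are ours.

```python
def cosimulate(t_end=1.0, macro_dt=0.01, micro_substeps=10):
    """Jacobi-style co-simulation: the 'subsystems' x' = -y and y' = x are
    advanced independently over each macro step, exchanging interface values
    only at the synchronization points (zero-order hold in between)."""
    x, y = 1.0, 0.0
    h = macro_dt / micro_substeps
    for _ in range(round(t_end / macro_dt)):
        x_held, y_held = x, y          # interface values exchanged at the sync point
        for _ in range(micro_substeps):
            x += -y_held * h           # subsystem A, fine internal time step
        for _ in range(micro_substeps):
            y += x_held * h            # subsystem B, fine internal time step
    return x, y

x_end, y_end = cosimulate()            # exact solution at t = 1: (cos 1, sin 1)
```

The coupling error is governed by the macro step, not the micro steps, which is precisely the synchronization issue the paper addresses.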

  8. Math-Based Simulation Tools and Methods

    National Research Council Canada - National Science Library

    Arepally, Sudhakar

    2007-01-01

    .... The following methods are reviewed: matrix operations, ordinary and partial differential system of equations, Lagrangian operations, Fourier transforms, Taylor Series, Finite Difference Methods, implicit and explicit finite element...

  9. A method for ensemble wildland fire simulation

    Science.gov (United States)

    Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain

    2011-01-01

    An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...

  10. On Analyzing LDPC Codes over Multiantenna MC-CDMA System

    Directory of Open Access Journals (Sweden)

    S. Suresh Kumar

    2014-01-01

    Full Text Available Multiantenna multicarrier code-division multiple access (MC-CDMA technique has been attracting much attention for designing future broadband wireless systems. In addition, low-density parity-check (LDPC code, a promising near-optimal error correction code, is also being widely considered in next generation communication systems. In this paper, we propose a simple method to construct a regular quasicyclic low-density parity-check (QC-LDPC code to improve the transmission performance over the precoded MC-CDMA system with limited feedback. Simulation results show that the coding gain of the proposed QC-LDPC codes is larger than that of the Reed-Solomon codes, and the performance of the multiantenna MC-CDMA system can be greatly improved by these QC-LDPC codes when the data rate is high.
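A regular QC-LDPC parity-check matrix is built by expanding a small exponent (base) matrix into circulant permutation blocks. A minimal sketch of that expansion, using an invented base matrix rather than the authors' construction:

```python
def circulant(size, shift):
    """size x size identity matrix cyclically shifted by `shift` columns."""
    return [[1 if (c - r) % size == shift else 0 for c in range(size)]
            for r in range(size)]

def qc_ldpc_h(exponents, z):
    """Expand an exponent (base) matrix into a quasi-cyclic parity-check
    matrix: entry e >= 0 becomes the z x z circulant permutation matrix
    I(e); e = -1 becomes the z x z all-zero matrix."""
    rows = []
    for erow in exponents:
        blocks = [circulant(z, e) if e >= 0 else [[0] * z for _ in range(z)]
                  for e in erow]
        for r in range(z):
            rows.append([b[r][c] for b in blocks for c in range(z)])
    return rows

# Small illustrative base matrix (not from the paper), lifting factor z = 4
H = qc_ldpc_h([[0, 1, -1, 2],
               [2, -1, 0, 1]], z=4)
```

Each block row contributes exactly one 1 per circulant, so row and column weights are set directly by the base matrix, which is what makes QC-LDPC codes simple to construct and encode.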

  11. Designing new guides and instruments using McStas

    CERN Document Server

    Farhi, E; Wildes, A R; Ghosh, R; Lefmann, K

    2002-01-01

With the increasing complexity of modern neutron-scattering instruments, the need for powerful tools to optimize their geometry and physical performance (flux, resolution, divergence, etc.) has become essential. As the usual analytical methods reach their limit of validity in the description of fine effects, the use of Monte Carlo simulations, which can handle them, has become widespread. The McStas program was developed at Risø National Laboratory in order to provide neutron-scattering instrument scientists with an efficient and flexible tool for building Monte Carlo simulations of guides, neutron optics and instruments. To date, the McStas package has been extensively used at the Institut Laue-Langevin, Grenoble, France, for various studies including cold and thermal guides with ballistic geometry, diffractometers, triple-axis, backscattering and time-of-flight spectrometers. In this paper, we present some simulation results concerning different guide geometries that may be used in the future at th...

  12. Atomistic Monte Carlo simulation of lipid membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.

  13. Interactive methods for exploring particle simulation data

    Energy Technology Data Exchange (ETDEWEB)

    Co, Christopher S.; Friedman, Alex; Grote, David P.; Vay, Jean-Luc; Bethel, E. Wes; Joy, Kenneth I.

    2004-05-01

In this work, we visualize high-dimensional particle simulation data using a suite of scatter-plot-based visualizations coupled with interactive selection tools. We use traditional 2D and 3D projection scatter plots as well as a novel oriented-disk rendering style to convey various information about the data. Interactive selection tools allow physicists to manually classify "interesting" sets of particles that are highlighted across multiple, linked views of the data. The power of our application is the ability to relate new visual representations of the simulation data to traditional, well-understood visualizations. This approach supports interactive exploration of the high-dimensional space while promoting discovery of new particle behavior.

  14. Hospital Registration Process Reengineering Using Simulation Method

    Directory of Open Access Journals (Sweden)

    Qiang Su

    2010-01-01

Full Text Available With increasing competition, many healthcare organizations have undergone tremendous reform in the last decade, aiming to increase efficiency, decrease waste, and reshape the way that care is delivered. This study focuses on improving the operational efficiency of a hospital's registration process. The factors related to operational efficiency, including the service process, queue strategy, and queue parameters, were explored systematically and illustrated with a case study. Guided by the principles of business process reengineering (BPR), a simulation approach was employed for process redesign and performance optimization. As a result, the queue strategy was changed from multiple queues with multiple servers to a single queue with multiple servers and a prepare queue. Furthermore, through a series of simulation experiments, the length of the prepare queue and the corresponding registration process efficiency were quantitatively evaluated and optimized.
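The single-queue, multiple-server configuration discussed above can be evaluated with a small simulation. A sketch under illustrative arrival and service rates (ours, not the hospital's data); for these rates the analytic Erlang C mean queue wait is roughly 0.5 time units, which the simulation should approach:

```python
import random

def mmc_wait(arrival_rate, service_rate, servers, n_customers=20000, seed=7):
    """Average waiting time in queue for a single-queue, multi-server (M/M/c)
    system, simulated by tracking when each server next becomes free."""
    rng = random.Random(seed)
    free_at = [0.0] * servers           # time at which each server becomes idle
    t = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)           # next arrival
        i = min(range(servers), key=free_at.__getitem__)  # earliest-free server
        start = max(t, free_at[i])                   # wait if all servers busy
        total_wait += start - t
        free_at[i] = start + rng.expovariate(service_rate)
    return total_wait / n_customers

wq = mmc_wait(arrival_rate=3.0, service_rate=1.0, servers=4)
```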

  15. Numerical simulation methods for electron and ion optics

    International Nuclear Information System (INIS)

    Munro, Eric

    2011-01-01

    This paper summarizes currently used techniques for simulation and computer-aided design in electron and ion beam optics. Topics covered include: field computation, methods for computing optical properties (including Paraxial Rays and Aberration Integrals, Differential Algebra and Direct Ray Tracing), simulation of Coulomb interactions, space charge effects in electron and ion sources, tolerancing, wave optical simulations and optimization. Simulation examples are presented for multipole aberration correctors, Wien filter monochromators, imaging energy filters, magnetic prisms, general curved axis systems and electron mirrors.

  16. Benchmarking HRA methods against different NPP simulator data

    International Nuclear Information System (INIS)

    Petkov, Gueorgui; Filipov, Kalin; Velev, Vladimir; Grigorov, Alexander; Popov, Dimiter; Lazarov, Lazar; Stoichev, Kosta

    2008-01-01

The paper presents both international and Bulgarian experience in assessing HRA methods and their underlying models, and approaches to their validation and verification by benchmarking HRA methods against different NPP simulator data. The organization, status, methodology and outlook of the studies are described.

  17. Math-Based Simulation Tools and Methods

    National Research Council Canada - National Science Library

    Arepally, Sudhakar

    2007-01-01

    ...: HMMWV 30-mph Rollover Test, Soldier Gear Effects, Occupant Performance in Blast Effects, Anthropomorphic Test Device, Human Models, Rigid Body Modeling, Finite Element Methods, Injury Criteria...

  18. Particle-transport simulation with the Monte Carlo method

    International Nuclear Information System (INIS)

    Carter, L.L.; Cashwell, E.D.

    1975-01-01

    Attention is focused on the application of the Monte Carlo method to particle transport problems, with emphasis on neutron and photon transport. Topics covered include sampling methods, mathematical prescriptions for simulating particle transport, mechanics of simulating particle transport, neutron transport, and photon transport. A literature survey of 204 references is included. (GMT)
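One of the standard sampling methods such a survey covers is free-flight distance sampling: inverting the exponential attenuation CDF gives d = -ln(ξ)/Σt for uniform ξ. A minimal sketch (variable names are ours):

```python
import math
import random

def sample_free_paths(sigma_t, n=100000, seed=3):
    """Sample free-flight distances from p(d) = sigma_t * exp(-sigma_t * d)
    by CDF inversion: d = -ln(1 - xi) / sigma_t, xi uniform in [0, 1)."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / sigma_t for _ in range(n)]

paths = sample_free_paths(sigma_t=2.0)
mean_path = sum(paths) / len(paths)    # should approach 1 / sigma_t = 0.5
```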

  19. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results.
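The convolution integral idea is to evaluate the numerical substructure's response as the convolution of its impulse response with the force history. A discrete sketch of that idea, using a damped single-degree-of-freedom oscillator with illustrative parameters (not the paper's two-story frame):

```python
import math

def impulse_response(m, wn, zeta, dt, n):
    """Unit impulse response of a damped SDOF oscillator."""
    wd = wn * math.sqrt(1.0 - zeta ** 2)
    return [math.exp(-zeta * wn * t) * math.sin(wd * t) / (m * wd)
            for t in (i * dt for i in range(n))]

def convolve_response(h, force, dt):
    """Duhamel/convolution integral evaluated as a discrete sum:
    y[n] = sum_k h[k] * f[n - k] * dt."""
    n = len(force)
    return [sum(h[k] * force[i - k] for k in range(i + 1)) * dt for i in range(n)]

dt = 0.01
h = impulse_response(m=1.0, wn=2 * math.pi, zeta=0.05, dt=dt, n=200)
impulse = [1.0 / dt] + [0.0] * 199     # discrete approximation of a unit impulse
y = convolve_response(h, impulse, dt)  # response to an impulse is h itself
```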

  20. Simulation of tunneling construction methods of the Cisumdawu toll road

    Science.gov (United States)

    Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.

    2017-11-01

Simulation can be used as a tool for planning and analyzing a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it to other methods based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide useful information for the simulation, as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation shows the duration of the project, derived from duration models of each work task that are based on a literature review, machine productivity, and several assumptions. The results of the simulation also show the total cost of the project, modeled from construction and building unit-cost journals and the online websites of local and international suppliers. The analysis of the advantages and disadvantages of the method was conducted based on its productivity, waste, and cost. The simulation concluded that the total cost of this operation is about Rp. 900,437,004,599 and the total duration of the tunneling operation is 653 days. The results of the simulation will be used as a recommendation to the contractor before the implementation of the selected tunneling operation.
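A CYCLONE-style cyclic operation can be duration-estimated by Monte Carlo sampling of task durations over repeated excavation rounds. A sketch with invented triangular durations (illustrative only, not the project's data or its CYCLONE model):

```python
import random

def simulate_tunnel_duration(n_rounds=200, n_runs=1000, seed=13):
    """Monte Carlo sketch of a cyclic tunneling operation: each excavation
    round runs drill -> blast -> muck -> support, with triangular task
    durations given as (low, mode, high) in hours. Returns the mean total
    duration over n_runs replications."""
    rng = random.Random(seed)
    tasks = [(2.0, 3.0, 5.0), (0.5, 1.0, 2.0), (3.0, 4.0, 6.0), (1.0, 2.0, 4.0)]
    totals = []
    for _ in range(n_runs):
        total = 0.0
        for _ in range(n_rounds):
            for lo, mode, hi in tasks:
                total += rng.triangular(lo, hi, mode)
        totals.append(total)
    return sum(totals) / n_runs

mean_hours = simulate_tunnel_duration()   # expected mean: 200 * 67/6 ~ 2233 h
```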

  1. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
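The core of such a simulation is scoring the same event stream with different interval sampling rules and comparing the estimates to the true occurrence fraction. A simplified sketch (independent per-tick behavior states rather than the study's realistically distributed events):

```python
import random

def interval_sampling_error(event_prob=0.3, n_intervals=1000, samples_per=10, seed=5):
    """Compare momentary time sampling (MTS) and partial-interval recording
    (PIR) estimates against the true fraction of time a behavior occurs."""
    rng = random.Random(seed)
    mts_hits = pir_hits = true_on = 0
    for _ in range(n_intervals):
        ticks = [rng.random() < event_prob for _ in range(samples_per)]
        true_on += sum(ticks)
        mts_hits += ticks[-1]          # MTS: score only the interval's final moment
        pir_hits += any(ticks)         # PIR: score if the behavior occurred at all
    total = n_intervals * samples_per
    return true_on / total, mts_hits / n_intervals, pir_hits / n_intervals

true_frac, mts_est, pir_est = interval_sampling_error()
```

Under these assumptions MTS is approximately unbiased, while PIR systematically overestimates occurrence, consistent with the known characteristics the abstract mentions.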

  2. The frontal method in hydrodynamics simulations

    Science.gov (United States)

    Walters, R.A.

    1980-01-01

The frontal solution method has proven to be an effective means of solving the matrix equations resulting from the application of the finite element method to a variety of problems. In this study, several versions of the frontal method were compared in efficiency on several hydrodynamics problems. Three basic modifications were shown to be of value: (1) elimination of equations with boundary conditions beforehand; (2) modification of the pivoting procedures to allow dynamic management of the equation size; and (3) storage of the eliminated equations in a vector. These modifications are sufficiently general to be applied to other classes of problems. © 1980.

  3. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling
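Subcycling, one of the surveyed techniques, advances fast degrees of freedom with many small steps for each large step of the slow ones. A toy sketch (a slow decaying mode alongside a fast oscillator; illustrative only, not a plasma model):

```python
import math

def subcycled_integrate(t_end=1.0, dt_slow=0.01, n_sub=20):
    """Subcycling sketch: the fast oscillator (y, z) takes n_sub sub-steps
    (semi-implicit Euler) for every single step of the slow variable x.
        x' = -x                     (slow decay)
        y' = z,  z' = -(w**2) * y   (fast oscillation, w = 50)
    """
    x, y, z = 1.0, 1.0, 0.0
    w = 50.0
    dt_fast = dt_slow / n_sub
    for _ in range(int(round(t_end / dt_slow))):
        x += -x * dt_slow                  # one slow step
        for _ in range(n_sub):             # n_sub stable fast sub-steps
            z += -(w * w) * y * dt_fast
            y += z * dt_fast
    return x, y, z

x, y, z = subcycled_integrate()
```

The fast sub-step keeps w*dt_fast small for stability, while the slow variable takes 20 times fewer steps; this separation is the efficiency gain the survey describes.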

  4. Natural tracer test simulation by stochastic particle tracking method

    International Nuclear Information System (INIS)

    Ackerer, P.; Mose, R.; Semra, K.

    1990-01-01

Stochastic particle tracking methods are well adapted to 3D transport simulations where the discretization requirements of other methods usually cannot be satisfied, but they need a very accurate approximation of the velocity field. The described code is based on the mixed hybrid finite element method (MHFEM) to calculate the piezometric head and velocity field. The random-walk method is used to simulate mass transport. The main advantages of the MHFEM over FD or FE are the simultaneous calculation of pressure and velocity, which are both treated as unknowns; the possibility of interpolating velocities everywhere; and the continuity of the normal component of the velocity vector from one element to another. For these reasons, the MHFEM is well suited to particle tracking methods. After a general description of the numerical methods, the model is used to simulate the observations made during the Twin Lake Tracer Test in 1983. A good match is found between observed and simulated heads and concentrations. (Author) (12 refs., 4 figs.)
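The random-walk step underlying such codes moves each particle by an advective displacement plus a Gaussian dispersive kick. A 1-D sketch (our parameter values) whose ensemble statistics can be checked against the advection-dispersion solution, mean = v·t and variance = 2·D·t:

```python
import math
import random

def random_walk_transport(n=20000, steps=50, dt=0.2, v=1.0, D=0.05, seed=11):
    """1-D random-walk transport: per step, x += v*dt + sqrt(2*D*dt)*N(0,1).
    Returns the ensemble mean, variance, and elapsed time."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    xs = [0.0] * n
    for _ in range(steps):
        xs = [x + v * dt + sigma * rng.gauss(0.0, 1.0) for x in xs]
    t = steps * dt
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    return mean, var, t

mean, var, t = random_walk_transport()   # expect mean ~ 10.0, var ~ 1.0 at t = 10
```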

  5. Factorization method for simulating QCD at finite density

    International Nuclear Information System (INIS)

    Nishimura, Jun

    2003-01-01

    We propose a new method for simulating QCD at finite density. The method is based on a general factorization property of distribution functions of observables, and it is therefore applicable to any system with a complex action. The so-called overlap problem is completely eliminated by the use of constrained simulations. We test this method in a Random Matrix Theory for finite density QCD, where we are able to reproduce the exact results for the quark number density. (author)

  6. Evaluation of full-scope simulator testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Feher, M P; Moray, N; Senders, J W; Biron, K [Human Factors North Inc., Toronto, ON (Canada)

    1995-03-01

    This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs.

  7. Evaluation of full-scope simulator testing methods

    International Nuclear Information System (INIS)

    Feher, M.P.; Moray, N.; Senders, J.W.; Biron, K.

    1995-03-01

    This report discusses the use of full scope nuclear power plant simulators in licensing examinations for Unit First Operators of CANDU reactors. The existing literature is reviewed, and an annotated bibliography of the more important sources provided. Since existing methods are judged inadequate, conceptual bases for designing a system for licensing are discussed, and a method proposed which would make use of objective scoring methods based on data collection in full-scope simulators. A field trial of such a method is described. The practicality of such a method is critically discussed and possible advantages of subjective methods of evaluation considered. (author). 32 refs., 1 tab., 4 figs

  8. Constraint methods that accelerate free-energy simulations of biomolecules.

    Science.gov (United States)

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
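A common concrete form of the spring-like restraints mentioned above is the flat-bottom harmonic penalty: no bias while the coordinate stays within a tolerance of the target (so noisy knowledge is not over-enforced), quadratic outside. A generic sketch, not any specific package's restraint API:

```python
def restraint_energy_force(x, x0, k, flat_bottom=0.0):
    """Flat-bottom harmonic restraint on a scalar coordinate x:
    zero penalty within `flat_bottom` of the target x0, harmonic outside.
    Returns (energy, force) with force = -dE/dx."""
    d = abs(x - x0)
    excess = max(0.0, d - flat_bottom)
    energy = 0.5 * k * excess ** 2
    if d > flat_bottom:
        sign = 1.0 if x > x0 else -1.0
        force = -k * excess * sign      # pulls x back toward the flat region
    else:
        force = 0.0
    return energy, force
```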

  9. New method of fast simulation for a hadron calorimeter response

    International Nuclear Information System (INIS)

    Kul'chitskij, Yu.; Sutiak, J.; Tokar, S.; Zenis, T.

    2003-01-01

In this work we present a new method for fast Monte Carlo simulation of a hadron calorimeter response. It is based on a three-dimensional parameterization of the hadronic shower obtained from ATLAS TILECAL test beam data and GEANT simulations. A new approach to including the longitudinal fluctuations of the hadronic shower is described. The results of the fast simulation are in good agreement with the TILECAL experimental data
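Longitudinal shower profiles are commonly parameterized by a gamma-distribution shape; a sketch of such a profile with illustrative parameter values (ours, not the TILECAL fit):

```python
import math

def longitudinal_profile(t, alpha=2.5, beta=0.5):
    """Gamma-shape parameterization of longitudinal shower energy deposition,
    dE/dt = beta * (beta*t)**(alpha-1) * exp(-beta*t) / Gamma(alpha),
    normalized to unit total energy; t is depth in suitable units."""
    return beta * (beta * t) ** (alpha - 1) * math.exp(-beta * t) / math.gamma(alpha)

# The shower maximum sits at t_max = (alpha - 1) / beta
t_max = (2.5 - 1.0) / 0.5
```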

  10. Daylighting simulation: methods, algorithms, and resources

    Energy Technology Data Exchange (ETDEWEB)

    Carroll, William L.

    1999-12-01

This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and component properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, driven in part by other forces: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources with direct applicability to the small daylighting analysis community, much of which is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations; this in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but

  11. ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code

    Directory of Open Access Journals (Sweden)

    Jaafar EL Bakkali

    2016-07-01

Full Text Available OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronics problems: k-eigenvalue criticality fission source problems and external fixed fission source problems. OpenMC does not have a graphical user interface; our Java-based application, ERSN-OpenMC, provides one. The main purpose of this application is to give users an easy-to-use and flexible graphical interface to build better and faster simulations, with less effort and greater reliability. Additionally, this graphical tool offers several features, such as the ability to automate the building process of OpenMC and its related libraries, and users are given the freedom to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.

  12. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation

    International Nuclear Information System (INIS)

    Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin

    2015-01-01

    Highlights: • The subdivision combines both advantages of uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key aspects of dominating Monte Carlo particle transport simulation performance for large-scale whole reactor models. In such cases, spatial subdivision is an easily-established and high-potential method to improve the run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a conception of quality factor based on a cost estimation function is derived to evaluate the qualities of the subdivision schemes. Only the scheme with optimal quality factor will be chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes
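The essence of recursive subdivision with a cost criterion can be shown in one dimension: keep splitting a cell while a cost proxy says splitting pays off. Here the per-cell object count stands in, crudely, for the paper's quality factor; the structure and names are ours, not SuperMC's.

```python
def subdivide(objects, lo=0.0, hi=1.0, max_per_cell=2, depth=0, max_depth=8):
    """Recursive 1-D spatial subdivision sketch: split a cell while it
    overlaps more than `max_per_cell` object intervals (a crude cost proxy
    for a traversal-cost quality factor). Returns leaf cells as
    (lo, hi, n_overlapping_objects) tuples."""
    inside = [o for o in objects if o[1] > lo and o[0] < hi]
    if len(inside) <= max_per_cell or depth >= max_depth:
        return [(lo, hi, len(inside))]
    mid = 0.5 * (lo + hi)
    return (subdivide(inside, lo, mid, max_per_cell, depth + 1, max_depth) +
            subdivide(inside, mid, hi, max_per_cell, depth + 1, max_depth))

# Illustrative object intervals on [0, 1]
cells = subdivide([(0.0, 0.1), (0.05, 0.2), (0.5, 0.6), (0.55, 0.7), (0.58, 0.9)])
```

During particle tracking, a point query then touches only the few objects overlapping its cell instead of every CSG object, which is the source of the speedups the abstract reports.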

  13. A particle-based method for granular flow simulation

    KAUST Repository

    Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua

    2012-01-01

We present a new particle-based method for granular flow simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the momentum governing equation to handle the friction of granular materials. A viscosity force is also added to simulate dynamic friction, smoothing the velocity field and further maintaining simulation stability. Benefiting from the Lagrangian nature of the SPH method, large flow deformations can be handled easily and naturally. In addition, a signed distance field is employed to enforce the solid boundary condition. The experimental results show that the proposed method is effective and efficient for handling the flow of granular materials, and different kinds of granular behaviors can be simulated by adjusting just one parameter. © 2012 Science China Press and Springer-Verlag Berlin Heidelberg.
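
The SPH machinery underlying such a method can be illustrated with the standard summation-density step; the kernel, particle spacing, and mass below are generic textbook choices, not the paper's specific elastic-stress formulation:

```python
import math

def cubic_spline_w(r, h):
    """Standard 2-D cubic spline SPH kernel, normalization 10/(7*pi*h^2)."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, mass, h):
    """Summation density: rho_i = sum_j m * W(|x_i - x_j|, h)."""
    rho = []
    for xi, yi in positions:
        s = 0.0
        for xj, yj in positions:
            r = math.hypot(xi - xj, yi - yj)
            s += mass * cubic_spline_w(r, h)
        rho.append(s)
    return rho

# particles on a small regular grid; interior particles see more
# neighbors than corner particles, so their density is higher
pts = [(i * 0.1, j * 0.1) for i in range(5) for j in range(5)]
rho = sph_density(pts, mass=0.01, h=0.15)
print(min(rho), max(rho))
```

Stress, viscosity, and boundary terms would be evaluated with kernel gradients over the same neighbor sums.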

  14. A simple method for potential flow simulation of cascades

    Indian Academy of Sciences (India)

    vortex panel method to simulate potential flow in cascades is presented. The cascade ... The fluid loading on the blades, such as the normal force and pitching moment, may ... of such discrete infinite array singularities along the blade surface.

  16. Comparing three methods for participatory simulation of hospital work systems

    DEFF Research Database (Denmark)

    Broberg, Ole; Andersen, Simone Nyholm

Summative Statement: This study compared three participatory simulation methods using different simulation objects: a low-resolution table-top setup using Lego figures, full-scale mock-ups, and blueprints using Lego figures. It was concluded that the three objects differ in fidelity and affordance...... scenarios using the objects. Results: Full-scale mock-ups significantly addressed the local space and technology/tool elements of a work system. In contrast, the table-top simulation object addressed the organizational issues of the future work system. The blueprint-based simulation addressed

  17. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

The Monte Carlo method is a stochastic statistical method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of a vegetation canopy in the field of visible remote sensing. The random interaction process between photons and the forest canopy was designed using the Monte Carlo method.
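
A stripped-down version of such a photon-canopy random process might look as follows; the layer model, interaction probability, and scattering albedo are invented for illustration and are far simpler than a real canopy BRDF model:

```python
import random

def canopy_reflectance(n_photons, p_interact, albedo, p_up, seed=1):
    """Toy Monte Carlo photon tracing through 10 horizontally uniform
    canopy layers.  At each layer a photon may interact; an interacting
    photon is absorbed with probability 1 - albedo, otherwise it is
    scattered upward (probability p_up) or downward."""
    rng = random.Random(seed)
    reflected = 0
    for _ in range(n_photons):
        layer, direction = 0, +1          # +1 = downward, -1 = upward
        while 0 <= layer < 10:
            if rng.random() < p_interact:
                if rng.random() > albedo:
                    break                 # absorbed inside the canopy
                direction = -1 if rng.random() < p_up else +1
            layer += direction
        else:                             # left the canopy without absorption
            if layer < 0:
                reflected += 1            # escaped upward: reflected
    return reflected / n_photons

r = canopy_reflectance(20000, p_interact=0.3, albedo=0.5, p_up=0.5)
print(r)
```

A directional (BRDF) estimate would additionally bin escaping photons by exit angle for each illumination direction.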

  18. Simulation methods of nuclear electromagnetic pulse effects in integrated circuits

    International Nuclear Information System (INIS)

    Cheng Jili; Liu Yuan; En Yunfei; Fang Wenxiao; Wei Aixiang; Yang Yuanzhen

    2013-01-01

This paper first introduces methods for computing the response of a transmission line (TL) illuminated by an electromagnetic pulse (EMP), including the finite-difference time-domain (FDTD) and transmission line matrix (TLM) methods; it then discusses the feasibility of electromagnetic topology (EMT) for simulating nuclear electromagnetic pulse (NEMP) effects in ICs; finally, combined with the TL response methods, a new method is proposed to simulate a transmission line in an IC illuminated by NEMP. (authors)

  19. An introduction to computer simulation methods applications to physical systems

    CERN Document Server

    Gould, Harvey; Christian, Wolfgang

    2007-01-01

Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...

  20. Motion simulation of hydraulic driven safety rod using FSI method

    International Nuclear Information System (INIS)

    Jung, Jaeho; Kim, Sanghaun; Yoo, Yeonsik; Cho, Yeonggarp; Kim, Jong In

    2013-01-01

A hydraulic driven safety rod, one type of reactivity control mechanism, is being developed by the Division for Reactor Mechanical Engineering, KAERI. In this paper the motion of this rod is simulated by the fluid structure interaction (FSI) method before manufacturing, for design verification and pump sizing. The simulation is done in a CFD domain with a UDF. The pressure drop changes only slightly with flow rate, which means that the pressure drop is mainly determined by the weight of the moving part. The simulated velocity of the piston is linearly proportional to the flow rate, so the pump can be sized easily according to the rising and drop time requirements of the safety rod using the simulation results

  1. A simulation method for lightning surge response of switching power

    International Nuclear Information System (INIS)

    Wei, Ming; Chen, Xiang

    2013-01-01

In order to meet the need of protection design against lightning surges, a prediction method for the lightning electromagnetic pulse (LEMP) response, based on system identification, is presented. Surge injection experiments on a switching power supply were conducted, and the input and output data were sampled, de-noised and de-trended. In addition, the model of the energy coupling transfer function was obtained by the system identification method. Simulation results show that the system identification method can predict the surge response of a linear circuit well. The method proposed in the paper provides a convenient and effective technique for simulating lightning effects.
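
The identification step can be sketched with a first-order ARX model fitted by least squares; the "true" system coefficients, input, and noise level below are made up for the demonstration and are much simpler than a real coupling transfer function:

```python
import numpy as np

# generate input/output data from an assumed "true" first-order system
rng = np.random.default_rng(4)
u = rng.standard_normal(500)                 # injected surge (white input)
a_true, b_true = 0.8, 0.5
y = np.zeros(500)
for n in range(1, 500):
    y[n] = a_true * y[n - 1] + b_true * u[n] + 0.01 * rng.standard_normal()

# least-squares ARX fit: y[n] ≈ a*y[n-1] + b*u[n]
X = np.column_stack([y[:-1], u[1:]])
a_est, b_est = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(a_est, b_est)  # close to the true coefficients 0.8 and 0.5
```

With the identified model, the response to a new surge waveform is predicted by running the difference equation forward.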

  2. Real time simulation method for fast breeder reactors dynamics

    International Nuclear Information System (INIS)

    Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.

    1985-01-01

Multi-purpose real time simulator models with suitable plant dynamics were developed; these models can be used not only in training operators but also in designing control systems, operation sequences and many other items which must be studied for the development of new type reactors. The prototype fast breeder reactor ''Monju'' is taken as an example. Analysis is made of various factors affecting the accuracy and computer load of its dynamic simulation. A method is presented which determines the optimum number of nodes in distributed systems and the optimum time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real time dynamics models of fast breeder reactors. (author)

  3. Simulation of plume dynamics by the Lattice Boltzmann Method

    Science.gov (United States)

    Mora, Peter; Yuen, David A.

    2017-09-01

The Lattice Boltzmann Method (LBM) is a semi-microscopic method to simulate fluid mechanics by modelling distributions of particles moving and colliding on a lattice. We present 2-D simulations using the LBM of a fluid in a rectangular box being heated from below, and cooled from above, with a Rayleigh number of Ra = 10⁸, similar to current estimates of the Earth's mantle, and a Prandtl number of 5000. At this Prandtl number, the flow is found to be in the non-inertial regime, where the inertial terms, denoted I, satisfy I ≪ 1. Hence, the simulations presented lie within the regime of relevance for geodynamical problems. We obtain narrow upwelling plumes with mushroom heads and chutes of downwelling fluid, as expected of a flow in the non-inertial regime. The method developed demonstrates that the LBM has great potential for simulating thermal convection and plume dynamics relevant to geodynamics, albeit with some limitations.
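
A minimal D2Q9 BGK lattice Boltzmann step looks like the following; this sketch omits the thermal coupling needed for the convection runs described above and only demonstrates the collide-and-stream core, verifying that mass is conserved:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    """One BGK collide-and-stream step with periodic boundaries."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f - (f - equilibrium(rho, ux, uy)) / tau           # collide
    for i in range(9):                                      # stream
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

nx = ny = 32
rho0 = np.ones((nx, ny)); rho0[16, 16] = 1.1               # small density bump
f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
mass0 = f.sum()
for _ in range(20):
    f = step(f)
print(abs(f.sum() - mass0))  # mass conserved up to round-off
```

A convection simulation like the one above would add a second distribution for temperature and a buoyancy forcing term in the collision step.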

  4. Method of simulating dose reduction for digital radiographic systems

    International Nuclear Information System (INIS)

    Baath, M.; Haakansson, M.; Tingberg, A.; Maansson, L. G.

    2005-01-01

The optimisation of image quality vs. radiation dose is an important task in medical imaging. To obtain maximum validity of the optimisation, it must be based on clinical images. Images at different dose levels can then be obtained either by collecting patient images at the dose levels to be investigated - requiring additional exposures and permission from an ethics committee - or by manipulating images to simulate different dose levels. The aim of the present work was to develop a method of simulating dose reduction for digital radiographic systems. The method uses information about the detective quantum efficiency and noise power spectrum at the original and simulated dose levels to create an image containing filtered noise. When added to the original image, this results in an image with noise which, in terms of frequency content, agrees with the noise present in an image collected at the simulated dose level. To increase the validity, the method takes local dose variations in the original image into account. The method was tested on a computed radiography system and was shown to produce images with noise behaviour similar to that of images actually collected at the simulated dose levels. The method can, therefore, be used to modify an image collected at one dose level so that it simulates an image of the same object collected at any lower dose level. (authors)
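
The core of such a dose-reduction simulation, adding frequency-filtered noise whose variance matches the target dose level, can be sketched as below. The 1/dose variance scaling and the simple low-pass NPS shape are simplifying assumptions; the actual method uses the system's measured DQE and NPS and accounts for local dose variations:

```python
import numpy as np

def simulate_dose_reduction(image, dose_fraction, seed=0):
    """Add frequency-filtered noise so `image` (acquired at dose D)
    mimics an acquisition at dose_fraction * D.  For quantum-limited
    noise, variance scales as 1/dose, so the extra variance needed is
    var * (1/dose_fraction - 1).  The noise power spectrum is shaped
    here by an assumed low-pass filter in the Fourier domain."""
    rng = np.random.default_rng(seed)
    extra_std = np.sqrt(image.var() * (1.0 / dose_fraction - 1.0))
    white = rng.standard_normal(image.shape)
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    nps_filter = 1.0 / (1.0 + (fx**2 + fy**2) / 0.05)   # assumed NPS shape
    noise = np.fft.ifft2(np.fft.fft2(white) * nps_filter).real
    noise *= extra_std / noise.std()
    return image + noise

# flat test "image" with pure quantum-like noise
img = np.full((64, 64), 100.0) + np.random.default_rng(1).normal(0, 2, (64, 64))
low = simulate_dose_reduction(img, dose_fraction=0.25)
print(img.std(), low.std())  # simulated quarter dose is noisier
```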

  5. A tool for simulating parallel branch-and-bound methods

    Science.gov (United States)

    Golubeva, Yana; Orlov, Yury; Posypkin, Mikhail

    2016-01-01

The Branch-and-Bound method is known as one of the most powerful but very resource consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.

  6. A tool for simulating parallel branch-and-bound methods

    Directory of Open Access Journals (Sweden)

    Golubeva Yana

    2016-01-01

Full Text Available The Branch-and-Bound method is known as one of the most powerful but very resource consuming global optimization methods. Parallel and distributed computing can efficiently cope with this issue. The major difficulty in the parallel B&B method is the need for dynamic load redistribution. Therefore the design and study of load balancing algorithms is a separate and very important research topic. This paper presents a tool for simulating the parallel Branch-and-Bound method. The simulator allows one to run load balancing algorithms with various numbers of processors, sizes of the search tree, and characteristics of the supercomputer's interconnect, thereby fostering deep study of load distribution strategies. The process of resolution of the optimization problem by the B&B method is replaced by a stochastic branching process. Data exchanges are modeled using the concept of logical time. The user-friendly graphical interface to the simulator provides efficient visualization and convenient performance analysis.
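
The idea of replacing the real B&B run by a random process with logical time can be caricatured as follows; the exponential task costs, the two scheduling policies, and the makespan metric are illustrative choices, not the simulator's actual model:

```python
import random

def simulate(n_tasks, n_procs, dynamic, seed=3):
    """Logical-time simulation of distributing B&B subproblems.
    Task costs are random draws (a crude stand-in for the stochastic
    branching process).  Static: tasks pre-dealt round-robin.
    Dynamic: each finished processor pulls the next task."""
    rng = random.Random(seed)
    costs = [rng.expovariate(1.0) for _ in range(n_tasks)]
    clocks = [0.0] * n_procs
    for i, cost in enumerate(costs):
        if dynamic:
            j = clocks.index(min(clocks))   # first idle processor
        else:
            j = i % n_procs                 # fixed round-robin
        clocks[j] += cost
    return max(clocks)                      # makespan in logical time

static = simulate(2000, 16, dynamic=False)
dyn = simulate(2000, 16, dynamic=True)
print(static, dyn)  # dynamic balancing yields the smaller makespan
```

A fuller simulator would also charge logical time for the data exchanges between processors, which is where interconnect characteristics enter.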

  7. MC++ and a transport physics framework

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.; Keen, N.D.

    1997-01-01

    The Department of Energy has launched the Accelerated Strategic Computing Initiative (ASCI) to address a pressing need for more comprehensive computer simulation capabilities in the area of nuclear weapons safety and reliability. In light of the decision by the US Government to abandon underground nuclear testing, the Science-Based Stockpile Stewardship (SBSS) program is focused on using computer modeling to assure the continued safety and effectiveness of the nuclear stockpile. The authors believe that the utilization of object-oriented design and programming techniques can help in this regard. Object-oriented programming (OOP) has become a popular model in the general software community for several reasons. MC++ is a specific ASCI-relevant application project which demonstrates the effectiveness of the object-oriented approach. It is a Monte Carlo neutron transport code written in C++. It is designed to be simple yet flexible, with the ability to quickly introduce new numerical algorithms or representations of the physics into the code. MC++ is easily ported to various types of Unix workstations and parallel computers such as the three new ASCI platforms, largely because it makes extensive use of classes from the Parallel Object-Oriented Methods and Applications (POOMA) C++ class library. The MC++ code has been successfully benchmarked using some simple physics test problems, has been shown to provide comparable serial performance and a parallel efficiency superior to that of a well-known Monte Carlo neutronics package written in Fortran, and was the first ASCI-relevant application to run in parallel on all three ASCI computing platforms

  8. A method to generate equivalent energy spectra and filtration models based on measurement for multidetector CT Monte Carlo dosimetry simulations

    International Nuclear Information System (INIS)

    Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.

    2009-01-01

The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called ''equivalent'' source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: one using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme, and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types
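
The method rests on computing HVLs of a candidate spectrum numerically; a toy version with an invented three-bin spectrum and made-up aluminium attenuation coefficients (real spectra have many bins and tabulated, energy-dependent coefficients):

```python
import math

def transmitted(weights, mu, t):
    """Relative exposure behind absorber thickness t (cm):
    sum over energy bins of weight * exp(-mu(E) * t)."""
    return sum(w * math.exp(-m * t) for w, m in zip(weights, mu))

def hvl(weights, mu, fraction=0.5):
    """Thickness reducing exposure to `fraction` of the unattenuated
    value, found by bisection."""
    target = fraction * transmitted(weights, mu, 0.0)
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if transmitted(weights, mu, mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative 3-bin spectrum with assumed Al attenuation coefficients (1/cm)
w_spec = [0.2, 0.5, 0.3]
mu_al = [1.5, 0.8, 0.5]
hvl1 = hvl(w_spec, mu_al)
hvl2 = hvl(w_spec, mu_al, fraction=0.25) - hvl1   # second half-value layer
print(hvl1, hvl2)  # beam hardening makes HVL2 > HVL1
```

Constructing an equivalent spectrum then amounts to adjusting the bin weights until these computed HVLs match the measured ones.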

  9. Plasma simulations using the Car-Parrinello method

    International Nuclear Information System (INIS)

    Clerouin, J.; Zerah, G.; Benisti, D.; Hansen, J.P.

    1990-01-01

    A simplified version of the Car-Parrinello method, based on the Thomas-Fermi (local density) functional for the electrons, is adapted to the simulation of the ionic dynamics in dense plasmas. The method is illustrated by an explicit application to a degenerate one-dimensional hydrogen plasma

  10. Nonequilibrium relaxation method – An alternative simulation strategy

    Indian Academy of Sciences (India)

One well-established simulation strategy to study the thermal phases and transitions of a given microscopic model system is the so-called equilibrium method, in which one first realizes the equilibrium ensemble of a finite system and then extrapolates the results to the infinite-system limit. This equilibrium method traces over the ...

  11. A direct simulation method for flows with suspended paramagnetic particles

    NARCIS (Netherlands)

    Kang, T.G.; Hulsen, M.A.; Toonder, den J.M.J.; Anderson, P.D.; Meijer, H.E.H.

    2008-01-01

    A direct numerical simulation method based on the Maxwell stress tensor and a fictitious domain method has been developed to solve flows with suspended paramagnetic particles. The numerical scheme enables us to take into account both hydrodynamic and magnetic interactions between particles in a

  12. DRK methods for time-domain oscillator simulation

    NARCIS (Netherlands)

    Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.

  13. The afforestation problem: a heuristic method based on simulated annealing

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    1992-01-01

    This paper presents the afforestation problem, that is the location and design of new forest compartments to be planted in a given area. This optimization problem is solved by a two-step heuristic method based on simulated annealing. Tests and experiences with this method are also presented....
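
The simulated-annealing core of such a heuristic is generic and can be sketched as below; the objective function and neighborhood move are toy stand-ins for the actual afforestation location/design problem:

```python
import math, random

def simulated_annealing(energy, neighbor, x0, t0=1.0, cooling=0.995,
                        steps=5000, seed=11):
    """Generic simulated annealing: Metropolis acceptance with a
    geometric cooling schedule, tracking the best state seen."""
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        ey = energy(y)
        if ey < e or rng.random() < math.exp(-(ey - e) / t):
            x, e = y, ey
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                      # cool down
    return best_x, best_e

# toy stand-in objective: a rugged 1-D landscape with minimum near x ≈ 2
f = lambda x: (x - 2.0) ** 2 + 0.5 * math.sin(8.0 * x)
move = lambda x, rng: x + rng.gauss(0.0, 0.3)
x, e = simulated_annealing(f, move, x0=-5.0)
print(x, e)
```

In the afforestation setting the state would be a candidate set of compartments and the neighbor move a local change to their location or shape.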

  14. Multilevel panel method for wind turbine rotor flow simulations

    NARCIS (Netherlands)

    van Garrel, Arne

    2016-01-01

    Simulation methods of wind turbine aerodynamics currently in use mainly fall into two categories: the first is the group of traditional low-fidelity engineering models and the second is the group of computationally expensive CFD methods based on the Navier-Stokes equations. For an engineering

  15. LOMEGA: a low frequency, field implicit method for plasma simulation

    International Nuclear Information System (INIS)

    Barnes, D.C.; Kamimura, T.

    1982-04-01

Field implicit methods for low frequency plasma simulation by the LOMEGA (Low OMEGA) codes are described. These implicit field methods may be combined with particle pushing algorithms using either Lorentz force or guiding center force models to study two-dimensional, magnetized, electrostatic plasmas. Numerical results for ω_e Δt ≫ 1 are described. (author)
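
The motivation for an implicit field solve shows up in one line of numerics: for ω Δt ≫ 1 an explicit update is violently unstable while an implicit one stays bounded. The scalar test equation dy/dt = -ωy below is a stand-in for the field equations, not LOMEGA's actual scheme:

```python
# explicit vs implicit Euler on dy/dt = -w*y with w*dt >> 1:
# explicit amplifies by (1 - w*dt), so |1 - w*dt| > 1 blows up;
# implicit divides by (1 + w*dt), so the solution decays as it should.
w, dt = 100.0, 0.1                    # w*dt = 10 >> 1
ye = yi = 1.0
for _ in range(50):
    ye = ye * (1.0 - w * dt)          # explicit update: diverges
    yi = yi / (1.0 + w * dt)          # implicit update: stays bounded
print(abs(ye), abs(yi))
```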

  16. Performance evaluation of sea surface simulation methods for target detection

    Science.gov (United States)

    Xia, Renjie; Wu, Xin; Yang, Chen; Han, Yiping; Zhang, Jianqi

    2017-11-01

With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve detection performance. Many features can be learned automatically from training images. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key to achieving high fidelity. In this paper, two spectrum-based height field generation methods are evaluated. A comparison between the linear superposition and linear filter methods is made quantitatively with a statistical model. 3D ocean scene simulation results show the different features of the methods, which can provide a reference for synthesizing sea surface target images under different ocean conditions.
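
The linear filter method amounts to shaping white noise with the square root of a wave spectrum in the Fourier domain; the Phillips-type spectrum and its constants below are illustrative, not the paper's calibrated model:

```python
import numpy as np

def height_field(n, dx, wind=8.0, g=9.81, seed=5):
    """Linear-filter method: filter complex white noise with the square
    root of a Phillips-type wave spectrum, then inverse FFT to get a
    sea surface height field (normalized heights)."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k)
    kk = np.hypot(kx, ky)
    L = wind * wind / g                        # largest wave scale
    with np.errstate(divide="ignore", invalid="ignore"):
        phillips = np.exp(-1.0 / (kk * L) ** 2) / kk ** 4
    phillips[kk == 0] = 0.0                    # no DC component
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = np.fft.ifft2(noise * np.sqrt(phillips)).real
    return h / h.std()

h = height_field(128, dx=1.0)
print(h.shape, h.mean())
```

The linear superposition method would instead sum discrete sinusoidal wave components drawn from the same spectrum.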

  17. Clinical simulation as an evaluation method in health informatics

    DEFF Research Database (Denmark)

    Jensen, Sanne

    2016-01-01

    Safe work processes and information systems are vital in health care. Methods for design of health IT focusing on patient safety are one of many initiatives trying to prevent adverse events. Possible patient safety hazards need to be investigated before health IT is integrated with local clinical...... work practice including other technology and organizational structure. Clinical simulation is ideal for proactive evaluation of new technology for clinical work practice. Clinical simulations involve real end-users as they simulate the use of technology in realistic environments performing realistic...... tasks. Clinical simulation study assesses effects on clinical workflow and enables identification and evaluation of patient safety hazards before implementation at a hospital. Clinical simulation also offers an opportunity to create a space in which healthcare professionals working in different...

  18. Architecture oriented modeling and simulation method for combat mission profile

    Directory of Open Access Journals (Sweden)

    CHEN Xia

    2017-05-01

Full Text Available In order to effectively analyze the system behavior and system performance of a combat mission profile, an architecture-oriented modeling and simulation method is proposed. Starting from architecture modeling, this paper describes the mission profile based on the definitions from the National Military Standard of China and the US Department of Defense Architecture Framework (DoDAF) model, and constructs the architecture model of the mission profile. Then the transformation relationship between the architecture model and the agent simulation model is proposed to form the mission profile executable model. At last, taking the air-defense mission profile as an example, the agent simulation model is established based on the architecture model, and the input and output relations of the simulation model are analyzed. This provides methodological guidance for combat mission profile design.

  19. A nondissipative simulation method for the drift kinetic equation

    International Nuclear Information System (INIS)

    Watanabe, Tomo-Hiko; Sugama, Hideo; Sato, Tetsuya

    2001-07-01

    With the aim to study the ion temperature gradient (ITG) driven turbulence, a nondissipative kinetic simulation scheme is developed and comprehensively benchmarked. The new simulation method preserving the time-reversibility of basic kinetic equations can successfully reproduce the analytical solutions of asymmetric three-mode ITG equations which are extended to provide a more general reference for benchmarking than the previous work [T.-H. Watanabe, H. Sugama, and T. Sato: Phys. Plasmas 7 (2000) 984]. It is also applied to a dissipative three-mode system, and shows a good agreement with the analytical solution. The nondissipative simulation result of the ITG turbulence accurately satisfies the entropy balance equation. Usefulness of the nondissipative method for the drift kinetic simulations is confirmed in comparisons with other dissipative schemes. (author)
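
Time-reversibility, the property the scheme is built around, is easy to demonstrate on a small system: integrate forward with a reversible (leapfrog) scheme, flip the velocity, integrate back, and recover the initial state. The harmonic oscillator below is a stand-in for the drift kinetic equation, not the paper's scheme:

```python
def leapfrog(x, v, dt, steps):
    """Kick-drift-kick leapfrog for a unit harmonic oscillator
    (dx/dt = v, dv/dt = -x); time-reversible under v -> -v."""
    for _ in range(steps):
        v += -x * 0.5 * dt
        x += v * dt
        v += -x * 0.5 * dt
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, 0.01, 1000)     # integrate forward
xb, vb = leapfrog(x1, -v1, 0.01, 1000)    # flip velocity, integrate "back"
print(xb, -vb)                            # recovers (x0, v0) to round-off
```

A dissipative scheme (e.g. with upwind differencing) would fail this test, which is the distinction the benchmarks above probe.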

  20. Adaptive implicit method for thermal compositional reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, A.; Tchelepi, H.A. [Society of Petroleum Engineers, Richardson, TX (United States)]|[Stanford Univ., Palo Alto (United States)

    2008-10-15

As the global demand for oil increases, thermal enhanced oil recovery techniques are becoming increasingly important. Numerical reservoir simulation of thermal methods such as steam assisted gravity drainage (SAGD) is complex and requires a solution of nonlinear mass and energy conservation equations on a fine reservoir grid. The most commonly used technique for solving these equations is the Fully IMplicit (FIM) method, which is unconditionally stable, allowing for large timesteps in simulation. However, it is computationally expensive. On the other hand, the method known as IMplicit pressure explicit saturations, temperature and compositions (IMPEST) is computationally inexpensive, but it is only conditionally stable and restricts the timestep size. To improve the balance between the timestep size and computational cost, the Thermal Adaptive IMplicit (TAIM) method uses stability criteria and a switching algorithm, where some simulation variables such as pressure, saturations, temperature and compositions are treated implicitly while others are treated with explicit schemes. This presentation described ongoing research on TAIM with particular reference to thermal displacement processes, such as the stability criteria that dictate the maximum allowed timestep size for simulation based on the von Neumann linear stability analysis method; the switching algorithm that adapts the labeling of reservoir variables as implicit or explicit as a function of space and time; and complex physical behaviors such as heat and fluid convection, thermal conduction and compressibility. Key numerical results obtained by enhancing Stanford's General Purpose Research Simulator (GPRS) were also presented along with a list of research challenges. 14 refs., 2 tabs., 11 figs., 1 appendix.
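
The switching idea can be illustrated with the simplest von Neumann criterion, labeling cells explicit only where the stability limit holds; the 1-D diffusion criterion and the per-cell coefficients below are illustrative and much simpler than TAIM's thermal-compositional criteria:

```python
# Von Neumann stability limit for explicit 1-D diffusion: D*dt/dx^2 <= 0.5.
# An adaptive-implicit scheme labels each cell explicit or implicit from
# such a criterion, re-evaluated as conditions change in space and time.
dx, dt = 1.0, 0.4
D = [0.2, 0.5, 1.0, 2.0, 5.0, 0.8]          # assumed per-cell diffusivity
labels = ["explicit" if d * dt / dx**2 <= 0.5 else "implicit" for d in D]
print(labels)
```

Cells near the steam front, where coefficients are large and fast-changing, end up implicit; quiescent cells stay explicit and cheap.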

  1. Activity coefficients from molecular simulations using the OPAS method

    Science.gov (United States)

    Kohns, Maximilian; Horsch, Martin; Hasse, Hans

    2017-10-01

    A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.

  2. SU-D-BRC-01: An Automatic Beam Model Commissioning Method for Monte Carlo Simulations in Pencil-Beam Scanning Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qin, N; Shen, C; Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, and energy and spatial spread are model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions of a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of doses for those ideal pencil beams with weights in the Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central axis depth dose and lateral profiles at several depths match corresponding measurements. An iterative algorithm combining the conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between determined values and ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%. Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with
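
The commissioning idea, writing the real-beam dose as a Gaussian-weighted superposition of precomputed ideal pencil-beam doses and fitting the Gaussian parameters to measurements, can be sketched as follows. The crude depth-dose model and the grid search are stand-ins for the MC-computed doses and the conjugate-gradient fit of the abstract:

```python
import math

energies = [float(e) for e in range(140, 161)]           # MeV bins
depths = [float(d) for d in range(100, 180, 4)]          # mm

def ideal_pdd(E, d):
    """Stand-in depth dose of a monoenergetic pencil beam: a crude
    Bragg-like peak at an assumed range R(E) (not a physics model)."""
    R = 1.1 * E                                          # toy range (mm)
    return math.exp(-((d - R) / 6.0) ** 2) + 0.3 * (d < R)

def model_pdd(mu, sigma):
    """Superpose ideal beams with Gaussian energy weights (mu, sigma)."""
    w = [math.exp(-0.5 * ((E - mu) / sigma) ** 2) for E in energies]
    s = sum(w)
    w = [x / s for x in w]
    return [sum(wi * ideal_pdd(E, d) for wi, E in zip(w, energies))
            for d in depths]

# "measurement": ground-truth beam with mean 150 MeV, spread 2 MeV
measured = model_pdd(150.0, 2.0)

# commissioning: search over the two model parameters
best = min(((sum((a - b) ** 2 for a, b in zip(model_pdd(mu, s), measured)),
             mu, s)
            for mu in [148.0, 149.0, 150.0, 151.0, 152.0]
            for s in [1.0, 1.5, 2.0, 2.5, 3.0]))
print(best[1], best[2])  # → 150.0 2.0
```

The real method also fits a spatial spread against lateral profiles and replaces the grid search with a gradient-based optimizer.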

  3. Multilevel Monte Carlo methods using ensemble level mixed MsFEM for two-phase flow and transport simulations

    KAUST Repository

    Efendiev, Yalchin R.

    2013-08-21

    (and expensive) forward simulations are run with fewer samples, while less accurate (and inexpensive) forward simulations are run with a larger number of samples. Selecting the number of expensive and inexpensive simulations based on the number of coarse degrees of freedom, one can show that MLMC methods can provide better accuracy at the same cost as Monte Carlo (MC) methods. The main objective of the paper is twofold. First, we would like to compare NLSO and LSO mixed MsFEMs. Further, we use both approaches in the context of MLMC to speedup MC calculations. © 2013 Springer Science+Business Media Dordrecht.
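
The MLMC mechanics, a telescoping sum whose correction terms are estimated with coupled fine/coarse samples and decreasing sample counts, can be shown on a scalar SDE instead of the two-phase flow problem; the geometric Brownian motion model and the simple halving sample allocation below are illustrative:

```python
import random, math

R, S, T = 0.05, 0.2, 1.0        # GBM drift, volatility, horizon

def euler_pair(level, rng):
    """One coupled fine/coarse Euler path of dX = R*X dt + S*X dW.
    The fine path uses 2^level steps, the coarse path half as many,
    driven by the same Brownian increments (the MLMC coupling)."""
    nf = 2 ** level
    hf = T / nf
    xf = xc = 1.0
    for _ in range(nf // 2):
        dw1 = rng.gauss(0.0, math.sqrt(hf))
        dw2 = rng.gauss(0.0, math.sqrt(hf))
        xf += R * xf * hf + S * xf * dw1
        xf += R * xf * hf + S * xf * dw2
        xc += R * xc * (2 * hf) + S * xc * (dw1 + dw2)
    return xf, xc

def mlmc(max_level, n0=40000, seed=9):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}];
    sample counts are halved per level, a cheap stand-in for the
    optimal cost-based allocation."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(max_level + 1):
        n = max(n0 >> level, 200)
        acc = 0.0
        for _ in range(n):
            if level == 0:       # coarsest level: a single Euler step
                acc += 1.0 + R * T + S * rng.gauss(0.0, math.sqrt(T))
            else:
                xf, xc = euler_pair(level, rng)
                acc += xf - xc
        est += acc / n
    return est

print(mlmc(5), math.exp(R))      # estimate vs exact E[X_T] = e^R
```

The corrections shrink with level, so most samples go to the cheap coarse levels, which is the source of the cost advantage over plain MC.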

  4. Simulation of the acoustic wave propagation using a meshless method

    Directory of Open Access Journals (Sweden)

    Bajko J.

    2017-01-01

    Full Text Available This paper presents numerical simulations of the acoustic wave propagation phenomenon modelled via the Linearized Euler equations. A meshless method based on collocation of the strong form of the equation system is adopted. Moreover, the weighted least squares method is used for local approximation of derivatives, as well as for stabilization in the form of spatial filtering. The accuracy and robustness of the method are examined on several benchmark problems.

  5. Numerical simulation methods for wave propagation through optical waveguides

    International Nuclear Information System (INIS)

    Sharma, A.

    1993-01-01

    The simulation of the field propagation through waveguides requires numerical solutions of the Helmholtz equation. For this purpose a method based on the principle of orthogonal collocation was recently developed. The method is also applicable to nonlinear pulse propagation through optical fibers. Some of the salient features of this method and its application to both linear and nonlinear wave propagation through optical waveguides are discussed in this report. 51 refs, 8 figs, 2 tabs

  6. spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains

    Science.gov (United States)

    Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo

    2016-09-01

    The paper presents the spatial Markov chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, spMC implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Simulation methods based on the best-known prediction methods (such as indicator kriging and cokriging) are implemented in the spMC package, and further advanced methods are available for simulation, e.g. path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.
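The core ingredient of such packages, estimating empirical transition probabilities from a categorical lithology log, can be sketched compactly. This is a plain-Python stand-in for what spMC does in R (along one direction, ignoring spatial lags); the borehole sequence below is hypothetical.

```python
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    # Empirical one-step transition probabilities of a categorical chain:
    # count ordered pairs, then normalize each row
    counts = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

borehole = list("SSSCCSSGGCSS")   # hypothetical sand/clay/gravel log
P = transition_probabilities(borehole)
```

Each row of `P` sums to one and can be read as the probability of the next lithology given the current one, which is exactly what a spatial Markov chain simulator samples from.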

  7. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  8. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  9. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    International Nuclear Information System (INIS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-01-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon–electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783–97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48–0.53% for the electron beam cases and 0.15–0.17% for the photon beam cases. In terms of efficiency, goMC was ∼4–16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was

  10. Advance in research on aerosol deposition simulation methods

    International Nuclear Information System (INIS)

    Liu Keyang; Li Jingsong

    2011-01-01

    A comprehensive analysis of the health effects of inhaled toxic aerosols requires exact data on airway deposition. Knowledge of the effect of inhaled drugs is essential to the optimization of aerosol drug delivery. Sophisticated analytical deposition models can be used for the computation of total, regional and generation-specific deposition efficiencies. Continuously increasing computer power seems to allow us to study particle transport and deposition in more and more realistic airway geometries with the help of computational fluid dynamics (CFD) simulation methods. In this article, trends in aerosol deposition models and lung models, and the methods for performing deposition simulations, are reviewed. (authors)

  11. Finite element method for simulation of the semiconductor devices

    International Nuclear Information System (INIS)

    Zikatanov, L.T.; Kaschiev, M.S.

    1991-01-01

    An iterative method for solving the system of nonlinear equations of the drift-diffusion representation for the simulation of semiconductor devices is worked out. The Petrov-Galerkin method is used for the discretization of these equations with bilinear finite elements. It is shown that the numerical scheme is monotone and that the solutions exhibit no oscillations in the p-n junction region. Numerical results for the simulation of one semiconductor device are presented. 13 refs.; 3 figs

  12. Reliability analysis of neutron transport simulation using Monte Carlo method

    International Nuclear Information System (INIS)

    Souza, Bismarck A. de; Borges, Jose C.

    1995-01-01

    This work presents a statistical and reliability analysis of data obtained by computer simulation of the neutron transport process using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been performed. The influence of the physical dimensions of the materials and of the sample size on the reliability level of the results was investigated. The objective was to optimize the sample size, in order to obtain reliable results while keeping computation time reasonable. (author). 5 refs, 8 figs
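The sample-size/reliability tradeoff studied in this record can be demonstrated with the simplest possible shielding problem: uncollided transmission through a slab, where the MC answer can be checked against the analytic exponential. The attenuation coefficient and thickness below are hypothetical, and the sketch ignores scattering, so it is an illustration of MC statistics rather than of the paper's specific setup.

```python
import math, random

def transmitted_fraction(mu, thickness, n, rng):
    # MC estimate of uncollided transmission through a slab:
    # sample free paths ~ Exp(mu) and count particles crossing `thickness`
    hits = sum(1 for _ in range(n) if rng.expovariate(mu) > thickness)
    return hits / n

rng = random.Random(7)
mu, t = 0.5, 2.0                     # hypothetical attenuation coeff (1/cm), cm
exact = math.exp(-mu * t)            # analytic uncollided transmission
small = transmitted_fraction(mu, t, 1_000, rng)     # noisy estimate
large = transmitted_fraction(mu, t, 100_000, rng)   # reliable estimate
```

The standard error of the estimate scales as 1/sqrt(n), which is exactly the sample-size optimization question the abstract raises: `large` is expected to sit within about 0.005 of the analytic value, while `small` fluctuates several times more.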

  13. MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Gabriela Ižaríková

    2015-12-01

    Full Text Available The article is an example of using the @Risk simulation software, designed for simulation in Microsoft Excel spreadsheets, and demonstrates the possibility of its usage as a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating and optimizing, and afterwards applying the results to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance, investment costs) and random inputs (for instance, demand), which are transformed by the model into outputs (for instance, the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning, and the random (stochastic) inputs are generated randomly. Simulations belong to the quantitative tools which can be used as support for decision making.
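The controlled-input/random-input/output structure described above is easy to reproduce without @Risk. The sketch below is a plain-Python stand-in with invented numbers: price, unit cost and fixed cost are the controlled inputs, demand is the random input, and the mean profit is the output of the MC experiment.

```python
import random

def simulate_profit(n_runs, price, unit_cost, fixed_cost, rng):
    # MC estimate of mean profit when demand is a stochastic input
    total = 0.0
    for _ in range(n_runs):
        demand = rng.gauss(1000.0, 150.0)   # hypothetical demand distribution
        total += (price - unit_cost) * demand - fixed_cost
    return total / n_runs

rng = random.Random(1)
mean_profit = simulate_profit(50_000, price=25.0, unit_cost=15.0,
                              fixed_cost=4000.0, rng=rng)
```

With these numbers the analytic expectation is (25 - 15) * 1000 - 4000 = 6000, and the MC mean converges to it; in a spreadsheet tool the same loop is what the "iterations" setting runs behind the scenes.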

  14. Computational methods for coupling microstructural and micromechanical materials response simulations

    Energy Technology Data Exchange (ETDEWEB)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  15. Simulation methods with extended stability for stiff biochemical Kinetics

    Directory of Open Access Journals (Sweden)

    Rué Pau

    2010-08-01

    Full Text Available Abstract Background With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well-behaved, leading to significantly larger step sizes. Conclusions The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
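The basic Poisson τ-leap that this paper extends can be sketched on a toy birth-death process. This is the original τ-leap, not the paper's Runge-Kutta generalization; the rates and step size are invented, and Knuth's product method is used for Poisson sampling since the Python standard library has no Poisson generator.

```python
import math, random

def poisson(rng, lam):
    # Knuth's product method; adequate for the small means used here
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def tau_leap(x0, k_birth, k_death, tau, n_steps, rng):
    # Poisson tau-leaping: within each step tau, every channel fires a
    # Poisson number of times with mean (propensity * tau).
    # Channels: X -> X+1 at rate k_birth; X -> X-1 at rate k_death * X.
    x = x0
    for _ in range(n_steps):
        births = poisson(rng, k_birth * tau)
        deaths = poisson(rng, k_death * x * tau)
        x = max(x + births - deaths, 0)
    return x

rng = random.Random(5)
# steady-state mean is k_birth / k_death = 100
finals = [tau_leap(100, 10.0, 0.1, 0.1, 500, rng) for _ in range(300)]
avg = sum(finals) / len(finals)
```

For this linear system the leap preserves the steady-state mean; the abstract's point is that the variance degrades as τ grows, which is what the RK extension addresses.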

  16. High viscosity fluid simulation using particle-based method

    KAUST Repository

    Chang, Yuanzhang

    2011-03-01

    We present a new particle-based method for high viscosity fluid simulation. In the method, a new elastic stress term, derived from a modified form of Hooke's law, is included in the traditional Navier-Stokes equation to simulate the movements of high viscosity fluids. Benefiting from the Lagrangian nature of the Smoothed Particle Hydrodynamics method, large flow deformation can be handled easily and naturally. In addition, in order to eliminate the particle deficiency problem near the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with finite element methods, with their complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn't need to store and compare to an initial rest state. The experimental results show that the proposed method is effective and efficient at handling the movements of highly viscous flows, and a large variety of different kinds of fluid behavior can be well simulated by adjusting just one parameter. © 2011 IEEE.

  17. Flat-histogram methods in quantum Monte Carlo simulations: Application to the t-J model

    International Nuclear Information System (INIS)

    Diamantis, Nikolaos G.; Manousakis, Efstratios

    2016-01-01

    We discuss how flat-histogram techniques can be applied in the sampling of quantum Monte Carlo simulations in order to improve the statistical quality of the results at long imaginary time or low excitation energy. Typical imaginary-time correlation functions calculated in quantum Monte Carlo are subject to exponentially growing errors as the range of imaginary time grows, and this smears the information on the low-energy excitations. We show that we can extract the low-energy physics by modifying the Monte Carlo sampling technique to one in which configurations that contribute to making the histogram of certain quantities flat are promoted. We apply the diagrammatic Monte Carlo (diag-MC) method to the motion of a single hole in the t-J model and show that the implementation of flat-histogram techniques allows us to calculate the Green's function over a wide range of imaginary time. In addition, we show that applying the flat-histogram technique alleviates the “sign” problem associated with the simulation of the single-hole Green's function at long imaginary time. (paper)
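The flat-histogram idea itself is easiest to see away from the t-J model. The sketch below is a Wang-Landau walk for a deliberately trivial system, N independent spins with "energy" equal to the number of up spins, whose exact density of states is the binomial coefficient C(N, E). It is my own toy construction, not the paper's diag-MC algorithm, but it shows the defining move: promote configurations that flatten the energy histogram, accumulating an estimate of ln g(E).

```python
import math, random

def wang_landau(n_spins, flat=0.8, f_final=1e-3, seed=0):
    # Wang-Landau estimate of the density of states g(E) for N independent
    # spins, E = number of up spins (exact answer: g(E) = C(N, E)).
    rng = random.Random(seed)
    spins = [0] * n_spins
    E = 0
    lng = [0.0] * (n_spins + 1)    # running estimate of ln g(E)
    hist = [0] * (n_spins + 1)     # visit histogram, driven toward flatness
    lnf = 1.0                      # modification factor, reduced when flat
    while lnf > f_final:
        for _ in range(10_000):
            i = rng.randrange(n_spins)
            dE = 1 if spins[i] == 0 else -1
            # accept with prob min(1, g(E)/g(E')): under-visited energies win
            if math.log(rng.random()) < lng[E] - lng[E + dE]:
                spins[i] ^= 1
                E += dE
            lng[E] += lnf
            hist[E] += 1
        if min(hist) > flat * (sum(hist) / len(hist)):
            hist = [0] * (n_spins + 1)
            lnf /= 2.0
    return lng

lng = wang_landau(8)
```

Since `lng` is only defined up to an additive constant, comparisons against ln C(8, E) are made relative to `lng[0]`.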

  18. Computerized simulation methods for dose reduction, in radiodiagnosis

    International Nuclear Information System (INIS)

    Brochi, M.A.C.

    1990-01-01

    The present work presents computational methods that allow the simulation of any situation encountered in diagnostic radiology. Radiographic technique parameters that yield a previously chosen standard radiographic image are studied, so that the radiation dose absorbed by the patient can be compared. Initially the method was tested on a simple system composed of 5.0 cm of water and 1.0 mm of aluminium and, after verifying its validity experimentally, it was applied to breast and arm-fracture radiographs. It was observed that the choice of filter material is not an important factor, because aluminium, iron, copper, gadolinium and other filters showed analogous behaviour. A method of comparing materials based on spectral matching is shown. Both the results given by this simulation method and the experimental measurements indicate an equivalence of brass and copper, both more efficient than aluminium in terms of exposure time, but not of dose. (author)
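Filter comparisons of this kind reduce to Beer-Lambert attenuation applied bin-by-bin to a discrete spectrum. The sketch below uses a hypothetical three-bin spectrum and illustrative attenuation coefficients (not tabulated data for any real material) to show the mechanics, including the beam-hardening effect: low-energy bins are attenuated more, shifting the transmitted spectrum toward higher energies.

```python
import math

def transmitted(spectrum, mu, thickness):
    # Beer-Lambert attenuation of a discrete photon spectrum:
    # I(E) = I0(E) * exp(-mu(E) * t)
    return {E: n * math.exp(-mu[E] * thickness) for E, n in spectrum.items()}

# hypothetical 3-bin spectrum (keV -> relative fluence) and attenuation
# coefficients (1/mm) for a filter; illustrative numbers only
spectrum = {30: 1.0, 50: 0.8, 70: 0.5}
mu_filter = {30: 0.30, 50: 0.10, 70: 0.05}

filtered = transmitted(spectrum, mu_filter, thickness=2.0)
# beam hardening: high/low-energy ratio increases after filtration
hardening = filtered[70] / filtered[30] > spectrum[70] / spectrum[30]
```

Spectral matching between two filter materials then amounts to finding thicknesses for which the two `filtered` spectra agree bin-by-bin.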

  19. Simulation of quantum systems by the tomography Monte Carlo method

    International Nuclear Information System (INIS)

    Bogdanov, Yu I

    2007-01-01

    A new method of statistical simulation of quantum systems is presented, based on the generation of data by the Monte Carlo method and their purposeful tomography with energy minimisation. The numerical solution of the problem is based on the optimisation of a target functional providing a compromise between the maximisation of the statistical likelihood function and the energy minimisation. The method does not involve complicated and ill-posed multidimensional computational procedures and can be used to calculate the wave functions and energies of the ground and excited stationary states of complex quantum systems. Applications of the method are illustrated. (Fifth seminar in memory of D.N. Klyshko)
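The likelihood-plus-energy functional is specific to this paper, but the general pattern of computing quantum energies by MC sampling with energy minimisation can be illustrated with a textbook variational Monte Carlo toy: a 1D harmonic oscillator (units hbar = m = omega = 1) with trial wavefunction psi = exp(-alpha x^2). This sketch is my own generic illustration, not the paper's tomography algorithm.

```python
import math, random

def local_energy(x, alpha):
    # E_L = -(1/2) psi''/psi + x^2/2 for psi = exp(-alpha x^2)
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_samples, rng):
    # Metropolis sampling of |psi|^2, averaging the local energy
    x, e_sum = 0.0, 0.0
    for _ in range(n_samples):
        y = x + rng.uniform(-0.8, 0.8)
        # acceptance ratio |psi(y)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-2.0 * alpha * (y * y - x * x)):
            x = y
        e_sum += local_energy(x, alpha)
    return e_sum / n_samples

rng = random.Random(3)
alphas = [0.25, 0.5, 1.0]
energies = {a: vmc_energy(a, 20_000, rng) for a in alphas}
best_alpha = min(energies, key=energies.get)
```

The minimum lands at alpha = 0.5, where psi is the exact ground state and the local energy is constant, so the MC estimate equals the exact energy 1/2 with zero variance.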

  20. Vectorization of a particle simulation method for hypersonic rarefied flow

    Science.gov (United States)

    Mcdonald, Jeffrey D.; Baganoff, Donald

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry.

  1. A multiscale quantum mechanics/electromagnetics method for device simulations.

    Science.gov (United States)

    Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua

    2015-04-07

    Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.

  2. A mixed finite element method for particle simulation in lasertron

    International Nuclear Information System (INIS)

    Le Meur, G.

    1987-03-01

    A particle simulation code is being developed with the aim of treating the motion of charged particles in electromagnetic devices such as the Lasertron. The paper describes the use of mixed finite element methods for computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown

  3. A simulation based engineering method to support HAZOP studies

    DEFF Research Database (Denmark)

    Enemark-Rasmussen, Rasmus; Cameron, David; Angelo, Per Bagge

    2012-01-01

    the conventional HAZOP procedure. The method systematically generates failure scenarios by considering process equipment deviations with pre-defined failure modes. The effect of failure scenarios is then evaluated using dynamic simulations; in this study the K-Spice® software was used. The consequences of each failure...

  4. Vectorization of a particle simulation method for hypersonic rarefied flow

    International Nuclear Information System (INIS)

    Mcdonald, J.D.; Baganoff, D.

    1988-01-01

    An efficient particle simulation technique for hypersonic rarefied flows is presented at an algorithmic and implementation level. The implementation is for a vector computer architecture, specifically the Cray-2. The method models an ideal diatomic Maxwell molecule with three translational and two rotational degrees of freedom. Algorithms are designed specifically for compatibility with fine grain parallelism by reducing the number of data dependencies in the computation. By insisting on this compatibility, the method is capable of performing simulation on a much larger scale than previously possible. A two-dimensional simulation of supersonic flow over a wedge is carried out for the near-continuum limit where the gas is in equilibrium and the ideal solution can be used as a check on the accuracy of the gas model employed in the method. Also, a three-dimensional, Mach 8, rarefied flow about a finite-span flat plate at a 45 degree angle of attack was simulated. It utilized over 10^7 particles carried through 400 discrete time steps in less than one hour of Cray-2 CPU time. This problem was chosen to exhibit the capability of the method in handling a large number of particles and a true three-dimensional geometry. 14 references

  5. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
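The annealing schedule itself is generic and compact enough to sketch. The example below is not the multiplicity-unfolding problem of this record; it is a minimal simulated-annealing loop applied to a toy one-dimensional cost function, with an invented geometric cooling schedule, showing the characteristic Metropolis acceptance of worse states at high temperature.

```python
import math, random

def simulated_annealing(cost, neighbor, x0, t0, cooling, n_steps, rng):
    # Metropolis acceptance with a geometrically decreasing temperature:
    # worse states are accepted with probability exp(-(dC)/T)
    x, c = x0, cost(x0)
    best, best_c = x, c
    T = t0
    for _ in range(n_steps):
        y = neighbor(x, rng)
        cy = cost(y)
        if cy <= c or rng.random() < math.exp((c - cy) / T):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        T *= cooling
    return best, best_c

rng = random.Random(11)
best, best_c = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,          # toy objective, minimum at x = 3
    neighbor=lambda x, r: x + r.gauss(0.0, 0.5),
    x0=-10.0, t0=2.0, cooling=0.995, n_steps=5000, rng=rng)
```

In an unfolding application like the one in this record, `cost` would instead measure the discrepancy between the observed distribution and the candidate true distribution folded with the detector response.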

  6. Kinematics and simulation methods to determine the target thickness

    International Nuclear Information System (INIS)

    Rosales, P.; Aguilar, E.F.; Martinez Q, E.

    2001-01-01

    Making use of kinematics and of the energy loss of the particles, two methods for calculating the thickness of a target are described: one based on a computer program and another based on simulation, both using experimentally obtained parameters. Several values for the thickness of a 12C target were obtained. A comparison of the values obtained with each of the programs used is presented. (Author)
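The energy-loss route to a thickness estimate reduces, in the thin-target limit, to dividing the measured energy loss by the stopping power. The numbers below are hypothetical (not the record's measured values), and the sketch assumes dE/dx is roughly constant across the target, which is what makes the thin-target approximation valid.

```python
def target_thickness(e_in, e_out, stopping_power):
    # Thin-target estimate: thickness = energy loss / stopping power,
    # valid when dE/dx is approximately constant across the target
    return (e_in - e_out) / stopping_power

# hypothetical numbers for a 12C foil: energies in MeV,
# stopping power in MeV per (mg/cm^2)
thickness = target_thickness(e_in=28.0, e_out=27.1, stopping_power=0.45)
```

For thicker targets one would instead integrate 1/(dE/dx) between the two energies, since the stopping power varies as the projectile slows down.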

  7. A mixed finite element method for particle simulation in Lasertron

    International Nuclear Information System (INIS)

    Le Meur, G.

    1987-01-01

    A particle simulation code is being developed with the aim of treating the motion of charged particles in electromagnetic devices such as the Lasertron. The paper describes the use of mixed finite element methods for computing the field components, without deriving them from scalar or vector potentials. Graphical results are shown

  8. Dynamical simulation of heavy ion collisions; VUU and QMD method

    International Nuclear Information System (INIS)

    Niita, Koji

    1992-01-01

    We review two simulation methods based on the Vlasov-Uehling-Uhlenbeck (VUU) equation and Quantum Molecular Dynamics (QMD), which are among the most widely accepted theoretical frameworks for the description of intermediate-energy heavy-ion reactions. We show some results of the calculations and compare them with the experimental data. (author)

  9. Simulating water hammer with corrective smoothed particle method

    NARCIS (Netherlands)

    Hou, Q.; Kruisbrink, A.C.H.; Tijsseling, A.S.; Keramat, A.

    2012-01-01

    The corrective smoothed particle method (CSPM) is used to simulate water hammer. The spatial derivatives in the water-hammer equations are approximated by a corrective kernel estimate. For the temporal derivatives, the Euler-forward time integration algorithm is employed. The CSPM results are in

  10. STUDY ON SIMULATION METHOD OF AVALANCHE : FLOW ANALYSIS OF AVALANCHE USING PARTICLE METHOD

    OpenAIRE

    塩澤, 孝哉

    2015-01-01

    In this paper, modelling for the simulation of avalanches by a particle method is discussed. There are two kinds of snow avalanches: the surface avalanche, which shows a smoke-like flow, and the total-layer avalanche, which shows a flow like a Bingham fluid. In the simulation of the surface avalanche, a particle method incorporating a rotation-resistance model is used. A particle method based on Bingham fluid is used in the simulation of the total-layer avalanche. At t...

  11. Efficient method for transport simulations in quantum cascade lasers

    Directory of Open Access Journals (Sweden)

    Maczka Mariusz

    2017-01-01

    Full Text Available An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine selected transport parameters in a typical structure of a terahertz laser. Results were compared with those obtained for an infinite model as well as with other methods described in the literature.

  12. A method of simulating and visualizing nuclear reactions

    International Nuclear Information System (INIS)

    Atwood, C.H.; Paul, K.M.

    1994-01-01

    Teaching nuclear reactions to students is difficult because the mechanisms are complex and directly visualizing them is impossible. As a teaching tool, the authors have developed a method of simulating nuclear reactions using colliding water droplets. Videotaping of the collisions, taken with a high-shutter-speed camera and run frame-by-frame, shows details of the collisions that are analogous to nuclear reactions. The methods for colliding the water drops and videotaping the collisions are shown

  13. Meshfree simulation of avalanches with the Finite Pointset Method (FPM)

    Science.gov (United States)

    Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios

    2017-04-01

    Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.

  14. Same Content, Different Methods: Comparing Lecture, Engaged Classroom, and Simulation.

    Science.gov (United States)

    Raleigh, Meghan F; Wilson, Garland Anthony; Moss, David Alan; Reineke-Piper, Kristen A; Walden, Jeffrey; Fisher, Daniel J; Williams, Tracy; Alexander, Christienne; Niceler, Brock; Viera, Anthony J; Zakrajsek, Todd

    2018-02-01

    There is a push to use classroom technology and active teaching methods to replace didactic lectures as the most prevalent format for resident education. This multisite collaborative cohort study involving nine residency programs across the United States compared a standard slide-based didactic lecture, a facilitated group discussion via an engaged classroom, and a high-fidelity, hands-on simulation scenario for teaching the topic of acute dyspnea. The primary outcome was knowledge retention at 2 to 4 weeks. Each teaching method was assigned to three different residency programs in the collaborative according to local resources. Learning objectives were determined by faculty. Pre- and posttest questions were validated and utilized as a measurement of knowledge retention. Each site administered the pretest, taught the topic of acute dyspnea utilizing their assigned method, and administered a posttest 2 to 4 weeks later. Differences between the groups were compared using paired t-tests. A total of 146 residents completed the posttest, and scores increased from baseline across all groups. The average score increased 6% in the standard lecture group (n=47), 11% in the engaged classroom (n=53), and 9% in the simulation group (n=56). The differences in improvement between engaged classroom and simulation were not statistically significant. Compared to standard lecture, both engaged classroom and high-fidelity simulation were associated with a statistically significant improvement in knowledge retention. Knowledge retention after engaged classroom and high-fidelity simulation did not significantly differ. More research is necessary to determine if different teaching methods result in different levels of comfort and skill with actual patient care.

  15. Three dimensional electrochemical simulation of solid oxide fuel cell cathode based on microstructure reconstructed by marching cubes method

    Science.gov (United States)

    He, An; Gong, Jiaming; Shikazono, Naoki

    2018-05-01

    In the present study, a model is introduced to correlate the electrochemical performance of solid oxide fuel cells (SOFC) with the 3D microstructure reconstructed by focused ion beam scanning electron microscopy (FIB-SEM), in which the solid surface is modeled by the marching cubes (MC) method. The lattice Boltzmann method (LBM) is used to solve the governing equations. In order to preserve the geometries reconstructed by the MC method, local effective diffusivities and conductivities computed from the MC geometries are applied in each grid cell, and a partial bounce-back scheme is applied according to the boundary predicted by the MC method. From the tortuosity factor and overpotential calculations, it is concluded that the MC geometry drastically improves computational accuracy by providing more precise topology information.
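
    The idea of assigning each grid cell a local effective transport coefficient derived from the reconstructed geometry can be sketched as follows; the Bruggeman-type power law used here is a common stand-in, not the MC-derived correction of the paper.

```python
import numpy as np

def local_effective_diffusivity(solid_fraction, d_bulk, alpha=1.5):
    """Illustrative per-cell effective diffusivity: scale the bulk
    diffusivity by a power law of the local pore fraction (Bruggeman-type).
    The exponent alpha and the functional form are assumptions."""
    porosity = 1.0 - np.asarray(solid_fraction)
    return d_bulk * porosity ** alpha
```

    In a full scheme, one such factor per cell, together with a partial bounce-back fraction at boundary cells, lets a regular LBM grid honor a sub-grid interface.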

  16. Modified network simulation model with token method of bus access

    Directory of Open Access Journals (Sweden)

    L.V. Stribulevich

    2013-08-01

    Full Text Available Purpose. To study the characteristics of a local area network using the token method of bus access, a modified simulation model of the network was developed. Methodology. The network characteristics are determined with the developed simulation model, which is based on a state diagram of the network station layer with a priority-handling mechanism, both in the steady state and during the control procedures: initiation of the logical ring, and entry of a station into and exit from the logical ring. Findings. A simulation model was developed from which one can obtain the dependence of the maximum queue waiting time for different access classes, the reaction time, and the usable bandwidth on the data rate, the number of network stations, the frame-generation rate, the number of frames transmitted per token holding time, and the frame length. Originality. A network simulation technique was proposed that reflects network operation in the steady state and during the control procedures, including the priority ranking and handling mechanism. Practical value. The developed simulation model allows network characteristics to be determined for real-time systems in railway transport.
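
    A minimal token-passing model of the kind described can be sketched as a discrete-event loop; the parameter values, single access class, and Poisson arrivals below are simplifying assumptions, not the authors' full state-diagram model with priorities.

```python
import random
from collections import deque

def simulate_token_bus(n_stations=8, arrival_rate=0.02, frame_time=1.0,
                       token_pass_time=0.1, frames_per_token=2,
                       sim_time=10_000.0, seed=1):
    """Toy token-passing bus: one token visits the stations of a logical
    ring in turn; each visit lets the station transmit up to
    frames_per_token queued frames. Returns the mean frame waiting time."""
    rng = random.Random(seed)
    # pre-generate Poisson frame-arrival times per station
    arrivals = []
    for _ in range(n_stations):
        a, times = 0.0, deque()
        while a < sim_time:
            a += rng.expovariate(arrival_rate)
            times.append(a)
        arrivals.append(times)
    queues = [deque() for _ in range(n_stations)]
    t, station, waits = 0.0, 0, []
    while t < sim_time:
        for s in range(n_stations):            # deliver arrivals up to now
            while arrivals[s] and arrivals[s][0] <= t:
                queues[s].append(arrivals[s].popleft())
        for _ in range(min(frames_per_token, len(queues[station]))):
            waits.append(t - queues[station].popleft())  # waiting time
            t += frame_time                    # transmit one frame
        t += token_pass_time                   # pass token to next station
        station = (station + 1) % n_stations
    return sum(waits) / len(waits) if waits else 0.0
```

    Sweeping the arrival rate or the frames-per-token limit in this sketch reproduces the qualitative dependencies the abstract lists (waiting time versus load, station count, and token holding time).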

  17. Thermal shale fracturing simulation using the Cohesive Zone Method (CZM)

    KAUST Repository

    Enayatpour, Saeid; van Oort, Eric; Patzek, Tadeusz

    2018-01-01

    Extensive research has been conducted over the past two decades to improve hydraulic fracturing methods used for hydrocarbon recovery from tight reservoir rocks such as shales. Our focus in this paper is on thermal fracturing of such tight rocks to enhance hydraulic fracturing efficiency. Thermal fracturing is effective in generating small fractures in the near-wellbore zone - or in the vicinity of natural or induced fractures - that may act as initiation points for larger fractures. Previous analytical and numerical results indicate that thermal fracturing in tight rock significantly enhances rock permeability, thereby enhancing hydrocarbon recovery. Here, we present a more powerful way of simulating the initiation and propagation of thermally induced fractures in tight formations using the Cohesive Zone Method (CZM). The advantages of CZM are: 1) CZM simulation is fast compared to similar models which are based on the spring-mass particle method or Discrete Element Method (DEM); 2) unlike DEM, rock material complexities such as scale-dependent failure behavior can be incorporated in a CZM simulation; 3) CZM is capable of predicting the extent of fracture propagation in rock, which is more difficult to determine in a classic finite element approach. We demonstrate that CZM delivers results for the challenging fracture propagation problem of similar accuracy to the eXtended Finite Element Method (XFEM) while reducing complexity and computational effort. Simulation results for thermal fracturing in the near-wellbore zone show the effect of stress anisotropy in fracture propagation in the direction of the maximum horizontal stress. It is shown that CZM can be used to readily obtain the extent and the pattern of induced thermal fractures.
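
    The traction-separation law at the core of a CZM simulation can be written directly; the bilinear form and parameter values below are illustrative, not the specific law calibrated in the paper.

```python
def bilinear_traction(delta, delta0=0.01, delta_f=0.1, t_max=5.0):
    """Bilinear cohesive traction-separation law (illustrative parameters):
    linear ramp to the peak traction t_max at separation delta0, then
    linear softening to zero at the failure separation delta_f."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0          # elastic loading branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                 # fully debonded
```

    The area under this curve is the fracture energy; a cohesive element fails, and the thermal fracture advances, once the local separation reaches delta_f.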

  19. EVALUATING CHAMBERLAIN'S, McGREGOR'S, AND McRAE'S ...

    African Journals Online (AJOL)

    2012-08-08

    Aug 8, 2012 ... spine and base of skull radiographs which however have diagnostic challenges due to the complexity of the ... McGregor's and McRae's using CT bone windows ... metastatic lesions were excluded from the study. RESULTS.

  20. Hybrid numerical methods for multiscale simulations of subsurface biogeochemical processes

    International Nuclear Information System (INIS)

    Scheibe, T D; Tartakovsky, A M; Tartakovsky, D M; Redden, G D; Meakin, P

    2007-01-01

    Many subsurface flow and transport problems of importance today involve coupled non-linear flow, transport, and reaction in media exhibiting complex heterogeneity. In particular, problems involving biological mediation of reactions fall into this class of problems. Recent experimental research has revealed important details about the physical, chemical, and biological mechanisms involved in these processes at a variety of scales ranging from molecular to laboratory scales. However, it has not been practical or possible to translate detailed knowledge at small scales into reliable predictions of field-scale phenomena important for environmental management applications. A large assortment of numerical simulation tools has been developed, each with its own characteristic scale. Important examples include 1. molecular simulations (e.g., molecular dynamics); 2. simulation of microbial processes at the cell level (e.g., cellular automata or particle individual-based models); 3. pore-scale simulations (e.g., lattice-Boltzmann, pore network models, and discrete particle methods such as smoothed particle hydrodynamics); and 4. macroscopic continuum-scale simulations (e.g., traditional partial differential equations solved by finite difference or finite element methods). While many problems can be effectively addressed by one of these models at a single scale, some problems may require explicit integration of models across multiple scales. We are developing a hybrid multi-scale subsurface reactive transport modeling framework that integrates models with diverse representations of physics, chemistry and biology at different scales (sub-pore, pore and continuum). The modeling framework is being designed to take advantage of advanced computational technologies including parallel code components using the Common Component Architecture, parallel solvers, gridding, data and workflow management, and visualization.
This paper describes the specific methods/codes being used at each

  1. Discrete Particle Method for Simulating Hypervelocity Impact Phenomena

    Directory of Open Access Journals (Sweden)

    Erkai Watson

    2017-04-01

    Full Text Available In this paper, we introduce a computational model for the simulation of hypervelocity impact (HVI) phenomena which is based on the Discrete Element Method (DEM). Our paper constitutes the first application of DEM to the modeling and simulation of impact events at velocities beyond 5 km/s. We present here the results of a systematic numerical study on HVI of solids. For modeling the solids, we use discrete spherical particles that interact with each other via potentials. In our numerical investigations we are particularly interested in the dynamics of material fragmentation upon impact. We model a typical HVI experiment configuration where a sphere strikes a thin plate and investigate the properties of the resulting debris cloud. We provide a quantitative computational analysis of the resulting debris cloud caused by impact and a comprehensive parameter study by varying key parameters of our model. We compare our findings from the simulations with recent HVI experiments performed at our institute. Our findings are that the DEM method leads to very stable, energy-conserving simulations of HVI scenarios that map the experimental setup where a sphere strikes a thin plate at hypervelocity. Our chosen interaction model works particularly well in the velocity range where the local stresses caused by impact shock waves markedly exceed the ultimate material strength.
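
    The basic DEM ingredients the paper builds on - spherical particles interacting via a potential, integrated explicitly in time - can be sketched for two particles; the linear repulsive spring and parameter values here are illustrative and far simpler than the paper's interaction model.

```python
import numpy as np

def dem_collision(x1, x2, v1, v2, radius=0.5, k=1e4, m=1.0,
                  dt=1e-4, steps=20000):
    """Two spherical DEM particles that repel through a linear spring when
    they overlap, integrated with velocity Verlet. Parameters are
    illustrative, not calibrated to any material."""
    x1, x2, v1, v2 = map(lambda a: np.asarray(a, float), (x1, x2, v1, v2))

    def force(xa, xb):                       # force on particle a from b
        d = xb - xa
        dist = np.linalg.norm(d)
        overlap = 2.0 * radius - dist
        if overlap <= 0.0:
            return np.zeros_like(d)          # no contact, no force
        return -k * overlap * d / dist       # pushes xa away from xb

    f1 = force(x1, x2)
    for _ in range(steps):
        v1h = v1 + 0.5 * dt * f1 / m         # half-step velocities
        v2h = v2 - 0.5 * dt * f1 / m         # Newton's third law
        x1 = x1 + dt * v1h
        x2 = x2 + dt * v2h
        f1 = force(x1, x2)
        v1 = v1h + 0.5 * dt * f1 / m
        v2 = v2h - 0.5 * dt * f1 / m
    return x1, x2, v1, v2
```

    A head-on impact bounces elastically: momentum is conserved to round-off and the speeds are recovered to the accuracy of the symplectic integrator. Fragmentation studies replace the purely repulsive spring with a breakable attractive bond.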

  2. KMCThinFilm: A C++ Framework for the Rapid Development of Lattice Kinetic Monte Carlo (kMC) Simulations of Thin Film Growth

    Science.gov (United States)

    2015-09-01

    direction, so if the simulation domain is set to be a certain size, then that presents a hard ceiling on the thickness of a film that may be grown...
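
    The rejection-free event loop at the heart of lattice kMC codes can be sketched with a toy 1D solid-on-solid growth model; the model, rates, and function names below are illustrative and are not part of the KMCThinFilm API.

```python
import random

def kmc_deposition(L=32, flux=1.0, hop_rate=5.0, t_end=2.0, seed=7):
    """Toy 1D solid-on-solid lattice kMC: atoms deposit at rate `flux` per
    site, and any surface atom with a strictly lower neighbor hops downhill
    at `hop_rate`. Event selection and time advance follow the standard
    rejection-free (BKL/Gillespie) scheme."""
    rng = random.Random(seed)
    h = [0] * L                                # column heights
    t, n_events = 0.0, 0
    while True:
        mobile = [i for i in range(L)
                  if h[(i - 1) % L] < h[i] or h[(i + 1) % L] < h[i]]
        dep_total = flux * L
        total = dep_total + hop_rate * len(mobile)
        t += rng.expovariate(total)            # exponential waiting time
        if t > t_end:
            break
        if rng.random() < dep_total / total:   # deposition event
            h[rng.randrange(L)] += 1
        else:                                  # downhill hop event
            i = rng.choice(mobile)
            lower = [j for j in ((i - 1) % L, (i + 1) % L) if h[j] < h[i]]
            j = rng.choice(lower)
            h[i] -= 1
            h[j] += 1
        n_events += 1
    return h, n_events
```

    Real thin-film codes replace the two event classes with Arrhenius rate catalogs and efficient group-based event selection, but the loop structure is the same.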

  3. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experimental data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results on the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels that may be needed for the methodologies to achieve convergence.
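
    One building block of stochastic expansion methods is deterministic Gaussian quadrature over the random inputs; the one-dimensional sketch below (not DAKOTA's interface) estimates output moments for a Gaussian input.

```python
import numpy as np

def gh_moments(f, n=20):
    """Estimate mean and variance of f(X), X ~ N(0,1), with Gauss-Hermite
    quadrature -- the deterministic sampling that underlies polynomial
    chaos (stochastic expansion) coefficient evaluation."""
    x, w = np.polynomial.hermite.hermgauss(n)  # physicists' convention
    xs = np.sqrt(2.0) * x                      # map nodes to standard normal
    ws = w / np.sqrt(np.pi)                    # weights now sum to 1
    fx = f(xs)
    mean = np.sum(ws * fx)
    var = np.sum(ws * (fx - mean) ** 2)
    return mean, var
```

    The quadrature is exact for polynomial responses up to degree 2n-1, which is why low expansion levels can converge rapidly for smooth response functions.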

  4. Quantum control with NMR methods: Application to quantum simulations

    International Nuclear Information System (INIS)

    Negrevergne, Camille

    2002-01-01

    Manipulating information according to quantum laws allows improvements in the efficiency with which we treat certain problems. Liquid-state nuclear magnetic resonance methods allow us to initialize, manipulate and read the quantum state of a system of coupled spins. These methods have been used to realize a small experimental Quantum Information Processor (QIP) able to process information through around one hundred elementary operations. One of the main themes of this work was to design, optimize and validate reliable RF-pulse sequences used to 'program' the QIP. Such techniques have been used to run a quantum simulation algorithm for fermionic systems. Experimental results have been obtained on the determination of eigenenergies and correlation functions for a toy problem consisting of fermions on a lattice, showing an experimental proof of principle for such quantum simulations. (author) [fr

  5. Experience in Collaboration: McDenver at McDonald's.

    Science.gov (United States)

    Combs, Clarice Sue

    2002-01-01

    The McDenver at McDonald's project provided a nontraditional, community-based teaching and learning environment for faculty and students in a health, physical education, and recreation (HPER) department and a school of nursing. Children and parents came to McDonald's, children received developmental screenings, and parents completed conferences…

  6. From fuel cells to batteries: Synergies, scales and simulation methods

    OpenAIRE

    Bessler, Wolfgang G.

    2011-01-01

    The recent years have shown a dynamic growth of battery research and development activities both in academia and industry, supported by large governmental funding initiatives throughout the world. A particular focus is being put on lithium-based battery technologies. This situation provides a stimulating environment for the fuel cell modeling community, as there are considerable synergies in the modeling and simulation methods for fuel cells and batteries. At the same time, batter...

  7. Application of subset simulation methods to dynamic fault tree analysis

    International Nuclear Information System (INIS)

    Liu Mengyun; Liu Jingquan; She Ding

    2015-01-01

    Although fault tree analysis has been implemented in the nuclear safety field over the past few decades, it has been criticized for its inability to model time-dependent behaviors. Several methods have been proposed to overcome this disadvantage, and the dynamic fault tree (DFT) has become one of the research highlights. By introducing additional dynamic gates, a DFT can describe dynamic behaviors such as the replacement of spare components or the priority of failure events. Using the Monte Carlo simulation (MCS) approach to solve DFTs has attracted rising attention, because it can model the authentic behaviors of systems and avoid the limitations of analytical methods. This paper provides an overview of and MCS information for DFT analysis, including the sampling of basic events and the propagation rules for logic gates. When calculating rare-event probabilities, a large number of simulations is required in standard MCS. To address this weakness, the subset simulation (SS) approach is applied. Using the concept of conditional probability and the Markov chain Monte Carlo (MCMC) technique, the SS method accelerates the exploration of the failure region. Two cases are tested to illustrate the performance of the SS approach, and the numerical results suggest that it gives high efficiency when calculating complicated systems with small failure probabilities. (author)
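
    The subset simulation idea - pushing samples toward the failure region through conditional levels and MCMC resampling - can be sketched on a scalar toy problem; the Metropolis proposal and settings below are illustrative, not the paper's DFT implementation.

```python
import numpy as np

def subset_simulation(threshold=3.5, n=1000, p0=0.1, seed=3, max_levels=10):
    """Subset simulation on a scalar toy problem: estimate P(X > threshold)
    for X ~ N(0,1) via conditional levels and Metropolis resampling."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    p = 1.0
    for _ in range(max_levels):
        order = np.argsort(x)[::-1]            # sort responses, descending
        n_keep = int(p0 * n)
        level = x[order[n_keep - 1]]           # intermediate threshold
        if level >= threshold:
            break                              # final level reached
        p *= p0
        seeds = x[order[:n_keep]]
        chains = []
        for s in seeds:                        # grow chains above `level`
            cur = s
            for _ in range(n // n_keep):
                cand = cur + rng.normal(0.0, 1.0)
                # Metropolis accept: density ratio, constrained to x > level
                if cand > level and rng.random() < np.exp(0.5 * (cur**2 - cand**2)):
                    cur = cand
                chains.append(cur)
        x = np.asarray(chains[:n])
    return p * np.mean(x > threshold)
```

    Each level costs only n samples yet multiplies the reachable probability by p0, so a 1e-4 event needs roughly four levels instead of the millions of realizations standard MCS would require.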

  8. A computer method for simulating the decay of radon daughters

    International Nuclear Information System (INIS)

    Hartley, B.M.

    1988-01-01

    The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is said to be random, but this random behaviour is such that, for a single species, the times of disintegration follow a geometric distribution. Numbers with a geometric distribution can be generated by computer and used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon-222 and the emission of alpha particles from polonium-218 and polonium-214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for measuring exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically, since the time of decay of an atom of polonium-218 is not independent of the time of decay of the subsequent polonium-214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for counting alpha particles from radon daughters and calculating exposure.
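
    Generating geometrically distributed disintegration times, as the abstract describes, takes only a few lines; the counting window, time step, and chain bookkeeping below are illustrative assumptions.

```python
import math
import random

HALF_LIFE = {"Po218": 3.05, "Pb214": 26.8, "Bi214": 19.7}   # minutes
DT = 0.01                                                    # time step, min

def alpha_counts(n_atoms=5000, t_count=30.0, seed=11):
    """Follow n_atoms of Po-218 through Pb-214 and Bi-214, drawing a
    geometrically distributed number of time steps for each disintegration.
    Counts the alpha particles emitted within t_count minutes: one at the
    Po-218 decay and one from Po-214, whose own decay follows the Bi-214
    decay almost instantly."""
    rng = random.Random(seed)
    # per-step decay probability for each species
    p = {s: 1.0 - 2.0 ** (-DT / hl) for s, hl in HALF_LIFE.items()}
    alphas = 0
    for _ in range(n_atoms):
        t = 0.0
        for species in ("Po218", "Pb214", "Bi214"):
            # geometric number of steps until this disintegration
            k = int(math.log(1.0 - rng.random()) / math.log(1.0 - p[species])) + 1
            t += k * DT
            if t > t_count:
                break                          # decays after counting window
            if species != "Pb214":             # Po-218 alpha, Po-214 alpha
                alphas += 1
    return alphas
```

    Repeating this with different seeds gives the spread of alpha counts, which is exactly the statistical uncertainty the abstract says is awkward to obtain analytically because successive decay times in the chain are not independent.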

  9. Molecular dynamics simulation based on the multi-component molecular orbital method: Application to H5O2+, D5O2+, and T5O2+

    International Nuclear Information System (INIS)

    Ishimoto, Takayoshi; Koyama, Michihisa

    2012-01-01

    Graphical abstract: A molecular dynamics method based on the multi-component molecular orbital method was applied to basic hydrogen-bonding systems: H5O2+ and its isotopomers (D5O2+ and T5O2+). Highlights: ► A molecular dynamics method including the nuclear quantum effect was developed. ► The multi-component molecular orbital method was used for the ab initio MO calculation. ► The developed method was applied to a basic hydrogen-bonding system, H5O2+, and its isotopomers. ► The O⋯O stretching vibration is reflected in the distribution of the protonic wavefunctions. ► The H/D/T isotope effect was also analyzed. - Abstract: We propose a molecular dynamics (MD) method based on the multi-component molecular orbital (MC_MO) method, which directly takes into account the quantum effect of the proton, for detailed analyses of proton transfer in hydrogen-bonding systems. The MC_MO-based MD (MC_MO-MD) method is applied to the basic structures H5O2+ (the "Zundel ion") and its isotopomers (D5O2+ and T5O2+). We clearly demonstrate the geometrical difference in the hydrogen-bonded O⋯O distance induced by the H/D/T isotope effect: the O⋯O distance in the H-compound is longer than that in the D- or T-compound. We also find a strong relation between the O⋯O stretching vibration and the distribution of the hydrogen-bonded protonic wavefunction, because the protonic wavefunction tends to delocalize when the O⋯O distance becomes short during the dynamics. Our proposed MC_MO-MD simulation is expected to be a powerful tool for analyzing proton dynamics in hydrogen-bonding systems.

  10. Alex McQueen : power

    Index Scriptorium Estoniae

    1998-01-01

    On A. McQueen's activities outside fashion. American Express commissioned a credit card design from him. From the summer of 1998, assistant editor of the magazine 'Dazed & Confused'. A. McQueen has agreed to be the artistic director of a video by Björk (Iceland).

  11. Atmosphere Re-Entry Simulation Using Direct Simulation Monte Carlo (DSMC) Method

    Directory of Open Access Journals (Sweden)

    Francesco Pellicani

    2016-05-01

    Full Text Available Aerothermodynamic investigations of hypersonic re-entry vehicles provide fundamental information to other important disciplines, such as materials and structures, assisting the development of efficient, low-weight thermal protection systems (TPS). In the transitional flow regime, where thermal and chemical equilibrium is almost absent, a new numerical method for such studies has been introduced: the direct simulation Monte Carlo (DSMC) technique. The acceptance and applicability of the DSMC method have increased significantly in the 50 years since its invention, thanks to the increase in computer speed and to parallel computing. Nevertheless, further verification and validation efforts are needed for its wider acceptance. In this study, the Monte Carlo simulators OpenFOAM and SPARTA have been studied and benchmarked against numerical and theoretical data for inert and chemically reactive flows, and the same will be done against experimental data in the near future. The results show the validity of the data obtained with the DSMC. The best settings of the fundamental parameters used by a DSMC simulator are presented for each software package and compared with the guidelines deriving from the theory behind the Monte Carlo method. In particular, the number of particles per cell was found to be the most relevant parameter for achieving valid and optimized results. It is shown that a simulation with a mean value of one particle per cell gives sufficiently good results with very low computational resources. This achievement suggests reconsidering the appropriate investigation method in the transitional regime, where both the direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) can work, but with different computational effort.

  12. A particle finite element method for machining simulations

    Science.gov (United States)

    Sabel, Matthias; Sator, Christian; Müller, Ralf

    2014-07-01

    The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested on a simple example. Also the kinematics and a suitable finite element formulation are introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results for process parameters such as the cutting force.

  13. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also makes it possible to consider more priors: different priors can be integrated into one model using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool during the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
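
    The non-Bayesian baseline that BMC improves on - stopping a plain MC run once the estimated accuracy is reached - can be sketched as follows; this is not the BMC algorithm itself, which additionally exploits prior information such as the dependence of neighboring points.

```python
import math
import random

def mc_until_accurate(sample, tol=0.01, batch=1000, max_n=1_000_000, seed=5):
    """Plain Monte Carlo that stops as soon as the standard error of the
    running mean drops below `tol`. Returns (mean, number of realizations)."""
    rng = random.Random(seed)
    n, s, s2 = 0, 0.0, 0.0
    mean = 0.0
    while n < max_n:
        for _ in range(batch):
            v = sample(rng)
            s += v
            s2 += v * v
        n += batch
        mean = s / n
        var = max(s2 / n - mean * mean, 0.0)
        if math.sqrt(var / n) < tol:           # desired accuracy reached
            break
    return mean, n
```

    BMC-style priors reduce `n` further by predicting the response at unsampled points instead of evaluating the expensive model there.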

  14. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    Science.gov (United States)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. 
A two-level FAS algorithm is presented for the black-oil equations, and linear multigrid for
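
    The multigrid building blocks discussed above (smoothing, restriction, coarse-grid correction, prolongation) can be sketched for a 1D Poisson model problem; this toy two-grid cycle is far removed from a black-oil simulator, but it shows the mechanics.

```python
import numpy as np

def two_grid_poisson(n=63, n_cycles=10):
    """Two-grid cycle for -u'' = f on (0,1) with u(0)=u(1)=0: weighted-Jacobi
    smoothing, full-weighting restriction, linear-interpolation prolongation,
    and a direct coarse-grid solve. Returns the max error against the exact
    solution sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)           # manufactured right-hand side

    def apply_a(u):                            # -u'' by the 3-point stencil
        return (2.0*u - np.r_[0.0, u[:-1]] - np.r_[u[1:], 0.0]) / h**2

    def smooth(u, nu=3, omega=2.0/3.0):        # weighted Jacobi sweeps
        for _ in range(nu):
            u = u + omega * (h**2 / 2.0) * (f - apply_a(u))
        return u

    nc = (n - 1) // 2                          # coarse grid: odd fine points
    hc = 2.0 * h
    Ac = (2.0*np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2

    u = np.zeros(n)
    for _ in range(n_cycles):
        u = smooth(u)                          # pre-smoothing
        r = f - apply_a(u)
        rc = 0.25 * (r[0:-2:2] + 2.0*r[1:-1:2] + r[2::2])  # full weighting
        ec = np.linalg.solve(Ac, rc)           # exact coarse correction
        e = np.zeros(n)
        e[1::2] = ec                           # inject at coincident points
        ec_pad = np.r_[0.0, ec, 0.0]
        e[0::2] = 0.5 * (ec_pad[:-1] + ec_pad[1:])  # interpolate between
        u = u + e
        u = smooth(u)                          # post-smoothing
    return np.max(np.abs(u - np.sin(np.pi * x)))
```

    Each cycle reduces the algebraic error by a grid-independent factor, which is why multigrid is attractive inside a Newton loop where the same kind of linear system must be solved at every time step.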

  15. Radon movement simulation in overburden by the 'Scattered Packet Method'

    International Nuclear Information System (INIS)

    Marah, H.; Sabir, A.; Hlou, L.; Tayebi, M.

    1998-01-01

    The analysis of radon (222Rn) movement in overburden requires solving the general transport equation in a porous medium, involving diffusion and convection. Generally this equation has been derived and solved analytically. The 'Scattered Packet Method' is a recent mathematical method of solution, initially developed for studies of electron movement in semiconductors. In this paper, we have adapted this method to simulate radon migration in a porous medium. The key parameters are the radon concentration at the source, the diffusion coefficient, and the geometry. To show the efficiency of this method, several cases of increasing complexity are considered. This model allows one to follow, in time and space, the migration of the radon produced as a function of the characteristics of the studied site. Forty soil radon measurements were taken from a North Moroccan fault. Forward modeling of the radon anomalies produces satisfactory fits to the observed data and allows determination of the overburden thickness. (author)

  16. Evaluation of null-point detection methods on simulation data

    Science.gov (United States)

    Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano

    2014-05-01

    We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and whether they play a dominant role in the dissipation of magnetic energy.

  17. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints may also be specified. A similar problem consists in fitting component models; in this case the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient variable-discretization strategy and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted using analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we proposed a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods from the combinatorial optimization domain - the threshold method, a genetic algorithm and the Tabu search method. The tests have been performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program
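
    The core simulated annealing loop with a geometric cooling schedule can be sketched as follows; the Gaussian proposal, cooling constants, and stopping rule are illustrative and do not reproduce the thesis's variable-discretization strategy or the SPICE-PAC coupling.

```python
import math
import random

def simulated_annealing(f, x0, lo, hi, t0=1.0, alpha=0.95,
                        n_iter=20000, seed=2):
    """Simulated annealing on a box-constrained continuous problem with a
    geometric cooling schedule and a Gaussian proposal whose width shrinks
    with the temperature."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for k in range(n_iter):
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * (hi - lo) * t)))
                for xi in x]
        fc = f(cand)
        # Metropolis rule: always accept improvements, sometimes uphill moves
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        if (k + 1) % 100 == 0:
            t *= alpha                         # geometric cooling
    return best, fbest
```

    In a circuit-design setting, `f` would wrap a call to the circuit simulator and return the aggregated error against the operational criteria, which is exactly why chaining simulations efficiently matters.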

  18. A fast mollified impulse method for biomolecular atomistic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Fath, L., E-mail: lukas.fath@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Hochbruck, M., E-mail: marlis.hochbruck@kit.edu [Institute for App. and Num. Mathematics, Karlsruhe Institute of Technology (Germany); Singh, C.V., E-mail: chandraveer.singh@utoronto.ca [Department of Materials Science & Engineering, University of Toronto (Canada)

    2017-03-15

Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear constraint systems. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software, without Hessians or constraint solves. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.

  19. Numerical method for IR background and clutter simulation

    Science.gov (United States)

    Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio

    1997-06-01

The paper describes a fast and accurate algorithm for generating IR background noise and clutter in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude obeys a Gaussian distribution and zones of the same scene satisfy a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model and excellent fidelity to reality, as appears from a comparison with images from IR sensors. The proposed method shows advantages over methods based on filtering white noise in the time or frequency domain, as it requires a limited number of computations; furthermore, it is more accurate than quasi-random processes. The background generation starts from a reticule of a few points and, by means of growing rules, the process is extended to the whole scene at the required dimension and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
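For comparison, the conventional baseline the authors improve upon — spectral filtering of white noise to impose the correlation structure — can be sketched as below; this is not the paper's reticule-growing algorithm, and the grid size, correlation length and normalization are illustrative assumptions.

```python
import numpy as np

def gaussian_field_exp_corr(n, corr_len, seed=0):
    """Zero-mean, unit-variance Gaussian field whose spatial correlation
    approximately follows exp(-d / corr_len) (periodic boundaries)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:n, :n]
    # Distance to the origin on a periodic grid.
    d = np.hypot(np.minimum(yy, n - yy), np.minimum(xx, n - xx))
    # Spectrum of the target covariance; tiny negative values are clipped.
    spectrum = np.maximum(np.fft.fft2(np.exp(-d / corr_len)).real, 0.0)
    noise = rng.standard_normal((n, n))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(spectrum)).real
    return (field - field.mean()) / field.std()
```

Neighbouring pixels of the resulting field are strongly correlated (roughly exp(-1/corr_len) at lag one), mimicking the exponential-correlation hypothesis of the background model.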

  20. Viscoelastic Earthquake Cycle Simulation with Memory Variable Method

    Science.gov (United States)

    Hirahara, K.; Ohtani, M.

    2017-12-01

There have so far been no EQ (earthquake) cycle simulations based on RSF (rate and state friction) laws in viscoelastic media, except for Kato (2002), who simulated cycles on a 2-D vertical strike-slip fault and showed nearly the same cycles as those in elastic cases. The viscoelasticity could, however, have a larger effect on large dip-slip EQ cycles. In a boundary element approach, stress is calculated using a hereditary integral of the stress relaxation function and the slip deficit rate, which requires the past slip rates and leads to huge computational costs. This is one reason for the near absence of simulations in viscoelastic media. We have investigated the memory variable method utilized in numerical computation of wave propagation in dissipative media (e.g., Moczo and Kristek, 2005). In this method, by introducing memory variables satisfying 1st-order differential equations, we need no hereditary integrals in the stress calculation, and the computational costs are of the same order as those in elastic cases. Further, Hirahara et al. (2012) developed the iterative memory variable method, referring to Taylor et al. (1970), for EQ cycle simulations in linear viscoelastic media. In this presentation, first, we introduce our method for EQ cycle simulations and show the effect of linear viscoelasticity on stick-slip cycles in a 1-DOF block-SLS (standard linear solid) model, where the elastic spring of the traditional block-spring model is replaced by an SLS element and we pull, at a constant rate, the block obeying the RSF law. In this model, the memory variable stands for the displacement of the dash-pot in the SLS element. The use of smaller viscosity reduces the recurrence time to a minimum value. A smaller viscosity means a smaller relaxation time, which makes the stress recovery quicker, leading to a smaller recurrence time. Second, we show EQ cycles on a 2-D dip-slip fault with a dip angle of 20 degrees in an elastic layer with thickness of 40 km overriding a Maxwell viscoelastic half-space

  1. Amyloid oligomer structure characterization from simulations: A general method

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Phuong H., E-mail: phuong.nguyen@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Li, Mai Suan [Institute of Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, 02-668 Warsaw (Poland); Derreumaux, Philippe, E-mail: philippe.derreumaux@ibpc.fr [Laboratoire de Biochimie Théorique, UPR 9080, CNRS Université Denis Diderot, Sorbonne Paris Cité IBPC, 13 rue Pierre et Marie Curie, 75005 Paris (France); Institut Universitaire de France, 103 Bvd Saint-Germain, 75005 Paris (France)

    2014-03-07

Amyloid oligomers and plaques are composed of multiple chemically identical proteins. Therefore, one of the first fundamental problems in the characterization of structures from simulations is the treatment of the degeneracy, i.e., the permutation of the molecules. Second, the intramolecular and intermolecular degrees of freedom of the various molecules must be taken into account. Currently, the well-known dihedral principal component analysis method only considers the intramolecular degrees of freedom, and other methods employing collective variables can only describe intermolecular degrees of freedom at the global level. With this in mind, we propose a general method that identifies all the structures accurately. The basic idea is that the intramolecular and intermolecular states are described in terms of combinations of single-molecule and double-molecule states, respectively, and the overall structures of oligomers are the product basis of the intramolecular and intermolecular states. This way, the degeneracy is automatically avoided. The method is illustrated on the conformational ensemble of the tetramer of the Alzheimer's peptide Aβ{sub 9−40}, resulting from two atomistic molecular dynamics simulations in explicit solvent, each of 200 ns, starting from two distinct structures.

  2. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical.

  3. A non-linear and stochastic response surface method for Bayesian estimation of uncertainty in soil moisture simulation from a land surface model

    Directory of Open Access Journals (Sweden)

    F. Hossain

    2004-01-01

Full Text Available This study presents a simple and efficient scheme for Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. The scheme's sampling efficiency is evaluated through comparison with fully random MC sampling (the norm for GLUE) and the nearest-neighborhood sampling technique. The scheme reduced the computational burden of random MC sampling for GLUE by 10%-70%, and it was about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation for any LSM more efficient, as it does not impose any additional structural or distributional assumptions.
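A minimal one-parameter sketch of the proxy idea: fit probabilists' Hermite polynomial coefficients by least squares from a handful of runs of a slow model, then evaluate the polynomial as a fast surrogate. The function names, the single standard-normal parameter and the least-squares fit are illustrative assumptions; the study's LSM and GLUE machinery are not reproduced.

```python
import numpy as np

def hermite_basis(xi, order):
    """Probabilists' Hermite polynomials He_0 .. He_order evaluated at xi."""
    H = [np.ones_like(xi), xi]
    for n in range(1, order):
        H.append(xi * H[n] - n * H[n - 1])  # He_{n+1} = xi*He_n - n*He_{n-1}
    return np.column_stack(H[:order + 1])

def fit_pce_proxy(model, order=3, n_train=20, seed=0):
    """Fit a polynomial chaos expansion to a slow model from a few runs."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_train)      # parameter uncertainty as N(0, 1)
    y = np.array([model(v) for v in xi])   # the limited number of slow-model runs
    coeffs, *_ = np.linalg.lstsq(hermite_basis(xi, order), y, rcond=None)
    return lambda x: hermite_basis(np.atleast_1d(np.asarray(x, dtype=float)), order) @ coeffs
```

Once calibrated, evaluating the proxy is essentially free compared with re-running the slow model, which is where the reported sampling savings come from.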

  4. Hardware-in-the-loop grid simulator system and method

    Science.gov (United States)

    Fox, John Curtiss; Collins, Edward Randolph; Rigas, Nikolaos

    2017-05-16

    A hardware-in-the-loop (HIL) electrical grid simulation system and method that combines a reactive divider with a variable frequency converter to better mimic and control expected and unexpected parameters in an electrical grid. The invention provides grid simulation in a manner to allow improved testing of variable power generators, such as wind turbines, and their operation once interconnected with an electrical grid in multiple countries. The system further comprises an improved variable fault reactance (reactive divider) capable of providing a variable fault reactance power output to control a voltage profile, therein creating an arbitrary recovery voltage. The system further comprises an improved isolation transformer designed to isolate zero-sequence current from either a primary or secondary winding in a transformer or pass the zero-sequence current from a primary to a secondary winding.

  5. Simulating condensation on microstructured surfaces using Lattice Boltzmann Method

    Science.gov (United States)

    Alexeev, Alexander; Vasyliv, Yaroslav

    2017-11-01

    We simulate a single component fluid condensing on 2D structured surfaces with different wettability. To simulate the two phase fluid, we use the athermal Lattice Boltzmann Method (LBM) driven by a pseudopotential force. The pseudopotential force results in a non-ideal equation of state (EOS) which permits liquid-vapor phase change. To account for thermal effects, the athermal LBM is coupled to a finite volume discretization of the temperature evolution equation obtained using a thermal energy rate balance for the specific internal energy. We use the developed model to probe the effect of surface structure and surface wettability on the condensation rate in order to identify microstructure topographies promoting condensation. Financial support is acknowledged from Kimberly-Clark.

  6. 'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods

    International Nuclear Information System (INIS)

    Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.

    2008-01-01

Data processing techniques, combined with the development of fast and more powerful computers, make Monte Carlo methods one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. Two computational exposure models were used, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 industrial X-ray unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The CDO will be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)

  7. Adaptive mesh refinement and adjoint methods in geophysics simulations

    Science.gov (United States)

    Burstedde, Carsten

    2013-04-01

It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  8. Modeling and simulation of different and representative engineering problems using Network Simulation Method.

    Science.gov (United States)

    Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F

    2018-01-01

Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of a set of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature, to show the reliability of the model.

  9. Modeling and simulation of different and representative engineering problems using Network Simulation Method

    Science.gov (United States)

    2018-01-01

Mathematical models simulating different and representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of a set of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problem is numerically solved by the network method, which provides all the variables of the problems. Although the model is extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature, to show the reliability of the model. PMID:29518121

  10. Simulation of Rossi-α method with analog Monte-Carlo method

    International Nuclear Information System (INIS)

    Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang

    2012-01-01

An analog Monte Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constants α of six metal uranium configurations at Oak Ridge National Laboratory were calculated. α was also calculated by the burst-neutron method, and the result was consistent with that of the Rossi-α method. There is a difference between the results of the analog Monte Carlo simulation and the experiment; the reason for the difference is the gaps between the uranium layers. The influence of the gaps decreases as the subcriticality deepens: the relative difference between simulation and experiment changes from 19% to 0.19%. (authors)

  11. Branding McJobs

    DEFF Research Database (Denmark)

    Noppeney, Claus; Endrissat, Nada; Kärreman, Dan

    Traditionally, employer branding has been considered relevant for knowledge intensive firms that compete in a ‘war for talent’. However, the continuous rise in service sector jobs and the negative image of these so-called McJobs has motivated a trend in rebranding service work. Building on critical...... oriented branding literature, our contribution to this stream of research is twofold: We provide an empirical account of employer branding of a grocery chain, which has repeatedly been voted among the ‘100 best companies to work for’. Second, we outline the role of symbolic compensation that employees...... of employer branding....

  12. McArdle Disease

    DEFF Research Database (Denmark)

    Santalla, Alfredo; Nogales-Gadea, Gisela; Ørtenblad, Niels

    2014-01-01

    McArdle disease is arguably the paradigm of exercise intolerance in humans. This disorder is caused by inherited deficiency of myophosphorylase, the enzyme isoform that initiates glycogen breakdown in skeletal muscles. Because patients are unable to obtain energy from their muscle glycogen stores......, this disease provides an interesting model of study for exercise physiologists, allowing insight to be gained into the understanding of glycogen-dependent muscle functions. Of special interest in the field of muscle physiology and sports medicine are also some specific (if not unique) characteristics...

  13. Reduction Methods for Real-time Simulations in Hybrid Testing

    DEFF Research Database (Denmark)

    Andersen, Sebastian

    2016-01-01

Hybrid testing constitutes a cost-effective experimental full scale testing method. The method was introduced in the 1960s by Japanese researchers, as an alternative to conventional full scale testing and small scale material testing, such as shake table tests. The principle of the method...... is performed on a glass fibre reinforced polymer composite box girder. The test serves as a pilot test for prospective real-time tests on a wind turbine blade. The Taylor basis is implemented in the test, used to perform the numerical simulations. Despite a number of introduced errors in the real...... is to divide a structure into a physical substructure and a numerical substructure, and couple these in a test. If the test is conducted in real-time it is referred to as real time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960s, both with respect

  14. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce, let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  15. Rapid simulation of spatial epidemics: a spectral method.

    Science.gov (United States)

    Brand, Samuel P C; Tildesley, Michael J; Keeling, Matthew J

    2015-04-07

    Spatial structure and hence the spatial position of host populations plays a vital role in the spread of infection. In the majority of situations, it is only possible to predict the spatial spread of infection using simulation models, which can be computationally demanding especially for large population sizes. Here we develop an approximation method that vastly reduces this computational burden. We assume that the transmission rates between individuals or sub-populations are determined by a spatial transmission kernel. This kernel is assumed to be isotropic, such that the transmission rate is simply a function of the distance between susceptible and infectious individuals; as such this provides the ideal mechanism for modelling localised transmission in a spatial environment. We show that the spatial force of infection acting on all susceptibles can be represented as a spatial convolution between the transmission kernel and a spatially extended 'image' of the infection state. This representation allows the rapid calculation of stochastic rates of infection using fast-Fourier transform (FFT) routines, which greatly improves the computational efficiency of spatial simulations. We demonstrate the efficiency and accuracy of this fast spectral rate recalculation (FSR) method with two examples: an idealised scenario simulating an SIR-type epidemic outbreak amongst N habitats distributed across a two-dimensional plane; the spread of infection between US cattle farms, illustrating that the FSR method makes continental-scale outbreak forecasting feasible with desktop processing power. The latter model demonstrates which areas of the US are at consistently high risk for cattle-infections, although predictions of epidemic size are highly dependent on assumptions about the tail of the transmission kernel. Copyright © 2015 Elsevier Ltd. All rights reserved.
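The core convolution step of the FSR method can be illustrated on a regular grid; the paper works with arbitrarily located habitats or farms, so the gridded layout and the names below are simplifying assumptions.

```python
import numpy as np

def force_of_infection(infected, kernel):
    """Force of infection at every grid cell as an FFT-based convolution
    of the infection state with an isotropic transmission kernel.

    infected : 2-D array, 1.0 where a host is infectious, 0.0 otherwise
    kernel   : 2-D kernel of the same shape, centred on the grid midpoint
    """
    # Convolution theorem: pointwise product in frequency space. The FFT makes
    # this O(N log N) instead of O(N^2) pairwise distance sums. In practice the
    # domain is zero-padded to avoid wrap-around at the edges.
    k_hat = np.fft.fft2(np.fft.ifftshift(kernel))
    return np.fft.ifft2(np.fft.fft2(infected) * k_hat).real
```

A single infectious cell simply reproduces the kernel centred on that cell, so the stochastic infection rates acting on all susceptibles are recovered in one transform pair.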

  16. Discrete vortex method simulations of aerodynamic admittance in bridge aerodynamics

    DEFF Research Database (Denmark)

    Rasmussen, Johannes Tophøj; Hejlesen, Mads Mølholm; Larsen, Allan

    , and to determine aerodynamic forces and the corresponding flutter limit. A simulation of the three-dimensional bridge responseto turbulent wind is carried out by quasi steady theory by modelling the bridge girder as a line like structure [2], applying the aerodynamic load coefficients found from the current version......The meshless and remeshed Discrete Vortex Method (DVM) has been widely used in academia and by the industry to model two-dimensional flow around bluff bodies. The implementation “DVMFLOW” [1] is used by the bridge design company COWI to determine and visualise the flow field around bridge sections...

  17. Numerical Simulation of Plasma Antenna with FDTD Method

    International Nuclear Information System (INIS)

    Chao, Liang; Yue-Min, Xu; Zhi-Jiang, Wang

    2008-01-01

We adopt a cylindrical-coordinate FDTD algorithm to simulate and analyse a 0.4-m-long column configuration plasma antenna. The FDTD method is useful for solving electromagnetic problems, especially when wave characteristics and plasma properties are self-consistently related to each other. Focusing on frequencies from 75 MHz to 400 MHz, the input impedance and radiation efficiency of plasma antennas are computed. Numerical results show that, unlike a copper antenna, the characteristics of a plasma antenna vary simultaneously with plasma frequency and collision frequency. This property can be used to construct dynamically reconfigurable antennas. The investigation is meaningful and instructive for the optimization of plasma antenna design

  18. Numerical simulation of plasma antenna with FDTD method

    International Nuclear Information System (INIS)

    Liang Chao; Xu Yuemin; Wang Zhijiang

    2008-01-01

We adopt a cylindrical-coordinate FDTD algorithm to simulate and analyse a 0.4-m-long column configuration plasma antenna. The FDTD method is useful for solving electromagnetic problems, especially when wave characteristics and plasma properties are self-consistently related to each other. Focusing on frequencies from 75 MHz to 400 MHz, the input impedance and radiation efficiency of plasma antennas are computed. Numerical results show that, unlike a copper antenna, the characteristics of a plasma antenna vary simultaneously with plasma frequency and collision frequency. This property can be used to construct dynamically reconfigurable antennas. The investigation is meaningful and instructive for the optimization of plasma antenna design. (authors)

  19. Effectiveness of McKenzie Method-Based Self-Management Approach for the Secondary Prevention of a Recurrence of Low Back Pain (SAFE Trial): Protocol for a Pragmatic Randomized Controlled Trial.

    Science.gov (United States)

    de Campos, Tarcisio F; Maher, Chris G; Clare, Helen A; da Silva, Tatiane M; Hancock, Mark J

    2017-08-01

Although many people recover quickly from an episode of low back pain (LBP), recurrence is very common. There is limited evidence on effective prevention strategies for recurrences of LBP. The purpose of this study was to determine the effectiveness of a McKenzie method-based self-management approach in the secondary prevention of LBP. This will be a pragmatic randomized controlled trial. Participants will be recruited from the community and primary care, with the intervention delivered in a number of physical therapist practices in Sydney, Australia. The study will have 396 participants, all of whom are at least 18 years old. Participants will be randomly assigned to either the McKenzie method-based self-management approach group or a minimal intervention control group. The primary outcome will be days to first self-reported recurrence of an episode of activity-limiting LBP. The secondary outcomes will include: days to first self-reported recurrence of an episode of LBP, days to first self-reported recurrence of an episode of LBP leading to care seeking, and the impact of LBP over a 12-month period. All participants will be followed up monthly for a minimum of 12 months or until they have a recurrence of activity-limiting LBP. All participants will also be followed up at 3, 6, 9, and 12 months to assess the impact of back pain, physical activity levels, study program adherence, credibility, and adverse events. Participants and therapists will not be masked to the interventions. To our knowledge, this will be the first large, high-quality randomized controlled trial investigating the effectiveness of a McKenzie method-based self-management approach for preventing recurrences of LBP. If this approach is found to be effective, it will offer a low-cost, simple method for reducing the personal and societal burdens of LBP. © 2017 American Physical Therapy Association

  20. Three-dimensional discrete element method simulation of core disking

    Science.gov (United States)

    Wu, Shunchuan; Wu, Haoyan; Kemeny, John

    2018-04-01

    The phenomenon of core disking is commonly seen in deep drilling of highly stressed regions in the Earth's crust. Given its close relationship with the in situ stress state, the presence and features of core disking can be used to interpret the stresses when traditional in situ stress measuring techniques are not available. The core disking process was simulated in this paper using the three-dimensional discrete element method software PFC3D (particle flow code). In particular, PFC3D is used to examine the evolution of fracture initiation, propagation and coalescence associated with core disking under various stress states. In this paper, four unresolved problems concerning core disking are investigated with a series of numerical simulations. These simulations also provide some verification of existing results by other researchers: (1) Core disking occurs when the maximum principal stress is about 6.5 times the tensile strength. (2) For most stress situations, core disking occurs from the outer surface, except for the thrust faulting stress regime, where the fractures were found to initiate from the inner part. (3) The anisotropy of the two horizontal principal stresses has an effect on the core disking morphology. (4) The thickness of the core disks increases with the radial stress and decreases with the axial stress.

  1. Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    KAUST Repository

    Saad, Ahmed Mohamed; Kadoura, Ahmad Salim; Sun, Shuyu

    2016-01-01

    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation

  2. Simulation of ecological processes using response functions method

    International Nuclear Information System (INIS)

    Malkina-Pykh, I.G.; Pykh, Yu. A.

    1998-01-01

    The article describes further development and applications of the already well-known method of response functions (MRF). The method is used as a basis for the development of mathematical models of a wide set of ecological processes. The model of radioactive contamination of ecosystems is chosen as an example. The mathematical model was elaborated for the description of ⁹⁰Sr dynamics in the elementary ecosystems of various geographical zones. The model includes blocks corresponding to the main units of any elementary ecosystem: lower atmosphere, soil, vegetation, and surface water. The model parameters were evaluated using a wide set of experimental data. A set of computer simulations was performed to demonstrate the model's suitability for ecological forecasting

  3. Simulation of bubble motion under gravity by lattice Boltzmann method

    International Nuclear Information System (INIS)

    Takada, Naoki; Misawa, Masaki; Tomiyama, Akio; Hosokawa, Shigeo

    2001-01-01

    We describe numerical simulation results of bubble motion under gravity by the lattice Boltzmann method (LBM), which assumes that a fluid consists of mesoscopic fluid particles repeating collision and translation, and that a multiphase interface is reproduced in a self-organizing way by repulsive interaction between different kinds of particles. The purposes of this study are to examine the applicability of LBM to the numerical analysis of bubble motions, and to develop a three-dimensional version of the binary fluid model that introduces a free energy function. We included buoyancy terms due to the density difference in the lattice Boltzmann equations, and simulated single- and two-bubble motions, setting flow conditions according to the Eötvös and Morton numbers. The two-dimensional results by LBM agree with those by the Volume of Fluid method based on the Navier-Stokes equations. The three-dimensional model possesses a surface tension satisfying Laplace's law, and reproduces the motion of a single bubble and the two-bubble interaction of approach and coalescence in a circular tube. These results prove that the buoyancy terms and the 3D model proposed here are suitable, and that LBM is useful for the numerical analysis of bubble motion under gravity. (author)
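    The collision-streaming cycle at the core of any LBM solver can be sketched compactly. Below is a minimal single-phase D2Q9 BGK example in Python, not the binary free-energy model of the paper: the interparticle repulsion and surface-tension terms are omitted, the gravity forcing is a simplified velocity-shift scheme, and all names are illustrative.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """Second-order Maxwellian expansion on the D2Q9 lattice."""
    cu = np.einsum('ai,xyi->xya', c, u)
    usq = np.einsum('xyi,xyi->xy', u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def lbm_step(f, tau=0.8, g=(0.0, -1e-5)):
    rho = f.sum(axis=-1)
    u = np.einsum('xya,ai->xyi', f, c) / rho[..., None]
    u = u + 0.5 * np.array(g)                 # crude body-force (buoyancy) shift
    f = f + (equilibrium(rho, u) - f) / tau   # BGK collision
    for a in range(9):                        # streaming, periodic boundaries
        f[..., a] = np.roll(f[..., a], shift=c[a], axis=(0, 1))
    return f

# a uniform fluid at rest: collision and streaming conserve mass exactly
f = equilibrium(np.ones((16, 16)), np.zeros((16, 16, 2)))
m0 = f.sum()
for _ in range(10):
    f = lbm_step(f)
assert abs(f.sum() - m0) < 1e-9
```

    Mass conservation is the natural first check here, since both the BGK collision and periodic streaming preserve the zeroth velocity moment by construction.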

  4. Research methods of simulate digital compensators and autonomous control systems

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2016-01-01

    Full Text Available A feature of the present stage of production development is the need to control and regulate a large number of mutually interacting process parameters; with single-loop systems this interaction significantly degrades the quality of the transient response, resulting in substantial losses of raw materials and energy and in reduced product quality. Using an autonomous (non-interacting) digital control system eliminates the cross-coupling of technological parameters, gives the system the desired dynamic and static properties, and improves the quality of regulation. However, the complexity of configuring and implementing (modeling) the compensators of autonomous systems of this type, associated with the need to perform a significant amount of complex analytic transformation, substantially limits their application. In this regard, an approach based on decomposition is proposed for their calculation and simulation (realization), consisting in representing the elements of the autonomous part of the digital control system as series and parallel connections. The theoretical study is carried out in a general form for systems of any dimension. The results of computational experiments obtained during the simulation of four autonomous control systems are presented, along with a comparative analysis and conclusions on the effectiveness of each method. The results can be used in the development of multivariable process control systems.

  5. Atomistic Monte Carlo simulation of lipid membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches...

  6. Verification of SuperMC with ITER C-Lite neutronic model

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Shu [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui, 230027 (China); Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); Yu, Shengpeng [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China); He, Peng, E-mail: peng.he@fds.org.cn [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui, 230031 (China)

    2016-12-15

    Highlights: • Verification of the SuperMC Monte Carlo transport code with the ITER C-Lite model. • The modeling of the ITER C-Lite model using the latest SuperMC/MCAM. • All the calculated quantities are in good agreement with MCNP. • Efficient variance reduction methods are adopted to accelerate the calculation. - Abstract: In pursuit of accurate and high-fidelity simulation, the reference model of ITER is becoming more and more detailed and complicated. Due to the complexity in geometry and the thick shielding of the reference model, accurate modeling and precise simulation of fusion neutronics are very challenging. Facing these difficulties, SuperMC, the Monte Carlo simulation software system developed by the FDS Team, has optimized its CAD interface for the automatic conversion of more complicated models and increased its calculation efficiency with advanced variance reduction methods. To demonstrate its capabilities of automatic modeling, coupled neutron/photon simulation and visual analysis for the ITER facility, numerical benchmarks using the ITER C-Lite neutronic model were performed. The nuclear heating in the divertor and inboard toroidal field (TF) coils and a global neutron flux map were evaluated. All the calculated nuclear heating is compared with the results of the MCNP code, and good consistency between the two codes is shown. Using the global variance reduction methods in SuperMC, the average speed-up is 292 times for the calculation of inboard TF coil nuclear heating, and 91 times for the calculation of the global flux map, compared with the analog run. These tests have shown that SuperMC is suitable for the design and analysis of the ITER facility.

  7. Application of the direct simulation Monte Carlo method to nanoscale heat transfer between a soot particle and the surrounding gas

    International Nuclear Information System (INIS)

    Yang, M.; Liu, F.; Smallwood, G.J.

    2004-01-01

    The Laser-Induced Incandescence (LII) technique has been widely used to measure soot volume fraction and primary particle size in flames and engine exhaust. Currently there is a lack of quantitative understanding of the shielding effect of aggregated soot particles on their conduction heat loss rate to the surrounding gas. The conventional approach for this problem would be the application of the Monte Carlo (MC) method, which is based on simulating the trajectories of individual molecules and calculating the heat transfer at each of the molecule/molecule and molecule/particle collisions. As the first step toward calculating the heat transfer between a soot aggregate and the surrounding gas, the Direct Simulation Monte Carlo (DSMC) method was used in this study to calculate the heat transfer rate between a single spherical aerosol particle and its cooler surrounding gas under different conditions of temperature, pressure, and accommodation coefficient. A well-defined and simple hard-sphere model was adopted to describe molecule/molecule elastic collisions. A combination of the specular reflection and completely diffuse reflection models was used to treat molecule/particle collisions. The results obtained by DSMC are in good agreement with the known analytical solution for the heat transfer rate of an isolated, motionless sphere in the free-molecular regime. Further, the DSMC method was applied to calculate the heat transfer in the transition regime. Our present DSMC results agree very well with published DSMC data. (author)
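    A basic ingredient of such DSMC calculations is sampling equilibrium gas molecules at a surface. A classical kinetic-theory sanity check, sketched below in Python under the equilibrium assumptions discussed above (all names are our own), is that molecules crossing a surface are flux-weighted and therefore carry a mean translational energy of 2kT, rather than the bulk value of (3/2)kT.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0  # work in units where kT/m = 1

N = 200_000
# equilibrium Maxwellian velocity samples (unit mass)
v = rng.normal(0.0, np.sqrt(kT), size=(N, 3))
# molecules moving toward the surface (vz < 0), weighted by their flux |vz|
toward = v[:, 2] < 0
vt = v[toward]
wgt = -vt[:, 2]
energy = 0.5 * (vt**2).sum(axis=1)
mean_e = (wgt * energy).sum() / wgt.sum()

# flux-weighted mean translational energy is 2 kT (vs 3/2 kT in the bulk)
assert abs(mean_e - 2.0 * kT) < 0.02
```

    The extra kT/2 comes from the |vz| weighting: faster molecules strike the surface more often, which is exactly the bias a DSMC surface-interaction model must reproduce.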

  8. SU-E-T-112: An OpenCL-Based Cross-Platform Monte Carlo Dose Engine (oclMC) for Coupled Photon-Electron Transport

    International Nuclear Information System (INIS)

    Tian, Z; Shi, F; Folkerts, M; Qin, N; Jiang, S; Jia, X

    2015-01-01

    Purpose: Low computational efficiency of Monte Carlo (MC) dose calculation impedes its clinical applications. Although a number of MC dose packages have been developed over the past few years, enabling fast MC dose calculations, most of these packages were developed under NVidia’s CUDA environment. This limited their code portability to other platforms, hindering the introduction of GPU-based MC dose engines to clinical practice. To solve this problem, we developed a cross-platform fast MC dose engine named oclMC under OpenCL environment for external photon and electron radiotherapy. Methods: Coupled photon-electron simulation was implemented with standard analogue simulation scheme for photon transport and Class II condensed history scheme for electron transport. We tested the accuracy and efficiency of oclMC by comparing the doses calculated using oclMC and gDPM, a previously developed GPU-based MC code on NVidia GPU platform, for a 15MeV electron beam and a 6MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. We also tested code portability of oclMC on different devices, including an NVidia GPU, two AMD GPUs and an Intel CPU. Results: Satisfactory agreements were observed in all photon and electron cases, with ∼0.48%–0.53% average dose differences at regions within 10% isodose line for electron beam cases and ∼0.15%–0.17% for photon beam cases. It took oclMC 3–4 sec to perform transport simulation for electron beam on NVidia Titan GPU and 35–51 sec for photon beam, both with ∼0.5% statistical uncertainty. The computation was 6%–17% slower than gDPM due to the differences in both physics model and development environment, which is considered not significant for clinical applications. In terms of code portability, gDPM only runs on NVidia GPUs, while oclMC successfully runs on all the tested devices. Conclusion: oclMC is an accurate and fast MC dose engine. Its high cross

  9. SU-E-T-112: An OpenCL-Based Cross-Platform Monte Carlo Dose Engine (oclMC) for Coupled Photon-Electron Transport

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Z; Shi, F; Folkerts, M; Qin, N; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2015-06-15

    Purpose: Low computational efficiency of Monte Carlo (MC) dose calculation impedes its clinical applications. Although a number of MC dose packages have been developed over the past few years, enabling fast MC dose calculations, most of these packages were developed under NVidia’s CUDA environment. This limited their code portability to other platforms, hindering the introduction of GPU-based MC dose engines to clinical practice. To solve this problem, we developed a cross-platform fast MC dose engine named oclMC under OpenCL environment for external photon and electron radiotherapy. Methods: Coupled photon-electron simulation was implemented with standard analogue simulation scheme for photon transport and Class II condensed history scheme for electron transport. We tested the accuracy and efficiency of oclMC by comparing the doses calculated using oclMC and gDPM, a previously developed GPU-based MC code on NVidia GPU platform, for a 15MeV electron beam and a 6MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. We also tested code portability of oclMC on different devices, including an NVidia GPU, two AMD GPUs and an Intel CPU. Results: Satisfactory agreements were observed in all photon and electron cases, with ∼0.48%–0.53% average dose differences at regions within 10% isodose line for electron beam cases and ∼0.15%–0.17% for photon beam cases. It took oclMC 3–4 sec to perform transport simulation for electron beam on NVidia Titan GPU and 35–51 sec for photon beam, both with ∼0.5% statistical uncertainty. The computation was 6%–17% slower than gDPM due to the differences in both physics model and development environment, which is considered not significant for clinical applications. In terms of code portability, gDPM only runs on NVidia GPUs, while oclMC successfully runs on all the tested devices. Conclusion: oclMC is an accurate and fast MC dose engine. Its high cross

  10. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  11. A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation

    DEFF Research Database (Denmark)

    Breton, Simon-Philippe; Sumner, J.; Sørensen, Jens Nørkær

    2017-01-01

    Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple......

  12. Is McMurray's osteotomy obsolete?

    Directory of Open Access Journals (Sweden)

    Phaltankar P

    1995-10-01

    Full Text Available A review of the operative technique, advantages, and disadvantages of McMurray's displacement osteotomy in the treatment of nonunion of transcervical fracture of the neck of the femur with a viable femoral head was carried out in this study of ten cases, in view of the abandonment of the procedure in favour of angulation osteotomy. The good results obtained in this series attest to the usefulness of McMurray's osteotomy in the difficult problem of nonunion of transcervical fracture of the neck of the femur in well-selected cases, with certain advantages over angulation osteotomy due to the 'armchair effect'.

  13. A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition

    International Nuclear Information System (INIS)

    Zheng Zheming; Stephens, Ryan M.; Braatz, Richard D.; Alkire, Richard C.; Petzold, Linda R.

    2008-01-01

    A hybrid multiscale kinetic Monte Carlo (HMKMC) method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects of additives. Chemical reactions are simulated by an accelerated (tau-leaping) method for discrete stochastic simulation which adaptively selects exact discrete stochastic simulation for the appropriate reaction whenever that is necessary. The HMKMC method is seen to be accurate and highly efficient
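    The tau-leaping idea mentioned above can be stated in a few lines: over a leap τ, the number of firings of a reaction channel is drawn from a Poisson distribution with mean a·τ, where a is the current propensity. A minimal, hypothetical sketch for a single irreversible decay A → B (not the copper-electrodeposition chemistry of the paper; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_leap_decay(n0=1000, k=1.0, t_end=1.0, tau=0.01):
    """Tau-leaping for the irreversible reaction A -> B with rate constant k."""
    n, t = n0, 0.0
    while t < t_end:
        a = k * n                      # propensity of the single channel
        fired = rng.poisson(a * tau)   # number of firings in this leap
        n = max(n - fired, 0)          # leaps can overshoot; clamp at zero
        t += tau
    return n

runs = np.array([tau_leap_decay() for _ in range(200)])
# the ensemble mean should track the exact mean n0 * exp(-k t)
expected = 1000 * np.exp(-1.0)
assert abs(runs.mean() - expected) < 15
```

    A full hybrid scheme, as in the abstract, would additionally fall back to exact stochastic simulation for channels whose populations are too small for the Poisson approximation to hold.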

  14. A calculation method for RF couplers design based on numerical simulation by microwave studio

    International Nuclear Information System (INIS)

    Wang Rong; Pei Yuanji; Jin Kai

    2006-01-01

    A numerical simulation method for coupler design is proposed. It is based on the matching procedure for the 2π/3 structure given by Dr. R.L. Kyhl. The Microwave Studio EigenMode Solver is used for the numerical simulation. The simulation of a coupler has been completed with this method, and the simulated data are compared with experimental measurements. The results show that this numerical simulation method is feasible for coupler design. (authors)

  15. Nancy McCormick Rambusch: A Reflection

    Science.gov (United States)

    Povell, Phyllis

    2005-01-01

    Fall of 2005 marks the 12th anniversary of Nancy McCormick Rambusch's death. As the founder of the American Montessori Society and as its first president, Rambusch reintroduced Maria Montessori to America at a time--1960--when education for the young was floundering, and a second look at the Montessori method, which had changed the early childhood…

  16. McDonald's Recipe for Success

    Science.gov (United States)

    Weinstein, Margery

    2012-01-01

    Who isn't familiar with McDonald's? Its golden arches are among the most recognizable brand icons in the U.S. What many are less familiar with is the methodical and distinguished learning and development that supports that brand. Training that begins by preparing employees to serve customers at the counter, and extends to programs that help…

  17. Adaptive and dynamic meshing methods for numerical simulations

    Science.gov (United States)

    Acikgoz, Nazmiye

    -hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. 
In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations
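    The spring-analogy deformation described above can be sketched with plain edge springs of unit stiffness, omitting the virtual springs and the preconditioned conjugate gradient solver of the full method: free vertices relax to the average of their neighbors while boundary vertices carry the imposed displacement. The function and data below are illustrative only.

```python
import numpy as np

def spring_deform(nodes, edges, fixed, disp, n_iter=500):
    """Edge-spring (Laplacian) mesh deformation with unit stiffness:
    each free node relaxes to the mean of its neighbors while fixed
    nodes carry imposed displacements. Jacobi-style relaxation."""
    x = nodes.astype(float).copy()
    for i, d in zip(fixed, disp):
        x[i] = nodes[i] + d
    nbrs = {i: [] for i in range(len(nodes))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    free = [i for i in range(len(nodes)) if i not in set(fixed)]
    for _ in range(n_iter):
        x_new = x.copy()
        for i in free:
            x_new[i] = x[nbrs[i]].mean(axis=0)   # force balance of unit springs
        x = x_new
    return x

# 1D chain 0-1-2-3-4: move the right end by +1; interior nodes settle
# onto the linear interpolant of the boundary motion
nodes = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
x = spring_deform(nodes, edges, fixed=[0, 4], disp=[[0.0], [1.0]])
assert np.allclose(x[:, 0], [0.0, 1.25, 2.5, 3.75, 5.0], atol=1e-6)
```

    The virtual springs of the full method would add extra terms to each force balance that penalize a vertex approaching the boundary of its ball, which is what prevents element collapse under large displacements.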

  18. Applying Simulation Method in Formulation of Gluten-Free Cookies

    Directory of Open Access Journals (Sweden)

    Nikitina Marina

    2017-01-01

    Full Text Available At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Among these are gluten-free confectionery products intended for people with celiac disease. Gluten-free products are in demand among consumers, and there is a need to expand the assortment and improve quality indicators. This article presents the results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a method of simulating recipes for gluten-free confectionery products with a functional orientation, in order to optimize their chemical composition. The resulting products will diversify the diet and supply necessary nutrients for people with gluten intolerance, as well as for those who follow a gluten-free diet.

  19. The Multiscale Material Point Method for Simulating Transient Responses

    Science.gov (United States)

    Chen, Zhen; Su, Yu-Chen; Zhang, Hetao; Jiang, Shan; Sewell, Thomas

    2015-06-01

    To effectively simulate multiscale transient responses such as impact and penetration without invoking master/slave treatment, the multiscale material point method (Multi-MPM) is being developed in which molecular dynamics at nanoscale and dissipative particle dynamics at mesoscale might be concurrently handled within the framework of the original MPM at microscale (continuum level). The proposed numerical scheme for concurrently linking different scales is described in this paper with simple examples for demonstration. It is shown from the preliminary study that the mapping and re-mapping procedure used in the original MPM could coarse-grain the information at fine scale and that the proposed interfacial scheme could provide a smooth link between different scales. Since the original MPM is an extension from computational fluid dynamics to solid dynamics, the proposed Multi-MPM might also become robust for dealing with multiphase interactions involving failure evolution. This work is supported in part by DTRA and NSFC.

  20. Numerical Simulation of Antennas with Improved Integral Equation Method

    International Nuclear Information System (INIS)

    Ma Ji; Fang Guang-You; Lu Wei

    2015-01-01

    Simulating antennas around a conducting object is a challenging task in computational electromagnetics, which is concerned with the behaviour of electromagnetic fields. To analyze this model efficiently, an improved integral equation-fast Fourier transform (IE-FFT) algorithm is presented in this paper. The proposed scheme employs two Cartesian grids with different sizes and locations to enclose the antenna and the other object, respectively. On the one hand, the IE-FFT technique is used to store the matrix in a sparse form and accelerate the matrix-vector multiplication for each sub-domain independently. On the other hand, the mutual interaction between sub-domains is taken as an additional exciting voltage in each matrix equation. By updating the integral equations several times, the whole electromagnetic system reaches a stable state. Finally, the validity of the presented method is verified through the analysis of typical antennas in the presence of a conducting object. (paper)
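    The FFT acceleration at the heart of IE-FFT exploits the fact that, once the unknowns are interpolated onto a uniform Cartesian grid, the translation-invariant Green's-function interaction matrix is Toeplitz, so its matrix-vector product costs O(N log N) via circulant embedding. Below is a 1D toy sketch with a hypothetical smoothed kernel; the interpolation step and the actual electromagnetic kernel are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
# hypothetical smoothed 1/r-type kernel sampled on a uniform grid;
# g[m] is the interaction between points separated by m cells
g = 1.0 / (np.arange(N) + 0.5)
# dense symmetric Toeplitz interaction matrix (the O(N^2) reference)
A = np.array([[g[abs(i - j)] for j in range(N)] for i in range(N)])
x = rng.standard_normal(N)

# embed the Toeplitz matrix in a circulant of size 2N and multiply via FFT
c_col = np.concatenate([g, [0.0], g[:0:-1]])
y_fft = np.fft.ifft(np.fft.fft(c_col) * np.fft.fft(x, 2 * N)).real[:N]

assert np.allclose(A @ x, y_fft, atol=1e-10)
```

    Storing only the first column of the kernel instead of the full matrix is also what gives the sparse storage mentioned in the abstract.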

  1. Optimized Design of Spacer in Electrodialyzer Using CFD Simulation Method

    Science.gov (United States)

    Jia, Yuxiang; Yan, Chunsheng; Chen, Lijun; Hu, Yangdong

    2018-06-01

    In this study, the effects of the length-width ratio and diversion trench of the spacer on the fluid flow behavior in an electrodialyzer have been investigated through a CFD simulation method. The relevant information, including the pressure drop, velocity vector distribution and shear stress distribution, demonstrates the importance of an optimized spacer design in an electrodialysis process. The results show that the width of the diversion trench has a greater effect on the fluid flow than its length. Increasing the diversion trench width strengthens the fluid flow but also increases the pressure drop. Secondly, the dead zone of the fluid flow decreases with increasing length-width ratio of the spacer, while the pressure drop increases with it. The length-width ratio of the spacer should therefore be moderate.

  2. Study of Flapping Flight Using Discrete Vortex Method Based Simulations

    Science.gov (United States)

    Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.

    2013-12-01

    In recent times, research in the area of flapping flight has attracted renewed interest with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For sustained, high-endurance flight with a larger payload-carrying capacity, we need to identify simple and efficient flapping kinematics. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations for the study of flapping flight. Our results highlight that a simple flapping kinematics with a down-stroke period (tD) shorter than the upstroke period (tU) produces sustained lift. We have identified an optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.
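    In a discrete vortex method the flow is represented by point vortices whose induced velocities follow the 2D Biot-Savart law. A minimal, illustrative sketch (not the authors' code; names are our own): the counter-rotating pair shed during a stroke self-advects at speed Γ/(2πd), here oriented so the induced velocity at each vortex points in the -x direction.

```python
import numpy as np

def induced_velocity(z_eval, z_vortex, gamma):
    """Velocity (u + i v) induced at complex point z_eval by 2D point
    vortices at z_vortex with circulations gamma (positive = counterclockwise)."""
    w = np.zeros_like(np.atleast_1d(z_eval), dtype=complex)
    for zv, g in zip(z_vortex, gamma):
        w += -1j * g / (2 * np.pi * (z_eval - zv))  # complex velocity u - i v
    return np.conj(w)                               # convert to u + i v

# a counter-rotating pair (a minimal model of the wake shed by a flapping
# stroke): each vortex is advected by the other at speed G / (2 pi d)
d, G = 1.0, 2.0
zs = np.array([0.0 + 0.0j, 0.0 + 1.0j * d])   # pair separated by d along y
gam = np.array([G, -G])
v_at_0 = induced_velocity(zs[0], [zs[1]], [gam[1]])
assert np.allclose(v_at_0, -G / (2 * np.pi * d))
```

    A DVM time-stepper repeats exactly this evaluation for every shed vortex and advects them with the resulting velocities; lift then follows from the rate of change of the vortex impulse.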

  3. Simulation of galvanic corrosion using boundary element method

    International Nuclear Information System (INIS)

    Zaifol Samsu; Muhamad Daud; Siti Radiah Mohd Kamaruddin; Nur Ubaidah Saidin; Abdul Aziz Mohamed; Mohd Saari Ripin; Rusni Rejab; Mohd Shariff Sattar

    2011-01-01

    The boundary element method (BEM) is a numerical technique used for modeling infinite domains, as is the case in galvanic corrosion analysis. The use of the boundary element analysis system (BEASY) has allowed cathodic protection (CP) interference to be assessed in terms of the normal current density, which is directly proportional to the corrosion rate. This paper presents an analysis of the galvanic corrosion between aluminium and carbon steel in natural sea water. The experimental results were validated against computer simulations with the BEASY program. It can be concluded that the BEASY software is a very helpful tool for planning before installing any structure, as it predicts the possible CP interference on any nearby unprotected metallic structure. (Author)

  4. An experiment teaching method based on the Optisystem simulation platform

    Science.gov (United States)

    Zhu, Jihua; Xiao, Xuanlu; Luo, Yuan

    2017-08-01

    Experiment teaching of optical communication systems is difficult to realize because of expensive equipment. OptiSystem, an optical communication system design software package, is able to provide such a simulation platform. Based on the characteristics of OptiSystem, an approach to experiment teaching is put forward in this paper. It includes three gradual levels: the basics, the deeper looks, and the practices. First, the basics give a brief overview of the technology; then the deeper looks include demos and example analyses; lastly, the practices proceed through team seminars and comments. A variety of teaching forms are implemented in class. Practice proves that this method can not only make up for the lack of a laboratory but also motivate the students' interest in learning and improve their practical abilities, cooperation abilities and creative spirit. On the whole, it greatly improves the teaching effect.

  5. A Finite Element Method for Simulation of Compressible Cavitating Flows

    Science.gov (United States)

    Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad

    2016-11-01

    This work focuses on a novel approach for finite element simulations of multi-phase flows which involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including compressibility of the vapor phase and interface physics driven by mass, momentum and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motion. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh, while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are thermodynamically modeled to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge the support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.

  6. Steam generator tube rupture simulation using extended finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Mohanty, Subhasish, E-mail: smohanty@anl.gov; Majumdar, Saurin; Natesan, Ken

    2016-08-15

    Highlights: • Extended finite element method used for modeling the steam generator tube rupture. • Crack propagation is modeled in an arbitrary solution dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident condition. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of extended finite element method capability of commercially available ABAQUS software, to model SG tubes with preexisting flaws and to estimate their rupture pressures. For the purpose, elastic–plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between extended finite element model results and experimental results.

  7. Precision of an FDTD method to simulate cold magnetized plasmas

    International Nuclear Information System (INIS)

    Pavlenko, I.V.; Melnyk, D.A.; Prokaieva, A.O.; Girka, I.O.

    2014-01-01

    The finite difference time domain (FDTD) method is applied to describe the propagation of transverse electromagnetic waves through magnetized plasmas. The numerical dispersion relation is obtained in a cold plasma approximation. The accuracy of the numerical dispersion is calculated as a function of the frequency of the launched wave and the time step of the numerical grid. It is shown that the numerical method does not reproduce the analytical results near the plasma resonances for any chosen value of the time step if there is no dissipation mechanism in the system. This means that the FDTD method cannot be applied straightforwardly to simulate problems in which the plasma resonances play a key role (for example, mode conversion problems). The accuracy of the numerical scheme can, however, be improved by introducing some artificial damping of the plasma currents. Although part of the wave power is then lost in the system, the numerical scheme describes the wave processes in agreement with analytical predictions.
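
    The artificial-damping remedy described above can be sketched in a one-dimensional FDTD update, where the plasma response enters through a current density J with an added collision-like damping rate ν. This is a minimal illustrative sketch in normalized units (c = ε0 = 1); the grid, source and plasma profile are assumptions, not the authors' setup.

```python
import numpy as np

def fdtd_cold_plasma_1d(nsteps=2000, nz=200, nu=0.0):
    """1D FDTD for a transverse wave in unmagnetized cold plasma.

    The plasma response is carried by a current density J obeying
        dJ/dt = wp^2 * E - nu * J,
    where nu is an artificial damping rate of the plasma current, added
    to tame the behavior near plasma resonances as the abstract suggests.
    """
    dz = 1.0
    dt = 0.5 * dz                       # Courant-stable time step
    wp2 = np.full(nz, 0.04)             # squared plasma frequency profile
    E = np.zeros(nz)
    B = np.zeros(nz)
    J = np.zeros(nz)
    for n in range(nsteps):
        B[:-1] -= dt * (E[1:] - E[:-1]) / dz            # Faraday's law
        E[1:] -= dt * ((B[1:] - B[:-1]) / dz + J[1:])   # Ampere's law with plasma current
        E[nz // 2] += dt * np.sin(0.3 * n * dt)         # soft source above wp
        # semi-implicit current update with artificial damping rate nu
        J[:] = (J * (1 - 0.5 * nu * dt) + dt * wp2 * E) / (1 + 0.5 * nu * dt)
    return E

E = fdtd_cold_plasma_1d(nu=0.05)
print(np.max(np.abs(E)))
```

    With ν = 0 this is the standard lossless cold-plasma FDTD scheme; a small positive ν absorbs part of the wave power, which is the price the abstract notes for restoring agreement with the analytical predictions near resonance.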

  8. Steam generator tube rupture simulation using extended finite element method

    International Nuclear Information System (INIS)

    Mohanty, Subhasish; Majumdar, Saurin; Natesan, Ken

    2016-01-01

    Highlights: • Extended finite element method used for modeling the steam generator tube rupture. • Crack propagation is modeled in an arbitrary solution dependent path. • The FE model is used for estimating the rupture pressure of steam generator tubes. • Crack coalescence modeling is also demonstrated. • The method can be used for crack modeling of tubes under severe accident condition. - Abstract: A steam generator (SG) is an important component of any pressurized water reactor. Steam generator tubes represent a primary pressure boundary whose integrity is vital to the safe operation of the reactor. SG tubes may rupture due to propagation of a crack created by mechanisms such as stress corrosion cracking, fatigue, etc. It is thus important to estimate the rupture pressures of cracked tubes for structural integrity evaluation of SGs. The objective of the present paper is to demonstrate the use of extended finite element method capability of commercially available ABAQUS software, to model SG tubes with preexisting flaws and to estimate their rupture pressures. For the purpose, elastic–plastic finite element models were developed for different SG tubes made from Alloy 600 material. The simulation results were compared with experimental results available from the steam generator tube integrity program (SGTIP) sponsored by the United States Nuclear Regulatory Commission (NRC) and conducted at Argonne National Laboratory (ANL). A reasonable correlation was found between extended finite element model results and experimental results.

  9. The McKenzie method compared with manipulation when used adjunctive to information and advice in low back pain patients presenting with centralization or peripheralization. A randomized controlled trial

    DEFF Research Database (Denmark)

    Petersen, Tom; Larsen, Kristian; Nordsteen, Jan

    2011-01-01

    Methods. A total of 350 patients suffering from low back pain of more than 6 weeks' duration, who presented with centralization or peripheralization of symptoms with or without signs of nerve root involvement, were enrolled in the trial. The main outcome was the number of patients with treatment success defined… a structured exercise programme tailored to the individual patient as well as manual therapy for the treatment of persistent low back pain. There is presently insufficient evidence to recommend the use of specific decision methods tailoring specific therapies to clinical subgroups of patients in primary care… for more than six weeks presenting with centralization or peripheralization of symptoms, we found the McKenzie method to be slightly more effective than manipulation when used adjunctive to information and advice.

  10. Comparison of Kato-Katz thick-smear and McMaster egg counting method for the assessment of drug efficacy against soil-transmitted helminthiasis in school children in Jimma Town, Ethiopia.

    Science.gov (United States)

    Bekana, Teshome; Mekonnen, Zeleke; Zeynudin, Ahmed; Ayana, Mio; Getachew, Mestawet; Vercruysse, Jozef; Levecke, Bruno

    2015-10-01

    There is a paucity of studies that compare efficacy of drugs obtained by different diagnostic methods. We compared the efficacy of a single oral dose albendazole (400 mg), measured as egg reduction rate, against soil-transmitted helminth infections in 210 school children (Jimma Town, Ethiopia) using both Kato-Katz thick smear and McMaster egg counting method. Our results indicate that differences in sensitivity and faecal egg counts did not imply a significant difference in egg reduction rate estimates. The choice of a diagnostic method to assess drug efficacy should not be based on sensitivity and faecal egg counts only.
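
    Drug efficacy here is expressed as the egg reduction rate (ERR), computed from mean faecal egg counts before and after treatment. A minimal sketch with hypothetical counts (not the study's data):

```python
def egg_reduction_rate(pre_counts, post_counts):
    """Egg reduction rate as commonly used to express anthelmintic efficacy:
    ERR = 100 * (1 - mean EPG at follow-up / mean EPG at baseline).
    Counts are eggs per gram of stool (EPG)."""
    pre = sum(pre_counts) / len(pre_counts)
    post = sum(post_counts) / len(post_counts)
    return 100.0 * (1.0 - post / pre)

pre = [480, 960, 240, 720]    # hypothetical baseline EPG per child
post = [24, 48, 0, 48]        # hypothetical follow-up EPG per child
print(egg_reduction_rate(pre, post))
```

    The abstract's point is that two diagnostic methods with different sensitivities and raw counts can still yield statistically indistinguishable ERR estimates, since the ratio of means largely cancels method-dependent scaling.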

  11. The effect of glycerin solution density and viscosity on vibration amplitude of oblique different piezoelectric MC near the surface in 3D modeling

    Science.gov (United States)

    Korayem, A. H.; Abdi, M.; Korayem, M. H.

    2018-06-01

    Surface topography at the nanoscale is one of the most important applications of AFM. Analysis of the vibration behavior of piezoelectric microcantilevers is essential to improving AFM performance. One suitable approach to simulating the dynamic behavior of a microcantilever (MC) is numerical solution by FEM in a 3D model using COMSOL software. The present study simulates different geometries of four-layered AFM piezoelectric MCs in 2D and 3D models in a liquid medium using COMSOL. The 3D simulation was carried out in a spherical container using an FSI domain in COMSOL. In the 2D model, the governing equation of motion was derived by applying Hamilton's principle based on Euler-Bernoulli beam theory and discretized with FEM. In this mode, the hydrodynamic force was approximated by a string of spheres, and its effect, along with the squeezed-film force, was included in the MC equations. The effect of fluid density and viscosity on the vibrations of an MC immersed in different glycerin solutions was investigated in the 2D and 3D models, and the results were compared with experimental results. The frequencies and time responses of the MC close to the surface were obtained considering tip-sample forces. The surface topography obtained with different MC geometries was compared in the liquid medium, in both tapping and non-contact modes. Various types of surface roughness were considered in the topography for the different MC geometries, and the effect of geometric dimensions on the surface topography was investigated. In a liquid medium, the MC is mounted at an oblique angle to avoid damage due to the squeezed-film force in the vicinity of the surface. Finally, the effect of the MC's angle on the surface topography and the time response of the system was investigated.

  12. A Monte Carlo method and finite volume method coupled optical simulation method for parabolic trough solar collectors

    International Nuclear Information System (INIS)

    Liang, Hongbo; Fan, Man; You, Shijun; Zheng, Wandong; Zhang, Huan; Ye, Tianzhen; Zheng, Xuejing

    2017-01-01

    Highlights: •Four optical models for parabolic trough solar collectors were compared in detail. •Characteristics of the Monte Carlo Method and the Finite Volume Method were discussed. •A novel method was presented combining the advantages of different models. •The method is suited to optical analysis of collectors with different geometries. •A new kind of cavity receiver was simulated with the novel method. -- Abstract: The PTC (parabolic trough solar collector) is widely used for space heating, heat-driven refrigeration, solar power, etc. Concentrated solar radiation is the only energy source for a PTC, so its optical performance significantly affects the collector efficiency. In this study, four different optical models were constructed, validated and compared in detail. On this basis, a novel coupled method was presented by combining the advantages of these models, suited to carrying out large numbers of optical simulations of collectors with different geometrical parameters rapidly and accurately. Based on these simulation results, the optimal configuration of a collector with the highest efficiency can be determined; this method is therefore useful for collector optimization and design. In the four models, MCM (Monte Carlo Method) and FVM (Finite Volume Method) were used to initialize the photon distribution, while CPEM (Change Photon Energy Method) and MCM were adopted to describe the processes of reflection, transmission and absorption. For simulating reflection, transmission and absorption, CPEM was more efficient than MCM, so it was utilized in the coupled method. For photon distribution initialization, FVM saved running time and computational effort, whereas it needed a suitable grid configuration. MCM only required a total number of rays for simulation, whereas it needed higher computing cost and its results fluctuated in multiple runs. In the novel coupled method, the grid configuration for FVM was optimized according to the “true values” from MCM of
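
    The role MCM plays in initializing the photon distribution can be illustrated with a toy 2D Monte Carlo ray trace of a trough: ray positions are sampled uniformly over the aperture, reflected off the parabola with a Gaussian slope error, and counted when they intercept the receiver at the focus. The geometry, error model and all constants below are illustrative assumptions, not the paper's models.

```python
import math, random

def intercept_factor(n_rays=20000, f=1.0, aperture=2.0,
                     receiver_radius=0.02, slope_sigma=0.002, seed=1):
    """Fraction of vertical rays reaching the receiver of a 2D parabolic
    trough y = x^2/(4f) whose mirror slope carries Gaussian errors."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        x = rng.uniform(-aperture / 2, aperture / 2)   # MC ray position on aperture
        y = x * x / (4 * f)
        # perturb the local surface angle by a random slope error
        phi = math.atan(x / (2 * f)) + rng.gauss(0.0, slope_sigma)
        m = math.tan(phi)
        rx = -2 * m / (1 + m * m)        # reflection of a vertical incoming ray
        ry = (1 - m * m) / (1 + m * m)   # (rx, ry) is a unit vector
        # perpendicular distance from the focus (0, f) to the reflected ray
        dx, dy = 0.0 - x, f - y
        miss = abs(dx * ry - dy * rx)
        if miss <= receiver_radius:
            hits += 1
    return hits / n_rays

print(intercept_factor())
```

    Running this with different seeds shows the run-to-run fluctuation of MCM results noted in the abstract; increasing `n_rays` reduces it at the cost of computing time.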

  13. Development and applications of Super Monte Carlo Simulation Program for Advanced Nuclear Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Y., E-mail: yican.wu@fds.org.cn [Inst. of Nuclear Energy Safety Technology, Hefei, Anhui (China)

    2015-07-01

    'Full text:' Super Monte Carlo Simulation Program for Advanced Nuclear Energy Systems (SuperMC) is a CAD-based Monte Carlo (MC) program for integrated simulation of nuclear system by making use of hybrid MC-deterministic method and advanced computer technologies. The main usability features are automatic modeling of geometry and physics, visualization and virtual simulation and cloud computing service. SuperMC 2.3, the latest version, can perform coupled neutron and photon transport calculation. SuperMC has been verified by more than 2000 benchmark models and experiments, and has been applied in tens of major nuclear projects, such as the nuclear design and analysis of International Thermonuclear Experimental Reactor (ITER) and China Lead-based reactor (CLEAR). Development and applications of SuperMC are introduced in this presentation. (author)

  14. Development and applications of Super Monte Carlo Simulation Program for Advanced Nuclear Energy Systems

    International Nuclear Information System (INIS)

    Wu, Y.

    2015-01-01

    'Full text:' Super Monte Carlo Simulation Program for Advanced Nuclear Energy Systems (SuperMC) is a CAD-based Monte Carlo (MC) program for integrated simulation of nuclear system by making use of hybrid MC-deterministic method and advanced computer technologies. The main usability features are automatic modeling of geometry and physics, visualization and virtual simulation and cloud computing service. SuperMC 2.3, the latest version, can perform coupled neutron and photon transport calculation. SuperMC has been verified by more than 2000 benchmark models and experiments, and has been applied in tens of major nuclear projects, such as the nuclear design and analysis of International Thermonuclear Experimental Reactor (ITER) and China Lead-based reactor (CLEAR). Development and applications of SuperMC are introduced in this presentation. (author)

  15. Application of Discriminant Analysis in Determining Purchase Decisions for McCafe Products (Case Study: McDonald's Jimbaran, Bali)

    Directory of Open Access Journals (Sweden)

    TRISNA RAMADHAN

    2018-02-01

    Full Text Available McDonald's is one of the most rapidly growing fast food companies. McDonald's continues to innovate to satisfy customers, and introduced a café concept under the name McCafe. Because of competition with other fast food restaurants, McDonald's needs to improve the quality of the McCafe products favored by customers. This research was therefore conducted to identify the indicators that best describe customer characteristics, using discriminant analysis. Discriminant analysis was used to classify customers into groups of loyal and non-loyal customers. The indicators that distinguished customers' decisions to buy McCafe Jimbaran products were affordable prices and locations easily accessible to customers. The resulting discriminant function classified the customers with an accuracy of 91.67 percent.
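
    As a sketch of the method, Fisher's linear discriminant can be computed directly from two labeled groups. The data below are synthetic stand-ins for survey scores on the two distinguishing indicators (price affordability, location accessibility); they are not the study's data, and the resulting accuracy is not the paper's 91.67 percent.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical survey scores: columns = (price affordability, accessibility)
loyal = rng.normal([4.0, 4.2], 0.5, size=(60, 2))
nonloyal = rng.normal([2.8, 3.0], 0.5, size=(60, 2))
X = np.vstack([loyal, nonloyal])
y = np.array([1] * 60 + [0] * 60)

# Fisher's linear discriminant: w = Sw^-1 (mu1 - mu0)
mu1, mu0 = loyal.mean(0), nonloyal.mean(0)
Sw = np.cov(loyal.T) * (len(loyal) - 1) + np.cov(nonloyal.T) * (len(nonloyal) - 1)
w = np.linalg.solve(Sw, mu1 - mu0)

# classify with the midpoint of the projected group means as cutoff
threshold = w @ (mu1 + mu0) / 2
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"classification accuracy: {accuracy:.2%}")
```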

  16. Simulation and Verification of Flow in Test Methods

    DEFF Research Database (Denmark)

    Thrane, Lars Nyholm; Szabo, Peter; Geiker, Mette Rica

    2005-01-01

    Simulations and experimental results of L-box and slump flow test of a self-compacting mortar and a self-compacting concrete are compared. The simulations are based on a single fluid approach and assume an ideal Bingham behavior. It is possible to simulate the experimental results of both tests...

  17. An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers

    OpenAIRE

    Chen, S.; Hanzo, L.

    2000-01-01

    An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.
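
    The core IS idea — simulate from a biased density so that rare errors occur often, then reweight each event by the likelihood ratio — can be sketched for a plain Gaussian tail probability standing in for a BER. The bias-vector design specific to the Bayesian DFE is not reproduced here; the mean shift below is an illustrative choice.

```python
import math, random

def tail_prob_importance_sampling(gamma=4.0, shift=4.0, n=20000, seed=7):
    """Estimate P(N(0,1) > gamma) by sampling from the biased density
    N(shift, 1) and weighting each hit by the likelihood ratio
    f(x)/g(x) = exp(-shift*x + shift^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)           # draw from the biased density
        if x > gamma:                        # rare event is now frequent
            total += math.exp(-shift * x + shift * shift / 2)
    return total / n

est = tail_prob_importance_sampling()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2))  # true P(N(0,1) > 4) ~ 3.2e-5
print(est, exact)
```

    A direct MC estimate of a ~3e-5 probability would need millions of samples for comparable accuracy; shifting the simulation density to the error region and reweighting achieves it with 20 000, which is the asymptotic-efficiency point of the abstract.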

  18. Numerical simulation for cracks detection using the finite elements method

    Directory of Open Access Journals (Sweden)

    S Bennoud

    2016-09-01

    Full Text Available Inspection means must allow parts to be checked both during initial construction and in service. Non-destructive testing (NDT) gathers the most widespread methods for detecting defects in a part or assessing the integrity of a structure. In advanced industries (aeronautics, aerospace, nuclear, …), assessing material damage is a key point in controlling the durability and reliability of parts and materials in service. In this context, it is necessary to quantify the damage and identify the different mechanisms responsible for its progress. It is therefore essential to characterize materials and identify the most sensitive indicators of damage in order to prevent failure and to use materials optimally. In this work, a simulation by the finite element method is carried out with the aim of calculating the electromagnetic interaction energy between the probe and the piece (with/without defect). From the calculated energy, we deduce the real and imaginary components of the impedance, which enable the characteristic parameters of a crack in various metallic parts to be determined.

  19. Application of the maximum entropy method to dynamical fermion simulations

    Science.gov (United States)

    Clowser, Jonathan

    This thesis presents results for spectral functions extracted from imaginary-time correlation functions obtained from Monte Carlo simulations using the Maximum Entropy Method (MEM). The advantages of this method are that (i) no a priori assumptions or parametrisations of the spectral function are needed, (ii) a unique solution exists and (iii) the statistical significance of the resulting image can be quantitatively analysed. The Gross-Neveu model in d = 3 spacetime dimensions (GNM3) is a particularly interesting model to study with the MEM because at T = 0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances. Results for the elementary fermion, the Goldstone boson (pion), the sigma, the massive pseudoscalar meson and the symmetric-phase resonances are presented. UKQCD Nf = 2 dynamical QCD data are also studied with the MEM. Results are compared to those found in the quenched approximation, where the effects of quark loops in the QCD vacuum are neglected, to search for sea-quark effects in the extracted spectral functions. Information has been extracted from the difficult axial spatial and scalar channels as well as the pseudoscalar, vector and axial temporal channels. An estimate for the non-singlet scalar mass in the chiral limit is given, in agreement with the experimental value of Ma0 = 985 MeV.

  20. Methods employed to speed up Cathare for simulation uses

    International Nuclear Information System (INIS)

    Agator, J.M.

    1992-01-01

    This paper describes the main methods used to speed up the French advanced thermal-hydraulic computer code CATHARE and build a fast version, called CATHARE-SIMU, adapted to real-time calculations and a simulation environment. Since CATHARE-SIMU, like CATHARE, uses a numerical scheme based on a fully implicit Newton iterative method, and therefore a variable time step, two ways have been explored to reduce the computing time: avoiding short time steps, and thus minimizing the number of iterations per time step; and reducing the computing time needed per iteration. CATHARE-SIMU uses the same physical laws and correlations as CATHARE, with only some minor simplifications; this was considered the only way to be sure of maintaining the physical relevance of CATHARE. Finally, it is indicated that the validation programme of CATHARE-SIMU includes a set of 33 transient calculations, referring either to CATHARE for two-phase transients or to measurements on real plants for operational transients.
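
    The time-step strategy described — keep the implicit Newton solve cheap by avoiding short steps and few iterations — can be sketched with a scalar implicit Euler integrator whose step grows or shrinks with the observed Newton iteration count. This is an illustrative policy on a toy problem, not CATHARE's actual algorithm.

```python
def implicit_euler_adaptive(f, dfdy, y0, t_end, dt0=0.1, tol=1e-10, max_iter=8):
    """Implicit Euler with one Newton solve per step. The time step is
    adapted from the iteration count: cheap convergence grows the step,
    expensive convergence shrinks it, avoiding many short steps."""
    t, y, dt = 0.0, y0, dt0
    history = []
    while t < t_end:
        dt = min(dt, t_end - t)                 # do not overshoot the end time
        yn = y
        for it in range(1, max_iter + 1):
            g = yn - y - dt * f(yn)             # residual of implicit Euler
            if abs(g) < tol:
                break
            yn -= g / (1.0 - dt * dfdy(yn))     # Newton update
        t += dt
        y = yn
        history.append((t, y, it))
        dt = dt * 1.5 if it <= 3 else dt * 0.5  # step policy from iteration count
    return y, history

# stiff linear test problem y' = -5y with exact solution exp(-5t)
y_end, hist = implicit_euler_adaptive(lambda y: -5.0 * y, lambda y: -5.0, 1.0, 2.0)
print(y_end, len(hist))
```

    On this linear problem Newton converges in one update per step, so the step keeps growing and the transient is covered in few steps; a genuinely nonlinear residual would trigger the shrinking branch.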

  1. Hybrid Methods for Muon Accelerator Simulations with Ionization Cooling

    Energy Technology Data Exchange (ETDEWEB)

    Kunz, Josiah [Anderson U.; Snopok, Pavel [Fermilab; Berz, Martin [Michigan State U.; Makino, Kyoko [Michigan State U.

    2018-03-28

    Muon ionization cooling involves passing particles through solid or liquid absorbers. Careful simulations are required to design muon cooling channels. New features have been developed for inclusion in the transfer map code COSY Infinity to follow the distribution of charged particles through matter. To study the passage of muons through material, the transfer map approach alone is not sufficient. The interplay of beam optics and atomic processes must be studied by a hybrid transfer map--Monte-Carlo approach in which transfer map methods describe the deterministic behavior of the particles, and Monte-Carlo methods are used to provide corrections accounting for the stochastic nature of scattering and straggling of particles. The advantage of the new approach is that the vast majority of the dynamics are represented by fast application of the high-order transfer map of an entire element and accumulated stochastic effects. The gains in speed are expected to simplify the optimization of cooling channels which is usually computationally demanding. Progress on the development of the required algorithms and their application to modeling muon ionization cooling channels is reported.
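
    A minimal sketch of the hybrid idea — a deterministic transfer map for the beam optics, Monte Carlo kicks for the stochastic processes in the absorber — in a toy 2D phase space (x, x'). All constants are invented for illustration; this is not COSY Infinity's high-order map representation.

```python
import math, random

def track_cooling_channel(particles, n_cells=20, focus_k=0.2,
                          loss_frac=0.03, scatter_sigma=0.004, seed=3):
    """Per cell: apply a deterministic linear map (thin-lens focusing plus
    a unit drift), then stochastic Monte Carlo corrections: ionization
    energy loss damps the transverse momentum, scattering re-heats it."""
    rng = random.Random(seed)
    out = []
    for x, xp in particles:
        for _ in range(n_cells):
            xp -= focus_k * x                     # deterministic: thin-lens kick
            x += xp                               # deterministic: drift
            xp *= 1.0 - loss_frac                 # stochastic model input: energy loss
            xp += rng.gauss(0.0, scatter_sigma)   # MC: multiple scattering
        out.append((x, xp))
    return out

rng0 = random.Random(0)
beam = [(rng0.gauss(0.0, 0.02), rng0.gauss(0.0, 0.02)) for _ in range(1000)]
cooled = track_cooling_channel(beam)
rms_xp = math.sqrt(sum(xp * xp for _, xp in cooled) / len(cooled))
print(rms_xp)
```

    The speed argument in the abstract corresponds to the deterministic part: in a real channel the map of an entire element is applied in one step, with only the accumulated stochastic corrections sampled per particle.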

  2. Michel Trottier-McDonald

    Indian Academy of Sciences (India)

    Home; Journals; Pramana – Journal of Physics. Michel Trottier-McDonald. Articles written in Pramana – Journal of Physics. Volume 79 Issue 5 November 2012 pp 1337-1340 Poster Presentations. Tau reconstruction, energy calibration and identification at ATLAS · Michel Trottier-McDonald on behalf of the ATLAS ...

  3. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  4. Direct Monte Carlo Simulation Methods for Nonreacting and Reacting Systems at Fixed Total Internal Energy or Enthalpy

    Czech Academy of Sciences Publication Activity Database

    Smith, W.; Lísal, Martin

    2002-01-01

    Roč. 66, č. 1 (2002), s. 011104-1 - 011104-1 ISSN 1063-651X R&D Projects: GA ČR GA203/02/0805 Grant - others:NSERC(CA) OGP1041 Keywords : MC * simulation * reaction Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.397, year: 2002

  5. Simulation of neutral gas flow in a tokamak divertor using the Direct Simulation Monte Carlo method

    International Nuclear Information System (INIS)

    Gleason-González, Cristian; Varoutis, Stylianos; Hauer, Volker; Day, Christian

    2014-01-01

    Highlights: • Subdivertor gas flows calculations in tokamaks by coupling the B2-EIRENE and DSMC method. • The results include pressure, temperature, bulk velocity and particle fluxes in the subdivertor. • Gas recirculation effect towards the plasma chamber through the vertical targets is found. • Comparison between DSMC and the ITERVAC code reveals a very good agreement. - Abstract: This paper presents a new innovative scientific and engineering approach for describing sub-divertor gas flows of fusion devices by coupling the B2-EIRENE (SOLPS) code and the Direct Simulation Monte Carlo (DSMC) method. The present study exemplifies this with a computational investigation of neutral gas flow in the ITER's sub-divertor region. The numerical results include the flow fields and contours of the overall quantities of practical interest such as the pressure, the temperature and the bulk velocity assuming helium as model gas. Moreover, the study unravels the gas recirculation effect located behind the vertical targets, viz. neutral particles flowing towards the plasma chamber. Comparison between calculations performed by the DSMC method and the ITERVAC code reveals a very good agreement along the main sub-divertor ducts

  6. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    International Nuclear Information System (INIS)

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-01

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are "hydrodynamically large", i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an

  7. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are attracting rapidly growing interest and acceptance in the field of health economics. The reasons for this success probably lie in the theoretical foundations of the discipline, which make these techniques especially appealing for decision analysis. To this should be added modern IT progress, which has produced several flexible and powerful statistical software frameworks. Among them, one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach lie in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
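
    The kind of probabilistic Markov model described can be sketched without WinBUGS: draw the uncertain transition probabilities from prior distributions on each run and propagate a cohort through the chain. This is a three-state toy (Well / Sick / Dead) with invented parameters, not a real economic model.

```python
import random

def markov_cohort_psa(n_sims=1000, n_cycles=20, cohort=1000.0, seed=42):
    """Probabilistic sensitivity analysis of a tiny Markov cohort model:
    each simulation samples transition probabilities from beta priors
    (illustrative hyperparameters) and runs the cohort forward."""
    rng = random.Random(seed)
    deaths = []
    for _ in range(n_sims):
        p_ws = rng.betavariate(20, 180)   # Well -> Sick, uncertain parameter
        p_sd = rng.betavariate(10, 90)    # Sick -> Dead, uncertain parameter
        well, sick, dead = cohort, 0.0, 0.0
        for _ in range(n_cycles):
            new_sick = well * p_ws
            new_dead = sick * p_sd
            well -= new_sick
            sick += new_sick - new_dead
            dead += new_dead
        deaths.append(dead)
    return sum(deaths) / len(deaths)      # mean deaths across the PSA draws

mean_deaths = markov_cohort_psa()
print(mean_deaths)
```

    In a full Bayesian workflow the beta hyperparameters would themselves come from posterior inference on the evidence sources, which is exactly the integration of inference model and economic model the abstract highlights.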

  8. Numerical Simulation of Tubular Pumping Systems with Different Regulation Methods

    Science.gov (United States)

    Zhu, Honggeng; Zhang, Rentian; Deng, Dongsheng; Feng, Xusong; Yao, Linbi

    2010-06-01

    Since the flow in tubular pumping systems is basically in the axial direction and passes symmetrically through the impeller, largely satisfying the basic hypotheses of impeller design and giving higher pumping system efficiency than vertical pumping systems, they are being widely applied in low-head pumping engineering. In a pumping station, fluctuation of the water levels in the sump and discharge pool is common, and much of the time the pumping system runs under off-design conditions. Hence, the operation of the pump has to be flexibly regulated to meet the required flow rates, and the selection of the regulation method is as important as that of the pump in reducing operating cost and achieving economic operation. In this paper, the three-dimensional time-averaged Navier-Stokes equations are closed by the RNG κ-ε turbulence model, and two tubular pumping systems with different regulation methods, equipped with the same pump model but with different designed system structures, are numerically simulated to predict the pumping system performances, analyze the influence of the regulation device and help designers make the final decision in the selection of design schemes. The computed results indicate that the pumping system with a blade-adjusting device needs a longer suction box, and the increased hydraulic loss lowers the pumping system efficiency by about 1.5%. The pumping system with a permanent magnet motor, by means of variable speed regulation, obtains higher system efficiency, partly because of the shorter suction box and partly because of the different structural design. Nowadays, variable speed regulation is realized by a variable frequency device, whose energy consumption is about 3-4% of the output power of the motor. Hence, when the efficiency of the variable frequency device is considered, the total pumping system efficiency will probably be lower.
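
    For variable speed regulation, the shift of the operating point follows the standard pump affinity laws, and the quoted 3-4% drive loss can be folded into the overall efficiency. A quick sketch (the duty point and the 75% system efficiency are illustrative numbers, not the paper's results):

```python
def affinity_scale(q, h, p, n_ratio):
    """Pump affinity laws: flow ~ n, head ~ n^2, shaft power ~ n^3,
    where n_ratio is the new speed divided by the rated speed."""
    return q * n_ratio, h * n_ratio ** 2, p * n_ratio ** 3

# hypothetical duty point (flow m3/s, head m, power MW) scaled to 90% speed
q2, h2, p2 = affinity_scale(q=30.0, h=5.0, p=2.0, n_ratio=0.9)

# fold in a variable frequency device consuming ~3.5% of motor output
pump_system_eff = 0.75          # assumed pumping system efficiency
vfd_eff = 1.0 - 0.035
total_eff = pump_system_eff * vfd_eff
print(q2, h2, p2, total_eff)
```

    The cubic power law is why speed regulation pays off at part load despite the drive loss: a 10% speed reduction cuts shaft power by roughly 27%, far more than the 3-4% consumed by the frequency converter.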

  9. Purpose compliant visual simulation: towards effective and selective methods and techniques of visualisation and simulation

    NARCIS (Netherlands)

    Daru, R.; Venemans, P.

    1998-01-01

    Visualisation, simulation and communication have always been intimately interconnected. Visualisations and simulations represent existing or virtual realities; without such tools it is arduous to communicate mental depictions of virtual objects and events. A communication model is presented to

  10. A criticism of the paper entitled 'A practical method of estimating standard error of the age in the Fission Track Dating method' by Johnson, McGee and Naeser

    International Nuclear Information System (INIS)

    Green, P.F.

    1981-01-01

    It is stated that the common use of Poissonian errors to assign uncertainties in fission-track dating studies has led Johnson, McGee and Naeser (1979) to the mistaken assumption that such errors could be used to measure the spatial variation of track densities. The analysis proposed by JMN 79, employing this assumption, therefore leads to erroneous assessment of the error in an age determination. The basis for the statement is discussed. (U.K.)
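
    The point of the criticism can be made concrete: a Poissonian standard error reflects counting statistics only, and understates the uncertainty whenever track densities vary spatially. Comparing it with the empirical between-crystal standard error exposes the overdispersion (the counts below are hypothetical, not data from either paper):

```python
import math

def poisson_se(counts):
    """Standard error of the mean count assuming pure Poisson statistics:
    sqrt(total tracks) / number of crystals."""
    return math.sqrt(sum(counts)) / len(counts)

def empirical_se(counts):
    """Standard error of the mean from the observed between-crystal scatter,
    which also captures real spatial variation of track density."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return math.sqrt(var / n)

# hypothetical track counts over 8 crystals, overdispersed relative to Poisson
counts = [80, 120, 95, 150, 60, 130, 85, 140]
print(poisson_se(counts), empirical_se(counts))
```

    When the empirical value exceeds the Poissonian one, as here, quoting Poisson errors alone underestimates the age uncertainty, which is the substance of the criticism.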

  11. Overview of Computer Simulation Modeling Approaches and Methods

    Science.gov (United States)

    Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett

    2005-01-01

    The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...

  12. Simulation as a Method of Teaching Communication for Multinational Corporations.

    Science.gov (United States)

    Stull, James B.; Baird, John W.

    Interpersonal simulations may be used as a module in cultural awareness programs to provide realistic environments in which students, supervisors, and managers may practice communication skills that are effective in multicultural environments. To conduct and implement a cross-cultural simulation, facilitators should proceed through four stages:…

  13. Simulation

    DEFF Research Database (Denmark)

    Gould, Derek A; Chalmers, Nicholas; Johnson, Sheena J

    2012-01-01

    Recognition of the many limitations of traditional apprenticeship training is driving new approaches to learning medical procedural skills. Among simulation technologies and methods available today, computer-based systems are topical and bring the benefits of automated, repeatable, and reliable performance assessments. Human factors research is central to simulator model development that is relevant to real-world imaging-guided interventional tasks and to the credentialing programs in which it would be used.

  14. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    Full Text Available As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering to a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast alternative to the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. Also, we present a reconstruction approach to estimate the position of the fluorescent source based on trial and error as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
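    The core trick described here, replacing repeated transcendental evaluations in step sampling with a precomputed lookup table, can be sketched in a few lines. The table resolution and function names below are illustrative assumptions, not the TBRS authors' implementation:

    ```python
    import math
    import random

    def build_table(mu_t, n=4096):
        """Precompute an inverse-CDF table of exponential free-path lengths
        s = -ln(1 - u) / mu_t, tabulated at cell midpoints u = (i + 0.5) / n."""
        return [-math.log(1.0 - (i + 0.5) / n) / mu_t for i in range(n)]

    def sample_step_table(table, rng=random):
        # one uniform draw plus one table lookup replaces the log evaluation
        return table[int(rng.random() * len(table))]

    def sample_step_direct(mu_t, rng=random):
        # conventional MC sampling of the same distribution, for comparison
        return -math.log(1.0 - rng.random()) / mu_t
    ```

    Both samplers draw from the same exponential free-path distribution with mean 1/mu_t; the table version trades a small discretization error for cheaper per-photon work, which is the spirit of the single-step simplification described above.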

  15. Nuclear power plant training simulator system and method

    International Nuclear Information System (INIS)

    Ferguson, R.W.; Converse, R.E. Jr.

    1975-01-01

    A system is described for simulating the real-time dynamic operation of a full scope nuclear powered electrical generating plant for operator training utilizing apparatus that includes a control console with plant component control devices and indicating devices for monitoring plant operation. A general purpose digital computer calculates the dynamic simulation data for operating the indicating devices in accordance with the operation of the control devices. The functions for synchronization and calculation are arranged in a priority structure so as to insure an execution order that provides a maximum overlap of data exchange and simulation calculations. (Official Gazette)

  16. Discrete simulation system based on artificial intelligence methods

    Energy Technology Data Exchange (ETDEWEB)

    Futo, I; Szeredi, J

    1982-01-01

    A discrete event simulation system based on the AI language Prolog is presented. The system called t-Prolog extends the traditional possibilities of simulation languages toward automatic problem solving by using backtrack in time and automatic model modification depending on logical deductions. As t-Prolog is an interactive tool, the user has the possibility to interrupt the simulation run to modify the model or to force it to return to a previous state for trying possible alternatives. It admits the construction of goal-oriented or goal-seeking models with variable structure. Models are defined in a restricted version of the first order predicate calculus using Horn clauses. 21 references.
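    The event-scheduling core that such a system builds on can be sketched as a plain discrete-event loop (illustrative Python, not t-Prolog's Horn-clause machinery; all names are invented):

    ```python
    import heapq
    from itertools import count

    def simulate(initial_events, horizon):
        """Minimal discrete-event loop. Events are (time, name, action) triples;
        an action is called with the current time and may return a list of
        (delay, name, action) triples to schedule follow-up events."""
        seq = count()  # tie-breaker so the heap never compares action callables
        queue = [(t, next(seq), name, act) for t, name, act in initial_events]
        heapq.heapify(queue)
        log = []
        while queue:
            t, _, name, action = heapq.heappop(queue)
            if t > horizon:
                break
            log.append((t, name))
            for delay, nname, nact in (action(t) if action else []):
                heapq.heappush(queue, (t + delay, next(seq), nname, nact))
        return log
    ```

    The interesting extras in t-Prolog, interruption, backtracking to a previous state, and model modification driven by logical deduction, would require a loop like this to snapshot the queue and model state, which is exactly what embedding the simulation in a logic language with backtracking provides for free.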

  17. Dose rates from a C-14 source using extrapolation chamber and MC calculations

    International Nuclear Information System (INIS)

    Borg, J.

    1996-05-01

    The extrapolation chamber technique and the Monte Carlo (MC) calculation technique based on the EGS4 system have been studied for application to the determination of dose rates in a low-energy β radiation field, e.g. that from a 14C source. The extrapolation chamber measurement method is the basic method for the determination of dose rates in β radiation fields. Applying a number of correction factors and the tissue-to-air stopping power ratio, the measured dose rate in an air volume surrounded by tissue-equivalent material is converted into dose to tissue. Various details of the extrapolation chamber measurement method and evaluation procedure have been studied and further developed, and a complete procedure for the experimental determination of dose rates from a 14C source is presented. A number of correction factors and other parameters used in the evaluation procedure for the measured data have been obtained by MC calculations. The whole extrapolation chamber measurement procedure was simulated using the MC method. The measured dose rates showed an increasing deviation from the MC calculated dose rates as the absorber thickness increased. This indicates that the EGS4 code may have some limitations for the transport of very low-energy electrons, i.e. electrons with estimated energies less than 10-20 keV. MC calculations of dose to tissue were performed using two models: a cylindrical tissue phantom and a computer model of the extrapolation chamber. The dose to tissue in the extrapolation chamber model showed an additional buildup dose compared to the dose in the tissue model. (au) 10 tabs., 11 ills., 18 refs

  18. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    Science.gov (United States)

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that with an appropriate combination of two optical simulation techniques-classical ray-tracing and the finite difference time domain method-an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  19. Modified enthalpy method for the simulation of melting and ...

    Indian Academy of Sciences (India)

    These include the implicit time stepping method of Voller & Cross (1981), the explicit enthalpy method of Tacke (1985), the centroidal temperature correction method ... In the variable viscosity method, viscosity is written as a function of the liquid fraction.

  20. Review of Vortex Methods for Simulation of Vortex Breakdown

    National Research Council Canada - National Science Library

    Levinski, Oleg

    2001-01-01

    The aim of this work is to identify current developments in the field of vortex breakdown modelling in order to initiate the development of a numerical model for the simulation of F/A-18 empennage buffet...

  1. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin; Heister, Timo; Bangerth, Wolfgang

    2012-01-01

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related

  2. New methods for simulation of fractional Brownian motion

    International Nuclear Information System (INIS)

    Yin, Z.M.

    1996-01-01

    We present new algorithms for simulation of fractional Brownian motion (fBm) which comprises a set of important random functions widely used in geophysical and physical modeling, fractal image (landscape) simulating, and signal processing. The new algorithms, which are both accurate and efficient, allow us to generate not only a one-dimensional fBm process, but also two- and three-dimensional fBm fields. 23 refs., 3 figs
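    The paper's own fast algorithms are not reproduced here, but the standard exact reference method they are typically compared against, Cholesky factorization of the fractional Gaussian noise covariance, makes a compact sketch (O(n³), so useful only for short one-dimensional traces):

    ```python
    import numpy as np

    def fbm(n, H, seed=0):
        """Generate n+1 samples of fractional Brownian motion on [0, 1] with
        Hurst exponent H, by Cholesky factorization of the covariance of the
        fractional Gaussian noise increments (exact but O(n^3))."""
        dt = 1.0 / n
        k = np.arange(n)
        # autocovariance of fGn increments over a step of length dt
        gamma = 0.5 * (np.abs(k - 1)**(2 * H) - 2 * np.abs(k)**(2 * H)
                       + np.abs(k + 1)**(2 * H)) * dt**(2 * H)
        cov = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                cov[i, j] = gamma[abs(i - j)]
        L = np.linalg.cholesky(cov)
        rng = np.random.default_rng(seed)
        increments = L @ rng.standard_normal(n)
        return np.concatenate(([0.0], np.cumsum(increments)))
    ```

    For H = 0.5 the covariance becomes diagonal and the path reduces to ordinary Brownian motion, which is a convenient sanity check; efficient algorithms such as those of the paper are needed precisely because this exact construction does not scale to long traces or to 2-D and 3-D fields.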

  3. NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.

    Energy Technology Data Exchange (ETDEWEB)

    LUCCIO, A.; D' IMPERIO, N.; MALITSKY, N.

    2005-09-12

    Numerical algorithms for the PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulation presented here is SIMBAD, which can be run stand-alone or as part of the UAL (Unified Accelerator Libraries) package.
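    A one-dimensional finite-difference toy version of the Poisson solve with conducting walls (zero-potential Dirichlet boundaries) illustrates the kind of field solve a PIC code performs each step; this sketch is not SIMBAD code:

    ```python
    import numpy as np

    def poisson_dirichlet(rho, h):
        """Solve u'' = -rho on a uniform 1-D grid with u = 0 at both walls.
        rho holds the charge density at the interior grid points; returns the
        potential at those points (second-order finite differences)."""
        n = len(rho)
        # tridiagonal discrete Laplacian: (u[i-1] - 2 u[i] + u[i+1]) / h^2
        A = (np.diag(np.full(n, -2.0))
             + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1))
        return np.linalg.solve(A, -rho * h * h)
    ```

    For a uniform unit charge density the analytic solution is u(x) = x(1 - x)/2, which the second-order scheme reproduces exactly since the solution is quadratic; a production PIC solver would use an FFT or multigrid solver in 2-D or 3-D instead of a dense solve.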

  4. On Partitioned Simulation of Electrical Circuits using Dynamic Iteration Methods

    OpenAIRE

    Ebert, Falk

    2008-01-01

    This thesis investigates the partitioned simulation of electrical circuits. This is a technique in which different parts of a circuit are treated numerically in different ways in order to obtain a simulation of the complete circuit. Particular attention is paid to two points. First, all analytical results should admit a graph-theoretical interpretation. This requirement stems from the fact that circuit equations ...

  5. Computational simulation in architectural and environmental acoustics methods and applications of wave-based computation

    CERN Document Server

    Sakamoto, Shinichi; Otsuru, Toru

    2014-01-01

    This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.  

  6. Application of quality assurance to MC and A systems

    International Nuclear Information System (INIS)

    Skinner, A.J.; Delvin, W.L.

    1986-01-01

    Application of the principles of quality assurance to MC and A has been done at DOE's Savannah River Operations Office. The principles were applied to the functions within the MC and A Branch, including both the functions used to operate the Branch and those used to review the MC and A activities of DOE/SR's contractor. The purpose of this paper is to discuss that application of quality assurance and to show how the principles of quality assurance relate to the functions of a MC and A system, for both a DOE field office and a contractor. The principles (presented as requirements from the NQA-1 standard) are briefly discussed, a method for applying quality assurance is outlined, application at DOE/SR is shown, and application to a contractor's MC and A system is discussed

  7. Simulating Social Networks of Online Communities: Simulation as a Method for Sociability Design

    Science.gov (United States)

    Ang, Chee Siang; Zaphiris, Panayiotis

    We propose the use of social simulations to study and support the design of online communities. In this paper, we developed an Agent-Based Model (ABM) to simulate and study the formation of social networks in a Massively Multiplayer Online Role Playing Game (MMORPG) guild community. We first analyzed the activities and the social network (who-interacts-with-whom) of an existing guild community to identify its interaction patterns and characteristics. Then, based on the empirical results, we derived and formalized the interaction rules, which were implemented in our simulation. Using the simulation, we reproduced the observed social network of the guild community as a means of validation. The simulation was then used to examine how various parameters of the community (e.g. the level of activity, the number of neighbors of each agent, etc) could potentially influence the characteristic of the social networks.
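    A generic skeleton of this kind of who-interacts-with-whom agent-based simulation might look as follows; the interaction rule and parameter names are invented for illustration and are not the rules derived from the guild data in the paper:

    ```python
    import random
    from collections import defaultdict

    def simulate_guild(n_agents, n_steps, p_new=0.3, seed=0):
        """Toy ABM sketch: at each step a random agent either interacts with a
        new partner (probability p_new) or re-interacts with a previously met
        partner, mimicking the reinforcement of existing social ties."""
        rng = random.Random(seed)
        partners = defaultdict(list)  # who-interacts-with-whom multigraph
        for _ in range(n_steps):
            a = rng.randrange(n_agents)
            if partners[a] and rng.random() > p_new:
                b = rng.choice(partners[a])        # reinforce an existing tie
            else:
                b = rng.randrange(n_agents)        # meet someone new
                while b == a:
                    b = rng.randrange(n_agents)
            partners[a].append(b)
            partners[b].append(a)
        return partners
    ```

    Validation in the style of the paper would then compare the degree distribution and clustering of the simulated network against those measured in the real community, and sweep parameters such as p_new to see how they shape the network.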

  8. The Simulation and Analysis of the Closed Die Hot Forging Process by A Computer Simulation Method

    Directory of Open Access Journals (Sweden)

    Dipakkumar Gohil

    2012-06-01

    Full Text Available The objective of this research work is to study the variation of various parameters such as stress, strain, temperature, force, etc. during the closed die hot forging process. A computer simulation modeling approach has been adopted to transform the theoretical aspects into a computer algorithm which is used to simulate and analyze the closed die hot forging process. For the purpose of process study, the entire deformation process has been divided into a finite number of steps, and the output values have been computed at each deformation step. The results of the simulation have been represented graphically, and suitable corrective measures are recommended if the simulation results do not agree with the theoretical values. This computer simulation approach would significantly improve the productivity and reduce the energy consumption of the overall process for components manufactured by the closed die forging process, and contribute towards the efforts in reducing global warming.

  9. High viscosity fluid simulation using particle-based method

    KAUST Repository

    Chang, Yuanzhang; Bao, Kai; Zhu, Jian; Wu, Enhua

    2011-01-01

    the boundary, ghost particles are employed to enforce the solid boundary condition. Compared with Finite Element Methods with complicated and time-consuming remeshing operations, our method is much more straightforward to implement. Moreover, our method doesn

  10. Method for numerical simulation of two-term exponentially correlated colored noise

    International Nuclear Information System (INIS)

    Yilmaz, B.; Ayik, S.; Abe, Y.; Gokalp, A.; Yilmaz, O.

    2006-01-01

    A method for numerical simulation of two-term exponentially correlated colored noise is proposed. The method is an extension of traditional method for one-term exponentially correlated colored noise. The validity of the algorithm is tested by comparing numerical simulations with analytical results in two physical applications
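    For the one-term case the traditional method is the exact Ornstein-Uhlenbeck update; summing two independent such processes yields noise whose correlation function is a two-term sum of exponentials. A sketch under that assumption (one common construction, not necessarily the authors' algorithm):

    ```python
    import math
    import random

    def ou_update(x, tau, sigma, dt, rng):
        """Exact one-step update of an Ornstein-Uhlenbeck process whose
        stationary correlation is sigma^2 * exp(-t / tau)."""
        rho = math.exp(-dt / tau)
        return x * rho + sigma * math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)

    def two_term_noise(nsteps, dt, tau1, sigma1, tau2, sigma2, seed=0):
        """Noise with correlation sigma1^2 e^{-t/tau1} + sigma2^2 e^{-t/tau2},
        built as the sum of two independent OU processes."""
        rng = random.Random(seed)
        x1 = rng.gauss(0.0, sigma1)   # start in the stationary distribution
        x2 = rng.gauss(0.0, sigma2)
        out = []
        for _ in range(nsteps):
            out.append(x1 + x2)
            x1 = ou_update(x1, tau1, sigma1, dt, rng)
            x2 = ou_update(x2, tau2, sigma2, dt, rng)
        return out
    ```

    A validation in the spirit of the paper would compare the sample autocorrelation of the generated series against the analytical two-term exponential.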

  11. The null-event method in computer simulation

    International Nuclear Information System (INIS)

    Lin, S.L.

    1978-01-01

    The simulation of collisions of ions moving under the influence of an external field through a neutral gas at non-zero temperatures is discussed as an example of computer models of processes in which a probe particle undergoes a series of interactions with an ensemble of other particles, such that the frequency and outcome of the events depend on internal properties of the second particles. The introduction of null events removes the need for much complicated algebra, leads to a more efficient simulation and reduces the likelihood of logical error. (Auth.)
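    The null-event idea can be sketched for a time-dependent collision rate: candidate events are drawn from a constant majorant rate, and rejected candidates are "null events" that leave the state unchanged. This is the generic rejection scheme the technique is based on, not the paper's code:

    ```python
    import math
    import random

    def next_collision_time(rate_fn, rate_max, rng):
        """Null-event (null-collision) sampling of the next real collision time.
        rate_max must be an upper bound on rate_fn(t); candidates are drawn from
        the constant rate rate_max and accepted with probability
        rate_fn(t) / rate_max, so no integral of rate_fn is ever needed."""
        t = 0.0
        while True:
            # candidate event from the constant majorant rate
            t += -math.log(1.0 - rng.random()) / rate_max
            if rng.random() < rate_fn(t) / rate_max:
                return t          # real collision
            # otherwise: null event, nothing happens, keep going
    ```

    The payoff is exactly what the abstract describes: the awkward algebra of sampling from a state-dependent collision frequency is replaced by a simple accept/reject step.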

  12. McClean Lake. Site Guide

    International Nuclear Information System (INIS)

    2016-09-01

    Located over 700 kilometers northeast of Saskatoon, Areva's McClean Lake site comprises several uranium mines and one of the most technologically advanced uranium mills in the world - the only mill designed to process high-grade uranium ore without dilution. Areva has operated several open-pit uranium mines at the McClean Lake site, and is evaluating future mines at and near the site. The McClean Lake mill has recently undergone a multimillion-dollar upgrade and expansion, which has doubled its annual production capacity of uranium concentrate to 24 million pounds. It is the only facility in the world capable of processing high-grade uranium ore without diluting it. The mill processes the ore from the Cigar Lake mine, the world's second largest and highest-grade uranium mine. The McClean Lake site operates 365 days a year on a week-in/week-out rotation schedule for workers, over 50% of whom reside in northern Saskatchewan communities. Tailings are waste products resulting from milling uranium ore. This waste is made up of leach residue solids, waste solutions and chemical precipitates that are carefully engineered for long-term disposal. The tailings management facility (TMF) serves as the repository for all resulting tailings. This facility allows proper waste management, which minimizes potential adverse environmental effects. Mining projections indicate that the McClean Lake mill will produce tailings in excess of the existing capacity of the TMF. After evaluating a number of options, Areva has decided to pursue an expansion of this facility. Areva is developing the Surface Access Borehole Resource Extraction (SABRE) mining method, which uses a high-pressure water jet placed at the bottom of a drill hole to extract ore. Areva has conducted a series of tests with this method and is evaluating its potential for future mining operations. McClean Lake maintains its certification in ISO 14001 standards for environmental management and OHSAS 18001 standards for occupational health

  13. Adaptive Multiscale Finite Element Method for Subsurface Flow Simulation

    NARCIS (Netherlands)

    Van Esch, J.M.

    2010-01-01

    Natural geological formations generally show multiscale structural and functional heterogeneity evolving over many orders of magnitude in space and time. In subsurface hydrological simulations the geological model focuses on the structural hierarchy of physical sub units and the flow model addresses

  14. Crop canopy BRDF simulation and analysis using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.

    2006-01-01

    The authors design the random process of interaction between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of a crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and

  15. A Ten-Step Design Method for Simulation Games in Logistics Management

    NARCIS (Netherlands)

    Fumarola, M.; Van Staalduinen, J.P.; Verbraeck, A.

    2011-01-01

    Simulation games have often been found useful as a method of inquiry to gain insight into complex system behavior and as aids for design, engineering simulation and visualization, and education. Designing simulation games is the result of creative thinking and planning, but often not the result of a

  16. An NPT Monte Carlo Molecular Simulation-Based Approach to Investigate Solid-Vapor Equilibrium: Application to Elemental Sulfur-H2S System

    KAUST Repository

    Kadoura, Ahmad Salim; Salama, Amgad; Sun, Shuyu; Sherik, Abdelmounam

    2013-01-01

    In this work, a method to estimate solid elemental sulfur solubility in pure and gas mixtures using Monte Carlo (MC) molecular simulation is proposed. This method is based on Isobaric-Isothermal (NPT) ensemble and the Widom insertion technique

  17. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  18. Truncated Newton-Raphson Methods for Quasicontinuum Simulations

    National Research Council Canada - National Science Library

    Liang, Yu; Kanapady, Ramdev; Chung, Peter W

    2006-01-01

    .... In this research, we report the effectiveness of the truncated Newton-Raphson method and quasi-Newton method with low-rank Hessian update strategy that are evaluated against the full Newton-Raphson...

  19. Numerical simulation of GEW equation using RBF collocation method

    Directory of Open Access Journals (Sweden)

    Hamid Panahipour

    2012-08-01

    Full Text Available The generalized equal width (GEW) equation is solved numerically by a meshless method based on global collocation with standard types of radial basis functions (RBFs). Test problems including propagation of single solitons, interaction of two and three solitons, development of the Maxwellian initial condition pulses, wave undulation and wave generation are used to indicate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and some other published numerical methods.
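    The collocation building block of such a scheme, solving for RBF weights at the nodes and then evaluating the interpolant anywhere, can be sketched for the interpolation step alone; the multiquadric basis and the shape parameter value used below are common choices, not necessarily those of the paper:

    ```python
    import numpy as np

    def rbf_interpolate(x_nodes, f_vals, x_eval, c=0.2):
        """Global RBF collocation interpolant in 1-D with the multiquadric basis
        phi(r) = sqrt(r^2 + c^2): solve the collocation system for the weights
        at the nodes, then evaluate the expansion at the requested points."""
        phi = lambda r: np.sqrt(r * r + c * c)
        A = phi(np.abs(x_nodes[:, None] - x_nodes[None, :]))  # collocation matrix
        w = np.linalg.solve(A, f_vals)
        B = phi(np.abs(x_eval[:, None] - x_nodes[None, :]))   # evaluation matrix
        return B @ w
    ```

    A time-dependent PDE solver such as the one in the paper applies the same machinery to spatial derivatives (differentiating phi analytically) inside a time-stepping loop; the shape parameter c controls the trade-off between accuracy and conditioning of the collocation matrix.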

  20. External individual monitoring: experiments and simulations using Monte Carlo Method

    International Nuclear Information System (INIS)

    Guimaraes, Carla da Costa

    2005-01-01

    In this work, we have evaluated the possibility of applying the Monte Carlo simulation technique in photon dosimetry of external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X ray spectra were generated by impinging electrons on a tungsten target. Then, the produced photon beam was filtered through a beryllium window and additional filters to obtain radiation with the desired qualities. This procedure, used to simulate radiation fields produced by an X ray tube, was validated by comparing characteristics such as the half value layer, which was also experimentally measured, the mean photon energy and the spectral resolution of simulated spectra with those of reference spectra established by international standards. In the construction of the thermoluminescent dosimeter model, two approaches for improvement have been introduced. The first one was the inclusion of 6% of air in the composition of the CaF2:NaCl detector, due to the difference between measured and calculated values of its density. Also, comparison between simulated and experimental results showed that the self-attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account. Then, in the second approach, the light attenuation coefficient of the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm-1, was introduced. Conversion coefficients Cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl methacrylate (PMMA) walls, for the reference narrow and wide X ray spectrum series [ISO 4037-1], and also for the wide spectra implanted and used in routine at Laboratorio de Dosimetria. Simulations of radiation backscattered by the PMMA slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results.
Therefore, the PMMA slab water phantom that can be easily constructed with low

  1. Fast Multilevel Panel Method for Wind Turbine Rotor Flow Simulations

    NARCIS (Netherlands)

    van Garrel, Arne; Venner, Cornelis H.; Hoeijmakers, Hendrik Willem Marie

    2017-01-01

    A fast multilevel integral transform method has been developed that enables the rapid analysis of unsteady inviscid flows around wind turbines rotors. A low order panel method is used and the new multi-level multi-integration cluster (MLMIC) method reduces the computational complexity for

  2. Simulation methods for multiperiodic and aperiodic nanostructured dielectric waveguides

    DEFF Research Database (Denmark)

    Paulsen, Moritz; Neustock, Lars Thorben; Jahns, Sabrina

    2017-01-01

    on Rudin–Shapiro, Fibonacci, and Thue–Morse binary sequences. The near-field and far-field properties are computed employing the finite-element method (FEM), the finite-difference time-domain (FDTD) method as well as a rigorous coupled wave algorithm (RCWA). The results show that all three methods...

  3. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  4. Geometry optimization of zirconium sulfophenylphosphonate layers by molecular simulation methods

    Czech Academy of Sciences Publication Activity Database

    Škoda, J.; Pospíšil, M.; Kovář, P.; Melánová, Klára; Svoboda, J.; Beneš, L.; Zima, Vítězslav

    2018-01-01

    Vol. 24, No. 1 (2018), pp. 1-12, article No. 10. ISSN 1610-2940 R&D Projects: GA ČR(CZ) GA14-13368S; GA ČR(CZ) GA17-10639S Institutional support: RVO:61389013 Keywords: zirconium sulfophenylphosphonate * intercalation * molecular simulation Subject RIV: CA - Inorganic Chemistry OBOR OECD: Inorganic and nuclear chemistry Impact factor: 1.425, year: 2016

  5. Absolute efficiency calibration of HPGe detector by simulation method

    International Nuclear Information System (INIS)

    Narayani, K.; Pant, Amar D.; Verma, Amit K.; Bhosale, N.A.; Anilkumar, S.

    2018-01-01

    High resolution gamma ray spectrometry with HPGe detectors is a powerful radioanalytical technique for the estimation of the activity of various radionuclides. In the present work, absolute efficiency calibration of the HPGe detector was carried out using the Monte Carlo simulation technique, and the results are compared with those obtained by experiment using standard radionuclides of 152Eu and 133Ba. The coincidence summing correction factors for the measurement of these nuclides were also calculated.
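    As a toy version of efficiency estimation by simulation, the purely geometric efficiency of an on-axis point source facing a disk detector can be estimated by MC and checked against the analytic solid-angle fraction 0.5 * (1 - d / sqrt(d^2 + r^2)). No photon interactions are modeled here, so this only illustrates the idea, not a GEANT4/MCNP-grade calibration:

    ```python
    import math
    import random

    def geometric_efficiency(distance, radius, n=200000, seed=0):
        """MC estimate of the fraction of isotropically emitted photons from a
        point source on the detector axis that hit the circular detector face.
        distance: source-to-face distance; radius: face radius."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n):
            mu = 2.0 * rng.random() - 1.0      # isotropic: cos(theta) ~ U[-1, 1]
            if mu <= 0.0:
                continue                        # emitted away from the detector
            # radial offset where the ray crosses the detector plane
            if distance * math.sqrt(1.0 - mu * mu) / mu <= radius:
                hits += 1
        return hits / n
    ```

    A full-energy-peak efficiency calibration additionally folds in photon transport and energy deposition inside the crystal, which is what the HPGe simulation above provides.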

  6. Quantum mechanical simulation methods for studying biological systems

    International Nuclear Information System (INIS)

    Bicout, D.; Field, M.

    1996-01-01

    Most known biological mechanisms can be explained using fundamental laws of physics and chemistry, and a full understanding of biological processes requires a multidisciplinary approach in which all the tools of biology, chemistry and physics are employed. An area of research becoming increasingly important is the theoretical study of biological macromolecules, where numerical experimentation plays the double role of establishing a link between theoretical models and predictions and allowing a quantitative comparison between experiments and models. This workshop brought together researchers working on different aspects of the development and application of quantum mechanical simulation, assessed the state of the art in the field and highlighted directions for future research. Fourteen lectures (theoretical courses and specialized seminars) dealt with the following themes: 1) quantum mechanical calculations of large systems; 2) ab initio molecular dynamics, where the calculation of the wavefunction, and hence of the energy and the forces on the atoms, for a system at a single nuclear configuration is combined with classical molecular dynamics algorithms in order to perform simulations that use a quantum mechanical potential energy surface; 3) quantum dynamical simulations, and electron and proton transfer processes in proteins and in solutions; and finally 4) free seminars that helped to enlarge the scope of the workshop. (N.T.)

  7. An Interview with Joe McMann: His Life Lessons

    Science.gov (United States)

    McMann, Joe

    2011-01-01

    Pica Kahn conducted "An Interview with Joe McMann: His Life Lessons" on May 23, 2011. With over 40 years of experience in the aerospace industry, McMann has gained a wealth of knowledge. Many have been interested in his biography, progression of work at NASA, impact on the U.S. spacesuit, and career accomplishments. This interview highlighted the influences and decision-making methods that impacted his technical and management contributions to the space program. McMann shared information about the accomplishments and technical advances that committed individuals can make.

  8. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of the mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible, and the results encourage testing of these methods with different kinds of large-scale problem settings.

  9. Justification of a Monte Carlo Algorithm for the Diffusion-Growth Simulation of Helium Clusters in Materials

    International Nuclear Information System (INIS)

    Yu-Lu, Zhou; Ai-Hong, Deng; Qing, Hou; Jun, Wang

    2009-01-01

    A theoretical analysis of a Monte Carlo (MC) method for the simulation of the diffusion-growth of helium clusters in materials is presented. This analysis is based on the assumption that the diffusion-growth process consists of a first stage, during which the clusters diffuse freely, and a second stage, in which coalescence occurs with a certain probability. Since the accuracy of MC simulation results is sensitive to the coalescence probability, the MC calculations in the second stage are studied in detail. Firstly, the coalescence probability is analytically formulated for the one-dimensional diffusion-growth case. Thereafter, the one-dimensional results are employed to justify the MC simulation. The choice of time step and the random number generator used in the MC simulation are discussed.
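
As a toy illustration of the two-stage picture (free diffusion, then probabilistic coalescence), one can random-walk point clusters on a one-dimensional ring and merge them on contact with a fixed probability. The lattice size, cluster count, and coalescence probability below are arbitrary choices, not values from the paper.

```python
import random

random.seed(1)

# Toy 1-D diffusion-growth: point clusters random-walk on a ring and
# coalesce with probability p_coal when two of them occupy the same site.
L, n_clusters, p_coal, steps = 200, 40, 0.5, 2000
clusters = [{"pos": random.randrange(L), "size": 1} for _ in range(n_clusters)]

for _ in range(steps):
    for c in clusters:
        c["pos"] = (c["pos"] + random.choice((-1, 1))) % L  # free-diffusion stage
    merged = []
    for c in clusters:                                      # coalescence stage
        hit = next((m for m in merged if m["pos"] == c["pos"]), None)
        if hit is not None and random.random() < p_coal:
            hit["size"] += c["size"]                        # merge the two clusters
        else:
            merged.append(c)
    clusters = merged

print(len(clusters), sum(c["size"] for c in clusters))
```

The total atom count is conserved while the number of clusters decreases, the qualitative signature of diffusion-driven growth.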

  10. MC 68020 μp architecture

    International Nuclear Information System (INIS)

    Casals, O.; Dejuan, E.; Labarta, J.

    1988-01-01

    The MC68020 is a 32-bit microprocessor object-code compatible with the earlier MC68000 and MC68010. In this paper we describe its architecture and two coprocessors: the MC68851 paged memory management unit and the MC68882 floating point coprocessor. Among its most important characteristics we can point out: addressing mode extensions for enhanced support of high level languages, an on-chip instruction cache, and full support of virtual memory. (Author)

  11. Explicit Singly Diagonally Implicit Runge-Kutta Methods and Adaptive Stepsize Control for Reservoir Simulation

    DEFF Research Database (Denmark)

    Völcker, Carsten; Jørgensen, John Bagterp; Thomsen, Per Grove

    2010-01-01

    The implicit Euler method, normally referred to as the fully implicit (FIM) method, and the implicit pressure explicit saturation (IMPES) method are the traditional choices for temporal discretization in reservoir simulation. The FIM method offers unconditional stability in the sense of discrete......-Kutta methods, ESDIRK, Newton-Raphson, convergence control, error control, stepsize selection....
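
The error- and stepsize-control machinery named in this record's keywords can be sketched with an embedded pair: advance with a higher-order formula, estimate the local error from a lower-order companion, accept or reject, and rescale the step. For brevity the sketch uses an explicit Euler/Heun pair rather than an ESDIRK scheme, but the accept/reject logic and the asymptotic controller are the same idea.

```python
import math

# Embedded-pair step-size control: advance with Heun's method (order 2),
# estimate the local error from the difference with explicit Euler
# (order 1), accept if below tol, and rescale h with the standard
# asymptotic controller (safety factor 0.9, growth limits 0.1..5).
def integrate(f, t, y, t_end, tol=1e-6):
    h, accepted = 1e-3, 0
    while t < t_end - 1e-12:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_low = y + h * k1                  # explicit Euler
        y_high = y + h * (k1 + k2) / 2.0    # Heun
        err = abs(y_high - y_low)
        if err <= tol:                      # accept the step
            t, y = t + h, y_high
            accepted += 1
        h *= min(5.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y, accepted

y_end, n_acc = integrate(lambda t, y: -y, 0.0, 1.0, 5.0)
print(y_end, math.exp(-5.0), n_acc)
```

ESDIRK schemes apply the same controller with implicit stages, which is what makes them attractive next to FIM's implicit Euler.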

  12. Mock ECHO: A Simulation-Based Medical Education Method.

    Science.gov (United States)

    Fowler, Rebecca C; Katzman, Joanna G; Comerci, George D; Shelley, Brian M; Duhigg, Daniel; Olivas, Cynthia; Arnold, Thomas; Kalishman, Summers; Monnette, Rebecca; Arora, Sanjeev

    2018-04-16

    This study was designed to develop a deeper understanding of the learning and social processes that take place during the simulation-based medical education for practicing providers as part of the Project ECHO® model, known as Mock ECHO training. The ECHO model is utilized to expand access to care of common and complex diseases by supporting the education of primary care providers with an interprofessional team of specialists via videoconferencing networks. Mock ECHO trainings are conducted through a train the trainer model targeted at leaders replicating the ECHO model at their organizations. Trainers conduct simulated teleECHO clinics while participants gain skills to improve communication and self-efficacy. Three focus groups, conducted between May 2015 and January 2016 with a total of 26 participants, were deductively analyzed to identify common themes related to simulation-based medical education and interdisciplinary education. Principal themes generated from the analysis included (a) the role of empathy in community development, (b) the value of training tools as guides for learning, (c) Mock ECHO design components to optimize learning, (d) the role of interdisciplinary education to build community and improve care delivery, (e) improving care integration through collaboration, and (f) development of soft skills to facilitate learning. Mock ECHO trainings offer clinicians the freedom to learn in a noncritical environment while emphasizing real-time multidirectional feedback and encouraging knowledge and skill transfer. The success of the ECHO model depends on training interprofessional healthcare providers in behaviors needed to lead a teleECHO clinic and to collaborate in the educational process. While building a community of practice, Mock ECHO provides a safe opportunity for a diverse group of clinician experts to practice learned skills and receive feedback from coparticipants and facilitators.

  13. Computerized method for X-ray angular distribution simulation in radiological systems

    International Nuclear Information System (INIS)

    Marques, Marcio A.; Oliveira, Henrique J.Q. de; Frere, Annie F.; Schiabel, Homero; Marques, Paulo M.A.

    1996-01-01

    A method to simulate the changes in the X-ray angular distribution (the Heel effect) in radiologic imaging systems is presented. The method is intended to predict images for any exposure technique, considering that this distribution is the cause of the intensity variation along the radiation field.

  14. Contribution of the ultrasonic simulation to the testing methods qualification process

    International Nuclear Information System (INIS)

    Le Ber, L.; Calmon, P.; Abittan, E.

    2001-01-01

    The CEA and EDF have started a study concerning the value of simulation in the qualification of ultrasonic testing methods for nuclear components. In this framework, the simulation tools of the CEA, such as CIVA, have been tested on real inspections. The method and the results obtained on some examples are presented. (A.L.B.)

  15. 2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method

    Science.gov (United States)

    Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)

    2000-01-01

    The objectives summarized in this viewgraph presentation include: (1) the development of a quantum mechanical simulator for ultra-short-channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploring physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) studying other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.
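
In its simplest 1-D tight-binding form, the non-equilibrium Green's function (NEGF) machinery reduces to inverting (E − H − Σ_L − Σ_R) and tracing Γ_L·G·Γ_R·G†. The sketch below computes the transmission of an ideal wire, a standard sanity check (T = 1 inside the band); it is a textbook illustration, not the 2-D MOSFET simulator described in this record.

```python
import numpy as np

# Minimal 1-D NEGF transmission for a perfect tight-binding chain
# (on-site energy 0, hopping t). Band: E = -2 t cos(k).
t_hop, n_sites = 1.0, 8
H = np.diag(np.full(n_sites - 1, -t_hop), 1) + np.diag(np.full(n_sites - 1, -t_hop), -1)

def transmission(E):
    k = np.arccos(-E / (2.0 * t_hop))        # propagating wavevector inside the band
    sigma = -t_hop * np.exp(1j * k)          # surface self-energy of a semi-infinite lead
    Sig_L = np.zeros((n_sites, n_sites), complex); Sig_L[0, 0] = sigma
    Sig_R = np.zeros((n_sites, n_sites), complex); Sig_R[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(n_sites) - H - Sig_L - Sig_R)  # retarded Green function
    Gam_L = 1j * (Sig_L - Sig_L.conj().T)    # contact broadening matrices
    Gam_R = 1j * (Sig_R - Sig_R.conj().T)
    return np.trace(Gam_L @ G @ Gam_R @ G.conj().T).real

print(round(transmission(0.5), 6))  # ~1 for an ideal wire inside the band
```

The 2-D device case replaces the scalar chain by a discretized 2-D Hamiltonian and mode-resolved contact self-energies, but the G, Γ, and trace structure is unchanged.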

  16. Digital system verification a combined formal methods and simulation framework

    CERN Document Server

    Li, Lun

    2010-01-01

    Integrated circuit capacity follows Moore's law, and chips are commonly produced at the time of this writing with over 70 million gates per device. Ensuring correct functional behavior of such large designs before fabrication poses an extremely challenging problem. Formal verification validates the correctness of the implementation of a design with respect to its specification through mathematical proof techniques. Formal techniques have been emerging as commercialized EDA tools in the past decade. Simulation remains a predominantly used tool to validate a design in industry. After more than 5

  17. Achieving better cooling of turbine blades using numerical simulation methods

    Science.gov (United States)

    Inozemtsev, A. A.; Tikhonov, A. S.; Sendyurev, C. I.; Samokhvalov, N. Yu.

    2013-02-01

    A new design of the first-stage nozzle vane for the turbine of a prospective gas-turbine engine is considered. The blade's thermal state is numerically simulated in a conjugate formulation using the ANSYS CFX 13.0 software package. Critical locations in the blade design are determined from the distribution of heat fluxes, and measures aimed at achieving more efficient cooling are analyzed. A substantially lower (by 50-100°C) maximum metal temperature has been achieved as a result of the performed work.

  18. Simulation Methods in the Contact with Impact of Rigid Bodies

    Directory of Open Access Journals (Sweden)

    Cristina Basarabă-Opritescu

    2007-10-01

    Full Text Available The analysis of impacts between elastic bodies is a topical problem with many practical and theoretical applications. The elastic character of the collision is demonstrated chiefly by the velocities of certain parts of a particular body, named the “ring”. In the presented paper, elastic collisions are illustrated through simulation with the ANSYS program for the particular case of the ring, with the mechanical characteristics given in the paper

  19. Modelling and simulation of diffusive processes methods and applications

    CERN Document Server

    Basu, SK

    2014-01-01

    This book addresses the key issues in the modeling and simulation of diffusive processes from a wide spectrum of different applications across a broad range of disciplines. Features: discusses diffusion and molecular transport in living cells and suspended sediment in open channels; examines the modeling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modeling of nitrogen fate and transport

  20. Sensitivity of Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations

    Science.gov (United States)

    2016-06-12

    Venkatesh Babu; Kumar Kulkarni; Sanjay…

    Two methods to simulate blast loading from a charge buried in soil, viz., (1) coupled discrete element & particle gas methods (DEM-PGM) and (2) Arbitrary Lagrangian-Eulerian (ALE), are investigated. The … DEM_PGM and identify the limitations/strengths compared to the ALE method. The Discrete Element Method (DEM) can model individual particles directly, and …

  1. Advanced scientific computational methods and their applications to nuclear technologies. (3) Introduction of continuum simulation methods and their applications (3)

    International Nuclear Information System (INIS)

    Satake, Shin-ichi; Kunugi, Tomoaki

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has been prepared in serial form. This is the third issue, presenting continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)

  2. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    Science.gov (United States)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and the Green function algorithm are implemented for the numerical simulation of electromagnetic waves in Schwarzschild space-time. The FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. The Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and the Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
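
The FDTD part of this approach rests on the ordinary leapfrog update; the curvature enters only through the equivalent medium's coefficients. A minimal flat-space 1-D sketch of that update structure (normalized units, Courant number 1, soft Gaussian source; cell counts and source parameters are arbitrary):

```python
import numpy as np

# Minimal 1-D FDTD leapfrog update in flat space-time. The curved-space-time
# scheme replaces this vacuum by an equivalent inhomogeneous medium, i.e.
# position-dependent update coefficients; the staggered E/H structure is the same.
n_cells, n_steps = 400, 300
ez = np.zeros(n_cells)          # E field at integer grid points
hy = np.zeros(n_cells - 1)      # H field at half-integer points

for step in range(n_steps):
    hy += ez[1:] - ez[:-1]                            # H update (half step)
    ez[1:-1] += hy[1:] - hy[:-1]                      # E update (half step later)
    ez[100] += np.exp(-((step - 30.0) ** 2) / 100.0)  # soft source at cell 100

print(float(np.max(np.abs(ez))))  # a pulse is propagating in the grid
```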

  3. Simulation Methods for Multiperiodic and Aperiodic Nanostructured Dielectric Waveguides

    DEFF Research Database (Denmark)

    Paulsen, Moritz; Neustock, Lars Thorben; Jahns, Sabrina

    …on Rudin-Shapiro, Fibonacci, and Thue-Morse binary sequences. The near-field and far-field properties are calculated employing the finite-element method (FEM), the finite-difference time-domain (FDTD) method, as well as a rigorous coupled wave algorithm (RCWA).

  4. Nonequilibrium relaxation method – An alternative simulation strategy

    Indian Academy of Sciences (India)

    This equilibrium method traces over the standard theory of the thermal … The purpose of this article is to give a concise review of the idea of the NER method … using the NER functions from a mixed initial configuration, that is, half of the system.

  5. Simulation of crystalline pattern formation by the MPFC method

    Directory of Open Access Journals (Sweden)

    Starodumov Ilya

    2017-01-01

    Full Text Available The Phase Field Crystal model in hyperbolic formulation (modified PFC, or MPFC) is investigated as one of the most promising techniques for modeling the formation of crystal patterns. MPFC is a convenient and fundamentally based description linking nano- and meso-scale processes in the evolution of crystal structures. The presented model is a powerful tool for mathematical modeling of various operations in manufacturing, among them the definition of process conditions for the production of metal castings with predetermined properties, the prediction of defects in the crystal structure during casting, and the evaluation of the quality of special coatings. Our paper presents the structure diagram which was calculated for the one-mode MPFC model and compared to the results of numerical simulation for fast phase transitions. The diagram is verified by the numerical simulation and also correlates strongly with previously calculated diagrams. The computations have been performed using software based on an efficient parallel computational algorithm.
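
For reference, the one-mode MPFC evolution equation, as it is commonly written in the hyperbolic phase-field-crystal literature, with ψ the rescaled atomic density field, τ the relaxation time of the flux, and r a temperature-like control parameter (the paper's exact notation may differ), reads:

```latex
\tau \frac{\partial^{2}\psi}{\partial t^{2}} + \frac{\partial \psi}{\partial t}
  = \nabla^{2}\!\left[\left(r + \left(1 + \nabla^{2}\right)^{2}\right)\psi + \psi^{3}\right]
```

Setting τ = 0 recovers the parabolic PFC equation; the second-order time derivative introduces the finite-speed relaxation modes relevant to the fast phase transitions studied here.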

  6. Numerical simulation methods of fires in nuclear power plants

    International Nuclear Information System (INIS)

    Keski-Rahkonen, O.; Bjoerkman, J.; Heikkilae, L.

    1992-01-01

    Fire is a significant hazard to the safety of nuclear power plants (NPPs). A fire may be a serious accident in itself, but even a small fire at a critical point in an NPP may cause an accident much more serious than the fire itself. According to risk assessments, a fire may be an initiating cause or a contributing factor in a large part of reactor accidents. At the Fire Technology Laboratory and the Nuclear Engineering Laboratory of the Technical Research Centre of Finland (VTT), fire safety research for NPPs has been carried out to a large extent since 1985. During the years 1988-92 the project Advanced Numerical Modelling in Nuclear Power Plants (PALOME) was carried out. In the project, the level of numerical modelling for fire research in Finland was improved by acquiring, preparing for use, and developing numerical fire simulation programs. Large scale test data of the German experimental program (PHDR Sicherheitsprogramm at Kernforschungszentrum Karlsruhe) have been used as a reference. The large scale tests were simulated by numerical codes and the results were compared to calculations carried out by others. Scientific interaction with outstanding foreign laboratories and scientists has been an important part of the project. This report describes only the work of the PALOME project carried out at the Fire Technology Laboratory. A report on the work at the Nuclear Engineering Laboratory will be published separately. (au)

  7. The PDF method for Lagrangian two-phase flow simulations

    International Nuclear Information System (INIS)

    Minier, J.P.; Pozorski, J.

    1996-04-01

    A recent turbulence model put forward by Pope (1991) in the context of PDF modelling has been used. In this approach, the one-point joint velocity-dissipation PDF equation is solved by simulating the instantaneous behaviour of a large number of Lagrangian fluid particles. Closure of the evolution equations of these Lagrangian particles is based on stochastic models, and more specifically on diffusion processes. Such models are of direct use for two-phase flow modelling, where the so-called "fluid seen" by discrete inclusions has to be modelled. Full Lagrangian simulations have been performed for shear flows. It is emphasized that this approach gives far more information than traditional turbulence closures (such as the K-ε model) and therefore can be very useful for situations involving complex physics. It is also believed that the present model represents the first step towards a complete Lagrangian-Lagrangian model for dispersed two-phase flow problems. (authors). 21 refs., 6 figs
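
The diffusion-process closure mentioned here has the structure of a Langevin (Ornstein-Uhlenbeck) equation for the particle velocity. The sketch below integrates such a model with Euler-Maruyama and checks its stationary variance; the relaxation time and variance are arbitrary illustrative values, not the coefficients of Pope's simplified Langevin model (which ties them to C0 and the dissipation).

```python
import numpy as np

rng = np.random.default_rng(42)

# Ornstein-Uhlenbeck sketch of a Langevin-type closure: each Lagrangian
# particle carries a velocity U relaxing to the (zero) mean over a time
# scale T_L, with white-noise forcing sized so that the stationary
# variance is sigma2.
n_particles, T_L, sigma2 = 20000, 0.5, 2.0
dt, n_steps = 0.01, 500
U = np.zeros(n_particles)

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_particles)  # Wiener increments
    U += -(U / T_L) * dt + np.sqrt(2.0 * sigma2 / T_L) * dW

print(round(float(U.var()), 3))  # ~ sigma2 once statistically stationary
```

Because each particle carries its own stochastic state, one-point statistics of any order can be formed from the ensemble, which is the "far more information" advantage over moment closures.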

  8. McStas event logger

    DEFF Research Database (Denmark)

    Bergbäck Knudsen, Erik; Willendrup, Peter Kjær; Klinkby, Esben Bryndt

    2014-01-01

    Functionality is added to the McStas neutron ray-tracing code which allows individual neutron states before and after a scattering to be temporarily stored and analysed. This logging mechanism has multiple uses, including studies of longitudinal intensity loss in neutron guides and guide coatings…

  9. Angus McBean - Portraits

    NARCIS (Netherlands)

    Pepper, T.

    2007-01-01

    Angus McBean (1904-90) was one of the most extraordinary British photographers of the twentieth century. In a career that spanned the start of the Second World War through the birth of the 'Swinging Sixties' to the 1980s, he became the most prominent theatre photographer of his generation and, along

  10. NTS MC and A History

    International Nuclear Information System (INIS)

    Mary Alice Price; Kim Young

    2008-01-01

    Within the past three and a half years, the Nevada Test Site (NTS) has progressed from a Category IV to a Category I nuclear material facility. In accordance with direction from the U.S. Department of Energy (DOE) Secretary and National Nuclear Security Administration (NNSA) Administrator, NTS received shipments of large quantities of special nuclear material from Los Alamos National Laboratory (LANL) and other sites in the DOE complex. December 2004 was the first occurrence of Category I material at the NTS, with the exception of two weeks of sub-critical underground testing in 2001, since 1992. The Material Control and Accountability (MC and A) program was originally a joint effort by LANL, Lawrence Livermore National Laboratory, and Bechtel Nevada, but in March 2006 the NNSA Nevada Site Office appointed the NTS Management and Operations contractor with sole responsibility. This paper will discuss the process and steps taken to transition the NTS MC and A program from multiple organizations to a single entity and from a Category IV to a Category I program. This transition flourished as MC and A progressed from the 2004 Office of Assessment (OA) rating of 'Significant Weakness' to the 2007 OA assessment rating of 'Effective Performance'. The paper will provide timelines, funding and staffing issues, OA assessment findings and corrective actions, and future expectations. The process has been challenging, but MC and A's innovative responses to the challenges have been very successful

  11. A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.

    Science.gov (United States)

    Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L

    2018-05-16

    During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical actions of these tasks. The aim of this work was to describe a method for determining whether a simulator does or does not have sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA was found for several tasks where the simulator lacked functionality. By developing a detailed list of specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into a simulator's clinical accuracy and suitability. Unexpected positive FTA ratings of functional deficits suggest that further revision of the survey method is required.
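
Lawshe's method, used above to aggregate candidate items into the performance checklist, reduces to a content validity ratio computed per item from panelist ratings:

```python
# Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2), where n_e
# panelists out of N rate an item "essential". CVR ranges from -1 (no
# panelist rates it essential) to +1 (all do); items below a critical
# value for the panel size are dropped.
def cvr(n_essential, n_panelists):
    half = n_panelists / 2.0
    return (n_essential - half) / half

print(cvr(9, 10), cvr(5, 10), cvr(0, 10))  # 0.8 0.0 -1.0
```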

  12. ExMC Technology Watch

    Science.gov (United States)

    Krihak, M.; Barr, Y.; Watkins, S.; Fung, P.; McGrath, T.; Baumann, D.

    2012-01-01

    The Technology Watch (Tech Watch) project is a NASA endeavor conducted under the Human Research Program's (HRP) Exploration Medical Capability (ExMC) element, and focusing on ExMC technology gaps. The project involves several NASA centers, including the Johnson Space Center (JSC), Glenn Research Center (GRC), Ames Research Center (ARC), and the Langley Research Center (LaRC). The objective of Tech Watch is to identify emerging, high-impact technologies that augment current NASA HRP technology development efforts. Identifying such technologies accelerates the development of medical care and research capabilities for the mitigation of potential health issues encountered during human space exploration missions. The aim of this process is to leverage technologies developed by academia, industry and other government agencies and to identify the effective utilization of NASA resources to maximize the HRP return on investment. The establishment of collaborations with these entities is beneficial to technology development, assessment and/or insertion and further NASA's goal to provide a safe and healthy environment for human exploration. In 2011, the major focus areas for Tech Watch included information dissemination, education outreach and public accessibility to technology gaps and gap reports. The dissemination of information was accomplished through site visits to research laboratories and/or companies, and participation at select conferences where Tech Watch objectives and technology gaps were presented. Presentation of such material provided researchers with insights on NASA ExMC needs for space exploration and an opportunity to discuss potential areas of common interest. The second focus area, education outreach, was accomplished via two mechanisms. First, several senior student projects, each related to an ExMC technology gap, were sponsored by the various NASA centers. 
These projects presented ExMC related technology problems firsthand to collegiate laboratories

  13. Cutting Method of the CAD model of the Nuclear facility for Dismantling Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ikjune; Choi, ByungSeon; Hyun, Dongjun; Jeong, KwanSeong; Kim, GeunHo; Lee, Jonghwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    Current methods for process simulation cannot simulate the cutting operation flexibly. As-is, to simulate a cutting operation, the user needs to prepare the result models of the cutting operation in advance, based on a pre-defined cutting path, depth, and thickness with respect to a dismantling scenario; and those preparations must be built again as the scenario changes. To-be, the user can change parameters and scenarios dynamically within a simulation configuration process, saving the time and effort needed to simulate cutting operations. This study presents a methodology for cutting operations that can be applied to all the procedures in the simulation of the dismantling of nuclear facilities. We developed a cutting simulation module for cutting operations in the dismantling of nuclear facilities based on the proposed cutting methodology. We defined the requirements of the model cutting methodology based on the requirements of the dismantling of nuclear facilities, and we implemented the cutting simulation module based on the API of a commercial CAD system.

  14. Algorithm simulating the atom displacement processes induced by the gamma rays on the base of Monte Carlo method

    International Nuclear Information System (INIS)

    Cruz, C. M.; Pinera, I; Abreu, Y.; Leyva, A.

    2007-01-01

    The present work concerns the implementation of a Monte Carlo based calculation algorithm describing the occurrence of atom displacements induced by gamma radiation interactions in a given target material. The atom displacement processes were considered only on the basis of single elastic scattering interactions of fast secondary electrons with matrix atoms, which are ejected from their crystalline sites at recoil energies higher than a given threshold energy. The secondary electron transport was described with the typical approaches in this field, in which consecutive small-angle scattering and very low energy transfer events behave as continuous, quasi-classical changes of the electron state along a given path length, delimited by discrete events with large scattering angles and high energy losses occurring in a random way. A limiting scattering angle was introduced and calculated according to Moliere-Bethe-Goudsmit-Saunderson electron multiple scattering theory, which allows single scattering processes of secondary electrons to be split from multiple ones, whereby a modified McKinley-Feshbach electron elastic scattering cross section arises. This distribution was statistically sampled and simulated in the framework of the Monte Carlo method to perform discrete single electron scattering events, particularly those leading to atom displacements. The possibility of adding this algorithm to existing open Monte Carlo code systems is analyzed, in order to improve their capabilities. (Author)
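
Statistically sampling an angular cross section, as done here for the modified McKinley-Feshbach distribution, is commonly implemented by rejection sampling. The sketch below uses an illustrative screened-Rutherford-type angular density as a stand-in (the screening parameter is hypothetical), not the actual cross section of the paper.

```python
import math
import random

random.seed(7)

# Rejection sampling of a polar scattering angle from an illustrative
# forward-peaked density p(theta) ~ sin(theta)/(1 - cos(theta) + 2*eta)**2.
eta = 0.05  # hypothetical screening parameter

def pdf(theta):
    return math.sin(theta) / (1.0 - math.cos(theta) + 2.0 * eta) ** 2

# Constant envelope: bound the (unnormalized) density on (0, pi) numerically.
M = 1.05 * max(pdf(1e-4 + i * (math.pi - 2e-4) / 9999) for i in range(10000))

def sample_angle():
    while True:                       # accept/reject against the envelope
        theta = random.uniform(0.0, math.pi)
        if random.uniform(0.0, M) < pdf(theta):
            return theta

angles = [sample_angle() for _ in range(5000)]
print(sum(angles) / len(angles))  # forward-peaked: mean well below pi/2
```

In a transport code, each accepted angle would be tested against the displacement threshold to decide whether the struck atom is ejected.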

  15. Comparative Study on Two Melting Simulation Methods: Melting Curve of Gold

    International Nuclear Information System (INIS)

    Liu Zhong-Li; Li Rui; Sun Jun-Sheng; Zhang Xiu-Lu; Cai Ling-Cang

    2016-01-01

    Melting simulation methods are of crucial importance for determining the melting temperature of materials efficiently. A high-efficiency melting simulation method saves much simulation time and computational resources. To compare the efficiency of our newly developed shock melting (SM) method with that of the well-established two-phase (TP) method, we calculate the high-pressure melting curve of Au using the two methods based on optimally selected interatomic potentials. Although we only use 640 atoms to determine the melting temperature of Au in the SM method, the resulting melting curve accords very well with the results from the TP method using many more atoms. This shows that a much smaller system size in the SM method can still achieve a fully converged melting curve compared with the TP method, implying the robustness and efficiency of the SM method. (paper)

  16. Multigrid methods for fully implicit oil reservoir simulation

    Energy Technology Data Exchange (ETDEWEB)

    Molenaar, J.

    1995-12-31

    In this paper, the authors consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations, and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations, there are two approaches: the fully implicit or simultaneous solution method, and the sequential solution method. In this paper, the authors consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations.
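
As an illustration of the multigrid idea on the simplest elliptic model problem, the sketch below runs V-cycles with weighted-Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation for 1-D Poisson. A reservoir simulator would apply this kind of cycle to the much harder linearized systems arising inside a fully implicit Newton step; this is a textbook sketch, not the paper's solver.

```python
import numpy as np

# V-cycle multigrid for -u'' = f on [0, 1] with homogeneous Dirichlet
# boundaries, discretized by second-order central differences.
def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    for _ in range(sweeps):          # weighted (damped) Jacobi
        u[1:-1] += w * ((u[:-2] + u[2:] + h * h * f[1:-1]) / 2.0 - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    if u.size <= 3:                  # coarsest grid: solve directly
        u[1] = h * h * f[1] / 2.0
        return u
    u = smooth(u, f, h)              # pre-smoothing
    r = residual(u, f, h)
    rc = r[::2].copy()               # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)             # linear-interpolation prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)       # coarse-grid correction + post-smoothing

n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)     # exact solution: sin(pi * x)
u = np.zeros(n)
for _ in range(20):
    u = v_cycle(u, f, h)

print(float(np.max(np.abs(u - np.sin(np.pi * x)))))  # ~ discretization error
```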

  17. Industrial development of a simulation method for ore recovery evaluation

    International Nuclear Information System (INIS)

    Deraisme; De Fouquet; Fraisse

    1983-01-01

    The purpose of downstream geostatistics is to provide engineers responsible for mining project studies with a method for predicting the ore reserve recovery of different mining methods and for choosing the best one according to economic criteria. In the case of the BEN LOMOND uranium deposit, the metal recovery at the production stage depends on the geometry of mineralized lenses. For the first step of this study the usual technique for constructing a numerical model of the deposit has been used, but this does not reproduce the geological structures very precisely. The recovered reserves have been computed for three more or less selective mining methods. This has been done by inputting the outlines of stopes on a digitizer. In the case of a cut-and-fill method, an automatic algorithm for optimization under constraints has been developed

  18. Phase portrait methods for verifying fluid dynamic simulations

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, H.B.

    1989-01-01

    As computing resources become more powerful and accessible, engineers more frequently face the difficult and challenging problem of accurately simulating nonlinear dynamic phenomena. Although mathematical models are usually available, in the form of initial value problems for differential equations, the behavior of the solutions of nonlinear models is often poorly understood. A notable example is fluid dynamics: while the Navier-Stokes equations are believed to correctly describe turbulent flow, no exact mathematical solution of these equations in the turbulent regime is known. Differential equations can of course be solved numerically, but how are we to assess numerical solutions of complex phenomena without some understanding of the mathematical problem and its solutions to guide us?

  19. Coupling methods for parallel running RELAPSim codes in nuclear power plant simulation

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yankai; Lin, Meng, E-mail: linmeng@sjtu.edu.cn; Yang, Yanhua

    2016-02-15

    When the plant is modeled in detail for high precision, it is hard to achieve real-time calculation with a single RELAP5 process in a large-scale simulation. To improve the speed and at the same time ensure the precision of the simulation, coupling methods for parallel running RELAPSim codes were proposed in this study. An explicit coupling method via coupling boundaries was realized based on a data-exchange and procedure-control environment. The synchronization frequency was chosen as a compromise to improve the precision of the simulation while guaranteeing real-time execution. The coupling methods were assessed using both single-phase flow models and two-phase flow models, and good agreement was obtained between the splitting-coupling models and the integrated model. The mitigation of SGTR was simulated as an integral application of the coupling models. A large-scope NPP simulator was developed adopting six splitting-coupling models of RELAPSim and other simulation codes. The coupling models could improve the speed of simulation significantly and make real-time calculation possible. In this paper, the coupling of the models in the engineering simulator is taken as an example to expound the coupling methods, i.e., coupling between parallel running RELAPSim codes, and coupling between RELAPSim code and other types of simulation codes. However, the coupling methods are also applicable to other simulators, for example, a simulator employing ATHLET instead of RELAP5, or another logic code instead of SIMULINK. It is believed the coupling methods are generally applicable to NPP simulators regardless of the specific codes chosen in this paper.

  20. Advanced scientific computational methods and their applications to nuclear technologies. (4) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (4)

    International Nuclear Information System (INIS)

    Sekimura, Naoto; Okita, Taira

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has been prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed, covering processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method, and dislocation dynamics. (T. Tanaka)
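
Of the methods listed, kinetic Monte Carlo is easy to sketch: the residence-time (BKL/Gillespie) algorithm draws an exponential waiting time from the total event rate and then executes an event. The two-state hop rates below are arbitrary illustrative values.

```python
import math
import random

random.seed(3)

# Residence-time kinetic Monte Carlo: a defect hops between two states A
# and B with rates k_ab and k_ba; each event advances the clock by an
# exponentially distributed waiting time drawn from the total rate.
k_ab, k_ba = 2.0, 1.0
state, t = "A", 0.0
time_in = {"A": 0.0, "B": 0.0}

for _ in range(200000):
    k_tot = k_ab if state == "A" else k_ba      # one possible event per state here
    dt = -math.log(1.0 - random.random()) / k_tot
    time_in[state] += dt
    t += dt
    state = "B" if state == "A" else "A"

frac_b = time_in["B"] / t
print(round(frac_b, 3))  # stationary occupancy of B: k_ab/(k_ab + k_ba) = 2/3
```

With many competing events, the same loop sums all rates and selects one event with probability proportional to its rate; that generalization is what makes KMC suitable for long-time defect evolution.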

  1. Application of Conjugate Gradient methods to tidal simulation

    Science.gov (United States)

    Barragy, E.; Carey, G.F.; Walters, R.A.

    1993-01-01

    A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several Incomplete LU type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver.

  2. Hybrid statistics-simulations based method for atom-counting from ADF STEM images

    Energy Technology Data Exchange (ETDEWEB)

    De wael, Annelies, E-mail: annelies.dewael@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); De Backer, Annick [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Jones, Lewys; Nellist, Peter D. [Department of Materials, University of Oxford, Parks Road, OX1 3PH Oxford (United Kingdom); Van Aert, Sandra, E-mail: sandra.vanaert@uantwerpen.be [Electron Microscopy for Materials Science (EMAT), University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium)

    2017-06-15

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. - Highlights: • A hybrid method for atom-counting from ADF STEM images is introduced. • Image simulations are incorporated into a statistical framework in a reliable manner. • Limits of the existing methods for atom-counting are far exceeded. • Reliable counting results from an experimental low dose image are obtained. • Progress towards reliable quantitative analysis of beam-sensitive materials is made.

  3. Simulation of electrically driven jet using Chebyshev collocation method

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The model of an electrically driven jet is governed by a series of quasi-one-dimensional dimensionless partial differential equations (PDEs). Following the method of lines, the Chebyshev collocation method is employed to discretize the PDEs and obtain a system of differential-algebraic equations (DAEs). By differentiating the constraints in the DAEs twice, the system is transformed into a set of ordinary differential equations (ODEs) with invariants. The implicit differential equation solver "ddaskr" is then used to solve the ODEs and ...
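    For readers unfamiliar with the discretization step, the standard Chebyshev collocation differentiation matrix (the textbook construction popularized by Trefethen) can be built as follows. This is a generic sketch, not the jet solver itself:

```python
import math

def cheb_diff_matrix(n):
    """Chebyshev collocation points x_j = cos(j*pi/n), j = 0..n, and the
    differentiation matrix D such that sum_j D[i][j]*f(x_j) ~ f'(x_i)."""
    x = [math.cos(j * math.pi / n) for j in range(n + 1)]
    c = [2.0 if j in (0, n) else 1.0 for j in range(n + 1)]
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1.0) ** (i + j) / (x[i] - x[j])
        # "negative sum trick" for the diagonal entries improves accuracy
        D[i][i] = -sum(D[i][j] for j in range(n + 1) if j != i)
    return x, D

# Spectral differentiation is exact for polynomials up to degree n:
x, D = cheb_diff_matrix(8)
f = [xi ** 3 for xi in x]
df = [sum(D[i][j] * f[j] for j in range(9)) for i in range(9)]   # ~ 3*x**2
```

    Applying D to the sampled unknowns converts the spatial derivatives in the PDEs into algebraic couplings, which is what produces the DAE system mentioned in the abstract.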

  4. FDTD method using for electrodynamic simulation of resonator accelerating structures

    International Nuclear Information System (INIS)

    Vorogushin, M.F.; Svistunov, Yu.A.; Chetverikov, I.O.; Malyshev, V.N.; Malyukhov, M.V.

    2000-01-01

    The finite-difference time-domain (FDTD) method makes it possible to model both stationary and nonstationary processes originating from the interaction of beam and field. The capabilities of the method are demonstrated by modeling the fields in resonant accelerating structures. Besides solving for the eigenfrequencies and spatial distributions of the resonators' eigenmodes, the method's ability to treat transient processes is important. The program presented makes it possible to obtain practical results for modeling accelerating structures on personal computers.
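    The flavour of the method can be conveyed by a bare-bones 1-D FDTD update in normalized units with a soft Gaussian source; this generic sketch is unrelated to the program described in the record:

```python
import math

def fdtd_1d(n_cells=200, n_steps=260, source_cell=100):
    """Bare-bones 1-D FDTD leapfrog update (normalized units, vacuum,
    Courant factor 0.5, soft Gaussian source, reflecting grid ends)."""
    ez = [0.0] * n_cells            # electric field at integer nodes
    hy = [0.0] * n_cells            # magnetic field at half-integer nodes
    for t in range(n_steps):
        for k in range(1, n_cells):             # E update from the curl of H
            ez[k] += 0.5 * (hy[k - 1] - hy[k])
        ez[source_cell] += math.exp(-((t - 30.0) / 10.0) ** 2)  # soft source
        for k in range(n_cells - 1):            # H update from the curl of E
            hy[k] += 0.5 * (ez[k] - ez[k + 1])
    return ez

ez = fdtd_1d()   # the injected pulse splits and propagates both ways
```

    The leapfrogged E and H updates are the essence of FDTD; resonator modeling adds the geometry, boundary conditions, and excitation appropriate to the structure.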

  5. Numerical simulation methods for phase-transitional flow

    NARCIS (Netherlands)

    Pecenko, A.

    2010-01-01

    The object of the present dissertation is a numerical study of multiphase flow of one fluid component. In particular, the research described in this thesis focuses on the development of numerical methods that are based on a diffuse-interface model (DIM). With this approach, the modeling problem

  6. Clearance gap flow: Simulations by discontinuous Galerkin method and experiments

    Czech Academy of Sciences Publication Activity Database

    Hála, Jindřich; Luxa, Martin; Bublík, O.; Prausová, H.; Vimmr, J.

    2016-01-01

    Roč. 92, May (2016), 02073-02073 ISSN 2100-014X. [EFM14 – Experimental Fluid Mechanics 2014. Český Krumlov, 18.11.2014-21.11.2014] Institutional support: RVO:61388998 Keywords : compressible fluid flow * narrow channel flow * discontinuous Galerkin finite element method Subject RIV: BK - Fluid Dynamics

  7. TreePM Method for Two-Dimensional Cosmological Simulations ...

    Indian Academy of Sciences (India)


    We discuss the integration of the equations of motion that we use in the 2d TreePM code in section 7. .... spaced values of r in order to keep interpolation errors in control. .... hence we cannot use the usual leap-frog method. We recast the ...

  8. A Simulator to Enhance Teaching and Learning of Mining Methods ...

    African Journals Online (AJOL)

    Audio visual education that incorporates devices and materials which involve sight, sound, or both has become a sine qua non in recent times in the teaching and learning process. An automated physical model of mining methods aided with video instructions was designed and constructed by harnessing locally available ...

  9. A new method to estimate heat source parameters in gas metal arc welding simulation process

    International Nuclear Information System (INIS)

    Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi

    2014-01-01

    Highlights: •A new method for accurate estimation of heat source parameters is presented. •Partial least-squares regression analysis is recommended within the method. •Welding experiment results verified the accuracy of the proposed method. -- Abstract: Heat source parameters are usually chosen from experience in the welding simulation process, which introduces errors into the simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to estimate heat source parameters accurately in welding simulation. To reduce the simulation complexity, a sensitivity analysis of the heat source parameters was carried out. The relationships between heat source parameters and weld pool characteristics (fusion width (W), penetration depth (D) and peak temperature (Tp)) were obtained with both multiple regression analysis (MRA) and partial least-squares regression analysis (PLSRA). Different regression models were employed in each regression method, and the two methods were compared. A welding experiment was carried out to verify the method. The results showed that both MRA and PLSRA are feasible and accurate for prediction of heat source parameters in welding simulation. However, PLSRA is recommended for its advantage of requiring less simulation data.
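    The MRA step can be sketched with ordinary least squares via the normal equations. The linear relation and the numbers below are synthetic and purely illustrative (PLSR requires more machinery and is omitted):

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y.
    Each row of X is an observation [1, feature_1, feature_2, ...]."""
    n, m = len(X[0]), len(X)
    A = [[sum(X[r][i] * X[r][j] for r in range(m)) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs

# Hypothetical example: suppose a heat-source parameter q depends on
# fusion width W and penetration depth D as q = 1 + 2*W + 0.5*D.
obs = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0), (4.0, 5.0), (5.0, 2.0)]
X = [[1.0, w, d] for w, d in obs]
y = [1.0 + 2.0 * w + 0.5 * d for w, d in obs]
coeffs = fit_linear(X, y)   # recovers ~[1.0, 2.0, 0.5]
```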

  10. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    Science.gov (United States)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes key features of the method and provides a synopsis of the main results obtained by various groups using it. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
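    The PVI normalization itself is compact enough to sketch directly: each increment magnitude is divided by the root-mean-square increment, so a threshold such as PVI > 3 flags unusually sharp gradients. The signal below is synthetic:

```python
import math

def pvi_series(b, lag=1):
    """Partial Variance of Increments:
    PVI_i = |b[i + lag] - b[i]| / sqrt(<|increment|^2>)."""
    inc = [abs(b[i + lag] - b[i]) for i in range(len(b) - lag)]
    rms = math.sqrt(sum(v * v for v in inc) / len(inc))
    return [v / rms for v in inc]

# A smooth signal with one sharp jump: the jump appears as a PVI spike,
# the kind of event identified as a coherent structure candidate.
signal = [math.sin(0.1 * i) + (5.0 if i >= 50 else 0.0) for i in range(100)]
pvi = pvi_series(signal)
```

    By construction the mean of PVI squared is one, so the series is directly comparable across data sets with different fluctuation amplitudes.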

  11. Methods for Factor Screening in Computer Simulation Experiments

    Science.gov (United States)

    1979-03-01

    …of the data in a-space impacts the variable selection problem significantly. Search-type variable selection methods include the all-possible… …severe multicollinearity is present and, consequently, the coefficients are very…

  12. Empirical method for simulation of water tables by digital computers

    International Nuclear Information System (INIS)

    Carnahan, C.L.; Fenske, P.R.

    1975-09-01

    An empirical method is described for computing a matrix of water-table elevations from a matrix of topographic elevations and a set of observed water-elevation control points which may be distributed randomly over the area of interest. The method is applicable to regions, such as the Great Basin, where the water table can be assumed to conform to a subdued image of overlying topography. A first approximation to the water table is computed by smoothing a matrix of topographic elevations and adjusting each node of the smoothed matrix according to a linear regression between observed water elevations and smoothed topographic elevations. Each observed control point is assumed to exert a radially decreasing influence on the first approximation surface. The first approximation is then adjusted further to conform to observed water-table elevations near control points. Outside the domain of control, the first approximation is assumed to represent the most probable configuration of the water table. The method has been applied to the Nevada Test Site and the Hot Creek Valley areas in Nevada
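    A minimal sketch of the first-approximation step, assuming a 3x3 moving average as the smoothing operator and a simple least-squares line for the regression (both concrete choices are illustrative, not taken from the report):

```python
def smooth(grid, passes=2):
    """3x3 moving-average smoothing of a 2-D elevation matrix (edge cells
    average over whatever neighbours exist)."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(passes):
        out = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                cells = [grid[a][b]
                         for a in range(max(0, i - 1), min(rows, i + 2))
                         for b in range(max(0, j - 1), min(cols, j + 2))]
                out[i][j] = sum(cells) / len(cells)
        grid = out
    return grid

def first_approximation(topo, controls):
    """First approximation of the water table: regress observed water
    elevations against the smoothed topography, then apply the fit at
    every node.  `controls` is a list of (row, col, water_elevation)."""
    s = smooth(topo)
    xs = [s[i][j] for i, j, _ in controls]
    ys = [w for _, _, w in controls]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return [[intercept + slope * v for v in row] for row in s]
```

    The report then warps this first approximation toward the observed control points with a radially decaying influence; that adjustment step is omitted here.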

  13. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Directory of Open Access Journals (Sweden)

    Shankarjee Krishnamoorthi

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  14. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Science.gov (United States)

    Krishnamoorthi, Shankarjee; Perotti, Luigi E; Borgstrom, Nils P; Ajijola, Olujimi A; Frid, Anna; Ponnaluri, Aditya V; Weiss, James N; Qu, Zhilin; Klug, William S; Ennis, Daniel B; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  15. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

    A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADS. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses to facilitate searches for such tools. Some indications are given on the effects of inappropriate or 'blind' use of existing tools for ADS. Reference is made to available experimental data that can be used for validating the methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  16. Methods and Simulations of Muon Tomography and Reconstruction

    Science.gov (United States)

    Schreiner, Henry Fredrick, III

    This dissertation investigates imaging with cosmic ray muons using scintillator-based portable particle detectors, and covers a variety of the elements required for the detectors to operate and take data, from the detector internal communications and software algorithms to a measurement to allow accurate predictions of the attenuation of physical targets. A discussion of the tracking process for the three layer helical design developed at UT Austin is presented, with details of the data acquisition system, and the highly efficient data format. Upgrades to this system provide a stable system for taking images in harsh or inaccessible environments, such as in a remote jungle in Belize. A Geant4 Monte Carlo simulation was used to develop our understanding of the efficiency of the system, as well as to make predictions for a variety of different targets. The projection process is discussed, with a high-speed algorithm for sweeping a plane through data in near real time, to be used in applications requiring a search through space for target discovery. Several other projections and a foundation of high fidelity 3D reconstructions are covered. A variable binning scheme for rapidly varying statistics over portions of an image plane is also presented and used. A discrepancy in our predictions and the observed attenuation through smaller targets is shown, and it is resolved with a new measurement of low energy spectrum, using a specially designed enclosure to make a series of measurements underwater. This provides a better basis for understanding the images of small amounts of materials, such as for thin cover materials.

  17. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    Science.gov (United States)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array must be considered to predict the read margin characteristic of the array, because the read margin depends on the number of word lines and bit lines. However, an excessively long CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. The simulator is also highly efficient in analyzing the characteristics of the crossbar memory array considering the statistical variations in the cell characteristics.

  18. New features in McStas, version 1.5

    DEFF Research Database (Denmark)

    Åstrand, P.O.; Lefmann, K.; Farhi, E.

    2002-01-01

    The neutron ray-tracing simulation package McStas has attracted numerous users, and the development of the package continues with version 1.5 released at the ICNS 2001 conference. New features include: support for neutron polarisation, labelling of neutrons, realistic source and sample components, and interface to the Riso instrument-control software TASCOM. We give a general introduction to McStas and present the latest developments. In particular, we give an example of how the neutron-label option has been used to locate the origin of a spurious side-peak, observed in an experiment with RITA-1 at Riso.

  19. New features in McStas, version 1.5

    International Nuclear Information System (INIS)

    Aastrand, P.O.; Lefmann, K.; Nielsen, K.; Skaarup, P.; Farhi, E.

    2002-01-01

    The neutron ray-tracing simulation package McStas has attracted numerous users, and the development of the package continues with version 1.5 released at the ICNS 2001 conference. New features include: support for neutron polarisation, labelling of neutrons, realistic source and sample components, and interface to the Riso instrument-control software TASCOM. We give a general introduction to McStas and present the latest developments. In particular, we give an example of how the neutron-label option has been used to locate the origin of a spurious side-peak, observed in an experiment with RITA-1 at Riso. (orig.)

  20. SCIENTIFIC PROGRESS OF THE MC-PAD NETWORK

    CERN Document Server

    Aguilar, J; Ambalathankandy, P; Apostolakis, J; Arora, R; Balog, T; Behnke, T; Beltrame, P; Bencivenni, G; Caiazza, S; Dong, J; Heller, M; Heuser, J; Idzik, M; Joram, C; Klanner, R; Koffeman, E; Korpar, S; Kramberger, G; Lohmann, W; Milovanović, M; Miscetti, S; Moll, M; Novgorodova, O; Pacifico, N; Pirvutoiu, C; Radu, R; Rahman, S; Rohe, T; Ropelewski, L; Roukoutakis, F; Schmidt, C; Schön, R; Sibille, J; Tsagri, M; Turala, M; Van Beuzekom, M; Verheyden, R; Villa, M; Zappon, F; Zawiejski, L; Zhang, J

    2013-01-01

    MC-PAD is a multi-site Initial Training Network on particle detectors in physics experiments. It comprises nine academic participants, three industrial partners and two associated academic partners. 17 recruited Early Stage and 5 Experienced Researchers have performed their scientific work in the network. The research and development work of MC-PAD is organized in 12 work packages, which focus on a large variety of aspects of particle detector development, electronics as well as simulation and modelling. The network was established in November 2008 and lasted until October 2012 (48 months). This report describes the R&D activities and highlights the main results achieved during this period.

  1. A simulation of portable PET with a new geometric image reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456 8611 (Japan): Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474 8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644 0023 (Japan)

    2006-12-20

    A new method is proposed for three-dimensional positron emission tomography image reconstruction. The method uses the elementary geometric property of lines of response whereby two lines of response originating from radioactive isotopes at the same position lie within a few millimeters of each other. The method differs from the filtered back projection method and the iterative reconstruction method. The method is applied to a simulation of portable positron emission tomography.
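    The geometric property the method relies on, two lines of response passing within a few millimeters of each other, reduces to computing the minimal distance between two lines in 3-D. A generic sketch of that computation (not the reconstruction algorithm itself):

```python
import math

def lor_distance(p1, d1, p2, d2):
    """Minimal distance between two lines, each given by a point p and a
    direction d in 3-D (a line of response joins two detector hits)."""
    # cross product of the two direction vectors
    cx = d1[1] * d2[2] - d1[2] * d2[1]
    cy = d1[2] * d2[0] - d1[0] * d2[2]
    cz = d1[0] * d2[1] - d1[1] * d2[0]
    norm = math.sqrt(cx * cx + cy * cy + cz * cz)
    dp = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    if norm < 1e-12:
        # parallel lines: distance from p2 to the first line
        dlen2 = sum(v * v for v in d1)
        t = sum(a * b for a, b in zip(dp, d1)) / dlen2
        return math.sqrt(sum((dp[i] - t * d1[i]) ** 2 for i in range(3)))
    # skew (or intersecting) lines: project dp onto the common normal
    return abs(dp[0] * cx + dp[1] * cy + dp[2] * cz) / norm

# Two LORs passing within a few millimeters of each other are candidates
# for having originated from the same annihilation point.
```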

  2. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    Science.gov (United States)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  3. Simulation of the 2-dimensional Drude’s model using molecular dynamics method

    Energy Technology Data Exchange (ETDEWEB)

    Naa, Christian Fredy; Amin, Aisyah; Ramli,; Suprijadi,; Djamal, Mitra [Theoretical High Energy Physics and Instrumentation Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia); Wahyoedi, Seramika Ari; Viridi, Sparisoma, E-mail: viridi@cphys.fi.itb.ac.id [Nuclear and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)

    2015-04-16

    In this paper, we report the results of a simulation of electronic conduction in solids. The simulation is based on the Drude model, applying the molecular dynamics (MD) method with the fifth-order predictor-corrector algorithm. A formula for the electrical conductivity as a function of lattice length and ion diameter, τ(L, d), can be obtained empirically from the simulation results.

  4. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON

    International Nuclear Information System (INIS)

    BEEBE - WANG, J.; LUCCIO, A.U.; D IMPERIO, N.; MACHIDA, S.

    2002-01-01

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed
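    Whatever the tracking code, a common building block of space charge solvers is depositing macro-particle charge onto a grid before solving for the fields. A one-dimensional cloud-in-cell (linear weighting) sketch, purely illustrative:

```python
def deposit_charge(positions, n_cells, dx=1.0):
    """Cloud-in-cell (linear weighting) deposition of unit macro-particle
    charges onto a periodic 1-D grid; positions are assumed to lie in
    [0, n_cells * dx)."""
    rho = [0.0] * n_cells
    for x in positions:
        cell = int(x / dx)              # index of the grid node to the left
        frac = x / dx - cell            # fractional distance past that node
        rho[cell % n_cells] += 1.0 - frac
        rho[(cell + 1) % n_cells] += frac   # periodic wrap for the right node
    return rho

# A particle mid-way between nodes 2 and 3 shares its charge equally.
rho = deposit_charge([2.5], 10)
```

    Codes like those compared in the record differ mainly in how the resulting density is turned into fields (2D/3D Poisson solvers, FFTs) and how the space charge kick is applied back to the tracked particles.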

  5. SPACE CHARGE SIMULATION METHODS INCORPORATED IN SOME MULTI - PARTICLE TRACKING CODES AND THEIR RESULTS COMPARISON.

    Energy Technology Data Exchange (ETDEWEB)

    BEEBE - WANG,J.; LUCCIO,A.U.; D IMPERIO,N.; MACHIDA,S.

    2002-06-03

    Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problem, the most effective way of investigating its effect is by computer simulations. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.

  6. Highly efficient molecular simulation methods for evaluation of thermodynamic properties of crystalline phases

    Science.gov (United States)

    Moustafa, Sabry Gad Al-Hak Mohammad

    Molecular simulation (MS) methods (e.g. Monte Carlo (MC) and molecular dynamics (MD)) provide a reliable tool (especially at extreme conditions) to measure solid properties. However, measuring them accurately and efficiently (smallest uncertainty for a given time) using MS can be a big challenge, especially with ab initio-type models. In addition, extrapolating properties from finite size to the thermodynamic limit for comparison with experimental results can be a critical obstacle. We first estimate the free energy (FE) of a crystalline system of a simple discontinuous potential, hard spheres (HS), at its melting condition. Several approaches are explored to determine the most efficient route. The comparison study shows a considerable improvement in efficiency over the standard MS methods that are known for solid phases. In addition, we were able to accurately extrapolate to the thermodynamic limit using relatively small system sizes. Although the method is applied to the HS model, it is readily extended to more complex hard-body potentials, such as hard tetrahedra. The harmonic approximation of the potential energy surface is usually an accurate model (especially at low temperature and high density) for many realistic solid phases; in addition, since the analysis is done numerically, the method is relatively cheap. Here, we apply lattice dynamics (LD) techniques to obtain the FE of clathrate hydrate structures. A rigid-bond model is assumed to describe the water molecules; this, however, requires additional orientational degrees of freedom to specify each molecule. We were nevertheless able to avoid using those degrees of freedom through a mathematical transformation that uses only the atomic coordinates of the water molecules. In addition, the proton-disorder nature of hydrate water networks adds extra complexity to the problem, especially when extrapolation to the thermodynamic limit is needed. The finite-size effects of the proton disorder contribution is

  7. Aqueous-Phase Synthesis of Silver Nanodiscs and Nanorods in Methyl Cellulose Matrix: Photophysical Study and Simulation of UV–Vis Extinction Spectra Using DDA Method

    Directory of Open Access Journals (Sweden)

    Sarkar Priyanka

    2010-01-01

    We present a very simple and effective way to synthesize tunable coloured silver sols with different morphologies. The procedure is based on the seed-mediated growth approach, where methyl cellulose (MC) is used as a soft template in the growth solution. Nanostructures of varying morphologies, as well as the colour of the silver sols, are controlled by altering the concentration of citrate in the growth solution. Like the polymers in the solution, citrate ions also adsorb dynamically on the growing silver nanoparticles and promote one-dimensional (1-D) and two-dimensional (2-D) growth of the nanoparticles. The silver nanostructures are characterized by UV–vis spectroscopy and HR-TEM. Simulation of the UV–vis extinction spectra of the synthesized silver nanostructures has been carried out using the discrete dipole approximation (DDA) method.

  8. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to demonstrate the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
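    The flavour of the subject can be conveyed by a crude Monte Carlo estimate of the failure probability of a hypothetical 2-out-of-3 redundant system with independent components; for component failure probability p = 0.1 the exact answer is 3p²(1-p) + p³ = 0.028. Note that for genuinely rare events this direct estimator converges slowly, which is what variance-reduction techniques address:

```python
import random

def simulate_system_failure(p_fail, n_trials, rng=random):
    """Crude Monte Carlo estimate of the failure probability of a
    2-out-of-3 redundant system with independent component failure
    probabilities `p_fail` (a hypothetical example system)."""
    failures = 0
    for _ in range(n_trials):
        failed = sum(1 for p in p_fail if rng.random() < p)
        if failed >= 2:                 # system fails if 2+ components fail
            failures += 1
    return failures / n_trials

random.seed(2013)
estimate = simulate_system_failure([0.1, 0.1, 0.1], 100000)
# estimate scatters around the exact value 3*0.1**2*0.9 + 0.1**3 = 0.028
```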

  9. 3D simulation of friction stir welding based on movable cellular automaton method

    Science.gov (United States)

    Eremina, Galina M.

    2017-12-01

    The paper is devoted to a 3D computer simulation of the peculiarities of material flow taking place in friction stir welding (FSW). The simulation was performed by the movable cellular automaton (MCA) method, which is a representative of particle methods in mechanics. Commonly, the flow of material in FSW is simulated based on computational fluid mechanics, treating the material as a continuum and ignoring its structure. The MCA method considers a material as an ensemble of bonded particles. The rupture of interparticle bonds and the formation of new bonds enable simulations of crack nucleation and healing as well as mass mixing and microwelding. The simulation results showed that using pins of simple shape (cylinder, cone, and pyramid) without a shoulder results in small displacements of the plasticized material in the workpiece thickness direction. Nevertheless, the optimal ratio of longitudinal velocity to rotational speed makes it possible to transport the welded material around the pin several times and to produce a joint of good quality.

  10. Atomistic Monte Carlo Simulation of Lipid Membranes

    Directory of Open Access Journals (Sweden)

    Daniel Wüstner

    2014-01-01

    Full Text Available Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
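    As a rough illustration of the Metropolis sampling that underlies such local move sets (not the CBC algorithm itself), the following sketch perturbs one internal angle at a time of a toy chain with fixed bond lengths; the potential, inverse temperature, and step size are invented for the example:

    ```python
    import math
    import random

    def energy(angles):
        """Toy internal potential favouring extended (trans) conformations."""
        return sum(1.0 + math.cos(a) for a in angles)

    def metropolis_sweep(angles, beta, step=0.3, rng=random.Random(1)):
        for k in range(len(angles)):
            trial = list(angles)
            trial[k] += rng.uniform(-step, step)      # local single-angle move
            dE = energy(trial) - energy(angles)
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                angles = trial                        # Metropolis acceptance
        return angles

    angles = [0.5] * 10
    for _ in range(200):
        angles = metropolis_sweep(angles, beta=5.0)
    print(energy(angles) < energy([0.5] * 10))  # relaxed toward the trans minimum
    ```

    Real lipid move sets add collective moves (reptation, chain closure) because single-angle moves alone equilibrate long chains slowly.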

  11. Development and simulation of various methods for neutron activation analysis

    International Nuclear Information System (INIS)

    Otgooloi, B.

    1993-01-01

    Simple methods for neutron activation analysis have been developed. Results are shown from studies of an installation for the determination of fluorine in fluorite ores, directly on the lorry, by fast neutron activation analysis. Nitrogen in organic materials was determined by N 14 and N 15 activation. The new equipment 'FLUORITE' for a fluorite factory is briefly described. Pu and Be isotopes in organic materials, including wheat, were measured. 25 figs, 19 tabs. (Author, Translated by J.U)

  12. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation is a method that determines particle paths and dosimetry using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, longer simulation times are necessary to reduce the uncertainty of the results. When generating the particles from the cobalt source in a simulation, many particles are cut off, so an accurate simulation takes time. For efficiency, we generated a virtual source that has the phase space distribution acquired from a single gamma knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with the simulation using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code, and there was no statistically significant difference in the simulated results.
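    The replay idea can be sketched as follows: phase-space records captured once for a single channel are resampled instead of re-tracing every primary through the collimator. The record format and values below are hypothetical:

    ```python
    import random

    # Phase-space records captured once for a single channel (hypothetical
    # format: energy in MeV and a direction cosine per primary photon).
    phase_space = [(1.17, 0.99), (1.33, 0.98), (1.17, 0.97), (1.33, 0.995)]

    def sample_source(n, rng=random.Random(0)):
        """Draw n primaries by resampling the stored phase space."""
        return [rng.choice(phase_space) for _ in range(n)]

    particles = sample_source(5)
    print(len(particles))  # 5 primaries drawn without re-tracing the collimator
    ```

    The speed-up in the paper comes from skipping exactly this kind of upstream transport for every history.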

  13. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    International Nuclear Information System (INIS)

    Kim, Tae Hoon; Kim, Yong Kyun; Chung, Hyun Tai

    2016-01-01

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation is a method that determines particle paths and dosimetry using random numbers. Recently, owing to the fast processing ability of computers, it has become possible to treat a patient more precisely. However, longer simulation times are necessary to reduce the uncertainty of the results. When generating the particles from the cobalt source in a simulation, many particles are cut off, so an accurate simulation takes time. For efficiency, we generated a virtual source that has the phase space distribution acquired from a single gamma knife channel. We performed the simulation using the virtual sources on the 201 channels and compared the measurement with the simulation using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code, and there was no statistically significant difference in the simulated results.

  14. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    Science.gov (United States)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of 2 eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.
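    For readers unfamiliar with the time integrator named here, a minimal sketch of one classical 4th-order Runge-Kutta step, applied to the scalar test equation u' = -u rather than to the Navier-Stokes system:

    ```python
    import math

    def rk4_step(f, u, t, dt):
        """One classical 4th-order Runge-Kutta step for u' = f(t, u)."""
        k1 = f(t, u)
        k2 = f(t + dt / 2, u + dt * k1 / 2)
        k3 = f(t + dt / 2, u + dt * k2 / 2)
        k4 = f(t + dt, u + dt * k3)
        return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    u, t, dt = 1.0, 0.0, 0.01
    for _ in range(100):
        u = rk4_step(lambda t, y: -y, u, t, dt)
        t += dt
    print(abs(u - math.exp(-1.0)) < 1e-8)  # True: matches exp(-t) at t = 1
    ```

    In the method of lines, `u` is the full vector of spectral-element degrees of freedom and `f` is the spatial residual; the stage structure is identical.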

  15. An adaptative finite element method for turbulent flow simulations

    International Nuclear Information System (INIS)

    Arnoux-Guisse, F.; Bonnin, O.; Leal de Sousa, L.; Nicolas, G.

    1995-05-01

    After outlining the space and time discretization methods used in the N3S thermal hydraulic code developed at EDF/NHL, we describe the possibilities of the peripheral version, the Adaptative Mesh, which comprises two separate parts: the error indicator computation and the development of a module subdividing elements, usable by the solid dynamics code ASTER and the electromagnetism code TRIFOU, also developed by R and DD. The error indicators implemented in N3S are described. They consist of a projection indicator quantifying the space error in laminar or turbulent flow calculations and a Navier-Stokes residue indicator calculated on each element. The method for subdivision of triangles into four sub-triangles and of tetrahedra into eight sub-tetrahedra is then presented with its advantages and drawbacks. It is illustrated by examples showing the efficiency of the module. The last example concerns the 2D case of flow behind a backward-facing step. (authors). 9 refs., 5 figs., 1 tab

  16. Heterogeneous Rock Simulation Using DIP-Micromechanics-Statistical Methods

    Directory of Open Access Journals (Sweden)

    H. Molladavoodi

    2018-01-01

    Full Text Available Rock as a natural material is heterogeneous. Rock material consists of minerals, crystals, cement, grains, and microcracks. Each component of rock has a different mechanical behavior under applied loading condition. Therefore, rock component distribution has an important effect on rock mechanical behavior, especially in the postpeak region. In this paper, the rock sample was studied by digital image processing (DIP, micromechanics, and statistical methods. Using image processing, volume fractions of the rock minerals composing the rock sample were evaluated precisely. The mechanical properties of the rock matrix were determined based on upscaling micromechanics. In order to consider the rock heterogeneities effect on mechanical behavior, the heterogeneity index was calculated in a framework of statistical method. A Weibull distribution function was fitted to the Young modulus distribution of minerals. Finally, statistical and Mohr–Coulomb strain-softening models were used simultaneously as a constitutive model in DEM code. The acoustic emission, strain energy release, and the effect of rock heterogeneities on the postpeak behavior process were investigated. The numerical results are in good agreement with experimental data.
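    The Weibull step above can be illustrated with the standard library's `weibullvariate(scale, shape)` generator; the shape value and 30 GPa mean below are illustrative, not the paper's fitted values:

    ```python
    import math
    import random

    def sample_moduli(mean_E, shape, n, rng):
        """Draw n element Young's moduli from a Weibull law with mean mean_E."""
        # Scale chosen so that the Weibull mean equals mean_E.
        scale = mean_E / math.gamma(1.0 + 1.0 / shape)
        return [rng.weibullvariate(scale, shape) for _ in range(n)]

    rng = random.Random(42)
    moduli = sample_moduli(30.0, 3.0, 10000, rng)
    mean = sum(moduli) / len(moduli)
    print(abs(mean - 30.0) < 1.0)  # sample mean close to the 30 GPa target
    ```

    Assigning each numerical element a modulus drawn this way is one common way to inject the measured heterogeneity into a DEM or FEM model.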

  17. Simulation As a Method To Support Complex Organizational Transformations in Healthcare

    NARCIS (Netherlands)

    Rothengatter, D.C.F.; Katsma, Christiaan; van Hillegersberg, Jos

    2010-01-01

    In this paper we study the application of simulation as a method to support information system and process design in complex organizational transitions. We apply a combined use of a collaborative workshop approach with the use of a detailed and accurate graphical simulation model in a hospital that

  18. Discrete event simulation of crop operations in sweet pepper in support of work method innovation

    NARCIS (Netherlands)

    Ooster, van 't Bert; Aantjes, Wiger; Melamed, Z.

    2017-01-01

    Greenhouse Work Simulation, GWorkS, is a model that simulates crop operations in greenhouses for the purpose of analysing work methods. GWorkS is a discrete event model that approaches reality as a discrete stochastic dynamic system. GWorkS was developed and validated using cut-rose as a case

  19. An analytical method to simulate the H I 21-cm visibility signal for intensity mapping experiments

    Science.gov (United States)

    Sarkar, Anjan Kumar; Bharadwaj, Somnath; Marthi, Visweshwar Ram

    2018-01-01

    Simulations play a vital role in testing and validating H I 21-cm power spectrum estimation techniques. Conventional methods use techniques like N-body simulations to simulate the sky signal which is then passed through a model of the instrument. This makes it necessary to simulate the H I distribution in a large cosmological volume, and incorporate both the light-cone effect and the telescope's chromatic response. The computational requirements may be particularly large if one wishes to simulate many realizations of the signal. In this paper, we present an analytical method to simulate the H I visibility signal. This is particularly efficient if one wishes to simulate a large number of realizations of the signal. Our method is based on theoretical predictions of the visibility correlation which incorporate both the light-cone effect and the telescope's chromatic response. We have demonstrated this method by applying it to simulate the H I visibility signal for the upcoming Ooty Wide Field Array Phase I.
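    The core of such an analytical approach, drawing Gaussian realizations consistent with a prescribed visibility correlation matrix rather than running an N-body plus instrument pipeline, can be sketched with a small Cholesky factorization (the 3x3 correlation values are illustrative):

    ```python
    import math
    import random

    def cholesky(C):
        """Lower-triangular L with L L^T = C (plain Python, small matrices)."""
        n = len(C)
        L = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1):
                s = sum(L[i][k] * L[j][k] for k in range(j))
                if i == j:
                    L[i][j] = math.sqrt(C[i][i] - s)
                else:
                    L[i][j] = (C[i][j] - s) / L[j][j]
        return L

    def realization(L, rng):
        """One correlated Gaussian draw: x = L z with z standard normal."""
        z = [rng.gauss(0.0, 1.0) for _ in L]
        return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(L))]

    C = [[1.0, 0.6, 0.3], [0.6, 1.0, 0.6], [0.3, 0.6, 1.0]]
    L = cholesky(C)
    rng = random.Random(7)
    samples = [realization(L, rng) for _ in range(20000)]
    var0 = sum(s[0] * s[0] for s in samples) / len(samples)
    print(abs(var0 - 1.0) < 0.1)  # sample variance of component 0 matches C[0][0]
    ```

    In the actual application the matrix would be the theoretically predicted visibility correlation across baselines and frequency channels, and the draws would be complex-valued.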

  20. Simulation-based investigation of the paired-gear method in cod-end selectivity studies

    DEFF Research Database (Denmark)

    Herrmann, Bent; Frandsen, Rikke; Holst, René

    2007-01-01

    In this paper, the paired-gear and covered cod-end methods for estimating the selectivity of trawl cod-ends are compared. A modified version of the cod-end selectivity simulator PRESEMO is used to simulate the data that would be collected from a paired-gear experiment where the test cod-end also ...

  1. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods; La methode du recuit simule pour la conception des circuits electroniques: adaptation et comparaison avec d`autres methodes d`optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times ...). This task is equivalent to a multidimensional and/or multiobjective optimization problem: n-variable functions have to be minimized in a hyper-rectangular domain; equality constraints can eventually be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, originating from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted with analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with an open electrical simulator, SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. We proposed, for high-dimensional problems, a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods coming from the combinatorial optimization domain: the threshold method, a genetic algorithm, and the Tabu search method. The tests have been performed on the same set of test functions and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
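    A generic simulated-annealing loop of the kind discussed can be sketched as follows, minimizing a simple sphere function; the cooling schedule, step size, and test function are illustrative, not those of the thesis:

    ```python
    import math
    import random

    def anneal(f, x, T0=10.0, cooling=0.995, steps=4000, step=0.3,
               rng=random.Random(3)):
        T, fx = T0, f(x)
        best, fbest = list(x), fx
        for _ in range(steps):
            trial = [xi + rng.gauss(0.0, step) for xi in x]
            ft = f(trial)
            # Metropolis acceptance: always downhill, sometimes uphill.
            if ft <= fx or rng.random() < math.exp(-(ft - fx) / T):
                x, fx = trial, ft
                if fx < fbest:
                    best, fbest = list(x), fx
            T *= cooling  # geometric cooling schedule
        return best, fbest

    sphere = lambda v: sum(xi * xi for xi in v)
    best, fbest = anneal(sphere, [5.0, -4.0, 3.0])
    print(fbest < 1.0)  # near the global minimum at the origin
    ```

    In the circuit application, `f` would wrap a SPICE run and return the cost built on the user's operational criteria, which is why chaining simulations efficiently matters.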

  2. Hybrid vortex simulations of wind turbines using a three-dimensional viscous-inviscid panel method

    DEFF Research Database (Denmark)

    Ramos García, Néstor; Hejlesen, Mads Mølholm; Sørensen, Jens Nørkær

    2017-01-01

    A hybrid filament-mesh vortex method is proposed and validated to predict the aerodynamic performance of wind turbine rotors and to simulate the resulting wake. Its novelty consists of using a hybrid method to accurately simulate the wake downstream of the wind turbine while reducing ... a direct calculation, whereas the contribution from the large downstream wake is calculated using a mesh-based method. The hybrid method is first validated in detail against the well-known MEXICO experiment, using the direct filament method as a comparison. The second part of the validation includes a study ...

  3. Some recent developments of the immersed interface method for flow simulation

    Science.gov (United States)

    Xu, Sheng

    2017-11-01

    The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.

  4. Research of Monte Carlo method used in simulation of different maintenance processes

    International Nuclear Information System (INIS)

    Zhao Siqiao; Liu Jingquan

    2011-01-01

    The paper introduces two kinds of Monte Carlo methods used to simulate the equipment life process under the least-maintenance condition: a method of producing the interval of lifetime, and a method of time scale conversion. The paper also analyzes the characteristics and the scope of application of the two methods. By using the concept of a service age reduction factor, a model of the equipment's life process under the incomplete-maintenance condition is established, and a life process simulation method applicable to this situation is developed. (authors)
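    The service-age-reduction idea can be sketched with a Kijima-style virtual-age model: each imperfect repair rewinds the unit's effective age by a factor q, and the next failure time is drawn from the conditional Weibull law. All parameter values below are illustrative, not from the paper:

    ```python
    import math
    import random

    def next_failure(virtual_age, eta, beta, rng):
        """Conditional Weibull(eta, beta) draw: time to failure given current age."""
        e = -math.log(1.0 - rng.random())            # unit exponential draw
        total_age = eta * ((virtual_age / eta) ** beta + e) ** (1.0 / beta)
        return total_age - virtual_age

    def simulate_failures(horizon, eta, beta, q, rng=random.Random(5)):
        t, v, times = 0.0, 0.0, []
        while True:
            x = next_failure(v, eta, beta, rng)
            t += x
            if t > horizon:
                return times
            times.append(t)
            v = q * (v + x)   # imperfect repair: age rewound, not reset to zero

    times = simulate_failures(horizon=1000.0, eta=100.0, beta=2.0, q=0.5)
    print(len(times))         # number of failures over the horizon
    ```

    Setting q = 0 recovers perfect repair (as-good-as-new) and q = 1 recovers minimal repair (as-bad-as-old), the two limiting cases the paper's methods address.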

  5. Simulation of a complete inelastic neutron scattering experiment

    DEFF Research Database (Denmark)

    Edwards, H.; Lefmann, K.; Lake, B.

    2002-01-01

    A simulation of an inelastic neutron scattering experiment on the high-temperature superconductor La2-xSrxCuO4 is presented. The complete experiment, including sample, is simulated using an interface between the experiment control program and the simulation software package (McStas) and is compared with the experimental data. Simulating the entire experiment is an attractive alternative to the usual method of convoluting the model cross section with the resolution function, especially if the resolution function is nontrivial.

  6. Non-analogue Monte Carlo method, application to neutron simulation; Methode de Monte Carlo non analogue, application a la simulation des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Morillon, B.

    1996-12-31

    With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, the natural simulation remains inefficient: it becomes necessary to use biasing techniques, where the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions: these methods require the user to find out some parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; then we show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique where the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic shocks and for multigroup problems with anisotropic shocks. The results show that for the one-group, homogeneous-geometry transport problems the method is quite optimal without the splitting and Russian roulette techniques, but for the multigroup, heterogeneous X-Y geometry ones the figures of merit are higher if we add splitting and Russian roulette.
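    Splitting and Russian roulette can be illustrated with a generic weight-window routine (not Tripoli's actual implementation; the thresholds are invented): heavy particles are split and light ones are killed probabilistically, while the expected total weight is preserved:

    ```python
    import random

    def weight_window(particles, w_low=0.25, w_high=2.0, rng=random.Random(9)):
        out = []
        for w in particles:
            if w > w_high:                    # splitting: divide heavy particles
                n = int(w / w_high) + 1
                out.extend([w / n] * n)
            elif w < w_low:                   # Russian roulette: kill or boost
                if rng.random() < w / w_low:
                    out.append(w_low)         # survivor weight keeps E[w] unchanged
            else:
                out.append(w)
        return out

    split = weight_window([4.0])              # the splitting branch is deterministic
    print(len(split), abs(sum(split) - 4.0) < 1e-9)  # 3 True: weight conserved
    ```

    The importance function discussed in the abstract is what sets `w_low` and `w_high` region by region, so that computational effort concentrates where particles contribute most to the score.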

  7. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between the number of iterations and the simulations was estimated, and the option to simulate scattered radiation was evaluated. Simulations using the adaptive method ran at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method is highly effective for a simulation that requires a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.
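    As a reminder of the statistical process the adaptive method accelerates, here is a minimal Monte Carlo sampling of photon free paths through a uniform slab from the Beer-Lambert law (the attenuation coefficient is illustrative):

    ```python
    import math
    import random

    def transmitted_fraction(mu, thickness, n, rng=random.Random(11)):
        """Fraction of photons whose sampled free path exceeds the slab thickness."""
        hits = sum(1 for _ in range(n)
                   if -math.log(1.0 - rng.random()) / mu > thickness)
        return hits / n

    mu, thickness = 0.2, 5.0            # mu * t = 1 mean free path
    est = transmitted_fraction(mu, thickness, 200000)
    print(abs(est - math.exp(-1.0)) < 0.01)  # matches exp(-mu * t) ≈ 0.368
    ```

    A CT simulation repeats this kind of sampling for every ray and detector element, which is why the statistical approach needs so many histories and why a deterministic adaptation pays off.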

  8. METHODS OF SIMULATION ON THE MAP OF ETHNOGEOGRAPHICAL KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    A. M. Saraeva

    2017-01-01

    Full Text Available The article deals with the features of the spatial representation of the location of objects and phenomena on the Earth. One type of cartographic representation is modeling on the contour map. The advantages of the method are revealed, and modeling techniques are applied that allow ethnogeographic data to be included in the characteristics of a territory and reflected on the contour map. The basis of ethnogeographic modeling is the identification and representation of elements of the material and spiritual culture of peoples by means of conventional signs. Comparison of these elements, their superposition with respect to each other, and their comparison with geographic maps make it possible to determine the interrelations and dependencies among the phenomena. Modeling on contour maps is a basic method of learning in geography. On the one hand, it creates a cartographic image of the studied territory; on the other hand, it facilitates the creation of "visual supports" on the map. When modeling on contour maps, students first enter the basic geographical names, which serve as base knowledge. Then, by purposefully analyzing and comparing the thematic maps of the atlas or textbook, the students reflect specific ethnogeographical knowledge on contour maps. As a result, contour maps acquire "their own face" and do not become a simple copy of maps from an atlas or textbook. The effect of this technique on the formation of spatial representations of the studied object has also been analyzed. Thanks to the cartographic model, one can maintain a constant cognitive interest in the material studied. Modeling on the contour map makes it possible to present the structure of the links between the elements of the ethnogeographical material. The basis of ethnogeographic modeling on the contour map is the identification and mapping of elements of the material and spiritual culture of

  9. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.; Ghommem, Mehdi; Kagadis, George C.; Katsanos, Konstantinos H.; Loukopoulos, Vassilios C.; Burganos, Vasilis N.; Nikiforidis, George C.

    2014-01-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide a reliable forecasting of tumor ablation treatment simulation in real time, which is quite needed in medical practice. To achieve this, an extended Pennes bioheat model must

  10. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper. The probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
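    The level-by-level construction can be sketched on a toy 1-D problem, estimating P(X > 3) for a standard normal with a short Metropolis chain at each level; all tuning values are illustrative:

    ```python
    import math
    import random

    def subset_simulation(g_fail=3.0, n=2000, p0=0.1, rng=random.Random(2)):
        """Estimate P(x > g_fail) for x ~ N(0, 1) by subset simulation."""
        samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
        prob = 1.0
        for _ in range(20):                        # at most 20 levels
            samples.sort()
            seeds = samples[-int(p0 * n):]         # top p0 fraction become seeds
            level = seeds[0]                       # intermediate threshold
            if level >= g_fail:                    # final level reached
                return prob * sum(1 for x in samples if x > g_fail) / n
            prob *= p0
            samples = []                           # regrow population above level
            for seed in seeds:
                x = seed
                for _ in range(int(1 / p0)):       # short Metropolis chain
                    cand = x + rng.gauss(0.0, 1.0)
                    if cand > level and rng.random() < math.exp((x * x - cand * cand) / 2):
                        x = cand
                    samples.append(x)
        return prob

    p = subset_simulation()
    print(1e-4 < p < 1e-2)  # exact answer: P(N > 3) ≈ 1.35e-3
    ```

    In the passive-system application, the scalar `x` is replaced by a vector of uncertain inputs and the threshold test by a run of the thermal-hydraulic model, but the product-of-conditionals structure is the same.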

  11. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment problem of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a program for the low-penetration simulation and a program for the high-penetration simulation. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
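    The sequential Monte Carlo engine behind such reliability studies can be sketched for a single repairable component alternating exponential up and down times (the rates are illustrative):

    ```python
    import random

    def availability(mtbf, mttr, horizon, rng=random.Random(4)):
        """Sequential MC: alternate exponential up and down times, track uptime."""
        t, up_time = 0.0, 0.0
        while t < horizon:
            up = rng.expovariate(1.0 / mtbf)       # time to next failure
            up_time += min(up, horizon - t)
            t += up
            if t >= horizon:
                break
            t += rng.expovariate(1.0 / mttr)       # repair duration
        return up_time / horizon

    a = availability(mtbf=100.0, mttr=10.0, horizon=1e6)
    print(abs(a - 100.0 / 110.0) < 0.01)  # analytic value MTBF/(MTBF+MTTR)
    ```

    A full distribution-system study runs such state sequences for every component, applies the load shedding strategy to each failure event, and aggregates customer-level indices from the results.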

  12. Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations

    NARCIS (Netherlands)

    B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)

    2017-01-01

    textabstractIn this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly

  13. The McDonaldization of Higher Education.

    Science.gov (United States)

    Hayes, Dennis, Ed.; Wynyard, Robin, Ed.

    The essays in this collection discuss the future of the university in the context of the "McDonaldization" of society and of academia. The idea of McDonaldization, a term coined by G. Ritzer (1998), provides a tool for looking at the university and its inevitable changes. The chapters are: (1) "Enchanting McUniversity: Toward a…

  14. McKenzie River Subbasin Assessment, Summary Report 2000.

    Energy Technology Data Exchange (ETDEWEB)

    Alsea Geospatial, Inc.

    2000-02-01

    This document summarizes the findings of the McKenzie River Subbasin Assessment: Technical Report. The subbasin assessment tells a story about the McKenzie River watershed. What is the McKenzie's ecological history, how is the McKenzie doing today, and where is the McKenzie watershed headed ecologically? Knowledge is a good foundation for action. The more we know, the better prepared we are to make decisions about the future. These decisions involve both protecting good remaining habitat and repairing some of the parts that are broken in the McKenzie River watershed. The subbasin assessment is the foundation for conservation strategy and actions. It provides a detailed ecological assessment of the lower McKenzie River and floodplain, identifies conservation and restoration opportunities, and discusses the influence of some upstream actions and processes on the study area. The assessment identifies restoration opportunities at the reach level. In this study, a reach is a river segment from 0.7 to 2.7 miles long and is defined by changes in land forms, land use, stream junctions, and/or cultural features. The assessment also provides flexible tools for setting priorities and planning projects. The goal of this summary is to clearly and concisely extract the key issues, findings, and recommendations from the full-length Technical Report. The high priority recommended action items highlight areas that the McKenzie Watershed Council can significantly influence, and that will likely yield the greatest ecological benefit. People are encouraged to read the full Technical Report if they are interested in the detailed methods, findings, and references used in this study.

  15. To improve training methods in an engine room simulator-based training

    OpenAIRE

    Lin, Chingshin

    2016-01-01

    Simulator-based training is widely used in both industry and school education to reduce accidents. This study aims to suggest improved training methods to increase the effectiveness of engine room simulator training. The effectiveness of training in the engine room is assessed by performance indicators and by the self-evaluation of participants. In the first phase of observation, the aim is to find out the possible shortcomings of current training methods based on train...

  16. Study of the quantitative analysis approach of maintenance by the Monte Carlo simulation method

    International Nuclear Information System (INIS)

    Shimizu, Takashi

    2007-01-01

    This study is an examination of the quantitative valuation of the maintenance activities of a nuclear power plant by the Monte Carlo simulation method. To this end, the concept of the quantitative valuation of maintenance, whose examination was advanced in the Japan Society of Maintenology and the International Institute of Universality (IUU), was arranged. A basic examination for the quantitative valuation of maintenance was carried out on a simple feed water system by the Monte Carlo simulation method. (author)

  17. Simulations

    CERN Document Server

    Ngada, Narcisse

    2015-06-15

    The complexity and cost of building and running high-power electrical systems make the use of simulations unavoidable. The simulations available today provide great understanding about how systems really operate. This paper helps the reader to gain an insight into simulation in the field of power converters for particle accelerators. Starting with the definition and basic principles of simulation, two simulation types, as well as their leading tools, are presented: analog and numerical simulations. Some practical applications of each simulation type are also considered. The final conclusion then summarizes the main important items to keep in mind before opting for a simulation tool or before performing a simulation.

  18. Evaluation of a proposed optimization method for discrete-event simulation models

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira de Pinho

    2012-12-01

    Full Text Available Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, in terms of current technology, these methods exhibit low performance, being able to manipulate only a single decision variable at a time. Thus, the objective of this article is to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time when compared to software packages on the market. It should be emphasized that the quality of the response will not be altered; that is, the proposed method will maintain the solutions' effectiveness. Thus, the study draws a comparison between the proposed method and a simulation instrument already available on the market which has been examined in academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.
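    A bare-bones genetic algorithm of the general kind evaluated can be sketched as follows, minimizing a toy "simulation response"; every GA setting here is illustrative, not the article's configuration:

    ```python
    import random

    def ga_minimize(f, dim=3, pop_size=30, gens=60, rng=random.Random(8)):
        pop = [[rng.uniform(-10.0, 10.0) for _ in range(dim)]
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=f)
            elite = pop[:pop_size // 2]              # truncation selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, dim)          # one-point crossover
                child = a[:cut] + b[cut:]
                if rng.random() < 0.3:               # gaussian mutation
                    child[rng.randrange(dim)] += rng.gauss(0.0, 0.5)
                children.append(child)
            pop = elite + children
        return min(pop, key=f)

    response = lambda v: sum((x - 2.0) ** 2 for x in v)  # optimum at (2, 2, 2)
    best = ga_minimize(response)
    print(response(best) < 3.0)  # close to the optimum
    ```

    In simulation optimization, each call to `f` is a discrete-event simulation run, so the GA's ability to improve several decision variables per generation is what drives the reported time savings.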

  19. Comparison of Two Methods for Speeding Up Flash Calculations in Compositional Simulations

    DEFF Research Database (Denmark)

    Belkadi, Abdelkrim; Yan, Wei; Michelsen, Michael Locht

    2011-01-01

    Flash calculation is the most time-consuming part of compositional reservoir simulation, and several approaches have been proposed to speed it up. Two recent approaches proposed in the literature are the shadow region method and the Compositional Space Adaptive Tabulation (CSAT) method. The shadow region method reduces the computation time mainly by skipping stability analysis for a large portion of compositions in the single-phase region. In the two-phase region, a highly efficient Newton-Raphson algorithm can be employed with initial estimates from the previous step. The CSAT method saves ... and the tolerance set for accepting the feed composition are the key parameters in this method, since they will influence the simulation speed and the accuracy of the simulation results. Inspired by CSAT, we proposed a Tieline Distance Based Approximation (TDBA) method to get approximate flash results in the two-phase ...

  20. INTEGRATING DATA ANALYTICS AND SIMULATION METHODS TO SUPPORT MANUFACTURING DECISION MAKING

    Science.gov (United States)

    Kibira, Deogratias; Hatim, Qais; Kumara, Soundar; Shao, Guodong

    2017-01-01

    Modern manufacturing systems are installed with smart devices such as sensors that monitor system performance and collect data to manage uncertainties in their operations. However, multiple parameters and variables affect system performance, making it impossible for a human to make informed decisions without systematic methodologies and tools. Further, the large volume and variety of the streaming data collected are beyond what simulation analysis alone can handle, since simulation models must be run with well-prepared data. Novel approaches combining different methods are needed to use these data for making guided decisions. This paper proposes a methodology whereby the parameters that most affect system performance are extracted from the data using data analytics methods. These parameters are used to develop scenarios for simulation inputs; system optimizations are then performed on the simulation outputs. A case study of a machine shop demonstrates the proposed methodology. This paper also reviews candidate standards for data collection, simulation, and systems interfaces. PMID:28690363

  1. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    International Nuclear Information System (INIS)

    Liu Jizhi; Chen Xingbi

    2009-01-01

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  2. A quasi-3-dimensional simulation method for a high-voltage level-shifting circuit structure

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jizhi; Chen Xingbi, E-mail: jzhliu@uestc.edu.c [State Key Laboratory of Electronic Thin Films and Integrated Devices, University of Electronic Science and Technology of China, Chengdu 610054 (China)

    2009-12-15

    A new quasi-three-dimensional (quasi-3D) numeric simulation method for a high-voltage level-shifting circuit structure is proposed. The performances of the 3D structure are analyzed by combining some 2D device structures; the 2D devices are in two planes perpendicular to each other and to the surface of the semiconductor. In comparison with Davinci, the full 3D device simulation tool, the quasi-3D simulation method can give results for the potential and current distribution of the 3D high-voltage level-shifting circuit structure with appropriate accuracy and the total CPU time for simulation is significantly reduced. The quasi-3D simulation technique can be used in many cases with advantages such as saving computing time, making no demands on the high-end computer terminals, and being easy to operate. (semiconductor integrated circuits)

  3. System dynamic simulation: A new method in social impact assessment (SIA)

    Energy Technology Data Exchange (ETDEWEB)

    Karami, Shobeir, E-mail: shobeirkarami@gmail.com [Agricultural Extension and Education, Shiraz University (Iran, Islamic Republic of); Karami, Ezatollah, E-mail: ekarami@shirazu.ac.ir [Agricultural Extension and Education, Shiraz University (Iran, Islamic Republic of); Buys, Laurie, E-mail: l.buys@qut.edu.au [Creative Industries Faculty, School of Design, Queensland University of Technology (Australia); Drogemuller, Robin, E-mail: robin.drogemuller@qut.edu.au [Creative Industries Faculty, School of Design, Queensland University of Technology (Australia)

    2017-01-15

    Many complex social questions are difficult to address adequately with conventional methods and techniques, owing to their complicated dynamics and hard-to-quantify social processes. Despite these difficulties, researchers and practitioners have attempted to use conventional methods not only in evaluative modes but also in predictive modes to inform decision making. The effectiveness of SIAs would be increased if they were used to support the project design process, which requires the deliberate use of lessons from retrospective assessments to inform predictive assessments. Social simulations may be a useful tool for developing a predictive SIA method, yet there have been limited attempts to develop computer simulations that allow social impacts to be explored and understood before implementing development projects. In light of this argument, this paper aims to introduce system dynamic (SD) simulation as a new predictive SIA method for large development projects. We propose the potential value of the SD approach for simulating the social impacts of development projects, and we use data from the SIA of the Gareh-Bygone floodwater spreading project to illustrate its potential. It is concluded that, in comparison with traditional SIA methods, SD simulation can integrate quantitative and qualitative inputs from different sources and methods and provides a more effective and dynamic assessment of the social impacts of development projects. We recommend future research to investigate the full potential of SD in SIA by comparing different situations and scenarios.

  4. Study on simulation methods of atrium building cooling load in hot and humid regions

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Yiqun; Li, Yuming; Huang, Zhizhong [Institute of Building Performance and Technology, Sino-German College of Applied Sciences, Tongji University, 1239 Siping Road, Shanghai 200092 (China); Wu, Gang [Weldtech Technology (Shanghai) Co. Ltd. (China)

    2010-10-15

    In recent years, highly glazed atria have become popular because of their architectural aesthetics and their ability to introduce daylight into the interior. However, estimating the cooling load of such atrium buildings is difficult due to the complex thermal phenomena that occur in the atrium space. This study aims to establish a simplified method for estimating cooling loads through simulations of various types of atria in hot and humid regions. Atrium buildings are divided into different types, and for each type both CFD and energy models are developed. A standard method and a simplified one are proposed for simulating the cooling load of atria in EnergyPlus, based on the different room air temperature patterns obtained from CFD simulation. The standard method incorporates CFD results as input to non-dimensional-height room air models in EnergyPlus, and its results are taken as the baseline against which the simplified method is compared for every category of atrium building. To further validate the simplified method, an actual atrium office building was measured on site during a typical summer day, and the measured results were compared with those simulated using the simplified method. Finally, appropriate methods for simulating the different types of atrium buildings are proposed. (author)

  5. System dynamic simulation: A new method in social impact assessment (SIA)

    International Nuclear Information System (INIS)

    Karami, Shobeir; Karami, Ezatollah; Buys, Laurie; Drogemuller, Robin

    2017-01-01

    Many complex social questions are difficult to address adequately with conventional methods and techniques, owing to their complicated dynamics and hard-to-quantify social processes. Despite these difficulties, researchers and practitioners have attempted to use conventional methods not only in evaluative modes but also in predictive modes to inform decision making. The effectiveness of SIAs would be increased if they were used to support the project design process, which requires the deliberate use of lessons from retrospective assessments to inform predictive assessments. Social simulations may be a useful tool for developing a predictive SIA method, yet there have been limited attempts to develop computer simulations that allow social impacts to be explored and understood before implementing development projects. In light of this argument, this paper aims to introduce system dynamic (SD) simulation as a new predictive SIA method for large development projects. We propose the potential value of the SD approach for simulating the social impacts of development projects, and we use data from the SIA of the Gareh-Bygone floodwater spreading project to illustrate its potential. It is concluded that, in comparison with traditional SIA methods, SD simulation can integrate quantitative and qualitative inputs from different sources and methods and provides a more effective and dynamic assessment of the social impacts of development projects. We recommend future research to investigate the full potential of SD in SIA by comparing different situations and scenarios.

  6. Concentration gradient driven molecular dynamics: a new method for simulations of membrane permeation and separation.

    Science.gov (United States)

    Ozcan, Aydin; Perego, Claudio; Salvalaglio, Matteo; Parrinello, Michele; Yazaydin, Ozgur

    2017-05-01

    In this study, we introduce a new non-equilibrium molecular dynamics simulation method to perform simulations of concentration driven membrane permeation processes. The methodology is based on the application of a non-conservative bias force controlling the concentration of species at the inlet and outlet of a membrane. We demonstrate our method for pure methane, ethane and ethylene permeation and for ethane/ethylene separation through a flexible ZIF-8 membrane. Results show that a stationary concentration gradient is maintained across the membrane, realistically simulating an out-of-equilibrium diffusive process, and the computed permeabilities and selectivity are in good agreement with experimental results.
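The paper's bias-force scheme cannot be reproduced without an MD engine, but the underlying idea — a localized non-conservative force that sustains a stationary concentration difference across a region — can be illustrated with a toy 1-D random-walk model. Everything below (geometry, step sizes, bias magnitude) is a hypothetical stand-in, not the authors' implementation.

```python
import random

# Toy 1-D analogue of concentration-controlled simulation: particles random-walk
# on [0, 1); inside a narrow "control region" around the middle they feel a small
# constant leftward bias, which sustains a higher steady-state density on the
# left ("inlet") side than on the right ("outlet") side.

def run(n_particles=500, steps=3000, step=0.01, bias=0.005, seed=7):
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n_particles)]
    for _ in range(steps):
        for i in range(n_particles):
            dx = rng.choice((-step, step))
            if 0.45 < x[i] < 0.55:       # control region: non-conservative bias force
                dx -= bias
            x[i] = min(max(x[i] + dx, 0.0), 0.999)  # reflecting walls
    left = sum(1 for xi in x if xi < 0.5)
    return left, n_particles - left

left, right = run()
```

At steady state the biased strip acts like the concentration-controlling force region in the paper: the particle count on the inlet side stays above that on the outlet side even though individual particles keep diffusing across.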

  7. A method for data handling numerical results in parallel OpenFOAM simulations

    International Nuclear Information System (INIS)

    Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania)]; Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania)]

    2015-01-01

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms

  8. A method for data handling numerical results in parallel OpenFOAM simulations

    Energy Technology Data Exchange (ETDEWEB)

    Anton, Alin [Faculty of Automatic Control and Computing, Politehnica University of Timişoara, 2nd Vasile Pârvan Ave., 300223, TM Timişoara, Romania, alin.anton@cs.upt.ro (Romania); Muntean, Sebastian [Center for Advanced Research in Engineering Science, Romanian Academy – Timişoara Branch, 24th Mihai Viteazu Ave., 300221, TM Timişoara (Romania)

    2015-12-31

    Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large-scale simulation results than the regular algorithms.

  9. Simulation methods supporting homologation of Electronic Stability Control in vehicle variants

    Science.gov (United States)

    Lutz, Albert; Schick, Bernhard; Holzmann, Henning; Kochem, Michael; Meyer-Tuve, Harald; Lange, Olav; Mao, Yiqin; Tosolin, Guido

    2017-10-01

    Vehicle simulation has a long tradition in the automotive industry as a powerful supplement to physical vehicle testing. In the field of Electronic Stability Control (ESC) system, the simulation process has been well established to support the ESC development and application by suppliers and Original Equipment Manufacturers (OEMs). The latest regulation of the United Nations Economic Commission for Europe UN/ECE-R 13 allows also for simulation-based homologation. This extends the usage of simulation from ESC development to homologation. This paper gives an overview of simulation methods, as well as processes and tools used for the homologation of ESC in vehicle variants. The paper first describes the generic homologation process according to the European Regulation (UN/ECE-R 13H, UN/ECE-R 13/11) and U.S. Federal Motor Vehicle Safety Standard (FMVSS 126). Subsequently the ESC system is explained as well as the generic application and release process at the supplier and OEM side. Coming up with the simulation methods, the ESC development and application process needs to be adapted for the virtual vehicles. The simulation environment, consisting of vehicle model, ESC model and simulation platform, is explained in detail with some exemplary use-cases. In the final section, examples of simulation-based ESC homologation in vehicle variants are shown for passenger cars, light trucks, heavy trucks and trailers. This paper is targeted to give a state-of-the-art account of the simulation methods supporting the homologation of ESC systems in vehicle variants. However, the described approach and the lessons learned can be used as reference in future for an extended usage of simulation-supported releases of the ESC system up to the development and release of driver assistance systems.

  10. COMPARISON OF METHODS FOR SIMULATING TSUNAMI RUN-UP THROUGH COASTAL FORESTS

    Directory of Open Access Journals (Sweden)

    Benazir

    2017-09-01

    Full Text Available The research is aimed at reviewing two numerical methods for modeling the effect of coastal forest on tsunami run-up and at proposing an alternative approach. The two existing methods, the Constant Roughness Model (CRM) and the Equivalent Roughness Model (ERM), simulate the effect of the forest by using an artificial Manning roughness coefficient. An alternative approach that simulates each tree as a vertical square column is introduced. Simulations were carried out with variations of forest density and of the layout pattern of the trees. The numerical model was validated using an existing data series of tsunami run-up without forest protection. The study indicated that the alternative method is in good agreement with the ERM method for low forest density. At higher density, and when the trees were planted in a zigzag pattern, the ERM produced significantly higher run-up. For a zigzag pattern at 50% forest density, which represents a watertight wall, both the ERM and CRM methods produced relatively high run-up, which should not happen theoretically; the alternative method, on the other hand, reflected the entire tsunami. In reality, a housing complex can be considered and simulated as a forest with various sizes and layouts of obstacles, where the alternative approach is applicable. The alternative method is more accurate than the existing methods for simulating a coastal forest for tsunami mitigation but consumes considerably more computational time.
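Both CRM and ERM represent the forest through an artificial Manning roughness coefficient. As a minimal illustration of how that coefficient enters the shallow-water momentum balance, the sketch below compares the Manning friction slope for bare ground and for an elevated "equivalent" forest roughness; all numbers are illustrative, not taken from the paper.

```python
# Friction slope from Manning's formula, S_f = n^2 * u * |u| / h^(4/3),
# which is how an artificial roughness coefficient (CRM/ERM) enters the
# shallow-water momentum balance. n: Manning coefficient [s/m^(1/3)],
# u: depth-averaged velocity [m/s], h: flow depth [m].

def friction_slope(n, u, h):
    return n * n * u * abs(u) / h ** (4.0 / 3.0)

bare_ground = friction_slope(n=0.025, u=5.0, h=2.0)  # typical bare coastal plain
forested = friction_slope(n=0.070, u=5.0, h=2.0)     # raised "equivalent" forest n
ratio = forested / bare_ground                        # = (0.070 / 0.025)^2
```

Because the friction slope scales with n squared, even a modest increase in the equivalent roughness multiplies the retarding force severalfold, which is why the choice of n dominates the simulated run-up.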

  11. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    Science.gov (United States)

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method

    International Nuclear Information System (INIS)

    Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum

    2011-01-01

    In nuclear power plants, all safety-related equipment, including cables, that operates in a harsh environment must undergo equipment qualification (EQ) according to IEEE Std 323. There are three qualification methods: type testing, operating experience, and analysis. To environmentally qualify safety-related equipment by the type-testing method, rather than by analysis or operating experience, a representative sample of the equipment, including interfaces, is subjected to a series of tests. Among these, the Design Basis Event (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including the specified high-energy line break (HELB), loss of coolant accident (LOCA), main steam line break (MSLB), etc., after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high-temperature steam must be used to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high-temperature steam at high pressure is injected into the DBE test chamber, the temperature and pressure in the chamber rise rapidly above the target temperature; the temperature and pressure in the chamber therefore fluctuate throughout the test as they are driven toward the target values. The fairness and accuracy of the test results must be ensured by confirming the performance of the DBE environment simulation test facility. In this paper, a statistical method is used to verify the reliability of the DBE environment simulation test facility
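The abstract does not specify which statistical method is used; one common way to verify the repeatability of such a chamber is to compare the mean of repeated peak-temperature measurements, together with a confidence half-width, against the target tolerance. The sketch below uses entirely hypothetical readings.

```python
import math
import statistics

# Generic repeatability check for a test chamber (the paper's exact statistical
# procedure is not given; this is one common approach). Repeated measurements of
# the peak chamber temperature are compared against a target with a tolerance,
# using the sample mean and an approximate 95% confidence half-width.
peak_temps = [171.8, 172.4, 171.5, 172.9, 172.1, 171.7, 172.3, 172.0]  # hypothetical, deg C
target, tolerance = 172.0, 1.5

mean = statistics.mean(peak_temps)
half_width = 1.96 * statistics.stdev(peak_temps) / math.sqrt(len(peak_temps))
acceptable = abs(mean - target) + half_width <= tolerance
```

If the bias plus the confidence half-width fits inside the tolerance band, the facility reproduces the DBE profile reliably enough for qualification testing under this criterion.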

  13. Identifying deterministic signals in simulated gravitational wave data: algorithmic complexity and the surrogate data method

    International Nuclear Information System (INIS)

    Zhao Yi; Small, Michael; Coward, David; Howell, Eric; Zhao Chunnong; Ju Li; Blair, David

    2006-01-01

    We describe the application of complexity estimation and the surrogate data method to identify deterministic dynamics in simulated gravitational wave (GW) data contaminated with white and coloured noises. The surrogate method uses algorithmic complexity as a discriminating statistic to decide if noisy data contain a statistically significant level of deterministic dynamics (the GW signal). The results illustrate that the complexity method is sensitive to a small amplitude simulated GW background (SNR down to 0.08 for white noise and 0.05 for coloured noise) and is also more robust than commonly used linear methods (autocorrelation or Fourier analysis)
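A hedged sketch of the surrogate-data workflow the abstract describes: here simple shuffle surrogates and a lag-1 autocorrelation statistic stand in for the authors' algorithmic-complexity measure, and a low-amplitude sinusoid in white noise stands in for the simulated GW signal. Shuffle surrogates preserve the amplitude distribution of the data while destroying temporal structure, so an observed statistic outside the surrogate distribution indicates a deterministic component.

```python
import math
import random
import statistics

def lag1_autocorr(x):
    # Discriminating statistic: lag-1 autocorrelation (the paper uses an
    # algorithmic-complexity estimate instead; this is a simpler stand-in).
    m = statistics.mean(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

rng = random.Random(0)
# Low-amplitude sinusoid buried in white noise (toy stand-in for a GW waveform).
data = [0.5 * math.sin(0.2 * i) + rng.gauss(0.0, 1.0) for i in range(2000)]

observed = lag1_autocorr(data)
n_surr = 99
count_exceeding = 0
for _ in range(n_surr):
    surr = data[:]
    rng.shuffle(surr)            # destroys temporal structure, keeps amplitudes
    if abs(lag1_autocorr(surr)) >= abs(observed):
        count_exceeding += 1
p_value = (count_exceeding + 1) / (n_surr + 1)
```

A small p-value rejects the null hypothesis that the data are temporally unstructured noise, the same logic the paper applies with its complexity statistic against white and coloured noise nulls.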

  14. The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline

    Science.gov (United States)

    Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji

    2018-02-01

    An elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its fidelity to practical industrial cases. However, the elasticity of the pipeline introduces nonlinearity into the discretized equations, so the standard Newton-Raphson method cannot achieve fast convergence on this class of problems. A new Newton-based method with the Powell-Wolfe condition for simulating isothermal elastic pipeline flow is therefore presented. Results obtained with the new method are given for the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces computational cost.
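The paper's discretization is not given; the sketch below shows the general idea of a Newton iteration globalized by a line search, using only the sufficient-decrease (Armijo) part of the Powell-Wolfe conditions (the full Powell-Wolfe test adds a curvature condition) and a stand-in scalar equation on which undamped Newton-Raphson famously diverges.

```python
import math

# Damped Newton iteration for a scalar nonlinear equation f(x) = 0, globalized
# by backtracking on the squared residual. Undamped Newton on atan(x) diverges
# from x0 = 3; the line search restores convergence.

def damped_newton(f, fprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = -fx / fprime(x)          # full Newton step
        t = 1.0
        # Sufficient-decrease (Armijo) test on the merit function f(x)^2.
        while f(x + t * step) ** 2 > (1.0 - 1e-4 * t) * fx * fx:
            t *= 0.5                    # backtrack until sufficient decrease
            if t < 1e-12:
                break
        x += t * step
    return x

root = damped_newton(lambda x: math.atan(x), lambda x: 1.0 / (1.0 + x * x), x0=3.0)
```

The same globalization idea is what allows a Newton-type solver to handle the strong nonlinearity that pipeline elasticity adds to the discretized flow equations.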

  15. Multimedia transmission in MC-CDMA using adaptive subcarrier power allocation and CFO compensation

    Science.gov (United States)

    Chitra, S.; Kumaratharan, N.

    2018-02-01

    Multicarrier code division multiple access (MC-CDMA) is one of the most effective techniques in fourth-generation (4G) wireless technology, owing to its high data rate, high spectral efficiency and resistance to multipath fading. However, MC-CDMA systems are severely degraded by carrier frequency offset (CFO), caused by Doppler shift and oscillator instabilities, which destroys the orthogonality among the subcarriers and causes intercarrier interference (ICI). The water-filling algorithm (WFA) is an efficient resource allocation algorithm for distributing power among subcarriers in time-dispersive channels, but the conventional WFA fails to consider the effect of CFO. To perform subcarrier power allocation with reduced CFO and to improve the capacity of the MC-CDMA system, a residual-CFO-compensated adaptive subcarrier power allocation algorithm is proposed in this paper. The proposed technique allocates power only to subcarriers with a high channel-to-noise power ratio. The performance of the proposed method is evaluated using random binary data and an image as source inputs. Simulation results show that the proposed modified WFA offers superior bit error rate performance and ICI reduction, in both power allocation and image compression, for high-quality multimedia transmission in the presence of CFO and imperfect channel state information.
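For reference, the textbook water-filling allocation that the proposed modified WFA builds on: each subcarrier with channel-to-noise power ratio g_k receives power p_k = max(0, mu - 1/g_k), with the water level mu chosen so the powers meet the total budget. The CFO compensation and high-CNR restriction of the paper are not reproduced here; the gains below are illustrative.

```python
# Textbook water-filling over subcarriers: iteratively drop the weakest carriers
# whose inverse gain lies above the water level, then allocate the remainder.

def water_filling(gains, total_power):
    active = sorted(gains, reverse=True)
    while active:
        mu = (total_power + sum(1.0 / g for g in active)) / len(active)
        if mu - 1.0 / active[-1] >= 0.0:   # weakest active carrier still gets power
            break
        active.pop()                        # exclude it and recompute the level
    # Excluded carriers fall below the water level, so max() zeroes them out.
    return [max(0.0, mu - 1.0 / g) for g in gains]

powers = water_filling([4.0, 1.0, 0.25], total_power=1.0)  # → [0.875, 0.125, 0.0]
```

The weakest subcarrier receives nothing, mirroring the paper's policy of allocating power only where the channel-to-noise ratio is high.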

  16. MC-50 AVF cyclotron operation

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Jong Seo; Lee, Dong Hoon; Kim, You Seok; Park, Chan Won; Lee, Yong Min; Hong, Sung Seok; Lee, Min Yong

    1995-12-01

    The first cyclotron in Korea, the MC-50 cyclotron, is used for neutron irradiation, radionuclide development and production, and materials and biomedical research. Proton beams of 50.5 MeV and 35 MeV have been extracted at 20-70 μA. The total beam extraction time was 1095.7 hours: 206.5 hours for developments, 663.8 hours for radionuclide production and development, and 225.4 hours for application research. There were 23 shutdown days. Fundamental data for failure reduction and efficient beam extraction were compiled, and maintenance technologies were developed. (author). 8 tabs., 17 figs., 10 refs.

  17. MC-50 AVF cyclotron operation

    International Nuclear Information System (INIS)

    Kim, Yu Seok; Chai, Jong Seo; Bak, Seong Ki; Park, Chan Won; Jo, Young Ho; Hong, Seong Seok; Lee, Min Yong; Jang Ho Ha

    2000-01-01

    The first cyclotron in Korea, the MC-50 cyclotron, is used for neutron irradiation, radionuclide development and production, and materials and biomedical research. Proton beams of 50.5 MeV and 35 MeV have been extracted at 20-60 μA. The total beam extraction time was 1095.7 hours: 206.5 hours for developments, 663.8 hours for radionuclide production and development, and 225.4 hours for application research. There were 23 shutdown days. Fundamental data for failure reduction and efficient beam extraction were compiled, and maintenance technologies were developed

  18. MC-50 AVF cyclotron operation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yu Seok; Chai, Jong Seo; Bak, Seong Ki; Park, Chan Won; Jo, Young Ho; Hong, Seong Seok; Lee, Min Yong; Jang Ho Ha

    2000-01-01

    The first cyclotron in Korea, the MC-50 cyclotron, is used for neutron irradiation, radionuclide development and production, and materials and biomedical research. Proton beams of 50.5 MeV and 35 MeV have been extracted at 20-60 μA. The total beam extraction time was 1095.7 hours: 206.5 hours for developments, 663.8 hours for radionuclide production and development, and 225.4 hours for application research. There were 23 shutdown days. Fundamental data for failure reduction and efficient beam extraction were compiled, and maintenance technologies were developed.

  19. MC-50 AVF cyclotron operation

    International Nuclear Information System (INIS)

    Chae, Jong Seo; Lee, Dong Hoon; Kim, You Seok; Park, Chan Won; Lee, Yong Min; Hong, Sung Seok; Lee, Min Yong.

    1995-12-01

    The first cyclotron in Korea, the MC-50 cyclotron, is used for neutron irradiation, radionuclide development and production, and materials and biomedical research. Proton beams of 50.5 MeV and 35 MeV have been extracted at 20-70 μA. The total beam extraction time was 1095.7 hours: 206.5 hours for developments, 663.8 hours for radionuclide production and development, and 225.4 hours for application research. There were 23 shutdown days. Fundamental data for failure reduction and efficient beam extraction were compiled, and maintenance technologies were developed. (author). 8 tabs., 17 figs., 10 refs

  20. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    Science.gov (United States)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
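One standard cylindrical-to-Cartesian mapping of the kind the abstract describes (shown here as a plausible illustration; the paper's own set of relationships and scales may differ) is the logarithmic substitution \(x = \ln(r/r_w)\), with \(r_w\) the well radius. The confined axisymmetric flow equation

\[ \frac{1}{r}\frac{\partial}{\partial r}\!\left(rT\frac{\partial h}{\partial r}\right) = S\frac{\partial h}{\partial t} \]

becomes, for constant transmissivity \(T\) (using \(\partial/\partial r = (1/r)\,\partial/\partial x\)),

\[ T\frac{\partial^{2}h}{\partial x^{2}} = S\,r_w^{2}e^{2x}\frac{\partial h}{\partial t}, \]

a Cartesian-form equation with an exponentially scaled storage term. A uniform grid in \(x\) then concentrates radial resolution near the well, where the head gradient is steepest, which is exactly what a rectilinear-grid code like MODFLOW otherwise fails to capture.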

  1. A Multiscale Simulation Method and Its Application to Determine the Mechanical Behavior of Heterogeneous Geomaterials

    Directory of Open Access Journals (Sweden)

    Shengwei Li

    2017-01-01

    Full Text Available To study the micro/mesomechanical behaviors of heterogeneous geomaterials, a multiscale simulation method that combines molecular simulation at the microscale, a mesoscale analysis of polished slices, and finite element numerical simulation is proposed. By processing the mesostructure images obtained from analyzing the polished slices of heterogeneous geomaterials and mapping them onto finite element meshes, a numerical model that more accurately reflects the mesostructures of heterogeneous geomaterials was established by combining the results with the microscale mechanical properties of geomaterials obtained from the molecular simulation. This model was then used to analyze the mechanical behaviors of heterogeneous materials. Because kernstone is a typical heterogeneous material that comprises many types of mineral crystals, it was used for the micro/mesoscale mechanical behavior analysis in this paper using the proposed method. The results suggest that the proposed method can be used to accurately and effectively study the mechanical behaviors of heterogeneous geomaterials at the micro/mesoscales.
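The image-to-mesh mapping step the abstract describes can be sketched generically (this is not the authors' code): a segmented mesostructure image carries one phase label per pixel, and each finite element is assigned the dominant label among the pixels it covers.

```python
from collections import Counter

# Map a segmented mesostructure image onto a structured finite element mesh:
# each pixel holds a phase label (e.g. a mineral crystal ID), and each square
# element takes the most frequent label among the pixels it covers. Image and
# mesh sizes are illustrative.

def image_to_mesh(phase_image, elems_x, elems_y):
    ny, nx = len(phase_image), len(phase_image[0])
    sy, sx = ny // elems_y, nx // elems_x      # pixels per element
    mesh = []
    for ey in range(elems_y):
        row = []
        for ex in range(elems_x):
            pixels = [phase_image[y][x]
                      for y in range(ey * sy, (ey + 1) * sy)
                      for x in range(ex * sx, (ex + 1) * sx)]
            row.append(Counter(pixels).most_common(1)[0][0])
        mesh.append(row)
    return mesh

# 4x4 image with two phases, coarsened onto a 2x2 element mesh:
img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [1, 2, 2, 2],
       [2, 2, 2, 2]]
mesh = image_to_mesh(img, elems_x=2, elems_y=2)   # → [[1, 2], [2, 2]]
```

Each element ID would then index into the per-phase mechanical properties obtained from the molecular simulation when building the numerical model.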

  2. Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows

    Science.gov (United States)

    Zwick, David; Hackl, Jason; Balachandar, S.

    2017-11-01

    Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.

  3. Resolved-particle simulation by the Physalis method: Enhancements and new capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Sierakowski, Adam J., E-mail: sierakowski@jhu.edu [Department of Mechanical Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Prosperetti, Andrea [Department of Mechanical Engineering, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede (Netherlands)

    2016-03-15

    We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.

  4. Method for simulating predictive control of building systems operation in the early stages of building design

    DEFF Research Database (Denmark)

    Petersen, Steffen; Svendsen, Svend

    2011-01-01

    A method for simulating predictive control of building systems operation in the early stages of building design is presented. The method uses building simulation based on weather forecasts to predict whether there is a future heating or cooling requirement. This information enables the thermal control systems of the building to respond proactively to keep the operational temperature within the thermal comfort range with the minimum use of energy. The method is implemented in an existing building simulation tool designed to inform decisions in the early stages of building design through parametric analysis. This enables building designers to predict the performance of the method and include it as a part of the solution space. The method furthermore facilitates the task of configuring appropriate building systems control schemes in the tool, and it eliminates time-consuming manual…
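
    The forecast-driven decision at the heart of such a controller can be caricatured as below. The comfort band, the crude free-running indoor-temperature estimate, and all numbers are assumptions for illustration; the paper's method uses full building simulation, not this shortcut.

```python
# Toy predictive-control decision: look at forecast outdoor temperatures and
# decide whether a heating or cooling need is coming, so the system can act early.
def predicted_mode(forecast_temps, comfort_low=20.0, comfort_high=26.0, gain=0.5):
    """Return 'heat', 'cool' or 'idle' from forecast outdoor temperatures (deg C).
    The free-running indoor estimate is crudely taken as outdoor + fixed gains."""
    indoor_est = [t + gain for t in forecast_temps]
    if min(indoor_est) < comfort_low:
        return "heat"
    if max(indoor_est) > comfort_high:
        return "cool"
    return "idle"

print(predicted_mode([15.0, 17.0, 18.0]))   # cold spell ahead -> preheat
print(predicted_mode([24.0, 27.0, 29.0]))   # warm spell ahead -> precool
print(predicted_mode([21.0, 22.0]))         # forecast inside comfort band -> idle
```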

  5. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time… with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all…
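
    The MSL building block, approximating the mixed-logit choice probability by averaging over draws of the random coefficient, can be sketched for a binary choice with one attribute. Everything here (the data, the draw count, the normal mixing distribution) is an illustrative assumption, not the paper's setup.

```python
# Simulated log-likelihood for a binary logit with one random coefficient,
# beta ~ N(mu, sigma): the choice probability has no closed form, so it is
# approximated by averaging over random draws (the MSL idea).
import math, random

def simulated_loglik(mu, sigma, data, n_draws=200, seed=0):
    rng = random.Random(seed)
    draws = [rng.gauss(mu, sigma) for _ in range(n_draws)]
    ll = 0.0
    for x, y in data:                       # x: attribute, y: 1 if chosen
        p = sum(1.0 / (1.0 + math.exp(-b * x)) for b in draws) / n_draws
        ll += math.log(p if y == 1 else 1.0 - p)
    return ll

# Toy data: positive-attribute alternatives chosen, negative one rejected
data = [(1.0, 1), (0.5, 1), (-1.0, 0)]
print(simulated_loglik(1.0, 0.5, data))   # higher than at a wrong mean, e.g. mu = -1
```

    An optimizer would maximize this function over (mu, sigma); the curse of dimensionality the abstract mentions appears because the number of draws needed grows quickly with the number of random coefficients.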

  6. The effect of high fidelity simulated learning methods on physiotherapy pre-registration education: a systematic review protocol.

    Science.gov (United States)

    Roberts, Fiona; Cooper, Kay

    2017-11-01

    The objective of this review is to identify whether high-fidelity simulated learning methods are effective in enhancing clinical/practical skills, compared with usual low-fidelity simulated learning methods, in pre-registration physiotherapy education.

  7. Treatment plan evaluation for interstitial photodynamic therapy in a mouse model by Monte Carlo simulation with FullMonte

    Directory of Open Access Journals (Sweden)

    Jeffrey Cassidy

    2015-02-01

    Monte Carlo (MC) simulation is recognized as the gold standard for biophotonic simulation, capturing all relevant physics and material properties at the perceived cost of high computing demands. Tetrahedral-mesh-based MC simulations are particularly attractive due to the ability to refine the mesh at will to conform to complicated geometries or user-defined resolution requirements. Since no approximations of material or light-source properties are required, MC methods are applicable to the broadest set of biophotonic simulation problems. MC methods also have attractive implementation features, including inherent parallelism, and permit a continuously variable quality-runtime tradeoff. We demonstrate here a complete MC-based prospective fluence dose evaluation system for interstitial photodynamic therapy (PDT) that generates dose-volume histograms on a tetrahedral mesh geometry description. To our knowledge, this is the first such system for general interstitial PDT employing MC methods, and it is therefore applicable to a very broad cross-section of anatomies and material properties. We demonstrate that evaluation of dose-volume histograms is an effective variance-reduction scheme in its own right, greatly reducing the number of packets, and hence the runtime, required to achieve acceptable result confidence. We conclude that MC methods are feasible for general PDT treatment evaluation and planning, and considerably less costly than widely believed.
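
    A dose-volume histogram itself is simple to compute once per-element doses are available: for each threshold, the fraction of total tissue volume receiving at least that dose. A minimal sketch with made-up doses and volumes (not FullMonte output):

```python
# Cumulative dose-volume histogram: fraction of volume with dose >= threshold.
def dose_volume_histogram(doses, volumes, thresholds):
    total = sum(volumes)
    return [sum(v for d, v in zip(doses, volumes) if d >= t) / total
            for t in thresholds]

doses   = [0.5, 1.0, 2.0, 4.0]   # absorbed dose per mesh element (arbitrary units)
volumes = [1.0, 1.0, 1.0, 1.0]   # element (tetrahedron) volumes
dvh = dose_volume_histogram(doses, volumes, [0.0, 1.0, 3.0])
print(dvh)  # -> [1.0, 0.75, 0.25]
```

    The variance-reduction point in the abstract is that the DVH aggregates over many elements, so its statistical error converges faster than the per-element fluence values it is built from.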

  8. A new algorithm for the simulation of the Boltzmann equation using the direct simulation monte-carlo method

    International Nuclear Information System (INIS)

    Ganjaei, A. A.; Nourazar, S. S.

    2009-01-01

    A new algorithm, the modified direct simulation Monte Carlo (MDSMC) method, is developed for the simulation of the Couette-Taylor gas flow problem. A Taylor series expansion is used to obtain the modified equation of the first-order time discretization of the collision equation, and the new algorithm, MDSMC, is implemented to simulate the collision term in the Boltzmann equation. The new algorithm contains an extra term that takes into account the effect of second-order collisions; this term enhances the appearance of the first Taylor instabilities of the vortex streamlines. The MDSMC algorithm also contains a term of second order in the time step in the probabilistic coefficients, which allows simulation with higher accuracy than the previous DSMC algorithm. Using the MDSMC algorithm, the first Taylor instabilities of the vortex streamlines at different ratios of ω/ν (experimental data of Taylor) appeared at a smaller time step than with the DSMC algorithm. The results for the torque developed on the stationary cylinder using the MDSMC algorithm show better agreement with the experimental data of Kuhlthau than those obtained with the DSMC algorithm.
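
    A bare-bones DSMC collision step (the standard hard-sphere acceptance-rejection scheme, not the MDSMC modification) conveys the flavor of the method: candidate pairs in a cell are accepted with probability proportional to their relative speed and then scattered isotropically, conserving momentum pair by pair. All parameters are illustrative.

```python
# One DSMC collision step in a cell: accept candidate pairs with probability
# |v_rel| / vr_max (hard-sphere model), then rotate the relative velocity to a
# random direction about the conserved center-of-mass velocity.
import math, random

def dsmc_collide_cell(velocities, n_pairs, vr_max, rng):
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(velocities)), 2)
        vi, vj = velocities[i], velocities[j]
        vrel = math.dist(vi, vj)
        if rng.random() < vrel / vr_max:
            vcm = [(a + b) / 2 for a, b in zip(vi, vj)]
            cos_t = 1 - 2 * rng.random()          # isotropic post-collision direction
            sin_t = math.sqrt(1 - cos_t * cos_t)
            phi = 2 * math.pi * rng.random()
            vr = [vrel * sin_t * math.cos(phi),
                  vrel * sin_t * math.sin(phi),
                  vrel * cos_t]
            velocities[i] = [c + r / 2 for c, r in zip(vcm, vr)]
            velocities[j] = [c - r / 2 for c, r in zip(vcm, vr)]

rng = random.Random(1)
vels = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(20)]
p_before = [sum(v[k] for v in vels) for k in range(3)]
dsmc_collide_cell(vels, n_pairs=10, vr_max=10.0, rng=rng)
p_after = [sum(v[k] for v in vels) for k in range(3)]
print(all(abs(a - b) < 1e-9 for a, b in zip(p_before, p_after)))  # True: momentum conserved
```

    The MDSMC contribution described above modifies the probabilistic coefficients of this step to include a second-order-in-time collision term.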

  9. Petascale molecular dynamics simulation using the fast multipole method on K computer

    KAUST Repository

    Ohno, Yousuke; Yokota, Rio; Koyama, Hiroshi; Morimoto, Gentaro; Hasegawa, Aki; Masumoto, Gen; Okimoto, Noriaki; Hirano, Yoshinori; Ibeid, Huda; Narumi, Tetsu; Taiji, Makoto

    2014-01-01

    In this paper, we report all-atom simulations of molecular crowding - a result from the full node simulation on the "K computer", which is a 10-PFLOPS supercomputer in Japan. The capability of this machine enables us to perform simulation of crowded cellular environments, which are more realistic compared to conventional MD simulations where proteins are simulated in isolation. Living cells are "crowded" because macromolecules comprise ∼30% of their molecular weight. Recently, the effects of crowded cellular environments on protein stability have been revealed through in-cell NMR spectroscopy. To measure the performance of the "K computer", we performed all-atom classical molecular dynamics simulations of two systems: target proteins in a solvent, and target proteins in an environment of molecular crowders that mimic the conditions of a living cell. Using the full system, we achieved 4.4 PFLOPS during a 520 million-atom simulation with cutoff of 28 Å. Furthermore, we discuss the performance and scaling of fast multipole methods for molecular dynamics simulations on the "K computer", as well as comparisons with Ewald summation methods. © 2014 Elsevier B.V. All rights reserved.
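
    The principle that makes the fast multipole method fast can be shown in one dimension: the far-field of a clustered group of sources is well approximated by a low-order expansion, here just the monopole at the cluster's center of charge, so distant interactions need not be summed pairwise. Positions and charges below are illustrative.

```python
# Far-field approximation behind the fast multipole method (lowest order only):
# replace a cluster of sources by its total charge at the center of charge.
def direct_potential(sources, x):
    """Exact all-pairs potential sum at evaluation point x."""
    return sum(q / abs(x - xs) for xs, q in sources)

def monopole_potential(sources, x):
    """Monopole (zeroth multipole) approximation of the cluster's far field."""
    q_tot = sum(q for _, q in sources)
    center = sum(xs * q for xs, q in sources) / q_tot
    return q_tot / abs(x - center)

sources = [(0.0, 1.0), (0.1, 2.0), (-0.1, 1.0)]  # (position, charge), clustered near 0
far = 10.0
exact = direct_potential(sources, far)
approx = monopole_potential(sources, far)
print(abs(exact - approx) / exact)  # small relative error at large separation
```

    A full FMM keeps higher multipole moments, organizes clusters in a tree, and translates expansions between levels, reducing the N-body cost from O(N²) toward O(N); the Ewald comparison in the abstract concerns how this competes with FFT-based periodic sums.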


  11. Comparison of ALE finite element method and adaptive smoothed finite element method for the numerical simulation of friction stir welding

    NARCIS (Netherlands)

    van der Stelt, A.A.; Bor, Teunis Cornelis; Geijselaers, Hubertus J.M.; Quak, W.; Akkerman, Remko; Huetink, Han; Menary, G

    2011-01-01

    In this paper, the material flow around the pin during friction stir welding (FSW) is simulated using a 2D plane strain model. A pin rotates without translation in a disc with elasto-viscoplastic material properties, and the outer boundary of the disc is clamped. Two numerical methods are used to simulate this configuration: the arbitrary Lagrangian-Eulerian (ALE) finite element method and the adaptive smoothed finite element method.

  12. Innovative teaching methods in the professional training of nurses – simulation education

    Directory of Open Access Journals (Sweden)

    Michaela Miertová

    2013-12-01

    Introduction: The article highlights the use of innovative teaching methods within simulation education in the professional training of nurses abroad, and presents our experience from an intensive study programme at the School of Nursing, Midwifery and Social Work, University of Salford (United Kingdom) within the intensive EU Lifelong Learning Programme (LLP) Erasmus EU RADAR 2013. Methods: Simulation methods such as role-play, case studies, simulation scenarios, practical workshops and clinical skills workstations were implemented within the structured ABCDE approach (AIM© Assessment and Management Tool), aimed at developing the theoretical knowledge and skills needed to recognize and manage acutely deteriorating patients. The structured SBAR approach (Acute SBAR Communication Tool) was used to train communication and information sharing among the members of the multidisciplinary healthcare team. The OSCE approach (Objective Structured Clinical Examination) was used for individual formative assessment of students. Results: Simulation education has many benefits in the professional training of nurses. It is held in safe, controlled and realistic conditions (simulation laboratories reflecting real hospital and community care environments) with no risk of harming real patients, and is accompanied by debriefing, discussion and analysis of all activities students have performed within the simulated scenario. Such a learning environment is supportive, challenging, constructive, motivating, engaging, flexible, inspiring and respectful. Simulation education is thus an effective, interactive, efficient and modern way of nursing education. Conclusion: Critical thinking and the clinical competences of nurses are crucial for early recognition of and appropriate response to acute deterioration of a patient's condition. These competences are important to ensure the provision of high-quality nursing care. Methods of…

  13. Fluid distribution network and steam generators and method for nuclear power plant training simulator

    International Nuclear Information System (INIS)

    Alliston, W.H.; Johnson, S.J.; Mutafelija, B.A.

    1975-01-01

    A training simulator for the real-time dynamic operation of a nuclear power plant is described. It utilizes control consoles having manual and automatic devices corresponding to simulated plant components, and indicating devices for monitoring physical values in the simulated plant. A digital computer configuration is connected to the control consoles to calculate the dynamic real-time simulated operation of the plant in accordance with the simulated plant components, providing output data including data for operating the control console indicating devices. In the method and system for simulating a fluid distribution network of the power plant, such as the main steam system which distributes steam from the steam generators to the high-pressure turbine, steam reheaters, steam dump valves, and feedwater heaters, the simultaneous solution of linearized non-linear algebraic equations is used to calculate all the flows throughout the simulated system. A plurality of parallel-connected steam generators that supply steam to the system are simulated individually, including the simulation of their shrink-swell characteristics.
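
    The "simultaneous solution of linearized equations" idea can be illustrated at a single junction: if each branch flow is linearized as conductance times pressure difference, mass balance yields a linear equation for the junction pressure, and all branch flows follow at once. The pressures and conductances below are made-up numbers, not plant data.

```python
# Linearized flow-network balance at one junction: with branch flows modeled as
# g_i * (p_i - p_node), mass conservation sum_i g_i * (p_i - p_node) = 0 gives
# a linear equation for the junction pressure p_node.
def junction_pressure(supplies, conductances):
    num = sum(g * p for p, g in zip(supplies, conductances))
    return num / sum(conductances)

p_steam = [70.0, 68.0, 69.0]   # bar: three parallel steam generators (illustrative)
g = [1.0, 1.0, 2.0]            # linearized branch conductances

p_node = junction_pressure(p_steam, g)
flows = [gi * (pi - p_node) for pi, gi in zip(p_steam, g)]
print(p_node, sum(flows))      # flows balance exactly at the junction
```

    A plant-scale simulator assembles one such equation per junction into a sparse linear system and re-solves it each time step as valve positions and generator states change.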

  14. Research on neutron noise analysis stochastic simulation method for α calculation

    International Nuclear Information System (INIS)

    Zhong Bin; Shen Huayun; She Ruogu; Zhu Shengdong; Xiao Gang

    2014-01-01

    The prompt decay constant α has significant applications in the physical design and safety analysis of nuclear facilities. To overcome the difficulty of calculating the α value with the Monte Carlo method, and to improve the precision, a new method based on neutron noise analysis technology is presented. This method employs stochastic simulation and the theory of neutron noise analysis. Firstly, the evolution of stochastic neutrons was simulated by a discrete-events Monte Carlo method based on the theory of generalized semi-Markov processes, and the neutron noise in the detectors was extracted from the neutron signal. Secondly, neutron noise analysis methods such as the Rossi-α method, the Feynman-α method, the zero-probability method, and the cross-correlation method were used to calculate the α value. All of the parameters used in the neutron noise analysis methods were calculated by auto-adaptive algorithms. The α values from these methods agree with each other, with a largest relative deviation of 7.9%, which proves the feasibility of the α calculation method based on neutron noise analysis and stochastic simulation. (authors)
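
    The Feynman-α statistic mentioned above is the excess variance-to-mean ratio of counts collected in a time gate: zero for an uncorrelated (Poisson) source and positive when detections arrive in correlated bursts, as in fission chains. A crude synthetic-data sketch, in which the "burst" source is a toy stand-in for chain correlations rather than real reactor noise:

```python
# Feynman-alpha building block: Y = Var(gate counts)/Mean(gate counts) - 1.
# Y ~ 0 for independent detections; Y > 0 when detections are correlated.
import math, random, statistics

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def feynman_Y(gate_counts):
    m = statistics.mean(gate_counts)
    return statistics.pvariance(gate_counts) / m - 1.0

rng = random.Random(42)
single = [poisson_sample(5.0, rng) for _ in range(4000)]          # uncorrelated source
burst = [3 * poisson_sample(5.0 / 3, rng) for _ in range(4000)]   # detections in triplets
y1, y2 = feynman_Y(single), feynman_Y(burst)
print(y1, y2)  # y1 near 0 (Poisson), y2 near 2 (triplet bursts inflate the variance)
```

    In the actual Feynman-α method, Y is measured as a function of gate width T and fitted to Y(T) = A(1 - (1 - e^(-αT))/(αT)) to extract the prompt decay constant α.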

  15. Advanced scientific computational methods and their applications of nuclear technologies. (1) Overview of scientific computational methods, introduction of continuum simulation methods and their applications (1)

    International Nuclear Information System (INIS)

    Oka, Yoshiaki; Okuda, Hiroshi

    2006-01-01

    Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This first issue gives an overview of scientific computational methods and introduces continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)

  16. A regularized vortex-particle mesh method for large eddy simulation

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for large eddy…
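
    The spectral Poisson solve underlying such methods reduces, in transform space, to dividing by the symbol of the Laplacian. A 1D sketch with homogeneous Dirichlet boundaries, using a naive O(N²) sine transform where a production vortex code would use a parallel FFT:

```python
# Spectral solve of u'' = f on (0, pi) with u(0) = u(pi) = 0:
# expand f in sine modes, divide each coefficient by -k^2, transform back.
import math

def poisson_sine_solve(f, L=math.pi):
    n = len(f)
    h = L / (n + 1)
    xs = [(i + 1) * h for i in range(n)]
    # forward (naive) discrete sine transform
    fk = [2.0 / (n + 1) * sum(f[i] * math.sin(k * xs[i]) for i in range(n))
          for k in range(1, n + 1)]
    uk = [-fk[k - 1] / k**2 for k in range(1, n + 1)]   # divide by Laplacian symbol
    # inverse transform
    return [sum(uk[k - 1] * math.sin(k * x) for k in range(1, n + 1)) for x in xs]

# Manufactured solution: f = -sin(x)  =>  u = sin(x)
n = 31
h = math.pi / (n + 1)
f = [-math.sin((i + 1) * h) for i in range(n)]
u = poisson_sine_solve(f)
print(max(abs(u[i] - math.sin((i + 1) * h)) for i in range(n)))  # essentially zero
```

    The regularized-Green's-function approach in the abstract plays the same role for open and mixed-periodicity domains, where a pure sine or Fourier basis no longer applies directly.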

  17. Three-Dimensional Phase Field Simulations of Hysteresis and Butterfly Loops by the Finite Volume Method

    International Nuclear Information System (INIS)

    Xi Li-Ying; Chen Huan-Ming; Zheng Fu; Gao Hua; Tong Yang; Ma Zhi

    2015-01-01

    Three-dimensional simulations of ferroelectric hysteresis and butterfly loops are carried out by solving the time-dependent Ginzburg–Landau equations using a finite volume method. The influence of external mechanical loading, with a tensile strain and a compressive strain, on the hysteresis and butterfly loops is studied numerically. Unlike the traditional finite element and finite difference methods, the finite volume method is applicable to simulating the ferroelectric phase transitions and properties of ferroelectric materials even for more realistic physical problems. (paper)
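
    The time-dependent Ginzburg–Landau dynamics can be illustrated in a single cell (no spatial discretization): the polarization relaxes down the gradient of a double-well Landau free energy, so the zero-field state depends on history, which is the origin of the hysteresis loop. The coefficients below are illustrative, not the paper's.

```python
# 0D time-dependent Ginzburg-Landau relaxation: dP/dt = -gamma * dF/dP with
# F(P) = a/2 P^2 + b/4 P^4 - E*P.  For a < 0 the energy is a double well,
# so two stable remanent states exist at zero field (hysteresis).
def tdgl_step(P, E, dt=0.01, gamma=1.0, a=-1.0, b=1.0):
    return P + dt * (-gamma * (a * P + b * P**3 - E))

def relax(P, E, steps=2000):
    for _ in range(steps):
        P = tdgl_step(P, E)
    return P

P_up = relax(0.1, E=0.0)    # small positive seed settles near +1
P_dn = relax(-0.1, E=0.0)   # small negative seed settles near -1
print(P_up, P_dn)
```

    Sweeping E up and down while reusing the previous P as the starting state traces the full hysteresis loop; the paper's finite volume scheme does this with a spatially resolved P field and mechanical coupling.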

  18. Analysis on the influence of supply method on a workstation with the help of dynamic simulation

    Directory of Open Access Journals (Sweden)

    Gavriluță Alin

    2017-01-01

    Considering the need for flexibility in any manufacturing process, the choice of the supply method for an assembly workstation can be a decision with a direct influence on its performance. Using dynamic simulation, this article compares the effect on a workstation's cycle time of three different supply methods: supply from stock, supply in the "strike zone", and synchronous supply. This study is part of an extended work aimed at comparing, through 3D layout design and dynamic simulation, the effect of different supply methods on assembly line performance.

  19. Parallel shooting methods for finding steady state solutions to engine simulation models

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2007-01-01

    Parallel single- and multiple-shooting methods were tested for finding periodic steady-state solutions to a Stirling engine model. The model was used to illustrate features of the methods and possibilities for optimisation. Performance was measured using simulation of an experimental data set…
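
    The shooting idea is to treat one simulated engine cycle as a map from initial state to end-of-period state, and to solve for a fixed point of that map instead of integrating through a long transient. A single-variable sketch on a forced linear ODE (not a Stirling model), using the secant method on the period map:

```python
# Single shooting for a periodic steady state: find x0 such that integrating
# x' = -x + cos(t) over one period T = 2*pi returns exactly to x0.
import math

def propagate(x0, T=2 * math.pi, n=2000):
    """Integrate the ODE over one period with classical RK4; return x(T)."""
    dt, x, t = T / n, x0, 0.0
    f = lambda t, x: -x + math.cos(t)
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + dt / 2, x + dt / 2 * k1)
        k3 = f(t + dt / 2, x + dt / 2 * k2)
        k4 = f(t + dt, x + dt * k3)
        x += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

# Secant iteration on the residual r(x0) = x(T; x0) - x0
x_a, x_b = 0.0, 1.0
for _ in range(20):
    r_a, r_b = propagate(x_a) - x_a, propagate(x_b) - x_b
    if abs(r_b) < 1e-12:
        break
    x_a, x_b = x_b, x_b - r_b * (x_b - x_a) / (r_b - r_a)
print(x_b)  # close to 0.5, the exact periodic initial condition for this ODE
```

    Multiple shooting splits the period into segments with matching conditions between them, and the segment integrations are independent, which is what makes the parallel variants in the abstract possible.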

  20. A Lagrangian finite element method for the simulation of flow of non-newtonian liquids

    DEFF Research Database (Denmark)

    Hassager, Ole; Bisgaard, C

    1983-01-01

    A Lagrangian method for the simulation of flow of non-Newtonian liquids is implemented. The fluid mechanical equations are formulated in the form of a variational principle, and a discretization is performed by finite elements. The method is applied to the flow of a contravariant convected Maxwell…