Quantum Monte Carlo approaches for correlated systems
Becca, Federico
2017-01-01
Over the past several decades, computational approaches to studying strongly interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It offers a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which leads into a discussion of Monte Carlo methods. The variational technique is described from its foundations to a detailed presentation of its algorithms. Further topics include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments in continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...
Morse Monte Carlo Radiation Transport Code System
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
1975-02-01
The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used, with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine whether the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Monte Carlo systems used for treatment planning and dose verification
Energy Technology Data Exchange (ETDEWEB)
Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)
2017-04-15
General-purpose radiation transport Monte Carlo codes have been used for several decades to estimate the absorbed dose distribution in external photon and electron beam radiotherapy patients. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation times required. To bring radiation transport Monte Carlo codes into routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.)
Efficiency of Monte Carlo sampling in chaotic systems.
Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G
2014-11-01
In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance-sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of finite-time Lyapunov exponents in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained with uniform-sampling simulations, and (ii) that the polynomial scaling is suboptimal, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities of issuing a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.
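The uniform-sampling baseline against which the paper's flat-histogram approach is measured can be sketched for a standard chaotic map. The snippet below is our own illustration (the map, seed and parameters are not from the paper): it estimates the mean finite-time Lyapunov exponent of the fully chaotic r = 4 logistic map, whose asymptotic Lyapunov exponent is ln 2.

```python
import random, math

def ftle(x0, t, r=4.0):
    """Finite-time Lyapunov exponent of the logistic map x -> r*x*(1-x)."""
    x, acc = x0, 0.0
    for _ in range(t):
        # log of |f'(x)| = |r*(1 - 2x)|; the tiny floor guards the
        # measure-zero point x = 1/2 where the derivative vanishes
        acc += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
        x = r * x * (1.0 - x)
    return acc / t

rng = random.Random(7)
t = 50
lams = [ftle(rng.random(), t) for _ in range(20_000)]  # uniform sampling of x0
mean_lam = sum(lams) / len(lams)
print(round(mean_lam, 2))  # expected to be close to ln 2 ~ 0.693
```

Uniform sampling resolves the bulk of the finite-time Lyapunov distribution well; it is the exponentially rare tails, which the paper targets with flat-histogram sampling, that it cannot reach.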
Meaningful timescales from Monte Carlo simulations of molecular systems
Costa, Liborio I
2016-01-01
A new Markov Chain Monte Carlo method for simulating the dynamics of molecular systems with atomistic detail is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.
Applications of quantum Monte Carlo methods in condensed systems
Kolorenc, Jindrich
2010-01-01
Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. The algorithms are intrinsically parallel and able to take full advantage of present-day high-performance computing systems. This review article concentrates on the fixed-node/fixed-phase diffusion Monte Carlo method, with emphasis on its applications to the electronic structure of solids and other extended many-particle systems.
Applicability of Quasi-Monte Carlo for lattice systems
Ammon, Andreas; Jansen, Karl; Leovey, Hernan; Griewank, Andreas; Müller-Preussker, Michael
2013-01-01
This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like $N^{-1/2}$, where $N$ is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to $N^{-1}$, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillators and verified an improved error scaling of all investigated observables in both cases.
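The quoted error scalings are easy to observe numerically. The sketch below is our own illustration, independent of the authors' lattice code: it compares plain Monte Carlo with a base-2 van der Corput low-discrepancy sequence on the integral of x² over [0, 1], whose exact value is 1/3.

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the base-`base` van der Corput low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def estimate(points):
    # Estimate the integral of f(x) = x^2 on [0, 1]; exact value is 1/3.
    return sum(x * x for x in points) / len(points)

N = 4096
random.seed(1)
mc_pts  = [random.random() for _ in range(N)]        # ordinary Monte Carlo
qmc_pts = [van_der_corput(i + 1) for i in range(N)]  # quasi-Monte Carlo

err_mc  = abs(estimate(mc_pts)  - 1 / 3)  # expected ~ N^{-1/2}
err_qmc = abs(estimate(qmc_pts) - 1 / 3)  # expected ~ N^{-1} (up to log factors)
print(err_mc, err_qmc)
```

For a smooth one-dimensional integrand like this the quasi-random error is typically one to two orders of magnitude below the random-sampling error at the same N; the lattice problems of the paper are much higher-dimensional, which is where the "regular enough" caveat matters.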
Implementation of Monte Carlo Simulations for the Gamma Knife System
Energy Technology Data Exchange (ETDEWEB)
Xiong, W [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Huang, D [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Lee, L [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Feng, J [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Morris, K [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Calugaru, E [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Burman, C [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Li, J [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States); Ma, C-M [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States)
2007-06-15
Currently the Gamma Knife system is accompanied by a treatment planning system, Leksell GammaPlan (LGP), which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions and the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindrical stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected on the inner sides of the four treatment helmets, after the cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty-two patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogeneous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone, the dose in a 16 cm diameter spherical QA phantom is measured with and without a 1.5 mm lead covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5 mm lead covering are 89.8% based on measurements and 89.2% according to Monte Carlo for an 18 mm collimator helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 ± 0.9)% for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.
Determining MTF of digital detector system with Monte Carlo simulation
Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee
2005-04-01
We have designed a detector based on a-Se (amorphous selenium) and simulated it with the Monte Carlo method. We apply cascaded linear system theory to determine the MTF of the whole detector system. For direct comparison with experiment, we have simulated a 139 μm pixel pitch and used a simulated X-ray tube spectrum.
Multi-microcomputer system for Monte-Carlo calculations
Berg, B; Krasemann, H
1981-01-01
The authors propose a microcomputer system that allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high-energy physics experiments and many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it allows up to 16 MByte of random access memory.
Effective quantum Monte Carlo algorithm for modeling strongly correlated systems
Kashurnikov, V. A.; Krasavin, A. V.
2007-01-01
A new effective Monte Carlo algorithm based on the principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and the linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems that admit no analytic description.
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Quantum Monte Carlo simulation
Wang, Yazhen
2011-01-01
Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square error ...
Fixed-Node Diffusion Monte Carlo of Lithium Systems
Rasch, Kevin
2015-01-01
We study lithium systems over a range of sizes, e.g., the atomic anion, the dimer, a metallic cluster, and the body-centered cubic crystal, by the diffusion Monte Carlo method. The calculations include both core and valence electrons in order to avoid any possible impact of pseudopotentials. The focus of the study is the fixed-node errors, and for that purpose we test several orbital sets in order to provide the most accurate nodal hypersurfaces. We compare our results to other high-accuracy calculations wherever available and to experimental results so as to quantify the fixed-node errors. The results for these Li systems show that fixed-node quantum Monte Carlo achieves remarkably accurate total energies and recovers 97-99% of the correlation energy.
Subtle Monte Carlo Updates in Dense Molecular Systems
DEFF Research Database (Denmark)
Bottaro, Sandro; Boomsma, Wouter; Johansson, Kristoffer E.;
2012-01-01
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high-density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations.
FAST CONVERGENT MONTE CARLO RECEIVER FOR OFDM SYSTEMS
Institute of Scientific and Technical Information of China (English)
Wu Lili; Liao Guisheng; Bao Zheng; Shang Yong
2005-01-01
The paper investigates the design of an optimal Orthogonal Frequency Division Multiplexing (OFDM) receiver against unknown frequency-selective fading. A fast convergent Monte Carlo receiver is proposed. In the proposed method, Markov Chain Monte Carlo (MCMC) methods are employed for blind Bayesian detection without channel estimation. Meanwhile, exploiting the characteristics of OFDM systems, two methods are employed to improve the convergence rate and enhance the efficiency of the MCMC algorithms. One is the integration of the posterior distribution function with respect to the associated channel parameters, which is involved in the derivation of the objective distribution function; the other is intra-symbol differential coding for the elimination of the bimodality problem resulting from the presence of unknown fading channels. Moreover, no matrix inversion is needed thanks to the orthogonality property of OFDM modulation, and hence the computational load is significantly reduced. Computer simulation results show the effectiveness of the fast convergent Monte Carlo receiver.
Monte Carlo Simulation for the MAGIC-II System
Carmona, E; Moralejo, A; Vitale, V; Sobczynska, D; Haffke, M; Bigongiari, C; Otte, N; Cabras, G; De Maria, M; De Sabata, F
2007-01-01
Within the year 2007, MAGIC will be upgraded to a two telescope system at La Palma. Its main goal is to improve the sensitivity in the stereoscopic/coincident operational mode. At the same time it will lower the analysis threshold of the currently running single MAGIC telescope. Results from the Monte Carlo simulations of this system will be discussed. A comparison of the two telescope system with the performance of one single telescope will be shown in terms of sensitivity, angular resolution and energy resolution.
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D. M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path-integral Monte Carlo, the sampling of permutations, ...
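For reference, the classical Metropolis building block that all these quantum variants generalize can be written in a few lines. This is our own minimal sketch (sampling a 1D Gaussian with a symmetric random-walk proposal), not code from the paper.

```python
import random, math

def metropolis(n_steps, beta=1.0, step=1.0, seed=2):
    """Metropolis sampling of p(x) proportional to exp(-beta * x^2 / 2)."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)  # symmetric local proposal
        dE = 0.5 * (x_new * x_new - x * x)    # change in "energy"
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = x_new                          # accept; otherwise keep old x
        samples.append(x)
    return samples

s = metropolis(200_000)
mean = sum(s) / len(s)                 # should approach 0
var = sum(xi * xi for xi in s) / len(s)  # should approach 1 / beta = 1
print(round(mean, 2), round(var, 2))
```

The quantum generalizations in the paper replace the classical energy difference with ratios of trial wave functions (variational MC) or short-time Green's functions (diffusion MC), but the accept/reject skeleton is the same.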
Iba, Yukito
2000-01-01
"Extended Ensemble Monte Carlo" is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
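The "just simulate the particle behavior" idea can be made concrete with a toy analog transport calculation. The following sketch is our own example, unrelated to MCNP or the class materials: it estimates uncollided transmission through a purely absorbing slab, where the analytic answer is exp(-Σt·d).

```python
import random, math

def transmission(sigma_t, thickness, n_hist, seed=3):
    """Analog Monte Carlo: fraction of particles that cross a purely
    absorbing slab without colliding.  Free path lengths are sampled
    from the exponential law by inverting the CDF: s = -ln(xi) / sigma_t."""
    rng = random.Random(seed)
    leaked = 0
    for _ in range(n_hist):
        # 1 - random() lies in (0, 1], so the log is always defined
        s = -math.log(1.0 - rng.random()) / sigma_t  # distance to 1st collision
        if s > thickness:                            # crossed without colliding
            leaked += 1
    return leaked / n_hist

t_mc = transmission(sigma_t=1.0, thickness=2.0, n_hist=100_000)
t_exact = math.exp(-1.0 * 2.0)   # analytic uncollided transmission
print(t_mc, t_exact)
```

Real codes add scattering physics, geometry tracking and tallies on top of exactly this sampling loop; the "devil in the details" of the lectures is everything beyond it.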
Reaction Ensemble Monte Carlo Simulation of Complex Molecular Systems.
Rosch, Thomas W; Maginn, Edward J
2011-02-08
Acceptance rules for reaction ensemble Monte Carlo (RxMC) simulations containing classically modeled atomistic degrees of freedom are derived for complex molecular systems where insertions and deletions are achieved gradually by utilizing the continuous fractional component (CFC) method. A self-consistent manner in which to utilize statistical mechanical data contained in ideal gas free energy parameters during RxMC moves is presented. The method is tested by applying it to two previously studied systems containing intramolecular degrees of freedom: the propene metathesis reaction and methyl-tert-butyl-ether (MTBE) synthesis. Quantitative agreement is found between the current results and those of Keil et al. (J. Chem. Phys. 2005, 122, 164705) for the propene metathesis reaction. Differences are observed between the equilibrium concentrations of the present study and those of Lísal et al. (AIChE J. 2000, 46, 866-875) for the MTBE reaction. It is shown that most of this difference can be attributed to an incorrect formulation of the Monte Carlo acceptance rule. Efficiency gains using CFC MC as opposed to single stage molecule insertions are presented.
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
Interacting multiagent systems kinetic equations and Monte Carlo methods
Pareschi, Lorenzo
2014-01-01
The description of emerging collective phenomena and self-organization in systems composed of large numbers of individuals has gained increasing interest from various research communities in biology, ecology, robotics and control theory, as well as sociology and economics. Applied mathematics is concerned with the construction, analysis and interpretation of mathematical models that can shed light on significant problems of the natural sciences as well as our daily lives. To this set of problems belongs the description of the collective behaviours of complex systems composed by a large enough number of individuals. Examples of such systems are interacting agents in a financial market, potential voters during political elections, or groups of animals with a tendency to flock or herd. Among other possible approaches, this book provides a step-by-step introduction to the mathematical modelling based on a mesoscopic description and the construction of efficient simulation algorithms by Monte Carlo methods. The ar...
Monte Carlo simulations of systems with complex energy landscapes
Wüst, T.; Landau, D. P.; Gervais, C.; Xu, Y.
2009-04-01
Non-traditional Monte Carlo simulations are a powerful approach to the study of systems with complex energy landscapes. After reviewing several of these specialized algorithms we shall describe the behavior of typical systems including spin glasses, lattice proteins, and models for "real" proteins. In the Edwards-Anderson spin glass it is now possible to produce probability distributions in the canonical ensemble and thermodynamic results of high numerical quality. In the hydrophobic-polar (HP) lattice protein model Wang-Landau sampling with an improved move set (pull-moves) produces results of very high quality. These can be compared with the results of other methods of statistical physics. A more realistic membrane protein model for Glycophorin A is also examined. Wang-Landau sampling allows the study of the dimerization process including an elucidation of the nature of the process.
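Wang-Landau sampling, central to the results above, is easiest to see on a toy model. The sketch below is our own example, not the HP-lattice-protein code discussed in the paper: it estimates the density of states of a one-dimensional periodic Ising chain of 8 spins, for which the exact values are g(-8) = 2, g(-4) = 56, g(0) = 140, g(4) = 56, g(8) = 2.

```python
import random, math

def wang_landau_ising(L=8, flat=0.8, lnf_min=1e-6, seed=8):
    """Wang-Landau estimate of the density of states g(E) of a 1D periodic
    Ising chain of L spins, with energy E = -sum_i s_i * s_{i+1}.
    A minimal sketch; production codes use sweeps, better flatness
    criteria and the 1/t modification schedule."""
    rng = random.Random(seed)
    levels = list(range(-L, L + 1, 4))   # reachable energy levels (even L)
    lng = {E: 0.0 for E in levels}       # running estimate of ln g(E)
    hist = {E: 0 for E in levels}
    s = [rng.choice((-1, 1)) for _ in range(L)]
    E = -sum(s[i] * s[(i + 1) % L] for i in range(L))
    lnf = 1.0                            # ln of the modification factor
    while lnf > lnf_min:
        for _ in range(10_000):
            i = rng.randrange(L)
            # s[i - 1] uses Python's negative indexing as the periodic boundary
            dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % L])
            if rng.random() < math.exp(lng[E] - lng[E + dE]):
                s[i] = -s[i]             # accept the single-spin flip
                E += dE
            lng[E] += lnf                # penalize the visited level
            hist[E] += 1
        if min(hist.values()) > flat * sum(hist.values()) / len(hist):
            hist = {k: 0 for k in hist}  # histogram flat enough:
            lnf /= 2.0                   # refine the modification factor
    m = min(lng.values())
    g = {E_: math.exp(lng[E_] - m) for E_ in levels}
    scale = 2 ** L / sum(g.values())     # normalize so sum_E g(E) = 2^L
    return {E_: v * scale for E_, v in g.items()}

g = wang_landau_ising()
print({E_: round(v, 1) for E_, v in g.items()})
```

Once g(E) is known, canonical averages at any temperature follow by reweighting with exp(-E/kT), which is what makes the method attractive for the rugged landscapes of spin glasses and proteins.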
Energy Technology Data Exchange (ETDEWEB)
Brown, F.B.; Sutton, T.M.
1996-02-01
This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
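As an illustration of the random-sampling fundamentals such notes cover, here is a minimal rejection-sampling example (our own sketch, not RACER code): drawing from the density p(x) = 2x on [0, 1] using a flat envelope.

```python
import random

def rejection_sample(n, seed=4):
    """Rejection sampling from p(x) = 2x on [0, 1] using a flat envelope
    g(x) = 1 with constant M = 2: accept with probability p(x)/(M*g(x)) = x."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.random()       # candidate drawn from the envelope
        if rng.random() < x:   # accept with probability p(x) / (M * g(x))
            out.append(x)
    return out

s = rejection_sample(100_000)
mean = sum(s) / len(s)         # E[x] under p(x) = 2x is 2/3
print(round(mean, 3))
```

The acceptance rate here is 1/M = 1/2; for sharply peaked densities M grows and rejection becomes wasteful, which is one motivation for the inverse-CDF and Markov-chain techniques the course treats next.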
Bardenet, R.
2012-01-01
ISBN: 978-2-7598-1032-1; Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that make it possible to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous Buffon's needle problem ...
Simulating Strongly Correlated Electron Systems with Hybrid Monte Carlo
Institute of Scientific and Technical Information of China (English)
LIU Chuan
2000-01-01
Using the path-integral representation, the Hubbard and the periodic Anderson model on a D-dimensional cubic lattice are transformed into field theories of fermions in D + 1 dimensions. At half-filling these theories possess a positive-definite, real, symmetric fermion matrix and can be simulated using the hybrid Monte Carlo method.
The Monte Carlo Simulation Method for System Reliability and Risk Analysis
Zio, Enrico
2013-01-01
Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling. Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
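The crude Monte Carlo estimation of system unavailability described here can be sketched in a few lines. This toy example (ours, not from the book) estimates the unavailability of a two-component parallel system, where the analytic answer is simply the product of the component unavailabilities.

```python
import random

def parallel_unavailability(q1, q2, n_trials, seed=5):
    """Crude Monte Carlo estimate of the unavailability of a two-component
    parallel (redundant) system: it is down only when both components are."""
    rng = random.Random(seed)
    down = 0
    for _ in range(n_trials):
        c1_down = rng.random() < q1   # sample component 1 state
        c2_down = rng.random() < q2   # sample component 2 state
        if c1_down and c2_down:
            down += 1
    return down / n_trials

u = parallel_unavailability(0.1, 0.2, 200_000)
print(u)   # analytic value: 0.1 * 0.2 = 0.02
```

The value of the simulation approach is that the same loop keeps working when the analytic shortcut disappears, e.g. with repair times, standby redundancy or dependent failures.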
Monte Carlo and nonlinearities
Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian
2016-01-01
The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...
Novotny, M.A.
2010-02-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.
Monte Carlo Alpha Iteration Algorithm for a Subcritical System Analysis
Directory of Open Access Journals (Sweden)
Hyung Jin Shim
2015-01-01
The α-k iteration method, which searches for the fundamental-mode alpha-eigenvalue via iterative updates of the fission source distribution, has been successfully used for Monte Carlo (MC) alpha-static calculations of supercritical systems. However, the α-k iteration method for deep subcritical system analysis suffers from a gigantic number of neutron generations or a huge neutron weight, which leads to abnormal termination of the MC calculations. In order to stably estimate the prompt neutron decay constant (α) of prompt subcritical systems regardless of subcriticality, we propose a new MC alpha-static calculation method, named the α iteration algorithm. The new method is derived by directly applying the power method to the α-mode eigenvalue equation, and its calculation stability is achieved by controlling the number of time-source neutrons, which are generated in proportion to α divided by the neutron speed in MC neutron transport simulations. The effectiveness of the α iteration algorithm is demonstrated for two-group homogeneous problems with varying subcriticality by comparison with analytic solutions. The applicability of the proposed method is evaluated for an experimental benchmark of a thorium-loaded accelerator-driven system.
Highly Efficient Monte Carlo for Estimating the Unavailability of a Markov Dynamic System
Institute of Scientific and Technical Information of China (English)
XIAO Gang; DENG Li; ZHANG Ben-Ai; ZHU Jian-Shi
2004-01-01
Monte Carlo simulation has become an important tool for estimating the reliability and availability of dynamic systems, since conventional numerical methods are no longer efficient when the size of the system to solve is large. However, evaluating by simulation the probability of occurrence of very rare events means playing a very large number of histories of the system, which leads to unacceptable computing time. Highly efficient Monte Carlo schemes should therefore be worked out. In this paper, based on the integral equation describing state transitions of a Markov dynamic system, a uniform Monte Carlo for estimating unavailability is presented. Using the free-flight estimator, direct statistical estimation Monte Carlo is achieved. Using both the free-flight estimator and a biased probability space of sampling, weighted statistical estimation Monte Carlo is also achieved. Five Monte Carlo schemes, including crude simulation, analog simulation, statistical estimation based on crude and analog simulation, and weighted statistical estimation, are used for calculating the unavailability of a repairable Con/3/30:F system. Their efficiencies are compared with each other. The results show that the weighted statistical estimation Monte Carlo has the smallest variance and the highest efficiency in very-rare-event simulation.
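The gain from weighted (biased-sampling) estimation in rare-event problems can be seen in a toy example (illustrative code, not from the paper; the distributions and the bias heuristic are assumptions for the example): to estimate P(X > t) for X ~ Exp(1), analog sampling wastes almost every history, while sampling from a biased exponential and weighting each hit by the likelihood ratio concentrates the histories where the event occurs.

```python
import math
import random

def rare_event_prob(t, n=100_000, biased_rate=None, rng=random):
    """Importance-sampling estimate of P(X > t) for X ~ Exp(1).

    Draw X from Exp(biased_rate) with biased_rate < 1 and correct each
    scoring history by the likelihood ratio
    w(x) = f(x) / g(x) = exp(-x) / (biased_rate * exp(-biased_rate * x))."""
    lam = biased_rate if biased_rate is not None else 1.0 / t  # bias heuristic
    total = 0.0
    for _ in range(n):
        x = -math.log(rng.random()) / lam            # sample from Exp(lam)
        if x > t:
            total += math.exp(-x) / (lam * math.exp(-lam * x))  # weight
    return total / n
```

For t = 10 the exact answer is exp(-10) ≈ 4.5e-5; an analog estimator with the same n would typically score only a handful of histories, whereas the biased estimator scores roughly a third of them.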
Event-chain Monte Carlo algorithm for continuous spin systems and its application
Nishikawa, Yoshihiko; Hukushima, Koji
2016-09-01
The event-chain Monte Carlo (ECMC) algorithm is described for hard-sphere systems and general potential systems, including interacting particle systems and continuous spin systems. Using the ECMC algorithm, large-scale equilibrium Monte Carlo simulations are performed for a three-dimensional chiral helimagnetic model under a magnetic field. It is found that the critical behavior of a phase transition changes as the magnetic field increases.
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
Quasi-Monte Carlo methods for lattice systems: a first look
Jansen, K; Nube, A; Griewank, A; Müller-Preussker, M
2013-01-01
We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like 1/Sqrt(N), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to 1/N. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
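The contrast between the 1/Sqrt(N) and 1/N error scalings can be demonstrated on a one-dimensional toy integral (illustrative code, not from the paper; the base-2 van der Corput sequence stands in here for the low-discrepancy point sets used in the study):

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence
    (the radical-inverse of the integers 1..n)."""
    pts = []
    for i in range(1, n + 1):
        q, denom, x = i, 1, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        pts.append(x)
    return pts

def integrate(points, f):
    """Sample-mean estimate of the integral of f over [0, 1]."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x          # exact integral over [0, 1] is 1/3
n = 4096
mc = integrate([random.random() for _ in range(n)], f)   # error ~ 1/sqrt(n)
qmc = integrate(van_der_corput(n), f)                    # error ~ 1/n
```

With n = 4096 the quasi-random estimate is typically an order of magnitude closer to 1/3 than the pseudo-random one, reflecting the improved asymptotic scaling for sufficiently smooth integrands.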
Cell-veto Monte Carlo algorithm for long-range systems
Kapfer, Sebastian C.; Krauth, Werner
2016-09-01
We present a rigorous, efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.
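The factorized Metropolis filter at the heart of such schemes can be stated in a few lines (illustrative code, not the authors' implementation; the list of pairwise energy changes is an assumed input): a move is accepted only if every pairwise factor accepts it independently, so the overall acceptance probability is the product of the per-pair Metropolis probabilities.

```python
import math
import random

def factorized_accept(pair_dEs, beta, rng=random):
    """Factorized Metropolis filter: the move is accepted only if every
    pairwise energy change dE_i is accepted independently, giving an
    overall acceptance probability prod_i min(1, exp(-beta * dE_i))."""
    return all(dE <= 0.0 or rng.random() < math.exp(-beta * dE)
               for dE in pair_dEs)
```

Because each factor can veto the move on its own, only one interaction term needs to be evaluated per veto event, which is what makes cell-veto bookkeeping with a fixed cost per move possible.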
Synchronous parallel kinetic Monte Carlo Diffusion in Heterogeneous Systems
Energy Technology Data Exchange (ETDEWEB)
Martinez Saez, Enrique [Los Alamos National Laboratory; Hetherly, Jeffery [Los Alamos National Laboratory; Caro, Jose A [Los Alamos National Laboratory
2010-12-06
A new hybrid Molecular Dynamics-kinetic Monte Carlo algorithm has been developed in order to study the basic mechanisms taking place in diffusion in concentrated alloys under the action of chemical and stress fields. Parallel implementation of the k-MC part, based on a recently developed synchronous algorithm [J. Comput. Phys. 227 (2008) 3804-3823] relying on the introduction of a set of null events aimed at synchronizing the time for the different subdomains, added to the parallel efficiency of MD, provides the computer power required to evaluate jump rates 'on the fly', incorporating in this way the actual driving force emerging from chemical potential gradients, and the actual environment-dependent jump rates. The time gain has been analyzed and the parallel performance reported. The algorithm is tested on simple diffusion problems to verify its accuracy.
MONTE CARLO SIMULATION OF MULTIFOCAL STOCHASTIC SCANNING SYSTEM
Directory of Open Access Journals (Sweden)
LIXIN LIU
2014-01-01
Full Text Available Multifocal multiphoton microscopy (MMM) has greatly improved the utilization of excitation light and imaging speed due to parallel multiphoton excitation of the samples and simultaneous detection of the signals, which allows it to perform three-dimensional fast fluorescence imaging. Stochastic scanning can provide continuous, uniform and high-speed excitation of the sample, which makes it a suitable scanning scheme for MMM. In this paper, the graphical programming language LabVIEW is used to achieve stochastic scanning of the two-dimensional galvo scanners by using white noise signals to control the x and y mirrors independently. Moreover, the stochastic scanning process is simulated by using the Monte Carlo method. Our results show that MMM can avoid oversampling or subsampling in the scanning area and meet the requirements of uniform sampling by stochastically scanning the individual units of the N × N foci array. Therefore, continuous and uniform scanning in the whole field of view is implemented.
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Monte Carlo analysis of a control technique for a tunable white lighting system
DEFF Research Database (Denmark)
Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen
2017-01-01
A simulated colour control mechanism for a multi-coloured LED lighting system is presented. The system achieves adjustable and stable white light output and allows for system-to-system reproducibility after application of the control mechanism. The control unit works using a pre-calibrated lookup table for an experimentally realized system, with a calibrated tristimulus colour sensor. A Monte Carlo simulation is used to examine the system performance concerning the variation of luminous flux and chromaticity of the light output. The inputs to the Monte Carlo simulation are variations of the LED peak wavelength, the LED rated luminous flux bin, the influence of the operating conditions, ambient temperature, driving current, and the spectral response of the colour sensor. The system performance is investigated by evaluating the outputs from the Monte Carlo simulation. The outputs show...
Application of Photon Transport Monte Carlo Module with GPU-based Parallel System
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Je [Sejong University, Seoul (Korea, Republic of); Shon, Heejeong [Golden Eng. Co. LTD, Seoul (Korea, Republic of); Lee, Donghak [CoCo Link Inc., Seoul (Korea, Republic of)
2015-05-15
In general, it takes a lot of computing time to get reliable results in Monte Carlo simulations, especially in deep-penetration problems with a thick shielding medium. To mitigate this weakness of Monte Carlo methods, many variance reduction algorithms have been proposed, including geometry splitting and Russian roulette, weight windows, the exponential transform, and forced collision. Simultaneously, advanced computing hardware such as GPU (Graphics Processing Units)-based parallel machines is used to get better performance from the Monte Carlo simulation. The GPU is much easier to access and to manage than a CPU cluster system, and it has also become less expensive due to advances in computer technology. Therefore, many engineering areas have adopted GPU-based massively parallel computation techniques. In this work, a photon transport Monte Carlo module is applied on a GPU-based parallel system. It provides almost 30 times speedup without any optimization, and almost 200 times is expected with a fully supported GPU system. It is expected that a GPU system with an advanced parallelization algorithm will contribute successfully to the development of Monte Carlo modules which require quick and accurate simulations.
Meaningful timescales from Monte Carlo simulations of particle systems with hard-core interactions
Costa, Liborio I.
2016-12-01
A new Markov Chain Monte Carlo method for simulating the dynamics of particle systems characterized by hard-core interactions is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.
Energy Technology Data Exchange (ETDEWEB)
Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D
1999-07-01
PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment for the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.
Directory of Open Access Journals (Sweden)
Cecilia Maya
2004-12-01
Full Text Available The Monte Carlo method is applied to several cases of financial option pricing. The method yields a good approximation when its precision is compared with that of other numerical methods. The estimate produced by the crude version of Monte Carlo can be made even more accurate by resorting to variance reduction methodologies, among which the antithetic variable and the control variable are suggested. However, these methodologies require greater computational effort, so they must be evaluated in terms not only of their precision but also of their efficiency.
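The antithetic-variable technique referred to in this abstract can be sketched for a European call under geometric Brownian motion (illustrative code, not from the paper; the Black-Scholes parameter names are assumptions for the example): each normal draw z is paired with -z and the two discounted payoffs are averaged, which reduces variance at essentially no extra cost.

```python
import math
import random

def bs_call_mc(s0, k, r, sigma, t, n=50_000, antithetic=True, rng=random):
    """Monte Carlo price of a European call under geometric Brownian motion.

    With antithetic variates, each draw z is paired with its mirror -z and
    the two discounted payoffs are averaged before accumulating."""
    disc = math.exp(-r * t)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        payoff = max(s0 * math.exp(drift + vol * z) - k, 0.0)
        if antithetic:
            payoff = 0.5 * (payoff
                            + max(s0 * math.exp(drift - vol * z) - k, 0.0))
        total += payoff
    return disc * total / n
```

For s0 = k = 100, r = 0.05, sigma = 0.2, t = 1 the Black-Scholes closed form gives about 10.45, which the antithetic estimator recovers with a noticeably smaller standard error than the crude version at the same n.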
Energy Technology Data Exchange (ETDEWEB)
Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.
2013-07-01
Acceptance for clinical use of the Monaco computerized planning system has been carried out. The system is based on a virtual model of the energy yield of the head of the linear electron accelerator and performs the dose calculation with an x-ray algorithm (XVMC) based on the Monte Carlo method. (Author)
Quasi-Monte Carlo methods for lattice systems. A first look
Energy Technology Data Exchange (ETDEWEB)
Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik
2013-02-15
We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
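The two items in the outline that lend themselves to code are the π example and inverse transform sampling; a minimal sketch of both follows (illustrative code, not the lecture material itself):

```python
import math
import random

def estimate_pi(n, rng=random):
    """Hit-or-miss estimate of pi: the fraction of uniform points falling
    inside the unit quarter-disc, times 4. By the Central Limit Theorem
    the error shrinks like 1/sqrt(n)."""
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def sample_exponential(lam, rng=random):
    """Inverse transform sampling: if U ~ Uniform(0, 1), then
    -ln(U) / lam is distributed as Exp(lam)."""
    return -math.log(rng.random()) / lam
```

Both functions use only uniform random numbers, which is the point of the lecture: every Monte Carlo estimate reduces to transforming and averaging uniform deviates.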
Estimating the parameters of dynamical systems from Big Data using Sequential Monte Carlo samplers
Green, P. L.; Maskell, S.
2017-09-01
In this paper the authors present a method which facilitates computationally efficient parameter estimation of dynamical systems from a continuously growing set of measurement data. It is shown that the proposed method, which utilises Sequential Monte Carlo samplers, is guaranteed to be fully parallelisable (in contrast to Markov chain Monte Carlo methods) and can be applied to a wide variety of scenarios within structural dynamics. Its ability to allow convergence of one's parameter estimates, as more data is analysed, sets it apart from other sequential methods (such as the particle filter).
Monte Carlo studies of positron implantation in elemental metallic and multilayer systems
Energy Technology Data Exchange (ETDEWEB)
Ghosh, V.J.; Welch, D.O.; Lynn, K.G.
1992-01-01
We have used a Monte Carlo computer code developed at Brookhaven^(1,2) to study the implantation profiles of 1-10 keV positrons incident on a wide range of semi-infinite metals and multilayer systems. Our Monte Carlo program accounts for elastic scattering as well as inelastic scattering from core and valence electrons, and includes the excitation of plasmons. The implantation profiles of positrons in many metals as well as Pd/Al and Al/Co/Si multilayers are presented. Scaling relations and closed-form expressions representing the implantation profiles are also discussed.
Development of Monte Carlo decay gamma-ray transport calculation system
Energy Technology Data Exchange (ETDEWEB)
Sato, Satoshi [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Kawasaki, Nobuo [Fujitsu Ltd., Tokyo (Japan); Kume, Etsuo [Japan Atomic Energy Research Inst., Center for Promotion of Computational Science and Engineering, Tokai, Ibaraki (Japan)
2001-06-01
In the DT fusion reactor, it is a critical concern to evaluate the decay gamma-ray biological dose rates after reactor shutdown exactly. To this end, a three-dimensional Monte Carlo decay gamma-ray transport calculation system has been developed by connecting a three-dimensional Monte Carlo particle transport calculation code and an induced activity calculation code. The developed calculation system consists of the following four functions. (1) The operational neutron flux distribution is calculated by the three-dimensional Monte Carlo particle transport calculation code. (2) The induced activities are calculated by the induced activity calculation code. (3) The decay gamma-ray source distribution is obtained from the induced activities. (4) The decay gamma rays are generated by using the decay gamma-ray source distribution, and the decay gamma-ray transport calculation is conducted by the three-dimensional Monte Carlo particle transport calculation code. In order to reduce the calculation time drastically, a biasing system for the decay gamma-ray source distribution has been developed, and this function is also included in the present system. In this paper, the outline and details of the system and an execution example are reported. An evaluation of the effect of the biasing system is also reported. (author)
Computer program uses Monte Carlo techniques for statistical system performance analysis
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
An introduction to Monte Carlo methods
Walter, J. -C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo sim
LMC: Logarithmantic Monte Carlo
Mantz, Adam B.
2017-06-01
LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
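The adaptive Metropolis-Hastings machinery in engines of this kind builds on the plain random-walk Metropolis rule, which can be sketched in a few lines (illustrative code, unrelated to the LMC package itself; the step size and target are assumptions for the example):

```python
import math
import random

def metropolis(logp, x0, n, step=1.0, rng=random):
    """Random-walk Metropolis-Hastings chain targeting exp(logp).

    Proposes x' = x + step * N(0, 1) and accepts with probability
    min(1, p(x') / p(x)); symmetric proposals need no Hastings correction."""
    chain, x, lp = [], x0, logp(x0)
    for _ in range(n):
        xp = x + step * rng.gauss(0.0, 1.0)
        lpp = logp(xp)
        if math.log(rng.random()) < lpp - lp:   # accept the proposal
            x, lp = xp, lpp
        chain.append(x)                          # rejected moves repeat x
    return chain

# Target: standard normal (log-density up to an additive constant).
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20_000, step=2.0)
```

Adaptive variants tune `step` on the fly from the observed acceptance rate; the parallelization described in the abstract operates at the level of the expensive `logp` evaluations.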
Directory of Open Access Journals (Sweden)
He Deyu
2016-09-01
Full Text Available Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME) systematic safety field that studies faults in the steering system itself, the driver's human reliability (HR) and various environmental conditions. This paper proposes a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historic fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated randomness due to the human operator and environment. Randomness and uncertainty of the human, machine and environment were integrated in the method to obtain a probabilistic risk indicator. To verify the proposed method, a case of stuck rudder fault (SRF) risk assessment was studied. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME systems.
Monte Carlo analysis of the accelerator-driven system at Kyoto University Research Reactor Institute
Energy Technology Data Exchange (ETDEWEB)
Kim, Won Kyeong; Lee, Deok Jung [Nuclear Engineering Division, Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); Lee, Hyun Chul [VHTR Technology Development Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Pyeon, Cheol Ho [Nuclear Engineering Science Division, Kyoto University Research Reactor Institute, Osaka (Japan); Shin, Ho Cheol [Core and Fuel Analysis Group, Korea Hydro and Nuclear Power Central Research Institute, Daejeon (Korea, Republic of)
2016-04-15
An accelerator-driven system consists of a subcritical reactor and a controllable external neutron source. The reactor in an accelerator-driven system can sustain fission reactions in a subcritical state using an external neutron source, which is an intrinsic safety feature of the system. The system can provide efficient transmutations of nuclear wastes such as minor actinides and long-lived fission products and generate electricity. Recently at Kyoto University Research Reactor Institute (KURRI; Kyoto, Japan), a series of reactor physics experiments was conducted with the Kyoto University Critical Assembly and a Cockcroft-Walton type accelerator, which generates the external neutron source by deuterium-tritium reactions. In this paper, neutronic analyses of a series of experiments have been re-estimated by using the latest Monte Carlo code and nuclear data libraries. This feasibility study is presented through the comparison of Monte Carlo simulation results with measurements.
Density matrix quantum Monte Carlo
Blunt, N S; Spencer, J S; Foulkes, W M C
2013-01-01
This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...
Efficient kinetic Monte Carlo simulation
Schulze, Tim P.
2008-02-01
This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented—one that combines the use of inverted-list data structures with rejection Monte Carlo and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
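The rejection idea that gives a constant cost per attempt can be sketched without the inverted-list bookkeeping (illustrative code, not the paper's algorithms; the rate cap `r_max` is an assumed input): pick an event uniformly, accept it with probability rate / r_max, and let every attempt, accepted or not, advance the clock, so the realized statistics match the exact rates by Poisson thinning.

```python
import math
import random

def rejection_kmc_step(rates, r_max, rng=random):
    """One rejection-based KMC event with O(1) work per attempt.

    Attempts occur as a Poisson process of rate len(rates) * r_max; an
    attempt on event i is accepted with probability rates[i] / r_max, so
    accepted events of type i occur with exactly rate rates[i]."""
    n = len(rates)
    elapsed = 0.0
    while True:
        # Each attempt advances time by an Exp(n * r_max) increment.
        elapsed += -math.log(rng.random()) / (n * r_max)
        i = rng.randrange(n)
        if rng.random() * r_max < rates[i]:
            return i, elapsed
```

The trade-off against the rejection-free scheme is clear: no partial sums over the rate list are needed, but a loose cap `r_max` wastes attempts.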
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Kinetic Monte Carlo and Cellular Particle Dynamics Simulations of Multicellular Systems
Flenner, Elijah; Barz, Bogdan; Neagu, Adrian; Forgacs, Gabor; Kosztin, Ioan
2011-01-01
Computer modeling of multicellular systems has been a valuable tool for interpreting and guiding in vitro experiments relevant to embryonic morphogenesis, tumor growth, angiogenesis and, lately, structure formation following the printing of cell aggregates as bioink particles. Computer simulations based on Metropolis Monte Carlo (MMC) algorithms were successful in explaining and predicting the resulting stationary structures (corresponding to the lowest adhesion energy state). Here we introduce two alternatives to the MMC approach for modeling cellular motion and self-assembly: (1) a kinetic Monte Carlo (KMC), and (2) a cellular particle dynamics (CPD) method. Unlike MMC, both KMC and CPD methods are capable of simulating the dynamics of the cellular system in real time. In the KMC approach a transition rate is associated with possible rearrangements of the cellular system, and the corresponding time evolution is expressed in terms of these rates. In the CPD approach cells are modeled as interacting cellular ...
Monte Carlo methods for electromagnetics
Sadiku, Matthew N. O.
2009-01-01
Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole-field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
Coarse-grained stochastic processes and Monte Carlo simulations in lattice systems
Katsoulakis, M A; Vlachos, D G
2003-01-01
In this paper we present a new class of coarse-grained stochastic processes and Monte Carlo simulations, derived directly from microscopic lattice systems and describing mesoscopic length scales. As our primary example, we mainly focus on a microscopic spin-flip model for the adsorption and desorption of molecules between a surface adjacent to a gas phase, although a similar analysis carries over to other processes. The new model can capture large scale structures, while retaining microscopic information on intermolecular forces and particle fluctuations. The requirement of detailed balance is utilized as a systematic design principle to guarantee correct noise fluctuations for the coarse-grained model. We carry out a rigorous asymptotic analysis of the new system using techniques from large deviations and present detailed numerical comparisons of coarse-grained and microscopic Monte Carlo simulations. The coarse-grained stochastic algorithms provide large computational savings without increasing programming ...
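A minimal single-site Metropolis sketch of an adsorption/desorption spin-flip model, with detailed balance fixing the acceptance rule, may help fix ideas (illustrative code, not the coarse-grained algorithm of the paper; a non-interacting lattice gas with chemical potential mu is assumed for simplicity):

```python
import math
import random

def sweep(lattice, beta_mu, rng=random):
    """One Metropolis sweep of a non-interacting adsorption/desorption
    lattice gas with energy H = -mu * sum(n_i). Flipping occupancy
    n -> 1 - n changes the energy by -mu * (1 - 2n); detailed balance
    fixes the acceptance probability min(1, exp(-beta * dE))."""
    n_sites = len(lattice)
    for _ in range(n_sites):
        i = rng.randrange(n_sites)
        d_e_beta = -beta_mu * (1 - 2 * lattice[i])   # beta * energy change
        if d_e_beta <= 0 or rng.random() < math.exp(-d_e_beta):
            lattice[i] = 1 - lattice[i]
    return lattice
```

For this non-interacting case the stationary coverage is 1 / (1 + exp(-beta * mu)), which the chain reaches quickly; the coarse-graining of the paper preserves exactly this detailed-balance structure at the mesoscopic level.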
Gan, Zecheng
2013-01-01
Computer simulation with Monte Carlo is an important tool for investigating the function and equilibrium properties of many biological and soft-matter systems dissolved in solvents. The appropriate treatment of long-range electrostatic interactions is essential for these charged systems, but remains a challenging problem for large-scale simulations. We have developed an efficient Barnes-Hut treecode algorithm for electrostatic evaluation in Monte Carlo simulations of Coulomb many-body systems. The algorithm is based on a divide-and-conquer strategy and a fast update of the octree data structure in each trial move through a local adjustment procedure. We test the accuracy of the tree algorithm, and use it in computer simulations of the electric double layer near a spherical interface. It has been shown that the computational cost of the Monte Carlo method with treecode acceleration scales as $\log N$ in each move. For a typical system with ten thousand particles, by using the new algorithm, the speed has b...
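The per-move energy update at the heart of such simulations can be sketched in a few lines. This toy version evaluates the Coulomb energy change of a trial move naively in O(N), which is exactly the cost the paper's treecode reduces to O(log N); all names and parameters here are illustrative, not taken from the authors' code:

```python
import math
import random

def coulomb_energy_delta(positions, charges, i, new_pos):
    """O(N) Coulomb energy change for moving particle i to new_pos.
    A Barnes-Hut treecode replaces this loop by clustering distant charges."""
    old = positions[i]
    dE = 0.0
    for j, (pj, qj) in enumerate(zip(positions, charges)):
        if j == i:
            continue
        dE += charges[i] * qj * (1.0 / math.dist(new_pos, pj) - 1.0 / math.dist(old, pj))
    return dE

def metropolis_move(positions, charges, beta=1.0, step=0.1, rng=random):
    """One Metropolis trial move using the naive energy update."""
    i = rng.randrange(len(positions))
    new_pos = tuple(c + rng.uniform(-step, step) for c in positions[i])
    dE = coulomb_energy_delta(positions, charges, i, new_pos)
    if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
        positions[i] = new_pos   # accept
        return True
    return False                 # reject
```

The treecode's contribution is to make `coulomb_energy_delta` and the subsequent octree update cheap per trial move, not to change the acceptance rule itself.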
Development and validation of MCNPX-based Monte Carlo treatment plan verification system.
Jabbari, Iraj; Monadi, Shahram
2015-01-01
A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In the MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by the MCTPV was compared with that of the TiGRT planning system. The results showed correct implementation of the beam configurations and patient information in this system. For quantitative evaluation of MCTPV a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% for total beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that MCTPV could be used very efficiently for additional assessment of complicated plans such as IMRT plans.
Evaluation of the material assignment method used by a Monte Carlo treatment planning system.
Isambert, A; Brualla, L; Lefkopoulos, D
2009-12-01
An evaluation of the conversion process from Hounsfield units (HU) to material composition in computerised tomography (CT) images, employed by the Monte Carlo based treatment planning system ISOgray (DOSIsoft), is presented. A boundary in the HU for the material conversion between "air" and "lung" materials was determined based on a study using 22 patients. The dosimetric consequence of the new boundary was quantitatively evaluated for a lung patient plan.
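The HU-to-material conversion being evaluated amounts to a thresholding table. A minimal sketch with placeholder boundaries follows; the paper's point is precisely that the air/lung boundary must be determined from patient data, so it is exposed here as a parameter, and none of these numbers are the study's values:

```python
def assign_material(hu, air_lung_boundary=-950):
    """Map a CT number (Hounsfield units) to a Monte Carlo material label.
    All boundary values are illustrative placeholders, not the study's."""
    if hu < air_lung_boundary:
        return "air"
    if hu < -200:
        return "lung"
    if hu < 100:
        return "soft tissue"
    return "bone"
```

Shifting `air_lung_boundary` reassigns voxels near the threshold between "air" and "lung", which is what drives the dosimetric differences the authors quantify.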
Monte Carlo simulations of precise timekeeping in the Milstar communication satellite system
Camparo, James C.; Frueholz, R. P.
1995-01-01
The Milstar communications satellite system will provide secure antijam communication capabilities for DOD operations into the next century. In order to accomplish this task, the Milstar system will employ precise timekeeping on its satellites and at its ground control stations. The constellation will consist of four satellites in geosynchronous orbit, each carrying a set of four rubidium (Rb) atomic clocks. Several times a day, during normal operation, the Mission Control Element (MCE) will collect timing information from the constellation, and after several days use this information to update the time and frequency of the satellite clocks. The MCE will maintain precise time with a cesium (Cs) atomic clock, synchronized to UTC(USNO) via a GPS receiver. We have developed a Monte Carlo simulation of Milstar's space segment timekeeping. The simulation includes the effects of: uplink/downlink time transfer noise; satellite crosslink time transfer noise; satellite diurnal temperature variations; satellite and ground station atomic clock noise; and also quantization limits regarding satellite time and frequency corrections. The Monte Carlo simulation capability has proven to be an invaluable tool in assessing the performance characteristics of various timekeeping algorithms proposed for Milstar, and also in highlighting the timekeeping capabilities of the system. Here, we provide a brief overview of the basic Milstar timekeeping architecture as it is presently envisioned. We then describe the Monte Carlo simulation of space segment timekeeping, and provide examples of the simulation's efficacy in resolving timekeeping issues.
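The noise-plus-correction structure of such a timekeeping simulation can be sketched as follows. This is a toy single-clock model with white-FM and random-walk-FM noise and a periodic quantized correction; all parameter names and values are illustrative, not Milstar's:

```python
import random

def simulate_clock(steps, dt=1.0, wfm=1e-12, rwfm=1e-15,
                   update_every=86400, quant=1e-9, rng=None):
    """Toy Monte Carlo of satellite timekeeping: integrate white-FM and
    random-walk-FM clock noise, with periodic ground-station corrections
    quantized to `quant` seconds. Returns the time-offset history."""
    rng = rng or random.Random(0)
    y = 0.0          # fractional frequency offset
    x = 0.0          # time offset (s)
    trace = []
    for k in range(1, steps + 1):
        y += rng.gauss(0.0, rwfm)             # random-walk FM component
        x += (y + rng.gauss(0.0, wfm)) * dt   # integrate frequency noise
        if k % update_every == 0:             # MCE-style correction epoch
            x -= round(x / quant) * quant     # quantized time correction
        trace.append(x)
    return trace
```

Running many such traces with different seeds gives the offset distribution a timekeeping algorithm would be judged against.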
The CMS Monte Carlo Production System: Development and Design
Energy Technology Data Exchange (ETDEWEB)
Evans, D. [Fermi National Accelerator Laboratory, Batavia, IL (United States)], E-mail: evansde@fnal.gov; Fanfani, A. [Universita degli Studi di Bologna and INFN Sezione di Bologna, Bologna (Italy); Kavka, C. [INFN Sezione di Trieste, Trieste (Italy); Lingen, F. van [California Institute of Technology, Pasadena, CA (United States); Eulisse, G. [Northeastern University, Boston, MA (United States); Bacchi, W.; Codispoti, G. [Universita degli Studi di Bologna and INFN Sezione di Bologna, Bologna (Italy); Mason, D. [Fermi National Accelerator Laboratory, Batavia, IL (United States); De Filippis, N. [INFN Sezione di Bari, Bari (Italy); Hernandez, J.M. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Madrid (Spain); Elmer, P. [Princeton University, Princeton, NJ (United States)
2008-03-15
The CMS production system has undergone a major architectural upgrade from its predecessor, with the goal of reducing the operational manpower needed and preparing for the large scale production required by the CMS physics plan. The new production system is a tiered architecture that facilitates robust and distributed production request processing and takes advantage of the multiple Grid and farm resources available to the CMS experiment.
Takahashi, F; Endo, A
2007-01-01
A system utilising radiation transport codes has been developed to derive accurate dose distributions in a human body for radiological accidents. A suitable model is essential for such a numerical analysis. Therefore, two tools were developed to set up a 'problem-dependent' input file, defining a radiation source and an exposed person, to simulate the radiation transport in an accident with the Monte Carlo calculation codes MCNP and MCNPX. For both tools, the necessary resources are defined through an interactive dialogue on an ordinary personal computer. The tools prepare human body and source models described in the input file format of the employed Monte Carlo codes. The tools were validated for dose assessment by comparison with a past criticality accident and a hypothesized exposure.
Quasi-Monte Carlo methods for lattice systems: A first look
Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.
2014-03-01
We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like $N^{-1/2}$, where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to $N^{-1}$, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal effort. Has the code been vectorized or parallelized?: No. RAM: The memory usage scales directly with the number of samples and dimensions: bytes used = "number of samples" × "number of dimensions" × 8 bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral. So far only Monte
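The error-scaling contrast can be illustrated in a few lines. This sketch uses a base-2 van der Corput sequence, a standard low-discrepancy construction rather than the package's own lattice rules, on a smooth one-dimensional integrand:

```python
import math
import random

def van_der_corput(n, base=2):
    """Radical-inverse sequence: a standard low-discrepancy point set."""
    x, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        x += rem / denom
    return x

def mc_error(f, exact, n, rng):
    """|error| of a plain Monte Carlo average over n random points in [0,1)."""
    return abs(sum(f(rng.random()) for _ in range(n)) / n - exact)

def qmc_error(f, exact, n):
    """|error| of the same average over the first n van der Corput points."""
    return abs(sum(f(van_der_corput(k)) for k in range(1, n + 1)) / n - exact)

f = lambda u: math.exp(u)    # smooth 1-D integrand; exact integral is e - 1
exact = math.e - 1.0
```

For such a smooth integrand the quasi-Monte Carlo error at a given n is typically far below the roughly $N^{-1/2}$ Monte Carlo error, mirroring the improved scaling reported above.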
Velazquez, L.; Castro-Palacio, J. C.
2013-07-01
Recently, Velazquez and Curilef proposed a methodology to extend Monte Carlo algorithms based on a canonical ensemble which aims to overcome slow sampling problems associated with temperature-driven discontinuous phase transitions. We show in this work that Monte Carlo algorithms extended with this methodology also exhibit a remarkable efficiency near a critical point. Our study is performed for the particular case of a two-dimensional four-state Potts model on a square lattice with periodic boundary conditions. This analysis reveals that the extended version of Metropolis importance sampling is more efficient than the usual Swendsen-Wang and Wolff cluster algorithms. These results demonstrate the effectiveness of this methodology to improve the efficiency of MC simulations of systems that undergo any type of temperature-driven phase transition.
A Markov Chain Monte Carlo Based Method for System Identification
Energy Technology Data Exchange (ETDEWEB)
Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G
2002-10-22
This paper describes a novel methodology for the identification of mechanical systems and structures from vibration response measurements. It combines prior information, observational data and predictive finite element models to produce configurations and system parameter values that are most consistent with the available data and model. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. The resulting process enables the estimation of distributions of both individual parameters and system-wide states. Attractive features of this approach include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate; (2) function effectively when exposed to degraded conditions including: noisy data, incomplete data sets and model misspecification; (3) allow alternative estimates to be produced and compared, and (4) incrementally update initial estimates and analysis as more data becomes available. A series of test cases based on a simple fixed-free cantilever beam is presented. These results demonstrate that the algorithm is able to identify the system, based on the stiffness matrix, given applied force and resultant nodal displacements. Moreover, it effectively identifies locations on the beam where damage (represented by a change in elastic modulus) was specified.
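The Bayesian core of this approach can be sketched with a deliberately tiny stand-in for the finite-element model: a single spring with displacement u = F/k and Gaussian measurement noise, sampled by Metropolis. Everything here (prior, noise level, parameter names) is an illustrative assumption, not the paper's formulation:

```python
import math
import random

def metropolis_stiffness(forces, displacements, sigma=0.05,
                         n_samples=20000, step=0.05, k0=1.0, rng=None):
    """Metropolis sampling of a stiffness k given noisy u = F/k data
    (a toy stand-in for a finite-element forward model)."""
    rng = rng or random.Random(1)

    def log_post(k):
        if k <= 0:
            return -math.inf   # flat prior on k > 0
        return -sum((u - F / k) ** 2
                    for F, u in zip(forces, displacements)) / (2 * sigma ** 2)

    k, lp = k0, log_post(k0)
    samples = []
    for _ in range(n_samples):
        k_new = k + rng.gauss(0.0, step)        # random-walk proposal
        lp_new = log_post(k_new)
        if math.log(rng.random() + 1e-300) < lp_new - lp:
            k, lp = k_new, lp_new               # accept
        samples.append(k)
    return samples

# Synthetic data with true stiffness k = 2.0
data_rng = random.Random(7)
forces = [1.0, 2.0, 3.0, 4.0]
disp = [F / 2.0 + data_rng.gauss(0.0, 0.05) for F in forces]
samples = metropolis_stiffness(forces, disp, k0=1.5)
posterior_mean = sum(samples[5000:]) / len(samples[5000:])
```

The spread of `samples` after burn-in is the quantitative uncertainty measure the paper highlights; the same loop applies unchanged when `log_post` wraps a full stiffness-matrix model.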
Lattice gauge theories and Monte Carlo simulations
Rebbi, Claudio
1983-01-01
This volume is the most up-to-date review on Lattice Gauge Theories and Monte Carlo Simulations. It consists of two parts. Part one is an introductory lecture on the lattice gauge theories in general, Monte Carlo techniques and on the results to date. Part two consists of important original papers in this field. These selected reprints involve the following: Lattice Gauge Theories, General Formalism and Expansion Techniques, Monte Carlo Simulations. Phase Structures, Observables in Pure Gauge Theories, Systems with Bosonic Matter Fields, Simulation of Systems with Fermions.
Fast quantum Monte Carlo on a GPU
Lutsyshyn, Y
2013-01-01
We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent acceleration. Compared with single-core execution, the GPU-accelerated code runs over 100× faster. The CUDA code is provided along with the package that is necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the latest Kepler-architecture K20 GPU. Kepler-specific optimization is discussed.
Characterizing a Proton Beam Scanning System for Monte Carlo Dose Calculation in Patients
Grassberger, C; Lomax, Tony; Paganetti, H
2015-01-01
The presented work has two goals. First, to demonstrate the feasibility of accurately characterizing a proton radiation field at treatment head exit for Monte Carlo dose calculation of active scanning patient treatments. Second, to show that this characterization can be done based on measured depth dose curves and spot size alone, without consideration of the exact treatment head delivery system. This is demonstrated through calibration of a Monte Carlo code to the specific beam lines of two institutions, Massachusetts General Hospital (MGH) and Paul Scherrer Institute (PSI). Comparison of simulations modeling the full treatment head at MGH to ones employing a parameterized phase space of protons at treatment head exit reveals the adequacy of the method for patient simulations. The secondary particle production in the treatment head is typically below 0.2% of primary fluence, except for low-energy electrons (protons), whose contribution to skin dose is negligible. However, there is a significant difference between the two methods in the low-dose penumbra, making full treatment head simulations necessary to study out-of-field effects such as secondary cancer induction. To calibrate the Monte Carlo code to measurements in a water phantom, we use an analytical Bragg peak model to extract the range-dependent energy spread at the two institutions, as this quantity is usually not available through measurements. Comparison of the measured with the simulated depth dose curves demonstrates agreement within 0.5 mm over the entire energy range. Subsequently, we simulate three patient treatments with varying anatomical complexity (liver, head and neck, and lung) to give an example of how this approach can be employed to investigate site-specific discrepancies between the treatment planning system and Monte Carlo simulations. PMID:25549079
MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
The article is an example of using the simulation software @Risk, designed for simulation in a Microsoft Excel spreadsheet, and demonstrates its usage as a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which the model transforms into outputs (for instance the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong among the quantitative tools which can be used as a support for decision making.
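The controlled-input/random-input pipeline described above can be sketched without a spreadsheet. The distribution and every parameter value below are invented for illustration and have nothing to do with the article's case study:

```python
import random
import statistics

def simulate_profit(n_trials=10000, price=12.0, unit_cost=7.0,
                    fixed_cost=2000.0, rng=None):
    """Spreadsheet-style Monte Carlo: fixed (controlled) inputs, one
    random input (demand), and a profit output summarized statistically."""
    rng = rng or random.Random(0)
    profits = []
    for _ in range(n_trials):
        demand = rng.normalvariate(1000.0, 150.0)          # stochastic input
        profits.append(demand * (price - unit_cost) - fixed_cost)
    return statistics.mean(profits), statistics.stdev(profits)
```

The returned mean and standard deviation are exactly the kind of output-distribution summaries that support the decision making mentioned above.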
Monte Carlo Simulation of Magnetic System in the Tsallis Statistics
1999-01-01
We apply the Broad Histogram Method to an Ising system in the context of the recently reformulated Generalized Thermostatistics, and we claim it to be a very efficient simulation tool for this non-extensive statistics. Results are obtained for the nearest-neighbour version of the Ising model for a range of values of the $q$ parameter of Generalized Thermostatistics. We found evidence that the 2D Ising model does not undergo phase transitions at finite temperatures except for the extensive ...
Energy Technology Data Exchange (ETDEWEB)
Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.
2013-07-01
The use of Monte Carlo (MC) has shown an improvement in the accuracy of dose calculation compared to the other analytical algorithms installed in commercial planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full simulation of both the beam transport in the accelerator head and in the patient, with the simulation designed for efficient operation in terms of the accuracy of the estimate and the required computation times. (Author)
Lectures on Monte Carlo methods
Madras, Neal
2001-01-01
Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati
Wieslander, Elinore; Knöös, Tommy
2003-10-01
An increasing number of patients receiving radiation therapy have metallic implants such as hip prostheses. Therefore, beams are normally set up to avoid irradiation through the implant; however, this cannot always be accomplished. In such situations, knowledge of the accuracy of the used treatment planning system (TPS) is required. Two algorithms, the pencil beam (PB) and the collapsed cone (CC), are implemented in the studied TPS. Comparisons are made with Monte Carlo simulations for 6 and 18 MV. The studied materials are steel, CoCrMo, Orthinox® (a stainless steel alloy and registered trademark of Stryker Corporation), TiAlV and Ti. Monte Carlo simulated depth dose curves and dose profiles are compared to CC and PB calculated data. The CC algorithm shows overall a better agreement with Monte Carlo than the PB algorithm. Thus, it is recommended to use the CC algorithm to get the most accurate dose calculation both for the planning target volume and for tissues adjacent to the implants when beams are set up to pass through implants.
Application of kinetic Monte Carlo method to equilibrium systems: vapour-liquid equilibria.
Ustinov, E A; Do, D D
2012-01-15
Kinetic Monte Carlo (kMC) simulations were carried out to describe the vapour-liquid equilibria of argon at various temperatures. This paper aims to demonstrate the potential of the kMC technique in the analysis of equilibrium systems and its advantages over the traditional Monte Carlo method, which is based on the Metropolis algorithm. The key feature of the kMC is the absence of discarded trial moves of molecules, which ensures a larger number of configurations collected for time averaging. Consequently, the kMC technique results in significantly smaller errors for the same number of Monte Carlo steps, especially when the fluid is rarefied. An additional advantage of the kMC is that the relative displacement probability of molecules is significantly larger in rarefied regions, which results in more efficient sampling. This provides a more reliable determination of the vapour phase pressure and density in the case of non-uniform density distributions, such as the vapour-liquid interface or a fluid adsorbed on an open surface. We performed kMC simulations in a canonical ensemble, with a liquid slab in the middle of the simulation box to model two vapour-liquid interfaces. A number of thermodynamic properties such as the pressure, density, heat of evaporation and the surface tension were reliably determined as time averages. Copyright © 2011 Elsevier Inc. All rights reserved.
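The rejection-free step that distinguishes kMC from Metropolis sampling can be sketched generically: every candidate move carries a rate, one move is always executed, and simulation time advances by an exponential waiting time used for the time averages. This is a minimal sketch of that selection step, not the authors' molecular implementation:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free kinetic-MC step: pick an event with probability
    proportional to its rate and draw the exponential waiting time."""
    total = sum(rates)
    u = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total   # time increment for averaging
    return chosen, dt
```

Because no move is ever discarded, every step contributes a configuration weighted by its residence time `dt`, which is the source of the error reduction described above.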
Improved Monte Carlo Renormalization Group Method
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
The Feynman Path Goes Monte Carlo
Sauer, Tilman
2001-01-01
Path integral Monte Carlo (PIMC) simulations have become an important tool for the investigation of the statistical mechanics of quantum systems. I discuss some of the history of applying the Monte Carlo method to non-relativistic quantum systems in path-integral representation. The principal feasibility of the method was well established by the early eighties; a number of algorithmic improvements have been introduced in the last two decades.
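The basic PIMC machinery referred to here fits in a short sketch: discretize the imaginary-time path of a 1-D harmonic oscillator into slices and update one bead at a time with Metropolis. Units (m = ω = ħ = 1), the primitive action, and all parameters are illustrative textbook choices:

```python
import math
import random

def pimc_harmonic(beta=10.0, n_slices=64, n_sweeps=3000, step=0.5, rng=None):
    """Toy path-integral MC for a 1-D harmonic oscillator with the
    primitive action; returns an estimate of <x^2> (exact ~0.5 at beta=10)."""
    rng = rng or random.Random(0)
    tau = beta / n_slices
    x = [0.0] * n_slices
    est, count = 0.0, 0
    for sweep in range(n_sweeps):
        for i in range(n_slices):
            xl, xr = x[i - 1], x[(i + 1) % n_slices]   # periodic in imaginary time

            def action(xi):   # local part of the discretized Euclidean action
                kinetic = ((xi - xl) ** 2 + (xr - xi) ** 2) / (2.0 * tau)
                return kinetic + tau * 0.5 * xi * xi

            xi_new = x[i] + rng.uniform(-step, step)
            if math.log(rng.random() + 1e-300) < action(x[i]) - action(xi_new):
                x[i] = xi_new
        if sweep > n_sweeps // 2:                      # measure after burn-in
            est += sum(xi * xi for xi in x) / n_slices
            count += 1
    return est / count
```

The algorithmic improvements mentioned above (multilevel moves, improved actions, worm updates) all replace pieces of this single-bead scheme while keeping the same structure.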
Monte Carlo Hamiltonian:Inverse Potential
Institute of Scientific and Technical Information of China (English)
LUO Xiang-Qian; CHENG Xiao-Ni; Helmut KRÖGER
2004-01-01
The Monte Carlo Hamiltonian method developed recently allows one to investigate the ground state and low-lying excited states of a quantum system, using a Monte Carlo (MC) algorithm with importance sampling. However, the conventional MC algorithm has some difficulties when applied to inverse potentials. We propose to use an effective potential and an extrapolation method to solve the problem. We present examples from the hydrogen system.
Monte Carlo integration on GPU
Kanzaki, J.
2010-01-01
We use a graphics processing unit (GPU) for fast computations of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on GPU. By using $W^{+}$ plus multi-gluon production processes at LHC, we test integrated cross sections and execution time for programs in FORTRAN and C on CPU and those on GPU. Integrated results agree with each other within statistical errors. Programs on GPU run about 50 times faster than those in C...
Comparing Subspace Methods for Closed Loop Subspace System Identification by Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
David Di Ruscio
2009-10-01
A novel promising bootstrap subspace system identification algorithm for both open and closed loop systems is presented. An outline of the SSARX algorithm by Jansson (2003) is given and a modified SSARX algorithm is presented. Some methods which are consistent for closed loop subspace system identification presented in the literature are discussed and compared to a recently published subspace algorithm which works for both open as well as for closed loop data, i.e., the DSR_e algorithm, as well as the bootstrap method. Experimental comparisons are performed by Monte Carlo simulations.
Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Shaoyun Ge
2014-01-01
In this paper we treat the reliability assessment problem of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: one simulating low penetration and one simulating high penetration. The load-shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
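One building block of such a sequential reliability simulation is drawing alternating up-times and repair times for a component and accumulating outage hours. This single-component sketch uses exponential distributions and invented parameter values; the paper's FMEA machinery sits on top of many such components:

```python
import random

def mc_unavailability(failures_per_year, repair_hours, years=2000, rng=None):
    """Sequential Monte Carlo for one component: exponential up-times and
    repair times, returning the estimated annual outage hours."""
    rng = rng or random.Random(0)
    t, down = 0.0, 0.0
    horizon = years * 8760.0                      # simulated hours
    while t < horizon:
        t += rng.expovariate(failures_per_year / 8760.0)   # time to failure
        if t >= horizon:
            break
        r = rng.expovariate(1.0 / repair_hours)            # repair duration
        down += r
        t += r
    return down / years
```

With 0.2 failures/year and 5 h mean repair, the analytic expectation is about 1 outage hour per year, which the simulation should reproduce to within sampling error.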
Determination of the detective quantum efficiency of gamma camera systems: a Monte Carlo study.
Eriksson, Ida; Starck, Sven-Ake; Båth, Magnus
2010-01-01
The purpose of the present work was to investigate the validity of using the Monte Carlo technique for determining the detective quantum efficiency (DQE) of a gamma camera system and to use this technique in investigating the DQE behaviour of a gamma camera system and its dependency on a number of relevant parameters. The Monte Carlo-based software SIMIND, simulating a complete gamma camera system, was used in the present study. The modulation transfer function (MTF) of the system was determined from simulated images of a point source of (99m)Tc, positioned at different depths in a water phantom. Simulations were performed using different collimators and energy windows. The MTF of the system was combined with the photon yield and the sensitivity, obtained from the simulations, to form the frequency-dependent DQE of the system. As figure-of-merit (FOM), the integral of the 2D DQE was used. The simulated DQE curves agreed well with published data. As expected, there was a strong dependency of the shape and magnitude of the DQE curve on the collimator, energy window and imaging position. The highest FOM was obtained for a lower energy threshold of 127 keV for objects close to the detector and 131 keV for objects deeper in the phantom, supporting an asymmetric window setting to reduce scatter. The Monte Carlo software SIMIND can be used to determine the DQE of a gamma camera system from a simulated point source alone. The optimal DQE results in the present study were obtained for parameter settings close to the clinically used settings.
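The MTF step in this pipeline is just a normalized Fourier magnitude of the system's spread function, which is why a simulated point source suffices. A minimal sketch with a synthetic Gaussian line-spread function (not SIMIND output) follows:

```python
import cmath
import math

def mtf_from_lsf(lsf):
    """MTF as the normalized magnitude of the DFT of a line-spread function;
    a broader LSF (e.g. a deeper source) gives a faster-falling MTF."""
    n = len(lsf)
    spectrum = []
    for k in range(n):
        s = sum(v * cmath.exp(-2j * math.pi * k * j / n)
                for j, v in enumerate(lsf))
        spectrum.append(abs(s))
    return [m / spectrum[0] for m in spectrum]   # normalize to MTF(0) = 1

# Synthetic Gaussian LSF (sigma = 4 pixels) standing in for a simulated point source
lsf = [math.exp(-0.5 * ((j - 32) / 4.0) ** 2) for j in range(64)]
mtf = mtf_from_lsf(lsf)
```

Combining such an MTF with the photon yield and sensitivity, as described above, gives the frequency-dependent DQE; the FFT would replace this O(n²) DFT in practice.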
Energy Technology Data Exchange (ETDEWEB)
Zucca Aparcio, D.; Perez Moreno, J. M.; Fernandez Leton, P.; Garcia Ruiz-Zorrila, J.
2016-10-01
The commissioning procedures of a Monte Carlo (MC) treatment planning system for photon beams from a dedicated stereotactic body radiosurgery (SBRT) unit are reported in this document. XVMC is the MC code available in the evaluated treatment planning system (BrainLAB iPlan RT Dose); it is based on virtual source models that simulate the primary and scattered radiation, as well as the electron contamination, using Gaussian components whose modelling requires measurements of dose profiles, percentage depth dose curves and output factors, performed both in water and in air. The dosimetric accuracy of the particle transport simulation has been analyzed by validating the calculations in homogeneous and heterogeneous media against measurements made under the same conditions as the dose calculation, and by checking the stochastic behaviour of the Monte Carlo calculations when different statistical variances are used. Likewise, it has been verified how the planning system performs the conversion from dose-to-medium to dose-to-water, applying the water-to-medium stopping power ratio, in the presence of heterogeneities where this phenomenon is relevant, such as high-density media (cortical bone). (Author)
Self-consistent kinetic lattice Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Horsfield, A.; Dunham, S.; Fujitani, Hideaki
1999-07-01
The authors present a brief description of a formalism for modeling point defect diffusion in crystalline systems using a Monte Carlo technique. The main approximations required to construct a practical scheme are briefly discussed, with special emphasis on the proper treatment of charged dopants and defects. This is followed by tight binding calculations of the diffusion barrier heights for charged vacancies. Finally, an application of the kinetic lattice Monte Carlo method to vacancy diffusion is presented.
Alpha Eigenvalue Estimation from Dynamic Monte Carlo Calculation for Subcritical Systems
Energy Technology Data Exchange (ETDEWEB)
Shaukat, Nadeem; Shim, Hyung Jin; Jang, Sang Hoon [Seoul National University, Seoul (Korea, Republic of)
2016-05-15
The dynamic Monte Carlo (DMC) method has been used in the TART code for α-eigenvalue calculations, with a unique scheme for measuring α in time-stepwise Monte Carlo simulations. For off-critical systems the neutron population changes exponentially over time, so it is uniformly combed back to the population present at the beginning of each time boundary. In this study, the conventional dynamic Monte Carlo method has been implemented in McCARD. Instead of cycles, the simulation is divided into time intervals; at the end of each interval, population control is applied to the banked neutrons, discarding randomly selected neutrons until the population matches the initial number of neutron histories. The prompt neutron decay constant α is estimated from the DMC algorithm for subcritical systems. The effectiveness of the method is examined for two-group infinite homogeneous problems with varying k-values. Comparisons with the analytical solutions show that the results agree well for each k-value.
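The uniform combing step described above can be sketched generically: lay a "comb" of equally spaced teeth over the cumulative particle weight so that exactly the target number of particles survives while total weight is preserved. Particle representation and names are illustrative, not McCARD's internals:

```python
import random

def comb_population(neutrons, target, rng):
    """Uniform combing: from a list of (weight, state) particles, keep
    exactly `target` survivors, each carrying equal weight, so that the
    total weight is preserved in expectation and exactly here."""
    total_w = sum(w for w, _ in neutrons)
    spacing = total_w / target
    tooth = rng.random() * spacing       # random comb offset in [0, spacing)
    kept, acc = [], 0.0
    for w, state in neutrons:
        acc += w
        while tooth < acc:               # every tooth inside this particle's
            kept.append((spacing, state))  # weight interval clones/keeps it
            tooth += spacing
    return kept
```

Applying this at the end of every time interval removes the exponential growth or decay of the bank without biasing the estimate of α.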
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
Algorithm and application of Monte Carlo simulation for multi-dispersive copolymerization system
Institute of Scientific and Technical Information of China (English)
凌君; 沈之荃; 陈万里
2002-01-01
A Monte Carlo algorithm has been established for multi-dispersive copolymerization systems, based on experimental copolymer molecular weight and dispersity data from GPC measurements. The program simulates the insertion of every monomer unit and records the structure and microscopic sequence of every chain of any length. It has been applied successfully to the ring-opening copolymerization of 2,2-dimethyltrimethylene carbonate (DTC) with ε-caprolactone (ε-CL). The simulation agrees with the experimental results and provides microscopic data, such as triad fractions and homopolymer segment lengths, which are difficult to obtain by experiment. The algorithm also provides a uniform framework for copolymerization studies under other, more complicated mechanisms.
Event-chain Monte Carlo algorithms for hard-sphere systems.
Bernard, Etienne P; Krauth, Werner; Wilson, David B
2009-11-01
In this paper we present the event-chain algorithms, which are fast Markov-chain Monte Carlo methods for hard spheres and related systems. In a single move of these rejection-free methods, an arbitrarily long chain of particles is displaced, and long-range coherent motion can be induced. Numerical simulations show that event-chain algorithms clearly outperform the conventional Metropolis method. Irreversible versions of the algorithms, which violate detailed balance, improve the speed of the method even further. We also compare our method with a recent implementation of the molecular-dynamics algorithm.
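A bare-bones version of a straight event-chain move for hard disks might be sketched as follows (chains along +x only, O(N) collision search, no cell lists; production implementations are considerably more refined):

```python
import math
import random

def event_chain_sweep(pos, L, sigma, ell, n_chains, rng):
    """Straight event-chain moves along +x for hard disks of diameter sigma
    in an L x L periodic box (a minimal sketch).

    Each chain starts at a random disk with displacement budget ell; a disk
    moves until it strikes another, which then carries the chain onward."""
    n = len(pos)
    for _ in range(n_chains):
        i = rng.randrange(n)
        remaining = ell
        while remaining > 0.0:
            s_min, j_min = remaining, None
            xi, yi = pos[i]
            for j in range(n):
                if j == i:
                    continue
                dy = (pos[j][1] - yi + L / 2) % L - L / 2  # nearest image in y
                if abs(dy) >= sigma:
                    continue                               # never collides
                dx = (pos[j][0] - xi) % L                  # distance ahead in x
                s = dx - math.sqrt(sigma ** 2 - dy ** 2)   # free flight to contact
                if 0.0 <= s < s_min:
                    s_min, j_min = s, j
            pos[i] = ((xi + s_min) % L, yi)
            remaining -= s_min
            if j_min is None:
                break           # budget exhausted without a collision
            i = j_min           # "lifting": the struck disk continues the chain
    return pos
```

The move is rejection-free and, because every chain pushes in the same direction, it realises the irreversible, detailed-balance-violating variant mentioned in the abstract.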
Monte Carlo simulation of the jet stream process. [planetary/satellite systems formation
Ip, W.-H.
1977-01-01
A Monte Carlo model is formulated to simulate the orbital evolution of a system of colliding particles. It is found that inelastic collision alone (even if the impact energy dissipation from collision is very large) does not lead to the formation of a narrow ring-like jet stream; instead, a flat disk structure, similar to Saturn's rings, usually results. To produce the radial focusing effect, it is argued that additional dynamical effects, which would strengthen the collisional interaction between the particles in near-circular orbits, are needed.
Evaluation of effective dose with chest digital tomosynthesis system using Monte Carlo simulation
Kim, Dohyeon; Jo, Byungdu; Lee, Youngjin; Park, Su-Jin; Lee, Dong-Hoon; Kim, Hee-Joung
2015-03-01
The chest digital tomosynthesis (CDT) system has recently been introduced and studied. It offers the potential of a substantial improvement over conventional chest radiography for lung-nodule detection while reducing radiation dose through its limited angular range. The PC-based Monte Carlo program (PCXMC) simulation toolkit (STUK, Helsinki, Finland) is widely used to evaluate radiation dose in CDT systems, but it has two significant limitations: it cannot describe a model for every individual patient, and it does not describe the X-ray beam spectrum accurately. The Geant4 Application for Tomographic Emission (GATE) simulation, in contrast, can describe phantoms of various sizes for individual patients together with the proper X-ray spectrum. However, few studies have evaluated effective dose in CDT systems with the GATE Monte Carlo simulation toolkit. The purpose of this study was to evaluate the effective dose in a virtual infant chest phantom for the posterior-anterior (PA) view of a CDT system using GATE simulation. We obtained the effective dose at different tube angles by applying the dose-actor function in GATE, which is commonly used in medical radiation dosimetry. The results indicated that GATE simulation is useful for estimating the distribution of absorbed dose, and we obtained an acceptable distribution of effective dose at each projection. These results indicate that GATE simulation can be an alternative method for calculating effective dose in CDT applications.
Schreiber, Eric C; Chang, Sha X
2012-08-01
Microbeam radiation therapy (MRT) is an experimental radiotherapy technique that has shown potent antitumor effects with minimal damage to normal tissue in animal studies. This unique form of radiation is currently produced only in a few large synchrotron accelerator research facilities in the world. To promote widespread translational research on this promising treatment technology, we have proposed, and are in the initial development stages of, a compact MRT system based on carbon nanotube field emission x-ray technology. We report on a Monte Carlo-based feasibility study of the compact MRT system design. Monte Carlo calculations were performed using EGSnrc-based codes. The proposed small-animal research MRT device design includes carbon nanotube cathodes shaped to match the corresponding MRT collimator apertures, a common reflection anode with filter, and an MRT collimator. Each collimator aperture is sized to deliver a beam width ranging from 30 to 200 μm at 18.6 cm source-to-axis distance. Design parameters studied with Monte Carlo include electron energy, cathode design, anode angle, filtration, and collimator design. Calculations were performed for single and multibeam configurations. Increasing the energy from 100 kVp to 160 kVp increased the photon fluence through the collimator by a factor of 1.7. Both energies produced a largely uniform fluence along the long dimension of the microbeam, with 5% decreases in intensity near the edges. The isocentric dose rate for 160 kVp was calculated to be 700 Gy/min/A in the center of a 3 cm diameter target. Scatter contributions resulting from collimator size were found to produce only small (<7%) changes in the dose rate for field widths greater than 50 μm. Dose vs depth was weakly dependent on filtration material. The peak-to-valley ratio varied from 10 to 100 as the separation between adjacent microbeams varied from 150 to 1000 μm. Monte Carlo simulations demonstrate that the proposed compact MRT system
CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2014-06-01
The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High-fidelity MC simulation coupled with multi-physics simulation has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems, developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aims, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear systems.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
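The telescoping identity behind MLMC, E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], can be illustrated on a much simpler problem than a PDE: estimating E[S_T] for a geometric Brownian motion with Euler discretizations of step h_l = T/2^l, coupling fine and coarse paths through shared Brownian increments. All parameters below are illustrative:

```python
import math
import random

def _euler_pair(level, T, rng, mu, sigma, s0):
    """Fine Euler path with 2**level steps and coarse path with half as many,
    driven by the same Brownian increments; returns (P_fine, P_coarse)."""
    nf = 2 ** level
    hf = T / nf
    sf = sc = s0
    for _ in range(nf // 2):
        dw1 = math.sqrt(hf) * rng.gauss(0.0, 1.0)
        dw2 = math.sqrt(hf) * rng.gauss(0.0, 1.0)
        sf += mu * sf * hf + sigma * sf * dw1
        sf += mu * sf * hf + sigma * sf * dw2
        sc += mu * sc * 2 * hf + sigma * sc * (dw1 + dw2)  # coarse step 2*hf
    return sf, sc

def mlmc_estimate(n_levels, n_per_level, T=1.0, mu=0.05, sigma=0.2, s0=1.0,
                  rng=random.Random(7)):
    """Multilevel Monte Carlo estimate of E[S_T] via the telescoping sum."""
    est = 0.0
    n0 = n_per_level[0]
    for _ in range(n0):                       # level 0: single-step Euler
        dw = math.sqrt(T) * rng.gauss(0.0, 1.0)
        est += (s0 + mu * s0 * T + sigma * s0 * dw) / n0
    for level in range(1, n_levels + 1):      # correction terms E[P_l - P_{l-1}]
        nl = n_per_level[level]
        for _ in range(nl):
            pf, pc = _euler_pair(level, T, rng, mu, sigma, s0)
            est += (pf - pc) / nl
    return est
```

Fewer samples are needed on the expensive fine levels because Var[P_l − P_{l−1}] decays with level; that is the source of the cost reduction the abstract refers to.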
Validation of MTF measurement for CBCT system using Monte Carlo simulations
Hao, Ting; Gao, Feng; Zhao, Huijuan; Zhou, Zhongxing
2016-03-01
To evaluate the spatial-resolution performance of a cone beam computed tomography (CBCT) system, accurate measurement of the modulation transfer function (MTF) is required. This accuracy depends on the MTF measurement method and on the CBCT reconstruction algorithm. In this work, the accuracy of MTF measurement for a CBCT system using a wire phantom is validated by Monte Carlo simulation. The Monte Carlo simulation software BEAMnrc/EGSnrc was employed to model the X-ray beams and radiation transport. Tungsten wires were simulated with different diameters and radial distances from the axis of rotation, and filtered back projection was used to reconstruct images from a 360° acquisition. The MTFs for four reconstruction kernels were measured from the corresponding reconstructed wire images; the Ram-Lak kernel yielded a higher MTF than the cosine, Hamming and Hann kernels. The results demonstrated that the MTF degrades radially away from the axis of rotation. This study suggests that the MTF of a CBCT system can be increased by optimizing the scanning settings and reconstruction parameters.
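The wire-phantom MTF measurement rests on a standard relation: the MTF is the normalised magnitude of the Fourier transform of the line spread function (LSF) extracted from the reconstructed wire image. A sketch (NumPy assumed available; extraction of the LSF from the image is omitted):

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """Modulation transfer function from a sampled line spread function.

    Returns spatial frequencies (cycles/mm) and the MTF, normalised to 1
    at zero frequency."""
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm)
    return freqs, mtf

# a Gaussian LSF yields a Gaussian-shaped MTF
lsf = np.exp(-((np.arange(64) - 32.0) ** 2) / (2.0 * 4.0 ** 2))
freqs, mtf = mtf_from_lsf(lsf, pixel_mm=0.1)
```

A broader LSF (e.g. a wire farther from the rotation axis) produces a faster-falling MTF, which is the radial degradation the abstract reports.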
Blind receiver for OFDM systems via sequential Monte Carlo in factor graphs
Institute of Scientific and Technical Information of China (English)
CHEN Rong; ZHANG Hai-bin; XU You-yun; LIU Xin-zhao
2007-01-01
Estimation and detection algorithms for orthogonal frequency division multiplexing (OFDM) systems can be developed based on sum-product algorithms, which operate by message passing in factor graphs. In this paper, we apply Monte Carlo sampling to factor graphs, so that the integrals in the sum-product algorithm can be approximated by sums, which reduces complexity. A blind receiver for OFDM systems can then be derived via sequential Monte Carlo (SMC) in factor graphs; the previous SMC blind receiver can be regarded as a special case of the sum-product algorithm with sampling. The previous SMC blind receiver for OFDM systems needs to generate samples of the channel vector, assuming the channel has an a priori Gaussian distribution. In the newly built blind receiver, we instead generate samples of virtual pilots, from which the channel vector can easily be computed. Because the virtual-pilot space is much smaller than the channel-vector space, only a small number of samples is necessary, making blind detection much simpler. Furthermore, only one pilot tone is needed to resolve the phase ambiguity, and differential encoding is no longer required. Finally, computer simulations demonstrate that the proposed receiver performs well while providing a significant complexity reduction.
Equilibrium Statistics: Monte Carlo Methods
Kröger, Martin
Monte Carlo methods use random numbers, or 'random' sequences, to sample from a known distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to their being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo 'moves', required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
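For topic (ii), a minimal plain Monte Carlo quadrature illustrates the idea: sample uniformly, average, and attach a standard-error estimate.

```python
import random

def mc_integrate(f, a, b, n, rng=random):
    """Plain Monte Carlo estimate of the 1-D integral of f over [a, b]:
    (b - a) times the sample mean of f at uniform points, plus a
    standard-error estimate from the sample variance."""
    total = total_sq = 0.0
    for _ in range(n):
        y = f(a + (b - a) * rng.random())
        total += y
        total_sq += y * y
    mean = total / n
    var = total_sq / n - mean * mean
    return (b - a) * mean, (b - a) * (var / n) ** 0.5
```

The error falls off as n^(-1/2) independently of dimension, which is why the same recipe remains attractive for the high-dimensional integrals the chapter has in mind.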
Energy Technology Data Exchange (ETDEWEB)
Serena, P. A. [Instituto de Ciencias de Materiales de Madrid, Madrid (Spain); Costa-Kraemer, J. L. [Instituto de Microelectronica de Madrid, Madrid (Spain)
2001-03-01
A Monte Carlo algorithm suitable for studying systems described by an anisotropic Heisenberg Hamiltonian is presented. This technique has been tested successfully on 3D and 2D systems, illustrating how magnetic properties depend on dimensionality and coordination number. We have found that the magnetic properties of constrictions differ from those of the bulk. In particular, spin fluctuations are considerably larger than those calculated for bulk materials. In addition, domain walls are strongly modified when a constriction is present, with a decrease of the domain-wall width. This decrease is explained in terms of previous theoretical work.
Monte Carlo Hamiltonian: Linear Potentials
Institute of Scientific and Technical Information of China (English)
LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2, and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as low Earth orbit, lunar orbit, and the like) from proton bombardment, based on the results of heavy-ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate upset predictions.
Finite Size Effect in Path Integral Monte Carlo Simulations of 4He Systems
Institute of Scientific and Technical Information of China (English)
ZHAO Xing-Wen; CHENG Xin-Lu
2008-01-01
Path integral Monte Carlo (PIMC) simulation is a powerful computational method for studying interacting quantum systems at finite temperatures. In this work, PIMC is applied to study the finite-size effect in simulated 4He systems. We determine the energy as a function of temperature at saturated-vapor-pressure (SVP) conditions in the temperature range T ∈ [1.0 K, 4.0 K], and the equation of state (EOS) in the ground state, for systems consisting of 32, 64 and 128 4He atoms, respectively. We find that the energy at SVP is influenced significantly by the size of the simulated system in the range T ∈ [2.1 K, 3.0 K], and that the larger the system, the better the agreement with experimental values, whereas the EOS appears to be insensitive to system size.
Monte Carlo filters for identification of nonlinear structural dynamical systems
Indian Academy of Sciences (India)
C S Manohar; D Roy
2006-08-01
The problem of identification of parameters of nonlinear structures using dynamic state estimation techniques is considered. The process equations are derived based on principles of mechanics and are augmented by mathematical models that relate a set of noisy observations to state variables of the system. The set of structural parameters to be identified is declared as an additional set of state variables. Both the process equation and the measurement equations are taken to be nonlinear in the state variables and contaminated by additive and (or) multiplicative Gaussian white noise processes. The problem of determining the posterior probability density function of the state variables conditioned on all available information is considered. The utility of three recursive Monte Carlo simulation-based filters, namely, a probability density function-based Monte Carlo filter, a Bayesian bootstrap filter and a filter based on sequential importance sampling, to solve this problem is explored. The state equations are discretized using certain variations of stochastic Taylor expansions enabling the incorporation of a class of non-smooth functions within the process equations. Illustrative examples on identification of the nonlinear stiffness parameter of a Duffing oscillator and the friction parameter in a Coulomb oscillator are presented.
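Of the three filters, the Bayesian bootstrap filter is the simplest to sketch. The toy model below (random-walk state, additive Gaussian measurement noise) is an illustrative stand-in for the structural models in the paper:

```python
import math
import random

def bootstrap_filter(ys, n_particles=500, q=0.1, r=0.5, rng=random.Random(1)):
    """Bayesian bootstrap (particle) filter for the toy model
        x_k = x_{k-1} + N(0, q^2)   (process equation)
        y_k = x_k + N(0, r^2)       (measurement equation)
    returning the filtered posterior means (a minimal sketch of the method)."""
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in ys:
        # 1. propagate every particle through the process equation
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # 2. weight by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * x for w, x in zip(weights, particles)))
        # 3. multinomial resampling restores an equally weighted particle set
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means
```

In the identification setting of the paper, the unknown stiffness or friction parameter would simply be appended to the state vector and carried by each particle.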
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed
Haji Ali, Abdul Lateef
2016-01-08
I discuss using single-level and multilevel Monte Carlo methods to compute quantities of interest of a stochastic particle system in the mean field. In this context, the stochastic particles follow a coupled system of Itô stochastic differential equations (SDEs). Moreover, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity. I start by recalling the results of applying different versions of multilevel Monte Carlo (MLMC) for particle systems, both with respect to time steps and the number of particles, and using a partitioning estimator. Next, I expand on these results by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates.
Principle of Line Configuration and Monte-Carlo Simulation for Shared Multi-Channel System
Institute of Scientific and Technical Information of China (English)
MIAO Changyun; DAI Jufeng; BAI Zhihui
2005-01-01
Based on the steady-state solution of a finite-state birth and death process, the principle of line configuration for a shared multi-channel system is analyzed. Equations for the call congestion ratio and the channel utilization ratio are deduced, and a visualized data analysis is presented. The analysis indicates that, when calculated with the proposed equations, the overestimate of the call congestion ratio and the channel utilization ratio can be corrected, thereby saving about 20% of the channel cost in a small system. Line configuration methods implemented with MATLAB programming are provided. In order to show the dynamic running of the system generally and intuitively, and to analyze and improve it, the system is simulated using the M/M/n/n/m queuing model and the Monte Carlo method. The simulation also validates the correctness of the theoretical analysis and of the optimized configuration method.
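For the finite-source loss system M/M/n/n/m, the birth-death steady state gives the congestion ratios in closed form. The sketch below uses the standard Engset expressions; the paper's own equations are not reproduced here, so treat the exact form as an assumption:

```python
from math import comb

def engset_congestion(m, n, a):
    """Steady-state measures of the M/M/n/n/m loss system with m sources,
    n channels and per-idle-source offered traffic a = lambda/mu.

    Returns (time congestion, call congestion, mean channel utilization),
    following the standard birth-death / Engset solution (a sketch)."""
    # unnormalised state probabilities p_k ~ C(m, k) a^k, k busy channels
    p = [comb(m, k) * a ** k for k in range(n + 1)]
    z = sum(p)
    p = [x / z for x in p]
    time_cong = p[n]                      # probability all n channels busy
    # call congestion = blocking seen by arrivals: Engset with m - 1 sources
    q = [comb(m - 1, k) * a ** k for k in range(n + 1)]
    call_cong = q[n] / sum(q)
    util = sum(k * pk for k, pk in enumerate(p)) / n
    return time_cong, call_cong, util
```

Call congestion (blocking experienced by arriving calls) is strictly below time congestion for a finite source population, which is exactly the overestimate the equal-ratio approximation introduces.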
A quantum Monte Carlo study of mono(benzene) TM and bis(benzene) TM systems
Bennett, M. Chandler; Kulahlioglu, A. H.; Mitas, L.
2017-01-01
We present a study of mono(benzene) TM and bis(benzene) TM systems, where TM = {Mo, W}. We calculate the binding energies by quantum Monte Carlo (QMC) approaches and compare the results with other methods and available experiments. The orbitals for the determinantal part of each trial wave function were generated from several types of DFT functionals in order to optimize for fixed-node errors. We estimate and compare the size of the fixed-node errors for both the Mo and W systems with regard to the electron density and degree of localization in these systems. For the W systems we provide benchmarking results of the binding energies, given that experimental data is not available.
A Quantum Monte Carlo Study of mono(benzene)TM and bis(benzene)TM Systems
Bennett, M Chandler; Mitas, Lubos
2016-01-01
We present a study of mono(benzene)TM and bis(benzene)TM systems, where TM={Mo,W}. We calculate the binding energies by quantum Monte Carlo (QMC) approaches and compare the results with other methods and available experiments. The orbitals for the determinantal part of each trial wave function were generated from several types of DFT in order to optimize for fixed-node errors. We estimate and compare the size of the fixed-node errors for both the Mo and W systems with regard to the electron density and degree of localization in these systems. For the W systems we provide benchmarking results of the binding energies, given that experimental data is not available.
Sample Duplication Method for Monte Carlo Simulation of Large Reaction-Diffusion System
Institute of Scientific and Technical Information of China (English)
张红东; 陆建明; 杨玉良
1994-01-01
A sample duplication method for the Monte Carlo simulation of large reaction-diffusion systems is proposed in this paper. It is proved that the sample duplication method effectively raises the efficiency and statistical precision of the simulation without changing the kinetic behaviour of the reaction-diffusion system or the critical condition for the bifurcation of the steady states. The method has been applied to the simulation of the spatial and temporal dissipative structure of the Brusselator under the Dirichlet boundary condition. The results presented in this paper show that the sample duplication method provides a very efficient way to solve the master equation of large reaction-diffusion systems. For a two-dimensional system, the computation time is reduced by at least two orders of magnitude compared to the algorithm reported in the literature.
Monte Carlo Treatment Planning for Molecular Targeted Radiotherapy within the MINERVA System
Energy Technology Data Exchange (ETDEWEB)
Lehmann, J; Siantar, C H; Wessol, D E; Wemple, C A; Nigg, D; Cogliati, J; Daly, T; Descalle, M; Flickinger, T; Pletcher, D; DeNardo, G
2004-09-22
The aim of this project is to extend accurate and patient-specific treatment planning to new treatment modalities, such as molecular targeted radiation therapy, incorporating previously crafted and proven Monte Carlo and deterministic computation methods. A flexible software environment is being created that allows planning radiation treatment for these new modalities and combining different forms of radiation treatment with consideration of biological effects. The system uses common input interfaces, medical image sets for definition of patient geometry, and dose reporting protocols. Previously, the Idaho National Engineering and Environmental Laboratory (INEEL), Montana State University (MSU), and Lawrence Livermore National Laboratory (LLNL) had accrued experience in the development and application of Monte Carlo-based, three-dimensional, computational dosimetry and treatment planning tools for radiotherapy in several specialized areas. In particular, INEEL and MSU have developed computational dosimetry systems for neutron radiotherapy and neutron capture therapy, while LLNL has developed the PEREGRINE computational system for external beam photon-electron therapy. Building on that experience, the INEEL and MSU are developing the MINERVA (Modality Inclusive Environment for Radiotherapeutic Variable Analysis) software system as a general framework for computational dosimetry and treatment planning for a variety of emerging forms of radiotherapy. In collaboration with this development, LLNL has extended its PEREGRINE code to accommodate internal sources for molecular targeted radiotherapy (MTR), and has interfaced it with the plug-in architecture of MINERVA. Results from the extended PEREGRINE code have been compared to published data from other codes, and found to be in general agreement (EGS4 - 2%, MCNP - 10%)(Descalle et al. 2003). The code is currently being benchmarked against experimental data. The interpatient variability of the drug pharmacokinetics in MTR
Monte Carlo treatment planning for molecular targeted radiotherapy within the MINERVA system
Energy Technology Data Exchange (ETDEWEB)
Lehmann, Joerg [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Siantar, Christine Hartmann [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Wessol, Daniel E [Idaho National Engineering and Environmental Laboratory, PO Box 1625, Idaho Falls, ID 83415-3885 (United States); Wemple, Charles A [Idaho National Engineering and Environmental Laboratory, PO Box 1625, Idaho Falls, ID 83415-3885 (United States); Nigg, David [Idaho National Engineering and Environmental Laboratory, PO Box 1625, Idaho Falls, ID 83415-3885 (United States); Cogliati, Josh [Department of Computer Science, Montana State University, Bozeman, MT 59717 (United States); Daly, Tom [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Descalle, Marie-Anne [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Flickinger, Terry [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); Pletcher, David [University of California, Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550 (United States); DeNardo, Gerald [University of California Davis, School of Medicine, Sacramento, CA 95817 (United States)
2005-03-07
The aim of this project is to extend accurate and patient-specific treatment planning to new treatment modalities, such as molecular targeted radiation therapy, incorporating previously crafted and proven Monte Carlo and deterministic computation methods. A flexible software environment is being created that allows planning radiation treatment for these new modalities and combining different forms of radiation treatment with consideration of biological effects. The system uses common input interfaces, medical image sets for definition of patient geometry and dose reporting protocols. Previously, the Idaho National Engineering and Environmental Laboratory (INEEL), Montana State University (MSU) and Lawrence Livermore National Laboratory (LLNL) had accrued experience in the development and application of Monte Carlo based, three-dimensional, computational dosimetry and treatment planning tools for radiotherapy in several specialized areas. In particular, INEEL and MSU have developed computational dosimetry systems for neutron radiotherapy and neutron capture therapy, while LLNL has developed the PEREGRINE computational system for external beam photon-electron therapy. Building on that experience, the INEEL and MSU are developing the MINERVA (modality inclusive environment for radiotherapeutic variable analysis) software system as a general framework for computational dosimetry and treatment planning for a variety of emerging forms of radiotherapy. In collaboration with this development, LLNL has extended its PEREGRINE code to accommodate internal sources for molecular targeted radiotherapy (MTR), and has interfaced it with the plugin architecture of MINERVA. Results from the extended PEREGRINE code have been compared to published data from other codes, and found to be in general agreement (EGS4-2%, MCNP-10%) (Descalle et al 2003 Cancer Biother. Radiopharm. 18 71-9). The code is currently being benchmarked against experimental data. The interpatient variability of the
Techno-economic and Monte Carlo probabilistic analysis of microalgae biofuel production system.
Batan, Liaw Y; Graff, Gregory D; Bradley, Thomas H
2016-11-01
This study characterizes the technical and economic feasibility of an enclosed photobioreactor microalgae system with an annual production of 37.85 million liters (10 million gallons) of biofuel. The analysis breaks down the capital investment, the operating costs and the production cost per unit of algal diesel. The economic modelling shows total production costs for algal raw oil and diesel of $3.46 and $3.69 per liter, respectively. Additionally, the effects of co-product credits and their impact on the economic performance of the algae-to-biofuel system are discussed. A Monte Carlo methodology is used to address price and cost projections and to simulate scenarios with probabilities of financial performance and profit for the analyzed model. Different markets for the allocation of co-products are shown to shift the economic viability of the algal biofuel system significantly.
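The Monte Carlo step of such a techno-economic analysis propagates input-cost uncertainty to the unit production cost. In the sketch below, every distribution and parameter value is an illustrative assumption of mine, not the paper's data; only the 37.85 ML/yr output is taken from the study:

```python
import random
import statistics

def mc_cost_per_liter(n_trials=20000, rng=random.Random(42)):
    """Monte Carlo propagation of cost uncertainty to $/L production cost.

    Triangular distributions stand in for the unknown input distributions;
    all dollar figures are hypothetical placeholders."""
    output_l = 37.85e6                                    # L/yr, from the study
    costs = []
    for _ in range(n_trials):
        capex_annual = rng.triangular(8e6, 14e6, 10e6)    # $/yr, amortised
        opex = rng.triangular(15e6, 25e6, 20e6)           # $/yr
        coproduct_credit = rng.triangular(2e6, 8e6, 5e6)  # $/yr
        costs.append((capex_annual + opex - coproduct_credit) / output_l)
    return statistics.mean(costs), statistics.quantiles(costs, n=20)
```

The returned quantiles give the kind of probabilistic profitability statement (e.g. a 5th-95th percentile cost band) that the study uses to compare co-product market scenarios.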
Monte Carlo Particle Lists: MCPL
Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi
2016-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
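The particle-list idea can be illustrated with a minimal fixed-size binary record. This is NOT the actual MCPL file layout (which has a header, options for compression and unit choices, and portable C tooling); the field choice here is an assumption purely for illustration.

```python
import struct

# Illustrative fixed-size record: particle type code (e.g. a PDG number),
# kinetic energy, position (x, y, z) and direction (ux, uy, uz).
# "<i7d" = little-endian int32 followed by seven float64 values.
RECORD = struct.Struct("<i7d")

def write_particles(path, particles):
    with open(path, "wb") as f:
        for p in particles:
            f.write(RECORD.pack(p["pdg"], p["ekin"], *p["pos"], *p["dir"]))

def read_particles(path):
    out = []
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            pdg, ekin, x, y, z, ux, uy, uz = RECORD.unpack(chunk)
            out.append({"pdg": pdg, "ekin": ekin,
                        "pos": (x, y, z), "dir": (ux, uy, uz)})
    return out
```

A fixed record size is what makes such files easy to stream, seek, and convert between simulation packages.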
Watanabe, Hiroshi; Yukawa, Satoshi; Novotny, M A; Ito, Nobuyasu
2006-08-01
We construct asymptotic arguments for the relative efficiency of rejection-free Monte Carlo (RFMC) methods compared to the standard MC method. We find that the efficiency is proportional to exp(const*beta) in the Ising, sqrt(beta) in the classical XY, and beta in the classical Heisenberg spin systems with inverse temperature beta, regardless of the dimension. The efficiency in hard-particle systems is also obtained, and found to be proportional to (rho_cp - rho)^(-d), with the closest-packing density rho_cp, density rho, and dimension d of the system. We construct and implement a rejection-free Monte Carlo method for the hard-disk system. The RFMC has a greater computational efficiency at high densities, and the density dependence of the efficiency is as predicted by our arguments.
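A rejection-free update can be sketched for a periodic 1D Ising chain: instead of proposing flips and rejecting most of them, every step selects a spin with probability proportional to its Metropolis flip rate and weights the visited state by its expected residence time 1/R, where R is the total flip rate. This is a toy illustration of the general idea, not the authors' hard-disk algorithm; parameter values are arbitrary.

```python
import math, random

def exact_energy_per_spin(N, beta):
    # Exact energy per spin of the periodic 1D Ising chain (J = 1).
    t = math.tanh(beta)
    return -(t + t ** (N - 1)) / (1 + t ** N)

def rejection_free_ising(N=24, beta=0.4, steps=120_000, burn=20_000, seed=2):
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(N)]

    def rate(i):  # Metropolis flip rate of spin i in the current state
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        return min(1.0, math.exp(-beta * dE))

    p = [rate(i) for i in range(N)]
    energy = -sum(s[i] * s[(i + 1) % N] for i in range(N))
    e_sum = t_sum = 0.0
    for step in range(steps):
        R = sum(p)
        dt = 1.0 / R                      # expected residence time here
        if step >= burn:
            e_sum += energy * dt
            t_sum += dt
        i = rng.choices(range(N), weights=p)[0]   # a move is always made
        energy += 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        s[i] = -s[i]
        for j in (i - 1, i, (i + 1) % N):         # only local rates change
            p[j] = rate(j)
    return e_sum / (t_sum * N)
```

The time-weighted average reproduces the standard Metropolis equilibrium average, while every step changes the configuration, which is exactly why the method pays off when acceptance rates are tiny.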
Energy Technology Data Exchange (ETDEWEB)
Blazy-Aubignac, L
2007-09-15
Treatment planning systems (TPS) occupy a key position in a radiotherapy service: they carry out the predictive calculation of the dose distribution and the treatment duration. Traditionally, quality control of the calculated dose distributions relies on comparisons with dose distributions measured on the treatment device. This thesis proposes to replace these dosimetric measurements with reference dose calculations obtained with the PENELOPE Monte Carlo code. Monte Carlo simulations offer a broad choice of test configurations and make it possible to envisage quality control of the dosimetric aspects of a TPS without monopolizing the treatment devices. This quality control, based on Monte Carlo simulations, has been tested on a clinical TPS and has simplified the TPS quality procedures. Being more thorough, more precise and simpler to implement, it could be generalized to every radiotherapy centre. (N.C.)
Directory of Open Access Journals (Sweden)
Tsu-Ming Yeh
2013-10-01
Measurements are required to maintain consistent quality of all finished and semi-finished products in a production line. Many firms in the automobile and general precision industries apply the TS 16949:2009 Technical Specification and the Measurement System Analysis (MSA) manual to establish measurement systems. Gauge repeatability and reproducibility (GR&R) studies are undertaken to verify the measuring ability and quality of the measurement frame, and to continuously improve and maintain the verification process. Nevertheless, implementing GR&R requires considerable time and manpower, and is likely to affect production adversely. In addition, the evaluated GR&R value always differs, owing to the sum of man-made and machine-made variations. Using Monte Carlo simulation on a statistical basis, this study of two case studies from an automobile parts manufacturer predicts the repeatability and reproducibility of the measurement system analysis, determining the probability density function, the distribution of %GR&R, and the related number of distinct categories (ndc). The method can effectively evaluate the possible range of the GR&R of the measurement capability, establishing a prediction model for evaluating the measurement capacity of a measurement system.
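The variance-components idea behind such a %GR&R Monte Carlo can be sketched as follows. The part and gauge standard deviations are invented for illustration, and the estimator is a simplified repeatability-only version (single operator), not the full MSA ANOVA used in a real GR&R study.

```python
import random, statistics

def grr_trial(rng, parts=10, reps=3, sd_part=3.0, sd_gauge=1.0):
    """Simulate one gauge study (assumed variance components) and return
    the estimated %GR&R = 100 * sd_gauge_est / sd_total_est."""
    within, means = [], []
    for _ in range(parts):
        true = rng.gauss(0.0, sd_part)
        xs = [true + rng.gauss(0.0, sd_gauge) for _ in range(reps)]
        means.append(statistics.fmean(xs))
        within.append(statistics.variance(xs))
    var_gauge = statistics.fmean(within)                 # repeatability
    var_part = max(statistics.variance(means) - var_gauge / reps, 0.0)
    return 100.0 * (var_gauge / (var_gauge + var_part)) ** 0.5

def grr_distribution(trials=2000, seed=8):
    """Monte Carlo distribution of %GR&R over repeated simulated studies."""
    rng = random.Random(seed)
    return [grr_trial(rng) for _ in range(trials)]
```

Repeating the simulated study many times yields the spread of %GR&R one should expect from sampling variation alone, which is the quantity the paper characterizes.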
Simulation of Cone Beam CT System Based on Monte Carlo Method
Wang, Yu; Cao, Ruifen; Hu, Liqin; Li, Bingbing
2014-01-01
Adaptive Radiation Therapy (ART) was developed from Image-guided Radiation Therapy (IGRT) and is the trend in photon radiation therapy. To make better use of Cone Beam CT (CBCT) images for ART, a CBCT system model was established with a Monte Carlo program and validated against measurement. The BEAMnrc program was adopted to model the kV x-ray tube. Both ISOURCE-13 and ISOURCE-24 were chosen to simulate the paths of the beam particles. The measured Percentage Depth Dose (PDD) and lateral dose profiles under 1 cm of water were compared with the dose calculated by the DOSXYZnrc program. The calculated PDD agreed to better than 1% within a depth of 10 cm, and more than 85% of the points on the calculated lateral dose profiles agreed within 2%. The validated CBCT system model helps to improve CBCT image quality for dose verification in ART and to assess the concomitant dose risk of CBCT imaging.
Quantum Monte Carlo of atomic and molecular systems with heavy elements
Mitas, Lubos; Kulahlioglu, Adem; Melton, Cody; Bennett, Chandler
2015-03-01
We carry out quantum Monte Carlo calculations of atomic and molecular systems with several heavy atoms such as Mo, W and Bi. In particular, we compare the correlation energies with those of their lighter counterparts in the same column of the periodic table in order to reveal trends with regard to the atomic number Z. One observation is that the correlation energy for isoelectronic valence spaces/states decreases mildly with increasing Z. A similar observation applies to the fixed-node errors, thus supporting our recent finding that the fixed-node error increases with electronic density for the same (or similar) complexity of the wave function and bonding. In addition, for Bi systems we study the impact of spin-orbit coupling on the electronic structure, in particular on binding, correlation and excitation energies.
Space applications of the MITS electron-photon Monte Carlo transport code system
Energy Technology Data Exchange (ETDEWEB)
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A. [Sandia National Labs., Albuquerque, NM (United States); Morel, J.E. [Los Alamos National Lab., NM (United States)
1996-07-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
Miming the cancer-immune system competition by kinetic Monte Carlo simulations
Bianca, Carlo; Lemarchand, Annie
2016-10-01
In order to mimic the interactions between cancer and the immune system at cell scale, we propose a minimal model of cell interactions that is similar to a chemical mechanism including autocatalytic steps. The cells are supposed to bear a quantity called activity that may increase during the interactions. The fluctuations of cell activity are controlled by a so-called thermostat. We develop a kinetic Monte Carlo algorithm to simulate the cell interactions and thermalization of cell activity. The model is able to reproduce the well-known behavior of tumors treated by immunotherapy: the first apparent elimination of the tumor by the immune system is followed by a long equilibrium period and the final escape of cancer from immunosurveillance.
Applications of Monte Carlo Methods in Calculus.
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
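The Riemann-sum and maximization applications mentioned can be sketched in a few lines: integrate by averaging f at uniformly random points, and approximate a maximum by random search.

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the integral of f on [a, b] as (b - a) times the mean of f
    at uniformly random points -- the Monte Carlo analogue of a Riemann sum."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

def mc_maximize(f, a, b, n=100_000, seed=0):
    """Approximate the maximum of f on [a, b] by pure random search."""
    rng = random.Random(seed)
    return max(f(rng.uniform(a, b)) for _ in range(n))
```

The integration error shrinks like 1/sqrt(n) regardless of dimension, which is the pedagogical point such calculus applications usually make.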
Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units
Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark
2012-02-01
We study the implementation of classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing the performance of GPU computing is the proper handling of the data structure. Utilizing multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speed-up of over two thousand times compared with the single-threaded CPU implementation.
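The multi-spin-coding trick mentioned above packs one spin per bit, so that a single machine word holds a whole chain (or one spin from each of many replicas) and bitwise operations act on all of them at once. A minimal single-word sketch for the energy of a periodic 1D Ising chain (not the paper's GPU spin-glass code):

```python
# One 64-bit word holds 64 spins; bit i set means spin i is up (+1).
N = 64
MASK = (1 << N) - 1

def rotl(s, k=1):
    """Cyclic left rotation of the N-bit word (periodic boundary)."""
    return ((s << k) | (s >> (N - k))) & MASK

def energy_bits(s):
    # Antiparallel neighbor pairs are exactly the set bits of s XOR rotl(s);
    # each contributes +1 to the energy, parallel pairs -1 (J = 1).
    anti = bin(s ^ rotl(s)).count("1")
    return 2 * anti - N

def energy_loop(spins):
    """Reference: plain per-spin loop over a list of +/-1 values."""
    return -sum(spins[i] * spins[(i + 1) % N] for i in range(N))
```

One XOR plus a popcount replaces N multiplications, and on a GPU the same word-parallel idea is applied across thousands of threads.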
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
2017-01-01
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, F R; Umrigar, C J
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space.
Monte Carlo dose distributions for radiosurgery
Energy Technology Data Exchange (ETDEWEB)
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica; Sanchez-Doblado, F. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica]|[Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Nunez, L. [Clinica Puerta de Hierro, Madrid (Spain). Servicio de Radiofisica; Arrans, R.; Sanchez-Calzado, J.A.; Errazquin, L. [Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Sanchez-Nieto, B. [Royal Marsden NHS Trust (United Kingdom). Joint Dept. of Physics]|[Inst. of Cancer Research, Sutton, Surrey (United Kingdom)
2001-07-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important in small fields. The Monte Carlo method, however, is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
Validation and simulation of a regulated survey system through Monte Carlo techniques
Directory of Open Access Journals (Sweden)
Asier Lacasta Soto
2015-07-01
Channel flow covers long distances and shows variable temporal behaviour. It is usually regulated by hydraulic elements such as lateral gates to provide a correct water supply. The dynamics of this kind of flow is governed by a system of partial differential equations named the shallow water model, which has to be complemented with a simplified formulation for the gates. The full set of equations forms a non-linear system that can only be solved numerically. Here, an explicit upwind finite-volume numerical scheme able to solve all types of flow regimes is used. The formulation of the hydraulic structures (lateral gates) introduces parameters with some uncertainty. Hence, these parameters are calibrated with a Monte Carlo algorithm, obtaining the coefficients associated with each gate. They are then checked using real cases provided by the monitoring equipment of the Pina de Ebro channel located in Zaragoza.
Monte Carlo Studies for the Calibration System of the GERDA Experiment
Baudis, Laura; Froborg, Francis; Tarka, Michal
2013-01-01
The GERmanium Detector Array, GERDA, searches for neutrinoless double beta decay in Ge-76 using bare high-purity germanium detectors submerged in liquid argon. For the calibration of these detectors gamma emitting sources have to be lowered from their parking position on top of the cryostat over more than five meters down to the germanium crystals. With the help of Monte Carlo simulations, the relevant parameters of the calibration system were determined. It was found that three Th-228 sources with an activity of 20 kBq each at two different vertical positions will be necessary to reach sufficient statistics in all detectors in less than four hours of calibration time. These sources will contribute to the background of the experiment with a total of (1.07 +/- 0.04(stat) +0.13 -0.19(sys)) 10^{-4} cts/(keV kg yr) when shielded from below with 6 cm of tantalum in the parking position.
Algorithm and application of Monte Carlo simulation for multi-dispersive copolymerization system
Institute of Scientific and Technical Information of China (English)
凌君; 沈之荃; 陈万里
2002-01-01
A Monte Carlo algorithm has been established for a multi-dispersive copolymerization system, based on experimental data of copolymer molecular weight and dispersity obtained via GPC measurement. The program simulates the insertion of every monomer unit and records the structure and microscopic sequence of every chain over various lengths. It has been applied successfully to the ring-opening copolymerization of 2,2-dimethyltrimethylene carbonate (DTC) with ε-caprolactone (ε-CL). The simulation coincides with the experimental results and provides microscopic data on triad fractions, lengths of homopolymer segments, etc., which are difficult to obtain by experiment. The algorithm also presents a uniform framework for copolymerization studies under other, more complicated mechanisms.
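A minimal sketch of stochastic chain growth, assuming the standard terminal (Mayo-Lewis) model with invented reactivity ratios and feed; the paper's multi-dispersive algorithm is considerably more elaborate.

```python
import random

def grow_chain(length, fA=0.5, rA=1.0, rB=1.0, seed=0):
    """Grow one copolymer chain monomer by monomer. The probability that
    the next unit is monomer A depends only on the chain end (terminal
    model), the feed fraction fA and the reactivity ratios rA, rB."""
    rng = random.Random(seed)
    fB = 1.0 - fA
    chain = ["A" if rng.random() < fA else "B"]
    for _ in range(length - 1):
        if chain[-1] == "A":
            pA = rA * fA / (rA * fA + fB)
        else:
            pA = fA / (fA + rB * fB)
        chain.append("A" if rng.random() < pA else "B")
    return chain

def triad_fractions(chain):
    """Fractions of the observed three-unit sequences, e.g. 'AAB'."""
    triads = ["".join(chain[i:i + 3]) for i in range(len(chain) - 2)]
    return {t: triads.count(t) / len(triads) for t in set(triads)}
```

Recording every inserted unit is what gives access to sequence statistics (triads, homopolymer segment lengths) that GPC alone cannot provide.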
A Parallel Monte Carlo Code for Simulating Collisional N-body Systems
Pattabiraman, Bharath; Liao, Wei-Keng; Choudhary, Alok; Kalogera, Vassiliki; Memik, Gokhan; Rasio, Frederic A
2012-01-01
We present a new parallel code for computing the dynamical evolution of collisional N-body systems with up to N~10^7 particles. Our code is based on the Hénon Monte Carlo method for solving the Fokker-Planck equation, and makes assumptions of spherical symmetry and dynamical equilibrium. The principal algorithmic developments involve optimizing data structures and the introduction of a parallel random number generation scheme, as well as a parallel sorting algorithm, required to find nearest neighbors for interactions and to compute the gravitational potential. The new algorithms we introduce, along with our choice of decomposition scheme, minimize communication costs and ensure optimal distribution of data and workload among the processing units. The implementation uses the Message Passing Interface (MPI) library for communication, which makes it portable to many different supercomputing architectures. We validate the code by calculating the evolution of clusters with initial Plummer distribution functi...
System Level Numerical Analysis of a Monte Carlo Simulation of the E. Coli Chemotaxis
Siettos, Constantinos I
2010-01-01
Over the past few years it has been demonstrated that "coarse timesteppers" establish a link between traditional numerical analysis and microscopic/stochastic simulation. The underlying assumption of the associated lift-run-restrict-estimate procedure is that macroscopic models exist and close in terms of a few governing moments of microscopically evolving distributions, but they are unavailable in closed form. This leads to a system-identification-based computational approach that sidesteps the necessity of deriving explicit closures. Two-level codes are constructed; the outer code performs macroscopic, continuum-level numerical tasks, while the inner code estimates, through appropriately initialized bursts of microscopic simulation, the quantities required for continuum numerics. Such quantities include residuals, time derivatives, and the action of coarse slow Jacobians. We demonstrate how these coarse timesteppers can be applied to perform equation-free computations of a kinetic Monte Carlo simulation of...
Monte Carlo simulation of glandular dose in a dedicated breast CT system
Institute of Scientific and Technical Information of China (English)
TANG Xiao; WEI Long; ZHAO Wei; WANG Yan-Fang; SHU Hang; SUN Cui-Li; WEI Cun-Feng; CAO Da-Quan; QUE Jie-Min; SHI Rong-Jian
2012-01-01
A dedicated breast CT system (DBCT) is a new method for breast cancer detection proposed in recent years. In this paper, the glandular dose in the DBCT is simulated using the Monte Carlo method. The phantom shape is a half ellipsoid, and a series of phantoms with different sizes, shapes and compositions was constructed. In order to optimize the spectra, monoenergetic X-ray beams of 5-80 keV were used in the simulation. The dose distribution in a breast phantom was studied: a higher-energy beam generated a more uniform distribution, and the outer parts received more dose than the inner parts. For polyenergetic spectra, four spectra with Al filters of different thicknesses were simulated, and the polyenergetic glandular dose was calculated as a spectrum-weighted combination of the monoenergetic doses.
Kinetic Monte Carlo simulation of dopant-defect systems under submicrosecond laser thermal processes
Energy Technology Data Exchange (ETDEWEB)
Fisicaro, G.; Pelaz, Lourdes; Lopez, P.; Italia, M.; Huet, K.; Venturini, J.; La Magna, A. [CNR IMM, Z.I. VIII Strada 5, I -95121 Catania (Italy); Department of Electronics, University of Valladolid, 47011 Valladolid (Spain); CNR IMM, Z.I. VIII Strada 5, I -95121 Catania (Italy); Excico 13-21 Quai des Gresillons, 92230 Gennevilliers (France); CNR IMM, Z.I. VIII Strada 5, I -95121 Catania (Italy)
2012-11-06
An innovative kinetic Monte Carlo (KMC) code has been developed, which models the post-implant kinetics of the defect system under the extremely far-from-equilibrium conditions caused by laser irradiation close to the liquid-solid interface. It considers defect diffusion, annihilation and clustering. The code properly implements, consistently with the stochastic formalism, the rapidly varying local event rates related to the evolution of the thermal field T(r,t). This feature of our numerical method represents an important advancement with respect to current state-of-the-art KMC codes. The reduction of the implantation damage and its reorganization into defect aggregates are studied as a function of the process conditions. Phosphorus activation efficiency, experimentally determined in similar conditions, has been related to the emerging damage scenario.
(U) Introduction to Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
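One "cook book" transport term in miniature: free path lengths are sampled as s = -ln(U)/Σ, and a particle entering a purely absorbing slab is transmitted if its first collision site lies beyond the slab. The expected transmission is exp(-ΣT), which gives a built-in check on the sampling mechanics.

```python
import math, random

def transmission(sigma, thickness, n=200_000, seed=3):
    """Fraction of monoenergetic particles crossing a purely absorbing slab
    of the given thickness and total macroscopic cross section sigma.
    Each particle's first-flight distance is sampled as -ln(U)/sigma."""
    rng = random.Random(seed)
    hits = sum(-math.log(1.0 - rng.random()) / sigma > thickness
               for _ in range(n))
    return hits / n
```

Scattering, energy loss, and tallies are added to this same sampling loop term by term, which is the "cook book" structure the report describes.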
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
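The Metropolis algorithm summarized above can be sketched generically: propose a symmetric random-walk move and accept it with probability min(1, p(x')/p(x)). Sampling a standard normal (log p(x) = -x²/2) is used here purely as a verifiable toy target.

```python
import math, random

def metropolis_sample(log_p, x0, n, step=2.0, seed=4):
    """Generic 1D Metropolis sampler. log_p is the log of the (unnormalized)
    target density; proposals are uniform random-walk moves."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        # Accept with probability min(1, p(xp)/p(x)).
        if rng.random() < math.exp(min(0.0, log_p(xp) - log_p(x))):
            x = xp
        out.append(x)   # rejected moves repeat the current state
    return out
```

Because only ratios p(x')/p(x) appear, the normalizing constant is never needed, which is what makes the method practical for many-particle Boltzmann distributions.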
Quantum Monte Carlo with variable spins.
Melton, Cody A; Bennett, M Chandler; Mitas, Lubos
2016-06-28
We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo, we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn2 molecules, as well as the electron affinities of the 6p row elements in close agreement with experiments.
Quantum Monte Carlo with Variable Spins
Melton, Cody A; Mitas, Lubos
2016-01-01
We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn2 molecules, as well as the electron affinities of the 6p row elements in close agreement with experiments.
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication and synchronization, which slow down parallel simulation as the number of processors increases. Parallel simulation results for the two-dimensional lattice gas model show substantial reductions in simulation time for systems of moderate and large size.
Monte carlo simulations of organic photovoltaics.
Groves, Chris; Greenham, Neil C
2014-01-01
Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
The Monte Carlo Method and the Evaluation of Retrieval System Performance.
Burgin, Robert
1999-01-01
Introduces the Monte Carlo method which is shown to represent an attractive alternative to the hypergeometric model for identifying the levels at which random retrieval performance is exceeded in retrieval test collections and for overcoming some of the limitations of the hypergeometric model. Practical matters to consider when employing the Monte…
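The comparison suggested above, Monte Carlo versus the hypergeometric model for random retrieval, can be sketched directly: shuffle the collection and count how often at least k of the R relevant documents land in the top n, against the exact hypergeometric tail.

```python
import math, random

def hyper_tail(N, R, n, k):
    """Exact P(at least k relevant docs in a random top-n) for a collection
    of N documents of which R are relevant (hypergeometric tail)."""
    return sum(math.comb(R, i) * math.comb(N - R, n - i)
               for i in range(k, min(R, n) + 1)) / math.comb(N, n)

def mc_tail(N, R, n, k, trials=50_000, seed=5):
    """Monte Carlo estimate of the same probability via random rankings."""
    rng = random.Random(seed)
    docs = [1] * R + [0] * (N - R)   # 1 marks a relevant document
    hits = 0
    for _ in range(trials):
        rng.shuffle(docs)
        hits += sum(docs[:n]) >= k
    return hits / trials
```

The Monte Carlo version extends unchanged to performance measures with no closed-form null distribution, which is the attraction the abstract points to.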
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Energy Technology Data Exchange (ETDEWEB)
Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
Institute of Scientific and Technical Information of China (English)
Jiang Wei; Xiang Haige
2004-01-01
This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. A Markov chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Giles (Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by the adaptive algorithms introduced in Dzougoutov et al. (Adaptive Monte Carlo algorithms for stopped diffusion, Lect. Notes Comput. Sci. Eng. 44:59-88, Springer, 2005) and Moon et al. (Stoch. Anal. Appl. 23(3):511-558, 2005; Contemp. Math. 383:325-343, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in Szepessy et al. (Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost needed to achieve an accuracy of O(TOL): from O(TOL^-3) for a single-level version of the adaptive algorithm down to O((TOL^-1 log(TOL))^2).
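The telescoping multilevel idea can be sketched for a geometric Brownian motion with forward Euler and uniform time steps (the adaptive, path-dependent time-stepping that is the subject of this work is not reproduced here). All parameter values are illustrative; the coupling shares the same Brownian increments between the fine and coarse paths on each level.

```python
import math, random

def euler_pair(rng, level, M=4, T=1.0, s0=1.0, mu=0.05, sig=0.2):
    """One coupled sample of (P_fine, P_coarse) for the payoff P = S_T,
    with M**level fine Euler steps; the coarse path (M**(level-1) steps)
    reuses the summed fine Brownian increments."""
    nf = M ** level
    dt = T / nf
    sf = sc = s0
    dw_c = 0.0
    for i in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dt))
        sf += mu * sf * dt + sig * sf * dw
        dw_c += dw
        if (i + 1) % M == 0 and level > 0:
            sc += mu * sc * (M * dt) + sig * sc * dw_c
            dw_c = 0.0
    return (sf, sc) if level > 0 else (sf, 0.0)

def mlmc_mean(L=3, n0=20_000, seed=6):
    """Telescoping estimator E[P_0] + sum_l E[P_l - P_(l-1)], with fewer
    samples on the (cheaper-to-correct) finer levels."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(L + 1):
        n = max(n0 // (2 ** level), 1000)
        acc = 0.0
        for _ in range(n):
            fine, coarse = euler_pair(rng, level)
            acc += fine - coarse
        est += acc / n
    return est
```

Because Var(P_l - P_(l-1)) shrinks as the levels refine, most samples can be taken on the coarse, cheap levels, which is the source of the cost reduction quantified above.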
Eriksson, Ida; Starck, Sven-Åke; Båth, Magnus
2014-04-01
The aim of the present study was to perform an extensive evaluation of available gamma camera systems in terms of their detective quantum efficiency (DQE) and determine their dependency on relevant parameters such as collimator type, imaging depth, and energy window using the Monte Carlo technique. The modulation transfer function was determined from a simulated (99m)Tc point source and was combined with the system sensitivity and photon yield to obtain the DQE of the system. The simulations were performed for different imaging depths in a water phantom for 13 gamma camera systems from four manufacturers. Except at very low spatial frequencies, the highest DQE values were found with a lower energy window threshold of around 130 keV for all systems. The height and shape of the DQE curves were affected by the collimator design and the intrinsic properties of the gamma camera systems. High-sensitivity collimators gave the highest DQE at low spatial frequencies, whereas the high-resolution and ultrahigh-resolution collimators showed higher DQE values at higher frequencies. The intrinsic resolution of the system mainly affected the DQE curve at superficial depths. The results indicate that the manufacturers have succeeded differently in their attempts to design a system constituting an optimal compromise between sensitivity and spatial resolution.
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
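As a concrete illustration of one classical variance reduction technique, the following sketch (illustrative, not taken from the book) estimates E[exp(U)] for U ~ Uniform(0,1) using U itself as a control variate with known mean 1/2:

```python
import math
import random

def mc_with_control_variate(n, seed=0):
    # Estimate I = E[exp(U)], U ~ Uniform(0,1); the exact value is e - 1.
    # Control variate: U itself, whose mean 1/2 is known exactly.
    rng = random.Random(seed)
    ys, cs = [], []
    for _ in range(n):
        u = rng.random()
        ys.append(math.exp(u))
        cs.append(u)
    ybar = sum(ys) / n
    cbar = sum(cs) / n
    # Near-optimal coefficient b = Cov(Y, C) / Var(C), estimated from the sample.
    cov = sum((y - ybar) * (c - cbar) for y, c in zip(ys, cs)) / (n - 1)
    var_c = sum((c - cbar) ** 2 for c in cs) / (n - 1)
    b = cov / var_c
    # Adjusted estimator: same mean, much smaller variance because
    # exp(U) and U are strongly correlated.
    return ybar - b * (cbar - 0.5)
```

The adjusted estimator subtracts the (known) sampling error in the control variate, removing the variance component correlated with it.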
Validation of Compton Scattering Monte Carlo Simulation Models
Weidenspointner, Georg; Hauf, Steffen; Hoff, Gabriela; Kuster, Markus; Pia, Maria Grazia; Saracco, Paolo
2014-01-01
Several models for the Monte Carlo simulation of Compton scattering on electrons are quantitatively evaluated with respect to a large collection of experimental data retrieved from the literature. Some of these models are currently implemented in general-purpose Monte Carlo systems; some have been implemented and evaluated for possible use in Monte Carlo particle transport for the first time in this study. Here we present first, preliminary results concerning total and differential Compton scattering cross sections.
Energy Technology Data Exchange (ETDEWEB)
Valentine, T.; Perez, R. [Oak Ridge National Lab., TN (United States); Rugama, Y.; Munoz-Cobo, J.L. [Poly. Tech. Univ. of Valencia (Spain). Chemical and Nuclear Engineering Dept.
2001-07-01
The design of reactivity monitoring systems for accelerator-driven systems must be investigated to ensure that such systems remain subcritical during operation. The Monte Carlo codes LAHET and MCNP-DSP were coupled to facilitate the design of reactivity monitoring systems. The coupling of LAHET and MCNP-DSP provides a tool that can be used to simulate a variety of subcritical measurements such as pulsed neutron, Rossi-α, or noise analysis measurements. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Valentine, T.E.; Rugama, Y.; Munoz-Cobo, J.L.; Perez, R.
2000-10-23
The design of reactivity monitoring systems for accelerator-driven systems must be investigated to ensure that such systems remain subcritical during operation. The Monte Carlo codes LAHET and MCNP-DSP were coupled to facilitate the design of reactivity monitoring systems. The coupling of LAHET and MCNP-DSP provides a tool that can be used to simulate a variety of subcritical measurements such as pulsed neutron, Rossi-α, or noise analysis measurements.
Determination of phase equilibria in confined systems by open pore cell Monte Carlo method.
Miyahara, Minoru T; Tanaka, Hideki
2013-02-28
We present a modification of the molecular dynamics simulation method with a unit pore cell with imaginary gas phase [M. Miyahara, T. Yoshioka, and M. Okazaki, J. Chem. Phys. 106, 8124 (1997)] designed for determination of phase equilibria in nanopores. This new method is based on a Monte Carlo technique and it combines the pore cell, opened to the imaginary gas phase (open pore cell), with a gas cell to measure the equilibrium chemical potential of the confined system. The most striking feature of our new method is that the confined system is steadily led to a thermodynamically stable state by forming concave menisci in the open pore cell. This feature of the open pore cell makes it possible to obtain the equilibrium chemical potential with only a single simulation run, unlike existing simulation methods, which need a number of additional runs. We apply the method to evaluate the equilibrium chemical potentials of confined nitrogen in carbon slit pores and silica cylindrical pores at 77 K, and show that the results are in good agreement with those obtained by two conventional thermodynamic integration methods. Moreover, we also show that the proposed method can be particularly useful for determining vapor-liquid and vapor-solid coexistence curves and the triple point of the confined system.
Monte Carlo based verification of a beam model used in a treatment planning system
Wieslander, E.; Knöös, T.
2008-02-01
Modern treatment planning systems (TPSs) usually separate the dose modelling into a beam modelling phase, describing the beam exiting the accelerator, followed by a subsequent dose calculation in the patient. The aim of this work is to use the Monte Carlo code system EGSnrc to study the modelling of head scatter as well as the transmission through the multi-leaf collimator (MLC) and diaphragms in the beam model used in a commercial TPS (MasterPlan, Nucletron B.V.). An Elekta Precise linear accelerator equipped with an MLC has been modelled in BEAMnrc, based on available information from the vendor regarding the material and geometry of the treatment head. The collimation in the MLC direction consists of leaves, which are complemented with a backup diaphragm. The characteristics of the electron beam, i.e., energy and spot size, impinging on the target have been tuned to match measured data. Phase spaces from simulations of the treatment head are used to extract the scatter from, e.g., the flattening filter and the collimating structures. Similar data for the source models used in the TPS are extracted from the treatment planning system, thus a comprehensive analysis is possible. Simulations in a water phantom, with DOSXYZnrc, are also used to study the modelling of the MLC and the diaphragms by the TPS. The results from this study will be helpful in understanding the limitations of the model in the TPS and provide knowledge for further improvements of the TPS source modelling.
Coe, Jeremy P; Almeida, Nuno M S; Paterson, Martin J
2017-09-02
We investigate whether a range of challenging spin systems can be described sufficiently well using Monte Carlo configuration interaction (MCCI) and the density matrix renormalization group (DMRG) in a way that heads toward a more "black box" approach. Experimental results and other computational methods are used for comparison. The gap between the lowest doublet and quartet state of methylidyne (CH) is first considered. We then look at a range of first-row transition metal monocarbonyls: MCO where M is titanium, vanadium, chromium, or manganese. For these MCO systems we also employ partially spin-restricted open-shell coupled-cluster (RCCSD). We finally investigate the high-spin low-lying states of the iron dimer, its cation, and its anion. The multireference character of these molecules is also considered. We find that these systems can be computationally challenging, with close low-lying states and often multireference character. For this more straightforward application and for the basis sets considered, we generally find qualitative agreement between DMRG and MCCI. © 2017 Wiley Periodicals, Inc.
Nievaart, V. A.; Daquino, G. G.; Moss, R. L.
2007-06-01
Boron Neutron Capture Therapy (BNCT) is a bimodal form of radiotherapy for the treatment of tumour lesions. Since the cancer cells in the treatment volume are targeted with 10B, a higher dose is given to these cancer cells due to the 10B(n,α)7Li reaction, in comparison with the surrounding healthy cells. In Petten (The Netherlands), at the High Flux Reactor, a specially tailored neutron beam has been designed and installed. Over 30 patients have been treated with BNCT in 2 clinical protocols: a phase I study for the treatment of glioblastoma multiforme and a phase II study on the treatment of malignant melanoma. Furthermore, activities concerning the extra-corporal treatment of metastasis in the liver (from colorectal cancer) are in progress. The irradiation beam at the HFR contains both neutrons and gammas which, together with the complex geometries of both patient and beam set-up, demand very detailed treatment planning calculations. A well-designed Treatment Planning System (TPS) should obey the following general scheme: (1) a pre-processing phase (CT and/or MRI scans to create the geometric solid model, cross-section files for neutrons and/or gammas); (2) calculations (3D radiation transport, estimation of neutron and gamma fluences, macroscopic and microscopic dose); (3) a post-processing phase (displaying of the results, iso-doses and -fluences). Treatment planning in BNCT is performed using Monte Carlo codes incorporated in a framework which also includes the pre- and post-processing phases. In particular, the glioblastoma multiforme protocol used BNCT_rtpe, while the melanoma metastases protocol uses NCTPlan. In addition, an ad hoc Positron Emission Tomography (PET) based treatment planning system (BDTPS) has been implemented in order to integrate the real macroscopic boron distribution obtained from PET scanning. BDTPS is patented and uses MCNP as the calculation engine. The precision obtained by the Monte Carlo based TPSs exploited at Petten
Parallel J-W Monte Carlo Simulations of Thermal Phase Changes in Finite-size Systems
Radev, R
2002-01-01
The thermodynamic properties of 59 TeF6 clusters that undergo temperature-driven phase transitions have been calculated with a canonical J-walking Monte Carlo technique. A parallel code for simulations has been developed and optimized on SUN3500 and CRAY-T3E computers. The Lindemann criterion shows that the clusters transform from liquid to solid and then from one solid structure to another in the temperature region 60-130 K.
Multilevel Monte Carlo methods for computing failure probability of porous media flow systems
Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.
2016-08-01
We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve to the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo method. We also identify issues in the process of practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, in combination with both the standard and multilevel Monte Carlo method.
Kinetic Monte Carlo and cellular particle dynamics simulations of multicellular systems
Flenner, Elijah; Janosi, Lorant; Barz, Bogdan; Neagu, Adrian; Forgacs, Gabor; Kosztin, Ioan
2012-03-01
Computer modeling of multicellular systems has been a valuable tool for interpreting and guiding in vitro experiments relevant to embryonic morphogenesis, tumor growth, angiogenesis and, lately, structure formation following the printing of cell aggregates as bioink particles. Here we formulate two computer simulation methods: (1) a kinetic Monte Carlo (KMC) and (2) a cellular particle dynamics (CPD) method, which are capable of describing and predicting the shape evolution in time of three-dimensional multicellular systems during their biomechanical relaxation. Our work is motivated by the need of developing quantitative methods for optimizing postprinting structure formation in bioprinting-assisted tissue engineering. The KMC and CPD model parameters are determined and calibrated by using an original computational-theoretical-experimental framework applied to the fusion of two spherical cell aggregates. The two methods are used to predict the (1) formation of a toroidal structure through fusion of spherical aggregates and (2) cell sorting within an aggregate formed by two types of cells with different adhesivities.
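The core update of a rejection-free kinetic Monte Carlo simulation of the kind mentioned above can be sketched as follows. This is a generic Gillespie/BKL-style step, not the authors' calibrated multicellular model; all names and rates are illustrative.

```python
import math
import random

def kmc_step(rates, rng):
    # One rejection-free kinetic Monte Carlo step:
    #   1. pick event k with probability rates[k] / R, where R = sum(rates);
    #   2. advance time by an exponentially distributed increment, mean 1/R.
    R = sum(rates)
    r = rng.random() * R
    cum = 0.0
    for k, rate in enumerate(rates):
        cum += rate
        if r < cum:
            break
    dt = -math.log(1.0 - rng.random()) / R
    return k, dt
```

In a multicellular setting the rate list would hold the transition rates of all candidate cell rearrangements; after each step the affected rates are recomputed and the loop repeats.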
Dosimetric validation of a commercial Monte Carlo based IMRT planning system.
Grofsmid, Dennis; Dirkx, Maarten; Marijnissen, Hans; Woudstra, Evert; Heijmen, Ben
2010-02-01
Recently a commercial Monte Carlo based IMRT planning system (Monaco version 1.0.0) was released. In this study the dosimetric accuracy of this new planning system was validated. Absolute dose profiles, depth dose curves, and output factors calculated by Monaco were compared with measurements in a water phantom. Different static on-axis and off-axis fields were tested at various source-skin distances for 6, 10, and 18 MV photon beams. Four clinical IMRT plans were evaluated in a water phantom using a linear diode detector array, and another six IMRT plans for different tumor sites were evaluated in solid water using a 2D detector array. In order to evaluate the accuracy of the dose engine near tissue inhomogeneities, absolute dose distributions were measured with Gafchromic EBT film in an inhomogeneous slab phantom. For an end-to-end test, a four-field IMRT plan was applied to an anthropomorphic lung phantom with a simulated tumor peripherally located in the right lung. Gafchromic EBT film, placed in and around the tumor area, was used to evaluate the dose distribution. Generally, the measured and the calculated dose distributions agreed within 2% dose difference or 2 mm distance-to-agreement. However, mainly at interfaces with bone, some larger dose differences were observed. Based on the results of this study, the authors concluded that the dosimetric accuracy of Monaco is adequate for clinical introduction.
Monte Carlo simulations of a novel coherent scatter materials discrimination system
Hassan, Laila; Starr-Baier, Sean; MacDonald, C. A.; Petruccelli, Jonathan C.
2017-05-01
X-ray coherent scatter imaging has the potential to improve the detection of liquid and powder materials of concern in security screening. While x-ray attenuation is dependent on atomic number, coherent scatter is highly dependent on the characteristic angle for the target material, and thus offers an additional means of discrimination. Conventional coherent scatter analysis requires pixel-by-pixel scanning, and so could be prohibitively slow for security applications. A novel slot scan system has been developed to provide rapid imaging of the coherent scatter at selected angles of interest, simultaneously with the conventional absorption images. Prior experimental results showed promising capability. In this work, Monte Carlo simulations were performed to assess discrimination capability and provide system optimization. Simulation analysis was performed using the measured ring profiles for an array of powders and liquids, including water, ethanol, and peroxide. For example, simulations yielded a signal-to-background ratio of 1.63 ± 0.08 for a sample consisting of two 10 mm diameter vials, one containing ethanol (signal) and one water (background). This high SBR value is due to the high angular separation of the coherent scatter between the two liquids. The results indicate that the addition of coherent scatter information to single- or dual-energy attenuation images improves the discrimination of materials of interest.
Monte Carlo simulations of a high-resolution X-ray CT system for industrial applications
Miceli, A.; Thierry, R.; Flisch, A.; Sennhauser, U.; Casali, F.; Simon, M.
2007-12-01
An X-ray computed tomography (CT) model based on the GEANT4 Monte Carlo code was developed for simulation of a cone-beam CT system for industrial applications. The full simulation of the X-ray tube, object, and area detector was considered. The model was validated through comparison with experimental measurements of different test objects. There is good agreement between the simulated and measured projections. To validate the model we reduced the beam aperture of the X-ray tube, using a source-collimator, to decrease the scattered radiation from the CT system structure and from the walls of the X-ray shielding room. The degradation of the image contrast using larger beam apertures is also shown. Thereafter, the CT model was used to calculate the spatial distribution and the magnitude of the scattered radiation from different objects. It has been assessed that the scatter-to-primary ratio (SPR) is below 5% for small aluminum objects (approx. 5 cm path length), and in the case of large aluminum objects (approx. 20 cm path length) it can reach up to a factor of 3 in the region corresponding to the maximum path length. Therefore, the scatter from the object significantly affects quantitative accuracy. The model was also used to evaluate the degradation of the image contrast due to the detector box.
Monte Carlo simulations of morphological transitions in PbTe/CdTe immiscible material systems
Mińkowski, Marcin; Załuska-Kotur, Magdalena A.; Turski, Łukasz A.; Karczewski, Grzegorz
2016-09-01
The crystal growth of the immiscible PbTe/CdTe multilayer system is analyzed as an example of a self-organizing process. The immiscibility of the constituents leads to the observed morphological transformations, such as an anisotropy-driven formation of quantum dots and nanowires, and to a phase separation at the highest temperatures. The proposed model incorporates bulk and surface diffusion together with an anisotropic mobility of the material components. We analyze its properties by kinetic Monte Carlo simulations and show that it is able to reproduce all of the structures observed experimentally during the process of PbTe/CdTe growth. We show that all of the dynamical processes studied play an important role in the creation of zero-, one-, two-, and, finally, three-dimensional structures. The shape of the grown structures is different for relatively thick multilayers, when the bulk diffusion cooperates with the anisotropic mobility, as compared to the annealed structures, for which only the isotropic bulk diffusion governs the process. Finally, it is different again for thin multilayers, when the surface diffusion is the most decisive factor. We compare our results with the experimentally grown systems and show that the proposed model explains the diversity of observed structures.
Recent Developments in Quantum Monte Carlo: Methods and Applications
Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.
2007-12-01
The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.
Monte Carlo approach to turbulence
Energy Technology Data Exchange (ETDEWEB)
Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik
2009-11-15
The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Lavalle, Catia; Rigol, Marcos; Muramatsu, Alejandro
2005-08-01
The cover picture of the current issue, taken from the Feature Article [1], depicts the evolution of local density (a) and its quantum fluctuations (b) in trapped fermions on one-dimensional optical lattices. As the number of fermions in the trap is increased, figure (a) shows the formation of a Mott-insulating plateau (local density equal to one), whereas the quantum fluctuations - see figure (b) - are strongly suppressed, but nonzero. For a larger number of fermions new insulating plateaus appear (this time with local density equal to two), but no density fluctuations. Regions with non-constant density are metallic and exhibit large quantum fluctuations of the density. The first author, Catia Lavalle, is a postdoc at the University of Stuttgart. She works in the field of strongly correlated quantum systems by means of quantum Monte Carlo (QMC) methods. While working on her PhD thesis at the University of Stuttgart, she developed a new QMC technique that allows one to study dynamical properties of the t-J model.
Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William
2013-04-30
Various strategies for efficiently implementing quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10k to 80k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. Copyright © 2013 Wiley Periodicals, Inc.
Zhang, Zhigang; Duan, Zhenhao
2002-10-01
A new temperature scaling technique combined with the conventional Gibbs Ensemble Monte Carlo simulation was used to study liquid-vapor phase equilibria of the methane-ethane (CH4-C2H6) system. With this efficient method, a new set of united-atom Lennard-Jones potential parameters for pure C2H6 was found to be more accurate than those of previous models in the prediction of phase equilibria. Using the optimized potentials for liquid simulations (OPLS) potential for CH4 and the potential of this study for C2H6, together with a simple mixing rule, we simulated the equilibrium compositions and densities of the CH4-C2H6 mixtures with accuracy close to experiments. The simulated data are supplements to experiments, and may cover a larger temperature-pressure-composition space than experiments. Compared with some well-established equations of state such as the Peng-Robinson equation of state (PR-EOS), the simulated results are found to be closer to experiments, at least in some temperature and pressure ranges.
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Energy Technology Data Exchange (ETDEWEB)
Slattery, Stuart R., E-mail: slatterysr@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Evans, Thomas M., E-mail: evanstm@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Wilson, Paul P.H., E-mail: wilsonp@engr.wisc.edu [University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)
2015-12-15
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. In general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
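The basic Neumann-Ulam idea analyzed above can be sketched as follows. This is a forward-walk variant rather than the adjoint method of the paper, and it assumes the row sums of |H| are below one so they can serve directly as transition probabilities; the matrix and vector are illustrative.

```python
import random

def neumann_ulam_solve(H, b, n_walks=20000, seed=0):
    # Solve x = H x + b by sampling the Neumann series x = sum_k H^k b
    # with random walks.  From state i the walk moves to state j with
    # probability |H[i][j]| and is absorbed with the leftover probability.
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        acc = 0.0
        for _ in range(n_walks):
            state, w = i, 1.0
            acc += w * b[state]          # k = 0 term of the series
            while True:
                r = rng.random()
                cum, nxt = 0.0, None
                for j in range(n):
                    cum += abs(H[state][j])
                    if r < cum:
                        nxt = j
                        break
                if nxt is None:          # absorbed: walk terminates
                    break
                # Weight correction H_ij / P_ij reduces to sign(H_ij) here.
                w *= H[state][nxt] / abs(H[state][nxt])
                state = nxt
                acc += w * b[state]
        x[i] = acc / n_walks
    return x
```

The expected walk length is governed by the spectral radius of |H|, which is exactly the kind of relationship between operator spectrum and walk length that the paper analyzes.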
Quality assessment of Monte Carlo based system response matrices in PET
Energy Technology Data Exchange (ETDEWEB)
Cabello, J.; Gillam, J.E. [Valencia Univ. (Spain). Inst. de Fisica Corpuscular; Rafecas, M. [Valencia Univ. (Spain). Inst. de Fisica Corpuscular; Valencia Univ. (Spain). Dept. de Fisica Atomica, Molecular y Nuclear
2011-07-01
Iterative methods are currently accepted as the gold standard image reconstruction methods in nuclear medicine. The quality of the final reconstructed image greatly depends on how well physical processes are modelled in the System Response Matrix (SRM). The SRM can be obtained using experimental measurements, or calculated using Monte Carlo (MC) or analytical methods. Nevertheless, independently of the method, the SRM is always contaminated by a certain level of error. MC based methods have recently gained popularity in the calculation of the SRM due to the significant increase in computer power exhibited by regular commercial computers. MC methods can produce high-accuracy results, but are subject to statistical noise, which affects the precision of the results. By increasing the number of annihilations simulated, the level of noise observed in the SRM decreases, at the additional cost of increased simulation time and increased file size necessary to store the SRM. The latter also has a negative impact on reconstruction time. A study of the noise in the SRM has been performed from a spatial point of view, identifying specific regions subject to higher levels of noise. This study will enable the calculation of SRMs with different levels of statistics depending on the spatial location. A quantitative comparison of images, reconstructed using different SRM realizations, with similar and different levels of statistical quality, has been presented. (orig.)
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreem...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko
Monte Carlo Treatment Planning for Advanced Radiotherapy
DEFF Research Database (Denmark)
Cronholm, Rickard
and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In house MATLAB scripts translating the treatment plan parameters to Monte Carlo...
1-D Equilibrium Discrete Diffusion Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Evans, T.; et al.
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Reducing quasi-ergodicity in a double well potential by Tsallis Monte Carlo simulation
Iwamatsu, Masao; Okabe, Yutaka
2000-01-01
A new Monte Carlo scheme based on Tsallis's generalized statistical mechanics is applied to a simple double well potential to calculate the canonical thermal average of the potential energy. Although we observed serious quasi-ergodicity when using the standard Metropolis Monte Carlo algorithm, this problem is largely reduced by the use of the new Monte Carlo algorithm. Therefore ergodicity is ensured even for short Monte Carlo runs if we use this new canonical Monte Carlo scheme.
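The generalized acceptance rule described in the abstract above can be illustrated with a short simulation. This is a minimal sketch, not the authors' code: the potential V(x) = (x² − 1)², the entropic index q = 1.5, and all parameter values are illustrative choices. For q → 1 the Tsallis weight reduces to the ordinary Boltzmann factor; for q > 1 it decays only as a power law in the energy, which makes barrier crossings far more frequent.

```python
import math
import random

def tsallis_mc(V, beta=5.0, q=1.5, steps=20000, x0=-1.0, step=0.5):
    """Metropolis sampling with the Tsallis generalized weight
    W_q(E) = [1 - (1 - q)*beta*E]^(1/(1-q)), which reduces to
    exp(-beta*E) as q -> 1.  V is assumed shifted so that its
    minimum is 0, keeping the weight's argument positive for q > 1."""
    def log_w(E):
        return math.log(1.0 - (1.0 - q) * beta * E) / (1.0 - q)
    x, samples = x0, []
    for _ in range(steps):
        xp = x + random.uniform(-step, step)
        dlw = log_w(V(xp)) - log_w(V(x))        # log acceptance ratio
        if dlw >= 0 or random.random() < math.exp(dlw):
            x = xp
        samples.append(x)
    return samples

random.seed(0)
double_well = lambda x: (x * x - 1.0) ** 2      # wells at x = +/-1, barrier at 0
xs = tsallis_mc(double_well)
```

With the ordinary Boltzmann weight at the same inverse temperature the walker tends to stay trapped in the well it starts in; with q > 1 it should visit both wells within a few thousand steps.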
New Approaches and Applications for Monte Carlo Perturbation Theory
Energy Technology Data Exchange (ETDEWEB)
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano
2017-02-01
This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the problems discussed involve burnup calculation, perturbation calculation based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R. H. P.; Lazopoulos, A.
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
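The independence assumption behind the standard Monte Carlo error estimator mentioned above is easy to make concrete. The sketch below (illustrative, not from the paper) estimates ∫₀¹ x² dx = 1/3 with plain Monte Carlo and reports the usual σ/√N error bar; this bar is only meaningful because the points are i.i.d., and feeding a deterministic quasi-random sequence through the same formula would yield an unjustified error estimate.

```python
import math
import random

def mc_integrate(f, n=100_000, seed=42):
    """Plain Monte Carlo estimate of the integral of f over [0, 1],
    together with the standard error estimator sigma/sqrt(n).  The
    estimator is valid only because the points are drawn independently."""
    rng = random.Random(seed)
    vals = [f(rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)   # sample variance
    return mean, math.sqrt(var / n)

est, err = mc_integrate(lambda x: x * x)   # exact value is 1/3
```

For this integrand the reported error bar is roughly 0.001 at N = 10⁵, consistent with the usual N^(-1/2) scaling.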
Tryggestad, E.; Armour, M.; Iordachita, I.; Verhaegen, F.; Wong, J W
2009-01-01
Our group has constructed the small animal radiation research platform (SARRP) for delivering focal, kilo-voltage radiation to targets in small animals under robotic control using cone-beam CT guidance. The present work was undertaken to support the SARRP’s treatment planning capabilities. We have devised a comprehensive system for characterizing the radiation dosimetry in water for the SARRP and have developed a Monte Carlo dose engine with the intent of reproducing these measured results. W...
Díez, A; Largo, J; Solana, J R
2006-08-21
Computer simulations have been performed for fluids with van der Waals potential, that is, hard spheres with attractive inverse power tails, to determine the equation of state and the excess energy. On the other hand, the first- and second-order perturbative contributions to the energy and the zero- and first-order perturbative contributions to the compressibility factor have been determined too from Monte Carlo simulations performed on the reference hard-sphere system. The aim was to test the reliability of this "exact" perturbation theory. It has been found that the results obtained from the Monte Carlo perturbation theory for these two thermodynamic properties agree well with the direct Monte Carlo simulations. Moreover, it has been found that results from the Barker-Henderson [J. Chem. Phys. 47, 2856 (1967)] perturbation theory are in good agreement with those from the exact perturbation theory.
An introduction to Monte Carlo methods
Walter, J.-C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called worm algorithm. We conclude with a discussion of dynamical effects such as thermalization and correlation time.
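The Metropolis algorithm sketched in the abstract above takes only a few lines for the 2D Ising model. This is a minimal illustration, not the authors' code; the lattice size, temperature and sweep count are arbitrary choices, with units J = k_B = 1.

```python
import math
import random

def metropolis_ising(L=16, T=3.0, sweeps=200, seed=1):
    """Single-spin-flip Metropolis dynamics for the 2D Ising model with
    periodic boundaries.  Each flip is accepted with probability
    min(1, exp(-dE/T)), which satisfies detailed balance with respect
    to the Boltzmann distribution."""
    rng = random.Random(seed)
    spin = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nb          # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
    return spin

lattice = metropolis_ising()
m = abs(sum(map(sum, lattice))) / 16 ** 2     # |magnetization| per spin
```

At T = 3, above the critical temperature T_c ≈ 2.27, the magnetization per spin stays small; close to and below T_c this single-spin-flip dynamics slows down dramatically, which is where the cluster algorithms mentioned above become the better choice.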
Energy Technology Data Exchange (ETDEWEB)
Li, JS; Fan, J; Ma, C-M [Fox Chase Cancer Center, Philadelphia, PA (United States)
2015-06-15
Purpose: To improve treatment efficiency and enable full-body treatment, a robotic radiosurgery system has been equipped with a multileaf collimator (MLC) to extend its accuracy and precision to radiation therapy. The goal of this work is to model the MLC and include it in the Monte Carlo patient dose calculation. Methods: The radiation source and the MLC were carefully modeled to account for the effects of source size, collimator scattering, leaf transmission and leaf end shape. A source model was built based on the output factors, percentage depth dose curves and lateral dose profiles measured in a water phantom. MLC leaf shape, leaf end design and leaf tilt for minimizing the interleaf leakage, together with their effects on beam fluence and energy spectrum, were all considered in the calculation. Transmission/leakage was added to the fluence based on the transmission factors of the leaf and the leaf end. The transmitted photon energy was tuned to account for beam hardening effects. The calculated results of the Monte Carlo implementation were compared with measurements in a homogeneous water phantom and in inhomogeneous phantoms with slab lung or bone material for 4 square fields and 9 irregularly shaped fields. Results: The calculated output factors agree with the measured ones within 1% for different field sizes. The calculated dose distributions in the phantoms show good agreement with measurements using diode detectors and films. The dose difference is within 2% inside the field and the distance to agreement is within 2 mm in the penumbra region. The gamma passing rate is more than 95% with 2%/2 mm criteria for all the test cases. Conclusion: Implementation of Monte Carlo dose calculation for an MLC-equipped robotic radiosurgery system was completed successfully. The accuracy of Monte Carlo dose calculation with the MLC is clinically acceptable. This work was supported by Accuray Inc.
Prytkova, Vera; Heyden, Matthias; Khago, Domarin; Freites, J Alfredo; Butts, Carter T; Martin, Rachel W; Tobias, Douglas J
2016-08-25
We present a novel multi-conformation Monte Carlo simulation method that enables the modeling of protein-protein interactions and aggregation in crowded protein solutions. This approach is relevant to a molecular-scale description of realistic biological environments, including the cytoplasm and the extracellular matrix, which are characterized by high concentrations of biomolecular solutes (e.g., 300-400 mg/mL for proteins and nucleic acids in the cytoplasm of Escherichia coli). Simulation of such environments necessitates the inclusion of a large number of protein molecules. Therefore, computationally inexpensive methods, such as rigid-body Brownian dynamics (BD) or Monte Carlo simulations, can be particularly useful. However, as we demonstrate herein, the rigid-body representation typically employed in simulations of many-protein systems gives rise to certain artifacts in protein-protein interactions. Our approach allows us to incorporate molecular flexibility in Monte Carlo simulations at low computational cost, thereby eliminating ambiguities arising from structure selection in rigid-body simulations. We benchmark and validate the methodology using simulations of hen egg white lysozyme in solution, a well-studied system for which extensive experimental data, including osmotic second virial coefficients, small-angle scattering structure factors, and multiple structures determined by X-ray and neutron crystallography and solution NMR, as well as rigid-body BD simulation results, are available for comparison.
Directory of Open Access Journals (Sweden)
M. Kotbi
2013-03-01
The choice of appropriate interaction models is among the major disadvantages of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are, however, accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criteria. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
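The HRMC acceptance rule combining the RMC chi-squared term with an energy penalty can be sketched as follows. This is an illustrative reading of the method, not the authors' code: the function name, the weights `w_chi` and `beta`, and the additive cost form are assumptions; in the paper the energies would come from the combined Coulomb/Lennard-Jones model.

```python
import math
import random

def hrmc_accept(chi2_old, chi2_new, e_old, e_new,
                w_chi=0.5, beta=1.0, rng=random):
    """Hybrid Reverse Monte Carlo acceptance: a standard RMC chi-squared
    term measuring agreement with the experimental data, plus an energy
    penalty beta*E from the interaction model.  A move is accepted with
    probability min(1, exp(-(cost_new - cost_old)))."""
    cost_old = w_chi * chi2_old + beta * e_old
    cost_new = w_chi * chi2_new + beta * e_new
    return cost_new <= cost_old or rng.random() < math.exp(cost_old - cost_new)

rng = random.Random(0)
accept = hrmc_accept(10.0, 5.0, 1.0, 1.0, rng=rng)   # improved fit is always accepted
```

Setting `beta = 0` recovers plain RMC; the energy term is what suppresses the artificial satellite peaks mentioned above.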
Validation of a Monte Carlo simulation of the Philips Allegro/GEMINI PET systems using GATE
Energy Technology Data Exchange (ETDEWEB)
Lamare, F; Turzo, A; Bizais, Y; Rest, C Cheze Le; Visvikis, D [U650 INSERM, Laboratoire du Traitement de l' information medicale (LaTIM), CHU Morvan, Universite de Bretagne Occidentale, Brest, 29609 (France)
2006-02-21
A newly developed simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop a Monte Carlo simulation of a fully three-dimensional (3D) clinical PET scanner. The Philips Allegro/GEMINI PET systems were simulated in order to (a) allow a detailed study of the parameters affecting the system's performance under various imaging conditions, (b) study the optimization and quantitative accuracy of emission acquisition protocols for dynamic and static imaging, and (c) further validate the potential of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The accuracy of the developed detection model was tested through the comparison of simulated and measured results obtained with the Allegro/GEMINI systems for a number of NEMA NU2-2001 performance protocols including spatial resolution, sensitivity and scatter fraction. In addition, an approximate model of the system's dead time at the level of detected single events and coincidences was developed in an attempt to simulate the count rate related performance characteristics of the scanner. The developed dead-time model was assessed under different imaging conditions using the count rate loss and noise equivalent count rates performance protocols of standard and modified NEMA NU2-2001 (whole body imaging conditions) and NEMA NU2-1994 (brain imaging conditions) comparing simulated with experimental measurements obtained with the Allegro/GEMINI PET systems. Finally, a reconstructed image quality protocol was used to assess the overall performance of the developed model. An agreement of <3% was obtained in scatter fraction, with a difference between 4% and 10% in the true and random coincidence count rates respectively, throughout a range of activity concentrations and under various imaging conditions, resulting in <8% differences between simulated and measured noise equivalent count rates performance. Finally, the image
Hybrid Monte Carlo with Chaotic Mixing
Kadakia, Nirag
2016-01-01
We propose a hybrid Monte Carlo (HMC) technique applicable to high-dimensional multivariate normal distributions that effectively samples along chaotic trajectories. The method is predicated on the freedom of choice of the HMC momentum distribution, and due to its mixing properties, exhibits sample-to-sample autocorrelations that decay far faster than those in the traditional hybrid Monte Carlo algorithm. We test the method on distributions of varying correlation structure, finding that the proposed technique produces superior covariance estimates, is less reliant on step-size tuning, and can even function with sparse or no momentum re-sampling. The method presented here is promising for more general distributions, such as those that arise in Bayesian learning of artificial neural networks and in the state and parameter estimation of dynamical systems.
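For reference, the baseline algorithm that such variants modify looks like this: standard HMC with Gaussian momentum resampling and a leapfrog integrator, here targeting an illustrative zero-mean 2D normal with precision matrix A (the matrix and all parameters are arbitrary choices, not from the paper). The chaotic-mixing idea described above would replace only the momentum-resampling step.

```python
import math
import random

def hmc_gaussian(n_samples=3000, eps=0.2, n_leap=10, seed=7):
    """Standard hybrid Monte Carlo: Gaussian momenta, leapfrog integration
    of Hamiltonian dynamics, and a Metropolis accept/reject step."""
    A = [[2.0, 0.8], [0.8, 1.0]]                    # precision (inverse covariance)

    def grad(q):                                    # gradient of U(q) = 1/2 q^T A q
        return [A[0][0] * q[0] + A[0][1] * q[1],
                A[1][0] * q[0] + A[1][1] * q[1]]

    def U(q):
        g = grad(q)
        return 0.5 * (q[0] * g[0] + q[1] * g[1])

    rng = random.Random(seed)
    q, samples, accepted = [0.0, 0.0], [], 0
    for _ in range(n_samples):
        p = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)]   # fresh Gaussian momentum
        H0 = U(q) + 0.5 * (p[0] ** 2 + p[1] ** 2)
        qn, pn = q[:], p[:]
        g = grad(qn)
        pn = [pn[k] - 0.5 * eps * g[k] for k in range(2)]      # initial half kick
        for step in range(n_leap):
            qn = [qn[k] + eps * pn[k] for k in range(2)]       # drift
            g = grad(qn)
            h = eps if step < n_leap - 1 else 0.5 * eps        # final half kick
            pn = [pn[k] - h * g[k] for k in range(2)]
        H1 = U(qn) + 0.5 * (pn[0] ** 2 + pn[1] ** 2)
        if math.log(rng.random() + 1e-300) < H0 - H1:          # Metropolis test
            q, accepted = qn, accepted + 1
        samples.append(q[:])
    return samples, accepted / n_samples

samples, acc_rate = hmc_gaussian()
```

Because leapfrog is symplectic and the target is Gaussian, the energy error stays small at this step size and the acceptance rate is close to one; the autocorrelations of this baseline are what the chaotic-momentum variant aims to shorten.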
Langevin Monte Carlo filtering for target tracking
Iglesias Garcia, Fernando; Bocquel, Melanie; Driessen, Hans
2015-01-01
This paper introduces the Langevin Monte Carlo Filter (LMCF), a particle filter with a Markov chain Monte Carlo algorithm which draws proposals by simulating Hamiltonian dynamics. This approach is well suited to non-linear filtering problems in high dimensional state spaces where the bootstrap filter...
Challenges of Monte Carlo Transport
Energy Technology Data Exchange (ETDEWEB)
Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-10
These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
DEFF Research Database (Denmark)
Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.
2002-01-01
A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this optical geometry is firmly justified, because, as we show, in the conjugate image plane the field reflected from the sample is delta-correlated, from which it follows that the heterodyne signal is calculated from the intensity distribution only. This is not a trivial result because, in general, the light ... focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently ...
Quantum Monte Carlo using a Stochastic Poisson Solver
Energy Technology Data Exchange (ETDEWEB)
Das, D; Martin, R M; Kalos, M H
2005-05-06
Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
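The "Walk On Spheres" idea can be illustrated on its classical use case, a Laplace problem: each walker repeatedly jumps to a uniformly random point on the largest circle that fits inside the domain, and scores the boundary value where it first comes within eps of the boundary. The unit-disk domain, boundary data and parameters below are illustrative choices, not the modified Green's-function algorithm of the paper.

```python
import math
import random

def walk_on_spheres(x, y, g, eps=1e-3, n_walkers=4000, seed=3):
    """Walk-on-Spheres estimate of the harmonic function u(x, y) in the
    unit disk with Dirichlet boundary data g(theta).  Each walker jumps
    to a uniform point on the largest circle centred at its current
    position that still fits in the domain, until it is within eps of
    the boundary, where it scores the nearby boundary value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)         # distance to the unit circle
            if r < eps:
                total += g(math.atan2(py, px))   # nearest boundary point
                break
            th = rng.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(th)
            py += r * math.sin(th)
    return total / n_walkers

# boundary data g(theta) = cos(theta); its harmonic extension is u(x, y) = x
u = walk_on_spheres(0.3, 0.2, math.cos)
```

The estimate at (0.3, 0.2) should be close to the exact value 0.3, up to the O(eps) bias from the stopping shell and the usual statistical error decaying as the inverse square root of the number of walkers.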
The MC21 Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
Energy Technology Data Exchange (ETDEWEB)
Çatlı, Serap, E-mail: serapcatli@hotmail.com [Gazi University, Faculty of Sciences, 06500 Teknikokullar, Ankara (Turkey); Tanır, Güneş [Gazi University, Faculty of Sciences, 06500 Teknikokullar, Ankara (Turkey)
2013-10-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased back scattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method for hip prostheses. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.
Catlı, Serap; Tanır, Güneş
2013-01-01
The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased back scattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method for hip prostheses. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.
Tryggestad, E; Armour, M; Iordachita, I; Verhaegen, F; Wong, J W
2009-09-07
Our group has constructed the small animal radiation research platform (SARRP) for delivering focal, kilo-voltage radiation to targets in small animals under robotic control using cone-beam CT guidance. The present work was undertaken to support the SARRP's treatment planning capabilities. We have devised a comprehensive system for characterizing the radiation dosimetry in water for the SARRP and have developed a Monte Carlo dose engine with the intent of reproducing these measured results. We find that the SARRP provides sufficient therapeutic dose rates ranging from 102 to 228 cGy min(-1) at 1 cm depth for the available set of high-precision beams ranging from 0.5 to 5 mm in size. In terms of depth-dose, the mean of the absolute percentage differences between the Monte Carlo calculations and measurement is 3.4% over the full range of sampled depths spanning 0.5-7.2 cm for the 3 and 5 mm beams. The measured and computed profiles for these beams agree well overall; of note, good agreement is observed in the profile tails. Especially for the smallest 0.5 and 1 mm beams, including a more realistic description of the effective x-ray source into the Monte Carlo model may be important.
Energy Technology Data Exchange (ETDEWEB)
Tryggestad, E; Armour, M; Wong, J W [Deptartment of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD (United States); Iordachita, I [Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD (United States); Verhaegen, F [Department of Radiation Oncology (MAASTRO Physics), GROW School, Maastricht University Medical Center, Maastricht (Netherlands)
2009-09-07
Our group has constructed the small animal radiation research platform (SARRP) for delivering focal, kilo-voltage radiation to targets in small animals under robotic control using cone-beam CT guidance. The present work was undertaken to support the SARRP's treatment planning capabilities. We have devised a comprehensive system for characterizing the radiation dosimetry in water for the SARRP and have developed a Monte Carlo dose engine with the intent of reproducing these measured results. We find that the SARRP provides sufficient therapeutic dose rates ranging from 102 to 228 cGy min⁻¹ at 1 cm depth for the available set of high-precision beams ranging from 0.5 to 5 mm in size. In terms of depth-dose, the mean of the absolute percentage differences between the Monte Carlo calculations and measurement is 3.4% over the full range of sampled depths spanning 0.5-7.2 cm for the 3 and 5 mm beams. The measured and computed profiles for these beams agree well overall; of note, good agreement is observed in the profile tails. Especially for the smallest 0.5 and 1 mm beams, including a more realistic description of the effective x-ray source into the Monte Carlo model may be important.
Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods
NeuroData; Paninski, L
2015-01-01
Vogelstein JT, Paninski L. Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods. Statistical and Applied Mathematical Sciences Institute (SAMSI) Program on Sequential Monte Carlo Methods, 2008
A Monte Carlo method for critical systems in infinite volume: the planar Ising model
Herdeiro, Victor
2016-01-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
Monte Carlo method for critical systems in infinite volume: The planar Ising model.
Herdeiro, Victor; Doyon, Benjamin
2016-10-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
Sign learning kink-based (SiLK) quantum Monte Carlo for molecular systems
Ma, Xiaoyao; Loffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2015-01-01
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H₂O, N₂, and F₂ molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901 (United States); Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States)
2016-01-07
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H₂O, N₂, and F₂ molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901, USA; Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Bhaskaran-Nair, Kiran [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Jarrell, Mark [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA
2016-01-07
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Evaluation of a commercial electron treatment planning system based on Monte Carlo techniques (eMC).
Pemler, Peter; Besserer, Jürgen; Schneider, Uwe; Neuenschwander, Hans
2006-01-01
A commercial electron beam treatment planning system based on a Monte Carlo algorithm (Varian Eclipse, eMC V7.2.35) was evaluated. Measured dose distributions were used for comparison with dose distributions predicted by eMC calculations. Tests were carried out for various applicators and field sizes, irregularly shaped cut-outs, and an inhomogeneity phantom for energies between 6 MeV and 22 MeV. Monitor units were calculated for all applicator/energy combinations and field sizes down to 3 cm diameter and source-to-surface distances of 100 cm and 110 cm. A mass-density-to-Hounsfield-units calibration was performed to compare dose distributions calculated with a default and an individual calibration. The relationship between the calculation parameters of the eMC and the resulting dose distribution was studied in detail. Finally, the algorithm was also applied to a clinical case (boost treatment of the breast) to reveal possible problems in the implementation. For standard geometries there was good agreement between measurements and calculations, except for profiles at low energies (6 MeV) and high energies (18 MeV, 22 MeV), where the algorithm overestimated the dose off-axis in the high-dose region. For energies of 12 MeV and higher there were oscillations in the plateau region of the corresponding depth-dose curves calculated with a grid size of 1 mm. With irregular cut-outs, an overestimation of the dose was observed for small slits and low energies (4% for 6 MeV), as well as for asymmetric cases and extended source-to-surface distances (12% for SSD = 120 cm). While all monitor unit calculations for SSD = 100 cm were within 3% of measurements, there were large deviations for small cut-outs and source-to-surface distances larger than 100 cm (7% for a 3 cm diameter cut-out and a source-to-surface distance of 110 cm).
Monte Carlo approaches to light nuclei
Energy Technology Data Exchange (ETDEWEB)
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.
Monte carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method, termed Comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
Quantum Monte Carlo for minimum energy structures
Wagner, Lucas K
2010-01-01
We present an efficient method to find minimum energy structures using energy estimates from accurate quantum Monte Carlo calculations. This method involves a stochastic process formed from the stochastic energy estimates from Monte Carlo that can be averaged to find precise structural minima while using inexpensive calculations with moderate statistical uncertainty. We demonstrate the applicability of the algorithm by minimizing the energy of the H2O-OH- complex and showing that the structural minima from quantum Monte Carlo calculations affect the qualitative behavior of the potential energy surface substantially.
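The central trick (averaging a stochastic process driven by noisy Monte Carlo energy estimates to locate a precise structural minimum) can be sketched in one dimension. This is a hedged illustration with a made-up bond-length energy surface, not Wagner's actual algorithm:

```python
import random

def minimize_noisy(grad, x0, noise_sd, steps=2000, lr=0.1, seed=1):
    """Stochastic descent when each gradient evaluation carries Monte Carlo
    noise: individual steps jitter around the minimum, but the running
    average of the iterates converges to it (Polyak-style averaging)."""
    rng = random.Random(seed)
    x, avg = x0, 0.0
    for k in range(1, steps + 1):
        g = grad(x) + rng.gauss(0.0, noise_sd)  # noisy QMC-style gradient estimate
        x -= lr * g
        avg += (x - avg) / k                    # running mean of iterates
    return avg

# Toy "bond-length" surface E(r) = (r - 1.5)^2, with dE/dr = 2(r - 1.5);
# each step uses an inexpensive estimate with moderate statistical error:
r_min = minimize_noisy(lambda r: 2.0 * (r - 1.5), x0=3.0, noise_sd=0.5)
```

The point mirrors the abstract: each individual energy estimate is cheap and noisy, yet the averaged process pins down the minimum precisely.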
Energy Technology Data Exchange (ETDEWEB)
Lakshmanan, Manu N. [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Kapadia, Anuj J., E-mail: anuj.kapadia@duke.edu [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Sahbaee, Pooyan [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Dept. of Physics, NC State University, Raleigh, NC (United States); Wolter, Scott D. [Dept. of Physics, Elon University, Elon, NC (United States); Harrawood, Brian P. [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Brady, David [Dept. of Electrical and Computer Engineering, Duke University, Durham, NC (United States); Samei, Ehsan [Ravin Advanced Imaging Laboratories, Dept. of Radiology, Duke University Medical Center, Durham, NC (United States); Dept. of Electrical and Computer Engineering, Duke University, Durham, NC (United States)
2014-09-15
The analysis of X-ray scatter patterns has been demonstrated as an effective method of identifying specific materials in mixed-object environments, for both biological and non-biological applications. Here we describe an X-ray scatter imaging system for material identification in cluttered objects and investigate its performance using a large-scale Monte Carlo simulation study of one thousand objects containing a broad array of materials. The GEANT4 Monte Carlo source code for Rayleigh scatter physics was modified to model coherent scatter diffraction in bulk materials based on experimentally measured form factors for 33 materials. The simulation was then used to model coherent scatter signals from a variety of target and clutter (background) materials in one thousand randomized objects. The resulting scatter images were used to characterize four parameters of the imaging system that affected its ability to identify target materials: (a) the arrangement of materials in the object, (b) clutter attenuation, (c) type of target material, and (d) the X-ray tube current. We found that the positioning of target materials within the object did not significantly affect their detectability; however, a strong negative correlation was observed between target detectability and the clutter attenuation of the object. The imaging signal was also found to be relatively invariant to increases in X-ray tube current above 1 mAs for most materials considered in the study. To our knowledge, this is the first large-population Monte Carlo study of an X-ray scatter imaging system for material identification in cluttered objects, and it lays the foundation for large-scale studies of the effectiveness of X-ray scatter imaging for material identification in complex samples.
Introduction to Cluster Monte Carlo Algorithms
Luijten, E.
This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.
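The single-cluster (Wolff) update reviewed in this chapter admits a compact implementation for the 2D Ising model. A sketch under standard conventions (ferromagnetic coupling J = 1, periodic boundaries), not the chapter's own code:

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """One Wolff update for the 2D Ising model: grow a cluster from a random
    seed site, adding each aligned neighbour with probability
    1 - exp(-2*beta), then flip the whole cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = (rng.randrange(L), rng.randrange(L))
    target = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            site = (ni % L, nj % L)           # periodic boundaries
            if site not in cluster and spins[site] == target and rng.random() < p_add:
                cluster.add(site)
                stack.append(site)
    for site in cluster:
        spins[site] = -target                 # flip the entire cluster
    return len(cluster)

# Hot start driven below the critical point (beta_c ~ 0.4407), where single
# spin flips decorrelate slowly but cluster moves equilibrate quickly:
L, rng = 8, random.Random(0)
spins = {(i, j): rng.choice((-1, 1)) for i in range(L) for j in range(L)}
for _ in range(200):
    wolff_step(spins, L, beta=0.5, rng=rng)
magnetization = abs(sum(spins.values())) / L ** 2   # typically close to 1 here
```

The cluster-growth probability 1 − exp(−2β) is exactly what makes the flip rejection-free while preserving detailed balance.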
Analytical, experimental, and Monte Carlo system response matrix for pinhole SPECT reconstruction
Energy Technology Data Exchange (ETDEWEB)
Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Spain and Grupo de Imaxe Molecular, IDIS, Santiago de Compostela 15706 (Spain); Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Spain and Servei de Física Médica i Protecció Radiológica, Institut Catalá d' Oncologia, Barcelona 08036 (Spain); Silva-Rodríguez, Jesús [Fundación Ramón Domínguez, Medicina Nuclear, CHUS, Santiago de Compostela 15706 (Spain); Pavía, Javier [Servei de Medicina Nuclear, Hospital Clínic, Barcelona (Spain); Institut d' Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Ros, Doménec [Unitat de Biofísica, Facultat de Medicina, Casanova 143 (Spain); Institut d' Investigacions Biomèdiques August Pí i Sunyer (IDIBAPS) (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Ruibal, Álvaro [Servicio Medicina Nuclear, CHUS (Spain); Grupo de Imaxe Molecular, Facultade de Medicina (USC), IDIS, Santiago de Compostela 15706 (Spain); Fundación Tejerina, Madrid (Spain); and others
2014-03-15
Purpose: To assess the performance of two approaches to the system response matrix (SRM) calculation in pinhole single photon emission computed tomography (SPECT) reconstruction. Methods: Evaluation was performed using experimental data from a low magnification pinhole SPECT system that consisted of a rotating flat detector with a monolithic scintillator crystal. The SRM was computed following two approaches, which were based on Monte Carlo simulations (MC-SRM) and analytical techniques in combination with an experimental characterization (AE-SRM). The spatial response of the system, obtained by using the two approaches, was compared with experimental data. The effect of the MC-SRM and AE-SRM approaches on the reconstructed image was assessed in terms of image contrast, signal-to-noise ratio, image quality, and spatial resolution. To this end, acquisitions were carried out using a hot cylinder phantom (consisting of five fillable rods with diameters of 5, 4, 3, 2, and 1 mm and a uniform cylindrical chamber) and a custom-made Derenzo phantom, with center-to-center distances between adjacent rods of 1.5, 2.0, and 3.0 mm. Results: Good agreement was found for the spatial response of the system between measured data and results derived from MC-SRM and AE-SRM. Only minor differences for point sources at distances smaller than the radius of rotation and large incidence angles were found. Assessment of the effect on the reconstructed image showed a similar contrast for both approaches, with values higher than 0.9 for rod diameters greater than 1 mm and higher than 0.8 for rod diameter of 1 mm. The comparison in terms of image quality showed that all rods in the different sections of a custom-made Derenzo phantom could be distinguished. The spatial resolution (FWHM) was 0.7 mm at iteration 100 using both approaches. The SNR was lower for reconstructed images using MC-SRM than for those reconstructed using AE-SRM, indicating that AE-SRM deals better with the
A User's Manual for MASH V1.5 - A Monte Carlo Adjoint Shielding Code System
Energy Technology Data Exchange (ETDEWEB)
C. O. Slater; J. M. Barnes; J. O. Johnson; J.D. Drischler
1998-10-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. The current version, MASH v1.5, is the successor to the original MASH v1.0 code system initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem.
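The folding of coupling-surface fluence with dose importance reduces, in discrete form, to a weighted sum over energy groups and surface segments. A minimal sketch with made-up numbers (a stand-in for the coupling integral, not MASH's actual data formats):

```python
def fold_dose(fluence, importance):
    """Fold the coupling-surface fluence with the adjoint dose importance:
    D = sum over energy groups g and surface segments s of
    fluence[g][s] * importance[g][s] -- a discrete stand-in for the
    coupling integral evaluated by the MASH coupling code."""
    return sum(phi * imp
               for f_row, i_row in zip(fluence, importance)
               for phi, imp in zip(f_row, i_row))

# Two energy groups x three surface segments (illustrative values only):
phi = [[2.0, 1.0, 0.5],
       [4.0, 2.0, 1.0]]          # forward fluence (discrete ordinates side)
imp = [[0.10, 0.20, 0.40],
       [0.05, 0.10, 0.20]]       # adjoint dose importance (Monte Carlo side)
dose = fold_dose(phi, imp)
```

Because only the forward fluence depends on the source position and orientation, the same tallied importance can be folded repeatedly to map the dose response over many source configurations.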
A user's manual for MASH 1.0: A Monte Carlo Adjoint Shielding Code System
Energy Technology Data Exchange (ETDEWEB)
Johnson, J.O. [ed.
1992-03-01
The Monte Carlo Adjoint Shielding Code System, MASH, calculates neutron and gamma-ray environments and radiation protection factors for armored military vehicles, structures, trenches, and other shielding configurations by coupling a forward discrete ordinates air-over-ground transport calculation with an adjoint Monte Carlo treatment of the shielding geometry. Efficiency and optimum use of computer time are emphasized. The code system includes the GRTUNCL and DORT codes for air-over-ground transport calculations, the MORSE code with the GIFT5 combinatorial geometry package for adjoint shielding calculations, and several peripheral codes that perform the required data preparations, transformations, and coupling functions. MASH is the successor to the Vehicle Code System (VCS) initially developed at Oak Ridge National Laboratory (ORNL). The discrete ordinates calculation determines the fluence on a coupling surface surrounding the shielding geometry due to an external neutron/gamma-ray source. The Monte Carlo calculation determines the effectiveness of the fluence at that surface in causing a response in a detector within the shielding geometry, i.e., the "dose importance" of the coupling surface fluence. A coupling code folds the fluence together with the dose importance, giving the desired dose response. The coupling code can determine the dose response as a function of the shielding geometry orientation relative to the source, distance from the source, and energy response of the detector. This user's manual includes a short description of each code, the input required to execute the code along with some helpful input data notes, and a representative sample problem (input data and selected output edits) for each code.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
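The fission-matrix idea can be sketched in miniature: estimate a matrix of region-to-region fission transfer probabilities, then extract its fundamental eigenmode by power iteration, which replaces the slowly converging iterated source. A sketch with a hypothetical 3-region matrix, not the MCNP implementation (in practice the matrix is tallied during the Monte Carlo run):

```python
def dominant_mode(F, iters=200):
    """Power iteration on a fission matrix F: returns the fundamental
    eigenvalue (k_eff) and the converged fission-source shape.  In the
    accelerated scheme the matrix is tallied from Monte Carlo histories and
    its eigenvector replaces the slowly converging iterated source."""
    n = len(F)
    s = [1.0 / n] * n                 # flat initial source guess
    k = 1.0
    for _ in range(iters):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new)                  # L1 normalisation yields the eigenvalue
        s = [x / k for x in new]
    return k, s

# Hypothetical matrix: F[i][j] = expected fission neutrons born in region i
# per fission neutron born in region j (illustrative numbers only):
F = [[0.5, 0.2, 0.0],
     [0.2, 0.5, 0.2],
     [0.0, 0.2, 0.5]]
k_eff, source = dominant_mode(F)      # analytically k = 0.5 + 0.2*sqrt(2)
```

For high-dominance-ratio systems the plain source iteration converges at the ratio of the two largest eigenvalues, whereas the eigenvector of the tallied matrix jumps directly to the fundamental mode.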
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Improvements in the Monte Carlo code for simulating 4πβ(PC)-γ coincidence system measurements
Dias, M. S.; Takeda, M. N.; Toledo, F.; Brancaccio, F.; Tongu, M. L. O.; Koskinas, M. F.
2013-01-01
A Monte Carlo simulation code known as ESQUEMA has been developed by the Nuclear Metrology Laboratory (Laboratório de Metrologia Nuclear-LMN) in the Nuclear and Energy Research Institute (Instituto de Pesquisas Energéticas e Nucleares-IPEN) to be used as a benchmark for radionuclide standardization. The early version of this code simulated only β-γ and ec-γ emitters with reasonably high electron and X-ray energies. To extend the code to include other radionuclides and enable the code to be applied to software coincidence counting systems, several improvements have been made and are presented in this work.
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Monte Carlo simulations for plasma physics
Energy Technology Data Exchange (ETDEWEB)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)
2000-07-01
Plasma behaviour is very complicated and its analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection (NBI) heating, electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and with the generation of the radial electric field. It has further been applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which lies beyond conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
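Slowing down plus pitch-angle scattering is commonly modelled as one stochastic step per time interval. A toy Langevin-style sketch with made-up collision frequencies `nu_s` and `nu_d` (a crude Euler discretization of the Lorentz scattering operator, not the authors' code):

```python
import random

def collision_step(v, mu, dt, nu_s, nu_d, rng):
    """One Monte Carlo Coulomb-collision step for a fast test particle:
    deterministic slowing down of the speed plus a Gaussian pitch-angle
    kick whose variance (1 - mu^2) * nu_d * dt mimics the Lorentz
    scattering operator."""
    v *= 1.0 - nu_s * dt                                  # slowing down
    kick = rng.gauss(0.0, ((1.0 - mu * mu) * nu_d * dt) ** 0.5)
    mu = mu * (1.0 - nu_d * dt) + kick                    # pitch-angle scattering
    return v, max(-1.0, min(1.0, mu))                     # keep |mu| <= 1

# A fast ion injected with pitch 0.9 slows down and gradually isotropizes;
# nu_s, nu_d, and dt are illustrative values only:
rng = random.Random(2)
v, mu = 1.0, 0.9
for _ in range(1000):
    v, mu = collision_step(v, mu, dt=0.01, nu_s=0.1, nu_d=0.5, rng=rng)
```

Averaging many such test-particle trajectories yields the slowing-down distribution and pitch-angle spread that are compared against experiment.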
Quantum Monte Carlo Calculations of Light Nuclei
Pieper, Steven C
2007-01-01
During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Smart detectors for Monte Carlo radiative transfer
Baes, Maarten
2008-01-01
Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...
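The analogy with SPH density estimation suggests replacing pixel binning with kernel smoothing of the exact impact locations. A one-dimensional sketch, where the kernel choice and the sample values are illustrative rather than the paper's recipe:

```python
def kernel_brightness(impacts, x, h):
    """SPH-style estimate of surface brightness at position x from the exact
    impact locations and luminosity weights of photon packages, using a
    cubic-spline smoothing kernel of width h instead of pixel binning."""
    def w(q):                          # normalised 1-D cubic spline kernel
        if q >= 2.0:
            return 0.0
        if q >= 1.0:
            return (2.0 / 3.0) * 0.25 * (2.0 - q) ** 3
        return (2.0 / 3.0) * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    return sum(lum * w(abs(x - xi) / h) / h for xi, lum in impacts)

# Three photon packages as (impact position, luminosity weight) pairs:
impacts = [(0.0, 1.0), (0.1, 0.5), (2.0, 2.0)]
b_near = kernel_brightness(impacts, x=0.05, h=0.5)  # two packages in range
b_far = kernel_brightness(impacts, x=5.0, h=0.5)    # outside every kernel
```

Where a histogram detector discards the sub-pixel impact position, the kernel estimate uses it, which is the extra information the "smart detector" exploits to reduce noise.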
Cluster hybrid Monte Carlo simulation algorithms
Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.
2002-06-01
We show that the addition of Metropolis single-spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat-bath algorithms in systems where a cluster-flipping formulation is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single-spin flips with cluster flipping.
Monte Carlo simulation for the transport beamline
Energy Technology Data Exchange (ETDEWEB)
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.
2004-01-01
We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop the data base further within the CERN LCG framework.
Monte Carlo Algorithms for Linear Problems
DIMOV, Ivan
2000-01-01
MSC Subject Classification: 65C05, 65U05. Monte Carlo methods are a powerful tool in many fields of mathematics, physics and engineering. It is known that these methods give statistical estimates for a functional of the solution by performing random sampling of a random variable whose mathematical expectation is the desired functional. Monte Carlo methods are methods for solving problems using random variables. In the book [16] edited by Yu. A. Shreider one can find the followin...
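A classical instance of such a functional estimate is the von Neumann-Ulam random-walk solver for a linear system x = Hx + b, where each walk's accumulated score has expectation equal to one component of the solution. A minimal sketch (the uniform transition probabilities are a simplistic illustrative choice):

```python
import random

def mc_linear_solve(H, b, i, n_walks=20000, p_stop=0.3, seed=3):
    """von Neumann-Ulam scheme for x = H*x + b: a random walk started in
    state i scores weight * b[state] at every visited state, and the walk
    weight picks up H[state][next] divided by the transition probability.
    The sample mean is an unbiased estimate of the component x_i, provided
    the Neumann series sum_k H^k b converges."""
    rng = random.Random(seed)
    n = len(b)
    p_move = (1.0 - p_stop) / n               # uniform transition probability
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, 0.0
        while True:
            score += weight * b[state]
            if rng.random() < p_stop:         # geometric walk termination
                break
            nxt = rng.randrange(n)
            weight *= H[state][nxt] / p_move
            state = nxt
        total += score
    return total / n_walks

H = [[0.1, 0.2], [0.2, 0.1]]                  # contraction, so the series converges
b = [1.0, 2.0]
x0 = mc_linear_solve(H, b, i=0)               # exact value is 1.3 / 0.77
```

Note that a single component of the solution is estimated without ever forming or inverting the full matrix, which is exactly the "functional of the solution" viewpoint of the abstract.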
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R H
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and therefore fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of point sets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
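The gap the abstract refers to can be seen by integrating the same function with pseudorandom points and with a low-discrepancy (van der Corput) sequence; the usual variance-based error bar describes the former but says nothing about the latter. A small illustration (not the stochastic estimator the paper proposes):

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence
    (digit reversal of k in the given base)."""
    pts = []
    for k in range(1, n + 1):
        q, denom, x = k, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        pts.append(x)
    return pts

def estimate(points, f):
    return sum(map(f, points)) / len(points)

f = lambda x: x * x                   # integral over [0, 1] is exactly 1/3
n = 4096
rng = random.Random(4)
err_mc = abs(estimate([rng.random() for _ in range(n)], f) - 1.0 / 3.0)
err_qmc = abs(estimate(van_der_corput(n), f) - 1.0 / 3.0)
# err_qmc shrinks like (log n)/n while err_mc only shrinks like 1/sqrt(n),
# so for smooth integrands the quasi-random error is typically far smaller.
```

Since the van der Corput points are deterministic and correlated, the independence assumption behind the standard variance estimator does not apply to `err_qmc`, which is precisely the problem the paper addresses.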
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Jehan, Musarrat
The response of a dynamic system is random. There is randomness in both the applied loads and the strength of the system. Therefore, to account for the uncertainty, the safety of the system must be quantified using its probability of survival (reliability). Monte Carlo Simulation (MCS) is a widely used method for probabilistic analysis because of its robustness. However, a challenge in reliability assessment using MCS is that the high computational cost limits the accuracy of MCS. Haftka et al. [2010] developed an improved sampling technique for reliability assessment called separable Monte Carlo (SMC) that can significantly increase the accuracy of estimation without increasing the cost of sampling. However, this method was applied to time-invariant problems involving two random variables only. This dissertation extends SMC to random vibration problems with multiple random variables. This research also develops a novel method for estimation of the standard deviation of the probability of failure of a structure under static or random vibration. The method is demonstrated on quarter car models and a wind turbine. The proposed method is validated using repeated standard MCS.
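As a minimal illustration of reliability assessment by crude Monte Carlo (the baseline that separable Monte Carlo improves on), the sketch below estimates the probability that a random load exceeds a random capacity; the distributions and all parameters are hypothetical.

```python
import numpy as np

def mc_failure_probability(n_samples=200000, rng=0):
    """Crude Monte Carlo estimate of P(load > capacity), with both the
    strength of the system and the applied load treated as random variables
    (hypothetical normal distributions)."""
    rng = np.random.default_rng(rng)
    capacity = rng.normal(10.0, 1.0, n_samples)   # strength of the system
    load = rng.normal(7.0, 1.5, n_samples)        # applied load
    return np.mean(load > capacity)
```

Separable Monte Carlo exploits the fact that capacity and load samples can be reused in all pairwise comparisons, reducing the variance of this estimate at the same sampling cost.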
Performance evaluation of Biograph PET/CT system based on Monte Carlo simulation
Wang, Bing; Gao, Fei; Liu, Hua-Feng
2010-10-01
The combined lutetium oxyorthosilicate (LSO) Biograph PET/CT, developed by Siemens, has been introduced into medical practice. There are no septa between the scintillator rings, so the acquisition mode is fully 3D. The PET component incorporates three rings of 48 detector blocks, each comprising a 13×13 matrix of 4×4×20 mm³ elements. The patient aperture is 70 cm, the transversal field of view (FOV) is 58.5 cm, and the axial FOV is 16.2 cm. The CT component is a 16-slice spiral CT scanner. The physical performance of this PET/CT scanner has been evaluated using the Monte Carlo simulation method according to the latest NEMA NU 2-2007 standard, and the results have been compared with experimental results. For the PET part, in the center of the FOV the average transversal resolution is 3.67 mm, the average axial resolution is 3.94 mm, and the 3D-reconstructed scatter fraction is 31.7%. The sensitivities of the PET scanner are 4.21 kcps/MBq and 4.26 kcps/MBq at 0 cm and 10 cm off the center of the transversal FOV. The peak NEC is 95.6 kcps at a concentration of 39.2 kBq/ml. The spatial resolution of the CT part is up to 1.12 mm at 10 mm off the center. The differences between simulated and measured results are within acceptable limits.
Monte Carlo methods and applications in nuclear physics
Energy Technology Data Exchange (ETDEWEB)
Carlson, J.
1990-01-01
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.
Public Infrastructure for Monte Carlo Simulation: publicMC@BATAN
Waskita, A A; Akbar, Z; Handoko, L T; 10.1063/1.3462759
2010-01-01
The first cluster-based public computing system for Monte Carlo simulation in Indonesia is introduced. The system has been developed to enable the public to perform Monte Carlo simulation on a parallel computer through an integrated and user-friendly dynamic web interface. The beta version, called publicMC@BATAN, has been released and implemented for internal users at the National Nuclear Energy Agency (BATAN). In this paper the concept and architecture of publicMC@BATAN are presented.
Successful combination of the stochastic linearization and Monte Carlo methods
Elishakoff, I.; Colombi, P.
1993-01-01
A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and a nonlinear restoring force is considered. The proposed combination of energy-wise linearization with the Monte Carlo method yields an error under 5 percent, a reduction of the error associated with conventional stochastic linearization by a factor of 4.6.
Monte Carlo method for solving a parabolic problem
Directory of Open Access Journals (Sweden)
Tian Yi
2016-01-01
In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson scheme with the Monte Carlo method: the governing equations are first discretized by the Crank-Nicolson method, yielding a large sparse system of linear algebraic equations, which is then solved by the Monte Carlo method. To illustrate the usefulness of this technique, we apply it to some test problems.
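The second stage described above, solving a sparse linear system by Monte Carlo, can be sketched with a classical random-walk estimator of the Neumann series; this is a generic illustration of the technique, not the authors' algorithm, and the test matrix is hypothetical.

```python
import numpy as np

def mc_solve(H, f, n_walks=5000, walk_len=20, rng=None):
    """Monte Carlo estimate of the solution of x = H x + f via the
    Neumann series sum_k H^k f; valid when the spectral radius of H < 1."""
    rng = np.random.default_rng(rng)
    n = len(f)
    x = np.zeros(n)
    for i in range(n):                         # estimate one component at a time
        total = 0.0
        for _ in range(n_walks):
            state, w, acc = i, 1.0, f[i]
            for _ in range(walk_len):
                nxt = int(rng.integers(n))     # uniform transition, p = 1/n
                w *= H[state, nxt] * n         # importance weight H_ij / p
                acc += w * f[nxt]
                state = nxt
            total += acc
        x[i] = total / n_walks
    return x
```

Each walk contributes an unbiased estimate of one series truncated at `walk_len` terms; averaging over walks reduces the statistical error as 1/√n_walks.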
Monte Carlo Simulation in Digital Communication Systems
Institute of Scientific and Technical Information of China (English)
高明慧
2010-01-01
This paper introduces the wide application of digital communication systems and the basic idea of the Monte Carlo algorithm, focusing on the error probability in digital communication systems and on the use of Monte Carlo simulation to evaluate the performance of digital communication systems subject to noise and interference.
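The error-probability evaluation described above is, in essence, bit-error-rate estimation by sampling. A minimal sketch for BPSK over an AWGN channel follows; the signal model and parameters are chosen purely for illustration.

```python
import numpy as np

def bpsk_ber_mc(ebn0_db, n_bits=200000, rng=0):
    """Monte Carlo estimate of the BPSK bit-error rate over an AWGN channel."""
    rng = np.random.default_rng(rng)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                     # map 0 -> +1, 1 -> -1 (unit energy)
    ebn0 = 10 ** (ebn0_db / 10)
    noise_std = np.sqrt(1 / (2 * ebn0))        # N0/2 per real dimension
    received = symbols + noise_std * rng.standard_normal(n_bits)
    decided = (received < 0).astype(int)       # maximum-likelihood threshold detector
    return np.mean(decided != bits)
```

At Eb/N0 = 4 dB this reproduces the theoretical BER, Q(√(2 Eb/N0)) ≈ 1.25×10⁻², to within sampling error.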
EXTENDED MONTE CARLO LOCALIZATION ALGORITHM FOR MOBILE SENSOR NETWORKS
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
A real-world localization system for wireless sensor networks that adapts to mobility and an irregular radio propagation model is considered. Traditional range-based techniques and recent range-free localization schemes are not well suited to localization in mobile sensor networks, whereas the probabilistic approach of Bayesian filtering with particle-based density representations provides a comprehensive solution to the localization problem. Monte Carlo localization is a Bayesian filtering method that approximates the mobile node's location by a set of weighted particles. In this paper, an enhanced Monte Carlo localization algorithm, Extended Monte Carlo Localization (Ext-MCL), is proposed that is suitable for practical wireless network environments where the radio propagation model is irregular. Simulation results show that the proposal achieves better localization accuracy and a higher number of localizable nodes than previously proposed Monte Carlo localization schemes, not only for the ideal radio model but also for irregular ones.
Energy Technology Data Exchange (ETDEWEB)
Rojas C, E.L.; Varon T, C.F.; Pedraza N, R. [ININ, 52750 La Marquesa, Estado de Mexico (Mexico)]. e-mail: elrc@nuclear.inin.mx
2007-07-01
The treatment of breast cancer at early stages is of vital importance; accordingly, most investigations are dedicated to early detection and treatment of the disease. As a consequence of such research and of clinical practice, a high-dose-rate irradiation system known as MammoSite was developed in the U.S.A. in 2002. In this work we carry out dose calculations for a simplified MammoSite system with the Monte Carlo codes PENELOPE and MCNPX, varying the concentration of the contrast material that is used in it. (Author)
Information Geometry and Sequential Monte Carlo
Sim, Aaron; Stumpf, Michael P H
2012-01-01
This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular, the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and FitzHugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...
Kamibayashi, Yuki; Miura, Shinichi
2016-08-01
In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth-order approximation of a density operator. To reveal the parameter dependence of various physical quantities, we analytically solve one-dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix using the fourth-order approximation for the oscillators. We then apply our methods to realistic systems such as a water molecule and a para-hydrogen cluster. In the HMC, we adopt a two-level description to avoid the time-consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many-body interactions.
Fuchs, M.; Ireta, J.; Scheffler, M.; Filippi, C.
2006-03-01
Dispersion (Van der Waals) forces are important in many molecular phenomena such as self-assembly of molecular crystals or peptide folding. Calculating this nonlocal correlation effect requires accurate electronic structure methods. Usual density-functional theory with generalized gradient functionals (GGA-DFT) fails unless empirical corrections are added that still need extensive validation. Quantum chemical methods like MP2 and coupled cluster are more accurate, yet limited to rather small systems by their unfavorable computational scaling. Diffusion Monte Carlo (DMC) can provide accurate molecular total energies and remains feasible also for larger systems. Here we apply the fixed-node DMC method to (bio-)molecular model systems where dispersion forces are significant: (dimethyl-) formamide and benzene dimers, and adenine-thymine DNA base pairs. Our DMC binding energies agree well with data from coupled cluster (CCSD(T)), in particular for stacked geometries where GGA-DFT fails qualitatively and MP2 predicts too strong binding.
Acceleration of the Monte Carlo EM Algorithm
Institute of Scientific and Technical Information of China (English)
罗季
2008-01-01
The EM algorithm is a data-augmentation algorithm commonly used in recent years to estimate the posterior mode, but deriving a closed-form expression for the integral in its E-step is sometimes difficult, or even impossible, which limits the breadth of its applicability. The Monte Carlo EM algorithm solves this problem well by evaluating the E-step integral with Monte Carlo simulation, greatly broadening the algorithm's applicability. However, both the EM algorithm and the Monte Carlo EM algorithm converge only linearly, at a rate governed by the reciprocal of the missing information; when the fraction of missing data is high, convergence is very slow. The Newton-Raphson algorithm, by contrast, converges quadratically near the posterior mode. This paper proposes an accelerated Monte Carlo EM algorithm that combines the Monte Carlo EM algorithm with the Newton-Raphson algorithm: the E-step is still realized by Monte Carlo simulation, and the resulting algorithm is proved to converge quadratically near the posterior mode. It thus retains the advantages of the Monte Carlo EM algorithm while improving its convergence rate. Through numerical examples, the results of the accelerated Monte Carlo EM algorithm are compared with those of the EM and Monte Carlo EM algorithms, further illustrating its merits.
Quantum Monte Carlo Endstation for Petascale Computing
Energy Technology Data Exchange (ETDEWEB)
Lubos Mitas
2011-01-26
The NCSU research group has focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative, for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, the calculation of pfaffians, and the introduction of backflow coordinates, together with the overall organization of the code and the distribution of random walkers over multicore architectures. We have addressed several bottlenecks, such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and includes interfaces to data files from other conventional electronic structure codes such as GAMESS, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and, partially, three graduate students over the period of the grant duration; it has resulted in 13
SMCTC: Sequential Monte Carlo in C++
Directory of Open Access Journals (Sweden)
Adam M. Johansen
2009-04-01
Sequential Monte Carlo methods are a very general class of Monte Carlo methods for sampling from sequences of distributions. Simple examples of these algorithms are used very widely in the tracking and signal processing literature. Recent developments illustrate that these techniques have much more general applicability, and can be applied very effectively to statistical inference problems. Unfortunately, these methods are often perceived as being computationally expensive and difficult to implement. This article seeks to address both of these problems. A C++ template class library for the efficient and convenient implementation of very general Sequential Monte Carlo algorithms is presented. Two example applications are provided: a simple particle filter for illustrative purposes and a state-of-the-art algorithm for rare event estimation.
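The "simple particle filter" mentioned above can be illustrated in a few lines. The sketch below is a generic bootstrap particle filter for a random-walk state observed in Gaussian noise; it is written in Python rather than the library's C++ and does not use SMCTC's actual API.

```python
import numpy as np

def bootstrap_pf(obs, n_particles=5000, q=1.0, r=1.0, rng=0):
    """Bootstrap particle filter for the model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r),  x_0 = 0.
    Returns the filtered posterior mean at each time step."""
    rng = np.random.default_rng(rng)
    parts = np.zeros(n_particles)
    means = []
    for y in obs:
        parts = parts + np.sqrt(q) * rng.standard_normal(n_particles)  # propagate
        logw = -0.5 * (y - parts) ** 2 / r                             # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * parts))
        idx = rng.choice(n_particles, n_particles, p=w)                # multinomial resample
        parts = parts[idx]
    return np.array(means)
```

For this linear-Gaussian model the filtered means can be checked against the exact Kalman filter, which is what makes the example a convenient sanity test.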
Shell model the Monte Carlo way
Energy Technology Data Exchange (ETDEWEB)
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
A brief introduction to Monte Carlo simulation.
Bonate, P L
2001-01-01
Simulation affects our life every day through our interactions with the automobile, airline and entertainment industries, just to name a few. The use of simulation in drug development is relatively new, but its use is increasing along with the speed of modern computers. One well-known example of simulation in drug development is molecular modelling. Another use of simulation that has recently been seen in drug development is Monte Carlo simulation of clinical trials. Monte Carlo simulation differs from traditional simulation in that the model parameters are treated as stochastic or random variables, rather than as fixed values. The purpose of this paper is to provide a brief introduction to Monte Carlo simulation methods.
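A minimal sketch of the distinction drawn above, with a model parameter (drug clearance) treated as a random variable rather than a fixed value; the one-compartment model and all numbers are hypothetical.

```python
import numpy as np

def simulate_auc(dose=100.0, n_trials=100000, rng=0):
    """Monte Carlo simulation of drug exposure with a stochastic parameter:
    AUC = dose / CL for a one-compartment model with an IV bolus dose,
    where clearance CL varies across the simulated population."""
    rng = np.random.default_rng(rng)
    # hypothetical population: clearance lognormal, median 5 L/h, 30% CV
    cl = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n_trials)
    auc = dose / cl
    return auc.mean(), np.percentile(auc, [5, 95])
```

A deterministic simulation with CL fixed at 5 L/h would return the single value AUC = 20; the Monte Carlo version instead yields a distribution of exposures, from which mean and percentile summaries can be read off.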
CosmoPMC: Cosmology Population Monte Carlo
Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren
2011-01-01
We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Markov chain Monte Carlo (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
Kanematsu, Nobuyuki; Inaniwa, Taku; Nakao, Minoru
2016-07-01
In the conventional procedure for accurate Monte Carlo simulation of radiotherapy, a CT number given to each pixel of a patient image is directly converted to mass density and elemental composition using their respective functions that have been calibrated specifically for the relevant x-ray CT system. We propose an alternative approach that performs the conversion in two steps: the first from CT number to density and the second from density to composition. Based on the latest compilation of standard tissues for reference adult male and female phantoms, we sorted the standard tissues into groups by mass density and defined the representative tissues by averaging the material properties per group. With these representative tissues, we formulated polyline relations between mass density and each of the following: electron density, stopping-power ratio and elemental densities. We also revised a procedure of stoichiometric calibration for CT-number conversion and demonstrated the two-step conversion method for a theoretically emulated CT system with hypothetical 80 keV photons. For the standard tissues, high correlation was generally observed between mass density and the other densities, excluding those of C and O for the light spongiosa tissues between 1.0 g cm-3 and 1.1 g cm-3 occupying 1% of the human body mass. The polylines fitted to the dominant tissues were generally consistent with similar formulations in the literature. The two-step conversion procedure was demonstrated to be practical and will potentially facilitate Monte Carlo simulation for treatment planning and for retrospective analysis of treatment plans with little impact on the management of planning CT systems.
Monte Carlo Hamiltonian: Linear Potentials
Institute of Scientific and Technical Information of China (English)
LUO Xiang-Qian; Helmut KROEGER; et al.
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2, and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x/2 for x ≥ 0. The results for the spectrum, wave functions and thermodynamic observables are in agreement with analytical or Runge-Kutta calculations.
Monte Carlo simulation of neutron scattering instruments
Energy Technology Data Exchange (ETDEWEB)
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
The Rational Hybrid Monte Carlo Algorithm
Clark, M A
2006-01-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
Pasini, J M; Cordero, P
2001-04-01
We study a one-dimensional granular gas of pointlike particles not subject to gravity between two walls at temperatures T(left) and T(right). The system exhibits two distinct regimes, depending on the normalized temperature difference Delta=(T(right)-T(left))/(T(right)+T(left)): one completely fluidized and one in which a cluster coexists with the fluidized gas. When Delta is above a certain threshold, cluster formation is fully inhibited, obtaining a completely fluidized state. The mechanism that produces these two phases is explained. In the fluidized state the velocity distribution function exhibits peculiar non-Gaussian features. For this state, comparison between integration of the Boltzmann equation using the direct-simulation Monte Carlo method and results stemming from microscopic Newtonian molecular dynamics gives good coincidence, establishing that the non-Gaussian features observed do not arise from the onset of correlations.
Ishisaki, Y; Fujimoto, R; Ozaki, M; Ebisawa, K; Takahashi, T; Ueda, Y; Ogasaka, Y; Ptak, A; Mukai, K; Hamaguchi, K; Hirayama, M; Kotani, T; Kubo, H; Shibata, R; Ebara, M; Furuzawa, A; Iizuka, R; Inoue, H; Mori, H; Okada, S; Yokoyama, Y; Matsumoto, H; Nakajima, H; Yamaguchi, H; Anabuki, N; Tawa, N; Nagai, M; Katsuda, S; Hayashida, K; Bamba, A; Miller, E D; Sato, K; Yamasaki, N Y
2006-01-01
We have developed a framework for the Monte-Carlo simulation of the X-Ray Telescopes (XRT) and the X-ray Imaging Spectrometers (XIS) onboard Suzaku, mainly for the scientific analysis of spatially and spectroscopically complex celestial sources. A photon-by-photon instrumental simulator is built on the ANL platform, which has been successfully used in ASCA data analysis. The simulator has a modular structure, in which the XRT simulation is based on a ray-tracing library, while the XIS simulation utilizes a spectral "Redistribution Matrix File" (RMF), generated separately by other tools. Instrumental characteristics and calibration results, e.g., XRT geometry, reflectivity, mutual alignments, thermal shield transmission, build-up of the contamination on the XIS optical blocking filters (OBF), are incorporated as completely as possible. Most of this information is available in the form of the FITS (Flexible Image Transport System) files in the standard calibration database (CALDB). This simulator can also be ut...
Institute of Scientific and Technical Information of China (English)
YAO Xiao-yan; LI Peng-lei; DONG Shuai; LIU Jun-ming
2007-01-01
A three-dimensional Ising-like model doped with anti-ferromagnetic (AFM) bonds is proposed to investigate the magnetic properties of a doped triangular spin-chain system by using a Monte-Carlo simulation. The simulated results indicate that the steplike magnetization behavior is very sensitive to the concentration of AFM bonds. A low concentration of AFM bonds can suppress the stepwise behavior considerably, in accordance with doping experiments on Ca3Co2O6. The analysis of spin snapshots demonstrates that the AFM bond doping not only breaks the ferromagnetically ordered linear spin chains along the hexagonal c-axis but also has a great influence upon the spin configuration in the ab-plane.
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
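The sequential-updating Metropolis scheme that the paper parallelizes can be illustrated on a toy target. The sketch below samples a one-dimensional standard normal on the CPU; it is a generic Metropolis sampler, not the authors' GPU code or their Coulomb model.

```python
import numpy as np

def metropolis_gaussian(n_steps=50000, step=1.0, rng=0):
    """Metropolis sampler for the 1D standard normal pi(x) ~ exp(-x^2/2),
    updating the single coordinate sequentially with uniform proposals."""
    rng = np.random.default_rng(rng)
    x, samples = 0.0, []
    for _ in range(n_steps):
        prop = x + step * rng.uniform(-1, 1)
        # accept with probability min(1, pi(prop)/pi(x))
        if rng.random() < np.exp(0.5 * (x * x - prop * prop)):
            x = prop
        samples.append(x)
    return np.array(samples)
```

In the many-particle setting, "sequential updating" means cycling through particles with such single-particle moves, which is the structure the GPU implementation maps onto SIMD lanes.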
A GPU-based Large-scale Monte Carlo Simulation Method for Systems with Long-range Interactions
Liang, Yihao; Li, Yaohang
2016-01-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures. It adopts the sequential updating scheme of Metropolis algorithm, and makes no approximation in the computation of energy. It reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We use this method to simulate primitive model electrolytes. We measure very precisely all ion-ion pair correlation functions at high concentrations, and extract renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
Horváthová, L; Mitas, L; Štich, I
2014-01-01
We present calculations of electronic and magnetic structures of vanadium-benzene multidecker clusters V$_{n}$Bz$_{n+1}$ ($n$ = 1 - 3) using advanced quantum Monte Carlo methods. These and related systems have been identified as prospective spin filters in spintronic applications, assuming that their ground states are half-metallic ferromagnets. Although we find that magnetic properties of these multideckers are consistent with ferromagnetic coupling, their electronic structures do not appear to be half-metallic as previously assumed. In fact, they are ferromagnetic insulators with large and broadly similar $\\uparrow$-/$\\downarrow$-spin gaps. This makes the potential of these and related materials as spin filtering devices very limited, unless they are further modified or functionalized.
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Monte Carlo scatter correction for SPECT
Liu, Zemei
The goal of this dissertation is to present a quantitatively accurate and computationally fast scatter correction method that is robust and easily accessible for routine applications in SPECT imaging. A Monte Carlo based scatter estimation method is investigated and developed further. The Monte Carlo simulation program SIMIND (Simulating Medical Imaging Nuclear Detectors) was specifically developed to simulate clinical SPECT systems. The SIMIND scatter estimation (SSE) method was developed further using a multithreading technique to distribute the scatter estimation task across multiple threads running concurrently on multi-core CPUs, accelerating the scatter estimation process. An analytical collimator model, which yields lower noise, was used during SSE. The research includes the addition to SIMIND of charge transport modeling in cadmium zinc telluride (CZT) detectors. Phenomena associated with radiation-induced charge transport, including charge trapping, charge diffusion, charge sharing between neighboring detector pixels, as well as uncertainties in the detection process, are addressed. Experimental measurements and simulation studies were designed for scintillation crystal based SPECT and CZT based SPECT systems to verify and evaluate the expanded SSE method. Jaszczak Deluxe and Anthropomorphic Torso Phantoms (Data Spectrum Corporation, Hillsborough, NC, USA) were used for experimental measurements, and digital versions of the same phantoms were employed during simulations to mimic the experimental acquisitions. This study design enabled easy comparison of experimental and simulated data. The results have consistently shown that the SSE method performed similarly to or better than the triple energy window (TEW) and effective scatter source estimation (ESSE) methods in experiments on all the clinical SPECT systems. The SSE method is proven to be a viable method for scatter estimation for routine clinical use.
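For comparison, the triple energy window (TEW) method mentioned above estimates the scatter counts in the photopeak window by trapezoidal interpolation from two narrow flanking windows; a minimal sketch follows, where the counts and window widths are hypothetical.

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate: the area of a trapezoid whose
    sides are the count densities (counts per keV) in the two narrow
    windows flanking the photopeak window of width w_peak."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
```

For example, with 100 counts in a 4 keV lower window, 20 counts in a 4 keV upper window, and a 20 keV photopeak window, `tew_scatter(100, 20, 4, 4, 20)` gives 300.0 estimated scatter counts.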
On nonlinear Markov chain Monte Carlo
Andrieu, Christophe; Doucet, Arnaud; Del Moral, Pierre; 10.3150/10-BEJ307
2011-01-01
Let $\\mathscr{P}(E)$ be the space of probability measures on a measurable space $(E,\\mathcal{E})$. In this paper we introduce a class of nonlinear Markov chain Monte Carlo (MCMC) methods for simulating from a probability measure $\\pi\\in\\mathscr{P}(E)$. Nonlinear Markov kernels (see [Feynman--Kac Formulae: Genealogical and Interacting Particle Systems with Applications (2004) Springer]) $K:\\mathscr{P}(E)\\times E\\rightarrow\\mathscr{P}(E)$ can be constructed to, in some sense, improve over MCMC methods. However, such nonlinear kernels cannot be simulated exactly, so approximations of the nonlinear kernels are constructed using auxiliary or potentially self-interacting chains. Several nonlinear kernels are presented and it is demonstrated that, under some conditions, the associated approximations exhibit a strong law of large numbers; our proof technique is via the Poisson equation and Foster--Lyapunov conditions. We investigate the performance of our approximations with some simulations.
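The self-interaction mechanism the abstract describes, a chain that occasionally proposes moves from its own past, can be sketched in a few lines. This is a toy illustration only: the plain Metropolis accept/reject used below omits the proposal-correction terms whose effect the paper analyses, which is precisely why such approximations need the theoretical treatment given there.

```python
import math
import random

def self_interacting_chain(log_pi, x0, n_steps, eps=0.1, step=1.0, seed=0):
    """Toy self-interacting chain: with probability eps the proposal is
    resampled from the chain's own history (the 'interaction'); otherwise
    a Gaussian random-walk move is made. The plain Metropolis ratio here
    ignores the history-proposal density, so this only illustrates the
    mechanism, not the paper's exact nonlinear kernels."""
    rng = random.Random(seed)
    x = x0
    history = [x0]
    for _ in range(n_steps):
        if rng.random() < eps:
            y = rng.choice(history)          # interact with past samples
        else:
            y = x + rng.gauss(0.0, step)     # conventional random walk
        # Metropolis accept/reject targeting pi (up to a constant)
        if rng.random() < math.exp(min(0.0, log_pi(y) - log_pi(x))):
            x = y
        history.append(x)
    return history

# Sample a standard normal target as a sanity check.
chain = self_interacting_chain(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(chain) / len(chain)
```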
Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules
Lester, William A; Reynolds, PJ
1994-01-01
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n
Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia
Energy Technology Data Exchange (ETDEWEB)
Granero Cabanero, D.
2015-07-01
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation reviews mainly the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding-barrier calculations, or obtaining dose distributions around applicators. (Author)
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
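The dynamically weighted estimator the abstract refers to has the familiar weighted-average form. SAMC produces its weights adaptively during the run; the sketch below uses fixed importance ratios instead, purely to illustrate that weighted-sample form (all names and the toy integrand are illustrative assumptions):

```python
import math
import random

def weighted_estimate(h, log_target, log_trial, draw_trial, n, rng):
    """Weighted Monte Carlo estimator: samples come from a trial
    distribution and enter the average with weights w = target / trial,
    normalised by the total weight (self-normalised form, so unnormalised
    densities are fine)."""
    num = den = 0.0
    for _ in range(n):
        x = draw_trial(rng)
        w = math.exp(log_target(x) - log_trial(x))
        num += w * h(x)
        den += w
    return num / den

# Toy check: E[X^2] = 1 under N(0,1), sampling from Uniform(-5, 5).
rng = random.Random(0)
est = weighted_estimate(lambda x: x * x,
                        lambda x: -0.5 * x * x,        # unnormalised N(0,1)
                        lambda x: 0.0,                 # flat trial density
                        lambda r: r.uniform(-5.0, 5.0),
                        100000, rng)
```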
Monte Carlo modelling of TRIGA research reactor
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with essentially no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.
Bayoumi, T A; Reda, S M; Saleh, H M
2012-01-01
Radioactive waste generated from nuclear applications should be properly isolated by a suitable containment system such as a multi-barrier container. The present study aims to evaluate the isolation capacity of a new multi-barrier container made from cement and clay and including borate waste materials. These wastes were spiked with (137)Cs and (60)Co radionuclides to simulate the waste generated from the primary cooling circuit of pressurized water reactors. Leaching of both radionuclides in ground water was followed and calculated over ten years. Monte Carlo (MCNP5) simulations computed the photon flux distribution of the multi-barrier container, including radioactive borate waste of specific activity 11.22 kBq/g and 4.18 kBq/g for (137)Cs and (60)Co, respectively, at different periods of 0, 15.1, 30.2 and 302 years. The average total flux for a 100 cm radius spherical cell was 0.192 photon/cm(2) at the initial time and 2.73×10(-4) photon/cm(2) after 302 years. The maximum waste activity keeping the surface radiation dose within the permissible level was calculated and found to be 56 kBq/g, with attenuation factors of 0.73 cm(-1) and 0.6 cm(-1) for cement and clay, respectively. The average total flux was 1.37×10(-3) photon/cm(2) after 302 years. Monte Carlo simulations revealed that the proposed multi-barrier container is safe enough during transportation, evacuation or rearrangement in the disposal site for more than 300 years. Copyright © 2011 Elsevier Ltd. All rights reserved.
Ruzic, David N.; Juliano, Daniel R.; Hayden, Douglas B.; Allain, Monica M. C.
1998-10-01
A code has been developed to model the transport of sputtered material in a modified industrial-scale magnetron. The device has a target diameter of 355 mm and was designed for 200 mm substrates. The chamber has been retrofitted with an auxiliary RF inductive plasma source located between the target and substrate. The source consists of a water-cooled copper coil immersed in the plasma, but with a diameter large enough to prevent shadowing of the substrate. The RF plasma, target sputter flux distribution, background gas conditions, and geometry are all inputs to the code. The plasma is characterized via a combination of a Langmuir probe apparatus and the results of a simple analytic model of the ICP system. The source of sputtered atoms from the target is found through measurements of the depth of the sputter track in an eroded target, and the distribution of the sputter flux is calculated via VFTRIM. A Monte Carlo routine tracks high energy atoms emerging from the target as they move through the chamber and undergo collisions with the electrons and background gas. The sputtered atoms are tracked by this routine whatever their electronic state (neutral, excited, or ion). If the energy of a sputtered atom decreases to near-thermal levels, then it exits the Monte Carlo routine and is tracked with a simple diffusion model. In this way, all sputtered atoms are followed until they hit and stick to a surface, and the velocity distribution of the sputtered atom population (including electronic state information) at each surface, in particular the substrate, is calculated. Through the use of this simulation the coil parameters and geometry can be tailored to maximize deposition rate and sputter flux uniformity.
A comparison of Monte Carlo generators
Golan, Tomasz
2014-01-01
A comparison of GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and $\\pi^+$ two-dimensional energy vs. cosine distribution.
Monte Carlo Tools for Jet Quenching
Zapp, Korinna
2011-01-01
A thorough understanding of jet quenching on the basis of multi-particle final states and jet observables requires new theoretical tools. This talk summarises the status and prospects of the theoretical description of jet quenching in terms of Monte Carlo generators.
An Introduction to Monte Carlo Methods
Raeside, D. E.
1974-01-01
Reviews the principles of Monte Carlo calculation and random number generation, introducing the direct and rejection methods of sampling as well as variance-reduction procedures. Indicates that the increasing availability of computers makes it possible for a wider audience to learn about these powerful methods. (CC)
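The rejection method mentioned in the review fits in a few lines; a minimal sketch for a simple one-dimensional density (the triangular example and the sample size are illustrative choices):

```python
import random

def rejection_sample(pdf, pdf_max, a, b, rng):
    """Rejection method: propose x uniformly on [a, b] and accept it
    with probability pdf(x) / pdf_max."""
    while True:
        x = rng.uniform(a, b)
        if rng.uniform(0.0, pdf_max) < pdf(x):
            return x

# Example: sample the triangular density p(x) = 2x on [0, 1] (mean 2/3).
rng = random.Random(42)
samples = [rejection_sample(lambda x: 2.0 * x, 2.0, 0.0, 1.0, rng)
           for _ in range(50000)]
mean = sum(samples) / len(samples)
```

The acceptance rate is the ratio of the area under the density to the area of the bounding box (here 1/2), which is the usual efficiency concern that motivates the variance-reduction procedures the review goes on to discuss.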
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concepts of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying detailed balance.
An analysis of Monte Carlo tree search
CSIR Research Space (South Africa)
James, S
2017-02-01
Full Text Available Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications on the algorithm, as well as the manner...
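The selection step at the heart of most MCTS variants is the UCB1 rule, which balances exploitation (mean reward) against exploration (visit counts). A minimal sketch of that rule follows; the function name and the (visits, total_reward) pair representation are illustrative choices, not from the paper:

```python
import math

def ucb1_select(children, c=1.41):
    """UCB1 selection at one MCTS node. children is a list of
    (visit_count, total_reward) pairs; unvisited children are tried
    first, otherwise the child maximising mean reward plus an
    exploration bonus is chosen."""
    parent_visits = sum(n for n, _ in children)
    def score(i):
        n, w = children[i]
        if n == 0:
            return float("inf")  # always expand unvisited children first
        return w / n + c * math.sqrt(math.log(parent_visits) / n)
    return max(range(len(children)), key=score)
```

Modifications of the kind the paper studies typically alter this formula (the constant c, the bonus term) or the expansion and backpropagation phases around it.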
Monte Carlo Simulation of Counting Experiments.
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
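The binomial-subdivision construction described in the abstract can be sketched directly: slice the counting interval into slots so short that at most one count falls in each, which makes the total binomial and, in the limit, Poisson. The rate and slot counts below are illustrative values:

```python
import random

def simulate_counts(rate, t, n_sub, rng):
    """Subdivide [0, t] into n_sub slots so short that at most one count
    occurs per slot; the total count is then Binomial(n_sub, rate*t/n_sub),
    which tends to Poisson(rate*t) as n_sub grows."""
    p = rate * t / n_sub
    return sum(1 for _ in range(n_sub) if rng.random() < p)

rng = random.Random(1)
counts = [simulate_counts(5.0, 1.0, 1000, rng) for _ in range(5000)]
mean = sum(counts) / len(counts)   # expect close to rate * t = 5
```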
Accelerated GPU based SPECT Monte Carlo simulations
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99m Tc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency
Fission Matrix Capability for MCNP Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems with poor neutron communication between regions are also slow to converge. The fission matrix method, implemented in MCNP [1], addresses these problems. When a Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization of the kernel in energy, space, and direction; consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source, but also to build a spatially averaged fission kernel. Just as in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
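The power-iteration picture underlying the fission matrix method can be sketched on a small matrix. The 2x2 "fission matrix" below is a made-up illustration, not MCNP output; the point is that repeated application of the kernel converges to the fundamental mode and eigenvalue, slowly when the dominance ratio is near 1:

```python
def power_iteration(F, n_iter=200):
    """Power iteration on a small fission matrix F: repeatedly apply F to a
    normalised source vector. The vector converges to the fundamental mode
    and the per-iteration growth factor to the fundamental eigenvalue
    (k_eff); the error decays like (dominance ratio)^n_iter."""
    n = len(F)
    s = [1.0 / n] * n              # flat initial source, sums to 1
    k = 1.0
    for _ in range(n_iter):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(new)               # s sums to 1, so k is the growth factor
        s = [x / k for x in new]   # renormalise the source
    return k, s

# Hypothetical 2-region fission matrix; eigenvalues are 1.1 and 0.6.
k_eff, source = power_iteration([[0.9, 0.3], [0.2, 0.8]])
```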
Monte Carlo Simulation in Statistical Physics An Introduction
Binder, Kurt
2010-01-01
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics, chemistry and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers classical as well as quantum Monte Carlo methods. Furthermore, a new chapter on the sampling of free-energy landscapes has been added. To help students in their work, a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...
An overview of Monte Carlo treatment planning for radiotherapy.
Spezi, Emiliano; Lewis, Geraint
2008-01-01
The implementation of Monte Carlo dose calculation algorithms in clinical radiotherapy treatment planning systems has been anticipated for many years. Despite a continuous increase of interest in Monte Carlo Treatment Planning (MCTP), its introduction into clinical practice has been delayed by the extent of calculation time required. The development of newer and faster MC codes is behind the commercialisation of the first MC-based treatment planning systems. The intended scope of this article is to provide the reader with a compact 'primer' on different approaches to MCTP with particular attention to the latest developments in the field.
Monte Carlo radiation transport in external beam radiotherapy
Çeçen, Yiğit
2013-01-01
The use of Monte Carlo in radiation transport is an effective way to predict absorbed dose distributions. Monte Carlo modeling has contributed to a better understanding of photon and electron transport by radiotherapy physicists. The aim of this review is to introduce Monte Carlo as a powerful radiation transport tool. In this review, photon and electron transport algorithms for Monte Carlo techniques are investigated and a clinical linear accelerator model is studied for external beam radiot...
Energy Technology Data Exchange (ETDEWEB)
Fonseca, T.C.F.; Bastos, F.M.; Figueiredo, M.T.T.; Souza, L.S.; Guimaraes, M.C.; Silva, C.R.E.; Mello, O.A.; Castelo e Silva, L.A.; Paixao, L., E-mail: tcff01@gmail.com [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Benavente, J.A.; Paiva, F.G. [Universidade Federal de Minas Gerais (PCTN/UFMG), Belo Horizonte, MG (Brazil). Curso de Pos-Graduacao em Ciencias e Tecnicas Nucleares
2015-07-01
Computational Monte Carlo (MC) codes have been used for the simulation of nuclear installations, mainly for internal monitoring of workers with the well-known Whole Body Counters (WBC). The main goal of this project was the modeling and simulation of the counting efficiency (CE) of a WBC system using three different MC codes: MCNPX, EGSnrc and VMC in-vivo. The simulations were performed by three different groups of analysts. The results showed differences between the three codes, as well as between results obtained with the same code when modeled by different analysts. Moreover, all the results were compared to the experimental results obtained in the laboratory for validation and final comparison. In conclusion, it was possible to detect the influence on the results when the system is modeled by different analysts using the same MC code, and to determine which MC code best matched the experimental data. (author)
LCG MCDB - a Knowledgebase of Monte Carlo Simulated Events
Belov, S; Galkin, E; Gusev, A; Pokorski, Witold; Sherstnev, A V
2008-01-01
In this paper we report on the LCG Monte Carlo Data Base (MCDB) and the software which has been developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, modern Monte Carlo simulation of physical processes requires expert knowledge of Monte Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase intended mainly to accumulate simulated events of this type. The main motivation behind LCG MCDB is to make sophisticated MC event samples available to various physics groups. All the data in MCDB is accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.
VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING
Directory of Open Access Journals (Sweden)
Kartik Dwivedi
2013-12-01
Full Text Available In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of its constituent parts. The proposed approach combines local information of individual body parts and other spatial constraints influenced by neighboring parts. The movement of the relative parts of the articulated body is modeled with local displacement information from the Markov network and global information from other neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked, number of Monte Carlo cycles, etc.) on system accuracy and show that our variational Monte Carlo approach achieves better efficiency and effectiveness compared to other methods on a number of real-time video datasets containing single targets.
Introduction to the variational and diffusion Monte Carlo methods
Toulouse, Julien; Umrigar, C J
2015-01-01
We provide a pedagogical introduction to the two main variants of real-space quantum Monte Carlo methods for electronic-structure calculations: variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC). Assuming no prior knowledge on the subject, we review in depth the Metropolis-Hastings algorithm used in VMC for sampling the square of an approximate wave function, discussing details important for applications to electronic systems. We also review in detail the more sophisticated DMC algorithm within the fixed-node approximation, introduced to avoid the infamous Fermionic sign problem, which allows one to sample a more accurate approximation to the ground-state wave function. Throughout this review, we discuss the statistical methods used for evaluating expectation values and statistical uncertainties. In particular, we show how to estimate nonlinear functions of expectation values and their statistical uncertainties.
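The VMC workflow the review describes, Metropolis sampling of the square of a trial wave function followed by averaging the local energy, can be sketched for the textbook 1-D harmonic oscillator (H = -½ d²/dx² + ½ x², ħ = m = 1), where the local energy for the trial function psi(x) = exp(-alpha x²) works out to E_L(x) = alpha + x²(½ - 2 alpha²). All parameter values below are illustrative:

```python
import math
import random

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=0):
    """Variational MC for the 1-D harmonic oscillator with trial
    psi(x) = exp(-alpha x^2): Metropolis samples |psi|^2, then the
    local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2) is averaged
    along the walk."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)
        # accept with min(1, |psi(y)|^2 / |psi(x)|^2)
        if rng.random() < math.exp(min(0.0, -2.0 * alpha * (y * y - x * x))):
            x = y
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps
```

At alpha = 0.5 the trial function is exact, the local energy is constant (zero-variance principle), and the estimate is exactly 0.5; any other alpha gives a higher energy, illustrating the variational bound.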
Energy Technology Data Exchange (ETDEWEB)
Lee, Seung Kyu; Seo, Hee; Won, Byung Hee; Lee, Hyun Su; Park, Se-Hwan; Kim, Ho-Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
The XRF technique compares the measured pulse heights of U and Pu peaks, which are self-induced characteristic x-rays emitted from U and Pu, to quantify the elemental U and Pu. The measurement of the U and Pu x-ray peak ratio provides information on the relative concentration of the U and Pu elements. Photon measurements of spent nuclear fuel using high-resolution spectrometers show a large background continuum in the low-energy x-ray region, in large part from Compton scattering of energetic gamma-rays. The high Compton continuum can make measurements of plutonium x-rays difficult because of the relatively small signal-to-background ratio produced. In pressurized water reactor (PWR) spent fuels with low plutonium contents (~1%), the signal-to-background ratio may be too low to obtain an accurate plutonium x-ray measurement. A Compton suppression system has been proposed to reduce the Compton continuum background. In the present study, the feasibility of a Compton suppression system for XRF was evaluated by Monte Carlo (MCNP) simulations and measurements of a radiation source. Experiments using a standard gamma-ray source showed that the peak-to-total ratios were improved by a factor of three when the Compton suppression system was used.
Fast and accurate Monte Carlo-based system response modeling for a digital whole-body PET
Sun, Xiangyu; Li, Yanzhao; Yang, Lingli; Wang, Shuai; Zhang, Bo; Xiao, Peng; Xie, Qingguo
2017-03-01
Recently, we have developed a digital whole-body PET scanner based on multi-voltage threshold (MVT) digitizers. To mitigate the impact of resolution-degrading factors, an accurate system response is calculated by Monte Carlo simulation, which is computationally expensive. To address the problem, here we improve the method of using symmetries by simulating an axial wedge region. This approach takes full advantage of the intrinsic symmetries in the cylindrical PET system without significantly increasing the computational cost of processing the symmetries. A total of 4224 symmetries are exploited. It took 17 days to generate the system matrix on 160 cores of Xeon 2.5 GHz CPUs. Both simulation and experimental data are used to evaluate the accuracy of the system response modeling. The simulation studies show the full-width-at-half-maximum of a line source to be 2.1 mm at the center of the FOV and 3.8 mm at 200 mm from the center. Experimental results show the 2.4 mm rods in the Derenzo phantom image, which can be well distinguished.
Ahmad, Munir; Deng, Jun; Lund, Molly W.; Chen, Zhe; Kimmett, James; Moran, Meena S.; Nath, Ravinder
2009-01-01
The goal of this work is to present a systematic Monte Carlo validation study on the clinical implementation of the enhanced dynamic wedges (EDWs) into the Pinnacle3 (Philips Medical Systems, Fitchburg, WI) treatment planning system (TPS) and QA procedures for patient plan verification treated with EDWs. Modeling of EDW beams in the Pinnacle3 TPS, which employs a collapsed-cone convolution superposition (CCCS) dose model, was based on a combination of measured open-beam data and the 'Golden Segmented Treatment Table' (GSTT) provided by Varian for each photon beam energy. To validate EDW models, dose profiles of 6 and 10 MV photon beams from a Clinac 2100 C/D were measured in virtual water at depths from near-surface to 30 cm for a wide range of field sizes and wedge angles using the Profiler 2 (Sun Nuclear Corporation, Melbourne, FL) diode array system. The EDW output factors (EDWOFs) for square fields from 4 to 20 cm wide were measured in virtual water using a small-volume Farmer-type ionization chamber placed at a depth of 10 cm on the central axis. Furthermore, the 6 and 10 MV photon beams emerging from the treatment head of Clinac 2100 C/D were fully modeled and the central-axis depth doses, the off-axis dose profiles and the output factors in water for open and dynamically wedged fields were calculated using the Monte Carlo (MC) package EGS4. Our results have shown that (1) both the central-axis depth doses and the off-axis dose profiles of various EDWs computed with the CCCS dose model and MC simulations showed good agreement with the measurements to within 2%/2 mm; (2) measured EDWOFs used for monitor-unit calculation in Pinnacle3 TPS agreed well with the CCCS and MC predictions within 2%; (3) all the EDW fields satisfied our validation criteria of 1% relative dose difference and 2 mm distance-to-agreement (DTA) with 99-100% passing rate in routine patient treatment plan verification using MapCheck 2D diode array.
Kinetic Monte Carlo method applied to nucleic acid hairpin folding.
Sauerwine, Ben; Widom, Michael
2011-12-01
Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
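A rejection-free kinetic Monte Carlo step with Arrhenius rates, of the general kind the abstract builds on, can be sketched as follows. The two-state "hairpin" and its barrier values are toy numbers for illustration, not the paper's model or its intermediate-energy scheme:

```python
import math
import random

def arrhenius(prefactor, barrier, kT):
    """Arrhenius rate for crossing an energy barrier."""
    return prefactor * math.exp(-barrier / kT)

def kmc_path(rates_from, state0, n_events, rng):
    """Rejection-free kinetic Monte Carlo: pick the next event with
    probability proportional to its rate, and advance the clock by an
    exponential waiting time with mean 1 / (total rate)."""
    t, state, path = 0.0, state0, [state0]
    for _ in range(n_events):
        moves = rates_from(state)               # list of (next_state, rate)
        total = sum(r for _, r in moves)
        t += -math.log(1.0 - rng.random()) / total
        u, acc = rng.random() * total, 0.0
        for nxt, r in moves:
            acc += r
            if u < acc:
                break
        state = nxt
        path.append(state)
    return t, path

# Toy two-state hairpin: the folding barrier is lower than unfolding.
rng = random.Random(7)
rates = {"unfolded": [("folded", arrhenius(1.0, 1.0, 0.5))],
         "folded":   [("unfolded", arrhenius(1.0, 2.0, 0.5))]}
t_final, path = kmc_path(lambda s: rates[s], "unfolded", 100, rng)
```

Because only rates between coarse-grained states enter, long waiting times cost nothing to simulate, which is the long-time-scale advantage the abstract describes; the paper's contribution is how those rates should account for unsimulated intermediate barriers.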
Energy Technology Data Exchange (ETDEWEB)
Johannesson, G; Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G
2005-02-07
Estimating unknown system configurations/parameters by combining system knowledge gained from a computer simulation model on the one hand and from observed data on the other is challenging. An example of such an inverse problem is detecting and localizing potential flaws or changes in a structure by using a finite-element model and measured vibration/displacement data. We propose a probabilistic approach based on Bayesian methodology. This approach yields not only a single best-guess solution, but a posterior probability distribution over the parameter space. In addition, the Bayesian approach provides a natural framework to accommodate prior knowledge. A Markov chain Monte Carlo (MCMC) procedure is proposed to generate samples from the posterior distribution (an ensemble of likely system configurations given the data). The MCMC procedure proposed explores the parameter space at different resolutions (scales), resulting in a more robust and efficient procedure. The large-scale exploration steps are carried out using coarser-resolution finite-element models, yielding a considerable decrease in computational time, which can be crucial for large finite-element models. An application is given using synthetic displacement data from a simple cantilever beam, with MCMC exploration carried out at three different resolutions.
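The coarse-to-fine idea can be illustrated with a standard delayed-acceptance Metropolis step, in which a cheap coarse posterior screens proposals before the expensive fine posterior is ever evaluated. This is a sketch of the multiresolution principle only; the paper's actual MCMC procedure differs in detail, and the Gaussian toy posteriors below are illustrative stand-ins for coarse and fine finite-element likelihoods:

```python
import math
import random

def da_step(x, log_post_coarse, log_post_fine, rng, step=1.0):
    """One delayed-acceptance Metropolis step: stage 1 screens the
    proposal with the cheap coarse model; stage 2 corrects with the
    expensive fine model so the fine posterior is the invariant
    distribution."""
    y = x + rng.gauss(0.0, step)
    # Stage 1: cheap rejection using the coarse model only.
    if rng.random() >= math.exp(min(0.0, log_post_coarse(y) - log_post_coarse(x))):
        return x
    # Stage 2: correction so the fine posterior is targeted exactly.
    log_a2 = (log_post_fine(y) - log_post_fine(x)
              + log_post_coarse(x) - log_post_coarse(y))
    return y if rng.random() < math.exp(min(0.0, log_a2)) else x

# Toy target: fine posterior N(0,1), coarse approximation N(0, 1.5^2).
rng = random.Random(3)
x, xs = 0.0, []
for _ in range(30000):
    x = da_step(x, lambda z: -z * z / 4.5, lambda z: -z * z / 2.0, rng)
    xs.append(x)
mean = sum(xs) / len(xs)
var = sum(v * v for v in xs) / len(xs) - mean * mean
```

The savings come from stage 1: fine-model evaluations are skipped for every proposal the coarse model rejects, mirroring the cheaper coarse-mesh exploration steps described in the abstract.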
Vitali, Ettore; Shi, Hao; Qin, Mingpu; Zhang, Shiwei
2016-08-01
We address the calculation of dynamical correlation functions for many-fermion systems at zero temperature, using the auxiliary-field quantum Monte Carlo method. The two-dimensional Hubbard Hamiltonian is used as a model system. Although most of the calculations performed here are for cases where the sign problem is absent, the discussions are kept general for applications to physical problems when the sign problem does arise. We study the use of twisted boundary conditions to improve the extrapolation of the results to the thermodynamic limit. A strategy is proposed to drastically reduce finite-size effects, relying on a minimization among the twist angles. This approach is demonstrated by computing the charge gap at half filling. We obtain accurate results showing the scaling of the gap with the interaction strength U in two dimensions, connecting to the scaling of the unrestricted Hartree-Fock method at small U and the Bethe ansatz exact result in one dimension at large U. An alternative algorithm is then proposed to compute dynamical Green's functions and correlation functions which explicitly varies the number of particles during the random walks in the manifold of Slater determinants. In dilute systems, such as ultracold Fermi gases, this algorithm enables calculations with much more favorable complexity, with computational cost proportional to basis size or the number of lattice sites.
Directory of Open Access Journals (Sweden)
Khan Hamda
2017-01-01
This paper carries out a Monte Carlo simulation of a landmine detection system, using the MCNP5 code, for the detection of concealed explosives such as trinitrotoluene and cyclonite. In portable field detectors, the signal strength of backscattered neutrons and gamma rays from thermal neutron activation is sensitive to a number of parameters such as the mass of explosive, depth of concealment, neutron moderation, background soil composition, soil porosity, soil moisture, multiple scattering in the background material, and configuration of the detection system. In this work, a detection system, with BF3 detectors for neutrons and a sodium iodide scintillator for γ-rays, is modeled to investigate the neutron signal-to-noise ratio and to obtain an empirical formula for the photon production rate R_i(n,γ) = S_f G_f M_f(d,m) from radiative capture reactions in constituent nuclides of trinitrotoluene. This formula can be used for the efficient detection of explosives in quantities as small as ~200 g of trinitrotoluene concealed at depths down to about 15 cm. The empirical formula can be embedded in a field-programmable gate array on a field-portable explosives sensor for efficient online detection.
The application of the Monte-Carlo neutron transport code MCNP to a small "nuclear battery" system
Puigdellívol Sadurní, Roger
2009-01-01
The project consists of calculating keff for a small nuclear battery, using the Monte Carlo neutron transport code MCNP. The calculations are done at the beginning of life to determine the capacity of the core to become critical under different conditions. These conditions are the study parameters that determine the criticality of the core: the uranium enrichment, the coated-particle (TRISO) packing factor, and the size of the core.
Monte Carlo study of real time dynamics
Alexandru, Andrei; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C
2016-01-01
Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from the highly oscillatory phase of the path integral. In this letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and in principle applicable to quantum field theory, albeit very slow. We discuss some possible improvements that should speed up the algorithm.
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-05
Multilevel Monte-Carlo methods provide a powerful computational technique for improving the accuracy of estimated expectations at a given computational effort. They are particularly relevant for computational problems where approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even though samples at all resolutions are now correlated.
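The multilevel idea can be sketched with a standard toy problem: estimating E[S(1)] for geometric Brownian motion with Euler steps of size h_l = 2^-l, coupling each correction term through shared Brownian increments. The sample counts per level are illustrative, not the optimized allocation the theory prescribes.

```python
import math
import random

# dS = mu*S dt + sigma*S dW; exact E[S(1)] = exp(mu) = exp(0.05)
MU, SIG, S0 = 0.05, 0.2, 1.0
rng = random.Random(2)

def level_sample(level):
    """One coupled draw of P_l - P_{l-1} (or P_0 at the base level)."""
    n = 2 ** level
    h = 1.0 / n
    sf = sc = S0
    for _ in range(n // 2 if level > 0 else n):
        if level == 0:
            dw = math.sqrt(h) * rng.gauss(0, 1)
            sf += MU * sf * h + SIG * sf * dw
        else:
            dw1 = math.sqrt(h) * rng.gauss(0, 1)
            dw2 = math.sqrt(h) * rng.gauss(0, 1)
            sf += MU * sf * h + SIG * sf * dw1                 # two fine steps
            sf += MU * sf * h + SIG * sf * dw2
            sc += MU * sc * (2 * h) + SIG * sc * (dw1 + dw2)   # one coarse step
    return sf - (sc if level > 0 else 0.0)

# telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
samples_per_level = [4000, 1000, 250]
estimate = sum(sum(level_sample(l) for _ in range(m)) / m
               for l, m in enumerate(samples_per_level))
```

Because the coupled differences have small variance, far fewer samples are needed at the expensive fine levels than at the cheap coarse level.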
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the value of validation by Kovats retention indices and the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
Composite biasing in Monte Carlo radiative transfer
Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf
2016-01-01
Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
Monte Carlo simulations on SIMD computer architectures
Energy Technology Data Exchange (ETDEWEB)
Burmester, C.P.; Gronsky, R. [Lawrence Berkeley Lab., CA (United States); Wille, L.T. [Florida Atlantic Univ., Boca Raton, FL (United States). Dept. of Physics
1992-03-01
Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest, next nearest, and long range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
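Importance sampling, one of the two tools the book presents, can be shown on a textbook rare event: estimating p = P(Z > 5) for Z ~ N(0,1). Naive sampling would need roughly 1/p ≈ 3.5 million draws per hit; shifting the sampling density to N(5,1) makes hits common, and each hit is reweighted by the likelihood ratio φ(x)/φ(x-5) = exp(-5x + 12.5). This is a generic sketch, not code from the book.

```python
import math
import random

rng = random.Random(3)
THRESH, N = 5.0, 100_000

total = 0.0
for _ in range(N):
    x = THRESH + rng.gauss(0, 1)            # draw from the shifted density
    if x > THRESH:
        total += math.exp(-THRESH * x + THRESH ** 2 / 2)   # likelihood ratio
p_hat = total / N

exact = 0.5 * math.erfc(THRESH / math.sqrt(2))   # true tail probability
```

With 10^5 shifted draws the relative error is well under a percent, whereas naive sampling of the same size would typically see zero hits.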
Energy Technology Data Exchange (ETDEWEB)
Kitazaki, Tamotsu; Kato, Tomohiko, E-mail: katou@fit.ac.jp
2014-03-15
Random magnets generally exhibit more or less gradual phase transitions. The origin of this phenomenon has long been controversial: is it intrinsic to disordered magnets, or a non-equilibrium effect due to finite observation time? We support the latter, although clear evidence has been lacking both experimentally and theoretically. Using a dynamic Monte Carlo simulation, we show that the apparent phase-transition behavior of a simple random magnetic system depends on the observation time. The target of the simulation is experiments on the line width of NMR spin-echo spectra, a type of order parameter, in MnxCd1−x(HCOO)2·2(NH2)2CO. The calculated results indicate that, as the averaging time becomes shorter, the phase transition becomes more gradual. This tendency is most pronounced around the percolation concentration. The calculated results coincide well with the characteristic features of the experimental results. This coincidence supports the view that the smearing of the order parameter is a non-equilibrium effect, although the Ising model employed in the simulation differs from the Heisenberg system of the target substance.
Broecker, Peter; Trebst, Simon
2016-12-01
In the absence of a fermion sign problem, auxiliary-field (or determinantal) quantum Monte Carlo (DQMC) approaches have long been the numerical method of choice for unbiased, large-scale simulations of interacting many-fermion systems. More recently, the conceptual scope of this approach has been expanded by introducing ingenious schemes to compute entanglement entropies within its framework. On a practical level, these approaches, however, suffer from a variety of numerical instabilities that have largely impeded their applicability. Here we report on a number of algorithmic advances to overcome many of these numerical instabilities and significantly improve the calculation of entanglement measures in the zero-temperature projective DQMC approach, ultimately allowing us to reach similar system sizes as for the computation of conventional observables. We demonstrate the applicability of this improved DQMC approach by providing an entanglement perspective on the quantum phase transition from a magnetically ordered Mott insulator to a band insulator in the bilayer square lattice Hubbard model at half filling.
Novel Quantum Monte Carlo Approaches for Quantum Liquids
Rubenstein, Brenda M.
Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures. While
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis) as well as laterally asymmetric features (e.g. melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
Accelerated Monte Carlo by Embedded Cluster Dynamics
Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.
1991-07-01
We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and of Wolff are summarized, and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.
Paudel, Moti R; Kim, Anthony; Sarfehnia, Arman; Ahmad, Sayed B; Beachey, David J; Sahgal, Arjun; Keller, Brian M
2016-11-01
A new GPU-based Monte Carlo dose calculation algorithm (GPUMCD), developed by the vendor Elekta for the Monaco treatment planning system (TPS), is capable of modeling dose for both a standard linear accelerator and an Elekta MRI linear accelerator. We have experimentally evaluated this algorithm for a standard Elekta Agility linear accelerator. A beam model was developed in the Monaco TPS (research version 5.09.06) using the commissioned beam data for a 6 MV Agility linac. A heterogeneous phantom representing several scenarios - tumor-in-lung, lung, and bone-in-tissue - was designed and built. Dose calculations in Monaco were done using both the current clinical Monte Carlo algorithm, XVMC, and the new GPUMCD algorithm. Dose calculations in a Pinnacle TPS were also produced using the collapsed cone convolution (CCC) algorithm with heterogeneity correction. Calculations were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm2, 5×5 cm2, and 10×2 cm2 field sizes. The percentage depth doses (PDDs) calculated by XVMC and GPUMCD in a homogeneous solid water phantom were within 2%/2 mm of film measurements and within 1% of ion chamber measurements. For the tumor-in-lung phantom, the calculated doses were within 2.5%/2.5 mm of film measurements for GPUMCD. For the lung phantom, doses calculated by all of the algorithms were within 3%/3 mm of film measurements, except for the 2×2 cm2 field size where the CCC algorithm underestimated the depth dose by ∼5% in a larger extent of the lung region. For the bone phantom, all of the algorithms were equivalent and calculated dose to within 2%/2 mm of film measurements, except at the interfaces. Both GPUMCD and XVMC showed interface effects, which were more pronounced for GPUMCD and were comparable to film measurements, whereas the CCC algorithm showed these effects poorly. PACS number(s): 87.53.Bn, 87.55.dh, 87.55.km. © 2016 The Authors.
Auxiliary-field quantum Monte Carlo methods in nuclei
Alhassid, Y
2016-01-01
Auxiliary-field quantum Monte Carlo methods enable the calculation of thermal and ground state properties of correlated quantum many-body systems in model spaces that are many orders of magnitude larger than those that can be treated by conventional diagonalization methods. We review recent developments and applications of these methods in nuclei using the framework of the configuration-interaction shell model.
The Metropolis Monte Carlo Method in Statistical Physics
Landau, David P.
2003-11-01
A brief overview is given of some of the advances in statistical physics that have been made using the Metropolis Monte Carlo method. By complementing theory and experiment, these have increased our understanding of phase transitions and other phenomena in condensed matter systems. A brief description of a new method, commonly known as "Wang-Landau sampling," will also be presented.
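The Metropolis recipe this overview surveys can be stated concretely for the 2D Ising model; the following is a generic toy sketch (tiny lattice, illustrative temperature), not code from the article. At T = 1.5, below the critical temperature T_c ≈ 2.269, the magnetization should stay near 1.

```python
import math
import random

L, T = 8, 1.5
rng = random.Random(4)
spin = [[1] * L for _ in range(L)]          # ordered start, J = 1, periodic

def neighbors_sum(i, j):
    return (spin[(i + 1) % L][j] + spin[(i - 1) % L][j] +
            spin[i][(j + 1) % L] + spin[i][(j - 1) % L])

def sweep():
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        dE = 2 * spin[i][j] * neighbors_sum(i, j)   # energy cost of a flip
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spin[i][j] = -spin[i][j]                # Metropolis acceptance

for _ in range(200):
    sweep()
m = abs(sum(sum(row) for row in spin)) / (L * L)    # |magnetization| per spin
```

The accept/reject rule guarantees detailed balance with respect to the Boltzmann distribution, which is the heart of the Metropolis method.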
Institute of Scientific and Technical Information of China (English)
WANG AiMin; PANG Hua
2009-01-01
The magnetic anisotropy field in thin films with in-plane uniaxial anisotropy can be deduced from the VSM magnetization curves measured in magnetic fields of constant magnitudes. This offers a new possibility of applying rotational magnetization curves to determine the first- and second-order anisotropy constants in these films. In this paper we report a theoretical derivation of the rotational magnetization curve in a hexagonal crystal system with easy-plane anisotropy based on the principle of minimum total energy. This model is applied to calculate and analyze the rotational magnetization process for magnetic spherical particles with hexagonal easy-plane anisotropy when rotating the external magnetic field in the basal plane. The theoretical calculations are consistent with Monte Carlo simulation results. It is found that to reproduce experimental curves well, the effect of coercive force on the magnetization reversal process should be fully considered when the intensity of the external field is much weaker than that of the anisotropy field. Our research proves that the rotational magnetization curve from VSM measurement provides effective access to analyze the in-plane anisotropy constant K3 in hexagonal compounds, and the suitable experimental condition to measure K3 is met when the ratio of the magnitude of the external field to that of the anisotropy field is around 0.2.
Institute of Scientific and Technical Information of China (English)
Wei-Neng Chen; Jun Zhang
2012-01-01
Project scheduling under uncertainty is a challenging field of research that has attracted increasing attention. While most existing studies only consider the single-mode project scheduling problem under uncertainty, this paper aims to deal with a more realistic model called the stochastic multi-mode resource-constrained project scheduling problem with discounted cash flows (S-MRCPSPDCF). In the model, activity durations and costs are given by random variables. The objective is to find an optimal baseline schedule so that the expected net present value (NPV) of cash flows is maximized. To solve the problem, an ant colony system (ACS) based approach is designed. The algorithm dispatches a group of ants to build baseline schedules iteratively using pheromones and an expected discounted cost (EDC) heuristic. Since it is impossible to evaluate the expected NPV directly due to the presence of random variables, the algorithm adopts the Monte Carlo (MC) simulation technique. As the ACS algorithm only uses the best-so-far solution to update pheromone values, it is found that a rough simulation with a small number of random scenarios is enough for evaluation. Thus the computational cost is reduced. Experimental results on 33 instances demonstrate the effectiveness of the proposed model and the ACS approach.
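The Monte Carlo evaluation step can be sketched for a fixed baseline schedule: sample random activity durations, discount each cash flow to time zero, and average. The three-activity serial project, exponential durations, discount rate, and cash flows below are invented for illustration only.

```python
import math
import random

rng = random.Random(5)
RATE = 0.01                      # per-period discount rate, assumed
ACTIVITIES = [                   # (mean duration, cash flow on completion)
    (4.0, -100.0),
    (6.0, -50.0),
    (5.0, 300.0),                # payment when the project finishes
]

def sample_npv():
    t, npv = 0.0, 0.0
    for mean_dur, cash in ACTIVITIES:
        t += rng.expovariate(1.0 / mean_dur)   # random activity duration
        npv += cash * math.exp(-RATE * t)      # discount to time zero
    return npv

N = 20_000
expected_npv = sum(sample_npv() for _ in range(N)) / N
```

As the abstract notes, a rough estimate from far fewer scenarios is often enough to compare candidate schedules inside the ACS loop.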
Directory of Open Access Journals (Sweden)
Christelle Garnier
2008-05-01
We address the problem of phase noise (PHN) and carrier frequency offset (CFO) mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE) and the intercarrier interference (ICI), which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and the multicarrier signal in the time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering, commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER) and mean square error (MSE) to illustrate the efficiency and the robustness of the proposed algorithm.
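The particle-filtering machinery can be illustrated on a toy scalar model: a slowly drifting phase observed in noise (x_t = x_{t-1} + w_t, y_t = x_t + v_t). This is a plain bootstrap filter with invented parameters, not the Rao-Blackwellized scheme of the paper.

```python
import math
import random

rng = random.Random(6)
Q, R, T, NP = 0.01, 0.1, 50, 500   # process/obs noise variances, steps, particles

# simulate a "true" trajectory and its noisy observations
x, truth, obs = 0.0, [], []
for _ in range(T):
    x += math.sqrt(Q) * rng.gauss(0, 1)
    truth.append(x)
    obs.append(x + math.sqrt(R) * rng.gauss(0, 1))

particles = [0.0] * NP
estimates = []
for y in obs:
    # propagate through the dynamics, then weight by the likelihood of y
    particles = [p + math.sqrt(Q) * rng.gauss(0, 1) for p in particles]
    weights = [math.exp(-(y - p) ** 2 / (2 * R)) for p in particles]
    wsum = sum(weights)
    estimates.append(sum(w * p for w, p in zip(weights, particles)) / wsum)
    # multinomial resampling avoids weight degeneracy
    particles = rng.choices(particles, weights=weights, k=NP)

rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth)) / T)
```

The filtered RMSE should land well below the raw observation noise level of sqrt(R) ≈ 0.32, reflecting the gain from sequential Bayesian updating.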
On a full Monte Carlo approach to quantum mechanics
Sellier, J. M.; Dimov, I.
2016-12-01
The Monte Carlo approach to numerical problems has been shown to be remarkably efficient in performing very large computational tasks since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to keep performance and accuracy with the increase of dimensionality of a given problem, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these very peculiar and desirable computational features, in this work we depict a full Monte Carlo approach to the problem of simulating single- and many-body quantum systems by means of signed particles. In particular, we introduce a stochastic technique, based on the strategy known as importance sampling, for the computation of the Wigner kernel which, so far, has represented the main bottleneck of this method (it is equivalent to the calculation of a multi-dimensional integral, a problem whose complexity grows exponentially with the dimensions of the problem). The benefit of this stochastic technique for the kernel is twofold: first, it reduces the complexity of a quantum many-body simulation from non-linear to linear; second, it introduces an embarrassingly parallel approach to this very demanding problem. To conclude, we perform concise but indicative numerical experiments which clearly illustrate how a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principle simulations of relatively large quantum systems by means of affordable computational resources.
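The dimensionality point can be illustrated with plain Monte Carlo integration of f(x) = exp(-|x|²) over the 10-dimensional unit cube, chosen so the exact value factorizes and can be checked. A tensor-product quadrature with n points per axis would need n^10 evaluations; the Monte Carlo error decays as N^(-1/2) regardless of dimension. This is a generic sketch, unrelated to the Wigner kernel itself.

```python
import math
import random

rng = random.Random(7)
D, N = 10, 100_000

def f(x):
    return math.exp(-sum(xi * xi for xi in x))

estimate = sum(f([rng.random() for _ in range(D)]) for _ in range(N)) / N

exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** D   # product of 1D integrals
```

Each sample is independent, so the loop parallelizes trivially across workers, which is the "embarrassingly parallel" property the abstract relies on.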
Diffusion Monte Carlo in internal coordinates.
Petit, Andrew S; McCoy, Anne B
2013-08-15
An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H3+ and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for the fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H2D+ and D2H+ despite both molecules being highly fluxional.
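The underlying DMC machinery can be sketched in its simplest Cartesian, unguided form for the 1D harmonic oscillator (hbar = m = omega = 1, exact ground-state energy 0.5): walkers diffuse and branch with weight exp(-(V - E_ref)·dt), and population control feeds back into the reference energy. All parameters are illustrative; the paper's internal-coordinate extension replaces the Cartesian diffusion step.

```python
import math
import random

rng = random.Random(8)
DT, N_TARGET, STEPS = 0.01, 500, 3000

def V(x):
    return 0.5 * x * x

walkers = [rng.uniform(-1.0, 1.0) for _ in range(N_TARGET)]
e_ref, e_trace = 0.5, []
for step in range(STEPS):
    new = []
    for x in walkers:
        x += math.sqrt(DT) * rng.gauss(0, 1)       # free diffusion move
        w = math.exp(-(V(x) - e_ref) * DT)         # branching weight
        new.extend([x] * int(w + rng.random()))    # stochastic rounding
    walkers = new
    # steer the population back toward the target size
    e_ref += 0.1 * math.log(N_TARGET / max(len(walkers), 1))
    if step >= 1000:
        e_trace.append(e_ref)

e0 = sum(e_trace) / len(e_trace)    # growth estimate of the ground-state energy
```

The stationary walker density samples the ground-state wavefunction, and the averaged reference energy converges to E_0 up to a small time-step bias.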
A separable shadow Hamiltonian hybrid Monte Carlo method.
Sweet, Christopher R; Hampton, Scott S; Skeel, Robert D; Izaguirre, Jesús A
2009-11-07
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
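The accept/reject structure that S2HMC refines can be shown on the simplest possible target, a 1D standard normal with U(q) = q²/2: a leapfrog trajectory followed by a Metropolis test on the total energy. Step size and trajectory length here are illustrative; this is plain HMC, not the shadow-Hamiltonian variant.

```python
import math
import random

rng = random.Random(9)

def grad_U(q):
    return q

def leapfrog(q, p, eps=0.2, n=10):
    p -= 0.5 * eps * grad_U(q)          # half kick
    for _ in range(n - 1):
        q += eps * p                    # drift
        p -= eps * grad_U(q)            # full kick
    q += eps * p
    p -= 0.5 * eps * grad_U(q)          # final half kick
    return q, p

q, samples, accepted = 0.0, [], 0
for _ in range(5000):
    p = rng.gauss(0, 1)                                    # fresh momentum
    q_new, p_new = leapfrog(q, p)
    dH = (q_new ** 2 + p_new ** 2 - q ** 2 - p ** 2) / 2   # H = U + p^2/2
    if math.log(rng.random()) < -dH:                       # Metropolis correction
        q, accepted = q_new, accepted + 1
    samples.append(q)

accept_rate = accepted / 5000
var = sum(s * s for s in samples) / len(samples)   # should be near 1
```

Because the leapfrog integrator nearly conserves a shadow Hamiltonian, dH stays small and the acceptance rate stays close to one; sampling from the shadow Hamiltonian directly is what SHMC and S2HMC exploit.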
Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.
Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo
2014-10-14
Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain basically depends on the work performed on the system during the nonequilibrium trajectory, and increases as that work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves in which dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with the Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to particles of a well-established region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move, and hence reduces the dissipated work, eventually enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.
Observations on variational and projector Monte Carlo methods.
Umrigar, C J
2015-10-28
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross-section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of a fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Wagener, T.; Pianosi, F.; Woods, R. A.
2016-12-01
The need for quantifying uncertainty in earth system modelling has now been well established on both scientific and policy-making grounds, and there is an urgent need to bring the skills and tools needed for doing so into practice. However, such topics are currently largely confined to specialist graduate courses or to short courses for PhD students. Teaching the advanced skills needed for implementing and using uncertainty analysis is difficult because students feel that it is inaccessible, and it can be boring if presented using frontal teaching in the classroom. While we have made significant advances in sharing teaching material, sometimes even including teaching notes (Wagener et al., 2012, Hydrology and Earth System Sciences), there is a great need to understand how we can bring such advanced topics into the undergraduate (and even graduate) curriculum in an effective manner. We present the results of our efforts to teach Matlab-based tools for uncertainty quantification in earth system modelling in a civil engineering undergraduate course. We use the example of teaching Monte Carlo strategies, the basis for the most widely used uncertainty quantification approaches, through guided group-learning activities in the classroom. We utilize a three-step approach: [1] basic introduction to the problem, [2] guided group-learning to develop a possible solution, [3] comparison of possible solutions with state-of-the-art algorithms across groups. Our initial testing in an undergraduate course suggests that (i) overall, students find a group-learning approach more engaging, (ii) different students take charge of advancing the discussion at different stages or for different problems, and (iii) making appropriate suggestions (as facilitator) to guide the discussion keeps the speed of advancement sufficiently high. We present the approach and our initial results, and suggest how a wider course on earth system modelling could be formulated in this manner.
SU-E-T-493: Accelerated Monte Carlo Methods for Photon Dosimetry Using a Dual-GPU System and CUDA.
Liu, T; Ding, A; Xu, X
2012-06-01
To develop a Graphics Processing Unit (GPU) based Monte Carlo (MC) code that accelerates dose calculations on a dual-GPU system. We simulated a clinical case of prostate cancer treatment. A voxelized abdomen phantom containing 218×126×60 voxels, derived from 120 CT slices, was used, and a GE LightSpeed 16-MDCT scanner was modeled. A CPU version of the MC code was first developed in C++ and tested on an Intel Xeon X5660 2.8 GHz CPU; it was then translated into a GPU version using CUDA C 4.1 and run on a dual NVIDIA Tesla M2090 GPU system. The code featured automatic assignment of simulation tasks to multiple GPUs, as well as accurate calculation of energy- and material-dependent cross-sections. Double-precision floating point format was used for accuracy. Doses to the rectum, prostate, bladder and femoral heads were calculated. When running on a single GPU, the GPU code was found to be 19 times faster than the CPU code and 42 times faster than MCNPX. These speedup factors were doubled on the dual-GPU system. The dose results were benchmarked against MCNPX, and a maximum difference of 1% was observed when the relative error was kept below 0.1%. A GPU-based MC code was thus developed for dose calculations using detailed patient and CT scanner models; both efficiency and accuracy were ensured, and scalability of the code was confirmed on the dual-GPU system. © 2012 American Association of Physicists in Medicine.
Directory of Open Access Journals (Sweden)
Micaeil Mollazadeh
2010-06-01
Introduction: GafChromic EBT films are among the self-developing, modern films commercially available for dosimetric verification of treatment planning systems (TPSs). Their high spatial resolution, low energy dependence and near-tissue equivalence make them suitable for verification of dose distributions in radiation therapy. This study was designed to evaluate dosimetric parameters of the RtDosePlan TPS, such as PDD curves, lateral beam profiles and isodose curves, measured in a water phantom using EBT radiochromic film and EGSnrc Monte Carlo (MC) simulation. Methods and Materials: A Microtek color scanner was used as the film scanning system, and the response in the red color channel was extracted and used for the analyses. A calibration curve was measured using pieces of film irradiated to specific doses. The film was mounted inside the phantom parallel to the beam's central axis and was irradiated in a standard setup (SSD = 80 cm, FS = 10×10 cm2) with a 60Co machine. The BEAMnrc and DOSXYZnrc codes were used to simulate the Co-60 machine and to generate the voxel-based phantom. The phantom's acquired CT data were transferred to the TPS using DICOM files. Results: Distance-To-Agreement (DTA) and Dose Difference (DD) among the TPS predictions, measurements and MC calculations were all within the acceptance criteria (DD = 3%, DTA = 3 mm). Conclusion: This study shows that EBT film is an appropriate tool for verification of 2D dose distributions predicted by a TPS. In addition, it is concluded that MC simulation with the BEAMnrc code has suitable potential for verifying dose distributions.
Directory of Open Access Journals (Sweden)
TEMITOPE RAPHAEL AYODELE
2016-04-01
Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time-consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for small-signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (the Single Machine Infinite Bus system and the IEEE 16-machine, 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their results at different sample sizes with the IDEAL (conventional) values. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated from LHS produce the same result as the IDEAL values starting from a sample size of 100. This shows that about 100 samples of the random variable generated using the LHS method are good enough to produce reasonable results for practical purposes in small-signal stability application. It is also revealed that LHS has the least variance when the experiment is repeated 100 times, which signifies the robustness of LHS over SRS. A 100-point LHS sample produces the same result as the conventional method with a sample size of 50000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
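The stratification idea behind LHS is compact enough to sketch. This is a generic unit-hypercube LHS, not the authors' power-system implementation; the function name is illustrative:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin Hypercube Sampling on the unit hypercube.

    Each marginal is stratified into n_samples equal bins with exactly one
    sample per bin; the bin order is then shuffled independently in every
    dimension. Compared with SRS, this guarantees even marginal coverage,
    which is what drives the variance reduction discussed in the text."""
    rng = rng or np.random.default_rng()
    # One uniform draw inside each of the n_samples strata, per dimension.
    u = rng.random((n_samples, n_dims))
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    # Independently shuffle the stratum order in every dimension.
    for d in range(n_dims):
        rng.shuffle(strata[:, d])
    return strata
```

The returned points can be mapped to arbitrary marginals by applying each variable's inverse CDF column-wise.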
Multiple-time-stepping generalized hybrid Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
CMS Monte Carlo production in the WLCG computing grid
Hernández, J M; Mohapatra, A; Filippis, N D; Weirdt, S D; Hof, C; Wakefield, S; Guan, W; Khomitch, A; Fanfani, A; Evans, D; Flossdorf, A; Maes, J; van Mulders, P; Villella, I; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Caballero, J; Sanches, J A; Kavka, C; Van Lingen, F; Bacchi, W; Codispoti, G; Elmer, P; Eulisse, G; Lazaridis, C; Kalini, S; Sarkar, S; Hammad, G
2008-01-01
Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG).
Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm
Gubernatis, James
2014-03-01
A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little-known theorem says that the solution of any set of o.d.e.'s is given exactly by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real- and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that, as the theorem does not depend on the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2015-01-01
For non-linear systems the estimation of fatigue damage under stochastic loadings can be rather time-consuming. Usually Monte Carlo simulation (MCS) is applied, but the coefficient of variation (COV) can be large if only a small set of simulations can be done due to otherwise excessive CPU time. […] the COV. For a specific example dealing with stresses in a tendon of a tension leg platform, the COV is thereby reduced by a factor of three.
Assaraf, Roland
2014-12-01
We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
Goldman, Saul
1983-10-01
A method we call energy-scaled displacement Monte Carlo (ESDMC), whose purpose is to improve sampling efficiency and thereby speed up convergence rates in Monte Carlo calculations, is presented. The method involves scaling the maximum displacement a particle may make on a trial move to the particle's configurational energy. The scaling is such that, on average, the most stable particles make the smallest moves and the most energetic particles the largest moves. The method is compared to Metropolis Monte Carlo (MMC) and force-bias Monte Carlo (FBMC) by applying all three methods to a dense Lennard-Jones fluid at two temperatures, and to hot ST2 water. The functions monitored as the Markov chains developed were, for the Lennard-Jones case: melting, radial distribution functions, internal energies, and heat capacities. For hot ST2 water, we monitored energies and heat capacities. The results suggest that ESDMC samples configuration space more efficiently than either MMC or FBMC in these systems for the biasing parameters used here. The benefit from using ESDMC seemed greatest for the Lennard-Jones systems.
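The displacement-scaling idea can be illustrated with a toy Lennard-Jones sweep. The linear ramp between d_min and d_max and the once-per-sweep refresh of the per-particle energies used for scaling are assumptions of this sketch, not the paper's exact scaling law or biasing scheme:

```python
import numpy as np

def lj_energy(r2):
    """Lennard-Jones pair energy for squared distances r2 (epsilon = sigma = 1)."""
    inv6 = 1.0 / r2**3
    return 4.0 * (inv6**2 - inv6)

def particle_energy(i, pos, box):
    """Configurational energy of particle i under the minimum-image convention."""
    d = pos - pos[i]
    d -= box * np.round(d / box)
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf          # exclude self-interaction
    return np.sum(lj_energy(r2))

def esdmc_sweep(pos, box, beta, d_min=0.02, d_max=0.3, rng=None):
    """One sweep of energy-scaled displacement Monte Carlo (illustrative sketch):
    each particle's maximum trial displacement ramps linearly from d_min (most
    stable particle) to d_max (most energetic particle)."""
    rng = rng or np.random.default_rng()
    n = len(pos)
    e = np.array([particle_energy(i, pos, box) for i in range(n)])
    lo, hi = e.min(), e.max()   # energy range fixed for the whole sweep
    accepted = 0
    for i in rng.permutation(n):
        if hi == lo:
            frac = 0.5
        else:
            frac = np.clip((e[i] - lo) / (hi - lo), 0.0, 1.0)
        d_i = d_min + frac * (d_max - d_min)
        trial = pos.copy()
        trial[i] = (pos[i] + rng.uniform(-d_i, d_i, 3)) % box
        de = particle_energy(i, trial, box) - e[i]
        # Standard Metropolis acceptance on the energy change.
        if de < 0 or rng.random() < np.exp(-beta * de):
            pos = trial
            e[i] = particle_energy(i, pos, box)
            accepted += 1
    return pos, accepted / n
```

Note that an energy-dependent proposal width formally biases the proposal distribution; the paper's "biasing parameters used here" refer to how this is handled, which this sketch does not reproduce.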
Cully, William P.L.; Cotton, Simon L.; Scanlon, W.G.
2012-01-01
The range of potential applications for indoor and campus based personnel localisation has led researchers to create a wide spectrum of different algorithmic approaches and systems. However, the majority of the proposed systems overlook the unique radio environment presented by the human body
Energy Technology Data Exchange (ETDEWEB)
Moriarty, K.J.M. (Royal Holloway Coll., Englefield Green (UK). Dept. of Mathematics); Blackshaw, J.E. (Floating Point Systems UK Ltd., Bracknell)
1983-04-01
The computer program calculates the average action per plaquette for SU(6)/Z/sub 6/ lattice gauge theory. By considering quantum field theory on a space-time lattice, the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure. Making use of the FPS Mathematics Library (MATHLIB), we are able to generate efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600.
Global Monte Carlo Simulation with High Order Polynomial Expansions
Energy Technology Data Exchange (ETDEWEB)
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-12-13
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high-order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence.
Energy Technology Data Exchange (ETDEWEB)
Hernandez A, P. L.; Medina C, D.; Rodriguez I, J. L.; Salas L, M. A.; Vega C, H. R., E-mail: pabloyae_2@hotmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)
2015-10-15
The problems associated with insecurity and terrorism have forced the design of systems for detecting nuclear materials, drugs and explosives that are installed at roads, ports and airports. Organic materials are composed of C, H, O and N; explosive materials are manufactured similarly but can be distinguished by the concentration of these elements. Their elemental composition, particularly the concentration of hydrogen and oxygen, allows distinguishing them from other organic substances. When these materials are irradiated with neutrons, (n, γ) nuclear reactions are produced, where the emitted photons are prompt gamma rays whose energy is characteristic of each element and whose abundance allows estimating their concentration. The aim of this study was to design, using Monte Carlo methods, a system with a neutron source, gamma-ray detector and moderator able to distinguish the presence of RDX and urea. Paraffin, light water, polyethylene and graphite were considered as moderators; HPGe and NaI(Tl) were considered as detectors. The design that showed the best performance combined the light-water moderator and the HPGe detector with a {sup 241}AmBe source. For this design, the values of the ambient dose equivalent around the system were calculated. (Author)
Hashemi-Nezhad, S R; Westmeier, W; Bamblevski, V P; Krivopustov, M I; Kulakov, B A; Sosnin, A N; Wan, J S; Odoj, R
2001-01-01
The neutron yield in the interaction of protons with lead and uranium targets has been studied using the LAHET code system. The dependence of the neutron multiplicity on target dimensions and proton energy has been calculated, and the dependence of the energy amplification on the proton energy has been investigated in an accelerator-driven system of a given effective multiplication coefficient. Some of the results are compared with experimental findings and with similar calculations by the DCM/CEM code of Dubna and the FLUKA code system used at CERN. (14 refs).
Comparison of scoring systems for invasive pests using ROC analysis and Monte Carlo simulations.
Makowski, David; Mittinty, Murthy Narasimha
2010-06-01
Different international plant protection organisations advocate different schemes for conducting pest risk assessments. Most of these schemes use a structured questionnaire in which experts are asked to score several items using an ordinal scale. The scores are then combined using a range of procedures, such as simple arithmetic means, weighted averages, multiplication of scores, and cumulative sums. The most useful schemes will correctly identify harmful pests and screen out those that are not. As the quality of a pest risk assessment can depend on the characteristics of the scoring system used by the risk assessors (i.e., on the number of points of the scale and on the method used for combining the component scores), it is important to assess and compare the performance of different scoring systems. In this article, we propose a new method for assessing scoring systems. Its principle is to simulate virtual data using a stochastic model and then to estimate sensitivity and specificity values from these data for different scoring systems. The interest of our approach is illustrated in a case study where several scoring systems were compared. Data for this analysis were generated using a probabilistic model describing the pest introduction process. The generated data were then used to simulate the outcome of scoring systems and to assess the accuracy of decisions about positive and negative introductions. The results showed that ordinal scales with at most 5 or 6 points were sufficient and that multiplication-based scoring systems performed better than their sum-based counterparts. The proposed method could be used in the future to assess a great diversity of scoring systems.
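The simulate-then-score evaluation can be sketched with a toy generative model. The score distributions and thresholds below are assumptions for illustration, not the paper's pest-introduction model:

```python
import numpy as np

def sens_spec(decision, truth):
    """Sensitivity and specificity of binary decisions against the truth."""
    tp = np.sum(decision & truth)
    tn = np.sum(~decision & ~truth)
    return tp / truth.sum(), tn / (~truth).sum()

# Toy model: harmful pests skew their ordinal 1-5 item scores high,
# harmless pests skew them low (hypothetical probabilities).
rng = np.random.default_rng(3)
n_pests, n_items = 5000, 4
harmful = rng.random(n_pests) < 0.3
p_harm = [0.05, 0.10, 0.20, 0.30, 0.35]
p_safe = [0.35, 0.30, 0.20, 0.10, 0.05]
levels = np.arange(1, 6)
scores = np.where(
    harmful[:, None],
    rng.choice(levels, size=(n_pests, n_items), p=p_harm),
    rng.choice(levels, size=(n_pests, n_items), p=p_safe),
)

# Sum-based vs multiplication-based combination, each with an ad hoc cutoff.
sens_sum, spec_sum = sens_spec(scores.sum(axis=1) > 12, harmful)
sens_prod, spec_prod = sens_spec(scores.prod(axis=1) > 81, harmful)
```

Sweeping the cutoff and plotting sensitivity against 1 - specificity yields the ROC curves the title refers to.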
Status of Monte-Carlo Event Generators
Energy Technology Data Exchange (ETDEWEB)
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off the initial and final state and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which relates sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
Quantum Monte Carlo for vibrating molecules
Energy Technology Data Exchange (ETDEWEB)
Brown, W.R. [Univ. of California, Berkeley, CA (United States). Chemistry Dept.]|[Lawrence Berkeley National Lab., CA (United States). Chemical Sciences Div.
1996-08-01
Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulates the diffusion and branching processes of the imaginary-time time-dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and was employed for several calculations of the H{sub 2}O and C{sub 3} vibrational states, using 7 PESs, 3 trial wavefunction forms, and two methods of non-linear basis function parameter optimization, on both serial and parallel computers. Different wavefunction forms were required to construct accurate trial wavefunctions for H{sub 2}O and C{sub 3}. For C{sub 3}, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states, and the Monte Carlo data were collected into blocks to stabilize the statistical error estimates. Accurate vibrational state energies were computed using both the serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C{sub 3} PESs suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.
Blind Decoding of Multiple Description Codes over OFDM Systems via Sequential Monte Carlo
Directory of Open Access Journals (Sweden)
Guo Dong
2005-01-01
We consider the problem of transmitting a continuous source through an OFDM system. Multiple description scalar quantization (MDSQ) is applied to the source signal, resulting in two correlated source descriptions. The two descriptions are then OFDM modulated and transmitted through two parallel frequency-selective fading channels. At the receiver, a blind turbo receiver is developed for joint OFDM demodulation and MDSQ decoding. The extrinsic information of the two descriptions is exchanged between the component decoders to improve system performance. A blind soft-input soft-output OFDM detector is developed, based on the techniques of importance sampling and resampling. Such a detector is capable of exchanging the so-called extrinsic information with the other component of the above turbo receiver, successively improving the overall receiver performance. Finally, we also treat channel-coded systems, and a novel blind turbo receiver is developed for joint demodulation, channel decoding, and MDSQ source decoding.
Drug Release from Inert Spherical Matrix Systems Using Monte Carlo Simulations.
Villalobos, Rafael; Garcia, Erika V; Quintanar, David; Young, Paul M
2017-01-01
Computational approaches for predicting release properties from matrix devices have recently been proposed as a way to better understand and predict such systems. The objective of this research is to study the behavior of drug delivery from inert spherical matrix systems of different sizes by means of computer simulation. To simulate the matrix medium, a simple cubic lattice was used, which was sectioned to make a spherical macroscopic system. The sites within the system were randomly occupied by drug particles or excipient particles in accordance with chosen drug/excipient ratios. The drug was then released from the matrix system by simulating a diffusion process. When the release profile was analysed up to 90% release, the Weibull equation suitably expressed the release profiles. On the basis of the analysis of the release equations, it was found that close to the percolation threshold an anomalous release occurs, while in systems with an initial drug load greater than 0.45 the release was of Fickian type. It was also possible to determine the amount of drug trapped in the matrix, which was found to be a function of the initial drug load. The relationship between these two variables was adequately described by a model involving the error function. Based on these results, and by means of a non-linear regression to the previous model, it was possible to determine the drug percolation threshold in these matrix devices. The threshold found is consistent with the value predicted by percolation theory.
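The Weibull analysis of a release profile, F(t) = 1 - exp(-(t/tau)^b), can be reproduced by linearising the law; a minimal sketch, assuming clean fractional-release data and restricting the fit to the sub-90% window used in the text:

```python
import numpy as np

def fit_weibull_release(t, frac):
    """Fit the Weibull release law F(t) = 1 - exp(-(t/tau)**b) by
    linearising ln(-ln(1 - F)) = b*ln(t) - b*ln(tau) and using least
    squares. Returns (b, tau). Only points with 0 < F < 0.9 are used,
    matching the 90%-release window analysed in the abstract."""
    t, frac = np.asarray(t, float), np.asarray(frac, float)
    mask = (frac > 0) & (frac < 0.9)
    x = np.log(t[mask])
    y = np.log(-np.log(1.0 - frac[mask]))
    b, intercept = np.polyfit(x, y, 1)   # slope = b, intercept = -b*ln(tau)
    tau = np.exp(-intercept / b)
    return b, tau
```

An exponent b near 0.65-0.75 in such fits is commonly read as anomalous (non-Fickian) diffusion, which is the regime the abstract reports close to the percolation threshold.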
Micro- and macrophase separation in complex polymer systems : a Monte Carlo study
Huh, J
1998-01-01
Under proper conditions block copolymers or the analogous molecular complexes formed by non-covalent bonding are known to self-assemble spontaneously in the melt to form spatially ordered microscopic structures. In this kind of systems, the phase structures depend on the molecular structures and the
A Monte Carlo algorithm for degenerate plasmas
Energy Technology Data Exchange (ETDEWEB)
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
A note on simultaneous Monte Carlo tests
DEFF Research Database (Denmark)
Hahn, Ute
In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I are considered, which reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and a lower curve for some t in I. A construction of the acceptance region is proposed that complies with a given target level of rejection and yields exact p-values. The construction is based on pointwise quantiles estimated from simulated realizations of X(t) under the null hypothesis.
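The basic ingredient, an acceptance band built from pointwise quantiles of simulated curves, can be sketched as follows. Note this toy uses raw pointwise quantiles; the note's construction additionally calibrates the pointwise level so the global rejection rate matches the target, which this sketch omits. The null model below is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

def envelope_test(observed, simulate, n_sim=999, alpha=0.05):
    """Acceptance band from pointwise quantiles of simulated curves:
    reject if the observed curve leaves the band for any t."""
    sims = np.array([simulate() for _ in range(n_sim)])
    lo = np.quantile(sims, alpha / 2, axis=0)
    hi = np.quantile(sims, 1 - alpha / 2, axis=0)
    outside = (observed < lo) | (observed > hi)
    return bool(outside.any()), lo, hi

def simulate():
    """Hypothetical null model: a Gaussian random walk X(t)."""
    return np.cumsum(rng.normal(size=50))

reject, lo, hi = envelope_test(simulate(), simulate)
```

A curve drawn far outside the null model (e.g. shifted by a large constant) is rejected immediately, while the global type I error of this uncalibrated band is larger than alpha, which is exactly the problem the note addresses.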
Archimedes, the Free Monte Carlo simulator
Sellier, Jean Michel D
2012-01-01
Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004 and since then it has been improved with many new features such as quantum corrections, magnetic fields, new materials, a GUI, etc. This document represents the first attempt at a complete manual. Many of the physics models implemented are described, and a detailed description is presented to enable the user to write his/her own input deck. Please feel free to contact the author if you want to contribute to the project.
Mosaic crystal algorithm for Monte Carlo simulations
Seeger, P A
2002-01-01
An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)
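The kind of expansion mentioned, the Debye integral written as a rapidly converging series of exponential terms, can be illustrated with the standard incomplete-gamma identity: the integral of t^3/(e^t - 1) from 0 to x equals pi^4/15 minus a sum of terms decaying like e^(-kx). This is a textbook expansion given here as an illustration; it is not necessarily the exact form implemented in NISP.

```python
import numpy as np

def debye_integral_series(x, kmax=30):
    """integral_0^x t^3/(e^t - 1) dt as pi^4/15 minus a rapidly
    converging sum of exponential terms (standard expansion)."""
    k = np.arange(1, kmax + 1, dtype=float)
    tail = np.exp(-k * x) * (x**3 / k + 3 * x**2 / k**2
                             + 6 * x / k**3 + 6 / k**4)
    return np.pi**4 / 15 - tail.sum()

def debye_integral_quad(x, n=200_000):
    """Brute-force trapezoidal quadrature for comparison."""
    t = np.linspace(1e-9, x, n)
    f = t**3 / np.expm1(t)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

print(debye_integral_series(2.5), debye_integral_quad(2.5))
```

For moderate x only a handful of exponential terms are needed, which is what makes this form attractive inside an inner simulation loop.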
Diffusion quantum Monte Carlo for molecules
Energy Technology Data Exchange (ETDEWEB)
Lester, W.A. Jr.
1986-07-01
A quantum mechanical Monte Carlo method has been used for the treatment of molecular problems. The imaginary-time Schroedinger equation, written with a shift in the zero of energy (E_T - V(R)), can be interpreted as a generalized diffusion equation with a position-dependent rate or branching term. Since diffusion is the continuum limit of a random walk, one may simulate the Schroedinger equation with a function psi (note: psi itself, not psi^2) as a density of "walks". The walks undergo exponential birth and death as given by the rate term. 16 refs., 2 tabs.
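A minimal sketch of this birth/death random-walk picture, for the 1D harmonic oscillator in atomic units and without importance sampling (all parameter values are illustrative; production DMC codes add drift, accept/reject steps and careful population control):

```python
import numpy as np

rng = np.random.default_rng(2)

def dmc_harmonic(n_walkers=2000, n_steps=2000, dt=0.02, alpha=0.1):
    """Toy diffusion Monte Carlo for V(x) = x^2/2 in atomic units.
    Walkers diffuse; exp(-(V - E_T)*dt) acts as the birth/death rate;
    E_T is steered to keep the walker population stable."""
    x = rng.normal(size=n_walkers)      # start near the ground state
    e_t = 0.5
    trace = []
    for step in range(n_steps):
        x = x + np.sqrt(dt) * rng.normal(size=x.size)   # diffusion step
        w = np.exp(-(0.5 * x**2 - e_t) * dt)            # branching weight
        x = np.repeat(x, (w + rng.uniform(size=x.size)).astype(int))
        # growth estimator with population-control feedback
        e_t = np.mean(0.5 * x**2) + alpha * np.log(n_walkers / x.size)
        if step >= n_steps // 2:
            trace.append(e_t)
    return float(np.mean(trace))

e0 = dmc_harmonic()
print(e0)   # exact ground-state energy is 0.5 in these units
```

The walker density converges to the ground-state wave function psi (not psi^2, exactly as the abstract notes), and the averaged trial energy approaches the exact value 0.5 up to time-step and statistical errors.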
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-24
This presentation covers: (1) exascale computing, the different technologies and how to get there; (2) the high-performance proof-of-concept MCMini, its features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library), its purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods: it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
kmos: A lattice kinetic Monte Carlo framework
Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten
2014-07-01
Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
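The core of any lattice kMC code, choosing the next event with probability proportional to its rate and advancing time by an exponential deviate, can be sketched for a toy adsorption/desorption model. This is generic kMC bookkeeping written in plain Python, not the kmos API, and the rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def kmc_adsorption(n_sites=100, k_ads=1.0, k_des=3.0, t_end=200.0):
    """Toy lattice kMC (variable step size method): independent sites,
    adsorption at rate k_ads on empty sites, desorption at k_des."""
    occ = np.zeros(n_sites, dtype=bool)
    t = 0.0
    t_cov = 0.0                                  # time-integrated coverage
    while t < t_end:
        rates = np.where(occ, k_des, k_ads)      # one event type per site
        r_tot = rates.sum()
        dt = rng.exponential(1.0 / r_tot)        # waiting time to next event
        t_cov += occ.mean() * dt
        site = rng.choice(n_sites, p=rates / r_tot)
        occ[site] = not occ[site]                # execute the chosen event
        t += dt
    return t_cov / t

theta = kmc_adsorption()
print(theta)   # detailed balance predicts k_ads/(k_ads + k_des) = 0.25
```

The time-averaged coverage settles near k_ads/(k_ads + k_des); the efficiency trick the abstract describes is precisely avoiding the O(lattice size) rate rebuild done naively here, by updating only the events affected by the last executed one.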
Cheng, Jason Y; Ning, Holly; Arora, Barbara C; Zhuge, Ying; Miller, Robert W
2016-05-08
The dose measurement of small field sizes, such as the conical collimators used in stereotactic radiosurgery (SRS), is a significant challenge due to many factors, including source occlusion, detector size limitations, and lack of lateral electronic equilibrium. One useful tool for dealing with the small-field effect is Monte Carlo (MC) simulation. In this study, we report a comparison of Monte Carlo simulations and measurements of output factors for the Varian SRS system with conical collimators for 6 MV flattening filter-free (6 MV FFF) and 10 MV flattening filter-free (10 MV FFF) beams on the TrueBeam accelerator. Monte Carlo simulations of Varian's SRS system for the 6 MV and 10 MV photon energies with cone sizes of 17.5 mm, 15.0 mm, 12.5 mm, 10.0 mm, 7.5 mm, 5.0 mm, and 4.0 mm were performed using the EGSnrc (release V4 2.4.0) codes. Varian's version-2 phase-space files for the 6 MV and 10 MV TrueBeam beams were utilized in the Monte Carlo simulations. Two small diode detectors, the Edge (Sun Nuclear) and the Small Field Detector (SFD, IBA Dosimetry), were used to measure the output factors. Significant errors may result if detector correction factors are not applied to small-field dosimetric measurements. Although machine-specific k_{Qclin,Qmsr}^{fclin,fmsr} correction factors for the diode detectors were not available for this study, correction factors were applied from published studies conducted under similar conditions. For cone diameters greater than or equal to 12.5 mm, the differences between output factors for the Edge detector, the SFD detector, and the MC simulations are within 3.0% for both energies. For cone diameters below 12.5 mm, the output factor differences exhibit greater variations.
Monte-Carlo Simulation of the Features of Bi-Reactor Accelerator Driven Systems
Bznuni, S A; Khudaverdian, A G; Barashenkov, V S; Sosnin, A N; Polyanskii, A A
2002-01-01
Parameters of accelerator-driven systems containing two "cascade" subcritical assemblies (a liquid-metal fast reactor used as a neutron booster, and a thermal reactor where the main heat production takes place) are investigated. Three main reactor cores, analogous to the VVER-1000, MSBR-1000 and CANDU-6 reactors, are considered. Functioning in a safe mode (k_{eff}=0.94-0.98), the systems under consideration demonstrate much larger capacity over a wide range of k_{eff} in comparison with analogous systems without an intermediate fast booster reactor, while maintaining a thermal neutron flux density of Phi^{max}=10^{14} cm^{-2}s^{-1}. Operating with both fast and thermal zones, they are capable of transmuting the whole range of nuclear waste while reducing the requirements on the accelerator beam current by one order of magnitude. This appears most important when molten-salt thermal breeder reactor cores are considered as the main heat-generating zone.
Saccomandi, Paola; Larocca, Enza Stefania; Rendina, Veneranda; Schena, Emiliano; D'Ambrosio, Roberto; Crescenzi, Anna; Di Matteo, Francesco Maria; Silvestri, Sergio
2016-08-01
The investigation of laser-tissue interaction is crucial for diagnostics and therapeutics. In particular, the estimation of tissue optical properties allows the development of predictive models for organ-specific treatment planning tools. With regard to laser ablation (LA), optical properties are among the main factors responsible for therapy efficacy, as they determine how the tissue absorbs and scatters laser energy and hence how it heats. The recent introduction of LA for pancreatic tumor treatment in clinical studies has fostered the need to assess the laser-pancreas interaction and hence to find the tissue's optical properties at the wavelength of interest. This work aims at estimating the optical properties (i.e., the absorption coefficient μa, the scattering coefficient μs, and the anisotropy factor g) of neuroendocrine pancreatic tumor at 1064 nm. Experiments were performed using two popular sample storage methods: the optical properties of frozen and paraffin-embedded neuroendocrine tumors of the pancreas were estimated by employing a double-integrating-sphere system and an inverse Monte Carlo algorithm. Results show that paraffin-embedded tissue is characterized by absorption and scattering coefficients significantly higher than those of frozen samples (μa of 56 cm(-1) vs 0.9 cm(-1), μs of 539 cm(-1) vs 130 cm(-1), respectively). Simulations show that such different optical features strongly influence the pancreas temperature distribution during LA. This result may affect the prediction of therapeutic outcome. Therefore, the choice of the appropriate sample preparation technique for optical property estimation is crucial for the performance of the mathematical models which predict the LA thermal outcome on the tissue and guide the selection of optimal LA settings.
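One elementary building block of such photon-transport models, sampling exponential free paths against the total attenuation coefficient, can be checked against the Beer-Lambert law. The coefficients below are merely in the range reported above for frozen tissue; the full inverse Monte Carlo, with scattering directions and the integrating-sphere geometry, is far beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def unscattered_fraction(mu_a, mu_s, thickness, n_photons=200_000):
    """Sample exponential free paths against mu_t = mu_a + mu_s and count
    photons crossing a slab without any interaction; compares against
    Beer-Lambert T = exp(-mu_t * d)."""
    mu_t = mu_a + mu_s
    paths = rng.exponential(1.0 / mu_t, size=n_photons)
    return float(np.mean(paths > thickness))

mu_a, mu_s, d = 0.9, 130.0, 0.01   # cm^-1, cm^-1, cm (frozen-tissue-like)
t_mc = unscattered_fraction(mu_a, mu_s, d)
t_theory = float(np.exp(-(mu_a + mu_s) * d))
print(t_mc, t_theory)
```

Agreement between the sampled fraction and exp(-mu_t d) is the standard first validation step before adding scattering phase functions such as Henyey-Greenstein.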
Schiapparelli, P; Zefiro, D; Taccini, G
2009-05-01
The aim of this work was to evaluate the performance of the voxel-based Monte Carlo algorithm implemented in the commercial treatment planning system ONCENTRA MASTERPLAN for a 9 MeV electron beam produced by a Varian Clinac 2100 C/D linear accelerator. In order to verify the computed data experimentally, three different groups of tests were planned. The first set was performed in a water phantom to investigate standard fields, custom inserts, and extended treatment distances. The second concerned a standard field, an irregular entrance surface, and oblique incidence in a homogeneous PMMA phantom. The last group involved the introduction of inhomogeneities in a PMMA phantom to simulate high- and low-density materials such as bone and lung. Measurements in water were performed by means of cylindrical and plane-parallel ionization chambers, whereas measurements in PMMA were carried out with radiochromic films. Point dose values were compared in terms of percentage difference, whereas the gamma index tool was used to compare computed and measured dose profiles, considering different tolerances according to the test complexity. In the case of transverse scans, agreement was sought in the plane formed by the intersection of the beam axis and the profile (2D analysis), while for percentage depth dose curves only the beam axis was explored (1D analysis). An excellent agreement was found for point dose evaluation in water (discrepancies smaller than 2%). The comparison between planned and measured dose profiles in homogeneous water and PMMA phantoms also showed good results (agreement within 2%-2 mm). Profile evaluation in phantoms with internal inhomogeneities showed good agreement in the case of the "lung" insert, while in tests concerning a small "bone" inhomogeneity a discrepancy was particularly evident in dose values on the beam axis. This is due to the inaccurate geometrical description of the phantom that is linked
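A minimal version of the gamma-index comparison used in such studies can be sketched in 1D. This is the standard textbook definition evaluated on a discrete grid (no sub-grid interpolation), applied to toy Gaussian profiles rather than measured data.

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_eval, dd=0.02, dta=0.2):
    """1D gamma analysis: for each reference point, minimise the combined
    dose-difference / distance-to-agreement metric over the evaluated
    profile. dd = 2% of the maximum dose, dta = 0.2 cm (2 mm)."""
    d_norm = dd * dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2
        dose2 = ((dose_eval - di) / d_norm) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma

x = np.linspace(0.0, 10.0, 101)                 # position in cm
ref = np.exp(-0.5 * ((x - 5.0) / 1.5) ** 2)     # toy Gaussian profile
evl = np.exp(-0.5 * ((x - 5.02) / 1.5) ** 2)    # same profile, 0.2 mm shift
g = gamma_index_1d(x, ref, evl)
print(g.max())   # well below 1: agreement within 2%/2 mm
```

A profile passes the 2%/2 mm criterion at a point when gamma is at most 1 there; the small shift above keeps the whole curve comfortably inside tolerance.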
Russo, S; Mariani, T T; Migliorini, R; Marcellusi, A; Mennini, F S
2015-09-16
The aim of the study is to estimate the pension costs incurred for patients with musculoskeletal disorders (MDs), and specifically with rheumatoid arthritis (RA) and ankylosing spondylitis (AS), in Italy between 2009 and 2012. We analyzed the database of the Italian National Social Security Institute (Istituto Nazionale Previdenza Sociale, i.e. INPS) to estimate the total costs of three types of social security benefits granted to patients with MDs, RA and AS: disability benefits (for people with reduced working ability), disability pensions (for people who cannot qualify as workers) and incapacity pensions (for people without working ability). We developed a probabilistic model with a Monte Carlo simulation to estimate the total costs for each type of benefit associated with MDs, RA and AS. We also estimated the productivity loss resulting from RA in 2013. From 2009 to 2012 about 393 thousand treatments were paid, for a total of approximately €2.7 billion. The annual number of treatments was on average 98 thousand, at a total cost of €674 million per year. In particular, the total pension burden was about €99 million for RA and €26 million for AS. The productivity loss for RA in 2013 was equal to €707,425,191, due to 9,174,221 working days lost. Our study is the first to estimate the burden of social security pensions for MDs based on data on both approved claims and benefits paid by the national security system. From 2009 to 2012, in Italy, the highest indirect costs were associated with disability pensions (54% of the total indirect cost), followed by disability benefits (44.1% of cost) and incapacity pensions (1.8% of cost). In conclusion, MDs are chronic and highly debilitating diseases with a strong female predominance and very significant economic and social costs, which are set to increase due to the aging of the population.
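A probabilistic cost model of this general shape can be sketched as follows. The distributions and parameter values are illustrative stand-ins, not the study's actual inputs; only the order of magnitude (roughly 98 thousand treatments and €674 million per year) is taken from the figures above.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_total_cost(n_draws=100_000):
    """Monte Carlo cost model in the spirit of the study: draw a yearly
    claim count and a mean per-claim cost from assumed distributions.
    All distributions and parameters here are illustrative stand-ins."""
    n_claims = rng.poisson(98_000, size=n_draws)               # claims/year
    cost_per_claim = rng.normal(6_900.0, 300.0, size=n_draws)  # EUR/claim
    return n_claims * cost_per_claim

totals = simulate_total_cost()
lo, mid, hi = np.percentile(totals, [2.5, 50.0, 97.5])
print(f"median EUR {mid / 1e6:.0f}M, 95% interval "
      f"{lo / 1e6:.0f}M-{hi / 1e6:.0f}M")
```

Reporting a median with a central interval, rather than a single point estimate, is the main payoff of running the cost model as a Monte Carlo simulation.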
Directory of Open Access Journals (Sweden)
S. Russo
2015-09-01
The aim of the study is to estimate the pension costs incurred for patients with musculoskeletal disorders (MDs), and specifically with rheumatoid arthritis (RA) and ankylosing spondylitis (AS), in Italy between 2009 and 2012. We analyzed the database of the Italian National Social Security Institute (Istituto Nazionale Previdenza Sociale, i.e. INPS) to estimate the total costs of three types of social security benefits granted to patients with MDs, RA and AS: disability benefits (for people with reduced working ability), disability pensions (for people who cannot qualify as workers) and incapacity pensions (for people without working ability). We developed a probabilistic model with a Monte Carlo simulation to estimate the total costs for each type of benefit associated with MDs, RA and AS. We also estimated the productivity loss resulting from RA in 2013. From 2009 to 2012 about 393 thousand treatments were paid, for a total of approximately €2.7 billion. The annual number of treatments was on average 98 thousand, at a total cost of €674 million per year. In particular, the total pension burden was about €99 million for RA and €26 million for AS. The productivity loss for RA in 2013 was equal to €707,425,191, due to 9,174,221 working days lost. Our study is the first to estimate the burden of social security pensions for MDs based on data on both approved claims and benefits paid by the national security system. From 2009 to 2012, in Italy, the highest indirect costs were associated with disability pensions (54% of the total indirect cost), followed by disability benefits (44.1% of cost) and incapacity pensions (1.8% of cost). In conclusion, MDs are chronic and highly debilitating diseases with a strong female predominance and very significant economic and social costs, which are set to increase due to the aging of the population.
Amoush, Ahmad; Luckstead, Marcus; Lamba, Michael; Elson, Howard; Kassing, William
2013-01-01
This study aimed to investigate the high-dose rate Iridium-192 brachytherapy, including near source dosimetry, of a catheter-based applicator from 0.5mm to 1cm along the transverse axis. Radiochromic film and Monte Carlo (MC) simulation were used to generate absolute dose for the catheter-based applicator. Results from radiochromic film and MC simulation were compared directly to the treatment planning system (TPS) based on the American Association of Physicists in Medicine Updated Task Group 43 (TG-43U1) dose calculation formalism. The difference between dose measured using radiochromic film along the transverse plane at 0.5mm from the surface and the predicted dose by the TPS was 24%±13%. The dose difference between the MC simulation along the transverse plane at 0.5mm from the surface and the predicted dose by the TPS was 22.1%±3%. For distances from 1.5mm to 1cm from the surface, radiochromic film and MC simulation agreed with TPS within an uncertainty of 3%. The TPS under-predicts the dose at the surface of the applicator, i.e., 0.5mm from the catheter surface, as compared to the measured and MC simulation predicted dose. MC simulation results demonstrated that 15% of this error is due to neglecting the beta particles and discrete electrons emanating from the sources and not considered by the TPS, and 7% of the difference was due to the photon alone, potentially due to the differences in MC dose modeling, photon spectrum, scoring techniques, and effect of the presence of the catheter and the air gap. Beyond 1mm from the surface, the TPS dose algorithm agrees with the experimental and MC data within 3%.
State-of-the-art Monte Carlo 1988
Energy Technology Data Exchange (ETDEWEB)
Soran, P.D.
1988-06-28
Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system
Turner, Travis L.
1991-01-01
A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.
Using Monte Carlo techniques and parallel processing for debris hazard analysis of rocket systems
Energy Technology Data Exchange (ETDEWEB)
LaFarge, R.A.
1994-02-01
Sandia National Laboratories has been involved with rocket systems for many years. Some of these systems have carried high explosive onboard, while others have had flight termination systems (FTS) for destruction purposes whenever a potential hazard is detected. Recently, Sandia has also been involved with flight tests in which a target vehicle is intentionally destroyed by a projectile. Such endeavors always raise questions about the safety of personnel and the environment in the event of a premature detonation of the explosive or an activation of the FTS, as well as during intentional vehicle destruction. Previous attempts to investigate fragmentation hazards for similar configurations have analyzed fragment size and shape in detail but have computed only a limited number of trajectories to determine the probabilities of impact and casualty expectations. A computer program, SAFETIE, has been written in support of various SNL flight experiments to compute better approximations of the hazards. SAFETIE uses the AMEER trajectory computer code and the Engineering Sciences Center LAN of Sun workstations to determine more realistically the probability of impact for an arbitrary number of exclusion areas. The various debris generation models are described.
Scemama, Anthony; Oseret, Emmanuel; Jalby, William
2012-01-01
Various strategies to implement QMC simulations efficiently for large chemical systems are presented. These include: (i) an efficient algorithm to calculate the computationally expensive Slater matrices, a novel scheme based on the highly localized character of atomic Gaussian basis functions (rather than the molecular orbitals, as usually done); (ii) the possibility of keeping the memory footprint minimal; (iii) an important enhancement of single-core performance when efficient optimization tools are employed; and (iv) a universal, dynamic, fault-tolerant, and load-balanced computational framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and are illustrated with numerical applications on small peptides of increasing size (158, 434, 1056 and 1731 electrons). Using 10k-80k computing cores of the Curie machine (GENCI-T...
Demchik, Vadim
2013-01-01
The multi-GPU open-source package QCDGPU for lattice Monte Carlo simulations of pure SU(N) gluodynamics in an external magnetic field at finite temperature, and of the O(N) model, is presented. The code is implemented in OpenCL, tested on AMD and NVIDIA GPUs and on AMD and Intel CPUs, and may run on other OpenCL-compatible devices. The package has minimal external library dependencies and is OS-independent. It is optimized for heterogeneous computing: the lattice can be divided into non-equivalent parts to hide the difference in performance of the devices used. QCDGPU has a client-server part for distributed simulations. The package is designed to produce lattice gauge configurations as well as to analyze previously generated ones, and may be executed in a fault-tolerant mode. The Monte Carlo procedure core is based on the PRNGCL library for pseudo-random number generation on OpenCL-compatible devices, which contains several of the most popular pseudo-random number generators.
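The kind of local Metropolis update such lattice Monte Carlo packages parallelise can be illustrated with the 2D Ising model in plain Python. QCDGPU itself works on SU(N) and O(N) fields in OpenCL; this is only the conceptual, single-threaded analogue of one site update.

```python
import numpy as np

rng = np.random.default_rng(6)

def ising_metropolis(L=16, beta=0.6, sweeps=300):
    """Metropolis updates for the 2D Ising model, started from an ordered
    ('cold') configuration; the single-site update is the serial analogue
    of the lattice updates a GPU package parallelises."""
    s = np.ones((L, L), dtype=int)
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nb                 # energy cost of flipping
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]
    return abs(s.mean())

m = ising_metropolis()
print(m)   # beta = 0.6 is above beta_c ~ 0.44: magnetisation stays large
```

Because the update touches only nearest neighbours, non-adjacent sites can be updated in parallel (checkerboard decomposition), which is exactly the structure GPU lattice codes exploit.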
Experience with the gLite workload management system in ATLAS Monte Carlo production on LCG
Campana, S.; Rebatto, D.; Sciaba', A.
2008-07-01
The ATLAS experiment has been running continuous simulated-event production for more than two years. A considerable fraction of the jobs is submitted daily and handled via the gLite Workload Management System, which overcomes several limitations of the previous LCG Resource Broker. The gLite WMS has been tested very intensively for the LHC experiments' use cases for more than six months, both in terms of performance and reliability. The tests were carried out by the LCG Experiment Integration Support team (in close contact with the experiments) together with the EGEE integration and certification team and the gLite middleware developers. A pragmatic, iterative and interactive approach allowed a very quick rollout of fixes and their rapid deployment, together with new functionalities, for the ATLAS production activities. The same approach is being adopted for other middleware components such as the gLite and CREAM Computing Elements. In this contribution we summarize the lessons learned from the gLite WMS testing activity, pointing out the most important achievements and the open issues. In addition, we present the current status of ATLAS simulated-event production on the EGEE infrastructure based on the gLite WMS, showing the main improvements and benefits from the new middleware. Finally, the gLite WMS is being used by many other VOs, including the LHC experiments; in particular, some statistics will be shown on the CMS experience running user analysis via the WMS
Experience with the gLite workload management system in ATLAS Monte Carlo production on LCG
Energy Technology Data Exchange (ETDEWEB)
Campana, S; Sciaba, A [CERN (Switzerland); Rebatto, D [INFN Milano (Italy)], E-mail: Simone.Campana@cern.ch
2008-07-15
The ATLAS experiment has been running continuous simulated-event production for more than two years. A considerable fraction of the jobs is submitted daily and handled via the gLite Workload Management System, which overcomes several limitations of the previous LCG Resource Broker. The gLite WMS has been tested very intensively for the LHC experiments' use cases for more than six months, both in terms of performance and reliability. The tests were carried out by the LCG Experiment Integration Support team (in close contact with the experiments) together with the EGEE integration and certification team and the gLite middleware developers. A pragmatic, iterative and interactive approach allowed a very quick rollout of fixes and their rapid deployment, together with new functionalities, for the ATLAS production activities. The same approach is being adopted for other middleware components such as the gLite and CREAM Computing Elements. In this contribution we summarize the lessons learned from the gLite WMS testing activity, pointing out the most important achievements and the open issues. In addition, we present the current status of ATLAS simulated-event production on the EGEE infrastructure based on the gLite WMS, showing the main improvements and benefits from the new middleware. Finally, the gLite WMS is being used by many other VOs, including the LHC experiments; in particular, some statistics will be shown on the CMS experience running user analysis via the WMS.
Experience with the gLite Workload Management System in ATLAS Monte Carlo production on LCG
Campana, S; Sciabá, A; CERN. Geneva. IT Department
2008-01-01
The ATLAS experiment has been running continuous simulated-event production for more than two years. A considerable fraction of the jobs is submitted daily and handled via the gLite Workload Management System, which overcomes several limitations of the previous LCG Resource Broker. The gLite WMS has been tested very intensively for the LHC experiments' use cases for more than six months, both in terms of performance and reliability. The tests were carried out by the LCG Experiment Integration Support team (in close contact with the experiments) together with the EGEE integration and certification team and the gLite middleware developers. A pragmatic, iterative and interactive approach allowed a very quick rollout of fixes and their rapid deployment, together with new functionalities, for the ATLAS production activities. The same approach is being adopted for other middleware components such as the gLite and CREAM Computing Elements. In this contribution we summarize the lessons learned from the gLite WMS testing a...
Gaudoin, R
2000-01-01
correlation terms. 2. We use standard VMC in conjunction with iterative variance minimisation to study bulk aluminium as a test bed for future work on surfaces. QMC has been used successfully for insulators and semiconductors, but little is known about applying it to metals. LDA calculations for aluminium are reasonably accurate for the bulk modulus and lattice constant. In contrast, the LDA cohesive energy is 1.25 times the experimental value. Due to the large statistical uncertainties the VMC result for the bulk modulus is disappointing, but the VMC cohesive energy is a clear improvement on LDA. In general, we find that QMC is applicable to metals and that the finite-size and other errors are qualitatively no different from those encountered in non-metallic systems. The quantum many-body problem is among the most challenging in physics. A popular approach is to reduce the problem to the study of a single particle in an effective potential. These one-particle schemes, the most popular of which is density fun...
Monte Carlo Simulations: Number of Iterations and Accuracy
2015-07-01
US Army Research Laboratory technical note ARL-TN-0684, July 2015, by William... Introduction: Monte Carlo (MC) methods are often used...
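The central point of the report's topic, that the statistical error of a Monte Carlo estimate shrinks like 1/sqrt(N) with the number of iterations, is easy to demonstrate with a hit-or-miss estimate of pi:

```python
import numpy as np

rng = np.random.default_rng(7)

def estimate_pi(n):
    """Hit-or-miss estimate of pi from n uniform points in the unit
    square; the statistical error shrinks like 1/sqrt(n)."""
    x, y = rng.uniform(size=(2, n))
    return 4.0 * np.mean(x * x + y * y < 1.0)

for n in (100, 10_000, 1_000_000):
    print(n, estimate_pi(n))
```

Each hundredfold increase in the iteration count buys roughly one extra decimal digit of accuracy, which is the practical trade-off such reports quantify.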
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffrey D [Los Alamos National Laboratory; Thompson, Kelly G [Los Alamos National Laboratory; Urbatsch, Todd J [Los Alamos National Laboratory
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
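The equivalence DDMC exploits, between many small Monte Carlo steps and a diffusion description, can be illustrated in miniature: a symmetric random walk reproduces the mean-square displacement of the diffusion equation. This is only a toy of that correspondence, not the DDMC algorithm itself, which replaces many such steps with single hops between diffusion cells.

```python
import numpy as np

rng = np.random.default_rng(8)

def walk_msd(n_particles=20_000, n_steps=400):
    """Mean-square displacement of a symmetric 1D lattice walk with unit
    steps: after n steps MSD = n, matching a diffusion equation with
    D = dx^2/(2 dt) in the continuum limit."""
    steps = rng.choice([-1.0, 1.0], size=(n_particles, n_steps))
    return float(np.mean(steps.sum(axis=1) ** 2))

msd = walk_msd()
print(msd)   # theory: n_steps = 400
```

The gain DDMC realises is that in optically thick, highly scattering regions one diffusion-cell hop stands in for thousands of these individual scattering steps.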
Alternative Monte Carlo Approach for General Global Illumination
Institute of Scientific and Technical Information of China (English)
徐庆; 李朋; 徐源; 孙济洲
2004-01-01
An alternative Monte Carlo strategy for the computation of the global illumination problem is presented. The proposed approach provides a new and optimal way of solving Monte Carlo global illumination, based on the zero-variance importance sampling procedure. A new importance-driven Monte Carlo global illumination algorithm in the framework of the new computing scheme was developed and implemented. Results obtained by rendering test scenes show that this new framework and the newly derived algorithm are effective and promising.
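The zero-variance idea behind such importance sampling schemes can be shown on a one-dimensional toy integral: when the sampling density is proportional to the integrand, every sample contributes exactly the value of the integral and the variance vanishes. In rendering the integrand is unknown, so practical algorithms only approximate this ideal.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy integral I = integral_0^1 3 x^2 dx = 1.
n = 50_000

def f(x):
    return 3.0 * x**2

# Plain uniform sampling: correct mean, nonzero variance.
x_u = rng.uniform(size=n)
est_u = f(x_u)

# Importance sampling from p(x) = 3 x^2 (inverse CDF: x = u^(1/3)).
# The weight f(x)/p(x) equals 1 for every sample: zero variance.
x_i = rng.uniform(size=n) ** (1.0 / 3.0)
est_i = f(x_i) / (3.0 * x_i**2)

print(est_u.mean(), est_u.std(), est_i.std())
```

The uniform estimator fluctuates around 1 while the importance-sampled estimator is identically 1, which is the limiting case an importance-driven global illumination algorithm tries to approach.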
Monte Carlo simulation of large electron fields
Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto
2008-03-01
Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.
Institute of Scientific and Technical Information of China (English)
吴永鹏; 汤彬
2012-01-01
Usually, there are several methods for calculating the response functions of LaBr3(Ce) detectors, e.g. direct experiment, interpolation of experiment-based data, analytic functions, and Monte Carlo simulation. In logging applications the experiment-based methods cannot be adopted because of their limitations. Analytic functions have the advantage of fast calculation speed, but it is very difficult for them to take into account the many effects that occur in practical applications. In contrast, Monte Carlo simulation can handle physical and geometric configurations very flexibly, which gives it a distinct advantage for calculating response functions in the complex configurations of a borehole. A new application of the LaBr3(Ce) detector is in natural gamma-ray borehole spectrometry for uranium well logging. Calculation of the response functions must consider a series of physical and geometric factors under complex logging conditions, including the earth formations and their relevant parameters, different energies, the material and thickness of the casings, the fluid between the two tubes, and the position of the LaBr3(Ce) crystal relative to the steel ingot at the front of the logging tube. The present work establishes Monte Carlo simulation models for the above-mentioned situations and then performs calculations for the main gamma rays of the natural radio-element series. The response functions can offer experimental directions for the design of the borehole detection system, and provide a technical basis and basic data for spectral analysis of natural gamma rays and for sourceless calibration in uranium quantitative interpretation.
Multiple Monte Carlo Testing with Applications in Spatial Point Processes
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute
The rank envelope test (Myllymäki et al., Global envelope tests for spatial processes, arXiv:1307.0239 [stat.ME]) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: 1) a few univariate Monte Carlo tests, 2) a Monte Carlo test with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and it is accompanied with a p-value and with a graphical interpretation which shows which subtest or which distances of the used test function ...
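The basic Monte Carlo test that these constructions build on computes a rank-based p-value from simulations under the null hypothesis. A minimal sketch, with a hypothetical test statistic (the mean of ten standard-normal draws) standing in for the spatial statistics of the paper:

```python
import random

def monte_carlo_p_value(t_obs, simulate_null, m, rng):
    # One-sided Monte Carlo test: p = (1 + #{T_i >= t_obs}) / (m + 1),
    # which has exact type I error for a continuous test statistic.
    exceed = sum(1 for _ in range(m) if simulate_null(rng) >= t_obs)
    return (1 + exceed) / (m + 1)

# Hypothetical statistic: the mean of 10 standard-normal draws under the null.
def null_stat(rng):
    return sum(rng.gauss(0.0, 1.0) for _ in range(10)) / 10

rng = random.Random(42)
p = monte_carlo_p_value(0.9, null_stat, 999, rng)  # 0.9 is ~2.8 null SDs out
```

With m = 999 simulations the smallest attainable p-value is 1/1000, which is why the multiple-testing corrections discussed in the paper matter when many such tests are combined.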
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
Energy Technology Data Exchange (ETDEWEB)
WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2015-01-07
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence.
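The first-order-difference construction that MIMC generalizes can be sketched for a plain MLMC estimator: couple fine and coarse Euler paths with shared Brownian increments so that the level differences have small variance, then sum the telescoping corrections. The model (geometric Brownian motion) and the sample counts are illustrative assumptions, not from the paper.

```python
import math
import random

def euler_pair(level, rng, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
    # One coupled (fine, coarse) Euler-Maruyama sample of geometric Brownian
    # motion: the fine path uses 2**level steps, the coarse path 2**(level-1),
    # and both share the same Brownian increments so their difference is small.
    nf = 2 ** level
    hf = T / nf
    xf, xc, inc = x0, x0, 0.0
    for i in range(nf):
        dw = rng.gauss(0.0, math.sqrt(hf))
        xf += mu * xf * hf + sigma * xf * dw
        inc += dw
        if level > 0 and i % 2 == 1:
            xc += mu * xc * 2.0 * hf + sigma * xc * inc
            inc = 0.0
    return xf, (xc if level > 0 else 0.0)

def mlmc_estimate(L, n_per_level, rng):
    # Telescoping sum E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}].
    total = 0.0
    for level in range(L + 1):
        s = 0.0
        for _ in range(n_per_level[level]):
            xf, xc = euler_pair(level, rng)
            s += xf - xc
        total += s / n_per_level[level]
    return total

rng = random.Random(5)
estimate = mlmc_estimate(4, [4000, 2000, 1000, 500, 250], rng)  # E[X_T] = e^0.05
```

MIMC replaces the first-order differences P_l - P_{l-1} with high-order mixed differences across several discretization indices at once; the telescoping structure is the same.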
Chemical application of diffusion quantum Monte Carlo
Reynolds, P. J.; Lester, W. A., Jr.
1983-10-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.
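The stochastic solution of the Schrödinger equation mentioned above can be illustrated with the simplest form of diffusion Monte Carlo: walkers diffuse freely, branch with weight exp(-(V - E_ref) dt), and the reference energy is steered to keep the population roughly constant, so that E_ref converges to the ground state energy. This sketch uses a 1-D harmonic oscillator (exact E0 = 0.5 in reduced units) rather than CH2, and the population-control gains are ad hoc choices.

```python
import math
import random

def diffusion_mc(n_target=500, n_steps=400, dt=0.05, seed=1):
    # Basic diffusion Monte Carlo for V(x) = x^2 / 2, whose exact ground
    # state energy is E0 = 0.5 in reduced units.
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_target)]
    e_ref = 0.0
    trace = []
    for _ in range(n_steps):
        n_old = len(walkers)
        new = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))          # free diffusion
            w = math.exp(-(0.5 * x * x - e_ref) * dt)   # branching weight
            copies = int(w + rng.random())              # stochastic rounding
            new.extend([x] * copies)
        walkers = new if new else [0.0]
        # Population control (gains 0.2 and 0.1 are ad hoc): damp the
        # instantaneous growth and steer the count back toward n_target.
        e_ref -= (0.2 / dt) * math.log(len(walkers) / n_old)
        e_ref -= (0.1 / dt) * math.log(len(walkers) / n_target)
        trace.append(e_ref)
    # Average E_ref over the second half of the run as the energy estimate.
    return sum(trace[n_steps // 2:]) / (n_steps - n_steps // 2)

e0_estimate = diffusion_mc()
```

Production codes add importance sampling with a trial wave function, which the paper's vectorized implementation relies on; the plain version above only shows the branching random walk.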
Multi-Index Monte Carlo (MIMC)
Haji Ali, Abdul Lateef
2016-01-06
We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^{-2}).
Discrete range clustering using Monte Carlo methods
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range estimates only at sparse locations, and therefore surface fitting techniques to fill the gaps in the range measurements are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
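A minimal version of the simulated annealing approach compared in the paper: cluster labels are perturbed one point at a time, uphill moves are accepted with probability exp(-ΔE/T), and the temperature is cooled linearly. The 1-D points, cost function and schedule below are illustrative assumptions, not the paper's range data.

```python
import math
import random

def within_cost(points, labels, k):
    # Sum over clusters of squared distances to the cluster mean.
    cost = 0.0
    for c in range(k):
        members = [p for p, lab in zip(points, labels) if lab == c]
        if members:
            mean = sum(members) / len(members)
            cost += sum((p - mean) ** 2 for p in members)
    return cost

def anneal_cluster(points, k, n_iter=5000, t0=1.0, seed=0):
    # Simulated annealing over label assignments with linear cooling.
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in points]
    cost = within_cost(points, labels, k)
    for i in range(n_iter):
        t = t0 * (1.0 - i / n_iter) + 1e-9
        j = rng.randrange(len(points))
        old = labels[j]
        labels[j] = rng.randrange(k)
        new_cost = within_cost(points, labels, k)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost          # accept (always downhill, sometimes uphill)
        else:
            labels[j] = old          # reject and restore
    return labels, cost

points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
labels, cost = anneal_cluster(points, 2)
```

The basic Monte Carlo variant discussed in the paper corresponds to holding the temperature fixed instead of cooling it.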
Quantum Monte Carlo Calculations of Neutron Matter
Carlson, J; Ravenhall, D G
2003-01-01
Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne $v_8'$ two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of the neutron gas energy is approximately half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...
Efficient Word Alignment with Markov Chain Monte Carlo
Directory of Open Access Journals (Sweden)
Östling Robert
2016-10-01
We present EFMARAL, a new system for efficient and accurate word alignment using a Bayesian model with Markov Chain Monte Carlo (MCMC) inference. Through careful selection of data structures and model architecture we are able to surpass the fast_align system, commonly used for performance-critical word alignment, both in computational efficiency and alignment accuracy. Our evaluation shows that a phrase-based statistical machine translation (SMT) system produces translations of higher quality when using word alignments from EFMARAL than from fast_align, and that translation quality is on par with what is obtained using GIZA++, a tool requiring orders of magnitude more processing time. More generally, we hope to convince the reader that Monte Carlo sampling, rather than being viewed as a slow method of last resort, should actually be the method of choice for the SMT practitioner and others interested in word alignment.
Quantum Monte Carlo study of the protonated water dimer
Dagrada, Mario; Saitta, Antonino M; Sorella, Sandro; Mauri, Francesco
2013-01-01
We report an extensive theoretical study of the protonated water dimer (Zundel ion) by means of the highly correlated variational Monte Carlo and lattice regularized Monte Carlo approaches. This system represents the simplest model for proton transfer (PT) and a correct description of its properties is essential in order to understand the PT mechanism in more complex aqueous systems. Our Jastrow correlated AGP wave function ensures an accurate treatment of electron correlations. Exploiting the advantages of contracting the primitive basis set over atomic hybrid orbitals, we are able to limit dramatically the number of variational parameters with a systematic control on the numerical precision, crucial in order to simulate larger systems. We investigate energetics and geometrical properties of the Zundel ion as a function of the oxygen-oxygen distance, taken as reaction coordinate. In both cases, our QMC results are found in excellent agreement with the coupled cluster CCSD(T) technique, the quantum chemistry "go...
Monte Carlo Study on Singly Tagged D Mesons at BES-III
Institute of Scientific and Technical Information of China (English)
ZHAO Ming-Gang; YU Chun-Xu; LI Xue-Qian
2009-01-01
We present Monte Carlo studies on the singly tagged D mesons, which are crucial in the absolute measurements of D meson decays, based on a full Monte Carlo simulation for the BES-III detector, with the BES-III Offline Software System. The expected detection efficiencies and mass resolutions of the tagged D mesons are well estimated.
Monte Carlo Simulation Of Emission Tomography And Other Medical Imaging Techniques.
Harrison, Robert L
2010-01-05
An introduction to Monte Carlo simulation of emission tomography. This paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations.
Schach von Wittenau, A E; Logan, C M; Aufderheide, M B; Slone, D M
2002-11-01
Originally designed for use at medical-imaging x-ray energies, imaging systems comprising scintillating screens and amorphous Si detectors are also used at the megavoltage photon energies typical of portal imaging and industrial radiography. While image blur at medical-imaging x-ray energies is strongly influenced both by K-shell fluorescence and by the transport of optical photons within the scintillator layer, at higher photon energies the image blur is dominated by radiation scattered from the detector housing and internal support structures. We use Monte Carlo methods to study the blurring in a notional detector: a series of semi-infinite layers with material compositions, thicknesses, and densities similar to those of a commercially available flat-panel amorphous Si detector system comprising a protective housing, a gadolinium oxysulfide scintillator screen, and associated electronics. We find that the image blurring, as described by a point-spread function (PSF), has three length scales. The first component, with a submillimeter length scale, arises from electron scatter within the scintillator and detection electronics. The second component, with a millimeter-to-centimeter length scale, arises from electrons produced in the front cover of the detector. The third component, with a length scale of tens of centimeters, arises from photon scatter by the back cover of the detector. The relative contributions of each of these components to the overall PSF vary with incident photon energy. We present an algorithm that includes the energy-dependent sensitivity and energy-dependent PSF within a ray-tracing formalism. We find quantitative agreement (approximately 2%) between predicted radiographs and measured radiographs of copper step wedges, taken with a 9 MV bremsstrahlung source and a commercially available flat-panel system. The measured radiographs show the blurring artifacts expected from both the millimeter-scale electron transport and from the tens
Minimising biases in full configuration interaction quantum Monte Carlo
Vigor, W. A.; Spencer, J. S.; Bearpark, M. J.; Thom, A. J. W.
2015-03-01
We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme commonly used in diffusion Monte Carlo to remove the bias caused by population control [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)] is effective and recommend its use as a post-processing step.
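Computing the stationary distribution of a small Markov matrix, as done here for the two-determinant system, reduces to power iteration. The 2x2 transition matrix below is a hypothetical stand-in, not the FCIQMC matrix constructed in the paper:

```python
def stationary(p, n_iter=200):
    # Stationary distribution of a row-stochastic matrix by power iteration:
    # repeatedly apply pi <- pi P until the fixed point is reached.
    n = len(p)
    pi = [1.0 / n] * n
    for _ in range(n_iter):
        pi = [sum(pi[i] * p[i][j] for i in range(n)) for j in range(n)]
    return pi

# Hypothetical 2-state chain standing in for a two-determinant system;
# balancing the flows 0 -> 1 and 1 -> 0 gives the exact answer (0.75, 0.25).
transition = [[0.9, 0.1],
              [0.3, 0.7]]
pi = stationary(transition)
```

Because the second eigenvalue of this matrix is 0.6, two hundred iterations converge far below machine precision.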
Minimising biases in full configuration interaction quantum Monte Carlo.
Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W
2015-03-14
We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme commonly used in diffusion Monte Carlo to remove the bias caused by population control [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)] is effective and recommend its use as a post-processing step.
Kastinen, D.; Kero, J.
2017-09-01
We present the current status and first results from a Monte Carlo-type simulation toolbox for Solar System small body dynamics. We also present fundamental methods for evaluating the results of this type of simulations using convergence criteria. The calculations consider a body in the Solar System with a mass loss mechanism that generates smaller particles. In our application the body, or parent body, is a comet and the mass loss mechanism is a sublimation process. In order to study mass propagation from parent bodies to Earth, we use the toolbox to sample the uncertainty distributions of relevant comet parameters and to find the resulting Earth influx distributions. The initial distributions considered represent orbital elements, sublimation distance, cometary and meteoroid densities, comet and meteoroid sizes and cometary surface activity. Simulations include perturbations from all major planets, radiation pressure and the Poynting-Robertson effect. In this paper we present the results of an initial software validation performed by producing synthetic versions of the 1933, 1946, 2011 and 2012 October Draconids meteor outbursts and comparing them with observational data and previous models. The synthetic meteor showers were generated by ejecting and propagating material from the recognized parent body of the October Draconids; the comet 21P/Giacobini-Zinner. Material was ejected during 17 perihelion passages between 1866 and 1972. Each perihelion passage was sampled with 50 clones of the parent body, all producing meteoroid streams. The clones were drawn from a multidimensional Gaussian distribution on the orbital elements, with distribution variances proportional to observational uncertainties. In the simulations, each clone ejected 8000 particles. Each particle was assigned an individual weight proportional to the mass loss it represented. This generated a total of 6.7 million test particles, out of which 43 thousand entered the Earth's Hill sphere during 1900
Alvarez, G.; Şen, C.; Furukawa, N.; Motome, Y.; Dagotto, E.
2005-05-01
A software library is presented for the polynomial expansion method (PEM) of the density of states (DOS) introduced in [Y. Motome, N. Furukawa, J. Phys. Soc. Japan 68 (1999) 3853; N. Furukawa, Y. Motome, H. Nakata, Comput. Phys. Comm. 142 (2001) 410]. The library provides all necessary functions for the use of the PEM and its truncated version (TPEM) in a model-independent way. The PEM/TPEM replaces the exact diagonalization of the one-electron sector in models for fermions coupled to classical fields. The computational cost of the algorithm is O(N), with N the number of lattice sites, for the TPEM [N. Furukawa, Y. Motome, J. Phys. Soc. Japan 73 (2004) 1482], which should be contrasted with the computational cost of the diagonalization technique that scales as O(N^4). The method is applied for the first time to a double exchange model with finite Hund coupling and also to diluted spin-fermion models. Program summary: Title of library: TPEM. Catalogue identifier: ADVK. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVK. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. No. of lines in distributed program, including test data, etc.: 1707. No. of bytes in distributed program, including test data, etc.: 13 644. Distribution format: tar.gz. Operating system: Linux, UNIX. Number of files: 4 plus 1 test program. Programming language used: C. Computer: PC. Nature of the physical problem: The study of correlated electrons coupled to classical fields appears in the treatment of many materials of much current interest in condensed matter theory, e.g., manganites, diluted magnetic semiconductors and high temperature superconductors among others. Method of solution: Typically an exact diagonalization of the electronic sector is performed in this type of models for each configuration of classical fields, which are integrated using a classical Monte Carlo algorithm. A polynomial expansion of the density of states is able to replace the exact
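The core of a polynomial expansion of the DOS is the Chebyshev moment recursion T_{m+1}(H) = 2 H T_m(H) - T_{m-1}(H), evaluated against basis vectors. A minimal sketch, with dense matrices, no truncation, and the spectrum assumed already rescaled into [-1, 1]; the Hamiltonians are toy examples, not the spin-fermion models handled by the library.

```python
def matvec(h, v):
    # Dense matrix-vector product; a real code would use sparse H.
    return [sum(h[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def chebyshev_moments(h, n_moments):
    # Moments mu_m = Tr[T_m(H)] / N via the Chebyshev recursion
    # T_{m+1} = 2 H T_m - T_{m-1}, accumulating the diagonal element
    # contributed by each basis vector e_s.
    n = len(h)
    mu = [0.0] * n_moments
    for s in range(n):
        t0 = [1.0 if i == s else 0.0 for i in range(n)]
        t1 = matvec(h, t0)
        mu[0] += t0[s]
        mu[1] += t1[s]
        for m in range(2, n_moments):
            t2 = [2.0 * a - b for a, b in zip(matvec(h, t1), t0)]
            mu[m] += t2[s]
            t0, t1 = t1, t2
    return [x / n for x in mu]

# Toy 2-site hopping Hamiltonian with eigenvalues +/- 0.5.
moments = chebyshev_moments([[0.0, 0.5], [0.5, 0.0]], 4)
```

The truncated method (TPEM) gains its O(N) scaling by keeping only the matrix elements reachable within a fixed number of hops from each basis vector.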
Coherent Scattering Imaging Monte Carlo Simulation
Hassan, Laila Abdulgalil Rafik
Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal to noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source to sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5 × 0.5 × 0.5 cm³ in size was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons. A further study is needed on the effect of breast density and breast thickness
Quantum Monte Carlo Study of Random Antiferromagnetic Heisenberg Chain
Todo, Synge; Kato, Kiyoshi; Takayama, Hajime
1998-01-01
Effects of randomness on the spin-1/2 and 1 antiferromagnetic Heisenberg chains are studied using the quantum Monte Carlo method with the continuous-time loop algorithm. We precisely calculated the uniform susceptibility, string order parameter, spatial and temporal correlation length, and the dynamical exponent, and obtained a phase diagram. The generalization of the continuous-time loop algorithm for the systems with higher-S spins is also presented.
Monte Carlo Simulation of Argon in Nano-Space
Institute of Scientific and Technical Information of China (English)
CHEN Min; YANG Chun; GUO Zeng-Yuan
2000-01-01
Monte Carlo simulations are performed to investigate the thermodynamic properties of argon confined in nano-scale cubes constructed of graphite walls. A remarkable depression of the system pressures is observed. The simulations reveal that the length scale of the cube, the magnitude of the interaction between the fluid and the graphite wall, and the density of the fluid have appreciable effects on the thermodynamic property shifts of the fluid.
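A Metropolis simulation of this kind can be sketched for a small Lennard-Jones system in a periodic cubic box. All parameters below (particle count, box size, temperature, step size, reduced units, no graphite wall) are illustrative assumptions, not the settings of the paper.

```python
import math
import random

def lj_energy(pos, box):
    # Total Lennard-Jones energy in reduced units, minimum-image convention.
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r2 = 0.0
            for k in range(3):
                d = pos[i][k] - pos[j][k]
                d -= box * round(d / box)
                r2 += d * d
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def run_nvt(n_side=3, box=6.0, beta=1.2, delta=0.2, sweeps=20, seed=7):
    # Metropolis NVT: particles start on a cubic lattice; each trial move
    # displaces one particle and is accepted with min(1, exp(-beta dE)).
    rng = random.Random(seed)
    spacing = box / n_side
    pos = [[(i + 0.5) * spacing, (j + 0.5) * spacing, (k + 0.5) * spacing]
           for i in range(n_side) for j in range(n_side) for k in range(n_side)]
    energy = lj_energy(pos, box)
    accepted = tried = 0
    for _ in range(sweeps):
        for p in range(len(pos)):
            old = pos[p]
            pos[p] = [(c + rng.uniform(-delta, delta)) % box for c in old]
            new_energy = lj_energy(pos, box)
            tried += 1
            if rng.random() < math.exp(min(0.0, -beta * (new_energy - energy))):
                energy, accepted = new_energy, accepted + 1
            else:
                pos[p] = old
    return energy, accepted / tried

final_energy, acceptance = run_nvt()
```

Confinement studies like the one above would replace the periodic boundaries with explicit wall potentials and accumulate the pressure via the virial.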
Monte Carlo simulations to replace film dosimetry in IMRT verification
Goetzfried, Thomas; Rickhey, Mark; Treutwein, Marius; Koelbl, Oliver; Bogner, Ludwig
2011-01-01
Patient-specific verification of intensity-modulated radiation therapy (IMRT) plans can be done by dosimetric measurements or by independent dose or monitor unit calculations. The aim of this study was the clinical evaluation of IMRT verification based on a fast Monte Carlo (MC) program with regard to possible benefits compared to commonly used film dosimetry. 25 head-and-neck IMRT plans were recalculated by a pencil-beam-based treatment planning system (TPS) using an appropriate quality assu...
Assessing Excel VBA Suitability for Monte Carlo Simulation
2015-01-01
Monte Carlo (MC) simulation includes a wide range of stochastic techniques used to quantitatively evaluate the behavior of complex systems or processes. Microsoft Excel with Visual Basic for Applications (VBA) is, arguably, the most commonly employed general-purpose tool for MC simulation. Despite the popularity of Excel in many industries and educational institutions, it has been repeatedly criticized for its flaws and often described as questionable, if not complet...
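The kind of calculation typically prototyped in a spreadsheet is easy to express in a few lines of any language; a hit-or-miss estimate of pi with its binomial standard error is the standard first example:

```python
import math
import random

def estimate_pi(n, seed=123):
    # Hit-or-miss Monte Carlo: the fraction of random points in the unit
    # square that land inside the quarter circle, times 4, together with
    # its binomial standard error 4 * sqrt(p(1-p)/n).
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    p = hits / n
    return 4.0 * p, 4.0 * math.sqrt(p * (1.0 - p) / n)

estimate, std_err = estimate_pi(100000)
```

Reporting the standard error alongside the estimate is the main discipline that spreadsheet implementations often omit.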
Ghoshal, Nababrata; Shabnam, Sabana; DasGupta, Sudeshna; Roy, Soumen Kumar
2016-05-01
Extensive Monte Carlo simulations are performed to investigate the critical properties of a special singular point usually known as the Landau point. The singular behavior is studied in the case when the order parameter is a tensor of rank 2. Such an order parameter is associated with a nematic-liquid-crystal phase. A three-dimensional lattice dispersion model that exhibits a direct biaxial nematic-to-isotropic phase transition at the Landau point is thus chosen for the present study. Finite-size scaling and cumulant methods are used to obtain precise values of the critical exponent ν=0.713(4), the ratio γ/ν=1.85(1), and the fourth-order critical Binder cumulant U^{*}=0.6360(1). Estimated values of the exponents are in good agreement with renormalization-group predictions.
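The fourth-order Binder cumulant quoted above is U = 1 - <m^4> / (3 <m^2>^2), computed from samples of a scalar projection of the order parameter; it equals 2/3 for a sharp two-delta distribution at ±m0 and 0 for a zero-mean Gaussian, which is what makes its crossing point useful in finite-size scaling. A minimal sketch:

```python
def binder_cumulant(samples):
    # U = 1 - <m^4> / (3 <m^2>^2), estimated from a list of order-parameter
    # samples: 2/3 in the sharply ordered limit, 0 for Gaussian fluctuations.
    n = len(samples)
    m2 = sum(m * m for m in samples) / n
    m4 = sum(m ** 4 for m in samples) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

u_ordered = binder_cumulant([1.0, -1.0, 1.0, -1.0])  # ordered-phase limit
```

In practice U is computed per lattice size and temperature, and the size-independent crossing locates the transition; the value U* = 0.6360(1) reported above is such a crossing value.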
Garrett, Sierra; Barker, Alyson; Parkhurst, Rachel; Rogers, Warren; Kuchera, Anthony; MoNA Collaboration
2014-09-01
The LISA Commissioning experiment, conducted at NSCL at Michigan State University, used the Modular Neutron Array (MoNA) and the Large multi-Institutional Scintillator Array (LISA) in conjunction with the Sweeper Magnet and Detector Chamber, in order to investigate unbound excited states of 24O produced by proton knockout from a secondary 26F beam. Experimental energy spectra for the 24O --> 23O + n decays were obtained through invariant mass spectroscopy using neutron and charged fragment trajectories and energies following decay. GEANT4-based Monte Carlo simulations, which included MENATE_R for modeling neutron scattering, and STMONA developed by the MoNA group at NSCL, were used to take into account specific reaction dynamics and geometry, as well as all detector acceptances and efficiencies, in order to extract individual decay energies and widths from our experimental data. Results for this decay will be presented. Work Supported by NSF Grant PHY-1101745.
A Monte Carlo algorithm for simulating fermions on Lefschetz thimbles
Alexandru, Andrei; Bedaque, Paulo
2016-01-01
A possible solution of the notorious sign problem preventing direct Monte Carlo calculations for systems with non-zero chemical potential is to deform the integration region in the complex plane to a Lefschetz thimble. We investigate this approach for a simple fermionic model. We introduce an easy to implement Monte Carlo algorithm to sample the dominant thimble. Our algorithm relies only on the integration of the gradient flow in the numerically stable direction, which gives it a distinct advantage over the other proposed algorithms. We demonstrate the stability and efficiency of the algorithm by applying it to an exactly solvable fermionic model and compare our results with the analytical ones. We report a very good agreement for a certain region in the parameter space where the dominant contribution comes from a single thimble, including a region where standard methods suffer from a severe sign problem. However, we find that there are also regions in the parameter space where the contribution from multiple...
Estimation of beryllium ground state energy by Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Kabir, K. M. Ariful [Department of Physical Sciences, School of Engineering and Computer Science, Independent University, Bangladesh (IUB) Dhaka (Bangladesh); Halder, Amal [Department of Mathematics, University of Dhaka Dhaka (Bangladesh)
2015-05-15
Quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schrödinger equation for atoms, molecules, solids and a variety of model systems. Using the variational Monte Carlo method we have calculated the ground state energy of the beryllium atom. Our calculations are based on a modified four-parameter trial wave function, which leads to good results compared with the few-parameter trial wave functions presented before. Based on random numbers we can generate a large sample of electron locations to estimate the ground state energy of beryllium. Our calculation gives a good estimate of the ground state energy of the beryllium atom compared with the corresponding exact data.
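The variational Monte Carlo procedure described here (Metropolis sampling of |psi|^2, then averaging the local energy) can be sketched on a 1-D harmonic oscillator instead of beryllium: with trial function psi(x) = exp(-alpha x^2) the local energy is E_L = alpha + x^2 (1/2 - 2 alpha^2), and at the optimal alpha = 1/2 the estimator has zero variance. The model and parameters are illustrative, not from the paper.

```python
import math
import random

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=3):
    # Variational Monte Carlo for the 1-D harmonic oscillator with trial
    # function psi(x) = exp(-alpha x^2): Metropolis-sample |psi|^2 and
    # average the local energy E_L = alpha + x^2 (1/2 - 2 alpha^2).
    rng = random.Random(seed)
    x = 0.0
    total = 0.0
    for _ in range(n_steps):
        xt = x + rng.uniform(-step, step)
        # Acceptance ratio |psi(xt)|^2 / |psi(x)|^2 = exp(-2 alpha (xt^2 - x^2)).
        if rng.random() < math.exp(min(0.0, -2.0 * alpha * (xt * xt - x * x))):
            x = xt
        total += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return total / n_steps

e_optimal = vmc_energy(0.5)   # zero-variance point: local energy is constant
e_off = vmc_energy(0.3)       # any other alpha yields a higher average energy
```

The shrinking variance of the local energy as the trial function approaches an eigenstate is the same mechanism that rewards the paper's four-parameter wave function.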
Monte Carlo simulation of quantum Zeno effect in the brain
Georgiev, Danko
2014-01-01
Environmental decoherence appears to be the biggest obstacle for successful construction of quantum mind theories. Nevertheless, the quantum physicist Henry Stapp promoted the view that the mind could utilize quantum Zeno effect to influence brain dynamics and that the efficacy of such mental efforts would not be undermined by environmental decoherence of the brain. To address the physical plausibility of Stapp's claim, we modeled the brain using quantum tunneling of an electron in a multiple-well structure such as the voltage sensor in neuronal ion channels and performed Monte Carlo simulations of quantum Zeno effect exerted by the mind upon the brain in the presence or absence of environmental decoherence. The simulations unambiguously showed that the quantum Zeno effect breaks down for timescales greater than the brain decoherence time. To generalize the Monte Carlo simulation results for any n-level quantum system, we further analyzed the change of brain entropy due to the mind probing actions and proved ...
Monte Carlo Simulations of Neutron Oil well Logging Tools
Azcurra, M
2002-01-01
Monte Carlo simulations of simple neutron oil well logging tools in typical geological formations are presented. The simulated tools consist of both 14 MeV pulsed and continuous Am-Be neutron sources with time-gated and continuous gamma ray detectors, respectively. The geological formation consists of pure limestone with 15% absolute porosity over a wide range of oil saturation. The particle transport was performed with the Monte Carlo N-Particle Transport Code System, MCNP-4B. Several gamma ray spectra were obtained at the detector position that allow composition analysis of the formation. In particular, the ratio C/O was analyzed as an indicator of oil saturation. Further calculations are proposed to simulate actual detector responses in order to contribute to understanding the relation between the detector response and the formation composition.
Accelerated Monte Carlo simulations with restricted Boltzmann machines
Huang, Li; Wang, Lei
2017-01-01
Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.
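The core idea here, fitting a tractable surrogate to the unnormalized probability and using it to propose global Monte Carlo moves, does not depend on the RBM itself. The sketch below substitutes a hand-picked Gaussian surrogate for the trained machine, with a toy bimodal target instead of the Falicov-Kimball model (both are illustrative assumptions), and shows the Metropolis-Hastings correction that keeps the accelerated chain exact.

```python
import random, math

def target_logp(x):
    # Unnormalized log-probability of the "physical" model: a bimodal toy.
    return math.log(math.exp(-(x - 2.0)**2) + math.exp(-(x + 2.0)**2))

def surrogate_logq(x):
    # Stand-in for the trained machine: a broad Gaussian covering both modes.
    return -x * x / 8.0

def independence_mh(n_steps=50_000, seed=2):
    """Independence Metropolis-Hastings with a learned (here: assumed) proposal.

    The surrogate plays the role of the restricted Boltzmann machine: it
    proposes global moves, and the accept/reject step with the ratio
        p(x') q(x) / (p(x) q(x'))
    restores exactness with respect to the target p.
    """
    rng = random.Random(seed)
    x, acc, samples = 0.0, 0, []
    for _ in range(n_steps):
        x_new = rng.gauss(0.0, 2.0)          # draw from the surrogate q
        log_ratio = (target_logp(x_new) - target_logp(x)
                     + surrogate_logq(x) - surrogate_logq(x_new))
        if math.log(rng.random()) < log_ratio:
            x, acc = x_new, acc + 1
        samples.append(x)
    return samples, acc / n_steps

samples, rate = independence_mh()
# Both modes near x = -2 and x = +2 are visited; a small-step local random
# walk would tunnel between them exponentially slowly.
```

The better the surrogate approximates the target, the higher the acceptance rate and the shorter the autocorrelation time, which is the mechanism the paper exploits.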
Accelerate Monte Carlo Simulations with Restricted Boltzmann Machines
Huang, Li
2016-01-01
Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feedforward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine for efficient Monte Carlo updates and to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.
Variational Monte Carlo study of pentaquark states
Energy Technology Data Exchange (ETDEWEB)
Mark W. Paris
2005-07-01
Accurate numerical solution of the five-body Schrödinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian, which has spin, isospin, and color dependent pair interactions and many-body confining terms fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.
Monte Carlo simulation of neutron scattering instruments
Energy Technology Data Exchange (ETDEWEB)
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.
1998-12-01
A code package consisting of the Monte Carlo Library MCLIB, the executing code MC_RUN, the web application MC_Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC_RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
Experimental Monte Carlo Quantum Process Certification
Steffen, L; Fedorov, A; Baur, M; Wallraff, A
2012-01-01
Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data post-processing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data is compared directly to an ideal process using Monte Carlo sampling. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of two qubit gates, such as the cphase and the cnot gate, and three qubit gates, such as the Toffoli gate and two sequential cphase gates.
Gas discharges modeling by Monte Carlo technique
Directory of Open Access Journals (Sweden)
Savić Marija
2010-01-01
The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas phase ionizations by fast neutrals. In this paper we developed a Monte Carlo code that can be used to calculate secondary electron yields for different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sources Sci. Technol. 8 (1999) R1].
Monte Carlo exploration of warped Higgsless models
Energy Technology Data Exchange (ETDEWEB)
Hewett, JoAnne L.; Lillie, Benjamin; Rizzo, Thomas Gerard [Stanford Linear Accelerator Center, 2575 Sand Hill Rd., Menlo Park, CA, 94025 (United States)]. E-mail: rizzo@slac.stanford.edu
2004-10-01
We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the SU(2)_L x SU(2)_R x U(1)_{B-L} gauge group in an AdS_5 bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, approximately 10 TeV, in W_L^+ W_L^- elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned. (author)
Monte Carlo Exploration of Warped Higgsless Models
Hewett, J L; Rizzo, T G
2004-01-01
We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$ gauge group in an AdS$_5$ bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, $\simeq 10$ TeV, in $W_L^+W_L^-$ elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned.
Monte Carlo Implementation of Polarized Hadronization
Matevosyan, Hrayr H; Thomas, Anthony W
2016-01-01
We study polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading-twist quark-to-quark transverse momentum dependent (TMD) splitting functions (SFs) for the elementary $q \to q'+h$ transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank two. Further, we demonstrate that all the current spectator-type model calculations of the leading-twist quark-to-quark TMD SFs violate the positivity constraints, and propose a quark-model-based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence o...
Commensurabilities between ETNOs: a Monte Carlo survey
Marcos, C de la Fuente
2016-01-01
Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nin...
Variable length trajectory compressible hybrid Monte Carlo
Nishimura, Akihiko
2016-01-01
Hybrid Monte Carlo (HMC) generates samples from a prescribed probability distribution in a configuration space by simulating Hamiltonian dynamics, followed by the Metropolis (-Hastings) acceptance/rejection step. Compressible HMC (CHMC) generalizes HMC to a situation in which the dynamics is reversible but not necessarily Hamiltonian. This article presents a framework to further extend the algorithm. Within the existing framework, each trajectory of the dynamics must be integrated for the same amount of (random) time to generate a valid Metropolis proposal. Our generalized acceptance/rejection mechanism allows a more deliberate choice of the integration time for each trajectory. The proposed algorithm in particular enables an effective application of variable step size integrators to HMC-type sampling algorithms based on reversible dynamics. The potential of our framework is further demonstrated by another extension of HMC which reduces the wasted computations due to unstable numerical approximations and corr...
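For reference, a standard HMC step with a fixed integration time, the baseline this work generalizes, can be sketched as follows. The leapfrog integrator and the standard-normal target are illustrative choices, not the paper's setup.

```python
import random, math

def hmc_step(x, logp, grad_logp, eps=0.2, n_leap=10, rng=random):
    """One Hamiltonian Monte Carlo step for a scalar target density.

    A trajectory of Hamiltonian dynamics is integrated with the leapfrog
    scheme, then a Metropolis accept/reject on the total energy
    H = -log p(x) + p^2/2 corrects the discretization error exactly.
    """
    p0 = rng.gauss(0.0, 1.0)                 # resample momentum
    xn, pn = x, p0
    pn += 0.5 * eps * grad_logp(xn)          # initial half kick
    for _ in range(n_leap - 1):
        xn += eps * pn                       # drift
        pn += eps * grad_logp(xn)            # full kick
    xn += eps * pn
    pn += 0.5 * eps * grad_logp(xn)          # final half kick
    dH = (logp(xn) - 0.5 * pn * pn) - (logp(x) - 0.5 * p0 * p0)
    return xn if math.log(rng.random()) < dH else x

# Sample a standard normal: log p(x) = -x^2/2 up to a constant.
rng = random.Random(3)
x, samples = 0.0, []
for _ in range(20_000):
    x = hmc_step(x, lambda z: -0.5 * z * z, lambda z: -z, rng=rng)
    samples.append(x)
```

In this baseline every trajectory runs for the same n_leap steps; the article's contribution is an acceptance mechanism that remains valid when the integration time varies per trajectory.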
Lunar Regolith Albedos Using Monte Carlos
Wilson, T. L.; Andersen, V.; Pinsky, L. S.
2003-01-01
The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.
Nuclear reactions in Monte Carlo codes.
Ferrari, A; Sala, P R
2002-01-01
The physics foundations of hadronic interactions as implemented in most Monte Carlo codes are presented together with a few practical examples. The description of the relevant physics is presented schematically split into the major steps in order to stress the different approaches required for the full understanding of nuclear reactions at intermediate and high energies. Due to the complexity of the problem, only a few semi-qualitative arguments are developed in this paper. The description will be necessarily schematic and somewhat incomplete, but hopefully it will be useful for a first introduction into this topic. Examples are shown mostly for the high energy regime, where all mechanisms mentioned in the paper are at work and to which perhaps most of the readers are less accustomed. Examples for lower energies can be found in the references.
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
Geometric Monte Carlo and Black Janus Geometries
Bak, Dongsu; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil
2016-01-01
We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
Modeling neutron guides using Monte Carlo simulations
Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R
2002-01-01
Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guides' geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increases while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.
Accurate barrier heights using diffusion Monte Carlo
Krongchon, Kittithat; Wagner, Lucas K
2016-01-01
Fixed node diffusion Monte Carlo (DMC) has been performed on a test set of forward and reverse barrier heights for 19 non-hydrogen-transfer reactions, and the nodal error has been assessed. The DMC results are robust to changes in the nodal surface, as assessed by using different mean-field techniques to generate single determinant wave functions. Using these single determinant nodal surfaces, DMC results in errors of 1.5(5) kcal/mol on barrier heights. Using the large data set of DMC energies, we attempted to find good descriptors of the fixed node error. It does not correlate with a number of descriptors including change in density, but does correlate with the gap between the highest occupied and lowest unoccupied orbital energies in the mean-field calculation.
Quantum Monte-Carlo Simulations: Algorithms, Limitations and Applications
De Raedt, H.
1992-01-01
A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown.
QWalk: A Quantum Monte Carlo Program for Electronic Structure
Wagner, Lucas K; Mitas, Lubos
2007-01-01
We describe QWalk, a new computational package capable of performing Quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of Quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site http://www.qwalk.org
Quantum Monte Carlo Simulations : Algorithms, Limitations and Applications
Raedt, H. De
1992-01-01
A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown.
Reporting Monte Carlo Studies in Structural Equation Modeling
Boomsma, Anne
2013-01-01
In structural equation modeling, Monte Carlo simulations have been used increasingly over the last two decades, as an inventory from the journal Structural Equation Modeling illustrates. Reaching out to a broad audience, this article provides guidelines for reporting Monte Carlo studies in that field.
Practical schemes for accurate forces in quantum Monte Carlo
Moroni, S.; Saccani, S.; Filippi, Claudia
2014-01-01
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the simplest and most accurate reliability method; it is also the most transparent one. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient or less accurate when very low probabilities are to be computed
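The accuracy/efficiency tension for low probabilities, and how importance sampling relieves it, shows up already in a small rare-event example. The N(0,1) tail probability and the shifted proposal below are illustrative choices, not taken from the paper.

```python
import random, math

def plain_mc(n, rng):
    """Crude Monte Carlo estimate of the rare event P(X > 4), X ~ N(0,1)."""
    return sum(rng.gauss(0.0, 1.0) > 4.0 for _ in range(n)) / n

def importance_mc(n, rng):
    """Importance sampling: draw from N(4,1) and reweight by the likelihood
    ratio phi(x)/phi(x-4), concentrating samples in the rare-event region."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(4.0, 1.0)
        if x > 4.0:
            # weight = N(0,1) density / N(4,1) density at x
            total += math.exp(-0.5 * x * x + 0.5 * (x - 4.0) ** 2)
    return total / n

rng = random.Random(4)
# True value: P(X > 4) = 3.167e-5.
print(plain_mc(100_000, rng))      # very noisy: only a handful of hits
print(importance_mc(10_000, rng))  # within a few percent of 3.17e-5
```

With plain sampling the relative error at probability p scales like 1/sqrt(n*p), which is exactly the low-probability inefficiency the abstract describes; the reweighted estimator sidesteps it.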
The Monte Carlo Method. Popular Lectures in Mathematics.
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a stochastic method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.
Sensitivity of Monte Carlo simulations to input distributions
Energy Technology Data Exchange (ETDEWEB)
RamoRao, B. S.; Srikanta Mishra, S.; McNeish, J.; Andrews, R. W.
2001-07-01
The sensitivity of the results of a Monte Carlo simulation to the shapes and moments of the probability distributions of the input variables is studied. An economical computational scheme is presented as an alternative to the replicate Monte Carlo simulations and is explained with an illustrative example. (Author) 4 refs.
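A minimal illustration of such sensitivity (the exponential response model and the input choices are assumptions, not the study's actual model): two inputs with identical mean and variance but different shapes give measurably different outputs under a nonlinear model.

```python
import random, math

def propagate(sampler, n=200_000, seed=5):
    """Push an input distribution through the nonlinear model y = exp(x)
    and return the output mean. With a nonlinear response, inputs that
    share a mean and a variance can still give different answers."""
    rng = random.Random(seed)
    return sum(math.exp(sampler(rng)) for _ in range(n)) / n

# Two inputs with identical mean 0 and variance 1:
normal = lambda rng: rng.gauss(0.0, 1.0)
uniform = lambda rng: rng.uniform(-math.sqrt(3.0), math.sqrt(3.0))

print(propagate(normal))    # ~ e^{1/2} = 1.649 (lognormal mean)
print(propagate(uniform))   # ~ sinh(sqrt(3))/sqrt(3) = 1.581
```

The gap between the two output means is a direct measure of the sensitivity to the input distribution's shape, beyond its first two moments.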
Further experience in Bayesian analysis using Monte Carlo Integration
H.K. van Dijk (Herman); T. Kloek (Teun)
1980-01-01
An earlier paper [Kloek and Van Dijk (1978)] is extended in three ways. First, Monte Carlo integration is performed in a nine-dimensional parameter space of Klein's model I [Klein (1950)]. Second, Monte Carlo is used as a tool for the elicitation of a uniform prior on a finite region by
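The basic Monte Carlo integration step used in this line of work, drawing parameters from a prior and weighting by the likelihood to approximate posterior moments, can be sketched on a toy model. The Gaussian likelihood and uniform prior below are illustrative assumptions, not Klein's model I.

```python
import random, math

def posterior_mean(n=200_000, seed=6):
    """Monte Carlo integration in the spirit of Kloek and van Dijk:
    draw parameters from the prior and weight by the likelihood, so that
        E[theta | data] ~= sum(w_i * theta_i) / sum(w_i).
    """
    rng = random.Random(seed)
    data = [1.2, 0.8, 1.5, 1.1]           # toy observations, y_i ~ N(theta, 1)
    num = den = 0.0
    for _ in range(n):
        theta = rng.uniform(-5.0, 5.0)    # uniform prior on [-5, 5]
        loglik = -0.5 * sum((y - theta) ** 2 for y in data)
        w = math.exp(loglik)
        num += w * theta
        den += w
    return num / den

# With a flat prior the exact posterior mean is the sample mean, 1.15.
print(posterior_mean())
```

The ratio-of-weighted-sums form is exactly the importance-sampling estimator that later Bayesian Monte Carlo work refines with better importance functions.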
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a stochastic method that has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.
Practical schemes for accurate forces in quantum Monte Carlo
Moroni, S.; Saccani, S.; Filippi, C.
2014-01-01
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of
CERN Summer Student Report 2016 Monte Carlo Data Base Improvement
Caciulescu, Alexandru Razvan
2016-01-01
During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.
Energy Technology Data Exchange (ETDEWEB)
Richet, Y
2006-12-15
Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) of a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Eventually, the best methodologies observed in these tests are selected and make it possible to improve industrial Monte Carlo criticality calculations. (author)
A semianalytic Monte Carlo code for modelling LIDAR measurements
Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio
2007-10-01
LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements appears to be a useful approach for evaluating the effects of various environmental variables and scenarios as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these important requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. The contributions of molecular absorption and of ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected value calculations are performed. Variance reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are moreover provided by the code, enabling the user to drastically reduce the variance of the calculation.
A New Approach to Monte Carlo Simulations in Statistical Physics
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
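The random walk in energy space can be demonstrated on a toy system with a known density of states. The noninteracting-spin model below is an illustrative assumption (the method's usual testbed is the 2D Ising model), but the flat-histogram logic and the ln f refinement schedule are the same Wang-Landau scheme.

```python
import random, math

def wang_landau(n_spins=10, f_final=1e-6, flat=0.8, seed=7):
    """Wang-Landau sketch: a random walk in energy space that estimates
    the density of states g(E) directly. The toy system is n_spins
    noninteracting spins with E = number of up spins, so the exact
    answer is the binomial coefficient C(n_spins, E)."""
    rng = random.Random(seed)
    log_g = [0.0] * (n_spins + 1)
    hist = [0] * (n_spins + 1)
    spins = [0] * n_spins
    e, log_f = 0, 1.0
    while log_f > f_final:
        for _ in range(10_000):
            i = rng.randrange(n_spins)
            e_new = e + (1 if spins[i] == 0 else -1)
            # Accept with probability g(E)/g(E_new): flattens visits in E.
            if math.log(rng.random()) < log_g[e] - log_g[e_new]:
                spins[i] ^= 1
                e = e_new
            log_g[e] += log_f      # refine the running density of states
            hist[e] += 1
        if min(hist) > flat * (sum(hist) / len(hist)):
            hist = [0] * (n_spins + 1)   # histogram flat: halve ln f
            log_f *= 0.5
    # Normalize so that g(0) = 1 (the single all-down state).
    return [lg - log_g[0] for lg in log_g]

log_g = wang_landau()
exact = [math.log(math.comb(10, e)) for e in range(11)]
```

Once log g(E) is in hand, canonical averages at any temperature follow by reweighting with exp(-E/T), which is the "all thermodynamic properties" claim in the abstract.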
Monte Carlo modelling of positron transport in real world applications
Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj
2014-05-01
Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.
An Overview of the Monte Carlo Application ToolKit (MCATK)
Energy Technology Data Exchange (ETDEWEB)
Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-01-07
MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed to build specialized applications and designed to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies under the motivation to reduce costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, time-dependent algorithm, fission chain algorithms; MCATK geometry – mesh geometries, solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo applications development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.
Vectorized Monte Carlo methods for reactor lattice analysis
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Quantum Monte Carlo methods algorithms for lattice models
Gubernatis, James; Werner, Philipp
2016-01-01
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...
Barão, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focusses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving, in particular, the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.
Burrows, John
2013-04-01
An introduction to the use of the mathematical technique of Monte Carlo simulations to evaluate least squares regression calibration is described. Monte Carlo techniques involve the repeated sampling of data from a population that may be derived from real (experimental) data, but is more conveniently generated by a computer using a model of the analytical system and a randomization process to produce a large database. Datasets are selected from this population and fed into the calibration algorithms under test, thus providing a facile way of producing a sufficiently large number of assessments of the algorithm to enable a statistically valid appraisal of the calibration process to be made. This communication provides a description of the technique that forms the basis of the results presented in Parts II and III of this series, which follow in this issue, and also highlights the issues arising from the use of small data populations in bioanalysis.
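The repeated-sampling procedure this abstract describes can be illustrated with a short sketch (not the authors' code; the linear model, noise level, and calibration points below are hypothetical choices):

```python
import random

def simulate_calibrations(true_slope=2.0, true_intercept=0.5, noise_sd=0.1,
                          xs=(0.0, 0.25, 0.5, 1.0, 2.0), n_trials=2000, seed=0):
    """Repeatedly generate noisy calibration sets from a known linear model,
    fit ordinary least squares to each, and collect the fitted slopes."""
    rng = random.Random(seed)
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    slopes = []
    for _ in range(n_trials):
        # synthetic dataset drawn from the model of the analytical system
        ys = [true_intercept + true_slope * x + rng.gauss(0.0, noise_sd)
              for x in xs]
        sy = sum(ys)
        sxy = sum(x * y for x, y in zip(xs, ys))
        # closed-form least-squares slope for this dataset
        slopes.append((n * sxy - sx * sy) / (n * sxx - sx * sx))
    return slopes
```

Examining the spread of the collected slopes then provides the kind of statistical appraisal of the calibration algorithm that the abstract describes.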
Stanica, Nicolae; Cimpoesu, Fanica; Radu, Cosmin; Chihaia, Viorel; Suh, Soong-Hyuck
2015-01-01
For systematic investigation of magnetic behavior and related properties, computer simulations of extended quantum spin networks have been performed via the generalized Ising model, using the Monte Carlo Metropolis algorithm with proven efficiency. The present work, starting from a real magnetic system, provides detailed insights into finite-size effects and ferrimagnetic properties, such as the magnetic moment, ordering temperature, and magnetocaloric effects, in various 1D, 2D and 3D geometries, for different values of the spins localized on the differently coordinated sites.
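For readers unfamiliar with the Monte Carlo Metropolis algorithm mentioned above, a minimal sketch for the plain 2D Ising model follows (an illustration of the update rule only, not the generalized spin-network model of the abstract; lattice size, temperatures, and sweep counts are arbitrary):

```python
import math
import random

def metropolis_ising(L=8, T=2.0, sweeps=200, seed=1):
    """Single-spin-flip Metropolis sampling of the 2D Ising model (J = 1);
    returns the mean absolute magnetization per site."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    mags = []
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # sum of nearest neighbours with periodic boundaries
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb  # energy cost of flipping spin (i, j)
            # Metropolis acceptance rule: always accept downhill moves,
            # accept uphill moves with Boltzmann probability
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
        mags.append(abs(sum(sum(row) for row in spins)) / (L * L))
    return sum(mags) / len(mags)
```

Below the critical temperature the magnetization stays near 1; well above it, it averages close to 0.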
Energy Technology Data Exchange (ETDEWEB)
Ishmael Parsai, E., E-mail: e.parsai@utoledo.ed [University of Toledo Health Science Campus, Department of Radiation Oncology, Mail Stop 1151, 3000 Arlington Avenue, Toledo, OH 43614 (United States); Shvydka, Diana; Kang, Jun; Chan, Philip; Pearson, David; Ahmad, Faheem [University of Toledo Health Science Campus, Department of Radiation Oncology, Mail Stop 1151, 3000 Arlington Avenue, Toledo, OH 43614 (United States)
2010-12-15
We assess the accuracy of ADAC Pinnacle{sup 3} commercial treatment planning system (TPS) in computation of isodose distributions for shaped electron fields. The assessment is based on comparison of dose profiles generated by TPS and a Monte Carlo model for different beam energies, applicator sizes, and percentages of field blocking. Dose differences of up to 14% are observed at the depth of maximum dose. These discrepancies, often ignored in clinical evaluations, are attributable to inadequate modeling of scatter from applicators and blocks by TPS.
Monte Carlo studies of model Langmuir monolayers.
Opps, S B; Yang, B; Gray, C G; Sullivan, D E
2001-04-01
This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard
Energy Technology Data Exchange (ETDEWEB)
Burkatzki, Mark Thomas
2008-07-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give superior accuracy compared with other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Monte Carlo Numerical Models for Nuclear Logging Applications
Directory of Open Access Journals (Sweden)
Fusheng Li
2012-06-01
Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters, including geometry, materials, and nuclear sources, are pre-defined, and the transport and interactions of nuclear particles (such as neutrons, photons, and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied, and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models.
Monte Carlo simulation of β-γ coincidence system using plastic scintillators in 4π geometry
Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.
2007-09-01
A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory in IPEN, São Paulo, Brazil, has been applied for simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons have been calculated previously with the PENELOPE code and applied as input data to Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of the observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without need of conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional proportional counter 4πβ(PC)-γ coincidence system.
Cluster Monte Carlo simulations of the nematic-isotropic transition
Priezjev, N. V.; Pelcovits, Robert A.
2001-06-01
We report the results of simulations of the three-dimensional Lebwohl-Lasher model of the nematic-isotropic transition using a single cluster Monte Carlo algorithm. The algorithm, first introduced by Kunz and Zumbach to study two-dimensional nematics, is a modification of the Wolff algorithm for spin systems, and greatly reduces critical slowing down. We calculate the free energy in the neighborhood of the transition for systems up to linear size 70. We find a double well structure with a barrier that grows with increasing system size. We thus obtain an upper estimate of the value of the transition temperature in the thermodynamic limit.
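The cluster-update idea is easiest to see for the Ising spin model from which the Wolff algorithm originates; the sketch below is that simpler analogue (the Kunz-Zumbach/Lebwohl-Lasher variant additionally needs reflection moves for the nematic directors, which are omitted here, and all parameters are arbitrary):

```python
import math
import random

def wolff_ising(L=8, T=2.5, steps=300, seed=2):
    """Wolff single-cluster updates for the 2D Ising model (J = 1);
    returns the mean absolute magnetization per site."""
    rng = random.Random(seed)
    p_add = 1.0 - math.exp(-2.0 / T)  # bond activation probability
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    mags = []
    for _ in range(steps):
        i0, j0 = rng.randrange(L), rng.randrange(L)
        s0 = spins[i0][j0]
        cluster = {(i0, j0)}
        frontier = [(i0, j0)]
        # grow the cluster by stochastically activating same-spin bonds
        while frontier:
            i, j = frontier.pop()
            neighbours = (((i + 1) % L, j), ((i - 1) % L, j),
                          (i, (j + 1) % L), (i, (j - 1) % L))
            for ni, nj in neighbours:
                if ((ni, nj) not in cluster and spins[ni][nj] == s0
                        and rng.random() < p_add):
                    cluster.add((ni, nj))
                    frontier.append((ni, nj))
        for i, j in cluster:  # flip the entire cluster at once
            spins[i][j] = -s0
        mags.append(abs(sum(sum(row) for row in spins)) / (L * L))
    return sum(mags) / len(mags)
```

Flipping whole correlated clusters rather than single spins is what suppresses the critical slowing down mentioned in the abstract.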
Wissdorf, Walter; Seifert, Luzia; Derpmann, Valerie; Klee, Sonja; Vautz, Wolfgang; Benter, Thorsten
2013-04-01
For the comprehensive simulation of ion trajectories including reactive collisions at elevated pressure conditions, a chemical reaction simulation (RS) extension to the popular SIMION software package was developed, which is based on the Monte Carlo statistical approach. The RS extension is of particular interest to SIMION users who wish to simulate ion trajectories in collision-dominated environments such as atmospheric pressure ion sources, ion guides (e.g., funnels, transfer multipoles), chemical reaction chambers (e.g., proton transfer tubes), and/or ion mobility analyzers. It is well known that ion-molecule reaction rate constants frequently reach or exceed the collision limit obtained from kinetic gas theory. Thus, with typical ion dwell times in the above mentioned devices in the ms range, chemical transformation reactions are likely to occur. In other words, individual ions change critical parameters such as mass, mobility, and chemical reactivity en route to the analyzer, which naturally strongly affects their trajectories. The RS method simulates elementary reaction events of individual ions, reflecting the behavior of a large ensemble by a representative set of simulated reacting particles. The simulation of the proton-bound water cluster reactant ion peak (RIP) in ion mobility spectrometry (IMS) was chosen as a benchmark problem. For this purpose, the RIP was experimentally determined as a function of the background water concentration present in the IMS drift tube. It is shown that simulation and experimental data are in very good agreement, demonstrating the validity of the method.
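The elementary-reaction-event idea can be sketched for a single pseudo-first-order ion-molecule reaction (a toy illustration, not the RS extension itself; the rate constant, neutral concentration, and time step are hypothetical values):

```python
import math
import random

def simulate_ion_reactions(k=1.0e-9, conc=1.0e10, dt=1.0e-4, steps=200,
                           n_ions=5000, seed=7):
    """Per-timestep Monte Carlo treatment of a pseudo-first-order
    ion-molecule reaction A+ + M -> products: in each step, every
    surviving ion reacts with probability 1 - exp(-k * [M] * dt).
    Returns the surviving (unreacted) fraction of the ion ensemble."""
    rng = random.Random(seed)
    p_react = 1.0 - math.exp(-k * conc * dt)  # per-step reaction probability
    survivors = n_ions
    for _ in range(steps):
        reacted = sum(1 for _ in range(survivors) if rng.random() < p_react)
        survivors -= reacted
    return survivors / n_ions
```

The surviving fraction of the simulated ensemble converges to the analytical exp(-k[M]t) decay, which is the sense in which a representative set of particles reflects the behavior of the full ensemble.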
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the computer-unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged 9-67 s. The results demonstrate the successful application of the GPU-based SMC to a clinical proton treatment planning.
Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production
Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S
2004-01-01
High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...
Wierling, Christoph; Kühn, Alexander; Hache, Hendrik; Daskalaki, Andriani; Maschke-Dutz, Elisabeth; Peycheva, Svetlana; Li, Jian; Herwig, Ralf; Lehrach, Hans
2012-08-15
Cancer is known to be a complex disease and its therapy is difficult. Much information is available on molecules and pathways involved in cancer onset and progression and this data provides a valuable resource for the development of predictive computer models that can help to identify new potential drug targets or to improve therapies. Modeling cancer treatment has to take into account many cellular pathways usually leading to the construction of large mathematical models. The development of such models is complicated by the fact that relevant parameters are either completely unknown, or can at best be measured under highly artificial conditions. Here we propose an approach for constructing predictive models of such complex biological networks in the absence of accurate knowledge on parameter values, and apply this strategy to predict the effects of perturbations induced by anti-cancer drug target inhibitions on an epidermal growth factor (EGF) signaling network. The strategy is based on a Monte Carlo approach, in which the kinetic parameters are repeatedly sampled from specific probability distributions and used for multiple parallel simulations. Simulation results from different forms of the model (e.g., a model that expresses a certain mutation or mutation pattern or the treatment by a certain drug or drug combination) can be compared with the unperturbed control model and used for the prediction of the perturbation effects. This framework opens the way to experiment with complex biological networks in the computer, likely to save costs in drug development and to improve patient therapy.
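The repeated-parameter-sampling strategy can be illustrated with a deliberately tiny model (a hypothetical Michaelis-Menten response with an uncertain Michaelis constant, not the EGF signaling network of the abstract; all distributions and values are assumptions):

```python
import random

def mc_inhibition_effect(n=2000, fold=2.0, S=1.0, seed=6):
    """Sample an unknown kinetic parameter from a probability distribution,
    run the model in control and perturbed (drug-treated) form for each
    draw, and return the mean treated/control response ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        km = rng.lognormvariate(0.0, 0.5)   # uncertain parameter, sampled
        control = S / (km + S)              # normalized response (vmax cancels)
        treated = S / (fold * km + S)       # inhibitor raises the apparent Km
        total += treated / control
    return total / n
```

Comparing the distribution of perturbed outputs against the unperturbed control, rather than a single parameter fit, is the essence of the Monte Carlo approach the abstract proposes.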
Information-Geometric Markov Chain Monte Carlo Methods Using Diffusions
Directory of Open Access Journals (Sweden)
Samuel Livingstone
2014-06-01
Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed.
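A minimal sketch of a Langevin-diffusion-based MCMC method is the Metropolis-adjusted Langevin algorithm, shown here for a one-dimensional standard normal target (the Euclidean-metric special case, not the Riemannian-manifold version the review derives; step size and sample count are arbitrary):

```python
import math
import random

def mala_sample(logp, grad_logp, x0=0.0, eps=0.5, n=20000, seed=3):
    """Metropolis-adjusted Langevin algorithm (MALA) for a 1D target.

    Proposals follow one Euler step of the Langevin diffusion,
        x' = x + (eps/2) * grad log p(x) + sqrt(eps) * N(0, 1),
    and a Metropolis accept/reject step corrects the discretization bias."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n):
        mu_x = x + 0.5 * eps * grad_logp(x)
        prop = mu_x + math.sqrt(eps) * rng.gauss(0.0, 1.0)
        mu_p = prop + 0.5 * eps * grad_logp(prop)
        # log densities of the Gaussian proposal in both directions
        log_q_fwd = -((prop - mu_x) ** 2) / (2.0 * eps)
        log_q_bwd = -((x - mu_p) ** 2) / (2.0 * eps)
        log_alpha = logp(prop) - logp(x) + log_q_bwd - log_q_fwd
        if log_alpha >= 0.0 or rng.random() < math.exp(log_alpha):
            x = prop
        samples.append(x)
    return samples
```

The drift term pushes proposals toward high-probability regions, which is what distinguishes Langevin methods from random-walk Metropolis.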
The Monte Carlo method the method of statistical trials
Shreider, YuA
1966-01-01
The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
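The computation of definite integrals mentioned among the applications reduces to a few lines (an illustrative sketch; the integrand and sample count are arbitrary):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Plain Monte Carlo estimate of the definite integral of f over [a, b]:
    (b - a) times the average of f at uniformly sampled points."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n
```

The statistical-trials character of the method is visible here: the estimate converges at the O(n^-1/2) rate regardless of the dimension of the integral.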
Energy Technology Data Exchange (ETDEWEB)
Heidary, Saeed, E-mail: saeedheidary@aut.ac.ir; Setayeshi, Saeed, E-mail: setayesh@aut.ac.ir
2015-01-11
This work presents a simulation-based study by Monte Carlo which uses two adaptive neuro-fuzzy inference systems (ANFIS) for cross-talk compensation of simultaneous {sup 99m}Tc/{sup 201}Tl dual-radioisotope SPECT imaging. We have compared two neuro-fuzzy systems based on fuzzy c-means (FCM) and subtractive (SUB) clustering. Our approach incorporates eight energy-window image acquisition from 28 keV to 156 keV and the two main photopeaks of {sup 201}Tl (77 keV±10%) and {sup 99m}Tc (140 keV±10%). The Geant4 application in emission tomography (GATE) is used as a Monte Carlo simulator for three cylindrical phantoms and a NURBS-based cardiac torso (NCAT) phantom study. Three separate acquisitions, including two single-isotope and one dual-isotope, were performed in this study. Cross-talk and scatter-corrected projections are reconstructed by an iterative ordered-subsets expectation maximization (OSEM) algorithm which models the non-uniform attenuation in the projection/back-projection. ANFIS-FCM/SUB structures are tuned to create three to sixteen fuzzy rules for modeling the photon cross-talk of the two radioisotopes. Applying seven to nine fuzzy rules yields the best overall improvement in contrast and bias. The ANFIS-FCM structure is found to outperform ANFIS-SUB, owing to its faster convergence and more accurate results.
Monte Carlo simulations for heavy ion dosimetry
Energy Technology Data Exchange (ETDEWEB)
Geithner, O.
2006-07-26
Water-to-air stopping power ratio (s{sub w,air}) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variables. The lowest particle transport specific energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening its range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of stopping power ratios, s{sub w,air}, the influence of fragments and I-values on s{sub w,air} for carbon ion beams was investigated. The value of s{sub w,air} deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398 for an energy of 50 MeV/u. (orig.)
A continuation multilevel Monte Carlo algorithm
Collier, Nathan
2014-09-05
We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
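The multilevel idea underlying CMLMC (the telescoping estimator with coupled coarse/fine paths, without the Bayesian calibration and continuation machinery of the paper) can be sketched for geometric Brownian motion; the drift, volatility, and sample-count schedule below are arbitrary choices:

```python
import math
import random

def mlmc_gbm_mean(L=4, N=20000, mu=0.05, sigma=0.2, T=1.0, x0=1.0, seed=0):
    """Toy multilevel Monte Carlo estimate of E[X_T] for geometric Brownian
    motion dX = mu*X dt + sigma*X dW, via coupled coarse/fine Euler paths.

    Level l uses 2**l time steps; the telescoping sum
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is estimated with fewer
    samples on the (more expensive) finer levels."""
    rng = random.Random(seed)
    estimate = 0.0
    for lev in range(L + 1):
        nf = 2 ** lev                        # fine steps on this level
        dtf = T / nf
        n_samples = max(N // 4 ** lev, 100)  # geometric decay of sample counts
        acc = 0.0
        for _ in range(n_samples):
            xf = xc = x0
            dwc = 0.0                        # Brownian increment for coarse step
            for step in range(nf):
                dw = rng.gauss(0.0, math.sqrt(dtf))
                xf += mu * xf * dtf + sigma * xf * dw
                dwc += dw
                if lev > 0 and step % 2 == 1:
                    # coarse path uses the summed increment of two fine steps,
                    # so both paths see the same Brownian motion
                    xc += mu * xc * (2.0 * dtf) + sigma * xc * dwc
                    dwc = 0.0
            acc += xf - (xc if lev > 0 else 0.0)
        estimate += acc / n_samples
    return estimate
```

Because the coupled differences have small variance, most samples can be spent on the cheap coarse levels, which is the source of the computational savings the abstract reports.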
Monte Carlo Simulations of the Photospheric Process
Santana, Rodolfo; Hernandez, Roberto A; Kumar, Pawan
2015-01-01
We present a Monte Carlo (MC) code we wrote to simulate the photospheric process and to study the photospheric spectrum above the peak energy. Our simulations were performed with a photon to electron ratio $N_{\gamma}/N_{e} = 10^{5}$, as determined by observations of the GRB prompt emission. We searched an exhaustive parameter space to determine if the photospheric process can match the observed high-energy spectrum of the prompt emission. If we do not consider electron re-heating, we determined that the best conditions to produce the observed high-energy spectrum are low photon temperatures and high optical depths. However, for these simulations, the spectrum peaks at an energy below 300 keV by a factor $\sim 10$. For the cases we consider with higher photon temperatures and lower optical depths, we demonstrate that additional energy in the electrons is required to produce a power-law spectrum above the peak-energy. By considering electron re-heating near the photosphere, the spectrum for these simulations h...
Finding Planet Nine: a Monte Carlo approach
Marcos, C de la Fuente
2016-01-01
Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30 degrees, and an argument of perihelion of 150 degrees. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal antialignment scenario. In addition and after studying the current statistic...
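A drastically simplified version of such a synthetic-population approach draws orbital elements around the predicted values and tabulates a derived quantity; in this sketch only the semimajor axis and eccentricity are sampled, the assumed spreads are hypothetical, and the aphelion distance Q = a(1 + e) stands in for the full sky-position analysis:

```python
import random

def sample_planet_nine_orbits(n=10000, seed=5):
    """Draw synthetic (a, e) pairs around the nominal Planet Nine values and
    return the implied aphelion distances Q = a * (1 + e), in au."""
    rng = random.Random(seed)
    qs = []
    for _ in range(n):
        a = rng.uniform(600.0, 800.0)  # semimajor axis spread (assumed)
        e = rng.uniform(0.5, 0.7)      # eccentricity spread (assumed)
        qs.append(a * (1.0 + e))
    return qs
```

Histograms of such Monte Carlo samples are what allow the visibility of the putative planet to be characterized statistically rather than for a single nominal orbit.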
Atomistic Monte Carlo simulation of lipid membranes.
Wüstner, Daniel; Sklenar, Heinz
2014-01-24
Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
Parallel Monte Carlo Simulation of Aerosol Dynamics
Directory of Open Access Journals (Sweden)
Kun Zhou
2014-02-01
A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low-order moments of the particle size distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles.
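The stochastic coagulation part (the Marcus-Lushnikov process) can be sketched for the constant-kernel case; this is a serial toy version without nucleation, surface growth, operator splitting, or MPI, and the normalization of the event rate by the initial particle count is one common convention:

```python
import random

def constant_kernel_coagulation(n0=400, K=1.0, t_end=1.0, seed=4):
    """Direct simulation of Marcus-Lushnikov coagulation with a constant
    kernel: draw exponentially distributed waiting times between events,
    then merge a uniformly chosen pair of particles. Returns the final
    particle mass list (total mass is conserved)."""
    rng = random.Random(seed)
    particles = [1.0] * n0  # unit-mass monomers
    t = 0.0
    while len(particles) > 1:
        n = len(particles)
        rate = K * n * (n - 1) / (2.0 * n0)  # total event rate (toy scaling)
        t += rng.expovariate(rate)
        if t > t_end:
            break
        i, j = sorted(rng.sample(range(n), 2))
        particles[i] += particles[j]  # merge masses
        particles.pop(j)
    return particles
```

Even a few hundred simulated particles reproduce the mean-field decay of the particle number reasonably well, consistent with the abstract's observation about low-order moments.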
Monte Carlo simulations of Protein Adsorption
Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges
2008-03-01
Amyloidogenic diseases such as Alzheimer's are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules, such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix-to-sheet or helix-to-random-coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of the adsorption of a four-helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.
Monte Carlo simulations of the NIMROD diffractometer
Energy Technology Data Exchange (ETDEWEB)
Botti, A. [University of Roma TRE, Rome (Italy)]. E-mail: botti@fis.uniroma3.it; Ricci, M.A. [University of Roma TRE, Rome (Italy); Bowron, D.T. [ISIS-Rutherford Appleton Laboratory, Chilton (United Kingdom); Soper, A.K. [ISIS-Rutherford Appleton Laboratory, Chilton (United Kingdom)
2006-11-15
The near and intermediate range order diffractometer (NIMROD) has been selected as a day-one instrument on the second target station at ISIS. Uniquely, NIMROD will provide continuous access to particle separations ranging from the interatomic (<1 Å) to the mesoscopic (<300 Å). This instrument is mainly designed for structural investigations, although the possibility of putting a Fermi chopper (and corresponding NIMONIC chopper) in the incident beam line will potentially allow the performance of low-resolution inelastic scattering measurements. The performance characteristics of the TOF diffractometer have been simulated by means of a series of Monte Carlo calculations. In particular, the flux as a function of the transferred momentum Q, as well as the resolution in Q and transferred energy, have been estimated. Moreover, the possibility of including a honeycomb collimator in order to achieve better resolution has been tested. Here, we present the design of this diffractometer that will bridge the gap between wide- and small-angle neutron scattering experiments.
Monte Carlo Simulation of River Meander Modelling
Posner, A. J.; Duan, J. G.
2010-12-01
This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model highly depend on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The model couples the quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.
Commensurabilities between ETNOs: a Monte Carlo survey
de la Fuente Marcos, C.; de la Fuente Marcos, R.
2016-07-01
Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ˜700 au.
Monte Carlo simulations for focusing elliptical guides
Energy Technology Data Exchange (ETDEWEB)
Valicu, Roxana [FRM2 Garching, Muenchen (Germany)]; Boeni, Peter [E20, TU Muenchen (Germany)]
2009-07-01
The aim of the Monte Carlo simulations using the McStas program was to improve the focusing of the neutron beam existing at PGAA (FRM II) by prolongation of the existing elliptic guide (now coated with supermirrors with m = 3) with a new part. First we tried an initial length of the additional guide of 7.5 cm and neutron guide coatings of supermirrors with m = 4, 5 and 6. The gain (calculated by dividing the intensity at the focal point after adding the guide by the intensity at the focal point with the initial guide) obtained for these coatings indicated that a coating with m = 5 would be appropriate for a first trial. The next step was to vary the length of the additional guide for this m value, thereby choosing the appropriate length for the maximal gain. With the m value and the length of the guide fixed, we introduced an aperture 1 cm before the focal point and varied the radius of this aperture in order to obtain a focused beam. We observed a dramatic decrease in the size of the beam at the focal point after introducing this aperture. The simulation results, the gains obtained and the evolution of the beam size will be presented.
Monte Carlo Production Management at CMS
Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni
2015-01-01
The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + Java) and relies on a CouchDB database back-end. This contribution will cover the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabi...
Monte Carlo models of dust coagulation
Zsom, Andras
2010-01-01
The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative particles to simulate dust evolution. Simulations are performed using three different disk models in a local box (0D) located at 1 AU distance from the central star. We find that the dust evolution does not follow the previously assumed growth-fragmentation cycle, but growth is halted by bouncing before the fragmentation regime is reached. We call this the bouncing barrier which is an additional obstacle during the already complex formation process of planetesimals. The absence of the growth-fragmentation cycle and the halted growth has two important consequences for planet formation. 1) It is observed that disk atmospheres are dusty thr...
Atomistic Monte Carlo Simulation of Lipid Membranes
Directory of Open Access Journals (Sweden)
Daniel Wüstner
2014-01-01
Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods, in combination with molecular dynamics simulations, for example for studying multi-component lipid membranes containing cholesterol.
Parallel Monte Carlo simulation of aerosol dynamics
Zhou, K.
2014-01-01
A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the particle size distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires dramatically increasing the number of MC particles.
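The stochastic (Marcus-Lushnikov) part of such an algorithm can be illustrated with a minimal Gillespie-style simulation for a constant coagulation kernel, where the exact mean-field solution is known and can serve as a check. This is a generic sketch, not the paper's algorithm: the kernel value, particle count, and volume normalization are illustrative.

```python
import random

def coagulation_mc(n0, kernel, t_end, seed=0):
    """Marcus-Lushnikov stochastic coagulation with a constant kernel:
    any pair among the n current particles merges at rate `kernel`, so
    the total event rate is kernel * n * (n - 1) / 2 (Gillespie steps).
    Only the particle count n(t) is tracked in this toy version."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n > 1:
        rate = kernel * n * (n - 1) / 2.0     # total coagulation rate
        t += rng.expovariate(rate)            # waiting time to next merger
        if t > t_end:
            break
        n -= 1                                # one pair merges
    return n

# Smoluchowski mean-field solution for a constant kernel:
#   n(t) = n0 / (1 + kernel * n0 * t / 2); parameters give n(t_end) = n0/2
n0, kernel, t_end = 1000, 2e-3, 1.0
runs = [coagulation_mc(n0, kernel, t_end, seed=s) for s in range(20)]
mean_n = sum(runs) / len(runs)
```

Averaging a handful of realizations already reproduces the mean-field particle count to within a few percent, matching the abstract's observation that low-order moments need few MC particles.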
Measuring Berry curvature with quantum Monte Carlo
Kolodrubetz, Michael
2014-01-01
The Berry curvature and its descendant, the Berry phase, play an important role in quantum mechanics. They can be used to understand the Aharonov-Bohm effect, define topological Chern numbers, and generally to investigate the geometric properties of a quantum ground state manifold. While Berry curvature has been well-studied in the regimes of few-body physics and non-interacting particles, its use in the regime of strong interactions is hindered by the lack of numerical methods to solve it. In this paper we fill this gap by implementing a quantum Monte Carlo method to solve for the Berry curvature, based on interpreting Berry curvature as a leading correction to imaginary time ramps. We demonstrate our algorithm using the transverse-field Ising model in one and two dimensions, the latter of which is non-integrable. Despite the fact that the Berry curvature gives information about the phase of the wave function, we show that our algorithm has no sign or phase problem for standard sign-problem-free Hamiltonians...
Foudray, Angela M K; Habte, Frezghi; Chinn, Garry; Zhang, Jin; Levin, Craig S
2006-01-01
We are investigating a high-sensitivity, high-resolution positron emission tomography (PET) system for clinical use in the detection, diagnosis and staging of breast cancer. Using conventional figures of merit, design parameters were evaluated for count rate performance, module dead time, and construction complexity. The detector system modeled comprises extremely thin position-sensitive avalanche photodiodes coupled to lutetium oxyorthosilicate scintillation crystals. Previous investigations of detector geometries with Monte Carlo indicated that one of the largest impacts on sensitivity is local scintillation crystal density when considering systems having the same average scintillation crystal densities (same crystal packing fraction and system solid-angle coverage). Our results show the system has very good scatter and randoms rejection at clinical activity ranges (approximately 200 μCi).
Algebraic Monte Carlo procedure reduces statistical analysis time and cost factors
Africano, R. C.; Logsdon, T. S.
1967-01-01
Algebraic Monte Carlo procedure statistically analyzes performance parameters in large, complex systems. The individual effects of input variables can be isolated and individual input statistics can be changed without having to repeat the entire analysis.
ARCHER, a New Monte Carlo Software Tool for Emerging Heterogeneous Computing Environments
Xu, X. George; Liu, Tianyu; Su, Lin; Du, Xining; Riblett, Matthew; Ji, Wei; Gu, Deyang; Carothers, Christopher D.; Shephard, Mark S.; Brown, Forrest B.; Kalra, Mannudeep K.; Liu, Bob
2014-06-01
The Monte Carlo radiation transport community faces a number of challenges associated with peta- and exa-scale computing systems that rely increasingly on heterogeneous architectures involving hardware accelerators such as GPUs. Existing Monte Carlo codes and methods must be strategically upgraded to meet emerging hardware and software needs. In this paper, we describe the development of a software, called ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments), which is designed as a versatile testbed for future Monte Carlo codes. Preliminary results from five projects in nuclear engineering and medical physics are presented.
Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations
DEFF Research Database (Denmark)
Pettersen, E. E.; Demazire, C.; Jareteg, K.
2015-01-01
that corresponds to the real part of the neutron balance, and one that corresponds to the imaginary part. The two equivalent problems are in nature similar to two subcritical systems driven by external neutron sources, and can thus be treated as such in a Monte Carlo framework. The definition of these two...... of light water reactor conditions in an infinite lattice of fuel pins surrounded by water. The test case highlights flux gradients that are steeper in the Monte Carlo-based transport solution than in the diffusion-based solution. Compared to other Monte Carlo-based methods earlier proposed for carrying out...
A first look at quasi-Monte Carlo for lattice field theory problems
Jansen, K; Nube, A; Griewank, A; Mueller-Preussker, M
2012-01-01
In this project we initiate an investigation of the applicability of Quasi-Monte Carlo methods to lattice field theories in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Monte Carlo simulation behaves like 1/sqrt(N), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to up to 1/N. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
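The contrast between the 1/sqrt(N) and 1/N error behaviors can be seen on a toy integral. The sketch below is illustrative and not the lattice-field-theory application: it compares plain Monte Carlo with a base-2 van der Corput low-discrepancy sequence on the integral of x^2 over [0,1], whose exact value is 1/3.

```python
import random

def van_der_corput(n):
    """First n points of the base-2 van der Corput low-discrepancy
    sequence, obtained by reversing the binary digits of the index."""
    pts = []
    for i in range(n):
        x, denom = 0.0, 1.0
        while i > 0:
            denom *= 2.0
            x += (i % 2) / denom
            i //= 2
        pts.append(x)
    return pts

f = lambda x: x * x                # toy integrand, exact integral = 1/3
N = 1 << 14                        # 16384 observations

qmc_est = sum(f(x) for x in van_der_corput(N)) / N   # quasi-Monte Carlo
rng = random.Random(0)
mc_est = sum(f(rng.random()) for _ in range(N)) / N  # ordinary Monte Carlo
```

For N a power of two, the van der Corput points are a permutation of the regular grid i/N, so the quasi-Monte Carlo error here is of order 1/N, while the pseudo-random estimate fluctuates at the 1/sqrt(N) level.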
MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations. Additionally
Monte-Carlo simulation-based statistical modeling
Chen, John
2017-01-01
This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.
On the Markov Chain Monte Carlo (MCMC) method
Indian Academy of Sciences (India)
Rajeeva L Karandikar
2006-04-01
Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
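The core of the method can be shown with a minimal random-walk Metropolis sampler. This is a generic sketch, not code from the article; the target (an unnormalized standard normal), the proposal step size, and the burn-in length are all illustrative choices.

```python
import math
import random

def metropolis(logp, x0, n_samples, step=1.0, burn=1000, seed=0):
    """Random-walk Metropolis sampler: the chain's stationary distribution
    is p(x) proportional to exp(logp(x)); only this unnormalized density
    is needed, which is exactly the 'indirectly specified' MCMC setting."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    out = []
    for i in range(n_samples + burn):
        prop = x + rng.gauss(0.0, step)          # symmetric proposal
        lp_prop = logp(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        if i >= burn:
            out.append(x)
    return out

# Sample from an unnormalized standard normal, p(x) ∝ exp(-x^2 / 2)
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 50000)
```

The sample mean and variance converge to those of the target even though the normalizing constant is never computed.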
Bayesian phylogeny analysis via stochastic approximation Monte Carlo
Cheon, Sooyoung; Liang, Faming
2009-11-01
Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time.
Monte Carlo techniques for analyzing deep penetration problems
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.; Gonnord, J.; Hendricks, J.S.
1985-01-01
A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.
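The exponential-transformation biasing reviewed above can be illustrated on a toy deep-penetration problem: estimating the probability that an exponentially distributed free path exceeds a large depth. This is a generic importance-sampling sketch, not code from the review; the depth, biased rate, and sample count are illustrative.

```python
import math
import random

def biased_transmission(depth, n, lam, seed=0):
    """Importance-sampling estimate of P(X > depth) for X ~ Exp(1):
    paths are drawn from a stretched exponential of rate lam < 1, and
    each scoring history carries the likelihood-ratio weight p(x)/q(x).
    This mimics exponential-transformation biasing in deep penetration."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)                         # stretched path
        if x > depth:
            total += math.exp(-(1.0 - lam) * x) / lam    # weight p(x)/q(x)
    return total / n

depth = 20.0
exact = math.exp(-depth)      # ≈ 2e-9: essentially unreachable by analog MC
est = biased_transmission(depth, n=100000, lam=1.0 / depth)
```

An analog simulation would need on the order of 10^10 histories to score a single transmission; the biased estimator reaches a few-percent relative error with 10^5 histories.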
Monte Carlo simulations: Hidden errors from "good" random number generators
Ferrenberg, Alan M.; Landau, D. P.; Wong, Y. Joanna
1992-12-01
The Wolff algorithm is now accepted as the best cluster-flipping Monte Carlo algorithm for beating "critical slowing down." We show how this method can yield incorrect answers due to subtle correlations in "high quality" random number generators.
An Introduction to Multilevel Monte Carlo for Option Valuation
Higham, Desmond J
2015-01-01
Monte Carlo is a simple and flexible tool that is widely used in computational finance. In this context, it is common for the quantity of interest to be the expected value of a random variable defined via a stochastic differential equation. In 2008, Giles proposed a remarkable improvement to the approach of discretizing with a numerical method and applying standard Monte Carlo. His multilevel Monte Carlo method offers a speed-up by a factor of the inverse of epsilon, where epsilon is the required accuracy, so computations can run 100 times more quickly when two digits of accuracy are required. The multilevel philosophy has since been adopted by a range of researchers, and a wealth of practically significant results has arisen, most of which have yet to make their way into the expository literature. In this work, we give a brief, accessible introduction to multilevel Monte Carlo and summarize recent results applicable to the task of option valuation.
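The multilevel idea can be sketched for a European call under geometric Brownian motion with Euler time stepping. This is a minimal illustration, not Giles' full algorithm (in particular, the per-level sample counts are fixed rather than chosen adaptively from the level variances); all model parameters are illustrative.

```python
import numpy as np

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes European call price, used as the reference value."""
    from math import log, sqrt, erf, exp
    d1 = (log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return s0 * phi(d1) - k * exp(-r * t) * phi(d2)

def mlmc_call(s0, k, r, sigma, t, levels=4, m0=4, n0=200000, seed=0):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    where level l uses m0 * 2^l Euler steps and each correction couples a
    fine and a coarse path driven by the same Brownian increments."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for lev in range(levels + 1):
        n = max(n0 // 4 ** lev, 100)          # illustrative sample decay
        mf = m0 * 2 ** lev                    # fine-level time steps
        dt = t / mf
        dw = rng.normal(0.0, np.sqrt(dt), size=(n, mf))
        sf = np.full(n, s0)
        for j in range(mf):                   # fine Euler path
            sf = sf * (1.0 + r * dt + sigma * dw[:, j])
        pf = np.exp(-r * t) * np.maximum(sf - k, 0.0)
        if lev == 0:
            est += pf.mean()
        else:
            dwc = dw[:, 0::2] + dw[:, 1::2]   # coarse increments (summed)
            sc = np.full(n, s0)
            for j in range(mf // 2):          # coupled coarse Euler path
                sc = sc * (1.0 + r * 2 * dt + sigma * dwc[:, j])
            pc = np.exp(-r * t) * np.maximum(sc - k, 0.0)
            est += (pf - pc).mean()           # level correction
    return est

price = mlmc_call(100.0, 100.0, 0.05, 0.2, 1.0)
ref = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)   # ≈ 10.45
```

Because the level corrections have small variance, most samples are spent on the cheap coarse level, which is the source of the epsilon-inverse speed-up.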
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
1995-01-01
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
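The classical Hill estimator, together with a naive bootstrap of it, can be sketched as follows. This is an illustration of the ingredients only, not the paper's specific Monte Carlo estimator; the Pareto test data, the choice k = 500, and the resample count are arbitrary.

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimate of the tail index 1/alpha from the k largest
    order statistics of a heavy-tailed sample."""
    xs = sorted(data, reverse=True)
    return sum(math.log(xs[i] / xs[k]) for i in range(k)) / k

def bootstrap_hill(data, k, n_boot=100, seed=1):
    """Naive bootstrap: resample the data with replacement and recompute
    the Hill estimator, giving a spread for the point estimate."""
    rng = random.Random(seed)
    n = len(data)
    return [hill_estimator([data[rng.randrange(n)] for _ in range(n)], k)
            for _ in range(n_boot)]

# Pareto(alpha = 2) sample via inverse transform: X = U^(-1/alpha)
rng = random.Random(0)
alpha = 2.0
data = [rng.random() ** (-1.0 / alpha) for _ in range(5000)]
h = hill_estimator(data, k=500)       # should be near 1/alpha = 0.5
boot = bootstrap_hill(data, k=500)
```

For exact Pareto data the Hill estimator is centered on 1/alpha, and the bootstrap replicates cluster tightly around the point estimate.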
Using Supervised Learning to Improve Monte Carlo Integral Estimation
Tracey, Brendan; Alonso, Juan J
2011-01-01
Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than ...
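The fitting-plus-cross-validation idea can be sketched as a cross-fitted control variate. This is a simplified illustration in the spirit of StackMC, not the authors' implementation; the integrand, the polynomial surrogate, and the two-fold scheme are arbitrary choices for the example.

```python
import numpy as np

def stacked_mc(f, n=20000, deg=2, seed=0):
    """Cross-fitted control variate: fit a polynomial surrogate g on one
    half of the samples, integrate g exactly over [0, 1], and correct with
    the Monte Carlo average of f - g on the other half (then swap folds).
    Each fold's surrogate is independent of the samples it corrects, so
    no bias is introduced, mirroring the cross-validation idea."""
    rng = np.random.default_rng(seed)
    x = rng.random(n)
    y = f(x)
    half = n // 2
    folds = [(slice(0, half), slice(half, n)),
             (slice(half, n), slice(0, half))]
    ests = []
    for fit_idx, eval_idx in folds:
        coeffs = np.polyfit(x[fit_idx], y[fit_idx], deg)
        # exact integral of the fitted polynomial over [0, 1]
        g_int = sum(c / (deg - i + 1) for i, c in enumerate(coeffs))
        g_eval = np.polyval(coeffs, x[eval_idx])
        ests.append(g_int + (y[eval_idx] - g_eval).mean())
    return 0.5 * (ests[0] + ests[1])

est = stacked_mc(np.exp)              # true value is e - 1 ≈ 1.71828
```

Because only the residual f - g is averaged stochastically, the variance drops by roughly the quality of the surrogate fit, without biasing the estimate.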
Accelerating Monte Carlo Renderers by Ray Histogram Fusion
Directory of Open Access Journals (Sweden)
Mauricio Delbracio
2015-03-01
This paper details the recently introduced Ray Histogram Fusion (RHF) filter for accelerating Monte Carlo renderers [M. Delbracio et al., Boosting Monte Carlo Rendering by Ray Histogram Fusion, ACM Transactions on Graphics, 33 (2014)]. In this filter, each pixel in the image is characterized by the colors of the rays that reach its surface. Pixels are compared using a statistical distance on the associated ray color distributions. Based on this distance, the filter decides whether two pixels can share their rays or not. The RHF filter is consistent: as the number of samples increases, more evidence is required to average two pixels. The algorithm provides a significant gain in PSNR, or equivalently accelerates the rendering process by using many fewer Monte Carlo samples without observable bias. Since the RHF filter depends only on the Monte Carlo sample color values, it can be naturally combined with all rendering effects.
Optical coherence tomography: Monte Carlo simulation and improvement by optical amplification
DEFF Research Database (Denmark)
Tycho, Andreas
2002-01-01
An advanced novel Monte Carlo simulation model of the detection process of an optical coherence tomography (OCT) system is presented. For the first time it is shown analytically that the applicability of the incoherent Monte Carlo approach to model the heterodyne detection process of an OCT system...... model of the OCT signal. The OCT signal from a scattering medium are obtained for several beam and sample geometries using the new Monte Carlo model, and when comparing to results of an analytical model based on the extended Huygens-Fresnel principle excellent agreement is obtained. With the greater...... flexibility of Monte Carlo simulations, this new model is demonstrated to be excellent as a numerical phantom, i.e., as a substitute for otherwise difficult experiments. Finally, a new model of the signal-to-noise ratio (SNR) of an OCT system with optical amplification of the light reflected from the sample...
Novel Extrapolation Method in the Monte Carlo Shell Model
Shimizu, Noritaka; Mizusaki, Takahiro; Otsuka, Takaharu; Abe, Takashi; Honma, Michio
2010-01-01
We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model in order to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full $pf$-shell calculation of $^{56}$Ni, and the applicability of the method to a system beyond current limit of exact diagonalization is shown for the $pf$+$g_{9/2}$-shell calculation of $^{64}$Ge.
Monte Carlo estimation of the number of tatami tilings
Kimura, Kenji
2016-01-01
Motivated by the way Japanese tatami mats are placed on the floor, we consider domino tilings with a constraint and estimate the number of such tilings of plane regions. We map the system onto a monomer-dimer model with a novel local interaction on the dual lattice. We use a variant of the Hamiltonian replica exchange Monte Carlo method and the multi-parameter reweighting technique to study the model. This allows the number of tilings to be studied beyond the reach of exact enumeration and combinatorial methods. The logarithm of the number of tilings is linear in the boundary length of the region for all the regions studied.
Non-Boltzmann Ensembles and Monte Carlo Simulations
Murthy, K. P. N.
2016-10-01
Boltzmann sampling based on the Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system is made by averaging over a large number of microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy is not easily accessible to these methods. The reason is simple: we cannot assign a numerical value for entropy to a microstate. Entropy is not a property associated with any single microstate; it is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses, and we now have multicanonical Monte Carlo, entropic sampling, flat histogram methods, the Wang-Landau algorithm, etc. This class of methods generates non-Boltzmann ensembles, which are unphysical. However, physical quantities can be calculated as follows: first un-weight a microstate of the entropic ensemble; then re-weight it to the desired physical ensemble; finally, carry out a weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall tell you of the most recent non-Boltzmann Monte Carlo method and show how to calculate free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize the phase transition in a hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of order parameter, and to this end we estimate the density of states g(E, M) as a function of both energy E and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage. Employing g
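The flat-histogram idea behind the Wang-Landau algorithm can be illustrated on a toy system whose density of states is known exactly: n coins with "energy" E equal to the number of heads, for which g(E) is the binomial coefficient C(n, E). This is a generic sketch, not the talk's method; the flatness threshold, modification-factor schedule, and sweep length are illustrative choices.

```python
import math
import random

def wang_landau_coins(n_coins=10, flat=0.8, ln_f_final=1e-5, seed=0):
    """Wang-Landau estimate of ln g(E) for n_coins coins, E = number of
    heads. Moves flip one coin; acceptance min(1, g(E)/g(E')) drives the
    walk toward a flat energy histogram while ln g is built up on the fly."""
    rng = random.Random(seed)
    n_e = n_coins + 1
    ln_g = [0.0] * n_e
    hist = [0] * n_e
    state = [0] * n_coins
    e = 0
    ln_f = 1.0                                     # modification factor
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.randrange(n_coins)             # propose one coin flip
            e_new = e + (1 if state[i] == 0 else -1)
            if math.log(rng.random()) < ln_g[e] - ln_g[e_new]:
                state[i] = 1 - state[i]
                e = e_new
            ln_g[e] += ln_f                        # update density of states
            hist[e] += 1
        if min(hist) > flat * (sum(hist) / n_e):   # flat-histogram check
            ln_f *= 0.5
            hist = [0] * n_e
    # normalize so that sum_E g(E) = 2^n_coins
    shift = max(ln_g)
    z = sum(math.exp(v - shift) for v in ln_g)
    offset = n_coins * math.log(2.0) - (shift + math.log(z))
    return [v + offset for v in ln_g]

ln_g = wang_landau_coins()
exact = [math.log(math.comb(10, e)) for e in range(11)]
```

Once g(E) is known, canonical averages at any temperature follow by re-weighting with the Boltzmann factor, which is exactly the un-weight/re-weight step described above.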
Computed radiography simulation using the Monte Carlo code MCNPX
Energy Technology Data Exchange (ETDEWEB)
Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)
2010-09-15
Simulating X-ray images has been of great interest in recent years as it makes possible an analysis of how X-ray images are affected owing to relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector as well as the characteristic noise of a 16-bit computed radiography system were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it allows obtaining results comparable with experimental data.
Monte Carlo conformal treatment planning as an independent assessment
Energy Technology Data Exchange (ETDEWEB)
Rincon, M.; Leal, A.; Perucha, M.; Carrasco, E. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica; Sanchez-Doblado, F. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica]|[Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Arrans, R.; Sanchez-Calzado, J.A.; Errazquin, L. [Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Medrano, J.C. [Clinica Puerta de Hierro, Madrid (Spain). Servicio de Radiofisica
2001-07-01
The wide range of possibilities available in Radiotherapy with conformal fields cannot be covered experimentally. For this reason, dosimetrical and planning procedures are based on approximate algorithms or systematic measurements. Dose distribution calculations based on Monte Carlo (MC) simulations can be used to check results. In this work, two examples of conformal field treatments are shown: A prostate carcinoma and an ocular lymphoma. The dose distributions obtained with a conventional Planning System and with MC have been compared. Some significant differences have been found. (orig.)
Monte Carlo simulation of charge mediated magnetoelectricity in multiferroic bilayers
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Álvarez, H.H. [Universidad de Caldas, Manizales (Colombia); Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Bedoya-Hincapié, C.M. [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Universidad Santo Tomás, Bogotá (Colombia); Restrepo-Parra, E., E-mail: erestrepopa@unal.edu.co [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia)
2014-12-01
Simulations of a bilayer ferroelectric/ferromagnetic multiferroic system were carried out, based on the Monte Carlo method and Metropolis dynamics. A generic model was implemented with a Janssen-like Hamiltonian, taking into account magnetoelectric interactions due to charge accumulation at the interface. Two different magnetic exchange constants were considered for accumulation and depletion states. Several screening lengths were also included. Simulations exhibit considerable magnetoelectric effects not only at low temperature, but also at temperature near to the transition point of the ferromagnetic layer. The results match experimental observations for this kind of structure and mechanism.