Sample records for vim monte carlo

  1. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L


    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem…
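
The "Buffon's needle" experiment mentioned above is easy to reproduce. The following sketch (function name and parameters are illustrative, not from the book) estimates π from simulated needle drops:

```python
import math
import random

def buffon_pi(n_throws=200_000, needle_len=1.0, line_gap=1.0, seed=1):
    """Estimate pi via Buffon's needle: a needle of length L dropped on a
    plane ruled with parallel lines a distance d apart (L <= d) crosses a
    line with probability 2L / (pi * d)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        x = rng.uniform(0.0, line_gap / 2.0)     # centre-to-nearest-line distance
        theta = rng.uniform(0.0, math.pi / 2.0)  # needle angle w.r.t. the lines
        if x <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    # Invert the crossing probability: pi ~= 2 * L * n / (d * hits)
    return 2.0 * needle_len * n_throws / (line_gap * hits)
```

With 200,000 throws the estimate typically lands within about 0.01 of π; the error shrinks only as 1/sqrt(n), which is the usual Monte Carlo convergence rate.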

  2. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)


    Major advances of the MC techniques were made during World War II by scientists such as John von Neumann, Enrico Fermi, S. M. Ulam and Nicholas Metropolis, working on the development of nuclear weapons at Los Alamos National Laboratory, USA [10, 11]. 3. Monte Carlo Integration Using Importance Sampling.
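
Importance sampling, the topic of the section cited above, can be illustrated with a standard toy problem. The function below is a generic sketch (not taken from the article): it estimates the Gaussian tail probability P(Z > 4), which plain Monte Carlo would need tens of millions of samples to resolve, by drawing from a proposal concentrated in the tail and averaging the likelihood ratio:

```python
import math
import random

def gaussian_tail_is(a=4.0, n=100_000, seed=2):
    """Estimate P(Z > a) for Z ~ N(0,1) by importance sampling: draw from a
    shifted exponential proposal g(y) = a * exp(-a * (y - a)) on (a, inf)
    and average the importance weight phi(y) / g(y)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = a + rng.expovariate(a)  # proposal sample, always in the tail
        phi = math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)  # target density
        g = a * math.exp(-a * (y - a))                           # proposal density
        total += phi / g
    return total / n
```

The exact value is 1 − Φ(4) ≈ 3.167e-5; the importance-sampled estimate reaches sub-percent relative error with only 10^5 samples because every sample lands in the region of interest.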

  3. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    GENERAL ARTICLE. Markov Chain Monte Carlo. 1. ... Note that I have numbered only those squares that are 'stable', i.e., do not have ... probability approaches 1 in the limit as t tends to infinity was obvious even without all this mathematics, since it is a common experience that all games of Ludo eventually end...
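
The Ludo observation, that the probability of the game having ended approaches 1 as the number of turns grows, can be checked with a small simulation. The game below is a hypothetical, simplified single-player stand-in (move forward by a fair die roll each turn, finish on reaching the last square), not the article's example:

```python
import random

def frac_finished(turn_limit, n_games=20_000, board_len=50, seed=3):
    """Fraction of simplified Ludo-like games that end within
    `turn_limit` turns: each turn the token advances by a fair die
    roll, and the game finishes on reaching the last square."""
    rng = random.Random(seed)
    done = 0
    for _ in range(n_games):
        pos, turns = 0, 0
        while pos < board_len and turns < turn_limit:
            pos += rng.randint(1, 6)
            turns += 1
        if pos >= board_len:
            done += 1
    return done / n_games
```

Running `frac_finished` for increasing turn limits shows the finishing probability climbing toward 1, the absorption behaviour the article describes.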

  4. Monte Carlo and nonlinearities

    CERN Document Server

    Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian


    The Monte Carlo method is widely used to numerically predict system behaviour. However, its powerful incremental design assumes a strong premise which has so far severely limited its application: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities onto a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant production, we prove the real-world usability of this advance on four test cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...

  5. Improved diffusion Monte Carlo


    Hairer, Martin; Weare, Jonathan


    We propose a modification, based on the RESTART (repetitive simulation trials after reaching thresholds) and DPR (dynamics probability redistribution) rare event simulation algorithms, of the standard diffusion Monte Carlo (DMC) algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the "naïve" generalisation of the standard algorithm would be impractical, due to an exponential e...

  6. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]


    This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection sampling), and An example from particle transport.
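
Of the sampling techniques named in the outline, inverse transform sampling is the simplest to sketch (illustrative code, not taken from the presentation): invert the target CDF and feed it uniform random numbers.

```python
import math
import random

def sample_exponential(lam, n, seed=4):
    """Inverse transform sampling: if U ~ Uniform(0,1), then
    X = -ln(1 - U) / lam has CDF F(x) = 1 - exp(-lam * x),
    i.e. X is exponentially distributed with rate lam."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]
```

A quick sanity check: with rate 2 the sample mean should be close to 1/2, the mean of the target distribution.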

  7. LMC: Logarithmantic Monte Carlo

    (United States)

    Mantz, Adam B.


    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).
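
As a minimal illustration of the Metropolis-Hastings family that LMC implements, here is a bare-bones random-walk Metropolis sampler targeting a standard normal (a generic sketch, not LMC's actual API):

```python
import math
import random

def metropolis_gauss(n_steps=50_000, step=1.0, seed=5):
    """Random-walk Metropolis targeting N(0,1): propose x' = x + N(0, step^2)
    and accept with probability min(1, pi(x') / pi(x)).  The proposal is
    symmetric, so the Hastings correction cancels."""
    rng = random.Random(seed)
    log_pi = lambda x: -0.5 * x * x  # unnormalised log target density
    x, chain = 0.0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        if rng.random() < math.exp(min(0.0, log_pi(prop) - log_pi(x))):
            x = prop  # accept; otherwise keep the current state
        chain.append(x)
    return chain
```

Only the unnormalised density is needed, which is exactly why MCMC suits the expensive black-box likelihoods LMC targets: the chain's empirical mean and variance converge to those of the target.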

  8. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory]


    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  9. Monte Carlo methods for electromagnetics

    CERN Document Server

    Sadiku, Matthew N. O.


    Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole-field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...

  10. Metropolis Methods for Quantum Monte Carlo Simulations


    Ceperley, D.M.


    Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e. diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...

  11. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal


    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathematical...

  12. Wormhole Hamiltonian Monte Carlo

    (United States)

    Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak


    In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function.

  13. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay


    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
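
The telescoping idea described above can be sketched on a toy problem. The rounding-based level hierarchy below is an illustrative stand-in (far simpler than the discretized probability laws the article considers): level l rounds the input to a grid of spacing 2^-l, coupled differences P_l − P_{l−1} share the same sample, and most samples are spent on the cheap coarse levels.

```python
import math
import random

def mlmc_estimate(f, max_level=6, seed=6):
    """Toy multilevel Monte Carlo for E[f(U)], U ~ Uniform(0,1).  The level-l
    approximation P_l rounds U down to a grid of spacing 2**-l before applying
    f; the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is
    estimated with coupled samples (each difference reuses the same U) and
    fewer samples on the finer, lower-variance levels."""
    rng = random.Random(seed)

    def approx(u, level):
        h = 2.0 ** -level
        return f(math.floor(u / h) * h)

    samples_per_level = [50_000 // (2 ** l) + 100 for l in range(max_level + 1)]
    estimate = 0.0
    for level, n in enumerate(samples_per_level):
        acc = 0.0
        for _ in range(n):
            u = rng.random()
            if level == 0:
                acc += approx(u, 0)
            else:
                acc += approx(u, level) - approx(u, level - 1)
        estimate += acc / n
    return estimate
```

For f = exp the true value is e − 1 ≈ 1.718; the multilevel estimator reaches comparable accuracy to single-level sampling at the finest grid while evaluating the fine approximation only a few hundred times.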

  14. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros


    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  15. Proton Upset Monte Carlo Simulation

    (United States)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.


    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  16. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Monte Carlo is a city in Monaco, famous for its casinos offering games of chance. Games of chance exhibit random behaviour, much like the random variables generated for the statistical simulation exercises. Early ideas of probability and simulation were developed in the context of gambling here, and hence these simulations...

  17. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo Methods. 2. The Markov Chain Case. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at Cornell University. His research interests include mathematical analysis, probability theory and its applications, and statistics. He enjoys writing for Resonance. His spare time is ...

  18. Exact Monte Carlo for molecules

    Energy Technology Data Exchange (ETDEWEB)

    Lester, W.A. Jr.; Reynolds, P.J.


    A brief summary of the fixed-node quantum Monte Carlo method is presented. Results obtained for binding energies, the classical barrier height for H + H2, and the singlet-triplet splitting in methylene are presented and discussed. 17 refs.

  19. Monte Carlo calculations of nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Pieper, S.C. [Argonne National Lab., IL (United States). Physics Div.]


    Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

  20. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Introduction. In parts 1 and 2 of this series it was shown how Markov chain Monte Carlo (MCMC) methods can be employed to obtain satisfactory approximations for integrals that are not easy to evaluate analytically. Such integrals arise routinely in statistical problems. Some of the statistical concepts that are relevant for the ...

  1. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.

  2. Monte Carlo Particle Lists: MCPL

    DEFF Research Database (Denmark)

    Kittelmann, Thomas; Klinkby, Esben Bryndt; Bergbäck Knudsen, Erik


    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages. Program summary: Program Title: MCPL. Program Files doi: Licensing provisions: CC0 for core MCPL, see LICENSE file for details. Programming language: C and C++. External routines/libraries: Geant4, MCNP, McStas, McXtrace. Nature of problem: Saving particle states in Monte Carlo simulations, for interchange between simulation packages or for reuse within a single package. Solution method: Binary interchange format with associated code written in portable C, along with tools and interfaces for relevant simulation packages.

  3. Shell model Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Koonin, S.E. [California Inst. of Tech., Pasadena, CA (United States). W.K. Kellogg Radiation Lab.]; Dean, D.J. [Oak Ridge National Lab., TN (United States)]


    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs.

  4. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H


    This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov, Raúl Tempone et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL⁻³) for a single-level version of the adaptive algorithm to O((TOL⁻¹ log(TOL))²).

  5. The fermion Monte Carlo revisited

    (United States)

    Assaraf, Roland; Caffarel, Michel; Khelif, Anatole


    In this work we present a detailed study of the fermion Monte Carlo algorithm (FMC), a recently proposed stochastic method for calculating fermionic ground-state energies. A proof that the FMC method is an exact method is given. In this work the stability of the method is related to the difference between the lowest (bosonic-type) eigenvalue of the FMC diffusion operator and the exact Fermi energy. It is shown that within a FMC framework the lowest eigenvalue of the new diffusion operator is no longer the bosonic ground-state eigenvalue as in standard exact diffusion Monte Carlo (DMC) schemes but a modified value which is strictly greater. Accordingly, FMC can be viewed as an exact DMC method built from a correlated diffusion process having a reduced Bose-Fermi gap. As a consequence, the FMC method is more stable than any transient method (or nodal release-type approaches). It is shown that the most recent ingredient of the FMC approach (Kalos M H and Pederiva F 2000 Phys. Rev. Lett. 85 3547), namely the introduction of non-symmetric guiding functions, does not necessarily improve the stability of the algorithm. We argue that the stability observed with such guiding functions is in general a finite-size population effect disappearing for a very large population of walkers. The counterpart of this stability is a control population error which is different in nature from the standard diffusion Monte Carlo algorithm and which is at the origin of an uncontrolled approximation in FMC. We illustrate the various ideas presented in this work with calculations performed on a very simple model having only nine states but a full 'sign problem'. Already for this toy model it is clearly seen that FMC calculations are inherently uncontrolled.

  6. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank


    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  7. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre


    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer science, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Markov...

  8. Uncertainty Propagation with Fast Monte Carlo Techniques

    (United States)

    Rochman, D.; van der Marck, S. C.; Koning, A. J.; Sjöstrand, H.; Zwermann, W.


    Two new and faster Monte Carlo methods for the propagation of nuclear data uncertainties in Monte Carlo nuclear simulations are presented (the "Fast TMC" and "Fast GRS" methods). They address the main drawback of the original Total Monte Carlo method (TMC), namely the large time multiplication factor it requires compared to a single calculation. With these new methods, Monte Carlo simulations can now be accompanied by uncertainty propagation (other than statistical) at small additional calculation time. The new methods are presented and compared with the TMC method for criticality benchmarks.

  9. Monte Carlo calculations for HTRs

    Energy Technology Data Exchange (ETDEWEB)

    Hogenbirk, A. [ECN Nuclear Research, Petten (Netherlands)]


    From a neutronics point of view, pebble-bed HTRs are completely different from standard LWRs. The most important differences are to be found in the reactor geometry, the properties of the moderator (graphite instead of water) and the self-shielding of the fuel regions. Therefore, computer packages normally used for core analyses should be validated with experimental data before they can be used for HTR analyses. This especially holds for deterministic computer codes, in which approximations are made which may not be valid in pebble-bed HTRs. Monte Carlo codes, which are based more on first principles, suffer much less from this problem. In order to study small- and medium-sized LEU-HTR systems, an IAEA Coordinated Research Programme (CRP) was started in the late 1980s. This CRP was mainly directed at the effects of water ingress and neutron streaming. The PROTEUS facility at the Paul Scherrer Institute (PSI) in Villigen, Switzerland, played a central role in this CRP. Benchmark-quality measurements were provided in clean, easy-to-interpret critical configurations, using pebble-type fuel. ECN in Petten, Netherlands, contributed to the CRP by performing deterministic reactor calculations using the WIMS code system. However, a need was felt for reference calculations in which as few approximations as possible were made. These analyses were performed with the Monte Carlo code MCNP-4A. In this contribution the results of the main MCNP calculations are given. In these analyses a detailed model of the PROTEUS experimental set-up was used, together with high-quality continuous-energy cross-section data. The attention was focused on the calculation of the value of k_eff and of streaming effects in the pebble-bed core. 15 refs.

  10. Challenges of Monte Carlo Transport

    Energy Technology Data Exchange (ETDEWEB)

    Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library are required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  11. Path integral Monte Carlo simulations of silicates


    Rickwardt, Chr.; Nielaba, P.; Müser, M. H.; Binder, K.


    We investigate the thermal expansion of crystalline SiO2 in the β-cristobalite and β-quartz structures with path integral Monte Carlo (PIMC) techniques. This simulation method makes it possible to treat low-temperature quantum effects properly. At temperatures below the Debye temperature, thermal properties obtained with PIMC agree better with experimental results than those obtained with classical Monte Carlo methods.

  12. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  13. Monte Carlo simulation of model spin systems

    Indian Academy of Sciences (India)

    three-dimensional Ising models and Heisenberg models are dealt with in some detail. Recent applications of the Monte Carlo method to spin glass systems and to estimating renormalisation group critical exponents are reviewed. Keywords: Monte Carlo simulation; critical phenomena; Ising models; Heisenberg models ...

  14. Monte Carlo Simulation of Phase Transitions


    Murai, Nobuyuki (N. Murai); Chukyo University, College of Liberal Arts


    In the Monte Carlo simulation of phase transition, a simple heat bath method is applied to the classical Heisenberg model in two dimensions. It reproduces the correlation length predicted by the Monte Carlo renormalization group and also computed in the non-linear σ model.
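
The heat-bath rule, resampling each spin from its exact conditional distribution given its neighbours, can be sketched on the 2D Ising model, a simpler stand-in for the classical Heisenberg model treated in the paper (the code and parameters below are illustrative, not from the paper):

```python
import math
import random

def heat_bath_ising(size=16, beta=0.6, sweeps=200, seed=7):
    """Heat-bath dynamics for the 2D Ising model with periodic boundaries:
    each spin is redrawn from its exact conditional distribution given its
    four neighbours, P(s = +1) = 1 / (1 + exp(-2 * beta * h)), where h is
    the sum of the neighbouring spins.  Returns |magnetisation| per spin."""
    rng = random.Random(seed)
    s = [[1] * size for _ in range(size)]  # start from the fully ordered state
    for _ in range(sweeps):
        for i in range(size):
            for j in range(size):
                h = (s[(i + 1) % size][j] + s[(i - 1) % size][j]
                     + s[i][(j + 1) % size] + s[i][(j - 1) % size])
                p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
                s[i][j] = 1 if rng.random() < p_up else -1
    return abs(sum(map(sum, s))) / size ** 2
```

Running at β = 0.6 (above the critical coupling β_c ≈ 0.44) keeps the lattice strongly magnetised, while β = 0.2 disorders it, the qualitative signature of the phase transition.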

  15. Monte Carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun


    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  16. Adaptive Markov Chain Monte Carlo

    KAUST Repository

    Jadoon, Khan


    A substantial interpretation of electromagnetic induction (EMI) measurements requires quantifying optimal model parameters and uncertainty of a nonlinear inverse problem. For this purpose, an adaptive Bayesian Markov chain Monte Carlo (MCMC) algorithm is used to assess multi-orientation and multi-offset EMI measurements in an agricultural field with non-saline and saline soil. In the MCMC simulations, the posterior distribution was computed using Bayes' rule. The electromagnetic forward model, based on the full solution of Maxwell's equations, was used to simulate the apparent electrical conductivity measured with the configurations of the EMI instrument, the CMD mini-Explorer. The model parameters and uncertainty for the three-layered earth model are investigated using synthetic data. Our results show that in the scenario of non-saline soil, the layer-thickness parameters are not as well estimated as the layers' electrical conductivities, because layer thickness exhibits a low sensitivity to the EMI measurements and is hence difficult to resolve. Application of the proposed MCMC-based inversion to field measurements in a drip irrigation system demonstrates that the model parameters can be well estimated for the saline soil as compared to the non-saline soil, and provides useful insight about parameter uncertainty for the assessment of the model outputs.

  17. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk


    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  18. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y


    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  19. Monte Carlo simulations for plasma physics

    Energy Technology Data Exchange (ETDEWEB)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)


    Plasma behaviour is very complicated and generally difficult to analyse. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral particle injection (NBI) heating, electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and scatter them in pitch angle. These processes are often studied with the Monte Carlo technique, and good agreement with experimental results can be obtained. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. It has further been applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which is beyond conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  20. Monte Carlo methods for particle transport

    CERN Document Server

    Haghighat, Alireza


    The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...

  1. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.


    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  2. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico


    Over the past several decades, computational approaches to studying strongly interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It offers a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which are developed into a discussion of Monte Carlo methods. The variational technique is described from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  3. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay


    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed, and it is shown under some assumptions that, for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.
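For orientation, the i.i.d. baseline that the MLMC approach improves upon can be sketched as a plain ABC rejection sampler. The Gaussian toy model, uniform prior, summary statistic, and tolerance below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: infer the mean theta of a unit-variance Gaussian.
y_obs = rng.normal(2.0, 1.0, size=100)
s_obs = y_obs.mean()                 # summary statistic of the data

def abc_rejection(eps, n_accept):
    """Plain i.i.d. ABC: keep prior draws whose simulated summary
    statistic lands within eps of the observed one."""
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.uniform(-10.0, 10.0)           # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=100)   # simulate the model
        if abs(y_sim.mean() - s_obs) < eps:        # ABC acceptance test
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection(eps=0.2, n_accept=500)
print(post.mean())   # approximates the ABC posterior mean, near s_obs
```

Shrinking `eps` makes the ABC posterior more accurate but the acceptance rate collapses; that cost-accuracy trade-off is what the multilevel construction addresses.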

  4. Approaching Chemical Accuracy with Quantum Monte Carlo

    CERN Document Server

    Petruzielo, F R; Umrigar, C J


    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space.

  5. Self-learning Monte Carlo method (United States)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang


    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.

  6. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S


    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  7. Off-diagonal expansion quantum Monte Carlo. (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay


    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  9. Monte Carlo sampling for stochastic weight functions. (United States)

    Frenkel, Daan; Schrenk, K Julian; Martiniani, Stefano


    Conventional Monte Carlo simulations are stochastic in the sense that the acceptance of a trial move is decided by comparing a computed acceptance probability with a random number, uniformly distributed between 0 and 1. Here, we consider the case that the weight determining the acceptance probability itself is fluctuating. This situation is common in many numerical studies. We show that it is possible to construct a rigorous Monte Carlo algorithm that visits points in state space with a probability proportional to their average weight. The same approach may have applications for certain classes of high-throughput experiments and the analysis of noisy datasets.
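The conventional acceptance step the abstract contrasts with — comparing an acceptance probability against a uniform random number — can be sketched on a hypothetical two-state system (the paper's fluctuating-weight extension itself is not reproduced here):

```python
import math
import random

random.seed(0)

def metropolis_accept(logw_new, logw_old):
    # Accept a trial move with probability min(1, w_new / w_old),
    # decided by comparison with a uniform random number in [0, 1).
    return random.random() < min(1.0, math.exp(logw_new - logw_old))

# Hypothetical two-state system with weights {1.0, 3.0}: the chain
# should visit state 1 three times as often as state 0.
logw = [math.log(1.0), math.log(3.0)]
state, counts = 0, [0, 0]
for _ in range(100_000):
    trial = 1 - state
    if metropolis_accept(logw[trial], logw[state]):
        state = trial
    counts[state] += 1

print(counts[1] / counts[0])   # close to the weight ratio 3
```

The paper's question is what happens when `logw` itself is only available as a noisy estimate; this sketch shows the deterministic-weight case that the rigorous fluctuating-weight algorithm must reproduce on average.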

  10. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo treatment planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three cornerstones of Monte Carlo treatment planning are identified: building, commissioning...

  11. AlphaGo and Monte Carlo tree search


    Kumpulainen, Samu


    In this thesis I examine the Monte Carlo tree search algorithm and its role in the highly successful AlphaGo AI. The goal was to form an overall picture of how AlphaGo works, with particular emphasis on the Monte Carlo tree search perspective. The study revealed how the program exploits this tree search in its own search algorithm, improving it with machine learning and several neural networks trained for different purposes. AlphaGo thereby achieved a level of performance that other Go programs...

  12. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav


    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  13. Use of Monte Carlo methods in brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Granero Cabanero, D.


    The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, which makes small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding calculations, or obtaining dose distributions around applicators. (Author)

  14. Monte Carlo methods in ab initio quantum chemistry: quantum Monte Carlo for molecules

    CERN Document Server

    Lester, William A; Reynolds, PJ


    This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential. Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n

  15. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming


    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.

  16. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)]


    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  17. Monte Carlo studies of uranium calorimetry

    Energy Technology Data Exchange (ETDEWEB)

    Brau, J.; Hargis, H.J.; Gabriel, T.A.; Bishop, B.L.


    Detailed Monte Carlo calculations of uranium calorimetry are presented which reveal a significant difference in the responses of liquid argon and plastic scintillator in uranium calorimeters. Due to saturation effects, neutrons from the uranium are found to contribute only weakly to the liquid argon signal. Electromagnetic sampling inefficiencies are significant and contribute substantially to compensation in both systems. 17 references.

  18. Quantum algorithm for exact Monte Carlo sampling


    Destainville, Nicolas; Georgeot, Bertrand; Giraud, Olivier


    We build a quantum algorithm which uses the Grover quantum search procedure in order to sample the exact equilibrium distribution of a wide range of classical statistical mechanics systems. The algorithm is based on recently developed exact Monte Carlo sampling methods, and yields a polynomial gain compared to classical procedures.

  19. Monte Carlo calculations of atoms and molecules (United States)

    Schmidt, K. E.; Moskowitz, J. W.


    The variational and Green's function Monte Carlo (GFMC) methods can treat many interesting atomic and molecular problems. These methods can give chemical accuracy for up to 10 or so electrons. The various implementations of the GFMC method, including the domain Green's function method and the short-time approximation, are discussed. Results are presented for several representative atoms and molecules.

  20. Dynamic bounds coupled with Monte Carlo simulations

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.


    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper

  1. An analysis of Monte Carlo tree search

    CSIR Research Space (South Africa)

    James, S


    Full Text Available Monte Carlo Tree Search (MCTS) is a family of directed search algorithms that has gained widespread attention in recent years. Despite the vast amount of research into MCTS, the effect of modifications on the algorithm, as well as the manner...
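The selection step at the heart of MCTS is typically the UCT/UCB1 rule. The following sketch, with an assumed exploration constant and a toy three-move bandit in place of a real game tree, illustrates how repeated select / simulate / backpropagate cycles concentrate visits on the strongest move (it is an illustration of the base algorithm, not of the modifications studied in the paper):

```python
import math
import random

random.seed(42)

# Core of MCTS selection: UCT/UCB1 picks the child maximising mean
# value plus an exploration bonus (c = 1.4 is an assumed constant).
def uct_select(children, c=1.4):
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return math.inf               # expand unvisited moves first
        return (ch["wins"] / ch["visits"]
                + c * math.sqrt(math.log(total) / ch["visits"]))
    return max(children, key=score)

# Toy three-move game: each move has a hidden win probability, and a
# "simulation" is a single biased coin flip.
p_win = [0.2, 0.5, 0.8]
children = [{"move": i, "wins": 0, "visits": 0} for i in range(3)]
for _ in range(5_000):
    ch = uct_select(children)                      # select
    ch["wins"] += random.random() < p_win[ch["move"]]   # simulate
    ch["visits"] += 1                              # backpropagate

print([ch["visits"] for ch in children])   # most visits on move 2
```

In a full MCTS implementation the same rule is applied recursively down the tree, and the final move is usually the most-visited child of the root.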

  2. Hypothesis testing of scientific Monte Carlo calculations (United States)

    Wallerberger, Markus; Gull, Emanuel


    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
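A minimal instance of such a test: run a Monte Carlo estimator whose exact answer is known and apply a two-sided z-test to its output. The toy estimator (a uniform mean) and the 1% significance level are assumptions for illustration, not the authors' test suite:

```python
import math
import random

random.seed(7)

# Toy Monte Carlo estimator with a known answer: E[X] = 0.5 for
# X ~ Uniform(0, 1). A two-sided z-test of H0: mean == 0.5 flags
# a potential bug whenever |z| exceeds the critical value.
n = 100_000
xs = [random.random() for _ in range(n)]
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / (n - 1)
z = (mean - 0.5) / math.sqrt(var / n)

critical = 2.5758   # two-sided 1% critical value of N(0, 1)
if abs(z) > critical:
    print(f"z = {z:.2f}: reject H0, estimator looks biased or buggy")
else:
    print(f"z = {z:.2f}: consistent with the exact value")
```

Because the test is statistical, it fails on a small fraction of healthy runs (here about 1%); the paper's point is that this is still a rigorous, automatable check for stochastic codes.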

  3. No-compromise reptation quantum Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Yuen, W K [Department of Mathematics, Brock University, St. Catharines, ON L2S 3A1 (Canada); Farrar, Thomas J [Department of Mathematics, Brock University, St. Catharines, ON L2S 3A1 (Canada); Rothstein, Stuart M [Departments of Chemistry and Physics, Brock University, St. Catharines, ON L2S 3A1 (Canada)


    Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)

  4. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay


    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given accuracy. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
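The multilevel idea — couple samples across resolutions and sum estimated differences — can be sketched for plain MLMC (not the SMC variant of the paper) on an assumed toy problem: an Euler-discretised stochastic process with a call-type functional, where level l uses 2**l time steps:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed toy problem: E[max(S_1 - 1, 0)] for dS = S dW, S_0 = 1,
# discretised by Euler; the resolution parameter is the step count.
def payoff(s):
    return np.maximum(s - 1.0, 0.0)

def level_estimate(l, n):
    """Sample mean of P_l - P_{l-1}, built from COUPLED Brownian paths."""
    steps = 2 ** l
    dw = rng.standard_normal((n, steps)) / np.sqrt(steps)
    s_fine = np.prod(1.0 + dw, axis=1)                     # fine path
    if l == 0:
        return payoff(s_fine).mean()
    dw_coarse = dw.reshape(n, steps // 2, 2).sum(axis=2)   # same noise
    s_coarse = np.prod(1.0 + dw_coarse, axis=1)            # coarse path
    return (payoff(s_fine) - payoff(s_coarse)).mean()

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]:
# many cheap samples at coarse levels, few expensive ones at fine levels.
n_per_level = [200_000, 100_000, 50_000, 25_000, 12_500]
estimate = sum(level_estimate(l, n) for l, n in enumerate(n_per_level))
print(estimate)
```

Because the coupled fine and coarse paths share the same Brownian increments, the correction terms have small variance, which is what lets the fine levels get away with few samples.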

  5. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.


    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.

  6. Evaluation Function Based Monte-Carlo LOA (United States)

    Winands, Mark H. M.; Björnsson, Yngvi

    Recently, Monte-Carlo Tree Search (MCTS) has advanced the field of computer Go substantially. Also in the game of Lines of Action (LOA), which has been dominated so far by αβ, MCTS is making an inroad. In this paper we investigate how to use a positional evaluation function in a Monte-Carlo simulation-based LOA program (MC-LOA). Four different simulation strategies are designed, called Evaluation Cut-Off, Corrective, Greedy, and Mixed. They use an evaluation function in several ways. Experimental results reveal that the Mixed strategy is the best among them. This strategy draws the moves randomly based on their transition probabilities in the first part of a simulation, but selects them based on their evaluation score in the second part of a simulation. Using this simulation strategy the MC-LOA program plays at the same level as the αβ program MIA, the best LOA-playing entity in the world.

  7. Adaptive Monte Carlo for nuclear data evaluation

    Directory of Open Access Journals (Sweden)

    Schnabel Georg


    Full Text Available An adaptive Monte Carlo method for nuclear data evaluation is presented. A fast evaluation method based on the linearization of the nuclear model guides the adaptation of the sampling distribution towards the posterior distribution. The method is suited for parallel computation and provides detailed uncertainty information about nuclear model parameters. In particular, the posterior distribution of the model parameters is not restricted to be multivariate normal. The method is demonstrated in an evaluation of the 181Ta total cross section for incident neutrons. Future applications include its use as an efficient sampling scheme in the Total Monte Carlo method, and the constraining of parameter uncertainties in nuclear models by both differential and integral data.

  8. Monte Carlo Simulation for Particle Detectors

    CERN Document Server

    Pia, Maria Grazia


    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  9. Monte Carlo methods for preference learning

    DEFF Research Database (Denmark)

    Viappiani, P.


    Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution about the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.



    Directory of Open Access Journals (Sweden)

    Tarnóczi Tibor; Tóth Réka; Fenyves Veronika


    Full Text Available We present a simulation model in this paper to determine the value of intellectual capital. Within the simulation model we have used Baruch Lev's intellectual capital valuation model, which we have built into a two-dimensional Monte Carlo simulation model. We have determined the intellectual capital of some stock exchange companies. The calculations are presented for a selected company.

  12. Efficient Monte Carlo sampling by parallel marginalization


    Weare, Jonathan


    Markov chain Monte Carlo sampling methods often suffer from long correlation times. Consequently, these methods must be run for many steps to generate an independent sample. In this paper, a method is proposed to overcome this difficulty. The method utilizes information from rapidly equilibrating coarse Markov chains that sample marginal distributions of the full system. This is accomplished through exchanges between the full chain and the auxiliary coarse chains. Results of numerical tests o...

  13. Handbook of Markov chain Monte Carlo

    CERN Document Server

    Brooks, Steve


    Handbook of Markov Chain Monte Carlo brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.

  14. Applications of Maxent to quantum Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Silver, R.N.; Sivia, D.S.; Gubernatis, J.E. (Los Alamos National Lab., NM (USA)); Jarrell, M. (Ohio State Univ., Columbus, OH (USA). Dept. of Physics)


    We consider the application of maximum entropy methods to the analysis of data produced by computer simulations. The focus is the calculation of the dynamical properties of quantum many-body systems by Monte Carlo methods, which is termed the "Analytical Continuation Problem." For the Anderson model of dilute magnetic impurities in metals, we obtain spectral functions and transport coefficients which obey "Kondo Universality." 24 refs., 7 figs.

  15. Conical Reflection in Direct Simulation Monte Carlo (United States)

    Sampson, Andrew; Payne, Adam; Somers, William; Spencer, Ross


    Fenix is a particle-in-cell simulation, using a Direct Simulation Monte Carlo method, that aims to improve the accuracy of Inductively Coupled Plasma Mass Spectrometry (ICP-MS). It currently focuses on the ICP-MS first expansion region through a supersonic nozzle in cylindrical symmetry. Due to increased complexity in Fenix, it has become necessary to solve the general conical surface reflection problem. The previous method, the new solution, and results from the enhanced simulation will be presented.

  16. Status of Monte-Carlo Event Generators

    Energy Technology Data Exchange (ETDEWEB)

    Hoeche, Stefan; /SLAC


    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state particles and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  17. Monte Carlo Simulation of an American Option

    Directory of Open Access Journals (Sweden)

    Gikiri Thuo


    Full Text Available We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with the estimates of sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
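
    The gradient-estimation idea can be illustrated with a much simpler stand-in: plain Monte Carlo pricing of a European call together with a pathwise estimate of its delta. This is a hypothetical Python sketch, not the paper's American-option algorithm (which additionally iterates on the early-exercise threshold via stochastic approximation):

```python
import math
import random

def mc_call_price_and_delta(s0, k, r, sigma, t, n=100_000, seed=1):
    """Plain Monte Carlo price of a European call plus a pathwise
    estimate of delta (dV/dS0).  A simplified stand-in for the
    gradient-estimation idea, not the paper's American-option method."""
    rng = random.Random(seed)
    disc = math.exp(-r * t)
    price_sum = delta_sum = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        price_sum += max(st - k, 0.0)
        # Pathwise derivative: d(payoff)/dS0 = 1{ST > K} * ST / S0
        delta_sum += (st / s0) if st > k else 0.0
    return disc * price_sum / n, disc * delta_sum / n
```

    Both the value and its sensitivity come from the same set of simulated paths, which is the efficiency point the abstract makes.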

  18. A Monte Carlo algorithm for degenerate plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Turrell, A.E., E-mail:; Sherlock, M.; Rose, S.J.


    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
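
    The initialisation step described here can be sketched as rejection sampling of particle energies from a Fermi-Dirac distribution. The energy cutoff and the constant envelope below are simplifying assumptions for illustration, not details from the paper:

```python
import math
import random

def sample_fermi_dirac(mu, temp, n=10_000, e_max=None, seed=2):
    """Rejection-sample energies from f(E) ~ sqrt(E) / (exp((E-mu)/T) + 1),
    i.e. a Fermi-Dirac occupancy times the 3D density of states.
    The cutoff e_max and the flat envelope are simplifying assumptions."""
    rng = random.Random(seed)
    if e_max is None:
        e_max = mu + 20.0 * temp  # occupancy is negligible beyond this

    def f(e):
        return math.sqrt(e) / (math.exp((e - mu) / temp) + 1.0)

    # Crude constant envelope: maximum of f on a fine grid.
    f_max = max(f(1e-9 + i * e_max / 1000) for i in range(1001))
    samples = []
    while len(samples) < n:
        e = rng.uniform(0.0, e_max)
        if rng.uniform(0.0, f_max) < f(e):
            samples.append(e)
    return samples
```

    For a strongly degenerate case (temp much smaller than mu) the sampled energies fill the states below the chemical potential, as expected.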

  19. Markov chains analytic and Monte Carlo computations

    CERN Document Server

    Graham, Carl


    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: Numerous exercises with solutions as well as extended case studies.A detailed and rigorous presentation of Markov chains with discrete time and state space.An appendix presenting probabilistic notions that are nec

  20. Exascale Monte Carlo R&D

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory


    This presentation gives an overview of (1) exascale computing - different technologies and how to get there; (2) the high-performance proof-of-concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  1. Efficient Monte Carlo sampling by parallel marginalization. (United States)

    Weare, Jonathan


    Markov chain Monte Carlo sampling methods often suffer from long correlation times. Consequently, these methods must be run for many steps to generate an independent sample. In this paper, a method is proposed to overcome this difficulty. The method utilizes information from rapidly equilibrating coarse Markov chains that sample marginal distributions of the full system. This is accomplished through exchanges between the full chain and the auxiliary coarse chains. Results of numerical tests on the bridge sampling and filtering/smoothing problems for a stochastic differential equation are presented.

  2. Mosaic crystal algorithm for Monte Carlo simulations

    CERN Document Server

    Seeger, P A


    An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)

  3. Canonical demon Monte Carlo renormalization group

    CERN Document Server

    Hasenbusch, M; Wieczerkowski, C


    We describe a new method to compute renormalized coupling constants in a Monte Carlo renormalization group calculation. The method can be used for a general class of models, e.g., lattice spin or gauge models. The basic idea is to simulate a joint system of block spins and canonical demons. In contrast to the Microcanonical Renormalization Group invented by Creutz et al., our method does not suffer from systematic errors stemming from the simultaneous use of two different ensembles. We present numerical results for the O(3) nonlinear \\sigma-model.

  4. Score Bounded Monte-Carlo Tree Search (United States)

    Cazenave, Tristan; Saffidine, Abdallah

    Monte-Carlo Tree Search (MCTS) is a successful algorithm used in many state-of-the-art game engines. We propose to improve an MCTS solver when a game has more than two outcomes, as is the case, for example, in games that can end in draw positions. In such games, an MCTS solver is significantly improved by taking bounds on the possible scores of a node into account when selecting the nodes to explore. We apply our algorithm to solving Seki in the game of Go and to Connect Four.
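
    The score-bound bookkeeping can be sketched as follows. This is a hypothetical minimal version for a max node only, with outcomes encoded 0/1/2 (loss/draw/win) as an assumed convention; tree growth and playouts are omitted:

```python
class Node:
    """Minimal sketch of score-bound bookkeeping in an MCTS solver:
    each node keeps pessimistic/optimistic bounds on the achievable
    outcome and propagates them from its children (max node shown;
    a min node would take minima instead)."""

    def __init__(self, outcomes=(0, 1, 2)):
        self.pess, self.opt = min(outcomes), max(outcomes)
        self.children = []

    def update_bounds(self):
        # Max node: we can guarantee the best pessimistic child bound,
        # and cannot exceed the best optimistic child bound.
        if self.children:
            for c in self.children:
                c.update_bounds()
            self.pess = max(c.pess for c in self.children)
            self.opt = max(c.opt for c in self.children)

    def solved(self):
        # When both bounds coincide, the node's game-theoretic value
        # is known and it need not be explored further.
        return self.pess == self.opt
```

    During selection, children whose optimistic bound cannot beat the current pessimistic guarantee can be skipped, which is the pruning effect the abstract describes.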

  5. by means of FLUKA Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Ermis Elif Ebru


    Full Text Available Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close accordance with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
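
    Once a mass attenuation coefficient mu/rho is available (whether computed as in the study or taken from NIST tables), it is used through the Beer-Lambert law I/I0 = exp(-(mu/rho) * rho * x). A minimal sketch, with illustrative numbers rather than values from the paper:

```python
import math

def transmitted_fraction(mu_rho, density, thickness):
    """Beer-Lambert attenuation I/I0 = exp(-(mu/rho) * rho * x),
    with mu_rho in cm^2/g, density in g/cm^3, thickness in cm.
    Shows how a tabulated mass attenuation coefficient is applied;
    the numbers used below are illustrative, not from the paper."""
    return math.exp(-mu_rho * density * thickness)

# Illustrative: a 1 cm crystal of density 3.67 g/cm^3 with an
# assumed mu/rho of 0.1 cm^2/g transmits exp(-0.367) of the beam.
frac = transmitted_fraction(0.1, 3.67, 1.0)
```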

  6. Archimedes, the Free Monte Carlo simulator

    CERN Document Server

    Sellier, Jean Michel D


    Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004 and since then it has been improved with many new features such as quantum corrections, magnetic fields, new materials, a GUI, etc. This document represents the first attempt at a complete manual. Many of the physics models implemented are described, and a detailed description is presented to enable the user to write his/her own input deck. Please feel free to contact the author if you want to contribute to the project.

  7. Scalar QED, NLO and PHOTOS Monte Carlo


    Nanava, G.; Was, Z.


    Recently, QED bremsstrahlung in $B$-meson decays into a pair of scalars (\\pi's and/or K's) has become of interest. If experimental acceptance must be taken into account, the PHOTOS Monte Carlo is often used in experimental simulations. We will use scalar QED to benchmark PHOTOS, even though this theory is of limited use for complex objects. We present the analytical form of the kernel used in the older versions of PHOTOS, and the new, exact (scalar QED) one. Matrix element and phase-space Jacobians are sep...

  8. Monte Carlo simulations of dense quantum plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Filinov, V S [Institute for High Energy Density, Izhorskay 13/19, Moscow 125412 (Russian Federation); Bonitz, M [Universitaet Kiel, Leibnizstrasse 15, 24098 Kiel (Germany); Fortov, V E [Institute for High Energy Density, Izhorskay 13/19, Moscow 125412 (Russian Federation); Ebeling, W [Humbold Universitaet Berlin, Invalidenstrasse 110, D-10115 Berlin (Germany); Fehske, H [Universitaet Greifswald, Domstrasse 10a, D-17487, Greifswald (Germany); Kremp, D [Universitaet Rostock, Universitaetsplatz 3, D-18051 Rostock (Germany); Kraeft, W D [Universitaet Greifswald, Domstrasse 10a, D-17487, Greifswald (Germany); Bezkrovniy, V [Universitaet Rostock, Universitaetsplatz 3, D-18051 Rostock (Germany); Levashov, P [Institute for High Energy Density, Izhorskay 13/19, Moscow 125412 (Russian Federation)


    Thermodynamic properties of equilibrium strongly coupled quantum plasmas are investigated by direct path integral Monte Carlo (DPIMC) simulations within a wide region of density, temperature, and positive-to-negative particle mass ratio. Pair distribution functions (PDF), the equation of state (EOS), internal energy, and the Hugoniot are compared with available theoretical and experimental results. Possibilities of a phase transition in hydrogen and electron-hole plasmas from a neutral particle system to a metallic-like state and crystal-like structures, including an antiferromagnetic hole structure in semiconductors at low temperatures, are discussed.

  9. Monte Carlo Based Framework to Support HAZOP Study

    DEFF Research Database (Denmark)

    Danko, Matej; Frutiger, Jerome; Jelemenský, Ľudovít


    This study combines Monte Carlo based process simulation features with classical hazard identification techniques to investigate the consequences of deviations from normal operating conditions and to examine process safety. A Monte Carlo based method has been used to sample and evaluate different...... deviations in process parameters simultaneously, thereby bringing an improvement to the Hazard and Operability study (HAZOP), which normally considers only one deviation in process parameters at a time. Furthermore, Monte Carlo filtering was then used to identify operability and hazard issues including......

  10. Composite biasing in Monte Carlo radiative transfer (United States)

    Baes, Maarten; Gordon, Karl D.; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf


    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the specific problems that we consider: in simulations with composite path length stretching, high accuracy results are obtained even for simulations with modest numbers of photon packages, while simulations without biasing cannot reach convergence, even with a huge number of photon packages.
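
    The path-length-stretching idea can be illustrated with a toy one-dimensional version: sample path lengths from a stretched exponential instead of the physical one and carry the compensating weight. The stretch parameter alpha below is a tuning assumption for illustration, not a value from the paper:

```python
import math
import random

def transmission_estimate(tau, alpha=0.5, n=50_000, seed=3):
    """Estimate exp(-tau), the fraction of photons penetrating optical
    depth tau, by sampling path lengths from the stretched density
    q(s) = alpha * exp(-alpha * s) instead of the physical
    p(s) = exp(-s), carrying the weight w = p(s) / q(s).
    A toy version of path-length stretching, not the paper's code."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = -math.log(1.0 - rng.random()) / alpha   # draw s ~ q
        if s > tau:                                  # photon escapes
            total += math.exp(-s) / (alpha * math.exp(-alpha * s))
    return total / n
```

    For tau = 5, an unbiased analog simulation sees an escape only once in about 150 histories, whereas the stretched sampler scores escapes roughly every 12 histories, with small weights compensating exactly.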

  11. Parallel Monte Carlo Search for Hough Transform (United States)

    Lopes, Raul H. C.; Franqueira, Virginia N. L.; Reid, Ivan D.; Hobson, Peter R.


    We investigate the problem of line detection in digital image processing, and in particular how state-of-the-art algorithms behave in the presence of noise and whether CPU efficiency can be improved by the combination of Monte Carlo Tree Search, hierarchical space decomposition, and parallel computing. The starting point of the investigation is the method introduced in 1962 by Paul Hough for detecting lines in binary images. Extended in the 1970s to the detection of space forms, what came to be known as the Hough Transform (HT) has been proposed, for example, in the context of track fitting in the LHC ATLAS and CMS projects. The Hough Transform transfers the problem of line detection into one of optimization: finding the peak in a vote-counting process over cells which contain the possible points of candidate lines. The detection algorithm can be computationally expensive both in the demands made upon the processor and on memory. Additionally, its effectiveness in detection can be reduced in the presence of noise. Our first contribution consists of an evaluation of the use of a variation of the Radon Transform as a way of improving the effectiveness of line detection in the presence of noise. Then, parallel algorithms for variations of the Hough Transform and the Radon Transform for line detection are introduced. An algorithm for Parallel Monte Carlo Search applied to line detection is also introduced. Their algorithmic complexities are discussed. Finally, implementations on multi-GPU and multicore architectures are discussed.
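
    As background, the voting scheme of the basic Hough Transform can be sketched as follows; this is the textbook serial baseline, not the authors' parallel or Monte Carlo variants:

```python
import math

def hough_lines(points, n_theta=180, n_rho=100, rho_max=10.0):
    """Minimal Hough transform for line detection: each point votes
    for every (theta, rho) cell satisfying
    rho = x*cos(theta) + y*sin(theta); a peak in the accumulator
    marks a line.  Grid sizes here are illustrative choices."""
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int((rho + rho_max) / (2 * rho_max) * n_rho)
            if 0 <= ri < n_rho:
                acc[ti][ri] += 1
    # Return the most-voted cell: (votes, theta index, rho index).
    return max((acc[ti][ri], ti, ri)
               for ti in range(n_theta) for ri in range(n_rho))
```

    Ten collinear points on y = x all land in the same accumulator cell at theta = 135 degrees, producing a clear peak of ten votes.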

  12. Multi-Index Monte Carlo (MIMC)

    KAUST Repository

    Haji Ali, Abdul Lateef


    We propose and analyze a novel Multi-Index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles's seminal work, instead of using first-order differences as in MLMC, we use in MIMC high-order mixed differences to reduce the variance of the hierarchical differences dramatically. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be of Total Degree (TD) type. When using such sets, MIMC yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^{-2}).
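
    The MLMC telescoping sum that MIMC generalizes can be sketched on a toy problem: estimating E[S_T] for geometric Brownian motion with Euler schemes coupled across levels by shared Brownian increments. Sample sizes are fixed per level here for simplicity, rather than optimised as in the analysis:

```python
import math
import random

def mlmc_gbm_mean(levels=4, n0=20_000, mu=0.05, sigma=0.2, t=1.0, seed=4):
    """Toy multilevel Monte Carlo estimate of E[S_T] for GBM with
    S_0 = 1, using Euler schemes with 2^l steps on level l and
    coupled fine/coarse pairs.  Per-level sample counts are a fixed
    heuristic, not the optimised allocation of the MLMC theory."""
    rng = random.Random(seed)

    def euler_pair(level):
        # Fine path (2^level steps) and coarse path (half as many
        # steps) driven by the SAME Brownian increments.
        nf = 2 ** level
        dt = t / nf
        sf = sc = 1.0
        dw_pair = 0.0
        for i in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            sf *= 1.0 + mu * dt + sigma * dw
            dw_pair += dw
            if i % 2 == 1:        # two fine steps = one coarse step
                sc *= 1.0 + mu * 2 * dt + sigma * dw_pair
                dw_pair = 0.0
        return sf, sc

    estimate = 0.0
    for level in range(levels + 1):
        n = max(n0 // 4 ** level, 100)   # fewer samples on finer levels
        acc = 0.0
        for _ in range(n):
            if level == 0:
                dw = rng.gauss(0.0, math.sqrt(t))
                acc += 1.0 + mu * t + sigma * dw
            else:
                sf, sc = euler_pair(level)
                acc += sf - sc           # telescoping correction term
        estimate += acc / n
    return estimate
```

    Because the fine/coarse differences have rapidly shrinking variance, the expensive fine levels need only a handful of samples, which is the source of the complexity gain.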

  14. Evaluating Wave Random Path Using Multilevel Monte Carlo

    Directory of Open Access Journals (Sweden)

    Behrouz Fathi-Vajargah


    Full Text Available Wind waves are important due to their high energy and impact on marine activities. This phenomenon directly or indirectly affects the construction of coastal infrastructure, shipping, and recreational activities. Marine parameters are therefore very important, and in this study we focus on waves as one of the most important of them. As the movements of waves have high uncertainty, financial models can be used to simulate the waves' paths. We use the Monte Carlo method for this purpose. Monte Carlo simulation is a flexible and simple tool that is widely used in the evaluation of random paths. To compute a random path, we require a discretization of an integral. In this paper, we study the valuation of European options using Monte Carlo simulation and then compare this result with the multilevel Monte Carlo approach and other antithetic variables. Then, we use the multilevel Monte Carlo approach proposed by Giles (M. B. Giles, 2008) for pricing under the two-factor stochastic volatility model. We show that the multilevel Monte Carlo method reduces the computational complexity and also the cost of the two-factor stochastic volatility model when compared with the standard Monte Carlo method. Also, we compare the multilevel Monte Carlo method and the standard Monte Carlo method using an Euler discretization scheme and then analyze the numerical results.
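
    The comparison between standard Monte Carlo and antithetic variates can be sketched for a European call; this is a generic illustration, not the paper's two-factor stochastic volatility model:

```python
import math
import random

def euro_call_antithetic(s0, k, r, sigma, t, n=50_000, seed=5):
    """European call priced by standard Monte Carlo and by antithetic
    variates (pairing each normal draw z with -z).  Returns both
    estimates so the variance reduction can be compared for equal n.
    A generic Black-Scholes illustration, not the paper's model."""
    rng = random.Random(seed)
    disc = math.exp(-r * t)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    plain = anti = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        pay_p = max(s0 * math.exp(drift + vol * z) - k, 0.0)
        pay_m = max(s0 * math.exp(drift - vol * z) - k, 0.0)
        plain += pay_p                    # standard estimator
        anti += 0.5 * (pay_p + pay_m)    # antithetic pair average
    return disc * plain / n, disc * anti / n
```

    Both estimators are unbiased; the antithetic one typically sits noticeably closer to the analytic Black-Scholes value for the same number of draws.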

  15. Monte Carlo modeling of spatial coherence: free-space diffraction. (United States)

    Fischer, David G; Prahl, Scott A; Duncan, Donald D


    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excellent agreement between the physical optics and Monte Carlo predictions is demonstrated in all cases. Convergence criteria are presented for judging the quality of the Monte Carlo predictions.
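
    The Gaussian-copula device, reduced to just two field points, can be sketched as follows: draw correlated Gaussians and push them through the normal CDF so that the marginals are uniform while the prescribed correlation (coherence) structure is retained. A hypothetical illustration, not the paper's full synthesis procedure:

```python
import math
import random

def gaussian_copula_pair(rho, n=20_000, seed=6):
    """Draw n pairs (u, v) with uniform marginals coupled through a
    Gaussian copula of correlation rho -- a two-point reduction of
    the device used to impose a spatial coherence function on a
    random source.  Illustrative sketch only."""
    rng = random.Random(seed)

    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((phi(z1), phi(z2)))  # uniform marginals, correlated
    return pairs
```

    Extending this to many points with a full correlation matrix (via a Cholesky factor) gives correlated random sources with arbitrary second-order statistics.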

  16. Quantum Monte Carlo Endstation for Petascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Lubos Mitas


    NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk, which was significantly expanded and optimized using funds from this support and at present has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments have been built upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules such as evaluation of wave functions and orbitals, calculation of pfaffians and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc and, partially, three graduate students over the period of the grant duration; it has resulted in 13

  17. Monte Carlo simulations of medical imaging modalities

    Energy Technology Data Exchange (ETDEWEB)

    Estes, G.P. [Los Alamos National Lab., NM (United States)


    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  18. Atomistic Monte Carlo simulation of lipid membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Sklenar, Heinz


    Biological membranes are complex assemblies of many different molecules of which analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction...... into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches...... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol....

  19. Monte-Carlo Simulation Balancing in Practice (United States)

    Huang, Shih-Chieh; Coulom, Rémi; Lin, Shun-Shii

    Simulation balancing is a new technique to tune parameters of a playout policy for a Monte-Carlo game-playing program. So far, this algorithm had only been tested in a very artificial setting: it was limited to 5×5 and 6×6 Go, and required a stronger external program that served as a supervisor. In this paper, the effectiveness of simulation balancing is demonstrated in a more realistic setting. A state-of-the-art program, Erica, learned an improved playout policy on the 9×9 board, without requiring any external expert to provide position evaluations. The evaluations were collected by letting the program analyze positions by itself. The previous version of Erica learned pattern weights with the minorization-maximization algorithm. Thanks to simulation balancing, its playing strength was improved from a winning rate of 69% to 78% against Fuego 0.4.

  20. Population Annealing Monte Carlo for Frustrated Systems (United States)

    Amey, Christopher; Machta, Jonathan

    Population annealing is a sequential Monte Carlo algorithm that efficiently simulates equilibrium systems with rough free energy landscapes such as spin glasses and glassy fluids. A large population of configurations is initially thermalized at high temperature and then cooled to low temperature according to an annealing schedule. The population is kept in thermal equilibrium at every annealing step via resampling configurations according to their Boltzmann weights. Population annealing is comparable to parallel tempering in terms of efficiency, but has several distinct and useful features. In this talk I will give an introduction to population annealing and present recent progress in understanding its equilibration properties and optimizing it for spin glasses. Results from large-scale population annealing simulations for the Ising spin glass in 3D and 4D will be presented. NSF Grant DMR-1507506.
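
    The resampling step described above can be sketched generically; configurations and energies are application-specific placeholders here, and the stochastic-rounding replication rule is one common choice, not necessarily the one used in the talk:

```python
import math
import random

def resample_population(configs, energies, beta_old, beta_new, seed=7):
    """Resampling step of population annealing: when the inverse
    temperature moves from beta_old to beta_new, each configuration
    is replicated in proportion to exp(-(beta_new - beta_old) * E),
    keeping the population in equilibrium at the new temperature.
    Configurations/energies are placeholders; generic sketch only."""
    rng = random.Random(seed)
    dbeta = beta_new - beta_old
    weights = [math.exp(-dbeta * e) for e in energies]
    mean_w = sum(weights) / len(weights)
    new_pop = []
    for c, w in zip(configs, weights):
        ratio = w / mean_w
        # Stochastic rounding: floor(ratio) copies, plus one more
        # with probability equal to the fractional part.
        copies = int(ratio) + (1 if rng.random() < ratio - int(ratio) else 0)
        new_pop.extend([c] * copies)
    return new_pop
```

    Normalising by the mean weight keeps the population size roughly constant while low-energy configurations gain replicas as the system is cooled.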

  1. Geometric Monte Carlo and black Janus geometries

    Energy Technology Data Exchange (ETDEWEB)

    Bak, Dongsu, E-mail: [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); B.W. Lee Center for Fields, Gravity & Strings, Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Kim, Chanju, E-mail: [Department of Physics, Ewha Womans University, Seoul 03760 (Korea, Republic of); Kim, Kyung Kiu, E-mail: [Department of Physics, Sejong University, Seoul 05006 (Korea, Republic of); Department of Physics, College of Science, Yonsei University, Seoul 03722 (Korea, Republic of); Min, Hyunsoo, E-mail: [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); Song, Jeong-Pil, E-mail: [Department of Chemistry, Brown University, Providence, RI 02912 (United States)


    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.

  2. Variational Monte Carlo study of pentaquark states

    Energy Technology Data Exchange (ETDEWEB)

    Mark W. Paris


    Accurate numerical solution of the five-body Schrödinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.

  3. Methods for Monte Carlo simulations of biomacromolecules. (United States)

    Vitalis, Andreas; Pappu, Rohit V


    The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
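
    The canonical-ensemble acceptance rule underlying any such moveset can be sketched on a toy single-torsion model; the energy function and move below are assumed for illustration and are not one of the lipid movesets from the article:

```python
import math
import random

def metropolis_step(state, energy_fn, propose, beta, rng):
    """One canonical-ensemble Metropolis step: propose a trial move
    and accept it with probability min(1, exp(-beta * dE)).  The
    model below (a single dihedral angle with a cosine potential)
    is a toy stand-in for a molecular moveset."""
    trial = propose(state, rng)
    d_e = energy_fn(trial) - energy_fn(state)
    if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
        return trial
    return state

# Toy model: one dihedral angle phi with E(phi) = 1 - cos(phi).
energy = lambda phi: 1.0 - math.cos(phi)
move = lambda phi, rng: phi + rng.uniform(-0.5, 0.5)

rng = random.Random(8)
phi, samples = 3.0, []
for step in range(20_000):
    phi = metropolis_step(phi, energy, move, beta=5.0, rng=rng)
    if step >= 10_000:            # discard burn-in
        samples.append(energy(phi))
mean_energy = sum(samples) / len(samples)
```

    After burn-in the chain fluctuates near the potential minimum, and the mean energy approaches the canonical average for the chosen beta.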

  4. Modeling neutron guides using Monte Carlo simulations

    CERN Document Server

    Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R


    Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guides' geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increases while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.

  5. Accelerated GPU based SPECT Monte Carlo simulations. (United States)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris


    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m) Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between GATE and GGEMS platform derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational

  6. Monte Carlo modelling of TRIGA research reactor (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.


    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rods worth, as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  7. Closed-shell variational quantum Monte Carlo simulation for the ...

    African Journals Online (AJOL)

    Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.


  9. Quantum Monte Carlo Simulations : Algorithms, Limitations and Applications

    NARCIS (Netherlands)

    Raedt, H. De


    A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of the fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown.

  10. A Primer in Monte Carlo Integration Using Mathcad (United States)

    Hoyer, Chad E.; Kegerreis, Jeb S.


    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
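
    The essentials described in this record can be sketched in a few lines. The following is a generic illustration (not the Mathcad document from the record; the integrand and interval are arbitrary examples):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Estimate the definite integral of f over [a, b] by uniform sampling."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
print(estimate)  # close to 0.3333; statistical error shrinks like 1/sqrt(n)
```

    The 1/sqrt(n) error, independent of dimension, is the usual pedagogical argument for preferring Monte Carlo over quadrature in high-dimensional integrals.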

  11. An Unbiased Hessian Representation for Monte Carlo PDFs

    NARCIS (Netherlands)

    Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, Jose Ignacio; Rojo, Juan


    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the

  12. A Monte Carlo adapted finite element method for dislocation ...

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Earth System Science; Volume 126; Issue 7. A Monte Carlo adapted finite element method for dislocation ... However, geological features of a fault cannot be measured exactly, and therefore these features and data involve uncertainties. This paper presents a Monte Carlo based random model of ...

  13. Forecasting with nonlinear time series model: A Monte-Carlo ...

    African Journals Online (AJOL)


    In this paper, we propose a new method of forecasting with a nonlinear time series model using the Monte-Carlo Bootstrap method. This new method gives better results in terms of forecast root mean squared error (RMSE) when compared with the traditional Bootstrap method and the Monte-Carlo method of forecasting ...

  14. Forecasting with nonlinear time series model: A Monte-Carlo ...

    African Journals Online (AJOL)

    In this paper, we propose a new method of forecasting with a nonlinear time series model using the Monte-Carlo Bootstrap method. This new method gives better results in terms of forecast root mean squared error (RMSE) when compared with the traditional Bootstrap method and the Monte-Carlo method of forecasting using a ...

  15. Approximating Sievert Integrals to Monte Carlo Methods to calculate ...

    African Journals Online (AJOL)

    Radiation dose rates along the transverse axis of a miniature 192Ir source were calculated using the Sievert integral (considered simple but inaccurate), and by the sophisticated and accurate Monte Carlo method. Using data obtained by the Monte Carlo method as a benchmark and applying a least squares regression curve ...

  16. The application of Bayesian interpolation in Monte Carlo simulations

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; van Gelder, P.H.A.J.M.; van Erp, N.; Martorell, Sebastian; Soares, C. Guedes; Barnett, Julie


    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes (like Finite Elements), a Bayesian interpolation method is coupled with the Monte Carlo technique. It is, therefore, possible to reduce the number of realizations in MC by interpolation. Besides, there is a possibility

  17. New Approaches and Applications for Monte Carlo Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano


    This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculation, perturbation calculation based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.

  18. A Monte-Carlo weighted moving average process for smoothing ...

    African Journals Online (AJOL)

    N Ekhosuehi, DEA Omorogbe. A Monte-Carlo weighted moving average procedure was developed for smoothing time series data. The applicability of the method was demonstrated by using two economic time series data sets to ...

  19. Quantum Monte Carlo method for attractive Coulomb potentials

    NARCIS (Netherlands)

    Kole, J.S.; Raedt, H. De


    Starting from an exact lower bound on the imaginary-time propagator, we present a path-integral quantum Monte Carlo method that can handle singular attractive potentials. We illustrate the basic ideas of this quantum Monte Carlo algorithm by simulating the ground state of hydrogen and helium.
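
    The record above uses a path-integral method; as a much simpler illustration of Monte Carlo ground-state estimation for hydrogen, the sketch below runs variational Monte Carlo with the trial wave function psi = exp(-alpha*r) in atomic units. All parameter choices are illustrative assumptions, not taken from the paper:

```python
import math
import random

def vmc_hydrogen(alpha, n_steps=20_000, step=0.5, seed=1):
    """Variational MC for hydrogen: trial psi = exp(-alpha*r), atomic units.
    Metropolis sampling of |psi|^2; returns the mean local energy."""
    rng = random.Random(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    energies = []
    for _ in range(n_steps):
        xn = x + step * (rng.random() - 0.5)
        yn = y + step * (rng.random() - 0.5)
        zn = z + step * (rng.random() - 0.5)
        rn = math.sqrt(xn * xn + yn * yn + zn * zn)
        # Metropolis acceptance with |psi|^2 ratio = exp(-2*alpha*(rn - r))
        if rng.random() < math.exp(-2.0 * alpha * (rn - r)):
            x, y, z, r = xn, yn, zn, rn
        # Local energy of the exponential trial function:
        # E_L = -alpha^2/2 + (alpha - 1)/r
        energies.append(-0.5 * alpha ** 2 + (alpha - 1.0) / r)
    tail = energies[n_steps // 10:]          # discard a short burn-in
    return sum(tail) / len(tail)

print(vmc_hydrogen(1.0))   # exactly -0.5 Hartree: zero-variance trial
print(vmc_hydrogen(0.8))   # above -0.5, as the variational principle requires
```

    With alpha = 1 the trial function is the exact ground state, so the local energy is constant at -0.5 Hartree; detuning alpha raises the energy, illustrating the variational principle.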

  20. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.


    Monte Carlo analysis is often regarded as the simplest and most accurate reliability method; it is also the most transparent. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
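
    The low-probability problem, and the importance-sampling remedy, can be seen in a toy example: estimating the normal tail probability P(Z > 4). The shifted proposal N(4, 1) is an illustrative choice, not taken from the record:

```python
import math
import random

def tail_prob_naive(n=100_000, seed=0):
    """Naive MC estimate of P(Z > 4) for a standard normal Z."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) > 4.0 for _ in range(n)) / n

def tail_prob_importance(n=100_000, seed=0):
    """Importance sampling: draw from N(4, 1) and reweight by the density
    ratio phi(x) / phi(x - 4) = exp(8 - 4x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(4.0, 1.0)
        if x > 4.0:
            total += math.exp(8.0 - 4.0 * x)
    return total / n

print(tail_prob_naive())       # very noisy: only ~3 expected hits in 1e5 draws
print(tail_prob_importance())  # close to the true value, about 3.17e-5
```

    Shifting the sampling density onto the rare event makes nearly half the samples contribute, reducing the variance by many orders of magnitude for the same cost.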

  1. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.


    The Monte Carlo method is a statistical sampling method which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopy in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.

  2. Crop canopy BRDF simulation and analysis using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.


    The authors design the random process between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bidirectional Reflectance Distribution Function (BRDF) of crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and

  3. Fission Matrix Capability for MCNP Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory


    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When a Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods apply approximations/discretizations in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source, but also to tally a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
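
    The dominance-ratio argument in this record can be illustrated deterministically: power iteration on a small, purely hypothetical 3x3 fission-matrix analogue, where the eigenvalue estimate converges at a rate set by the ratio of the first two eigenvalues:

```python
# Power iteration on a small, purely hypothetical "fission matrix" analogue.
# The iteration error decays like (k1/k0)**t, so a dominance ratio k1/k0
# near 1 (optically large or loosely coupled systems) means slow convergence.
def power_iteration(A, iters=500):
    n = len(A)
    v = [1.0 / n] * n                 # flat initial source guess
    k = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        k = sum(w)                    # eigenvalue estimate (source sums to 1)
        v = [wi / k for wi in w]
    return k, v

A = [[1.00, 0.02, 0.00],              # strong diagonal, weak coupling:
     [0.02, 0.98, 0.02],              # eigenvalues 0.98 + 0.02*sqrt(3), 0.98,
     [0.00, 0.02, 0.96]]              # 0.98 - 0.02*sqrt(3) -> ratio ~0.966
k_eff, source = power_iteration(A)
print(round(k_eff, 4))                # 1.0146, the fundamental eigenvalue
```

    In the fission matrix method, the matrix entries are themselves tallied during the random walks, so the noisy eigenproblem above is solved on a spatially discretized, statistically estimated kernel.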

  4. Monte Carlo 2000 Conference : Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications

    CERN Document Server

    Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro


    This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving, in particular, the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields, ranging from particle to medical physics.

  5. Monte Carlo Solutions for Blind Phase Noise Estimation

    Directory of Open Access Journals (Sweden)

    Çırpan Hakan


    This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN) channels. The main contributions of the paper are (i) the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii) the interpretation of existing Monte Carlo solutions within this generic framework, and (iii) the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on the one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.

  6. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp


    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  7. The computation of Greeks with multilevel Monte Carlo


    Sylvestre Burgos; M. B. Giles


    In mathematical finance, the sensitivities of option prices to various market parameters, also known as the “Greeks”, reflect the exposure to different sources of risk. Computing these is essential to predict the impact of market moves on portfolios and to hedge them adequately. This is commonly done using Monte Carlo simulations. However, obtaining accurate estimates of the Greeks can be computationally costly. Multilevel Monte Carlo offers complexity improvements over standard Monte Carlo ...

  8. Recommender engine for continuous-time quantum Monte Carlo methods (United States)

    Huang, Li; Yang, Yi-feng; Wang, Lei


    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  9. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA


    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio

  10. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.


    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high order moments of the PSD requires a dramatically larger number of MC particles. © 2014 Kun Zhou et al.

  11. Monte Carlo Production Management at CMS

    CERN Document Server

    Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni


    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, to assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + Java) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabi...

  12. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo


    In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...

  13. Algorithmic differentiation of diffusion Monte Carlo (United States)

    Poole, Tom; Foulkes, Matthew; Spencer, James; Haynes, Peter


    Algorithmic differentiation (AD) is a programming technique for the efficient evaluation of the derivatives of a computed function. This approach proceeds via the application of the chain rule to the lines of source code that constitute the mathematical operation of a computer program, allowing access to the derivatives of functions that lack an algebraic representation. Another important element of the AD method is that the ``reverse mode'' of operation yields the derivative of a function output with respect to all inputs, simultaneously, in a small multiple of the computational cost of evaluating the underlying function in isolation. These features make this method particularly applicable to the diffusion Monte Carlo (DMC) algorithm where, despite a number of recent advances in the area, total energy derivatives have remained problematic. Here we present results illustrating accurate DMC energy derivatives with respect to both the input wave function parameters and the nuclear positions, with the former enabling DMC wave function optimization and the latter facilitating DMC molecular dynamics simulations.
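
    The reverse-mode idea described here (one backward pass yields the derivatives with respect to all inputs simultaneously) can be sketched with a minimal operator-overloading tape. This is a generic illustration, not the authors' DMC implementation:

```python
import math

class Var:
    """Minimal reverse-mode AD: each operation records its local derivatives,
    and backward() applies the chain rule from the output to all inputs."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent_var, local_derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def sin(self):
        return Var(math.sin(self.value), ((self, math.cos(self.value)),))

    def backward(self, seed=1.0):
        self.grad += seed                      # accumulate over shared uses
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, y) = x*y + sin(x);  df/dx = y + cos(x),  df/dy = x
x, y = Var(2.0), Var(3.0)
f = x * y + x.sin()
f.backward()
print(x.grad, y.grad)  # 3 + cos(2) ~ 2.5839, and 2.0
```

    One backward pass fills in both gradients at once, which is the property that makes the approach attractive when a DMC energy depends on many wave function parameters.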

  14. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan


    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
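
    As background to the multilevel idea (though not the CMLMC algorithm itself), the sketch below shows a plain MLMC estimator for E[S_T] of a geometric Brownian motion, with coarse/fine Euler paths coupled by shared Brownian increments and sample sizes decreasing on the more expensive levels. All numerical choices are illustrative assumptions:

```python
import math
import random

def mlmc_gbm_mean(levels=4, n0=20_000, seed=0):
    """Plain multilevel MC for E[S_T] of dS = r*S*dt + sigma*S*dW (Euler).
    Level l uses 2**l time steps; the telescoping sum
    E[P_0] + sum_l E[P_l - P_(l-1)] is estimated with coupled paths."""
    rng = random.Random(seed)
    S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

    def euler_pair(level):
        nf = 2 ** level                       # fine path: 2**level steps
        dt = T / nf
        sf = sc = S0
        dw_sum = 0.0
        for i in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            sf += r * sf * dt + sigma * sf * dw
            dw_sum += dw
            if level > 0 and i % 2 == 1:      # coarse path: half the steps,
                sc += r * sc * (2 * dt) + sigma * sc * dw_sum  # same noise
                dw_sum = 0.0
        return sf, sc

    estimate = 0.0
    for level in range(levels):
        n = max(n0 // 4 ** level, 100)        # fewer samples on costly levels
        acc = 0.0
        for _ in range(n):
            sf, sc = euler_pair(level)
            acc += sf if level == 0 else sf - sc
        estimate += acc / n
    return estimate

print(mlmc_gbm_mean())  # close to the exact E[S_T] = exp(0.05) ~ 1.0513
```

    Because the coupled correction terms have small variance, most samples can be spent on the cheap coarse level; CMLMC additionally calibrates the per-level cost, variance, and bias models to choose the hierarchy adaptively.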

  15. Monte Carlo simulations for heavy ion dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Geithner, O.


    Water-to-air stopping power ratio (s{sub w,air}) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variables. The lowest particle transport specific energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening its range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of stopping power ratios, s{sub w,air}, the influence of fragments and I-values on s{sub w,air} for carbon ion beams was investigated. The value of s{sub w,air} deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398 for an energy of 50 MeV/u. (orig.)

  16. Accelerated Monte Carlo Methods for Coulomb Collisions (United States)

    Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce


    We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution to collisional plasma problems from O(ε⁻³), for the standard state-of-the-art Langevin and binary collision algorithms, to a theoretically optimal O(ε⁻²) scaling, when used in conjunction with an underlying Milstein discretization of the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.

  17. Monte-Carlo simulation-based statistical modeling

    CERN Document Server

    Chen, John


    This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.

  18. Optix: A Monte Carlo scintillation light transport code

    Energy Technology Data Exchange (ETDEWEB)

    Safari, M.J., E-mail: [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Ghal-Eh, N. [School of Physics, Damghan University, PO Box 36716-41167, Damghan (Iran, Islamic Republic of); Davani, F. Abbasi [Nuclear Engineering Department, Shahid Beheshti University, PO Box 1983963113, Tehran (Iran, Islamic Republic of)


    The paper reports on the capabilities of Monte Carlo scintillation light transport code Optix, which is an extended version of previously introduced code Optics. Optix provides the user a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on the comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. Besides, some extensive comparisons have been made against the tracking abilities of general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreements. -- Highlights: • Monte Carlo simulation of scintillation light transport in 3D geometry. • Evaluation of angular distribution of detected photons. • Benchmark studies to check the accuracy of Monte Carlo simulations.

  19. Bayesian phylogeny analysis via stochastic approximation Monte Carlo. (United States)

    Cheon, Sooyoung; Liang, Faming


    Monte Carlo methods have received much attention in the recent literature on phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time.

  20. GE781: a Monte Carlo package for fixed target experiments (United States)

    Davidenko, G.; Funk, M. A.; Kim, V.; Kuropatkin, N.; Kurshetsov, V.; Molchanov, V.; Rud, S.; Stutte, L.; Verebryusov, V.; Zukanovich Funchal, R.

    The Monte Carlo package for the fixed target experiment B781 at Fermilab, a third generation charmed baryon experiment, is described. This package is based on GEANT 3.21, ADAMO database and DAFT input/output routines.

  1. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung


    Monte Carlo methods have received much attention in the recent literature on phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  2. Multiple Monte Carlo Testing with Applications in Spatial Point Processes

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute

    The rank envelope test (Myllymäki et al., Global envelope tests for spatial processes, arXiv:1307.0239 [stat.ME]) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: 1) a few univariate Monte Carlo tests, 2) a Monte Carlo test with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and is accompanied by a p-value and a graphical interpretation which shows which subtest or which distances of the used test function(s) lead to the rejection at the prescribed significance level of the test. Examples of null hypotheses from point process and random set statistics are used to demonstrate the strength of the rank envelope test. The examples include goodness-of-fit test with several test functions, goodness-of-fit test...

  3. On the Markov Chain Monte Carlo (MCMC) method

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
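
    The core of MCMC, generating samples from a distribution specified only indirectly (here, up to an unknown normalizing constant), can be sketched with a random-walk Metropolis sampler; the target and tuning choices below are illustrative:

```python
import math
import random

def metropolis(logp, x0=0.0, n=50_000, step=1.0, seed=0):
    """Random-walk Metropolis: sample from a density known only up to a
    normalizing constant, supplied through its log-density logp."""
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    samples = []
    for _ in range(n):
        y = x + step * rng.gauss(0.0, 1.0)        # symmetric proposal
        lq = logp(y)
        # Accept with probability min(1, p(y)/p(x)); the constant cancels.
        if lq >= lp or rng.random() < math.exp(lq - lp):
            x, lp = y, lq
        samples.append(x)
    return samples

# Target: unnormalized standard normal, logp(x) = -x^2/2.
s = metropolis(lambda x: -0.5 * x * x)
mean = sum(s) / len(s)
var = sum(v * v for v in s) / len(s) - mean * mean
print(round(mean, 2), round(var, 2))  # near 0.0 and 1.0
```

    The chain's stationary distribution is the target, so long-run averages of the (correlated) samples converge to the desired expectations.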

  4. Accelerating Monte Carlo Renderers by Ray Histogram Fusion

    Directory of Open Access Journals (Sweden)

    Mauricio Delbracio


    This paper details the recently introduced Ray Histogram Fusion (RHF) filter for accelerating Monte Carlo renderers [M. Delbracio et al., Boosting Monte Carlo Rendering by Ray Histogram Fusion, ACM Transactions on Graphics, 33 (2014)]. In this filter, each pixel in the image is characterized by the colors of the rays that reach its surface. Pixels are compared using a statistical distance on the associated ray color distributions. Based on this distance, the filter decides whether two pixels can share their rays or not. The RHF filter is consistent: as the number of samples increases, more evidence is required to average two pixels. The algorithm provides a significant gain in PSNR or, equivalently, accelerates the rendering process by using many fewer Monte Carlo samples without observable bias. Since the RHF filter depends only on the Monte Carlo sample color values, it can be naturally combined with all rendering effects.

  5. Self-learning Monte Carlo method: Continuous-time algorithm (United States)

    Nagai, Yuki; Shen, Huitao; Qi, Yang; Liu, Junwei; Fu, Liang


    The recently introduced self-learning Monte Carlo method is a general-purpose numerical method that speeds up Monte Carlo simulations by training an effective model to propose uncorrelated configurations in the Markov chain. We implement this method in the framework of a continuous-time Monte Carlo method with an auxiliary field in quantum impurity models. We introduce and train a diagram generating function (DGF) to model the probability distribution of auxiliary field configurations in continuous imaginary time, at all orders of diagrammatic expansion. By using DGF to propose global moves in configuration space, we show that the self-learning continuous-time Monte Carlo method can significantly reduce the computational complexity of the simulation.


    NARCIS (Netherlands)


    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
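
For reference, the classical Hill estimator that the proposed bootstrap method builds on can be written compactly (the variable names are ours):

```python
import math, random

def hill_estimator(data, k):
    """Classical Hill estimator of the tail index 1/alpha,
    computed from the k largest order statistics."""
    xs = sorted(data, reverse=True)
    x_k1 = xs[k]  # the (k+1)-th largest value acts as the threshold
    return sum(math.log(xs[i] / x_k1) for i in range(k)) / k
```

For Pareto-tailed data with index alpha, the estimate converges to 1/alpha as the sample grows; per the abstract, the paper's estimator is a bootstrap variant of this kernel.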

  7. Studies of Monte Carlo Modelling of Jets at ATLAS

    CERN Document Server

    Kar, Deepak; The ATLAS collaboration


    The predictions of different Monte Carlo generators for QCD jet production, both in multijets and for jets produced in association with other objects, are presented. Recent improvements in showering Monte Carlos provide new tools for assessing systematic uncertainties associated with these jets.  Studies of the dependence of physical observables on the choice of shower tune parameters and new prescriptions for assessing systematic uncertainties associated with the choice of shower model and tune are presented.


    Directory of Open Access Journals (Sweden)



    Full Text Available Interest in radio frequency (rf) discharges has grown tremendously in recent years due to their importance in microelectronic technologies. Especially interesting are the properties of discharges in electronegative gases, which are most frequently used in technological applications. Monte Carlo simulation has become increasingly important as a simulation tool, particularly in the area of plasma physics. In this work, we present some detailed properties of rf plasmas obtained by a Monte Carlo simulation code, in SF6

  9. NUEN-618 Class Project: Actually Implicit Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Vega, R. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunner, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)


    This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method which is known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, which will be shown in this section as well.

  10. Shift: A Massively Parallel Monte Carlo Radiation Transport Package

    Energy Technology Data Exchange (ETDEWEB)

    Pandya, Tara M [ORNL; Johnson, Seth R [ORNL; Davidson, Gregory G [ORNL; Evans, Thomas M [ORNL; Hamilton, Steven P [ORNL


    This paper discusses the massively-parallel Monte Carlo radiation transport package, Shift, developed at Oak Ridge National Laboratory. It reviews the capabilities, implementation, and parallel performance of this code package. Scaling results demonstrate very good strong and weak scaling behavior of the implemented algorithms. Benchmark results from various reactor problems show that Shift results compare well to other contemporary Monte Carlo codes and experimental results.

  11. Modern analysis of ion channeling data by Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Nowicki, Lech [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland)]; Turos, Andrzej [Institute of Electronic Materials Technology, Wolczynska 133, 01-919 Warsaw (Poland)]; Ratajczak, Renata [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland)]; Stonert, Anna [Andrzej Soltan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland)]; Garrido, Frederico [Centre de Spectrometrie Nucleaire et Spectrometrie de Masse, CNRS-IN2P3-Universite Paris-Sud, 91405 Orsay (France)]


    The basic scheme of Monte Carlo simulation of ion channeling spectra is reformulated in terms of statistical sampling. The McChasy simulation code is described, and two examples of its application are presented: calculation of the projectile flux in a uranium dioxide crystal, and defect analysis for an ion-implanted InGaAsP/InP superlattice. Virtues and pitfalls of defect analysis using Monte Carlo simulations are discussed.

  12. Herwig: The Evolution of a Monte Carlo Simulation

    CERN Multimedia

    CERN. Geneva


    Monte Carlo event generation has seen significant developments in the last 10 years, starting with preparation for the LHC and continuing during the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to discuss these developments, focusing on the Herwig(++) event generator. I will conclude by presenting the current status of event generation together with some results from the forthcoming new version of Herwig, Herwig 7.

  13. Monte Carlo simulation in SPECT: a comparison of two approaches (United States)

    Sled, John G.; Celler, Anna; Barney, J. Scott; Ivanovic, Marija


    Monte Carlo methods play an important role in medical imaging research. Direct analog Monte Carlo simulations can be very accurate but require considerable computational resources. Variance reduction techniques may offer a solution to this problem. In this paper we present a comparison of expected values of standard quantities of interest for SPECT using these two simulation methods. The effect of variance reduction on the statistical characteristics of the simulated data is also investigated.
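
As a toy illustration of the variance-reduction idea (not the SPECT-specific techniques compared in the paper), the snippet below estimates a small tail probability both by analog sampling and by importance sampling with a shifted proposal; all numbers are illustrative:

```python
import math, random

def analog_estimate(n):
    """Analog Monte Carlo: count direct hits of the rare event Z > 4."""
    hits = sum(1 for _ in range(n) if random.gauss(0, 1) > 4)
    return hits / n

def importance_estimate(n, shift=4.0):
    """Sample from N(shift, 1) and reweight each hit by the density
    ratio phi(z) / phi(z - shift) = exp(-shift*z + shift^2/2)."""
    total = 0.0
    for _ in range(n):
        z = random.gauss(shift, 1)
        if z > 4:
            total += math.exp(-shift * z + 0.5 * shift * shift)
    return total / n
```

With the same sample budget, the analog estimator usually scores zero or a handful of hits, while the reweighted estimator is accurate to a few percent; the reweighting changes the variance of the scores but not their expectation, which is the kind of statistical change the comparison above examines.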

  14. Monte Carlo modeling of spatial coherence: free-space diffraction


    Fischer, David G.; Prahl, Scott A.; Duncan, Donald D.


    We present a Monte Carlo method for propagating partially coherent fields through complex deterministic optical systems. A Gaussian copula is used to synthesize a random source with an arbitrary spatial coherence function. Physical optics and Monte Carlo predictions of the first- and second-order statistics of the field are shown for coherent and partially coherent sources for free-space propagation, imaging using a binary Fresnel zone plate, and propagation through a limiting aperture. Excel...

  15. Study of the Transition Flow Regime using Monte Carlo Methods (United States)

    Hassan, H. A.


    This NASA Cooperative Agreement final report presents a study of the transition flow regime using Monte Carlo methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  16. Models of network reliability analysis, combinatorics, and Monte Carlo

    CERN Document Server

    Gertsbakh, Ilya B


    Unique in its approach, Models of Network Reliability: Analysis, Combinatorics, and Monte Carlo provides a brief introduction to Monte Carlo methods along with a concise exposition of reliability theory ideas. From there, the text investigates a collection of principal network reliability models, such as terminal connectivity for networks with unreliable edges and/or nodes, network lifetime distribution in the process of its destruction, network stationary behavior for renewable components, importance measures of network elements, reliability gradient, and network optimal reliability synthesis
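
The simplest of these models, terminal connectivity for a network with unreliable edges, has a direct crude Monte Carlo estimator; the example network and parameters below are ours:

```python
import random
from collections import deque

def connected(n_nodes, up_edges, s, t):
    """BFS over the edges that survived this trial."""
    adj = [[] for _ in range(n_nodes)]
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return False

def terminal_reliability(n_nodes, edges, p, s, t, n_trials):
    """Crude Monte Carlo: keep each edge up with probability p and
    count the trials in which s and t remain connected."""
    hits = 0
    for _ in range(n_trials):
        up = [e for e in edges if random.random() < p]
        hits += connected(n_nodes, up, s, t)
    return hits / n_trials
```

For two parallel s-t edges that are each up with probability 0.5, the estimate approaches the exact reliability 1 - 0.5^2 = 0.75.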

  17. Monte Carlo implementation of polarized hadronization (United States)

    Matevosyan, Hrayr H.; Kotzinian, Aram; Thomas, Anthony W.


    We study the polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for elementary q →q'+h transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank 2. Further, we demonstrate that all the current spectator-type model calculations of the leading twist quark-to-quark TMD SFs violate the positivity constraints, and we propose a quark model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of unphysical azimuthal modulations of the computed polarized FFs, and by precisely reproducing the earlier derived explicit results for rank-2 pions. Finally, we present the full results for pion unpolarized and Collins FFs, as well as the corresponding analyzing powers from high statistics MC simulations with a large number of produced hadrons for two different model input elementary SFs. The results for both sets of input functions exhibit the same general features of an opposite signed Collins function for favored and unfavored channels at large z and, at the same time, demonstrate the flexibility of the quark-jet framework by producing significantly different dependences of the results at mid to low z for the two model inputs.

  18. Monte Carlo Volcano Seismic Moment Tensors (United States)

    Waite, G. P.; Brill, K. A.; Lanza, F.


    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  19. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki


    Experimental design is very important since experiments are often resource-intensive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as concentration of measure, than the Laplace method requires. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of Multilevel Monte Carlo, the Laplace approximation, and direct double-loop Monte Carlo.
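
The nested structure of the expected information gain is easiest to see in the direct double-loop estimator that the Multilevel method accelerates. The linear-Gaussian toy model below (y = theta + eps) is our own illustration, chosen because its exact gain, 0.5*ln(1 + sd_theta^2/sd_eps^2), is known in closed form:

```python
import math, random

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def eig_double_loop(n_outer, n_inner, sd_theta=1.0, sd_eps=0.5):
    """Nested Monte Carlo estimate of the expected information gain
    E[ln p(y|theta) - ln p(y)] for the toy model y = theta + eps."""
    total = 0.0
    for _ in range(n_outer):
        theta = random.gauss(0, sd_theta)
        y = theta + random.gauss(0, sd_eps)
        like = gauss_pdf(y, theta, sd_eps)
        # inner loop: Monte Carlo estimate of the evidence p(y)
        evidence = sum(gauss_pdf(y, random.gauss(0, sd_theta), sd_eps)
                       for _ in range(n_inner)) / n_inner
        total += math.log(like) - math.log(evidence)
    return total / n_outer
```

Each outer sample needs its own inner-loop evidence estimate, so the cost is n_outer * n_inner likelihood evaluations, which is the nesting that the Multilevel method attacks by distributing samples optimally across levels.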

  20. Monte Carlo systems used for treatment planning and dose verification

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo [Universitaetsklinikum Essen, NCTeam, Strahlenklinik, Essen (Germany); Rodriguez, Miguel [Centro Medico Paitilla, Balboa (Panama); Lallena, Antonio M. [Universidad de Granada, Departamento de Fisica Atomica, Molecular y Nuclear, Granada (Spain)


    General-purpose radiation transport Monte Carlo codes have been used for several decades for estimation of the absorbed dose distribution in external photon and electron beam radiotherapy patients. Results obtained with these codes are usually more accurate than those provided by treatment planning systems based on non-stochastic methods. Traditionally, absorbed dose computations based on general-purpose Monte Carlo codes have been used only for research, owing to the difficulties associated with setting up a simulation and the long computation time required. To take advantage of radiation transport Monte Carlo codes applied to routine clinical practice, researchers and private companies have developed treatment planning and dose verification systems that are partly or fully based on fast Monte Carlo algorithms. This review presents a comprehensive list of the currently existing Monte Carlo systems that can be used to calculate or verify an external photon and electron beam radiotherapy treatment plan. Particular attention is given to those systems that are distributed, either freely or commercially, and that do not require programming tasks from the end user. These systems are compared in terms of features and the simulation time required to compute a set of benchmark calculations. (orig.)

  1. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.


    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations.
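
As a taste of the "random numbers and sampling" and "collision physics" lectures, path lengths to the next collision are drawn by inverse-transform sampling of an exponential distribution; the cross-section value below is arbitrary:

```python
import math, random

def sample_exponential(rate):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(1 - U)/rate follows an Exponential(rate) distribution."""
    u = random.random()
    return -math.log(1.0 - u) / rate

# distance to the next collision for a total cross section of 0.5 per cm
random.seed(0)
paths = [sample_exponential(0.5) for _ in range(50000)]
mean_free_path = sum(paths) / len(paths)  # should approach 1/0.5 = 2 cm
```

The same recipe (invert the cumulative distribution at a uniform random number) underlies most of the sampling routines discussed in the class.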

  2. Copper precipitation in iron: a comparison between metropolis Monte Carlo and lattice kinetic Monte Carlo methods

    CERN Document Server

    Khrushcheva, O; Malerba, L; Becquart, C S; Domain, C; Hou, M


    Several variants are possible in the suite of programs forming multiscale predictive tools to estimate the yield strength increase caused by irradiation in RPV steels. For instance, at the atomic scale, both the Metropolis and the lattice kinetic Monte Carlo methods (MMC and LKMC respectively) allow predicting copper precipitation under irradiation conditions. Since these methods are based on different physical models, the present contribution discusses their consistency on the basis of a realistic case study. A cascade debris in iron containing 0.2% of copper was modelled by molecular dynamics with the DYMOKA code, which is part of the REVE suite. We use this debris as input for both the MMC and the LKMC simulations. Thermal motion and lattice relaxation can be avoided in the MMC, making the model closer to the LKMC (LMMC method). The predictions and the complementarity of the three methods for modelling the same phenomenon are then discussed.

  3. Numerical integration of detector response functions via Monte Carlo simulations (United States)

    Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.


    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
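
The speed-up reported here comes from reusing a response function rather than re-simulating. A minimal sketch, assuming the response has been tabulated once as a channel-by-energy matrix (our own toy representation, not the Chi-Nu implementation), is a single matrix-vector fold:

```python
def fold_spectrum(response, source):
    """Fold an incident spectrum through a precomputed response matrix.

    response[i][j] = probability that a particle in incident-energy bin j
    produces a count in detector channel i (tabulated once from Monte
    Carlo simulations); source[j] = incident counts in energy bin j.
    """
    n_chan = len(response)
    return [sum(response[i][j] * source[j] for j in range(len(source)))
            for i in range(n_chan)]
```

Generating a new output spectrum is then one multiply per (channel, bin) pair instead of a fresh transport simulation, which is where speed-up factors of order 1000 can come from.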

  4. Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid


    This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. Standard Monte Carlo (SMC) simulations may conceptually be used for this purpose as an alternative to the popular Peaks-Over-Threshold (POT) method. However, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy. … is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated …

  5. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt


    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers Classical as well as Quantum Monte Carlo methods. Furthermore, a new chapter on the sampling of free-energy landscapes has been added. To help students in their work, a special web server has been installed to host programs and discussion groups. Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...

  6. Exploring cluster Monte Carlo updates with Boltzmann machines (United States)

    Wang, Lei


    Boltzmann machines are physics informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applying the Boltzmann machines back to physics, they are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.

  7. Skin image reconstruction using Monte Carlo based color generation (United States)

    Aizu, Yoshihisa; Maeda, Takaaki; Kuwahara, Tomohiro; Hirao, Tetsuji


    We propose a novel method of skin image reconstruction based on color generation using Monte Carlo simulation of spectral reflectance in a nine-layered skin tissue model. The RGB image and spectral reflectance of human skin are obtained by an RGB camera and a spectrophotometer, respectively. The skin image is separated into a color component and a texture component. The measured spectral reflectance is used to evaluate the scattering and absorption coefficients in each of the nine layers, which are necessary for the Monte Carlo simulation. Various skin colors are generated by Monte Carlo simulation of spectral reflectance under given conditions for the nine-layered skin tissue model. The newly generated color component is then combined with the original texture component to reconstruct the skin image. The method is promising for applications in the fields of dermatology and cosmetics.

  8. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H


    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  9. Monte Carlo simulation of gas-flow using MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Matthes, W.K. [21027 Ispra, Via Francia 146 (Italy)]


    The simulation of the flow of rarefied gases by Monte Carlo has long been established and goes by the name DSMC (Direct Simulation Monte Carlo). The theory, applications and references are well documented in monographs on this subject, e.g., Bird [Bird, G.A., 1998. Molecular Gas Dynamics and the Direct Simulation of Gas Flows, Clarendon Press, Oxford], Cercignani [Cercignani, C., 2000. Rarefied Gas Dynamics, Cambridge University Press, Cambridge]. However, as most applications are restricted to two-dimensional flows only, we want to demonstrate that the MCNP code (see [Briesmeister, J.F., 1986. MCNP-A General Monte Carlo Code for Neutron and Photon Transport, Version 3A, Los Alamos National Laboratory]), after a few modifications, provides a very flexible tool to investigate the flow (and reactions) of multicomponent gas-mixtures in complicated three-dimensional structures.

  10. Monte Carlo simulation in statistical physics an introduction

    CERN Document Server

    Binder, Kurt


    The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations.

  11. Stabilization effect of fission source in coupled Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, Borge; Dufek, Jan [Div. of Nuclear Reactor Technology, KTH Royal Institute of Technology, AlbaNova University Center, Stockholm (Sweden)


    A fission source can act as a stabilization element in coupled Monte Carlo simulations. We have observed this while studying numerical instabilities in nonlinear steady-state simulations performed by a Monte Carlo criticality solver that is coupled to a xenon feedback solver via fixed-point iteration. While fixed-point iteration is known to be numerically unstable for some problems, resulting in large spatial oscillations of the neutron flux distribution, we show that it is possible to stabilize it by reducing the number of Monte Carlo criticality cycles simulated within each iteration step. While global convergence is ensured, development of any possible numerical instability is prevented by not allowing the fission source to converge fully within a single iteration step, which is achieved by setting a small number of criticality cycles per iteration step. Moreover, under these conditions, the fission source may converge even faster than in criticality calculations with no feedback, as we demonstrate in our numerical test simulations.
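
The stabilization mechanism resembles under-relaxation of an unstable fixed-point map. The scalar toy below is our own analogy, not the coupled neutronics/xenon solver: damping the update (here explicitly, in the paper implicitly by limiting criticality cycles per step) turns a diverging iteration into a converging one.

```python
def fixed_point(g, x0, alpha, n_iter):
    """Damped fixed-point iteration x <- (1 - alpha)*x + alpha*g(x).

    alpha = 1 is plain fixed-point iteration; alpha < 1 mimics not letting
    the solution converge fully within one feedback step.
    """
    x = x0
    for _ in range(n_iter):
        x = (1 - alpha) * x + alpha * g(x)
    return x

# toy feedback map with slope -1.5 at its fixed point x* = 1
g = lambda x: 1.0 - 1.5 * (x - 1.0)
```

With the map's slope of -1.5 at the fixed point, plain iteration (alpha = 1) diverges, while alpha = 0.5 gives an effective slope of -0.25 and converges.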

  12. Fixed forced detection for fast SPECT Monte-Carlo simulation. (United States)

    Cajgfinger, Thomas; Rit, Simon; Letang, Jean Michel; Halty, Adrien; Sarrut, David


    Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and the number detected in the collimator. This work proposes a method to accelerate the simulations based on Fixed Forced Detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel, weighted by the probability of emission (or scattering) and transmission to this pixel. The method was evaluated with numerical phantoms and on patient images. The differences we obtained with respect to analog Monte Carlo were lower than the statistical uncertainty. The overall computing time gain can reach up to 5 orders of magnitude. Source code and examples are available in the Gate V8.0 release. © 2017 Institute of Physics and Engineering in Medicine.

  13. Applicability of quasi-Monte Carlo for lattice systems

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [Berlin Humboldt-Univ. (Germany). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC]; Hartung, Tobias [King's College London (United Kingdom). Dept. of Mathematics]; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC]; Leovey, Hernan; Griewank, Andreas [Berlin Humboldt-Univ. (Germany). Dept. of Mathematics]; Mueller-Preussker, Michael [Berlin Humboldt-Univ. (Germany). Dept. of Physics]


    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillators and verified an improved error scaling of all investigated observables in both cases.
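
The scaling difference is easy to reproduce in one dimension with the base-2 van der Corput sequence, the simplest low-discrepancy construction (the integrand and sample size are our choices, not the lattice observables of the project):

```python
import random

def van_der_corput(n):
    """Base-2 radical inverse: the n-th van der Corput point in [0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        denom *= 2.0
        n, r = divmod(n, 2)
        q += r / denom
    return q

def integrate(f, points):
    """Equal-weight quadrature over a point set."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x  # exact integral over [0, 1] is 1/3
N = 4096
qmc_est = integrate(f, [van_der_corput(i) for i in range(N)])
random.seed(0)
mc_est = integrate(f, [random.random() for _ in range(N)])
```

At N = 4096 the quasi-random estimate is accurate to roughly 1/(2N), while the pseudorandom error shrinks only like N^(-1/2), in line with the scalings quoted above.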

  14. Application of biasing techniques to the contributon Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Dubi, A.; Gerstl, S.A.W.


    Recently, a new Monte Carlo method called the Contributon Monte Carlo Method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

  15. A Semi-Analytic Monte Carlo Method for Optimization Problems (United States)

    Sale, Kenneth E.


    Presently available Monte Carlo radiation transport codes require all aspects of a problem to be fixed, so that optimizing a system involves running the code multiple times, once for each alternative value of the parameters that characterize the system (e.g. thickness or shape of an attenuator). By combining the standard Monte Carlo (Lux, Ivan and Koblinger, Laszlo, Monte Carlo Particle Transport Methods: Neutron and Photon Calculations, CRC Press, 1991) algorithm with the Next-Event point flux estimator and a computer algebra system, it is possible to calculate the flux at a point as a function of the parameters describing the problem rather than as a single number for one specific set of parameter values. The calculated flux function is a perturbative estimate about the default values of the problem parameters. Parametric descriptions can be used in the geometry or material specifications. Several examples will be presented.

  16. RNA folding kinetics using Monte Carlo and Gillespie algorithms. (United States)

    Clote, Peter; Bayegan, Amir H


    RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny ([Formula: see text]20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to [Formula: see text] times that of the Gillespie algorithm, where [Formula: see text] denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to the MFPT computed by the Gillespie algorithm multiplied by [Formula: see text]; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence.
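
    The Gillespie algorithm referenced above can be sketched on a toy three-state landscape, a hypothetical U ⇌ I → F network standing in for RNA macrostates; the rates are made up so the analytic mean first passage time from U to F is 2.0. This is not the authors' software.

```python
import random

# Hypothetical rate network: U <-> I -> F.
RATES = {"U": {"I": 2.0}, "I": {"U": 1.0, "F": 1.0}}

def gillespie_fpt(rng):
    """One Gillespie trajectory; first passage time from U to the folded state F."""
    state, t = "U", 0.0
    while state != "F":
        moves = RATES[state]
        total = sum(moves.values())
        t += rng.expovariate(total)          # exponential dwell time in this state
        r, acc = rng.random() * total, 0.0   # next state chosen proportional to rate
        for nxt, k in moves.items():
            acc += k
            if r <= acc:
                state = nxt
                break
    return t

rng = random.Random(0)
n = 20000
mfpt = sum(gillespie_fpt(rng) for _ in range(n)) / n
print(mfpt)  # the analytic mean first passage time for these rates is 2.0
```

A fixed-step Monte Carlo simulation of the same network would instead advance a constant time increment per move, which is exactly where the degree-dependent time rescaling discussed in the abstract enters.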

  17. Monte Carlo simulations applied to conjunctival lymphoma radiotherapy treatment

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo; Sauerwein, Wolfgang [Universitaetsklinikum Essen (Germany). NCTeam, Strahlenklinik; Palanco-Zamora, Ricardo [Karolinska University Hospital, Stockholm (Sweden); Steuhl, Klaus-Peter [Universitaetsklinikum Essen (Germany). Klinik fuer Erkrankungen des vorderen Augenabschnittes; Bornfeld, Norbert [Universitaetsklinikum Essen (Germany). Klinik fuer Erkrankungen des hinteren Augenabschnittes


    Small radiation fields are increasingly applied in clinical routine. In particular, they are necessary for the treatment of eye tumors. However, available treatment planning systems do not calculate the absorbed dose with the desired accuracy in the presence of small fields. Absorbed dose estimations obtained with Monte Carlo methods have the required accuracy for clinical applications, but the exceedingly long computation times associated with them hinder their routine use. In this article, a code for automatic Monte Carlo simulation of linacs and an application in the treatment of conjunctival lymphoma are presented. Simulations of clinical linear accelerators were performed with the general-purpose radiation transport Monte Carlo code penelope. Accelerator geometry files, in electron mode, were generated with the program AutolinaC. The Monte Carlo simulation of an annular 6 MeV electron field used for the treatment of the conjunctival lymphoma yielded absorbed dose results statistically compatible with experimental measurements. In this simulation, 2% standard statistical uncertainty was reached in the same time employed by a hybrid Monte Carlo commercial code (eMC); however, eMC showed discrepancies of up to 7% on the absorbed dose with respect to experimental data. Results obtained with the analytic algorithm Pencil Beam Convolution differed from experimental data by 10% for this case. Owing to the systematic application of variance-reduction techniques, it is possible to accurately estimate the absorbed dose in patient images, using Monte Carlo methods, in times within clinical routine requirements. The program AutolinaC allows systematic use of these variance-reduction techniques within the code penelope. (orig.)

  18. Superposition dose calculation incorporating Monte Carlo generated electron track kernels. (United States)

    Keall, P J; Hoban, P W


    The superposition/convolution method and the transport of pregenerated Monte Carlo electron track data have been combined into the Super-Monte Carlo (SMC) method, an accurate 3-D x-ray dose calculation algorithm. The primary dose (dose due to electrons ejected by primary photons) is calculated by transporting pregenerated (in water) Monte Carlo electron tracks from each primary photon interaction site, weighted by the terma for that site. The length of each electron step is scaled by the inverse of the density of the medium at the beginning of the step. Because the density scaling of the electron tracks is performed for each individual transport step, the limitations of the macroscopic scaling of kernels (in the superposition algorithm) are overcome. This time-consuming step-by-step transport is only performed for the primary dose calculation, where current superposition methods are most lacking. The scattered dose (dose due to electrons set in motion by scattered photons) is calculated by superposition. In both a water-lung-water phantom and a two lung-block phantom, SMC dose distributions are more consistent with Monte Carlo generated dose distributions than are superposition dose distributions, especially for small fields and high energies-for an 18-MV, 5 X 5-cm(2) beam, the central axis dose discrepancy from Monte Carlo is reduced from 4.5% using superposition to 1.5% using SMC. The computation time for this technique is approximately 2 h (depending on the simulation history), 20 times slower than superposition, but 15 times faster than a full Monte Carlo simulation (on our platform).
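
    The step-by-step density scaling of pregenerated tracks can be sketched in one dimension; the phantom layout and step lengths below are illustrative placeholders, not the authors' data.

```python
def transport(track_steps_cm, density_at):
    """Apply pregenerated (in-water) 1-D step lengths, scaling each by 1/density."""
    x = 0.0
    for step in track_steps_cm:
        rho = density_at(x)      # density at the beginning of the step
        x += step / rho          # step length scales with the inverse density
    return x

def phantom(x):
    # water (1.0 g/cm^3) up to 2 cm, lung-like 0.25 g/cm^3 from 2 to 10 cm, water beyond
    return 0.25 if 2.0 <= x < 10.0 else 1.0

steps = [0.5] * 8                # a track with 4 cm total range in water
depth = transport(steps, phantom)
print(depth)                     # the low-density lung section stretches the track to 10 cm
```

Because the scaling happens per transport step rather than once per kernel, the track correctly lengthens only while it traverses the low-density region, which is the point of the SMC primary-dose calculation.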

  19. A study on the shielding element using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Jeong [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Shim, Jae Goo [Dept. of Radiologic Technology, Daegu Health College, Daegu (Korea, Republic of)


    In this research, we used Monte Carlo simulation to evaluate the shielding ability of individual elements, with the aim of developing a medical radiation shielding sheet that can replace the lead currently in use. Twenty-one elements were selected, mainly metallic elements with large atomic numbers, which are known to provide high shielding performance; because composite materials have recently improved shielding performance, weight reduction, processability, and activity were also taken into consideration. The simulations employed the Monte Carlo method. Simulating the shielding performance of each element, the shielding ratios were estimated to be highest for tungsten and gold, at 98.82% and 98.44%, respectively.

  20. Utilising Monte Carlo Simulation for the Valuation of Mining Concessions

    Directory of Open Access Journals (Sweden)

    Rosli Said


    Valuation involves the analyses of various input data to produce an estimated value. Since each input is itself often an estimate, there is an element of uncertainty in the input. This leads to uncertainty in the resultant output value. It is argued that a valuation must also convey information on the uncertainty, so as to be more meaningful and informative to the user. The Monte Carlo simulation technique can generate the information on uncertainty and is therefore potentially useful to valuation. This paper reports on the investigation that has been conducted to apply Monte Carlo simulation technique in mineral valuation, more specifically, in the valuation of a quarry concession.
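
    The kind of uncertainty information Monte Carlo adds to a valuation can be sketched as follows; all distributions and figures are hypothetical placeholders, not data from the paper.

```python
import random

rng = random.Random(7)

def simulate_value():
    # Toy concession valuation with uncertain inputs (all numbers illustrative).
    tonnage = rng.gauss(1_000_000, 100_000)   # recoverable tonnes
    price   = rng.gauss(25.0, 4.0)            # sale price, $ per tonne
    cost    = rng.gauss(15.0, 2.0)            # extraction cost, $ per tonne
    return tonnage * (price - cost)

values = sorted(simulate_value() for _ in range(10000))
mean = sum(values) / len(values)
p05, p95 = values[500], values[9499]          # empirical 5th and 95th percentiles
print(f"mean ≈ {mean:,.0f}, 90% interval ≈ [{p05:,.0f}, {p95:,.0f}]")
```

Instead of the single point estimate a deterministic valuation would give, the output is a distribution, so the valuer can report an interval alongside the expected value.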

  1. Monte Carlo simulations of electron photoemission from cesium antimonide (United States)

    Gupta, Pranav; Cultrera, Luca; Bazarov, Ivan


    We report on the results from semi-classical Monte Carlo simulations of electron photoemission (photoelectric emission) from cesium antimonide (Cs3Sb) and compare them with experimental results at 90 K and room temperature, with an emphasis on near-threshold photoemission properties. Interfacial effects, impurities, and electron-phonon coupling are central features of our Monte Carlo model. We use these simulations to predict photoemission properties at the ultracold cryogenic temperature of 20 K and to identify critical material parameters that need to be properly measured experimentally for reproducing the electron photoemission properties of Cs3Sb and other materials more accurately.

  2. Parton distribution functions in Monte Carlo factorisation scheme

    Energy Technology Data Exchange (ETDEWEB)

    Jadach, S.; Skrzypek, M. [Polish Academy of Sciences, Institute of Nuclear Physics, Krakow (Poland); Placzek, W. [Jagiellonian University, Marian Smoluchowski Institute of Physics, Krakow (Poland); Sapeta, S.; Siodmok, A. [Polish Academy of Sciences, Institute of Nuclear Physics, Krakow (Poland); CERN, Theoretical Physics Department, Geneva (Switzerland)


    A next step in the development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied by the complete description of parton distribution functions in a dedicated Monte Carlo factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions. (orig.)

  3. SIMDET—a parametric Monte Carlo for a TESLA detector (United States)

    Pohl, Martin; Schreiber, H. Jürgen


    We briefly describe the principles of operation of the program package SIMDET, a parametric Monte Carlo program to simulate the response of a detector for the TESLA linear collider. Main detector components are implemented according to the TESLA Conceptual Design report, with a tracking system, an electromagnetic and a hadronic calorimeter, a vertex and a luminosity detector. Using the results from the ab initio Monte Carlo program BRAHMS, track parameters and calorimetric deposits are treated in a realistic way. Pattern recognition is emulated using cross references between generated particles and detector response. An energy flow algorithm defines the output of the program. Further improvements and completions of SIMDET are also discussed.

  4. Hamiltonian Monte Carlo with Constrained Molecular Dynamics as Gibbs Sampling. (United States)

    Spiridon, Laurentiu; Minh, David D L


    Compared to fully flexible molecular dynamics, simulations of constrained systems can use larger time steps and focus kinetic energy on soft degrees of freedom. Achieving ergodic sampling from the Boltzmann distribution, however, has proven challenging. Using recent generalizations of the equipartition principle and Fixman potential, here we implement Hamiltonian Monte Carlo based on constrained molecular dynamics as a Gibbs sampling move. By mixing Hamiltonian Monte Carlo based on fully flexible and torsional dynamics, we are able to reproduce free energy landscapes of simple model systems and enhance sampling of macrocycles.
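
    A minimal, unconstrained Hamiltonian Monte Carlo sketch, targeting a standard normal with momentum refreshment as the Gibbs move, illustrates the basic machinery; the constrained-dynamics and Fixman-potential ingredients of the paper are not reproduced here.

```python
import math, random

rng = random.Random(1)

def leapfrog(q, p, eps, L):
    # Symplectic integrator for H = U(q) + p^2/2 with U(q) = q^2/2, so grad U = q.
    p -= 0.5 * eps * q
    for _ in range(L - 1):
        q += eps * p
        p -= eps * q
    q += eps * p
    p -= 0.5 * eps * q
    return q, p

def hmc_samples(n, eps=0.2, L=10):
    q, out = 0.0, []
    for _ in range(n):
        p = rng.gauss(0.0, 1.0)                  # momentum refreshment (Gibbs move)
        q_new, p_new = leapfrog(q, p, eps, L)
        dH = 0.5 * (q_new**2 + p_new**2) - 0.5 * (q**2 + p**2)
        if dH <= 0 or rng.random() < math.exp(-dH):   # Metropolis accept/reject
            q = q_new
        out.append(q)
    return out

samples = hmc_samples(20000)
m = sum(samples) / len(samples)
v = sum((s - m) ** 2 for s in samples) / len(samples)
print(m, v)  # ≈ 0 and ≈ 1 for a standard normal target
```

The Metropolis correction makes the sampler exact despite the discretization error of the leapfrog steps; mixing different dynamics, as the paper does, amounts to alternating such moves.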

  5. Monte Carlo Form-Finding Method for Tensegrity Structures (United States)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping


    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.
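
    The accept-if-lower stochastic search underlying such form-finding can be sketched on a toy problem: one free node tied by zero-rest-length springs of equal stiffness to three hypothetical fixed anchors, whose exact equilibrium is the anchors' centroid. This is not the authors' MCFF implementation.

```python
import random

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]   # hypothetical fixed anchor points

def energy(x, y):
    # Total spring energy for zero-rest-length, unit-stiffness springs.
    return sum((x - ax) ** 2 + (y - ay) ** 2 for ax, ay in anchors)

rng = random.Random(3)
x, y, step = 10.0, -10.0, 1.0     # arbitrary initial configuration
e = energy(x, y)
for _ in range(20000):
    nx = x + rng.uniform(-step, step)
    ny = y + rng.uniform(-step, step)
    ne = energy(nx, ny)
    if ne < e:                    # accept only moves that lower the energy
        x, y, e = nx, ny, ne
    step *= 0.9995                # slowly shrink the proposal size
print(x, y)                       # converges near the centroid (4/3, 1)
```

No stiffness matrix or symmetry analysis is needed, which mirrors the appeal of the MCFF approach, and the method is indifferent to the starting configuration.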

  6. Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)


    This book presents some numerical probabilistic simulation methods together with their convergence rates. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After a review of the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. They then devote a chapter to non-asymptotic estimations of Monte-Carlo method errors. This chapter recalls the central limit theorem and quantifies its convergence speed. It introduces the Log-Sobolev and concentration inequalities, which have been the subject of intense study in recent years, and ends with some variance reduction techniques. In order to rigorously establish the simulation results for stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essentials of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence rates of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)

  7. Pseudopotentials for quantum-Monte-Carlo-calculations; Pseudopotentiale fuer Quanten-Monte-Carlo-Rechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Burkatzki, Mark Thomas


    The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give higher accuracy than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)

  8. Monte Carlo event generators for hadron-hadron collisions

    Energy Technology Data Exchange (ETDEWEB)

    Knowles, I.G. [Argonne National Lab., IL (United States). High Energy Physics Div.; Protopopescu, S.D. [Brookhaven National Lab., Upton, NY (United States)


    A brief review of Monte Carlo event generators for simulating hadron-hadron collisions is presented. Particular emphasis is placed on comparisons of the approaches used to describe physics elements and identifying their relative merits and weaknesses. This review summarizes a more detailed report.

  9. Impact of random numbers on parallel Monte Carlo application

    Energy Technology Data Exchange (ETDEWEB)

    Pandey, Ras B.


    A number of graduate students are involved at various levels of research in this project. We investigate basic issues in materials using Monte Carlo simulations, with specific interest in heterogeneous materials. Attempts have been made to seek collaborations with the DOE laboratories. Specific details are given.

  10. Monte Carlo Generation of the 2BN Bremsstrahlung Distribution

    CERN Document Server

    Peralta, L; Trindade, A


    The 2BN bremsstrahlung cross-section is a distribution well adapted to describing radiative processes at low electron kinetic energies (Ek < 500 keV). In this work, a method to implement this distribution in a Monte Carlo generator is developed.
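
    A common building block for such generators is inverse-transform sampling. The sketch below samples a 1/k photon spectrum, which captures only the generic leading low-energy behaviour of bremsstrahlung and is not the full 2BN cross-section; the energy window is illustrative.

```python
import math, random

K_MIN, K_MAX = 10.0, 500.0   # illustrative photon energy window in keV

def sample_k(rng):
    # Inverse transform for p(k) ∝ 1/k on [K_MIN, K_MAX]:
    # the CDF inverts to k = K_MIN * (K_MAX / K_MIN) ** u, u uniform in [0, 1).
    return K_MIN * (K_MAX / K_MIN) ** rng.random()

rng = random.Random(5)
ks = [sample_k(rng) for _ in range(20000)]
mean_k = sum(ks) / len(ks)
analytic = (K_MAX - K_MIN) / math.log(K_MAX / K_MIN)
print(mean_k, analytic)      # the sample mean should agree with the analytic mean
```

For a shape with no closed-form inverse, such as the full 2BN formula, the same generator structure applies with rejection sampling under a 1/k-like envelope.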

  11. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    van der Stoep, A.W.; Grzelak, L.A.; Oosterlee, C.W.


    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility [Risk, 1994, 7, 18–20], [Int. J. Theor. Appl. Finance, 1998, 1, 61–110] models. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant. Finance,

  12. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli


    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink
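
    The end-point variant of random-walk PageRank estimation can be sketched on a hypothetical four-page graph; the damping factor and walk counts are illustrative, and this is not the authors' exact estimator.

```python
import random

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # hypothetical 4-page web graph
c, R = 0.85, 5000                              # damping factor, walks per page
rng = random.Random(9)

visits = {p: 0 for p in links}
for start in links:
    for _ in range(R):
        page = start
        while rng.random() < c:        # continue along a random outlink with prob. c
            page = rng.choice(links[page])
        visits[page] += 1              # record where the walk terminates

total = sum(visits.values())
pr = {p: v / total for p, v in visits.items()}
print(pr)  # pages 0 and 2 collect most of the rank in this graph
```

Each walk is short (expected length 1/(1-c) ≈ 6.7 steps), needs no matrix storage, and the termination frequencies converge to the PageRank vector, which is the appeal of the approach for web-scale graphs.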

  13. Play It Again: Teaching Statistics with Monte Carlo Simulation (United States)

    Sigal, Matthew J.; Chalmers, R. Philip


    Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…

  14. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  15. APS undulator and wiggler sources: Monte-Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, S.L.; Lai, B.; Viccaro, P.J.


    Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general purpose devices. In this document, results of Monte-Carlo simulation are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).

  16. Monte-Carlo Tree Search for Poly-Y

    NARCIS (Netherlands)

    Wevers, L.; te Brinke, Steven


    Monte-Carlo tree search (MCTS) is a heuristic search algorithm that has recently been very successful in the games of Go and Hex. In this paper, we describe an MCTS player for the game of Poly-Y, which is a connection game similar to Hex. Our player won the CodeCup 2014 AI programming competition.
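
    At the heart of MCTS is a bandit-style selection rule, typically UCB1. The sketch below isolates that rule on a hypothetical two-armed bandit rather than a full game tree; the payout probabilities are made up.

```python
import math, random

probs = [0.3, 0.6]                 # hypothetical win probabilities of the two arms
rng = random.Random(11)
counts, sums = [0, 0], [0.0, 0.0]

for t in range(1, 3001):
    if 0 in counts:                # play each arm once before applying UCB1
        arm = counts.index(0)
    else:                          # UCB1: empirical mean plus exploration bonus
        arm = max(range(2), key=lambda a:
                  sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a]))
    reward = 1.0 if rng.random() < probs[arm] else 0.0
    counts[arm] += 1
    sums[arm] += reward

print(counts)  # the better arm (index 1) is pulled far more often
```

In MCTS the same rule is applied at every tree node, with child visit counts and win statistics taking the place of the arm counters, so promising lines of play are explored exponentially more often than poor ones.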

  17. Quantum Monte Carlo study of quasiparticles in the Hubbard model

    NARCIS (Netherlands)

    Linden, W. von der; Morgenstern, I.; Raedt, H. de


    We present an improved version of the projector quantum Monte Carlo method, which has recently been proposed. This scheme allows a very precise computation of the ground-state energy of fermionic models. The “minus sign” has been treated without further approximations and does not influence the

  18. Harnessing graphical structure in Markov chain Monte Carlo learning

    Energy Technology Data Exchange (ETDEWEB)

    Stolorz, P.E. [California Inst. of Technology, Pasadena, CA (United States); Chew P.C. [Univ. of Pennsylvania, Philadelphia, PA (United States)


    The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.

  19. Monte Carlo investigation of the one-dimensional Potts model

    Energy Technology Data Exchange (ETDEWEB)

    Karma, A.S.; Nolan, M.J.


    Monte Carlo results are presented for a variety of one-dimensional dynamical q-state Potts models. Our calculations confirm the expected universal value z = 2 for the dynamic scaling exponent. Our results also indicate that an increase in q at fixed correlation length drives the dynamics into the scaling regime.

  20. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Koning, A.J., E-mail:


    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment based weight.

  1. A combination of Monte Carlo Temperature Basin Paving and Graph ...

    Indian Academy of Sciences (India)

    Abstract. The knowledge of the degree of completeness of an energy landscape search by stochastic algorithms is often lacking. A graph theory based method is used to investigate the completeness of the search performed by the Monte Carlo Temperature Basin Paving (MCTBP) algorithm for (H2O)n (n = 6, 7, and 20). In the second part ...

  2. minimum thresholds of monte carlo cycles for nigerian empirical

    African Journals Online (AJOL)


    Nov 3, 2012 ... Abstract. Monte Carlo simulation has proven to be an effective means of incorporating reliability analysis into the Mechanistic-Empirical (M-E) design process for flexible pavements. A Nigerian Empirical-Mechanistic Pavement Analysis and Design System procedure for Nigeria Environments has been ...

  3. Minimum Thresholds of Monte Carlo Cycles for Nigerian Empirical ...

    African Journals Online (AJOL)

    Monte Carlo simulation has proven to be an effective means of incorporating reliability analysis into the Mechanistic-Empirical (M-E) design process for flexible pavements. A Nigerian Empirical-Mechanistic Pavement Analysis and Design System procedure for Nigeria Environments has been proposed. This work aimed at ...

  4. Projector Quantum Monte Carlo Method for Nonlinear Wave Functions (United States)

    Schwarz, Lauretta R.; Alavi, A.; Booth, George H.


    We reformulate the projected imaginary-time evolution of the full configuration interaction quantum Monte Carlo method in terms of a Lagrangian minimization. This naturally leads to the admission of polynomial complex wave function parametrizations, circumventing the exponential scaling of the approach. While these functions have traditionally inhabited the domain of variational Monte Carlo approaches, we consider recent developments for the identification of deep-learning neural networks to optimize this Lagrangian, which can be written as a modification of the propagator for the wave function dynamics. We demonstrate this approach with a form of tensor network state, and use it to find solutions to the strongly correlated Hubbard model, as well as its application to a fully periodic ab initio graphene sheet. The number of variables which can be simultaneously optimized greatly exceeds alternative formulations of variational Monte Carlo methods, allowing for systematic improvability of the wave function flexibility towards exactness for a number of different forms, while blurring the line between traditional variational and projector quantum Monte Carlo approaches.

  5. Quantum Monte Carlo with a stochastic Poisson solver (United States)

    Das, Dyutiman

    Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually QMC has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment as in a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall QMC scheme. We have developed a modified "Walk On Spheres" (WOS) algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations, typical of QMC algorithms. This stochastically obtained potential can be easily incorporated within popular QMC techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor. Then we apply this method to calculate the singlet-triplet splitting in a realistic heterostructure device. We also outline some other prospective applications for spherical quantum dots where the dielectric mismatch becomes an important issue for the addition energy spectrum.
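
    A plain Walk On Spheres sketch (without the Green's-function modifications described above) shows the core idea on the unit disk, where the harmonic extension of the boundary data g(x, y) = x is known exactly: u(x, y) = x.

```python
import math, random

def wos(x, y, rng, eps=1e-4):
    """Estimate the harmonic function u at (x, y) on the unit disk with u = x on the boundary."""
    while True:
        r = 1.0 - math.hypot(x, y)        # radius of the largest disk inside the domain
        if r < eps:                       # close enough: project to the boundary
            n = math.hypot(x, y)
            return x / n                  # boundary value g at the nearest boundary point
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)          # jump uniformly on the sphere (circle)
        y += r * math.sin(theta)

rng = random.Random(2)
n = 6000
est = sum(wos(0.3, 0.2, rng) for _ in range(n)) / n
print(est)  # ≈ 0.3, the exact value u(0.3, 0.2)
```

Each walk touches only boundary data and sphere radii, never a grid, which is why this family of solvers meshes so naturally with the walker-based structure of QMC.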

  6. The Smoothed Monte Carlo Method in Robustness Optimization

    NARCIS (Netherlands)

    Hendrix, E.M.T.; Olieman, N.J.


    The concept of robustness as the probability mass of a design-dependent set has been introduced in the literature. Optimization of robustness can be seen as finding the design that has the highest robustness. The reference method for estimating the robustness is the Monte Carlo (MC) simulation, and

  7. K-Antithetic Variates in Monte Carlo Simulation | Nasroallah | Afrika ...

    African Journals Online (AJOL)

    Abstract. Standard Monte Carlo simulation needs prohibitive time to achieve reasonable estimations for intractable integrals (i.e., multidimensional integrals and/or integrals with complex integrand forms). Several statistical techniques, called variance reduction methods, are used to reduce the simulation time. In this note ...
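
    The antithetic-variates idea (here in its simplest K = 2 form) can be sketched on a one-dimensional integral with a known value; the integrand is illustrative.

```python
import math, random

# Estimate ∫₀¹ eˣ dx = e - 1 ≈ 1.71828 with and without antithetic pairing.
rng = random.Random(4)
N = 5000

plain = [math.exp(rng.random()) for _ in range(2 * N)]      # 2N independent draws
pairs = []
for _ in range(N):                                          # N antithetic pairs (2N evaluations)
    u = rng.random()
    pairs.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))   # pair u with 1 - u

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

print(mean(plain), mean(pairs))        # both ≈ e - 1
print(var(plain) / 2, var(pairs))      # equal-budget variance: antithetic is much smaller
```

Because e^x is monotone, e^u and e^(1-u) are negatively correlated, so the pair average has far lower variance than two independent draws at the same sampling cost.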

  8. Direct Monte Carlo simulation of nanoscale mixed gas bearings

    Directory of Open Access Journals (Sweden)

    Kyaw Sett Myo


    Sealed hard drives filled with a helium gas mixture have recently been suggested in place of current hard drives, for achieving higher reliability and smaller position error. It is therefore important to understand the effects of different helium gas mixtures on the slider bearing characteristics in the head–disk interface. In this article, helium/air and helium/argon gas mixtures are applied as the working fluids and their effects on the bearing characteristics are studied using the direct simulation Monte Carlo method. Based on direct simulation Monte Carlo simulations, the physical properties of these gas mixtures, such as mean free path and dynamic viscosity, are obtained and compared with those from theoretical models. It is observed that both results are comparable. Using these gas mixture properties, the bearing pressure distributions are calculated for different fractions of helium with conventional molecular gas lubrication models. The outcomes reveal that the molecular gas lubrication results agree relatively well with those of the direct simulation Monte Carlo simulations, especially for the pure air, helium, or argon cases. For gas mixtures, the bearing pressures predicted by the molecular gas lubrication model are slightly larger than those from direct simulation Monte Carlo simulation.

  9. Monte Carlo Approach for Reliability Estimations in Generalizability Studies. (United States)

    Dimitrov, Dimiter M.

    A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…

  10. Monte Carlo simulations of adsorption-induced segregation

    DEFF Research Database (Denmark)

    Christoffersen, Ebbe; Stoltze, Per; Nørskov, Jens Kehlet


    Through the use of Monte Carlo simulations we study the effect of adsorption-induced segregation. From the bulk composition, degree of dispersion and the partial pressure of the gas phase species we calculate the surface composition of bimetallic alloys. We show that both segregation and adsorption...

  11. Bayesian methods, maximum entropy, and quantum Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Gubernatis, J.E.; Silver, R.N. (Los Alamos National Lab., NM (United States)); Jarrell, M. (Cincinnati Univ., OH (United States))


    We heuristically discuss the application of the method of maximum entropy to the extraction of dynamical information from imaginary-time, quantum Monte Carlo data. The discussion emphasizes the utility of a Bayesian approach to statistical inference and the importance of statistically well-characterized data. 14 refs.

  12. Two Dimensional Potential Mapping–Monte Carlo Simulation

    Indian Academy of Sciences (India)

    Two Dimensional Potential Mapping – Monte Carlo Simulation. J Meena Devi, K Ramachandran. Classroom, Resonance – Journal of Science Education, Volume 10, Issue 7, July 2005, pp 73-84.

  13. Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation (United States)

    Silver, N. Clayton; Hittner, James B.; May, Kim


    The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…
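The general Monte Carlo recipe for empirical Type I error rates — generate data under the null hypothesis, compute the statistic, tally rejections — can be sketched with a simpler single-correlation test (Fisher's z, illustrative sample sizes); the four dependent-correlation statistics compared in the study follow the same pattern but are more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 50, 2000
rejections = 0
for _ in range(reps):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)          # H0 is true: population correlation is zero
    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r) * np.sqrt(n - 3)  # Fisher z-transformed test statistic
    rejections += abs(z) > 1.96         # two-tailed test at nominal alpha = .05
rate = rejections / reps
print(rate)  # empirical Type I error rate, statistically close to the nominal .05
```

Power estimates come from the same loop with data generated under a specific alternative instead of the null.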

  14. Monte Carlo simulation of quantum statistical lattice models

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad


    In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used

  15. Monte Carlo simulations of the stability of delta-Pu

    DEFF Research Database (Denmark)

    Landa, A.; Soderlind, P.; Ruban, Andrei


    The transition temperature (T-c) for delta-Pu has been calculated for the first time. A Monte Carlo method is employed for this purpose and the effective cluster interactions are obtained from first-principles calculations incorporated with the Connolly-Williams and generalized perturbation methods...

  16. Monte Carlo Capabilities of the SCALE Code System (United States)

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.


    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  17. Quantum Monte Carlo diagonalization method as a variational calculation

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio


    A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)

  18. Strain in the mesoscale kinetic Monte Carlo model for sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.


    Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate...

  19. Monte Carlo simulation by computer for life-cycle costing (United States)

    Gralow, F. H.; Larson, W. J.


    Computer-based Monte Carlo simulation of a system's behavior and support requirements over its entire life cycle enables accurate cost estimates. The approach reduces the ultimate cost to the procuring agency because it takes into consideration the costs of initial procurement, operation, and maintenance.
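A minimal life-cycle-cost Monte Carlo in this spirit samples each cost element from a distribution and sums them per trial; the cost categories below match the record, but the distribution families and dollar figures are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical cost elements in $M (distribution choices are illustrative)
procurement = rng.triangular(8.0, 10.0, 15.0, n)  # initial procurement
operation = rng.lognormal(1.0, 0.3, n)            # operation over the life cycle
maintenance = rng.gamma(4.0, 0.5, n)              # maintenance over the life cycle

life_cycle = procurement + operation + maintenance
print(round(life_cycle.mean(), 1), round(np.percentile(life_cycle, 90), 1))
```

The full distribution (not just the mean) is the point: a procuring agency can read off, say, the 90th-percentile cost as a budget-risk figure.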

  20. Monte Carlo methods: Application to hydrogen gas and hard spheres (United States)

    Dewing, Mark Douglas


    Quantum Monte Carlo (QMC) methods are among the most accurate for computing ground state properties of quantum systems. The two major types of QMC we use are Variational Monte Carlo (VMC), which evaluates integrals arising from the variational principle, and Diffusion Monte Carlo (DMC), which stochastically projects to the ground state from a trial wave function. These methods are applied to a system of boson hard spheres to get exact, infinite system size results for the ground state at several densities. The kinds of problems that can be simulated with Monte Carlo methods are expanded through the development of new algorithms for combining a QMC simulation with a classical Monte Carlo simulation, which we call Coupled Electronic-Ionic Monte Carlo (CEIMC). The new CEIMC method is applied to a system of molecular hydrogen at temperatures ranging from 2800K to 4500K and densities from 0.25 to 0.46 g/cm3. VMC requires optimizing a parameterized wave function to find the minimum energy. We examine several techniques for optimizing VMC wave functions, focusing on the ability to optimize parameters appearing in the Slater determinant. Classical Monte Carlo simulations use an empirical interatomic potential to compute equilibrium properties of various states of matter. The CEIMC method replaces the empirical potential with a QMC calculation of the electronic energy. This is similar in spirit to the Car-Parrinello technique, which uses Density Functional Theory for the electrons and molecular dynamics for the nuclei. The challenges in constructing an efficient CEIMC simulation center mostly around the noisy results generated from the QMC computations of the electronic energy. We introduce two complementary techniques, one for tolerating the noise and the other for reducing it. The penalty method modifies the Metropolis acceptance ratio to tolerate noise without introducing a bias in the simulation of the nuclei. For reducing the noise, we introduce the two-sided energy
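A minimal VMC example in the spirit described here, for the 1D harmonic oscillator rather than hard spheres or hydrogen (the trial function, step size, and sample counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def vmc_energy(a, steps=20_000, delta=1.0):
    """Variational Monte Carlo for the 1D harmonic oscillator (hbar = m = omega = 1).
    Trial wave function psi(x) = exp(-a x^2); Metropolis sampling of |psi|^2."""
    x, energies = 0.0, []
    for _ in range(steps):
        xp = x + rng.uniform(-delta, delta)
        if rng.random() < np.exp(-2.0 * a * (xp * xp - x * x)):
            x = xp
        energies.append(a + x * x * (0.5 - 2.0 * a * a))  # local energy E_L(x)
    return float(np.mean(energies[steps // 5:]))  # discard burn-in

print(vmc_energy(0.5))  # exact ground state: E = 0.5 at a = 0.5 (zero variance)
```

At a = 0.5 the trial function is exact, so the local energy is constant — the zero-variance property that wave-function optimization (as discussed in the record) tries to approach.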

  1. Global Monte Carlo Simulation with High Order Polynomial Expansions

    Energy Technology Data Exchange (ETDEWEB)

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin


    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source
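The core of the FET — estimating expansion coefficients as sample means over the random walk, then reconstructing the tallied function — can be sketched with global Legendre polynomials; the sampled density below stands in for fission sites and is an illustrative choice:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, legval

rng = np.random.default_rng(4)

# Samples standing in for fission/collision sites on [-1, 1];
# the true density here is 0.75 * (1 - x^2)
x = 2.0 * rng.beta(2.0, 2.0, 200_000) - 1.0

# FET coefficient estimates: c_n = <P_n(x)>, a sample mean over the random walk
order = 6
c = np.array([Legendre.basis(n)(x).mean() for n in range(order + 1)])

# Reconstruct the density: f(x) = sum_n (2n+1)/2 * c_n * P_n(x)
grid = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
f = legval(grid, c * (2.0 * np.arange(order + 1) + 1.0) / 2.0)
print(np.round(f, 2))  # compare with 0.75 * (1 - grid**2)
```

The n = 0 coefficient is exactly the normalization (the "flat mode" the record mentions); higher modes carry the spatial shape from which new fission sites could be sampled.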

  2. Direct aperture optimization for IMRT using Monte Carlo generated beamlets. (United States)

    Bergman, Alanah M; Bush, Karl; Milette, Marie-Pierre; Popescu, I Antoniu; Otto, Karl; Duzenli, Cheryl


    This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve dose accuracy and treatment efficiency for treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5 x 5.0 mm^2 beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the location of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency. For the examples presented in this paper the reduction in the total number of monitor units to deliver is approximately 33% compared to fluence-based optimization methods.

  3. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    National Research Council Canada - National Science Library

    Nicolas Chapoutier; François Mollier; Guillaume Nolin; Matthieu Culioli; Jean-Reynald Mace


    In the context of the rise of Monte Carlo transport calculations for any kind of application, AREVA recently improved its suite of engineering tools in order to produce efficient Monte Carlo workflow...

  4. Monte Carlo Simulation Program from the World Petroleum Assessment 2000, DDS-60 (Emc2.xls). (United States)

    U.S. Geological Survey, Department of the Interior — Monte Carlo programs described in chapter MC, Monte Carlo Simulation Method. Emc2.xls was the program used to calculate the estimates of undiscovered resources for...

  5. A preliminary study of in-house Monte Carlo simulations: an integrated Monte Carlo verification system. (United States)

    Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki


    To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.

  6. The impact of Monte Carlo simulation: a scientometric analysis of scholarly literature

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V


    A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications.

  7. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne


    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
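The advantage of deterministic low-discrepancy point sets over pseudorandom sampling is easy to demonstrate on a smooth integrand; the van der Corput/Halton construction below is standard, while the integrand and point counts are illustrative:

```python
import numpy as np

def van_der_corput(n, base):
    """Radical-inverse (van der Corput) sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

n = 4096
halton = np.column_stack([van_der_corput(n, 2), van_der_corput(n, 3)])  # 2-D Halton points
pseudo = np.random.default_rng(5).random((n, 2))                        # plain Monte Carlo

f = lambda p: np.exp(p[:, 0] * p[:, 1])  # smooth integrand on the unit square
exact = 1.3179022                        # int_0^1 int_0^1 e^{xy} dx dy (series value)

err_qmc = abs(f(halton).mean() - exact)
err_mc = abs(f(pseudo).mean() - exact)
print(err_qmc, err_mc)  # the quasi-random error is typically much smaller
```

This is the discrepancy story of the book in miniature: quasi-Monte Carlo error for smooth integrands scales roughly like (log n)^d / n rather than the 1/sqrt(n) of pseudorandom sampling.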

  8. A Monte Carlo approach for the bouncer model (United States)

    Díaz, Gabriel; Yoshida, Makoto; Leonel, Edson D.


    A Monte Carlo investigation is made of a dissipative bouncer model to describe some statistical properties of the chaotic dynamics as a function of the control parameters. The dynamics of the system is described via a two-dimensional mapping for the variables velocity of the particle and phase of the moving wall at the instant of impact. A small stochastic noise is introduced in the time of flight of the particle as an attempt to investigate the evolution of the system without the need to solve transcendental equations. We show that average values along the chaotic dynamics do not strongly depend on the noise size. This allows us to propose a Monte Carlo-like simulation that yields average values for the observables with great accuracy and fast simulations.
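The key observation — that time averages over the chaotic attractor barely depend on the noise size — can be illustrated with a simplified bouncer-type map; the map form, parameters, and noise model below are assumptions for illustration, not the mapping used in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_velocity(noise, gamma=0.9, eps=10.0, steps=200_000):
    """Time-averaged particle velocity for a simplified dissipative bouncer-type
    map (assumed form): stochastic noise is added to the phase advance that
    stands in for the time of flight."""
    v, phi, total = 10.0, 0.0, 0.0
    for _ in range(steps):
        phi = (phi + 2.0 * v + noise * rng.standard_normal()) % (2.0 * np.pi)
        v = abs(gamma * v - eps * np.sin(phi))
        total += v
    return total / steps

m_small, m_large = mean_velocity(1e-3), mean_velocity(1e-1)
print(round(m_small, 1), round(m_large, 1))  # averages barely depend on noise size
```

Because the noisy map avoids solving transcendental flight-time equations at every impact, long Monte Carlo-style averages become cheap.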

  9. A Monte Carlo code for ion beam therapy

    CERN Multimedia

    Anaïs Schaeffer


    Initially developed for applications in detector and accelerator physics, the modern Fluka Monte Carlo code is now used in many different areas of nuclear science. Over the last 25 years, the code has evolved to include new features, such as ion beam simulations. Given the growing use of these beams in cancer treatment, Fluka simulations are being used to design treatment plans in several hadron-therapy centres in Europe.   Fluka calculates the dose distribution for a patient treated at CNAO with proton beams. The colour-bar displays the normalized dose values. Fluka is a Monte Carlo code that very accurately simulates electromagnetic and nuclear interactions in matter. In the 1990s, in collaboration with NASA, the code was developed to predict potential radiation hazards received by space crews during possible future trips to Mars. Over the years, it has become the standard tool to investigate beam-machine interactions, radiation damage and radioprotection issues in the CERN accelerator com...

  10. PHOTOS Monte Carlo for precision simulation of QED in decays

    CERN Document Server

    Was, Z; Nanava, G


    Because of the properties of QED, the bremsstrahlung corrections to decays of particles or resonances can be calculated, with good precision, separately from other effects. Thanks to the widespread use of event records, such calculations can be embodied in a separate module of Monte Carlo simulation chains, as used in High Energy Physics experiments today. The PHOTOS Monte Carlo program has been used for this purpose for nearly 20 years now. In this talk we review the main ideas and constraints which shaped the present version of the program and enabled its widespread use. We concentrate especially on the conflicting requirements originating from the properties of QED matrix elements on one side and the degrading (evolving) standards of event record(s) on the other. These issues, quite common in other modular software applications, become more and more difficult to handle as precision requirements become higher.

  11. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds (United States)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.


    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  12. Application to radiation damage simulation calculation of Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Aruga, Takeo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment


    Recent progress in Monte Carlo calculation for radiation damage simulation of structural materials to be used in fast breeder reactors or thermonuclear fusion reactors under energetic neutron or charged particle bombardment is reviewed. Specifically usefulness of employing Monte Carlo methods in molecular dynamics calculations to understand mechanical properties change such as dimensional change, strength, creep, fatigue, corrosion, and crack growth of materials under irradiation on the basis of atomic collision processes is stressed. Structure and spatial distribution of point defects in iron, gold, or cooper as demonstrative examples at several hundreds of ps after the movement of primary knock-on atom (PKA) takes place are calculated as a function of PKA energy. The results are compared with those obtained by the method developed by Norgett, Robinson and Torrens and the usefulness is discussed. (S. Ohno)

  13. Sign problem and Monte Carlo calculations beyond Lefschetz thimbles

    Energy Technology Data Exchange (ETDEWEB)

    Alexandru, Andrei [Department of Physics, The George Washington University,Washington, DC 20052 (United States); Başar, Gökçe; Bedaque, Paulo F.; Ridgway, Gregory W.; Warrington, Neill C. [Department of Physics, University of Maryland,College Park, MD 20742 (United States)


    We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used on sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.

  14. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas


    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
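The Monte Carlo-Euler idea — discretize the SDE in time, average over many paths, and account separately for discretization and statistical error — can be sketched on a scalar SDE with a known mean; geometric Brownian motion below is a stand-in for a single HJM forward-rate factor, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler-Maruyama weak approximation of E[X_T] for dX = mu X dt + sigma X dW
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0
n_steps, n_paths = 50, 200_000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    x += mu * x * dt + sigma * x * np.sqrt(dt) * rng.standard_normal(n_paths)

estimate, exact = x.mean(), x0 * np.exp(mu * T)
print(round(estimate, 4), round(exact, 4))
```

The gap between `estimate` and `exact` splits into a time-discretization bias of order dt and a statistical error of order 1/sqrt(n_paths) — the two contributions the paper's error estimates identify separately (with maturity discretization as a third source in the full HJM setting).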

  15. Accelerated Monte Carlo simulations with restricted Boltzmann machines (United States)

    Huang, Li; Wang, Lei


    Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.

  16. Simplest Validation of the HIJING Monte Carlo Model

    CERN Document Server

    Uzhinsky, V.V.


    Fulfillment of the energy-momentum conservation law, as well as the charge, baryon and lepton number conservation, is checked for the HIJING Monte Carlo program in $pp$-interactions at $\sqrt{s}=$ 200, 5500, and 14000 GeV. It is shown that the energy is conserved quite well. The transverse momentum is not conserved, the deviation from zero is at the level of 1-2 GeV/c, and it is connected with the hard jet production. The deviation is absent for soft interactions. Charge, baryon and lepton numbers are conserved. Azimuthal symmetry of the Monte Carlo events is studied, too. It is shown that there is a small signature of a "flow". The situation with the symmetry gets worse for nucleus-nucleus interactions.
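A conservation check of this kind amounts to summing four-momenta over generated events and comparing against the initial state. The sketch below builds its own toy event generator (isotropic two-body decays) rather than HIJING, so the check trivially passes at machine precision; for a real generator the same sum exposes any non-conservation:

```python
import numpy as np

rng = np.random.default_rng(8)

def two_body_decay(M, m1, m2, rng):
    """Isotropic two-body decay in the parent rest frame; returns two
    four-momenta (E, px, py, pz)."""
    p = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)
    cos_t, phi = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t**2)
    n = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    e1, e2 = np.sqrt(m1**2 + p**2), np.sqrt(m2**2 + p**2)
    return np.array([e1, *(p * n)]), np.array([e2, *(-p * n)])

# Sum the residual four-momentum over many events, as in a conservation check
residual = np.zeros(4)
for _ in range(1000):
    q1, q2 = two_body_decay(0.4976, 0.1396, 0.1396, rng)  # K0 -> pi+ pi- (GeV)
    residual += q1 + q2 - np.array([0.4976, 0.0, 0.0, 0.0])
print(np.abs(residual).max())  # consistent with zero at machine precision
```

Charge, baryon and lepton number checks follow the same pattern with integer quantum numbers in place of four-momenta.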

  17. Bayesian Monte Carlo method for nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Koning, A.J. [Nuclear Research and Consultancy Group NRG, P.O. Box 25, ZG Petten (Netherlands)


    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which enables to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight. (orig.)
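The weighting step — sample model parameters from a prior, then weight each random "file" by its chi-square agreement with experimental data — can be sketched on a toy problem; the linear model, data points, and prior range below are illustrative stand-ins for TALYS parameters and EXFOR data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy stand-in: linear "model" y(E) = a*E and three "experimental" points with errors
energies = np.array([1.0, 2.0, 3.0])
y_exp = np.array([2.1, 3.9, 6.2])
dy = np.array([0.2, 0.2, 0.3])

# 1) Prior space of model solutions: random parameter draws ("random files")
a_samples = rng.uniform(1.0, 3.0, 50_000)

# 2) Weight each draw by its agreement with the data (chi-square likelihood)
chi2 = ((a_samples[:, None] * energies - y_exp) / dy) ** 2
weights = np.exp(-0.5 * chi2.sum(axis=1))
weights /= weights.sum()

# 3) Data-weighted Monte Carlo estimate of the parameter
a_post = float((weights * a_samples).sum())
print(round(a_post, 2))
```

The weighted sample itself is the useful output: its covariance gives an experiment-weighted covariance matrix, or the weighted draws can be kept as a collection of random files, matching the two output forms the record describes.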

  18. Bayesian Monte Carlo method for nuclear data evaluation (United States)

    Koning, A. J.


    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which enables to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various different schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by the EXFOR-based weight.

  19. Monte Carlo simulations of the SLOWPOKE-2 reactor

    Energy Technology Data Exchange (ETDEWEB)

    Tan, A.; Buijs, A. [McMaster University, Hamilton, ON (Canada)]


    The goal of this project is to study the transient behaviour of the SLOWPOKE-2 reactor using Monte Carlo simulations. By validating the Monte Carlo methods in G4-STORK against experimental measurements we hope to extend our understanding of reactor transients as well as further develop our methods to model the transients of next-generation reactor designs. A SLOWPOKE-2 reactor such as the one at RMC is modelled using simulation tools from GEANT4 and data taken from the open literature. Simulations in G4-STORK find a neutron flux of order 10^12 n cm^-2 s^-1 and a control rod worth of (4.9 ± 2.0) mk, compared to the experimentally measured worth of 5.45 mk. (author)

  20. Computer Monte Carlo simulation in quantitative resource estimation (United States)

    Root, D.H.; Menzie, W.D.; Scott, W.A.


    The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
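Step 3 above can be sketched directly: per trial, draw a number of deposits, then draw a tonnage and grade for each. The deposit-count probabilities and lognormal tonnage/grade models below are illustrative assumptions, and the sketch ignores the grade-tonnage dependencies the paper treats carefully:

```python
import numpy as np

rng = np.random.default_rng(10)
n_trials = 20_000

# Step 2 output: a probability distribution for the number of undiscovered deposits
n_deposits = rng.choice([0, 1, 2, 3], size=n_trials, p=[0.3, 0.4, 0.2, 0.1])

# Step 3: each deposit draws a tonnage and a grade from historical (lognormal) models
contained = np.zeros(n_trials)
for i in range(n_trials):
    tons = rng.lognormal(15.0, 1.0, n_deposits[i])   # ore tonnage per deposit
    grade = rng.lognormal(-5.0, 0.5, n_deposits[i])  # metal fraction per deposit
    contained[i] = (tons * grade).sum()              # contained metal, in tons

q50, q90 = np.percentile(contained, [50, 90])
print(round(q50), round(q90))  # fractiles of the contained-metal distribution
```

Note the mass at zero contained metal (trials with no deposits) — a feature of real assessments that a simple mean would obscure.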

  1. Macro Monte Carlo: Clinical Implementation in a Distributed Computing Environment (United States)

    Neuenschwander, H.; Volken, W.; Frei, D.; Cris, C.; Born, E.; Mini, R.

    The Monte Carlo (MC) method is the most accurate method for the calculation of dose distributions in radiotherapy treatment planning (RTP) for high energy electron beams, if the source of electrons and the patient geometry can be accurately modeled and a sufficiently large number of electron histories are simulated. Due to the long calculation times, MC methods have long been considered as impractical for clinical use. Two main advances have improved the situation and made clinical MC RTP feasible: The development of highly specialized radiotherapy MC systems, and the ever-falling price/performance ratio of computer hardware. Moreover, MC dose calculation codes can easily be parallelized, which allows their implementation as distributed computing systems in networked departments. This paper describes the implementation and clinical validation of the Macro Monte Carlo (MMC) method, a fast method for clinical electron beam treatment planning.

  2. Monte Carlo Ground State Energy for Trapped Boson Systems (United States)

    Rudd, Ethan; Mehta, N. P.


    Diffusion Monte Carlo (DMC) and Green's Function Monte Carlo (GFMC) algorithms were implemented to obtain numerical approximations for the ground state energies of systems of bosons in a harmonic trap potential. Gaussian pairwise particle interactions of the form V0 exp(-|ri-rj|^2/r0^2) were implemented in the DMC code. These results were verified for small values of V0 via a first-order perturbation theory approximation, for which the N-particle matrix element evaluated to N^2 V0 (1 + 1/r0^2)^(3/2). By obtaining the scattering length from the 2-body potential in the perturbative regime (V0 ≪ 1), ground state energy results were compared to modern renormalized models by P. R. Johnson et al., New J. Phys. 11, 093022 (2009).

  3. Fixed-Node Quantum Monte Carlo for Chemistry (United States)

    Caffarel, Michel; Ramírez-Solís, Alejandro


    In this paper we discuss the application of quantum Monte Carlo (QMC) techniques to the electronic many-body problem as encountered in computational chemistry. The Fixed-Node Diffusion Monte Carlo (FN-DMC) algorithm --the most common QMC scheme for treating molecules--is presented. The impact of the fixed-node error is illustrated through numerical applications including the calculation of the electronic affinity of the chlorine atom, the dissociation barrier of the O4 molecule, and the binding energy of the dichromium molecule, Cr2. Although total energies calculated with FN-DMC are very accurate (more accurate than the best alternative methods available), it is emphasized that the error associated with approximate nodes can lead to important errors in the small differences of total energies, quantities which are particularly important in chemistry.

  4. Ab initio Monte Carlo investigation of small lithium clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Srinivas, S.


    Structural and thermal properties of small lithium clusters are studied using ab initio-based Monte Carlo simulations. The ab initio scheme uses a Hartree-Fock/density functional treatment of the electronic structure combined with a jump-walking Monte Carlo sampling of nuclear configurations. Structural forms of Li_8 and Li_9^+ clusters are obtained and their thermal properties analyzed in terms of probability distributions of the cluster potential energy, average potential energy and configurational heat capacity, all considered as a function of the cluster temperature. Details of the gradual evolution with temperature of the structural forms sampled are examined. Temperatures characterizing the onset of structural changes and isomer coexistence are identified for both clusters.

  5. Application of Monte Carlo simulations to improve basketball shooting strategy (United States)

    Min, Byeong June


    The underlying physics of basketball shooting seems to be a straightforward example of Newtonian mechanics that can easily be traced by using numerical methods. However, a human basketball player does not make use of all the possible basketball trajectories. Instead, a basketball player will build up a database of successful shots and select the trajectory that has the greatest tolerance to the small variations of the real world. We simulate the basketball player's shooting training as a Monte Carlo sequence to build optimal shooting strategies, such as the launch speed and angle of the basketball, and whether to take a direct shot or a bank shot, as a function of the player's court position and height. The phase-space volume Ω that belongs to the successful launch velocities generated by Monte Carlo simulations is then used as the criterion to optimize a shooting strategy that incorporates not only mechanical, but also human, factors.
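
    The phase-space-volume criterion described above can be sketched in a few lines: sample launch speeds and angles, test each trajectory against the rim, and estimate Omega as the accepted fraction of the sampled velocity box. All numbers here (free-throw distance, release height, rim height, tolerance, sampling ranges) are illustrative assumptions, not values from the paper.

```python
import math
import random

def shot_succeeds(v, theta, d=4.6, y0=2.0, rim_h=3.05, tol=0.2, g=9.81):
    """2-D projectile test: does a ball launched at speed v (m/s) and angle
    theta (rad) from height y0 pass within tol metres of the rim centre,
    located at horizontal distance d and height rim_h, on its way down?"""
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    disc = vy * vy - 2.0 * g * (rim_h - y0)
    if disc < 0:            # ball never reaches rim height
        return False
    t = (vy + math.sqrt(disc)) / g   # later crossing of rim height
    if vy - g * t > 0:      # still rising at that time: not a downward pass
        return False
    return abs(vx * t - d) < tol

def success_volume(n=100_000, v_range=(5.0, 12.0), th_range=(0.5, 1.4), seed=1):
    """Monte Carlo estimate of the phase-space volume Omega of successful
    launch velocities: accepted fraction times the sampled box volume."""
    rng = random.Random(seed)
    hits = sum(shot_succeeds(rng.uniform(*v_range), rng.uniform(*th_range))
               for _ in range(n))
    box = (v_range[1] - v_range[0]) * (th_range[1] - th_range[0])
    return hits / n * box

omega = success_volume(n=20_000)   # a larger Omega means a more forgiving shot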

  6. Monte Carlo simulations of the Galileo energetic particle detector

    CERN Document Server

    Jun, I; Garrett, H B; McEntire, R W


    Monte Carlo radiation transport studies have been performed for the Galileo spacecraft energetic particle detector (EPD) in order to study its response to energetic electrons and protons. Three-dimensional Monte Carlo radiation transport codes, MCNP version 4B (for electrons) and MCNPX version 2.2.3 (for protons), were used throughout the study. The results are presented in the form of 'geometric factors' for the high-energy channels studied in this paper: B1, DC2, and DC3 for electrons and B0, DC0, and DC1 for protons. The geometric factor is the energy-dependent detector response function that relates the incident particle fluxes to instrument count rates. The trend of actual data measured by the EPD was successfully reproduced using the geometric factors obtained in this study.

  7. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.


    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, a neutron is first selected from the source distribution and projected through the instrument, using either deterministic or probabilistic algorithms to describe its interactions whenever it hits something; if it hits the detector, it is tallied in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
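
    The integration procedure described in the first sentences (pick random values for each variable, evaluate the integrand, average) is a few lines of code. This is a generic sketch, not part of MCLIB; function and parameter names are invented for illustration.

```python
import random

def mc_integrate(f, bounds, n=100_000, seed=0):
    """Plain Monte Carlo integration: draw each variable uniformly within
    its bounds, evaluate the integrand, average, and scale by box volume."""
    rng = random.Random(seed)
    vol = 1.0
    for lo, hi in bounds:
        vol *= hi - lo
    total = 0.0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        total += f(x)
    return vol * total / n

# integral of x*y over the unit square; exact value is 1/4
est = mc_integrate(lambda p: p[0] * p[1], [(0.0, 1.0), (0.0, 1.0)])
```

    The statistical error of the average falls as 1/sqrt(n) regardless of the number of variables, which is why the method scales to the high-dimensional integrals of instrument simulation.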

  8. Sign problem and Monte Carlo calculations beyond Lefschetz thimbles

    CERN Document Server

    Alexandru, Andrei; Bedaque, Paulo F; Ridgway, Gregory W; Warrington, Neill C


    We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action. We describe a family of such manifolds that interpolate between the tangent space at one critical point, where the sign problem is milder compared to the real plane but in some cases still severe, and the union of relevant thimbles, where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling. We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign-problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.

  9. The Monte Carlo code TRAMO - Capabilities and instructions for application; Monte-Carlo Programm TRAMO - Moeglichkeiten und Anleitung zur Nutzung

    Energy Technology Data Exchange (ETDEWEB)

    Barz, H.U.; Konheiser, J.


    The report is intended for readers familiar with the fundamentals of the Monte Carlo method. Such readers may be interested in the successful generalisations presented here, as well as in new ideas for curbing the statistical errors involved. A further aim is to describe the essential features of the multigroup Monte Carlo code TRAMO, including the required input, in enough detail that readers can make the adjustments required by their specific computing environment and carry out their own calculations. For most problems, an indispensable companion code is the TRAWEI Monte Carlo code, which calculates the weights required by the variance-reducing Weight Window Method; additional codes are needed to generate the neutron cross-section data and the group data. Given a source distribution of neutrons, TRAMO calculates, in the multigroup approximation, multigroup flux data, integrated group flux data, and dose values for specified partial volumes and surfaces. Further code versions exist for the calculation of neutron and gamma fluxes, or of criticality data, but these are not considered in the report. (orig./CB)

  10. Testing trivializing maps in the Hybrid Monte Carlo algorithm

    CERN Document Server

    Engel, Georg P


    We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^{N-1} model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied.

  11. Estimation of flow accumulation uncertainty by Monte Carlo stochastic simulations


    Višnjevac Nenad; Cvijetinović Željko; Bajat Branislav; Radić Boris; Ristić Ratko; Milčanović Vukašin


    Very often, the outputs of GIS functions and analyses are taken to be exact results. However, they carry a degree of uncertainty that may affect decisions based on them. Calculating this uncertainty with classical mathematical models is very complex, and almost impossible, because of the intricate algorithms used in GIS analyses. In this paper we discuss an alternative method, i.e. the use of stochastic Monte Carlo simul...
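
    As a toy illustration of the idea (not the authors' implementation), one can perturb a small elevation grid with Gaussian noise, recompute a simple single-flow-direction accumulation for each realization, and report the per-cell spread. The grid, noise level, and routing rule below are illustrative assumptions.

```python
import random
import statistics

def flow_accumulation(dem):
    """Single-flow-direction accumulation on a grid DEM: each cell carries
    unit area and drains to its lowest 8-connected neighbour; cells are
    processed from highest to lowest elevation."""
    rows, cols = len(dem), len(dem[0])
    acc = [[1.0] * cols for _ in range(rows)]
    order = sorted(((dem[r][c], r, c) for r in range(rows) for c in range(cols)),
                   reverse=True)
    for z, r, c in order:
        best, best_z = None, z
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols \
                        and dem[rr][cc] < best_z:
                    best, best_z = (rr, cc), dem[rr][cc]
        if best:
            acc[best[0]][best[1]] += acc[r][c]
    return acc

def accumulation_spread(dem, sigma=0.1, n=100, seed=0):
    """Monte Carlo uncertainty: per-cell standard deviation of accumulation
    over n realizations of Gaussian elevation noise added to the DEM."""
    rng = random.Random(seed)
    rows, cols = len(dem), len(dem[0])
    samples = [[[] for _ in range(cols)] for _ in range(rows)]
    for _ in range(n):
        noisy = [[z + rng.gauss(0.0, sigma) for z in row] for row in dem]
        acc = flow_accumulation(noisy)
        for r in range(rows):
            for c in range(cols):
                samples[r][c].append(acc[r][c])
    return [[statistics.stdev(samples[r][c]) for c in range(cols)]
            for r in range(rows)]

dem = [[2.0, 2.0, 2.0], [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]]
spread = accumulation_spread(dem, sigma=0.1, n=100)
```

    Cells whose drainage direction flips between noise realizations show a large spread, flagging exactly the places where the deterministic GIS output should not be trusted.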

  12. Monte Carlo calculation of electron transport in InSb

    CERN Document Server

    Mallick, P S


    The velocity field characteristics of InSb have been obtained at 77 K by the Monte Carlo simulation technique. The results agree with the experimental data and also with those obtained by using the displaced Maxwellian distribution function. The effects of the various simulation parameters as well as that of the ionized impurity concentration on the mobility values for InSb have been discussed and results presented. (author)

  13. Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy. (United States)

    Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L


    Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney.

  14. astroABC: Approximate Bayesian Computation Sequential Monte Carlo sampler (United States)

    Jennings, Elise


    astroABC is a Python implementation of an Approximate Bayesian Computation Sequential Monte Carlo (ABC SMC) sampler for parameter estimation. astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. It has the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available.
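
    The core idea behind ABC (accept parameter draws whose simulated summaries land close to the observed ones) can be sketched with a plain rejection sampler. This is a simplified stand-in for astroABC's SMC sampler, not its API; all names and the toy Gaussian model are assumptions.

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_accept, seed=0):
    """Rejection ABC: draw theta from the prior, simulate a data summary,
    and keep theta whenever the summary lies within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

# toy problem: infer the mean of a Gaussian with known sigma = 1
posterior = abc_rejection(
    observed=3.0,
    simulate=lambda th, rng: statistics.fmean(rng.gauss(th, 1.0) for _ in range(50)),
    prior_sample=lambda rng: rng.uniform(0.0, 6.0),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
    n_accept=200,
)
```

    An SMC sampler such as astroABC improves on this by shrinking eps over a sequence of generations and resampling particles with importance weights, so far fewer expensive forward simulations are wasted.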

  15. Full counting statistics with determinantal quantum Monte Carlo (United States)

    Humeniuk, Stephan

    Within the framework of determinantal quantum Monte Carlo a method is presented for computing the probability distribution of the total particle number and magnetization on a subregion of a system of interacting fermions. Such full counting statistics can be obtained from repeated projective measurements in cold atoms experiments with single-site and single-atom resolution. Applied to the attractive Hubbard model, the full counting statistics reveals the size of a preformed pair or Cooper pair as a function of interaction strength.

  16. Motor simulation via coupled internal models using sequential Monte Carlo


    Dindo H; Zambuto D.; Pezzulo G.


    We describe a generative Bayesian model for action understanding in which inverse-forward internal model pairs are considered 'hypotheses' of plausible action goals that are explored in parallel via an approximate inference mechanism based on sequential Monte Carlo methods. The reenactment of internal model pairs can be considered a form of motor simulation, which supports both perceptual prediction and action understanding at the goal level. However, this procedure is generally considered to...

  17. Monte Carlo calculations for r-process nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Mumpower, Matthew Ryan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    A Monte Carlo framework is developed for exploring the impact of nuclear model uncertainties on the formation of the heavy elements. Mass measurements tightly constrain the macroscopic sector of FRDM2012. For r-process nucleosynthesis, it is necessary to understand the microscopic physics of the nuclear model employed. A combined approach of measurements and a deeper understanding of the microphysics is thus warranted to elucidate the site of the r-process.

  18. Instantaneous GNSS attitude determination: A Monte Carlo sampling approach (United States)

    Sun, Xiucong; Han, Chao; Chen, Pei


    A novel instantaneous GNSS ambiguity resolution approach which makes use of only single-frequency carrier phase measurements for ultra-short baseline attitude determination is proposed. The Monte Carlo sampling method is employed to obtain the probability density function of ambiguities from a quaternion-based GNSS-attitude model and the LAMBDA method strengthened with a screening mechanism is then utilized to fix the integer values. Experimental results show that 100% success rate could be achieved for ultra-short baselines.

  19. Distributional Monte Carlo methods for the Boltzmann equation (United States)

    Schrock, Christopher R.

    Stochastic particle methods (SPMs) for the Boltzmann equation, such as the Direct Simulation Monte Carlo (DSMC) technique, have gained popularity for the prediction of flows in which the assumptions behind the continuum equations of fluid mechanics break down; however, there are still a number of issues that make SPMs computationally challenging for practical use. In traditional SPMs, simulated particles may possess only a single velocity vector, even though they may represent an extremely large collection of actual particles. This limits the method to converge only in law to the Boltzmann solution. This document details the development of new SPMs that allow the velocity of each simulated particle to be distributed. This approach has been termed Distributional Monte Carlo (DMC). A technique is described which applies kernel density estimation to Nanbu's DSMC algorithm. It is then proven that the method converges not just in law, but also in solution for L^∞(R^3) solutions of the space homogeneous Boltzmann equation. This provides for direct evaluation of the velocity density function. The derivation of a general Distributional Monte Carlo method is given which treats collision interactions between simulated particles as a relaxation problem. The framework is proven to converge in law to the solution of the space homogeneous Boltzmann equation, as well as in solution for L^∞(R^3) solutions. An approach based on the BGK simplification is presented which computes collision outcomes deterministically. Each technique is applied to the well-studied Bobylev-Krook-Wu solution as a numerical test case. Accuracy and variance of the solutions are examined as functions of various simulation parameters. Significantly improved accuracy and reduced variance are observed in the normalized moments for the Distributional Monte Carlo technique employing discrete BGK collision modeling.

  20. Diagrammatic Monte Carlo simulations of staggered fermions at finite coupling

    CERN Document Server

    Vairinhos, Helvio


    Diagrammatic Monte Carlo has been a very fruitful tool for taming, and in some cases even solving, the sign problem in several lattice models. We have recently proposed a diagrammatic model for simulating lattice gauge theories with staggered fermions at arbitrary coupling, which extends earlier successful efforts to simulate lattice QCD at finite baryon density in the strong-coupling regime. Here we present the first numerical simulations of our model, using worm algorithms.

  1. Observation of Jet Photoproduction and Comparison to Monte Carlo Simulation. (United States)

    Lincoln, Donald W.

    The photon is the carrier of the electromagnetic force. However in addition to its well known nature, the theories of QCD and quantum mechanics would indicate that the photon can also for brief periods of time split into a $q\bar{q}$ pair (an extended photon). How these constituents share energy and momentum is an interesting question and such a measurement was investigated by scattering photons off protons. The post collision kinematics should reveal pre-collision information. Unfortunately, when these constituents exit the collision point, they undergo subsequent interactions (gluon radiation, fragmentation, etc.) which scramble their kinematics. An algorithm was explored which was shown via Monte Carlo techniques to partially disentangle these post collision interactions and reveal the collision kinematics. The presence or absence of large transverse momenta internal ($k_\perp$) to the photon has a significant impact on the ability to reconstruct the kinematics of the leading order calculation hard scatter system. Reconstruction of the next to leading order high $E_\perp$ partons is more straightforward. Since the photon exhibits this unusual behavior only part of the time, many of the collisions recorded will be with a non-extended (or direct) photon. Unless a method for culling only the extended photons out can be invented, this contamination of direct photons must be accounted for. No such culling method is currently known, and so any measurement will necessarily contain both photon types. Theoretical predictions using Monte Carlo methods are compared with the data and are found to reproduce many experimentally measured distributions quite well. Overall the LUND Monte Carlo reproduces the data better than the HERWIG Monte Carlo. As expected at low jet $E_\perp$, the data set seems to be dominated by extended photons, with the mix becoming nearly equal at jet $E_\perp > 4$ GeV. The existence of a large photon $k_\perp$ appears to be favored.

  2. Grid Approach to Path Integral Monte Carlo Calculations

    CERN Document Server

    Stojiljkovic, D; Bogojevic, A R; Belic, A


    The approach taken to gridify the Monte Carlo code developed for the calculation of path integrals is described. A brief introduction to path integrals and Grids is given, and details of the implementation of SPEEDUP in the Grid environment are described. The numerical results obtained with the gridified version of the application are briefly presented, demonstrating its usefulness for research in physics and related areas.

  3. A simple introduction to Markov Chain Monte-Carlo sampling. (United States)

    van Ravenzwaaij, Don; Cassey, Pete; Brown, Scott D


    Markov Chain Monte-Carlo (MCMC) is an increasingly popular method for obtaining information about distributions, especially for estimating posterior distributions in Bayesian inference. This article provides a very basic introduction to MCMC sampling. It describes what MCMC is, and what it can be used for, with simple illustrative examples. Highlighted are some of the benefits and limitations of MCMC sampling, as well as different approaches to circumventing the limitations most likely to trouble cognitive scientists.
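
    A minimal random-walk Metropolis sampler, the simplest MCMC variant the article introduces, fits in a dozen lines. This sketch targets a standard normal; the function names and tuning values are illustrative.

```python
import math
import random

def metropolis(logp, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step) and accept with
    probability min(1, p(x')/p(x)); the chain's stationary law is p."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < logp(prop) - logp(x):
            x = prop               # accept; otherwise keep the current state
        samples.append(x)
    return samples

# target: standard normal, known only up to a normalizing constant
chain = metropolis(lambda x: -0.5 * x * x, x0=5.0, n_samples=20_000)
burned = chain[2_000:]             # discard burn-in from the poor start point
```

    Note that only a ratio of densities is needed, so the normalizing constant of the posterior never has to be computed; this is precisely what makes MCMC attractive for Bayesian inference.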





    Frequently, dynamic hedging strategies minimizing risk exposure are not given in closed form, but need to be approximated numerically. This makes it difficult to estimate residual hedging risk, also called basis risk, when only imperfect hedging instruments are at hand. We propose an easy to implement and computationally efficient least-squares Monte Carlo algorithm to estimate residual hedging risk. The algorithm approximates the variance minimal hedging strategy within general diffusion mod...

  5. Observation of Jet Photoproduction and Comparison to Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Lincoln, Donald W. [Rice Univ., Houston, TX (United States)


    The photon is the carrier of the electromagnetic force. However in addition to its well known nature, the theories of QCD and quantum mechanics would indicate that the photon can also for brief periods of time split into a $q\bar{q}$ pair (an extended photon). How these constituents share energy and momentum is an interesting question and such a measurement was investigated by scattering photons off protons. The post collision kinematics should reveal pre-collision information. Unfortunately, when these constituents exit the collision point, they undergo subsequent interactions (gluon radiation, fragmentation, etc.) which scramble their kinematics. An algorithm was explored which was shown via Monte Carlo techniques to partially disentangle these post collision interactions and reveal the collision kinematics. The presence or absence of large transverse momenta internal ($k_\perp$) to the photon has a significant impact on the ability to reconstruct the kinematics of the leading order calculation hard scatter system. Reconstruction of the next to leading order high $E_\perp$ partons is more straightforward. Since the photon exhibits this unusual behavior only part of the time, many of the collisions recorded will be with a non-extended (or direct) photon. Unless a method for culling only the extended photons out can be invented, this contamination of direct photons must be accounted for. No such culling method is currently known, and so any measurement will necessarily contain both photon types. Theoretical predictions using Monte Carlo methods are compared with the data and are found to reproduce many experimentally measured distributions quite well. Overall the LUND Monte Carlo reproduces the data better than the HERWIG Monte Carlo. As expected at low jet $E_\perp$, the data set seems to be dominated by extended photons, with the mix becoming nearly equal at jet $E_\perp > 4$ GeV. The existence of a large photon $k_\perp$ appears to be favored.

  6. Programming a Hearthstone agent using Monte Carlo Tree Search


    Andersson, Markus Heikki; Hesselberg, Håkon Helgesen


    This thesis describes the effort of adapting Monte Carlo Tree Search (MCTS) to the game of Hearthstone, a card game with hidden information and stochastic elements. The focus is on discovering the suitability of MCTS for this environment, as well as which domain-specific adaptations are needed. An MCTS agent is developed for a Hearthstone simulator, which is used to conduct experiments measuring the agent's performance against both human and computer players. The implementation includes ...
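
    The select/expand/simulate/backpropagate cycle at the heart of MCTS can be shown on a toy perfect-information game; Hearthstone itself would additionally need to handle hidden information (e.g. by determinization). This bare UCT loop for the game of Nim is an illustrative sketch, unrelated to the thesis code.

```python
import math
import random

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def uct_search(stones, iters=3000, c=1.4, seed=0):
    """Bare UCT for Nim (take 1-3 stones; whoever takes the last stone wins).
    Returns the root move with the most visits."""
    rng = random.Random(seed)
    N, W = {}, {}                      # visits / wins per (stones, player) node

    def playout(s, to_move):           # uniformly random rollout to the end
        while True:
            s -= rng.choice(moves(s))
            if s == 0:
                return to_move         # this player took the last stone
            to_move ^= 1

    def simulate(s, to_move, path):
        if s == 0:
            return to_move ^ 1         # previous player took the last stone
        key = (s, to_move)
        if key not in N:               # expand one new node, then roll out
            N[key], W[key] = 0, 0
            path.append(key)
            return playout(s, to_move)
        path.append(key)
        best, best_u = None, -1.0      # select the child maximizing UCB1
        for m in moves(s):
            ck = (s - m, to_move ^ 1)
            if ck not in N:
                best = m               # prefer an unvisited child
                break
            u = (1 - W[ck] / N[ck]) + c * math.sqrt(math.log(N[key] + 1) / N[ck])
            if u > best_u:
                best, best_u = m, u
        return simulate(s - best, to_move ^ 1, path)

    for _ in range(iters):
        path = []
        winner = simulate(stones, 0, path)
        for key in path:               # backpropagate the rollout result
            N[key] += 1
            W[key] += (winner == key[1])
    return max(moves(stones), key=lambda m: N.get((stones - m, 1), 0))
```

    From 5 stones the optimal move is to take 1, leaving the opponent a losing multiple of 4; the visit counts concentrate on that branch as the tree grows.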

  7. CMS Monte Carlo production in the WLCG computing grid

    CERN Document Server

    Hernández, J M; Mohapatra, A; Filippis, N D; Weirdt, S D; Hof, C; Wakefield, S; Guan, W; Khomitch, A; Fanfani, A; Evans, D; Flossdorf, A; Maes, J; van Mulders, P; Villella, I; Pompili, A; My, S; Abbrescia, M; Maggi, G; Donvito, G; Caballero, J; Sanches, J A; Kavka, C; Van Lingen, F; Bacchi, W; Codispoti, G; Elmer, P; Eulisse, G; Lazaridis, C; Kalini, S; Sarkar, S; Hammad, G


    Monte Carlo production in CMS has received a major boost in performance and scale since the past CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG).

  8. Cassandra: An open source Monte Carlo package for molecular simulation. (United States)

    Shah, Jindal K; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Keene, Brian P; Khan, Sandip; Paluch, Andrew S; Rai, Neeraj; Romanielo, Lucienne L; Rosch, Thomas W; Yoo, Brian; Maginn, Edward J


    Cassandra is an open source atomistic Monte Carlo software package that is effective in simulating the thermodynamic properties of fluids and solids. The different features and algorithms used in Cassandra are described, along with implementation details and the theoretical underpinnings of the various methods used. Benchmark and example calculations are shown, and information on how users can obtain the package and contribute to it is provided. © 2017 Wiley Periodicals, Inc.

  9. Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model

    Energy Technology Data Exchange (ETDEWEB)

    D.P. Stotler


    The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model.

  10. Monte Carlo simulation of PET images for injection doseoptimization

    Czech Academy of Sciences Publication Activity Database

    Boldyš, Jiří; Dvořák, Jiří; Skopalová, M.; Bělohlávek, O.


    Vol. 29, No. 9 (2013), pp. 988-999. ISSN 2040-7939. R&D Projects: GA MŠk 1M0572. Institutional support: RVO:67985556. Keywords: positron emission tomography * Monte Carlo simulation * biological system modeling * image quality. Subject RIV: FD - Oncology; Hematology. Impact factor: 1.542, year: 2013

  11. Monte Carlo reliability simulation of coal shearer machine


    Hoseinie, Hadi; Khalokakaie, Reza; Ataei, Mohammad A.; Ghodrati, Behzad; Kumar, Uday


    In this paper the Kamat-Riley (K-R) event-based Monte Carlo simulation method was used for reliability analysis of a longwall shearer machine. The shearer machine consists of six subsystems: water, haulage, electrical, hydraulic, cutting arms, and cable systems, in a series network configuration. A shearer in the Tabas coal mine was selected as a case study, and all of its failure data were collected and used for reliability analysis of the subsystems. Under the assumption of negligible time to repair, a flowchart ...

  12. Monte Carlo estimation of the electric field in stellarators (United States)

    Bauer, F.; Betancourt, O.; Garabedian, P.; Ng, K. C.


    The BETA computer codes have been developed to study ideal magnetohydrodynamic equilibrium and stability of stellarators and to calculate neoclassical transport for electrons as well as ions by the Monte Carlo method. In this paper a numerical procedure is presented to select resonant terms in the electric potential so that the distribution functions and confinement times of the ions and electrons become indistinguishable. PMID:16593767

  13. Monte Carlo simulation of NSE at reactor and spallation sources

    Energy Technology Data Exchange (ETDEWEB)

    Zsigmond, G.; Wechsler, D.; Mezei, F. [Hahn-Meitner-Institut Berlin, Berlin (Germany)


    An MC (Monte Carlo) computation study of NSE (Neutron Spin Echo) has been performed by means of VITESS, investigating the classic and TOF-NSE options at spallation sources. The use of white beams in TOF-NSE makes the flipper efficiency as a function of neutron wavelength an important issue. The emphasis was put on the exact evaluation of flipper efficiencies for wide wavelength-band instruments. (author)

  14. Variational Monte Carlo for spin-orbit interacting systems (United States)

    Ambrosetti, A.; Silvestrelli, P. L.; Toigo, F.; Mitas, L.; Pederiva, F.


    Recently, a diffusion Monte Carlo algorithm was applied to the study of spin-dependent interactions in condensed matter [A. Ambrosetti, F. Pederiva, E. Lipparini, and S. Gandolfi, Phys. Rev. B 80, 125306 (2009)]. Following some of the ideas presented therein, and applied to a Hamiltonian containing a Rashba-like interaction, a general variational Monte Carlo approach is here introduced that treats in an efficient and very accurate way the spin degrees of freedom in atoms when spin-orbit effects are included in the Hamiltonian describing the electronic structure. We illustrate the algorithm on the evaluation of the spin-orbit splittings of isolated C, Tl, Pb, Bi, and Po atoms. In the case of the carbon atom, we investigate the differences between the inclusion of the spin orbit in its realistic and effective spherically symmetrized forms. The method exhibits a very good accuracy in describing the small energy splittings, opening the way for systematic quantum Monte Carlo studies of spin-orbit effects in atomic systems.

  15. ALEPH2 - A general purpose Monte Carlo depletion code

    Energy Technology Data Exchange (ETDEWEB)

    Stankovskiy, A.; Van Den Eynde, G.; Baeten, P. [SCK CEN, Boeretang 200, B-2400 Mol (Belgium); Trakas, C.; Demy, P. M.; Villatte, L. [AREVA NP, Tour AREVA, Pl. J. Millier, 92084 Paris La Defense (France)


    The Monte-Carlo burn-up code ALEPH has been under development at SCK-CEN since 2004. A previous version of the code implemented the coupling between the Monte Carlo transport (any version of MCNP or MCNPX) and the 'deterministic' depletion code ORIGEN-2.2, but had important deficiencies in nuclear data treatment and limitations inherent to ORIGEN-2.2. A new version of the code, ALEPH2, has several unique features making it outstanding among other depletion codes. The most important feature is full data consistency between steady-state Monte Carlo and time-dependent depletion calculations. The latest-generation general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII and JENDL-4) are fully implemented, including special-purpose activation, spontaneous fission, fission product yield and radioactive decay data. The built-in depletion algorithm makes it possible to eliminate the uncertainties associated with obtaining the time-dependent nuclide concentrations. A predictor-corrector mechanism and calculations of nuclear heating, decay heat, and decay neutron sources are available as well. The code has been validated against the results of the REBUS experimental program; ALEPH2 has shown better agreement with measured data than other depletion codes. (authors)

  16. Stabilization effect of fission source in coupled Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Börge Olsen


    A fission source can act as a stabilization element in coupled Monte Carlo simulations. We have observed this while studying numerical instabilities in nonlinear steady-state simulations performed by a Monte Carlo criticality solver that is coupled to a xenon feedback solver via fixed-point iteration. While fixed-point iteration is known to be numerically unstable for some problems, resulting in large spatial oscillations of the neutron flux distribution, we show that it is possible to stabilize it by reducing the number of Monte Carlo criticality cycles simulated within each iteration step. While global convergence is ensured, development of any possible numerical instability is prevented by not allowing the fission source to converge fully within a single iteration step, which is achieved by setting a small number of criticality cycles per iteration step. Moreover, under these conditions, the fission source may converge even faster than in criticality calculations with no feedback, as we demonstrate in our numerical test simulations.
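
    A toy analogy (not the paper's coupled neutronics solver) shows the stabilization mechanism: a fixed-point iteration whose map has slope of magnitude greater than one oscillates and diverges, while damping the update, analogous to not letting the fission source converge fully within an iteration step, restores convergence. The map and relaxation factor below are illustrative assumptions.

```python
def iterate(f, x0, n, relax=1.0):
    """Fixed-point iteration with under-relaxation:
    x <- (1 - relax) * x + relax * f(x)."""
    x = x0
    for _ in range(n):
        x = (1.0 - relax) * x + relax * f(x)
    return x

# toy feedback map with slope -1.5 at its fixed point x* = 1
f = lambda x: 1.0 - 1.5 * (x - 1.0)
plain = iterate(f, 1.1, 30)               # |slope| > 1: oscillations grow
damped = iterate(f, 1.1, 30, relax=0.5)   # damped iteration converges to x*
```

    The damped map has slope -0.25 at the fixed point, so errors shrink by a factor of four per step instead of growing by 1.5.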

  17. Valence-bond quantum Monte Carlo algorithms defined on trees. (United States)

    Deschner, Andreas; Sørensen, Erik S


    We present a class of algorithms for performing valence-bond quantum Monte Carlo simulations of quantum spin models. Valence-bond quantum Monte Carlo is a projective T=0 Monte Carlo method based on sampling a set of operator strings that can be viewed as forming a treelike structure. The algorithms presented here utilize the notion of a worm that moves up and down this tree and changes the associated operator string. In quite general terms, we derive a set of equations whose solutions correspond to a whole class of algorithms. As specific examples of this class, we focus on two cases: the bouncing worm algorithm, in which updates are always accepted by allowing the worm to bounce up and down the tree; and the driven worm algorithm, in which a single parameter controls how far up the tree the worm reaches before turning around. The latter algorithm involves only a single bounce, where the worm turns from going up the tree to going down. The presence of the control parameter necessitates the introduction of an acceptance probability for the update.

  18. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation (United States)

    Restrepo, Carolina; Hurtado, John E.


    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques can generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data, combined with the limited time available to engineers, motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters and, most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
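
    The classification stage mentioned above can be illustrated with a self-contained sketch. The dispersion data and the failure criterion below are synthetic stand-ins (the abstract's kernel density estimation and sequential feature selection stages are omitted); only the k-nearest-neighbor vote is shown:

```python
import math
import random

random.seed(1)

# Hypothetical stand-in for Monte Carlo dispersion data: each run varies two
# design parameters in [-1, 1]; a run is labeled 1 (failure) under a made-up
# criterion "both parameters dispersed high", else 0 (nominal).
runs = []
for _ in range(400):
    p1, p2 = random.uniform(-1, 1), random.uniform(-1, 1)
    runs.append(((p1, p2), 1 if (p1 > 0.5 and p2 > 0.5) else 0))

def knn_predict(query, data, k=7):
    """Majority vote among the k runs whose parameters are closest to query."""
    nearest = sorted(data, key=lambda run: math.dist(query, run[0]))[:k]
    return 1 if sum(label for _, label in nearest) * 2 > k else 0

print(knn_predict((0.8, 0.9), runs))   # deep inside the synthetic failure region
print(knn_predict((-0.5, 0.0), runs))  # benign region
```

    In the full tool, such a classifier is applied after feature selection has reduced the parameter list, so the distance metric acts only on the parameters that actually drive the failure.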

  19. Performance of quantum Monte Carlo for calculating molecular bond lengths (United States)

    Cleland, Deidre M.; Per, Manolo C.


    This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of (3 ± 2) × 10⁻³ Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only (4.0 ± 0.9) × 10⁻³ Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.
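
    The mean-absolute-deviation metric quoted above is straightforward to reproduce. A sketch with invented bond lengths (three molecules only, not the paper's 30-molecule test set):

```python
# Mean absolute deviation (MAD) of calculated vs. experimental bond lengths.
# All numbers are illustrative placeholders, not the paper's data (angstrom).
calc = {"H2": 0.7461, "N2": 1.1030, "O2": 1.2120}   # hypothetical QMC values
expt = {"H2": 0.7414, "N2": 1.0977, "O2": 1.2075}   # hypothetical references

mad = sum(abs(calc[m] - expt[m]) for m in calc) / len(calc)
print(f"MAD = {mad * 1000:.1f} x 10^-3 angstrom")
```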

  20. Monte Carlo Simulation for Statistical Decay of Compound Nucleus

    Directory of Open Access Journals (Sweden)

    Chadwick M.B.


    Full Text Available We perform Monte Carlo simulations for neutron and γ-ray emissions from a compound nucleus based on the Hauser-Feshbach statistical theory. This Monte Carlo Hauser-Feshbach (MCHF) method gives us correlated information between emitted particles and γ-rays, and it will be a powerful tool in many applications, as nuclear reactions can be probed in a more microscopic way. We have been developing the MCHF code CGM, which solves the Hauser-Feshbach theory with the Monte Carlo method. The code includes all the standard models used in a standard Hauser-Feshbach code, namely the particle transmission generator, the level density module, the interface to the discrete level database, and so on. CGM can emit multiple neutrons, as long as the excitation energy of the compound nucleus is larger than the neutron separation energy. The γ-ray competition is always included at each compound decay stage, and the angular momentum and parity are conserved. Some calculations for the fission fragment 140Xe are shown as examples of the MCHF method, and the correlation between the neutrons and γ-rays is discussed.
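
    A heavily simplified cartoon of such a decay cascade can be sketched as follows. This is only the bookkeeping skeleton, with made-up values for the separation energy and nuclear temperature; a real MCHF code samples from transmission coefficients and level densities instead of a bare exponential, and treats γ-ray competition at every stage:

```python
import random

random.seed(42)

# Cartoon Monte Carlo decay cascade: neutrons are emitted while the excitation
# energy exceeds a (made-up) neutron separation energy S_N; the sub-threshold
# remainder is dumped into a single "statistical" gamma ray.
S_N = 6.0   # hypothetical neutron separation energy (MeV)
TEMP = 1.0  # hypothetical nuclear temperature (MeV)

def decay(excitation):
    neutrons, gammas = [], []
    e = excitation
    while e > S_N:
        # Evaporation-like neutron kinetic energy, capped so the residual
        # excitation energy stays non-negative.
        ekin = min(random.expovariate(1.0 / TEMP), e - S_N)
        neutrons.append(ekin)
        e -= S_N + ekin
    if e > 0:
        gammas.append(e)
    return neutrons, gammas

ns, gs = decay(20.0)
total = sum(ns) + len(ns) * S_N + sum(gs)
print(len(ns), round(total, 6))  # energy is conserved (up to float rounding)
```

    Repeating `decay` many times yields the correlated neutron/γ distributions the abstract refers to, event by event rather than as ensemble averages.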

  1. Stock Price Simulation Using Bootstrap and Monte Carlo

    Directory of Open Access Journals (Sweden)

    Pažický Martin


    Full Text Available In this paper, an attempt is made to assess and compare bootstrap and Monte Carlo experiments for stock price simulation. Since the future evolution of a stock price is extremely important for investors, we attempt to find the best method to determine the future stock price of BNP Paribas' bank. The aim of the paper is to define the value of European and Asian options on BNP Paribas' stock at the maturity date. Four different methods are employed for the simulation. The first is a bootstrap experiment with a homoscedastic error term, the second is a block bootstrap experiment with a heteroscedastic error term, the third is a Monte Carlo simulation with a heteroscedastic error term, and the last is a Monte Carlo simulation with a homoscedastic error term. In the last method it is necessary to model the volatility using an econometric GARCH model. The main purpose of the paper is to compare the mentioned methods and select the most reliable. The difference between the classical European option and the exotic Asian option, based on the experiment results, is the next aim of this paper.
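
    Two of the four variants can be sketched in miniature. The toy below compares an i.i.d. bootstrap of historical returns against a moment-matched Gaussian Monte Carlo for European and Asian call payoffs at maturity; the return history, strike, and horizon are invented, and discounting, the risk-neutral drift adjustment, and the GARCH volatility model are all omitted for brevity:

```python
import math
import random
import statistics

random.seed(7)

# Fake daily log-return history (stand-in for actual BNP Paribas data).
hist = [random.gauss(0.0004, 0.012) for _ in range(500)]
mu, sigma = statistics.mean(hist), statistics.stdev(hist)
S0, K, DAYS, N_PATHS = 100.0, 100.0, 60, 2000  # illustrative contract terms

def simulate_path(sample_return):
    """Compound daily log-returns; return terminal price and path average."""
    s, running = S0, 0.0
    for _ in range(DAYS):
        s *= math.exp(sample_return())
        running += s
    return s, running / DAYS

def price(sample_return):
    euro = asian = 0.0
    for _ in range(N_PATHS):
        s_t, avg = simulate_path(sample_return)
        euro += max(s_t - K, 0.0)    # European call pays on the terminal price
        asian += max(avg - K, 0.0)   # Asian call pays on the path average
    return euro / N_PATHS, asian / N_PATHS

euro_b, asian_b = price(lambda: random.choice(hist))      # bootstrap resampling
euro_g, asian_g = price(lambda: random.gauss(mu, sigma))  # Gaussian Monte Carlo
print(round(euro_b, 2), round(asian_b, 2), round(euro_g, 2), round(asian_g, 2))
```

    Because averaging dampens the volatility of the underlying, the Asian payoff comes out cheaper than the European one under both sampling schemes.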

  2. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li


    Full Text Available Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services, such as formation lithology/mineralogy, are also provided using complex nuclear logging tools. Some parameters can be measured using neutron logging tools and some can only be measured using a gamma ray tool. To understand the response of nuclear logging tools, neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design and in the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters, including geometry, materials, nuclear sources, etc., are pre-defined, and the transport and interactions of nuclear particles (such as neutrons, photons, and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and the corresponding nuclear cross-section data (probabilities of interaction). The deposited energies of particles entering the detectors are then recorded and tallied, and the tool responses for such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models.

  3. Properties of reactive oxygen species by quantum Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Zen, Andrea [Dipartimento di Fisica, La Sapienza - Università di Roma, Piazzale Aldo Moro 2, 00185 Rome (Italy); Trout, Bernhardt L. [Department of Chemical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, Massachusetts 02139 (United States); Guidoni, Leonardo, E-mail: [Dipartimento di Scienze Fisiche e Chimiche, Università degli studi de L' Aquila, Via Vetoio, 67100 Coppito, L' Aquila (Italy)


    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of chemistry, biology, and atmospheric science. Nevertheless, the electronic structure of such species is a challenge for ab initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution, and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal Power (JAGP) wave function ansatz, which has recently been shown to effectively describe the static and dynamical correlation of different molecular systems. In particular, we have studied the oxygen molecule, the superoxide anion, the nitric oxide radical and anion, the hydroxyl and hydroperoxyl radicals and their corresponding anions, and the hydrotrioxyl radical. Overall, the methodology was able to correctly describe the geometrical and electronic properties of these systems, through compact but fully optimised basis sets and with a computational cost which scales as N³-N⁴, where N is the number of electrons. This work therefore opens the way to the accurate study of the energetics and reactivity of large and complex oxygen species by first principles.

  5. Density matrix Monte Carlo modeling of quantum cascade lasers (United States)

    Jirauschek, Christian


    By including elements of the density matrix formalism, the semiclassical ensemble Monte Carlo method for carrier transport is extended to incorporate incoherent tunneling, known to play an important role in quantum cascade lasers (QCLs). In particular, this effect dominates electron transport across thick injection barriers, which are frequently used in terahertz QCL designs. A self-consistent model for quantum mechanical dephasing is implemented, eliminating the need for empirical simulation parameters. Our modeling approach is validated against available experimental data for different types of terahertz QCL designs.

  6. Proceedings of the first symposium on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)



    The first symposium on Monte Carlo simulation was held at Mitsubishi Research Institute, Otemachi, Tokyo, on the 10th and 11th of September, 1998. The symposium was organized by the Nuclear Code Research Committee at the Japan Atomic Energy Research Institute. In the sessions, 21 papers were presented orally on code development, parallel calculation, reactor physics, burn-up, criticality, shielding safety, dose evaluation, nuclear fusion reactors, thermonuclear fusion plasma, nuclear transmutation, electromagnetic cascades, and fuel cycle facilities. The presented papers are compiled in this proceedings, and the 21 presented papers are indexed individually. (J.P.N.)

  7. Computational radiology and imaging with the MCNP Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Estes, G.P.; Taylor, W.M.


    MCNP, a 3D coupled neutron/photon/electron Monte Carlo radiation transport code, is currently used in medical applications such as cancer radiation treatment planning, interpretation of diagnostic radiation images, and treatment beam optimization. This paper will discuss MCNP's current uses and capabilities, as well as envisioned improvements that would further enhance MCNP's role in computational medicine. It will be demonstrated that the methodology exists to simulate medical images (e.g. SPECT). Techniques will be discussed that would enable the construction of 3D computational geometry models of individual patients for use in patient-specific studies that would improve the quality of care for patients.

  8. Diffusion Monte Carlo study of circular quantum dots (United States)

    Pederiva, Francesco; Umrigar, C. J.; Lipparini, E.


    We present ground- and excited-state energies obtained from diffusion Monte Carlo (DMC) calculations, using accurate multiconfiguration wave functions, for N electrons (N<=13) confined to a circular quantum dot. We compare the density and correlation energies to the predictions of local spin density approximation (LSDA) theory and Hartree-Fock (HF) theory, and analyze the electron-electron pair-correlation functions. The DMC estimated change in electrochemical potential as a function of the number of electrons in the dot is compared to that from LSDA and HF calculations. Hund's first rule is found to be satisfied for all dots except N=4 for which there is a near degeneracy.

  9. Polarizability in Quantum Dots via Correlated Quantum Monte Carlo (United States)

    Colletti, L.; Pederiva, F.; Lipparini, E.; Umrigar, C. J.


    In this paper we review calculations of charge-density and spin-density polarizabilities in small quantum dots using a correlated Monte Carlo scheme. In the limit of small external fields, knowledge of the polarizability implies, thanks to the commonly used "sum rules", prediction of the excitation energy of the dipole mode. The need for a numerical approach arises when the spin-density polarizability is pursued, while the charge-density mode can be calculated analytically as long as the confinement remains parabolic.

  10. Monte Carlo simulation of a prototype photodetector used in radiotherapy

    CERN Document Server

    Kausch, C; Albers, D; Schmidt, R; Schreiber, B


    The imaging performance of prototype electronic portal imaging devices (EPIDs) has been investigated. Monte Carlo simulations have been applied to calculate the modulation transfer function (MTF(f)), the noise power spectrum (NPS(f)), and the detective quantum efficiency (DQE(f)) for different new types of EPIDs, which consist of a combination of a metal or polyethylene (PE) layer, a phosphor layer of Gd2O2S, and a flat array of photodiodes. The simulated results agree well with measurements. Based on the simulated results, possible optimizations of these devices are discussed.

  11. Monte Carlo simulation of the Neutrino-4 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Serebrov, A. P., E-mail:; Fomin, A. K.; Onegin, M. S.; Ivochkin, V. G.; Matrosov, L. N. [National Research Center Kurchatov Institute, Petersburg Nuclear Physics Institute (Russian Federation)


    Monte Carlo simulation of the two-section reactor antineutrino detector of the Neutrino-4 experiment is carried out. The scintillation-type detector is based on the inverse beta-decay reaction. The antineutrino is recorded by two successive signals from the positron and the neutron. The simulation of the detector sections and the active shielding is performed. As a result of the simulation, the distributions of photomultiplier signals from the positron and the neutron are obtained. The efficiency of the detector depending on the signal recording thresholds is calculated.

  12. Monte-Carlo Tree Search in chess endgames


    Kohne, Andraž


    The Monte-Carlo Tree Search (MCTS) algorithm has in recent years captured the attention of many researchers due to its notable success in the game of Go. In spite of this success, so far it has not been used much in the game of chess. In this thesis, we attempt to apply MCTS to chess endgames. The reason for this is the existence of chess tablebases, i.e. databases that provide an exact value of each chess board position in terms of distance to mate. With this information at disposal we ar...
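
    At the core of MCTS is the selection rule (UCT), which balances exploiting children with high mean reward against exploring rarely visited ones. The sketch below shows only this generic rule with made-up statistics; the thesis's tablebase-guided evaluation is not reproduced:

```python
import math

def ucb1(wins, visits, parent_visits, c=1.4):
    """UCB1 score: mean reward plus an exploration bonus of weight c."""
    if visits == 0:
        return float("inf")  # unvisited children are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Hypothetical children as (wins, visits) under a parent visited 100 times.
children = [(60, 80), (10, 12), (0, 0)]
scores = [ucb1(w, n, 100) for w, n in children]
best = max(range(len(children)), key=lambda i: scores[i])
print(best)  # the unvisited child is selected via its infinite score
```

    During search, the selected child's subtree is descended recursively, a playout is run from the new leaf, and the (wins, visits) statistics are backed up along the path.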


    Energy Technology Data Exchange (ETDEWEB)



    The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNPW, which is a general-purpose, 3-dimensional, time-dependent, continuous-energy, fully coupled N-particle Monte Carlo transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail.

  14. Studying the information content of TMDs using Monte Carlo generators

    Energy Technology Data Exchange (ETDEWEB)

    Avakian, H. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Matevosyan, H. [The Univ. of Adelaide, Adelaide (Australia); Pasquini, B. [Univ. of Pavia, Pavia (Italy); Schweitzer, P. [Univ. of Connecticut, Storrs, CT (United States)


    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.

  15. Matching NLO with parton shower in Monte Carlo scheme

    CERN Document Server

    Sapeta, Sebastian


    A new method of including NLO QCD corrections to the hard process in the LO Monte Carlo (MC) shower is discussed. The method is based on a recently proposed MC factorization scheme, which dramatically simplifies the NLO coefficient functions. The NLO corrections are introduced by simple reweighting of the events produced by the LO shower with a single, positive MC weight. A practical implementation of the method is presented for the case of electroweak boson production in hadron-hadron collisions, and the results are compared with well-established approaches to NLO+PS matching.
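
    The general mechanism of correcting a sample by per-event reweighting can be shown in isolation. In the toy below, events drawn from a "LO-like" density f are reweighted by w(x) = g(x)/f(x) so that weighted averages reproduce a "target" density g; the densities are arbitrary illustrative choices, not QCD cross sections, and the weight happens to be positive everywhere, as in the scheme described above:

```python
import math
import random

random.seed(3)

def f_pdf(x):
    """Sampling density: Exp(1) on [0, inf)."""
    return math.exp(-x)

def g_pdf(x):
    """Target density: Gamma(2, 1), i.e. x * exp(-x)."""
    return x * math.exp(-x)

events = [random.expovariate(1.0) for _ in range(200_000)]
weights = [g_pdf(x) / f_pdf(x) for x in events]  # here w(x) = x > 0

# The weighted mean of x should approach E_g[x] = 2, the mean of Gamma(2, 1).
mean_g = sum(w * x for w, x in zip(weights, events)) / sum(weights)
print(round(mean_g, 3))
```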

  16. Monte Carlo algorithms for Hardy-Weinberg proportions. (United States)

    Huber, Mark; Chen, Yuguo; Dinwoodie, Ian; Dobra, Adrian; Nicholas, Mike


    The Hardy-Weinberg law is among the most important principles in the study of biological systems. Given its importance, many tests have been devised to determine whether a finite population follows Hardy-Weinberg proportions. Because asymptotic tests can fail, Guo and Thompson developed an exact test; unfortunately, the Monte Carlo method they proposed to evaluate their test has a running time that grows linearly in the size of the population N. Here, we propose a new algorithm whose expected running time is linear in the size of the table produced, and completely independent of N. In practice, this new algorithm can be considerably faster than the original method.
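
    For orientation, here is the simple N-dependent Monte Carlo version of the exact test for a single biallelic locus; the paper's contribution, an algorithm whose cost is independent of N, is not reproduced here, and the genotype counts are made up:

```python
import random

random.seed(11)

# Observed genotype counts for one biallelic locus (illustrative numbers).
n_AA, n_Aa, n_aa = 30, 20, 30

# Pool the alleles implied by the observed genotypes.
alleles = ["A"] * (2 * n_AA + n_Aa) + ["a"] * (2 * n_aa + n_Aa)

def het_count(pool):
    """Randomly pair the pooled alleles and count heterozygotes."""
    random.shuffle(pool)
    return sum(pool[2 * i] != pool[2 * i + 1] for i in range(len(pool) // 2))

trials = 5000
samples = [het_count(alleles[:]) for _ in range(trials)]

# Two-sided Monte Carlo p-value: how often does random pairing produce a
# heterozygote count at least as far from its mean as the observed one?
mean_het = sum(samples) / trials
obs_dev = abs(n_Aa - mean_het)
p_value = sum(abs(s - mean_het) >= obs_dev for s in samples) / trials
print(round(p_value, 3))  # 20 heterozygotes out of 80 is a strong deficit
```

    Each trial shuffles all 2N pooled alleles, which is exactly the linear-in-N cost the paper's table-based algorithm avoids.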

  17. Monte Carlo simulation of particle-induced bit upsets

    Directory of Open Access Journals (Sweden)

    Wrobel Frédéric


    Full Text Available We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport particles in a device, to calculate the energy deposited in the sensitive region of the device, and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.

  18. Monte Carlo Frameworks Building Customisable High-performance C++ Applications

    CERN Document Server

    Duffy, Daniel J


    This is one of the first books that describe all the steps that are needed in order to analyze, design and implement Monte Carlo applications. It discusses the financial theory as well as the mathematical and numerical background that is needed to write flexible and efficient C++ code using state-of-the art design and system patterns, object-oriented and generic programming models in combination with standard libraries and tools.   Includes a CD containing the source code for all examples. It is strongly advised that you experiment with the code by compiling it and extending it to suit your ne

  19. Monte-Carlo Tree Search for Simulated Car Racing

    DEFF Research Database (Denmark)

    Fischer, Jacob; Falsted, Nikolaj; Vielwerth, Mathias


    Monte Carlo Tree Search (MCTS) has recently seen considerable success in playing certain types of games, most of which are discrete, fully observable zero-sum games. Consequently there is currently considerable interest within the research community in investigating what other games this algorithm...... of the action space. This combination allows the controller to effectively search the tree of potential future states. Results show that it is indeed possible to implement a competent MCTS-based racing controller. The controller generalizes to most road tracks as long as a warm-up period is provided....

  20. Improved diffusion Monte Carlo and the Brownian fan (United States)

    Weare, J.; Hairer, M.


    Diffusion Monte Carlo (DMC) is a workhorse of stochastic computing. It was invented forty years ago as the central component in a Monte Carlo technique for estimating various characteristics of quantum mechanical systems. Since then it has been applied in a huge number of fields, often as a central component in sequential Monte Carlo techniques (e.g. the particle filter). DMC computes averages of some underlying stochastic dynamics weighted by a functional of the path of the process. The weight functional could represent the potential term in a Feynman-Kac representation of a partial differential equation (as in quantum Monte Carlo) or it could represent the likelihood of a sequence of noisy observations of the underlying system (as in particle filtering). DMC alternates between an evolution step, in which a collection of samples of the underlying system are evolved for some short time interval, and a branching step, in which, according to the weight functional, some samples are copied and some samples are eliminated. Unfortunately, for certain choices of the weight functional, DMC fails to have a meaningful limit as one decreases the evolution time interval between branching steps. We propose a modification of the standard DMC algorithm. The new algorithm has a lower variance per workload, regardless of the regime considered. In particular, it makes it feasible to use DMC in situations where the "naive" generalization of the standard algorithm would be impractical, due to an exponential explosion of its variance. We numerically demonstrate the effectiveness of the new algorithm on a standard rare event simulation problem (probability of an unlikely transition in a Lennard-Jones cluster), as well as a high-frequency data assimilation problem. We then provide a detailed heuristic explanation of why, in the case of rare event simulation, the new algorithm is expected to converge to a limiting process as the underlying stepsize goes to 0. This is shown
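
    The standard evolve-then-branch loop described above can be written down in a few lines. The sketch below is the plain (unmodified) algorithm on a toy problem, the 1-D harmonic oscillator V(x) = x²/2 with exact ground-state energy 0.5 in natural units; it uses no importance sampling, and all parameters are illustrative choices:

```python
import math
import random

random.seed(5)

DT, STEPS, TARGET = 0.01, 2000, 400  # time step, iterations, walker population

def V(x):
    return 0.5 * x * x

walkers = [random.gauss(0.0, 1.0) for _ in range(TARGET)]
e_ref, energies = 0.5, []  # rough starting guess for the reference energy
for step in range(STEPS):
    new = []
    for x in walkers:
        x += random.gauss(0.0, math.sqrt(DT))        # evolution step (diffusion)
        w = math.exp(-DT * (V(x) - e_ref))           # weight over the interval
        for _ in range(int(w + random.random())):    # branching: copy/eliminate
            new.append(x)
    walkers = new
    growth = sum(V(x) for x in walkers) / len(walkers)  # growth estimator of E0
    # Proportional population control keeps the walker count near TARGET.
    e_ref = growth + 0.5 * math.log(TARGET / len(walkers))
    if step > STEPS // 2:
        energies.append(growth)

e0 = sum(energies) / len(energies)
print(round(e0, 2))  # should land near the exact ground-state energy 0.5
```

    The branching line is exactly the step that misbehaves for badly scaled weight functionals as DT shrinks, which is what the paper's modified algorithm addresses.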

  1. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos, E-mail: [Thomas Young Centre and London Centre for Nanotechnology, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ (United Kingdom); Alfè, Dario [Thomas Young Centre and London Centre for Nanotechnology, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Department of Earth Sciences, University College London, Gower Street, London WC1E 6BT (United Kingdom); Lilienfeld, O. Anatole von [Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, Department of Chemistry, University of Basel, Klingelbergstrasse 80, CH-4056 Basel (Switzerland); Argonne Leadership Computing Facility, Argonne National Laboratories, 9700 S. Cass Avenue Argonne, Lemont, Illinois 60439 (United States)


    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.

  2. A fitter use of Monte Carlo simulations in regression models

    Directory of Open Access Journals (Sweden)

    Alessandro Ferrarini


    Full Text Available In this article, I focus on the use of Monte Carlo simulations (MCS) within regression models, an application that is very frequent in biology, ecology, and economics. I am interested in highlighting a typical fault in this application of MCS: the inner correlations among independent variables are not used when generating random numbers that fit their distributions. By means of an illustrative example, I provide proof that this misuse of MCS in regression models produces misleading results. Furthermore, I also provide a solution to this problem.
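
    The fix the article argues for amounts to drawing the predictors jointly rather than independently. For two standard normal predictors this is a one-line Cholesky factor; the correlation value below is an assumed illustrative figure, not from the article:

```python
import math
import random

random.seed(2)

rho = 0.8  # assumed (illustrative) correlation between the two predictors

def correlated_pair():
    """Draw (x1, x2) jointly so that corr(x1, x2) = rho (2x2 Cholesky)."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1 - rho * rho) * z2

n = 100_000
xs, ys = zip(*(correlated_pair() for _ in range(n)))

# Empirical Pearson correlation of the generated sample.
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
r = cov / (sx * sy)
print(round(r, 3))  # close to the target rho = 0.8
```

    Sampling each predictor from its marginal distribution independently would instead give r near 0, which is exactly the fault the article demonstrates.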

  3. Advanced Markov chain Monte Carlo methods learning from past samples

    CERN Document Server

    Liang, Faming; Carrol, Raymond J


    This book provides comprehensive coverage of the simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms which address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. This book includes the multicanonical algorithm, dynamic weighting, dynamically weight

  4. Experimental validation of plutonium ageing by Monte Carlo correlated sampling

    Energy Technology Data Exchange (ETDEWEB)

    Litaize, O.; Bernard, D.; Santamarina, A. [Commissariat a l' Energie Atomique CEA, Centre d' Etudes de Cadarache, 13108 Saint-Paul lez Durance (France)


    Integral measurements of plutonium ageing were performed in two homogeneous MOX cores (MISTRAL2 and MISTRALS) of the French MISTRAL programme between 1996 and 2000. The analysis of the MISTRAL2 experiment with the JEF-2.2 nuclear data library highlighted an underestimation of the 241Am capture cross section. The next experiment (MISTRALS) did not conclude in the same way. This paper presents a new analysis performed with the recent JEFF-3.1 library and a Monte Carlo perturbation method (correlated sampling) available in the French TRIPOLI4 code. (authors)

  5. Monte Carlo methods for medical physics a practical introduction

    CERN Document Server

    Schuemann, Jan; Paganetti, Harald


    The Monte Carlo (MC) method, established as the gold standard to predict results of physical processes, is now fast becoming a routine clinical tool for applications that range from quality control to treatment verification. This book provides a basic understanding of the fundamental principles and limitations of the MC method in the interpretation and validation of results for various scenarios. It shows how user-friendly and speed optimized MC codes can achieve online image processing or dose calculations in a clinical setting. It introduces this essential method with emphasis on applications in hardware design and testing, radiological imaging, radiation therapy, and radiobiology.

  6. Application of Monte Carlo methods in tomotherapy and radiation biophysics (United States)

    Hsiao, Ya-Yun

    Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for helical tomotherapy is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published
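
    The elementary transport step that codes like PENELOPE build on can be shown in a few lines: the depth of a photon's first interaction follows an exponential law, so the Monte Carlo transmitted fraction through a slab should reproduce Beer-Lambert attenuation. The attenuation coefficient and slab thickness below are made-up values:

```python
import math
import random

random.seed(9)

MU = 0.2    # hypothetical linear attenuation coefficient (1/cm)
SLAB = 5.0  # hypothetical slab thickness (cm)
N = 100_000

# Sample the depth of first interaction for each photon; count those whose
# sampled depth exceeds the slab thickness (i.e. they pass through untouched).
transmitted = sum(random.expovariate(MU) > SLAB for _ in range(N))
mc_fraction = transmitted / N
analytic = math.exp(-MU * SLAB)  # Beer-Lambert prediction
print(round(mc_fraction, 3), round(analytic, 3))
```

    Full transport codes then sample an interaction type at each collision point and follow the secondaries, but every history starts from this free-flight sampling step.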

  7. New electron multiple scattering distributions for Monte Carlo transport simulation

    Energy Technology Data Exchange (ETDEWEB)

    Chibani, Omar (Haut Commissariat a la Recherche (C.R.S.), 2 Boulevard Franz Fanon, Alger B.P. 1017, Alger-Gare (Algeria)); Patau, Jean Paul (Laboratoire de Biophysique et Biomathematiques, Faculte des Sciences Pharmaceutiques, Universite Paul Sabatier, 35 Chemin des Maraichers, 31062 Toulouse cedex (France))


    New forms of electron (positron) multiple scattering distributions are proposed. The first is intended for use within the conditions of validity of the Moliere theory. The second applies when the electron path is so short that only a few elastic collisions occur. These distributions are adjustable formulas: the introduction of some parameters allows imposition of the correct value of the first moment. Only positive, analytic functions were used in constructing the present expressions, which makes sampling procedures easier. Systematic tests are presented and some Monte Carlo simulations are carried out as benchmarks. ((orig.))

  8. A Monte Carlo Study of Seven Homogeneity of Variance Tests


    Howard B. Lee; Gary S. Katz; Alberto F. Restori


    Problem statement: The decision by SPSS (now PASW) to use the unmodified Levene test to test homogeneity of variance was questioned, and the test was compared to six others. In total, seven homogeneity of variance tests used in Analysis of Variance (ANOVA) were compared on robustness and power using Monte Carlo studies. The homogeneity of variance tests were (1) Levene, (2) modified Levene, (3) Z-variance, (4) Overall-Woodward Modified Z-variance, (5) O'Brien, (6) Samiuddin Cube Root and (7) F-Max....
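    A Monte Carlo power comparison of this kind is straightforward to sketch. The snippet below (a hypothetical setup, not the authors' code) calibrates an empirical 5% critical value for Levene's statistic under equal variances, then estimates power when one group's standard deviation is tripled:

```python
import random
import statistics

def levene_stat(groups):
    # Levene's W: a one-way ANOVA F statistic computed on the absolute
    # deviations of each observation from its group mean
    z = [[abs(x - statistics.fmean(g)) for x in g] for g in groups]
    k, n = len(z), sum(len(g) for g in z)
    grand = statistics.fmean([v for g in z for v in g])
    ssb = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in z)
    ssw = sum(sum((v - statistics.fmean(g)) ** 2 for v in g) for g in z)
    return ((n - k) / (k - 1)) * ssb / ssw

def simulate_stats(sds, n=20, reps=2000, seed=1):
    # draw `reps` datasets of normal groups with the given SDs
    rng = random.Random(seed)
    return [levene_stat([[rng.gauss(0.0, s) for _ in range(n)] for s in sds])
            for _ in range(reps)]

# calibrate an empirical 5% critical value under H0 (equal variances) ...
null_stats = sorted(simulate_stats([1.0, 1.0, 1.0]))
crit = null_stats[int(0.95 * len(null_stats))]
# ... then estimate power when the third group's SD is tripled
power = sum(s > crit for s in simulate_stats([1.0, 1.0, 3.0], seed=2)) / 2000.0
```

    With normal data the empirical critical value tracks the F(k-1, N-k) quantile; rerunning the null simulation with skewed or heavy-tailed data is what exposes the robustness differences among the seven tests.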

  9. Monte Carlo simulation of charge mediated magnetoelectricity in multiferroic bilayers

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz-Álvarez, H.H. [Universidad de Caldas, Manizales (Colombia); Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Bedoya-Hincapié, C.M. [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Universidad Santo Tomás, Bogotá (Colombia); Restrepo-Parra, E., E-mail: [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia)


    Simulations of a bilayer ferroelectric/ferromagnetic multiferroic system were carried out, based on the Monte Carlo method and Metropolis dynamics. A generic model was implemented with a Janssen-like Hamiltonian, taking into account magnetoelectric interactions due to charge accumulation at the interface. Two different magnetic exchange constants were considered for the accumulation and depletion states, and several screening lengths were also included. The simulations exhibit considerable magnetoelectric effects not only at low temperature, but also at temperatures near the transition point of the ferromagnetic layer. The results match experimental observations for this kind of structure and mechanism.
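    The Metropolis dynamics underlying such simulations can be illustrated with a minimal 2-D Ising sketch (a generic stand-in; the paper's Janssen-like Hamiltonian with magnetoelectric coupling and screening terms is not reproduced here):

```python
import math
import random

def metropolis_ising(L=16, T=1.0, J=1.0, sweeps=200, seed=0):
    # Single-spin-flip Metropolis on an L x L Ising lattice with periodic
    # boundaries; a flip is accepted with probability min(1, exp(-dE/T)).
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]            # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * J * spins[i][j] * nb        # energy cost of flipping
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
    return abs(sum(map(sum, spins))) / (L * L)     # |magnetization| per spin

m_cold = metropolis_ising(T=1.0)   # well below Tc: stays ordered
m_hot = metropolis_ising(T=5.0)    # well above Tc: disorders
```

    The bilayer model adds a ferroelectric layer and an interface coupling term to the energy difference, but the accept/reject loop is the same.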

  10. Monte Carlo simulation of AB-copolymers with saturating bonds

    DEFF Research Database (Denmark)

    Chertovich, A.C.; Ivanov, V.A.; Khokhlov, A.R.


    Structural transitions in a single AB-copolymer chain where saturating bonds can be formed between A- and B-units are studied by means of Monte Carlo computer simulations using the bond fluctuation model. Three transitions are found, coil-globule, coil-hairpin and globule-hairpin, depending...... on the nature of a particular AB-sequence: statistical random sequence, diblock sequence and 'random-complementary' sequence (one-half of such an AB-sequence is random with Bernoulli statistics while the other half is complementary to the first one). The properties of random-complementary sequences are closer...

  11. Productivity estimation in welding by the Monte Carlo Method [Estimativa da produtividade em soldagem pelo Método de Monte Carlo]

    Directory of Open Access Journals (Sweden)

    José Luiz Ferreira Martins


    Full Text Available The aim of this article is to analyze the feasibility of using the Monte Carlo method to estimate productivity in the welding of carbon-steel industrial piping based on small samples. The study was carried out by analyzing a reference sample containing productivity data for 160 joints welded by the SMAW (shielded metal arc welding) process at REDUC (the Duque de Caxias refinery), using the ControlTub 5.3 software. From these data, samples of 10, 15 and 20 elements were drawn at random and Monte Carlo simulations were run. Comparing the results of the 160-element sample with the data generated by simulation shows that good results can be obtained using the Monte Carlo method to estimate welding productivity. In the Brazilian construction industry, by contrast, mean productivity is normally used as the productivity indicator; it is based on historical data from other projects, collected and evaluated only after project completion, which is a limitation. This article presents a tool for real-time evaluation of execution, allowing estimates to be adjusted and productivity to be monitored during the project. Likewise, in bidding, budgeting and schedule estimation, this technique allows the adoption of estimates other than the commonly used mean productivity; as an alternative, three criteria are suggested: optimistic, mean and pessimistic productivity.
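    The small-sample Monte Carlo approach described above can be sketched as a bootstrap-style simulation. The values below are hypothetical, not the REDUC data:

```python
import random
import statistics

def mc_productivity(sample, n_sims=5000, seed=42):
    # Resample the small productivity sample (e.g. man-hours per welded
    # joint) with replacement, collect the distribution of the mean, and
    # report optimistic / mean / pessimistic estimates as percentiles.
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(sample, k=len(sample)))
        for _ in range(n_sims)
    )
    return {
        "optimistic": means[int(0.10 * n_sims)],   # 10th percentile
        "mean": statistics.fmean(means),
        "pessimistic": means[int(0.90 * n_sims)],  # 90th percentile
    }

sample = [3.2, 2.8, 4.1, 3.6, 2.9, 3.8, 3.3, 4.4, 3.0, 3.5]  # hypothetical
est = mc_productivity(sample)
```

    During execution, newly measured joints can simply be appended to `sample` and the three criteria recomputed, which is the real-time monitoring use the article proposes.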

  12. A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools (United States)


    AFRL-OSR-VA-TR-2015-0176: A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools; Marco Saraniti, Arizona State University (grant FA9550-14...). The platform has an architecture optimized for the Cellular Monte Carlo particle-based T-CAD simulation tools developed by our group. Such code is used for the

  13. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick


    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
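    The core weight-window operation on a single particle can be sketched as follows (an illustrative textbook version, not the Sandia implementation):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    # One weight-window check: split heavy particles, Russian-roulette
    # light ones, and pass through those already inside [w_low, w_high].
    # Returns the list of surviving copies (weights); the expected total
    # weight is preserved, which keeps the tallies unbiased.
    if weight > w_high:
        n = int(weight / w_high) + 1          # split into n lighter copies
        return [weight / n] * n
    if weight < w_low:
        survival = 0.5 * (w_low + w_high)     # roulette up to mid-window
        return [survival] if rng.random() < weight / survival else []
    return [weight]

rng = random.Random(0)
copies = apply_weight_window(5.0, w_low=0.5, w_high=2.0, rng=rng)  # splits
```

    The adjoint-flux calculations mentioned above supply the window bounds themselves: regions important to the tally get low windows (more particles, lower weight), unimportant regions get high ones.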

  14. MATLAB platform for Monte Carlo planning and dosimetry experimental evaluation; Plataforma Matlab para planificacion Monte Carlo y evaluacion dosimetrica experimental

    Energy Technology Data Exchange (ETDEWEB)

    Baeza, J. A.; Ureba, A.; Jimenez-Ortega, E.; Pereira-Barbeiro, A. R.; Leal, A.


    A new platform for full Monte Carlo planning and independent experimental evaluation is presented that can be integrated into clinical practice. The tool has proved its usefulness and efficiency and now forms part of the workflow of our research group, being used to generate results that are suitably revised and are being published. This software integrates numerous image-processing algorithms with planning optimization algorithms, allowing the MCTP planning process to be carried out from a single interface. In addition, it is a flexible and accurate tool for the evaluation of experimental dosimetric data for the quality control of actual treatments. (Author)

  15. Monte Carlo simulations and dosimetric studies of an irradiation facility (United States)

    Belchior, A.; Botelho, M. L.; Vaz, P.


    There is an increasing utilization of ionizing radiation for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of the dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool—MCNPX—in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The validation of the simulation results obtained was performed by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.

  16. A pure-sampling quantum Monte Carlo algorithm (United States)

    Ospadov, Egor; Rothstein, Stuart M.


    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  17. A pure-sampling quantum Monte Carlo algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ospadov, Egor; Rothstein, Stuart M., E-mail: [Departments of Chemistry and Physics, Brock University, St. Catharines, Ontario L2S 3A1 (Canada)


    The objective of pure-sampling quantum Monte Carlo is to calculate physical properties that are independent of the importance sampling function being employed in the calculation, save for the mismatch of its nodal hypersurface with that of the exact wave function. To achieve this objective, we report a pure-sampling algorithm that combines features of forward walking methods of pure-sampling and reptation quantum Monte Carlo (RQMC). The new algorithm accurately samples properties from the mixed and pure distributions simultaneously in runs performed at a single set of time-steps, over which extrapolation to zero time-step is performed. In a detailed comparison, we found RQMC to be less efficient. It requires different sets of time-steps to accurately determine the energy and other properties, such as the dipole moment. We implement our algorithm by systematically increasing an algorithmic parameter until the properties converge to statistically equivalent values. As a proof in principle, we calculated the fixed-node energy, static α polarizability, and other one-electron expectation values for the ground-states of LiH and water molecules. These quantities are free from importance sampling bias, population control bias, time-step bias, extrapolation-model bias, and the finite-field approximation. We found excellent agreement with the accepted values for the energy and a variety of other properties for those systems.

  18. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo


    Computational fluid dynamics (CFD) simulations of pore-scale transport processes in porous media have recently gained great popularity. However, the geometrical details of the pore structures are known for only a small number of samples, and detailed flow computations can be carried out only on a limited number of cases. The explicit introduction of randomness in the geometry and in other setup parameters can be crucial for the optimization of pore-scale investigations for random homogenization. Since there are no generic ways to parametrize the randomness in pore-scale structures, Monte Carlo techniques are the most accessible way to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost of estimating quantities of interest within a prescribed accuracy constraint. Random samples of pore geometries with a hierarchy of geometrical complexities and grid refinements are synthetically generated and used to propagate the uncertainties in the flow simulations and compute statistics of macro-scale effective parameters.
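    The telescoping idea behind MLMC can be shown on a toy functional. Here a quantized integrand stands in for grid refinement of the flow solver (an illustrative construction, not the paper's setup):

```python
import random

def payoff(level, u):
    # Level-l approximation of g(u) = u**2: u is quantized on a grid of
    # spacing 2**-level, mimicking a solver on successively finer grids.
    h = 2.0 ** -level
    return (round(u / h) * h) ** 2

def mlmc(n_per_level, seed=0):
    # Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]:
    # each correction uses *coupled* samples (the same u at both levels),
    # so its variance shrinks with level and few fine samples are needed.
    rng = random.Random(seed)
    est = 0.0
    for level, n in enumerate(n_per_level):
        s = 0.0
        for _ in range(n):
            u = rng.random()
            s += payoff(level, u) - (payoff(level - 1, u) if level > 0 else 0.0)
        est += s / n
    return est

val = mlmc([4000, 1000, 250, 60])   # many coarse samples, few fine ones
# E[u**2] = 1/3, approached with most of the work done at the cheap levels
```

    The sample counts per level are the knob MLMC tunes to meet a prescribed accuracy at minimal total cost.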

  19. Estimation of flow accumulation uncertainty by Monte Carlo stochastic simulations

    Directory of Open Access Journals (Sweden)

    Višnjevac Nenad


    Full Text Available Very often, the outputs provided by GIS functions and analyses are assumed to be exact results. However, they are subject to a certain uncertainty which may affect decisions based on those results. It is very complex, and almost impossible, to calculate that uncertainty using classical mathematical models because of the very complex algorithms used in GIS analyses. In this paper we discuss an alternative method, i.e. the use of stochastic Monte Carlo simulations to estimate the uncertainty of flow accumulation. The case study area included the broader area of the Municipality of Čačak, where Monte Carlo stochastic simulations were applied in order to create one hundred possible outputs of flow accumulation. A statistical analysis was performed on the basis of these versions, and the "most likely" version of flow accumulation, together with its confidence bounds (standard deviation), was created. Further, this paper describes the most important phases in the process of estimating uncertainty, such as variogram modelling and choosing the right number of simulations. Finally, it makes suggestions on how to effectively use and discuss the results and their practical significance.

  20. Evolutionary Sequential Monte Carlo Samplers for Change-Point Models

    Directory of Open Access Journals (Sweden)

    Arnaud Dufays


    Full Text Available Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the SMC scope encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov Chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but they additionally provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.
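    A stripped-down tempered SMC sampler, with the reweight / resample / rejuvenate cycle and a marginal-likelihood estimate, might look like the following one-dimensional toy (a generic illustration with a known answer, not the TNT algorithm itself):

```python
import math
import random

def tempered_smc(n=2000, betas=(0.0, 0.25, 0.5, 0.75, 1.0), seed=0):
    # Anneal particles from a N(0, 4) prior toward prior * L(x), where
    # L(x) = exp(-x**2 / 2). The product of mean incremental weights
    # estimates the marginal likelihood, here known exactly: Z = 1/sqrt(5).
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]
    log_z = 0.0
    for b0, b1 in zip(betas, betas[1:]):
        logw = [-(b1 - b0) * x * x / 2.0 for x in xs]     # L(x)**(b1-b0)
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        log_z += m + math.log(sum(w) / n)                 # log mean weight
        xs = rng.choices(xs, weights=w, k=n)              # resample
        def logp(x):                                      # tempered target
            return -x * x / 8.0 - b1 * x * x / 2.0
        for i in range(n):                                # MCMC rejuvenation
            prop = xs[i] + rng.gauss(0.0, 0.5)
            if math.log(1.0 - rng.random()) < logp(prop) - logp(xs[i]):
                xs[i] = prop
    return math.exp(log_z)

z_hat = tempered_smc()   # should land close to 1/sqrt(5) ~ 0.447
```

    The TNT contribution is in how the tempering ladder and the rejuvenation kernel are chosen automatically; the skeleton above is the part common to all tempered SMC samplers.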

  1. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.


    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  2. Infinite variance in fermion quantum Monte Carlo calculations. (United States)

    Shi, Hao; Zhang, Shiwei


    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
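    The hazard described here is easy to reproduce with a toy estimator (not a QMC calculation): plain sampling of the integral of x**-0.5 over (0, 1], which equals 2, has a finite mean but infinite variance, so the reported error bar is untrustworthy, while a change of variables removes the problem entirely:

```python
import math
import random

def naive_estimate(n, seed=3):
    # Plain MC for I = integral of x**-0.5 over (0,1] = 2. The integrand's
    # second moment diverges, so the usual error bar (sample SD / sqrt(n))
    # underestimates the true uncertainty, however large n is -- a toy
    # analogue of the infinite variance problem discussed above.
    rng = random.Random(seed)
    vals = [1.0 / math.sqrt(1.0 - rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in vals) / (n - 1))
    return mean, sd / math.sqrt(n)

def reweighted_estimate(n, seed=3):
    # Substituting x = u**2 (sampling density p(x) = 0.5 * x**-0.5) makes
    # the weight f(x)/p(x) = 2 constant: zero variance for this toy, much
    # as the "bridge link" regularizes the QMC path-integral estimator.
    return 2.0, 0.0

mean, err_bar = naive_estimate(100_000)
```

    The mean still converges, which is precisely why the problem is easy to miss: the answer looks right while the quoted statistical error is meaningless.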

  3. Spatial distribution of reflected gamma rays by Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jehouani, A. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco)], E-mail:; Merzouki, A. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco); Remote Sensing and Geomatics of the Environment Laboratory, Ottawa-Carleton Geoscience Centre, Marion Hall, 140 Louis Pasteur, Ottawa, ON, KIN 6N5 (Canada); Boutadghart, F.; Ghassoun, J. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco)


    In nuclear facilities, the reflection of gamma rays off walls and metals constitutes a source of radiation of unknown origin. These reflected gamma rays must be estimated and determined. This study concerns gamma rays reflected by metal slabs. We evaluated the spatial distribution of the reflected gamma-ray spectra by using the Monte Carlo method. An appropriate estimator for the double differential albedo is used to determine the energy spectra and the angular distribution of gamma rays reflected by slabs of iron and aluminium. We took into account the principal interactions of gamma rays with matter: the photoelectric effect, coherent (Rayleigh) scattering, incoherent (Compton) scattering and pair creation. The Klein-Nishina differential cross section was used to select the direction and energy of scattered photons after each Compton scattering. The obtained spectra show peaks at 0.511 MeV for higher source energies. The results are in good agreement with those obtained by the TRIPOLI code [J.C. Nimal et al., TRIPOLI02: Programme de Monte Carlo Polycinetique a Trois Dimensions, CEA Rapport, Commissariat a l'Energie Atomique. ].

  4. Uncertainty of NURBS surface fit by Monte Carlo simulations (United States)

    Koch, Karl-Rudolf


    A free-form surface expressed by NURBS (nonuniform rational B-splines) is fitted to the measured coordinates of points by the lofting method. The unknown control points of the free-form surface are therefore not estimated simultaneously but determined by cross-sectional curve fits. This uses much less computer time than the simultaneous estimation and gives identical results. The free-form surface should be determined with an uncertainty that does not considerably surpass the uncertainty of the measurements. This is investigated here for the example of a free-form surface for a pothole in a road, determined by the measurements of a laser scanner. The uncertainties are expressed by standard deviations and confidence intervals. They are computed using Monte Carlo simulations for the positioning of a point by the measured coordinates and for the fitting of a free-form surface. The resulting uncertainties agree. In addition, the uncertainties of quantities characterizing the shape and the slope of the surface are determined by Monte Carlo simulations. It turns out that the uncertainties resulting from the measurements and from the free-form surface fit are approximately identical.
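    The same Monte Carlo recipe, applied to a much simpler fit than NURBS lofting, looks like this (a straight-line fit to hypothetical profile points, with an assumed measurement noise level):

```python
import random
import statistics

def fit_slope(xs, ys):
    # Least-squares slope: a one-parameter stand-in for the curve fits
    # of the lofting method.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def mc_fit_uncertainty(xs, ys, sigma, n_sims=2000, seed=0):
    # Monte Carlo propagation: perturb each measured coordinate with its
    # assumed noise, refit, and take the spread of the fitted quantity
    # as its standard deviation.
    rng = random.Random(seed)
    slopes = [fit_slope(xs, [y + rng.gauss(0.0, sigma) for y in ys])
              for _ in range(n_sims)]
    return statistics.fmean(slopes), statistics.stdev(slopes)

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.2, 5.0, 6.1]   # hypothetical scanner profile
mean_slope, slope_sd = mc_fit_uncertainty(xs, ys, sigma=0.05)
```

    For this linear fit the Monte Carlo standard deviation can be checked against the analytic value sigma/sqrt(Sxx); for NURBS surfaces no such closed form is at hand, which is exactly why the simulations are used.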

  5. Tally efficiency analysis for Monte Carlo Wielandt method

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Hyung Jin, E-mail: [Korea Atomic Energy Research Institute, 1045 Daedeokdaero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Kim, Chang Hyo [Seoul National University, 599 Gwanakro, Gwanak-gu, Seoul 151-742 (Korea, Republic of)


    The Monte Carlo Wielandt method has the potential to eliminate most of the variance bias because it can reduce the dominance ratio by properly controlling the estimated eigenvalue (k{sub e}). However, it requires increasingly more computation time to simulate additional fission neutrons as the estimated eigenvalue comes closer to the effective multiplication factor (k{sub eff}). Therefore, its advantage over the conventional Monte Carlo (MC) power method in calculation efficiency may not always be ensured. Its tally estimation efficiency needs to be assessed in terms of a figure of merit based on the real variance as a function of k{sub e}. In this paper, the real variance is estimated by using the inter-cycle correlation of the fission source distribution for the MC Wielandt calculations. Then, the tally efficiency of the MC Wielandt method is analyzed for a 2 x 2 fission matrix system and for weakly coupled fissile array problems with different dominance ratios (DRs). It is shown that the tally efficiency of the MC Wielandt method depends strongly on k{sub e}; that there is a k{sub e} value resulting in the best efficiency for a problem with a large DR; and that the efficiency curve, as a function of L (the average number of fission neutrons per history), exhibits a long tail past the point of best efficiency.

  6. The ATLAS Fast Monte Carlo Production Chain Project (United States)

    Jansky, Roland


    During the last years ATLAS has successfully deployed a new integrated simulation framework (ISF) which allows a flexible mixture of full and fast detector simulation techniques within the processing of one event. The speed-up in detector simulation thereby achievable, of up to a factor of 100, makes subsequent digitization and reconstruction the dominant contributions to the Monte Carlo (MC) production CPU cost. The slowest components of both digitization and reconstruction lie inside the Inner Detector: in digitization because of the complex signal modeling needed to emulate the detector readout, and in reconstruction because of the combinatorial nature of the problem to be solved. Alternative fast approaches have been developed for these components: for the silicon-based detectors, a simpler geometrical clustering approach has been deployed, replacing the charge drift emulation in the standard digitization modules while achieving a very high accuracy in describing the standard output. For the Inner Detector track reconstruction, trajectory building based on Monte Carlo generator information has been deployed with the aim of bypassing the CPU-intensive pattern recognition. Together with the ISF, all components have been integrated into a new fast MC production chain, aiming to produce fast MC simulated data in sufficient agreement with fully simulated and reconstructed data at a processing time of seconds per event, compared to several minutes for full simulation.

  7. Quantum Monte Carlo with very large multideterminant wavefunctions

    CERN Document Server

    Scemama, Anthony; Giner, Emmanuel; Caffarel, Michel


    An algorithm to compute efficiently the first two derivatives of (very) large multideterminant wavefunctions for quantum Monte Carlo calculations is presented. The trial wavefunction being written as a sum of determinants, the computational time needed at each Monte Carlo step is expected to scale linearly with the number of determinants. In this work, we express the multideterminant expansion as a bilinear form in terms of the spin-specific determinants and show that the cost of the leading O$(N_{\rm det})$ contribution ($N_{\rm det}$, total number of determinants) can be greatly reduced. In practical applications, it is so reduced that it has been found to have a marginal impact on the total computational cost, at least up to one million determinants. The practical scaling is thus proportional to the number of spin-specific determinants, of order $O(\sqrt{N_{\rm det}})$. The calculation of determinants is performed by using the Sherman-Morrison formula. Introducing a suitably chosen encoding and ordering of...
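    The Sherman-Morrison formula mentioned here is the standard rank-1 inverse update that lets QMC codes refresh a determinant's inverse in O(N^2) per single-electron move instead of recomputing it in O(N^3). A plain-Python sketch (dense lists, no optimization):

```python
def sherman_morrison_update(a_inv, u, v):
    # (A + u v^T)^-1 = A^-1 - (A^-1 u)(v^T A^-1) / (1 + v^T A^-1 u)
    n = len(a_inv)
    ainv_u = [sum(a_inv[i][k] * u[k] for k in range(n)) for i in range(n)]
    vt_ainv = [sum(v[k] * a_inv[k][j] for k in range(n)) for j in range(n)]
    denom = 1.0 + sum(v[k] * ainv_u[k] for k in range(n))
    return [[a_inv[i][j] - ainv_u[i] * vt_ainv[j] / denom for j in range(n)]
            for i in range(n)]

a_inv = [[0.5, 0.0], [0.0, 0.25]]        # inverse of A = diag(2, 4)
new_inv = sherman_morrison_update(a_inv, [1.0, 0.0], [0.0, 1.0])
# new_inv is the inverse of A + u v^T = [[2, 1], [0, 4]]
```

    In a Slater-determinant context, moving one electron changes one row of the matrix, which is exactly such a rank-1 perturbation; the denominator is the ratio of new to old determinant, used directly in the Metropolis acceptance test.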

  8. Monte Carlo modeling of human tooth optical coherence tomography imaging (United States)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen


    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth.
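    At the heart of such photon-transport models is the sampling of free path lengths between scattering events; a minimal check of that step (illustrative optical coefficients, not the tooth model's values):

```python
import math
import random

def mean_free_path_mc(mu_s, mu_a, n=50_000, seed=0):
    # Photon free path in a turbid medium: s = -ln(U) / mu_t with
    # mu_t = mu_a + mu_s, U uniform on (0, 1]. The sample mean should
    # recover the mean free path 1/mu_t. This is the core sampling step
    # of MC photon-transport models of tissue (or tooth enamel).
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    steps = [-math.log(1.0 - rng.random()) / mu_t for _ in range(n)]
    return sum(steps) / n

mfp = mean_free_path_mc(mu_s=10.0, mu_a=0.5)  # mm^-1, illustrative values
```

    Increasing `mu_s`, as the abstract does to mimic mineral loss, shortens the free path, and the OCT signal from a given depth falls accordingly, which is why the critical imaging depth decreases.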

  9. Proton therapy Monte Carlo SRNA-VOX code

    Directory of Open Access Journals (Sweden)

    Ilić Radovan D.


    Full Text Available The most powerful feature of the Monte Carlo method is the possibility of simulating all individual particle interactions in three dimensions and performing numerical experiments with a preset error. These facts were the motivation behind the development of a general-purpose Monte Carlo SRNA program for proton transport simulation in technical systems described by standard geometrical forms (plane, sphere, cone, cylinder, cube). Some of the possible applications of the SRNA program are: (a) a general code for proton transport modeling, (b) design of accelerator-driven systems, (c) simulation of proton scattering and degrading shapes and composition, (d) research on proton detectors, and (e) radiation protection at accelerator installations. This wide range of possible applications of the program demands the development of various versions of SRNA-VOX codes for proton transport modeling in voxelized geometries and has, finally, resulted in the ISTAR package for the calculation of deposited-energy distribution in patients on the basis of CT data in radiotherapy. All of the said codes are capable of using 3-D proton sources with an arbitrary energy spectrum in an interval of 100 keV to 250 MeV.

  10. Flow in Random Microstructures: a Multilevel Monte Carlo Approach

    KAUST Repository

    Icardi, Matteo


    In this work we are interested in the fast estimation of effective parameters of random heterogeneous materials using multilevel Monte Carlo (MLMC). MLMC is an efficient and flexible solution for the propagation of uncertainties in complex models, where an explicit parametrisation of the input randomness is not available or too expensive. We propose a general-purpose algorithm and computational code for the solution of partial differential equations (PDEs) on random heterogeneous materials. We build on the key idea of MLMC, based on different discretization levels, extending it to a more general context that exploits a hierarchy of physical resolution scales, solvers, models and other numerical/geometrical discretisation parameters. Modifications of the classical MLMC estimators are proposed to further reduce variance in cases where analytical convergence rates and asymptotic regimes are not available. Spheres, ellipsoids and general convex-shaped grains are placed randomly in the domain with different placing/packing algorithms, and the effective properties of the heterogeneous medium are computed. These are, for example, effective diffusivities, conductivities, and reaction rates. The implementation of the Monte Carlo estimators, the statistical samples and each single solver is efficiently parallelized. The method is tested and applied to pore-scale simulations of random sphere packings.

  11. Monte Carlo model for electron degradation in methane

    CERN Document Server

    Bhardwaj, Anil


    We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled, and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of the various inelastic processes. The numerical yield spectra obtained from the Monte Carlo simulations are represented analytically, thus generating Analytical Yield Spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of the various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. The efficiency calculation showed that ionization is the dominant process at energies >50 eV, for which more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...

  12. Brachytherapy structural shielding calculations using Monte Carlo generated, monoenergetic data

    Energy Technology Data Exchange (ETDEWEB)

    Zourari, K.; Peppa, V.; Papagiannis, P., E-mail: [Medical Physics Laboratory, Medical School, University of Athens, 75 Mikras Asias, 11527 Athens (Greece); Ballester, Facundo [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Siebert, Frank-André [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel 24105 (Germany)


    Purpose: To provide a method for calculating the transmission of any broad photon beam with a known energy spectrum in the range of 20–1090 keV, through concrete and lead, based on the superposition of corresponding monoenergetic data obtained from Monte Carlo simulation. Methods: MCNP5 was used to calculate broad photon beam transmission data through varying thickness of lead and concrete, for monoenergetic point sources of energy in the range pertinent to brachytherapy (20–1090 keV, in 10 keV intervals). The three parameter empirical model introduced by Archer et al. [“Diagnostic x-ray shielding design based on an empirical model of photon attenuation,” Health Phys. 44, 507–517 (1983)] was used to describe the transmission curve for each of the 216 energy-material combinations. These three parameters, and hence the transmission curve, for any polyenergetic spectrum can then be obtained by superposition along the lines of Kharrati et al. [“Monte Carlo simulation of x-ray buildup factors of lead and its applications in shielding of diagnostic x-ray facilities,” Med. Phys. 34, 1398–1404 (2007)]. A simple program, incorporating a graphical user interface, was developed to facilitate the superposition of monoenergetic data, the graphical and tabular display of broad photon beam transmission curves, and the calculation of material thickness required for a given transmission from these curves. Results: Polyenergetic broad photon beam transmission curves of this work, calculated from the superposition of monoenergetic data, are compared to corresponding results in the literature. A good agreement is observed with results in the literature obtained from Monte Carlo simulations for the photon spectra emitted from bare point sources of various radionuclides. Differences are observed with corresponding results in the literature for x-ray spectra at various tube potentials, mainly due to the different broad beam conditions or x-ray spectra assumed. Conclusions
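    The three-parameter Archer model referenced above expresses broad-beam transmission as B(x) = [(1 + β/α)·exp(αγx) − β/α]^(−1/γ), and it inverts in closed form to give the barrier thickness for a target transmission. A small sketch (in practice α, β, γ come from fits to the Monte Carlo data; the values used in any example below are purely illustrative):

```python
import math

def archer_transmission(x, alpha, beta, gamma):
    """Broad-beam transmission through thickness x under the Archer model:
    B(x) = [(1 + b/a) * exp(a*g*x) - b/a] ** (-1/g)."""
    r = beta / alpha
    return ((1.0 + r) * math.exp(alpha * gamma * x) - r) ** (-1.0 / gamma)

def archer_thickness(B, alpha, beta, gamma):
    """Closed-form inverse: barrier thickness that yields transmission B."""
    r = beta / alpha
    return math.log((B ** (-gamma) + r) / (1.0 + r)) / (alpha * gamma)
```

    Superposition over a polyenergetic spectrum then amounts to weighting the monoenergetic transmission curves by the spectrum before refitting the three parameters.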

  13. The macro response Monte Carlo method for electron transport (United States)

    Svatos, Michelle Marie


    This thesis proves the feasibility of basing depth dose calculations for electron radiotherapy on first-principles single scatter physics, in an amount of time that is comparable to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that have potential to be much faster than conventional electron transport methods such as condensed history. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or 'kugel'. A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV-8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry, which in this case is a CT (computed tomography) scan of a patient or phantom. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code against EGS4 and MCNP for depth dose in simple phantoms having density inhomogeneities. The energy deposition algorithms for spreading dose across 5-10 zones per kugel were tested. Most resulting depth dose calculations were within 2-3% of well-benchmarked codes, with one excursion to 4%. This thesis shows that the concept of using single scatter-based physics in clinical radiation

  14. Suppression of the initial transient in Monte Carlo criticality simulations; Suppression du regime transitoire initial des simulations Monte-Carlo de criticite

    Energy Technology Data Exchange (ETDEWEB)

    Richet, Y


    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (making a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimation, defined as the mean of the k-effective computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests representative of criticality Monte Carlo calculations, and then on real criticality calculations. Eventually, the best methodologies observed in these tests are selected, and they improve industrial Monte Carlo criticality calculations. (author)

  15. Monte Carlo applied to calculation of shields of facilities used in industrial radiography; Monte Carlo aplicado al calculo de blindajes de instalaciones usadas en radiografia industrial

    Energy Technology Data Exchange (ETDEWEB)

    Martinez Ovalle, S. A.; Olaya Davila, H.; Reyes Caballero, F.


    The main objective of this work is to verify through Monte Carlo, the dimensions most appropriate in the shielding of an installation designed for Industrial radiography with a Co-60 Irradiator. (Author)

  16. Evaluation of a special pencil ionization chamber by the Monte Carlo method; Avaliacao de uma camara de ionizacao tipo lapis especial pelo metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Mendonca, Dalila; Neves, Lucio P.; Perini, Ana P., E-mail: [Universidade Federal de Uberlandia (INFIS/UFU), Uberlandia, MG (Brazil). Instituto de Fisica; Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleres (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)


    A special pencil type ionization chamber, developed at the Instituto de Pesquisas Energeticas e Nucleares, was characterized by means of Monte Carlo simulation to determine the influence of its components on its response. The main differences between this ionization chamber and commercial ionization chambers are related to its configuration and constituent materials. The simulations were made employing the MCNP-4C Monte Carlo code. The highest influence was obtained for the body of PMMA: 7.0%. (author)

  17. Diffraction enhanced breast imaging through Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Cunha, D.M. [Departamento de Fisica e Matematica, FFCLRP, 14040-901 Universidade de Sao Paulo, Ribeirao Preto, SP (Brazil); Instituto de Fisica, Universidade Federal de Uberlandia, 38400-902, Uberlandia, MG (Brazil); Tomal, A. [Departamento de Fisica e Matematica, FFCLRP, 14040-901 Universidade de Sao Paulo, Ribeirao Preto, SP (Brazil); Poletti, M.E., E-mail: [Departamento de Fisica e Matematica, FFCLRP, 14040-901 Universidade de Sao Paulo, Ribeirao Preto, SP (Brazil)


    In this work, the potential use of diffraction effects from elastic scattering for breast imaging was studied through Monte Carlo (MC) simulations. The geometrical model of the compressed breast consisted of a semi-infinite layer, composed of a mixture of adipose and glandular tissue, with five spherical objects within it, simulating different tissue compositions. A pencil beam scanned the breast surface, impinging normally on it. Two receptors were placed under the breast: the first one detected primary photons, while the other detected the scattered photons. Two images of the breast were then obtained, a primary and a scatter image. Results showed that the scatter image provided contrast values greater than those of the primary image, with the possibility to enhance the contribution of a specific breast tissue to image formation. Nevertheless, scatter images also show considerably higher noise. The results obtained indicate that elastic scattering has great potential to aid in the enhancement of the mammographic image.

  18. Determinant Diagrammatic Monte Carlo Algorithm in the Thermodynamic Limit (United States)

    Rossi, Riccardo


    We present a simple trick that allows us to consider the sum of all connected Feynman diagrams at fixed position of interaction vertices for general fermionic models, such that the thermodynamic limit can be taken analytically. With our approach one can achieve superior performance compared to the conventional diagrammatic Monte Carlo algorithm, while rendering the algorithmic part dramatically simpler. By considering the sum of all connected diagrams at once, we allow for massive cancellations between different diagrams, greatly reducing the sign problem. In the end, the computational effort increases only exponentially with the order of the expansion, which should be contrasted with the factorial growth of the standard diagrammatic technique. We illustrate the efficiency of the technique for the two-dimensional Fermi-Hubbard model.

  19. Multiparameter estimation along quantum trajectories with sequential Monte Carlo methods (United States)

    Ralph, Jason F.; Maskell, Simon; Jacobs, Kurt


    This paper proposes an efficient method for the simultaneous estimation of the state of a quantum system and the classical parameters that govern its evolution. This hybrid approach benefits from efficient numerical methods for the integration of stochastic master equations for the quantum system, and efficient parameter estimation methods from classical signal processing. The classical techniques use sequential Monte Carlo (SMC) methods, which aim to optimize the selection of points within the parameter space, conditioned by the measurement data obtained. We illustrate these methods using a specific example, an SMC sampler applied to a nonlinear system, the Duffing oscillator, where the evolution of the quantum state of the oscillator and three Hamiltonian parameters are estimated simultaneously.

  20. Multi-Determinant Wave-functions in Quantum Monte Carlo

    CERN Document Server

    Morales, M A; Clark, B K; Kim, J; Scuseria, G; 10.1021/ct3003404


    Quantum Monte Carlo (QMC) methods have received considerable attention over the last decades due to their great promise for providing a direct solution to the many-body Schrodinger equation in electronic systems. Thanks to their low scaling with number of particles, QMC methods present a compelling competitive alternative for the accurate study of large molecular systems and solid state calculations. In spite of such promise, the method has not permeated the quantum chemistry community broadly, mainly because of the fixed-node error, which can be large and whose control is difficult. In this Perspective, we present a systematic application of large scale multi-determinant expansions in QMC, and report on its impressive performance with first row dimers and the 55 molecules of the G1 test set. We demonstrate the potential of this strategy for systematically reducing the fixed-node error in the wave function and for achieving chemical accuracy in energy predictions. When compared to traditional quantum chemistr...

  1. Monte Carlo shell model studies with massively parallel supercomputers (United States)

    Shimizu, Noritaka; Abe, Takashi; Honma, Michio; Otsuka, Takaharu; Togashi, Tomoaki; Tsunoda, Yusuke; Utsuno, Yutaka; Yoshida, Tooru


    We present an overview of the advanced Monte Carlo shell model (MCSM), including its recent applications to no-core shell-model calculations and to large-scale shell-model calculations (LSSM) in the usual sense. For the ab initio no-core MCSM we show recent methodological developments, which include the evaluation of energy eigenvalues in an infinitely large model space by an extrapolation method. As an example of the application of the no-core MCSM, the cluster structure of Be isotopes is discussed. Regarding LSSM applications, the triple shape coexistence in 68Ni and 70Ni and the shape transition of Zr isotopes are clarified with the visualization of the intrinsic deformation of the MCSM wave function. General aspects of the code development of the MCSM on massively parallel computers are also briefly described.

  2. Monte Carlo simulation to analyze the performance of CPV modules (United States)

    Herrero, Rebeca; Antón, Ignacio; Sala, Gabriel; De Nardis, Davide; Araki, Kenji; Yamaguchi, Masafumi


    A model that generates current-voltage curves to evaluate the performance of high concentrator photovoltaics (HCPV) modules has been applied together with a Monte Carlo approach to obtain a distribution of modules with a given set of characteristics (e.g., receivers' electrical properties and misalignments within elementary units in modules) related to a manufacturing scenario. In this paper, the performance of CPV systems (tracker and inverter) that contain the set of simulated modules is evaluated depending on different system characteristics: inverter configuration, sorting of modules and bending of the tracker frame. Thus, the study of the HCPV technology regarding its angular constraints is fully covered by analyzing all the possible elements affecting the generated electrical power.

  3. Magnetic properties of checkerboard lattice: a Monte Carlo study (United States)

    Jabar, A.; Masrour, R.; Hamedoun, M.; Benyoussef, A.


    The magnetic properties of the ferrimagnetic mixed-spin Ising model on the checkerboard lattice are studied using Monte Carlo simulations. The variation of the total magnetization and magnetic susceptibility with the crystal field has been established. We obtain a transition from an ordered to a disordered phase at a critical value of the physical variables. The reduced transition temperature is obtained for different exchange interactions. The magnetic hysteresis cycles, including multiple hysteresis cycles on the checkerboard lattice, have been established. The ferrimagnetic mixed-spin Ising model on the checkerboard lattice is very interesting from the experimental point of view. Mixed-spin systems have many technological applications, such as in opto-electronics, memory devices, nanomedicine and nano-biological systems. The obtained results show that the crystal field induces long-range spin-spin correlations even below the reduced transition temperature.
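    As an indication of the machinery behind such studies, a Metropolis Monte Carlo sweep for the much simpler spin-1/2 Ising model on a periodic square lattice (a stand-in for the mixed-spin checkerboard model, which additionally has two sublattices, distinct spin magnitudes and a crystal field) can be written as:

```python
import math
import random

def metropolis_sweep(spins, L, J, T, rng):
    """One Metropolis sweep (L*L attempted flips) of a spin-1/2 Ising
    model on an L x L periodic square lattice with coupling J at
    temperature T (units with k_B = 1)."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * J * spins[i][j] * nn  # energy cost of flipping (i, j)
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1

def magnetization(spins):
    """Absolute magnetization per site."""
    return abs(sum(map(sum, spins))) / (len(spins) ** 2)
```

    Magnetization, susceptibility and hysteresis curves like those reported above are then obtained by averaging such observables over many sweeps while ramping field or temperature.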

  4. Improved version of the PHOBOS Glauber Monte Carlo (United States)

    Loizides, C.; Nagle, J.; Steinberg, P.


    "Glauber" models are used to calculate geometric quantities in the initial state of heavy ion collisions, such as impact parameter, number of participating nucleons and initial eccentricity. Experimental heavy-ion collaborations, in particular at RHIC and LHC, use Glauber Model calculations for various geometric observables for determination of the collision centrality. In this document, we describe the assumptions inherent to the approach, and provide an updated implementation (v2) of the Monte Carlo based Glauber Model calculation, which originally was used by the PHOBOS collaboration. The main improvement w.r.t. the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and Glauber-Gribov fluctuations of the proton in p+A collisions. A users' guide (updated to reflect changes in v2) is provided for running various calculations.

  5. The ATLAS Fast Monte Carlo Production Chain Project

    CERN Document Server

    Jansky, Roland Wolfgang; The ATLAS collaboration


    During the last years ATLAS has successfully deployed a new integrated simulation framework (ISF) which allows a flexible mixture of full and fast detector simulation techniques within the processing of one event. The resulting speed-up in detector simulation, of up to a factor of 100, makes subsequent digitization and reconstruction the dominant contributions to the Monte Carlo (MC) production CPU cost. The slowest components of both digitization and reconstruction lie inside the Inner Detector: in digitization, because of the complex signal modeling needed to emulate the detector readout, and in reconstruction, because of the combinatorial nature of the tracking problem. Alternative fast approaches have been developed for these components: for the silicon based detectors a simpler geometrical clustering approach has been deployed, replacing the charge drift emulation in the standard digitization modules, which achieves a very high accuracy in describing the standard output. For the Inner Detector track...


    Directory of Open Access Journals (Sweden)

    Toth Reka


    In this paper, we present a corporate valuation model. The model combines several valuation methods in order to obtain more accurate results. To determine the corporate asset value we use a Gordon-like two-stage asset valuation model based on the calculation of the free cash flow to the firm. The free cash flow to the firm is then used to determine the corporate market value, which is calculated with the Black-Scholes option pricing model within a two-dimensional Monte Carlo simulation. The combined model and the use of the two-dimensional simulation provide a better opportunity for corporate value estimation.

  7. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik


    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N. Collier, A.-L. Haji-Ali, F. Nobile, and R. Tempone.

  8. Magnetic properties of checkerboard lattice: a Monte Carlo study (United States)

    Jabar, A.; Masrour, R.; Hamedoun, M.; Benyoussef, A.


    The magnetic properties of the ferrimagnetic mixed-spin Ising model on the checkerboard lattice are studied using Monte Carlo simulations. The variation of the total magnetization and magnetic susceptibility with the crystal field has been established. We obtain a transition from an ordered to a disordered phase at a critical value of the physical variables. The reduced transition temperature is obtained for different exchange interactions. The magnetic hysteresis cycles, including multiple hysteresis cycles on the checkerboard lattice, have been established. The ferrimagnetic mixed-spin Ising model on the checkerboard lattice is very interesting from the experimental point of view. Mixed-spin systems have many technological applications, such as in opto-electronics, memory devices, nanomedicine and nano-biological systems. The obtained results show that the crystal field induces long-range spin-spin correlations even below the reduced transition temperature.

  9. A study of Monte Carlo radiative transfer through fractal clouds

    Energy Technology Data Exchange (ETDEWEB)

    Gautier, C.; Lavallec, D.; O'Hirok, W.; Ricchiazzi, P. [Univ. of California, Santa Barbara, CA (United States)] [and others]


    An understanding of radiation transport (RT) through clouds is fundamental to studies of the earth's radiation budget and climate dynamics. The transmission through horizontally homogeneous clouds has been studied thoroughly using accurate, discrete ordinates radiative transfer models. However, the applicability of these results to general problems of the global radiation budget is limited by the plane parallel assumption and the fact that real cloud fields show variability, both vertically and horizontally, on all size scales. To understand how radiation interacts with realistic clouds, we have used a Monte Carlo radiative transfer model to compute the details of the photon-cloud interaction on synthetic cloud fields. Synthetic cloud fields, generated by a cascade model, reproduce the scaling behavior, as well as the cloud variability, observed and estimated from cloud satellite data.
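    The elementary step of such a Monte Carlo radiative transfer code is the photon random walk: sample an exponential free path, then scatter or absorb. A toy plane-parallel version (a homogeneous slab rather than a fractal cloud field; all names are our own) that recovers Beer's law exp(-tau) in the purely absorbing limit:

```python
import math
import random

def slab_transmission(tau, n_photons=20000, albedo=0.0, seed=0):
    """Toy Monte Carlo photon transport through a plane-parallel slab of
    optical depth tau: exponential free paths, isotropic scattering with
    probability `albedo`, absorption otherwise.  Counts the fraction of
    photons leaving the far side; for albedo=0 this approaches exp(-tau)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0  # optical depth travelled, direction cosine
        while True:
            z += -math.log(1.0 - rng.random()) * mu  # exponential step
            if z >= tau:
                transmitted += 1
                break
            if z < 0.0:
                break  # escaped back through the illuminated face
            if rng.random() < albedo:
                mu = rng.uniform(-1.0, 1.0)  # isotropic re-direction
            else:
                break  # absorbed
    return transmitted / n_photons
```

    A cloud-field code replaces the single optical depth by the spatially varying extinction of the synthetic cascade field and tallies fluxes on a grid.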

  10. Monte Carlo simulations of nanoscale focused neon ion beam sputtering. (United States)

    Timilsina, Rajendra; Rack, Philip D


    A Monte Carlo simulation is developed to model the physical sputtering of aluminum and tungsten emulating nanoscale focused helium and neon ion beam etching from the gas field ion microscope. Neon beams with different beam energies (0.5-30 keV) and a constant beam diameter (Gaussian with full-width-at-half-maximum of 1 nm) were simulated to elucidate the nanostructure evolution during the physical sputtering of nanoscale high aspect ratio features. The aspect ratio and sputter yield vary with the ion species and beam energy for a constant beam diameter and are related to the distribution of the nuclear energy loss. Neon ions have a larger sputter yield than the helium ions due to their larger mass and consequently larger nuclear energy loss relative to helium. Quantitative information such as the sputtering yields, the energy-dependent aspect ratios and resolution-limiting effects are discussed.

  11. Low-energy sputterings with the Monte Carlo Program ACAT (United States)

    Yamamura, Y.; Mizuno, Y.


    The Monte Carlo program ACAT was developed to determine the total sputtering yields and angular distributions of sputtered atoms in physical processes. From computed results for the incident-energy dependence of sputtering for various ion-target combinations, the mass-ratio dependence and the bombarding-angle dependence of the sputtering thresholds were obtained with the help of the Matsunami empirical formula for sputtering yields. The mass-ratio dependence of the sputtering thresholds is in good agreement with recent theoretical results. The threshold energy of light-ion sputtering is a slightly increasing function of the angle of incidence, while that of heavy-ion sputtering has a minimum value near theta = 60 deg. The angular distributions of sputtered atoms are also calculated for heavy ions, medium ions, and light ions, and reasonable agreement between the calculated angular distributions and experimental results is obtained.

  12. Monte Carlo simulations of solid-state photoswitches

    Energy Technology Data Exchange (ETDEWEB)

    Rambo, P.W.; Denavit, J.


    Large increases in conductivity induced in GaAs and other semiconductors by photoionization allow fast switching by laser light, with applications to pulse-power technology and microwave generation. Experiments have shown that under high-field conditions (10 to 50 kV/cm), conductivity may occur in the linear mode, where it is proportional to the absorbed light; in the "lock-on" mode, where it persists after termination of the laser pulse; or in the avalanche mode, where multiple carriers are generated. We have assembled a self-consistent Monte Carlo code to study these phenomena, and in particular to model hot electron effects, which are expected to be important at high field strengths. This project has also brought our expertise acquired in advanced particle simulation of plasmas to bear on the modeling of semiconductor devices, which has broad industrial applications.

  13. Optical monitoring of rheumatoid arthritis: Monte Carlo generated reconstruction kernels (United States)

    Minet, O.; Beuthan, J.; Hielscher, A. H.; Zabarylo, U.


    Optical imaging in biomedicine is governed by the light absorption and scattering interaction with microscopic and macroscopic constituents of the medium. Therefore, the light scattering characteristics of human tissue correlate with the stage of some diseases. In the near infrared range the scattering event, with a coefficient approximately two orders of magnitude greater than absorption, plays the dominant role. When measuring the optical parameters, variations were discovered that correlate with the rheumatoid arthritis of a small joint. The potential of an experimental setup for transillumination of the finger joint with a laser diode and the pattern of the stray light detection are demonstrated. The scattering caused by skin contains no useful information, and it can be removed by a deconvolution technique to enhance the diagnostic value of this non-invasive optical method. Monte Carlo simulations provide both the construction of the corresponding point spread function and the theoretical verification of the stray light picture in rather complex geometry.

  14. Improving multivariate Horner schemes with Monte Carlo tree search (United States)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.


    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
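    For the univariate case described above, Horner's rule takes a few lines; the multivariate generalization repeatedly factors out one variable, and it is the choice of that variable ordering that the authors optimize with Monte Carlo tree search. A sketch of the univariate scheme:

```python
def horner(coeffs, x):
    """Evaluate a_n*x**n + ... + a_1*x + a_0, coefficients given
    highest-degree first, using n multiplications and n additions
    instead of the naive quadratic multiplication count."""
    acc = 0
    for a in coeffs:
        acc = acc * x + a
    return acc
```

    For a multivariate polynomial one picks a variable v, collects terms by powers of v, and applies the same recurrence with coefficients that are themselves polynomials evaluated recursively; the cost of the resulting scheme depends strongly on the order in which variables are picked, which is the search space explored by the tree search.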

  15. Monte Carlo Registration and Its Application with Autonomous Robots

    Directory of Open Access Journals (Sweden)

    Christian Rink


    This work focuses on Monte Carlo registration methods and their application with autonomous robots. A streaming and an offline variant are developed, both based on a particle filter. The streaming registration is performed in real-time during data acquisition with a laser striper allowing for on-the-fly pose estimation. Thus, the acquired data can be instantly utilized, for example, for object modeling or robot manipulation, and the laser scan can be aborted after convergence. Curvature features are calculated online and the estimated poses are optimized in the particle weighting step. For sampling the pose particles, uniform, normal, and Bingham distributions are compared. The methods are evaluated with a high-precision laser striper attached to an industrial robot and with a noisy Time-of-Flight camera attached to service robots. The shown applications range from robot assisted teleoperation, over autonomous object modeling, to mobile robot localization.

  16. Markov Chain Monte Carlo Bayesian Learning for Neural Networks (United States)

    Goodrich, Michael S.


    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffreys prior combined with a Metropolis Markov Chain Monte Carlo method.
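    The sampler at the core of such an approach is easy to state in one dimension: a random-walk Metropolis chain needs only the log of the unnormalized posterior density. A minimal sketch (a single scalar parameter rather than a full weight vector; the names are our own):

```python
import math
import random

def metropolis(log_post, theta0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: propose theta' = theta + N(0, step),
    accept with probability min(1, exp(log_post(theta') - log_post(theta))).
    Returns the chain, whose histogram approximates the target density."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        lp_prop = log_post(proposal)
        # accept if log(u) < log acceptance ratio, u ~ Uniform(0, 1]
        if math.log(1.0 - rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain.append(theta)
    return chain
```

    For network weights the same loop runs over the full weight vector, with log_post combining the data likelihood and the prior; predictions are then averaged over the sampled weights to carry the residual uncertainty through.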

  17. Introduction to quasi-Monte Carlo integration and applications

    CERN Document Server

    Leobacher, Gunther


    This textbook introduces readers to the basic concepts of quasi-Monte Carlo methods for numerical integration and to the theory behind them. The comprehensive treatment of the subject with detailed explanations comprises, for example, lattice rules, digital nets and sequences and discrepancy theory. It also presents methods currently used in research and discusses practical applications with an emphasis on finance-related problems. Each chapter closes with suggestions for further reading and with exercises which help students to arrive at a deeper understanding of the material presented. The book is based on a one-semester, two-hour undergraduate course and is well-suited for readers with a basic grasp of algebra, calculus, linear algebra and basic probability theory. It provides an accessible introduction for undergraduate students in mathematics or computer science.
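    The basic move of quasi-Monte Carlo integration treated in the book, replacing pseudo-random points with a low-discrepancy sequence, fits in a few lines using the Halton sequence (van der Corput digit reversal in coprime bases):

```python
def van_der_corput(i, base):
    """i-th element (i >= 1) of the van der Corput sequence: the base-b
    digits of i mirrored about the radix point."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate_2d(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over [0,1]^2
    using the 2-D Halton sequence (bases 2 and 3)."""
    return sum(f(van_der_corput(i, 2), van_der_corput(i, 3))
               for i in range(1, n + 1)) / n
```

    For smooth integrands the error decays roughly like (log n)^d / n, versus the n^(-1/2) of plain Monte Carlo; discrepancy theory, covered in the book, makes this precise.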

  18. Monte Carlo simulations of nematic and chiral nematic shells. (United States)

    Wand, Charlie R; Bates, Martin A


    We present a systematic Monte Carlo simulation study of thin nematic and cholesteric shells with planar anchoring using an off-lattice model. The results obtained using the simple model correspond with previously published results for lattice-based systems, with the number, type, and position of defects observed dependent on the shell thickness with four half-strength defects in a tetrahedral arrangement found in very thin shells and a pair of defects in a bipolar (boojum) configuration observed in thicker shells. A third intermediate defect configuration is occasionally observed for intermediate thickness shells, which is stabilized in noncentrosymmetric shells of nonuniform thickness. Chiral nematic (cholesteric) shells are investigated by including a chiral term in the potential. Decreasing the pitch of the chiral nematic leads to a twisted bipolar (chiral boojum) configuration with the director twist increasing from the inner to the outer surface.

  19. A Comparison of Experimental EPMA Data and Monte Carlo Simulations (United States)

    Carpenter, P. K.


    Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and phi(rho z) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potential and instrument take-off angles, represent a formal microanalysis data set that has been used to develop phi(rho z) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. There is additional information contained in the extended abstract.

  20. Monte Carlo Benchmark Calculations for HTR-10 Initial Core

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hong Chul; Kim, Soon Young; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)


    These days, pebble-bed and other high-temperature gas-cooled reactor (HTGR) designs are once again in vogue in connection with hydrogen production. In this study, as part of establishing a Monte Carlo computation system for HTGR core analysis, benchmark calculations for a pebble-type HTGR were carried out using the MCNP code. The initial core of the 10 MW High Temperature Gas-cooled Reactor-Test Module (HTR-10) in China was selected as the benchmark model. After detailed MCNP modeling of the whole facility, the benchmark calculations were performed. This study deals with the core physics benchmark problems proposed for the HTR-10 initial core; results for these problems were obtained with the MCNP5 code.

  1. Quantum Monte Carlo study of the Rabi-Hubbard model (United States)

    Flottat, Thibaut; Hébert, Frédéric; Rousseau, Valéry G.; Batrouni, George Ghassan


    We study, using quantum Monte Carlo (QMC) simulations, the ground-state properties of a one-dimensional Rabi-Hubbard model. The model consists of a lattice of Rabi systems coupled by a photon hopping term between nearest-neighbor sites. For large enough coupling between photons and atoms, the phase diagram generally consists of only two phases: a coherent phase and a compressible incoherent one, separated by a quantum phase transition (QPT). We show that, as one goes deeper into the coherent phase, the system becomes unstable, exhibiting a divergence of the number of photons. The Mott phases which are present in the Jaynes-Cummings-Hubbard model are not observed in these cases due to the presence of non-negligible counter-rotating terms. We show that these two models become equivalent only when the detuning is negative and large enough, or if the counter-rotating terms are small enough.

  2. Monte Carlo Simulation Tool Installation and Operation Guide

    Energy Technology Data Exchange (ETDEWEB)

    Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.


    This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their applicability to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document, and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.

  3. Monte Carlo simulations for design of the KFUPM PGNAA facility

    CERN Document Server

    Naqvi, A A; Maslehuddin, M; Kidwai, S


    Monte Carlo simulations were carried out to design a 2.8 MeV neutron-based prompt gamma ray neutron activation analysis (PGNAA) setup for elemental analysis of cement samples. The elemental analysis was carried out using prompt gamma rays produced through capture of thermal neutrons in sample nuclei. The basic design of the PGNAA setup consists of a cylindrical cement sample enclosed in a cylindrical high-density polyethylene moderator placed between a neutron source and a gamma ray detector. In these simulations the predominant geometrical parameters of the PGNAA setup were optimized, including moderator size, sample size and shielding of the detector. Using the results of the simulations, an experimental PGNAA setup was then fabricated at the 350 kV Accelerator Laboratory of this University. The design calculations were checked experimentally through thermal neutron flux measurements inside the PGNAA moderator. A test prompt gamma ray spectrum of the PGNAA setup was also acquired from a Portland cement samp...

  4. Monte Carlo modeling of spallation targets containing uranium and americium (United States)

    Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter


    Neutron production and transport in spallation targets made of uranium and americium are studied with a Geant4-based code MCADS (Monte Carlo model for Accelerator Driven Systems). Good agreement of MCADS results with experimental data on neutron- and proton-induced reactions on 241Am and 243Am nuclei allows one to use this model for simulations with extended Am targets. It was demonstrated that the MCADS model can be used for calculating the values of critical mass for 233,235U, 237Np, 239Pu and 241Am. Several geometry options and material compositions (U, U + Am, Am, Am2O3) are considered for spallation targets to be used in Accelerator Driven Systems. All considered options operate as deeply subcritical targets having a neutron multiplication factor of k ∼ 0.5. It is found that more than 4 kg of Am can be burned in one spallation target during the first year of operation.

  5. Monte Carlo Modeling of Crystal Channeling at High Energies

    CERN Document Server

    Schoofs, Philippe; Cerutti, Francesco

    Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating if they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of continuous potential, taking into account the thermal motion of the lattice atoms and using the Molière screening function. The energy of the particle transverse motion determines whether or n...

  6. HYDRA: a Java library for Markov Chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Gregory R. Warnes


    Full Text Available Hydra is an open-source, platform-neutral library for performing Markov Chain Monte Carlo. It implements the logic of standard MCMC samplers within a framework designed to be easy to use, extend, and integrate with other software tools. In this paper, we describe the problem that motivated our work, outline our goals for the Hydra project, and describe the current features of the Hydra library. We then provide a step-by-step example of using Hydra to simulate from a mixture model drawn from cancer genetics, first using a variable-at-a-time Metropolis sampler and then a Normal Kernel Coupler. We conclude with a discussion of future directions for Hydra.
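
    For readers unfamiliar with the variable-at-a-time Metropolis sampler mentioned above, here is a minimal self-contained sketch (not Hydra's Java API; the target density and all names are invented for illustration) that updates one coordinate of a correlated two-dimensional Gaussian target at a time:

```python
import math
import random

def log_density(x, y):
    """Unnormalized log density of a correlated 2-D Gaussian (stand-in target)."""
    return -0.5 * (x * x + y * y + x * y)

def metropolis_one_at_a_time(n_sweeps, step=1.5, seed=1):
    """Variable-at-a-time Metropolis: propose and accept/reject one coordinate at a time."""
    rng = random.Random(seed)
    state = [0.0, 0.0]
    samples = []
    for _ in range(n_sweeps):
        for i in (0, 1):                            # sweep over the coordinates
            proposal = list(state)
            proposal[i] += rng.uniform(-step, step)
            log_alpha = log_density(*proposal) - log_density(*state)
            if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
                state = proposal                    # accept; otherwise keep the old state
        samples.append(tuple(state))
    return samples

samples = metropolis_one_at_a_time(20000)
mean_x = sum(s[0] for s in samples) / len(samples)
var_x = sum(s[0] ** 2 for s in samples) / len(samples) - mean_x ** 2
print(mean_x, var_x)  # target has E[x] = 0 and Var(x) = 4/3
```

    Updating one variable at a time keeps each proposal cheap and acceptance rates reasonable in higher dimensions, which is why Hydra (and classic Gibbs-style samplers) offer it as a standard building block.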

  7. Spatial distribution sampling and Monte Carlo simulation of radioactive isotopes

    CERN Document Server

    Krainer, Alexander Michael


    This work focuses on the implementation of a program for random sampling of uniformly spatially distributed isotopes for Monte Carlo particle simulations, specifically FLUKA. With FLUKA it is possible to calculate radionuclide production in high-energy fields. The decay of these nuclides, and therefore the resulting radiation field, can however only be simulated in the same geometry. This work provides a tool to simulate the decay of the produced nuclides in other geometries, so that the radiation field from an irradiated object can be simulated in arbitrary environments. The sampling of isotope mixtures was tested by simulating a 50/50 mixture of $Cs^{137}$ and $Co^{60}$. Both isotopes are well known and therefore provide a first reliable benchmark. The sampling of uniformly distributed coordinates was tested using the histogram test for various spatial distributions. The advantages and disadvantages of the program compared to standard methods are demonstrated in the real life ca...

  8. Academic Training: Monte Carlo generators for the LHC

    CERN Multimedia

    Françoise Benz


    2004-2005 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 4, 5, 6, 7 April from 11.00 to 12.00 hrs - Main Auditorium, bldg. 500 Monte Carlo generators for the LHC T. SJOSTRAND / CERN-PH, Lund Univ. SE Event generators today are indispensable as tools for the modelling of complex physics processes, that jointly lead to the production of hundreds of particles per event at LHC energies. Generators are used to set detector requirements, to formulate analysis strategies, or to calculate acceptance corrections. These lectures describe the physics that goes into the construction of an event generator, such as hard processes, initial- and final-state radiation, multiple interactions and beam remnants, hadronization and decays, and how these pieces come together. The current main generators are introduced, and are used to illustrate uncertainties in the physics modelling. Some trends for the future are outlined.

  9. A Monte Carlo Method for Low Pressure Radio Frequency Discharges

    Directory of Open Access Journals (Sweden)

    Lahouaria Settaouti


    Full Text Available There is increasing interest in glow discharges because of their importance to a large number of application fields, like the microelectronics industry, flat plasma display panel technology, the laser and light industry and analytical spectrochemistry. To improve the capabilities of rf glow discharges, a good understanding of the discharge physics is highly desirable. The typical calculated results include the radio frequency (rf voltage, the electrical field distribution, the density of argon ions and electrons, the electron energy distribution function and information about the collision processes of the electrons with the Monte Carlo model. These results are presented throughout the discharge axis and as a function of time in the rf cycle. Moreover, we have investigated how many rf cycles have to be followed before a periodic steady state is reached.

  10. Accurate Monte Carlo critical exponents for Ising lattices (United States)

    García, Jorge; Gonzalo, Julio A.


    A careful Monte Carlo investigation of the phase transition very close to the critical point (T → T_c, H → 0) in relatively large d = 3, s = 1/2 Ising lattices produced critical exponents β_3D = 0.3126(4) ≅ 5/16, δ_3D^(-1) = 0.1997(4) ≅ 1/5 and γ_3D = 1.253(4) ≅ 5/4. Our results indicate that, within experimental error, they are given by simple fractions corresponding to linear interpolations between the respective two-dimensional (Onsager) and four-dimensional (mean field) critical exponents. An analysis of our inverse susceptibility data χ^(-1)(T) vs. |T - T_c| shows that these data lead to a value of γ_3D compatible with γ′ = γ and T_c = 4.51152(12), while γ values obtained recently by high- and low-temperature series expansions and renormalization group methods are not.
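
    A minimal Metropolis simulation conveys the kind of calculation behind such exponent estimates. The sketch below is my own toy code on a small d = 2 lattice (not the paper's large d = 3 systems); it measures the magnetization per spin on either side of the 2-D transition at T_c ≈ 2.269:

```python
import math
import random

def ising_magnetization(L, T, sweeps, measure=50, seed=0):
    """Metropolis single-spin-flip Monte Carlo for the 2-D Ising model (J = 1, k_B = 1).

    Returns |m| per spin, averaged over the last `measure` sweeps.
    """
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]            # fully ordered start
    ms = []
    for sweep in range(sweeps):
        for _ in range(L * L):                     # one sweep = L*L attempted flips
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb              # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1                  # Metropolis acceptance
        if sweep >= sweeps - measure:
            m = sum(sum(row) for row in spins) / (L * L)
            ms.append(abs(m))
    return sum(ms) / len(ms)

m_low = ising_magnetization(16, 1.5, 200)   # below T_c: ordered, |m| near 1
m_high = ising_magnetization(16, 5.0, 200)  # above T_c: disordered, |m| near 0
print(m_low, m_high)
```

    Exponent studies like the one above refine this basic scheme with much larger lattices, cluster updates and finite-size scaling of quantities such as the susceptibility.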

  11. Monte Carlo modelling of Schottky diode for rectenna simulation (United States)

    Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.


    Before designing a detector circuit, the extraction of the electrical parameters of the Schottky diode is a critical step. This article is based on a Monte-Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact, such as the image force effect or tunneling. The weight of tunneling and thermionic current is quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the saturation current on bias. Harmonic Balance (HB) simulation of a rectifier circuit within the Advanced Design System (ADS) software shows that considering a non-linear ideality factor and saturation current for the electrical model of the Schottky diode does not seem essential. Indeed, bias-independent values extracted in the forward regime from the I-V curve are sufficient. However, the non-linear series resistance extracted from a small signal analysis (SSA) strongly influences the conversion efficiency at low input powers.

  12. Radiative heat transfer by the Monte Carlo method

    CERN Document Server

    Hartnett †, James P; Cho, Young I; Greene, George A; Taniguchi, Hiroshi; Yang, Wen-Jei; Kudo, Kazuhiko


    This book presents the basic principles and applications of radiative heat transfer used in energy, space, and geo-environmental engineering, and can serve as a reference book for engineers and scientists in research and development. A PC disk containing software for numerical analyses by the Monte Carlo method is included to provide hands-on practice in analyzing actual radiative heat transfer problems. Advances in Heat Transfer is designed to fill the information gap between regularly scheduled journals and university-level textbooks by providing in-depth review articles over a broader scope than journals or texts usually allow. Key features: offers solution methods for integro-differential formulation to help avoid difficulties; includes a computer disk for numerical analyses by PC; discusses energy absorption by gas and scattering effects by particles; treats non-gray radiative gases; provides example problems for direct applications in energy, space, and geo-environmental engineering.

  13. Exploring Neutrino Oscillation Parameter Space with a Monte Carlo Algorithm (United States)

    Espejel, Hugo; Ernst, David; Cogswell, Bernadette; Latimer, David


    The χ2 (or likelihood) function for a global analysis of neutrino oscillation data is first calculated as a function of the neutrino mixing parameters. A computational challenge is to obtain the minima or the allowed regions for the mixing parameters. The conventional approach is to calculate the χ2 (or likelihood) function on a grid for a large number of points, and then marginalize over the likelihood function. As the number of parameters increases with the number of neutrinos, making the calculation numerically efficient becomes necessary. We implement a new Monte Carlo algorithm (D. Foreman-Mackey, D. W. Hogg, D. Lang and J. Goodman, Publications of the Astronomical Society of the Pacific, 125 306 (2013)) to determine its computational efficiency at finding the minima and allowed regions. We examine a realistic example to compare the historical and the new methods.

  14. Exploring a Parasite-Host Model with Monte Carlo Simulations (United States)

    Breecher, Nyles; Dong, Jiajia


    We explore parasite-host interactions, a less investigated subset of the well-established predator-prey model. In particular, it is not well known how the numerous parameters of the system affect its characteristics. Parasite-host systems rely on spatial interaction, as a parasite must make physical contact with the host to reproduce. Using a Monte Carlo simulation programmed in C++, we study how the speed and type of movement of the host affect the spatial and temporal distribution of the parasites. Drawing on mean-field theory, we find the exact solution for the parasite distribution with a stationary host at the center and analyze the distributions for a moving host. The findings of the study reveal the rich behavior of a non-equilibrium system and bring insights to pest control and, on a larger scale, the spreading of epidemics.

  15. Interacting multiagent systems kinetic equations and Monte Carlo methods

    CERN Document Server

    Pareschi, Lorenzo


    The description of emerging collective phenomena and self-organization in systems composed of large numbers of individuals has gained increasing interest from various research communities in biology, ecology, robotics and control theory, as well as sociology and economics. Applied mathematics is concerned with the construction, analysis and interpretation of mathematical models that can shed light on significant problems of the natural sciences as well as our daily lives. To this set of problems belongs the description of the collective behaviours of complex systems composed by a large enough number of individuals. Examples of such systems are interacting agents in a financial market, potential voters during political elections, or groups of animals with a tendency to flock or herd. Among other possible approaches, this book provides a step-by-step introduction to the mathematical modelling based on a mesoscopic description and the construction of efficient simulation algorithms by Monte Carlo methods. The ar...

  16. Adaptive multilevel splitting for Monte Carlo particle transport

    Directory of Open Access Journals (Sweden)

    Louvin Henri


    Full Text Available In the Monte Carlo simulation of particle transport, and especially for shielding applications, variance reduction techniques are widely used to help simulate realisations of rare events and reduce the relative errors on the estimated scores for a given computation time. Adaptive Multilevel Splitting (AMS) is one of these variance reduction techniques that has recently appeared in the literature. In the present paper, we propose an alternative version of the AMS algorithm, adapted for the first time to the field of particle transport. Within this context, it can be used to build an unbiased estimator of any quantity associated with particle tracks, such as flux, reaction rates or even non-Boltzmann tallies like pulse-height tallies and other spectra. Furthermore, the efficiency of the AMS algorithm is shown not to be very sensitive to variations of its input parameters, which makes it capable of significant variance reduction without requiring extended user effort.
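
    The core of AMS can be conveyed in a few lines. The following sketch is my own simplified illustration (a Gaussian random walk rather than a transport code, with all names and parameters invented): the particles with the lowest score are repeatedly killed and re-branched from a surviving trajectory at the point where it first crosses the current level, and each iteration multiplies the probability estimate by the survival fraction:

```python
import random

def simulate_tail(rng, start, n_steps):
    """A Gaussian random walk of n_steps increments starting from `start`."""
    path = [start]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, 1.0))
    return path

def ams_probability(n_particles=100, n_steps=30, threshold=20.0, seed=7):
    """Adaptive Multilevel Splitting estimate of P(max of the walk >= threshold)."""
    rng = random.Random(seed)
    paths = [simulate_tail(rng, 0.0, n_steps) for _ in range(n_particles)]
    estimate = 1.0
    while True:
        scores = [max(p) for p in paths]
        z = min(scores)                            # current splitting level
        if z >= threshold:
            return estimate                        # every particle reached the rare set
        killed = [i for i, s in enumerate(scores) if s == z]
        survivors = [i for i in range(n_particles) if i not in killed]
        if not survivors:
            return 0.0                             # extinction (all particles tied)
        estimate *= 1.0 - len(killed) / n_particles
        for i in killed:
            donor = paths[rng.choice(survivors)]
            # branch the clone at the first point where the donor exceeds level z
            cut = next(t for t, x in enumerate(donor) if x > z)
            paths[i] = donor[:cut + 1] + simulate_tail(rng, donor[cut], n_steps - cut)[1:]

est = ams_probability()
print(est)
```

    For this toy walk, the reflection principle gives a rough analytic reference of 2(1 − Φ(20/√30)), on the order of 10^-4; a naive Monte Carlo estimator would need millions of walks to resolve such a probability, whereas the splitting scheme reaches it with a hundred particles.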

  17. Adaptive Multilevel Splitting for Monte Carlo particle transport (United States)

    Louvin, Henri; Dumonteil, Eric; Lelièvre, Tony; Rousset, Mathias; Diop, Cheikh M.


    In the Monte Carlo simulation of particle transport, and especially for shielding applications, variance reduction techniques are widely used to help simulate realisations of rare events and reduce the relative errors on the estimated scores for a given computation time. Adaptive Multilevel Splitting is one of these variance reduction techniques that has recently appeared in the literature. In the present paper, we propose an alternative version of the AMS algorithm, adapted for the first time to the field of particle transport. Within this context, it can be used to build an unbiased estimator of any quantity associated with particle tracks, such as flux, reaction rates or even non-Boltzmann tallies. Furthermore, the efficiency of the AMS algorithm is shown not to be very sensitive to variations of its input parameters, which makes it capable of significant variance reduction without requiring extended user effort.

  18. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    The present paper considers the sequential decision optimization problem. This is an important class of decision problems in engineering. Important examples include decision problems on the quality control of manufactured products and engineering components, timing of the implementation of climate change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which is proposed by Longstaff and Schwartz (2001) for pricing of American options. The present paper formulates the decision problem in a more general manner and explains how the solution scheme proposed by Anders and Nishijima (2011) is implemented for the optimization of the formulated decision problem...
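
    The least squares Monte Carlo method referred to above is easiest to see in its original option-pricing setting: simulate paths forward, then work backward in time, regressing discounted continuation values on the current state to decide whether stopping now beats continuing. The sketch below is an illustrative re-implementation under assumptions of my own (a plain linear regression basis for brevity, invented parameter values), not the authors' code:

```python
import math
import random

def linear_fit(xs, ys):
    """Ordinary least squares fit of y = a + b*x (a deliberately simple basis)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx > 0 else 0.0
    return my - b * mx, b

def lsmc_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_steps=50, n_paths=5000, seed=11):
    """Least squares Monte Carlo (Longstaff-Schwartz) price of an American put."""
    rng = random.Random(seed)
    dt = T / n_steps
    disc = math.exp(-r * dt)
    paths = []
    for _ in range(n_paths):                     # geometric Brownian motion paths
        s, path = S0, [S0]
        for _ in range(n_steps):
            s *= math.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            path.append(s)
        paths.append(path)
    cash = [max(K - p[-1], 0.0) for p in paths]  # exercise value at expiry
    for t in range(n_steps - 1, 0, -1):          # backward induction over exercise dates
        cash = [c * disc for c in cash]          # discount continuation values to time t
        itm = [i for i in range(n_paths) if K > paths[i][t]]
        if len(itm) < 2:
            continue
        a, b = linear_fit([paths[i][t] for i in itm], [cash[i] for i in itm])
        for i in itm:
            exercise = K - paths[i][t]
            if exercise > a + b * paths[i][t]:   # exercise beats estimated continuation
                cash[i] = exercise
    return sum(cash) / n_paths * disc            # discount from the first date to t = 0

price = lsmc_american_put()
print(price)
```

    With these parameters the Black-Scholes European put is worth about 5.57, and the early-exercise premium should push the American price somewhat above that; richer regression bases (Longstaff and Schwartz used polynomials) sharpen the exercise policy.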


    Directory of Open Access Journals (Sweden)



    Full Text Available Multifocal multiphoton microscopy (MMM) has greatly improved the utilization of excitation light and imaging speed due to parallel multiphoton excitation of the samples and simultaneous detection of the signals, which allows it to perform three-dimensional fast fluorescence imaging. Stochastic scanning can provide continuous, uniform and high-speed excitation of the sample, which makes it a suitable scanning scheme for MMM. In this paper, the graphical programming language LabVIEW is used to achieve stochastic scanning of the two-dimensional galvo scanners by using white noise signals to control the x and y mirrors independently. Moreover, the stochastic scanning process is simulated using the Monte Carlo method. Our results show that MMM can avoid oversampling or subsampling in the scanning area and meet the requirements of uniform sampling by stochastically scanning the individual units of the N × N foci array. Therefore, continuous and uniform scanning in the whole field of view is implemented.

  20. Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine

    CERN Document Server

    Sgouros, George


    This book examines the applications of Monte Carlo (MC) calculations in therapeutic nuclear medicine, from basic principles to computer implementations of software packages and their applications in radiation dosimetry and treatment planning. It is written for nuclear medicine physicists and physicians as well as radiation oncologists, and can serve as a supplementary text for medical imaging, radiation dosimetry and nuclear engineering graduate courses in science, medical and engineering faculties. With chapters written by recognised authorities in their particular fields, the book covers the entire range of MC applications in therapeutic medical and health physics, from its use in imaging prior to therapy to dose distribution modelling in targeted radiotherapy. The contributions discuss the fundamental concepts of radiation dosimetry, radiobiological aspects of targeted radionuclide therapy and the various components and steps required for implementing a dose calculation and treatment planning methodology in ...

  1. Monte-Carlo simulation of a stochastic differential equation (United States)

    Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG


    For solving higher dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to take large step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis has been applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
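
    As a concrete reference point, the simplest MC scheme for a stochastic differential equation is the first-order Euler-Maruyama method. The sketch below is my own toy example on an Ornstein-Uhlenbeck process (not the higher-order operators studied in the paper); it estimates the mean and variance of X_T and compares them with the exact values:

```python
import math
import random

def euler_maruyama_finals(n_paths, x0=1.0, theta=1.0, sigma=0.5, T=1.0, dt=0.01, seed=3):
    """Monte Carlo sample of X_T for the Ornstein-Uhlenbeck SDE dX = -theta*X dt + sigma dW."""
    rng = random.Random(seed)
    n_steps = int(round(T / dt))
    finals = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            # Euler-Maruyama step: drift plus a diffusion increment of variance sigma^2 * dt
            x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        finals.append(x)
    return finals

finals = euler_maruyama_finals(10000)
mc_mean = sum(finals) / len(finals)
mc_var = sum((x - mc_mean) ** 2 for x in finals) / len(finals)
exact_mean = math.exp(-1.0)                     # E[X_T] = x0 * exp(-theta * T)
exact_var = 0.125 * (1.0 - math.exp(-2.0))      # sigma^2/(2*theta) * (1 - exp(-2*theta*T))
print(mc_mean, exact_mean, mc_var, exact_var)
```

    The scheme's weak error is O(dt), which is exactly the limitation that motivates higher-order operators: they keep the bias small even when the step size is made large.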

  2. A Monte Carlo tool to simulate breast cancer screening programmes (United States)

    Forastero, C.; Zamora, L. I.; Guirado, D.; Lallena, A. M.


    A Monte Carlo tool which permits the simulation of screening mammography programmes is developed. Various statistical distributions describing different parameters involved in the problem are used: the characteristics of the population under study, a tumour growth model, and a model for tumour detection based on parameters such as sensitivity and specificity, which depend on the woman's age. We reproduce results of different actual programmes. The model enables us to find the configuration (the age of the women who attend the screening trials and the screening frequency) which produces maximum benefits with minimum risks. In addition, the model has permitted us to validate some of the assumed hypotheses, such as the probability distribution of tumour detection as a function of tumour size, the frequency of the histological types and the transition probability between different histological types.

  3. Top quark mass calibration for Monte Carlo event generators

    Energy Technology Data Exchange (ETDEWEB)

    Butenschoen, Mathias [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Dehnadi, Bahman; Preisser, Moritz [Vienna Univ. (Austria). Faculty of Physics; Hoang, Andre H. [Vienna Univ. (Austria). Faculty of Physics; Vienna Univ. (Austria). Erwin Schroedinger International Inst. for Mathematical Physics; Mateu, Vicent [Univ. Autonoma de Madrid (Spain). Dept. de Fisica Teorica y Inst. de Fisica Teorica; Stewart, Iain W. [Massachusetts Institute of Technology, Cambridge, MA (United States). Center for Theoretical Physics


    The most precise top quark mass measurements use kinematic reconstruction methods, determining the top mass parameter of a Monte Carlo event generator, m_t^MC. Due to hadronization and parton shower dynamics, relating m_t^MC to a field theory mass is difficult. We present a calibration procedure to determine this relation using hadron level QCD predictions for observables with kinematic mass sensitivity. Fitting e+e- 2-Jettiness calculations at NLL/NNLL order to Pythia 8.205, m_t^MC differs from the pole mass by 900/600 MeV, and agrees with the MSR mass within uncertainties, m_t^MC ≅ m_(t, 1 GeV)^MSR.

  4. MDTS: automatic complex materials design using Monte Carlo tree search. (United States)

    M Dieb, Thaer; Ju, Shenghong; Yoshizoe, Kazuki; Hou, Zhufeng; Shiomi, Junichiro; Tsuda, Koji


    Complex materials design is often represented as a black-box combinatorial optimization problem. In this paper, we present a novel Python library called MDTS (Materials Design using Tree Search). Our algorithm employs a Monte Carlo tree search approach, which has shown exceptional performance in the computer game of Go. Unlike evolutionary algorithms that require user intervention to set parameters appropriately, MDTS has no tuning parameters and works autonomously in various problems. In comparison to a Bayesian optimization package, our algorithm showed competitive search efficiency and superior scalability. We succeeded in designing large Silicon-Germanium (Si-Ge) alloy structures that Bayesian optimization could not deal with due to excessive computational cost. MDTS is available at
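
    A Monte Carlo tree search over a combinatorial design space can be sketched compactly. The code below is a generic illustration with invented names and a toy objective, not the MDTS library itself (and unlike MDTS it keeps an explicit exploration constant c): it builds binary strings bit by bit, selecting children by a UCB rule, expanding one node per iteration, completing the string with a random rollout, and backing the score up the visited path:

```python
import math
import random

rng = random.Random(5)
L = 10
TARGET = [rng.randrange(2) for _ in range(L)]      # hidden optimum of the toy black box

def score(bits):
    """Toy black-box objective: fraction of positions matching the hidden pattern."""
    return sum(b == t for b, t in zip(bits, TARGET)) / L

class Node:
    def __init__(self):
        self.children = {}                          # bit value (0 or 1) -> child Node
        self.visits = 0
        self.total = 0.0

def mcts(n_iters=2000, c=0.7):
    """MCTS over binary strings: UCB selection, one expansion, random rollout, backup."""
    root = Node()
    best, best_score = None, -1.0
    for _ in range(n_iters):
        node, prefix, visited = root, [], [root]
        while len(prefix) < L:
            if len(node.children) < 2:              # expansion: create one missing child
                bit = rng.choice([b for b in (0, 1) if b not in node.children])
                node.children[bit] = Node()
                node, prefix = node.children[bit], prefix + [bit]
                visited.append(node)
                break
            bit = max(node.children, key=lambda b: (  # UCB1 selection rule
                node.children[b].total / node.children[b].visits
                + c * math.sqrt(math.log(node.visits) / node.children[b].visits)))
            node, prefix = node.children[bit], prefix + [bit]
            visited.append(node)
        rollout = prefix + [rng.randrange(2) for _ in range(L - len(prefix))]
        r = score(rollout)
        if r > best_score:
            best, best_score = rollout, r
        for n in visited:                           # backpropagation of the reward
            n.visits += 1
            n.total += r
    return best, best_score

best, best_score = mcts()
print(best_score)
```

    The UCB term steers the search toward promising prefixes while still exploring under-visited branches, which is what lets tree search scale to design spaces too large for exhaustive enumeration.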

  5. Treatment planning in radiosurgery: parallel Monte Carlo simulation software

    Energy Technology Data Exchange (ETDEWEB)

    Scielzo, G. [Galliera Hospitals, Genova (Italy). Dept. of Hospital Physics; Grillo Ruggieri, F. [Galliera Hospitals, Genova (Italy) Dept. for Radiation Therapy; Modesti, M.; Felici, R. [Electronic Data System, Rome (Italy); Surridge, M. [University of Southampton (United Kingdom). Parallel Application Centre


    The main objective of this research was to evaluate the possibility of direct Monte Carlo simulation for accurate dosimetry with short computation time. We made use of: a graphics workstation, a linear accelerator, water, PMMA and anthropomorphic phantoms, for validation purposes; ionometric, film and thermo-luminescent techniques, for dosimetry; and a treatment planning system, for comparison. Benchmarking results suggest that short computing times can be obtained with use of the parallel version of EGS4 that was developed. Parallelism was obtained by assigning simulation incident photons to separate processors, and the development of a parallel random number generator was necessary. Validation consisted of phantom irradiation and comparison of predicted and measured values, which showed good agreement in PDD and dose profiles. Experiments on anthropomorphic phantoms (with inhomogeneities) were carried out, and these values are being compared with results obtained with the conventional treatment planning system.

  6. Adaptive Multilevel Splitting for Monte Carlo particle transport

    Directory of Open Access Journals (Sweden)

    Louvin Henri


    Full Text Available In the Monte Carlo simulation of particle transport, and especially for shielding applications, variance reduction techniques are widely used to help simulate realisations of rare events and reduce the relative errors on the estimated scores for a given computation time. Adaptive Multilevel Splitting is one of these variance reduction techniques that has recently appeared in the literature. In the present paper, we propose an alternative version of the AMS algorithm, adapted for the first time to the field of particle transport. Within this context, it can be used to build an unbiased estimator of any quantity associated with particle tracks, such as flux, reaction rates or even non-Boltzmann tallies. Furthermore, the efficiency of the AMS algorithm is shown not to be very sensitive to variations of its input parameters, which makes it capable of significant variance reduction without requiring extended user effort.

  7. Evidence for Stable Square Ice from Quantum Monte Carlo

    CERN Document Server

    Chen, Ji; Brandenburg, Jan Gerit; Alfè, Dario; Michaelides, Angelos


    Recent experiments on ice formed by water under nanoconfinement provide evidence for a two-dimensional (2D) `square ice' phase. However, the interpretation of the experiments has been questioned and the stability of square ice has become a matter of debate. Partly this is because the simulation approaches employed so far (force fields and density functional theory) struggle to accurately describe the very small energy differences between the relevant phases. Here we report a study of 2D ice using an accurate wave-function based electronic structure approach, namely Diffusion Monte Carlo (DMC). We find that at relatively high pressure square ice is indeed the lowest enthalpy phase examined, supporting the initial experimental claim. Moreover, at lower pressures a `pentagonal ice' phase (not yet observed experimentally) has the lowest enthalpy, and at ambient pressure the `pentagonal ice' phase is degenerate with a `hexagonal ice' phase. Our DMC results also allow us to evaluate the accuracy of various densi...

  8. The neutron instrument Monte Carlo library MCLIB: Recent developments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.; Thelliez, T.G.


    A brief review is given of the developments made since the ICANS-XIII meeting in the neutron instrument design codes using the Monte Carlo library MCLIB. Much of the effort has been to assure that the library and the executing code MC_RUN connect efficiently with the World Wide Web application MC-WEB as part of the Los Alamos Neutron Instrument Simulation Package (NISP). Since one of the most important features of MCLIB is its open structure and capability to incorporate any possible neutron transport or scattering algorithm, this document describes the current procedure that would be used by an outside user to add a feature to MCLIB. Details of the calling sequence of the core subroutine OPERATE are discussed, and questions of style are considered and additional guidelines given. Suggestions for standardization are solicited, as well as code for new algorithms.

  9. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models (United States)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti


    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.

  10. Monte Carlo simulations and benchmark studies at CERN's accelerator chain

    CERN Document Server

    AUTHOR|(CDS)2083190; Brugger, Markus


    Mixed particle and energy radiation fields present at the Large Hadron Collider (LHC) and its accelerator chain are responsible for failures of electronic devices located in the vicinity of the accelerator beam lines. These radiation effects on electronics and, more generally, the overall radiation damage issues have a direct impact on component and system lifetimes, as well as on maintenance requirements and radiation exposure of personnel who have to intervene and fix existing faults. The radiation environments and respective radiation damage issues along CERN's accelerator chain were studied in the framework of the CERN Radiation to Electronics (R2E) project and are presented here. The important interplay between Monte Carlo simulations and radiation monitoring is also highlighted.

  11. Microscopic imaging through turbid media Monte Carlo modeling and applications

    CERN Document Server

    Gu, Min; Deng, Xiaoyuan


    This book provides a systematic introduction to the principles of microscopic imaging through tissue-like turbid media in terms of Monte Carlo simulation. It describes various gating mechanisms based on the physical differences between unscattered and scattered photons, and methods for microscopic image reconstruction using the concept of the effective point spread function. Imaging an object embedded in a turbid medium is a challenging problem in physics as well as in biophotonics. A turbid medium surrounding an object under inspection causes multiple scattering, which degrades the contrast, resolution and signal-to-noise ratio. Biological tissues are typically turbid media. Microscopic imaging through a tissue-like turbid medium can provide higher resolution than transillumination imaging, in which no objective is used. This book serves as a valuable reference for engineers and scientists working on microscopy through turbid media such as tissue.

  12. GATE Monte Carlo simulation in a cloud computing environment (United States)

    Rowedder, Blake Austin

    The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment of a local high performance cluster. This study investigated a reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes on the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size and thus that integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53 minute long simulation was decreased to 3.11 minutes when run on a 20-node cluster. The ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.

  13. Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro


    Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time-evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; even more, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2); this is the same computational complexity as an exact method, but with a smaller constant. We provide numerical examples to show our results.
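
    The stochastic simulation algorithm (SSA) that the hybrid scheme builds on can be illustrated with a minimal sketch for a single decay channel; this toy example and all parameter names are illustrative, not taken from the articles cited above:

    ```python
    import random

    def ssa_decay(x0, k, T, rng):
        """Gillespie SSA for a single decay channel A -> 0 with propensity k*x."""
        x, t = x0, 0.0
        while x > 0:
            t += rng.expovariate(k * x)  # exponential waiting time to the next event
            if t > T:
                break
            x -= 1                       # one molecule decays
        return x

    rng = random.Random(42)
    k, x0, T, n = 1.0, 100, 1.0, 2000
    # Crude Monte Carlo estimate of E[X(T)]; the exact value is x0*exp(-k*T) ~ 36.8
    est = sum(ssa_decay(x0, k, T, rng) for _ in range(n)) / n
    ```

    Each sampled path is exact in distribution; the tau-leap method mentioned above trades this exactness for speed by taking fixed time steps with Poisson-distributed event counts.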

  14. Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method (United States)

    Saini, P. Sri; Prince, Shanthi


    At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is used. However, underwater communication is moving towards optical communication, which offers higher bandwidth than acoustic communication but a shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes, leading to optical variability. Scattering has two effects: attenuation of the signal and Inter-Symbol Interference (ISI); the latter is ignored in the present paper. In order to have an efficient underwater OWC link it is therefore necessary to model the channel efficiently. In this paper, the underwater optical channel is modeled using the Monte Carlo method, which provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and low-chlorophyll conditions the blue wavelength is absorbed least, whereas in chlorophyll-rich environments the red wavelength is absorbed less than blue and green.
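
    The role of the absorption (a) and scattering (b) coefficients can be pictured with a minimal photon-transport sketch (slab geometry, isotropic scattering); this is a generic simplification for illustration, not the channel model of the paper:

    ```python
    import math, random

    def transmittance(a, b, L, n=20000, seed=1):
        """Monte Carlo photon transport through a slab of thickness L.
        a, b: absorption and scattering coefficients; total attenuation c = a + b.
        Free paths are exponential with mean 1/c; at each interaction the photon
        is absorbed with probability a/c, otherwise scattered isotropically."""
        rng = random.Random(seed)
        c = a + b
        passed = 0
        for _ in range(n):
            z, mu = 0.0, 1.0                  # depth and direction cosine
            while True:
                z += mu * rng.expovariate(c)  # travel one free path
                if z >= L:
                    passed += 1               # photon exits the far side
                    break
                if z < 0:                     # backscattered out of the slab
                    break
                if rng.random() < a / c:      # interaction is an absorption
                    break
                mu = rng.uniform(-1.0, 1.0)   # isotropic scattering
        return passed / n
    ```

    In the purely absorbing limit (b = 0) the estimate recovers the Beer-Lambert law, transmittance = exp(-a*L).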

  15. Practical Schemes for Accurate Forces in Quantum Monte Carlo. (United States)

    Moroni, S; Saccani, S; Filippi, C


    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of theory. Algorithms to compute exact DMC forces have been proposed in the past, and one such scheme is also put forward in this work, but they remain rather impractical due to their high computational cost. As a practical route to DMC forces, we therefore revisit here an approximate method, originally developed in the context of correlated sampling and named here the Variational Drift-Diffusion (VD) approach. We thoroughly investigate its accuracy by checking the consistency between the approximate VD force and the derivative of the DMC potential energy surface for the SiH and C2 molecules and employ a wide range of wave functions optimized in VMC to assess its robustness against the choice of trial function. We find that, for all but the poorest wave function, the discrepancy between force and energy is very small over all interatomic distances, affecting the equilibrium bond length obtained with the VD forces by less than 0.004 au. Furthermore, when the VMC forces are approximate due to the use of a partially optimized wave function, the DMC forces have smaller errors and always lead to an equilibrium distance in better agreement with the experimental value. We also show that the cost of computing the VD forces is only slightly larger than the cost of calculating the DMC energy. Therefore, the VD approximation represents a robust and efficient approach to compute accurate DMC forces, superior to the VMC counterparts.

  16. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    KAUST Repository

    Hoel, Hakon


    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and it also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
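
    The telescoping idea behind MLMC can be sketched with a minimal two-level estimator for a geometric Brownian motion under Euler-Maruyama; this is a generic illustration under invented parameters, not the adaptive algorithm analyzed in the work above:

    ```python
    import random

    def coupled_pair(x0, mu, sig, T, n_fine, rng):
        """Euler-Maruyama paths of dX = mu*X dt + sig*X dW on a fine grid (n_fine
        steps) and a coarse grid (n_fine/2 steps) driven by the SAME Brownian path."""
        dt = T / n_fine
        xf = xc = x0
        for _ in range(n_fine // 2):
            dw1 = rng.gauss(0.0, dt ** 0.5)
            dw2 = rng.gauss(0.0, dt ** 0.5)
            xf += mu * xf * dt + sig * xf * dw1                # fine step 1
            xf += mu * xf * dt + sig * xf * dw2                # fine step 2
            xc += mu * xc * (2 * dt) + sig * xc * (dw1 + dw2)  # matching coarse step
        return xf, xc

    def mlmc_two_level(x0, mu, sig, T, n0, n1, seed=9):
        """Two-level telescoping estimator of E[X_T]:
        E[P_fine] = E[P_coarse] + E[P_fine - P_coarse]."""
        rng = random.Random(seed)
        dt = T / 2
        lvl0 = 0.0
        for _ in range(n0):              # level 0: cheap coarse-only paths, many samples
            x = x0
            for _ in range(2):
                x += mu * x * dt + sig * x * rng.gauss(0.0, dt ** 0.5)
            lvl0 += x
        lvl0 /= n0
        lvl1 = 0.0
        for _ in range(n1):              # level 1: coupled correction, few samples
            xf, xc = coupled_pair(x0, mu, sig, T, 4, rng)
            lvl1 += xf - xc
        lvl1 /= n1
        return lvl0 + lvl1
    ```

    Because the coupled fine and coarse paths share the same Brownian increments, the correction term has small variance and needs far fewer samples than the coarse level, which is the source of the complexity gain.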

  17. Multi-pass Monte Carlo simulation method in nuclear transmutations. (United States)

    Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M


    Monte Carlo methods, in their direct brute-force simulation incarnation, give realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components and the microscopic interaction cross-sections. However, the relative weight of the components of the system changes along with the steps of the simulation. A natural measure would be adjusting the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of order 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps in order to have some significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that leads to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the very branched subject of Monte Carlo simulations vis-à-vis nuclear reactors. Copyright © 2016 Elsevier Ltd. All rights reserved.
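
    The idea of refreshing probabilities between passes rather than after every event can be pictured with a toy sketch; the two-species chain, its "cross sections" and all names below are invented for illustration and are not the authors' scheme:

    ```python
    import random

    def multipass_transmutation(n_A, sigma_A, sigma_B, events_per_pass, passes, seed=7):
        """Toy two-step chain A -> B -> C with interaction 'cross sections'
        sigma_A, sigma_B. Event probabilities are frozen within each pass and
        refreshed between passes, instead of after every single event."""
        rng = random.Random(seed)
        n_B = n_C = 0
        for _ in range(passes):
            total = n_A * sigma_A + n_B * sigma_B
            if total == 0:
                break
            p_A = n_A * sigma_A / total          # frozen for the whole pass
            for _ in range(events_per_pass):
                if rng.random() < p_A:
                    if n_A > 0:
                        n_A -= 1
                        n_B += 1
                elif n_B > 0:
                    n_B -= 1
                    n_C += 1
            # composition (and hence p_A) is refreshed at the top of the next pass
        return n_A, n_B, n_C
    ```

    Per-event updates would recompute `p_A` after each of the inner-loop iterations; the pass-wise refresh approximates that at a fraction of the bookkeeping cost.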

  18. Validation of Monte Carlo Geant4 code for a

    Directory of Open Access Journals (Sweden)

    Jaafar EL Bakkali


    Full Text Available This study is aimed at validating the Monte Carlo Geant4.9.4 code for a 6 MV Varian linac configuring a 10 × 10 cm2 radiation field. For this purpose a user-friendly Geant4 code called G4Linac has been developed from scratch, allowing an accurate modeling of a 6 MV Varian linac head and performing dose calculation in a homogeneous water phantom. Discarding the other accelerator parts where electrons are created, accelerated and deviated, a virtual source of 6 MeV electrons was considered. The parameters associated with this virtual source, namely the mean energy, sigma and full width at half maximum, are often unknown; they have been adjusted following our own methodology, developed in such a manner that the optimization phase is fast and efficient. In fact, a small number of Monte Carlo simulations were conducted simultaneously on a cluster of computers thanks to the Rocks cluster software. The calculated dosimetric functions in a 40 × 40 × 40 cm3 water phantom were compared to the measured ones using the Gamma Index method, where the gamma criterion was fixed at 2%–1 mm accuracy. After optimization, it was observed that the proper mean energy, sigma and full width at half maximum are 5.6 MeV, 0.42 MeV and 1.177 mm, respectively. Furthermore, we have made some changes to an existing bremsstrahlung splitting technique, which allowed us to reduce the CPU time spent in the treatment head simulation by a factor of about five.

  19. Quantifying Monte Carlo uncertainty in ensemble Kalman filter

    Energy Technology Data Exchange (ETDEWEB)

    Thulin, Kristian; Naevdal, Geir; Skaug, Hans Julius; Aanonsen, Sigurd Ivar


    This report is presenting results obtained during Kristian Thulin PhD study, and is a slightly modified form of a paper submitted to SPE Journal. Kristian Thulin did most of his portion of the work while being a PhD student at CIPR, University of Bergen. The ensemble Kalman filter (EnKF) is currently considered one of the most promising methods for conditioning reservoir simulation models to production data. The EnKF is a sequential Monte Carlo method based on a low rank approximation of the system covariance matrix. The posterior probability distribution of model variables may be estimated from the updated ensemble, but because of the low rank covariance approximation, the updated ensemble members become correlated samples from the posterior distribution. We suggest using multiple EnKF runs, each with smaller ensemble size, to obtain truly independent samples from the posterior distribution. This allows a point-wise confidence interval for the posterior cumulative distribution function (CDF) to be constructed. We present a methodology for finding an optimal combination of ensemble batch size (n) and number of EnKF runs (m) while keeping the total number of ensemble members (m x n) constant. The optimal combination of n and m is found through minimizing the integrated mean square error (MSE) for the CDFs, and we choose to define an EnKF run with 10,000 ensemble members as having zero Monte Carlo error. The methodology is tested on a simplistic, synthetic 2D model, but should be applicable also to larger, more realistic models. (author). 12 refs., figs., tabs.

  20. Monte Carlo Simulation and Experimental Characterization of a Dual Head Gamma Camera

    CERN Document Server

    Rodrigues, S; Abreu, M C; Santos, N; Rato-Mendes, P; Peralta, L


    The GEANT4 Monte Carlo simulation and experimental characterization of the Siemens E.Cam Dual Head gamma camera hosted in the Particular Hospital of Algarve have been done. Imaging tests of thyroid and other phantoms have been made "in situ" and compared with the results obtained with the Monte Carlo simulation.

  1. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.


    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.

  2. Doppler Monte Carlo simulations of light scattering in tissue to support laser-Doppler perfusion measurements

    NARCIS (Netherlands)

    de Mul, F.F.M.; Steenbergen, Wiendelt; Greve, Jan


    Doppler Monte Carlo (DMC) simulations of the transport of light through turbid media, e.g., tissue, can be used to predict or to interpret measurements of the blood perfusion of tissue by laser‐Doppler perfusion flowmetry. We describe the physical and mathematical background of Doppler Monte Carlo


    Directory of Open Access Journals (Sweden)



    Full Text Available Value at Risk (VaR) is the maximum potential loss on a portfolio at a given probability over a certain time horizon. In this research, portfolio VaR values are calculated from historical data and from Monte Carlo simulation data. The historical data are processed to obtain stock returns, variances, correlation coefficients, and the variance-covariance matrix; the Markowitz method is then used to find the proportion of funds for each stock and the portfolio risk and return. The data were then simulated by Monte Carlo simulation: Exact Monte Carlo Simulation and Expected Monte Carlo Simulation. The Exact Monte Carlo Simulation has the same returns and standard deviation as the historical data, while the Expected Monte Carlo Simulation has statistics similar to the historical data. The result of this research is the portfolio VaR with time horizons T=1, T=10, T=22 at the 95% confidence level, obtained from historical data and from Monte Carlo simulation data with the exact and expected methods. The VaR from both Monte Carlo simulations is greater than the VaR from historical data.
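
    The Monte Carlo side of such a calculation can be sketched in a minimal parametric form (normal returns, a single asset; the figures below are illustrative assumptions, not from the study):

    ```python
    import random

    def monte_carlo_var(mu, sigma, value, horizon, n=50000, alpha=0.95, seed=3):
        """Parametric Monte Carlo VaR sketch: simulate horizon-aggregated normal
        returns and take the alpha-quantile of the simulated loss distribution."""
        rng = random.Random(seed)
        losses = []
        for _ in range(n):
            r = sum(rng.gauss(mu, sigma) for _ in range(horizon))  # T-day return
            losses.append(-value * r)                              # loss in currency units
        losses.sort()
        return losses[int(alpha * n)]  # loss exceeded with probability 1 - alpha

    # Illustrative inputs: daily mu = 0.05%, sigma = 1%, portfolio value 1,000,000
    var_1d = monte_carlo_var(0.0005, 0.01, 1_000_000, 1)
    ```

    A portfolio version would replace the single `gauss` draw with a draw from the multivariate normal defined by the variance-covariance matrix mentioned above.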

  4. Lower confidence bounds for prediction accuracy in high dimensions via AROHIL Monte Carlo. (United States)

    Dobbin, Kevin K; Cooke, Stephanie


    Implementation and development of statistical methods for high-dimensional data often require high-dimensional Monte Carlo simulations. Simulations are used to assess performance, evaluate robustness, and in some cases for implementation of algorithms. But simulation in high dimensions is often very complex, cumbersome and slow. As a result, performance evaluations are often limited, robustness minimally investigated and dissemination impeded by implementation challenges. This article presents a method for converting complex, slow high-dimensional Monte Carlo simulations into simpler, faster lower dimensional simulations. We implement the method by converting a previous Monte Carlo algorithm into this novel Monte Carlo, which we call AROHIL Monte Carlo. AROHIL Monte Carlo is shown to exactly or closely match pure Monte Carlo results in a number of examples. It is shown that computing time can be reduced by several orders of magnitude. The confidence bound method implemented using AROHIL outperforms the pure Monte Carlo method. Finally, the utility of the method is shown by application to a number of real microarray datasets.

  5. Monte Carlo Simulation of Emission Tomography and other Medical Imaging Techniques (United States)

    Harrison, Robert L.


    As an introduction to Monte Carlo simulation of emission tomography, this paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations.

  6. Efficient Application of Continuous Fractional Component Monte Carlo in the Reaction Ensemble

    NARCIS (Netherlands)

    Poursaeidesfahani, A.; Hens, R.; Rahbari, A.; Ramdin, M.; Dubbeldam, D.; Vlugt, T.J.H.


    A new formulation of the Reaction Ensemble Monte Carlo technique (RxMC) combined with the Continuous Fractional Component Monte Carlo method is presented. This method is denoted by serial Rx/CFC. The key ingredient is that fractional molecules of either reactants or reaction products are present and

  7. A Monte Carlo Study of Marginal Maximum Likelihood Parameter Estimates for the Graded Model. (United States)

    Ankenmann, Robert D.; Stone, Clement A.

    Effects of test length, sample size, and assumed ability distribution were investigated in a multiple replication Monte Carlo study under the 1-parameter (1P) and 2-parameter (2P) logistic graded model with five score levels. Accuracy and variability of item parameter and ability estimates were examined. Monte Carlo methods were used to evaluate…

  8. How-to: Write a parton-level Monte Carlo event generator

    CERN Document Server

    Papaefstathiou, Andreas


    This article provides an introduction to the principles of particle physics event generators that are based on the Monte Carlo method. Following some preliminaries, instructions on how to build a basic parton-level Monte Carlo event generator are given through exercises.
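
    The core of such a generator is sampling kinematic variables from a differential cross section. A minimal sketch using accept-reject for the textbook e+e- -> mu+mu- angular distribution (a standard example, not code from the article):

    ```python
    import math, random

    def generate_events(n, seed=5):
        """Toy parton-level generator: sample cos(theta) for e+e- -> mu+mu- from
        the tree-level shape dsigma/dcos(theta) proportional to 1 + cos^2(theta),
        using accept-reject against a flat envelope."""
        rng = random.Random(seed)
        events = []
        while len(events) < n:
            c = rng.uniform(-1.0, 1.0)
            if rng.random() * 2.0 <= 1.0 + c * c:      # envelope max weight = 2
                phi = rng.uniform(0.0, 2.0 * math.pi)  # azimuth is uniform
                events.append((c, phi))
        return events

    events = generate_events(50000)
    mean_c2 = sum(c * c for c, _ in events) / len(events)
    # Analytic check: <cos^2 theta> = (2/3 + 2/5) / (2 + 2/3) = 0.4
    ```

    Accepted events are distributed according to the target shape, so moments such as the mean of cos^2(theta) converge to their analytic values.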

  9. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)


    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.

  10. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.


    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.


    NARCIS (Netherlands)



    A novel method, condensed Monte Carlo simulation, is presented that applies the results of a single Monte Carlo simulation for a given albedo mu_s/(mu_a + mu_s) to obtaining results for other albedos; mu_s and mu_a are the scattering and absorption coefficients, respectively. The method

  12. A valence force field-Monte Carlo algorithm for quantum dot growth modeling

    DEFF Research Database (Denmark)

    Barettin, Daniele; Kadkhodazadeh, Shima; Pecchia, Alessandro


    We present a novel kinetic Monte Carlo version of the atomistic valence force field algorithm in order to model a self-assembled quantum dot growth process. We show that our atomistic model is both computationally favorable and captures more details compared to traditional kinetic Monte Carlo models...

  13. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation (United States)

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael


    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  14. Monte Carlo simulation of dense polymer melts using event chain algorithms (United States)

    Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan


    We propose an efficient Monte Carlo algorithm for the off-lattice simulation of dense hard sphere polymer melts using cluster moves, called event chains, which allow for a rejection-free treatment of the excluded volume. Event chains also allow for an efficient preparation of initial configurations in polymer melts. We parallelize the event chain Monte Carlo algorithm to further increase simulation speeds and suggest additional local topology-changing moves ("swap" moves) to accelerate equilibration. By comparison with other Monte Carlo and molecular dynamics simulations, we verify that the event chain algorithm reproduces the correct equilibrium behavior of polymer chains in the melt. By comparing intrapolymer diffusion time scales, we show that event chain Monte Carlo algorithms can achieve simulation speeds comparable to optimized molecular dynamics simulations. The event chain Monte Carlo algorithm exhibits Rouse dynamics on short time scales. In the absence of swap moves, we find reptation dynamics on intermediate time scales for long chains.

  15. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico


    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  16. Optical coherence tomography: Monte Carlo simulation and improvement by optical amplification

    DEFF Research Database (Denmark)

    Tycho, Andreas


    An advanced novel Monte Carlo simulation model of the detection process of an optical coherence tomography (OCT) system is presented. For the first time it is shown analytically that the incoherent Monte Carlo approach is applicable to modeling the heterodyne detection process of an OCT system... distribution of the light from the sample and the reference beam. To adequately estimate the intensity distributions, a novel method of modeling a focused Gaussian beam using Monte Carlo simulation is developed. This method is then combined with the derived expression for the OCT signal into a new Monte Carlo... flexibility of Monte Carlo simulations; this new model is demonstrated to be excellent as a numerical phantom, i.e., as a substitute for otherwise difficult experiments. Finally, a new model of the signal-to-noise ratio (SNR) of an OCT system with optical amplification of the light reflected from the sample...

  17. An Overview of the Monte Carlo Application ToolKit (MCATK)

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed both to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies, motivated by cost reduction. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, time-dependent algorithm, fission chain algorithms; MCATK geometry – mesh geometries, solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo applications development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.

  18. The use of Monte Carlo codes in internal dosimetry; Utilisation des codes de Monte Carlo en dosimetrie interne

    Energy Technology Data Exchange (ETDEWEB)

    Ricard, M.; Coulot, J. [Institut Gustave-Roussy, Service de Physique, 94 - Villejuif (France)


    Internal dosimetry concerns radiation sources inside the human body. It contributes to determining the energy deposition in a living organism following accidental or medical irradiation. In the case of an accidental irradiation, the aim is risk estimation; in the case of medical use, the dosimetry data serve radiation protection purposes. In any case, reference methods are necessary in order to know the absorbed dose associated with the incorporation of the radioactive product. Three levels have to be considered: the organ level in radiation protection, and the cellular and tissue levels for applications in radiotherapy. Analytical methods rapidly become difficult to use, whereas Monte Carlo methods now provide adequate statistical precision. The advantages of this approach are developed in this article. (N.C.)

  19. Pipeline integrity management using Monte Carlo simulation; A aplicacao do metodo de Monte Carlo no gerenciamento da integridade de dutos

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Claudio; Costa, Artur; Bittencourt, Euclides [TRANSPETRO - PETROBRAS Transporte, Rio de Janeiro, RJ (Brazil)


    Due to the growing relevance of safety and environmental protection policies in PETROBRAS and its subsidiaries, as well as the requirements of official regulatory agencies and the population, integrity management of oil and gas pipelines became a priority activity in TRANSPETRO, involving several sectors of the company's Support Management Department. Inspection activities using intelligent PIGs, field correlations and replacement of pipeline segments are known as high cost operations and require complex logistics. Thus, the adoption of management tools that optimize the available resources is imperative. This study presents the Monte Carlo simulation method as an additional tool for evaluation and management of pipeline structural integrity. The method consists of foreseeing the future physical condition of the most significant defects found in intelligent PIG In Line Inspections based on a probabilistic approach. Through Monte Carlo simulation, probability-of-failure functions for each defect are produced, helping managers decide which repairs should be executed in order to reach the desired or accepted risk level. The case that illustrates this study refers to the reconditioning of the ORSOL 14'' (355.6 mm) pipeline. This pipeline was constructed to transfer petroleum from the Urucu production fields to the Solimoes port in Coari, a city in the Brazilian Amazon region. The result of this analysis indicated critical points for repair, in addition to the results obtained by the conventional evaluation (deterministic ASME B-31G method). Due to the difficulties of mobilizing staff and executing the necessary repairs in remote areas like the Amazon forest, the probabilistic tool was extremely useful, improving the pipeline integrity level and avoiding future additional costs. (author)

  20. Spectrophotometric study of Saturn's main rings by means of Monte Carlo ray-tracing and Hapke's theory (United States)

    Ciarniello, Mauro; Filacchione, Gianrico; D'Aversa, Emiliano; Cuzzi, Jeffrey N.; Capaccioni, Fabrizio; Hedman, Matthew M.; Dalle Ore, Cristina M.; Nicholson, Philip D.; Clark, Roger Nelson; Brown, Robert H.; Cerroni, Priscilla; Spilker, Linda


    This work is devoted to the investigation of the spectrophotometric properties of Saturn's rings from Cassini-VIMS (Visible and Infrared Mapping Spectrometer) observations. The dataset used for this analysis consists of ten radial spectrograms of the rings, derived in Filacchione et al. (2014) from radial mosaics produced by VIMS. The spectrograms report the measured radiance factor of Saturn's main rings as a function of both radial distance (from 73,500 to 141,375 km) and wavelength (0.35-5.1 µm) for different observation geometries (phase angle ranging over the 1.9°-132.2° interval). We take advantage of a Monte Carlo ray-tracing routine to characterize the photometric behavior of the rings at each wavelength and derive the spectral Bond albedo of the ring particles. This quantity is used to infer the composition of the regolith covering the ring particles by applying Hapke's theory. Four different regions, characterized by different optical depths and located respectively in the C ring, inner B ring, mid B ring and A ring, have been investigated. Results from spectral modeling indicate that the ring spectrum can be described by water ice with minimal inclusions of organic material (tholins), with the abundance of a neutral absorber varying across the investigated regions, being maximum in the thinnest C ring and minimum in the thickest mid B ring. This distribution of the neutral absorber is interpreted as the result of contamination by exogenous material, which is more effective in the less dense regions of the rings because of their lower content of pure water ice.

  1. Monte Carlo Computational Modeling of Atomic Oxygen Interactions (United States)

    Banks, Bruce A.; Stueber, Thomas J.; Miller, Sharon K.; De Groh, Kim K.


    Computational modeling of the erosion of polymers caused by atomic oxygen in low Earth orbit (LEO) is useful for determining areas of concern for spacecraft environmental durability. Successful modeling requires that the characteristics of the environment, such as the atomic oxygen energy distribution, flux, and angular distribution, be properly represented in the model. Thus, whether the atomic oxygen arrives normal or inclined to a surface, and whether it arrives from a consistent direction or sweeps across the surface, as in the case of polymeric solar array blankets, is important in determining durability. When atomic oxygen impacts a polymer surface it can react, removing a certain volume per incident atom (called the erosion yield); recombine; or be ejected as an active oxygen atom to potentially react with other polymer atoms or exit into space. Scattered atoms can also have a lower energy as a result of partial or total thermal accommodation. Many solutions to polymer durability in LEO involve protective thin films of metal oxides, such as SiO2, to prevent atomic oxygen erosion; such protective films have their own interaction characteristics. A Monte Carlo computational model has been developed which takes into account the various types of atomic oxygen arrival, how it reacts with a representative polymer (polyimide Kapton H), and how it reacts at defect sites in an oxide protective coating, such as SiO2, on that polymer. Although this model was initially intended to determine atomic oxygen erosion behavior at defect sites for the International Space Station solar arrays, it has been used to predict atomic oxygen erosion or oxidation behavior on many other spacecraft components, including erosion of polymeric joints, durability of solar array blanket box covers, and scattering of atomic oxygen into telescopes and microwave cavities where oxidation of critical component surfaces can take place. The computational model is a two-dimensional model
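
    A minimal two-dimensional sketch conveys the flavor of such a model of undercutting erosion at a coating defect: atoms enter through a crack in the coating, random-walk inside the growing cavity, and on each encounter with polymer either react (eroding one cell) or scatter. All parameters here, including the reaction probability and the grid, are hypothetical choices, not the NASA model's values.

```python
import numpy as np

def erode_at_defect(width=40, depth=30, crack=(18, 22), n_atoms=8000,
                    react_prob=0.15, seed=0):
    """Toy 2D Monte Carlo of atomic oxygen undercutting at a coating defect."""
    rng = np.random.default_rng(seed)
    polymer = np.ones((depth, width), dtype=bool)    # True = intact polymer
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    eroded = 0
    for _ in range(n_atoms):
        y, x = -1, int(rng.integers(crack[0], crack[1]))  # arrive above the crack
        dy, dx = 1, 0                                     # initial direction: down
        for _ in range(200):                              # cap the walk length
            ny, nx = y + dy, x + dx
            if ny < 0:
                if crack[0] <= x < crack[1]:
                    break                          # escaped to space via the crack
                dy, dx = moves[rng.integers(4)]    # bounced off coating underside
                continue
            if nx < 0 or nx >= width or ny >= depth:
                break                              # left the modelled region
            if polymer[ny, nx]:
                if rng.random() < react_prob:
                    polymer[ny, nx] = False        # reaction erodes one cell
                    eroded += 1
                    break                          # atom consumed by the reaction
                dy, dx = moves[rng.integers(4)]    # non-reactive scatter
            else:
                y, x = ny, nx                      # free flight through the cavity
    return polymer, eroded
```

    Because rejected atoms scatter in random directions, the eroded cavity widens under the intact coating, which is the undercutting behavior the paragraph describes.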

  2. Quantum Monte Carlo calculations with chiral effective field theory interactions

    Energy Technology Data Exchange (ETDEWEB)

    Tews, Ingo


    The neutron-matter equation of state connects several physical systems over a wide density range, from cold atomic gases in the unitary limit at low densities, to neutron-rich nuclei at intermediate densities, up to neutron stars which reach supranuclear densities in their core. An accurate description of the neutron-matter equation of state is therefore crucial to describe these systems. To calculate the neutron-matter equation of state reliably, precise many-body methods in combination with a systematic theory for nuclear forces are needed. Chiral effective field theory (EFT) is such a theory. It provides a systematic framework for the description of low-energy hadronic interactions and enables calculations with controlled theoretical uncertainties. Chiral EFT makes use of a momentum-space expansion of nuclear forces based on the symmetries of Quantum Chromodynamics, which is the fundamental theory of strong interactions. In chiral EFT, the description of nuclear forces can be systematically improved by going to higher orders in the chiral expansion. On the other hand, continuum Quantum Monte Carlo (QMC) methods are among the most precise many-body methods available to study strongly interacting systems at finite densities. They treat the Schroedinger equation as a diffusion equation in imaginary time and project out the ground-state wave function of the system starting from a trial wave function by propagating the system in imaginary time. To perform this propagation, continuum QMC methods require as input local interactions. However, chiral EFT, which is naturally formulated in momentum space, contains several sources of nonlocality. In this Thesis, we show how to construct local chiral two-nucleon (NN) and three-nucleon (3N) interactions and discuss results of first QMC calculations for pure neutron systems. We have performed systematic auxiliary-field diffusion Monte Carlo (AFDMC) calculations for neutron matter using local chiral NN interactions. By
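
    The imaginary-time projection that continuum QMC methods rely on can be illustrated with a toy diffusion Monte Carlo run for a single particle in a 1D harmonic oscillator: a Gaussian random step plays the role of the kinetic term and weighted resampling plays the role of the potential, so the walker population relaxes to the ground state from an arbitrary trial distribution. This is a pedagogical sketch, not the AFDMC method of the thesis.

```python
import numpy as np

def diffusion_mc(n_walkers=2000, dt=0.01, n_steps=4000, seed=1):
    """Toy diffusion Monte Carlo for V(x) = x^2/2 (units with hbar = m = omega = 1)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_walkers)              # trial distribution: all walkers at x = 0
    energies = []
    for step in range(n_steps):
        # kinetic term: imaginary-time diffusion (Gaussian random walk)
        x = x + rng.normal(scale=np.sqrt(dt), size=n_walkers)
        # potential term: branching weights
        w = np.exp(-dt * 0.5 * x**2)
        # growth estimator of the ground-state energy
        energies.append(-np.log(w.mean()) / dt)
        # resample walkers proportionally to weight (keeps the population fixed)
        x = rng.choice(x, size=n_walkers, p=w / w.sum())
    return float(np.mean(energies[n_steps // 2:]))
```

    The growth estimator converges to the exact ground-state energy of 1/2 up to a small time-step bias, illustrating how propagation in imaginary time projects out the ground state.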

  3. Quantum Monte Carlo Calculations Applied to Magnetic Molecules

    Energy Technology Data Exchange (ETDEWEB)

    Engelhardt, Larry [Iowa State Univ., Ames, IA (United States)


    We have calculated the equilibrium thermodynamic properties of Heisenberg spin systems using a quantum Monte Carlo (QMC) method. We have used some of these systems as models to describe recently synthesized magnetic molecules, and, upon comparing the results of these calculations with experimental data, have obtained accurate estimates for the basic parameters of these models. We have also performed calculations for other systems that are of more general interest, being relevant both for existing experimental data and for future experiments. Utilizing the concept of importance sampling, these calculations can be carried out in an arbitrarily large quantum Hilbert space while still avoiding any approximations that would introduce systematic errors. The only errors are statistical in nature, and as such, their magnitudes are accurately estimated during the course of a simulation. Frustrated spin systems present a major challenge to the QMC method; nevertheless, in many instances progress can be made. In this chapter, the field of magnetic molecules is introduced, paying particular attention to the characteristics that distinguish magnetic molecules from other systems that are studied in condensed matter physics. We briefly outline the typical path by which we learn about magnetic molecules, which requires a close relationship between experiments and theoretical calculations. The typical experiments are introduced here, while the theoretical methods are discussed in the next chapter. Each of these theoretical methods has a considerable limitation, also described in Chapter 2, which together serve to motivate the present work. As is shown throughout the later chapters, the present QMC method is often able to provide useful information where other methods fail. In Chapter 3, the use of Monte Carlo methods in statistical physics is reviewed, building up the fundamental ideas that are necessary in order to understand the method that has been used in this work. With these
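
    The importance-sampling idea mentioned above is most easily seen in its classical form: a Metropolis random walk visits spin configurations with Boltzmann probability, so thermal averages become plain arithmetic means over the visited states. The sketch below does this for a 2D Ising model; it is a classical stand-in for the idea, not the quantum Heisenberg method of this work.

```python
import numpy as np

def ising_metropolis(L=16, T=2.5, n_sweeps=300, seed=0):
    """Metropolis importance sampling of a 2D Ising model at temperature T
    (units J = k_B = 1); returns the mean absolute magnetization per spin."""
    rng = np.random.default_rng(seed)
    spins = np.ones((L, L), dtype=int)    # ordered start (all spins up)
    beta = 1.0 / T
    mags = []
    for sweep in range(n_sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb   # energy cost of flipping spin (i, j)
            # Metropolis acceptance: importance-samples the Boltzmann weight
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]
        if sweep >= n_sweeps // 2:        # discard the first half as burn-in
            mags.append(abs(spins.mean()))
    return float(np.mean(mags))
```

    Below the critical temperature the walk stays magnetized; above it the magnetization averages out, exactly the behavior a thermal-average estimator should reproduce.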

  4. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki


    Experimental design can be vital when experiments are resource-intensive and time-consuming. In this work, we carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, which is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data about the model parameters. One of the major difficulties in evaluating the expected information gain is that it naturally involves nested integration over a possibly high dimensional domain. We use the Multilevel Monte Carlo (MLMC) method to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, MLMC can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the MLMC method imposes fewer assumptions, such as the asymptotic concentration of posterior measures, required for instance by the Laplace approximation (LA). We test the MLMC method using two numerical examples. The first example is the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Poisson equation. We place the sensors in the locations where the pressure is measured, and we model the conductivity field as a piecewise constant random vector with two parameters. The second is a chemical enhanced oil recovery (EOR) core-flooding experiment assuming homogeneous permeability. We measure the cumulative oil recovery, from a horizontal core flooded by water, surfactant and polymer, for different injection rates. The model parameters consist of the endpoint relative permeabilities, the residual saturations and the relative permeability exponents for the three phases: water, oil and
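
    The nested integration that makes the expected information gain expensive can be seen in a toy linear-Gaussian experiment, for which the plain double-loop (nested) Monte Carlo estimator reads as below. The model and the sample sizes are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def nested_mc_eig(sigma=1.0, n_outer=2000, n_inner=2000, seed=0):
    """Nested MC estimate of the expected information gain for
    theta ~ N(0,1), y = theta + eps with eps ~ N(0, sigma^2):
    EIG = E_y[ log p(y|theta) - log p(y) ]."""
    rng = np.random.default_rng(seed)
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    theta = rng.normal(size=n_outer)                 # outer samples from the prior
    y = theta + sigma * rng.normal(size=n_outer)     # simulated experiment outcomes
    log_like = np.log(norm) - 0.5 * ((y - theta) / sigma) ** 2
    theta_in = rng.normal(size=n_inner)              # fresh prior samples (inner loop)
    like = norm * np.exp(-0.5 * ((y[:, None] - theta_in[None, :]) / sigma) ** 2)
    log_evidence = np.log(like.mean(axis=1))         # inner MC estimate of p(y)
    return float(np.mean(log_like - log_evidence))
```

    For this model the expected information gain is analytically 0.5·ln(1 + 1/σ²), so the estimator can be checked directly; the MLMC method of the paper replaces the fixed inner sample size with a hierarchy of levels to reduce the cost of exactly this inner average.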

  5. Multi-index Monte Carlo: when sparsity meets sampling

    KAUST Repository

    Haji Ali, Abdul Lateef


    We propose and analyze a novel multi-index Monte Carlo (MIMC) method for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The MIMC method is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Inspired by Giles’s seminal work, we use in MIMC high-order mixed differences instead of using first-order differences as in MLMC to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles’s MLMC analysis and which increase the domain of the problem parameters for which we achieve the optimal convergence, O(TOL⁻²). Moreover, in MIMC, the rate of increase of required memory with respect to TOL is independent of the number of directions up to a logarithmic term which allows far more accurate solutions to be calculated for higher dimensions than what is possible when using MLMC. We motivate the setting of MIMC by first focusing on a simple full tensor index set. We then propose a systematic construction of optimal sets of indices for MIMC based on properly defined profits that in turn depend on the average cost per sample and the corresponding weak error and variance. Under standard assumptions on the convergence rates of the weak error, variance and work per sample, the optimal index set turns out to be the total degree type. In some cases, using optimal index sets, MIMC achieves a better rate for the computational complexity than the corresponding rate when using full tensor index sets. We also show the asymptotic normality of the statistical error in the resulting MIMC estimator and justify in this way our error estimate, which allows both the required accuracy and the confidence level in our computational
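
    The hierarchical-difference idea that MIMC generalizes can be sketched with a plain MLMC estimator: couple fine and coarse discretizations of the same random input so that their difference has small variance, and sum the level corrections. The geometric Brownian motion example below is a standard illustration of this telescoping sum, not one of the paper's test problems.

```python
import numpy as np

def mlmc_gbm(levels=5, n_samples=20000, s0=1.0, r=0.05, sig=0.2, T=1.0, seed=0):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian motion,
    Euler-discretized with 2^l steps on level l. Fine and coarse paths on each
    level share the same Brownian increments, so the hierarchical differences
    have small variance."""
    rng = np.random.default_rng(seed)

    def euler(dw, dt):
        s = np.full(dw.shape[0], s0)
        for k in range(dw.shape[1]):
            s = s * (1 + r * dt + sig * dw[:, k])
        return s

    estimate = 0.0
    for lev in range(levels + 1):
        n_fine = 2 ** lev
        dt = T / n_fine
        dw = rng.normal(scale=np.sqrt(dt), size=(n_samples, n_fine))
        fine = euler(dw, dt)
        if lev == 0:
            estimate += fine.mean()              # coarsest level: plain MC
        else:
            dw_c = dw[:, 0::2] + dw[:, 1::2]     # coarse path: summed increments
            coarse = euler(dw_c, 2 * dt)
            estimate += (fine - coarse).mean()   # hierarchical difference
    return float(estimate)
```

    The exact answer is s0·e^{rT} ≈ 1.0513; MIMC extends this one-dimensional level hierarchy to mixed differences over several discretization directions.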

  6. Monte Carlo Simulation for LINAC Standoff Interrogation of Nuclear Material

    Energy Technology Data Exchange (ETDEWEB)

    Clarke, Shaun D [ORNL; Flaska, Marek [ORNL; Miller, Thomas Martin [ORNL; Protopopescu, Vladimir A [ORNL; Pozzi, Sara A [ORNL


    The development of new techniques for the interrogation of shielded nuclear materials relies on the use of Monte Carlo codes to accurately simulate the entire system, including the interrogation source, the fissile target and the detection environment. The objective of this modeling effort is to develop analysis tools and methods, based on a relevant scenario, which may be applied to the design of future systems for active interrogation at a standoff. For the specific scenario considered here, the analysis focuses on providing the information needed to determine the type and optimum position of the detectors. This report describes the results of simulations for a detection system employing gamma rays to interrogate fissile and nonfissile targets. The simulations were performed using specialized versions of the codes MCNPX and MCNP-PoliMi. Both the prompt neutron and gamma-ray fluxes and the delayed neutron flux have been mapped in three dimensions. The time dependence of the prompt neutrons in the system has also been characterized. For this particular scenario, the flux maps generated with the Monte Carlo model indicate that the detectors should be placed approximately 50 cm behind the exit of the accelerator, 40 cm away from the vehicle, and 150 cm above the ground. This position minimizes the number of neutrons coming from the accelerator structure and also receives the maximum flux of prompt neutrons coming from the source. The lead shielding around the accelerator minimizes the gamma-ray background from the accelerator in this area. The number of delayed neutrons emitted from the target is approximately seven orders of magnitude less than the prompt neutrons emitted from the system. Therefore, in order to possibly detect the delayed neutrons, the detectors should be active only after all prompt neutrons have scattered out of the system. Preliminary results have shown this time to be greater than 5 μs after the accelerator pulse. This type of system is illustrative of a

  7. GPU-Monte Carlo based fast IMRT plan optimization

    Directory of Open Access Journals (Sweden)

    Yongbao Li


    Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and hinder the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates, and deposited dose is stored separately for each beamlet based on this index. Due to the limited GPU memory size, a pyramid-shaped space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet, and plan optimization follows to obtain an approximate fluence map. In the second step, more accurate beamlet doses are calculated, with the number of particles sampled for a beamlet proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to obtain a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.

  8. Quantum Monte Carlo for electronic structure: Recent developments and applications

    Energy Technology Data Exchange (ETDEWEB)

    Rodriquez, Maria Milagos Soto [Lawrence Berkeley Lab. and Univ. of California, Berkeley, CA (United States). Dept. of Chemistry


    Quantum Monte Carlo (QMC) methods have been found to give excellent results when applied to chemical systems. The main goal of the present work is to use QMC to perform electronic structure calculations. In QMC, a Monte Carlo simulation is used to solve the Schroedinger equation, taking advantage of its analogy to a classical diffusion process with branching. In the present work the author focuses on how to extend the usefulness of QMC to more meaningful molecular systems. This study is aimed at questions concerning polyatomic and large atomic number systems. The accuracy of the solution obtained is determined by the accuracy of the trial wave function's nodal structure. Efforts in the group have given great emphasis to finding optimized wave functions for the QMC calculations. Little work had been done on systematically looking at a family of systems to see how the best wave functions evolve with system size. In this work the author presents a study of trial wave functions for C, CH, C2H and C2H2. The goal is to study how to build wave functions for larger systems by accumulating knowledge from the wave functions of their fragments, as well as gaining some knowledge on the usefulness of multi-reference wave functions. In an MC calculation of a heavy atom, most moves for core electrons are rejected at reasonable time steps, so true equilibration is rarely achieved. A method proposed by Batrouni and Reynolds modifies the way the simulation is performed without altering the final steady-state solution. It introduces an acceleration matrix chosen so that all coordinates (i.e., of core and valence electrons) propagate at comparable speeds. A study of the results obtained using their proposed matrix suggests that it may not be the optimum choice. In this work the author has found that the desired mixing of coordinates between core and valence electrons is not achieved when using this matrix. A bibliography of 175 references is

  9. Evaluation of cobalt-60 energy deposit in mouse and monkey using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Woo, Sang Keun; Kim, Wook; Park, Yong Sung; Kang, Joo Hyun; Lee, Yong Jin [Korea Institute of Radiological and Medical Sciences, KIRAMS, Seoul (Korea, Republic of); Cho, Doo Wan; Lee, Hong Soo; Han, Su Cheol [Jeonbuk Department of Inhalation Research, Korea Institute of toxicology, KRICT, Jeongeup (Korea, Republic of)


    These absorbed doses can be calculated using the Monte Carlo transport code MCNP (Monte Carlo N-Particle transport code). The absorbed dose in internal radiotherapy is calculated using conventional software, such as OLINDA/EXM, or Monte Carlo simulation. However, OLINDA/EXM cannot calculate individual absorbed doses or doses to non-standard organs, such as tumors, whereas Monte Carlo simulation can calculate specific absorbed doses in non-standard organs using individual CT images. In external radiotherapy, the absorbed dose can be calculated from the specific absorbed energy in each organ using Monte Carlo simulation. The specific absorbed energy in each organ differs between species, and even within the same species, because of differences in organ size, position, and density. The aim of this study was to individually evaluate the cobalt-60 energy deposit in mouse and monkey using Monte Carlo simulation. The absorbed energy in the mouse heart was 54.6-fold higher than that in the monkey heart; likewise, the ratios for lung, liver, and urinary bladder were 88.4, 16.0, and 29.4, respectively. This indicates that the spacing of the organs and the organ masses affect the absorbed energy. These results may help in calculating absorbed dose and in producing more accurate plans for external beam radiotherapy and internal radiotherapy.

  10. Feasibility Study of Core Design with a Monte Carlo Code for APR1400 Initial core

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jinsun; Chang, Do Ik; Seong, Kibong [KEPCO NF, Daejeon (Korea, Republic of)


    The Monte Carlo calculation has become more popular and useful nowadays due to the rapid progress in computing power and parallel calculation techniques. Recently, there have been many attempts to analyze a commercial core with a Monte Carlo transport code using this enhanced computing capability. In this paper, a Monte Carlo calculation of the APR1400 initial core has been performed and the results are compared with those of a conventional deterministic code to assess the feasibility of core design using a Monte Carlo code. SERPENT, a 3D continuous-energy Monte Carlo reactor physics burnup calculation code, is used for this purpose, and the KARMA-ASTRA code system is used as the deterministic code for comparison. A preliminary investigation of the feasibility of commercial core design with a Monte Carlo code was performed in this study. Simplified geometry modeling was used for the reactor core surroundings, and the reactor coolant model is based on a two-region model. The reactivity difference at the HZP ARO condition between the Monte Carlo code and the deterministic code is consistent, and the reactivity difference during depletion could be reduced by adopting a realistic moderator temperature. The reactivity difference calculated at the HFP, BOC, ARO equilibrium condition was 180 ±9 pcm with the axial moderator temperature of the deterministic code. The computing time would at present be a significant burden for the application of a Monte Carlo code to commercial core design, even with parallel computing, because numerous core simulations are required for an actual loading pattern search. One remedy would be a combination of the Monte Carlo code and the deterministic code to generate the physics data. A comparison of physics parameters with sophisticated moderator temperature modeling and depletion will be performed in a further study.

  11. Evaluation of measurement uncertainty and its numerical calculation by a Monte Carlo method (United States)

    Wübbeler, Gerd; Krystek, Michael; Elster, Clemens


    The Guide to the Expression of Uncertainty in Measurement (GUM) is the de facto standard for the evaluation of measurement uncertainty in metrology. Recently, evaluation of measurement uncertainty has been proposed on the basis of probability density functions (PDFs) using a Monte Carlo method. The relation between this PDF approach and the standard method described in the GUM is outlined. The Monte Carlo method required for the numerical calculation of the PDF approach is described and illustrated by its application to two examples. The results obtained by the Monte Carlo method for the two examples are compared to the corresponding results when applying the GUM.
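
    A minimal sketch of the PDF approach: draw samples from the input distributions, propagate them through the measurement model, and read off the standard uncertainty and a coverage interval from the output sample. The model Y = X1·X2 and its input PDFs below are illustrative assumptions, not the examples of the paper.

```python
import numpy as np

def propagate(n=200000, seed=0):
    """Monte Carlo propagation of distributions for Y = X1 * X2:
    X1 is Gaussian, X2 is rectangular (uniform), as is typical for a
    calibration value with a resolution-type uncertainty."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(10.0, 0.1, n)            # N(10, 0.1^2)
    x2 = rng.uniform(1.9, 2.1, n)            # rectangular, u = 0.2 / sqrt(12)
    y = x1 * x2                              # push samples through the model
    lo, hi = np.percentile(y, [2.5, 97.5])   # 95 % coverage interval
    return y.mean(), y.std(ddof=1), (lo, hi)
```

    For comparison, the GUM law of propagation of uncertainty gives u(y) ≈ sqrt((x̄2·u1)² + (x̄1·u2)²) ≈ 0.61 here; the Monte Carlo approach additionally yields the full output PDF and its coverage interval without linearization.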

  12. Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations

    DEFF Research Database (Denmark)

    Pettersen, E. E.; Demazire, C.; Jareteg, K.


    that corresponds to the real part of the neutron balance, and one that corresponds to the imaginary part. The two equivalent problems are in nature similar to two subcritical systems driven by external neutron sources, and can thus be treated as such in a Monte Carlo framework. The definition of these two...... of light water reactor conditions in an infinite lattice of fuel pins surrounded by water. The test case highlights flux gradients that are steeper in the Monte Carlo-based transport solution than in the diffusion-based solution. Compared to other Monte Carlo-based methods earlier proposed for carrying out...

  13. Flat histogram diagrammatic Monte Carlo method: calculation of the Green's function in imaginary time. (United States)

    Diamantis, Nikolaos G; Manousakis, Efstratios


    The diagrammatic Monte Carlo (DiagMC) method is a numerical technique which samples the entire diagrammatic series of the Green's function in quantum many-body systems. In this work, we incorporate the flat histogram principle in the diagrammatic Monte Carlo method, and we term the improved version the "flat histogram diagrammatic Monte Carlo" method. We demonstrate the superiority of this method over the standard DiagMC in extracting the long-imaginary-time behavior of the Green's function, without incorporating any a priori knowledge about this function, by applying the technique to the polaron problem.
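
    The flat-histogram principle can be seen in its simplest classical setting, Wang-Landau sampling of a density of states: the running estimate g(E) biases the walk toward rarely visited energies until the energy histogram is flat. The toy model below (E = number of up spins in a chain of independent spins, so the exact answer is the binomial coefficient C(n, E)) illustrates the principle only; it is not the DiagMC algorithm of the paper.

```python
import numpy as np

def wang_landau(n_spins=10, flatness=0.9, f_final=1e-4, seed=0):
    """Wang-Landau estimate of ln g(E) for E = number of up spins."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(0, 2, n_spins)
    e = int(spins.sum())
    log_g = np.zeros(n_spins + 1)     # running estimate of ln g(E)
    hist = np.zeros(n_spins + 1)
    log_f = 1.0                       # modification factor ln f
    while log_f > f_final:
        for _ in range(5000):
            i = int(rng.integers(n_spins))
            e_new = e + (1 - 2 * int(spins[i]))   # a flip changes E by +/-1
            # accept with min(1, g(E)/g(E_new)): pushes the walk to rare energies
            if rng.random() < np.exp(log_g[e] - log_g[e_new]):
                spins[i] ^= 1
                e = e_new
            log_g[e] += log_f         # update the density-of-states estimate
            hist[e] += 1
        if hist.min() > flatness * hist.mean():   # histogram flat enough?
            log_f /= 2.0              # refine the modification factor
            hist[:] = 0
    return log_g - log_g[0]           # normalize so that ln g(0) = 0
```

    The same mechanism, applied to the expansion-order histogram of the diagrammatic series, is what lets the flat-histogram DiagMC reach the long-imaginary-time tail of the Green's function.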

  14. Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations (United States)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias


    A recent numerical result (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the asymptotic advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained in our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.

  15. AMS-02 Monte Carlo Production in Science Operation Centre at Southeast University (United States)

    Luo, Junzhou; Zhang, Jinghui; Dong, Fang; Song, Aibo; Xiong, Runqun; Shi, Jiyuan; Huang, Feiqiao; Shi, Renli; Liu, Zijian; Choutko, Vitaly; Egorov, Alexander; Eline, Alexandre


    The Southeast University (SEU) Science Operation Centre (SOC) is one of the computing centres of the Alpha Magnetic Spectrometer (AMS-02) experiment. It provides 2016 CPU cores for AMS Monte Carlo production and a dedicated ∼1 Gbps Long Fat Network (LFN) for AMS data transmission between SEU and CERN. In this paper, the development and deployment of the SEU SOC's automated Monte Carlo production management system is discussed in detail. Data transmission optimizations are further introduced in order to speed up the data transfer over the LFN between the SEU SOC and CERN. In addition, a monitoring tool for the SEU SOC's Monte Carlo production is also presented.


    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL


    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  17. Monte Carlo calculations of thermodynamic properties of deuterium under high pressures

    Energy Technology Data Exchange (ETDEWEB)

    Levashov, P R; Filinov, V S; BoTan, A; Fortov, V E [Joint Institute for High Temperatures, Izhorskaya 13-2, Moscow 125412 (Russian Federation); Bonitz, M [Cristian-Albrechts-Universitaet zu Kiel, ITPA, Leibnizstr. 15, 24098 Kiel (Germany)]


    Two different numerical approaches have been applied to calculations of shock Hugoniots and a compression isentrope of deuterium: direct path integral Monte Carlo and reactive Monte Carlo. The results show good agreement between the two methods at intermediate pressures, which indicates a correct accounting of dissociation effects in the direct path integral Monte Carlo method. Experimental data on both shock and quasi-isentropic compression of deuterium are well described by the calculations. Thus the dissociation of deuterium molecules in these experiments, together with the interparticle interaction, plays a significant role.

  18. Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K. [Argonne National Laboratory, Mathematics and Computer Science Division, 9700 South Cass Avenue, Lemont, Illinois 60439; Lund, Amanda L. [Argonne National Laboratory, Mathematics and Computer Science Division, 9700 South Cass Avenue, Lemont, Illinois 60439; Siegel, Andrew R. [Argonne National Laboratory, Mathematics and Computer Science Division, 9700 South Cass Avenue, Lemont, Illinois 60439


    The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N⁻¹). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
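
    The effect of intergenerational correlation on tally statistics can be reproduced with a stand-in model: treat the generation-to-generation tally as an AR(1) sequence with correlation ρ and compare the observed variance of the mean with the naive uncorrelated value σ²/N. The AR(1) model is an illustrative assumption, not the reactor physics of the paper.

```python
import numpy as np

def variance_of_mean(rho=0.9, n_gens=1000, n_ensembles=1000, seed=0):
    """Variance of the mean of a correlated tally, estimated over an ensemble
    of independent runs, versus the naive sigma^2 / N value."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=(n_gens, n_ensembles))
    x = np.empty((n_gens, n_ensembles))
    x[0] = eps[0]                          # start in the stationary distribution
    for t in range(1, n_gens):             # AR(1): corr(x_t, x_{t-1}) = rho
        x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * eps[t]
    ensemble_means = x.mean(axis=0)        # one tally mean per independent run
    true_var = ensemble_means.var()
    naive_var = 1.0 / n_gens               # sigma^2 / N, ignoring correlation
    return float(true_var), float(naive_var)
```

    For an AR(1) process the inflation factor approaches (1 + ρ)/(1 − ρ), which is about 19 for ρ = 0.9; running independent ensembles, as the paper recommends, attacks exactly this factor.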

  19. Monte Carlo in radiotherapy: experience in a distributed computational environment (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.


    New technologies in cancer radiotherapy require a more accurate computation of the dose delivered in the radiotherapy treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results on using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a complete MC simulation to compute the dose distribution on phantoms irradiated with a radiotherapy accelerator. Using the BEAMnrc and GEANT4 MC-based codes we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm), both in PDD (percentage depth dose) and in transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.

  20. Monte Carlo simulation of gamma ray tomography for image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Guedes, Karlos A.N.; Moura, Alex; Dantas, Carlos; Melo, Silvio; Lima, Emerson, E-mail: [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Meric, Ilker [University of Bergen (Norway)


    Monte Carlo (MC) simulations of an object of known density and shape were validated against gamma-ray tomography in static experiments. An aluminum half-moon piece placed inside a steel pipe served as the MC simulation test object and was also measured by means of gamma-ray transmission. The wall effect of the steel pipe, due to the irradiation geometry of a single source-detector-pair tomography, was evaluated by comparison with theoretical data. The MCNPX code requires a separately defined geometry for each photon trajectory, which practically prevents its direct use for tomography reconstruction simulation. The solution was a program written in the Delphi language that automates the creation of input files. Simulations of tomography data with the automated MCNPX code were carried out and validated against experimental data; the data produced in this sequence required a databank for storage. The experimental setup used a Cesium-137 isotopic radioactive source (7.4 × 10^9 Bq) and a NaI(Tl) scintillation detector with a (51 × 51) × 10^-3 m crystal coupled to a multichannel analyzer, together with a stainless steel tube of 0.154 m internal diameter and 0.014 m wall thickness. The results show that the MCNPX simulation code, adapted to automated input files, is useful for generating the data matrix M(θ,t) of a computerized gamma-ray tomography for any object of known density and regular shape. Experimental validation used the RMSE of gamma-ray paths and of attenuation coefficient data. (author)

  1. Monte Carlo simulation of photon way in clinical laser therapy (United States)

    Ionita, Iulian; Voitcu, Gabriel


    The multiple scattering of light can increase the efficiency of laser therapy of inflammatory diseases by enlarging the treated area. Light absorption is essential for the treatment, while scattering dominates the transport. Multiple scattering effects must be introduced through the Monte Carlo (MC) method for modeling light transport in tissue, and finally used to calculate the optical parameters. Diffuse reflectance measurements were made on highly concentrated live leukocyte suspensions under conditions similar to in-vivo measurements. The results were compared with the values determined by MC calculations, and the latter were adjusted to match the measured values of diffuse reflectance. The principal idea of MC simulations applied to absorption and scattering phenomena is to follow the optical path of a photon through the turbid medium. The concentrated live-cell solution is a compromise between a homogeneous layer, as in the MC model, and light-live cell interaction, as in in-vivo experiments. In this way the MC simulation allows us to compute the absorption coefficient. The values of the optical parameters, derived from simulation by best fitting of the measured reflectance, were used to determine the effective cross section. Thus we can compute the absorbed radiation dose at the cellular level.
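    The principal idea described above, following a photon's path through a turbid medium until it is absorbed or escapes, can be sketched with a minimal one-dimensional random walk; the optical coefficients below are assumed toy values, not the paper's leukocyte parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed toy optical coefficients (per mm); not the paper's tissue values.
mu_a, mu_s = 0.1, 1.0
mu_t = mu_a + mu_s
albedo = mu_s / mu_t            # probability that a collision is a scattering event

def photon_fate():
    """Follow one photon in a 1-D semi-infinite medium until it exits or is absorbed."""
    depth, direction = 0.0, 1.0
    while True:
        depth += direction * (-np.log(rng.random()) / mu_t)  # sample free path length
        if depth < 0.0:
            return "reflected"               # escaped back through the surface
        if rng.random() > albedo:
            return "absorbed"                # collision rouletted as absorption
        direction = rng.choice([-1.0, 1.0])  # isotropic redirection (1-D toy)

fates = [photon_fate() for _ in range(20_000)]
frac_absorbed = fates.count("absorbed") / len(fates)
print(frac_absorbed)
```

    Real tissue codes track three-dimensional directions and anisotropic phase functions; this sketch only conveys the sample-path, roulette, and tally loop.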

  2. Multi-Index Monte Carlo (MIMC) When sparsity meets sampling

    KAUST Repository

    Tempone, Raul


    This talk focuses on our newest method: Multi Index Monte Carlo (MIMC). The MIMC method uses a stochastic combination technique to solve the given approximation problem, generalizing the notion of standard MLMC levels into a set of multi-indices that should be properly chosen to exploit the available regularity. Indeed, instead of using first-order differences as in standard MLMC, MIMC uses high-order differences to reduce the variance of the hierarchical differences dramatically. This in turn gives a new, improved complexity result that enlarges the domain of problem parameters for which the method achieves the optimal convergence rate, O(TOL^-2). Using the optimal index sets that we determined, MIMC achieves a computational complexity whose rate does not depend on the dimensionality of the underlying problem, up to logarithmic factors. We present numerical results for a three-dimensional PDE with random coefficients to substantiate some of the derived computational complexity rates. Finally, using the Lindeberg-Feller theorem, we also show the asymptotic normality of the statistical error in the MIMC estimator, justifying our error estimate, which allows prescribing both the required accuracy and the confidence in the final result.

  3. Three-dimensional Monte Carlo calculation of some nuclear parameters (United States)

    Günay, Mehtap; Şeker, Gökmen


    In this study, a fusion-fission hybrid reactor system was designed using 9Cr2WVTa ferritic steel as the structural material and the molten salt-heavy metal mixtures 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 as fluids. The fluids were used in the liquid first wall, blanket and shield zones of the system. A beryllium (Be) zone 3 cm wide was placed between the liquid first wall and the blanket for neutron multiplication. This study analyzes nuclear parameters such as the tritium breeding ratio (TBR), energy multiplication factor (M), heat deposition rate and fission reaction rate in the liquid first wall, blanket and shield zones, and investigates the effects of the reactor-grade Pu content in the designed system on these nuclear parameters. Three-dimensional analyses were performed using the Monte Carlo code MCNPX-2.7.0 and the nuclear data library ENDF/B-VII.0.

  4. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati


    During the past few years, there have been several studies on portfolio management. One of the primary concerns on any stock market is to detect the risk associated with various assets. One of the recognized methods to measure, forecast, and manage the existing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk that uses standard statistical techniques, and it has increasingly been used in other fields as well. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using the Monte Carlo simulation technique at the 95% confidence level. The variable used in the present study was the daily return resulting from daily stock price changes. Moreover, the optimal investment weight of each of the selected stocks was determined using a hybrid Markowitz and Winker model. The results showed that, at the 95% confidence level, the maximum loss in the next day would not exceed 1,259,432 Rials.
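    The core of a Monte Carlo VaR calculation at the 95% confidence level can be sketched in a few lines; the return distribution and portfolio value below are hypothetical stand-ins, not data from the Tehran Stock Exchange study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical one-day return model: these numbers are illustrative only.
mu, sigma = 0.0005, 0.02          # mean and volatility of daily returns
portfolio_value = 1_000_000       # position size (hypothetical units)
n_scenarios = 100_000

# Simulate one-day profit-and-loss scenarios; the 95% VaR is the loss at the
# 5th percentile of the simulated P&L distribution.
pnl = portfolio_value * rng.normal(mu, sigma, n_scenarios)
var_95 = -np.percentile(pnl, 5)

print(round(var_95))
```

    In the study, the simulated returns would instead be drawn from the fitted distribution of each stock's historical daily returns, with portfolio weights from the Markowitz-Winker optimization.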

  5. Quantum Monte Carlo tunneling from quantum chemistry to quantum annealing (United States)

    Mazzola, Guglielmo; Smelyanskiy, Vadim N.; Troyer, Matthias


    Quantum tunneling is ubiquitous across different fields, from quantum chemical reactions and magnetic materials to quantum simulators and quantum computers. While simulating the real-time quantum dynamics of tunneling is infeasible for high-dimensional systems, quantum tunneling also shows up in quantum Monte Carlo (QMC) simulations, which aim to simulate quantum statistics with resources growing only polynomially with the system size. Here we extend the recent results obtained for quantum spin models [Phys. Rev. Lett. 117, 180402 (2016), 10.1103/PhysRevLett.117.180402], and we study continuous-variable models for proton transfer reactions. We demonstrate that QMC simulations efficiently recover the scaling of ground-state tunneling rates due to the existence of an instanton path, which always connects the reactant state with the product. We discuss the implications of our results in the context of quantum chemical reactions and quantum annealing, where quantum tunneling is expected to be a valuable resource for solving combinatorial optimization problems.

  6. Asteroid mass estimation using Markov-chain Monte Carlo (United States)

    Siltala, Lauri; Granvik, Mikael


    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
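    A minimal random-walk Metropolis sampler, of the general kind underlying the MCMC mass-estimation approach, looks like the following; the one-parameter Gaussian "posterior" is a toy stand-in for the paper's at-least-13-dimensional inverse problem:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(m):
    """Toy log-posterior for a single 'mass' parameter: Gaussian around 3.0, width 0.5."""
    return -0.5 * ((m - 3.0) / 0.5) ** 2

# Random-walk Metropolis: propose a step, accept with probability
# min(1, posterior ratio); otherwise keep the current state.
m, chain = 0.0, []
for _ in range(20_000):
    proposal = m + rng.normal(0.0, 0.6)
    if np.log(rng.random()) < log_post(proposal) - log_post(m):
        m = proposal
    chain.append(m)

samples = np.array(chain[5_000:])   # discard burn-in
print(samples.mean(), samples.std())
```

    The sample mean and spread recover the toy posterior's parameters; unlike linearized mass-estimation methods, the full chain exposes any non-Gaussian structure in the uncertainties, which is the advantage the abstract highlights.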

  7. Monte Carlo analysis of radiative transport in oceanographic lidar measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale


    The analysis of oceanographic lidar system measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular, the following are considered: Rayleigh elastic scattering, produced by atoms and molecules that are small with respect to the laser emission wavelength (i.e. water molecules); Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols); Raman inelastic scattering, typical of water; absorption by water and by inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols; and fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability that the receiver receives a contribution from photons coming back, after an interaction, into the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means for modelling a lidar system with realistic geometric constraints. The semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is

  8. Monte-Carlo Tree Search in Settlers of Catan (United States)

    Szita, István; Chaslot, Guillaume; Spronck, Pieter

    Games are considered important benchmark opportunities for artificial intelligence research. Modern strategic board games can typically be played by three or more people, which makes them suitable test beds for investigating multi-player strategic decision making. Monte-Carlo Tree Search (MCTS) is a recently published family of algorithms that has achieved successful results with classical, two-player, perfect-information games such as Go. In this paper we apply MCTS to the multi-player, non-deterministic board game Settlers of Catan. We implemented an agent that is able to play against computer-controlled and human players. We show that MCTS can be adapted successfully to multi-agent environments, and present two approaches for providing the agent with a limited amount of domain knowledge. Our results show that the agent has considerable playing strength when compared to a game implementation with existing heuristics. We may therefore conclude that MCTS is a suitable tool for building a strong Settlers of Catan player.
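    MCTS implementations commonly rank children in the selection step with the UCB1 formula; a generic sketch follows, where the exploration constant and toy win/visit statistics are illustrative and not taken from the paper's Settlers of Catan agent:

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    """UCT child score: mean reward plus an exploration bonus; unvisited children first."""
    if visits == 0:
        return float("inf")      # force every child to be tried at least once
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Toy statistics for three children of a node visited 24 times.
children = [(10, 20), (3, 4), (0, 0)]    # (wins, visits) pairs
parent_visits = 24
best = max(range(len(children)), key=lambda i: ucb1(*children[i], parent_visits))
print(best)  # the unvisited child is selected first -> 2
```

    Multi-player and non-deterministic adaptations, as in the paper, change how rewards are backed up and how chance nodes are expanded, but the selection rule keeps this exploitation-plus-exploration shape.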

  9. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Energy Technology Data Exchange (ETDEWEB)

    Caccia, B [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy); Mattia, M [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy); Amati, G [Inter-University Consortium for the Application of Super-Computing for Universities and Research (CASPUR), Rome (Italy); Andenna, C [Istituto Superiore Prevenzione e Sicurezza del Lavoro (ISPESL), Rome (Italy); Benassi, M [Medical Physics Department, Istituto Regina Elena, Rome (Italy); D' Angelo, A [Medical Physics Department, Istituto Regina Elena, Rome (Italy); Frustagli, G [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy); Iaccarino, G [Medical Physics Department, Istituto Regina Elena, Rome (Italy); Occhigrossi, A [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy); Valentini, S [Istituto Superiore di Sanita (ISS) and Istituto Nazionale di Fisica Nucleare (INFN), Rome (Italy)


    New technologies in cancer radiotherapy require more accurate computation of the dose delivered in the radiotherapy treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results on using Monte Carlo (MC) codes for dose calculation in treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a complete MC simulation to compute dose distributions on phantoms irradiated with a radiotherapy accelerator. Using the MC-based codes BEAMnrc and GEANT4, we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm), both in PDD (Percentage Depth Dose) and in transversal sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.

  10. Learning About Ares I from Monte Carlo Simulation (United States)

    Hanson, John M.; Hall, Charlie E.


    This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.

  11. Challenges for large scale ab initio Quantum Monte Carlo (United States)

    Kent, Paul


    Ab initio Quantum Monte Carlo is an electronic structure method that is highly accurate, well suited to large scale computation, and potentially systematically improvable in accuracy. Due to increases in computer power, the method has been applied to systems where established electronic structure methods have difficulty reaching the accuracies desired to inform experiment without empiricism, a necessary step in the design of materials and a helpful step in the improvement of cheaper and less accurate methods. Recent applications include accurate phase diagrams of simple materials through to phenomena in transition metal oxides. Nevertheless there remain significant challenges to achieving a methodology that is robust and systematically improvable in practice, as well as capable of exploiting the latest generation of high-performance computers. In this talk I will describe the current state of the art, recent applications, and several significant challenges for continued improvement. Supported through the Predictive Theory and Modeling for Materials and Chemical Science program by the Office of Basic Energy Sciences (BES), Department of Energy (DOE).

  12. Quantum Monte Carlo Calculations of Excitations in Hydrogenated Germanium Clusters (United States)

    Vincent, Jordan; Kim, Jeongnim; Martin, Richard


    Quantum Monte Carlo (QMC) calculations are presented for the energies of ground and excited states of the Ge atom and of hydrogen-passivated closed-shell molecules and clusters: GeH4, Ge2H6, Ge5H12, Ge10H16 and Ge29H36. We compare the results for excitations with previous QMC and time-dependent Density Functional Theory (TD-DFT) calculations for the corresponding silicon clusters [1,2]; in particular, we find that the preliminary result for the lowest excitation energy of Ge29H36, 5.08(29) eV, is lower than the gap of 5.4 eV reported for Si [2]. Core-valence partitioning for Ge is implemented by replacing the core states with a Hartree-Fock pseudopotential plus a Core Polarization Potential (CPP) [3]. Core-valence correlation treated by the CPP is shown to be essential for accurate atomic energies and significant for the molecules, but smaller in the clusters. [1] Porter et al., PRB 64, 035320 (2001). [2] Williamson et al., PRL 89, 196803 (2002). [3] Shirley and Martin, PRB 47, 15413 (1993)

  13. Random Number Generation for Petascale Quantum Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ashok Srinivasan


    The quality of random number generators can affect the results of Monte Carlo computations, especially when a large number of random numbers are consumed. Furthermore, correlations present between different random number streams in a parallel computation can further affect the results. The SPRNG software, which the author had developed earlier, has pseudo-random number generators (PRNGs) capable of producing large numbers of streams with large periods. However, they had previously been empirically tested on only a thousand streams. In the work summarized here, we tested the SPRNG generators with over a hundred thousand streams, involving over 10^14 random numbers per test on some tests. We also tested the popular Mersenne Twister. We believe that these are the largest tests of PRNGs, both in terms of the number of streams tested and the number of random numbers tested. We observed defects in some of these generators, including the Mersenne Twister, while a few generators appeared to perform well. We also corrected an error in the implementation of one of the SPRNG generators.

  14. Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL


    This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly to the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.

  15. Monte Carlo Alpha Iteration Algorithm for a Subcritical System Analysis

    Directory of Open Access Journals (Sweden)

    Hyung Jin Shim


    The α-k iteration method, which searches for the fundamental-mode alpha-eigenvalue via iterative updates of the fission source distribution, has been successfully used for Monte Carlo (MC) alpha-static calculations of supercritical systems. However, for deep subcritical system analysis the α-k iteration method suffers from a gigantic number of neutron generations or a huge neutron weight, which leads to abnormal termination of the MC calculations. In order to stably estimate the prompt neutron decay constant (α) of prompt subcritical systems regardless of subcriticality, we propose a new MC alpha-static calculation method named the α-iteration algorithm. The new method is derived by directly applying the power method to the α-mode eigenvalue equation, and its calculation stability is achieved by controlling the number of time-source neutrons, which are generated in proportion to α divided by the neutron speed in MC neutron transport simulations. The effectiveness of the α-iteration algorithm is demonstrated for two-group homogeneous problems with varying subcriticality by comparisons with analytic solutions. The applicability of the proposed method is evaluated for an experimental benchmark of a thorium-loaded accelerator-driven system.
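    The power method the authors apply to the α-mode eigenvalue equation can be illustrated on a plain matrix; this generic sketch is not the transport implementation, only the underlying iteration:

```python
import numpy as np

# Power iteration: repeatedly apply the operator and renormalize; the norm of
# the image converges to the dominant eigenvalue (here 5, the other being 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
v = np.array([1.0, 0.0])
for _ in range(100):
    w = A @ v
    eig = np.linalg.norm(w)    # valid estimate because v is kept at unit norm
    v = w / eig
print(round(eig, 6))  # -> 5.0
```

    In the α-iteration, the role of A is played by the time-dependent transport operator and the iterate is the time-source neutron population rather than a small vector.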

  16. Monte Carlo studies on neutron interactions in radiobiological experiments (United States)

    Shahmohammadi Beni, Mehrdad; Hau, Tak Cheong; Krstic, D.; Nikezic, D.


    The Monte Carlo method was used to study the characteristics of neutron interactions with cells underneath a water medium layer of varying thickness. The following results were obtained. (1) The fractions of neutron interactions with 1H, 12C, 14N and 16O nuclei in the cell layer were studied. The fraction for 1H increased with increasing medium thickness, while those for 12C, 14N and 16O nuclei decreased. The bulges in the interaction fractions for 12C, 14N and 16O nuclei were explained by the resonance spikes in the interaction cross-section data. The interaction fraction decreased in the order 1H > 16O > 12C > 14N. (2) In general, as the medium thickness increased, the number of “interacting neutrons” which exited the medium and then further interacted with the cell layer increased. (3) The area under the angular distributions for “interacting neutrons” decreased with increasing incident neutron energy. These results should be useful for deciphering the reasons behind discrepancies among existing results in the literature. PMID:28704557

  17. Non-analog Monte Carlo estimators for radiation momentum deposition

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, Jeffery D [Los Alamos National Laboratory; Hykes, Joshua M [Los Alamos National Laboratory


    The standard method for calculating radiation momentum deposition in Monte Carlo simulations is the analog estimator, which tallies the change in a particle's momentum at each interaction with the matter. Unfortunately, the analog estimator can suffer from large amounts of statistical error. In this paper, we present three new non-analog techniques for estimating momentum deposition. Specifically, we use absorption, collision, and track-length estimators to evaluate a simple integral expression for momentum deposition that does not contain terms that can cause large amounts of statistical error in the analog scheme. We compare our new non-analog estimators to the analog estimator with a set of test problems that encompass a wide range of material properties and both isotropic and anisotropic scattering. In nearly all cases, the new non-analog estimators outperform the analog estimator. The track-length estimator consistently yields the highest performance gains, improving upon the analog-estimator figure of merit by factors of up to two orders of magnitude.

  18. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations (United States)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter


    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  19. Monte Carlo simulations of ionization potential depression in dense plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Stransky, M., E-mail: [Department of Radiation and Chemical Physics, Institute of Physics ASCR, Na Slovance 2, 182 21 Prague 8 (Czech Republic)


    A particle-particle grand canonical Monte Carlo model with Coulomb pair-potential interaction was used to simulate the modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of the electric potential. Atomic levels were approximated as independent of the microfields, as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers, as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.

  20. Scalable Metropolis Monte Carlo for simulation of hard shapes (United States)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.


    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
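    The core accept/reject rule that HPMC parallelizes can be conveyed by a minimal serial hard-disk sketch; this toy is not the HPMC implementation, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

L, sigma, n = 10.0, 1.0, 20          # box edge, disk diameter, particle count (toy values)
pos = rng.uniform(0.0, L, (n, 2))    # random start; overlaps anneal away as moves proceed

def overlaps(i, trial, pos):
    """True if disk i at 'trial' would overlap any other disk (periodic minimum image)."""
    d = pos - trial
    d -= L * np.round(d / L)
    r2 = (d**2).sum(axis=1)
    r2[i] = np.inf                   # ignore self
    return bool((r2 < sigma**2).any())

accepted, sweeps = 0, 200
for _ in range(sweeps):
    for i in range(n):
        trial = (pos[i] + rng.uniform(-0.3, 0.3, 2)) % L
        if not overlaps(i, trial, pos):   # hard-particle rule: reject any overlap
            pos[i] = trial
            accepted += 1

acceptance = accepted / (sweeps * n)
print(acceptance)
```

    HPMC replaces the O(n) overlap scan with BVH trees or cell lists, batches trial moves on a GPU checkerboard, and generalizes the overlap test to the many shape classes listed above.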

  1. Advances in the application of diffusion Monte Carlo to solids (United States)

    Shulenburger, L.; Mattsson, T. R.


    The need for high fidelity electronic structure calculations has catalyzed an explosion in the development of new techniques. Improvements in DFT functionals, many body perturbation theory and dynamical mean field theory are starting to make significant headway towards reaching the accuracy required for a true predictive capability. One technique that is undergoing a resurgence is diffusion Monte Carlo (DMC). The early calculations with this method were of unquestionable accuracy (providing a valuable reference for DFT functionals) but were largely limited to model systems because of their high computational cost. Algorithmic advances and improvements in computer power have reached the point where this is no longer an insurmountable obstacle. In this talk I will present a broad study of DMC applied to condensed matter (arXiv:1310.1047). We have shown excellent agreement for the bulk modulus and lattice constant of solids exhibiting several different types of binding, including ionic, covalent and van der Waals. We will discuss both the opportunities for application of this method as well as opportunities for further theoretical improvements. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract No. DE-AC04-94AL85000.

  2. Next to Leading Logarithms and the PHOTOS Monte Carlo

    CERN Document Server

    Golonka, P


    With the approaching start-up of the experiments at the LHC, the need to quantify the systematic uncertainties of the generators used in the interpretation of the data is becoming pressing. The PHOTOS Monte Carlo program is often used for the simulation of experimental, selection-sensitive, QED radiative corrections in decays of Z bosons and other heavy resonances and particles. Thanks to its complete phase-space coverage, the exact matrix element can be implemented with no approximations for any decay channel. The present paper is devoted to those parts of the next-to-leading order corrections for Z decays which are normally missing in PHOTOS. The analytical form of the exact and truncated (standard) kernel used in PHOTOS is given explicitly. The correction, being the ratio of the exact to the approximate kernel, can be activated as an optional contribution to the internal weight of PHOTOS. To calculate the weight, the information on the effective Born-level Z/gamma* couplings and even directions o...

  3. Evaluation of Monte Carlo tools for high energy atmospheric physics (United States)

    Rutjes, Casper; Sarria, David; Broberg Skeltved, Alexander; Luque, Alejandro; Diniz, Gabriel; Østgaard, Nikolai; Ebert, Ute


    The emerging field of high energy atmospheric physics (HEAP) includes terrestrial gamma-ray flashes, electron-positron beams and gamma-ray glows from thunderstorms. Similar emissions of high energy particles occur in pulsed high voltage discharges. Understanding these phenomena requires appropriate models for the interaction of electrons, positrons and photons of up to 40 MeV energy with atmospheric air. In this paper, we benchmark the performance of the Monte Carlo codes Geant4, EGS5 and FLUKA developed in other fields of physics and of the custom-made codes GRRR and MC-PEPTITA against each other within the parameter regime relevant for high energy atmospheric physics. We focus on basic tests, namely on the evolution of monoenergetic and directed beams of electrons, positrons and photons with kinetic energies between 100 keV and 40 MeV through homogeneous air in the absence of electric and magnetic fields, using a low energy cutoff of 50 keV. We discuss important differences between the results of the different codes and provide plausible explanations. We also test the computational performance of the codes. The Supplement contains all results, providing a first benchmark for present and future custom-made codes that are more flexible in including electrodynamic interactions.

  4. Monte Carlo simulation of AB-copolymers with saturating bonds

    CERN Document Server

    Chertovich, A V; Khokhlov, A R; Bohr, J


    Structural transitions in a single AB-copolymer chain, where saturating bonds can be formed between A- and B-units, are studied by means of Monte Carlo computer simulations using the bond fluctuation model. Three transitions are found, coil-globule, coil-hairpin and globule-hairpin, depending on the nature of a particular AB-sequence: statistical random sequence, diblock sequence and 'random-complementary' sequence (one half of such an AB-sequence is random with Bernoulli statistics while the other half is complementary to the first one). The properties of random-complementary sequences are closer to those of diblock sequences than to the properties of random sequences. The model (although quite rough) is expected to represent some basic features of real RNA molecules, i.e. the formation of secondary structure of RNA due to hydrogen bonding of corresponding bases and stacking interactions of the base pairs in helixes. We introduce the notion of RNA-like copolymers and discuss in what sense the sequences studie...

  5. Virtual Network Embedding via Monte Carlo Tree Search. (United States)

    Haeri, Soroush; Trajkovic, Ljiljana


    Network virtualization helps overcome shortcomings of the current Internet architecture. The virtualized network architecture enables coexistence of multiple virtual networks (VNs) on an existing physical infrastructure. The VN embedding (VNE) problem, which deals with the embedding of VN components onto a physical network, is known to be NP-hard. In this paper, we propose two VNE algorithms: MaVEn-M and MaVEn-S. MaVEn-M employs the multicommodity flow algorithm for virtual link mapping, while MaVEn-S uses the shortest-path algorithm. Both formalize the virtual node mapping problem using the Markov decision process (MDP) framework and devise action policies (node mappings) for the proposed MDP using the Monte Carlo tree search algorithm. Service providers may adjust the execution time of the MaVEn algorithms based on the traffic load of VN requests. The objective of the algorithms is to maximize the profit of infrastructure providers. We develop a discrete event VNE simulator to implement and evaluate the performance of MaVEn-M, MaVEn-S, and several recently proposed VNE algorithms. We introduce profitability as a new performance metric that captures both acceptance and revenue-to-cost ratios. Simulation results show that the proposed algorithms find more profitable solutions than the existing algorithms. Given additional computation time, they further improve their embedding solutions.
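The Monte Carlo tree search at the core of the MaVEn algorithms follows the standard UCT recipe: select by upper-confidence bound, expand, simulate, backpropagate. A generic sketch (not the MaVEn implementation; the `expand` and `rollout` callbacks stand in for the VNE-specific MDP actions and profit estimates, and all names are illustrative):

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb1(node, c=1.4):
    """Upper-confidence bound used to pick which child to descend into."""
    if node.visits == 0:
        return float('inf')
    return (node.value / node.visits +
            c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(root, expand, rollout, iters=200):
    """Generic UCT loop: select, expand, simulate, backpropagate."""
    for _ in range(iters):
        node = root
        while node.children:                      # selection
            node = max(node.children, key=ucb1)
        for s in expand(node.state):              # expansion
            node.children.append(Node(s, node))
        if node.children:
            node = random.choice(node.children)
        reward = rollout(node.state)              # simulation
        while node:                               # backpropagation
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits)
```

In the VNE setting, a state would be a partial node mapping, `expand` would enumerate feasible substrate hosts, and `rollout` would complete the embedding and return its profit.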

  6. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL


    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
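DREAM is an adaptive multi-chain sampler; the basic MCMC machinery it builds on can be illustrated with a single-chain random-walk Metropolis sampler (a sketch of the general idea, not of DREAM itself):

```python
import math
import random

def metropolis(logp, x0, n=10000, step=1.0):
    """Random-walk Metropolis: propose x' = x + N(0, step) and accept
    with probability min(1, p(x')/p(x)); returns the full chain."""
    chain, x, lp = [], x0, logp(x0)
    for _ in range(n):
        xp = x + random.gauss(0.0, step)
        lpp = logp(xp)
        if math.log(random.random()) < lpp - lp:   # Metropolis test
            x, lp = xp, lpp
        chain.append(x)
    return chain
```

Run on a log-posterior over the BMA weights and variances, such a sampler yields posterior draws, and hence uncertainty estimates, in place of the EM point estimate.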

  7. Towards overcoming the Monte Carlo sign problem with tensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Banuls, Mari Carmen; Cirac, J. Ignacio; Kuehn, Stefan [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, Krzysztof [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Saito, Hana [AISIN AW Co., Ltd., Aichi (Japan)


    The study of lattice gauge theories with Monte Carlo simulations is hindered by the infamous sign problem that appears under certain circumstances, in particular at non-zero chemical potential. So far, there is no universal method to overcome this problem. However, recent years have brought a new class of non-perturbative Hamiltonian techniques known as tensor networks, in which the sign problem is absent. In previous work, we have demonstrated that this approach, in particular matrix product states in 1+1 dimensions, can be used to perform precise calculations in a lattice gauge theory, the massless and massive Schwinger model. We have computed the mass spectrum of this theory, its thermal properties and real-time dynamics. In this work, we review these results and we extend our calculations to the case of two flavours and non-zero chemical potential. We are able to reliably reproduce known analytical results for this model, thus demonstrating that tensor networks can tackle the sign problem of a lattice gauge theory at finite density.

  8. Phase transitions in chiral magnets from Monte Carlo simulations (United States)

    Belemuk, A. M.; Stishov, S. M.


    Motivated by the unusual temperature dependence of the specific heat in MnSi, comprising a combination of a sharp first-order feature accompanied by a broad hump, we study the extended Heisenberg model with competing exchange J and anisotropic Dzyaloshinskii-Moriya D interactions in a broad range of the ratio D/J. Utilizing classical Monte Carlo simulations we find an evolution of the temperature dependence of the specific heat and magnetic susceptibility with variation of D/J. Combined with an analysis of the Bragg intensity patterns, we clearly demonstrate that the observed puzzling hump in the specific heat of MnSi originates from smearing out of the virtual ferromagnetic second-order phase transition by helical fluctuations which manifest themselves in the transient multiple spiral state. These fluctuations finally condense into the helical ordered phase via a first-order phase transition, as is indicated by the specific heat peak. Thus the model demonstrates a crossover from a second-order to a first-order transition with increasing D/J. Upon further increasing D/J another crossover from a first-order to a second-order transition takes place in the system. Moreover, the results of the calculations clearly indicate that these competing interactions are the primary factors responsible for the appearance of first-order phase transitions in helical magnets with the Dzyaloshinskii-Moriya interaction.

  9. Magnetic properties in stacked triangular lattice: Monte Carlo approach (United States)

    Masrour, R.; Jabar, A.


    We study the magnetic properties of mixed spins σ = 5/2 and S = 2 Ising in a stacked triangular lattice (STL) using a Monte Carlo approach. We also give the ground-state phase diagrams of the mixed spins 5/2 and 2. The different magnetic phases are detected under the effect of different physical parameters. The diagrams show some key features: coexistence between regions, and points where three, four and five states can coexist. The reduced critical temperatures have also been determined for different exchange interactions in each layer, with a fixed value of the exchange interaction between the two layers. The total magnetization is obtained as a function of the reduced exchange interactions and crystal fields of the mixed spins. Multiple hysteresis loops and superparamagnetic behavior are established around the reduced critical temperatures. The magnetic coercive field and remanent magnetization increase with increasing exchange interactions and decrease with increasing temperature. The obtained results are similar to those obtained experimentally and theoretically. Above the reduced transition temperature the system shows superparamagnetic behavior, which makes the material desirable for biomedical applications.
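For orientation, the simplest member of this family of simulations is a single-spin-flip Metropolis update for a spin-1/2 Ising square lattice, far simpler than the mixed spin-5/2 and spin-2 stacked triangular model above (a sketch under those simplifying assumptions; all names are illustrative):

```python
import math
import random

def sweep(spins, L, beta, J=1.0):
    """One Metropolis sweep of an L x L Ising lattice with periodic
    boundaries: flip a spin with probability min(1, exp(-beta*dE))."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * J * spins[i][j] * nn          # energy cost of flipping
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

def magnetization(spins):
    """Absolute magnetization per site."""
    return abs(sum(sum(row) for row in spins)) / (len(spins) ** 2)
```

Recording magnetization, susceptibility and specific heat across a sweep over beta is how phase diagrams like those above are mapped out; the mixed-spin model adds larger spin values, the triangular stacking, and crystal-field terms to dE.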

  10. Evaluation of Monte Carlo tools for high energy atmospheric physics

    Directory of Open Access Journals (Sweden)

    C. Rutjes


    Full Text Available The emerging field of high energy atmospheric physics (HEAP) includes terrestrial gamma-ray flashes, electron–positron beams and gamma-ray glows from thunderstorms. Similar emissions of high energy particles occur in pulsed high voltage discharges. Understanding these phenomena requires appropriate models for the interaction of electrons, positrons and photons of up to 40 MeV energy with atmospheric air. In this paper, we benchmark the performance of the Monte Carlo codes Geant4, EGS5 and FLUKA developed in other fields of physics and of the custom-made codes GRRR and MC-PEPTITA against each other within the parameter regime relevant for high energy atmospheric physics. We focus on basic tests, namely on the evolution of monoenergetic and directed beams of electrons, positrons and photons with kinetic energies between 100 keV and 40 MeV through homogeneous air in the absence of electric and magnetic fields, using a low energy cutoff of 50 keV. We discuss important differences between the results of the different codes and provide plausible explanations. We also test the computational performance of the codes. The Supplement contains all results, providing a first benchmark for present and future custom-made codes that are more flexible in including electrodynamic interactions.

  11. Lattice Monte Carlo calculations of finite temperature QCD (United States)

    Degrand, T.

    The status of the lattice description of the deconfinement transition and the properties of hadronic matter at high (and low) temperature T are discussed. An ultimate goal of these investigations is to learn whether or not QCD actually predicts the naive phase diagram. A more realistic goal, which is at present partially within grasp, is to compute the static properties of QCD matter at T ≠ 0 from first principles. These include the order of phase transitions, critical temperatures Tc, critical exponents or latent heat, but not dynamical critical properties, such as the behavior of Green's functions near Tc. No first-principles discussions of non-equilibrium properties of QCD, which would be required for a description of the experiments, are known. In fact, experimentalists should think of the world studied by lattice Monte Carlo methods as a little crystal in an oven whose temperature is kept constant in time. A short description is given of how to set up finite-temperature field theory on a lattice, to display the important parts of the calculation without going too much into detail. Recent progress in the understanding of the glue world (pure gauge theories) is then discussed, ending with the physically relevant case of fermions and gauge fields.

  12. Monte-Carlo event generation for the LHC

    CERN Document Server

    Siegert, Frank

    This thesis discusses recent developments for the simulation of particle physics in the light of the start-up of the Large Hadron Collider. Simulation programs for fully exclusive events, dubbed Monte-Carlo event generators, are improved in areas related to the perturbative as well as non-perturbative regions of strong interactions. A short introduction to the main principles of event generation is given to serve as a basis for the following discussion. An existing algorithm for the correction of parton-shower emissions with the help of exact tree-level matrix elements is revisited and significantly improved, as attested by first results. As a next step, an automated implementation of the POWHEG method is presented. It allows for the combination of parton showers with full next-to-leading order QCD calculations and has been tested in several processes. These two methods are then combined into a more powerful framework which allows one to correct a parton shower with full next-to-leading order matrix elements and h...

  13. Monte Carlo simulation of three-dimensional islands (United States)

    Tan, Sovirith; Lam, Pui-Man


    The usual kinetic Monte Carlo method is adapted to treat off-lattice problems of multilayer growth (coverage θ>1) by molecular-beam epitaxy. This method takes into account the Schwoebel barrier, which emerges naturally from the choice of interaction potential between the atoms. This method allows a free choice of the lattice mismatch, temperature, deposition flux rate, and interfacial energies. A particular choice of these parameters leads to the three-dimensional (3D) (Volmer-Weber) growth mode, whereas another choice of these parameters leads to the 2D-3D growth mode (Stranski-Krastanov). The 3D islands seem to obey scaling only approximately. Using this method, the surface stress inside a substrate and a (pyramidal) coherent 3D island is computed. Strong relaxations appear, not only at the edges of the 3D island (which is expected), but also in the proximity of the edges, and inside the 3D island. These particular sites inside the 3D island are located just beneath a step site of the upper layer. Moreover, these particular sites develop strong corrugations, which later propagate along the layer. Strain-induced modulation of layers is thermally activated, so the steps could act as defects and nucleation sites for propagating roughness, in agreement with some theories and experimental observations.
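A rejection-free kinetic Monte Carlo step, the generic engine behind such growth simulations, picks one event in proportion to its rate and advances the clock by an exponentially distributed waiting time. A sketch (the Arrhenius barrier and attempt-frequency values are illustrative, not taken from the paper):

```python
import math
import random

def kmc_step(rates):
    """One rejection-free KMC step: choose event k with probability
    r_k / R and advance time by dt = -ln(u)/R, where R = sum of rates."""
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for k, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(random.random()) / total
    return k, dt

def hop_rate(barrier, T, nu=1e13, kB=8.617e-5):
    """Arrhenius rate for a hop over `barrier` (eV) at temperature T (K);
    nu is an assumed attempt frequency in Hz."""
    return nu * math.exp(-barrier / (kB * T))
```

In a growth code the rate list would hold one entry per possible atomic move (deposition, terrace hop, hop over a Schwoebel barrier, ...), rebuilt or updated after each executed event.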

  14. Effect of paper porosity on OCT images: Monte Carlo study (United States)

    Kirillin, Mikhail Yu.; Priezzhev, Alexander V.; Myllylä, Risto


    Non-invasive measurement of paper porosity is an important problem for the papermaking industry. Presently used techniques are invasive and require a long time to process the sample. In recent years, optical coherence tomography (OCT) has proved to be an effective tool for non-invasive study of optically non-uniform scattering media, including paper. The aim of the present work is to study the potential of OCT for sensing the porosity of a paper sample by means of numerical simulations. A real paper sample is characterized by variation of porosity along the sample, while numerical simulations allow one to consider samples of constant porosity, which is useful for evaluating the abilities of the technique. The calculations were performed with a Monte Carlo-based technique developed earlier for simulating OCT signals from multilayer paper models. A 9-layer model of paper consisting of five fiber layers and four air layers with non-planar boundaries was considered. The porosity of the samples was varied from 30 to 80% by varying the thicknesses of the layers. The simulations were performed for model paper samples with and without optical clearing agents (benzyl alcohol, 1-pentanol, isopropanol) applied. It was shown that the simulated OCT images of model paper with various porosities differ significantly, revealing the potential of the OCT technique for sensing porosity. When imaging paper samples with optical clearing agents applied, the inner structure of the samples is also revealed, providing additional information about the samples under study.

  15. Monte Carlo dose calculation algorithm on a distributed system (United States)

    Chauvie, Stéphane; Dominoni, Matteo; Marini, Piergiorgio; Stasi, Michele; Pia, Maria Grazia; Scielzo, Giuseppe


    The main goal of modern radiotherapy, such as 3D conformal radiotherapy and intensity-modulated radiotherapy, is to deliver a high dose to the target volume while sparing the surrounding healthy tissue. The accuracy of dose calculation in a treatment planning system is therefore a critical issue. Among the many algorithms developed over the last years, those based on Monte Carlo have proven to be very promising in terms of accuracy. The most severe obstacle to application in clinical practice is the long time necessary for calculations. We have studied a high-performance network of personal computers as a realistic alternative to high-cost dedicated parallel hardware, to be used routinely for the evaluation of treatment plans. We set up a Beowulf cluster, configured with 4 nodes connected with a low-cost network, and installed the MC code Geant4 to describe our irradiation facility. The MC code, once parallelised, was run on the Beowulf cluster. The first run of the full simulation showed that the time required for calculation decreased linearly with an increasing number of distributed processes. The good scalability trend allows both statistically significant accuracy and good time performance. The scalability of the Beowulf cluster system offers a new instrument for dose calculation that could be applied in clinical practice. This would be a good support particularly for highly challenging prescriptions that need good calculation accuracy in zones of high dose gradient and great inhomogeneity.

  16. Use of the GATE Monte Carlo package for dosimetry applications

    Energy Technology Data Exchange (ETDEWEB)

    Visvikis, D. [INSERM U650, LaTIM, University Hospital Medical School, F 29609 Brest (France)]. E-mail:; Bardies, M. [INSERM U601, CHU Nantes, F 44093 Nantes (France); Chiavassa, S. [INSERM U601, CHU Nantes, F 44093 Nantes (France); Danford, C. [Department of Medical Physics, MSKCC, New York (United States); Kirov, A. [Department of Medical Physics, MSKCC, New York (United States); Lamare, F. [INSERM U650, LaTIM, University Hospital Medical School, F 29609 Brest (France); Maigne, L. [Departement de Curietherapie-Radiotherapie, Centre Jean Perrin, F 63000 Clermont-Ferrand (France); Staelens, S. [UGent-ELIS, St-Pietersnieuwstraat, 41, B 9000 Gent (Belgium); Taschereau, R. [CRUMP Institute for Molecular Imaging, UCLA, Los Angeles (United States)


    One of the roles for Monte Carlo (MC) simulation studies is in the area of dosimetry. A number of different codes dedicated to dosimetry applications are available and widely used today, such as MCNP, EGSnrc and PTRAN. However, such codes do not easily facilitate the description of complicated 3D sources or emission tomography systems and associated data flow, which may be useful in different dosimetry application domains. Such problems can be overcome by the use of specific MC codes such as GATE (GEANT4 Application to Tomographic Emission), which is based on Geant4 libraries, providing a scripting interface with a number of advantages for the simulation of SPECT and PET systems. Despite this potential, its major disadvantage is in terms of efficiency involving long execution times for applications such as dosimetry. The strong points and disadvantages of GATE in comparison to other dosimetry specific codes are discussed and illustrated in terms of accuracy, efficiency and flexibility. A number of features, such as the use of voxelised and moving sources, as well as developments such as advanced visualization tools and the development of dose estimation maps allowing GATE to be used for dosimetry applications are presented. In addition, different examples from dosimetry applications with GATE are given. Finally, future directions with respect to the use of GATE for dosimetry applications are outlined.

  17. Monte Carlo Sampling of Negative-temperature Plasma States

    Energy Technology Data Exchange (ETDEWEB)

    John A. Krommes; Sharadini Rath


    A Monte Carlo procedure is used to generate N-particle configurations compatible with two-temperature canonical equilibria in two dimensions, with particular attention to nonlinear plasma gyrokinetics. An unusual feature of the problem is the importance of a nontrivial probability density function R0(Φ), the probability of realizing a set Φ of Fourier amplitudes associated with an ensemble of uniformly distributed, independent particles. This quantity arises because the equilibrium distribution is specified in terms of Φ, whereas the sampling procedure naturally produces particle states γ; Φ and γ are related via a gyrokinetic Poisson equation, highly nonlinear in its dependence on γ. Expansion and asymptotic methods are used to calculate R0(Φ) analytically; excellent agreement is found between the large-N asymptotic result and a direct numerical calculation. The algorithm is tested by successfully generating a variety of states of both positive and negative temperature, including ones in which either the longest- or shortest-wavelength modes are excited to relatively very large amplitudes.

  18. Monte Carlo simulation of light fluence calculation during pleural PDT (United States)

    Meo, Julia L.; Zhu, Timothy


    A thorough understanding of light distribution in the target tissue is necessary for accurate light dosimetry in PDT. Solving the problem of light dose depends, in part, on the geometry of the tissue to be treated. When considering PDT in the thoracic cavity for treatment of malignant, localized tumors such as those observed in malignant pleural mesothelioma (MPM), changes in light dose caused by the cavity geometry should be accounted for in order to improve treatment efficacy. Cavity-like geometries demonstrate what is known as the "integrating sphere effect", where multiple light scattering off the cavity walls induces an overall increase in light dose in the cavity. We present a Monte Carlo simulation of light fluence based on spherical and elliptical cavity geometries of various dimensions. The tissue optical properties, as well as the non-scattering medium filling the cavity (air or water), are varied. We have also introduced a small absorption inside the cavity to simulate the effect of blood absorption. We expand the MC simulation to track photons both within the cavity and in the surrounding cavity walls. Simulations are run for a variety of cavity optical properties determined using spectroscopic methods. We conclude from the MC simulation that the light fluence inside the cavity is inversely proportional to the surface area.

  19. Charge-4e superconductors: A Majorana quantum Monte Carlo study (United States)

    Jiang, Yi-Fan; Li, Zi-Xiang; Kivelson, Steven A.; Yao, Hong


    Many features of charge-4e superconductors remain unknown because even the "mean-field Hamiltonian" describing them is an interacting model. Here we introduce an interacting model to describe a charge-4e superconductor (SC) deep in the superconducting phase and explore its properties using quantum Monte Carlo (QMC) simulations. The QMC is sign-problem-free, but only when a Majorana representation is employed. As a function of the chemical potential we observe two sharply distinct behaviors: a "strong" quarteting phase in which charge-4e quartets are tightly bound (like molecules) so that charge-2e pairing does not occur even in the temperature T → 0 limit, and a "weak" quarteting phase in which a further transition to a charge-2e superconducting phase occurs at a lower critical temperature. Analogous issues arise in a putative Z4 spin liquid with a pseudo-Fermi surface and other interacting models with composite order parameters. Under certain circumstances, we also identify a stable T = 0 charge-4e SC phase with gapless nodal quasiparticles. We further discuss possible relevance of our results to various experimental observations in 1/8-doped LBCO.

  20. Monte Carlo modelling of an extended DXA technique (United States)

    Michael, G. J.; Henderson, C. J.


    The precision achieved in measuring bone mineral density (BMD) by commercial dual-energy x-ray absorptiometry (DXA) machines is typically better than 1%, but accuracy is considerably worse. Errors of up to 10%, due to inhomogeneous distributions of fat, have been reported. These errors arise because the DXA technique assumes a two-component model for the human body, i.e. bone mineral and soft tissue. This paper describes an extended DXA technique that uses a three-component model of human tissue and significantly reduces errors due to inhomogeneous fat distribution. In addition to two x-ray transmission measurements, a measurement of the path length of the x-ray beam within the patient is required. This provides a third equation, T = t_l + t_b + t_f, where T, t_l, t_b and t_f are the total, lean soft tissue, bone mineral and fatty tissue thicknesses, respectively. Monte Carlo modelling was undertaken to compare the standard and extended DXA techniques in the presence of an inhomogeneous fat distribution. Two geometries of varying complexity were simulated. In each case the extended DXA technique produced BMD measurements that were independent of soft tissue composition, whereas the standard technique produced BMD measurements that were strongly dependent on it. For example, in one case the gradient of BMD versus fractional fat content was substantial for standard DXA but negligible for extended DXA. In all cases the extended DXA method produced more accurate but less precise results than the standard DXA technique.
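The three-component system alluded to above can be written out explicitly (a hedged reconstruction: the attenuation coefficients μ, thicknesses t, and the low/high energy labels L, H are notation introduced here, not taken from the paper):

```latex
% Two-energy transmissions plus the measured path length T give
% three linear equations in the three unknown thicknesses:
\begin{align}
  \ln\!\left(I_0^{L}/I^{L}\right) &= \mu_l^{L} t_l + \mu_b^{L} t_b + \mu_f^{L} t_f \\
  \ln\!\left(I_0^{H}/I^{H}\right) &= \mu_l^{H} t_l + \mu_b^{H} t_b + \mu_f^{H} t_f \\
  T &= t_l + t_b + t_f
\end{align}
```

Here the subscripts l, b, f denote lean soft tissue, bone mineral and fat; standard two-component DXA has only the first two equations and must lump lean tissue and fat together, which is where the fat-inhomogeneity error enters.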

  1. Monte Carlo model for electron degradation in xenon gas

    CERN Document Server

    Mukundan, Vrinda


    We have developed a Monte Carlo model for studying the local degradation of electrons in the energy range 9-10000 eV in xenon gas. Analytically fitted forms of the electron impact cross sections for elastic and various inelastic processes are fed as input data to the model. A two-dimensional numerical yield spectrum, which gives information on the number of energy loss events occurring in a particular energy interval, is obtained as the output of the model. The numerical yield spectrum is fitted analytically, thus obtaining an analytical yield spectrum. The analytical yield spectrum can be used to calculate electron fluxes, which can be further employed for the calculation of volume production rates. Using the yield spectrum, the mean energy per ion pair and the efficiencies of the inelastic processes are calculated. The value of the mean energy per ion pair for Xe is 22 eV at 10 keV. Ionization dominates for incident energies greater than 50 eV and is found to have an efficiency of 65% at 10 keV. The efficiency for the excitation process is 30%...

  2. Streamlining resummed QCD calculations using Monte Carlo integration

    Energy Technology Data Exchange (ETDEWEB)

    Farhi, David; Feige, Ilya; Freytsis, Marat; Schwartz, Matthew D. [Center for the Fundamental Laws of Nature, Harvard University,17 Oxford St., Cambridge, MA 02138 (United States)


    Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MADGRAPH, ALPGEN or SHERPA. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions, producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including e+e- two- and four-jet event shapes, n-jettiness and jet-mass related observables at hadron colliders at next-to-leading-log (NLL) matched to leading order (LO). Attached code can be used to modify MADGRAPH to export the relevant LO hard functions and color structures for arbitrary processes.
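The weighting and unweighting of events that this workflow relies on can be illustrated with a toy one-dimensional sketch (nothing MADGRAPH-specific; hit-or-miss unweighting keeps an event with probability w/w_max, and all names are illustrative):

```python
import math
import random

def mc_integrate(f, a, b, n=10000):
    """Plain Monte Carlo estimate of the integral of f over [a, b]
    from n uniform draws: (b - a) * mean of f."""
    width = b - a
    return width * sum(f(a + random.random() * width) for _ in range(n)) / n

def unweight(events, weights):
    """Hit-or-miss unweighting: keep event i with probability w_i / max(w),
    turning a weighted sample into an unweighted one."""
    wmax = max(weights)
    return [e for e, w in zip(events, weights)
            if random.random() < w / wmax]
```

In the paper's setting the "events" are partonic kinematic points with color structures, and the unweighted sample (around 1000 events suffices) is what gets convolved with the jet, soft and beam functions.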

  3. Optimizing the HLT Buffer Strategy with Monte Carlo Simulations

    CERN Document Server



    This project aims to optimize the strategy for utilizing the disk buffer of the High Level Trigger (HLT) of the LHCb experiment with the help of Monte Carlo simulations. A method is developed which simulates the Event Filter Farm (EFF) -- the computing cluster for the High Level Trigger -- as a collection of nodes with different performance properties. In this way, the behavior of the computing farm can be analyzed at a deeper level than before. It is demonstrated that the current operating strategy might be improved as data taking approaches a mid-year scheduled stop or the year-end technical stop. The processing time of the buffered data can be lowered by distributing the detector data according to the processing power of the nodes instead of their relative disk size, as long as the occupancy level of the buffer is low enough. Moreover, this ensures that data taken and stored on the buffer at the same time are processed by different nodes nearly simultaneously, which reduces load on the infrastructure.

  4. The Monte Carlo simulation of the Borexino detector (United States)

    Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.


    We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they are either absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as KamLAND-Zen, SNO+, and JUNO.

  5. Spherical Hamiltonian Monte Carlo for Constrained Target Distributions. (United States)

    Lan, Shiwei; Zhou, Bo; Shahbaba, Babak


    Statistical models with constrained probability distributions are abundant in machine learning. Some examples include regression models with norm constraints (e.g., Lasso), probit models, many copula models, and Latent Dirichlet Allocation (LDA) models. Bayesian inference involving probability distributions confined to constrained domains can be quite challenging for commonly used sampling algorithms. For such problems, we propose a novel Markov Chain Monte Carlo (MCMC) method that provides a general and computationally efficient framework for handling boundary conditions. Our method first maps the D-dimensional constrained domain of parameters to the unit ball B^D, then augments it to the D-dimensional sphere S^D such that the original boundary corresponds to the equator of S^D. This way, our method handles the constraints implicitly by moving freely on the sphere, generating proposals that remain within boundaries when mapped back to the original space. To improve the computational efficiency of our algorithm, we divide the dynamics into several parts such that the resulting split dynamics has a partial analytical solution as a geodesic flow on the sphere. We apply our method to several examples including truncated Gaussian, Bayesian Lasso, Bayesian bridge regression, and a copula model for identifying synchrony among multiple neurons. Our results show that the proposed method can provide a natural and efficient framework for handling several types of constraints on target distributions.
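The ball-to-sphere augmentation at the heart of the method can be sketched in a few lines: append a coordinate sqrt(1 - |q|^2) so that the ball's interior maps to the upper hemisphere and its boundary to the equator. This is only the change of variables, not the full spherical HMC sampler.

```python
import math

def ball_to_sphere(q):
    """Augment a point q in the unit ball B^D with an extra coordinate
    sqrt(1 - |q|^2), landing on the upper hemisphere of S^D; the ball's
    boundary maps to the equator, where the extra coordinate vanishes."""
    norm2 = sum(x * x for x in q)
    return list(q) + [math.sqrt(max(0.0, 1.0 - norm2))]

def sphere_to_ball(x):
    """Drop the last coordinate to map a point on S^D back into B^D."""
    return list(x[:-1])

q = [0.3, -0.4]                    # a point in the 2-D unit ball
x = ball_to_sphere(q)
print(x, sum(c * c for c in x))    # the image lies on the unit sphere S^2
```

Because the sphere has no boundary, dynamics that move freely on it automatically respect the original constraint after mapping back.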

  6. Decision Assistance in Risk Assessment – Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)



    Full Text Available High security must be a primary and permanent concern of an organization's leadership, and it must be ensured at all times. For this, a risk analysis is compulsory during the risk management cycle. Security risk analysis and security risk management mostly rely on estimated data throughout the process, so the further evolution of events may not be reflected in the obtained results. Given that hazard must be modeled, this concern is entirely natural; we must therefore find a way to model the events a company is exposed to that damage its information security. In this paper we use the Monte Carlo method to model a set of security parameters used in security risk analysis. The frequency of unwanted events, the damages, and their impact are our main focus, applied to both the quantitative and the qualitative security risk analysis approach. The obtained results will serve as a guide for experts to better allocate resources for decreasing or eliminating risk, and as a warning to the leadership about certain absolutely necessary investments.
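A minimal quantitative sketch of the idea, with assumed (not the paper's) distributions: incident counts per year drawn from a Poisson law and per-incident damage drawn from a lognormal law, from which an annualized loss expectation and a tail quantile are estimated.

```python
import math, random, statistics

random.seed(1)

# Illustrative parameters (assumed, not from the paper): incidents per year
# are Poisson-distributed and per-incident damage is lognormal.
FREQ = 3.0            # mean number of security incidents per year
MU, SIGMA = 9.0, 1.0  # log-damage parameters (damage in currency units)

def poisson(lam):
    """Draw a Poisson variate via Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def annual_loss():
    """Total simulated loss for one year."""
    n = poisson(FREQ)
    return sum(random.lognormvariate(MU, SIGMA) for _ in range(n))

losses = sorted(annual_loss() for _ in range(20000))
ale = statistics.fmean(losses)           # annualized loss expectation
var95 = losses[int(0.95 * len(losses))]  # 95th-percentile annual loss
print(f"ALE ~ {ale:,.0f}, 95% quantile ~ {var95:,.0f}")
```

Comparing the ALE against the cost of a countermeasure is the kind of resource-allocation guidance the abstract describes.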

  7. Household water use and conservation models using Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    R. Cahill


    Full Text Available The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using probability distributions of end-use parameters sampled with Monte Carlo techniques. The model represents existing water use conditions in 2010 and is calibrated to 2006–2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate cost-effectiveness of water conservation programs.

  8. Monte Carlo simulations for generic granite repository studies

    Energy Technology Data Exchange (ETDEWEB)

    Chu, Shaoping [Los Alamos National Laboratory; Lee, Joon H [SNL; Wang, Yifeng [SNL


    In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near- and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a sub-set of radionuclides that are potentially important to repository performance were identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.

  9. Titration of hydrophobic polyelectrolytes using Monte Carlo simulations (United States)

    Ulrich, Serge; Laguecir, Abohachem; Stoll, Serge


    The conformation and titration curves of weak (or annealed) hydrophobic polyelectrolytes have been examined using Monte Carlo simulations with screened Coulomb potentials in the grand canonical ensemble. The influence of ionic concentration, pH, and the presence of hydrophobic interactions has been systematically investigated. A large number of conformations such as extended, pearl-necklace, cigar-shape, and collapsed structures resulting from the subtle balance of short-range hydrophobic attractive interactions and long-range electrostatic repulsive interactions between the monomers have been observed. Titration curves were calculated by adjusting the pH-pK0 values (pK0 represents the intrinsic dissociation constant of an isolated monomer) and then calculating the ionization degree α of the polyelectrolyte. Important transitions related to cascades of conformational changes were observed in the titration curves, mainly at low ionic concentration and in the presence of strong hydrophobic interactions. We demonstrated that the presence of hydrophobic interactions plays an important role in the acid-base properties of a polyelectrolyte, promoting the formation of compact conformations and hence decreasing the polyelectrolyte degree of ionization for a given pH-pK0 value.
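The titration part of such a simulation can be sketched with a much-simplified model (assumed here, not the authors'): monomers on a rigid 1-D lattice, each neutral or ionized, Metropolis-sampled with an ionization free energy set by pH - pK0 plus a screened-Coulomb repulsion between charged sites.

```python
import math, random

random.seed(2)

# Minimal 1-D lattice sketch (assumed model, not the paper's full chain
# simulation). Ionizing a site costs -ln(10)*(pH - pK0) in units of kT,
# plus screened-Coulomb repulsion from other ionized sites.
N, PH_MINUS_PK0, LB, KAPPA = 50, 1.0, 2.0, 0.5

def energy_change(state, i):
    """Energy change (in kT) for flipping the ionization of site i."""
    sign = -1 if state[i] else 1   # +1 when ionizing, -1 when neutralizing
    de = sign * math.log(10) * (-PH_MINUS_PK0)
    for j, s in enumerate(state):
        if s and j != i:
            r = abs(i - j)         # lattice distance between charges
            de += sign * LB * math.exp(-KAPPA * r) / r
    return de

state = [0] * N
for _ in range(100000):            # Metropolis sweeps over protonation states
    i = random.randrange(N)
    de = energy_change(state, i)
    if de <= 0 or random.random() < math.exp(-de):
        state[i] ^= 1

alpha = sum(state) / N             # degree of ionization
print(f"alpha = {alpha:.2f} at pH - pK0 = {PH_MINUS_PK0}")
```

Without electrostatic repulsion the ideal result would be alpha = 1/(1 + 10^-(pH-pK0)); the repulsion term suppresses alpha below that, the same qualitative effect the abstract reports for hydrophobically collapsed conformations.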

  10. Monte Carlo simulation for radiation dose in children radiology; Simulacao Monte Carlo da dose para radiologia pediatrica

    Energy Technology Data Exchange (ETDEWEB)

    Mendes, Hitalo R.; Tomal, Alessandra [Universidade Estadual de Campinas (UNICAMP), Campinas, SP (Brazil). Instituto de Fisica Gleb Wataghin


    Dosimetry in pediatric radiology is essential because children are at higher risk than adults. The focus of this study is to show how the dose varies with depth in a 10-year-old patient and in a newborn; for this purpose, simulations were performed using the Monte Carlo method. Tube potentials of 70 and 90 kVp were considered for the 10-year-old and 70 and 80 kVp for the newborn. The results show that in both cases the dose at the skin surface is larger for the smaller potential value but decreases faster for larger potential values. Another observation is that, because the newborn is thinner, the ratio between the entrance dose and the exit dose is lower than for the 10-year-old, showing that it is possible to produce an image with a smaller entrance skin dose while keeping the same level of exposure at the detector. (author)
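The reported trend can be reproduced with a toy Monte Carlo sketch: photons deposit their energy at exponentially sampled interaction depths, and a harder beam (higher kVp) is represented by a smaller attenuation coefficient. The mu values are illustrative, not tissue data.

```python
import random

random.seed(5)

# Toy depth-dose model: each photon interacts once, at an exponentially
# sampled depth; a higher-kVp beam is modeled by a smaller attenuation
# coefficient mu (values per cm are illustrative only).
def depth_dose(mu, thickness=10.0, bins=10, n=200000):
    dose = [0.0] * bins
    for _ in range(n):
        d = random.expovariate(mu)           # free path before interaction
        if d < thickness:
            dose[int(d / thickness * bins)] += 1
    return [x / n for x in dose]             # fraction deposited per bin

soft = depth_dose(mu=0.5)    # lower kVp: larger mu
hard = depth_dose(mu=0.25)   # higher kVp: smaller mu

# The softer beam deposits more at the surface but falls off faster.
print(f"surface: soft {soft[0]:.3f} vs hard {hard[0]:.3f}; "
      f"deep: soft {soft[-1]:.3f} vs hard {hard[-1]:.3f}")
```

This mirrors the abstract's observation: the smaller potential gives the larger skin-surface dose, while the larger potential penetrates deeper toward the detector.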

  11. Monte Carlo simulation on teaching of luminescence and excited states decay kinetics; Simulacao Monte Carlo no ensino de luminescencia e cinetica de decaimento de estado excitado

    Energy Technology Data Exchange (ETDEWEB)

    Winnischofer, Herbert; Araujo, Marcio Peres de; Dias Junior, Lauro Camargo; Novo, Joao Batista Marques [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil)


    Software based on the Monte Carlo method has been developed for teaching important classes of mechanisms found in luminescence and in excited-state decay kinetics, including multiple decays, consecutive decays, and decays of coupled systems. The Monte Carlo method allows the student to easily simulate and visualize the luminescence mechanisms, focusing on the probabilities of the steps involved. The software, CINESTEX, was written for the FreeBASIC compiler; it assumes first-order kinetics and any number of excited states, with pathways allowed according to probabilities assigned by the user. (author)
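One of the listed mechanisms, consecutive decays A -> B -> C, can be sketched in the same stochastic spirit (the rate constants here are arbitrary teaching values, not anything from CINESTEX): each excited molecule decays in a time step with probability k*dt.

```python
import math, random

random.seed(3)

# Stochastic simulation of consecutive first-order decays A -> B -> C
# (illustrative rate constants; time step DT must satisfy k*DT << 1).
K1, K2, DT, STEPS, N0 = 0.05, 0.02, 1.0, 300, 10000

state = ['A'] * N0
pops_a = []
for _ in range(STEPS):
    for i, s in enumerate(state):
        r = random.random()
        if s == 'A' and r < K1 * DT:
            state[i] = 'B'       # decay to the intermediate state
        elif s == 'B' and r < K2 * DT:
            state[i] = 'C'       # decay to the ground state
    pops_a.append(state.count('A'))

# The simulated A-population tracks the analytic exponential decay.
t_half = next(i for i, n in enumerate(pops_a) if n <= N0 / 2) * DT
print(f"simulated A half-life ~ {t_half:.0f} vs analytic {math.log(2)/K1:.1f}")
```

Plotting pops_a (and the analogous B and C counts) against the analytic rate-equation solutions is exactly the kind of visualization such teaching software provides.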

  12. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides; Simulacion Monte Carlo: herramienta para la calibracion en determinaciones analiticas de radionucleidos

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)


    This work shows how the traceability of analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density, and height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.
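The core of an efficiency calibration by Monte Carlo can be illustrated with a purely geometric toy: estimate what fraction of isotropically emitted photons from a point source reach a disk detector. A DETEFF-style calculation additionally tracks interactions in the crystal and the sample matrix; the geometry values below are invented.

```python
import math, random

random.seed(7)

# Toy geometric-efficiency estimate for a disk detector above a point
# source (illustrative geometry; real calibrations also model interactions
# in the detector crystal and the sample itself).
R_DET, H = 3.0, 5.0    # detector radius and source-detector distance (cm)

def geometric_efficiency(n=200000):
    count = 0
    for _ in range(n):
        cos_t = random.uniform(-1, 1)     # isotropic emission direction
        if cos_t <= 0:
            continue                      # emitted away from the detector
        r = H * math.sqrt(1 - cos_t**2) / cos_t   # radius at detector plane
        if r <= R_DET:
            count += 1
    return count / n

eff = geometric_efficiency()
# Analytic solid-angle fraction subtended by the disk, for comparison.
solid = 0.5 * (1 - H / math.sqrt(H * H + R_DET * R_DET))
print(f"MC efficiency {eff:.4f} vs analytic {solid:.4f}")
```

Repeating such a calculation for each sample geometry and composition is what lets the simulation replace source-by-source experimental calibration.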

  13. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    CERN Document Server

    Chapman, J; The ATLAS collaboration; Duehrssen, M; Elsing, M; Froidevaux, D; Harrington, R; Jansky, R; Langenberg, R; Mandrysch, R; Marshall, Z; Ritsch, E; Salzburger, A


    The huge success of Run 1 of the LHC for particle physics would not have been possible without detailed detector simulation of the experiments. However, the outstanding performance of the accelerator, with a delivered luminosity of $L = 25\,\mathrm{fb}^{-1}$, has created an unprecedented demand for Monte Carlo statistics. This has stretched the capabilities of the experiments given the constraints of their computing infrastructure and available resources. Ensuring high-quality Monte Carlo simulation samples with sufficient statistics became one of the major focus points of experimental high energy physics, and first analyses saw the advent of limitations in sensitivity or precision due to the lack of available Monte Carlo samples. Modern, concurrent computing techniques optimized for new processor hardware are being exploited to boost future computing resources, but even the most optimistic scenarios predict that additional action needs to be taken to guarantee sufficient Monte Carlo production for high quality physics results dur...

  14. On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.

  15. Monte Carlo simulation on kinetics of batch and semi-batch free radical polymerization

    KAUST Repository

    Shao, Jing


    Based on Monte Carlo simulation technology, we propose a hybrid routine that combines reaction mechanisms with coarse-grained molecular simulation to study the kinetics of free radical polymerization. By comparing with previous experimental and simulation studies, we show the capability of our Monte Carlo scheme to represent polymerization kinetics in batch and semi-batch processes. Various kinetic information, such as instantaneous monomer conversion, molecular weight, and polydispersity, is readily calculated from the Monte Carlo simulation. Kinetic constants such as the polymerization rate k_p are determined in the simulation without the "steady-state" hypothesis. We explored the mechanisms behind the variations in polymerization kinetics observed in previous studies, as well as polymerization-induced phase separation. Our Monte Carlo simulation scheme is versatile for studying polymerization kinetics in batch and semi-batch processes.
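A common way to simulate such kinetics without the steady-state hypothesis is a Gillespie-style stochastic simulation; the sketch below uses that generic scheme with invented species counts and rate constants, not the paper's coarse-grained model.

```python
import random

random.seed(4)

# Gillespie-style kinetic Monte Carlo sketch of batch free-radical
# polymerization (illustrative counts and rate constants).
M0 = 100000
M, I, R = M0, 200, 0              # monomer, initiator, radical counts
kd, kp, kt = 1e-4, 5e-3, 1e-2     # decomposition, propagation, termination

t = 0.0
while M > 0 and (I > 0 or R > 0):
    a1 = kd * I                   # initiation: I -> 2R
    a2 = kp * R * M               # propagation: R + M -> R (chain grows)
    a3 = kt * R * (R - 1) / 2     # termination: R + R -> dead chain
    a0 = a1 + a2 + a3
    if a0 == 0:
        break
    t += random.expovariate(a0)   # waiting time to the next reaction event
    u = random.random() * a0      # pick the event proportional to its rate
    if u < a1:
        I -= 1; R += 2
    elif u < a1 + a2:
        M -= 1
    else:
        R -= 2

conversion = 1 - M / M0
print(f"monomer conversion {conversion:.2%} at t = {t:.2f}")
```

Tracking per-chain lengths alongside the event loop would additionally give molecular weight and polydispersity, the other quantities the abstract mentions.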

  16. Comparative evaluation of photon cross section libraries for materials of interest in PET Monte Carlo simulations

    CERN Document Server

    Zaidi, H


    The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...
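Photon cross-section look-up tables of the kind described are typically interpolated log-log between tabulated energies; the sketch below shows that scheme with a made-up grid (the numbers are not EPDL97 data).

```python
import math
from bisect import bisect_right

# Sketch of a log-log interpolated cross-section look-up table, a common
# storage scheme for photon attenuation data (grid values are invented).
energies = [1e-3, 1e-2, 1e-1, 1e0]       # photon energy (MeV)
sigma = [4.0e3, 5.0, 2.0e-1, 7.0e-2]     # mass attenuation (cm^2/g)

def lookup(e):
    """Log-log interpolate the tabulated cross section at energy e."""
    i = min(max(bisect_right(energies, e) - 1, 0), len(energies) - 2)
    x0, x1 = math.log(energies[i]), math.log(energies[i + 1])
    y0, y1 = math.log(sigma[i]), math.log(sigma[i + 1])
    frac = (math.log(e) - x0) / (x1 - x0)
    return math.exp(y0 + frac * (y1 - y0))

print(lookup(1e-2))   # reproduces the tabulated grid point
print(lookup(3e-2))   # interpolated between grid points
```

Because photon cross sections vary over many orders of magnitude, interpolating in log-log space keeps the relative error small between grid points, which is part of what makes table-based libraries both compact and accurate.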



    Taisiya N. Mironenko; Ekaterina I. Bragina


    The article considers one of the priorities in the field of artificial intelligence, Markov chains. The problem of forecasting strategic investment directions in industrial policy is solved using a Monte Carlo method.

  18. Monte Carlo simulation of diffuse attenuation coefficient in presence of non uniform profiles

    Digital Repository Service at National Institute of Oceanography (India)

    Desa, E.S.; Desai, R.G.P.; Desa, B.A.E.

    This paper presents a Monte Carlo simulation of the vertical depth structure of the downward attenuation coefficient (K_d) and the irradiance reflectance (R) for a given profile of chlorophyll. The results are in quantitative agreement...

  19. Multi-sensor three-dimensional Monte Carlo localization for long-term aerial robot navigation

    National Research Council Canada - National Science Library

    Perez-Grau, Francisco J; Caballero, Fernando; Viguria, Antidio; Ollero, Anibal


    This article presents an enhanced version of the Monte Carlo localization algorithm, commonly used for robot navigation in indoor environments, which is suitable for aerial robots moving in a three...
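The predict/weight/resample cycle underlying Monte Carlo localization can be shown in one dimension; the article's algorithm is three-dimensional and multi-sensor, so this is only the generic textbook cycle with invented numbers.

```python
import math, random, statistics

random.seed(6)

# Minimal 1-D Monte Carlo (particle filter) localization sketch
# (all positions, noise levels, and counts are illustrative).
TRUE_POS, N, SENSOR_STD = 7.0, 1000, 0.5
particles = [random.uniform(0.0, 20.0) for _ in range(N)]  # uniform prior

for _ in range(10):
    # Predict: a stationary robot, so only process noise is added.
    particles = [p + random.gauss(0.0, 0.2) for p in particles]
    # Measure: simulated noisy position sensor.
    z = TRUE_POS + random.gauss(0.0, SENSOR_STD)
    # Weight: Gaussian likelihood of each particle given the measurement.
    weights = [math.exp(-((p - z) ** 2) / (2 * SENSOR_STD ** 2))
               for p in particles]
    # Resample: draw a new particle set proportional to the weights.
    particles = random.choices(particles, weights=weights, k=N)

estimate = statistics.fmean(particles)
print(f"estimated position {estimate:.2f} (true {TRUE_POS})")
```

The particle cloud concentrates around the true position after a few cycles; extensions like the article's add 3-D state, multiple sensor likelihoods, and safeguards for long-term operation.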

  20. Two Photon Physics at LEP2, including data Monte-Carlo comparison

    CERN Document Server

    Miller, D.J.


    A partisan review of some of the most important $\\gamma\\gamma$ channels accessible at LEP 2, with special stress on the measurement of the photon structure function $F_{2}^{\\gamma}$ and on associated problems with Monte Carlo modelling.