WorldWideScience

Sample records for carlo method implemented

  1. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...
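
    The "hit or miss" flavor of Monte Carlo that such introductory texts open with can be illustrated with Buffon's needle itself. A minimal sketch (not from the book; the function name and parameters are our own):

```python
import math, random

def buffon_pi(n_throws: int, needle_len: float = 1.0, line_gap: float = 1.0,
              seed: int = 0) -> float:
    """Estimate pi by Buffon's needle: drop a needle of length L onto a floor
    ruled with parallel lines spaced d apart (L <= d).  The crossing
    probability is 2L / (pi * d), so pi ~ 2L * n / (d * crossings)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        # distance from the needle's centre to the nearest line, and its angle
        x = rng.uniform(0.0, line_gap / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)
        if x <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle_len * n_throws / (line_gap * hits)

print(buffon_pi(200_000))  # close to 3.14159 for large n
```

    The 1/sqrt(n) statistical convergence typical of Monte Carlo estimators is easy to observe by varying `n_throws`.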

  2. MontePython: Implementing Quantum Monte Carlo using Python

    OpenAIRE

    J.K. Nilsen

    2006-01-01

    We present a cross-language C++/Python program for simulations of quantum mechanical systems using Quantum Monte Carlo (QMC) methods. We describe a system to which QMC can be applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
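
    As a rough illustration of the variational Monte Carlo algorithm mentioned here — Metropolis sampling of |psi|^2 and averaging the local energy — consider the 1D harmonic oscillator with a Gaussian trial wavefunction. This is a textbook toy, not MontePython's actual code:

```python
import math, random

def vmc_energy(alpha: float, n_steps: int = 50_000, step: float = 1.0,
               seed: int = 1) -> float:
    """Variational Monte Carlo for the 1D harmonic oscillator (hbar = m = omega = 1)
    with trial wavefunction psi(x) = exp(-alpha * x**2 / 2).
    Metropolis sampling of |psi|^2; returns the average local energy
    E_L(x) = alpha/2 + x**2 * (1 - alpha**2) / 2."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < math.exp(-alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        e_sum += alpha / 2.0 + x * x * (1.0 - alpha * alpha) / 2.0
    return e_sum / n_steps

print(vmc_energy(1.0))  # exactly 0.5: this trial function is the exact ground state
```

    Minimizing `vmc_energy` over `alpha` is the variational step; at any other `alpha` the estimate sits above the true ground-state energy of 0.5.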

  3. Qualitative Simulation of Photon Transport in Free Space Based on Monte Carlo Method and Its Parallel Implementation

    Directory of Open Access Journals (Sweden)

    Xueli Chen

    2010-01-01

    During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating photon transport inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, covering the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. Also, the focusing effect of the camera lens is considered to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated on a cylindrical phantom against real experimental results.
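
    The tissue-side half of such a simulation reduces to sampling exponential free paths and absorption/scattering events. A deliberately simplified 1D sketch (our own toy, not the paper's free-space framework):

```python
import math, random

def slab_transmission(mu_a: float, mu_s: float, thickness: float,
                      n_photons: int = 100_000, seed: int = 2) -> float:
    """Toy Monte Carlo photon transport through a 1D slab.  Photons enter at
    z = 0 travelling in +z; free paths are exponential with total coefficient
    mu_t = mu_a + mu_s, and each collision is either absorption or isotropic
    scattering.  Returns the fraction exiting through z = thickness."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    transmitted = 0
    for _ in range(n_photons):
        z, cos_z = 0.0, 1.0
        while True:
            z += cos_z * (-math.log(rng.random()) / mu_t)  # sample a free path
            if z >= thickness:
                transmitted += 1
                break
            if z < 0.0:                     # escaped back through the entry face
                break
            if rng.random() < mu_a / mu_t:  # absorbed
                break
            cos_z = rng.uniform(-1.0, 1.0)  # isotropic scatter (direction cosine)
    return transmitted / n_photons

# Pure absorber: transmission should follow the Beer-Lambert law exp(-mu_a * d)
print(slab_transmission(mu_a=1.0, mu_s=0.0, thickness=2.0))  # ~ exp(-2) = 0.135
```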

  4. Monte Carlo methods for electromagnetics

    CERN Document Server

    Sadiku, Matthew NO

    2009-01-01

    Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole-field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
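
    The fixed random walk method mentioned here can be sketched in a few lines: the solution of Laplace's equation at a point equals the expected boundary value at the site where a symmetric random walk first exits. A toy example (our own, not from the book):

```python
import random

def laplace_walk(nx: int = 20, ny: int = 20, n_walks: int = 20_000,
                 seed: int = 3) -> float:
    """Fixed random walk solution of Laplace's equation on a square grid with
    u = 1 on the top edge and u = 0 on the other three edges.  The estimate of
    u at the starting point is the fraction of walks exiting through the top.
    By symmetry the exact value at the centre is 0.25."""
    rng = random.Random(seed)
    hits_top = 0
    for _ in range(n_walks):
        i, j = nx // 2, ny // 2            # start at the centre
        while 0 < i < nx and 0 < j < ny:   # walk until a boundary node is hit
            di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            i, j = i + di, j + dj
        if j == ny:                        # exited through the top edge
            hits_top += 1
    return hits_top / n_walks

print(laplace_walk())  # close to 0.25
```

    The floating walk variant replaces the fixed lattice steps with jumps to the surface of the largest circle fitting inside the domain, which converges in far fewer steps.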

  5. Criticality calculations on pebble-bed HTR-PROTEUS configuration as a validation for the pseudo-scattering tracking method implemented in the MORET 5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Forestier, Benoit; Miss, Joachim; Bernard, Franck; Dorval, Aurelien [Institut de Radioprotection et Surete Nucleaire, Fontenay aux Roses (France); Jacquet, Olivier [Independent consultant (France); Verboomen, Bernard [Belgian Nuclear Research Center - SCK-CEN (Belgium)

    2008-07-01

    The MORET code is a three-dimensional Monte Carlo criticality code. It is designed to calculate the effective multiplication factor (k{sub eff}) of any geometrical configuration, as well as the reaction rates in the various volumes and the neutron leakage out of the system. A recent development for the MORET code is the implementation of an alternative neutron tracking method, known as the pseudo-scattering tracking method. This method has been successfully implemented in the MORET code, and its performance has been tested by means of an extensive parametric study on very simple geometrical configurations. In this context, the goal of the present work is to validate the pseudo-scattering method against realistic configurations. In this perspective, pebble-bed cores are particularly well-adapted cases to model, as they exhibit a large number of volumes stochastically arranged on two different levels (the pebbles in the core and the TRISO particles inside each pebble). This paper introduces the techniques and methods used to model pebble-bed cores in a realistic way. The results of the criticality calculations, as well as the performance of the pseudo-scattering tracking method in terms of computation time, are also presented. (authors)
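
    Pseudo-scattering (also known as Woodcock or delta tracking) samples free paths against a constant majorant cross-section and rejects "virtual" collisions, so the tracking never has to compute surface crossings in a heterogeneous geometry. A hedged one-dimensional sketch (not MORET's implementation):

```python
import math, random

def woodcock_transmission(regions, thickness, n=100_000, seed=4):
    """Pseudo-scattering (Woodcock / delta) tracking sketch: sample free paths
    against a constant majorant cross-section sigma_max, then accept each
    collision as real with probability sigma(z)/sigma_max, otherwise treat it
    as a virtual collision and keep going.  `regions` is a list of
    (z_end, sigma) pairs in increasing z order.  For a purely absorbing medium
    the returned transmission must match exp(-integral of sigma dz)."""
    rng = random.Random(seed)
    sigma_max = max(s for _, s in regions)

    def sigma_at(z):
        for z_end, s in regions:
            if z < z_end:
                return s
        return 0.0

    transmitted = 0
    for _ in range(n):
        z = 0.0
        while True:
            z += -math.log(rng.random()) / sigma_max    # path against majorant
            if z >= thickness:
                transmitted += 1
                break
            if rng.random() < sigma_at(z) / sigma_max:  # real collision
                break
    return transmitted / n

# Two-layer absorber: sigma = 0.5 for z < 1, sigma = 2.0 for 1 <= z < 2
t = woodcock_transmission([(1.0, 0.5), (2.0, 2.0)], thickness=2.0)
print(t)  # ~ exp(-(0.5 + 2.0)) = 0.082
```

    The efficiency trade-off tested in the paper is visible here: a majorant far above the local cross-section (as in a pebble bed with small heavy inclusions) produces many virtual collisions per real one.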

  7. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
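
    The fission source iteration being accelerated here is, at its core, a power iteration on the fission operator. A deterministic fission-matrix sketch (with a hypothetical two-region matrix, not from the thesis):

```python
def power_iteration(F, n_iter=200):
    """Fission-matrix power iteration sketch: F[i][j] is the expected number of
    fission neutrons born in region i per fission neutron born in region j.
    Iterating s <- F s / k converges to the fundamental-mode fission source,
    with k-eff the dominant eigenvalue -- the convergence that the accelerated
    schemes in this thesis speed up."""
    n = len(F)
    s = [1.0 / n] * n
    k = 1.0
    for _ in range(n_iter):
        s_new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k = sum(s_new)           # source is normalised, so the sum is k-eff
        s = [x / k for x in s_new]
    return k, s

# Hypothetical 2-region fission matrix
k_eff, source = power_iteration([[0.6, 0.3], [0.3, 0.6]])
print(k_eff)  # dominant eigenvalue of the matrix: 0.9
```

    The convergence rate of this iteration is set by the dominance ratio (second eigenvalue over first), which is exactly why high-dominance-ratio systems converge slowly and motivate the acceleration methods above.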

  8. Monte Carlo methods for particle transport

    CERN Document Server

    Haghighat, Alireza

    2015-01-01

    The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...

  9. Monte Carlo Implementation of Polarized Hadronization

    CERN Document Server

    Matevosyan, Hrayr H; Thomas, Anthony W

    2016-01-01

    We study polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading-twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for the elementary $q \to q'+h$ transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank two. Further, we demonstrate that all the current spectator-type model calculations of the leading-twist quark-to-quark TMD SFs violate the positivity constraints, and propose a quark-model-based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of ...

  10. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  11. Clinical implementation of full Monte Carlo dose calculation in proton beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Paganetti, Harald; Jiang, Hongyu; Parodi, Katia; Slopsema, Roelf; Engelsman, Martijn [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)

    2008-09-07

    The goal of this work was to facilitate the clinical use of Monte Carlo proton dose calculation to support routine treatment planning and delivery. The Monte Carlo code Geant4 was used to simulate the treatment head setup, including a time-dependent simulation of modulator wheels (for broad-beam modulation) and magnetic field settings (for beam scanning). Any patient-field-specific setup can be modeled according to the treatment control system of the facility. The code was benchmarked against phantom measurements. Using a simulation of the ionization chamber reading in the treatment head allows the Monte Carlo dose to be specified in absolute units (Gy per ionization chamber reading). Next, the capability of reading CT data was implemented into the Monte Carlo code to model patient anatomy. To allow time-efficient dose calculation, the standard Geant4 tracking algorithm was modified. Finally, a software link from the Monte Carlo dose engine to the patient database and the commercial planning system was established to allow data exchange, thus completing the implementation of the proton Monte Carlo dose calculation engine ('DoC++'). Monte Carlo re-calculated plans are a valuable tool to revisit decisions in the planning process. Identification of clinically significant differences between Monte Carlo and pencil-beam-based dose calculations may also drive improvements of current pencil-beam methods. As an example, four patients (29 fields in total) with tumors in the head and neck regions were analyzed. Differences between the pencil-beam algorithm and Monte Carlo were identified in particular near the end of range, due both to dose degradation and to overall differences in range prediction caused by bony anatomy in the beam path. Further, Monte Carlo reports dose-to-tissue, whereas the planning system reports dose-to-water. Our implementation is tailored to a specific Monte Carlo code and the treatment planning system XiO (Computerized Medical ...

  12. Monte Carlo methods beyond detailed balance

    NARCIS (Netherlands)

    Schram, Raoul D.; Barkema, Gerard T.

    2015-01-01

    Monte Carlo algorithms are nearly always based on the concepts of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed-balance algorithms, starting from a conventional algorithm that satisfies detailed balance.
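
    For contrast with the non-detailed-balance algorithms studied here, a standard Metropolis chain satisfies detailed balance by construction: pi(i) P(i -> j) = pi(j) P(j -> i). A minimal sketch over a discrete state space:

```python
import random

def metropolis_chain(weights, n_steps=200_000, seed=5):
    """Metropolis sampler over states {0..K-1} with target distribution
    proportional to `weights`.  The symmetric uniform proposal plus the
    min(1, pi_j/pi_i) acceptance rule satisfies detailed balance, so the
    empirical visit frequencies converge to weights / sum(weights)."""
    rng = random.Random(seed)
    k = len(weights)
    state = 0
    counts = [0] * k
    for _ in range(n_steps):
        proposal = rng.randrange(k)                    # symmetric proposal
        if rng.random() < min(1.0, weights[proposal] / weights[state]):
            state = proposal
        counts[state] += 1
    return [c / n_steps for c in counts]

freq = metropolis_chain([1.0, 2.0, 3.0])
print(freq)  # close to [1/6, 2/6, 3/6]
```

    Non-detailed-balance algorithms keep the same stationary distribution while breaking the pairwise flow condition, typically to suppress the diffusive back-and-forth visible in chains like this one.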

  13. Implementing the Generalised Hybrid Monte-Carlo Algorithm

    CERN Document Server

    Sroczynski, Z; Pickles, S M

    1998-01-01

    UKQCD's dynamical fermion project uses the Generalised Hybrid Monte-Carlo (GHMC) algorithm to generate QCD gauge configurations for a non-perturbatively O(a) improved Wilson action with two degenerate sea-quark flavours. We describe our implementation of the algorithm on the Cray-T3E, concentrating on issues arising from code verification and performance optimisation, such as parameter tuning, reversibility, the effect of precision, the choice of matrix inverter and the behaviour of different molecular dynamics integration schemes.
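
    A plain (non-generalised) hybrid Monte Carlo step — momentum refresh, leapfrog integration, Metropolis accept/reject on the energy change — can be sketched as follows. This is illustrative only; the target and parameters are ours, not UKQCD's lattice action:

```python
import math, random

def hmc_sample(grad_u, u, x0, n_samples=2000, n_leapfrog=10, eps=0.2, seed=10):
    """Hybrid Monte Carlo sketch for a 1D target exp(-U(x)): refresh a Gaussian
    momentum, integrate Hamiltonian dynamics with the leapfrog scheme, then
    accept/reject on the change in total energy -- the Metropolis step that
    keeps the algorithm exact despite the integration error (and whose
    reversibility the paper above must verify in finite precision)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)
        x_new, p_new = x, p
        p_new -= 0.5 * eps * grad_u(x_new)           # leapfrog: half momentum step
        for _ in range(n_leapfrog - 1):
            x_new += eps * p_new
            p_new -= eps * grad_u(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad_u(x_new)           # final half momentum step
        dh = (u(x_new) + 0.5 * p_new ** 2) - (u(x) + 0.5 * p ** 2)
        if dh < 0 or rng.random() < math.exp(-dh):   # Metropolis accept/reject
            x = x_new
        samples.append(x)
    return samples

# Standard normal target: U(x) = x^2 / 2
xs = hmc_sample(lambda x: x, lambda x: 0.5 * x * x, 0.0)
print(sum(xs) / len(xs))  # mean ~ 0
```

    The generalised variant partially retains the momentum between trajectories instead of refreshing it completely, which is one of the tunable parameters discussed above.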

  14. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensional ...

  15. The Moment Guided Monte Carlo Method

    OpenAIRE

    Degond, Pierre; Dimarco, Giacomo; Pareschi, Lorenzo

    2009-01-01

    In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which makes it possible to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non-equilibrium term. The basic idea on which the method relies consists in guiding the particle positions and velocities through moment equations so that the concurrent solution of the moment and kinetic models furnishes the same macroscopic quantities.

  16. A general framework for implementing NLO calculations in shower Monte Carlo programs. The POWHEG BOX

    Energy Technology Data Exchange (ETDEWEB)

    Alioli, Simone [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Nason, Paolo [INFN, Milano-Bicocca (Italy); Oleari, Carlo [INFN, Milano-Bicocca (Italy); Milano-Bicocca Univ. (Italy); Re, Emanuele [Durham Univ. (United Kingdom). Inst. for Particle Physics Phenomenology

    2010-02-15

    In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to illustrate the needed theoretical ingredients, to give a view of how the code is organized, and to describe what a user should provide in order to use it. (orig.)

  17. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo (GHMC) method. The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by the generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  18. Monte Carlo methods for preference learning

    DEFF Research Database (Denmark)

    Viappiani, P.

    2012-01-01

    Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
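
    One simple Monte Carlo approach in this spirit is self-normalised importance sampling from the prior: weight prior samples of the utility parameter by the likelihood of the observed user responses. A toy sketch with a Gaussian prior and a single Gaussian response likelihood (our example, not the paper's model):

```python
import math, random

def posterior_mean_mc(prior_samples, likelihood):
    """Monte Carlo estimate of a posterior expectation for utility elicitation:
    weight samples drawn from the prior by the likelihood of the observed
    responses (self-normalised importance sampling)."""
    weights = [likelihood(u) for u in prior_samples]
    z = sum(weights)
    return sum(u * w for u, w in zip(prior_samples, weights)) / z

rng = random.Random(9)
samples = [rng.gauss(0.0, 1.0) for _ in range(100_000)]     # prior N(0, 1)
lik = lambda u: math.exp(-0.5 * (u - 1.0) ** 2)             # one noisy response
print(posterior_mean_mc(samples, lik))  # ~ 0.5 (posterior is N(0.5, 0.5))
```

    The same weighted sample set supports real-time updates: a new response simply multiplies each weight by its likelihood.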

  19. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the ...

  20. by means of FLUKA Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Ermis Elif Ebru

    2015-01-01

    Calculations of the gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close agreement with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
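
    The relationship being exploited — narrow-beam transmission follows the Beer-Lambert law — can be illustrated by a toy simulation that recovers an assumed attenuation coefficient from simulated transmission. This is a stand-in for the idea, not FLUKA itself:

```python
import math, random

def estimate_mu(mu_true, thickness=1.0, n=200_000, seed=12):
    """Toy Monte Carlo extraction of an attenuation coefficient: simulate
    narrow-beam photon transmission through a slab (exponential free paths
    with coefficient mu_true) and invert the Beer-Lambert law,
    mu = -ln(T) / x, from the transmitted fraction T."""
    rng = random.Random(seed)
    transmitted = sum(
        1 for _ in range(n) if -math.log(rng.random()) / mu_true > thickness
    )
    return -math.log(transmitted / n) / thickness

print(estimate_mu(0.3))  # recovers ~ 0.3
```

    Dividing the recovered linear coefficient by the material density gives the mass attenuation coefficient tabulated by NIST.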

  1. The Moment Guided Monte Carlo Method

    CERN Document Server

    Degond, Pierre; Pareschi, Lorenzo

    2009-01-01

    In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which makes it possible to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non-equilibrium term. The basic idea on which the method relies consists in guiding the particle positions and velocities through moment equations so that the concurrent solution of the moment and kinetic models furnishes the same macroscopic quantities.

  2. Reactor perturbation calculations by Monte Carlo methods

    International Nuclear Information System (INIS)

    Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion, a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations, control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF9 digital computer. (author)
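
    The correlation technique favoured here can be illustrated with common random numbers: score the perturbed and unperturbed systems on the same histories, so the small difference (the worth) is estimated directly rather than by differencing two independently noisy results. A toy sketch with simple integrands standing in for the reactor model:

```python
import math, random

def worth_correlated(f_base, f_pert, n=50_000, seed=11):
    """Correlated-sampling sketch: evaluate the unperturbed and perturbed
    scores on the SAME random histories and average the pointwise difference.
    The difference estimator's variance is governed by how little the
    perturbation changes each history, not by the size of either score."""
    rng = random.Random(seed)
    diff = 0.0
    for _ in range(n):
        x = rng.random()
        diff += f_pert(x) - f_base(x)
    return diff / n

# Toy 'reactivity' integrands on [0, 1] differing by a small perturbation
base = lambda x: math.exp(-x)
pert = lambda x: math.exp(-1.02 * x)
print(worth_correlated(base, pert))  # ~ -0.0053 (the exact integral difference)
```

    Estimating the two integrals with independent samples of the same size would leave the ~0.005 difference buried in noise of a few parts in a thousand of each integral.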

  3. Monte Carlo method in radiation transport problems

    International Nuclear Information System (INIS)

    In neutral-particle radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving such a problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented, and the necessity of biasing the play is demonstrated, followed by a biased simulation. Finally, current developments (such as the rewriting of programs) are presented; two of the motivating factors are the advent of vector computation and the transport of photons and neutrons in media containing voids.
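
    Biasing the play while weighting the score leaves the estimate unbiased but can cut the variance dramatically. A sketch for a deep-penetration probability (our toy example, not the lecture's transport code): sample the free path from a stretched exponential and multiply each score by the likelihood ratio p/q.

```python
import math, random

def penetration_naive(depth, n=100_000, seed=7):
    """Analog (unbiased play) estimate of the probability that a unit-rate
    exponential free path exceeds `depth` -- e.g. uncollided penetration
    through a thick shield.  Very few histories ever score."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if -math.log(rng.random()) > depth) / n

def penetration_biased(depth, n=100_000, seed=7):
    """Biased play with score weighting: sample the path from a stretched
    exponential with mean `depth` (density q(x) = exp(-x/depth)/depth) and
    weight each score by p(x)/q(x), keeping the estimator unbiased while
    most histories now reach the scoring region."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = -depth * math.log(rng.random())
        if x > depth:
            total += depth * math.exp(-x) / math.exp(-x / depth)  # weight p/q
    return total / n

print(penetration_naive(10.0), penetration_biased(10.0))  # both ~ exp(-10) = 4.5e-5
```

    With `depth = 10` the analog estimator sees only a handful of scoring histories in 100,000, while the biased one achieves roughly percent-level relative error from the same number of samples.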

  4. Introduction to Monte-Carlo method

    International Nuclear Information System (INIS)

    We first recall some well-known facts about random variables and sampling. Then we define the Monte Carlo method for the case where one wants to compute a given integral. Afterwards, we turn to discrete Markov chains, for which we define random walks, and apply them to finite difference approximations of diffusion equations. Finally we consider Markov chains with continuous state (but discrete time), transition probabilities, and random walks, which form the main part of this work. The applications are: diffusion and advection equations, and the linear transport equation with scattering.
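
    The basic use of Monte Carlo to compute a given integral, as defined in such introductions, fits in a few lines: average the integrand at uniformly sampled points.

```python
import random

def mc_integral(f, n=100_000, seed=6):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]: the sample
    mean of f at uniform points is unbiased, and the statistical error shrinks
    like 1/sqrt(n) regardless of dimension."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

print(mc_integral(lambda x: x * x))  # ~ 1/3
```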

  5. A new method for commissioning Monte Carlo treatment planning systems

    Science.gov (United States)

    Aljarrah, Khaled Mohammed

    2005-11-01

    The Monte Carlo method is an accurate method for solving numerical problems in many fields. It has been used for accurate radiation dose calculation in the radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii in a water phantom, with measurement data. Different cost functions were studied to choose the appropriate function for the data comparison, and the beam parameters were determined using this method. Under the assumption that linacs of the same type are identical in their geometries and differ only in the initial phase-space parameters, the results of this method can serve as source data for commissioning other machines of the same type.
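
    The grid-comparison idea reduces to evaluating a cost function for every candidate (energy, radius) pair and keeping the minimiser. A hedged sketch with a stand-in model — the function names, the toy dose profile, and the sum-of-squares cost are all our own illustration, not the thesis's data:

```python
def best_beam_parameters(measured, model, energies, radii):
    """Grid-search sketch: compute a cost for every (energy, radius) pair and
    return the pair whose model dose profile best matches the measured one.
    `model(e, r)` stands in for a precomputed Monte Carlo dose distribution;
    the cost here is a simple sum of squared differences, one of several cost
    functions one might study."""
    def cost(e, r):
        calc = model(e, r)
        return sum((c - m) ** 2 for c, m in zip(calc, measured))
    return min(((e, r) for e in energies for r in radii), key=lambda p: cost(*p))

# Toy stand-in model: a dose profile parameterised by energy and radius
toy = lambda e, r: [e * d + r for d in range(5)]
target = toy(6.0, 1.5)
print(best_beam_parameters(target, toy, [4.0, 6.0, 8.0], [1.0, 1.5, 2.0]))  # (6.0, 1.5)
```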

  6. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  7. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    Science.gov (United States)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2016-03-01

    This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.

  8. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    KAUST Repository

    Hoel, Hakon

    2014-01-01

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDEs). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single-level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity, but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single-level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single-level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
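
    A minimal uniform-time-step MLMC sketch for an SDE with a known mean (geometric Brownian motion, crude fixed sample allocation) shows the telescoping structure that the adaptive algorithms above refine; it is illustrative only, not the adaptive method analysed in the paper:

```python
import math, random

def mlmc_gbm(n_levels=5, n0=4, m0=2000, T=1.0, mu=0.05, sigma=0.2, x0=1.0, seed=8):
    """Multilevel Monte Carlo sketch for E[X_T] of dX = mu X dt + sigma X dW:
    estimate the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    coupling fine and coarse Euler-Maruyama paths via shared Brownian
    increments so the level differences have small variance."""
    rng = random.Random(seed)

    def level_estimate(level, n_samples):
        n_fine = n0 * 2 ** level
        dt = T / n_fine
        total = 0.0
        for _ in range(n_samples):
            xf = xc = x0
            dw_prev = 0.0
            for step in range(n_fine):
                dw = rng.gauss(0.0, math.sqrt(dt))
                xf += mu * xf * dt + sigma * xf * dw          # fine Euler step
                if level > 0:
                    if step % 2 == 0:
                        dw_prev = dw
                    else:  # coarse step uses the sum of two fine increments
                        xc += mu * xc * 2 * dt + sigma * xc * (dw_prev + dw)
            total += xf - (xc if level > 0 else 0.0)
        return total / n_samples

    # crude fixed allocation: fewer samples on the finer (costlier) levels
    return sum(level_estimate(l, max(m0 // 2 ** l, 100)) for l in range(n_levels))

print(mlmc_gbm())  # ~ exp(0.05) = 1.051, the exact mean x0 * exp(mu * T)
```

    The optimal MLMC sample allocation balances variance against cost per level; the adaptive method above additionally refines the time mesh non-uniformly where the path demands it.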

  9. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    Science.gov (United States)

    He, Tongming Tony

    In IMRT inverse planning, inaccurate dose calculations and limitations in optimization algorithms introduce both systematic and convergence errors into treatment plans. The goal of this work is to practically implement a Monte Carlo based inverse planning model for clinical IMRT. The intention is to minimize both types of error in inverse planning and obtain treatment plans with better clinical accuracy than non-Monte Carlo based systems. The strategy is to calculate the dose matrices of small beamlets with a Monte Carlo based method. The beamlet intensities are then optimized on the calculated dose data using an optimization algorithm that can escape local minima and prevent possible premature convergence. The MCNP 4B Monte Carlo code is improved to perform fast particle transport and dose tallying in lattice cells by adopting a selective transport and tallying algorithm. Efficient dose matrix calculation for small beamlets is made possible by adopting a scheme that allows concurrent calculation of multiple beamlets of a single port. A finite-sized point source (FSPS) beam model is introduced for easy and accurate beam modeling. A DVH based objective function and a parallel platform based algorithm are developed for the optimization of intensities. The calculation accuracy of the improved MCNP code and FSPS beam model is validated by dose measurements in phantoms; agreement better than 1.5% or 0.2 cm has been achieved. Applications of the implemented model to clinical cases of brain, head/neck, lung, spine, pancreas and prostate have demonstrated the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Dose distributions of selected treatment plans from a commercial non-Monte Carlo based system are evaluated in comparison with Monte Carlo based calculations. Systematic errors of up to 12% in tumor doses and up to 17% in critical structure doses have been observed. The clinical importance of Monte Carlo based ...

  10. Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia

    Energy Technology Data Exchange (ETDEWEB)

    Granero Cabanero, D.

    2015-07-01

    The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation mainly reviews the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding barrier calculations, or obtaining dose distributions around applicators. (Author)

  11. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. Forecasting rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
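
    The importance-sampling idea the book develops can be illustrated on the simplest possible rare event, the tail probability P(Z > 4) of a standard normal; the tilted proposal N(4, 1) is an illustrative choice, not a recipe taken from the book:

```python
import numpy as np
from math import erf, sqrt

def naive_mc(threshold, n, rng):
    """Plain Monte Carlo: virtually every sample misses a ~3e-5 event."""
    return np.mean(rng.normal(size=n) > threshold)

def importance_mc(threshold, n, rng):
    """Importance sampling: draw from N(threshold, 1), which hits the event
    about half the time, and correct with the likelihood ratio
    phi(x) / phi(x - threshold) = exp(threshold^2 / 2 - threshold * x)."""
    x = rng.normal(loc=threshold, size=n)
    w = np.exp(threshold ** 2 / 2.0 - threshold * x)
    return np.mean(w * (x > threshold))

rng = np.random.default_rng(1)
exact = 0.5 * (1.0 - erf(4.0 / sqrt(2.0)))  # P(Z > 4) ≈ 3.17e-5
print(naive_mc(4.0, 10_000, rng))           # usually 0.0: the event is never sampled
print(importance_mc(4.0, 10_000, rng))      # typically within a few percent of exact
```

    At this sample size the naive estimator is useless, while the tilted estimator has a relative standard error of roughly two percent, which is exactly the kind of efficiency gain the book quantifies.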

  12. Advanced computational methods for nodal diffusion, Monte Carlo, and S(sub N) problems

    Science.gov (United States)

    Martin, W. R.

    1993-01-01

    This document describes progress on five efforts for improving the effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded better predictions than conventional Monte Carlo with splitting and Russian roulette for both 1-D and 2-D x-y geometries. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.

  13. Recent Developments in Quantum Monte Carlo: Methods and Applications

    Science.gov (United States)

    Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.

    2007-12-01

    The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.

  14. Combinatorial nuclear level density by a Monte Carlo method

    OpenAIRE

    Cerf, N.

    1993-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning t...

  15. Monte Carlo method for solving a parabolic problem

    Directory of Open Access Journals (Sweden)

    Tian Yi

    2016-01-01

    In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method with the Monte Carlo method: we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve that linear system. To illustrate the usefulness of this technique, we apply it to some test problems.
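
    The second stage, solving a sparse linear system by Monte Carlo, is classically done with random walks on the Neumann series of a fixed-point form x = Hx + b. A minimal sketch follows, assuming the spectral radius of H is below one (the Crank-Nicolson system would first have to be recast in this form, e.g. by Jacobi splitting; the 2x2 matrix below is a made-up toy, not from the paper):

```python
import numpy as np

def mc_solve_component(H, b, i, n_walks, rng, cutoff=1e-4):
    """Estimate component i of the solution of x = H x + b by sampling the
    Neumann series x = sum_k H^k b with random walks over the indices.
    Assumes spectral radius of H below one and no all-zero row. Transitions
    are drawn proportional to |H[row, col]|; the walk weight absorbs signs
    and normalizations, and a walk terminates once its weight drops below
    `cutoff` (a small, tunable truncation bias)."""
    H = np.asarray(H, dtype=float)
    b = np.asarray(b, dtype=float)
    p = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, b[i]
        while abs(weight) > cutoff:
            nxt = rng.choice(len(b), p=p[state])
            weight *= H[state, nxt] / p[state, nxt]
            state = nxt
            score += weight * b[state]
        total += score
    return total / n_walks

H = [[0.1, 0.2], [0.3, 0.1]]
b = [1.0, 2.0]
rng = np.random.default_rng(0)
x0 = mc_solve_component(H, b, 0, 10_000, rng)
print(x0)  # exact value (I - H)^(-1) b gives x[0] = 1.3/0.75 ≈ 1.7333
```

    A practical attraction of this estimator, relevant to large sparse Crank-Nicolson systems, is that single solution components can be estimated without solving the whole system.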

  16. Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules

    CERN Document Server

    Lester, William A; Reynolds, PJ

    1994-01-01

    This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n

  17. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  18. Inference in Kingman's Coalescent with Particle Markov Chain Monte Carlo Method

    OpenAIRE

    Chen, Yifei; Xie, Xiaohui

    2013-01-01

    We propose a new algorithm to do posterior sampling of Kingman's coalescent, based upon the Particle Markov Chain Monte Carlo methodology. Specifically, the algorithm is an instantiation of the Particle Gibbs Sampling method, which alternately samples coalescent times conditioned on coalescent tree structures, and tree structures conditioned on coalescent times via the conditional Sequential Monte Carlo procedure. We implement our algorithm as a C++ package, and demonstrate its utility via a ...

  19. On the Markov Chain Monte Carlo (MCMC) method

    Indian Academy of Sciences (India)

    Rajeeva L Karandikar

    2006-04-01

    Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
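
    A minimal example of the method is random-walk Metropolis sampling from a density specified only up to its normalizing constant (the target and the proposal scale below are illustrative choices, not taken from the article):

```python
import numpy as np

def metropolis(log_target, x0, n_samples, step=2.4, seed=0):
    """Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
    with probability min(1, pi(x') / pi(x)). Only an unnormalized (log)
    density is needed, which is the whole point of MCMC. The default
    proposal scale is roughly tuned for a unit-variance target."""
    rng = np.random.default_rng(seed)
    x, out = x0, np.empty(n_samples)
    for k in range(n_samples):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        out[k] = x
    return out

# target: N(0, 1) specified only up to its normalizing constant
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000)
print(chain[1000:].mean(), chain[1000:].var())  # ≈ 0.0 and ≈ 1.0 after burn-in
```

    The chain's samples are correlated, so effective sample sizes are smaller than the raw chain length; this is the usual price paid for sampling an indirectly specified distribution.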

  20. A Particle Population Control Method for Dynamic Monte Carlo

    Science.gov (United States)

    Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony

    2014-06-01

    A general particle population control method has been derived from splitting and Russian roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated into Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK) and examples of its use are shown for both super-critical and sub-critical systems.
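
    The particle population comb mentioned above can be sketched as follows (a generic textbook comb, not MCATK's implementation; the particle weights are made up):

```python
import numpy as np

def comb(weights, target_n, rng):
    """Particle population comb: lay target_n equally spaced 'teeth' over
    the cumulative weight, with a single random offset, and keep the
    particle under each tooth with weight W/target_n. Total weight is
    preserved exactly, and the expected number of copies of particle i
    is proportional to weights[i] (heavy particles split, light ones
    are rouletted)."""
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    spacing = total / target_n
    teeth = spacing * (rng.uniform() + np.arange(target_n))
    survivors = np.searchsorted(np.cumsum(w), teeth)
    return survivors, np.full(target_n, spacing)

rng = np.random.default_rng(0)
idx, new_w = comb([0.1, 2.0, 0.5, 0.01, 1.4], 4, rng)
print(idx, new_w)  # 4 surviving particle indices, each with weight total/4
```

    Any particle whose weight exceeds the tooth spacing is guaranteed at least one survivor, which is why the comb controls population size without losing the heavy, important histories.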

  1. Problems in radiation shielding calculations with Monte Carlo methods

    International Nuclear Information System (INIS)

    The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving large shielding systems that include radiation streaming. The Monte Carlo coupling technique was developed to treat such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique for detectors located outside the radiation streaming was still not small enough. To obtain more accurate results for detectors located outside the streaming, and also for a multi-legged-duct streaming problem, a practicable ''Prism Scattering technique'' is proposed in this study. (author)

  2. Monte Carlo methods and applications in nuclear physics

    International Nuclear Information System (INIS)

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs

  3. Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)

    2011-07-01

    This book presents some numerical probabilistic simulation methods together with their convergence rates. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After some recalls of the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. They then develop a chapter on non-asymptotic estimates of Monte Carlo method errors. This chapter recalls the central limit theorem and quantifies its convergence rate. It introduces the Log-Sobolev and concentration inequalities, whose study has developed greatly in recent years. The chapter ends with some variance reduction techniques. In order to rigorously justify the simulation of stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essentials of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence rates of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)

  4. A New Method for the Calculation of Diffusion Coefficients with Monte Carlo

    Science.gov (United States)

    Dorval, Eric

    2014-06-01

    This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.

  5. Application of biasing techniques to the contributon Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Dubi, A.; Gerstl, S.A.W.

    1980-01-01

    Recently, a new Monte Carlo method, called the contributon Monte Carlo method, was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

  6. A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation

    International Nuclear Information System (INIS)

    Full core calculations are very useful and important in reactor physics analysis, especially for computing full core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named Response Matrix Monte Carlo (RMMC), based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward an RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variances of the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)

  7. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of fast critical assembly, core analyses of JMTR, simulation of pulsed neutron experiments, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  8. Computing Greeks with Multilevel Monte Carlo Methods using Importance Sampling

    OpenAIRE

    Euget, Thomas

    2012-01-01

    This paper presents a new efficient way to reduce the variance of an estimator of popular payoffs and Greeks encountered in financial mathematics. The idea is to apply importance sampling with the multilevel Monte Carlo method recently introduced by M.B. Giles. So far, importance sampling has proved successful in combination with the standard Monte Carlo method. We show the efficiency of our approach on the estimation of financial derivative prices and then on the estimation of Greeks (i.e. sensitivitie...

  9. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf

    2010-01-01

    Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applicat

  10. New simpler method of matching NLO corrections with parton shower Monte Carlo

    OpenAIRE

    Jadach, S.; Placzek, W.; Sapeta, S.(CERN PH-TH, CH-1211, Geneva 23, Switzerland); Siodmok, A.; Skrzypek, M.

    2016-01-01

    Next steps in development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC) factorization scheme, which was recently fully defined for the first time. Preliminary numerical results for the Higg...

  11. New simpler method of matching NLO corrections with parton shower Monte Carlo

    CERN Document Server

    Jadach, S; Sapeta, S; Siodmok, A; Skrzypek, M

    2016-01-01

    Next steps in development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC) factorization scheme, which was recently fully defined for the first time. Preliminary numerical results for the Higgs-boson production process are also presented.

  12. Comparison between Monte Carlo method and deterministic method

    International Nuclear Information System (INIS)

    A fast critical assembly consists of a lattice of plates of sodium, plutonium or uranium, resulting in a high inhomogeneity. The inhomogeneity in the lattice should be evaluated carefully to determine the bias factor accurately. Deterministic procedures are generally used for the lattice calculation. To reduce the required calculation time, various one-dimensional lattice models have been developed previously to replace multi-dimensional models. In the present study, calculations are made for a two-dimensional model and results are compared with those obtained with one-dimensional models in terms of the average microscopic cross section of a lattice and diffusion coefficient. Inhomogeneity in a lattice affects the effective cross section and distribution of neutrons in the lattice. The background cross section determined by the method proposed by Tone is used here to calculate the effective cross section, and the neutron distribution is determined by the collision probability method. Several other methods have been proposed to calculate the effective cross section. The present study also applies the continuous energy Monte Carlo method to the calculation. A code based on this method is employed to evaluate several one-dimensional models. (Nogami, K.)

  13. Monte Carlo method application to shielding calculations

    International Nuclear Information System (INIS)

    CANDU spent fuel discharged from the reactor core contains Pu, so two concerns must be stressed: tracking the fuel reactivity in order to prevent critical mass formation, and protecting personnel during spent fuel manipulation. The basic tasks accomplished by shielding calculations in a nuclear safety analysis consist of dose rate calculations to prevent any risk, both for personnel protection and for impact on the environment, during spent fuel manipulation, transport and storage. To perform the photon dose rate calculations, the Monte Carlo MORSE-SGC code incorporated in the SAS4 sequence of the SCALE system was used. The objective of the paper was to obtain the photon dose rates at the spent fuel transport cask wall, in both radial and axial directions. A single spent CANDU fuel bundle was used as the radiation source. All geometrical and material data related to the transport cask were considered according to the shipping cask type B model, whose prototype has been realized and tested in the Institute for Nuclear Research Pitesti. (authors)

  14. Monte Carlo Methods for Tempo Tracking and Rhythm Quantization

    CERN Document Server

    Cemgil, A T; 10.1613/jair.1121

    2011-01-01

    We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulations suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr...
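
    A sequential Monte Carlo method of the kind compared in the paper can be sketched as a bootstrap particle filter on a toy linear-Gaussian state space model (the model and noise levels are illustrative choices, unrelated to the tempo-tracking model):

```python
import numpy as np

def bootstrap_filter(obs, n_particles, q, r, rng):
    """Bootstrap particle filter for the toy model
    x_t = x_{t-1} + N(0, q),  y_t = x_t + N(0, r):
    propagate particles through the dynamics, weight them by the
    observation likelihood, record the posterior mean, then resample."""
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)
        logw = -0.5 * (y - particles) ** 2 / r
        w = np.exp(logw - logw.max())      # numerically stabilized weights
        w /= w.sum()
        means.append(np.dot(w, particles))
        # multinomial resampling back to equal weights
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.0, 0.1, 50))   # hidden random walk
obs = truth + rng.normal(0.0, 0.3, 50)        # noisy observations
est = bootstrap_filter(obs, 2000, q=0.01, r=0.09, rng=rng)
print(np.mean(np.abs(est - truth)))  # typically well below the 0.3 noise level
```

    For switching models like the one in the paper, the particles would also carry the discrete switch variables, but the propagate-weight-resample loop is the same.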

  15. Auxiliary-field quantum Monte Carlo methods in nuclei

    CERN Document Server

    Alhassid, Y

    2016-01-01

    Auxiliary-field quantum Monte Carlo methods enable the calculation of thermal and ground state properties of correlated quantum many-body systems in model spaces that are many orders of magnitude larger than those that can be treated by conventional diagonalization methods. We review recent developments and applications of these methods in nuclei using the framework of the configuration-interaction shell model.

  16. Quantum Monte Carlo diagonalization method as a variational calculation

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1997-05-01

    A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)

  17. Observations on variational and projector Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Umrigar, C. J., E-mail: CyrusUmrigar@cornell.edu [Laboratory of Atomic and Solid State Physics, Cornell University, Ithaca, New York 14853 (United States)

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.
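
    The variational Monte Carlo side can be illustrated with the standard 1D harmonic oscillator textbook exercise (not an example from the paper): sample |psi|^2 by Metropolis and average the local energy.

```python
import numpy as np

def vmc_energy(alpha, n_steps=20_000, step=1.0, seed=0):
    """Variational Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial wavefunction psi(x) = exp(-alpha x^2).
    Sample |psi|^2 with Metropolis moves and average the local energy
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for _ in range(n_steps):
        prop = x + step * (rng.uniform() - 0.5)
        # acceptance ratio |psi(prop)|^2 / |psi(x)|^2
        if rng.uniform() < np.exp(-2.0 * alpha * (prop ** 2 - x ** 2)):
            x = prop
        energies.append(alpha + x * x * (0.5 - 2.0 * alpha * alpha))
    return np.mean(energies[2000:])

print(vmc_energy(0.5))  # optimal trial function: exactly 0.5, zero variance
print(vmc_energy(0.4))  # expected ≈ 0.51, above the ground-state energy 0.5
```

    The zero-variance property at the exact wavefunction is a simple instance of the interplay between trial function quality and estimator variance discussed in the paper.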

  18. The Monte Carlo method in quantum field theory

    CERN Document Server

    Morningstar, C

    2007-01-01

    This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
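
    The scalar-field example of the lectures can be sketched with local Metropolis updates of a free scalar field on a small periodic 1D lattice (the lattice size, mass, and proposal width below are arbitrary choices, and the free quadratic action is a simplification of the lectures' interacting theory):

```python
import numpy as np

def action_diff(phi, i, new, m2):
    """Change in the lattice action S = sum_i [ (phi[i+1]-phi[i])^2 / 2
    + m2 * phi[i]^2 / 2 ] (periodic boundary) when site i is set to `new`."""
    n = len(phi)
    left, right = phi[(i - 1) % n], phi[(i + 1) % n]
    s = lambda x: 0.5 * ((x - left) ** 2 + (right - x) ** 2 + m2 * x * x)
    return s(new) - s(phi[i])

def metropolis_sweep(phi, m2, delta, rng):
    """One sweep of site-by-site Metropolis updates with weight exp(-S)."""
    for i in range(len(phi)):
        new = phi[i] + delta * (rng.uniform() - 0.5)
        d = action_diff(phi, i, new, m2)
        if d <= 0.0 or rng.uniform() < np.exp(-d):
            phi[i] = new

rng = np.random.default_rng(0)
phi = np.zeros(32)
for _ in range(500):                       # thermalization sweeps
    metropolis_sweep(phi, 1.0, 1.5, rng)
obs = []
for _ in range(3000):                      # measurement sweeps
    metropolis_sweep(phi, 1.0, 1.5, rng)
    obs.append(np.mean(phi * phi))
print(np.mean(obs))  # propagator at zero separation, ≈ 1/sqrt(5) for m2 = 1
```

    Successive sweeps are correlated, which is why the lectures devote attention to autocorrelations and to updating schemes that reduce them.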

  19. Frequency domain optical tomography using a Monte Carlo perturbation method

    Science.gov (United States)

    Yamamoto, Toshihiro; Sakamoto, Hiroki

    2016-04-01

    A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for inverse problems that are ill-posed due to cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.

  20. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)

    2014-06-15

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the

  1. Library Design in Combinatorial Chemistry by Monte Carlo Methods

    OpenAIRE

    Falcioni, Marco; Deem, Michael W.

    2000-01-01

    Strategies for searching the space of variables in combinatorial chemistry experiments are presented, and a random energy model of combinatorial chemistry experiments is introduced. The search strategies, derived by analogy with the computer modeling technique of Monte Carlo, effectively search the variable space even in combinatorial chemistry experiments of modest size. Efficient implementations of the library design and redesign strategies are feasible with current experimental capabilities.

  2. Applications of quantum Monte Carlo methods in condensed systems

    CERN Document Server

    Kolorenc, Jindrich

    2010-01-01

    The quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schroedinger equation for atoms, molecules, solids and a variety of model systems. The algorithms are intrinsically parallel and are able to take full advantage of the present-day high-performance computing systems. This review article concentrates on the fixed-node/fixed-phase diffusion Monte Carlo method with emphasis on its applications to electronic structure of solids and other extended many-particle systems.

  3. Monte Carlo Form-Finding Method for Tensegrity Structures

    Science.gov (United States)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  4. Extending the alias Monte Carlo sampling method to general distributions

    International Nuclear Information System (INIS)

    The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
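    For the original discrete case, the alias tables can be built with Vose's variant of the construction. This is a generic sketch of the discrete alias method only (not the paper's continuous extension), with illustrative function names:

```python
import random

def build_alias(probs):
    """Build Vose's alias tables for O(1) sampling from a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l       # bin s keeps scaled[s], donates rest to l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                    # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng):
    i = rng.randrange(len(prob))               # pick a bin uniformly
    return i if rng.random() < prob[i] else alias[i]  # one comparison, one lookup

rng = random.Random(1)
prob, alias = build_alias([0.5, 0.3, 0.2])
counts = [0, 0, 0]
for _ in range(100_000):
    counts[alias_sample(prob, alias, rng)] += 1
```

    Each sample costs one uniform draw per table regardless of the number of bins, which is the "speed of equally probable bins" the abstract refers to.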

  5. MOSFET gate current modelling using the Monte Carlo method

    OpenAIRE

    Voves, J.; Vesely, J.

    1988-01-01

    The new technique for determining the probability of hot-electron travel through the gate oxide is presented. The technique is based on the Monte Carlo method and is used in MOSFET gate current modelling. The calculated values of gate current are compared with experimental results from direct measurements on MOSFET test chips.

  6. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part I: Model description and application to the La Merced site

    Directory of Open Access Journals (Sweden)

    F. M. San Martini

    2006-01-01

    The equilibrium inorganic aerosol model ISORROPIA was embedded in a Markov Chain Monte Carlo algorithm to develop a powerful tool to analyze aerosol data and predict gas-phase concentrations where these are unavailable. The method directly incorporates measurement uncertainty and prior knowledge, and provides a formal framework for combining measurements of different quality. The method was applied to particle- and gas-phase precursor observations taken at La Merced during the Mexico City Metropolitan Area (MCMA) 2003 Field Campaign and served to discriminate between diverging gas-phase observations of ammonia and to predict gas-phase concentrations of hydrochloric acid. The model reproduced observations of particle-phase ammonium, nitrate, and sulfate well. The most likely concentrations of ammonia were found to vary between 4 and 26 ppbv, while the range for nitric acid was 0.1 to 55 ppbv. During periods where the aerosol chloride observations were consistently above the detection limit, the model was able to reproduce the aerosol chloride observations well and predicted that the most likely gas-phase hydrochloric acid concentration varied between 0.4 and 5 ppbv. Despite the high ammonia concentrations observed and predicted by the model, when the aerosols were assumed to be in the efflorescence branch they are predicted to be acidic (pH ≈ 3).

  7. Analysis of the uranium price predicted to 24 months, implementing neural networks and the Monte Carlo method like predictive tools; Analisis del precio del uranio pronosticado a 24 meses, implementando redes neuronales y el metodo de Monte Carlo como herramientas predictivas

    Energy Technology Data Exchange (ETDEWEB)

    Esquivel E, J.; Ramirez S, J. R.; Palacios H, J. C., E-mail: jaime.esquivel@fi.uaemex.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2011-11-15

    The present work shows predicted uranium prices obtained using a neural network. Predicting the financial indexes of an energy resource allows budgetary measures to be established, as well as the medium-term costs of the resource. Uranium is one of the main energy-generating fuels and, as such, its price figures prominently in financial analyses; predictive methods are therefore used to obtain an outline of the financial behaviour it will exhibit over a given period. In this study, two methodologies are used to predict the uranium price: the Monte Carlo method and neural networks. These methods allow monthly cost indexes to be predicted for a two-year period, starting from the second bimester of 2011. The prediction uses uranium costs recorded since 2005. (Author)

  8. Implementation of variance-reduction techniques for Monte Carlo nuclear logging calculations with neutron sources

    NARCIS (Netherlands)

    Maucec, M

    2005-01-01

    Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented.

  9. A separable shadow Hamiltonian hybrid Monte Carlo method

    Science.gov (United States)

    Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.

    2009-11-01

    Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
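    The basic HMC move that S2HMC builds on can be sketched for a single variable. This is a generic HMC sketch (not code from PROTOMOL or MDL), and the standard-normal target is an illustrative assumption:

```python
import math
import random

def leapfrog(x, p, grad_logp, eps, n_steps):
    """Leapfrog/Verlet integration of the fictitious Hamiltonian dynamics."""
    p += 0.5 * eps * grad_logp(x)              # initial half kick
    for i in range(n_steps):
        x += eps * p                           # drift
        if i < n_steps - 1:
            p += eps * grad_logp(x)            # full kick between drifts
    p += 0.5 * eps * grad_logp(x)              # final half kick
    return x, p

def hmc(logp, grad_logp, x0, eps, n_steps, n_samples, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                # fresh Gaussian momentum
        x_new, p_new = leapfrog(x, p, grad_logp, eps, n_steps)
        # Metropolis test on the total energy H = -log p(x) + p^2/2
        d_h = (-logp(x_new) + 0.5 * p_new ** 2) - (-logp(x) + 0.5 * p ** 2)
        if rng.random() < math.exp(min(0.0, -d_h)):
            x = x_new
        samples.append(x)
    return samples

# Standard normal target; <x^2> should be close to 1
draws = hmc(lambda x: -0.5 * x * x, lambda x: -x, 0.0, 0.3, 10, 20_000)
est = sum(v * v for v in draws) / len(draws)
```

    The acceptance test compensates exactly for the leapfrog discretization error; SHMC and S2HMC instead sample a shadow Hamiltonian that the leapfrog integrator conserves more accurately, which is what lifts the acceptance rate.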

  10. Implementation Method of Stable Model

    Directory of Open Access Journals (Sweden)

    Shasha Wu

    2008-01-01

    Software Stability Modeling (SSM) is a promising software development methodology based on object-oriented programming to achieve model-level stability and reusability. Among the three critical categories of objects proposed by SSM, the business objects play a critical role in connecting the stable problem essentials (enduring business themes) and the unstable object implementations (industry objects). The business objects are especially difficult to implement and often raise confusion in the implementation because of their unique characteristics: externally stable and internally unstable. The implementation and code-level stability is not the major concern. How to implement the objects of a stable model through object-oriented programming without losing its stability is a big challenge in real software development. In this paper, we propose new methods to realize the business objects in the implementation of a stable model. We also rephrase the definition of the business objects from the implementation perspective, in the hope that the new description can help software developers adopt and implement stable models more easily. Finally, we describe the implementation of a stable model for a balloon rental resource management scope to illustrate the advantages of the proposed method.

  11. Monte Carlo methods for pricing financial options

    Indian Academy of Sciences (India)

    N Bolia; S Juneja

    2005-04-01

    Pricing financial options is amongst the most important and challenging problems in the modern financial industry. Except in the simplest cases, the prices of options do not have a simple closed form solution and efficient computational methods are needed to determine them. Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the underlying space of assets has a large dimensionality, as the performance of other numerical methods typically suffer from the ‘curse of dimensionality’. However, even Monte-Carlo techniques can be quite slow as the problem-size increases, motivating research in variance reduction techniques to increase the efficiency of the simulations. In this paper, we review some of the popular variance reduction techniques and their application to pricing options. We particularly focus on the recent Monte-Carlo techniques proposed to tackle the difficult problem of pricing American options. These include: regression-based methods, random tree methods and stochastic mesh methods. Further, we show how importance sampling, a popular variance reduction technique, may be combined with these methods to enhance their effectiveness. We also briefly review the evolving options market in India.
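    As a concrete illustration of plain Monte Carlo pricing combined with a simple variance reduction technique, the following sketch prices a European call under Black-Scholes dynamics using antithetic variates. The function name and parameter values are illustrative assumptions; the American-option methods the paper reviews are considerably more involved:

```python
import math
import random

def mc_call_price(S0, K, r, sigma, T, n_pairs, seed=0):
    """European call by Monte Carlo under Black-Scholes dynamics,
    using antithetic variates (z and -z) as a simple variance reduction."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_pairs):
        z = rng.gauss(0.0, 1.0)
        for zz in (z, -z):                     # antithetic pair shares one draw
            ST = S0 * math.exp(drift + vol * zz)
            total += max(ST - K, 0.0)          # call payoff
    return disc * total / (2 * n_pairs)

# At-the-money call: S0 = K = 100, r = 5%, sigma = 20%, T = 1 year.
# The Black-Scholes closed form gives about 10.45 for comparison.
price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
```

    Because the payoff is monotone in the driving normal, the antithetic pair is negatively correlated and the averaged estimator has lower variance than two independent paths at the same cost.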

  12. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    Science.gov (United States)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.

  13. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    International Nuclear Information System (INIS)

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife (registered) SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations

  14. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Ma, C-M; Li, J S; Deng, J; Fan, J [Radiation Oncology Department, Fox Chase Cancer Center, Philadelphia, PA (United States)], E-mail: Charlie.ma@fccc.edu

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife (registered) SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head and neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.

  15. Stabilizing Canonical-Ensemble Calculations in the Auxiliary-Field Monte Carlo Method

    CERN Document Server

    Gilbreth, C N

    2014-01-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.

  16. Bayesian Monte Carlo method for nuclear data evaluation

    Science.gov (United States)

    Koning, A. J.

    2015-12-01

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which enables setting the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an EXFOR-based weight.
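    The weighting step of such a Bayesian Monte Carlo scheme can be illustrated on a toy problem: draw model parameters from a prior, then weight each draw by its likelihood against the data. The names, the linear model, and the synthetic "data" below are illustrative assumptions, not TALYS/EXFOR code:

```python
import math
import random

def bmc_weights(prior_samples, model, data, sigma):
    """Normalized likelihood weights exp(-chi^2/2) for prior parameter draws."""
    weights = []
    for theta in prior_samples:
        chi2 = sum((model(theta, x) - y) ** 2 / sigma ** 2 for x, y in data)
        weights.append(math.exp(-0.5 * chi2))
    s = sum(weights)
    return [w / s for w in weights]

# Toy model y = a*x with unknown slope a; synthetic "measurements" made with a = 2
rng = random.Random(0)
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
prior = [rng.uniform(0.0, 4.0) for _ in range(50_000)]
w = bmc_weights(prior, lambda a, x: a * x, data, sigma=0.5)

# Posterior mean of the slope as a weighted average over the prior draws
a_post = sum(wi * ai for wi, ai in zip(w, prior))
```

    The same weighted draws can then feed either a weighted covariance estimate or a weighted random-file collection, mirroring the two output modes described in the abstract.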

  17. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    Science.gov (United States)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.
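    A key ingredient shared with such adaptive schemes is importance sampling of rare events. The following minimal sketch (plain and non-adaptive, with an assumed shifted-Gaussian proposal) estimates a Gaussian tail probability that standard Monte Carlo would resolve poorly:

```python
import math
import random

def tail_prob_is(a, n, seed=0):
    """Importance-sampling estimate of P(X > a) for X ~ N(0,1).
    Samples are drawn from the shifted proposal N(a,1) and reweighted
    by the density ratio phi(x)/phi(x - a) = exp(-a*x + a^2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1.0)                  # proposal centered at the threshold
        if x > a:
            total += math.exp(-a * x + 0.5 * a * a)
    return total / n

# True value: P(X > 4) = 3.1671e-5; naive MC would see ~3 hits in 100,000 draws
p = tail_prob_is(4.0, 100_000)
```

    Multicanonical methods generalize this idea by learning the biasing distribution adaptively instead of fixing it in advance, which is what lets them cover the whole PDF of the performance parameter rather than one tail.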

  18. Non-analogue Monte Carlo method, application to neutron simulation

    International Nuclear Information System (INIS)

    With most traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Nowadays, only the Monte Carlo method offers such possibilities. However, with significant attenuation, the natural (analogue) simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions: these methods require the user to find certain parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; then we show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique where the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision-probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic collisions and for multigroup problems with anisotropic collisions. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without splitting and Russian roulette, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if we add splitting and Russian roulette
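    The splitting and Russian roulette mentioned above can be sketched as a weight-window routine applied to each particle's statistical weight. This is a generic illustration (the parameter names and the choice of survival weight are assumptions, not Tripoli's implementation):

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window check used in biased Monte Carlo transport (sketch):
    low-weight particles play Russian roulette, high-weight ones split.
    Returns the list of surviving particle weights (possibly empty)."""
    survival = 0.5 * (w_low + w_high)          # weight assigned to roulette survivors
    if weight < w_low:
        # Russian roulette: survive with probability weight/survival,
        # so the expected weight is conserved
        if rng.random() < weight / survival:
            return [survival]
        return []                              # particle killed
    if weight > w_high:
        # Splitting: replace by n identical copies of weight/n
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]                            # inside the window: unchanged

# Expected total weight is conserved on average: 100,000 particles of
# weight 0.05 rouletted against the window [0.2, 1.0] should keep ~5000
rng = random.Random(42)
total = sum(w for _ in range(100_000)
            for w in apply_weight_window(0.05, 0.2, 1.0, rng))
```

    Both branches preserve the expected weight, so the biased game leaves all mean tallies unbiased while concentrating computational effort on particles that matter.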

  19. Development of ray tracing visualization program by Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Otani, Takayuki [Japan Atomic Energy Research Inst., Tokyo (Japan); Hasegawa, Yukihiro

    1997-09-01

    Ray tracing is a powerful method to synthesize three-dimensional computer graphics. In conventional ray tracing algorithms, a view point is used as the starting point of ray tracing, from which the rays are tracked up to the light sources through the center points of pixels on the view screen to calculate the intensities of the pixels. This manner, however, makes it difficult to define the configuration of the light source as well as to strictly simulate the reflections of the rays. To resolve these problems, we have developed a new ray tracing method which traces rays from a light source, not from a view point, using the Monte Carlo method, which is widely applied in nuclear fields. Moreover, we applied variance reduction techniques in the program and used the specialized machine (Monte-4) for particle transport Monte Carlo, so that the computational time could be successfully reduced. (author)

  20. Multi-way Monte Carlo Method for Linear Systems

    OpenAIRE

    Wu, Tao; Gleich, David F.

    2016-01-01

    We study the Monte Carlo method for solving a linear system of the form $x = Hx + b$. A sufficient condition for the method to work is $\|H\| < 1$, which greatly limits its usability. We improve this condition by proposing a new multi-way Markov random walk, which is a generalization of the standard Markov random walk. Under our new framework we prove that the necessary and sufficient condition for our method to work is the spectral radius $\rho(H^{+}) < 1$, which is a weaker condition.
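    A minimal version of the standard (single-way) Markov random walk for $x = Hx + b$ looks like the sketch below. The uniform transition probabilities and the stopping probability are illustrative assumptions, and this implements only the classical scheme, not the paper's multi-way generalization:

```python
import random

def mc_solve_component(H, b, i0, n_walks, p_stop=0.3, seed=0):
    """Estimate component i0 of the solution of x = Hx + b by sampling the
    Neumann series x = b + Hb + H^2 b + ... with terminating random walks."""
    n = len(b)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        i, weight, acc = i0, 1.0, b[i0]        # step-0 term is b[i0]
        while rng.random() > p_stop:           # walk survives with prob 1 - p_stop
            j = rng.randrange(n)               # uniform transition i -> j
            # importance weight corrects for the uniform proposal and survival
            weight *= H[i][j] * n / (1.0 - p_stop)
            if weight == 0.0:
                break                          # dead branch contributes nothing
            i = j
            acc += weight * b[i]
        total += acc
    return total / n_walks

# For H = [[0.2, 0.1], [0.0, 0.3]] and b = [1, 1] the exact solution has
# x[0] = x[1] = 1/0.7 = 1.42857...
H = [[0.2, 0.1], [0.0, 0.3]]
b = [1.0, 1.0]
est = mc_solve_component(H, b, 0, 200_000)
```

    The walk weight at step $k$ has expectation $(H^k b)_{i_0}$, so the estimator converges exactly when the Neumann series does, which is where the norm (or spectral-radius) condition enters.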

  1. Dynamical Monte Carlo method for stochastic epidemic models

    CERN Document Server

    Aiello, O E

    2002-01-01

    A new approach to dynamical Monte Carlo methods is introduced to simulate Markovian processes. We apply this approach to formulate and study an epidemic generalized SIRS model. The results are in excellent agreement with the fourth-order Runge-Kutta method in a region of deterministic solution. When local stochastic interactions are introduced, the Runge-Kutta method is not applicable, and we solve and check the model self-consistently with a stochastic version of the Euler method. The results are also analyzed under the herd-immunity concept.
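    A dynamical Monte Carlo step for such a stochastic compartment model is essentially a Gillespie update: draw an exponential waiting time from the total event rate, then pick one event proportionally to its rate. A minimal sketch follows (the rate constants and function names are illustrative assumptions, not the paper's parameters):

```python
import random

def sirs_gillespie(S, I, R, beta, gamma, xi, t_max, seed=0):
    """Dynamical (Gillespie-type) Monte Carlo for a stochastic SIRS model:
    infection S+I -> 2I at rate beta*S*I/N, recovery I -> R at rate gamma*I,
    loss of immunity R -> S at rate xi*R."""
    rng = random.Random(seed)
    N, t = S + I + R, 0.0
    while t < t_max and I > 0:
        r_inf = beta * S * I / N
        r_rec = gamma * I
        r_los = xi * R
        r_tot = r_inf + r_rec + r_los
        t += rng.expovariate(r_tot)            # exponential waiting time
        u = rng.random() * r_tot               # choose event by its rate
        if u < r_inf:
            S, I = S - 1, I + 1
        elif u < r_inf + r_rec:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1
    return S, I, R

# 1000 individuals, 10 initially infected; population size is conserved
S, I, R = sirs_gillespie(990, 10, 0, beta=0.3, gamma=0.1, xi=0.05, t_max=200.0)
```

    Because every event changes exactly one pair of compartments, the trajectory is an exact sample of the underlying Markov jump process, with no time-discretization error.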

  2. Calculations of pair production by Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Bottcher, C.; Strayer, M.R.

    1991-01-01

    We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.

  3. Monte Carlo methods and applications for the nuclear shell model

    OpenAIRE

    Dean, D. J.; White, J A

    1998-01-01

    The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sdpf- shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.

  4. Calculations of pair production by Monte Carlo methods

    International Nuclear Information System (INIS)

    We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs

  5. Development of continuous-energy eigenvalue sensitivity coefficient calculation methods in the Shift Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)

    2012-07-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)

  6. A new lattice Monte Carlo method for simulating dielectric inhomogeneity

    Science.gov (United States)

    Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei

    We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles that have a dielectric constant different from that of the solvent. Here we show that such an attractive force is spurious, arising from an incorrectly biased statistical weight caused by the particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation used in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.

  7. On adaptive resampling strategies for sequential Monte Carlo methods

    CERN Document Server

    Del Moral, Pierre; Jasra, Ajay

    2012-01-01

    Sequential Monte Carlo (SMC) methods are a class of techniques to sample approximately from any sequence of probability distributions using a combination of importance sampling and resampling steps. This paper is concerned with the convergence analysis of a class of SMC methods where the times at which resampling occurs are computed online using criteria such as the effective sample size. This is a popular approach amongst practitioners but there are very few convergence results available for these methods. By combining semigroup techniques with an original coupling argument, we obtain functional central limit theorems and uniform exponential concentration estimates for these algorithms.
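
    The effective-sample-size criterion analyzed in this paper can be sketched in a few lines. In the sketch below, the Gaussian particles, the importance weights, and the 0.5·N resampling threshold are illustrative assumptions, not settings taken from the paper:

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights: N for uniform weights,
    approaching 1 as the weights degenerate onto a single particle."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

def adaptive_resample(particles, weights, rng, threshold=0.5):
    """Multinomial resampling triggered only when ESS drops below threshold * N;
    otherwise the weighted particle set is kept as-is."""
    n = len(particles)
    w = weights / weights.sum()
    if effective_sample_size(w) < threshold * n:
        idx = rng.choice(n, size=n, p=w)
        return particles[idx], np.full(n, 1.0 / n)
    return particles, w

rng = np.random.default_rng(0)
particles = rng.normal(size=1000)
weights = np.exp(-0.5 * (particles - 1.0) ** 2)  # illustrative importance weights
particles, weights = adaptive_resample(particles, weights, rng)
```

    The paper's convergence results concern exactly this online trigger: resampling times become random quantities depending on the particle history.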

  8. Estimativa da produtividade em soldagem pelo Método de Monte Carlo Productivity estimation in welding by Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    José Luiz Ferreira Martins

    2011-09-01

    . From these data, random samples of 10, 15 and 20 elements were drawn, and simulations were performed by the Monte Carlo method. Comparing the results of the 160-element sample with the data generated by simulation shows that good results can be obtained by using the Monte Carlo method to estimate the productivity of industrial welding. In the Brazilian construction industry, by contrast, the average productivity is normally used as the productivity indicator; it is based on historical data from other projects, collected and measured only after project completion, which is a limitation. This article presents a tool for evaluating execution in real time, enabling adjustments to estimates and the monitoring of productivity during the project. Similarly, in biddings, budgets and schedule estimations, this tool could support estimates other than the commonly used average productivity; as an alternative, three criteria are suggested: optimistic, average and pessimistic productivity.
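
    The optimistic/average/pessimistic criteria can be produced by resampling a small field sample, in the spirit of the article's method. The synthetic measurements below are a stand-in for the 160-element welding dataset, which is not reproduced here, and the 10th/50th/90th percentile choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical productivity measurements (e.g. length of weld per hour);
# a stand-in for the article's 160-element field dataset.
population = rng.normal(loc=2.5, scale=0.6, size=160)

def mc_productivity(sample, rng, n_sims=10_000):
    """Resample the small sample with replacement n_sims times and report
    pessimistic / average / optimistic productivity as the 10th / 50th / 90th
    percentiles of the simulated mean."""
    means = np.array([rng.choice(sample, size=len(sample), replace=True).mean()
                      for _ in range(n_sims)])
    return np.percentile(means, [10, 50, 90])

small_sample = rng.choice(population, size=15, replace=False)
pessimistic, average, optimistic = mc_productivity(small_sample, rng)
```

    Because the estimates come from a sample collected during execution, they can be updated in real time as new measurements arrive, rather than only after project completion.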

  9. A new hybrid method--combined heat flux method with Monte-Carlo method to analyze thermal radiation

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A new hybrid method, the Monte-Carlo-Heat-Flux (MCHF) method, was presented to analyze the radiative heat transfer of a participating medium in a three-dimensional rectangular enclosure by combining the Monte-Carlo method with the heat flux method. Its accuracy and reliability were demonstrated by comparing the computational results with exact results from the classical zone method.

  10. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign ? Part II: Model application to the CENICA, Pedregal and Santa Ana sites

    OpenAIRE

    San Martini, F. M.; E. J. Dunlea; R. Volkamer; Onasch, T. B.; J. T. Jayne; Canagaratna, M. R.; Worsnop, D. R.; C. E. Kolb; J. H. Shorter; S. C. Herndon; M. S. Zahniser; D. Salcedo; Dzepina, K.; Jimenez, J. L; Ortega, J. M.

    2006-01-01

    A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the model is able to accurately pred...

  12. An object-oriented implementation of a parallel Monte Carlo code for radiation transport

    Science.gov (United States)

    Santos, Pedro Duarte; Lani, Andrea

    2016-05-01

    This paper describes the main features of a state-of-the-art Monte Carlo solver for radiation transport which has been implemented within COOLFluiD, a world-class open source object-oriented platform for scientific simulations. The Monte Carlo code makes use of efficient ray tracing algorithms (for 2D, axisymmetric and 3D arbitrary unstructured meshes) which are described in detail. The solver accuracy is first verified in test cases for which analytical solutions are available, then validated for a space re-entry flight experiment (i.e. FIRE II) for which comparisons against both experiments and reference numerical solutions are provided. Through the flexible design of the physical models, ray tracing and parallelization strategy (fully reusing the mesh decomposition inherited from the fluid simulator), the implementation was made efficient and reusable.

  13. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    International Nuclear Information System (INIS)

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally, while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing the computation of relevant imaging parameters of interest, such as patient organ doses and scatter-to-primary ratios in radiographic projections, in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10⁷ x rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reducing the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual

  14. Dynamical Monte Carlo methods for plasma-surface reactions

    Science.gov (United States)

    Guerra, Vasco; Marinov, Daniil

    2016-08-01

    Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null-event algorithm, the n-fold way/BKL algorithm and a ‘hybrid’ variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir–Hinshelwood recombination involving two physisorbed atoms, and the effect of back diffusion and its inclusion in a deterministic formulation, are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, easily describe elementary processes that are not straightforward to include in deterministic simulations, can run very efficiently if appropriately chosen, and give highly reliable results. Moreover, the present approach provides a relatively simple procedure to describe fully coupled surface and gas phase chemistries.
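
    One rejection-free n-fold way (BKL) step can be sketched as follows; the three event rates are illustrative placeholders, not the surface-kinetics rates used in the paper:

```python
import numpy as np

def nfold_way_step(rates, rng):
    """One rejection-free n-fold way (BKL) step: select an event with
    probability proportional to its rate, then advance the clock by an
    exponentially distributed waiting time set by the total rate."""
    total = rates.sum()
    # Inverse-CDF selection over the cumulative event rates.
    event = np.searchsorted(np.cumsum(rates), rng.random() * total)
    # Exponential time increment: dt = -ln(u) / R_total.
    dt = -np.log(rng.random()) / total
    return event, dt

rng = np.random.default_rng(1)
# Illustrative rates for three surface processes (not the paper's values):
# e.g. desorption, surface diffusion, and recombination.
rates = np.array([0.1, 2.0, 0.5])
event, dt = nfold_way_step(rates, rng)
```

    Unlike null-event schemes, every step performs a reaction, which is why the n-fold way remains efficient when most candidate moves would otherwise be rejected.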

  15. EMTP IMPLEMENTATION OF A MONTE CARLO METHOD FOR LIGHTNING PERFORMANCE ANALYSIS OF TRANSMISSION LINES IMPLANTACIÓN EN EMTP DE UN PROCEDIMIENTO BASADO EN EL MÉTODO DE MONTE CARLO PARA ANALIZAR EL COMPORTAMIENTO DE LÍNEAS DE TRANSMISIÓN FRENTE AL RAYO

    Directory of Open Access Journals (Sweden)

    Juan A Martínez-Velasco

    2008-06-01

    Full Text Available An accurate calculation of lightning overvoltages is an important issue for the analysis and design of overhead transmission lines. The different parts of a transmission line that are involved in lightning calculations must be represented taking into account the frequency ranges of the transients associated with lightning. In addition, the procedures to be used in these calculations must be developed considering the random nature of lightning phenomena. Several simulation tools have been used to estimate the lightning performance of transmission lines. The most popular approaches are those based on a time-domain simulation technique for which adequate procedures and transmission line models have to be developed. This paper presents a summary of the computational efforts made by the authors for the development and implementation in an EMTP-like tool of a Monte Carlo procedure, as well as the models of some transmission line components, aimed at analyzing the lightning performance of transmission lines. An actual test line is used to illustrate the scope of this procedure and the type of studies that can be performed.

  16. Iridium 192 dosimetric study by Monte-Carlo method

    International Nuclear Information System (INIS)

    The Monte-Carlo method was applied to the dosimetry of iridium-192 in water and in air; an iridium-platinum alloy seed, enclosed in a platinum can, is used as the source. The radioactive decay of this nuclide and the transport of the particles emitted from the seed source through the can and into the irradiated medium are simulated successively. The photon energy spectra outside the source, as well as dose distributions, are given. The Phi(d) function is calculated and our results are compared with various experimental values.

  17. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.

  18. The macro response Monte Carlo method for electron transport

    CERN Document Server

    Svatos, M M

    1999-01-01

    This thesis demonstrates the feasibility of basing dose calculations for electrons in radiotherapy on first-principles single scatter physics, in a calculation time that is comparable to or better than current electron Monte Carlo methods. The macro response Monte Carlo (MRMC) method achieves run times that have the potential to be much faster than conventional electron transport methods such as condensed history. The problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position, and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25-8 MeV) and sizes (0.025 to 0.1 cm in radius). The second transport stage is a global calculation, in which steps that conform to the size of the kugels in the...

  19. Application of Monte Carlo methods in tomotherapy and radiation biophysics

    Science.gov (United States)

    Hsiao, Ya-Yun

    Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The MC-calculated percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published

  20. A study of potential energy curves from the model space quantum Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)

    2015-12-07

    We report on the first application of the model space quantum Monte Carlo (MSQMC) method to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2}, to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs over a wide range, obviating problems concerning quasi-degeneracy.

  1. Application of Monte Carlo method to nuclear core characteristic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. K.; Han, C. Y.; Shin, C. H. [Hangyang Univ., Seoul (Korea, Republic of)

    2000-05-01

    The nuclear core characteristic analysis for the Korean Next Generation Reactor (KNGR) was performed by using the Monte Carlo method. The MCNP4B code was employed to model the initial core of the KNGR in a three-dimensional representation. Material compositions for each type and burnup of fuel assembly were obtained from CASMO-3 runs. A new cross section library for different in-vessel core temperatures was generated with the NJOY 97 code. The criticality benchmark of the modeled KNGR core was carried out through a KCODE calculation, and the relative powers of each fuel rod were obtained. The nuclear characteristics, including the effective multiplication factor, relative power distributions, pin peaking factor, and axial offset (AO), were obtained from the results of the KCODE calculation. The comparison between the results from the MCNP calculation and the reference data from the KEPCO Nuclear Fuel Company (KNFC) validates the MCNP modeling of the KNGR core and supports the application of the Monte Carlo method to nuclear core characteristic analysis.

  2. Application of Macro Response Monte Carlo method for electron spectrum simulation

    International Nuclear Information System (INIS)

    During the past years, several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the electron transport computation time for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the primary electron's final state, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against the Geant4 spectra. The results showed an agreement better than 6% in the spectral peak energies, and that the MRMC code is up to 12 times faster than Geant4 simulations.

  3. Multilevel Monte Carlo methods for computing failure probability of porous media flow systems

    Science.gov (United States)

    Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.

    2016-08-01

    We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider the sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve at the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo method. We also identify issues in the practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, in combination with both standard and multilevel Monte Carlo.
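
    The failure-probability estimator itself is just the empirical CDF evaluated at the critical value. In the sketch below, a lognormal surrogate stands in for the expensive two-phase flow simulations; the distribution parameters and critical value are purely illustrative:

```python
import numpy as np

def failure_probability(sample_q, q_critical):
    """Standard Monte Carlo estimate of P(Q < q_critical): the fraction of
    sampled quantities of interest falling below the critical value."""
    return np.mean(sample_q < q_critical)

rng = np.random.default_rng(0)
# Stand-in for sweep-efficiency samples from flow simulations with uncertain
# permeability: a positive, skewed surrogate distribution.
q = rng.lognormal(mean=0.0, sigma=0.25, size=50_000)
p_fail = failure_probability(q, q_critical=0.8)
```

    Multilevel Monte Carlo replaces the single estimator above with a telescoping sum over solver accuracies, and selective refinement solves only the realizations near the critical value at the finest level.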

  4. GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method

    Science.gov (United States)

    Wei, J.; Kruis, F. E.

    2013-09-01

    Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and the computing cost. Currently, the lowest computing costs are obtained when applying a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present an implementation that accelerates a Monte Carlo method based on the inverse scheme for simulating particle coagulation on the GPU. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of using the GPU, the computing times on the GPU were compared against the sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of the coagulation process.
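
    The core operation of an inverse-style scheme, selecting a coagulating pair with probability proportional to the kernel via an inverse-CDF lookup, can be sketched on the CPU. The additive kernel K(i, j) = v_i + v_j and the uniform particle volumes below are illustrative assumptions; the paper's GPU implementation and kernel choice differ:

```python
import numpy as np

def select_coagulation_pair(volumes, rng):
    """Pick a coagulating pair (i, j) with probability proportional to the
    kernel K(i, j) = v_i + v_j, via inverse-CDF lookup over all pairs."""
    i_idx, j_idx = np.triu_indices(len(volumes), k=1)  # all unordered pairs i < j
    kernel = volumes[i_idx] + volumes[j_idx]           # additive kernel (illustrative)
    cdf = np.cumsum(kernel) / kernel.sum()
    k = np.searchsorted(cdf, rng.random())
    return i_idx[k], j_idx[k]

rng = np.random.default_rng(7)
vols = rng.uniform(1.0, 2.0, size=100)   # illustrative particle volumes
total_before = vols.sum()

i, j = select_coagulation_pair(vols, rng)
vols[i] += vols[j]                       # merge j into i, conserving mass
vols = np.delete(vols, j)                # remove the absorbed particle
```

    The O(n²) pair enumeration here is exactly the cost that the GPU parallelization amortizes: each thread can evaluate one pair's kernel value independently.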

  5. The macro response Monte Carlo method for electron transport

    Energy Technology Data Exchange (ETDEWEB)

    Svatos, M M

    1998-09-01

    The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a Local-to-Global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside of the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could

  6. Efficient implementation of the Hellmann-Feynman theorem in a diffusion Monte Carlo calculation.

    Science.gov (United States)

    Vitiello, S A

    2011-02-01

    Kinetic and potential energies of systems of (4)He atoms in the solid phase are computed at T = 0. Results at two densities of the liquid phase are presented as well. Calculations are performed by the multiweight extension to the diffusion Monte Carlo method that allows the application of the Hellmann-Feynman theorem in a robust and efficient way. This is a general method that can be applied in other situations of interest as well.

  7. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
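
    The diversity-preserving idea behind regularization can be sketched as resampling followed by a kernel jitter. The Gaussian jitter below is a simplified stand-in for the paper's MCMC move step, and the particles, weights, and bandwidth are illustrative assumptions:

```python
import numpy as np

def regularized_resample(particles, weights, rng, h=0.1):
    """SIR-style multinomial resampling followed by a kernel perturbation of
    bandwidth h (relative to the particle spread). The jitter restores sample
    diversity lost to duplicate draws; it is a simplified stand-in for a full
    MCMC move step."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    jitter = h * particles.std() * rng.standard_normal(n)
    return particles[idx] + jitter

rng = np.random.default_rng(3)
particles = rng.normal(size=500)                 # illustrative state samples
weights = np.exp(-0.5 * (particles - 0.5) ** 2)  # illustrative likelihood weights
new_particles = regularized_resample(particles, weights, rng)
```

    Plain SIR would leave many duplicated particles after resampling; the kernel step makes every particle distinct again, which is the particle-impoverishment behavior compared in the paper.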

  8. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer: I. Algorithms and numerical methods

    CERN Document Server

    Harries, Tim J

    2015-01-01

    We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelisation method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion onto, and the growth of, the protostars. We detail the resu...

  9. Methods for variance reduction in Monte Carlo simulations

    Science.gov (United States)

    Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.

    2016-03-01

    Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reducing the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is demonstrated using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed.
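
    The variance-reduction payoff of quasi-random-style sampling can be shown on a toy 1-D integral rather than the full ray-tracing dose code; the integrand and sample counts below are illustrative assumptions:

```python
import numpy as np

def crude_mc(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return f(rng.random(n)).mean()

def stratified_mc(f, n, rng):
    """One jittered sample per stratum: a simple quasi-random-style scheme
    that removes between-stratum variance from the estimate."""
    u = (np.arange(n) + rng.random(n)) / n
    return f(u).mean()

f = lambda x: np.exp(x)                 # true integral over [0, 1] is e - 1
rng = np.random.default_rng(0)
crude = crude_mc(f, 1000, rng)
strat = stratified_mc(f, 1000, rng)
```

    For smooth integrands the stratified estimate converges much faster than the O(n^-1/2) rate of crude sampling, which is the same mechanism that makes quasi-random photon launching effective.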

  10. Radiative heat transfer by the Monte Carlo method

    CERN Document Server

    Hartnett †, James P; Cho, Young I; Greene, George A; Taniguchi, Hiroshi; Yang, Wen-Jei; Kudo, Kazuhiko

    1995-01-01

    This book presents the basic principles and applications of radiative heat transfer used in energy, space, and geo-environmental engineering, and can serve as a reference book for engineers and scientists in research and development. A PC disk containing software for numerical analyses by the Monte Carlo method is included to provide hands-on practice in analyzing actual radiative heat transfer problems. Advances in Heat Transfer is designed to fill the information gap between regularly scheduled journals and university-level textbooks by providing in-depth review articles over a broader scope than journals or texts usually allow. Key features: offers solution methods for the integro-differential formulation to help avoid difficulties; includes a computer disk for numerical analyses by PC; discusses energy absorption by gas and scattering effects by particles; treats non-gray radiative gases; provides example problems for direct applications in energy, space, and geo-environmental engineering.

  11. Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering

    OpenAIRE

    Machta, Jon; Ellis, Richard S.

    2011-01-01

    Parallel tempering and population annealing are both effective methods for simulating equilibrium systems with rough free energy landscapes. Parallel tempering, also known as replica exchange Monte Carlo, is a Markov chain Monte Carlo method while population annealing is a sequential Monte Carlo method. Both methods overcome the exponential slowing associated with high free energy barriers. The convergence properties and efficiency of the two methods are compared. For large systems, populatio...
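
    The replica-exchange move at the heart of parallel tempering can be sketched in a few lines; the inverse temperatures and energies below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def attempt_swap(energies, betas, i, rng):
    """Replica-exchange (parallel tempering) move between neighboring inverse
    temperatures: accept the configuration swap with probability
    min(1, exp[(beta_i - beta_{i+1}) * (E_i - E_{i+1})]), which preserves
    detailed balance in the product ensemble."""
    delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
    if rng.random() < np.exp(min(0.0, delta)):
        energies[i], energies[i + 1] = energies[i + 1], energies[i]
        return True
    return False

rng = np.random.default_rng(0)
betas = np.array([1.0, 0.5, 0.25])        # inverse temperatures, coldest first
energies = np.array([-3.0, -2.0, -1.0])   # current energy of each replica
accepted = attempt_swap(energies, betas, 0, rng)
```

    Swaps let a configuration trapped behind a free energy barrier at low temperature escape by migrating to a hotter replica, the exponential-slowing remedy both methods in the abstract exploit.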

  12. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-04-01

    Full Text Available Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for the sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in the multi-core computing environment via the open message passing interface (MPI). We compare performance results of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. The improvement of model efficiency and the preservation of particle diversity are found in the lagged regularized particle filter.

  13. Interacting multiagent systems kinetic equations and Monte Carlo methods

    CERN Document Server

    Pareschi, Lorenzo

    2014-01-01

    The description of emerging collective phenomena and self-organization in systems composed of large numbers of individuals has gained increasing interest from various research communities in biology, ecology, robotics and control theory, as well as sociology and economics. Applied mathematics is concerned with the construction, analysis and interpretation of mathematical models that can shed light on significant problems of the natural sciences as well as our daily lives. To this set of problems belongs the description of the collective behaviour of complex systems composed of a sufficiently large number of individuals. Examples of such systems are interacting agents in a financial market, potential voters during political elections, or groups of animals with a tendency to flock or herd. Among other possible approaches, this book provides a step-by-step introduction to mathematical modelling based on a mesoscopic description and the construction of efficient simulation algorithms by Monte Carlo methods. The ar...

  14. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part II: Model application to the CENICA, Pedregal and Santa Ana sites

    Directory of Open Access Journals (Sweden)

    F. M. San Martini

    2006-01-01

    Full Text Available A Markov Chain Monte Carlo model for integrating the observations of inorganic species with a thermodynamic equilibrium model was presented in Part I of this series. Using observations taken at three ground sites, i.e. a residential, industrial and rural site, during the MCMA-2003 campaign in Mexico City, the model is used to analyze the inorganic particle and ammonia data and to predict gas phase concentrations of nitric and hydrochloric acid. In general, the model is able to accurately predict the observed inorganic particle concentrations at all three sites. The agreement between the predicted and observed gas phase ammonia concentration is excellent. The NOz concentration calculated from the NOy, NO and NO2 observations is of limited use in constraining the gas phase nitric acid concentration given the large uncertainties in this measure of nitric acid and additional reactive nitrogen species. Focusing on the acidic period of 9–11 April identified by Salcedo et al. (2006), the model accurately predicts the particle phase observations during this period with the exception of the nitrate predictions after 10:00 a.m. (Central Daylight Time, CDT) on 9 April, where the model underpredicts the observations by, on average, 20%. This period had a low planetary boundary layer, very high particle concentrations, and higher than expected nitrogen dioxide concentrations. For periods when the particle chloride observations are consistently above the detection limit, the model is able to both accurately predict the particle chloride mass concentrations and provide well-constrained HCl(g) concentrations. The availability of gas-phase ammonia observations helps constrain the predicted HCl(g) concentrations. When the particles are aqueous, the most likely concentrations of HCl(g) are in the sub-ppbv range. The most likely predicted concentration of HCl(g) was found to reach concentrations of order 10 ppbv if the particles are dry. Finally, the

  15. Comprehensive evaluation and clinical implementation of commercially available Monte Carlo dose calculation algorithm.

    Science.gov (United States)

    Zhang, Aizhen; Wen, Ning; Nurushev, Teamour; Burmeister, Jay; Chetty, Indrin J

    2013-01-01

    A commercial electron Monte Carlo (eMC) dose calculation algorithm has become available in the Eclipse treatment planning system. The purpose of this work was to evaluate the eMC algorithm and investigate the clinical implementation of this system. The beam modeling of the eMC algorithm was performed for beam energies of 6, 9, 12, 16, and 20 MeV for a Varian Trilogy and all available applicator sizes in the Eclipse treatment planning system. The accuracy of the eMC algorithm was evaluated in a homogeneous water phantom, solid water phantoms containing lung and bone materials, and an anthropomorphic phantom. In addition, dose calculation accuracy was compared between pencil beam (PB) and eMC algorithms in the same treatment planning system for heterogeneous phantoms. The overall agreement between eMC calculations and measurements was within 3%/2 mm, while the PB algorithm had large errors (up to 25%) in predicting dose distributions in the presence of inhomogeneities such as bone and lung. The clinical implementation of the eMC algorithm was investigated by performing treatment planning for 15 patients with lesions in the head and neck, breast, chest wall, and sternum. The dose distributions were calculated using PB and eMC algorithms with no smoothing and all three levels of 3D Gaussian smoothing for comparison. Based on a routine electron beam therapy prescription method, the number of eMC calculated monitor units (MUs) was found to increase with increased 3D Gaussian smoothing levels. 3D Gaussian smoothing greatly improved the visual usability of dose distributions and produced better target coverage. Differences of calculated MUs and dose distributions between eMC and PB algorithms could be significant when oblique beam incidence, surface irregularities, and heterogeneous tissues were present in the treatment plans. In our patient cases, monitor unit differences of up to 7% were observed between PB and eMC algorithms. Monitor unit calculations were also performed

  16. Thermal Scattering Law Data: Implementation and Testing Using the Monte Carlo Neutron Transport Codes COG, MCNP and TART

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D E; Hansen, L F; Lent, E M; Plechaty, E F

    2003-05-17

    Recently we implemented the ENDF/B-VI thermal scattering law data in our neutron transport codes COG and TART. Our objective was to convert the existing ENDF/B data into double differential form in the Livermore ENDL format. This will allow us to use the ENDF/B data in any neutron transport code, be it a Monte Carlo or deterministic code. This was approached as a multi-step project. The first step was to develop methods to directly use the thermal scattering law data in our Monte Carlo codes. The next step was to convert the data to double-differential form. The last step was to verify that the results obtained using the data directly are essentially the same as the results obtained using the double differential data. Part of the planned verification was intended to insure that the data as finally implemented in the COG and TART codes gave the same answer as the well known MCNP code, which includes thermal scattering law data. Limitations in the treatment of thermal scattering law data in MCNP have been uncovered that prevented us from performing this part of our verification.

  17. Quasi Monte Carlo methods for optimization models of the energy industry with pricing and load processes

    International Nuclear Information System (INIS)

    We discuss progress in quasi Monte Carlo methods for the numerical calculation of integrals or expected values and justify why these methods are more efficient than classic Monte Carlo methods. Quasi Monte Carlo methods are found to be particularly efficient if the integrands have a low effective dimension. That is why we also discuss the concept of effective dimension and demonstrate, using the example of a stochastic optimization model from the energy industry, that such models can possess a low effective dimension. Modern quasi Monte Carlo methods are therefore very promising for such models.
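
    The efficiency gain can be illustrated with a small experiment (not taken from the paper; the smooth 2-D integrand and sample size are assumptions made for illustration): a Halton low-discrepancy point set is compared with pseudo-random sampling on an integral with a known closed form.

```python
import numpy as np

def halton(n, base):
    """First n points of the 1-D van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# Integrate f(x, y) = exp(-x - y) over the unit square.
# Exact value: (1 - e^-1)^2, since the integrand is separable.
exact = (1.0 - np.exp(-1.0)) ** 2
n = 4096

# Quasi Monte Carlo: 2-D Halton points (bases 2 and 3)
qx, qy = halton(n, 2), halton(n, 3)
qmc_est = np.mean(np.exp(-qx - qy))

# Classic Monte Carlo: pseudo-random points
rng = np.random.default_rng(0)
mx, my = rng.random(n), rng.random(n)
mc_est = np.mean(np.exp(-mx - my))

print(abs(qmc_est - exact), abs(mc_est - exact))
```

    For smooth, low-effective-dimension integrands like this one, the quasi Monte Carlo error typically decays close to O(1/n) rather than the O(1/sqrt(n)) of plain Monte Carlo.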

  18. On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-08-31

    The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.

  19. Monte Carlo and deterministic computational methods for the calculation of the effective delayed neutron fraction

    Science.gov (United States)

    Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry

    2013-07-01

    The effective delayed neutron fraction β plays an important role in the kinetics and static analysis of reactor physics experiments. It is used as the reactivity unit referred to as the "dollar". Usually, it is obtained by computer simulation due to the difficulty of measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction β. This method requires the calculation of the adjoint neutron flux as a weighting function of the phase space inner products and is easy to implement in deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated β as the ratio between the delayed and total multiplication factors (therefore the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied by Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections. In recent years Meulekamp and van der Marck in 2006 and Nauchi and Kameyama
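
    Once the two multiplication factors are available, the k-ratio evaluation reduces to simple arithmetic: β_eff = (k_total − k_prompt)/k_total = 1 − k_prompt/k_total. The sketch below uses hypothetical k values and 1-σ statistical uncertainties (invented for illustration, not results from the cited work) and propagates the errors to β_eff to first order:

```python
import math

# Hypothetical Monte Carlo eigenvalue results with 1-sigma uncertainties:
k_total, sig_total = 1.00231, 0.00005    # run with delayed neutron data
k_prompt, sig_prompt = 0.99552, 0.00005  # run with delayed neutron data removed

# k-ratio estimate of the effective delayed neutron fraction
beta_eff = 1.0 - k_prompt / k_total

# First-order error propagation for the ratio k_prompt / k_total
sig_beta = (k_prompt / k_total) * math.sqrt(
    (sig_prompt / k_prompt) ** 2 + (sig_total / k_total) ** 2
)
print(f"beta_eff = {beta_eff:.5f} +/- {sig_beta:.5f}")
```

    With these illustrative inputs, β_eff comes out near 680 pcm, a typical order of magnitude for a 235U-fueled system; the statistical uncertainty of both eigenvalue runs enters the result directly, which is why tight Monte Carlo convergence is needed for this method.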

  20. Limit theorems for weighted samples with applications to sequential Monte Carlo methods

    OpenAIRE

    Douc, R.; Moulines, E.

    2008-01-01

    In the last decade, sequential Monte Carlo methods (SMC) emerged as a key tool in computational statistics [see, e.g., Sequential Monte Carlo Methods in Practice (2001) Springer, New York, Monte Carlo Strategies in Scientific Computing (2001) Springer, New York, Complex Stochastic Systems (2001) 109–173]. These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated to a weighted population of particles, which are generated recursively. ...

  1. A Comparison of Advanced Monte Carlo Methods for Open Systems: CFCMC vs CBMC

    NARCIS (Netherlands)

    A. Torres-Knoop; S.P. Balaji; T.J.H. Vlugt; D. Dubbeldam

    2014-01-01

    Two state-of-the-art simulation methods for computing adsorption properties in porous materials like zeolites and metal-organic frameworks are compared: the configurational bias Monte Carlo (CBMC) method and the recently proposed continuous fractional component Monte Carlo (CFCMC) method. We show th

  2. Formulation and Application of Quantum Monte Carlo Method to Fractional Quantum Hall Systems

    OpenAIRE

    Suzuki, Sei; Nakajima, Tatsuya

    2003-01-01

    The quantum Monte Carlo method is applied to fractional quantum Hall systems. The use of the linear programming method enables us to avoid the negative-sign problem in the quantum Monte Carlo calculations. The formulation of this method and the technique for avoiding the sign problem are described. Some numerical results on static physical quantities are also reported.

  3. Diffusion Monte Carlo methods applied to Hamaker Constant evaluations

    CERN Document Server

    Hongo, Kenta

    2016-01-01

    We applied diffusion Monte Carlo (DMC) methods to evaluate Hamaker constants of liquids for wettabilities, with practical size of a liquid molecule, Si$_6$H$_{12}$ (cyclohexasilane). The evaluated constant would be justified in the sense that it lies within the expected dependence on molecular weights among similar kinds of molecules, though there is no reference experimental values available for this molecule. Comparing the DMC with vdW-DFT evaluations, we clarified that some of the vdW-DFT evaluations could not describe correct asymptotic decays and hence Hamaker constants even though they gave reasonable binding lengths and energies, and vice versa for the rest of vdW-DFTs. We also found the advantage of DMC for this practical purpose over CCSD(T) because of the large amount of BSSE/CBS corrections required for the latter under the limitation of basis set size applicable to the practical size of a liquid molecule, while the former is free from such limitations to the extent that only the nodal structure of...

  4. Gas Swing Options: Introduction and Pricing using Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Václavík Tomáš

    2016-02-01

    Full Text Available Motivated by the changing nature of the natural gas industry in the European Union, driven by the liberalisation process, we focus on the introduction and pricing of gas swing options. These options are embedded in typical gas sales agreements in the form of offtake flexibility concerning volume and time. The gas swing option is actually a set of several American puts on a spread between prices of two or more energy commodities. This fact, together with the fact that the energy markets are fundamentally different from traditional financial security markets, is important for our choice of valuation technique. Due to the specific features of the energy markets, the existing analytic approximations for spread option pricing are hardly applicable to our framework. That is why we employ Monte Carlo methods to model the spot price dynamics of the underlying commodities. The price of an arbitrarily chosen gas swing option is then computed in accordance with the concept of risk-neutral expectations. Finally, our result is compared with the real payoff from the option realised at the time of the option execution and the maximum ex-post payoff that the buyer could generate in case he knew the future, discounted to the original time of the option pricing.
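
    The full swing option described above is a strip of American puts on a commodity spread, which requires regression-based methods to price. As a simplified sketch of the underlying Monte Carlo machinery, the example below prices a single European spread call under risk-neutral correlated geometric Brownian motion; all parameter values are hypothetical, chosen only to illustrate the risk-neutral-expectation computation:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_spread_option(s1, s2, vol1, vol2, rho, r, T, strike, n_paths=200_000):
    """Risk-neutral Monte Carlo price of a European spread call with
    payoff max(S1(T) - S2(T) - K, 0) under correlated GBM dynamics.
    Returns (price, standard error)."""
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    # Terminal prices under the risk-neutral measure
    st1 = s1 * np.exp((r - 0.5 * vol1**2) * T + vol1 * np.sqrt(T) * z1)
    st2 = s2 * np.exp((r - 0.5 * vol2**2) * T + vol2 * np.sqrt(T) * z2)
    payoff = np.maximum(st1 - st2 - strike, 0.0)
    disc = np.exp(-r * T)
    price = disc * payoff.mean()
    stderr = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

# Hypothetical gas/oil-indexed spread parameters
price, se = mc_spread_option(s1=30.0, s2=28.0, vol1=0.4, vol2=0.35,
                             rho=0.7, r=0.02, T=1.0, strike=0.0)
print(f"{price:.3f} +/- {se:.3f}")
```

    With strike zero this reduces to an exchange option, so the Monte Carlo estimate can be checked against the Margrabe closed form; the early-exercise and volume-constraint features of real swing contracts are what force the simulation approach in the paper.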

  5. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    Science.gov (United States)

    Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.

    2015-09-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed by using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration and to remain almost constant thereafter. MTF improves by using lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.

  6. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for peta scale platforms and beyond

    International Nuclear Information System (INIS)

    Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC-Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC-Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible. (authors)

  7. Quantum Monte Carlo for large chemical systems: implementing efficient strategies for petascale platforms and beyond.

    Science.gov (United States)

    Scemama, Anthony; Caffarel, Michel; Oseret, Emmanuel; Jalby, William

    2013-04-30

    Various strategies to implement efficiently quantum Monte Carlo (QMC) simulations for large chemical systems are presented. These include: (i) the introduction of an efficient algorithm to calculate the computationally expensive Slater matrices. This novel scheme is based on the use of the highly localized character of atomic Gaussian basis functions (not the molecular orbitals as usually done), (ii) the possibility of keeping the memory footprint minimal, (iii) the important enhancement of single-core performance when efficient optimization tools are used, and (iv) the definition of a universal, dynamic, fault-tolerant, and load-balanced framework adapted to all kinds of computational platforms (massively parallel machines, clusters, or distributed grids). These strategies have been implemented in the QMC=Chem code developed at Toulouse and illustrated with numerical applications on small peptides of increasing sizes (158, 434, 1056, and 1731 electrons). Using 10-80 k computing cores of the Curie machine (GENCI-TGCC-CEA, France), QMC=Chem has been shown to be capable of running at the petascale level, thus demonstrating that for this machine a large part of the peak performance can be achieved. Implementation of large-scale QMC simulations for future exascale platforms with a comparable level of efficiency is expected to be feasible.

  8. Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters

    Energy Technology Data Exchange (ETDEWEB)

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  9. Quantum Monte Carlo methods and lithium cluster properties

    Energy Technology Data Exchange (ETDEWEB)

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  10. Direct simulation Monte Carlo calculation of rarefied gas drag using an immersed boundary method

    Science.gov (United States)

    Jin, W.; Kleijn, C. R.; van Ommen, J. R.

    2016-06-01

    For simulating rarefied gas flows around a moving body, an immersed boundary method is presented here in conjunction with the Direct Simulation Monte Carlo (DSMC) method in order to allow the movement of a three dimensional immersed body on top of a fixed background grid. The simulated DSMC particles are reflected exactly at the landing points on the surface of the moving immersed body, while the effective cell volumes are taken into account for calculating the collisions between molecules. The effective cell volumes are computed by utilizing the Lagrangian intersecting points between the immersed boundary and the fixed background grid with a simple polyhedra regeneration algorithm. This method has been implemented in OpenFOAM and validated by computing the drag forces exerted on steady and moving spheres and comparing the results to that from conventional body-fitted mesh DSMC simulations and to analytical approximations.

  11. Safety assessment of infrastructures using a new Bayesian Monte Carlo method

    NARCIS (Netherlands)

    Rajabalinejad, M.; Demirbilek, Z.

    2011-01-01

    A recently developed Bayesian Monte Carlo (BMC) method and its application to safety assessment of structures are described in this paper. We use a one-dimensional BMC method that was proposed in 2009 by Rajabalinejad in order to develop a weighted logical dependence between successive Monte Carlo s

  12. Contribution to the solution of the multigroup Boltzmann equation by deterministic methods and the Monte Carlo method; Contribution a la resolution de l`equation de Bolztmann en multigroupe par les methodes deterministes et Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Li, M

    1998-08-01

    In this thesis, two methods for solving the multigroup Boltzmann equation have been studied: the interface-current method and the Monte Carlo method. A new version of the interface-current (IC) method has been developed in the TDT code at SERMA, in which the interface currents are represented by piecewise constant functions in solid angle. The convergence of this method to the collision probability (CP) method has been tested. Since the tracking technique is used for both the IC and CP methods, it is necessary to normalize the collision probabilities obtained with this technique. Several methods for this purpose have been studied and implemented in our code; we compared their performance and chose the best one as the standard. The transfer matrix treatment has been a long-standing difficulty for the multigroup Monte Carlo method: when the cross-sections are converted into multigroup form, significant negative parts appear in the angular transfer laws represented by low-order Legendre polynomials. Several methods based on the preservation of the first moments, such as the discrete angles method and the equally-probable step function method, have been studied and implemented in the TRIMARAN-II code. Since none of these methods proved satisfactory, a new method, the non-equally-probable step function method, has been proposed and implemented in our code. These methods were compared in several respects: preservation of the required moments, calculation of a criticality problem, and calculation of a neutron-transfer-in-water problem. The results showed that the new method is the best one in all these comparisons, and we have proposed that it should be the standard choice for the multigroup transfer matrix. (author) 76 refs.

  13. Theory and applications of the fission matrix method for continuous-energy Monte Carlo

    International Nuclear Information System (INIS)

    Highlights: • The fission matrix method is implemented into the MCNP Monte Carlo code. • Eigenfunctions and eigenvalues of power distributions are shown and studied. • Source convergence acceleration is demonstrated for a fuel storage vault problem. • Forward flux eigenmodes and relative uncertainties are shown for a reactor problem. • Eigenmodes expansions are performed during source convergence for a reactor problem. - Abstract: The fission matrix method can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission distribution. It can also be used to accelerate the convergence of power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. These aspects of the method are here both theoretically justified and demonstrated, and then used to investigate fundamental properties of the transport equation for a continuous-energy physics treatment. Implementation into the MCNP6 Monte Carlo code is also discussed, including a sparse representation of the fission matrix, which permits much larger and more accurate representations. Properties of the calculated eigenvalue spectrum of a 2D PWR problem are discussed: for a fine enough mesh and a sufficient degree of sampling, the spectrum both converges and has a negligible imaginary component. Calculation of the fundamental mode of the fission matrix for a fuel storage vault problem shows how convergence can be accelerated by over a factor of ten given a flat initial distribution. Forward fluxes and the relative uncertainties for a 2D PWR are shown, both of which qualitatively agree with expectation. Lastly, eigenmode expansions are performed during source convergence of the 2D PWR
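
    The source-convergence acceleration described above rests on applying the power method to the fission matrix itself, whose dominant eigenpair gives the k-effective estimate and the fundamental-mode fission source. A minimal sketch follows, using a hypothetical 3-region fission matrix with invented entries (not data from MCNP6 or the cited problems):

```python
import numpy as np

def fundamental_mode(F, tol=1e-10, max_iter=10_000):
    """Power iteration on a fission matrix F: returns the dominant
    eigenvalue (k-effective estimate) and the normalized
    fundamental-mode fission source."""
    n = F.shape[0]
    s = np.full(n, 1.0 / n)   # flat initial source guess
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum()   # eigenvalue from source normalization
        s_new /= k_new
        if abs(k_new - k) < tol and np.allclose(s_new, s, atol=tol):
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

# Toy 3-region fission matrix: F[i, j] is the expected number of
# fission neutrons born in region i per fission neutron born in j.
F = np.array([[0.60, 0.20, 0.05],
              [0.20, 0.70, 0.20],
              [0.05, 0.20, 0.60]])
k_eff, source = fundamental_mode(F)
print(k_eff, source)
```

    In the continuous-energy Monte Carlo setting, the matrix entries are tallied over a spatial mesh during the random walks, and the sparse representation mentioned in the abstract keeps this tractable for fine meshes.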

  14. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  15. Monte Carlo Method for Calculating Oxygen Abundances and Their Uncertainties from Strong-Line Flux Measurements

    CERN Document Server

    Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or

    2015-01-01

    We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...

  16. The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy

    CERN Document Server

    Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F

    2010-01-01

    Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH) Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed-Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+-activity measurement in order to infer indirect infor...

  17. Monte Carlo method for calculating oxygen abundances and their uncertainties from strong-line flux measurements

    Science.gov (United States)

    Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.

    2016-07-01

    We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region, including the effect due to the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions and their kernel density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies. The code is available at https://github.com/nyusngroup/pyMCZ.
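The Monte Carlo error-propagation scheme described above can be sketched in a few lines of Python. This is a hypothetical toy, not pyMCZ itself: it samples one pair of line fluxes from Gaussians, pushes the draws through a single real strong-line calibrator (the Pettini & Pagel 2004 N2 relation, 12 + log(O/H) = 8.90 + 0.57 N2), and reads the median and 68% region off the synthetic distribution; the flux values are invented.

```python
import random
import math
import statistics

def mc_abundance(f_nii, df_nii, f_ha, df_ha, n=20000, seed=42):
    """Monte Carlo propagation of line-flux errors into an oxygen abundance."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        nii = rng.gauss(f_nii, df_nii)   # sample [NII] flux
        ha = rng.gauss(f_ha, df_ha)      # sample Halpha flux
        if nii <= 0 or ha <= 0:          # reject unphysical draws
            continue
        n2 = math.log10(nii / ha)
        samples.append(8.90 + 0.57 * n2)  # PP04 N2 calibrator
    samples.sort()
    median = statistics.median(samples)
    lo = samples[int(0.16 * len(samples))]  # 68% confidence region
    hi = samples[int(0.84 * len(samples))]
    return median, lo, hi

median, lo, hi = mc_abundance(3.0, 0.3, 10.0, 0.5)
```

pyMCZ applies the same idea across up to 15 calibrators and also samples E(B-V); this sketch shows only the core sampling loop.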

  18. Efficient, Automated Monte Carlo Methods for Radiation Transport.

    Science.gov (United States)

    Kong, Rong; Ambrose, Martin; Spanier, Jerome

    2008-11-20

    Monte Carlo simulations provide an indispensable model for solving radiative transport problems, but their slow convergence inhibits their use as an everyday computational tool. In this paper, we present two new ideas for accelerating the convergence of Monte Carlo algorithms based upon an efficient algorithm that couples simulations of forward and adjoint transport equations. Forward random walks are first processed in stages, each using a fixed sample size, and information from stage k is used to alter the sampling and weighting procedure in stage k + 1. This produces rapid geometric convergence and accounts for dramatic gains in the efficiency of the forward computation. If still greater accuracy is required in the forward solution, information from an adjoint simulation can be added to extend the geometric learning of the forward solution. The resulting new approach should find widespread use when fast, accurate simulations of the transport equation are needed. PMID:23226872

  19. Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues

    International Nuclear Information System (INIS)

    The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL

  20. Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues

    Energy Technology Data Exchange (ETDEWEB)

    Harris, G.; Van Horn, R.

    1996-06-01

    The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.
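The contrast between a point estimate and a Monte Carlo risk distribution can be illustrated with a toy risk model (risk = concentration × intake × slope factor). All distributions and values below are hypothetical placeholders, not INEL inputs; the point is that the Monte Carlo run yields percentiles of risk rather than a single number.

```python
import random

def point_estimate(conc, intake, slope):
    """Single-number risk from single-number inputs."""
    return conc * intake * slope

def monte_carlo_risk(n=50000, seed=1):
    """Sample the inputs from distributions and return the sorted risk draws."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        conc = rng.lognormvariate(0.0, 0.5)     # mg/L, hypothetical
        intake = rng.triangular(1.0, 3.0, 2.0)  # L/day, hypothetical
        slope = 1e-3                            # (mg/kg-day)^-1, fixed here
        risks.append(conc * intake * slope)
    risks.sort()
    return risks

risks = monte_carlo_risk()
p50 = risks[len(risks) // 2]        # median risk
p95 = risks[int(0.95 * len(risks))] # upper percentile a point estimate hides
```

A phased approach would start from the point estimate and refine toward the full distribution only where the point estimate is near a decision threshold.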

  1. Quasi-Monte Carlo methods for lattice systems: a first look

    CERN Document Server

    Jansen, K; Nube, A; Griewank, A; Müller-Preussker, M

    2013-01-01

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like 1/Sqrt(N), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to 1/N. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.

  2. Quasi-Monte Carlo methods for lattice systems. A first look

    International Nuclear Information System (INIS)

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
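The error-scaling claim is easy to demonstrate on a one-dimensional toy integral: plain Monte Carlo error shrinks like N^(-1/2), while a base-2 van der Corput (quasi-random) sequence achieves close to N^(-1) on smooth integrands. The integrand x^2 and the sample count below are arbitrary choices for illustration.

```python
import random

def van_der_corput(i):
    """Base-2 radical-inverse low-discrepancy sequence."""
    x, denom = 0.0, 1.0
    while i:
        denom *= 2.0
        x += (i & 1) / denom
        i >>= 1
    return x

def mc_mean(f, n, seed=0):
    """Plain Monte Carlo estimate of the mean of f on [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

def qmc_mean(f, n):
    """Quasi-Monte Carlo estimate using van der Corput points."""
    return sum(f(van_der_corput(i)) for i in range(n)) / n

n = 4096
exact = 1.0 / 3.0  # integral of x^2 over [0, 1]
mc_err = abs(mc_mean(lambda x: x * x, n) - exact)
qmc_err = abs(qmc_mean(lambda x: x * x, n) - exact)
```

For lattice systems the difficulty the paper addresses is generating such structured point sets for high-dimensional, Markov-chain-sampled distributions, which this one-dimensional sketch sidesteps.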

  3. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method

    International Nuclear Information System (INIS)

    This work determines the detection efficiency of the identiFINDER detector for 125I and 131I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Finally, detector geometry-point source simulations were performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement, for method validation and the final efficiency calculation. These demonstrated that, in the Monte Carlo implementation, simulating at a greater distance than that used in the laboratory measurements overestimates the efficiency, while simulating at a shorter distance underestimates it; the simulation should therefore be performed at the same distance at which the measurement will actually be made. Efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration in order to estimate the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for iodine measurement in the thyroid. (author)
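The distance effect warned about above is, at heart, solid-angle geometry, which a small Monte Carlo makes concrete: the fraction of isotropically emitted photons that reach a disc detector falls quickly with source-detector distance. The detector radius and distances below are arbitrary stand-ins, not the identiFINDER geometry, and a real efficiency also includes the intrinsic detection probability.

```python
import random
import math

def geometric_efficiency(distance_cm, radius_cm=2.5, n=200_000, seed=7):
    """MC estimate of the solid-angle fraction subtended by a disc detector."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mu = rng.uniform(-1.0, 1.0)  # cos(theta) of an isotropic emission
        if mu <= 0.0:
            continue                 # emitted away from the detector plane
        # radial offset where the ray crosses the detector plane
        rho = distance_cm * math.sqrt(1.0 - mu * mu) / mu
        if rho <= radius_cm:
            hits += 1
    return hits / n

eff5 = geometric_efficiency(5.0)
eff15 = geometric_efficiency(15.0)
eff25 = geometric_efficiency(25.0)
```

The analytic solid-angle fraction is (1 - d/sqrt(d^2 + r^2))/2, so at 5 cm with a 2.5 cm disc the geometric efficiency is about 0.053, dropping by more than a factor of 20 at 25 cm, which is why simulating at the wrong distance biases the calibration.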

  4. Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows

    Science.gov (United States)

    Ladiges, Daniel R.; Sader, John E.

    2015-10-01

    Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle based Monte Carlo techniques, which in their original form operate exclusively in the time-domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contains a "virtual-time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique.

  5. Growing lattice animals and Monte-Carlo methods

    Science.gov (United States)

    Reich, G. R.; Leath, P. L.

    1980-01-01

    We consider the search problems which arise in Monte-Carlo studies involving growing lattice animals. A new periodic hashing scheme (based on a periodic cell) especially suited to these problems is presented which takes advantage both of the connected geometric structure of the animals and the traversal-oriented nature of the search. The scheme is motivated by a physical analogy and tested numerically on compact and on ramified animals. In both cases the performance is found to be more efficient than random hashing, and to a degree depending on the compactness of the animals

  6. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method

    Science.gov (United States)

    Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.

    2013-12-01

    We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.

  7. MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks

    Directory of Open Access Journals (Sweden)

    Zhaoyan Jin

    2013-10-01

    Full Text Available Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes on networks by power iteration and has high computational complexity. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that Monte Carlo based approximate computation of the HITS ranking substantially reduces computing cost while maintaining high accuracy, and significantly outperforms related work.
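The idea of replacing power iteration by sampling can be sketched on a toy graph: exact HITS by power iteration on one side, and a crude random-walk estimator of authority visit counts on the other. The walk scheme below is an illustrative guess at the flavor of such estimators, not the paper's algorithm, and the graph is invented.

```python
import random
from collections import defaultdict

EDGES = [(0, 3), (1, 3), (2, 3), (0, 1), (2, 1)]  # hub -> authority links

def hits_power_iteration(edges, iters=50):
    """Classic HITS: alternate hub/authority updates with normalization."""
    nodes = sorted({u for e in edges for u in e})
    hub = {v: 1.0 for v in nodes}
    auth = {v: 1.0 for v in nodes}
    for _ in range(iters):
        auth = {v: sum(hub[u] for u, w in edges if w == v) for v in nodes}
        norm = sum(auth.values()) or 1.0
        auth = {v: a / norm for v, a in auth.items()}
        hub = {v: sum(auth[w] for u, w in edges if u == v) for v in nodes}
        norm = sum(hub.values()) or 1.0
        hub = {v: h / norm for v, h in hub.items()}
    return hub, auth

def mc_authority(edges, walks=20000, seed=3):
    """Estimate authority weight by short alternating random walks."""
    rng = random.Random(seed)
    out, inc = defaultdict(list), defaultdict(list)
    for u, w in edges:
        out[u].append(w)
        inc[w].append(u)
    visits = defaultdict(int)
    for _ in range(walks):
        u, w = rng.choice(edges)          # start on a random link
        for _ in range(3):                # a few alternating hops
            visits[w] += 1
            u = rng.choice(inc[w])        # authority -> co-citing hub
            if not out[u]:
                break
            w = rng.choice(out[u])        # hub -> authority
    return visits

pi_auth = hits_power_iteration(EDGES)[1]
mc_visits = mc_authority(EDGES)
```

On this graph node 3 is cited by three hubs and node 1 by two, so both the exact and the sampled rankings should put node 3 on top.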

  8. 32P Brachytherapy Conformal Source Model RIC-100 for High-Dose-Rate Treatment of Superficial Disease: Monte Carlo Calculations, Diode Measurements, and Clinical Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Gil'ad N., E-mail: coheng@mskcc.org [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Munro, John J. [Montrose Technology, Inc, North Andover, Massachusetts (United States); Kirov, Assen; Losasso, Thomas [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Yamada, Yoshiya [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Williamson, Matthew; Dauer, Lawrence T.; Zaider, Marco [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States)

    2014-03-01

    Purpose: A novel 32P brachytherapy source has been in use at our institution intraoperatively for temporary radiation therapy of the spinal dura and other localized tumors. We describe the dosimetry and clinical implementation of the source. Methods and Materials: Dosimetric evaluation for the source was done with a complete set of MCNP5 Monte Carlo calculations preceding clinical implementation. In addition, the depth dose curve and dose rate were measured by use of an electron field diode to verify the Monte Carlo calculations. Calibration procedures using the diode in a custom-designed phantom to provide an absolute dose calibration and to check dose uniformity across the source area for each source before treatment were established. Results: Good agreement was established between the Monte Carlo calculations and diode measurements. Quality assurance measurements results are provided for about 100 sources used to date. Clinical source calibrations were usually within 10% of manufacturer specifications. Procedures for safe handling of the source are described. Discussion: Clinical considerations for using the source are discussed.

  9. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method

    Institute of Scientific and Technical Information of China (English)

    XI Jia-mi; YANG Geng-she

    2008-01-01

    This paper discusses the advantages of an improved Monte-Carlo method and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability. On the basis of a deterministic analysis of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The method treats the relevant parameters as random variables and therefore accounts for the correlations among them. The proposed method can reasonably determine the reliability of surrounding rock stability, and calculation results show that it is a scientific approach for assessing and checking surrounding rock stability.

  10. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer - I. Algorithms and numerical methods

    Science.gov (United States)

    Harries, Tim J.

    2015-04-01

    We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
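The packet-splitting acceleration mentioned above rests on a simple invariant: a packet entering a very optically thick cell is replaced by several lighter packets whose weights sum to the original, so energy is conserved while the random walk stays well sampled. The optical-depth threshold and splitting factor below are arbitrary illustrative values, not the criterion used in the paper's code.

```python
def split_packets(packets, tau_cell, tau_split=10.0, n_split=4):
    """Split each photon packet into n_split equal-weight children when the
    cell optical depth exceeds tau_split; total weight is exactly conserved."""
    if tau_cell <= tau_split:
        return list(packets)  # thin cell: leave packets alone
    out = []
    for w in packets:
        out.extend([w / n_split] * n_split)
    return out

before = [4.0, 2.0, 1.0]            # packet weights (arbitrary units)
after = split_packets(before, tau_cell=25.0)
```

A production code would also cap the total packet count and recombine (Russian roulette) low-weight packets, which this sketch omits.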

  11. New methods for the Monte Carlo simulation of neutron noise experiments in ADS

    International Nuclear Information System (INIS)

    This paper presents two improvements to speed up the Monte-Carlo simulation of neutron noise experiments. The first one is to separate the actual Monte Carlo transport calculation from the digital signal processing routines, while the second is to introduce non-analogue techniques to improve the efficiency of the Monte Carlo calculation. For the latter method, adaptations to the theory of neutron noise experiments were made to account for the distortion of the higher-moments of the calculated neutron noise. Calculations were performed to test the feasibility of the above outlined scheme and to demonstrate the advantages of the application of the track length estimator. It is shown that the modifications improve the efficiency of these calculations to a high extent, which turns the Monte Carlo method into a powerful tool for the development and design of on-line reactivity measurement systems for ADS

  12. Fermion-Dimer Scattering using Impurity Lattice Monte Carlo and the Adiabatic Projection Method

    CERN Document Server

    Elhatisari, Serdar

    2014-01-01

    We present lattice Monte Carlo calculations of fermion-dimer scattering in the limit of zero-range interactions using the adiabatic projection method. The adiabatic projection method uses a set of initial cluster states and Euclidean time projection to give a systematically improvable description of the low-lying scattering cluster states in a finite volume. We use Lüscher's finite-volume relations to determine the $s$-wave, $p$-wave, and $d$-wave phase shifts. For comparison, we also compute exact lattice results using Lanczos iteration and continuum results using the Skorniakov-Ter-Martirosian equation. For our Monte Carlo calculations we use a new lattice algorithm called impurity lattice Monte Carlo. This algorithm can be viewed as a hybrid technique which incorporates elements of both worldline and auxiliary-field Monte Carlo simulations.

  13. Fermion-dimer scattering using an impurity lattice Monte Carlo approach and the adiabatic projection method

    Science.gov (United States)

    Elhatisari, Serdar; Lee, Dean

    2014-12-01

    We present lattice Monte Carlo calculations of fermion-dimer scattering in the limit of zero-range interactions using the adiabatic projection method. The adiabatic projection method uses a set of initial cluster states and Euclidean time projection to give a systematically improvable description of the low-lying scattering cluster states in a finite volume. We use Lüscher's finite-volume relations to determine the s -wave, p -wave, and d -wave phase shifts. For comparison, we also compute exact lattice results using Lanczos iteration and continuum results using the Skorniakov-Ter-Martirosian equation. For our Monte Carlo calculations we use a new lattice algorithm called impurity lattice Monte Carlo. This algorithm can be viewed as a hybrid technique which incorporates elements of both worldline and auxiliary-field Monte Carlo simulations.

  14. The all particle method: Coupled neutron, photon, electron, charged particle Monte Carlo calculations

    International Nuclear Information System (INIS)

    At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of: neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon induced electron emission. Since this code is being designed to handle all particles this approach is called the ''All Particle Method''. The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models ''hard wired'' into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to directly control the execution of the program. In addition this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig

  15. Prediction of Protein-DNA binding by Monte Carlo method

    Science.gov (United States)

    Deng, Yuefan; Eisenberg, Moises; Korobka, Alex

    1997-08-01

    We present an analysis and prediction of protein-DNA binding specificity based on the hydrogen bonding between DNA, protein, and auxiliary clusters of water molecules. Zif268, glucocorticoid receptor, λ-repressor mutant, HIN-recombinase, and tramtrack protein-DNA complexes are studied. Hydrogen bonds are approximated by the Lennard-Jones potential with a cutoff distance between the hydrogen and the acceptor atoms set to 3.2 Å and an angular component based on a dipole-dipole interaction. We use a three-stage docking algorithm: (1) geometric hashing that matches pairs of hydrogen bonding sites; (2) least-squares minimization of pairwise distances to filter out insignificant matches; and (3) Monte Carlo stochastic search to minimize the energy of the system. More information can be obtained from our first paper on this subject [Y. Deng et al., J. Computational Chemistry (1995)]. Results show that the biologically correct base pair is selected preferentially when there are two or more strong hydrogen bonds (with LJ potential lower than -0.20) that bind it to the protein. Predicted sequences are less stable in the case of weaker bonding sites. In general the inclusion of water bridges does increase the number of base pairs for which correct specificity is predicted.
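The bond-scoring rule can be sketched as a Lennard-Jones radial term with the 3.2 Å hydrogen-acceptor cutoff quoted above, modulated by an angular alignment factor standing in for the dipole-dipole component. The well depth and radius are invented illustrative numbers; with this choice, a well-aligned bond at the LJ minimum scores below the -0.20 "strong bond" threshold mentioned in the abstract.

```python
import math

EPS = 0.25     # LJ well depth (arbitrary units), hypothetical
SIGMA = 2.8    # LJ radius in Angstrom, hypothetical
CUTOFF = 3.2   # hydrogen-acceptor cutoff from the abstract, in Angstrom

def hbond_energy(r, cos_angle):
    """12-6 Lennard-Jones radial term times an angular alignment factor."""
    if r > CUTOFF or r <= 0.0:
        return 0.0                        # outside the hydrogen-bond range
    sr6 = (SIGMA / r) ** 6
    radial = 4.0 * EPS * (sr6 * sr6 - sr6)
    return radial * max(0.0, cos_angle)   # only favorable alignments bond

# best case: LJ minimum at r = sigma * 2^(1/6), perfect alignment
e_min = hbond_energy(2.8 * 2 ** (1 / 6), 1.0)
```

In the docking pipeline such a score would be summed over candidate donor-acceptor pairs and minimized by the Monte Carlo search of stage (3).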

  16. Combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation

    Energy Technology Data Exchange (ETDEWEB)

    Saleur, H.; Derrida, B.

    1985-07-01

    In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
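The Monte Carlo half of such a study can be sketched directly: generate random site configurations on a small 2D strip and estimate the probability that an occupied cluster spans it, for occupation probabilities on either side of the known 2D site percolation threshold (p_c ≈ 0.5927). Lattice size and trial counts are illustrative, and the transfer-matrix half is not shown.

```python
import random
from collections import deque

def spans(grid, L):
    """BFS from occupied top-row sites; True if the bottom row is reached."""
    seen = set()
    q = deque((0, x) for x in range(L) if grid[0][x])
    seen.update(q)
    while q:
        y, x = q.popleft()
        if y == L - 1:
            return True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < L and 0 <= nx < L and grid[ny][nx] \
                    and (ny, nx) not in seen:
                seen.add((ny, nx))
                q.append((ny, nx))
    return False

def spanning_probability(p, L=16, trials=300, seed=11):
    """MC estimate of the top-to-bottom spanning probability at occupancy p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(L)] for _ in range(L)]
        hits += spans(grid, L)
    return hits / trials

p_low = spanning_probability(0.45)   # below the 2D threshold
p_high = spanning_probability(0.70)  # above the 2D threshold
```

Repeating this for several strip widths and fitting how the crossover sharpens with L is the finite-size scaling step the abstract refers to.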

  17. Hybrid Monte-Carlo method for simulating neutron and photon radiography

    International Nuclear Information System (INIS)

    We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs
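The post-processing half of the hybrid can be sketched as follows: take a noiseless "film dose" image (here a tiny synthetic array standing in for an MCNP tally) and add counting noise pixel by pixel. The photons-per-dose calibration constant is a hypothetical placeholder, and a Gaussian approximation to Poisson noise is used since the Python standard library has no Poisson sampler.

```python
import random

def add_film_noise(exposure, photons_per_unit=1000.0, seed=5):
    """Add counting noise to a noiseless exposure image, pixel by pixel."""
    rng = random.Random(seed)
    noisy = []
    for row in exposure:
        noisy_row = []
        for dose in row:
            lam = dose * photons_per_unit
            # Poisson counting noise, normal approximation for large lam
            counts = max(0.0, rng.gauss(lam, lam ** 0.5))
            noisy_row.append(counts / photons_per_unit)
        noisy.append(noisy_row)
    return noisy

clean = [[0.5, 1.0], [1.0, 2.0]]   # synthetic stand-in for an MCNP film tally
noisy = add_film_noise(clean)
```

Separating transport (expensive, run once) from noise (cheap, re-sampled at will) is what lets the hybrid explore detector statistics without rerunning the Monte Carlo.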

  18. Hybrid Monte-Carlo method for simulating neutron and photon radiography

    Science.gov (United States)

    Wang, Han; Tang, Vincent

    2013-11-01

    We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.

  19. The SN/Monte Carlo response matrix hybrid method

    International Nuclear Information System (INIS)

    A hybrid method has been developed to iteratively couple SN and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts to do the coupling and results in a general and relatively efficient method. We demonstrate the method with some simple examples

  20. Progress on burnup calculation methods coupling Monte Carlo and depletion codes

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, Francisco [Comision Nacional de Energia Atomica, San Carlos de Bariloche, RN (Argentina). Centro Atomico Bariloche]. E-mail: lesinki@cab.cnea.gob.ar

    2005-07-01

    Several methods of burnup calculation coupling Monte Carlo and depletion codes, investigated and applied by the author in recent years, are described here, together with some benchmark results and an analysis of future possibilities. The methods are: depletion calculations at the cell level with WIMS or other cell codes, using the resulting concentrations of fission products, poisons and actinides in Monte Carlo calculations for fixed burnup distributions obtained from diffusion codes; the same as the first, but using a method of coupling Monte Carlo (MCNP) and a depletion code (ORIGEN) at the cell level to obtain the nuclide concentrations to be used in a full-reactor calculation with a Monte Carlo code; and full calculation of the system with Monte Carlo and depletion codes over several steps. All these methods were used on different problems for research reactors, and some comparisons with experimental results for regular lattices were performed. This work summarizes these studies and discusses the advantages and problems found. A brief description of the methods adopted and of the MCQ system for coupling the MCNP and ORIGEN codes is also included. (author)

  1. Non-analogue Monte Carlo method, application to neutron simulation; Methode de Monte Carlo non analogue, application a la simulation des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Morillon, B.

    1996-12-31

    With most traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, natural (analogue) simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has successfully used such techniques for a long time with different approximate adjoint solutions; these methods require the user to determine certain parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; we then show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique in which the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic scattering and for multigroup problems with anisotropic scattering. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without splitting and Russian roulette, while for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.
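The gain from non-analogue sampling is easiest to see on the textbook deep-penetration problem: estimating the probability exp(-d) that a particle travels d mean free paths without colliding. Analogue sampling almost never scores for large d; sampling path lengths from a stretched exponential and carrying a statistical weight recovers the answer cheaply. The biasing parameter b = 1/d is a common heuristic, not Tripoli's adjoint-derived scheme.

```python
import math
import random

def analog(d, n=100000, seed=2):
    """Analogue estimate of P(path length > d) for unit-rate exponential flights."""
    rng = random.Random(seed)
    return sum(rng.expovariate(1.0) > d for _ in range(n)) / n

def biased(d, n=100000, seed=2):
    """Non-analogue estimate: sample stretched flights, carry a weight."""
    rng = random.Random(seed)
    b = 1.0 / d                     # stretched sampling rate (heuristic)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(b)
        if x > d:
            # weight = true pdf / sampling pdf, keeps the estimator unbiased
            total += math.exp(-x) / (b * math.exp(-b * x))
    return total / n

d = 15.0
exact = math.exp(-d)   # ~3.1e-7: essentially unreachable by analogue sampling
est = biased(d)
```

The automatic scheme in the paper replaces the hand-picked b with parameters derived from an approximate adjoint (importance) solution, which is what removes the burden of tuning from the user.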

  2. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    Energy Technology Data Exchange (ETDEWEB)

    Gleisberg, Tanju

    2008-07-01

    The subject of this thesis is the development of tools for the automated calculation of exact matrix elements, which are key to systematically improving the precision of, and confidence in, theoretical predictions. Part I of this thesis concentrates on calculations of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels in the Higgs boson search at the LHC, and anomalous gauge couplings parameterizing a number of models beyond the SM. Further, a special treatment for complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented: the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter, the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving both precision and efficiency. Part II addresses the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real-correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove

  3. Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method

    CERN Document Server

    Wollaeger, Ryan T; Graziani, Carlo; Couch, Sean M; Jordan, George C; Lamb, Donald Q; Moses, Gregory A

    2013-01-01

    We explore the application of Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) to radiation transport in strong fluid outflows with structured opacity. The IMC method of Fleck & Cummings is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking Monte Carlo particles through optically thick materials. The DDMC method of Densmore accelerates an IMC computation where the domain is diffusive. Recently, Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent neutrino transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally grey DDMC method. In this article we rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. The method described is suitable for a large variety of non-mono...

  4. TMD PDFs. A Monte Carlo implementation for the sea quark distribution

    Energy Technology Data Exchange (ETDEWEB)

    Hautmann, F. [Oxford Univ. (United Kingdom). Dept. of Theoretical Physics; Hentschinski, M. [Univ. Autonoma de Madrid (Spain). Dept. Fisica Teorica UAM/CSIC; Jung, H. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2012-05-15

    This article gives an introduction to transverse momentum dependent (TMD) parton distribution functions and their use in shower Monte Carlo event generators for high-energy hadron collisions, and describes recent progress in the treatment of sea quark effects within a TMD parton-shower framework.

  5. Implementation and the choice of evaluation methods

    DEFF Research Database (Denmark)

    Flyvbjerg, Bent

    1984-01-01

    The development of evaluation and implementation processes has been closely interrelated in both theory and practice. Today, two major paradigms of evaluation and implementation exist: the programmed paradigm, with its approach based on the natural science model, and the adaptive paradigm, with an approach founded more in phenomenology and social science. The role of analytical methods is viewed very differently in the two paradigms, as is the conception of the policy process in general. Although analytical methods have come to play a prominent (and often dominant) role in transportation evaluation as defined by the programmed paradigm, the way in which analytical methods have been applied in practice can be criticized for narrowing and biasing policy formulation and implementation. The adaptive paradigm has its own problems, one being a less developed theoretical and methodological basis than that of the programmed paradigm.

  6. Metric conjoint segmentation methods : A Monte Carlo comparison

    NARCIS (Netherlands)

    Vriens, M; Wedel, M; Wilms, T

    1996-01-01

    The authors compare nine metric conjoint segmentation methods. Four methods concern two-stage procedures in which the estimation of conjoint models and the partitioning of the sample are performed separately; in five, the estimation and segmentation stages are integrated. The methods are compared co

  7. A Monte Carlo Study of Eight Confidence Interval Methods for Coefficient Alpha

    Science.gov (United States)

    Romano, Jeanine L.; Kromrey, Jeffrey D.; Hibbard, Susan T.

    2010-01-01

    The purpose of this research is to examine eight of the different methods for computing confidence intervals around alpha that have been proposed to determine which of these, if any, is the most accurate and precise. Monte Carlo methods were used to simulate samples under known and controlled population conditions. In general, the differences in…

  8. The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Clifford [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); Ji, Weixiao [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); Blaisten-Barojas, Estela, E-mail: blaisten@gmu.edu [Computational Materials Science Center, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States); School of Physics, Astronomy, and Computational Sciences, George Mason University, 4400 University Dr., Fairfax, VA 22030 (United States)

    2014-02-01

    We present a CPU–GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as container for simulation data stored on the graphics card and as floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To fully take advantage of this mechanism, efficient GPU realizations of algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication time of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU–GPU algorithms exploiting memory concepts combined with threading for avoiding bus bandwidth and communication. The testbed molecular system used here is a condensed phase system of oligopyrrole chains. A benchmark shows a size scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect in parallel several CPU–GPU duets. -- Highlights: • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU–GPU duet. • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU–GPU implementation. • Our benchmark shows a size scaling-up speedup of 62 for systems with 225,000 particles. • The testbed involves a polymeric system of oligopyrroles in the condensed phase. • The CPU–GPU parallelization includes dipole–dipole and Mie–Jones classic potentials.
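
    The MMC algorithm accelerated in this record is the standard Metropolis acceptance rule. As a hedged illustration only, the following is a minimal serial sketch of that rule, not the authors' GPU engine; the harmonic toy energy, step size and chain length are assumptions chosen for the example.

    ```python
    import math
    import random

    def metropolis_step(state, energy_fn, propose_fn, beta, rng):
        """One Metropolis update: accept a proposed move with prob min(1, exp(-beta*dE))."""
        candidate = propose_fn(state, rng)
        d_e = energy_fn(candidate) - energy_fn(state)
        if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
            return candidate  # move accepted
        return state          # move rejected, state repeated in the chain

    # Toy target: Boltzmann distribution of a 1D harmonic potential E(x) = x^2/2,
    # whose variance at beta = 1 is exactly 1.
    rng = random.Random(1)
    energy = lambda x: 0.5 * x * x
    propose = lambda x, r: x + r.uniform(-2.0, 2.0)

    x = 0.0
    samples = []
    for i in range(60000):
        x = metropolis_step(x, energy, propose, beta=1.0, rng=rng)
        if i >= 10000:  # discard burn-in
            samples.append(x)

    var = sum(s * s for s in samples) / len(samples)
    print(var)
    ```

    The sampled variance converges toward the analytic value 1, which is a quick sanity check for any Metropolis implementation before porting it to a parallel architecture.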

  9. Spectral method and its high performance implementation

    KAUST Repository

    Wu, Zedong

    2014-01-01

    We have presented a new method that is dispersion free and unconditionally stable, which greatly reduces the computational cost and memory requirement. Based on this feature, we have implemented the algorithm in GPU-based CUDA for anisotropic reverse time migration, with almost no communication between CPU and GPU. For prestack wavefield extrapolation, it can combine all the shots for migration; however, this requires solving a higher-dimensional problem that needs more memory than fits on a single GPU card. In this situation, we implement it based on a domain decomposition method and MPI for distributed memory systems.

  10. Comparison of uncertainty in fatigue tests obtained by the Monte Carlo method in two softwares

    Science.gov (United States)

    Trevisan, Lisiane; Kapper Fabricio, Daniel Antonio; Reguly, Afonso

    2016-07-01

    The Supplement 1 to the “Guide to the expression of uncertainty in measurement” recommends the Monte Carlo method for calculating the expanded measurement uncertainty. The objective of this work is to compare the measurement uncertainty values obtained via the Monte Carlo method in two commercial software packages (Matlab® and Crystal Ball®) for the parameter ‘adjusted strain’, obtained from fatigue tests. Simulations were carried out using different numbers of iterations and different levels of confidence. The results showed only small differences between the measurement uncertainty values generated by the two packages.
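
    The GUM Supplement 1 procedure compared in this record amounts to propagating input distributions through the measurement model by sampling. A hedged sketch follows; the measurement model and the input distributions below are hypothetical stand-ins for illustration, not the paper's 'adjusted strain' model.

    ```python
    import random
    import statistics

    def mc_uncertainty(model, inputs, trials=100000, seed=42, coverage=0.95):
        """Propagate input distributions through a measurement model
        (GUM Supplement 1 style). `inputs` is a list of samplers, one per
        input quantity, each taking an random.Random instance."""
        rng = random.Random(seed)
        ys = sorted(model(*(draw(rng) for draw in inputs)) for _ in range(trials))
        lo = ys[int(trials * (1 - coverage) / 2)]   # lower coverage bound
        hi = ys[int(trials * (1 + coverage) / 2)]   # upper coverage bound
        return statistics.mean(ys), statistics.stdev(ys), (lo, hi)

    # Hypothetical model: strain = F / (E * A), with normally distributed inputs.
    model = lambda F, E, A: F / (E * A)
    inputs = [
        lambda r: r.gauss(1000.0, 5.0),   # load F [N]
        lambda r: r.gauss(200e9, 2e9),    # Young's modulus E [Pa]
        lambda r: r.gauss(1e-4, 1e-6),    # cross-section A [m^2]
    ]
    mean, u, (lo, hi) = mc_uncertainty(model, inputs)
    print(mean, u)
    ```

    Running the same sampler with different trial counts and coverage levels reproduces, in miniature, the comparison the authors perform across the two packages.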

  11. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

    International Nuclear Information System (INIS)

    Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations

  12. External individual monitoring: experiments and simulations using Monte Carlo Method

    International Nuclear Information System (INIS)

    In this work, we have evaluated the possibility of applying the Monte Carlo simulation technique in photon dosimetry of external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X ray spectra were generated by impinging electrons on a tungsten target. Then, the produced photon beam was filtered in a beryllium window and additional filters to obtain radiation with the desired qualities. This procedure, used to simulate radiation fields produced by an X ray tube, was validated by comparing characteristics such as the half value layer, which was also experimentally measured, the mean photon energy and the spectral resolution of simulated spectra with those of reference spectra established by international standards. In the construction of the thermoluminescent dosimeter, two approaches for improvement have been introduced. The first one was the inclusion of 6% of air in the composition of the CaF2:NaCl detector due to the difference between measured and calculated values of its density. Also, comparison between simulated and experimental results showed that the self-attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account. Then, in the second approach, the light attenuation coefficient of the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm-1, was introduced. Conversion coefficients Cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl-metacrilate (PMMA) walls, for the reference narrow and wide X ray spectrum series [ISO 4037-1], and also for the wide spectra implanted and used in routine at the Laboratorio de Dosimetria. Simulations of radiation backscattered by the PMMA slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results. Therefore, the PMMA slab water phantom, which can be easily constructed at low cost, can

  13. Implementation of Mobility Management Methods for MANET

    Directory of Open Access Journals (Sweden)

    Jiri Hosek

    2012-12-01

    Full Text Available Mobile Ad hoc Networks represent a very promising way of communication. Mobility management is one of the most often discussed research issues within these networks. Many methods and algorithms have been designed to control and predict the movement of mobile nodes, but each method has a different functional principle and is suitable for different environments and network circumstances. Therefore, it is advantageous to use a simulation tool to model and evaluate a mobile network together with the mobility management method. The aim of this paper is to present the implementation of movement control methods in the simulation environment OPNET Modeler based on the TRJ file. The described trajectory control procedure utilizes the information about the route stored in a GPX file, which is used to store GPS coordinates. The developed conversion tool, the implementation of the proposed method in OPNET Modeler and the final evaluation are presented in this paper.

  14. A GPU-based Large-scale Monte Carlo Simulation Method for Systems with Long-range Interactions

    CERN Document Server

    Liang, Yihao; Li, Yaohang

    2016-01-01

    In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures. It adopts the sequential updating scheme of Metropolis algorithm, and makes no approximation in the computation of energy. It reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We use this method to simulate primitive model electrolytes. We measure very precisely all ion-ion pair correlation functions at high concentrations, and extract renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.

  15. Quasi-Monte Carlo methods for lattice systems. A first look

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik

    2013-02-15

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N{sup -1/2}, where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N{sup -1}. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
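
    The N^{-1/2} versus N^{-1} error scaling that motivates this record can be observed even on a toy integral. A hedged sketch, assuming a base-2 van der Corput sequence and the integrand x^2 purely for illustration; these are not the lattice observables of the paper.

    ```python
    import random

    def van_der_corput(n, base=2):
        """n-th element of the base-b van der Corput low-discrepancy sequence
        (digit reversal of n in the given base, placed after the radix point)."""
        q, bk = 0.0, 1.0 / base
        while n > 0:
            n, rem = divmod(n, base)
            q += rem * bk
            bk /= base
        return q

    f = lambda x: x * x          # integral over [0, 1] is exactly 1/3
    N = 4096
    rng = random.Random(0)
    mc  = sum(f(rng.random()) for _ in range(N)) / N            # plain Monte Carlo
    qmc = sum(f(van_der_corput(i + 1)) for i in range(N)) / N   # quasi-Monte Carlo
    print(abs(mc - 1/3), abs(qmc - 1/3))
    ```

    For this smooth integrand the low-discrepancy estimate is typically orders of magnitude closer to 1/3 than the pseudorandom one at the same N, mirroring the improved error behavior the authors verify for the harmonic and anharmonic oscillator.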

  16. Zone modeling of radiative heat transfer in industrial furnaces using adjusted Monte-Carlo integral method for direct exchange area calculation

    International Nuclear Information System (INIS)

    This paper proposes the Monte-Carlo Integral method for direct exchange area calculation in the zone method for the first time. This method is simple and able to handle complex-geometry zone problems and the self-zone radiation problem. The Monte-Carlo Integral method is adjusted to improve the efficiency, so that an acceptable accuracy within a reasonable computation time can be achieved. The zone method with the adjusted Monte-Carlo Integral method is used for the modeling and simulation of radiative transfer in an industrial furnace. The simulation result is compared with the industrial data and shows good agreement. It also shows that the high temperature flue gas heats the furnace wall, which reflects the radiant heat to the reactor tubes. The highest temperature of the flue gas and the side wall appears at roughly one third of the furnace height from the bottom, which corresponds with the industrial measurement data. The simulation result indicates that the zone method is comprehensive and easy to implement for radiative phenomena in the furnace. - Highlights: • The Monte Carlo Integral method for evaluating direct exchange areas. • Adjustment from the MCI method to the AMCI method for efficiency. • Examination of the performance of the MCI and AMCI methods. • Development of the 3D zone model with the AMCI method. • The simulation results show good accordance with the industrial data

  17. Improving Power System Risk Evaluation Method Using Monte Carlo Simulation and Gaussian Mixture Method

    Directory of Open Access Journals (Sweden)

    GHAREHPETIAN, G. B.

    2009-06-01

    Full Text Available The analysis of the risk of partial and total blackouts plays a crucial role in determining safe limits in power system design, operation and upgrade. Due to the huge cost of blackouts, it is very important to improve risk assessment methods. In this paper, Monte Carlo simulation (MCS) was used to analyze the risk and the Gaussian Mixture Method (GMM) was used to estimate the probability density function (PDF) of the load curtailment, in order to improve the power system risk assessment method. In this improved method, the PDF and a suggested index have been used to analyze the risk of loss of load. The effect of considering the number of generation units of power plants in the risk analysis has been studied too. The improved risk assessment method has been applied to the IEEE 118 bus system and the network of the Khorasan Regional Electric Company (KREC), and the PDF of the load curtailment has been determined for both systems. The effect of various network loadings, transmission unavailability, transmission capacity and generation unavailability conditions on blackout risk has been investigated too.
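
    The MCS stage of such a risk assessment reduces to repeatedly sampling the up/down state of every generating unit and counting capacity shortfalls. A hedged sketch follows; the five-unit system, forced-outage rates and load level are invented for illustration, and the GMM density-fitting stage of the paper is omitted.

    ```python
    import random

    def lolp_mcs(units, load, trials=200000, seed=7):
        """Estimate loss-of-load probability by Monte Carlo simulation.
        Each trial samples the availability of every generating unit;
        units is a list of (capacity, forced_outage_rate) pairs."""
        rng = random.Random(seed)
        shortfalls = 0
        for _ in range(trials):
            capacity = sum(cap for cap, p_fail in units if rng.random() > p_fail)
            if capacity < load:
                shortfalls += 1
        return shortfalls / trials

    # Hypothetical system: five 100 MW units, 2% forced-outage rate each, 350 MW load.
    # Loss of load requires at least two simultaneous outages.
    units = [(100.0, 0.02)] * 5
    est = lolp_mcs(units, load=350.0)
    print(est)
    ```

    The same trial loop, run per load level, yields the empirical load-curtailment samples to which a Gaussian mixture could then be fitted.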

  18. Dynamic measurements and uncertainty estimation of clinical thermometers using Monte Carlo method

    Science.gov (United States)

    Ogorevc, Jaka; Bojkovski, Jovan; Pušnik, Igor; Drnovšek, Janko

    2016-09-01

    Clinical thermometers in intensive care units are used for the continuous measurement of body temperature. This study describes a procedure for dynamic measurement uncertainty evaluation in order to examine the requirements for clinical thermometer dynamic properties in standards and recommendations. In this study thermistors were used as temperature sensors, transient temperature measurements were performed in water and air and the measurement data were processed for the investigation of thermometer dynamic properties. The thermometers were mathematically modelled. A Monte Carlo method was implemented for dynamic measurement uncertainty evaluation. The measurement uncertainty was analysed for static and dynamic conditions. Results showed that dynamic uncertainty is much larger than steady-state uncertainty. The results of dynamic uncertainty analysis were applied on an example of clinical measurements and were compared to current requirements in ISO standard for clinical thermometers. It can be concluded that there was no need for dynamic evaluation of clinical thermometers for continuous measurement, while dynamic measurement uncertainty was within the demands of target uncertainty. Whereas in the case of intermittent predictive thermometers, the thermometer dynamic properties had a significant impact on the measurement result. Estimation of dynamic uncertainty is crucial for the assurance of traceable and comparable measurements.

  19. Implementation of 3D Lattice Monte Carlo Simulation on a Cluster of Symmetric Multiprocessors

    Institute of Scientific and Technical Information of China (English)

    雷咏梅; 蒋英; 等

    2002-01-01

    This paper presents a new approach to parallelizing 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load for cell and energy calculations over the time step is balanced together to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied. The different steps involved in porting the software to a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.

  20. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models have ... within-person and within-family dependences. In the second study we compare four different screening methods for cervical cancer. The methods have been performed on a number of women, and the data possess a complicated dependence structure due to the replicate test results for the same woman. We conclude that a cervical smear analysed by DNA typing is the best to screen for cervical cancer. The analysis presents some general ideas for the comparison of diagnostic tests.

  1. An Implementation of the Frequency Matching Method

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Hansen, Thomas Mejer;

    During the last decade multiple-point statistics has become increasingly popular as a tool for incorporating complex prior information when solving inverse problems in geosciences. A variety of methods have been proposed, but often their implementation is not straightforward. One of these methods is the recently proposed Frequency Matching method to compute the maximum a posteriori model of an inverse problem where multiple-point statistics, learned from a training image, is used to formulate a closed-form expression for an a priori probability density function. This paper discusses aspects of the implementation of the Frequency Matching method and the techniques adopted to make it computationally feasible also for large-scale inverse problems. The source code is publicly available at GitHub and this paper also provides an example of how to apply the Frequency Matching method.

  2. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method; Calibracion del detector identiFINDER para la medicion de yodo en tiroides utilizando el metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)

    2014-08-15

    This work is based on the determination of the detection efficiency of {sup 125}I and {sup 131}I in the thyroid with the identiFINDER detector using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed minimizing the uncertainties of the estimates. Finally, simulations of the detector and a point source were performed to find the correction factors at 5 cm, 15 cm and 25 cm, and those corresponding to the detector-simulator arrangement for the validation of the method and the final calculation of the efficiency. These demonstrated that if the Monte Carlo simulation is performed at a greater distance than that used in the laboratory measurements, the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation should therefore be performed at the same distance at which the measurement will actually be made. The efficiency curves and the minimum detectable activity for the measurement of {sup 131}I and {sup 125}I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method represents an ideal way to compensate for the lack of standard solutions and simulators, ensuring that the capacities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)

  3. Correlation between vacancies and magnetoresistance changes in FM manganites using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Agudelo-Giraldo, J.D. [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia); Restrepo-Parra, E., E-mail: erestrepopa@unal.edu.co [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia); Restrepo, J. [Grupo de Magnetismo y Simulación, Instituto de Física, Universidad de Antioquia, A.A. 1226, Medellín (Colombia)

    2015-10-01

    The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La{sub 2/3}Ca{sub 1/3}MnO{sub 3}, which depends on the Mn ion vacancies as the external magnetic field increases. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, and it had L=30 umc (units of magnetic cells) for its dimension in the x–y plane and was d=12 umc in thickness. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the external applied magnetic field response. The system that was considered contains mixed-valence bonds: Mn{sup 3+eg’}–O–Mn{sup 3+eg}, Mn{sup 3+eg}–O–Mn{sup 4+d3} and Mn{sup 3+eg’}–O–Mn{sup 4+d3}. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions T{sub C} (Curie temperature) and T{sub MI} (metal–insulator temperature) are similar, whereas with the increase in the vacancy percentage, T{sub MI} presented lower values than T{sub C}. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below T{sub MI}. Resistivity loops were also observed, which shows a direct correlation with the hysteresis loops of magnetization at temperatures below T{sub C}. - Highlights: • Changes in the resistivity of FM materials as a function of the temperature and external magnetic field can be obtained by the Monte Carlo method, Metropolis algorithm, classical Heisenberg and Kronig–Penney approximation for magnetic clusters. • Increases in the magnetoresistive effect were observed at temperatures below T{sub MI} by the vacancies effect. • The resistive hysteresis

  4. Methods of Monte Carlo biasing using two-dimensional discrete ordinates adjoint flux

    Energy Technology Data Exchange (ETDEWEB)

    Tang, J.S.; Stevens, P.N.; Hoffman, T.J.

    1976-06-01

    Methods of biasing three-dimensional deep penetration Monte Carlo calculations using importance functions obtained from a two-dimensional discrete ordinates adjoint calculation have been developed. The important distinction was made between the applications of the point value and the event value to alter the random walk in Monte Carlo analysis of radiation transport. The biasing techniques developed are the angular probability biasing which alters the collision kernel using the point value as the importance function and the path length biasing which alters the transport kernel using the event value as the importance function. Source location biasings using the step importance function and the scalar adjoint flux obtained from the two-dimensional discrete ordinates adjoint calculation were also investigated. The effects of the biasing techniques to Monte Carlo calculations have been investigated for neutron transport through a thick concrete shield with a penetrating duct. Source location biasing, angular probability biasing, and path length biasing were employed individually and in various combinations. Results of the biased Monte Carlo calculations were compared with the standard Monte Carlo and discrete ordinates calculations.
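
    Of the techniques compared in this record, path-length biasing is the easiest to isolate in a small example. The sketch below is a hedged illustration of the generic idea on a 1D mono-energetic absorber, not the authors' adjoint-driven 2D scheme; the stretch factor, cross section and slab thickness are assumptions. Free flights are drawn from a stretched exponential, and each escaping particle carries the likelihood-ratio weight f(x)/g(x) so the estimator stays unbiased.

    ```python
    import math
    import random

    def slab_transmission(sigma, t, trials, stretch=1.0, seed=3):
        """Monte Carlo estimate of the uncollided transmission exp(-sigma*t).
        stretch > 1 applies path-length biasing: flights are sampled from
        an exponential with the reduced cross section sigma/stretch, and
        escaping particles are weighted by f(x)/g(x)."""
        rng = random.Random(seed)
        sigma_b = sigma / stretch                  # biased (stretched) cross section
        est = 0.0
        for _ in range(trials):
            x = -math.log(1.0 - rng.random()) / sigma_b   # biased free flight
            if x >= t:                                     # particle escapes the slab
                est += (sigma / sigma_b) * math.exp(-(sigma - sigma_b) * x)
        return est / trials

    exact = math.exp(-10.0)                        # 10 mean-free-path deep penetration
    analog = slab_transmission(1.0, 10.0, 100000, stretch=1.0)
    biased = slab_transmission(1.0, 10.0, 100000, stretch=10.0)
    print(exact, analog, biased)
    ```

    In the analog run only a handful of the 100,000 histories reach the far side of a 10 mean-free-path slab, so the estimate is dominated by statistical noise; the biased run scores tens of thousands of weighted escapes and lands within a few percent of the exact answer, which is the effect deep-penetration biasing is designed to achieve.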

  5. A variance-reduced electrothermal Monte Carlo method for semiconductor device simulation

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio; Di Stefano, Vincenza [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) Leibniz-Institut im Forschungsverbund Berlin e.V., Berlin (Germany)

    2012-11-01

    This paper is concerned with electron transport and heat generation in semiconductor devices. An improved version of the electrothermal Monte Carlo method is presented. This modification has better approximation properties due to reduced statistical fluctuations. The corresponding transport equations are provided and results of numerical experiments are presented.

  6. A Monte Carlo Green's function method for three-dimensional neutron transport

    International Nuclear Information System (INIS)

    This paper describes a Monte Carlo transport kernel capability, which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This is a very powerful transport technique. Also, since the kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex, yet exact. This method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions. Two examples are ex-core neutron detector response and vessel critical weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution

  7. The information-based complexity of approximation problem by adaptive Monte Carlo methods

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MWpr,α(Td), 1 < p < ∞, in the norm of Lq(Td), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of the pseudo-s-scale, we determine the exact asymptotic orders of this problem.

  8. A Monte-Carlo-Based Network Method for Source Positioning in Bioluminescence Tomography

    OpenAIRE

    Zhun Xu; Xiaolei Song; Xiaomeng Zhang; Jing Bai

    2007-01-01

    We present an approach based on the improved Levenberg Marquardt (LM) algorithm of backpropagation (BP) neural network to estimate the light source position in bioluminescent imaging. For solving the forward problem, the table-based random sampling algorithm (TBRS), a fast Monte Carlo simulation method ...

  9. Calibration coefficient of reference brachytherapy ionization chamber using analytical and Monte Carlo methods.

    Science.gov (United States)

    Kumar, Sudhir; Srinivasan, P; Sharma, S D

    2010-06-01

    A cylindrical graphite ionization chamber of sensitive volume 1002.4 cm(3) was designed and fabricated at Bhabha Atomic Research Centre (BARC) for use as a reference dosimeter to measure the strength of high dose rate (HDR) (192)Ir brachytherapy sources. The air kerma calibration coefficient (N(K)) of this ionization chamber was estimated analytically using Burlin general cavity theory and by the Monte Carlo method. In the analytical method, calibration coefficients were calculated for each spectral line of an HDR (192)Ir source and the weighted mean was taken as N(K). In the Monte Carlo method, the geometry of the measurement setup and the physics-related input data of the HDR (192)Ir source and the surrounding material were simulated using the Monte Carlo N-particle code. The total photon energy fluence was used to arrive at the reference air kerma rate (RAKR) using mass energy absorption coefficients. The energy deposition rates were used to simulate the value of the charge rate in the ionization chamber and N(K) was determined. The Monte Carlo calculated N(K) agreed within 1.77% with that obtained using the analytical method. The experimentally determined RAKR of HDR (192)Ir sources, using this reference ionization chamber by applying the analytically estimated N(K), was found to be in agreement with the vendor-quoted RAKR within 1.43%.
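The weighted-mean step of the analytical method can be illustrated directly. The line energies, emission probabilities, and per-line coefficients below are hypothetical placeholders, not the paper's values:

```python
# Hypothetical line-by-line values for illustration only: each spectral line
# of the 192Ir source gets its own analytically computed calibration
# coefficient N_K(E); the reported N_K is the emission-probability-weighted
# mean over the lines.
lines = [
    # (energy in keV, emission probability, N_K(E) in arbitrary units)
    (296.0, 0.287, 1.021),
    (308.5, 0.298, 1.018),
    (316.5, 0.829, 1.015),
    (468.1, 0.478, 1.007),
]

total_weight = sum(p for _, p, _ in lines)
nk_weighted = sum(p * nk for _, p, nk in lines) / total_weight
print(round(nk_weighted, 4))  # 1.0144 for these made-up numbers
```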

  10. Sequential Monte Carlo methods for nonlinear discrete-time filtering

    CERN Document Server

    Bruno, Marcelo GS

    2013-01-01

    In these notes, we introduce particle filtering as a recursive importance sampling method that approximates the minimum-mean-square-error (MMSE) estimate of a sequence of hidden state vectors in scenarios where the joint probability distribution of the states and the observations is non-Gaussian and, therefore, closed-form analytical expressions for the MMSE estimate are generally unavailable. We begin the notes with a review of Bayesian approaches to static (i.e., time-invariant) parameter estimation. In the sequel, we describe the solution to the problem of sequential state estimation in line
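The recursive importance-sampling idea can be sketched as a bootstrap particle filter. The scalar state-space model below is a toy of our choosing, not an example from the notes:

```python
import math
import random

random.seed(1)

# Minimal bootstrap particle filter for a toy scalar model (our choice):
# x_t = 0.9 x_{t-1} + v_t, y_t = x_t + w_t, with v_t, w_t ~ N(0, 1).
# The weighted particle mean approximates the MMSE estimate E[x_t | y_1..t].

def particle_filter(ys, n=2000):
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in ys:
        # propagate through the state equation (proposal = prior)
        particles = [0.9 * x + random.gauss(0.0, 1.0) for x in particles]
        # weight each particle by the likelihood of the observation
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # multinomial resampling fights weight degeneracy
        particles = random.choices(particles, weights=weights, k=n)
    return estimates

# synthetic truth and observations
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    xs.append(x)
    ys.append(x + random.gauss(0.0, 1.0))

est = particle_filter(ys)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, xs)) / len(xs))
print(round(rmse, 2))  # should beat the raw observation noise (sigma = 1)
```

Resampling at every step is the simplest choice; practical filters often resample only when the effective sample size drops below a threshold.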

  11. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO

    International Nuclear Information System (INIS)

    The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)

  12. Application of Monte Carlo methods for dead time calculations for counting measurements; Anwendung von Monte-Carlo-Methoden zur Berechnung der Totzeitkorrektion fuer Zaehlmessungen

    Energy Technology Data Exchange (ETDEWEB)

    Henniger, Juergen; Jakobi, Christoph [Technische Univ. Dresden (Germany). Arbeitsgruppe Strahlungsphysik (ASP)

    2015-07-01

    From a mathematical point of view, Monte Carlo methods are the numerical solution of certain integrals and integral equations using a random experiment. There are several advantages compared to classical stepwise integration. For multi-dimensional problems, the computing time increases only moderately with increasing dimension. The only requirements on the integral kernel are that it be integrable over the considered integration area and that it admit an algorithmic representation. These are the important properties of Monte Carlo methods that allow their application in every scientific area. Besides that, Monte Carlo algorithms are often more intuitive than conventional numerical integration methods. The contribution demonstrates these facts using the example of dead time corrections for counting measurements.
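A dead-time problem solved "by random experiment", in the spirit of the abstract, can be sketched as follows. The detector model and all numbers are our toy choices, not the authors' setup:

```python
import random

random.seed(2)

# Toy dead-time Monte Carlo: Poisson arrivals at a true rate n hit a counter
# with non-paralyzable dead time tau (events during the dead time are lost and
# do not extend it). The measured rate should approach the classical
# correction m = n / (1 + n * tau).

def measured_rate(true_rate, tau, t_max):
    t, last, counted = 0.0, -1e9, 0
    while True:
        t += random.expovariate(true_rate)  # exponential inter-arrival times
        if t > t_max:
            break
        if t - last >= tau:                 # counter is live again
            counted += 1
            last = t
    return counted / t_max

n, tau = 1.0e5, 5.0e-6                      # 100 kcps true rate, 5 us dead time
m_mc = measured_rate(n, tau, t_max=5.0)
m_theory = n / (1.0 + n * tau)
print(round(m_mc), round(m_theory))         # both near 66667 counts per second
```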

  13. A scatter correction method for Tl-201 images: A Monte Carlo investigation

    Energy Technology Data Exchange (ETDEWEB)

    Hademenos, G.J.; King, M.A. (Univ. of Massachusetts Medical Center, Worcester, MA (United States). Dept. of Nuclear Medicine); Ljungberg, M. (Lund Univ., (Sweden). Dept. of Radiation Physics); Zubal, G.; Harrell, C.R. (Yale Univ. School of Medicine, New Haven, CT (United States). Dept. of Diagnostic Radiology)

    1993-08-01

    Results from the application of a modified dual photopeak window (DPW) scatter correction method to Monte Carlo simulated Tl-201 emission images are presented. In the Monte Carlo investigation, individual simulations were performed for six radiation emissions of Tl-201. For each emission, point sources of Tl-201 were imaged at various locations in a water-filled elliptical tub phantom using three energy windows: two 12% windows abutted at 72 keV and a third 10 keV window placed to the right of the photopeak window (95.001 keV - 105.000 keV). The third window was used to estimate the spilldown contribution from the Tl-201 gamma rays in each of the two photopeak windows. Using the corrected counts in these two windows, the DPW method was applied to each point source image to estimate the scatter distribution. For point source images in both homogeneous and non-homogeneous attenuating media, the application of this modified version of DPW resulted in an approximately six-fold reduction in the scatter fraction and an excellent agreement of the shape of the tails between the estimated scatter distribution and the Monte Carlo-simulated truth. This method was also applied to two views of an extended cardiac distribution within an anthropomorphic phantom, again resulting in at least a six-fold improvement between the scatter estimate and the Monte Carlo-simulated true scatter.

  14. MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Gabriela Ižaríková

    2015-12-01

    Full Text Available The article is an example of using the simulation software @RISK, designed for simulation in a Microsoft Excel spreadsheet, and demonstrates its usage as a universal method of solving problems. Simulation is experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by means of mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which the model transforms into outputs (for instance the mean value of profit). In a simulation experiment, the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong among the quantitative tools which can be used as a support for decision making.
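The controlled-input / random-input scheme described above can be sketched in plain Python; all figures are hypothetical. Investment cost is a controlled input, demand is a stochastic input, and profit is the model output whose distribution the simulation estimates:

```python
import random
import statistics

random.seed(3)

# Hypothetical numbers: fixed margin per unit and investment (controlled
# inputs); normally distributed demand (stochastic input); profit (output).
unit_margin, investment = 5.0, 10_000.0

def one_trial():
    demand = max(random.normalvariate(3000.0, 500.0), 0.0)  # stochastic input
    return demand * unit_margin - investment                # model output

profits = [one_trial() for _ in range(20_000)]
print(round(statistics.mean(profits)))   # near 3000 * 5 - 10000 = 5000
print(round(statistics.stdev(profits)))  # near 500 * 5 = 2500
```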

  15. A Method for Estimating Annual Energy Production Using Monte Carlo Wind Speed Simulation

    Directory of Open Access Journals (Sweden)

    Birgir Hrafnkelsson

    2016-04-01

    Full Text Available A novel Monte Carlo (MC) approach is proposed for the simulation of wind speed samples to assess the wind energy production potential of a site. The Monte Carlo approach is based on historical wind speed data and preserves the effect of autocorrelation and seasonality in wind speed observations. No distributional assumptions are made, and this approach is relatively simple in comparison to simulation methods that aim at including the autocorrelation and seasonal effects. Annual energy production (AEP) is simulated by transforming the simulated wind speed values via the power curve of the wind turbine at the site. The proposed Monte Carlo approach is generic and is applicable to all sites provided that a sufficient amount of wind speed data and information on the power curve are available. The simulated AEP values based on the Monte Carlo approach are compared both to actual AEP and to simulated AEP values based on a modified Weibull approach for wind speed simulation using data from the Burfell site in Iceland. The comparison reveals that the simulated AEP values based on the proposed Monte Carlo approach have a distribution that is in close agreement with actual AEP from two test wind turbines at the Burfell site, while the simulated AEP of the Weibull approach is such that the P50 and the scale are substantially lower and the P90 is higher. Thus, the Weibull approach yields AEP that is not in line with the actual variability in AEP, while the Monte Carlo approach gives a realistic estimate of the distribution of AEP.
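The resample-then-transform pipeline can be sketched as follows. Everything below is synthetic — the "historical" record, the block length, and the power curve are our toy choices, not the Burfell data or turbines:

```python
import math
import random

random.seed(4)

# Resampling whole day-long blocks of wind-speed observations preserves
# short-range autocorrelation; restricting draws to the matching season would
# preserve seasonality as well (omitted here for brevity).

hours = 24 * 365
wind, v = [], 8.0
for t in range(hours):
    season = 2.0 * math.sin(2.0 * math.pi * t / hours)   # annual cycle
    v = 0.9 * v + 0.1 * (8.0 + season) + random.gauss(0.0, 0.5)
    wind.append(max(v, 0.0))

def power_kw(v):
    # toy turbine: cut-in 3 m/s, rated 2000 kW at 12 m/s, cut-out 25 m/s
    if v < 3.0 or v > 25.0:
        return 0.0
    return 2000.0 if v >= 12.0 else 2000.0 * ((v - 3.0) / 9.0) ** 3

def simulate_aep(wind, block=24):
    total = 0.0
    for _ in range(365):                                  # one simulated year
        start = random.randrange(len(wind) - block)       # draw a day block
        total += sum(power_kw(v) for v in wind[start:start + block])
    return total / 1000.0                                 # MWh per year

aeps = sorted(simulate_aep(wind) for _ in range(100))
p50, p90 = aeps[50], aeps[10]   # P90 = production exceeded in 90% of years
print(p90 <= p50)               # True by construction of the sorted list
```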

  16. Modeling radiation from the atmosphere of Io with Monte Carlo methods

    Science.gov (United States)

    Gratiy, Sergey

    Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io and disparate reports on the extent of its spatial distribution and the absolute column abundance invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. To validate a global numerical model of Io's atmosphere against astronomical observations requires a 3-D spherical-shell radiative transfer (RT) code to simulate disk-resolved images and disk-integrated spectra from the ultraviolet to the infrared spectral region. In addition, comparison of simulated and astronomical observations provides important information to improve existing atmospheric models. In order to achieve this goal, a new 3-D spherical-shell forward/backward photon Monte Carlo code capable of simulating radiation from absorbing/emitting and scattering atmospheres with an underlying emitting and reflecting surface was developed. A new implementation of calculating atmospheric brightness in scattered sunlight is presented utilizing the notion of an "effective emission source" function. This allows for the accumulation of the scattered contribution along the entire path of a ray and the calculation of the atmospheric radiation when both scattered sunlight and thermal emission contribute to the observed radiation---which was not possible in previous models. A "polychromatic" algorithm was developed for application with the backward Monte Carlo method and was implemented in the code. It allows one to calculate radiative intensity at several wavelengths simultaneously, even when the scattering properties of the atmosphere are a function of wavelength. The application of the "polychromatic" method improves the computational efficiency because it reduces the number of photon bundles traced during the simulation. A 3-D gas dynamics model of Io's atmosphere, including both sublimation and volcanic

  17. A recursive Monte Carlo method for estimating importance functions in deep penetration problems

    International Nuclear Information System (INIS)

    A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems.

  18. Pedestrian counting with grid-based binary sensors based on Monte Carlo method

    OpenAIRE

    Fujii, Shuto; Taniguchi, Yoshiaki; Hasegawa, Go; Matsuoka, Morito

    2014-01-01

    Abstract In this paper, we propose a method for estimating the number of pedestrians walking in opposite directions, as in cases of a shopping street or a sidewalk in a downtown area. The proposed method utilizes a compound-eye sensor that is constructed by placing two binary sensors for the pedestrians’ movement direction and multiple binary sensors for the vertical direction of the pedestrians’ movement direction. A number of Monte Carlo simulations about the movement of pedestrians are con...

  19. Quantile Mechanics II: Changes of Variables in Monte Carlo methods and GPU-Optimized Normal Quantiles

    OpenAIRE

    Shaw, W. T.; Luu, T.; Brickman, N.

    2009-01-01

    With financial modelling requiring a better understanding of model risk, it is helpful to be able to vary assumptions about underlying probability distributions in an efficient manner, preferably without the noise induced by resampling distributions managed by Monte Carlo methods. This paper presents differential equations and solution methods for the functions of the form Q(x) = F^{-1}(G(x)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of Mont...
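With concrete choices for F and G (ours, not the paper's), the recycling map can be written down directly: take G = Φ, the standard normal CDF, and F the Exp(λ) CDF, so Q(x) = F^{-1}(Φ(x)) = -ln(1 - Φ(x))/λ turns an existing normal sample into an exponential one without drawing fresh uniforms:

```python
import math
import random

random.seed(5)

lam = 2.0

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Q(x):
    """Recycle a N(0,1) sample into an Exp(lam) sample: F^{-1}(G(x))."""
    return -math.log(1.0 - phi(x)) / lam

normals = [random.gauss(0.0, 1.0) for _ in range(100_000)]
exps = [Q(x) for x in normals]
print(round(sum(exps) / len(exps), 2))  # should be near 1/lam = 0.5
```

The same normal sample set can be pushed through many such Q maps, which is the noise-free "vary the distributional assumption" workflow the abstract describes.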

  20. Construction of the Jacobian matrix for fluorescence diffuse optical tomography using a perturbation Monte Carlo method

    Science.gov (United States)

    Zhang, Xiaofeng

    2012-03-01

    Image formation in fluorescence diffuse optical tomography is critically dependent on construction of the Jacobian matrix. For clinical and preclinical applications, because of the highly heterogeneous characteristics of the medium, Monte Carlo methods are frequently adopted to construct the Jacobian. The conventional adjoint Monte Carlo method typically computes the Jacobian by multiplying the photon density fields radiated from the source at the excitation wavelength and from the detector at the emission wavelength. Nonetheless, this approach assumes that the source and the detector in Green's function are reciprocal, which is invalid in general. This assumption is particularly questionable in small animal imaging, where the mean free path length of photons is typically only one order of magnitude smaller than the representative dimension of the medium. We propose a new method that does not rely on the reciprocity of the source and the detector by tracing photon propagation entirely from the source to the detector. This method relies on the perturbation Monte Carlo theory to account for the differences in optical properties of the medium at the excitation and the emission wavelengths. Compared to the adjoint methods, the proposed method is more valid in reflecting the physical process of photon transport in diffusive media and is more efficient in constructing the Jacobian matrix for densely sampled configurations.

  1. Study of dipole moments of LiSr and KRb molecules by quantum Monte Carlo methods

    CERN Document Server

    Guo, Shi; Mitas, Lubos; Reynolds, Peter J

    2013-01-01

    Heteronuclear dimers are of significant interest to experiments seeking to exploit ultracold polar molecules in a number of novel ways including precision measurement, quantum computing, and quantum simulation. We calculate highly accurate Born-Oppenheimer total energies and electric dipole moments as a function of internuclear separation for two such dimers, LiSr and KRb. We apply fully-correlated, high-accuracy quantum Monte Carlo methods for evaluating these molecular properties in a many-body framework. We use small-core effective potentials combined with multi-reference Slater-Jastrow trial wave functions to provide accurate nodes for the fixed-node diffusion Monte Carlo method. For reference and comparison, we calculate the same properties with Hartree-Fock and with restricted Configuration Interaction methods, and carefully assess the impact of the recovered many-body correlations on the calculated quantities.

  2. Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle

    Science.gov (United States)

    Arai, R.; Tamura, R.; Fukuda, H.; Li, J.; Saito, A. T.; Kaji, S.; Nakagome, H.; Numazawa, T.

    2015-12-01

    In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures, and a detailed understanding of the physical properties of the materials is crucial to optimize the material selection and the layered structure. In the present study, we discuss methods for estimating the change in physical properties, particularly the Curie temperature, when some of the Gd atoms are replaced by non-magnetic elements, taking Gd, a typical magnetocaloric material, as the base ferromagnet. For this purpose, alongside calculations using the S = 7/2 Ising model and the Monte Carlo method, we made specific heat and magnetization measurements of Gd-R alloys (R = Y, Zr) to compare experimental and calculated values. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
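The kind of Ising Monte Carlo used above can be illustrated on a much smaller model: a 2-D nearest-neighbour lattice with S = ±1 rather than the paper's S = 7/2. A plain Metropolis loop already shows the ordered phase well below the transition (T_c ≈ 2.27 J for this lattice) and the disordered phase above it:

```python
import math
import random

random.seed(6)

L = 16  # lattice side; periodic boundaries

def mean_abs_magnetization(T, sweeps=400):
    spins = [[1] * L for _ in range(L)]
    acc, n_meas = 0.0, 0
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb          # cost of flipping (J = 1)
            if dE <= 0.0 or random.random() < math.exp(-dE / T):
                spins[i][j] *= -1                # Metropolis acceptance
        if sweep >= sweeps // 2:                 # discard burn-in
            acc += abs(sum(map(sum, spins))) / (L * L)
            n_meas += 1
    return acc / n_meas

cold = mean_abs_magnetization(1.5)   # below T_c: ordered, |m| near 1
hot = mean_abs_magnetization(4.0)    # above T_c: disordered, |m| small
print(cold > 0.9, hot < 0.5)
```

Locating where |m| collapses as T is scanned gives a Curie-temperature estimate, which is the quantity tracked in the study as composition changes.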

  3. Sequential Monte Carlo Methods for Joint Detection and Tracking of Multiaspect Targets in Infrared Radar Images

    Directory of Open Access Journals (Sweden)

    Marcelo G. S. Bruno

    2008-01-01

    Full Text Available We present in this paper a sequential Monte Carlo methodology for joint detection and tracking of a multiaspect target in image sequences. Unlike the traditional contact/association approach found in the literature, the proposed methodology enables integrated, multiframe target detection and tracking incorporating the statistical models for target aspect, target motion, and background clutter. Two implementations of the proposed algorithm are discussed using, respectively, a resample-move (RS) particle filter and an auxiliary particle filter (APF). Our simulation results suggest that the APF configuration slightly outperforms the RS filter in scenarios of stealthy targets.

  5. Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method

    Science.gov (United States)

    Basire, M.; Soudan, J.-M.; Angelié, C.

    2014-09-01

    The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g_p(E_p), in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called "corrected EAM" (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at, and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S_ij are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T_m is plotted in terms of the cluster atom number N_at. The standard N_at^{-1/3} linear dependence (Pawlow law) is observed for N_at > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For N_at potential clusters studied in Paper I.
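The flat-histogram idea can be illustrated with a Wang-Landau toy — a different flat-histogram algorithm than the paper's σ-mapping, but sharing the goal of estimating a density of states. For N independent two-level units, the exact density of states is binomial, so the estimate can be checked:

```python
import math
import random

random.seed(7)

# Wang-Landau toy: estimate ln g(E) for N independent two-level units, where
# E counts excited units and the exact answer is C(N, E). Moves are accepted
# with probability g(E)/g(E'), which drives the walk toward a flat histogram
# in E; real implementations also monitor histogram flatness, omitted here.

N = 12
ln_g = [0.0] * (N + 1)   # running estimate of ln g(E)
state = [0] * N
E = 0
ln_f = 1.0               # modification factor, halved each stage

for stage in range(18):
    for _ in range(20_000):
        i = random.randrange(N)
        E_new = E + (1 - 2 * state[i])       # flipping unit i changes E by +/-1
        if random.random() < math.exp(min(0.0, ln_g[E] - ln_g[E_new])):
            state[i] = 1 - state[i]
            E = E_new
        ln_g[E] += ln_f                      # penalize the visited energy
    ln_f /= 2.0                              # refine the estimate

est = [x - ln_g[0] for x in ln_g]            # normalize so ln g(0) = 0
exact = [math.log(math.comb(N, k)) for k in range(N + 1)]
err = max(abs(a - b) for a, b in zip(est, exact))
print(round(err, 3))  # small: the estimate tracks ln C(N, E)
```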

  6. Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES

    Science.gov (United States)

    Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean

    2009-06-01

    Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or only roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated middleware, CORBA, is investigated. Two formulations of the Monte Carlo method (Forward Method and Emission Reciprocity Method) employed to solve the RTE have been compared in a one-dimensional flame test case using three-dimensional calculation grids with absorbing and emitting media in order to validate the Monte Carlo radiative solver and to choose the most efficient model for coupling. Then the results obtained using two different RTE solvers (Reciprocity Monte Carlo method and Discrete Ordinate Method) applied to a three-dimensional flame holder set-up with a correlated-k distribution model describing the real gas medium spectral radiative properties are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).

  7. Research of Monte Carlo method used in simulation of different maintenance processes

    International Nuclear Information System (INIS)

    The paper introduces two kinds of Monte Carlo methods used in equipment life process simulation under the minimal-maintenance condition: the method of producing lifetime intervals and the method of time-scale conversion. The paper also analyzes the characteristics and the scope of application of the two methods. By using the concept of a service age reduction factor, the model of the equipment's life process under the incomplete-maintenance condition is established, and a life process simulation method applicable to this situation is developed. (authors)

  8. The application of Monte Carlo method to electron and photon beams transport; Zastosowanie metody Monte Carlo do analizy transportu elektronow i fotonow

    Energy Technology Data Exchange (ETDEWEB)

    Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)

    1994-12-31

    The application of a Monte Carlo method to study the transport of electron and photon beams in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculations. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculations for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.

  9. Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe

    Science.gov (United States)

    Martin, Nicolas

    This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computational time. In addition, this model preserves the quality of the physical laws present in the ENDF format. Due to its cheap computational cost, the multigroup Monte Carlo way is usually at the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes. This is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables on the whole energy range permits self-shielding effects to be taken into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups. The CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of the probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm. (3) The derivation of a model for taking into account anisotropic
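Ingredient (2) above, the delta-tracking rejection technique for the spatial variable, can be sketched on a toy 1-D problem (the cross section and geometry below are invented for illustration, and the medium is purely absorbing): flight distances are sampled from a majorant cross section, and "virtual" collisions are rejected, so no surface tracking of the heterogeneous geometry is needed.

```python
import math
import random

random.seed(8)

def sigma(x):
    # hypothetical spatially varying total cross section on [0, 4]
    return 0.5 + 0.4 * math.sin(x) ** 2

sigma_maj = 0.9     # majorant: sigma(x) <= sigma_maj everywhere
slab = 4.0

def transmitted():
    x = 0.0
    while True:
        x += random.expovariate(sigma_maj)      # flight to tentative collision
        if x >= slab:
            return 1                            # escaped: transmitted
        if random.random() < sigma(x) / sigma_maj:
            return 0                            # real collision: absorbed
        # otherwise a virtual collision: keep flying

n = 100_000
t_mc = sum(transmitted() for _ in range(n)) / n

# exact transmission exp(-tau) with
# tau = integral_0^4 (0.5 + 0.4 sin^2 x) dx = 0.7 * 4 - 0.1 * sin(8)
tau = 0.7 * slab - 0.1 * math.sin(2.0 * slab)
print(round(t_mc, 3), round(math.exp(-tau), 3))  # close to each other
```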

  10. Polarization imaging of multiply-scattered radiation based on integral-vector Monte Carlo method

    International Nuclear Information System (INIS)

    A new integral-vector Monte Carlo method (IVMCM) is developed to analyze the transfer of polarized radiation in 3D multiple scattering particle-laden media. The method is based on a 'successive order of scattering series' expression of the integral formulation of the vector radiative transfer equation (VRTE) for application of efficient statistical tools to improve convergence of Monte Carlo calculations of integrals. After validation against reference results in plane-parallel layer backscattering configurations, the model is applied to a cubic container filled with uniformly distributed monodispersed particles and irradiated by a monochromatic narrow collimated beam. 2D lateral images of effective Mueller matrix elements are calculated in the case of spherical and fractal aggregate particles. Detailed analysis of multiple scattering regimes, which are very similar for unpolarized radiation transfer, allows identifying the sensitivity of polarization imaging to size and morphology.

  11. A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport

    Science.gov (United States)

    Tautz, R. C.

    2016-05-01

    A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.

  12. Monte Carlo method of macroscopic modulation of small-angle charged particle reflection from solid surfaces

    CERN Document Server

    Bratchenko, M I

    2001-01-01

    A novel method of Monte Carlo simulation of small-angle reflection of charged particles from solid surfaces has been developed. Instead of atomic-scale simulation of particle-surface collisions, the method treats the reflection macroscopically as a 'condensed history' event. Statistical parameters of reflection are sampled from theoretical distributions over energy and angle. An efficient sampling algorithm based on a combination of the inverse probability distribution function method and the rejection method has been proposed and tested. As an example of application, the results of statistical modeling of particle flux enhancement near the bottom of a vertical Wehner cone are presented and compared with a simple geometrical model of specular reflection.
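The combined inversion-plus-rejection scheme named above can be sketched on a toy target (ours, not the paper's reflection distributions): draw from f(x) = x e^{-x} (a Gamma(2,1) density) by inverse-transform sampling of an Exp(1/2) proposal followed by a rejection step.

```python
import math
import random

random.seed(9)

def sample():
    while True:
        u = random.random()
        x = -2.0 * math.log(1.0 - u)   # inverse CDF of the Exp(rate 1/2) proposal
        # accept with probability f(x) / (M q(x)); M = 4/e makes the
        # ratio x * exp(-x/2) * e/2 peak at exactly 1 (at x = 2)
        if random.random() < x * math.exp(-0.5 * x) * math.e / 2.0:
            return x

xs = [sample() for _ in range(100_000)]
mean = sum(xs) / len(xs)
print(round(mean, 2))  # Gamma(2,1) has mean 2
```

The proposal handles the easy exponential factor by inversion; the rejection step corrects for the remaining bounded factor — the same division of labor the abstract describes for the reflection distributions.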

  13. Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA

    International Nuclear Information System (INIS)

    Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by secondary radiation of other elements so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments allows to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)

  14. The massive Schwinger model on the lattice studied via a local Hamiltonian Monte-Carlo method

    International Nuclear Information System (INIS)

    A local Hamiltonian Monte-Carlo method is used to study the massive Schwinger model. A non-vanishing quark condensate is found and the dependence of the condensate and the string tension on the background field is calculated. These results reproduce well the expected continuum results. We study also the first-order phase transition which separates the weak and strong coupling regimes and find evidence for the behaviour conjectured by Coleman. (author)

  15. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
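    The multilevel idea — many cheap samples on coarse levels, few expensive ones on fine levels, combined through a telescoping sum — can be illustrated with a toy estimator. The quantized integrand and the per-level sample counts below are illustrative assumptions, not the GMsFEM setting of the paper:

```python
import random

def quantized(x, level):
    # "Resolution level" stand-in: f(x) = x^2 rounded down to a 2^-level grid.
    scale = 2 ** level
    return int(x * x * scale) / scale

def mlmc_estimate(max_level, n_per_level, rng):
    # Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    # with independent samples drawn on every level.
    total = 0.0
    for level in range(max_level + 1):
        n = n_per_level[level]
        acc = 0.0
        for _ in range(n):
            x = rng.random()
            coarse = quantized(x, level - 1) if level > 0 else 0.0
            acc += quantized(x, level) - coarse
        total += acc / n
    return total
```

    The level-l correction P_l - P_{l-1} has variance that shrinks like 4^-l, so ever fewer samples are needed on fine levels while the estimator still converges to E[x^2] = 1/3.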

  16. A ''local'' exponential transform method for global variance reduction in Monte Carlo transport problems

    International Nuclear Information System (INIS)

    Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend not to be as effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, ''Local'' Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine ''local'' biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.
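    The splitting/Russian-roulette machinery that such biasing parameters drive can be sketched generically. The window bounds and the (weight, state) particle representation below are illustrative assumptions, not the LET formulation itself:

```python
import random

def apply_weight_window(particles, w_low, w_high, rng):
    """Split heavy particles and roulette light ones so that the expected
    total weight is preserved. particles is a list of (weight, state)."""
    out = []
    for w, state in particles:
        if w > w_high:
            n = int(w / w_high) + 1          # split into n lighter copies
            out.extend((w / n, state) for _ in range(n))
        elif w < w_low:
            if rng.random() < w / w_low:     # survive roulette ...
                out.append((w_low, state))   # ... with boosted weight
            # otherwise the particle is killed
        else:
            out.append((w, state))
    return out
```

    The key invariant is that splitting conserves weight exactly and roulette conserves it in expectation, so the tally remains unbiased.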

  17. Implementation of SMED method in wood processing

    Directory of Open Access Journals (Sweden)

    Vukićević Milan R.

    2007-01-01

    Full Text Available The solution of problems in production is mainly tackled by management on the basis of the hardware component, i.e. by the introduction of work centres of the latest generation. In this way, it ensures consistency of quality, reduced consumption of energy, humanization of work, etc. However, the interaction between the technical-technological and organizational-economic aspects of production is neglected. This means that new-generation equipment requires a modern approach to the planning, organization, and management of production, as well as to the economics of production. Consequently, it is very important to ensure the implementation of modern organizational methods in wood processing. This paper deals with the implementation of the SMED method (SMED - Single-Minute Exchange of Die with the aim of rationalizing set-up-end-up operations. It is known that under conditions of discontinuous production, set-up-end-up time is a significant limiting factor in increasing the flexibility of production systems.

  18. A Markov chain Monte Carlo method family in incomplete data analysis

    Directory of Open Access Journals (Sweden)

    Vasić Vladimir V.

    2003-01-01

    Full Text Available A Markov chain Monte Carlo method family is a collection of techniques for pseudorandom draws from a probability distribution function. In recent years, these techniques have been the subject of intense interest among statisticians. Roughly speaking, the essence of a Markov chain Monte Carlo method family is generating one or more values of a random variable Z, which is usually multidimensional. Let P(Z) = f(Z) denote the density function of a random variable Z, which we will refer to as the target distribution. Instead of sampling directly from the distribution f, we generate a sequence [Z(1), Z(2), ..., Z(t), ...], in which each value depends on the previous one and whose stationary distribution is the target distribution. For sufficiently large t, Z(t) is approximately a random sample from the distribution f. A Markov chain Monte Carlo method family is useful when direct sampling is difficult, but sampling of each successive value is not.
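    The simplest member of this family is the random-walk Metropolis algorithm: propose a move near the current state and accept it with probability min(1, f(y)/f(x)). The standard-normal target below is an illustrative choice:

```python
import math
import random

def metropolis_chain(log_density, x0, step, n_keep, n_burn, rng):
    # Random-walk Metropolis: propose y ~ Uniform(x - step, x + step),
    # accept with probability min(1, f(y)/f(x)), computed in log space.
    x = x0
    chain = []
    for i in range(n_keep + n_burn):
        y = x + rng.uniform(-step, step)
        if math.log(rng.random()) < log_density(y) - log_density(x):
            x = y
        if i >= n_burn:
            chain.append(x)
    return chain
```

    For the target N(0, 1), log f(x) = -x^2/2 up to a constant; after burn-in the chain's sample mean and variance approach 0 and 1.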

  19. Review of improved Monte Carlo methods in uncertainty-based design optimization for aerospace vehicles

    Science.gov (United States)

    Hu, Xingzhi; Chen, Xiaoqian; Parks, Geoffrey T.; Yao, Wen

    2016-10-01

    Ever-increasing demands of uncertainty-based design, analysis, and optimization in aerospace vehicles motivate the development of Monte Carlo methods with wide adaptability and high accuracy. This paper presents a comprehensive review of typical improved Monte Carlo methods and summarizes their characteristics to aid uncertainty-based multidisciplinary design optimization (UMDO). Among them, Bayesian inference tackles problems where prior information, such as measurement data, is available. Importance sampling (IS) addresses inconvenient sampling and difficult propagation by incorporating an intermediate importance distribution or a sequence of distributions. Optimized Latin hypercube sampling (OLHS) is a stratified sampling approach that achieves better space-filling and non-collapsing characteristics. Meta-modeling approximation based on Monte Carlo saves computational cost by using cheap meta-models for the output response. All the reviewed methods are illustrated by corresponding aerospace applications, which are compared to show their strengths and usefulness in UMDO, thus providing a beneficial reference for future theoretical and applied research.
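    The importance-sampling idea reviewed above can be shown on a rare-event toy problem. The tail probability, the shifted proposal, and the sample size below are illustrative assumptions:

```python
import math
import random

def normal_pdf(x, mu=0.0):
    # Standard-deviation-1 normal density centred at mu.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def tail_probability_is(n, rng):
    # Estimate P(X > 3) for X ~ N(0,1) by sampling from the shifted
    # proposal N(3,1) and reweighting with the likelihood ratio p(x)/q(x).
    total = 0.0
    for _ in range(n):
        x = rng.gauss(3.0, 1.0)
        if x > 3.0:
            total += normal_pdf(x) / normal_pdf(x, mu=3.0)
    return total / n
```

    The true value is about 1.35e-3; plain Monte Carlo would need millions of samples for a comparable relative error, whereas the shifted proposal places half of its draws in the rare region.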

  20. Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods

    Science.gov (United States)

    Sohn, Ilyoup

    approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. The QSS model is presented to predict the electronic state populations of radiating gas species taking

  1. Investigation of neutral particle leakages in lacunary media to speed up Monte Carlo methods

    International Nuclear Information System (INIS)

    This research aims at optimizing the calculation methods used for long-duration penetration problems in radiation protection when vacuum media are involved. After recalling the main notions of transport theory, the numerical methods used to solve the transport equation, the fundamentals of the Monte Carlo method, and the problems related to long-duration penetration, the report focuses on the problem of leaks through vacuum. It describes the bias introduced in the TRIPOLI code and reports the search for an optimal bias in cylindrical configurations using the JANUS code. It reports the application to a simple straight tube.

  2. Mass attenuation coefficient calculations of different detector crystals by means of FLUKA Monte Carlo method

    Science.gov (United States)

    Ebru Ermis, Elif; Celiktas, Cuneyt

    2015-07-01

    Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close agreement with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
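    Once an attenuation coefficient is known, a narrow-beam transmission experiment of the kind such codes simulate can itself be mimicked by sampling exponential free paths. The coefficient and slab thickness below are illustrative numbers, not FLUKA results:

```python
import math
import random

def transmitted_fraction(mu, thickness, n, rng):
    # Sample photon free paths s = -ln(u)/mu and count photons whose first
    # interaction lies beyond the slab (narrow-beam geometry, no buildup).
    passed = sum(1 for _ in range(n)
                 if -math.log(rng.random()) / mu > thickness)
    return passed / n
```

    The estimate converges to the analytic Beer-Lambert value exp(-mu * thickness).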

  3. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Science.gov (United States)

    Chen, Chaobin; Huang, Qunying; Wu, Yican

    2005-04-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.

  4. Simulation model based on Monte Carlo method for traffic assignment in local area road network

    Institute of Scientific and Technical Information of China (English)

    Yuchuan DU; Yuanjing GENG; Lijun SUN

    2009-01-01

    For a local area road network, the available traffic data are the flow volumes at the key intersections, not the complete OD matrix. Considering the circumstances and the data availability of a local area road network, a new model for traffic assignment based on Monte Carlo simulation of intersection turning movements is provided in this paper. Because of its good stability over time, the turning ratio is adopted as the key parameter of this model. The formulation for local area road network assignment problems is proposed on the assumption of random turning behavior. The traffic assignment model based on the Monte Carlo method has been applied in traffic analysis for an actual urban road network. Comparison of the surveyed traffic flow data with the flows determined by the model verifies the applicability and validity of the proposed methodology.
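    Random turning behavior driven by turning ratios can be sketched as a per-vehicle walk through the network. The node names, the toy network, and the vehicle count below are hypothetical illustrations, not the paper's case study:

```python
import random

def choose_turn(branches, rng):
    # branches: list of (next_node, turning_ratio); ratios sum to 1.
    u, cum = rng.random(), 0.0
    for nxt, ratio in branches:
        cum += ratio
        if u < cum:
            return nxt
    return branches[-1][0]   # guard against floating-point rounding

def assign_volumes(network, entries, n_vehicles, rng):
    # Monte Carlo assignment: each vehicle enters at a node and turns at
    # every intersection according to the turning ratios; link volumes
    # are accumulated as (from_node, to_node) counts.
    volumes = {}
    for _ in range(n_vehicles):
        node = entries[rng.randrange(len(entries))]
        while node in network:
            nxt = choose_turn(network[node], rng)
            volumes[(node, nxt)] = volumes.get((node, nxt), 0) + 1
            node = nxt
    return volumes
```

    Link volumes then converge to the entry flow multiplied by the product of turning ratios along each path.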

  5. Electron density of states of Fe-based superconductors: Quantum trajectory Monte Carlo method

    Science.gov (United States)

    Kashurnikov, V. A.; Krasavin, A. V.; Zhumagulov, Ya. V.

    2016-03-01

    The spectral and total electron densities of states in two-dimensional FeAs clusters, which simulate iron-based superconductors, have been calculated using the generalized quantum Monte Carlo algorithm within the full two-orbital model. Spectra have been reconstructed by solving the integral equation relating the Matsubara Green's function and spectral density by the method combining the gradient descent and Monte Carlo algorithms. The calculations have been performed for clusters with dimensions up to 10 × 10 FeAs cells. The profiles of the Fermi surface for the entire Brillouin zone have been presented in the quasiparticle approximation. Data for the total density of states near the Fermi level have been obtained. The effect of the interaction parameter, size of the cluster, and temperature on the spectrum of excitations has been studied.

  6. Determination of cascade summing correction for HPGe spectrometers by the Monte Carlo method

    CERN Document Server

    Takeda, M N

    2001-01-01

    The present work describes the methodology developed for calculating the cascade summing correction to be applied to experimental efficiencies obtained with HPGe spectrometers. The detection efficiencies were calculated numerically by the Monte Carlo method for point sources. Another Monte Carlo algorithm was developed to follow the path in the decay scheme from the initial state at the precursor radionuclide decay level down to the ground state of the daughter radionuclide. Each step in the decay scheme is selected by random numbers, taking into account the transition probabilities and internal conversion coefficients. The selected transitions are tagged according to the type of interaction that occurred, giving rise to total or partial energy absorption events inside the detector crystal. Once the final state has been reached, the selected transitions are examined to identify each pair of transitions that occurred simultaneously. With this procedure it was possible to calculate...
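    The random walk through the decay scheme can be sketched as follows. The scheme representation and the level names are illustrative assumptions; the two-step cascade in the usage note mimics the well-known 60Co decay (1173 keV and 1332 keV gammas in coincidence):

```python
import random

def sample_cascade(scheme, start, rng):
    """Walk a decay scheme from `start` down to the ground state.

    scheme maps a level name to its branches (probability,
    gamma_energy_keV, next_level); ground states are absent from scheme.
    Returns the gamma energies emitted in one sampled cascade."""
    gammas = []
    level = start
    while level in scheme:
        u, cum = rng.random(), 0.0
        for prob, e_gamma, nxt in scheme[level]:
            cum += prob
            if u < cum:
                break
        # falls back to the last branch on floating-point rounding
        gammas.append(e_gamma)
        level = nxt
    return gammas
```

    Tallying which sampled gammas deposit energy in the same detector event is what yields the cascade summing correction.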

  7. Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    Chen Chaobin; Huang Qunying; Wu Yican

    2005-01-01

    A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.

  8. Path-integral Monte Carlo method for Rényi entanglement entropies.

    Science.gov (United States)

    Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A

    2014-07-01

    We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.

  9. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Quantitative scintigraphic images, obtained with NaI(Tl) scintillation cameras, are limited by photon attenuation and by the contribution of scattered photons. A Monte Carlo program was developed to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulating parameters of scintillation camera systems, stationary as well as SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs, and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)
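    The dual-window method compared in this study reduces to simple counts arithmetic: scatter in the photo-peak window is approximated from a lower scatter window. The multiplier k = 0.5 is the conventionally used value, and the counts below are illustrative:

```python
def dual_window_correct(photopeak_counts, scatter_window_counts, k=0.5):
    # Dual-energy-window estimate: scatter in the photopeak window is
    # approximated as k times the counts in a lower scatter window,
    # then subtracted (clamped at zero to avoid negative counts).
    scatter_estimate = k * scatter_window_counts
    corrected = max(photopeak_counts - scatter_estimate, 0.0)
    return corrected, scatter_estimate
```

    The study's point is precisely that a fixed k ignores the depth dependence of scatter, which the SLSF approach models explicitly.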

  10. On the Calculation of Reactor Time Constants Using the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Leppaenen, Jaakko [VTT Technical Research Centre of Finland, P.O. Box 1000, FI-02044 VTT (Finland)

    2008-07-01

    Full-core reactor dynamics calculation involves the coupled modelling of thermal hydraulics and the time-dependent behaviour of core neutronics. The reactor time constants include prompt neutron lifetimes, neutron reproduction times, effective delayed neutron fractions and the corresponding decay constants, typically divided into six or eight precursor groups. The calculation of these parameters is traditionally carried out using deterministic lattice transport codes, which also produce the homogenised few-group constants needed for resolving the spatial dependence of neutron flux. In recent years, there has been a growing interest in the production of simulator input parameters using the stochastic Monte Carlo method, which has several advantages over deterministic transport calculation. This paper reviews the methodology used for the calculation of reactor time constants. The calculation techniques are put to practice using two codes, the PSG continuous-energy Monte Carlo reactor physics code and MORA, a new full-core Monte Carlo neutron transport code entirely based on homogenisation. Both codes are being developed at the VTT Technical Research Centre of Finland. The results are compared to other codes and experimental reference data in the CROCUS reactor kinetics benchmark calculation. (author)

  11. Numerical simulation of the blast impact problem using the Direct Simulation Monte Carlo (DSMC) method

    Science.gov (United States)

    Sharma, Anupam; Long, Lyle N.

    2004-10-01

    A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to modeling the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shock-tube problem and against experiments on the interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses a domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.

  12. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on the generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.

  13. Comparing Subspace Methods for Closed Loop Subspace System Identification by Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    2009-10-01

    Full Text Available A novel, promising bootstrap subspace system identification algorithm for both open and closed loop systems is presented. An outline of the SSARX algorithm by Jansson (2003) is given and a modified SSARX algorithm is presented. Some methods presented in the literature which are consistent for closed loop subspace system identification are discussed and compared to a recently published subspace algorithm which works for both open and closed loop data, i.e., the DSR_e algorithm, as well as to the bootstrap method. Experimental comparisons are performed by Monte Carlo simulations.

  14. Estimating Super Heavy Element Event Random Probabilities Using Monte Carlo Methods

    Science.gov (United States)

    Stoyer, Mark; Henderson, Roger; Kenneally, Jacqueline; Moody, Kenton; Nelson, Sarah; Shaughnessy, Dawn; Wilk, Philip

    2009-10-01

    Because superheavy element (SHE) experiments involve very low event rates and low statistics, estimating the probability that a given event sequence is due to random events is extremely important in judging the validity of the data. A Monte Carlo method developed at LLNL [1] is used on recent SHE experimental data to calculate random event probabilities. Current SHE experimental activities in collaboration with scientists at Dubna, Russia will be discussed. [4pt] [1] N.J. Stoyer, et al., Nucl. Instrum. Methods Phys. Res. A 455 (2000) 433.
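    A generic way to estimate such random-event probabilities, in the same spirit though not the LLNL algorithm itself, is to simulate the Poisson background directly and count how often it mimics a correlated event pair. The rate, observation time, and coincidence window below are illustrative assumptions:

```python
import random

def p_random_pair(rate, t_total, dt, n_trials, rng):
    # Probability that a Poisson background of the given rate produces,
    # within t_total, at least one pair of events closer than dt.
    hits = 0
    for _ in range(n_trials):
        t, last = 0.0, None
        while True:
            t += rng.expovariate(rate)   # exponential inter-arrival times
            if t > t_total:
                break
            if last is not None and t - last < dt:
                hits += 1
                break
            last = t
    return hits / n_trials
```

    In the limiting case dt = t_total, any second event counts, so the result should match the analytic P(N >= 2) = 1 - (1 + lam) * exp(-lam) for lam = rate * t_total.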

  15. Experimental results and Monte Carlo simulations of a landmine localization device using the neutron backscattering method

    Energy Technology Data Exchange (ETDEWEB)

    Datema, C.P. E-mail: c.datema@iri.tudelft.nl; Bom, V.R.; Eijk, C.W.E. van

    2002-08-01

    Experiments were carried out to investigate the possible use of neutron backscattering for the detection of landmines buried in the soil. Several landmines, buried in a sand-pit, were positively identified. A series of Monte Carlo simulations were performed to study the complexity of the neutron backscattering process and to optimize the geometry of a future prototype. The results of these simulations indicate that this method shows great potential for the detection of non-metallic landmines (with a plastic casing), for which so far no reliable method has been found.

  16. Simulation of clinical X-ray tube using the Monte Carlo Method - PENELOPE code

    International Nuclear Information System (INIS)

    Breast cancer is the most common type of cancer among women. The main strategy for increasing the long-term survival of patients with this disease is early detection of the tumor, and mammography is the most appropriate method for this purpose. Despite the reduction in cancer deaths, there is great concern about the damage caused by ionizing radiation to breast tissue. To evaluate this, a mammography unit was modeled and depth spectra were obtained using the Monte Carlo method - PENELOPE code. The average energies of the spectra at depth and the half value layer of the mammography output spectrum were determined. (author)
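    For a monoenergetic, narrow-beam approximation, the half value layer quoted above follows directly from exponential attenuation; a minimal helper (the coefficient values in the check are illustrative):

```python
import math

def half_value_layer(mu):
    # Narrow-beam attenuation I(t) = I0 * exp(-mu * t); the HVL solves
    # exp(-mu * HVL) = 1/2, i.e. HVL = ln(2) / mu.
    return math.log(2.0) / mu
```

    For a polyenergetic mammography spectrum the HVL must instead be found from the simulated transmission curve, which is why a Monte Carlo depth spectrum is needed in the first place.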

  17. Mass attenuation coefficient calculations of different detector crystals by means of FLUKA Monte Carlo method

    OpenAIRE

    Ermis Elif Ebru; Celiktas Cuneyt

    2015-01-01

    Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close agreement with the NIST values. It was concluded f...

  18. Ant colony algorithm implementation in electron and photon Monte Carlo transport: Application to the commissioning of radiosurgery photon beams

    International Nuclear Information System (INIS)

    Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.

  19. Ant colony algorithm implementation in electron and photon Monte Carlo transport: Application to the commissioning of radiosurgery photon beams

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario "Carlos Haya", Avda. Carlos Haya s/n, E-29010 Malaga (Spain); Unidad de Radiofisica Hospitalaria, Hospital Xanit Internacional, Avda. de los Argonautas s/n, E-29630 Benalmadena (Malaga) (Spain); NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Hufelandstr. 55, D-45122 Essen (Germany); Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2010-07-15

    Purpose: In this work, the authors describe an approach developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of directional bremsstrahlung splitting, and azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central-axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed by other approaches common in this field. Conclusions: The new approach is competitive with those previously used for this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool for simulating radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.

  20. Monte Carlo simulations of Higgs-boson production at the LHC with the KrkNLO method

    CERN Document Server

    Jadach, S; Placzek, W; Sapeta, S; Siodmok, A; Skrzypek, M

    2016-01-01

    We present numerical tests and predictions of the KrkNLO method for matching of NLO QCD corrections to hard processes with LO parton shower Monte Carlo generators. This method was described in detail in our previous publications, where its advantages over other approaches, such as MCatNLO and POWHEG, were pointed out. Here we concentrate on presenting some numerical results (cross sections and distributions) for $Z/\gamma^*$ (Drell-Yan) and Higgs-boson production processes at the LHC. The Drell-Yan process is used mainly to validate the KrkNLO implementation in the Herwig 7 program with respect to the previous implementation in Sherpa. We also show predictions for this process with the new, complete, MC-scheme parton distribution functions and compare them with our previously published results. Then, we present the first results of the KrkNLO method for the Higgs production in gluon-gluon fusion at the LHC and compare them with the predictions of other programs, such as MCFM, MCatNLO, POWHEG and HNNLO, as w...

  1. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)

    Science.gov (United States)

    Hansson, Marie; Isaksson, Mats

    2007-04-01

    X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.

  2. Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Basire, M.; Soudan, J.-M.; Angelié, C., E-mail: christian.angelie@cea.fr [Laboratoire Francis Perrin, CNRS-URA 2453, CEA/DSM/IRAMIS/LIDyL, F-91191 Gif-sur-Yvette Cedex (France)

    2014-09-14

    The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotubes growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, g{sub p}(E{sub p}) in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called “corrected EAM” (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at, and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients S{sub ij} are exactly computed with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature T{sub m} is plotted in terms of the cluster atom number N{sub at}. The standard N{sub at}{sup −1/3} linear dependence (Pawlow law) is observed for N{sub at} >300, allowing an extrapolation up to the bulk metal at 1940 ±50 K. For N{sub at} <150, a strong divergence is observed compared to the Pawlow law. The melting transition, which begins at the surface, is stated by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I.

  3. Research on Reliability Modelling Method of Machining Center Based on Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    Chuanhai Chen

    2013-03-01

    Full Text Available The aim of this study is to obtain the reliability of a series system and analyze the reliability of a machining center, so a modified reliability-modelling method based on Monte Carlo simulation for series systems is proposed. The reliability function built by the classical statistical method, which assumes that machine tools are repaired as good as new, may be biased in the real case. The reliability functions of the subsystems are established individually, and the system reliability model is then built according to the reliability block diagram. The fitted reliability function of the machine tools is established using sample failure data generated by Monte Carlo simulation, and its inverse reliability function is solved by a linearization technique based on radial basis functions. Finally, an example of the machining center is presented using the proposed method to show its potential application. The analysis results show that the proposed method provides an accurate reliability model compared with the conventional method.
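The series-system principle, in which the system lifetime is the minimum of the subsystem lifetimes, can be sketched directly. The Weibull subsystem parameters in the example are hypothetical placeholders, not fitted values from the study:

```python
import math, random

def sample_weibull(rng, shape, scale):
    # Inverse-transform sampling of a Weibull(shape, scale) lifetime.
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def series_reliability(t, components, n_samples=100000, seed=0):
    """Estimate R(t) of a series system by Monte Carlo: the system
    survives to time t only if every subsystem does, so the system
    lifetime is the minimum of the sampled subsystem lifetimes."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_samples):
        lifetime = min(sample_weibull(rng, k, lam) for k, lam in components)
        if lifetime > t:
            survived += 1
    return survived / n_samples
```

With shape 1 the Weibull reduces to the exponential distribution, so the estimate can be checked against the analytic product of the subsystem reliabilities.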

  4. Diffusion Coefficient and Electric Field Studies for HSX using Monte Carlo Methods

    Science.gov (United States)

    Gerhardt, S. P.; Talmadge, J. N.

    1999-11-01

    The HSX experiment has a magnetic field spectrum which closely approximates helical symmetry. Nevertheless, symmetry-breaking terms are present which lead to asymmetric diffusion. Models for the asymmetric component of the monoenergetic diffusion coefficient are unable to account for all the terms in the HSX magnetic spectrum and the functional dependence on the radial electric field (Er). To model the diffusion coefficient as a function of Er and collisionality, Monte Carlo simulations have been made for different values of Er and background density. These results are fit to analytic models for the diffusion coefficient. Enforcing ambipolarity on these fluxes can lead to a calculation of the stellarator Er. To measure Er, we will use a spectroscopic system to measure impurity flow. A 1-meter spectrometer with a CCD detector has been purchased for this purpose; a LabVIEW control system has been implemented and collection optics designed. Details of the spectroscopic system will be presented.

  5. Online Health Management for Complex Nonlinear Systems Based on Hidden Semi-Markov Model Using Sequential Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qinming Liu

    2012-01-01

    Full Text Available Health management for a complex nonlinear system is becoming more important for condition-based maintenance and for minimizing the related risks and costs over its entire life. However, a complex nonlinear system often operates under dynamic operational and environmental conditions, and it is subject to high levels of uncertainty and unpredictability, so effective methods for online health management are still scarce. This paper combines the hidden semi-Markov model (HSMM) with sequential Monte Carlo (SMC) methods. HSMM is used to obtain the transition probabilities among health states and the health-state durations of a complex nonlinear system, while the SMC method is adopted to decrease the computational and space complexity and to describe the probability relationships between multiple health states and the monitored observations of a complex nonlinear system. This paper proposes a novel method of multistep-ahead health recognition based on the joint probability distribution for health management of a complex nonlinear system. Moreover, a new online health prognostic method is developed. A real case study is used to demonstrate the implementation and potential applications of the proposed methods for online health management of complex nonlinear systems.
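The SMC step over discrete health states can be sketched as a bootstrap particle filter. The transition matrix and observation likelihoods below are hypothetical; in the paper they would come from the trained HSMM:

```python
import random

def sample_categorical(probs, rng):
    # Draw an index i with probability probs[i].
    u, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if u < cum:
            return i
    return len(probs) - 1          # guard against floating-point rounding

def smc_step(particles, transition, obs_lik, obs, rng):
    """One bootstrap-SMC step over discrete health states: propagate
    each particle through the transition matrix, weight it by the
    likelihood of the new observation, then resample."""
    moved = [sample_categorical(transition[s], rng) for s in particles]
    weights = [obs_lik[s][obs] for s in moved]
    total = sum(weights)
    probs = [w / total for w in weights]
    return [moved[sample_categorical(probs, rng)] for _ in particles]
```

Repeating this step as observations arrive keeps a particle approximation of the posterior over health states, from which multistep-ahead predictions can be drawn by propagating the particles forward without the weighting step.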

  6. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    Science.gov (United States)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

    We investigated how tumor-targeted nanoparticles influence heat generation. We assume that all nanoparticles are fully functionalized and can find the target by active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (mean cosine of the scattering angle), the heat generated in the tumor area reaches a critical level that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons in the targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach with which photon trajectories can be tracked accurately through the simulation region.
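The photon random walk underlying such simulations can be sketched for a homogeneous slab. The fragment assumes isotropic scattering for brevity (tissue models normally use the Henyey-Greenstein phase function with the asymmetry factor g); with scattering switched off it reproduces the Beer-Lambert law:

```python
import math, random

def slab_transmittance(mu_a, mu_s, thickness, n_photons=50000, seed=0):
    """Monte Carlo estimate of the photon weight transmitted through a
    homogeneous slab. Isotropic scattering is assumed for brevity; real
    tissue models use the Henyey-Greenstein phase function."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    transmitted = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0              # photon enters at z = 0
        while w > 1e-4:                       # (a full code would add roulette)
            step = -math.log(1.0 - rng.random()) / mu_t
            z += uz * step
            if z >= thickness:
                transmitted += w              # escaped through the far face
                break
            if z < 0.0:
                break                         # escaped backwards
            w *= mu_s / mu_t                  # deposit mu_a/mu_t of the weight
            uz = 2.0 * rng.random() - 1.0     # isotropic new direction cosine
    return transmitted / n_photons
```

The weight deposited at each collision site is exactly the locally absorbed energy, which is the quantity driving the heat-generation maps discussed in the abstract.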

  7. Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems

    Science.gov (United States)

    Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark

    2016-03-01

    The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation-based inference methods. Motivated by these challenges, the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows has been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort, which, despite their effectiveness, limits the applicability of these geometrically based Monte Carlo methods. In this paper we explore one way to address this issue through the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper provide a demonstration of the significant improvement possible in terms of computational loading, suggesting this is a promising avenue of further development.
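The emulator idea can be illustrated with a toy Gaussian Process regression: condition a zero-mean GP on a small design set and evaluate the posterior mean cheaply at new points. The squared-exponential kernel, length scale and jitter below are illustrative choices, not those of the paper:

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(x_train, y_train, x_test, length=1.0, noise=1e-6):
    """Posterior mean of a zero-mean GP with a squared-exponential
    kernel -- a minimal 1-D stand-in for the geometric-quantity emulator."""
    k = lambda a, b: math.exp(-0.5 * (a - b) ** 2 / length ** 2)
    K = [[k(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(x_train)] for i, a in enumerate(x_train)]
    alpha = solve(K, list(y_train))
    return [sum(k(t, a) * al for a, al in zip(x_train, alpha)) for t in x_test]
```

Once the design set is chosen, each emulator evaluation costs only a kernel sum, which is the source of the computational savings reported in the abstract.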

  8. A 'local' exponential transform method for global variance reduction in Monte Carlo transport problems

    International Nuclear Information System (INIS)

    We develop a 'Local' Exponential Transform method which distributes the particles nearly uniformly across the system in Monte Carlo transport calculations. An exponential approximation to the continuous transport equation is used in each mesh cell to formulate biasing parameters. The biasing parameters, which resemble those of the conventional exponential transform, tend to produce a uniform sampling of the problem geometry when applied to a forward Monte Carlo calculation, and thus they help to minimize the maximum variance of the flux. Unlike the conventional exponential transform, the biasing parameters are spatially dependent, and are automatically determined from a forward diffusion calculation. We develop two versions of the forward Local Exponential Transform method, one with spatial biasing only, and one with spatial and angular biasing. The method is compared to conventional geometry splitting/Russian roulette for several sample one-group problems in X-Y geometry. The forward Local Exponential Transform method with angular biasing is found to produce better results than geometry splitting/Russian roulette in terms of minimizing the maximum variance of the flux. (orig.)

  9. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: one simulating low penetration and one simulating high penetration. The load-shedding strategy and the simulation process of each FMEA step are described in detail. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.

  10. Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process

    International Nuclear Information System (INIS)

    A computation code for the direct simulation Monte Carlo (DSMC) method was developed in order to analyze atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of the gadolinium atom were calculated for a model with five low-lying states. Calculation results were compared with experiments obtained by laser absorption spectroscopy. Two types of DSMC simulation, differing in the inelastic collision procedure, were carried out. It was concluded that energy transfer is forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author)

  11. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for the prescribed dose is made manually. A Monte-Carlo method Python library written at Madagascar INSTN is experimentally used to calculate the dose distribution in the tumour and around it. The first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs is set up with a listener patch running on each PC. The library will be used to model the dose distribution in the CT-scan patient picture for individual and more accurate treatment time calculation for a prescribed dose.

  12. Microlens assembly error analysis for light field camera based on Monte Carlo method

    Science.gov (United States)

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling distance, movement and rotation errors that can appear during microlens installation. By examining these images, sub-aperture images and refocused images, we found that the images present different degrees of blur and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, obscured regions and other distortions that result in unclear refocused images.

  13. Investigation of the optimal parameters for laser treatment of leg telangiectasia using the Monte Carlo method

    Science.gov (United States)

    Kienle, Alwin; Hibst, Raimund

    1996-05-01

    Treatment of leg telangiectasia with a pulsed laser is investigated theoretically. The Monte Carlo method is used to calculate light propagation and absorption in the epidermis, the dermis and the ectatic blood vessel. Calculations are made for different diameters and depths of the vessel in the dermis. In addition, the scattering and absorption coefficients of the dermis are varied. On the basis of the considered damage model, it is found that for vessels with diameters between 0.3 mm and 0.5 mm, wavelengths of about 600 nm are optimal to achieve selective photothermolysis.

  14. Linewidth of Cyclotron Absorption in Band-Gap Graphene: Relaxation Time Approximation vs. Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    S.V. Kryuchkov

    2015-03-01

    Full Text Available The power of elliptically polarized electromagnetic radiation absorbed by band-gap graphene in the presence of a constant magnetic field is calculated. The linewidth of the cyclotron absorption is shown to be non-zero even if scattering is absent. The calculations are performed analytically with the Boltzmann kinetic equation and confirmed numerically with the Monte Carlo method. The temperature dependence of the cyclotron absorption linewidth, applicable to band-gap graphene in the absence of collisions, is determined analytically.

  15. The applicability of certain Monte Carlo methods to the analysis of interacting polymers

    Energy Technology Data Exchange (ETDEWEB)

    Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)

    1998-05-01

    The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal with Metropolis rejection to locate the phase transition, which is known to occur at {beta}{sub crit} {approx} 0.99, and to recalculate the known value of the critical exponent {nu} {approx} 0.58 of the system for {beta} = {beta}{sub crit}. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of {nu}. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of {beta}{sub crit} using smaller values of N is 1.01 {+-} 0.01, and the estimate for {nu} at this value of {beta} is 0.59 {+-} 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimation of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
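The pivot move with Metropolis rejection can be sketched as follows. For brevity the sketch uses a square lattice rather than the hexagonal lattice of the paper, with an assumed interaction of -1 per non-bonded nearest-neighbour contact:

```python
import math, random

def energy(walk):
    # Assumed interaction: -1 per non-bonded nearest-neighbour contact.
    occ = set(walk)
    e = 0
    for i, (x, y) in enumerate(walk):
        for nb in ((x + 1, y), (x, y + 1)):   # count each pair once
            if nb in occ and abs(i - walk.index(nb)) > 1:
                e -= 1
    return e

def pivot_step(walk, beta, rng):
    """One pivot move with Metropolis rejection: rotate the tail of the
    walk about a random interior site, reject self-intersections, then
    accept with probability min(1, exp(-beta * dE))."""
    n = len(walk)
    p = rng.randrange(1, n - 1)
    px, py = walk[p]
    # Lattice rotations by 90, 270 and 180 degrees about the pivot.
    a, b, c, d = rng.choice([(0, -1, 1, 0), (0, 1, -1, 0), (-1, 0, 0, -1)])
    tail = [(px + a * (x - px) + b * (y - py),
             py + c * (x - px) + d * (y - py)) for x, y in walk[p + 1:]]
    new = walk[:p + 1] + tail
    if len(set(new)) < n:
        return walk                           # self-intersecting: reject
    dE = energy(new) - energy(walk)
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        return new
    return walk
```

The combined rejection of self-intersections and Metropolis moves is exactly the mechanism whose acceptance fraction collapses for large N in the abstract's findings.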

  16. Diffusion coefficients for LMFBR cells calculated with MOC and Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Rooijen, W.F.G. van, E-mail: rooijen@u-fukui.ac.j [Research Institute of Nuclear Energy, University of Fukui, Bunkyo 3-9-1, Fukui-shi, Fukui-ken 910-8507 (Japan); Chiba, G., E-mail: chiba.go@jaea.go.j [Japan Atomic Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195 (Japan)

    2011-01-15

    The present work discusses the calculation of the diffusion coefficient of a lattice of hexagonal cells, with both 'sodium present' and 'sodium absent' conditions. Calculations are performed in the framework of lattice theory (also known as fundamental mode approximation). Unlike the classical approaches, our heterogeneous leakage model allows the calculation of diffusion coefficients under all conditions, even if planar voids are present in the lattice. Equations resulting from this model are solved using the method of characteristics (MOC). Independent confirmation of the MOC result comes from Monte Carlo calculations, in which the diffusion coefficient is obtained without any of the assumptions of lattice theory. It is shown by comparison to the Monte Carlo results that the MOC solution yields correct values of the diffusion coefficient under all conditions, even in cases where the classic calculation of the diffusion coefficient fails. This work is a first step in the development of a robust method to calculate the diffusion coefficient of lattice cells. Adoption into production codes will require more development and validation of the method.

  17. Spin kinetic Monte Carlo method for nanoferromagnetism and magnetization dynamics of nanomagnets with large magnetic anisotropy

    Institute of Scientific and Technical Information of China (English)

    LIU Bang-gui; ZHANG Kai-cheng; LI Ying

    2007-01-01

    The Kinetic Monte Carlo (KMC) method based on transition-state theory, powerful and famous for simulating atomic epitaxial growth of thin films and nanostructures, was used recently to simulate the nanoferromagnetism and magnetization dynamics of nanomagnets with giant magnetic anisotropy. We present a brief introduction to the KMC method and show how to reformulate it for nanoscale spin systems. Large enough magnetic anisotropy, observed experimentally and shown theoretically in terms of first-principles calculations, is not only essential to stabilize spin orientation but also necessary in making the transition-state barriers during spin reversals for spin KMC simulation. We show two applications of the spin KMC method to monatomic spin chains and spin-polarized-current controlled composite nanomagnets with giant magnetic anisotropy. This spin KMC method can be applied to other anisotropic nanomagnets and composite nanomagnets as long as their magnetic anisotropy energies are large enough.
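The reformulated spin-KMC loop can be sketched for a toy model. The fragment below uses an assumed Arrhenius rate over a fixed anisotropy barrier plus the Zeeman energy change, selects a flip in proportion to its rate, and draws an exponential waiting time; it illustrates the rejection-free KMC scheme, not the authors' model:

```python
import math, random

def kmc_step(spins, barrier, field, kT, rng):
    """One rejection-free (n-fold way) KMC step for Ising-like spins.
    Assumed model: each flip must cross a fixed anisotropy barrier plus
    any increase of the Zeeman energy E = -field * s (Arrhenius rates)."""
    rates = []
    for s in spins:
        dE = 2.0 * field * s                 # energy change on flipping s -> -s
        rates.append(math.exp(-(barrier + max(dE, 0.0)) / kT))
    total = sum(rates)
    # Choose which spin flips, with probability proportional to its rate.
    u, cum = rng.random() * total, 0.0
    for i, r in enumerate(rates):
        cum += r
        if u < cum:
            spins[i] = -spins[i]
            break
    # Advance the clock by an exponentially distributed waiting time.
    return -math.log(1.0 - rng.random()) / total
```

Because every step executes an event, the simulated time advances by the physical waiting time between reversals, which is what makes KMC suitable for the slow, barrier-crossing magnetization dynamics described above.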

  18. Numerical simulation of C/O spectroscopy in logging by Monte-Carlo method

    International Nuclear Information System (INIS)

    A numerical simulation of C/O spectroscopy in logging is performed by the Monte Carlo method. The simulated spectra agree well with the measured spectra and meet the requirements of logging practice. The effects on the C/O ratio of different formation oil saturations, borehole oil fractions, casing sizes and concrete ring thicknesses are investigated. In order to achieve accurate results when processing the spectra, this paper presents a new method for unfolding the C/O inelastic gamma spectroscopy and analyses the spectra with it; the results agree with the facts. These rules and methods can be used for calibration and logging interpretation. (authors)

  19. Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method

    Science.gov (United States)

    Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin

    2015-12-01

    The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used and a linear approximation of fission source distributions during a macro-time step is introduced to provide delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, the results on a continuous-energy problem are presented.

  20. Recent Advances in the Microscopic Calculations of Level Densities by the Shell Model Monte Carlo Method

    CERN Document Server

    Alhassid, Y; Liu, S; Mukherjee, A; Nakada, H

    2014-01-01

    The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes $^{59-64}$Ni and of a heavy deformed rare-earth nucleus $^{162}$Dy and found them to be in close agreement with various experimental data sets.

  1. A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry

    International Nuclear Information System (INIS)

    The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aiming to solve the modeling challenges of multi-physics coupling simulation. The automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Process, was recently developed and integrated in MCAM5.2. This method can convert bidirectionally between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are generated and output. When converting from a SuperMC model back to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model. The results showed that the method is correct and effective. (author)

  2. Analysis of uncertainty quantification method by comparing Monte-Carlo method and Wilks' formula

    International Nuclear Information System (INIS)

    An analysis of the uncertainty quantification related to LBLOCA using Monte-Carlo calculations has been performed and compared with the tolerance level determined by the Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LOCA phenomena were determined based on previous PIRT results and documentation from the BEMUSE project. Calculations were conducted on 3,500 cases within a 2-week CPU time on a 14-PC cluster system. The Monte-Carlo exercise shows that the 95% upper-limit PCT value can be obtained well, with a 95% confidence level, using the Wilks' formula, although a 5% risk of PCT under-prediction must be accepted. The results also show that the statistical fluctuation of the limit value using the first-order Wilks' formula is as large as the uncertainty value itself. It is therefore desirable to increase the order of the Wilks' formula to at least second order to estimate a reliable safety margin for the design features. It is also shown that, with its ever-increasing computational capability, the Monte-Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame.
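The Wilks criterion discussed above can be checked with a few lines of code. The sketch below finds the smallest number of code runs for a one-sided tolerance limit; for 95%/95% it reproduces the well-known values of 59 runs at first order and 93 at second order:

```python
from math import comb

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of runs N such that the order-th largest result
    is a one-sided (coverage, confidence) tolerance limit."""
    n = order - 1
    while True:
        n += 1
        # The order-th largest bounds the coverage quantile unless fewer
        # than `order` runs exceed it; that failure probability must be
        # at most 1 - confidence.
        tail = sum(comb(n, k) * (1 - coverage) ** k * coverage ** (n - k)
                   for k in range(order))
        if tail <= 1.0 - confidence:
            return n
```

Raising the order, as the abstract recommends, increases the required number of runs but sharply reduces the statistical fluctuation of the estimated limit.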

  3. Self-optimizing Monte Carlo method for nuclear well logging simulation

    Science.gov (United States)

    Liu, Lianyan

    1997-09-01

    In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated as a by-product of the regular Monte Carlo calculation, and the importance map is later used to conduct the splitting and Russian roulette for particle population control. By adopting a spatial mesh system independent of the physical geometrical configuration, the method allows superior user-friendliness. This new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test the performance of the new method. The calculations are sped up over analog simulation by factors of 120 and 2600 for the neutron porosity tool and the gamma-ray lithology density log, respectively. The new method performs better by a factor of 4-6 than MCNP's cell-based weight window, as measured by the converged figures of merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes some time to generate a reasonable importance map from an analog run, a good initial map can yield significant CPU-time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases, respectively. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgement can help diagnose it with contributon maps. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very

  4. NASA astronaut dosimetry: Implementation of scalable human phantoms and benchmark comparisons of deterministic versus Monte Carlo radiation transport

    Science.gov (United States)

    Bahadori, Amir Alexander

    Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle

  5. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or c

  6. Clinical implementation of enhanced dynamic wedges into the Pinnacle treatment planning system: Monte Carlo validation and patient-specific QA

    Science.gov (United States)

    Ahmad, Munir; Deng, Jun; Lund, Molly W.; Chen, Zhe; Kimmett, James; Moran, Meena S.; Nath, Ravinder

    2009-01-01

    The goal of this work is to present a systematic Monte Carlo validation study on the clinical implementation of the enhanced dynamic wedges (EDWs) into the Pinnacle3 (Philips Medical Systems, Fitchburg, WI) treatment planning system (TPS), together with QA procedures for verifying patient plans treated with EDWs. Modeling of EDW beams in the Pinnacle3 TPS, which employs a collapsed-cone convolution superposition (CCCS) dose model, was based on a combination of measured open-beam data and the 'Golden Segmented Treatment Table' (GSTT) provided by Varian for each photon beam energy. To validate EDW models, dose profiles of 6 and 10 MV photon beams from a Clinac 2100 C/D were measured in virtual water at depths from near-surface to 30 cm for a wide range of field sizes and wedge angles using the Profiler 2 (Sun Nuclear Corporation, Melbourne, FL) diode array system. The EDW output factors (EDWOFs) for square fields from 4 to 20 cm wide were measured in virtual water using a small-volume Farmer-type ionization chamber placed at a depth of 10 cm on the central axis. Furthermore, the 6 and 10 MV photon beams emerging from the treatment head of the Clinac 2100 C/D were fully modeled, and the central-axis depth doses, the off-axis dose profiles and the output factors in water for open and dynamically wedged fields were calculated using the Monte Carlo (MC) package EGS4. Our results have shown that (1) both the central-axis depth doses and the off-axis dose profiles of various EDWs computed with the CCCS dose model and MC simulations showed good agreement with the measurements to within 2%/2 mm; (2) measured EDWOFs used for monitor-unit calculation in the Pinnacle3 TPS agreed well with the CCCS and MC predictions within 2%; (3) all the EDW fields satisfied our validation criteria of 1% relative dose difference and 2 mm distance-to-agreement (DTA) with 99-100% passing rate in routine patient treatment plan verification using the MapCheck 2D diode array.

  7. Monte Carlo simulation methods of determining red bone marrow dose from external radiation

    International Nuclear Information System (INIS)

    Objective: To provide evidence for a more reasonable method of determining red bone marrow dose by analyzing and comparing existing simulation methods. Methods: Using the Monte Carlo simulation software MCNPX, the absorbed doses to the red bone marrow of the Rensselaer Polytechnic Institute (RPI) adult female voxel phantom were calculated through 4 different methods: direct energy deposition, dose response function (DRF), King-Spiers factor method and mass-energy absorption coefficient (MEAC). The radiation sources were defined as infinite plate sources with energies ranging from 20 keV to 10 MeV, and 23 sources with different energies were simulated in total. The source was placed right in front of the RPI model to achieve a homogeneous anteroposterior radiation scenario. The results of the different methods for the different photon energies were compared. Results: When the photon energy was lower than 100 keV, the direct energy deposition method gave the highest result, while the MEAC and King-Spiers factor methods showed more reasonable results. When the photon energy was higher than 150 keV, taking into account the higher absorption ability of red bone marrow at higher photon energies, the result of the King-Spiers factor method was larger than those of the other methods. Conclusions: The King-Spiers factor method might be the most reasonable method to estimate the red bone marrow dose from external radiation. (authors)

  8. Adapting phase-switch Monte Carlo method for flexible organic molecules

    Science.gov (United States)

    Bridgwater, Sally; Quigley, David

    2014-03-01

    The role of cholesterol in lipid bilayers has been widely studied via molecular simulation; however, there has been relatively little work on crystalline cholesterol in biological environments. Recent work has linked the crystallisation of cholesterol in the body with heart attacks and strokes. Any attempt to model this process will require new models and advanced sampling methods to capture and quantify the subtle polymorphism of solid cholesterol, in which two crystalline phases are separated by a phase transition close to body temperature. To this end, we have adapted phase-switch Monte Carlo for use with flexible molecules, to calculate the free energy difference between crystal polymorphs to a high degree of accuracy. The method samples an order parameter which divides the displacement space of the N molecules into regions energetically favourable for each polymorph; this space is traversed using biased Monte Carlo. Results for a simple model of butane will be presented, demonstrating that conformational flexibility can be correctly incorporated within a phase-switching scheme. Extension to a coarse grained model of cholesterol and the resulting free energies will be discussed.

  9. Simulating rotationally inelastic collisions using a Direct Simulation Monte Carlo method

    CERN Document Server

    Schullian, O; Vaeck, N; van der Avoird, A; Heazlewood, B R; Rennick, C J; Softley, T P

    2015-01-01

    A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH$_3$ by He.

  10. Synchronous Parallel Kinetic Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
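
    The standard serial kMC model that such parallel algorithms generalize is the residence-time (BKL) algorithm; the following is a minimal sketch on an assumed two-rate hopping toy problem, not the authors' code:

```python
import math
import random

def kmc_step(rates, rng):
    """One step of the residence-time (BKL) kinetic Monte Carlo algorithm:
    pick an event with probability proportional to its rate, then advance
    the clock by an exponentially distributed waiting time."""
    total = sum(rates)
    r = rng.random() * total
    acc, event = 0.0, len(rates) - 1
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            event = i
            break
    dt = -math.log(rng.random()) / total  # waiting time ~ Exp(total rate)
    return event, dt

# Toy system (assumed): a defect hopping left/right with asymmetric rates.
rng = random.Random(42)
rates = [1.0, 3.0]            # [hop left, hop right] per unit time
pos, t = 0, 0.0
for _ in range(10000):
    event, dt = kmc_step(rates, rng)
    pos += -1 if event == 0 else +1
    t += dt
drift = pos / t               # should approach k_right - k_left = 2.0
```

    Each step is a serial bottleneck because the clock advance depends on the global total rate, which is exactly what a synchronous parallel formulation must preserve across domains.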

  11. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Science.gov (United States)

    Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin

    2013-01-01

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  12. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  13. Analysis over Critical Issues of Implementation or Non-implementation of the ABC Method in Romania

    Directory of Open Access Journals (Sweden)

    Sorinel Cãpusneanu

    2009-12-01

    Full Text Available This article analyses the critical issues regarding implementation or non-implementation of the Activity-Based Costing (ABC) method in Romania. It highlights the views of specialists in the field, as well as the authors' own point of view, on the informational, technical, behavioral, financial, managerial, property and competitive issues surrounding implementation or non-implementation of the ABC method in Romania.

  14. Application of Monte Carlo method for dose calculation in thyroid follicle

    International Nuclear Information System (INIS)

    The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. The principal advantage of the method compared with deterministic methods is its ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they can simulate energy deposition in models of organs and/or tissues, as well as in models of the cells of the human body. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is thus of fundamental importance to dosimetry, because these cells are sensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles, for Auger electrons, internal conversion electrons and beta particles, from iodine-131 and short-lived iodines (131, 132, 133, 134 and 135), with follicle diameters varying from 30 to 500 μm. The simulation results obtained with the MCNP4C code showed that, on average, 25% of the total dose absorbed by the colloid was due to iodine-131 and 75% to the short-lived iodines. For follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from particles with low energies, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained by the codes MCNP4C, EPOTRAN, EGS4 and by deterministic methods. (author)

  15. An implementation of Runge's method for Diophantine equations

    OpenAIRE

    Beukers, F.; Tengely, Sz.

    2005-01-01

    In this paper we suggest an implementation of Runge's method for solving Diophantine equations satisfying Runge's condition. In this implementation we avoid the use of Puiseux series and algebraic coefficients.

  16. Improvement of the neutron flux calculations in thick shield by conditional Monte Carlo and deterministic methods

    Energy Technology Data Exchange (ETDEWEB)

    Ghassoun, Jillali; Jehoauni, Abdellatif [Nuclear physics and Techniques Lab., Faculty of Science, Semlalia, Marrakech (Morocco)

    2000-01-01

    In practice, the estimation of the flux obtained by the Fredholm integral equation requires a truncation of the Neumann series. The order N of the truncation must be large in order to get a good estimation, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without affecting the estimation quality. In previous works, only weakly diffusing media were considered in order to achieve rapid convergence of the calculations, which permitted truncating the Neumann series after 20 terms. But in most practical shields, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a bad estimation of the flux, so it becomes necessary to use high orders to obtain a good estimation. We suggest two simple techniques based on conditional Monte Carlo. We propose a simple density for sampling the steps of the random walk, as well as a modified stretching-factor density depending on a biasing parameter, which affects the sample vector by stretching or shrinking the original random walk in order to obtain a chain that ends at a given point of interest. We also obtained a simple empirical formula which gives the neutron flux for media characterized only by their scattering probabilities. The results are compared to the exact analytic solution; we obtained good agreement of the results with a good acceleration of convergence of the calculations. (author)

  17. Improvement of the neutron flux calculations in thick shield by conditional Monte Carlo and deterministic methods

    International Nuclear Information System (INIS)

    In practice, the estimation of the flux obtained by the Fredholm integral equation requires a truncation of the Neumann series. The order N of the truncation must be large in order to get a good estimation, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without affecting the estimation quality. In previous works, only weakly diffusing media were considered in order to achieve rapid convergence of the calculations, which permitted truncating the Neumann series after 20 terms. But in most practical shields, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a bad estimation of the flux, so it becomes necessary to use high orders to obtain a good estimation. We suggest two simple techniques based on conditional Monte Carlo. We propose a simple density for sampling the steps of the random walk, as well as a modified stretching-factor density depending on a biasing parameter, which affects the sample vector by stretching or shrinking the original random walk in order to obtain a chain that ends at a given point of interest. We also obtained a simple empirical formula which gives the neutron flux for media characterized only by their scattering probabilities. The results are compared to the exact analytic solution; we obtained good agreement of the results with a good acceleration of convergence of the calculations. (author)
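
    The Neumann-series estimation underlying this record can be illustrated with the classic von Neumann-Ulam random-walk estimator for a generic linear system x = Hx + b (a toy sketch with assumed matrices, unrelated to the authors' shielding code):

```python
import random

def neumann_mc(H, b, i0, walks, rng, w_min=1e-3):
    """Estimate component i0 of the solution of x = H x + b by sampling
    terms of the Neumann series x = sum_k H^k b with random walks
    (von Neumann-Ulam scheme with uniform transition probabilities).
    Walks are truncated once the weight falls below w_min, the analogue
    of truncating the series at a finite order N."""
    n = len(b)
    p = 1.0 / n                    # uniform transition probability
    total = 0.0
    for _ in range(walks):
        i, weight, score = i0, 1.0, b[i0]
        while abs(weight) > w_min:
            j = rng.randrange(n)
            weight *= H[i][j] / p  # importance weight of the jump
            score += weight * b[j]
            i = j
        total += score
    return total / walks

# Assumed toy system; requires spectral radius of |H|/p below 1.
H = [[0.1, 0.2], [0.3, 0.1]]
b = [1.0, 2.0]
rng = random.Random(0)
x0 = neumann_mc(H, b, 0, 20000, rng)
# exact first component of (I - H)^{-1} b is 26/15 ~ 1.7333
```

    The weight cutoff plays the role of the truncation order N discussed above: a strongly scattering medium corresponds to weights that shrink slowly, forcing longer walks.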

  18. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
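
    The effect of the sampling interval on sample correlation can be probed numerically with a toy Metropolis chain; the standard-normal target, proposal width and lags below are assumptions for illustration, not the paper's derivation:

```python
import math
import random

def metropolis_chain(n, step, rng):
    """Random-walk Metropolis chain targeting a standard normal density."""
    x, out = 0.0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        # accept with min(1, pi(y)/pi(x)) for pi(x) ~ exp(-x^2/2)
        if rng.random() < math.exp(min(0.0, 0.5 * (x * x - y * y))):
            x = y
        out.append(x)
    return out

def lag_autocorr(xs, k):
    """Sample autocorrelation of the chain at lag k."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + k] - m) for i in range(len(xs) - k))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

rng = random.Random(1)
chain = metropolis_chain(100000, 0.5, rng)
r1 = lag_autocorr(chain, 1)      # consecutive cycles: strongly correlated
r200 = lag_autocorr(chain, 200)  # widely spaced cycles: nearly independent
```

    Keeping only every 200th sample here costs almost no statistical information, which is the trade-off the variance rules above quantify.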

  19. Intra-operative radiation therapy optimization using the Monte Carlo method

    International Nuclear Information System (INIS)

    The problem addressed with reference to the treatment head optimization has been the choice of the proper design of the head of a new 12 MeV linear accelerator in order to have the required dose uniformity on the target volume while keeping the dose rate sufficiently high and the photon production and the beam impact with the head walls within acceptable limits. The second part of the optimization work, concerning the TPS, is based on the rationale that the TPSs generally used in radiotherapy use semi-empirical algorithms whose accuracy can be inadequate, particularly when irregular surfaces and/or inhomogeneities, such as air cavities or bone, are present. The Monte Carlo method, on the contrary, is capable of accurately calculating the dose distribution under almost all circumstances. Furthermore, it offers the advantage that the simulation of the radiation transport in the patient can start from the beam data obtained by transport through the specific treatment head used. Therefore the Monte Carlo simulations, which at present are not yet widely used for routine treatment planning due to the required computing time, can be employed as a benchmark and as an optimization tool for conventional TPSs. (orig.)

  20. Intra-operative radiation therapy optimization using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Rosetti, M. [ENEA, Bologna (Italy); Benassi, M.; Bufacchi, A.; D' Andrea, M. [Ist. Regina Elena, Rome (Italy); Bruzzaniti, V. [ENEA, S. Maria di Galeria (Rome) (Italy)

    2001-07-01

    The problem addressed with reference to the treatment head optimization has been the choice of the proper design of the head of a new 12 MeV linear accelerator in order to have the required dose uniformity on the target volume while keeping the dose rate sufficiently high and the photon production and the beam impact with the head walls within acceptable limits. The second part of the optimization work, concerning the TPS, is based on the rationale that the TPSs generally used in radiotherapy use semi-empirical algorithms whose accuracy can be inadequate, particularly when irregular surfaces and/or inhomogeneities, such as air cavities or bone, are present. The Monte Carlo method, on the contrary, is capable of accurately calculating the dose distribution under almost all circumstances. Furthermore, it offers the advantage that the simulation of the radiation transport in the patient can start from the beam data obtained by transport through the specific treatment head used. Therefore the Monte Carlo simulations, which at present are not yet widely used for routine treatment planning due to the required computing time, can be employed as a benchmark and as an optimization tool for conventional TPSs. (orig.)

  1. Estimation of pressure-particle velocity impedance measurement uncertainty using the Monte Carlo method.

    Science.gov (United States)

    Brandão, Eric; Flesch, Rodolfo C C; Lenzi, Arcanjo; Flesch, Carlos A

    2011-07-01

    The pressure-particle velocity (PU) impedance measurement technique is an experimental method used to measure the surface impedance and the absorption coefficient of acoustic samples in situ or under free-field conditions. In this paper, the measurement uncertainty of the absorption coefficient determined using the PU technique is explored by applying the Monte Carlo method. It is shown that because of the uncertainty, it is particularly difficult to measure samples with low absorption, and that difficulties associated with the localization of the acoustic centers of the sound source and the PU sensor affect the quality of the measurement roughly to the same extent as the errors in the transfer function between pressure and particle velocity do.
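
    This kind of Monte Carlo uncertainty propagation is straightforward to sketch; the normal-incidence absorption model and the impedance value and noise levels below are assumptions for illustration, not the paper's measurement model:

```python
import random

def absorption(zs):
    """Absorption coefficient from a normalized surface impedance
    (normal-incidence reflection model)."""
    r = (zs - 1) / (zs + 1)
    return 1 - abs(r) ** 2

# Propagate input uncertainty through the model by Monte Carlo:
# draw many plausible impedance values, evaluate the model for each,
# and summarize the spread of the outputs.
rng = random.Random(7)
draws = []
for _ in range(50000):
    # Assumed measurement: zs = 2.0 + 1.0j with independent Gaussian
    # noise (sigma = 0.05) on the real and imaginary parts.
    zs = complex(rng.gauss(2.0, 0.05), rng.gauss(1.0, 0.05))
    draws.append(absorption(zs))
mean = sum(draws) / len(draws)
std = (sum((a - mean) ** 2 for a in draws) / (len(draws) - 1)) ** 0.5
# noise-free model value: alpha = 0.8
```

    The sample mean and standard deviation of the output draws serve as the estimate and standard uncertainty of the absorption coefficient.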

  2. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model

    Science.gov (United States)

    Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa

    We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on propagation and combination of effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using a contribution degree and sum all effects. Through application to practical models, it is confirmed that there are no differences between results obtained by quantitative relations and results obtained by the proposed method at the risk rate of 5%.

  3. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot

    International Nuclear Information System (INIS)

    This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), among which six DOF are contributed by the parallel mechanism and the rest are from the serial mechanism. In this paper, a kinematic error model which involves 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of the Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results of the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.

  4. Estimating the Probability of Asteroid Collision with the Earth by the Monte Carlo Method

    Science.gov (United States)

    Chernitsov, A. M.; Tamarov, V. A.; Barannikov, E. A.

    2016-09-01

    The commonly accepted method of estimating the probability of asteroid collision with the Earth is investigated using two fictitious asteroids, one of which must obviously collide with the Earth while the second must pass by at a dangerous distance from it. The simplest Kepler model of motion is used. Confidence regions of asteroid motion are estimated by the Monte Carlo method. Two variants of constructing the confidence region are considered: points distributed over the entire volume, and points mapped onto the boundary surface. It is demonstrated that a special feature of the multidimensional point distribution in the first variant can lead to a zero probability of collision even for bodies that do collide with the Earth. The probability estimates obtained with an even considerably smaller number of points in the confidence region determined by its boundary surface are free from this disadvantage.
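
    The volume-sampling variant can be sketched in a few lines; the isotropic Gaussian confidence cloud and all numbers below are assumptions for a toy model, with the Kepler propagation omitted:

```python
import random

def collision_probability(mu, sigma, r_earth, n, rng):
    """Fraction of sampled asteroid positions (3-D Gaussian confidence
    cloud around the nominal encounter point) falling inside the Earth,
    modelled as a sphere of radius r_earth at the origin."""
    hits = 0
    for _ in range(n):
        x = rng.gauss(mu[0], sigma)
        y = rng.gauss(mu[1], sigma)
        z = rng.gauss(mu[2], sigma)
        if x * x + y * y + z * z < r_earth ** 2:
            hits += 1
    return hits / n

rng = random.Random(3)
# Assumed encounter: nominal miss distance of one Earth radius with a
# position uncertainty of two Earth radii (all lengths in Earth radii).
p = collision_probability((1.0, 0.0, 0.0), 2.0, 1.0, 200000, rng)
```

    The paper's caveat applies directly: if too few volume points are drawn, none may fall inside the small target sphere and the estimated probability collapses to zero, which is why mapping points onto the boundary surface can be preferable.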

  5. Treatment of the Schrödinger equation through a Monte Carlo method based upon the generalized Feynman-Kac formula

    International Nuclear Information System (INIS)

    We present a new Monte Carlo method based upon the theoretical proposal of Claverie and Soto. By contrast with other Quantum Monte Carlo methods used so far, the present approach uses a pure diffusion process without any branching. The many-fermion problem (with the specific constraint due to the Pauli principle) receives a natural solution in the framework of this method: in particular, there is neither the fixed-node approximation nor the nodal release problem which occur in other approaches (see, e.g., Ref. 8 for a recent account). We give some numerical results concerning simple systems in order to illustrate the numerical feasibility of the proposed algorithm
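
    A pure-diffusion estimator in the Feynman-Kac spirit can be sketched for the 1-D harmonic oscillator, whose exact ground-state energy is E0 = 1/2; this is a minimal sketch with per-walker weights instead of branching, and all parameters are assumptions rather than the authors' algorithm:

```python
import math
import random

def feynman_kac_energy(n_walkers, t_half, dt, rng):
    """Ground-state energy of V(x) = x^2/2 via the Feynman-Kac formula
    E0 = -lim_{t->inf} (1/t) ln E[exp(-integral of V along a Brownian
    path)], using free diffusion and per-walker log-weights (no branching)."""
    steps = int(2 * t_half / dt)
    half = steps // 2
    xs = [0.0] * n_walkers
    logw = [0.0] * n_walkers
    w_mid = 1.0
    for s in range(steps):
        for i in range(n_walkers):
            xs[i] += rng.gauss(0.0, math.sqrt(dt))  # free Brownian step
            logw[i] -= 0.5 * xs[i] ** 2 * dt        # accumulate -V(x) dt
        if s == half - 1:
            w_mid = sum(math.exp(l) for l in logw) / n_walkers
    w_end = sum(math.exp(l) for l in logw) / n_walkers
    # Weight ratio between t_half and 2*t_half suppresses the transient
    # contribution of excited states.
    return -math.log(w_end / w_mid) / t_half

rng = random.Random(5)
e0 = feynman_kac_energy(5000, 2.0, 0.02, rng)  # exact answer: E0 = 0.5
```

    Without branching, the weight variance grows with imaginary time, which is the practical price of a pure diffusion process; the weight-ratio estimator keeps the required times short.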

  6. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis

    Science.gov (United States)

    Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

    2014-05-01

    Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
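
    The Logistic Regression step that turns the (synthetic) loss data into a damage curve can be sketched as follows; the depth/damage data, learning rate and one-variable model form are assumptions for illustration, not the WMCLR code itself:

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=5000):
    """Fit P(damage | depth) = 1/(1 + exp(-(a + b*depth))) by batch
    gradient ascent on the log-likelihood (no regularization)."""
    a = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (y - p)          # gradient w.r.t. intercept
            gb += (y - p) * x      # gradient w.r.t. slope
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# Assumed synthetic survey data: floodwater depth (m) vs. damaged (1) / not (0).
depths = [0.1, 0.2, 0.3, 0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5]
damage = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]
a, b = fit_logistic(depths, damage)
curve = lambda d: 1.0 / (1.0 + math.exp(-(a + b * d)))
```

    The fitted curve plays the role of a synthetic depth-damage curve: the probability of damage rises monotonically with floodwater depth.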

  7. A Monte Carlo method for critical systems in infinite volume: the planar Ising model

    CERN Document Server

    Herdeiro, Victor

    2016-01-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
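
    For contrast with the infinite-volume construction, a minimal Metropolis sketch of the planar Ising model on a finite torus (ordinary periodic boundaries, not the paper's holographic boundary condition; lattice size, temperature and sweep count are assumed) looks like this:

```python
import math
import random

def ising_metropolis(L, T, sweeps, rng):
    """Metropolis sampling of the planar (2-D) Ising model on an L x L
    torus, starting from the ordered configuration; returns the
    magnetization per spin of the final configuration."""
    s = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb        # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
        # (observables would be accumulated here after equilibration)
    return sum(sum(row) for row in s) / (L * L)

rng = random.Random(11)
m_cold = ising_metropolis(16, 1.0, 200, rng)  # well below T_c ~ 2.269
```

    The long-range boundary effects the paper corrects are visible in exactly such finite-lattice simulations, where periodic images distort critical correlations near T_c.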

  8. Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy

    Science.gov (United States)

    King, Julian; Mortlock, Daniel; Webb, John; Murphy, Michael

    2010-11-01

    Recent attempts to constrain cosmological variation in the fine structure constant, α, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for Δα/α, the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.

  9. Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy

    CERN Document Server

    King, Julian A; Webb, John K; Murphy, Michael T

    2009-01-01

    Recent attempts to constrain cosmological variation in the fine structure constant, alpha, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for (Delta alpha)/(alpha), the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.

  10. Application of Monte Carlo method in modelling physical and physico-chemical processes

    International Nuclear Information System (INIS)

    The seminar was held on September 9 and 10, 1982 at the Faculty of Nuclear Science and Technical Engineering of the Czech Technical University in Prague. The participants heard 11 papers, of which 7 were entered into INIS. The papers dealt with the use of the Monte Carlo method for modelling the transport and scattering of gamma radiation in layers of materials, the application of low-energy gamma radiation to the determination of secondary X radiation flux, the determination of self-absorption corrections for a 4π chamber, modelling the response function of a scintillation detector, and the optimization of geometrical configuration in measuring material density using backscattered gamma radiation. The possibility of optimizing the modelling with regard to computer time was also studied, and the participants were informed about computerized nuclear data libraries. (M.D.)

  11. A study of the dielectric and magnetic properties of multiferroic materials using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    A. Sosa

    2012-03-01

    Full Text Available A study of the dielectric and magnetic properties of multiferroic materials using the Monte Carlo (MC) method is presented. Two different systems are considered: the first, ferroelectric-antiferromagnetic (FE-AFM), recently studied by X. S. Gao and J. M. Liu, and the second, antiferroelectric-ferromagnetic (AFE-FM). Based on the DIFFOUR-Ising hybrid microscopic model developed by Janssen, a Hamiltonian that takes into account the magnetoelectric coupling in both ferroic phases is proposed. The obtained results show that the existence of such coupling modifies the ferroelectric and magnetic ordering in both phases. Additionally, it is shown that the presence of a magnetic or an electric field influences the electric polarization and the magnetization, respectively, making the magnetoelectric effect evident.

  12. Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods

    CERN Document Server

    Ferraioli, Luigi; Plagnol, Eric

    2012-01-01

    We present a parameter estimation procedure based on a Bayesian framework, applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space-based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment to ensure effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm in an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy, but one moves in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...
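
    The two-stage annealing idea described above can be sketched generically. The following toy example is not the LISA Pathfinder code or its actual parameterization: the target density, temperature schedule, and step size are all illustrative. It shows a random-walk Metropolis-Hastings sampler whose acceptance rule is tempered during an initial heating phase so the chain explores widely before settling:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, n_anneal=0):
    """Random-walk Metropolis-Hastings with an initial "heating" phase:
    the log-posterior ratio is divided by a temperature that cools
    linearly from 10 down to 1 over the first n_anneal steps."""
    x, chain = x0, []
    for i in range(n_steps):
        T = max(1.0, 10.0 * (1.0 - i / n_anneal)) if n_anneal else 1.0
        y = x + random.gauss(0.0, step)
        # accept with probability min(1, exp((log_post(y) - log_post(x)) / T))
        if math.log(random.random()) < (log_post(y) - log_post(x)) / T:
            x = y
        chain.append(x)
    return chain

random.seed(1)
# toy target: a standard normal log-density (up to an additive constant)
chain = metropolis_hastings(lambda v: -0.5 * v * v, x0=5.0,
                            n_steps=20000, n_anneal=2000)
mean = sum(chain[5000:]) / len(chain[5000:])
print(round(mean, 2))
```

    Even started far from the mode (x0 = 5), the heated chain crosses the low-probability region early and the post-burn-in mean recovers the target mean near zero.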

  13. MAMONT program for neutron field calculation by the Monte Carlo method

    International Nuclear Information System (INIS)

    The MAMONT program (MAthematical MOdelling of Neutron Trajectories), designed for three-dimensional calculation of neutron transport by analogue and nonanalogue Monte Carlo methods in the energy range from 15 MeV down to thermal, is described. The program is written in FORTRAN and runs on the BESM-6 computer. Group constants of the library module are compiled from the ENDL-83, ENDF/B-4 and JENDL-2 files. Calculations for layered spherical, cylindrical and rectangular configurations are supported. During a calculation the program accumulates and averages slowing-down kinetics functionals (averaged logarithmic energy losses, slowing-down times, free paths, numbers of collisions, age), diffusion parameters, leakage spectra and fluxes, as well as the formation of separate isotopes over zones. 16 tabs

  14. Investigation of Confinement Induced Resonance in Atomic Waveguides with Different Geometries by Quantum Monte Carlo Methods

    CERN Document Server

    Azizi, Sajad

    2016-01-01

    We have investigated the quantum dynamics of two ultracold bosons inside an atomic waveguide for two different confinement geometries (cigar-shaped and toroidal waveguides) by quantum Monte Carlo methods. For quasi-1D gases, the confining potential of the waveguide leads to the so-called confinement-induced resonance (CIR), which drives a phase transition of the gas to the impenetrable bosonic regime (known as the TG gas). In this regime the bosons repel each other strongly and behave like fermions. We reproduce the CIR for a cigar-shaped waveguide and analyze the behavior of the system under different conditions. Moreover, our analysis demonstrates the appearance of a CIR for a toroidal waveguide. In particular, we show that the resonance position depends on the size of the waveguide, in contrast to cigar-shaped waveguides, for which it is universal.

  15. Investigation of physical regularities in gamma gamma logging of oil wells by Monte Carlo method

    International Nuclear Information System (INIS)

    Some results are given of calculations by the Monte Carlo method of specific problems of gamma-gamma density logging. The paper considers the influence of probe length and volume density of the rocks; the angular distribution of the scattered radiation incident on the instrument; the spectra of the radiation being recorded and of the source radiation; depths of surveys, the effect of the mud cake, the possibility of collimating the source radiation; the choice of source, initial collimation angles, the optimum angle of recording scattered gamma-radiation and the radiation discrimination threshold; and the possibility of determining the mineralogical composition of rocks in sections of oil wells and of identifying once-scattered radiation. (author)

  16. Calculation of narrow beam γ ray mass attenuation coefficients of absorbing medium by Monte Carlo method

    International Nuclear Information System (INIS)

    A mathematical model of particle transport was built by sampling the interaction paths of narrow-beam γ photons in the medium, according to the principles of interaction between γ photons and matter. A computer program written with LabWindows/CVI simulates the transport of γ photons in the medium and records the transmission probability of the photons together with the corresponding medium thickness, from which the narrow-beam γ-ray mass attenuation coefficients of the absorbing medium are calculated. The results show that the Monte Carlo method is a feasible way to calculate narrow-beam γ-ray mass attenuation coefficients of an absorbing medium. (authors)
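
    The sampling scheme the abstract describes — drawing photon free paths and recording the transmission probability versus thickness — can be sketched as follows. All numerical values (μ, thickness, photon count) are illustrative, and the linear attenuation coefficient stands in for the mass attenuation coefficient (they differ only by the material density):

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons):
    """Narrow-beam geometry: a photon leaves the beam at its first
    interaction, so it is counted as transmitted only if its sampled
    free path (exponential with mean 1/mu) exceeds the thickness."""
    passed = 0
    for _ in range(n_photons):
        path = -math.log(random.random()) / mu  # inverse-CDF sampling
        if path > thickness:
            passed += 1
    return passed / n_photons

random.seed(0)
mu_true = 0.2   # assumed linear attenuation coefficient, 1/cm
t = 3.0         # absorber thickness, cm
frac = transmitted_fraction(mu_true, t, 200000)
mu_est = -math.log(frac) / t  # recover mu from the simulated transmission
print(round(mu_est, 3))
```

    Since the transmitted fraction follows the Beer-Lambert law N/N0 = exp(-μt), inverting the simulated fraction recovers the input coefficient to within the Monte Carlo statistical error.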

  17. Generation of organic scintillators response function for fast neutrons using the Monte Carlo method

    International Nuclear Information System (INIS)

    A computer program (DALP), written in Fortran-4-G, has been developed using the Monte Carlo method to simulate the experimental techniques leading to the distribution of pulse heights due to monoenergetic neutrons reaching an organic scintillator. The pulse height distribution has been calculated for two different systems: 1) monoenergetic neutrons from a point source reaching the flat face of a cylindrical organic scintillator; 2) environmental monoenergetic neutrons randomly reaching either the flat or the curved face of the cylindrical organic scintillator. The program was developed for the NE-213 liquid organic scintillator but can easily be adapted to any other kind of organic scintillator. With this program one can determine the pulse height distribution for neutron energies ranging from 15 keV to 10 MeV. (Author)

  18. Investigation of Reliabilities of Bolt Distances for Bolted Structural Steel Connections by Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Ertekin Öztekin Öztekin

    2015-12-01

    Full Text Available The distances of bolts to each other and to the edges of connection plates are designed according to the minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading type, bolt type and plate thickness were taken as variable parameters, and the Monte Carlo Simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. All resulting reliability index values are presented in graphics and tables. The results obtained were compared with the values proposed by some structural codes, and evaluations of those comparisons were made. Finally, it was emphasized that it would be incorrect to use the same bolt distances in both traditional designs and designs requiring higher reliability levels.
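
    A minimal sketch of how a reliability index is obtained from Monte Carlo Simulation, assuming a simple limit state g = R − S (resistance minus load effect) with hypothetical normal distributions — none of the values below come from the cited study. Failures are counted, the failure probability pf is estimated, and the index is β = −Φ⁻¹(pf):

```python
import random
from statistics import NormalDist

def reliability_index(n_samples):
    """Crude Monte Carlo reliability analysis for the limit state
    g = R - S; failure occurs when g < 0. All distribution
    parameters are hypothetical."""
    failures = 0
    for _ in range(n_samples):
        R = random.gauss(60.0, 6.0)  # resistance
        S = random.gauss(40.0, 8.0)  # load effect
        if R - S < 0.0:
            failures += 1
    pf = failures / n_samples              # failure probability estimate
    return NormalDist().inv_cdf(1.0 - pf)  # beta = -Phi^{-1}(pf)

random.seed(2)
beta = reliability_index(200000)
print(round(beta, 2))  # analytic value for these parameters: 20/10 = 2.0
```

    For these normal inputs the exact index is β = (60 − 40)/√(6² + 8²) = 2.0, so the Monte Carlo estimate can be checked directly.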

  19. Application of multi-stage Monte Carlo method for solving machining optimization problems

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2014-08-01

    Full Text Available Enhancing overall machining performance implies optimization of machining processes, i.e. determination of the optimal combination of machining parameters. Optimization of machining processes is an active field of research where different optimization methods are used to determine an optimal combination of different machining parameters. In this paper, the multi-stage Monte Carlo (MC) method was employed to determine optimal combinations of machining parameters for six machining processes: drilling, turning, turn-milling, abrasive waterjet machining, electrochemical discharge machining and electrochemical micromachining. The optimization solutions obtained using the multi-stage MC method were compared with the solutions of past researchers obtained using meta-heuristic optimization methods, e.g. the genetic algorithm, simulated annealing, the artificial bee colony algorithm and teaching-learning-based optimization. The obtained results prove the applicability and suitability of the multi-stage MC method for solving machining optimization problems with up to four independent variables. Specific features, merits and drawbacks of the MC method are also discussed.
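
    The multi-stage MC idea — sample the search region uniformly, keep the incumbent best point, shrink the region around it, and repeat — can be sketched on a toy two-variable objective. The machining objectives and parameter ranges of the paper are not reproduced; all values here are illustrative:

```python
import random

def multistage_mc(f, bounds, n_stages=4, n_samples=2000, shrink=0.2):
    """Multi-stage Monte Carlo optimization: sample the current search
    region uniformly, keep the best point found so far, then contract
    the region around it and repeat."""
    lows = [lo for lo, hi in bounds]
    highs = [hi for lo, hi in bounds]
    best_x, best_f = None, float("inf")
    for _ in range(n_stages):
        for _ in range(n_samples):
            x = [random.uniform(lo, hi) for lo, hi in zip(lows, highs)]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        # shrink each coordinate range around the incumbent best point
        spans = [(hi - lo) * shrink for lo, hi in zip(lows, highs)]
        lows = [bx - s / 2 for bx, s in zip(best_x, spans)]
        highs = [bx + s / 2 for bx, s in zip(best_x, spans)]
    return best_x, best_f

random.seed(3)
# toy objective with known minimum 0 at (1, 2)
obj = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
x_opt, f_opt = multistage_mc(obj, [(-5.0, 5.0), (-5.0, 5.0)])
print(round(f_opt, 5))
```

    Each stage multiplies the search-region width by the shrink factor, so the expected precision improves geometrically with the number of stages at a fixed per-stage sampling budget.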

  20. Nonequilibrium hypersonic flows simulations with asymptotic-preserving Monte Carlo methods

    Science.gov (United States)

    Ren, Wei; Liu, Hong; Jin, Shi

    2014-12-01

    In rarefied gas dynamics, the DSMC method is one of the most popular numerical tools. It performs satisfactorily in simulating hypersonic flows surrounding re-entry vehicles and micro-/nano-flows. However, its computational cost is high, especially as Kn → 0. Even for flows in the near-continuum regime, pure DSMC simulations require considerable computational effort in most cases. Although several DSMC/NS hybrid methods have been proposed to address this, they still suffer from difficulties in the boundary treatment, which may cause nonphysical solutions. Filbet and Jin [1] proposed a framework of new numerical methods for the Boltzmann equation, called asymptotic-preserving (AP) schemes, whose computational cost remains affordable as Kn → 0. Recently, Ren et al. [2] realized AP schemes with Monte Carlo methods (AP-DSMC), which perform better than counterpart methods. In this paper, AP-DSMC is applied to simulating nonequilibrium hypersonic flows. Several numerical results are computed and analyzed to study the efficiency and the capability of capturing complicated flow characteristics.

  1. Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows

    Science.gov (United States)

    Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.

    2014-12-01

    For CO2 sequestration in deep saline aquifers, contaminant transport in the subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties, we establish a statistical description of those properties conditioned on existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing the displacement of fluids in porous media for different values of permeability and porosity. The time needed to generate a reliable MCMC chain with this algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners, based on simpler physics as well as on surrogate models, such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as a screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final-stage simulations. The huff-puff technique enables a better characterization of the subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
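
    The screening idea — rejecting most proposals with an inexpensive model before running the expensive fine-grid simulation — is the delayed-acceptance (two-stage) MCMC pattern. Below is a minimal sketch with toy one-dimensional "fine" and "coarse" posteriors standing in for the flow simulations; the paper's three stages and huff-puff screening step are not reproduced:

```python
import math
import random

def two_stage_mcmc(log_fine, log_coarse, x0, n_steps, step=1.0):
    """Two-stage (delayed-acceptance) MCMC: each proposal is screened
    first with a cheap coarse posterior; only proposals that pass are
    evaluated with the expensive fine posterior, and the second-stage
    acceptance ratio corrects for the screening so the fine posterior
    remains the stationary distribution."""
    x, lf_x = x0, log_fine(x0)
    chain, fine_evals = [], 0
    for _ in range(n_steps):
        y = x + random.gauss(0.0, step)
        dc = log_coarse(y) - log_coarse(x)
        # stage 1: screen with the coarse model
        if math.log(random.random()) < dc:
            # stage 2: evaluate the fine model and correct
            fine_evals += 1
            lf_y = log_fine(y)
            if math.log(random.random()) < (lf_y - lf_x) - dc:
                x, lf_x = y, lf_y
        chain.append(x)
    return chain, fine_evals

random.seed(8)
fine = lambda v: -0.5 * v * v              # "expensive" target: N(0, 1)
coarse = lambda v: -0.5 * (v / 1.2) ** 2   # cheap, slightly wider surrogate
chain, n_fine = two_stage_mcmc(fine, coarse, x0=4.0, n_steps=20000)
mean = sum(chain[2000:]) / len(chain[2000:])
print(round(mean, 2), n_fine)
```

    Proposals rejected at stage 1 never trigger a fine evaluation, so n_fine is strictly smaller than the number of steps while the chain still targets the fine posterior.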

  2. A deterministic alternative to the full configuration interaction quantum Monte Carlo method.

    Science.gov (United States)

    Tubman, Norm M; Lee, Joonho; Takeshita, Tyler Y; Head-Gordon, Martin; Whaley, K Birgitta

    2016-07-28

    The development of exponentially scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, is a useful algorithm that allows exact diagonalization through stochastically sampling determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, along with a stochastic projected wave function, to find the important parts of Hilbert space. However, the stochastic representation of the wave function is not required to search Hilbert space efficiently, and here we describe a highly efficient deterministic method that can achieve chemical accuracy for a wide range of systems, including the difficult Cr2 molecule. We demonstrate for systems like Cr2 that such calculations can be performed in just a few CPU hours, which makes it one of the most efficient and accurate methods that can attain chemical accuracy for strongly correlated systems. In addition, our method also allows efficient calculation of excited-state energies, which we illustrate with benchmark results for the excited states of C2. PMID:27475353

  3. Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method

    Science.gov (United States)

    Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.

    2000-07-01

    This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and demonstrate its application in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and then apply one of them to the SIRS model. The chosen working method is based on the Poisson process, in which a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events are the basic requirements. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted in accordance with aspects of the herd-immunity concept.
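
    The Poisson-process requirements listed above (an event hierarchy, exponential waiting times between events, independent events) are exactly those of a Gillespie-type simulation. A minimal sketch for the SIRS model follows, with illustrative rate constants rather than those of the paper:

```python
import math
import random

def sirs_gillespie(S, I, R, beta, gamma, xi, t_end):
    """Dynamical Monte Carlo (Gillespie-type) simulation of the SIRS
    model: each event is a Poisson process, the waiting time to the
    next event is exponential in the total rate, and the event type is
    drawn with probability proportional to its rate."""
    N, t = S + I + R, 0.0
    while t < t_end and I > 0:
        rates = [beta * S * I / N,  # infection:     S -> I
                 gamma * I,         # removal:       I -> R
                 xi * R]            # immunity loss: R -> S
        total = sum(rates)
        t += -math.log(random.random()) / total  # exponential waiting time
        u = random.random() * total
        if u < rates[0]:
            S, I = S - 1, I + 1
        elif u < rates[0] + rates[1]:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1
    return S, I, R

random.seed(4)
# illustrative parameters, not taken from the paper
S, I, R = sirs_gillespie(S=990, I=10, R=0,
                         beta=0.5, gamma=0.1, xi=0.05, t_end=200.0)
print(S + I + R)  # prints 1000: every event conserves the population
```

    Averaging many such stochastic trajectories is what gets compared against the deterministic (Runge-Kutta) solution of the SIRS rate equations.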

  4. EVALUATION OF AGILE METHODS AND IMPLEMENTATION

    OpenAIRE

    Hossain, Arif

    2015-01-01

    The concepts of agile development were introduced as programmers experienced various obstacles in building software. The traditional waterfall model proved inadequate as a process for developing software, so other development methods were introduced to mitigate its shortcomings. The purpose of this thesis is to study different agile methods and find out the best one for software development. Each important agile method offers ...

  5. Analysis of communication costs for domain decomposed Monte Carlo methods in nuclear reactor analysis

    International Nuclear Information System (INIS)

    A domain decomposed Monte Carlo communication kernel is used to carry out performance tests to establish the feasibility of using Monte Carlo techniques for practical Light Water Reactor (LWR) core analyses. The results of the prototype code are interpreted in the context of simplified performance models which elucidate key scaling regimes of the parallel algorithm.

  6. Criticality analysis of thermal reactors for two energy groups applying Monte Carlo and neutron Albedo method

    International Nuclear Information System (INIS)

    The Albedo method applied to criticality calculations for nuclear reactors is characterized by following the neutron currents, allowing detailed analysis of the physics of neutron interactions with the core-reflector set through determination of the probabilities of reflection, absorption, and transmission, and hence detailed assessment of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by excellent results presented in dissertations on thermal reactors and shielding, the methodology of the Albedo method is described for the criticality analysis of thermal reactors using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo KENO IV code, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons absorbed in the core without ever having entered the reflector was analyzed. The one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the diffusion method were used for comparison and analysis of the results obtained by the Albedo method. The keff results determined by the Albedo method for the type of reactor analyzed showed excellent agreement: relative errors in keff were smaller than 0.78% with respect to the ANISN code and smaller than 0.35% with respect to the diffusion method, showing the effectiveness of the Albedo method for criticality analysis. The ease of application, simplicity and clarity of the Albedo method make it a valuable instrument for neutronic calculations in nonmultiplying and multiplying media. (author)

  7. Coarse-grained computation for particle coagulation and sintering processes by linking Quadrature Method of Moments with Monte-Carlo

    International Nuclear Information System (INIS)

    The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.

  8. Non-Pilot-Aided Sequential Monte Carlo Method to Joint Signal, Phase Noise, and Frequency Offset Estimation in Multicarrier Systems

    Directory of Open Access Journals (Sweden)

    Christelle Garnier

    2008-05-01

    Full Text Available We address the problem of phase noise (PHN) and carrier frequency offset (CFO) mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE) and the intercarrier interference (ICI), which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate the PHN, the CFO, and the multicarrier signal in the time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering, commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER) and mean square error (MSE) to illustrate the efficiency and the robustness of the proposed algorithm.
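
    A generic bootstrap particle filter illustrates the predict/weight/resample structure underlying such a receiver. The Rao-Blackwellization and the approximate optimal importance function of the paper are not reproduced, and the scalar random-walk state below is only a stand-in for the phase-distortion dynamics; all parameter values are illustrative:

```python
import math
import random

def bootstrap_pf(obs, n_particles, q, r):
    """Minimal bootstrap particle filter (sequential Monte Carlo) for a
    scalar random-walk state observed in Gaussian noise: propagate each
    particle through the state model, weight by the observation
    likelihood, estimate the state, then resample to avoid weight
    degeneracy."""
    parts = [0.0] * n_particles
    estimates = []
    for y in obs:
        # predict: propagate particles through the random-walk model
        parts = [p + random.gauss(0.0, q) for p in parts]
        # weight: Gaussian likelihood of the observation
        w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]
        tot = sum(w)
        w = [wi / tot for wi in w]
        estimates.append(sum(p * wi for p, wi in zip(parts, w)))
        # resample (multinomial)
        parts = random.choices(parts, weights=w, k=n_particles)
    return estimates

random.seed(7)
# simulate a slowly drifting phase-like state and noisy observations
truth, x = [], 0.0
for _ in range(100):
    x += random.gauss(0.0, 0.05)
    truth.append(x)
obs = [t + random.gauss(0.0, 0.2) for t in truth]
est = bootstrap_pf(obs, n_particles=500, q=0.05, r=0.2)
rmse = (sum((e - t) ** 2 for e, t in zip(est, truth)) / 100) ** 0.5
print(round(rmse, 3))
```

    The filtered estimate tracks the state noticeably better than the raw observations (its RMSE is well below the observation noise standard deviation of 0.2).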

  9. Technical Note: Implementation of biological washout processes within GATE/GEANT4—A Monte Carlo study in the case of carbon therapy treatments

    Energy Technology Data Exchange (ETDEWEB)

    Martínez-Rovira, I., E-mail: immamartinez@gmail.com; Jouvie, C.; Jan, S. [Service Hospitalier Frédéric Joliot, Commissariat à l’énergie atomique et aux énergies alternatives (CEA/DSV/I2BM/SHFJ), 4 place du général Leclerc, 91401 Orsay Cedex (France)

    2015-04-15

    Purpose: The imaging of positron-emitting isotopes produced during patient irradiation is the only in vivo method used for hadrontherapy dose monitoring in clinics nowadays. However, the accuracy of this method is limited by the loss of signal due to the metabolic decay processes (biological washout). In this work, a generic modeling of washout was incorporated into the GATE simulation platform. Additionally, the influence of the washout on the β⁺ activity distributions in terms of absolute quantification and spatial distribution was studied. Methods: First, the irradiation of a human head phantom with a ¹²C beam, so that a homogeneous dose distribution was achieved in the tumor, was simulated. The generated ¹¹C and ¹⁵O distribution maps were used as β⁺ sources in a second simulation, where the PET scanner was modeled following a detailed Monte Carlo approach. The activity distributions obtained in the presence and absence of washout processes for several clinical situations were compared. Results: Results show that activity values are highly reduced (by a factor of 2) in the presence of washout. These processes have a significant influence on the shape of the PET distributions. Differences in the distal activity falloff position of 4 mm are observed for a tumor dose deposition of 1 Gy (T_ini = 0 min). However, in the case of high doses (3 Gy), the washout processes do not have a large effect on the position of the distal activity falloff (differences lower than 1 mm). The important role of the tumor washout parameters on the activity quantification was also evaluated. Conclusions: With this implementation, GATE/GEANT4 is the only open-source code able to simulate the full chain from the hadrontherapy irradiation to the PET dose monitoring including biological effects. Results show the strong impact of the washout processes, indicating that the development of better models and measurement of biological washout data are

  10. Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks

    KAUST Repository

    Ben Hammouda, Chiheb

    2015-05-12

    In biochemical systems, stochastic effects can be caused by the presence of small numbers of certain reactant molecules. In this setting, discrete state-space and stochastic simulation approaches have proved more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases the dynamics of fast and slow time scales can be well separated, a situation characterized by what is called stiffness. For such problems, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. Therefore, implicit tau-leap approximations were developed to improve the numerical stability and provide more efficient simulation algorithms for these systems. One of the interesting tasks for SRNs is to approximate the expected values of some observables of the process at a certain fixed time T. This can be achieved using Monte Carlo (MC) techniques. However, in a recent work, Anderson and Higham (2013) proposed a more computationally efficient method which combines the multi-level Monte Carlo (MLMC) technique with explicit tau-leap schemes. In this MSc thesis, we propose a new fast stochastic algorithm, particularly designed to address stiff systems, for approximating the expected values of some observables of SRNs. In fact, we take advantage of the idea of MLMC techniques and the drift-implicit tau-leap approximation to construct a drift-implicit MLMC tau-leap estimator. In addition to accurately estimating the expected values of a given observable of SRNs at a final time T, our proposed estimator ensures numerical stability with a lower cost than the MLMC explicit tau-leap algorithm for systems including simultaneously fast and slow species. The key contribution of our work is the coupling of two drift-implicit tau-leap paths, which is the basic brick for
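
    For context, the explicit tau-leap scheme that the MLMC estimator builds on can be sketched for the simplest possible network, a single decay reaction X → ∅ with propensity a(x) = c·x. The drift-implicit variant and the MLMC coupling are not shown, and all values are illustrative:

```python
import math
import random

def poisson(lam):
    """Poisson sampler (Knuth's multiplication method); adequate for
    the small means that occur in this example."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def explicit_tau_leap(x0, c, tau, t_end):
    """Explicit tau-leap for the decay reaction X -> 0 with propensity
    a(x) = c*x: over each step of length tau, the number of reaction
    firings is Poisson with mean a(x)*tau, capped so the molecule
    count stays non-negative."""
    x, t = x0, 0.0
    while t < t_end - 1e-12:
        x = max(0, x - poisson(c * x * tau))
        t += tau
    return x

random.seed(5)
runs = [explicit_tau_leap(1000, c=0.1, tau=0.1, t_end=10.0)
        for _ in range(200)]
mean = sum(runs) / len(runs)
print(round(mean, 1))  # exact decay mean is 1000*exp(-1) ≈ 367.9
```

    Averaging many leaped paths approximates the expected molecule count at time T; the MLMC idea then couples paths simulated with coarse and fine step sizes tau to reduce the cost of reaching a target accuracy.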

  11. Report of the AAPM Task Group No. 105: Issues associated with clinical implementation of Monte Carlo-based photon and electron external beam treatment planning

    International Nuclear Information System (INIS)

    The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and

  12. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  13. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    Science.gov (United States)

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  14. Comparison of ISO-GUM and Monte Carlo Method for Evaluation of Measurement Uncertainty

    International Nuclear Information System (INIS)

    To supplement the ISO-GUM method for the evaluation of measurement uncertainty, a simulation program using the Monte Carlo method (MCM) was developed, and the MCM and GUM methods were compared. The results are as follows: (1) even under a non-normal probability distribution of the measurement, MCM provides an accurate coverage interval; (2) even if a probability distribution that emerges from combining a few non-normal distributions looks normal, there are cases in which the actual distribution is not normal, and the non-normality can be determined from the probability distribution of the combined variance; and (3) if type-A standard uncertainties are involved in the evaluation of measurement uncertainty, GUM generally yields an undervalued coverage interval. However, this problem can be solved by the Bayesian evaluation of type-A standard uncertainty. In this case, the effective degrees of freedom for the combined variance are not required in the evaluation of expanded uncertainty, and the appropriate coverage factor for the 95% level of confidence was determined to be 1.96.
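
The contrast between the two approaches can be reproduced on a toy measurement model (assumed here: one normal and one uniform input quantity). The MCM coverage interval is read straight off the sampled distribution, while GUM expands a combined standard uncertainty with k = 1.96:

```python
import random

random.seed(42)

# Toy measurement model Y = X1 + X2: X1 normal, X2 uniform.
# MCM propagates the full distributions; GUM combines standard
# uncertainties in quadrature and assumes a normal output.
N = 200_000
u1, half_width = 0.5, 1.0     # std of X1; half-width of uniform X2
samples = [random.gauss(0.0, u1) + random.uniform(-half_width, half_width)
           for _ in range(N)]
samples.sort()

# MCM 95 % coverage interval: 2.5th .. 97.5th sample percentiles
lo = samples[int(0.025 * N)]
hi = samples[int(0.975 * N)]

# GUM expanded uncertainty: u_c in quadrature, coverage factor k = 1.96
u_c = (u1**2 + half_width**2 / 3) ** 0.5   # uniform variance = a^2 / 3
gum_hi = 1.96 * u_c

print(f"MCM 95% interval: [{lo:.3f}, {hi:.3f}]; GUM: +/-{gum_hi:.3f}")
```

Because the combined distribution here has lighter tails than a normal, the MCM interval comes out narrower than the GUM expanded interval, illustrating finding (1) of the record.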

  15. Monte Carlo implementation of Schiff's approximation for estimating radiative properties of homogeneous, simple-shaped and optically soft particles: Application to photosynthetic micro-organisms

    Science.gov (United States)

    Charon, Julien; Blanco, Stéphane; Cornet, Jean-François; Dauchet, Jérémi; El Hafi, Mouna; Fournier, Richard; Abboud, Mira Kaissar; Weitz, Sebastian

    2016-03-01

    In the present paper, Schiff's approximation is applied to the study of light scattering by large and optically-soft axisymmetric particles, with special attention to cylindrical and spheroidal photosynthetic micro-organisms. This approximation is similar to the anomalous diffraction approximation but includes a description of phase functions. Resulting formulations for the radiative properties are multidimensional integrals, the numerical resolution of which requires close attention. It is here argued that strong benefits can be expected from a statistical resolution by the Monte Carlo method. But designing such efficient Monte Carlo algorithms requires the development of non-standard algorithmic tricks using careful mathematical analysis of the integral formulations: the codes that we develop (and make available) include an original treatment of the nonlinearity in the differential scattering cross-section (squared modulus of the scattering amplitude) thanks to a double sampling procedure. This approach makes it possible to take advantage of recent methodological advances in the field of Monte Carlo methods, illustrated here by the estimation of sensitivities to parameters. Comparison with reference solutions provided by the T-Matrix method is presented whenever possible. Required geometric calculations are closely similar to those used in standard Monte Carlo codes for geometric optics by the computer-graphics community, i.e. calculation of intersections between rays and surfaces, which opens interesting perspectives for the treatment of particles with complex shapes.
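
The double-sampling trick mentioned above can be illustrated in isolation: a squared expectation is a nonlinear functional of the estimated quantity, so averaging squared samples is biased, while multiplying two independent samples is not. The toy "amplitude" below is an assumption for illustration, not the Schiff integrand:

```python
import random

random.seed(0)

# The squared modulus |E[s]|^2 is nonlinear in the expectation E[s].
# Averaging s_i^2 converges to (E[s])^2 + Var[s] (biased); the
# double-sampling trick draws two independent samples per event, since
# E[s * s'] = (E[s])^2 when s and s' are independent.
def amplitude_sample():
    # Toy stand-in for a sampled scattering-amplitude contribution
    # (a real code would sample ray paths through the particle).
    return 1.0 + random.gauss(0.0, 2.0)

N = 100_000
biased = sum(amplitude_sample() ** 2 for _ in range(N)) / N               # -> (E s)^2 + Var s
double = sum(amplitude_sample() * amplitude_sample() for _ in range(N)) / N  # -> (E s)^2

print(biased, double)   # biased ≈ 1 + 4 = 5, double ≈ 1
```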

  16. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Morgan C. White

    2000-07-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V&V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second

  17. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    International Nuclear Information System (INIS)

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V and V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to

  18. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles

    KAUST Repository

    Guerra, Marta L.

    2009-02-23

    We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^(-p). Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2) with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.
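
For readers unfamiliar with rejection-free dynamics, the bookkeeping can be sketched as a standard n-fold-way (BKL) loop: pick an event with probability proportional to its rate and advance the clock by an exponential waiting time, with no rejected moves. The rates below are arbitrary placeholders, not the power-law model of the paper:

```python
import random, bisect, math

random.seed(1)

# Rejection-free (BKL / n-fold way) dynamic Monte Carlo: instead of
# proposing moves and rejecting most of them, choose an event with
# probability proportional to its rate and advance time by an
# exponentially distributed increment with mean 1 / R_total.
rates = [10.0, 1.0, 0.1, 0.01]          # toy event rates (assumed)
cum, total = [], 0.0
for r in rates:
    total += r
    cum.append(total)                   # cumulative rates for sampling

counts = [0] * len(rates)
t = 0.0
steps = 100_000
for _ in range(steps):
    u = random.random() * total
    k = bisect.bisect_left(cum, u)      # event chosen ∝ its rate
    counts[k] += 1
    t += -math.log(random.random()) / total   # clock advance, no rejections

frac0 = counts[0] / steps               # should approach rates[0] / total ≈ 0.90
print(frac0, t / steps)
```

At low temperature the fast events dominate the rate table, which is exactly the regime where a naive Metropolis scheme would reject almost everything; the efficiency scaling quoted in the abstract quantifies that advantage.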

  19. QWalk: A Quantum Monte Carlo Program for Electronic Structure

    CERN Document Server

    Wagner, Lucas K; Mitas, Lubos

    2007-01-01

    We describe QWalk, a new computational package capable of performing Quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of Quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site http://www.qwalk.org

  20. Studies of criticality Monte Carlo method convergence: use of a deterministic calculation and automated detection of the transient

    International Nuclear Information System (INIS)

    Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burn-up profile, complete reactor core, ...) may induce biased estimations of k_eff or reaction rates. In order to improve the robustness of iterative Monte Carlo methods, coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to initialization in an output series, applied here to k_eff and the Shannon entropy. It relies on modeling stationary series by an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to every output of an iterative Monte Carlo calculation. Methods developed in this thesis are tested on different test cases. (author)
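
The transient-suppression idea can be caricatured on a synthetic k_eff-like series. This sketch replaces the AR(1)/Student-bridge machinery of the thesis with a much cruder two-sample mean test, so it only illustrates the shape of the procedure, not its statistics:

```python
import random, statistics

random.seed(3)

# Synthetic iterative-Monte-Carlo output: an exponential initialization
# transient on top of stationary noise around the true value 1.0.
n = 2000
series = [1.0 + 0.05 * (0.99 ** i) + random.gauss(0.0, 0.002) for i in range(n)]

def transient_end(xs, z_crit=2.0):
    """Discard leading points until the first and second halves of the
    remaining series have statistically indistinguishable means."""
    for start in range(0, len(xs) // 2, 10):
        tail = xs[start:]
        half = len(tail) // 2
        a, b = tail[:half], tail[half:]
        se = (statistics.pvariance(a) / len(a)
              + statistics.pvariance(b) / len(b)) ** 0.5
        if abs(statistics.mean(a) - statistics.mean(b)) < z_crit * se:
            return start
    return len(xs) // 2

cut = transient_end(series)
k_est = statistics.mean(series[cut:])      # estimate from the stationary part
print(cut, k_est)
```

Averaging the full series here would bias the estimate upward by the transient; the automated cut removes that bias without a user-chosen discard length, which is the point of the thesis's detection method.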

  1. SEMI-BLIND CHANNEL ESTIMATION OF MULTIPLE-INPUT/MULTIPLE-OUTPUT SYSTEMS BASED ON MARKOV CHAIN MONTE CARLO METHODS

    Institute of Scientific and Technical Information of China (English)

    Jiang Wei; Xiang Haige

    2004-01-01

    This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.

  2. Monte-Carlo Methods for Pricing European-Style Options

    Institute of Scientific and Technical Information of China (English)

    张丽虹

    2015-01-01

    We discuss Monte-Carlo methods for pricing European-style options. Based on the Black-Scholes option pricing model and risk-neutral valuation, we first discuss in detail how to use the Monte-Carlo method to compute the price of a standard European option. We then discuss how control variates and antithetic variates can be introduced to improve the accuracy of the Monte-Carlo method. Finally, we apply the proposed Monte-Carlo methods to price standard European options, European binary options, European lookback options, and European Asian options, and discuss the advantages and disadvantages of the relevant methods.
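
A minimal sketch of the standard-option case with antithetic variates, checked against the Black-Scholes closed form (all parameters are illustrative):

```python
import math, random

random.seed(7)

# Monte Carlo pricing of a European call under Black-Scholes dynamics,
# with antithetic variates as the variance-reduction device.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
N = 100_000

def terminal_price(z):
    # Risk-neutral terminal stock price for a standard normal draw z.
    return S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)

payoffs = []
for _ in range(N):
    z = random.gauss(0.0, 1.0)
    # Antithetic pair: average the payoff along z and -z.
    pair = 0.5 * (max(terminal_price(z) - K, 0.0)
                  + max(terminal_price(-z) - K, 0.0))
    payoffs.append(pair)

mc_price = math.exp(-r * T) * sum(payoffs) / N

# Closed-form Black-Scholes reference price.
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

print(mc_price, bs_price)   # both ≈ 10.45
```

The same simulation loop extends to binary, lookback, and Asian payoffs by replacing the payoff function with one evaluated on the simulated path.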

  3. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions

    Science.gov (United States)

    Ricketson, Lee

    2013-10-01

    We discuss the use of multilevel Monte Carlo (MLMC) schemes, originally introduced by Giles for financial applications, for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε^-3), for standard Langevin methods and binary collision algorithms, to the theoretically optimal scaling O(ε^-2) for the Milstein discretization, and to O(ε^-2 (log ε)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
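
The telescoping-sum structure of MLMC can be sketched for a scalar SDE. The key implementation point is that each level's fine and coarse paths share the same Brownian increments, so the level corrections have small variance and need few samples. The level budgets below are arbitrary, geometric Brownian motion stands in for the Langevin dynamics, and Euler-Maruyama is used throughout:

```python
import math, random

random.seed(11)

# MLMC estimate of E[S_T] for geometric Brownian motion,
# dS = mu*S dt + sigma*S dW, discretized by Euler-Maruyama.
mu, sigma, S0, T = 0.05, 0.2, 1.0, 1.0

def euler_pair(level):
    """One fine-path value (2**level steps) and its coupled coarse-path
    value (half the steps), driven by the SAME Brownian increments."""
    n_f = 2 ** level
    dt_f = T / n_f
    s_f = s_c = S0
    dw_carry = 0.0
    for i in range(n_f):
        dw = random.gauss(0.0, math.sqrt(dt_f))
        s_f += mu * s_f * dt_f + sigma * s_f * dw
        dw_carry += dw
        if i % 2 == 1:                  # two fine steps = one coarse step
            s_c += mu * s_c * 2 * dt_f + sigma * s_c * dw_carry
            dw_carry = 0.0
    return s_f, s_c

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
n0 = 20_000
estimate = sum(euler_pair(0)[0] for _ in range(n0)) / n0
for ell, n in [(1, 8_000), (2, 4_000), (3, 2_000), (4, 1_000)]:
    corr_sum = 0.0
    for _ in range(n):
        f, c = euler_pair(ell)
        corr_sum += f - c               # coupled level correction
    estimate += corr_sum / n

print(estimate)   # ≈ exp(mu * T) ≈ 1.0513
```

Most samples are spent on the cheap coarse level; the fine levels only correct small, low-variance differences, which is where the O(ε^-2) cost scaling comes from.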

  4. Statistical Modification Analysis of Helical Planetary Gears based on Response Surface Method and Monte Carlo Simulation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun; GUO Fan

    2015-01-01

    Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertainties in tooth modification amounts on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications on gear dynamics enhancement. By using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.

  5. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Godfrey, Andrew T [ORNL; Gehin, Jess C [ORNL; Bekar, Kursat B [ORNL; Celik, Cihangir [ORNL

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

  6. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard‐Jones (L‐J) and Buckingham exponential‐6 (exp‐6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the critical temperature of methane. Results were collected and compared to experimental data existing in the literature; both models showed good agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L‐J model only. Upon comparing results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be, hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid all kinds of problems caused by its dissolution during gas production and transportation. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
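
The canonical-ensemble machinery behind those isotherms is ordinary Metropolis sampling. A deliberately tiny sketch on a single harmonic coordinate (not an L-J or exp-6 fluid) shows the accept/reject core, with equipartition as an exact check:

```python
import random, math

random.seed(5)

# Bare-bones canonical-ensemble Metropolis sampler for one coordinate in
# a harmonic well U(x) = 0.5 * x**2 (reduced units, k_B = 1).
# Equipartition gives <U> = T / 2, an exact check of the sampler.
T = 1.5
beta = 1.0 / T
x = 0.0
step = 1.0                   # maximum trial displacement
n_eq, n_sample = 10_000, 200_000
acc = 0
u_sum = 0.0
for i in range(n_eq + n_sample):
    x_new = x + random.uniform(-step, step)
    dU = 0.5 * x_new**2 - 0.5 * x**2
    # Metropolis criterion: accept downhill moves, uphill with exp(-beta*dU)
    if dU <= 0.0 or random.random() < math.exp(-beta * dU):
        x = x_new
        acc += 1
    if i >= n_eq:            # accumulate only after equilibration
        u_sum += 0.5 * x**2

u_mean = u_sum / n_sample
print(u_mean, T / 2)         # <U> should approach T / 2 = 0.75
```

A fluid simulation replaces the single coordinate with N particle positions and dU with the change in the L-J or exp-6 pair energy, but the accept/reject step is identical.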

  7. Monte Carlo Methods for Top-k Personalized PageRank Lists and Name Disambiguation

    CERN Document Server

    Avrachenkov, Konstantin; Nemirovsky, Danil A; Smirnova, Elena; Sokol, Marina

    2010-01-01

    We study the problem of quick detection of top-k Personalized PageRank lists. This problem has a number of important applications, such as finding local cuts in large graphs, estimation of similarity distance, and name disambiguation. In particular, we apply our results to construct efficient algorithms for the person name disambiguation problem. We argue that when finding top-k Personalized PageRank lists two observations are important. First, it is crucial to quickly detect the top-k most important neighbours of a node, while the exact order in the top-k list as well as the exact values of PageRank are far less crucial. Second, a small number of wrong elements in a top-k list does not really degrade its quality, but tolerating them can lead to significant computational savings. Based on these two key observations we propose Monte Carlo methods for fast detection of top-k Personalized PageRank lists. We provide performance evaluation of the proposed methods and supply stopping criteria. Then, we apply ...
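
The first observation, that visit frequencies settle the top-k ranking long before the PageRank values themselves converge, is easy to demonstrate with terminating random walks. The graph and walk count below are assumptions for illustration:

```python
import random
from collections import Counter

random.seed(13)

# Monte Carlo Personalized PageRank: run short random walks from the seed
# node, restarting (terminating) with probability alpha at each step.
# Normalized visit frequencies estimate the PPR vector; the induced
# ranking stabilizes much faster than the values themselves.
graph = {                      # small assumed example graph
    0: [1, 2],
    1: [2, 3],
    2: [0],
    3: [0, 4],
    4: [0],
}
alpha = 0.15                   # restart (teleport) probability
seed_node = 0
walks = 50_000

visits = Counter()
for _ in range(walks):
    node = seed_node
    while True:
        visits[node] += 1
        if random.random() < alpha:
            break              # walk terminates; next walk restarts at seed
        node = random.choice(graph[node])

total = sum(visits.values())
ppr = {v: c / total for v, c in visits.items()}
top2 = [v for v, _ in sorted(ppr.items(), key=lambda kv: -kv[1])[:2]]
print(top2, ppr)
```

Far fewer walks than this already pin down the top-2 set reliably even while the individual PPR estimates still fluctuate, which is the computational saving the paper exploits.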

  8. Systematic hierarchical coarse-graining with the inverse Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Lyubartsev, Alexander P., E-mail: alexander.lyubartsev@mmk.su.se [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Naômé, Aymeric, E-mail: aymeric.naome@unamur.be [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Vercauteren, Daniel P., E-mail: daniel.vercauteren@unamur.be [UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Laaksonen, Aatto, E-mail: aatto@mmk.su.se [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Science for Life Laboratory, 17121 Solna (Sweden)

    2015-12-28

    We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.

  9. Systematic hierarchical coarse-graining with the inverse Monte Carlo method

    Science.gov (United States)

    Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto

    2015-12-01

    We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
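
In one dimension the inverse idea reduces to a familiar loop: sample with the current potential, compare the sampled distribution to the target, and correct the potential by the logarithmic mismatch. The sketch below uses this simplest (iterative-Boltzmann-like) update on a toy distribution, not the full IMC cross-correlation-matrix update of Lyubartsev and Laaksonen:

```python
import math, random

random.seed(23)

# 1D caricature of inverse coarse-graining: find a binned potential U(x)
# whose Metropolis samples reproduce a target distribution (standing in
# for the target radial distribution function).
T = 1.0
nbins, lo, hi = 40, -2.0, 2.0
width = (hi - lo) / nbins
centers = [lo + (i + 0.5) * width for i in range(nbins)]
p_target = [math.exp(-(x**2) / T) for x in centers]     # Gaussian target
norm = sum(p_target)
p_target = [p / norm for p in p_target]

U = [0.0] * nbins                                        # flat initial guess

def sample_hist(U, nsteps=60_000):
    """Metropolis-sample exp(-U/T) on [lo, hi] and return a smoothed,
    normalized histogram (pseudocounts avoid log(0) in the update)."""
    counts = [0] * nbins
    x = 0.0
    for _ in range(nsteps):
        x_new = x + random.uniform(-0.5, 0.5)
        if lo < x_new < hi:                              # hard walls outside
            b, bn = int((x - lo) / width), int((x_new - lo) / width)
            dU = U[bn] - U[b]
            if dU <= 0 or random.random() < math.exp(-dU / T):
                x = x_new
        counts[int((x - lo) / width)] += 1
    total = sum(counts)
    return [(c + 0.5) / (total + 0.5 * nbins) for c in counts]

for _ in range(8):                                       # inverse iterations
    p_sim = sample_hist(U)
    # Raise U where we oversample, lower it where we undersample.
    U = [u + T * math.log(ps / pt) for u, ps, pt in zip(U, p_sim, p_target)]

p_final = sample_hist(U)
err = max(abs(ps - pt) for ps, pt in zip(p_final, p_target))
print(err)   # small once the potential has converged
```

The real IMC method replaces the per-bin logarithmic correction with a Newton-like step using the cross-correlation matrix of the degrees of freedom, which converges faster for coupled observables.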

  10. Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    KRSTIVOJEVIC, J. P.

    2015-08-01

    The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the current transformer performance, the uncertain input data for the CT model were obtained by applying the MC method. In this way, different levels of remanent flux in the CT core are taken into consideration. The generated CT secondary currents were used to test an algorithm for REF protection based on phase comparison in the time domain. On the basis of the obtained results, a method of adjusting the triggering threshold in order to ensure safe operation during transients, and thereby improve the algorithm's security, has been proposed. The obtained results indicate that power transformer REF protection would be enhanced by using the proposed adjustment of the triggering threshold in the algorithm based on phase comparison in the time domain.

  11. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
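
The step that avoids the redundant inner scenarios is a regression: discounted future values are projected onto basis functions of the current state, in the spirit of Longstaff-Schwartz. A toy version for the exposure of a European call (assumed GBM parameters and cubic polynomial basis, not the swap portfolio of the paper):

```python
import math
import numpy as np

rng = np.random.default_rng(17)

# Least-square Monte Carlo exposure estimate: regress realized discounted
# payoffs on basis functions of the time-t state S_t, giving E[V_T | S_t]
# on every path without any nested ("inner") simulation.
S0, K, r, sigma, t, T = 100.0, 100.0, 0.05, 0.2, 0.5, 1.0
n = 200_000

z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)
S_t = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z1)
S_T = S_t * np.exp((r - 0.5 * sigma**2) * (T - t) + sigma * math.sqrt(T - t) * z2)
y = np.exp(-r * (T - t)) * np.maximum(S_T - K, 0.0)   # discounted future payoff

coeffs = np.polyfit(S_t, y, 3)        # cubic basis {1, S, S^2, S^3}
exposure = np.polyval(coeffs, S_t)    # regression estimate of E[V_T | S_t]

# Black-Scholes closed form for the same conditional expectation, as a check.
norm_cdf = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))
tau = T - t
d1 = (np.log(S_t / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
d2 = d1 - sigma * math.sqrt(tau)
truth = S_t * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

corr = float(np.corrcoef(exposure, truth)[0, 1])
print(corr, float(exposure.mean()), float(truth.mean()))
```

One regression over the outer paths replaces an inner simulation per path, which is the convergence-speed advantage the abstract refers to.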

  12. Application of the measurement-based Monte Carlo method in nasopharyngeal cancer patients for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for nasopharyngeal carcinoma (NPC) patients treated with intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open-field phase-space file for IMRT applications. Dose differences were observed at the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. Highlights: (1) The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. (2) 3D dose distributions for NPC patients have been verified by the Monte Carlo method. (3) Doses predicted by the Monte Carlo method matched closely with those by the TPS. (4) The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. (5) Critical organ doses should be confirmed to avoid overdose to normal organs.

  13. A practical cone-beam CT scatter correction method with optimized Monte Carlo simulations for image-guided radiation therapy

    Science.gov (United States)

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-05-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the

  14. Simulations of background characteristics of HPGe detectors operating in shallow underground using the Monte Carlo method

    International Nuclear Information System (INIS)

    The Monte Carlo codes GEANT4 and MUSIC have been used to calculate background components of low-level HPGe gamma-ray spectrometers operating in a shallow underground laboratory. The simulated background gamma-ray spectra were comparable with spectra measured at the Ogoya underground laboratory, operating at a depth of 270 m w.e. (water equivalent). The Monte Carlo simulations have proved to be a useful approach for estimating the background characteristics of HPGe spectrometers before their construction. (author)

  15. Inchworm Monte Carlo for exact non-adiabatic dynamics. II. Benchmarks and comparison with established methods

    CERN Document Server

    Chen, Hsing-Ta; Reichman, David R

    2016-01-01

    In this second paper of a two-part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.

  16. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    Science.gov (United States)

    Wada, Takao

    2014-07-01

    A particle motion considering the thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem with thermophoresis simulation is the computation time, which is proportional to the collision frequency. Note that the time step interval becomes very small for simulations considering the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, which is the number of molecules colliding with the particle in one collision event. A large time step interval can then be adopted by virtue of the collision weight factor. Furthermore, this time step interval is about a million times longer than the conventional time step interval of the DSMC method when the particle size is 1 μm, so the computation time becomes about one-millionth. We simulate the graphite particle motion considering the thermophoretic force with DSMC-Neutrals (the Particle-PLUS neutral module) using the above collision weight factor, where DSMC-Neutrals is commercial software adopting the DSMC method. The size and shape of the particle are 1 μm and a sphere, respectively. Particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result for the continuum limit is the same as Waldmann's result.
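
The million-fold claim is simple kinetic-theory arithmetic: a micron-sized particle in a mTorr gas suffers on the order of 10^10 molecule collisions per second, so resolving one collision per event forces a ~10^-10 s step, while a collision weight factor W enlarges the step by W. All numbers below are illustrative assumptions, not taken from the paper:

```python
import math

# Back-of-envelope collision rate of Ar molecules with a 1 µm sphere.
k_B = 1.380649e-23            # J/K
T = 300.0                     # K
p = 0.1333                    # Pa (≈ 1 mTorr)
m = 6.63e-26                  # kg, mass of an Ar atom
d_p = 1e-6                    # particle diameter, m

n = p / (k_B * T)                                   # number density
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))     # mean thermal speed
area = math.pi * d_p**2                             # sphere surface area
flux = 0.25 * n * v_mean                            # kinetic-theory wall flux
nu = flux * area                                    # molecule hits per second

dt_single = 1.0 / nu          # step resolving one collision per event
W = 1e6                       # collision weight factor: group W hits per event
dt_weighted = W / nu          # enlarged step with weighted collisions

print(f"{nu:.3e} hits/s; dt per collision {dt_single:.3e} s; "
      f"weighted dt {dt_weighted:.3e} s")
```

With these assumed conditions nu is of order 10^10 s^-1, so grouping a million collisions per event moves the step from ~10^-10 s to ~10^-4 s, matching the paper's stated speedup.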

  17. Multilevel Monte Carlo methods using ensemble level mixed MsFEM for two-phase flow and transport simulations

    KAUST Repository

    Efendiev, Yalchin R.

    2013-08-21

    In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region, where the basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated by the LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots, computed from different realizations, that one uses to precompute the multiscale basis. In this paper, ensemble level multiscale methods are used within multilevel Monte Carlo methods (Giles 2008a, b; Oper. Res. 56(3):607-617).
In multilevel Monte Carlo methods, more accurate
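
    As a hedged illustration of the multilevel idea only (not of the MsFEM construction), the sketch below telescopes a geometric-Brownian-motion expectation across Euler discretization levels that share Brownian increments; all parameter values are invented for the toy problem:

```python
import math
import random

S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0  # toy GBM parameters (assumed)

def level_sample(l, rng):
    """Return P_f - P_c for level l (P_c = 0 at l = 0), where P = S_T from
    an Euler scheme with 2**l steps; the coarse path reuses the fine
    path's Brownian increments, which keeps the variance of the
    correction terms small."""
    nf = 2 ** l
    dtf = T / nf
    dW = [rng.gauss(0.0, math.sqrt(dtf)) for _ in range(nf)]
    Sf = S0
    for w in dW:
        Sf += r * Sf * dtf + sigma * Sf * w
    if l == 0:
        return Sf
    Sc, dtc = S0, T / (nf // 2)
    for i in range(0, nf, 2):
        Sc += r * Sc * dtc + sigma * Sc * (dW[i] + dW[i + 1])
    return Sf - Sc

def mlmc_estimate(L, N, seed=0):
    """Telescoping MLMC estimator: sum over levels of the mean correction,
    with N[l] samples on level l (fewer samples on the expensive levels)."""
    rng = random.Random(seed)
    return sum(sum(level_sample(l, rng) for _ in range(N[l])) / N[l]
               for l in range(L + 1))
```

    Most samples are spent on the cheap coarse level, which is the source of the MLMC speedup; the exact expectation for this toy problem is exp(r*T) ≈ 1.0513.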

  18. Evaluation of the scattered radiation components produced in a gamma camera using Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Polo, Ivon Oramas, E-mail: ivonoramas67@gmail.com [Department of Nuclear Engineering, Faculty of Nuclear Sciences and Technologies, Higher Institute of Applied Science and Technology (InSTEC), La Habana (Cuba)

    2014-07-01

    Introduction: this paper presents a simulation for evaluation of the scattered radiation components produced in a PARK gamma camera using the Monte Carlo code SIMIND. It simulates a whole-body study with the MDP (methylene diphosphonate) radiopharmaceutical based on the Zubal anthropomorphic phantom, with some spinal lesions. Methods: the simulation compared three configurations for the detected photons. The corresponding energy spectra were obtained using a Low Energy High Resolution (LEHR) collimator. The parameters related to the interactions, the fraction of events in the energy window, the simulated events of the spectrum, and the scatter events were calculated. Results: the simulation confirmed that images free of the influence of scattering events record a higher number of valid events, which improves their statistical quality. A comparison among different collimators was made: the parameters and the detector energy spectrum were calculated for each simulation configuration with these collimators using {sup 99m}Tc. Conclusion: the simulation corroborated that the LEHR collimator has higher sensitivity and the HEHR collimator lower sensitivity when used with low-energy photons. (author)

  19. A Monte-Carlo Method for Making SDSS $u$-Band Magnitude more accurate

    CERN Document Server

    Gu, Jiayin; Zuo, Wenbo; Jing, Yingjie; Wu, Zhenyu; Ma, Jun; Zhou, Xu

    2016-01-01

    We develop a new Monte-Carlo-based method to convert the SDSS (Sloan Digital Sky Survey) $u$-band magnitude to the SCUSS (South Galactic Cap of $u$-band Sky Survey) $u$-band magnitude. Owing to the higher accuracy of the SCUSS $u$-band measurements, the converted $u$-band magnitude becomes more accurate compared with the original SDSS $u$-band magnitude, in particular at the faint end. The average $u$ (both SDSS and SCUSS) magnitude error of numerous main-sequence stars with $0.2

  20. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. We discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞.
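
    The reweighting step the authors build on can be illustrated in isolation. The sketch below is plain single-histogram (Ferrenberg-Swendsen) reweighting of an energy time series; the multihistogram combination and the extended-ensemble machinery of the paper are not reproduced:

```python
import math

def reweight_mean_energy(energies, beta0, beta):
    """Re-estimate <E> at inverse temperature `beta` from samples drawn
    at `beta0`, weighting each sample by exp(-(beta - beta0) * E).
    The largest exponent is factored out for numerical stability."""
    d = beta - beta0
    shift = max(-d * e for e in energies)
    w = [math.exp(-d * e - shift) for e in energies]
    return sum(e * wi for e, wi in zip(energies, w)) / sum(w)
```

    The estimate is reliable only while |beta - beta0| is small enough that the sampled histogram still overlaps the target Boltzmann distribution, which is exactly what the multihistogram combination is designed to extend.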

  1. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    International Nuclear Information System (INIS)

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.
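
    A minimal Monte Carlo sketch of the separation principle (not of the model used in this work, whose details are not given here): a volatile species hops down a column with a linear temperature gradient, and its adsorption residence time grows exponentially as the local temperature falls, so species with different adsorption enthalpies deposit at different positions. All parameter values below are illustrative assumptions.

```python
import math
import random

R = 8.314  # gas constant, J/(mol K)

def deposition_position(dH_ads, t_total, rng,
                        length=1.0, T_hot=1200.0, T_cold=300.0,
                        n_hops=10000, tau0=1e-13):
    """Toy thermochromatography random walk: a molecule advances down a
    column in n_hops equal steps; each step adds an adsorption residence
    time tau0 * exp(-dH_ads / (R * T(x))) (dH_ads < 0 for adsorption),
    jittered by a random factor. Returns the position reached when the
    time budget t_total is exhausted (the deposition position)."""
    x, t = 0.0, 0.0
    dx = length / n_hops
    while x < length:
        T = T_hot + (T_cold - T_hot) * (x / length)
        t += tau0 * math.exp(-dH_ads / (R * T)) * (1.0 + rng.random())
        if t > t_total:
            return x
        x += dx
    return length
```

    More strongly adsorbed species (more negative dH_ads) deposit closer to the hot end, which is the chemical fingerprint thermochromatography exploits.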

  3. A new method for RGB to CIELAB color space transformation based on Markov chain Monte Carlo

    Science.gov (United States)

    Chen, Yajun; Liu, Ding; Liang, Junli

    2013-10-01

    During printing quality inspection, the measurement of color error is an important task. However, the RGB color space is device-dependent; RGB colors captured by a CCD camera must usually be transformed into the CIELAB color space, which is perceptually uniform and device-independent. To cope with this problem, a Markov chain Monte Carlo (MCMC) based algorithm for the RGB to CIELAB color space transformation is proposed in this paper. Firstly, modeling color targets and testing color targets are established, used in the modeling and performance-testing processes respectively. Secondly, we derive a Bayesian model for estimating the coefficients of a polynomial that describes the relation between the RGB and CIELAB color spaces. Thirdly, a Markov chain is set up based on the Gibbs sampling algorithm (one of the MCMC algorithms) to estimate the coefficients of the polynomial. Finally, the color difference of the testing color targets is computed to evaluate the performance of the proposed method. The experimental results showed that the nonlinear polynomial regression based on the MCMC algorithm is effective: its performance is similar to the least-squares approach, and it can accurately model the RGB to CIELAB color space conversion and guarantee the color error evaluation for a printing quality inspection system.
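
    As a stripped-down illustration of the MCMC estimation step, the sketch below samples the coefficients of a one-dimensional polynomial y = c0 + c1*x with a random-walk Metropolis chain under a Gaussian likelihood. The paper itself uses Gibbs sampling and multivariate RGB-to-CIELAB polynomials, so everything here is a simplified stand-in:

```python
import math
import random

def metropolis_polyfit(xs, ys, n_iter=20000, step=0.05, seed=0):
    """Posterior-mean estimate of (c0, c1) for y = c0 + c1*x under a
    Gaussian likelihood (assumed noise sigma) and flat prior, via
    random-walk Metropolis; the second half of the chain is kept as
    approximate posterior samples."""
    rng = random.Random(seed)
    sigma = 0.1  # assumed measurement noise

    def nll(c):  # negative log-likelihood up to a constant
        return sum((y - (c[0] + c[1] * x)) ** 2
                   for x, y in zip(xs, ys)) / (2.0 * sigma ** 2)

    c = [0.0, 0.0]
    cur = nll(c)
    kept = []
    for i in range(n_iter):
        prop = [c[0] + rng.gauss(0.0, step), c[1] + rng.gauss(0.0, step)]
        new = nll(prop)
        # short-circuit keeps exp() from overflowing on big improvements
        if new < cur or rng.random() < math.exp(cur - new):
            c, cur = prop, new
        if i >= n_iter // 2:
            kept.append(list(c))
    n = len(kept)
    return sum(s[0] for s in kept) / n, sum(s[1] for s in kept) / n
```

    The posterior mean here should land near the least-squares fit, mirroring the paper's observation that the MCMC regression performs similarly to least squares.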

  4. Development of a software package for solid-angle calculations using the Monte Carlo method

    Science.gov (United States)

    Zhang, Jie; Chen, Xiulian; Zhang, Changsheng; Li, Gang; Xu, Jiayun; Sun, Guangai

    2014-02-01

    Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, and they are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, into which a new type of variance reduction technique has been integrated. The package, developed under the Microsoft Foundation Classes (MFC) environment in Microsoft Visual C++, has a graphical user interface with a visualization function built on OpenGL. One advantage of the proposed software package is that it can calculate, without any difficulty, the solid angle subtended by a detector of various geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) at a point, circular or cylindrical source. The results obtained from the proposed software package were compared with those obtained from previous studies and with Geant4 calculations, showing that the package produces accurate solid-angle values with a greater computation speed than Geant4.
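
    The core of such a calculation can be sketched in a few lines: sample isotropic directions from the source point and count the fraction that intersect the detector face. The disk target below is a hypothetical example; the package described here handles cylinders and prisms and adds a variance reduction technique that is not reproduced:

```python
import math
import random

def mc_solid_angle(hit, n, rng):
    """Estimate the solid angle subtended by a surface at the origin:
    sample isotropic directions and count those for which `hit` is True.
    Omega = 4*pi * hits / n."""
    hits = 0
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        hits += hit((s * math.cos(phi), s * math.sin(phi), z))
    return 4.0 * math.pi * hits / n

def disk_hit(direction, dist=1.0, radius=1.0):
    """Ray from the origin hits a disk of the given radius, centred on
    the z axis at z = dist."""
    dx, dy, dz = direction
    if dz <= 0.0:
        return False
    t = dist / dz  # parameter where the ray crosses the disk plane
    return (t * dx) ** 2 + (t * dy) ** 2 <= radius ** 2
```

    For a point source on the axis of a disk the exact value 2*pi*(1 - d/sqrt(d^2 + r^2)) is available, which makes this geometry a convenient correctness check.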

  5. Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer: I. Algorithms and numerical methods

    OpenAIRE

    Harries, Tim J.

    2015-01-01

    We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since the new method is computationally demanding we have developed two ...

  6. Comprehensive modeling of special nuclear materials detection using three-dimensional deterministic and Monte Carlo methods

    Science.gov (United States)

    Ghita, Gabriel M.

    Our study aims to design a useful neutron signature characterization device based on 3He detectors, a standard neutron detection methodology used in homeland security applications. The research involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish these goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. The computational model findings were subsequently validated through experimental measurements. The achieved results allowed us to design, build, and laboratory-test a nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source makes possible testing with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential, with specific focus on establishing the limits of He-3 spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (plutonium and uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from previous studies, the design of a He-3 spectroscopy system neutron detector, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest.
This was accomplished by replacing ideal filters with real materials, and comparing reaction

  7. Range verification methods in particle therapy: underlying physics and Monte Carlo modelling

    Directory of Open Access Journals (Sweden)

    Aafke Christine Kraan

    2015-07-01

    Full Text Available Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as a function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in-vivo monitoring of the particle range can be performed by detecting secondary radiation emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including beta+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue: correctly modelling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects of modelling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, followed by a summary of some widely used MC codes in hadron therapy. Then we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.

  8. Stochastic geometrical model and Monte Carlo optimization methods for building reconstruction from InSAR data

    Science.gov (United States)

    Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan

    2015-10-01

    Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and formulate the reconstruction as maximum a posteriori (MAP) estimation carried out through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of intensity, interferometric phase and coherence of each region are explored respectively, and are included as region terms. Roofs are not considered directly, as in most cases they are mixed with the walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms, together with an edge term related to the contours of layover and corner line, are taken into consideration. In the optimization step, special transition kernels are designed in order to achieve convergent reconstruction outputs and avoid local extrema. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.

  9. Method to implement the CCD timing generator based on FPGA

    Science.gov (United States)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement CCD timing generators based on FPGAs and VHDL. This paper presents the principles and implementation details of the method. Taking a camera we developed as an example, we introduce the structure and the input and output clocks/signals of the timing generator implemented in the camera. The generator is composed of a top module and a bottom module; the bottom module is made up of four sub-modules which correspond to four different operation modes. The modules are implemented by five VHDL programs, and frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which controls the generator. Some test results are presented at the end.

  10. Exposure-response modeling methods and practical implementation

    CERN Document Server

    Wang, Jixian

    2015-01-01

    Discover the Latest Statistical Approaches for Modeling Exposure-Response Relationships. Written by an applied statistician with extensive practical experience in drug development, Exposure-Response Modeling: Methods and Practical Implementation explores a wide range of topics in exposure-response modeling, from traditional pharmacokinetic-pharmacodynamic (PKPD) modeling to other areas in drug development and beyond. It incorporates numerous examples and software programs for implementing novel methods. The book describes using measurement

  11. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    Science.gov (United States)

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells as small as 0.4×0.4×2.0 mm(3) in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, such miniaturization of the voxel size increases the calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of the dose estimation.

  12. Uncertainty Determination for Aeroheating in Uranus and Saturn Probe Entries by the Monte Carlo Method

    Science.gov (United States)

    Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.

    2013-01-01

    The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) computational fluid dynamics code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of the present best practices for input parameters (e.g. transport coefficients and vibrational relaxation time) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty determined primarily by diffusion and the H2 recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as much as 18%. The catalytic wall model can contribute more than a threefold change in heat flux and a 20% variation in film coefficient. Therefore, coupled material-response/fluid-dynamics models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver depends on shock temperature/velocity, changing from boundary layer thermal conductivity to diffusivity and then to shock layer ionization rate as velocity increases. While
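
    The sampling-and-correlation machinery of such a study can be sketched generically. The surrogate heat-flux formula and the input uncertainty ranges below are invented placeholders, not the DPLR model or the paper's values:

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def heat_flux(diffusivity, recomb_rate):
    # hypothetical surrogate: strong sqrt dependence on the diffusion
    # coefficient, weak logarithmic dependence on the recombination rate
    return 100.0 * math.sqrt(diffusivity) * (1.0 + 0.05 * math.log(recomb_rate))

rng = random.Random(0)
diffs, rates, qs = [], [], []
for _ in range(5000):
    d = math.exp(rng.gauss(0.0, 0.2))   # ~±20% lognormal input uncertainty
    k = math.exp(rng.gauss(0.0, 1.0))   # factor-of-e rate uncertainty
    diffs.append(d)
    rates.append(k)
    qs.append(heat_flux(d, k))

mean_q = sum(qs) / len(qs)
two_sigma = 2.0 * math.sqrt(sum((q - mean_q) ** 2 for q in qs) / len(qs)) / mean_q
```

    Ranking |pearson(input, qs)| across the sampled inputs then identifies which uncertain parameter drives the output spread, which is the role the correlation coefficients play in the study above.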

  13. A method for tuning parameters of Monte Carlo generators and a determination of the unintegrated gluon density

    International Nuclear Information System (INIS)

    A method for tuning parameters in Monte Carlo generators is described and applied to a specific case. The method works in the following way: each observable is generated several times using different values of the parameters to be tuned. The output is then approximated by some analytic form to describe the dependence of the observables on the parameters. This approximation is used to find the values of the parameter that give the best description of the experimental data. This results in significantly faster fitting compared to an approach in which the generator is called iteratively. As an application, we employ this method to fit the parameters of the unintegrated gluon density used in the Cascade Monte Carlo generator, using inclusive deep inelastic data measured by the H1 Collaboration. We discuss the results of the fit, its limitations, and its strong points. (orig.)
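
    The core of the procedure — run the generator at a few parameter settings, approximate the observable analytically, then fit the data against the approximation instead of re-running the generator — can be sketched as follows. The quadratic observable and the grid search are illustrative stand-ins for the paper's parameterization and minimizer:

```python
def quadratic_through(ps, ys):
    """Exact Lagrange quadratic through three (parameter, observable)
    pairs, i.e. through three generator runs."""
    (p0, p1, p2), (y0, y1, y2) = ps, ys

    def q(p):
        return (y0 * (p - p1) * (p - p2) / ((p0 - p1) * (p0 - p2))
                + y1 * (p - p0) * (p - p2) / ((p1 - p0) * (p1 - p2))
                + y2 * (p - p0) * (p - p1) / ((p2 - p0) * (p2 - p1)))
    return q

def best_parameter(surrogate, measured, lo, hi, n=1000):
    """Grid scan of the surrogate for the parameter value whose predicted
    observable is closest to the measured value; no further generator
    calls are needed during the fit."""
    grid = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return min(grid, key=lambda p: (surrogate(p) - measured) ** 2)
```

    Because the expensive generator is evaluated only at the interpolation nodes, the fit itself is cheap, which is the speedup the abstract describes over calling the generator iteratively.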

  14. Improvement of the symbolic Monte-Carlo method for the transport equation: P1 extension and coupling with diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Clouet, J.F.; Samba, G. [CEA Bruyeres-le-Chatel, 91 (France)

    2005-07-01

    We use asymptotic analysis to study the diffusion limit of the Symbolic Implicit Monte-Carlo (SIMC) method for the transport equation. For standard SIMC with piecewise constant basis functions, we demonstrate mathematically that the solution converges to the solution of a wrong diffusion equation. Nevertheless, a simple extension to piecewise linear basis functions makes it possible to obtain the correct solution. This improvement allows calculations in an opaque medium on a mesh resolving the diffusion scale, which is much larger than the transport scale. However, the huge number of particles necessary to obtain a correct answer makes such computations time-consuming. We have therefore derived from this asymptotic study a hybrid method coupling a deterministic calculation in the opaque medium with a Monte-Carlo calculation in the transparent medium. This method gives exactly the same results as the previous one, but at a much lower cost. We present numerical examples which illustrate the analysis. (authors)

  15. A Projector Quantum Monte Carlo Method for non-linear wavefunctions

    CERN Document Server

    Schwarz, Lauretta R; Booth, George H

    2016-01-01

    We reformulate the projected imaginary-time evolution of Full Configuration Interaction Quantum Monte Carlo in terms of a Lagrangian minimization. This naturally leads to the admission of polynomial complex wavefunction parameterizations, circumventing the exponential scaling of the approach. While such functions have traditionally inhabited the domain of Variational Monte Carlo, we consider recent developments from the optimization of deep-learning neural networks to optimize this Lagrangian, which can be written as a modification of the propagator for the wavefunction dynamics. We demonstrate this approach with a form of Tensor Network State, and use it to find solutions to the strongly-correlated Hubbard model, as well as its application to a fully periodic ab-initio graphene sheet. The number of variables which can be simultaneously optimized greatly exceeds that of alternative formulations of Variational Monte Carlo, allowing for systematic improvability of the wavefunction flexibility towards exa...

  16. Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)

    International Nuclear Information System (INIS)

    Monte Carlo simulation is well suited to solving the Boltzmann neutron transport equation in inhomogeneous media with complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes on a desktop PC. Interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is growing rapidly, due to the massive parallelism provided by the latest GPU technologies, which is the most promising route to performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. The results obtained in this work suggest that a speedup of several orders of magnitude is possible using state-of-the-art GPU technologies. (author)
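
    The structural property that makes the method GPU-friendly is that each neutron history is an independent random walk, so histories map one-to-one onto GPU threads. A serial sketch of one such fixed-source history loop (1-D monoenergetic slab with illustrative, assumed cross sections):

```python
import math
import random

def slab_transmission(sigma_t, scatter_prob, thickness, n_histories, rng):
    """Fraction of neutrons from a boundary source that leak through a
    1-D slab. sigma_t is the total macroscopic cross section; a collision
    scatters isotropically with probability scatter_prob, else absorbs.
    Each history is independent: on a GPU, one thread per history."""
    leaked = 0
    for _ in range(n_histories):
        x, mu = 0.0, 1.0  # start at the near face, moving forward
        while True:
            # sample a free-flight distance from the exponential law
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x >= thickness:
                leaked += 1
                break
            if x < 0.0 or rng.random() >= scatter_prob:
                break  # escaped backward, or absorbed
            mu = rng.uniform(-1.0, 1.0)  # isotropic scattering
    return leaked / n_histories
```

    On a GPU the outer loop over histories becomes the thread grid, and the only shared state is the tally, which needs an atomic update; this embarrassing parallelism is the source of the speedups reported above.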

  17. The Wigner Monte-Carlo method for nanoelectronic devices a particle description of quantum transport and decoherence

    CERN Document Server

    Querlioz, Damien

    2013-01-01

    This book gives an overview of the quantum transport approaches for nanodevices and focuses on the Wigner formalism. It details the implementation of a particle-based Monte Carlo solution of the Wigner transport equation and how the technique is applied to typical devices exhibiting quantum phenomena, such as the resonant tunnelling diode, the ultra-short silicon MOSFET and the carbon nanotube transistor. In the final part, decoherence theory is used to explain the emergence of the semi-classical transport in nanodevices.

  18. Study of the validity of a combined potential model using the Hybrid Reverse Monte Carlo method in Fluoride glass system

    Directory of Open Access Journals (Sweden)

    M. Kotbi

    2013-03-01

    Full Text Available The choice of appropriate interaction models is among the major difficulties of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions; however, RMC results are often accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criterion. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The aim of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
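
    The acceptance rule that distinguishes HRMC from plain RMC can be written compactly; the Gaussian chi-squared form and the specific arguments below are a schematic reading, not the exact implementation used by the authors:

```python
import math
import random

def hrmc_accept(chi2_old, chi2_new, energy_old, energy_new, sigma, kT, rng):
    """HRMC Metropolis test: combine the RMC fit-to-data term with an
    energy penalty from the interaction potential. A move is accepted
    with probability min(1, exp(-dchi2/(2*sigma**2) - dE/kT))."""
    arg = (-(chi2_new - chi2_old) / (2.0 * sigma ** 2)
           - (energy_new - energy_old) / kT)
    return arg >= 0.0 or rng.random() < math.exp(arg)
```

    With the energy term switched off (kT → ∞) this reduces to the usual RMC criterion; the penalty is what suppresses configurations that fit the data but are energetically unreasonable, i.e. the artificial satellite peaks mentioned above.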

  19. Environmental dose rate assessment of ITER using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Karimian Alireza

    2014-01-01

    Full Text Available Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak are exposed to a wide range of neutron and photon radiation around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, assessment of ITER dose rates is particularly important. The aim of this study is the assessment of the amount of radiation in ITER during its normal operation, in a radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated using the medical internal radiation dose phantom. Our study is based on deuterium-tritium plasma burning, with 14.1 MeV neutron production and photon radiation due to neutron activation. As our results show, the total equivalent dose rate outside the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose rate limit during normal operation of ITER. The equivalent dose rates of the radiosensitive organs show that the maximum dose rate belongs to the kidney. These data may help calculate how long staff can stay in such an environment before the equivalent dose rates reach the whole-body dose limits.

  20. Numerical simulations of blast-impact problems using the direct simulation Monte Carlo method

    Science.gov (United States)

    Sharma, Anupam

    There is an increasing need to design protective structures that can withstand or mitigate the impulsive loading due to the impact of a blast or a shock wave. A preliminary step in designing such structures is the prediction of the pressure loading on the structure, called the "load definition." This thesis is focused on a numerical approach to predict the load definition on arbitrary geometries for a given strength of the incident blast/shock wave. A particle approach, namely the Direct Simulation Monte Carlo (DSMC) method, is used as the numerical model. A three-dimensional, time-accurate DSMC flow solver is developed as a part of this study. Embedded surfaces, modeled as triangulations, are used to represent arbitrarily shaped structures. Several techniques to improve the computational efficiency of the particle-structure interaction algorithm are presented. The code is designed using the Object Oriented Programming (OOP) paradigm. Domain decomposition with message passing is used to solve large problems in parallel. The solver is extensively validated against analytical results and against experiments. Two kinds of geometries, a box and an I-shaped beam, are investigated for blast impact. These simulations are performed in both two and three dimensions. A major portion of the thesis is dedicated to studying the uncoupled fluid dynamics problem, where the structure is assumed to remain stationary and intact during the simulation. A coupled fluid-structure dynamics problem is solved in one spatial dimension using a simple spring-mass-damper system to model the dynamics of the structure. A parametric study, varying the mass, spring constant, and damping coefficient, is performed to study their effect on the loading and the displacement of the structure. Finally, the parallel performance of the solver is reported for three sample-size problems on two Beowulf clusters.
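
    The one-dimensional coupled problem described above reduces to a single-degree-of-freedom oscillator m·x'' + c·x' + k·x = F(t), with the pressure load as the forcing term. A minimal time-stepping sketch (semi-implicit Euler; all parameter values below are illustrative, not from the thesis):

```python
def integrate_sdof(m, c, k, force, dt, n_steps):
    """Semi-implicit Euler integration of the single-degree-of-freedom
    oscillator m*x'' + c*x' + k*x = F(t), starting from rest.
    Returns the displacement history."""
    x, v = 0.0, 0.0
    history = []
    for i in range(n_steps):
        a = (force(i * dt) - c * v - k * x) / m   # Newton's second law
        v += a * dt                               # update velocity first ...
        x += v * dt                               # ... then position (semi-implicit)
        history.append(x)
    return history
```

    For a constant load F0 the displacement settles at the static deflection F0/k, a quick sanity check on the integrator; in the coupled problem F(t) would come from the DSMC pressure on the moving surface.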

  1. Romania Monte Carlo Methods Application to CANDU Spent Fuel Comparative Analysis

    International Nuclear Information System (INIS)

    Romania has a single NPP at Cernavoda with 5 PHWR reactors of the CANDU6 type of 705 MW(e) each: Cernavoda Unit 1 has been operational since December 1996, Unit 2 is under construction, while the remaining Units 3-5 are being conserved. The worldwide development of nuclear energy is accompanied by the accumulation of huge quantities of spent nuclear fuel. In view of the possible impact upon the population and environment, in all activities associated with the nuclear fuel cycle, namely transportation, storage, reprocessing, or disposal, the spent fuel characteristics must be well known. The aim of this paper is to apply Monte Carlo methods to CANDU spent fuel analysis, starting from the discharge moment, followed by spent fuel transport after a defined cooling period, and finishing with the intermediate dry storage. As radiation sources, 3 CANDU fuels have been considered: the standard 37-rod fuel bundle with natural UO2 and SEU fuels, and the 43-rod fuel bundle with SEU fuel. After a criticality calculation using the KENO-VI code, the criticality coefficient and the actinide and fission product concentrations are obtained. By using the ORIGEN-S code, the photon source profiles are calculated and the spent fuel characteristics are estimated. For the shielding calculations, the MORSE-SGC code has been used. Regarding spent fuel transport, the photon dose rates at the shipping cask wall and in air, at different distances from the cask, are estimated. The shielding calculation for the spent fuel intermediate dry storage is done, and the photon dose rates at the storage basket wall (active element of the Cernavoda NPP intermediate dry storage) are obtained. A comparison between the 3 types of CANDU fuels is presented. (authors)

  2. SU-E-T-569: Neutron Shielding Calculation Using Analytical and Multi-Monte Carlo Method for Proton Therapy Facility

    Energy Technology Data Exchange (ETDEWEB)

    Cho, S; Shin, E H; Kim, J; Ahn, S H; Chung, K; Kim, D-H; Han, Y; Choi, D H [Samsung Medical Center, Seoul (Korea, Republic of)

    2015-06-15

    Purpose: To evaluate the shielding wall design that protects patients, staff, and members of the general public from secondary neutrons, using a simple analytic solution and the multi-Monte Carlo codes MCNPX, ANISN, and FLUKA. Methods: Analytical and multi-Monte Carlo calculations were performed for the proton facility (Sumitomo Heavy Industries, Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation method, which produces conservative estimates of the dose equivalent values for the shielding, was used for the analytical evaluations. Then, the radiation transport was simulated with the multi-Monte Carlo codes. The neutron dose at each evaluation point was obtained as the product of the simulated value and the neutron dose coefficient introduced in ICRP-74. Results: The evaluation points at the accelerator control room and the control room entrance are mainly influenced by the proton beam loss point. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912, and 0.943 mSv/yr, and at the entrance of the cyclotron room it is 0.465, 0.790, 0.522, and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA, and MCNP, respectively. Most results of MCNPX and FLUKA, which used the complicated geometry, were smaller than those of ANISN. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and multi-Monte Carlo methods. We confirmed that the shielding keeps well-accessible areas safe for people when the proton facility is operated.

  3. Determination of surface dose rate of indigenous 32P patch brachytherapy source by experimental and Monte Carlo methods

    International Nuclear Information System (INIS)

    The Isotope Production and Application Division of Bhabha Atomic Research Centre developed 32P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed 32P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the 32P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values of the surface dose rate measured using the two independent experimental methods are in good agreement with each other, within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to 77.78±1.16 (k=2). The maximum deviation between the surface dose rates obtained by the Monte Carlo and extrapolation chamber methods is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the 32P patch source obtained by the three independent methods are in good agreement with one another, within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed 32P patch sources for contact brachytherapy applications. - Highlights: • Surface dose rates of 25 mm nominal diameter newly developed 32P patch sources were measured experimentally using extrapolation chamber and Gafchromic EBT2 film. Monte Carlo model of the 32P patch source along with the extrapolation chamber was also developed. • The surface dose rates to tissue (cGy/min) measured using extrapolation chamber and

  4. IMRT dose delivery effects in radiotherapy treatment planning using Monte Carlo methods

    Science.gov (United States)

    Tyagi, Neelam

    Inter- and intra-leaf transmission and head scatter can play significant roles in Intensity Modulated Radiation Therapy (IMRT)-based treatment deliveries. In order to accurately calculate the dose in the IMRT planning process, it is therefore important that the detailed geometry of the multi-leaf collimator (MLC), in addition to other components in the accelerator treatment head, be accurately modeled. In this thesis, Monte Carlo (MC) methods have been used to model the treatment head of a Varian linear accelerator. A comprehensive model of the Varian 120-leaf MLC has been developed within the DPM MC code and has been verified against measurements in homogeneous and heterogeneous phantom geometries under different IMRT delivery circumstances. The accuracy of the MLC model in simulating details of the leaf geometry has been established over a range of arbitrarily shaped fields and IMRT fields. A sensitivity analysis of the effect of the electron-on-target parameters and the structure of the flattening filter on the accuracy of calculated dose distributions has been conducted. Adjustment of the electron-on-target parameters to obtain optimal agreement with measurements was an iterative process, with the final parameters representing a tradeoff between small (3x3 cm2) and large (40x40 cm2) field sizes. A novel method based on adaptive kernel density estimation in the phase-space simulation process is also presented as an alternative to particle recycling. Using this model, dosimetric differences between MLC-based static (SMLC) and dynamic (DMLC) deliveries have been investigated. Differences between SMLC and DMLC, possibly related to fluence and/or spectral changes, appear to vary systematically with the density of the medium. The effect of fluence modulation due to leaf sequencing shows differences, up to 10%, between plans developed with 1% and 10% fluence intervals for both SMLC- and DMLC-delivered sequences.
Dose differences between planned and delivered leaf sequences

  5. Transport methods: general. 2. Monte Carlo Particle Transport in Media with Exponentially Varying Time-Dependent Cross Sections

    International Nuclear Information System (INIS)

    We have investigated Monte Carlo schemes for analyzing particle transport through media with exponentially varying time-dependent cross sections. For such media, the cross sections are represented in the form Σ(t) = Σ0 exp(-at) (1) or, equivalently, as Σ(x) = Σ0 exp(-bx) (2), where b = a/v and v is the particle speed. For the following discussion, the parameters a and b may be either positive, for exponentially decreasing cross sections, or negative, for exponentially increasing cross sections. For most time-dependent Monte Carlo applications, the time and spatial variations of the cross-section data are handled by means of a stepwise procedure: holding the cross sections constant for each region over a small time interval Δt, performing the Monte Carlo random walk over the interval Δt, updating the cross sections, and then repeating for a series of time intervals. Continuously varying spatial- or time-dependent cross sections can be treated in a rigorous Monte Carlo fashion using delta-tracking, but inefficiencies may arise if the range of cross-section variation is large. In this paper, we present a new method for sampling collision distances directly for cross sections that vary exponentially in space or time. The method is exact and efficient and has direct application to Monte Carlo radiation transport methods. To verify that the probability density function (PDF) is correct and that the random-sampling procedure yields correct results, numerical experiments were performed using a one-dimensional Monte Carlo code. The physical problem consisted of a beam source impinging on a purely absorbing infinite slab, with a slab thickness of 1 cm and Σ0 = 1 cm^-1. Monte Carlo calculations with 10 000 particles were run for a range of the exponential parameter b from -5 to +20 cm^-1. Two separate Monte Carlo calculations were run for each choice of b: a continuously varying case using the random-sampling procedures described earlier, and a 'conventional' case where the
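
    For a cross section Σ(x) = Σ0 exp(-bx), the optical depth to distance x is τ(x) = (Σ0/b)(1 - exp(-bx)), which can be inverted in closed form, so a collision distance can be sampled directly. A minimal sketch of this inversion (function and variable names are ours, not from the paper):

```python
import math
import random

def sample_collision_distance(sigma0, b, xi=None, rng=random):
    """Directly sample a collision distance for Sigma(x) = sigma0*exp(-b*x)
    by inverting the optical depth tau(x) = (sigma0/b)*(1 - exp(-b*x)).
    Returns None when the sampled optical depth exceeds the total
    available along the ray (escape; possible only for b > 0)."""
    if xi is None:
        xi = rng.random()
    tau = -math.log(xi or 1e-300)        # exponentially distributed optical depth
    if abs(b) < 1e-12:                   # constant cross-section limit
        return tau / sigma0
    arg = 1.0 - b * tau / sigma0
    if arg <= 0.0:                       # total optical depth sigma0/b exhausted
        return None
    return -math.log(arg) / b
```

    When b > 0 the total optical depth along the ray is bounded by Σ0/b, so a sampled τ beyond that bound means the particle traverses the medium without colliding; no time-step subdivision or delta-tracking is needed.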

  6. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit

  7. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    International Nuclear Information System (INIS)

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit reveals a flipped-sign solution in addition to a standard-model-like solution for the couplings Ci. The two solutions are related
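
    The adaptation loop that PMC relies on can be sketched in one dimension: draw a population from a proposal, weight the draws by target/proposal, and refit the proposal to the weighted sample. This is a generic illustration of the idea with a single Gaussian component, not the authors' code; the toy target and all names are ours:

```python
import math
import random

def target(x):
    """Unnormalized toy 'posterior': a Gaussian centered at 3."""
    return math.exp(-0.5 * (x - 3.0) ** 2)

def pmc(n_iter=20, n_samples=2000, seed=1):
    """1-D Population Monte Carlo sketch: a Gaussian proposal is
    iteratively adapted to the target by importance sampling and
    moment matching of the weighted population."""
    rng = random.Random(seed)
    mu, sigma = -5.0, 5.0                 # deliberately poor starting proposal
    for _ in range(n_iter):
        xs = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # importance weight = target / proposal density (constants dropped)
        ws = [target(x) * sigma * math.exp(0.5 * ((x - mu) / sigma) ** 2)
              for x in xs]
        wsum = sum(ws)
        mu = sum(w * x for w, x in zip(ws, xs)) / wsum
        var = sum(w * (x - mu) ** 2 for w, x in zip(ws, xs)) / wsum
        sigma = max(math.sqrt(var), 1e-3)  # guard against collapse
    return mu, sigma
```

    Each population is drawn independently, so the expensive target evaluations can run in parallel, which is what makes PMC attractive when a single posterior evaluation takes on the order of a second.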

  8. Determination of the double angular and energy differential gamma-ray albedo for iron material by using Monte Carlo method

    International Nuclear Information System (INIS)

    The Monte Carlo method can be used to compute the gamma-ray backscattering albedo. This method was used by Raso to compute the angular differential albedo. Raso's results were used by Chilton and Huddleston to adjust their well-known albedo formula. Here, an efficient estimator is proposed to compute the double-differential (angular and energy) albedo from gamma-ray histories simulated in matter by the three-dimensional Monte Carlo transport code TRIPOLI. A detailed physical albedo analysis can be done in this way. The double-differential angular and energy gamma-ray albedo is calculated for iron for initial gamma-ray energies of 8, 3, 1, and 0.5 MeV

  9. Numerical investigation of turbomolecular pumps using the direct simulation Monte Carlo method with moving surfaces

    NARCIS (Netherlands)

    Versluis, R.; Dorsman, R.; Thielen, L.; Roos, M.E.

    2009-01-01

    A new approach for performing numerical direct simulation Monte Carlo (DSMC) simulations on turbomolecular pumps in the free molecular and transitional flow regimes is described. The chosen approach is to use surfaces that move relative to the grid to model the effect of rotors and stators on a gas

  10. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

    International Nuclear Information System (INIS)

    The perturbation source method is used in the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between subtracted estimates, even in cases where other methods fail, for instance for geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered
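
    The benefit of positive correlation between subtracted estimates can be seen in a toy example: estimating the difference of two similar integrals with common random numbers (the same "histories") versus independent samples. A sketch with our own toy integrands standing in for the perturbed and unperturbed fields:

```python
import math
import random

def diff_estimates(n=20000, seed=7):
    """Sample variance of the difference of two similar integrals over
    [0, 1], estimated with common random numbers (positively correlated
    estimates) versus independent samples."""
    rng = random.Random(seed)
    f = lambda x: math.exp(-x)            # unperturbed integrand
    g = lambda x: math.exp(-1.05 * x)     # slightly perturbed integrand
    xs = [rng.random() for _ in range(n)]
    ys = [rng.random() for _ in range(n)]
    def var(v):
        m = sum(v) / len(v)
        return sum((u - m) ** 2 for u in v) / (len(v) - 1)
    correlated = var([f(x) - g(x) for x in xs])        # shared histories
    independent = var([f(x) - g(y) for x, y in zip(xs, ys)])
    return correlated, independent
```

    With shared samples the two estimates rise and fall together, so their difference has a variance orders of magnitude below the independent case; the perturbation source method aims at the same effect for transport games.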

  11. Dose optimization based on linear programming implemented in a system for treatment planning in Monte Carlo; Optimizacion de dosis basada en programacion lineal implementada en un sistema para la planificacion de tratamiento en Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A.; Palma, B. A.; Leal, A.

    2011-07-01

    Purpose: to develop a more time-efficient optimization method, based on linear programming, designed to implement a multi-objective penalty function which also permits solving simultaneous integrated boost situations considering two target volumes simultaneously.
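
    The linear-programming formulation can be illustrated with a toy two-beam problem: minimize the organ-at-risk dose subject to minimum-dose constraints on two target voxels. The numbers and the tiny vertex-enumeration solver below are invented for illustration only; a real planning system would use a production LP solver:

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Minimize c.w subject to A w >= b and w >= 0 for two variables, by
    enumerating vertices of the feasible region (an LP optimum, when one
    exists, lies at a vertex).  A toy stand-in for a real LP solver."""
    lines = [(a1, a2, rhs) for (a1, a2), rhs in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]          # the axes w1=0, w2=0
    best = None
    for (p1, p2, q), (r1, r2, s) in combinations(lines, 2):
        det = p1 * r2 - p2 * r1
        if abs(det) < 1e-12:
            continue                                      # parallel: no vertex
        w1 = (q * r2 - p2 * s) / det                      # Cramer's rule
        w2 = (p1 * s - q * r1) / det
        ok = w1 >= -1e-9 and w2 >= -1e-9 and all(
            a1 * w1 + a2 * w2 >= rhs - 1e-9 for (a1, a2), rhs in zip(A, b))
        if ok:
            cost = c[0] * w1 + c[1] * w2
            if best is None or cost < best[0]:
                best = (cost, w1, w2)
    return best

# invented example: two beam weights, two target voxels, one organ at risk
A = [(1.0, 0.8), (0.5, 1.0)]    # dose per unit beam weight to each target voxel
b = [60.0, 60.0]                # minimum (prescription) dose to each target voxel
c = (0.3, 0.5)                  # organ-at-risk dose per unit weight (the penalty)
best = solve_lp_2d(c, A, b)
```

    In this example the optimum mixes both beams (w = (20, 50)): either beam alone would need a much larger weight to cover both target voxels, giving a higher organ-at-risk dose.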

  12. Monte Carlo method studies and a comparative between GEANT4 tool kit and MCNPX to depth dose in medical physics

    International Nuclear Information System (INIS)

    Knowing the depth dose along the central axis is fundamental for the accurate planning of medical treatment systems involving ionizing radiation. With the evolution of computing, it is possible to use various computational tools, such as GEANT4 and MCNPX, which employ the Monte Carlo method to simulate such situations. This paper presents a comparison between the two tools for this type of application

  13. Life-time integration using Monte Carlo Methods when optimizing the design of concentrated solar power plants

    OpenAIRE

    Farges, Olivier; Bézian, Jean Jacques; Bru, Hélène; El Hafi, Mouna; Fournier, Richard; Spiesser, Christophe

    2015-01-01

    Rapidity and accuracy of algorithms evaluating yearly collected energy are an important issue in the context of optimizing concentrated solar power plants (CSP). Over the last ten years, several research groups have concentrated their efforts on the development of such sophisticated tools: approximations are required to decrease the CPU time, while closely checking that the corresponding loss in accuracy remains acceptable. Here we present an alternative approach using Monte Carlo Methods (MCM). T...

  14. Evaluation of a special pencil ionization chamber by the Monte Carlo method; Avaliacao de uma camara de ionizacao tipo lapis especial pelo metodo de Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Mendonca, Dalila; Neves, Lucio P.; Perini, Ana P., E-mail: anapaula.perini@ufu.br [Universidade Federal de Uberlandia (INFIS/UFU), Uberlandia, MG (Brazil). Instituto de Fisica; Santos, William S.; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleres (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    A special pencil type ionization chamber, developed at the Instituto de Pesquisas Energeticas e Nucleares, was characterized by means of Monte Carlo simulation to determine the influence of its components on its response. The main differences between this ionization chamber and commercial ionization chambers are related to its configuration and constituent materials. The simulations were made employing the MCNP-4C Monte Carlo code. The highest influence was obtained for the body of PMMA: 7.0%. (author)

  15. Condensation and homogenization of cross sections for the deterministic transport codes with Monte Carlo method: Application to the GEN IV fast neutron reactors

    International Nuclear Information System (INIS)

    In the framework of Generation IV reactor neutronics research, new core calculation tools are implemented in the code system APOLLO3 for the deterministic part. These calculation methods are based on the discretization concept of nuclear energy data (named multi-group, and generally produced by deterministic codes) and should be validated and qualified with respect to Monte Carlo reference calculations. This thesis aims to develop an alternative technique of producing multi-group nuclear properties with a Monte Carlo code (TRIPOLI-4). First, after testing the existing homogenization and condensation functionalities with the better precision obtainable nowadays, some inconsistencies are revealed. Several new multi-group parameter estimators are developed and validated for the TRIPOLI-4 code with the aid of the code itself, since it can use multi-group constants in a core calculation. Secondly, the scattering anisotropy effect, which is necessary for handling the neutron leakage case, is studied. A correction technique concerning the diagonal of the first-order moment of the scattering matrix is proposed. This is named the IGSC technique and is based on the use of an approximate current introduced by Todorova. An improvement of the IGSC technique is then presented for geometries with an important heterogeneity property. This improvement uses a more accurate current quantity, the projection on the abscissa X. The latter current can represent the real situation better but is limited to 1D geometries. Finally, a B1 leakage model is implemented in the TRIPOLI-4 code for generating multi-group cross sections with a fundamental-mode-based critical spectrum. 
    This leakage model is analyzed and validated rigorously by comparison with other codes, Serpent and ECCO, as well as with an analytical case. The whole development work introduced into the TRIPOLI-4 code allows producing multi-group constants which can then be used in the core
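
    The condensation step referred to above is, in its simplest form, a flux-weighted collapse of fine-group cross sections onto a coarse group structure. A minimal sketch of that collapse concept (not of the TRIPOLI-4 estimators developed in the thesis):

```python
def condense(sigma, flux, coarse_groups):
    """Flux-weighted condensation of fine-group cross sections:
    sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g), summed over the
    fine groups g belonging to each coarse group G."""
    out = []
    for fine in coarse_groups:
        num = sum(sigma[g] * flux[g] for g in fine)   # reaction-rate-preserving numerator
        den = sum(flux[g] for g in fine)              # total flux in the coarse group
        out.append(num / den)
    return out
```

    Spatial homogenization works the same way with a flux-volume weight; the subtleties the thesis addresses (anisotropy moments, leakage spectra) enter through the choice of weighting flux.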

  16. Implementation of the parametric variation method in an EMTP program

    DEFF Research Database (Denmark)

    Holdyk, Andrzej; Holbøll, Joachim

    2013-01-01

    The paper presents an algorithm for, and shows the implementation of, a method to perform parametric variation studies using electromagnetic transients programs, applied to an offshore wind farm. These kinds of studies are used to investigate the sensitivity of a given phenomenon to the variation of parameters in an electric system. The proposed method allows varying any parameter of a circuit, including the simulation settings, and exploits the specific structure of the ATP-EMTP software. In the implementation of the method, Matlab software is used to control the execution of the ATP solver. Two...

  17. Implementation of the Maximum Entropy Method for Analytic Continuation

    CERN Document Server

    Levy, Ryan; Gull, Emanuel

    2016-01-01

    We present Maxent, a tool for performing analytic continuation of spectral functions using the maximum entropy method. The code operates on discrete imaginary-axis datasets (values with uncertainties) and transforms this input to the real axis. The code works for imaginary-time and Matsubara-frequency data and implements the 'Legendre' representation of finite-temperature Green's functions. It implements a variety of kernels, default models, and grids for continuing bosonic, fermionic, anomalous, and other data. Our implementation is licensed under GPLv2 and extensively documented. This paper shows the use of the programs in detail.

  18. Implementing the Open Method of Co-ordination in Pensions

    Directory of Open Access Journals (Sweden)

    Jarosław POTERAJ

    2009-01-01

    Full Text Available The article presents an insight into the European Union Open Method of Co-ordination (OMC) in the area of pensions. The author's goal was to present the development and the effects of implementing the OMC. The introduction is followed by three topic paragraphs: 1. the OMC step by step, 2. the evaluation of the OMC, and 3. the effects of OMC implementation. In the summary, the author highlights that, besides the advantages, there are also disadvantages to the implementation of the OMC, and that many doubts exist about the efficiency of this method in the future.

  19. Evaluation of the NHS R & D implementation methods programme

    OpenAIRE

    Hanney, S; Soper, B; Buxton, MJ

    2010-01-01

    Chapter 1: Background and introduction • Concern with research implementation was a major factor behind the creation of the NHS R&D Programme in 1991. In 1994 an Advisory Group was established to identify research priorities in this field. The Implementation Methods Programme (IMP) flowed from this and its Commissioning Group funded 36 projects. Funding for the IMP was capped before the second round of commissioning. The Commissioning Group was disbanded and eventually responsibility for t...

  20. Implementing Collaborative Learning Methods in the Political Science Classroom

    Science.gov (United States)

    Wolfe, Angela

    2012-01-01

    Collaborative learning is one among several active learning methods widely acclaimed in higher education. Consequently, instructors in fields that lack pedagogical training often implement new learning methods such as collaborative learning on the basis of trial and error. Moreover, even though the benefits in academic circles are broadly touted,…

  1. Semantics-directed implementation of method-call interception

    NARCIS (Netherlands)

    Lämmel, R.; Stenzel, C.

    2003-01-01

    We describe a form of method-call interception (MCI) that allows the programmer to superimpose extra functionality onto method calls at run-time. We provide a reference semantics and a reference implementation for corresponding language constructs. The setup applies to class-based, statically typed,

  2. A 3D Monte Carlo Method for Estimation of Patient-specific Internal Organs Absorbed Dose for (99m)Tc-hynic-Tyr(3)-octreotide Imaging.

    Science.gov (United States)

    Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman

    2016-01-01

    Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)Tc-hynic-Tyr(3)-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed by 4.3% on average for self-irradiation, and by 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal-tissue and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results. PMID:27134562
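
    The MIRD-style organ dose underlying these S-value comparisons is a sum over source organs of cumulated activity times the corresponding S value. A minimal sketch with invented numbers (illustration only, not data from the study above):

```python
def organ_dose(cumulated_activity, s_values, target):
    """MIRD-style absorbed dose to `target`:
    D(target) = sum over source organs of Atilde(source) * S(target <- source)."""
    return sum(a * s_values[(target, src)]
               for src, a in cumulated_activity.items())

# hypothetical cumulated activities (MBq*h) and S values (mGy per MBq*h)
A_tilde = {"liver": 120.0, "kidneys": 80.0}
S = {("kidneys", "liver"): 1.5e-3,     # cross-irradiation term
     ("kidneys", "kidneys"): 1.2e-2}   # self-irradiation term
kidney_dose = organ_dose(A_tilde, S, "kidneys")
```

    The cross-irradiation terms are exactly where the abstract reports the largest disagreement between patient-specific GATE S values and reference-phantom MIRDOSE values, since those terms depend strongly on patient-specific organ geometry.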

  3. A polygon-surface reference Korean male phantom (PSRK-Man) and its direct implementation in Geant4 Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chan Hyeong; Jeong, Jong Hwi [Department of Nuclear Engineering, Hanyang University, 17 Haengdang-dong, Seongdong-gu, Seoul 133-791 (Korea, Republic of); Bolch, Wesley E [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Cho, Kun-Woo [Korea Institute of Nuclear Safety, 19 Guseong-dong, Yuseong-gu, Daejeon 305-600 (Korea, Republic of); Hwang, Sung Bae, E-mail: chkim@hanyang.ac.kr [Department of Physical Therapy, Kyungbuk College, Hyucheon 2-dong, Yeongju-si, Gyeongbuk 750-712 (Korea, Republic of)

    2011-05-21

    Even though the hybrid phantom embodies both the anatomic reality of voxel phantoms and the deformability of stylized phantoms, it must be voxelized to be used in a Monte Carlo code for dose calculation or imaging simulation, which incurs the inherent limitations of voxel phantoms. In the present study, a voxel phantom named VKH-Man (Visible Korean Human-Man) was converted to a polygon-surface phantom (PSRK-Man, Polygon-Surface Reference Korean-Man), which was then adjusted to the Reference Korean data. Subsequently, the PSRK-Man polygon phantom was directly implemented, without any voxelization process, in the Geant4 Monte Carlo code for dose calculations. The calculated dose values and computation time were then compared with those of HDRK-Man (High Definition Reference Korean-Man), a corresponding voxel phantom adjusted to the same Reference Korean data from the same VKH-Man voxel phantom. Our results showed that the calculated dose values of the PSRK-Man surface phantom agreed well with those of the HDRK-Man voxel phantom. The calculation speed for the PSRK-Man polygon phantom, though, was 70-150 times slower than that of the HDRK-Man voxel phantom; that speed could nevertheless be acceptable in some applications, in that direct use of the surface phantom PSRK-Man in Geant4 does not require a separate voxelization process. Computing speed can be enhanced in the future either by optimizing the Monte Carlo transport kernel for polygon surfaces or by using modern computing technologies such as grid computing and general-purpose computing on graphics processing units.

  4. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    Science.gov (United States)

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
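
    The multiplicative update factor described above is the hallmark of Wang-Landau-type sampling. As a generic illustration (not the authors' code), the sketch below estimates the density of states of a toy system of L spins whose "energy" is the number of up spins, so that the exact g(E) is the binomial coefficient C(L, E):

```python
import math, random

def wang_landau(L=10, flatness=0.8, ln_f_final=1e-5, seed=1):
    """Wang-Landau-style estimate of the density of states g(E) for a
    toy system whose 'energy' E is the number of up spins among L spins,
    so the exact g(E) is the binomial coefficient C(L, E)."""
    rng = random.Random(seed)
    spins = [0] * L                      # all down; E = sum(spins)
    E = 0
    ln_g = [0.0] * (L + 1)               # running log density of states
    ln_f = 1.0                           # multiplicative update factor
    hist = [0] * (L + 1)
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.randrange(L)
            E_new = E + (1 - 2 * spins[i])   # a flip changes E by +-1
            # accept with min(1, g(E)/g(E_new)) to flatten the walk in E
            if math.log(rng.random() + 1e-300) < ln_g[E] - ln_g[E_new]:
                spins[i] ^= 1
                E = E_new
            ln_g[E] += ln_f                  # importance-sampling update
            hist[E] += 1
        # when the energy histogram is roughly flat, refine the factor
        if min(hist) > flatness * (sum(hist) / len(hist)):
            hist = [0] * (L + 1)
            ln_f /= 2.0
    base = ln_g[0]
    return [x - base for x in ln_g]      # normalise so ln g(0) = 0
```

    With L = 10 the normalised estimate ln g(5) - ln g(0) should approach ln C(10, 5) = ln 252.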

  5. DEEP code to calculate dose equivalents in human phantom for external photon exposure by Monte Carlo method

    International Nuclear Information System (INIS)

    The present report describes a computer code, DEEP, which calculates organ dose equivalents and the effective dose equivalent for external photon exposure by the Monte Carlo method. MORSE-CG, a Monte Carlo radiation transport code, is incorporated into DEEP to simulate photon transport phenomena in and around a human body. The code treats an anthropomorphic phantom represented by mathematical formulae, and the user can choose the phantom sex: male, female or unisex. Personal dosimeters can be placed on the phantom, with the user specifying their location and dimensions. This document includes instructions and a sample problem for the code as well as a general description of the dose calculation, the human phantom and the computer code. (author)

  6. Calculation of gamma-ray mass attenuation coefficients of some Egyptian soil samples using Monte Carlo methods

    Science.gov (United States)

    Medhat, M. E.; Demir, Nilgun; Akar Tarim, Urkiye; Gurler, Orhan

    2014-08-01

    Monte Carlo simulations with FLUKA and Geant4 were performed to study mass attenuation for various types of soil at 59.5, 356.5, 661.6, 1173.2 and 1332.5 keV photon energies. Appreciable variations are noted in all parameters as the photon energy and the chemical composition of the sample change. The simulated values were compared with experimental data and with the XCOM program. The simulations show that the calculated mass attenuation coefficients agreed more closely with the experimental values than those obtained theoretically from the XCOM database for the same soil samples. The results indicate that Geant4 and FLUKA can be applied to estimate mass attenuation for various biological materials at different energies. The Monte Carlo method may be employed to make additional calculations on the photon attenuation characteristics of different soil samples collected from other places.

  7. Monte-Carlo method simulation of the Bremsstrahlung mirror reflection experiment

    International Nuclear Information System (INIS)

    Full text: To detect gamma-ray mirror reflection from a macroscopically smooth surface, a search experiment with a 330 m flight distance is in progress at the MT-22S microtron. The measured slip angles (i.e. the angles between the incident ray and the reflector surface) do not exceed tens of microradians. At such angles the reflection effect could easily be masked by adverse background conditions. The process therefore needed to be simulated by the Monte Carlo method as accurately as possible, and a corresponding computer program was developed. The first operating mode of the MT-22S generates 13 MeV electrons incident on a bremsstrahlung target, so gamma-ray energies were sampled in the range 0.01-12.5 MeV, distributed according to the known Schiff formula. When a gamma quantum was incident on the reflector, one of two cases followed. If its slip angle exceeded the critical one, the gamma quantum was absorbed by the reflector and the program moved on to simulate the next event. Otherwise, the program replaced the incident gamma-quantum trajectory parameters by the reflected ones. The gamma-quantum trajectory behind the reflector was traced to the detector, and any gamma quantum reaching the detector was registered. Since every simulated gamma quantum has a random energy, the critical slip angle of each event was evaluated by the formula: α_crit = (eh/E)·√(Z·N_A·ρ/(π·A·m)). Tabulated absorption coefficients were used for the random simulation of gamma-quantum absorption in air, and it was assumed that any interaction of a gamma quantum with air resulted in its disappearance. The dependence of the detected gamma-quanta energy and vertical-angle distributions on flight distance (120 and 330 m), gap height of the gap collimator (10, 20 and 50 μm) and inclination of the reflector's plane (20 and 40 μrad) was studied with the help of the developed program
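
    The accept/absorb decision at the reflector can be sketched as follows. This toy Monte Carlo (ours, not the paper's program) samples a photon energy, compares the slip angle against a critical angle that scales as 1/E, and tallies the reflected fraction; the flat energy spectrum and the constant k_urad_mev are stand-in assumptions, not values from the experiment:

```python
import random

def simulate_reflection(slip_angle_urad, n=100_000, seed=7,
                        k_urad_mev=100.0):
    """Toy Monte Carlo of grazing-incidence gamma reflection: each photon
    gets a random energy E (flat in 0.01-12.5 MeV as a crude stand-in
    for the Schiff spectrum) and is reflected only if the slip angle is
    below a critical angle that scales as k/E (k is a made-up constant)."""
    rng = random.Random(seed)
    reflected = 0
    for _ in range(n):
        E = rng.uniform(0.01, 12.5)          # MeV
        alpha_crit = k_urad_mev / E          # microradians, ~1/E scaling
        if slip_angle_urad < alpha_crit:
            reflected += 1                   # reflected and traced onward
        # else: absorbed by the reflector, event discarded
    return reflected / n
```

    As in the experiment, a larger slip angle sharply reduces the fraction of photons that can reflect.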

  8. State-of-the-art Monte Carlo 1988

    Energy Technology Data Exchange (ETDEWEB)

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  9. A simplified approach to the Quantum Monte Carlo method: from the solution of integrals to the electronic distribution problem

    Directory of Open Access Journals (Sweden)

    Wagner Fernando Delfino Angelotti

    2008-01-01

    Full text available. The paper presents an introductory and general discussion of the quantum Monte Carlo methods, some fundamental algorithms, concepts and applicability. In order to introduce the quantum Monte Carlo method, preliminary concepts associated with Monte Carlo techniques are discussed.
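
    The preliminary concept such introductions start from is Monte Carlo evaluation of integrals. A minimal sketch (ours, not the paper's): average the integrand at uniform random points and scale by the interval length.

```python
import math, random

def mc_integrate(f, a, b, n=200_000, seed=3):
    """Plain Monte Carlo estimate of the integral of f over [a, b]:
    (b - a) times the sample mean of f at uniform random points."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# example: integral of exp(-x^2) over [0, 1], true value ~ 0.74682
est = mc_integrate(lambda x: math.exp(-x * x), 0.0, 1.0)
```

    The statistical error of the estimate shrinks as 1/sqrt(n), independently of the dimension of the integral, which is what makes the approach attractive for many-electron problems.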

  10. A method for converting dose-to-medium to dose-to-tissue in Monte Carlo studies of gold nanoparticle-enhanced radiotherapy.

    Science.gov (United States)

    Koger, B; Kirkby, C

    2016-03-01

    Gold nanoparticles (GNPs) have shown potential in recent years as a means of therapeutic dose enhancement in radiation therapy. However, a major challenge in moving towards clinical implementation is the exact characterisation of the dose enhancement they provide. Monte Carlo studies attempt to explore this property, but they often face computational limitations when examining macroscopic scenarios. In this study, a method of converting dose from macroscopic simulations, where the medium is defined as a mixture containing both gold and tissue components, to a mean dose-to-tissue on a microscopic scale was established. Monte Carlo simulations were run for both explicitly-modeled GNPs in tissue and a homogeneous mixture of tissue and gold. A dose ratio was obtained for the conversion of dose scored in a mixture medium to dose-to-tissue in each case. Dose ratios varied from 0.69 to 1.04 for photon sources and 0.97 to 1.03 for electron sources. The dose ratio is highly dependent on the source energy as well as GNP diameter and concentration, though this effect is less pronounced for electron sources. By appropriately weighting the monoenergetic dose ratios obtained, the dose ratio for any arbitrary spectrum can be determined. This allows complex scenarios to be modeled accurately without explicitly simulating each individual GNP. PMID:26895030
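
    The final step, folding monoenergetic dose ratios with a spectrum, amounts to a weighted average. The sketch below is an illustrative scheme (not the paper's exact recipe); the weights are assumed to be each energy bin's relative dose contribution:

```python
def spectrum_dose_ratio(weights, mono_ratios):
    """Spectrum-averaged mixture-to-tissue dose ratio as a weighted mean
    of monoenergetic dose ratios; `weights` holds each energy bin's
    relative dose contribution (illustrative weighting scheme)."""
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, mono_ratios)) / total
```

    For example, two equally weighted bins with ratios 0.7 and 1.0 give a spectrum-averaged ratio of 0.85.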

  11. A method for converting dose-to-medium to dose-to-tissue in Monte Carlo studies of gold nanoparticle-enhanced radiotherapy

    Science.gov (United States)

    Koger, B.; Kirkby, C.

    2016-03-01

    Gold nanoparticles (GNPs) have shown potential in recent years as a means of therapeutic dose enhancement in radiation therapy. However, a major challenge in moving towards clinical implementation is the exact characterisation of the dose enhancement they provide. Monte Carlo studies attempt to explore this property, but they often face computational limitations when examining macroscopic scenarios. In this study, a method of converting dose from macroscopic simulations, where the medium is defined as a mixture containing both gold and tissue components, to a mean dose-to-tissue on a microscopic scale was established. Monte Carlo simulations were run for both explicitly-modeled GNPs in tissue and a homogeneous mixture of tissue and gold. A dose ratio was obtained for the conversion of dose scored in a mixture medium to dose-to-tissue in each case. Dose ratios varied from 0.69 to 1.04 for photon sources and 0.97 to 1.03 for electron sources. The dose ratio is highly dependent on the source energy as well as GNP diameter and concentration, though this effect is less pronounced for electron sources. By appropriately weighting the monoenergetic dose ratios obtained, the dose ratio for any arbitrary spectrum can be determined. This allows complex scenarios to be modeled accurately without explicitly simulating each individual GNP.

  12. Monte Carlo evaluation of the Filtered Back Projection method for image reconstruction in proton computed tomography

    International Nuclear Information System (INIS)

    In this paper the use of the Filtered Back Projection (FBP) Algorithm, in order to reconstruct tomographic images using the high energy (200-250 MeV) proton beams, is investigated. The algorithm has been studied in detail with a Monte Carlo approach and image quality has been analysed and compared with the total absorbed dose. A proton Computed Tomography (pCT) apparatus, developed by our group, has been fully simulated to exploit the power of the Geant4 Monte Carlo toolkit. From the simulation of the apparatus, a set of tomographic images of a test phantom has been reconstructed using the FBP at different absorbed dose values. The images have been evaluated in terms of homogeneity, noise, contrast, spatial and density resolution.

  13. Monte Carlo evaluation of the Filtered Back Projection method for image reconstruction in proton computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Cirrone, G.A.P., E-mail: cirrone@lns.infn.it [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Bucciolini, M. [Department of 'Fisiopatologia Clinica', University of Florence, V.le Morgagni 85, I-50134 Florence (Italy); Bruzzi, M. [Energetic Department, University of Florence, Via S. Marta 3, I-50139 Florence (Italy); Candiano, G. [Laboratorio di Tecnologie Oncologiche HSR, Giglio Contrada, Pietrapollastra-Pisciotto, 90015 Cefalu, Palermo (Italy); Civinini, C. [National Institute for Nuclear Physics INFN, Section of Florence, Via G. Sansone 1, Sesto Fiorentino, I-50019 Florence (Italy); Cuttone, G. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Guarino, P. [Nuclear Engineering Department, University of Palermo, Via... Palermo (Italy); Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Lo Presti, D. [Physics Department, University of Catania, Via S. Sofia 64, I-95123 Catania (Italy); Mazzaglia, S.E. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Pallotta, S. [Department of 'Fisiopatologia Clinica', University of Florence, V.le Morgagni 85, I-50134 Florence (Italy); Randazzo, N. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); Sipala, V. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); Physics Department, University of Catania, Via S. Sofia 64, I-95123 Catania (Italy); Stancampiano, C. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); and others

    2011-12-01

    In this paper the use of the Filtered Back Projection (FBP) Algorithm, in order to reconstruct tomographic images using the high energy (200-250 MeV) proton beams, is investigated. The algorithm has been studied in detail with a Monte Carlo approach and image quality has been analysed and compared with the total absorbed dose. A proton Computed Tomography (pCT) apparatus, developed by our group, has been fully simulated to exploit the power of the Geant4 Monte Carlo toolkit. From the simulation of the apparatus, a set of tomographic images of a test phantom has been reconstructed using the FBP at different absorbed dose values. The images have been evaluated in terms of homogeneity, noise, contrast, spatial and density resolution.

  14. Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method

    OpenAIRE

    KRSTIVOJEVIC, J. P.; DJURIC, M. B.

    2015-01-01

    The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on the input data obtained by the Monte Carlo (MC) simulation. To make a detailed assessment of the curre...

  15. Observational constraints on G-corrected holographic dark energy using a Markov chain Monte Carlo method

    OpenAIRE

    Alavirad, Hamzeh; Malekjani, Mohammad

    2013-01-01

    We constrain holographic dark energy (HDE) with time varying gravitational coupling constant in the framework of the modified Friedmann equations using cosmological data from type Ia supernovae, baryon acoustic oscillations, cosmic microwave background radiation and X-ray gas mass fraction. Applying a Markov Chain Monte Carlo (MCMC) simulation, we obtain the best fit values of the model and cosmological parameters within $1\\sigma$ confidence level (CL) in a flat universe as: $\\Omega_{\\rm b}h^...

  16. A comparison of electron transport coefficient predictions using spherical harmonic and Monte Carlo methods

    International Nuclear Information System (INIS)

    The electron drift velocity, W, and the first Townsend ionization coefficient, α, are calculated for nitrogen over a range of the electric field to pressure ratio E/P0 (the lower end of the range quoted is 7000 in the original units), where the pressure P0 is reduced to 0 °C. The spherical harmonic expansion calculation predicts α values which are 50-100% larger than those predicted by the Monte Carlo calculation. The predicted drift velocities agree to within 10-20%. (Auth.)

  17. Application of Monte Carlo Method to Phase Separation Dynamics of Complex Systems

    OpenAIRE

    Okabe, Yutaka; Miyajima, Tsukasa; Ito, Toshiro; Kawakatsu, Toshihiro

    1999-01-01

    We report the application of the Monte Carlo simulation to phase separation dynamics. First, we deal with the phase separation under shear flow. The thermal effect on the phase separation is discussed, and the anisotropic growth exponents in the late stage are estimated. Next, we study the effect of surfactants on the three-component solvents. We obtain the mixture of macrophase separation and microphase separation, and investigate the dynamics of both phase separations.

  18. Evaluation of the material assignment method used by a Monte Carlo treatment planning system.

    Science.gov (United States)

    Isambert, A; Brualla, L; Lefkopoulos, D

    2009-12-01

    An evaluation of the conversion process from Hounsfield units (HU) to material composition in computerised tomography (CT) images, employed by the Monte Carlo based treatment planning system ISOgray (DOSIsoft), is presented. A boundary in the HU for the material conversion between "air" and "lung" materials was determined based on a study using 22 patients. The dosimetric consequence of the new boundary was quantitatively evaluated for a lung patient plan.

  19. Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations

    OpenAIRE

    Van Siclen, Clinton DeW.

    2008-01-01

    A computationally simple way to accommodate 'basins' of trapping sites in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
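
    Under the equilibrated-basin assumption, the residence time and exit probabilities follow directly from Boltzmann occupations of the basin states. The sketch below is a generic illustration of that bookkeeping (the energies, rates and temperature are toy values, not from the paper):

```python
import math

def basin_escape(energies, escape_rates, kT=0.025):
    """Assuming the walker equilibrates inside the basin, the occupation
    of internal state i is Boltzmann-weighted, the mean residence time is
    1 / <escape rate>, and exit channel i is taken with probability
    proportional to its occupation-weighted escape rate."""
    z = [math.exp(-e / kT) for e in energies]
    zs = sum(z)
    occ = [zi / zs for zi in z]                       # equilibrium occupations
    mean_rate = sum(o * r for o, r in zip(occ, escape_rates))
    residence_time = 1.0 / mean_rate                  # mean time before escape
    exit_prob = [o * r / mean_rate for o, r in zip(occ, escape_rates)]
    return residence_time, exit_prob
```

    In a kinetic Monte Carlo run, the basin can then be replaced by a single event: wait the residence time, then pick an exit channel from `exit_prob`.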

  20. A Monte Carlo method based on antithetic variates for network reliability computations

    OpenAIRE

    El Khadiri, Mohamed; Rubino, Gerardo

    1992-01-01

    The exact evaluation of usual reliability measures of communication networks is seriously limited because of the excessive computational time usually needed to obtain them. In the general case, the computation of almost all the interesting reliability metrics are NP-hard problems. An alternative approach is to estimate them by means of a Monte Carlo simulation. This allows to deal with larger models than those that can be evaluated exactly. In this paper, we propose an algorithm much more per...
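
    The antithetic-variates idea can be illustrated on a toy two-terminal reliability problem. This sketch (ours, not the paper's algorithm) uses a four-edge network with two disjoint source-sink paths, each edge up with probability p, so the exact reliability is 1 - (1 - p^2)^2:

```python
import random

def two_path_ok(up):
    # two disjoint two-edge paths from source to sink
    return (up[0] and up[1]) or (up[2] and up[3])

def reliability_antithetic(p=0.9, n_pairs=100_000, seed=11):
    """Antithetic-variates Monte Carlo for network reliability: each
    uniform draw U is reused as 1-U in a mirrored replication, which
    induces negative correlation between the pair and lowers the
    variance of the estimator compared with crude Monte Carlo."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_pairs):
        u = [rng.random() for _ in range(4)]
        hits += two_path_ok([x < p for x in u])          # primary sample
        hits += two_path_ok([(1.0 - x) < p for x in u])  # antithetic mirror
    return hits / (2 * n_pairs)
```

    With p = 0.9 the estimate should be close to the exact value 1 - (1 - 0.81)^2 = 0.9639.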

  1. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    Science.gov (United States)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of LWR potential performance with regard to fuel utilization require a substantial amount of work dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform this validation on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a substantial number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.

  2. Determination of dose distributions for clinical linear accelerators using Monte Carlo method in water phantom

    International Nuclear Information System (INIS)

    Different codes can be used for Monte Carlo calculations in radiation therapy. In this study, a new Monte Carlo Simulation Program (MCSP) was developed to study the effects of the physical parameters of photons emitted from a Siemens Primus clinical linear accelerator (LINAC) on the dose distribution in water. MCSP was written to model the interactions of photons with matter, taking into account mainly two processes: Compton (incoherent) scattering and the photoelectric effect. The photons arriving at the water phantom surface from a point source were bremsstrahlung photons, whose energy distribution must be known in order to follow them. Bremsstrahlung photons with a maximum energy of 6 MeV (6 MV photon mode) were considered, their energies sampled from Mohan's experimental energy spectrum (Mohan et al 1985). In order to investigate the performance and accuracy of the simulation, measured and calculated (MCSP) percentage depth dose curves and dose profiles were compared. The Monte Carlo results showed good agreement with the experimental measurements.
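
    A core ingredient of such a photon transport code is choosing which interaction occurs, with probability proportional to the relevant cross sections. A minimal sketch of that sampling step (ours, with illustrative cross-section values, not MCSP's):

```python
import random

def sample_interaction(sigma_compton, sigma_photo, rng):
    """Pick Compton scattering vs photoelectric absorption with
    probability proportional to the (illustrative) cross sections."""
    total = sigma_compton + sigma_photo
    return "compton" if rng.random() < sigma_compton / total else "photo"

def fraction_compton(sigma_c=0.8, sigma_p=0.2, n=100_000, seed=6):
    """Sanity check: the sampled Compton fraction should converge to
    sigma_c / (sigma_c + sigma_p)."""
    rng = random.Random(seed)
    hits = sum(sample_interaction(sigma_c, sigma_p, rng) == "compton"
               for _ in range(n))
    return hits / n
```

    In a full code the cross sections would of course be energy-dependent, looked up or computed at each collision before this draw is made.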

  3. For Clinical Linear Accelerators, Obtaining of Dose Distributions in Water Phantoms by Using Monte Carlo Methods

    International Nuclear Information System (INIS)

    Different codes can be used for Monte Carlo calculations in radiation therapy. In this study, a new Monte Carlo Simulation Program (MCSP) was developed to study the effects of the physical parameters of photons emitted from a Siemens Primus clinical linear accelerator (LINAC) on the dose distribution in water. MCSP was written to model the interactions of photons with matter, taking into account mainly two processes: Compton (incoherent) scattering and the photoelectric effect. The photons arriving at the water phantom surface from a point source were bremsstrahlung photons, whose energy distribution must be known in order to follow them. Bremsstrahlung photons with a maximum energy of 6 MeV (6 MV photon mode) were considered, their energies sampled from Mohan's experimental energy spectrum (Mohan et al 1985). In order to investigate the performance and accuracy of the simulation, measured and calculated (MCSP) percentage depth dose curves and dose profiles were compared. The Monte Carlo results showed good agreement with the experimental measurements.

  4. A study of parallelizing O(N) Green-function-based Monte Carlo method for many fermions coupled with classical degrees of freedom

    International Nuclear Information System (INIS)

    Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called the O(N) Green-Function-Based Monte Carlo (GFMC) method, in which a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N^3) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of the GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU-based (Graphics Processing Unit based) cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and increasing number of nodes, respectively. The performance evaluation indicates that for a 32^3 Hamiltonian a single GPU shows performance equivalent to more than 30 CPU cores parallelized using MPI.

  5. Introducing heterogeneity in Monte Carlo models for risk assessments of high-level nuclear waste. A parallel implementation of the MLCRYSTAL code

    International Nuclear Information System (INIS)

    We have introduced heterogeneity to an existing model as a special feature and simultaneously extended the model from 1D to 3D. Briefly, the code generates stochastic fractures in a given geosphere. These fractures are connected in series to form one pathway for radionuclide transport from the repository to the biosphere. Rock heterogeneity is realized by simulating physical and chemical properties for each fracture, i.e. these properties vary along the transport pathway (which is an ensemble of all fractures serially connected). In this case, each Monte Carlo simulation involves a set of many thousands of realizations, one for each pathway. Each pathway can be formed by approx. 100 fractures. This means that for a Monte Carlo simulation of 1000 realizations, we need to perform a total of 100,000 simulations. Therefore the introduction of heterogeneity has increased the CPU demands by two orders of magnitude. To overcome the demand for CPU, the program, MLCRYSTAL, has been implemented in a parallel workstation environment using the MPI, Message Passing Interface, and later on ported to an IBM-SP2 parallel supercomputer. The program is presented here and a preliminary set of results is given with the conclusions that can be drawn. 3 refs, 12 figs

  6. Comparing Implementations of Estimation Methods for Spatial Econometrics

    Directory of Open Access Journals (Sweden)

    Roger Bivand

    2015-02-01

    Full text available. Recent advances in the implementation of spatial econometrics model estimation techniques have made it desirable to compare results, which should correspond between implementations across software applications for the same data. These model estimation techniques are associated with methods for estimating impacts (emanating effects), which are also presented and compared. This review constitutes an up-to-date comparison of generalized method of moments and maximum likelihood implementations now available. The comparison uses the cross-sectional US county data set provided by Drukker, Prucha, and Raciborski (2013d). The comparisons are cast in the context of alternatives using the MATLAB Spatial Econometrics toolbox, Stata's user-written sppack commands, Python with PySAL, and R packages including spdep, sphet and McSpatial.

  7. Monte Carlo method and High Performance Computing for solving Fokker-Planck equation of minority plasma particles

    CERN Document Server

    Hirvijoki, Eero; Äkäslompolo, Simppa; Varje, Jari; Koskela, Tuomas; Miettunen, Juho

    2015-01-01

    This paper explains how to obtain the distribution function of minority ions in tokamak plasmas using the Monte Carlo method. Since the emphasis is on energetic ions, the guiding-center transformation is outlined, including also the transformation of the collision operator. Even within the guiding-center formalism, the fast particle simulations can still be very CPU intensive and, therefore, we introduce the reader also to the world of high-performance computing. The paper is concluded with a few examples where the presented method has been applied.

  8. A method for simulating real-time neutron populations, materials and geometries using the GEANT4 Monte Carlo toolkit

    International Nuclear Information System (INIS)

    GEANT4 is a Monte Carlo particle physics toolkit that simulates elementary particles moving through matter. GEANT4 allows a population of neutrons to be tracked in a multiplying medium as the population and the medium evolve. However, the population must be artificially stabilized so that it neither explodes nor vanishes. We present a stabilization method where the simulation is divided into short time intervals and the population is renormalized at the end of each interval. This method was used with a simple sphere of U235 to calculate the effective neutron multiplication factor (keff) from the continuous evolution of the neutron population. (author)
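
    The renormalisation idea can be sketched in a few lines: run a fixed-size neutron population for one interval, record its growth factor, then reset the population size so it neither explodes nor vanishes; k_eff is the mean growth factor. The sketch below (ours, not the paper's GEANT4 code) uses made-up interaction probabilities rather than a physical U-235 sphere:

```python
import random

def estimate_keff(p_fission=0.45, nu=2.5, pop_size=10_000,
                  generations=60, discard=10, seed=5):
    """Cycle-based sketch of population renormalisation: each cycle a
    fixed-size population of neutrons either fissions (producing nu
    offspring on average) or is lost (capture/leakage), the growth
    factor children/pop_size is recorded, and the next cycle restarts
    from pop_size neutrons.  k_eff is the mean growth factor."""
    rng = random.Random(seed)
    ks = []
    for g in range(generations):
        children = 0
        for _ in range(pop_size):
            if rng.random() < p_fission:
                # sample 2 or 3 offspring so the mean equals nu = 2.5
                children += int(nu) + (rng.random() < nu - int(nu))
            # otherwise: capture or leakage, no offspring
        if g >= discard:                  # skip initial transient cycles
            ks.append(children / pop_size)
    return sum(ks) / len(ks)
```

    With these toy numbers the expected multiplication factor is 0.45 x 2.5 = 1.125, i.e. a supercritical system whose population would explode without the per-cycle reset.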

  9. A new method for designing dual foil electron beam forming systems. II. Feasibility of practical implementation of the method

    Science.gov (United States)

    Adrich, Przemysław

    2016-05-01

    In Part I of this work a new method for designing dual foil electron beam forming systems was introduced. In this method, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of system performance as a function of its parameters. At each point of the scan, the Monte Carlo method is used to calculate the off-axis dose profile in water, taking into account the detailed and complete geometry of the system. The new method, while being computationally intensive, minimizes the involvement of the designer. In this Part II paper, the feasibility of practical implementation of the new method is demonstrated. For this, prototype software tools were developed and applied to solve a real-life design problem. It is demonstrated that system optimization can be completed within a few hours using rather moderate computing resources. It is also demonstrated that, perhaps for the first time, the designer can gain deep insight into system behavior, such that the construction can be simultaneously optimized with respect to a number of functional characteristics besides the flatness of the off-axis dose profile. In the presented example, the system is optimized with respect to both the flatness of the off-axis dose profile and the beam transmission. A number of practical issues related to application of the new method as well as its possible extensions are discussed.
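
    The driver of such a design method is a plain grid scan over the system parameters, with an expensive evaluation at each point. A minimal sketch (all names hypothetical; in the paper the `evaluate` call would be a full Monte Carlo dose calculation scoring flatness and transmission):

```python
def scan_dual_foil(evaluate, foil1_range, foil2_range):
    """Systematic scan over two illustrative dual-foil parameters:
    call `evaluate` at each grid point and keep the best-scoring
    configuration.  Returns (best_score, best_t1, best_t2)."""
    best = None
    for t1 in foil1_range:
        for t2 in foil2_range:
            score = evaluate(t1, t2)   # e.g. combines flatness and transmission
            if best is None or score > best[0]:
                best = (score, t1, t2)
    return best
```

    Because each grid point is independent, the scan parallelises trivially, which is what makes the otherwise computationally intensive approach practical.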

  10. Singular and Regular Implementations of the Hybrid Boundary Node Method

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The hybrid boundary node method (HdBNM) combines a modified function with the moving least squares approximation to form a boundary-only truly meshless method. This paper describes two implementations of the HdBNM, the singular hybrid boundary node method (ShBNM) and the regular hybrid boundary node method (RhBNM). The ShBNM and RhBNM were compared with each other, and the parameters that influence their performance were studied in detail. The convergence rates and their applicability to thin structures were also investigated. The ShBNM and RhBNM are found to be very easy to implement and to efficiently obtain numerical solutions to computational mechanics problems.

  11. Comparing Implementations of Estimation Methods for Spatial Econometrics

    OpenAIRE

    Roger Bivand; Gianfranco Piras

    2015-01-01

    Recent advances in the implementation of spatial econometrics model estimation techniques have made it desirable to compare results, which should correspond between implementations across software applications for the same data. These model estimation techniques are associated with methods for estimating impacts (emanating effects), which are also presented and compared. This review constitutes an up-to-date comparison of generalized method of moments and maximum likelihood...

  12. Observational constraints on G-corrected holographic dark energy using a Markov chain Monte Carlo method

    Science.gov (United States)

    Alavirad, Hamzeh; Malekjani, Mohammad

    2014-02-01

    We constrain holographic dark energy (HDE) with a time varying gravitational coupling constant in the framework of the modified Friedmann equations, using cosmological data from type Ia supernovae, baryon acoustic oscillations, cosmic microwave background radiation and the X-ray gas mass fraction. Applying a Markov chain Monte Carlo (MCMC) simulation, we obtain the best fit values of the model and cosmological parameters, including the HDE constant, within the 1σ confidence level (CL) in a flat universe. Using the best fit values, the equation of state of the dark component at the present time, w_d0, can cross the phantom boundary w = -1 at the 1σ CL.
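
    The workhorse behind such constraints is a random-walk Metropolis sampler. As a generic sketch (not the authors' pipeline), the example below samples a single toy "cosmological parameter" whose likelihood is a Gaussian centred at 0.27 with width 0.02 (both values invented for illustration):

```python
import math, random

def metropolis(logpost, x0, step, n, seed=2):
    """Random-walk Metropolis sampler: propose x' = x + N(0, step) and
    accept with probability min(1, exp(logpost(x') - logpost(x)))."""
    rng = random.Random(seed)
    x, lp = x0, logpost(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = logpost(xp)
        if math.log(rng.random() + 1e-300) < lpp - lp:
            x, lp = xp, lpp            # accept the proposal
        chain.append(x)                # rejected moves repeat the state
    return chain

# toy one-parameter posterior: Gaussian around 0.27 with sigma = 0.02
chain = metropolis(lambda om: -0.5 * ((om - 0.27) / 0.02) ** 2,
                   x0=0.5, step=0.03, n=50_000)
burned = chain[5_000:]                 # discard burn-in
mean = sum(burned) / len(burned)
```

    The posterior mean and the spread of the burned-in chain then provide the best fit value and the 1σ confidence interval of the parameter.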

  13. Random vibration analysis of switching apparatus based on Monte Carlo method

    Institute of Scientific and Technical Information of China (English)

    ZHAI Guo-fu; CHEN Ying-hua; REN Wan-bin

    2007-01-01

    The performance in a vibration environment of switching apparatus containing a mechanical contact is an important element when judging the apparatus's reliability. A piecewise-linear two-degrees-of-freedom mathematical model considering contact loss was built in this work, and the vibration performance of the model under random external Gaussian white noise excitation was investigated by using Monte Carlo simulation in Matlab/Simulink. Simulation showed that the spectral content and statistical characteristics of the contact force agreed closely with reality. The random vibration behavior of the contact system was solved using time-domain (numerical) simulation in this paper. The conclusions reached here are of great importance for the reliability design of switching apparatus.
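    A rough, self-contained sketch of this kind of time-domain Monte Carlo is shown below. It is a one-degree-of-freedom stand-in for the paper's two-DOF Simulink model, with all parameter values invented for illustration: a piecewise-linear oscillator (extra stiffness engages past a threshold, a crude stand-in for contact closure) is driven by discretised Gaussian white noise, and the RMS response and the fraction of time the threshold is exceeded are estimated.

```python
import random

def simulate(noise=5.0, k1=1.0e3, k2=5.0e3, gap=0.02, m=0.1,
             c=2.0, dt=1.0e-4, steps=100_000, seed=7):
    """Symplectic-Euler Monte Carlo of a 1-DOF piecewise-linear
    oscillator; extra stiffness k2 engages when x exceeds `gap`.
    All parameters are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    x = v = 0.0
    sumsq = 0.0
    hits = 0
    for _ in range(steps):
        f = -k1 * x - (k2 * (x - gap) if x > gap else 0.0)
        # Discretised Gaussian white-noise force: std scales as 1/sqrt(dt).
        w = rng.gauss(0.0, noise / dt ** 0.5)
        v += (f - c * v + w) / m * dt
        x += v * dt                 # update x with the new v (symplectic)
        sumsq += x * x
        hits += x > gap
    return (sumsq / steps) ** 0.5, hits / steps

rms_lo, frac_lo = simulate(noise=1.0)
rms_hi, frac_hi = simulate(noise=5.0)
print(rms_lo < rms_hi, round(frac_hi, 2))
```

    As expected, raising the excitation level increases both the RMS displacement and the fraction of time spent past the contact threshold.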

  14. Review of the theory and applications of Monte Carlo methods. Proceedings of a seminar-workshop, Oak Ridge, Tennessee, April 21-23, 1980

    International Nuclear Information System (INIS)

    This report consists of 24 papers which were presented at the seminar on Theory and Application of Monte Carlo Methods, held in Oak Ridge on April 21-23, plus a summary of the three-man panel discussion which concluded the seminar and two papers which were not given orally. These papers constitute a current statement of the state of the art of the theory and application of Monte Carlo methods for radiation transport problems in shielding and reactor physics

  15. CosmoPMC: Cosmology Population Monte Carlo

    CERN Document Server

    Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren

    2011-01-01

    We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance-sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Because the overhead is very small, the wall-clock time required for sampling decreases approximately in proportion to the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Markov chain Monte Carlo (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.

  16. Response of thermoluminescent dosimeters to photons simulated with the Monte Carlo method

    Science.gov (United States)

    Moralles, M.; Guimarães, C. C.; Okuno, E.

    2005-06-01

    Personal monitors composed of thermoluminescent dosimeters (TLDs) made of natural fluorite (CaF2:NaCl) and lithium fluoride (Harshaw TLD-100) were exposed to gamma and X rays of different qualities. The GEANT4 radiation transport Monte Carlo toolkit was employed to calculate the energy-depth deposition profile in the TLDs. X-ray spectra of the ISO/4037-1 narrow-spectrum series, with peak voltage (kVp) values in the range 20-300 kV, were obtained by simulating an X-ray Philips MG-450 tube together with the recommended filters. A realistic photon distribution of a 60Co radiotherapy source was taken from results of Monte Carlo simulations found in the literature. Comparison between simulated and experimental results revealed that the attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account, while this effect is negligible for lithium fluoride. Differences between results obtained by heating the dosimeter from the irradiated side and from the opposite side allowed the determination of the light attenuation coefficient for CaF2:NaCl (mass proportion 60:40) as 2.2 mm^-1.

  17. Simulation of diffuse photon migration in tissue by a Monte Carlo method derived from the optical scattering of spheroids.

    Science.gov (United States)

    Hart, Vern P; Doyle, Timothy E

    2013-09-01

    A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed. PMID:24085080
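    The Henyey-Greenstein phase function mentioned among the compared models has a standard closed-form inverse-CDF sampler, sketched below. The anisotropy g = 0.9 is a typical tissue-like value chosen for illustration; a convenient sanity check is that the sample mean of cos(theta) converges to g.

```python
import random

def sample_hg_cos(g, rng):
    """Sample cos(theta) from the Henyey-Greenstein phase function
    via the standard inverse-CDF formula (g = anisotropy factor)."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = random.Random(0)
g = 0.9                                          # illustrative tissue value
n = 200_000
mean_cos = sum(sample_hg_cos(g, rng) for _ in range(n)) / n
print(round(mean_cos, 2))                        # <cos(theta)> ~ g
```

    In a full photon-migration code this routine would set the deflection angle at each scattering event, with the azimuth drawn uniformly on [0, 2π).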

  18. Applying new hybrid method of analytical hierarchy process, Monte Carlo Simulation and PROMETHEE to prioritize and selecting appropriate target market

    Directory of Open Access Journals (Sweden)

    Amir Kariznoee

    2015-06-01

    Making the decision to choose an appropriate target market is one of the key decisions in the success of firms and has a direct effect on their profits. The aim of this paper is to introduce and apply a new hybrid method of AHP, Monte Carlo simulation and PROMETHEE to prioritize cities for establishing retailers, considering different indices. The problem in this study concerns a factory constructing pre-made building components that is searching for new retailers in different cities to introduce and distribute its new products. To prioritize the cities, the indices were determined through interviews with experts and a study of previous works, and the hierarchy pattern was built. Then, using the hybrid AHP/Monte Carlo simulation method, the weights of the indices were determined, and using the PROMETHEE method the best city was chosen and the other cities were ranked. Among the benefits of the newly introduced hybrid method, compared with other ways of selecting target markets, are decreased risk and increased decision-making power.

  19. Comparison of radiative transfer Monte Carlo and volume integral equation methods of studying the clustering of small scatterers.

    Science.gov (United States)

    Green, K; Lumme, K

    2001-08-01

    The effect of clustering of small scatterers on optical properties was studied by creation of a Poisson distributed plane-parallel geometry and slow cooling of the particle system in the sense of simulated annealing in an attempt to minimize the assumed total potential energy and sample the spatial distribution during the process. The optical properties were calculated by the volume integral equation method (VIEM). The scattering results for unclustered structures with different size parameters and packing densities were also compared with those given by Monte Carlo simulation for radiative transfer. In particular, measuring the intensity distribution of the VIEM is well suited to the classic radiative transfer approach. PMID:18360404

  20. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    International Nuclear Information System (INIS)

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements
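    The exponential transformation described above is easy to demonstrate on a one-dimensional toy problem. The sketch below, a deliberately simplified purely absorbing slab rather than the paper's laminated-shield code, estimates the uncollided transmission exp(-tau) both by analog sampling and with a stretched-exponential flight-length distribution plus weight correction; the biased estimator reaches deep penetrations with far smaller statistical error.

```python
import math
import random

def transmission(tau, n, sigma_star=None, seed=3):
    """Monte Carlo estimate of uncollided transmission exp(-tau) through
    a purely absorbing slab (unit total cross-section, thickness tau).
    With sigma_star < 1 the flight length is sampled from a stretched
    exponential and re-weighted -- the exponential transform."""
    rng = random.Random(seed)
    s = sigma_star or 1.0
    total = total_sq = 0.0
    for _ in range(n):
        x = -math.log(rng.random()) / s          # flight length ~ Exp(s)
        w = math.exp(-(1.0 - s) * x) / s         # importance weight p/q
        score = w if x > tau else 0.0
        total += score
        total_sq += score * score
    mean = total / n
    var = total_sq / n - mean * mean
    return mean, (var / n) ** 0.5                # estimate, std. error

exact = math.exp(-10.0)
analog = transmission(10.0, 100_000)
biased = transmission(10.0, 100_000, sigma_star=0.1)
print(exact, analog, biased)
```

    With tau = 10 the analog estimator scores only a handful of histories, while the transformed one scores roughly a third of them with small weights, cutting the relative error by an order of magnitude or more.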

  1. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method obtains answers by simulating individual particles and recording aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 – γ-ray beams (1-10 MeV), HIMAC and ISIS-800 – high-energy neutron (20-800 MeV) transport in iron and concrete. The results were then compared with experimental data.

  2. Monte Carlo Radiative Transfer

    CERN Document Server

    Whitney, Barbara A

    2011-01-01

    I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.

  3. Neutron and photon doses in high energy radiotherapy facilities and evaluation of shielding performance by Monte Carlo method

    International Nuclear Information System (INIS)

    Highlights: → The MCNP5 code has been used to model a radiotherapy room of an 18 MV linear accelerator. → The neutron and secondary gamma-ray dose equivalents were evaluated at various points inside the treatment room and along the maze. → To reduce the neutron and gamma-ray doses, we have also investigated the radiotherapy room shielding performance. → The use of paraffin wax containing boron carbide indicates much better shielding effects. - Abstract: Medical accelerators operating above 10 MV are a source of undesirable neutron radiation which contaminates the therapeutic photon beam. These photoneutrons can also generate secondary gamma rays, which increase the undesirable dose to the patient body and to personnel and the general public. In this study, the Monte Carlo N-Particle MCNP5 code has been used to model the radiotherapy room of a medical linear accelerator operating at 18 MV and to calculate the neutron and secondary gamma-ray energy spectra and the dose equivalents at various points inside the treatment room and along the maze. To validate our Monte Carlo simulation we compared our results with those evaluated by the recommended analytical methods of IAEA Report No. 47, and with experimental and simulated values published in the literature. After validation, the Monte Carlo simulation was used to evaluate the shielding performance of the radiotherapy room. The obtained results showed that the use of paraffin wax containing boron carbide in the lining of the radiotherapy room walls is effective enough to reduce both neutron and gamma-ray doses inside the treatment room and at the maze entrance. Such an evaluation cannot be performed by the analytical methods, since room material and wall-surface lining are not taken into consideration.

  4. Memristor Crossbar-based Hardware Implementation of IDS Method

    CERN Document Server

    Merrikh-Bayat, Farnood; Rohani, Ali

    2010-01-01

    Ink Drop Spread (IDS) is the engine of the Active Learning Method (ALM), a soft-computing methodology. IDS, as a pattern-based processing unit, extracts useful information from a system subjected to modeling. In spite of its excellent potential for solving problems such as classification and modeling, compared to other soft-computing tools, finding a simple and fast hardware implementation of IDS is still a challenge. This paper describes a new hardware implementation of the IDS method based on the memristor crossbar structure. Besides simplicity, the advantages of our proposed circuit include fully real-time operation, low latency and the ability to continue working after a power breakdown.

  5. Multi-level Monte Carlo finite volume methods for uncertainty quantification of acoustic wave propagation in random heterogeneous layered medium

    Science.gov (United States)

    Mishra, S.; Schwab, Ch.; Šukys, J.

    2016-05-01

    We consider the very challenging problem of efficient uncertainty quantification for acoustic wave propagation in a highly heterogeneous, possibly layered, random medium, characterized by possibly anisotropic, piecewise log-exponentially distributed Gaussian random fields. A multi-level Monte Carlo finite volume method is proposed, along with a novel, bias-free upscaling technique that allows the input random fields, generated using spectral FFT methods, to be represented efficiently. Combined with a recently developed dynamic load-balancing algorithm that scales to massively parallel computing architectures, the proposed method is able to robustly compute uncertainty for highly realistic random subsurface formations that can contain a very large number (millions) of sources of uncertainty. Numerical experiments, in both two and three space dimensions, illustrating the efficiency of the method are presented.
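    The multi-level idea itself fits in a few lines: sample the coarse discretisation heavily and add corrections from finer levels with progressively fewer samples. The toy below uses a midpoint-rule quadrature with a random parameter as a stand-in for the paper's finite-volume solves; all level and sample counts are illustrative. It estimates E[∫₀¹ dx/(1+ux)] for u ~ U(0,1), whose exact value is π²/12.

```python
import math
import random

def midpoint(u, level):
    """Midpoint-rule approximation of integral_0^1 dx/(1+u*x)
    with 2**level cells -- the 'level-l' discretisation."""
    n = 2 ** level
    h = 1.0 / n
    return h * sum(1.0 / (1.0 + u * h * (i + 0.5)) for i in range(n))

def mlmc(levels=5, n0=40_000, seed=5):
    """Multi-level Monte Carlo: coarse level sampled heavily,
    telescoping corrections with ever fewer samples per level."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels):
        n_l = max(n0 // 4 ** l, 100)
        acc = 0.0
        for _ in range(n_l):
            u = rng.random()
            # Telescoping term: same random input on both grids.
            acc += midpoint(u, l) - (midpoint(u, l - 1) if l else 0.0)
        est += acc / n_l
    return est

exact = math.pi ** 2 / 12        # E[ln(1+u)/u] for u ~ U(0,1)
print(round(mlmc(), 3), round(exact, 3))
```

    Because the level-difference variance decays rapidly with level, most of the work is spent on the cheap coarse grid, which is the source of the MLMC speed-up.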

  6. Simulation of the functioning of a gamma camera using Monte Carlo method; Simulacion del funcionamiento de una camara gamma mediante metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Oramas Polo, I.

    2014-07-01

    This paper presents the simulation of the Park Isocam II gamma camera with the Monte Carlo code SIMIND. This simulation allows a detailed assessment of the functioning of the gamma camera. The parameters evaluated by means of the simulation are: the intrinsic uniformity with different window amplitudes, the system uniformity, the extrinsic spatial resolution, the maximum count rate, the intrinsic sensitivity, the system sensitivity, the energy resolution and the pixel size. The results of the simulation are compared and evaluated against the specifications of the manufacturer of the gamma camera, taking into account the National Protocol for Quality Control of Nuclear Medicine Instruments of the Cuban Medical Equipment Control Center. The simulation reported here demonstrates the validity of the SIMIND Monte Carlo code for evaluating the performance of the Park Isocam II gamma camera, and as a result a computational model of the camera has been obtained. (Author)

  7. Simulation of Cone Beam CT System Based on Monte Carlo Method

    CERN Document Server

    Wang, Yu; Cao, Ruifen; Hu, Liqin; Li, Bingbing

    2014-01-01

    Adaptive Radiation Therapy (ART) was developed from Image-Guided Radiation Therapy (IGRT) and is the trend in photon radiation therapy. To make better use of Cone Beam CT (CBCT) images for ART, a CBCT system model was established with a Monte Carlo program and validated against measurement. The BEAMnrc program was adopted to model the kV X-ray tube. Both ISOURCE-13 and ISOURCE-24 were chosen to simulate the paths of the beam particles. The measured Percentage Depth Dose (PDD) and lateral dose profiles under 1 cm of water were compared with the dose calculated by the DOSXYZnrc program. The calculated PDD agreed to better than 1% within a depth of 10 cm, and more than 85% of the points on the calculated lateral dose profiles agreed within 2%. A correct CBCT system model helps to improve CBCT image quality for dose verification in ART and to assess the concomitant dose risk of CBCT imaging.

  8. Observational constraints on G-corrected holographic dark energy using a Markov chain Monte Carlo method

    CERN Document Server

    Alavirad, Hamzeh

    2013-01-01

    We constrain holographic dark energy (HDE) with time varying gravitational coupling constant in the framework of the modified Friedmann equations using cosmological data from type Ia supernovae, baryon acoustic oscillations, cosmic microwave background radiation and X-ray gas mass fraction. Applying a Markov Chain Monte Carlo (MCMC) simulation, we obtain the best fit values of the model and cosmological parameters within $1\\sigma$ confidence level (CL) in a flat universe as: $\\Omega_{\\rm b}h^2=0.0222^{+0.0018}_{-0.0013}$, $\\Omega_{\\rm c}h^2 =0.1121^{+0.0110}_{-0.0079}$, $\\alpha_{\\rm G}\\equiv \\dot{G}/(HG) =0.1647^{+0.3547}_{-0.2971}$ and the HDE constant $c=0.9322^{+0.4569}_{-0.5447}$. Using the best fit values, the equation of state of the dark component at the present time $w_{\\rm d0}$ at $1\\sigma$ CL can cross the phantom boundary $w=-1$.

  9. Ground-state properties of LiH by reptation quantum Monte Carlo methods.

    Science.gov (United States)

    Ospadov, Egor; Oblinsky, Daniel G; Rothstein, Stuart M

    2011-05-01

    We apply reptation quantum Monte Carlo to calculate one- and two-electron properties for ground-state LiH, including all tensor components for static polarizabilities and hyperpolarizabilities to fourth-order in the field. The importance sampling is performed with a large (QZ4P) STO basis set single determinant, directly obtained from commercial software, without incurring the overhead of optimizing many-parameter Jastrow-type functions of the inter-electronic and internuclear distances. We present formulas for the electrical response properties free from the finite-field approximation, which can be problematic for the purposes of stochastic estimation. The α, γ, A and C polarizability values are reasonably consistent with recent determinations reported in the literature, where they exist. A sum rule is obeyed for components of the B tensor, but B(zz,zz) as well as β(zzz) differ from what was reported in the literature. PMID:21445452

  10. CORAL software: prediction of carcinogenicity of drugs by means of the Monte Carlo method.

    Science.gov (United States)

    Toropova, Alla P; Toropov, Andrey A

    2014-02-14

    The methodology of building up and validating models for the carcinogenic potential of drugs by means of the CORAL software is described. QSAR analysis with the CORAL software includes three phases: (i) definition of preferable parameters for the optimization procedure that give the maximal correlation coefficient between the endpoint and an optimal descriptor calculated with so-called correlation weights of various molecular features; (ii) detection of molecular features with stable positive correlation weights or, vice versa, stable negative correlation weights (molecular features characterized by solely positive or solely negative correlation weights over several starts of the Monte Carlo optimization are a basis for mechanistic interpretation of the model); and (iii) building up a model that is satisfactory from the point of view of reliable probabilistic criteria and the OECD principles. The methodology is demonstrated for the case of carcinogenicity of a large set (n = 1464) of organic compounds which are potential or actual pharmaceutical agents.
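    A stripped-down sketch of phase (i), the Monte Carlo optimisation of correlation weights, is shown below. The molecules, feature labels and endpoint values are entirely hypothetical, and the simple accept-if-improved perturbation loop merely illustrates the idea of tuning per-feature weights to maximise the descriptor-endpoint correlation; it is not the real CORAL algorithm.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def optimize_weights(features, endpoint, n_iter=3000, seed=11):
    """Toy CORAL-style optimisation: a molecule's descriptor is the sum
    of per-feature correlation weights; weights are perturbed at random
    and kept only if the correlation with the endpoint improves."""
    rng = random.Random(seed)
    labels = sorted({f for mol in features for f in mol})
    w = {f: 1.0 for f in labels}
    def r():
        return pearson([sum(w[f] for f in mol) for mol in features],
                       endpoint)
    best = r()
    for _ in range(n_iter):
        f = rng.choice(labels)
        old = w[f]
        w[f] += rng.uniform(-0.5, 0.5)
        new = r()
        if new > best:
            best = new          # keep the improving perturbation
        else:
            w[f] = old          # revert otherwise
    return w, best

# Hypothetical data: the endpoint depends mostly on feature "a".
mols = [["a", "b"], ["a"], ["b", "c"], ["c"], ["a", "c"], ["b"]]
y = [2.1, 1.9, 0.3, 0.1, 2.2, 0.4]
w, best_r = optimize_weights(mols, y)
print(round(best_r, 2))
```

    The real software additionally validates the weights across several optimisation restarts (phase ii) and against external probabilistic criteria (phase iii).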

  11. Study of CANDU thorium-based fuel cycles by deterministic and Monte Carlo methods

    International Nuclear Information System (INIS)

    In the framework of the Generation IV forum, there is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten Salt Reactors [1, 2] or High Temperature Reactors [3, 4]. Precise evaluations of the U-233 production potential relying on existing reactors such as PWRs [5] or CANDUs [6] are hence necessary. As a consequence of its design (online refueling and D2O moderator in a thermal spectrum), the CANDU reactor has moreover an excellent neutron economy and consequently a high fissile conversion ratio [7]. For these reasons, we try here, with a shorter term view, to re-evaluate the economic competitiveness of once-through thorium-based fuel cycles in CANDU [8]. Two simulation tools are used: the deterministic Canadian cell code DRAGON [9] and MURE [10], a C++ tool for reactor evolution calculations based on the Monte Carlo code MCNP [11]. (authors)

  12. Calculation and analysis of heat source of PWR assemblies based on Monte Carlo method

    International Nuclear Information System (INIS)

    When fission occurs in the nuclear fuel of a reactor core, numerous neutrons and γ rays are released; their energy is deposited in the fuel components, producing effects such as thermal stress and radiation damage that influence the safe operation of the reactor. The three-dimensional Monte Carlo transport code MCNP, with continuous-energy cross-section libraries based on the ENDF/B series, was used to calculate the heating rate of the heat sources in reference assemblies of a PWR loaded in an 18-month short refueling cycle mode, and to obtain precise values for the control rod, thimble plug and new Gd-bearing burnable poison rod, so as to provide a basis for reactor design and safety verification. (authors)

  13. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    Energy Technology Data Exchange (ETDEWEB)

    McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent’s coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
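    The assign-a-distribution, sample, evaluate-the-ensemble loop described above is generic and can be sketched in a few lines. In the toy below the model, a carbon-14-style travel-time formula t = 8267·ln(A0/A), and all means and coefficients of variation are hypothetical illustrations, not values from the NETPATH study.

```python
import math
import random
import statistics

def propagate(model, params, n=20_000, seed=2):
    """Parametric Monte Carlo: sample each uncertain input from a normal
    distribution given by (mean, coefficient of variation), evaluate the
    model, and summarise the ensemble output."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        sample = {k: rng.gauss(mu, mu * cv) for k, (mu, cv) in params.items()}
        outs.append(model(sample))
    return statistics.mean(outs), statistics.stdev(outs)

# Hypothetical stand-in for a carbon-14 travel-time calculation:
# t = 8267 * ln(A0/A) years (8267 a = mean life of carbon-14).
model = lambda p: 8267.0 * math.log(p["A0"] / p["A"])
mean_t, sd_t = propagate(model, {"A0": (100.0, 0.02), "A": (60.0, 0.05)})
print(round(mean_t), round(sd_t))
```

    The output standard deviation directly quantifies how the assumed input uncertainty propagates into the travel-time estimate, which is the role the custom scripts play around NETPATH.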

  14. Possibilities of implementing nonthermal processing methods in the dairy industry

    Directory of Open Access Journals (Sweden)

    Irena Jeličić

    2010-06-01

    In the past two decades a lot of research in the field of food science has focused on new, non-thermal processing methods. This article describes the most intensively investigated new processing methods for implementation in the dairy industry, such as microfiltration, high hydrostatic pressure, ultrasound and pulsed electric fields. For each method an overview is given of the principle of microbial inactivation, the results obtained regarding the reduction of microorganisms, and the positive and undesirable effects on milk composition and characteristics. The most promising methods for further implementation in the dairy industry appear to be the combination of moderate temperatures with high hydrostatic pressure or pulsed electric fields, and microfiltration, since these treatments did not result in any undesirable changes in the sensory properties of milk. Additionally, milk treatment with these methods resulted in better milk-fat homogenization, faster rennet coagulation, shorter duration of milk fermentations, etc. Very good results regarding microbial inactivation were obtained by treating milk with a combination of moderate temperatures and high-intensity ultrasound, a process also called thermosonication. However, thermosonication treatments often result in undesirable changes in milk sensory properties, most probably due to ultrasound-induced milk-fat oxidation. This article also briefly describes the use of natural compounds with antimicrobial effects, such as bacteriocins, the lactoperoxidase system and lysozyme; however, their implementation is limited for reasons such as high costs, interaction with other food ingredients, poor solubility, narrow activity spectrum, spontaneous loss of bacteriocinogenicity, etc. In addition, the principles of the antimicrobial effect of microwaves and ultraviolet irradiation are described; their implementation in the dairy industry has failed mostly for technical and commercial reasons.

  15. A benchmark study of the two-dimensional Hubbard model with auxiliary-field quantum Monte Carlo method

    CERN Document Server

    Qin, Mingpu; Zhang, Shiwei

    2016-01-01

    Ground-state properties of the Hubbard model on a two-dimensional square lattice are studied by the auxiliary-field quantum Monte Carlo method. Accurate results for the energy, double occupancy, effective hopping, magnetization, and momentum distribution are calculated for interaction strengths U/t from 2 to 8, for a range of densities including half-filling and n = 0.3, 0.5, 0.6, 0.75, and 0.875. At half-filling, the results are numerically exact. Away from half-filling, the constrained-path Monte Carlo method is employed to control the sign problem. Our results are obtained with several advances in the computational algorithm, which are described in detail. We discuss the advantages of generalized Hartree-Fock trial wave functions and their connection to pairing wave functions, as well as the interplay with different forms of Hubbard-Stratonovich decompositions. We study the use of different twist-angle sets when applying the twist-averaged boundary conditions. We propose the use of quasi-random sequences, whi...

  16. Crystal-structure prediction via the Floppy-Box Monte Carlo algorithm: Method and application to hard (non)convex particles

    Science.gov (United States)

    de Graaf, Joost; Filion, Laura; Marechal, Matthieu; van Roij, René; Dijkstra, Marjolein

    2012-12-01

    In this paper, we describe the way to set up the floppy-box Monte Carlo (FBMC) method [L. Filion, M. Marechal, B. van Oorschot, D. Pelt, F. Smallenburg, and M. Dijkstra, Phys. Rev. Lett. 103, 188302 (2009), 10.1103/PhysRevLett.103.188302] to predict crystal-structure candidates for colloidal particles. The algorithm is explained in detail to ensure that it can be straightforwardly implemented on the basis of this text. The handling of hard-particle interactions in the FBMC algorithm is given special attention, as (soft) short-range and semi-long-range interactions can be treated in an analogous way. We also discuss two types of algorithms for checking for overlaps between polyhedra, the method of separating axes and a triangular-tessellation based technique. These can be combined with the FBMC method to enable crystal-structure prediction for systems composed of highly shape-anisotropic particles. Moreover, we present the results for the dense crystal structures predicted using the FBMC method for 159 (non)convex faceted particles, on which the findings in [J. de Graaf, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 107, 155501 (2011), 10.1103/PhysRevLett.107.155501] were based. Finally, we comment on the process of crystal-structure prediction itself and the choices that can be made in these simulations.
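    The method of separating axes mentioned above reduces, in two dimensions, to projecting both polygons onto each edge normal and looking for a gap. The snippet below is a minimal 2-D sketch of that core idea only; the FBMC overlap checks are three-dimensional and must also consider cross products of edge directions for general polyhedra.

```python
def project(poly, axis):
    """Interval covered by a polygon's vertices projected onto an axis."""
    dots = [x * axis[0] + y * axis[1] for x, y in poly]
    return min(dots), max(dots)

def convex_overlap(p, q):
    """Separating-axes overlap test for 2-D convex polygons given as
    ordered vertex lists: if the projections onto every edge normal
    overlap, the polygons intersect; one gap proves separation."""
    for poly in (p, q):
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            axis = (y1 - y2, x2 - x1)          # normal to this edge
            amin, amax = project(p, axis)
            bmin, bmax = project(q, axis)
            if amax < bmin or bmax < amin:
                return False                    # separating axis found
    return True

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
far = [(2, 2), (3, 2), (3, 3), (2, 3)]
touching = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
print(convex_overlap(square, far), convex_overlap(square, touching))
# → False True
```

    Inside an FBMC move, a test like this runs between the trial particle and all periodic images, and the move is rejected whenever any hard-particle overlap is detected.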

  17. Monte-Carlo method for studying the slowing down of neutrons in a thin plate of hydrogenated matter

    International Nuclear Information System (INIS)

    Studies of the interaction of slow neutrons with atomic nuclei by time-of-flight methods are made with a pulsed neutron source with a broad energy spectrum. Measurement accuracy requires high intensity and an output time that is as short as possible and well defined. If the neutron source is a target bombarded by the beam of a pulsed accelerator, the neutrons usually must be slowed down to obtain a sufficient intensity at low energies. The purpose of the Monte Carlo method described in this paper is to study the slowing-down properties, mainly the intensity and the output-time distribution of the slowed-down neutrons. The choice of the method and of the parameters studied is explained, as well as the principles, some calculations and the program organization. A few results, given as examples, were obtained with this program, whose limitations are mainly due to simplifying physical hypotheses. (author)
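    For hydrogen the slowing-down kinematics are simple enough to sketch directly: with mass number A = 1, the post-collision energy is uniformly distributed on (0, E], so the mean lethargy gain per collision is 1 and the mean number of collisions from E0 down to E is roughly ln(E0/E) + 1. The toy estimator below (source energy and cut-off chosen for illustration, geometry and timing ignored) checks this.

```python
import random

def collisions_to_thermal(e0=2.0e6, e_cut=1.0, rng=random.Random(9)):
    """Count elastic collisions needed to slow a neutron from e0 to
    e_cut (eV) in hydrogen: after each collision the energy is
    uniformly distributed on (0, E] for A = 1."""
    e, n = e0, 0
    while e > e_cut:
        e *= rng.random()       # E' = u * E, u ~ U(0, 1)
        n += 1
    return n

n_hist = 20_000
mean_n = sum(collisions_to_thermal() for _ in range(n_hist)) / n_hist
print(round(mean_n, 1))         # close to ln(2e6) + 1 ~ 15.5
```

    A full slowing-down code adds the flight time between collisions and the moderator geometry, which is what produces the output-time distribution studied in the paper.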

  18. Two Methods of AES Implementation Based on CPLD/FPGA

    Institute of Scientific and Technical Information of China (English)

    刘常澍; 彭艮鹏; 王晓卓

    2004-01-01

    This paper describes two single-chip CPLD/FPGA (complex programmable logic device / field-programmable gate array) implementations of the new Advanced Encryption Standard (AES) algorithm, based on the basic iteration architecture (design [A]) and the hybrid pipelining architecture (design [B]). Design [A] is an encryption-and-decryption implementation based on the basic iteration architecture. This design not only supports 128-bit, 192-bit and 256-bit keys, but also saves hardware resources thanks to the iteration architecture and sharing technology. Design [B] uses a 2×2 hybrid pipelining architecture. Based on the AES interleaved mode of operation, the design successfully implements the algorithm operating in a feedback mode (cipher block chaining). It not only guarantees the security of encryption/decryption, but also achieves a high data throughput of 1.05 Gb/s. The two designs have been realized on Altera's EP20K300EBC652-1 devices.

  19. Startup of “CANDLE” burnup in a Gas-cooled Fast Reactor using Monte Carlo method

    International Nuclear Information System (INIS)

    Highlights: ► In the equilibrium state of a CANDLE core, the burning region contains fission products and actinides. ► These isotopes are not easily available. The solution is startup of a reactor using easily available materials. ► At the end of the core life the fuel for the equilibrium core is produced. ► In this work the startup of a CANDLE-GFR has been evaluated using a Monte Carlo technique. ► The results show that the equilibrium state could be achieved after some minor transients. -- Abstract: During the past decade, the CANDLE burnup strategy has been proposed as an innovative fuel cycle and reactor design for complete utilization of uranium resources. In this strategy the shapes of the neutron flux, nuclide densities and power density distribution remain constant while the burning region moves in the axial direction. The feasibility of this strategy has been widely demonstrated using the diffusion technique in conjunction with nuclide transmutation equations. On the other hand, since the Monte Carlo method provides the exact solution to the neutron transport equation, the Monte Carlo technique is becoming more widely used in routine burnup calculations. The main objective of this work is the startup of CANDLE burnup in a Gas-cooled Fast Reactor using a Monte Carlo burnup scheme. In this case only natural or depleted uranium is required for the fresh fuel region. However, the construction of the first CANDLE core faces a major problem: in the equilibrium state the burning region contains a spectrum of fission products as well as higher actinides. These isotopes are not easily available for constructing the initial CANDLE core. The solution is startup of a special reactor using enriched uranium in the starter zone. At the end of the core life the fuel for the next core is produced with a composition close to the equilibrium state. An original MCNP-ORIGEN linkage program named MOBC has been used for criticality and isotopic evaluation of the core. The results of the analysis showed that the equilibrium state could be achieved after some minor transients.

  20. Final Technical Report - Large Deviation Methods for the Analysis and Design of Monte Carlo Schemes in Physics and Chemistry - DE-SC0002413

    Energy Technology Data Exchange (ETDEWEB)

    Dupuis, Paul [Brown University

    2014-03-14

    This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.
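
    The second problem class - estimators whose variance is dominated by a rare event - is commonly attacked with importance sampling, one of the techniques that such large-deviation analyses are used to design. A minimal sketch (not from the report): estimating the Gaussian tail probability P(X > 4) by sampling from a distribution exponentially tilted onto the rare set.

```python
import math, random

def tail_prob_is(a=4.0, n=20000, seed=0):
    """Importance-sampling estimate of P(X > a) for X ~ N(0, 1): sample
    from N(a, 1), which concentrates on the rare set, and reweight each
    hit by the likelihood ratio phi(y)/phi_a(y) = exp(a*a/2 - a*y)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)            # tilted proposal centred at a
        if y > a:
            total += math.exp(a * a / 2.0 - a * y)
    return total / n

est = tail_prob_is()
```

    The exact value is about 3.17e-5, so crude Monte Carlo with 20,000 samples would typically see zero hits; the tilted estimator hits the rare set on roughly half its samples, and large-deviation theory identifies this exponential tilt as the asymptotically efficient choice.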

  1. Fitting a distribution to censored contamination data using Markov Chain Monte Carlo methods and samples selected with unequal probabilities.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D

    2014-11-18

    The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
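
    The weighted-bootstrap idea can be sketched as follows (an illustration of the general principle only; the paper's estimator for censored MPN data is more involved): units selected with unequal probabilities are resampled with weights proportional to the inverse of their selection probabilities before the statistic is recomputed on each resample.

```python
import random, statistics

def weighted_bootstrap_mean(values, sel_prob, n_boot=500, seed=0):
    """Sketch of a weighted bootstrap: units sampled with unequal
    selection probabilities pi_i are resampled with probability
    proportional to 1/pi_i, and the statistic (here the mean) is
    recomputed on each resample."""
    rng = random.Random(seed)
    w = [1.0 / p for p in sel_prob]      # Horvitz-Thompson style weights
    stats = []
    for _ in range(n_boot):
        resample = rng.choices(values, weights=w, k=len(values))
        stats.append(statistics.fmean(resample))
    return statistics.fmean(stats), statistics.stdev(stats)
```

    If risk-based sampling preferentially selects heavily contaminated units, the unweighted sample mean is biased upward; down-weighting those units by their inverse selection probability corrects the resampling distribution.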

  2. Quantum Monte Carlo with Variable Spins

    CERN Document Server

    Melton, Cody A; Mitas, Lubos

    2016-01-01

    We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof of an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time-step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn$_2$ molecules, as well as the electron affinities of the 6$p$-row elements, in close agreement with experiment.

  3. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy; Uso de tecnicas Monte Carlo para determinacao de curvas de isodose em braquiterapia

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor, with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malign cells are less differentiated than normal ones, they are more sensitive to radiation; this is the basis of radiotherapy techniques. Institutes that work with high dose rate applications use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. Knowing the characteristics of the source and the tumor, it is possible to trace isodose curves with the information necessary for planning brachytherapy in patients. The objective of this work is to use Monte Carlo techniques to develop a computer program - ISODOSE - which determines isodose curves around the linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent in analytic solutions such as, for instance, the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)
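
    The core of such an isodose calculation can be sketched as follows (a toy construction, not the ISODOSE program): emission points are sampled uniformly along a linear source and inverse-square contributions are tallied at each field point, ignoring attenuation and scatter.

```python
import math, random

def make_dose_kernel(half_len=2.0, nsrc=5000, seed=1):
    """Toy Monte Carlo kernel for a line source on the z-axis spanning
    [-half_len, half_len]: emission points are sampled uniformly along
    the source and inverse-square contributions are scored at the field
    point (r, z).  Attenuation and scatter are ignored -- just enough to
    trace isodose contours around the source."""
    rng = random.Random(seed)
    zs = [rng.uniform(-half_len, half_len) for _ in range(nsrc)]
    def dose(r, z):
        # mean of 1/d^2 over the sampled emission points
        return sum(1.0 / (r * r + (z - z0) ** 2) for z0 in zs) / nsrc
    return dose

dose = make_dose_kernel()
# For an ideal line source of half-length L the exact midplane value is
# atan(L/r)/(L*r); the Monte Carlo kernel should agree to about 1%.
```

    Evaluating `dose` on a grid and joining points of equal value yields the isodose curves; a realistic code would add attenuation, scatter and source encapsulation, which is where Monte Carlo outperforms the singular analytic integrals mentioned above.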

  4. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NARCIS (Netherlands)

    Schoups, G.H.W.; Vrugt, J.A.; Fenicia, F.; Van de Giesen, N.C.

    2010-01-01

    Conceptual rainfall‐runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first‐order, explicit, fixed‐step integration methods leads to computationally cheap simulation models that are e

  5. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NARCIS (Netherlands)

    G. Schoups; J.A. Vrugt; F. Fenicia; N.C. van de Giesen

    2010-01-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are e

  6. Monte Carlo burnup code acceleration with the correlated sampling method. Preliminary test on an UOX cell with TRIPOLI-4®

    Energy Technology Data Exchange (ETDEWEB)

    Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M. [Commissariat a l' Energie Atomique et aux Energies Alternatives CEA, Service d' Etude des Reacteurs et de Mathematiques Appliquees, DEN/DANS/DM2S/SERMA/LTSD, F91191 Gif-sur-Yvette cedex (France)

    2013-07-01

    For several years, Monte Carlo burnup/depletion codes have appeared that couple a Monte Carlo code, to simulate the neutron transport, with a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the expensive Monte Carlo solver called at each time step. Great improvements in terms of calculation time could therefore be expected if one could avoid the repeated Monte Carlo transport sequences. For example, it may be interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code to replace the subsequent time/burnup steps (the later burnup steps are treated as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
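
    The correlated-sampling idea can be sketched on a one-dimensional transmission problem (an illustration, not the TRIPOLI-4 implementation): reusing the same random numbers for the nominal and the perturbed cross-section makes the two estimates share their noise, so their difference is far less noisy than with two independent runs.

```python
import math, random

def transmission_diff(sigma0=1.0, sigma1=1.05, t=2.0, n=100000, seed=0):
    """Correlated sampling: the same uniform random number drives both
    the nominal (sigma0) and the perturbed (sigma1) history, so the two
    transmission estimates are strongly correlated and their difference
    has low variance."""
    rng = random.Random(seed)
    hit0 = hit1 = 0
    for _ in range(n):
        u = rng.random()
        # free path x = -ln(u)/sigma; the particle crosses a slab of
        # thickness t when x > t, i.e. when u < exp(-sigma * t)
        if u < math.exp(-sigma0 * t):
            hit0 += 1
        if u < math.exp(-sigma1 * t):
            hit1 += 1
    return hit0 / n, hit1 / n
```

    Here every history counted for the perturbed case also counts for the nominal one, so the difference estimator only sees the narrow band of histories the perturbation actually changes - the same reason a concentration perturbation can replace a full independent transport run.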

  7. Evaluation of occupational exposure in interventionist procedures using Monte Carlo Method; Avaliacao das exposicoes dos envolvidos em procedimentos intervencionistas usando metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Santos, William S.; Neves, Lucio P.; Perini, Ana P.; Caldas, Linda V.E., E-mail: williathan@yahoo.com.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Belinato, Walmir; Maia, Ana F. [Universidade Federal de Sergipe (UFS), Sao Cristovao, SE (Brazil). Departamento de Fisica

    2014-07-01

    This study presents a computational model of exposure for the patient, cardiologist and nurse in a typical scenario of cardiac interventional procedures. A set of conversion coefficients (CC) for effective dose (E) in terms of kerma-area product (KAP) was calculated for all individuals involved, using seven different energy spectra and eight beam projections. CCs were also calculated for the entrance skin dose (ESD) of the patient, normalized to the KAP. All individuals were represented by anthropomorphic phantoms incorporated into a radiation transport code based on Monte Carlo simulation. (author)

  8. Atmospheric correction of Earth-observation remote sensing images by Monte Carlo method

    Indian Academy of Sciences (India)

    Hanane Hadjit; Abdelaziz Oukebdane; Ahmad Hafid Belbachir

    2013-10-01

    In Earth observation, atmospheric particles severely contaminate, through absorption and scattering, the electromagnetic signal reflected from the Earth's surface. It would be greatly beneficial for land-surface characterization if these atmospheric effects could be removed from the imagery to retrieve a surface reflectance that characterizes the surface properties; this is the purpose of atmospheric correction. Given the geometric parameters of the studied image and estimates of the parameters describing the state of the atmosphere, it is possible to evaluate the atmospheric reflectance and the upward and downward transmittances that degrade the data obtained from the image. To that end, an atmospheric correction algorithm for high-spectral-resolution data over land surfaces has been developed. It is designed to obtain the main atmospheric parameters needed for image correction and for the interpretation of optical observations, and it estimates the optical characteristics of Earth-observation imagery (LANDSAT and SPOT). The physics underlying the problem of solar radiation propagation, which takes into account multiple scattering and the sphericity of the atmosphere, has been treated using Monte Carlo techniques.
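
    The Monte Carlo treatment of multiple scattering can be sketched with a toy plane-parallel model (a deliberate simplification of this summary: isotropic scattering, no sphericity, a single homogeneous layer):

```python
import math, random

def transmittance(tau_star=0.5, omega=0.9, n=20000, seed=2):
    """Toy plane-parallel atmosphere: photons enter at the top moving
    straight down; optical path lengths are sampled as -ln(u); scattering
    is isotropic with single-scattering albedo omega.  Returns the
    fraction of photons reaching the surface (direct + diffuse)."""
    rng = random.Random(seed)
    reached = 0
    for _ in range(n):
        tau, mu = 0.0, 1.0               # depth from the top; mu = cos(zenith), +1 is down
        while True:
            tau += mu * -math.log(rng.random())   # optical path to the next event
            if tau >= tau_star:
                reached += 1             # photon reaches the surface
                break
            if tau <= 0.0:
                break                    # escaped back to space
            if rng.random() > omega:
                break                    # absorbed
            mu = rng.uniform(-1.0, 1.0)  # isotropic scattering
    return reached / n
```

    With omega = 0 the result collapses to the Beer-Lambert direct beam exp(-tau*); with scattering switched on, the diffuse component the correction algorithm must remove appears as the surplus over that direct term.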

  9. Specific absorbed fractions of electrons and photons for Rad-HUMAN phantom using Monte Carlo method

    Institute of Scientific and Technical Information of China (English)

    WANG Wen; CHENG Meng-Yun; LONG Peng-Cheng; HU Li-Qin

    2015-01-01

    The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons was calculated using the Rad-HUMAN phantom, which is a computational voxel phantom of a Chinese adult female that was created using the color photographic images of the Chinese Visible Human (CVH) data set by the FDS Team. The model can represent most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate the difference in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons of 10 keV to 4 MeV energy was calculated using the Monte Carlo particle transport calculation code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from the Rad-HUMAN have similar trends but are larger than those from the other two models. The differences were due to racial and anatomical differences in organ mass and inter-organ distance. The SAFs based on the Rad-HUMAN phantom provide accurate and reliable data for internal radiation dose calculations for Chinese females.

  10. Size dependence study of the ordering temperature in the Fast Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Velasquez, E. A., E-mail: eavelas@gmail.com [Universidad de San Buenaventura Seccional Medellin, Grupo de Investigacion en Modelamiento y Simulacion Computacional, Facultad de Ingenierias (Colombia); Mazo-Zuluaga, J., E-mail: johanmazo@gmail.com [Universidad de Antioquia, Grupo de Estado Solido, Grupo de Instrumentacion Cientifica y Microelectronica, Instituto de Fisica-FCEN (Colombia); Mejia-Lopez, J., E-mail: jmejia@puc.cl [Universidad de Antioquia, Instituto de Fisica-FCEN (Colombia)

    2013-02-15

    Based on the framework of the Fast Monte Carlo approach, we study the diameter dependence of the ordering temperature in magnetic nanostructures of cylindrical shape. For the purposes of this study, Fe cylindrical-shaped samples of different sizes (20 nm height, 30-100 nm in diameter) have been chosen, and their magnetic properties have been computed as functions of the scaled temperature. Two main sets of results are obtained: (a) the ordering temperature of the nanostructures follows a linear scaling relationship as a function of the scaling factor x for all the studied sizes. This finding rules out the scaling relation T'_c = x^(3η) T_c (where η is a scaling exponent, and T'_c and T_c are the scaled and true ordering temperatures) that has been proposed in the literature, and suggests that temperature should scale linearly with the scaling factor x. (b) For the nanostructures, there are three different order-disorder magnetic transition modes depending on the system's size, in very good agreement with previous experimental reports.

  11. Specific Absorbed Fractions of Electrons and Photons for Rad-HUMAN Phantom Using Monte Carlo Method

    CERN Document Server

    Wang, Wen; Long, Peng-cheng; Hu, Li-qin

    2014-01-01

    The specific absorbed fractions (SAF) for self- and cross-irradiation are effective tools for the internal dose estimation of inhalation and ingestion intakes of radionuclides. A set of SAFs of photons and electrons was calculated using the Rad-HUMAN phantom, a computational voxel phantom of a Chinese adult female created using the color photographic images of the Chinese Visible Human (CVH) data set. The model can represent most Chinese adult female anatomical characteristics and can be taken as an individual phantom to investigate the difference in internal dose with respect to Caucasians. In this study, the emission of mono-energetic photons and electrons of 10 keV to 4 MeV energy was calculated using the Monte Carlo particle transport calculation code MCNP. Results were compared with the values from the ICRP reference and ORNL models. The results showed that SAFs from Rad-HUMAN have similar trends but are larger than those from the other two models. The differences were due to the racial and anatomical differences in o...

  12. Neutronic performance calculations with alternative fluids in a hybrid reactor by using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Guenay, Mehtap [Malatya Univ. (Turkey). Physics Department

    2015-03-15

    In this study, salt-heavy metal mixtures consisting of 93-85% Li₂₀Sn₈₀ + 5% SFG-PuO₂ + 2-10% UO₂, 93-85% Li₂₀Sn₈₀ + 5% SFG-PuO₂ + 2-10% NpO₂, and 93-85% Li₂₀Sn₈₀ + 5% SFG-PuO₂ + 2-10% UCO were used as fluids. The fluids were used in the liquid first wall, blanket, and shield zones of a fusion-fission hybrid reactor system. A beryllium (Be) zone 3 cm wide was placed between the liquid first wall and the blanket for neutron multiplication. 9Cr2WVTa ferritic steel 4 cm thick was used as the structural material. The contributions of each isotope in the fluids to the nuclear parameters of the fusion-fission hybrid reactor, such as the tritium breeding ratio (TBR), the energy multiplication factor (M), and the heat deposition rate, were calculated in the liquid first wall, blanket, and shield zones. Three-dimensional analyses were performed using the Monte Carlo code MCNPX-2.7.0 and the nuclear data library ENDF/B-VII.0.

  13. Modeling the Thermal Conductivity of Nanocomposites Using Monte-Carlo Methods and Realistic Nanotube Configurations

    Science.gov (United States)

    Bui, Khoa; Papavassiliou, Dimitrios

    2012-02-01

    The effective thermal conductivity (Keff) of carbon nanotube (CNT) composites is affected by the thermal boundary resistance (TBR) and by the dispersion pattern and geometry of the CNTs. We have previously modeled CNTs as straight cylinders and found that the TBR between CNTs (TBRCNT-CNT) can suppress Keff at high volume fractions of CNTs [1]. Effective medium theory results assume that the CNTs are in a perfect dispersion state and exclude the TBRCNT-CNT [2]. In this work, we report on the development of an algorithm for generating CNTs with worm-like geometry in 3D, and with different persistence lengths. These worm-like CNTs are then randomly placed in a periodic box representing a realistic state, since the persistence length of a CNT can be obtained from microscopic images. The use of these CNT geometries in conjunction with off-lattice Monte Carlo simulations [1] in order to study the effective thermal properties of nanocomposites will be discussed, as well as the effects of the persistence length on Keff and comparisons to straight cylinder models. References [1] K. Bui, B.P. Grady, D.V. Papavassiliou, Chem. Phys. Let., 508(4-6), 248-251, 2011 [2] C.W. Nan, G. Liu, Y. Lin, M. Li, App. Phys. Let., 85(16), 3549-3551, 2006
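
    The worm-like geometry generation can be sketched as follows (a guess at one standard construction, not necessarily the authors' algorithm): each successive tangent is drawn from a Fisher distribution about the previous one, so tangent-tangent correlations decay over the persistence length.

```python
import math, random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def unit(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v)

def wormlike_chain(n_seg=200, seg=1.0, lp=20.0, seed=3):
    """Discretized worm-like chain: each new tangent is drawn from a
    Fisher distribution about the previous one with concentration
    kappa = lp/seg, giving <t_i . t_{i+1}> ~ exp(-seg/lp)."""
    rng = random.Random(seed)
    kappa = lp / seg
    pts = [(0.0, 0.0, 0.0)]
    t = (0.0, 0.0, 1.0)                          # current unit tangent
    for _ in range(n_seg):
        # inverse-CDF sample of cos(bend angle) for density ~ exp(kappa * c)
        u = rng.random()
        c = 1.0 + math.log(u + (1.0 - u) * math.exp(-2.0 * kappa)) / kappa
        s = math.sqrt(max(0.0, 1.0 - c * c))
        phi = rng.uniform(0.0, 2.0 * math.pi)
        # orthonormal frame (e1, e2) perpendicular to the current tangent
        a = (1.0, 0.0, 0.0) if abs(t[0]) < 0.9 else (0.0, 1.0, 0.0)
        e1 = unit(cross(a, t))
        e2 = cross(t, e1)
        t = unit(tuple(c * t[i] + s * (math.cos(phi) * e1[i] + math.sin(phi) * e2[i])
                       for i in range(3)))
        pts.append(tuple(pts[-1][i] + seg * t[i] for i in range(3)))
    return pts
```

    Chains generated this way, randomly placed in a periodic box, reproduce the curved, finite-stiffness CNT shapes seen in micrographs, with the persistence length `lp` as the single tunable parameter.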

  14. Radiation field characterization of a BNCT research facility using Monte Carlo method - code MCNP-4B

    International Nuclear Information System (INIS)

    Boron Neutron Capture Therapy (BNCT) is a selective cancer treatment that arises as an alternative therapy when the usual techniques - surgery, chemotherapy or radiotherapy - show no satisfactory results. The main proposal of this work is to design a facility for BNCT studies. This facility relies on the use of an Am-Be neutron source and on a set of moderators, filters and shielding which will provide the best neutron/gamma beam characteristics for these BNCT studies, i.e., high-intensity thermal and/or epithermal neutron fluxes with the minimum feasible contamination by gamma rays and fast neutrons. A computational model of the experiment was used to obtain the radiation field at the sample irradiation position. The calculations have been performed with the MCNP-4B Monte Carlo code and the results obtained can be regarded as satisfactory, i.e., a thermal neutron fluence N_T = 1.35×10^8 n/cm^2, a fast neutron dose of 5.86×10^-10 Gy/N_T and a gamma-ray dose of 8.30×10^-14 Gy/N_T. (author)

  15. Assessment of radiation shield integrity of DD/DT fusion neutron generator facilities by Monte Carlo and experimental methods

    International Nuclear Information System (INIS)

    DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved in their operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions, both for 2.5-MeV and 14.1-MeV neutrons, and comparing them with theoretical simulations. The calculated and experimental dose rates were found to agree with a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for the shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields up to 1 × 10^10 n/s

  16. Assessment of radiation shield integrity of DD/DT fusion neutron generator facilities by Monte Carlo and experimental methods

    Science.gov (United States)

    Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.

    2015-01-01

    DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved in their operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions, both for 2.5-MeV and 14.1-MeV neutrons, and comparing them with theoretical simulations. The calculated and experimental dose rates were found to agree with a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for the shield design of such facilities. It has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields up to 1 × 10^10 n/s.

  17. Optimal Allocation of Power-Electronic Interfaced Wind Turbines Using a Genetic Algorithm - Monte Carlo Hybrid Optimization Method

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Siano, Pierluigi; Chen, Zhe;

    2010-01-01

    limit requirements. The method combines the Genetic Algorithm (GA), gradient-based constrained nonlinear optimization algorithm and sequential Monte Carlo simulation (MCS). The GA searches for the optimal locations and capacities of WTs. The gradient-based optimization finds the optimal power factor...... determined by the wind resource and geographic conditions, the location of wind turbines in a power system network may significantly affect the distribution of power flow, power losses, etc. Furthermore, modern WTs with power-electronic interface have the capability of controlling reactive power output...... setting of WTs. The sequential MCS takes into account the stochastic behaviour of wind power generation and load. The proposed hybrid optimization method is demonstrated on an 11 kV 69-bus distribution system....

  18. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta.

    Directory of Open Access Journals (Sweden)

    Zhe Zhang

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation, complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins, including possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near-native docking geometries compared to conventional MC searches.

  19. Application of Enhanced Sampling Monte Carlo Methods for High-Resolution Protein-Protein Docking in Rosetta.

    Science.gov (United States)

    Zhang, Zhe; Schindler, Christina E M; Lange, Oliver F; Zacharias, Martin

    2015-01-01

    The high-resolution refinement of docked protein-protein complexes can provide valuable structural and mechanistic insight into protein complex formation complementing experiment. Monte Carlo (MC) based approaches are frequently applied to sample putative interaction geometries of proteins including also possible conformational changes of the binding partners. In order to explore efficiency improvements of the MC sampling, several enhanced sampling techniques, including temperature or Hamiltonian replica exchange and well-tempered ensemble approaches, have been combined with the MC method and were evaluated on 20 protein complexes using unbound partner structures. The well-tempered ensemble method combined with a 2-dimensional temperature and Hamiltonian replica exchange scheme (WTE-H-REMC) was identified as the most efficient search strategy. Comparison with prolonged MC searches indicates that the WTE-H-REMC approach requires approximately 5 times fewer MC steps to identify near native docking geometries compared to conventional MC searches. PMID:26053419
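
    The temperature replica-exchange ingredient of these schemes can be sketched on a toy double-well energy (an illustration of the generic algorithm only, not the Rosetta implementation): each replica takes local Metropolis steps at its own temperature, and neighbouring temperatures periodically attempt configuration swaps.

```python
import math, random

def parallel_tempering(n_sweeps=5000, betas=(8.0, 4.0, 2.0, 1.0), seed=4):
    """Minimal temperature replica exchange (parallel tempering) on the
    double-well energy E(x) = (x^2 - 1)^2: one local Metropolis step per
    replica, then swap attempts between all neighbouring temperatures.
    Returns the set of wells (signs of x) visited by the coldest replica."""
    rng = random.Random(seed)
    energy = lambda x: (x * x - 1.0) ** 2
    xs = [1.0] * len(betas)                  # every replica starts in the right well
    wells = set()
    for _ in range(n_sweeps):
        for i, b in enumerate(betas):        # local Metropolis move
            y = xs[i] + rng.uniform(-0.5, 0.5)
            de = energy(y) - energy(xs[i])
            if de <= 0.0 or rng.random() < math.exp(-b * de):
                xs[i] = y
        for i in range(len(betas) - 1):      # neighbour swap attempts
            d = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if rng.random() < math.exp(min(0.0, d)):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
        wells.add(xs[0] > 0.0)
    return wells
```

    The coldest replica almost never crosses the barrier by local moves alone, but configurations diffusing down from the hot replicas let it sample both wells - the same mechanism that lets the cold, detailed docking score escape local minima in the REMC schemes above.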

  20. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    OpenAIRE

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general...
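
    The Gibbs-sampling machinery can be sketched on the simplest possible model (a plain normal mean/variance model, not the pedigree-based additive-plus-dominance model of the paper): each parameter is drawn in turn from its full conditional distribution.

```python
import math, random

def gibbs_normal(y, n_iter=3000, burn=500, seed=5):
    """Gibbs sampler for y_i ~ N(mu, s2) with a flat prior on mu and a
    1/s2 prior: alternately draw mu | s2, y (normal around the sample
    mean) and s2 | mu, y (inverse-gamma, via a reciprocal gamma draw).
    Returns posterior means of mu and s2 after burn-in."""
    rng = random.Random(seed)
    n = len(y)
    ybar = sum(y) / n
    s2 = 1.0
    mus, s2s = [], []
    for it in range(n_iter):
        mu = rng.gauss(ybar, math.sqrt(s2 / n))            # mu | s2, y
        sse = sum((v - mu) ** 2 for v in y)
        s2 = sse / (2.0 * rng.gammavariate(n / 2.0, 1.0))  # s2 | mu, y ~ Inv-Gamma(n/2, sse/2)
        if it >= burn:
            mus.append(mu)
            s2s.append(s2)
    return sum(mus) / len(mus), sum(s2s) / len(s2s)
```

    In the genetic setting the same alternation runs over breeding values, dominance effects and their variance components, with the pedigree relationship matrices entering the full conditionals.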

  1. Orbital dependent functionals: An atom projector augmented wave method implementation

    Science.gov (United States)

    Xu, Xiao

    This thesis explores the formulation and numerical implementation of orbital-dependent exchange-correlation functionals within electronic structure calculations. These orbital-dependent exchange-correlation functionals have recently received renewed attention as a means to improve the physical representation of electron interactions within electronic structure calculations; in particular, electron self-interaction terms can be avoided. In this thesis, an orbital-dependent functional is considered in the context of Hartree-Fock (HF) theory as well as the Optimized Effective Potential (OEP) method and the approximate OEP method developed by Krieger, Li, and Iafrate, known as the KLI approximation. The Fock exchange term is used as a simple, well-defined example of an orbital-dependent functional. The Projector Augmented Wave (PAW) method developed by P. E. Blochl has proven to be accurate and efficient for electronic structure calculations with local and semi-local functionals because of its accurate evaluation of interaction integrals by controlling multipole moments. We have extended the PAW method to treat orbital-dependent functionals in Hartree-Fock theory and the Optimized Effective Potential method, particularly in the KLI approximation. In the course of this study we develop a frozen-core orbital approximation that accurately treats the core-electron contributions for the above three methods. The main part of the thesis focuses on the treatment of spherical atoms. We have investigated the behavior of PAW-Hartree-Fock and PAW-KLI basis, projector, and pseudopotential functions for several elements throughout the periodic table. We have also extended the formalism to the treatment of solids in a plane-wave basis and implemented the PWPAW-KLI code, which will appear in future publications.

  2. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro

    2012-11-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using the lightweight task parallel library MassiveThreads. Although there have been many attempts at parallelizing FMM, experience has almost exclusively been limited to formulations based on flat homogeneous parallel loops. FMM in fact contains operations that cannot be readily expressed in such conventional but restrictive models. We show that task parallelism, or parallel recursion in particular, allows us to parallelize all operations of FMM naturally and scalably. Moreover, it allows us to parallelize a "mutual interaction" for force/potential evaluation, which is roughly twice as efficient as the more conventional, unidirectional force/potential evaluation. The net result is an open source FMM that is clearly among the fastest single node implementations, including those on GPUs; with a million particles on a 32-core 2.20 GHz Sandy Bridge node, it completes a single time step, including tree construction and force/potential evaluation, in 65 milliseconds. The study clearly showcases both the programmability and performance benefits of flexible parallel constructs over more monolithic parallel loops. © 2012 IEEE.

  3. Monte Carlo Simulation with Asymptotic Method (Published in "Journal of Japan Statistical Society", Vol.35-2, 171-203, 2005. )

    OpenAIRE

    Akihiko Takahashi; Nakahiro Yoshida

    2005-01-01

    We propose a new computational scheme using the asymptotic method to achieve variance reduction in Monte Carlo simulation for numerical analysis, especially in finance. We not only provide the general scheme of our method, but also show its effectiveness through numerical examples such as computing an optimal portfolio and pricing an average option. Finally, we show the mathematical validity of our method.
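The paper's asymptotic-expansion scheme is not reproduced here, but the variance-reduction mechanism it relies on, correcting a crude Monte Carlo estimate with an analytically tractable approximation, can be sketched with a generic control variate. The toy problem below is hypothetical: estimating E[e^U] = e - 1 for U ~ Uniform(0,1), using U itself (known mean 1/2) as the control:

```python
import random, math

def mc_plain(n, seed=0):
    # Crude Monte Carlo estimate of E[exp(U)].
    rng = random.Random(seed)
    return sum(math.exp(rng.random()) for _ in range(n)) / n

def mc_control_variate(n, seed=0):
    # Control-variate estimate: subtract beta * (mean(U) - 1/2), where beta
    # is the regression coefficient of exp(U) on U estimated from the sample.
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n)]
    ys = [math.exp(u) for u in us]
    ybar = sum(ys) / n
    ubar = sum(us) / n
    cov = sum((u - ubar) * (y - ybar) for u, y in zip(us, ys)) / n
    var = sum((u - ubar) ** 2 for u in us) / n
    beta = cov / var
    return ybar - beta * (ubar - 0.5)
```

For this toy problem the control variate reduces the sampling standard deviation by roughly a factor of eight, the same effect the authors obtain by subtracting an asymptotic approximation of the payoff.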

  4. Experimental Methods for Implementing Graphene Contacts to Finite Bandgap Semiconductors

    DEFF Research Database (Denmark)

    Meyer-Holdt, Jakob

    for molecular electronics with parallel CVD graphene bottom electrodes with SiO2 passivation was successfully fabricated and electronically characterized. A functioning Carbon Burger was not achieved. Along the work on the Carbon Burger, the scope was broadened and focus was put on implementing graphene...... contacts to semiconductor nanowires, more specifically, epitaxially grown InAs nanowires. First, we tried a top down method where CVD graphene was deposited on substrate supported InAs nanowires followed by selective graphene ashing to define graphene electrodes. While electrical contact between...

  5. Changing to problem-oriented methods. Implementation in psychiatric institutions.

    Science.gov (United States)

    Gaviria, B; Alvis, J; Zarour, N

    1976-08-01

    The so-called problem-oriented methods for organizing and recording clinical information offer many potential benefits to users in psychiatric institutions. Beyond the mechanical aspects of implementation, incorporating a problem-oriented approach into the practices of clinical teams entails conceptual and practical readjustments of considerable magnitude. Based on an 18-month study of eight psychiatric teams with diverse characteristics, the paper discusses rationales and priorities, as well as administrative and educational considerations in the conversion process. Such a process must be studied and understood in setting objectives and channeling resources, if outcomes are to match the expectations. PMID:1085344

  6. Implementation and verification of nuclear interactions in a Monte-Carlo code for the Procom-ProGam proton therapy planning system

    International Nuclear Information System (INIS)

    Proton interaction with the material of an exposed object needs to be modeled taking into account three basic processes: electromagnetic stopping of protons in matter, multiple Coulomb scattering and nuclear interactions. Only the last type of process is the topic of this paper. Monte Carlo codes are often used to simulate high-energy particle interactions with matter. However, the nuclear interaction models implemented in these codes are rather extensive and their use in treatment planning systems requires huge computational resources. We selected the IThMC code for its ability to reproduce experiments that measure the distribution of the projected ranges of nuclear secondary particles generated by proton beams in a multi-layer Faraday cup. Multi-layer Faraday cup detectors measure charge rather than dose and allow distinguishing between electromagnetic and nuclear interactions. The event generator used in the IThMC code is faster, but less accurate, than any other tested. Our model of nuclear reactions demonstrates quite good agreement with experiment in the context of their effect on the Bragg peak in therapeutic applications.

  7. Parallel implementation of inverse adding-doubling and Monte Carlo multi-layered programs for high performance computing systems with shared and distributed memory

    Science.gov (United States)

    Chugunov, Svyatoslav; Li, Changying

    2015-09-01

    Parallel implementations of two numerical tools popular in optical studies of biological materials--the Inverse Adding-Doubling (IAD) program and the Monte Carlo Multi-Layered (MCML) program--were developed and tested in this study. The implementation was based on the Message Passing Interface (MPI) and standard C. The parallel versions of the IAD and MCML programs were compared to their sequential counterparts in validation and performance tests. Additionally, the portability of the programs was tested using a local high performance computing (HPC) cluster, the Penguin-On-Demand HPC cluster, and an Amazon EC2 cluster. Parallel IAD was tested with up to 150 parallel cores using 1223 input datasets. It demonstrated linear scalability, with the speedup proportional to the number of parallel cores (up to 150x). Parallel MCML was tested with up to 1001 parallel cores using problem sizes of 10^4-10^9 photon packets. It demonstrated classical performance curves featuring communication overhead and a performance saturation point. An optimal performance curve was derived for parallel MCML as a function of problem size. The typical speedup achieved for parallel MCML (up to 326x) demonstrated a linear increase with problem size. The precision of the MCML results was estimated in a series of tests: a problem size of 10^6 photon packets was found optimal for calculations of total optical response, and 10^8 photon packets for spatially-resolved results. The presented parallel versions of the MCML and IAD programs are portable to multiple computing platforms. The parallel programs can significantly speed up simulations for scientists and be utilized to their full potential in computing systems that are readily available without additional costs.

  8. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility of Monte Carlo neutron transport using OpenCL, with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  9. Practical implementation of nonlinear time series methods The TISEAN package

    CERN Document Server

    Hegger, R; Schreiber, T; Hegger, Rainer; Kantz, Holger; Schreiber, Thomas

    1998-01-01

    Nonlinear time series analysis is becoming a more and more reliable tool for the study of complicated dynamics from measurements. The concept of low-dimensional chaos has proven to be fruitful in the understanding of many complex phenomena despite the fact that very few natural systems have actually been found to be low dimensional deterministic in the sense of the theory. In order to evaluate the long term usefulness of the nonlinear time series approach as inspired by chaos theory, it will be important that the corresponding methods become more widely accessible. This paper, while not a proper review on nonlinear time series analysis, tries to make a contribution to this process by describing the actual implementation of the algorithms, and their proper usage. Most of the methods require the choice of certain parameters for each specific time series application. We will try to give guidance in this respect. The scope and selection of topics in this article, as well as the implementational choices that have ...

  10. Absorbed dose measurements in mammography using Monte Carlo method and ZrO{sub 2}+PTFE dosemeters

    Energy Technology Data Exchange (ETDEWEB)

    Duran M, H. A.; Hernandez O, M. [Departamento de Investigacion en Polimeros y Materiales, Universidad de Sonora, Blvd. Luis Encinas y Rosales s/n, Col. Centro, 83190 Hermosillo, Sonora (Mexico); Salas L, M. A.; Hernandez D, V. M.; Vega C, H. R. [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Cipres 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Pinedo S, A.; Ventura M, J.; Chacon, F. [Hospital General de Zona No. 1, IMSS, Interior Alameda 45, 98000 Zacatecas (Mexico); Rivera M, T. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F.(Mexico)], e-mail: hduran20_1@hotmail.com

    2009-10-15

    Mammography is a central tool for breast cancer diagnosis. In addition, screening programs are conducted periodically for asymptomatic women in certain age groups; these programs have shown a reduction in breast cancer mortality. Early detection of breast cancer is achieved through mammography, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To achieve a clear image with a minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed at General Hospital No. 1 of IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO{sub 2}+PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; this calculation was completed by computing the absorbed dose. (author)

  11. Calculation of organ doses from environmental gamma rays using human phantoms and Monte Carlo methods. Pt. 1

    International Nuclear Information System (INIS)

    Organ doses from environmental γ-rays (U-238, Th-232, K-40) were calculated using Monte Carlo methods for three typical sources: a semi-infinite volume source in the air, an infinite plane source in the ground and a volume source in the ground. The γ-ray fields in the natural environment were simulated rigorously, without approximations or simplifications in the intermediate steps, except that the disturbance of the radiation field by the human body was neglected. Organ doses were calculated for four anthropomorphic phantoms representing a baby, a child, an adult female and an adult male. The dose to a fetus is given by the dose to the uterus of the adult female. Air kerma and dose conversion factors normalised to air kerma and to source intensity are given for monoenergetic sources and for the natural radionuclides. (orig./HP)

  12. Dark matter in disk galaxies I: a Markov Chain Monte Carlo method and application to DDO 154

    CERN Document Server

    Hague, Peter R

    2013-01-01

    We present a new method to constrain the dark matter halo density profiles of disk galaxies. Our algorithm employs a Markov Chain Monte Carlo (MCMC) approach to explore the parameter space of a general family of dark matter profiles. We improve upon previous analyses by considering a wider range of halo profiles and by explicitly identifying cases in which the data are insufficient to break the degeneracies between the model parameters. We demonstrate the robustness of our algorithm using artificial data sets and show that reliable estimates of the halo density profile can be obtained from data of comparable quality to those currently available for low surface brightness (LSB) galaxies. We present our results in terms of physical quantities which are constrained by the data, and find that the logarithmic slope of the halo density profile at the radius of the innermost data point of a measured rotation curve can be strongly constrained in LSB galaxies. High surface brightness galaxies require additional inform...
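The MCMC machinery referred to above can be illustrated with a minimal random-walk Metropolis sampler (a generic sketch, not the authors' code; the target here is a standard normal rather than a halo-profile posterior):

```python
import random, math

def metropolis(logp, x0, n_steps, step=1.0, seed=0):
    # Random-walk Metropolis: propose x' = x + N(0, step^2), accept with
    # probability min(1, p(x') / p(x)), computed in log space for stability.
    rng = random.Random(seed)
    x, lp = x0, logp(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = logp(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Target: standard normal, whose log-density is -x^2/2 up to a constant.
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50000)
```

After discarding a burn-in, the sample mean and variance approach 0 and 1; for a halo fit, `logp` would instead be the log-posterior of the profile parameters given the rotation-curve data.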

  13. Using MCNP and Monte Carlo method for Investigation of dose field of Irradiation facility at Hanoi Irradiation Center

    International Nuclear Information System (INIS)

    The MCNP code and the Monte Carlo method were used to calculate the dose rate in the air space of the irradiation room at the Hanoi Irradiation Center. Experimental measurements were also carried out to investigate the actual distribution of the dose field in the air of the irradiator as well as the distribution of absorbed dose in sample product containers. The results show a deviation between the calculated data given by MCNP and the measurements. The MCNP data give a symmetric distribution of the dose field about the axes going through the center of the source rack, whereas the experimental data show that the dose rate reaches higher values in the lower part of the space: the closer to the floor, the higher the dose rate. The same phenomenon occurred for the measurements of absorbed dose in the sample product containers. (author)

  14. Calibration of a gamma spectrometer for measuring natural radioactivity. Experimental measurements and modeling by Monte-Carlo methods

    International Nuclear Information System (INIS)

    This thesis was carried out in the context of dating by thermoluminescence. This method requires laboratory measurements of the natural radioactivity. For that purpose, we have been using a germanium spectrometer. To refine its calibration, we modelled it using the Monte Carlo computer code Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a 137Cs source. It appeared that the shape of the inactive zones is less simple than presented in the specialized literature. The model was then extended to the case of a more complex source, with cascade effects and angular correlations between photons: 60Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  15. Calculating ellipse area by the Monte Carlo method and analysing dice poker with Excel at high school

    Science.gov (United States)

    Benacka, Jan

    2016-08-01

    This paper reports on lessons in which 18-19 year old high school students modelled random processes with Excel. In the first lesson, 26 students formulated a hypothesis on the area of an ellipse by using the analogy between the areas of a circle, a square and a rectangle. They verified the hypothesis by the Monte Carlo method with a spreadsheet model developed in the lesson. In the second lesson, 27 students analysed the dice poker game. First, they calculated the probability of the hands by combinatorial formulae. Then, they verified the result with a spreadsheet model developed in the lesson. The students were given a questionnaire to find out if they found the lessons interesting and contributing to their mathematical and technological knowledge.
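The students' spreadsheet experiment maps directly onto a few lines of code. A hit-or-miss Monte Carlo estimate of the ellipse area (a sketch consistent with the lesson's idea; function and parameter names are illustrative):

```python
import random, math

def ellipse_area_mc(a, b, n, seed=0):
    # Hit-or-miss: sample points uniformly in the bounding box [-a, a] x [-b, b];
    # the fraction landing inside x^2/a^2 + y^2/b^2 <= 1 estimates area / (4ab).
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(-a, a)
        y = rng.uniform(-b, b)
        if (x / a) ** 2 + (y / b) ** 2 <= 1.0:
            hits += 1
    return 4.0 * a * b * hits / n

est = ellipse_area_mc(3.0, 2.0, 100000)
```

As n grows, the estimate converges to πab, confirming the students' hypothesis by analogy with the circle (a = b recovers πr²).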

  16. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a model of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review of recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited to the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles. (paper)

  17. Simulation of the Interaction of X-rays with a Gas in an Ionization Chamber by the Monte Carlo Method

    International Nuclear Information System (INIS)

    The final objective of any ionization chamber is the measurement of the amount of energy, or radiation dose, absorbed by the gas in the chamber. The final value depends on the composition of the gas, its density and temperature, the geometry of the ionization chamber, and the type and intensity of the radiation. We describe a Monte Carlo simulation method which allows one to compute the dose absorbed by the gas for an X-ray beam. Verification of the model has been carried out by simulating the attenuation of standard X-ray radiation through the half-value layers established in the ISO 4037 report, while assuming a Weibull-type energy distribution for the incident photons. (Author) 6 refs
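The half-value-layer check used in the verification step follows the standard attenuation law: each half-value layer (HVL) of absorber halves the transmitted intensity. A small illustrative helper (not the authors' code; names are hypothetical):

```python
def attenuate(i0, thickness, hvl):
    # Transmitted intensity after an absorber of the given thickness,
    # expressed via the half-value layer: each HVL halves the beam,
    # i.e. I = I0 * 2^-(thickness / HVL).
    return i0 * 0.5 ** (thickness / hvl)
```

For example, two half-value layers transmit exactly one quarter of the incident intensity, which is the kind of benchmark an attenuation simulation can be checked against.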

  18. Numerical simulations of a coupled radiative-conductive heat transfer model using a modified Monte Carlo method

    KAUST Repository

    Kovtanyuk, Andrey E.

    2012-01-01

    Radiative-conductive heat transfer in a medium bounded by two reflecting and radiating plane surfaces is considered. This process is described by a nonlinear system of two differential equations: an equation of the radiative heat transfer and an equation of the conductive heat exchange. The problem is characterized by anisotropic scattering of the medium and by specularly and diffusely reflecting boundaries. For the computation of solutions of this problem, two approaches based on iterative techniques are considered. First, a recursive algorithm based on some modification of the Monte Carlo method is proposed. Second, the diffusion approximation of the radiative transfer equation is utilized. Numerical comparisons of the approaches proposed are given in the case of isotropic scattering. © 2011 Elsevier Ltd. All rights reserved.

  19. Application of the Monte Carlo method for investigation of dynamical parameters of rotors supported by magnetorheological squeeze film damping devices

    Directory of Open Access Journals (Sweden)

    Zapoměl J.

    2014-06-01

    Full Text Available A flexible suspension with damping devices is an efficient technological tool for reducing the forces transmitted between a rotor and its frame. To achieve optimum performance of the damping elements, their damping effect must be adaptable to the current operating conditions. In practical rotordynamic applications this is offered by magnetorheological squeeze film dampers. Some of the parameters that determine the behaviour of rotors may be uncertain, in which case a probabilistic approach is needed for the analysis of such systems. This paper investigates the vibration amplitude of a rigid rotor damped by two magnetorheological squeeze film dampers and the magnitude of the force transmitted to the stationary part during the steady-state operating regime. The uncertain parameters of the studied system are the rotor unbalance and its speed of rotation. The Monte Carlo method was employed for this analysis.

  20. Predicting the Dielectric Strength of c-C4F8 and SF6 Gas Mixtures by Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    WU Bian-tao; XIAO Deng-ming

    2007-01-01

    An improved Monte Carlo method was used to simulate the motion of electrons in c-C4F8 and SF6 gas mixtures for a pulsed Townsend discharge. The electron swarm parameters, such as the effective ionization coefficient (ᾱ) and the drift velocity, over the E/N range of 280~700 Td (1 Td = 10^-21 V·m^2) were calculated by employing a set of cross sections available in the literature. From the variation curve of ᾱ with the SF6 partial pressure p, the limiting field (E/N)lim of the gas mixture at different gas contents was determined. It is found that the limiting field of the c-C4F8 and SF6 gas mixture is higher than that of pure SF6 at any SF6 mixture ratio. Simulation results show excellent agreement with experimental data available in the previous literature.

  1. Application of state-of-the-art Monte Carlo methods to gamma-ray (and electron) sources and detectors

    International Nuclear Information System (INIS)

    Advances made in recent years make it possible--and more importantly, practical--to perform photon and electron transport in complex three-dimensional geometries. The purpose of the work presented is to acquaint the user community--experimenters and applied scientists--with the kind of calculational information which can be generated with present codes and computers. The central discussion involves the solution of coupled photon-electron problems using Monte Carlo methods in general, and the code SANDYL in particular. The examples presented are results of such calculations obtained with this code. The emphasis on SANDYL is not intended to imply superiority over other codes, and it should be viewed as representative of currently available codes

  2. Modeling of neutron and photon transport in iron and concrete radiation shields by using Monte Carlo method

    CERN Document Server

    Žukauskaitė, A; Plukienė, R; Ridikas, D

    2007-01-01

    Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows one to obtain answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (AVF cyclotron of the Research Center of Nuclear Physics, Osaka University, Japan) – γ-ray beams (1-10 MeV), HIMAC (heavy-ion synchrotron of the National Institute of Radiological Sciences in Chiba, Japan) and ISIS-800 (ISIS intensive spallation neutron source facility of the Rutherford Appleton Laboratory, UK) – high energy neutron (20-800 MeV) transport in iron and concrete. The calculation results were then compared with experimental data.

  3. Challenges of Monte Carlo Transport

    Energy Technology Data Exchange (ETDEWEB)

    Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Computational Physics and Methods (CCS-2)

    2016-06-10

    These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving the discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  5. The differential method for grating efficiencies implemented in mathematica

    Energy Technology Data Exchange (ETDEWEB)

    Valdes, V.; McKinney, W. [Lawrence Berkeley Lab., CA (United States); Palmer, C. [Milton Co., Rochester, NY (United States). Roy Analytical Products Div.

    1993-08-01

    In order to facilitate the accurate calculation of diffraction grating efficiencies in the soft x-ray region, we have implemented the differential method of Neviere and Vincent in Mathematica [1]. This simplifies the programming and maximizes the transparency of the theory for the user. We alleviate some of the overhead burden of the Mathematica program by coding the time-consuming numerical integration in C subprograms. We recall the differential method directly from Maxwell's equations. The pseudo-periodicity of the grating profile and of the electromagnetic fields allows us to use their Fourier series expansions to formulate an infinite set of coupled differential equations. A finite subset of the equations is then numerically integrated using the Numerov method for the transverse electric (TE) case and a fourth-order Runge-Kutta algorithm for the transverse magnetic (TM) case. We have tested our program by comparison with the scalar theory and with published theoretical results for blazed, sinusoidal and square-wave profiles. The Reciprocity Theorem has also been used as a means to verify the method, and we have found it to be satisfied in several cases to within the computational accuracy of the method.

  6. Development of methodology for characterization of cartridge filters from the IEA-R1 using the Monte Carlo method

    International Nuclear Information System (INIS)

    The Cuno filter is part of the water processing circuit of the IEA-R1 reactor; when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, and it has a region called the dead layer, or inactive layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study we used the MCNP-4C code to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influence of the dead layer and the effect of cascade summing at the HPGe detector were studied. The correction of the dead layer values was made by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm3 of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual value of the active volume is less than the one specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks. Using these peaks, three radionuclides were identified in the filter: 108mAg, 110mAg and 60Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is on the order of MBq. (author)

  7. Modeling of helium bubble nucleation and growth in austenitic stainless steels using an Object Kinetic Monte Carlo method

    Science.gov (United States)

    De Backer, A.; Adjanor, G.; Domain, C.; Lescoat, M. L.; Jublot-Leclerc, S.; Fortuna, F.; Gentils, A.; Ortiz, C. J.; Souidi, A.; Becquart, C. S.

    2015-06-01

    Implantation of 10 keV helium in 316L steel thin foils was performed at the JANNuS-Orsay facility and modeled using a multiscale approach. Density Functional Theory (DFT) atomistic calculations [1] were used to obtain the properties of He and He-vacancy clusters, and the Binary Collision Approximation based code MARLOWE was applied to determine the damage and He-ion depth profiles as in [2,3]. The processes involved in homogeneous He bubble nucleation and growth were defined and implemented in the Object Kinetic Monte Carlo code LAKIMOCA [4]. In particular, as the He-to-dpa ratio was high, self-trapping of He clusters and the trap mutation of He-vacancy clusters had to be taken into account. With this multiscale approach, the formation of bubbles was modeled up to nanometer-scale sizes, where bubbles can be observed by Transmission Electron Microscopy. Their densities and sizes were studied as functions of fluence (up to 5 × 10^19 He/m^2) at two temperatures (473 and 723 K) and for different sample thicknesses (25-250 nm). It appears that the damage is not only due to the collision cascades but is also strongly controlled by the He accumulation in pressurized bubbles. Comparison with experimental data is discussed and reasonable agreement is achieved.
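LAKIMOCA's actual event catalogue (He-cluster migration, self-trapping, trap mutation) is not reproduced here, but the core loop of any Object Kinetic Monte Carlo code is the residence-time algorithm: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed increment. A generic sketch (rates and names below are illustrative, not the paper's physics):

```python
import random, math

def kmc(rates, n_steps, seed=0):
    # Residence-time (BKL) kinetic Monte Carlo: choose event i with
    # probability rates[i] / total, then advance time by an exponential
    # increment with mean 1 / total.
    rng = random.Random(seed)
    total = sum(rates)
    t = 0.0
    counts = [0] * len(rates)
    for _ in range(n_steps):
        r = rng.random() * total
        acc = 0.0
        for i, rate in enumerate(rates):
            acc += rate
            if r < acc:
                counts[i] += 1
                break
        # 1 - random() lies in (0, 1], so the log is always finite.
        t += -math.log(1.0 - rng.random()) / total
    return t, counts
```

In a real OKMC code each executed event also updates the object configuration and hence the rate table; here the rates are held fixed to keep the sketch minimal.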

  8. Significance testing of clinical data using virus dynamics models with a Markov chain Monte Carlo method: application to emergence of lamivudine-resistant hepatitis B virus.

    OpenAIRE

    Burroughs, N. J.; Pillay, D.; Mutimer, D

    1999-01-01

    Bayesian analysis using a virus dynamics model is demonstrated to facilitate hypothesis testing of patterns in clinical time-series. Our Markov chain Monte Carlo implementation demonstrates that the viraemia time-series observed in two sets of hepatitis B patients on antiviral (lamivudine) therapy, chronic carriers and liver transplant patients, are significantly different, overcoming clinical trial design differences that question the validity of non-parametric tests. We show that lamivudine...

  9. Practical Framework: Implementing OEE Method in Manufacturing Process Environment

    Science.gov (United States)

    Maideen, N. C.; Sahudin, S.; Mohd Yahya, N. H.; Norliawati, A. O.

    2016-02-01

    A manufacturing process environment requires reliable machinery in order to satisfy market demand. Ideally, a reliable machine is expected to operate and produce a quality product at its maximum designed capability. However, for various reasons, machines are usually unable to achieve the desired performance. Since performance affects the productivity of the system, a measurement technique should be applied. Overall Equipment Effectiveness (OEE) is a good method to measure the performance of a machine, and the reliable result produced by OEE can then be used to propose a suitable corrective action. Many published papers mention the purpose and benefits of OEE, covering the what and why factors; however, the how factor has not yet been revealed, especially the implementation of OEE in a manufacturing process environment. Thus, this paper presents a practical framework to implement OEE, and a case study is discussed to explain each proposed step in detail. The proposed framework is beneficial to engineers, especially beginners, who want to start measuring machine performance and later improve it.
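The OEE metric itself is the standard product of three factors, Availability x Performance x Quality (the textbook definition, not necessarily the exact formulation of the paper's framework; the numbers below are illustrative):

```python
def oee(planned_time, downtime, ideal_cycle_time, total_count, good_count):
    # Availability: fraction of planned time the machine actually ran.
    run_time = planned_time - downtime
    availability = run_time / planned_time
    # Performance: actual output vs. what the run time would ideally allow.
    performance = ideal_cycle_time * total_count / run_time
    # Quality: fraction of produced parts that are good.
    quality = good_count / total_count
    return availability * performance * quality
```

For a shift of 480 minutes with 47 minutes of downtime, a 1-minute ideal cycle, 400 parts produced and 380 good, the product algebraically collapses to ideal_cycle_time x good_count / planned_time = 380/480, i.e. an OEE of about 79%.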

  10. Taylor series development in the Monte Carlo code Tripoli-4

    Science.gov (United States)

    Mazzolo, Alain; Zoia, Andrea; Martin, Brunella

    2014-06-01

    Perturbation methods for one or several variables, based on the Taylor series development up to second order, are presented for the collision estimator in the framework of the Monte Carlo code Tripoli-4. Comparisons with the correlated sampling method implemented in Tripoli-4 demonstrate the need to include the cross derivatives in the development.
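
    The idea of a Taylor development of a Monte Carlo score with respect to a perturbed cross section can be shown in a setting far simpler than Tripoli-4: transmission through a purely absorbing slab, where a transmitted history's likelihood ratio for a perturbed cross section Σ+δ is exp(-δL), expandable to second order as 1 - Lδ + (Lδ)²/2. All parameter values below are illustrative.

    ```python
    import math
    import random

    random.seed(1)

    def transmission_taylor(sigma, delta, L=2.0, n=200_000):
        """Monte Carlo estimate of slab transmission exp(-sigma*L), plus a
        second-order Taylor perturbation estimate for sigma + delta, using
        the same histories (correlated sampling style)."""
        base = pert = 0.0
        # 2nd-order development of the likelihood ratio exp(-delta*L)
        lr2 = 1.0 - L * delta + 0.5 * (L * delta) ** 2
        for _ in range(n):
            path = -math.log(random.random()) / sigma   # sampled free path
            if path > L:                                 # transmitted history
                base += 1.0
                pert += lr2
        return base / n, pert / n

    t0, t1 = transmission_taylor(sigma=0.5, delta=0.05)
    # Analytic references: exp(-1.0) = 0.368 and exp(-1.1) = 0.333
    print(round(t0, 3), round(t1, 3))
    ```

    Because both estimates share one set of histories, their difference has far lower variance than two independent runs, which is the appeal of perturbation estimators.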

  11. Systems and Methods for Implementing High-Temperature Tolerant Supercapacitors

    Science.gov (United States)

    Brandon, Erik J. (Inventor); West, William C. (Inventor); Bugga, Ratnakumar V. (Inventor)

    2016-01-01

    Systems and methods in accordance with embodiments of the invention implement high-temperature tolerant supercapacitors. In one embodiment, a high-temperature tolerant supercapacitor includes a first electrode that is thermally stable between at least approximately 80°C and approximately 300°C; a second electrode that is thermally stable between at least approximately 80°C and approximately 300°C; an ionically conductive separator that is thermally stable between at least approximately 80°C and approximately 300°C; an electrolyte that is thermally stable between at least approximately 80°C and approximately 300°C; where the first electrode and second electrode are separated by the separator such that the first electrode and second electrode are not in physical contact; and where each of the first electrode and second electrode is at least partially immersed in the electrolyte solution.

  12. Search systems and computer-implemented search methods

    Energy Technology Data Exchange (ETDEWEB)

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  13. Implementation of an Innovative Method to Design Reflectarray Antennas

    Directory of Open Access Journals (Sweden)

    Abdelhamid Tayebi

    2012-01-01

    Full Text Available A novel computer-aided technique for designing reflectarray antennas is presented. The developed approach automatically generates the geometrical model of reflectarray antennas, taking into account input parameters such as the unit-cell type and dimensions, frequency, focal length, periodicity, dielectric materials, and desired main-beam radiating direction. The characteristics of the reflecting elements are selected considering the spatial phase delay at each unit cell in order to achieve a progressive phase shift. The implemented procedure also provides the phase characteristic of the unit element, which is rapidly computed by using a parallelized Moment Method (MoM) approach. The MoM is also used to obtain the radiation pattern of the full reflectarray antenna. In order to evaluate the new technique, a dual-interface prototype has been designed and simulated, showing high-performance capability.
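
    The "progressive phase shift" step can be sketched with the textbook reflectarray phase condition: each cell i must compensate the spatial delay from the feed and impose the aperture phase of the desired beam, φ_i = k₀(|r_i − r_feed| − r_i·û₀). The cell positions, feed height, and frequency below are hypothetical, not the paper's prototype.

    ```python
    import math

    def required_phases(cells, feed, beam_dir, freq_hz):
        """Phase (degrees) each reflectarray unit cell must provide so the
        reflected field forms a plane wave toward the unit vector beam_dir:
        phi_i = k0 * (|r_i - feed| - r_i . beam_dir), taken modulo 360."""
        c0 = 3e8
        k0 = 2 * math.pi * freq_hz / c0
        phases = []
        for (x, y, z) in cells:
            d_feed = math.dist((x, y, z), feed)                 # delay from feed
            proj = x * beam_dir[0] + y * beam_dir[1] + z * beam_dir[2]
            phases.append(math.degrees(k0 * (d_feed - proj)) % 360.0)
        return phases

    # Hypothetical 3-cell strip of a 10 GHz design: feed 0.15 m above the
    # array centre, broadside beam along (0, 0, 1).
    cells = [(-0.015, 0.0, 0.0), (0.0, 0.0, 0.0), (0.015, 0.0, 0.0)]
    phases = required_phases(cells, feed=(0.0, 0.0, 0.15),
                             beam_dir=(0.0, 0.0, 1.0), freq_hz=10e9)
    print([round(p, 1) for p in phases])
    ```

    A design tool then picks, for each cell, the element geometry whose simulated reflection phase (from the MoM unit-cell analysis) matches this required value.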

  14. Statistical weights as variance reduction method in back-scattered gamma radiation Monte Carlo spectrometry analysis of thickness gauge detector response

    International Nuclear Information System (INIS)

    The possibility of determining physical quantities (such as the number of particles behind shields of a given thickness, energy spectra, detector responses, etc.) with a satisfactory statistical uncertainty, in a relatively short computing time, can be used as a measure of the efficiency of a Monte Carlo method. The numerical simulation of rare events with a straightforward Monte Carlo method is inefficient because of the great number of histories that produce no score. In this paper, for the specific geometry of a gamma backscattering thickness gauge, with 60Co and 137Cs as gamma sources, the back-scattered gamma spectrum, the probabilities of back-scattering and the spectral characteristics of the detector response were determined using a nonanalog Monte Carlo game with statistical weights applied. (author)
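
    A common form of such a nonanalog game is implicit capture: instead of killing absorbed photons, every history survives each collision with its statistical weight multiplied by the scattering probability, and Russian roulette terminates low-weight histories without biasing the mean. The sketch below applies this to a 1D slab (not the gauge geometry of the paper); all parameters are illustrative.

    ```python
    import math
    import random

    random.seed(2)

    def transmitted_weight(sigma_t, sigma_s, L=3.0, n=50_000):
        """Nonanalog 1D slab transport with statistical weights: implicit
        capture plus Russian roulette. Returns the mean transmitted weight,
        an unbiased estimate of the transmission probability."""
        total = 0.0
        for _ in range(n):
            x, mu, w = 0.0, 1.0, 1.0          # position, direction cosine, weight
            while True:
                # distance to next collision along the current direction
                x += mu * (-math.log(random.random()) / sigma_t)
                if x >= L:                     # escaped through the far face
                    total += w
                    break
                if x < 0:                      # escaped back out of the slab
                    break
                w *= sigma_s / sigma_t         # implicit capture: reduce weight
                mu = random.uniform(-1, 1)     # isotropic scattering
                if w < 0.1:                    # Russian roulette on low weights
                    if random.random() < 0.5:
                        w *= 2.0
                    else:
                        break
        return total / n

    est = transmitted_weight(sigma_t=1.0, sigma_s=0.5)
    print(round(est, 4))
    ```

    Every history now contributes a (possibly small) score, which is exactly what makes the variance per unit of computing time lower than in the analog game for rare-event tallies.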

  15. ALIS: An efficient method to compute high spectral resolution polarized solar radiances using the Monte Carlo approach

    International Nuclear Information System (INIS)

    An efficient method to compute accurate polarized solar radiance spectra using the (3D) Monte Carlo model MYSTIC has been developed. Such high-resolution spectra are measured by various satellite instruments for remote sensing of atmospheric trace gases. ALIS (Absorption Lines Importance Sampling) allows spectra to be calculated by tracing photons at only one wavelength. To take into account the spectral dependence of the absorption coefficient, a spectral absorption weight is calculated for each photon path. At each scattering event the local estimate method is combined with an importance sampling method to account for the spectral dependence of the scattering coefficient. Since each wavelength grid point is computed from the same set of random photon paths, the statistical error is almost the same for all wavelengths and hence the simulated spectrum is not noisy. The statistical error mainly results in a small relative deviation that is independent of wavelength and can be neglected for those remote sensing applications where differential absorption features are of interest. Two example applications are presented: the simulation of shortwave-infrared polarized spectra as measured by GOSAT, from which CO2 is retrieved, and the simulation of the differential optical thickness in the visible spectral range, which is derived from SCIAMACHY measurements to retrieve NO2. The computational speed of ALIS (for 1D or 3D atmospheres) is of the order of, or even faster than, that of one-dimensional discrete ordinate methods, in particular when polarization is considered.
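
    The core trick — reusing one traced path for every wavelength via a spectral absorption weight w(λ) = exp(−Σⱼ k(λ)·lⱼ) over the path segments — can be sketched as follows. This simplified version assumes a homogeneous absorber (one k per wavelength along the whole path), whereas ALIS accumulates layer-dependent coefficients; the path lengths and k-grid are hypothetical.

    ```python
    import math

    def spectral_weights(path_lengths, k_abs_grid):
        """ALIS-style spectral absorption weights: a single photon path (list
        of segment lengths) is reused for every wavelength grid point by
        applying w(lambda) = exp(-k(lambda) * total path length)."""
        total_path = sum(path_lengths)
        return [math.exp(-k * total_path) for k in k_abs_grid]

    # Hypothetical photon path (km) and absorption coefficients (1/km)
    # sampled across an absorption line.
    path = [1.2, 0.8, 2.5]
    k_grid = [0.01, 0.05, 0.20, 0.05, 0.01]
    weights = spectral_weights(path, k_grid)
    print([round(w, 3) for w in weights])  # → [0.956, 0.799, 0.407, 0.799, 0.956]
    ```

    Because all wavelengths share the same random paths, the Monte Carlo noise is almost perfectly correlated across the spectrum, which is why the differential (line-shape) signal comes out smooth.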

  16. Post-DFT methods for Earth materials: Quantum Monte Carlo simulations of (Mg,Fe)O (Invited)

    Science.gov (United States)

    Driver, K. P.; Militzer, B.; Cohen, R. E.

    2013-12-01

    (Mg,Fe)O is a major mineral phase in Earth's lower mantle that plays a key role in determining the structural and dynamical properties of the deep Earth. A pressure-induced spin-pairing transition of Fe has been the subject of numerous theoretical and experimental studies due to its consequential effects on lower-mantle physics. The standard density functional theory (DFT) method does not treat strongly correlated electrons properly, and results can depend on the choice of exchange-correlation functional. DFT+U offers significant improvement over standard DFT for treating strongly correlated electrons. Indeed, DFT+U calculations and experiments have narrowed the ambient spin-transition pressure to between 40 and 60 GPa in (Mg,Fe)O. However, DFT+U is not an ideal method, due to its dependence on the Hubbard U parameter, among other approximations. In order to further clarify details of the spin transition, it is necessary to use methods that explicitly treat the effects of electron exchange and correlation, such as quantum Monte Carlo (QMC). Here, we will discuss methods that go beyond standard DFT and present QMC results on the (Mg,Fe)O elastic properties and spin-transition pressure in order to benchmark DFT+U results.

  17. The Dynamic Monte Carlo Method for Transient Analysis of Nuclear Reactors

    NARCIS (Netherlands)

    Sjenitzer, B.L.

    2013-01-01

    In this thesis a new method for the analysis of power transients in a nuclear reactor is developed, which is more accurate than the present state-of-the-art methods. Transient analysis is an important tool when designing nuclear reactors, since it predicts the behaviour of a reactor during changing co…

  18. Can You Repeat That Please?: Using Monte Carlo Simulation in Graduate Quantitative Research Methods Classes

    Science.gov (United States)

    Carsey, Thomas M.; Harden, Jeffrey J.

    2015-01-01

    Graduate students in political science come to the discipline interested in exploring important political questions, such as "What causes war?" or "What policies promote economic growth?" However, they typically do not arrive prepared to address those questions using quantitative methods. Graduate methods instructors must…

  19. The Virtual Monte Carlo

    CERN Document Server

    Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas

    2003-01-01

    The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.

  20. BOOTSTRAPPING AND MONTE CARLO METHODS OF POWER ANALYSIS USED TO ESTABLISH CONDITION CATEGORIES FOR BIOTIC INDICES

    Science.gov (United States)

    Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically, the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...
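
    The bootstrap half of this approach can be sketched simply: resample repeat-visit index scores to estimate the confidence-interval width of the mean score, since the index scale divided by that width bounds how many condition categories the index can reliably distinguish. The scores, scale, and confidence level below are hypothetical.

    ```python
    import random

    random.seed(3)

    def bootstrap_ci_width(scores, n_boot=2000, alpha=0.10):
        """Bootstrap the mean of repeated biotic-index scores and return the
        width of the (1 - alpha) percentile confidence interval."""
        n = len(scores)
        means = []
        for _ in range(n_boot):
            resample = [random.choice(scores) for _ in range(n)]  # sample with replacement
            means.append(sum(resample) / n)
        means.sort()
        lo = means[int(n_boot * alpha / 2)]             # lower percentile
        hi = means[int(n_boot * (1 - alpha / 2)) - 1]   # upper percentile
        return hi - lo

    # Hypothetical repeat-visit index scores on a 0-100 scale
    scores = [62, 58, 65, 60, 63, 59, 61, 64]
    width = bootstrap_ci_width(scores)
    print(round(width, 1))
    ```

    A narrower interval (more sampling effort, less index noise) supports more condition categories; this replaces the purely judgement-based choice the abstract describes.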