International Nuclear Information System (INIS)
1 - Description of problem or function: - ZZ-IRDF-82: ENDF-5 Format; 620 group (SAND II) Dosimetry Library. Nuclides: Li, B, F, Na, Mg, Al, P, S, Sc, Ti, Mn, Fe, Co, Ni, Cu, Zn, Zr, Nb, Rh, In, I, Au, Th, U, Np, Pu, Am. - ZZ-IRDF-90: ENDF-6 Format; 640-group extended SAND II structure. Nuclides: Li, B, F, Mg, Al, P, S, Sc, Ti, Cr, Mn, Fe, Co, Ni, Cu, Zn, Zr, Nb, Rh, Cd, Ir, Gd, Au, Th, U, Np, Pu, V. Damage cross sections for Fe, Cr, Ni. Weighting spectra: Maxwell spectrum, 1/E spectrum and Watt fission spectrum. - ZZ-IRDF-2002: ENDF-6 Format (pointwise cross-section data). SAND II 640 energy group structure (multigroup data). Nuclides: Li, B, F, Na, Mg, Al, P, S, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Y, Zr, Nb, Rh, Ag, In, I, La, Pr, Tm, Ta, W, Au, Hg, Pb, Th, U, Np, Pu, Am, Cd, Gd. Damage cross sections for Fe, Cr, Ni, Si, GaAs displacement. Weighting spectra: - Typical MTR spectrum used in the input of the cross-section uncertainty processing code. - Flat weighting spectrum used in converting the pointwise cross-section data to the extended SAND-II group structure. - ZZ-IRDF-2002-ACE: ACE Format (continuous energy cross-section data for Monte Carlo). Nuclides: Li, B, F, Na, Mg, Al, P, S, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Y, Zr, Nb, Rh, Ag, In, I, La, Pr, Tm, Ta, W, Au, Hg, Pb, Th, U, Np, Pu, Am, Cd, Gd. Damage cross sections for Fe, Cr, Ni, Si, GaAs displacement. - (A) ZZ-IRDF-82: The 1982 version of the International Reactor Dosimetry File is composed of two parts. The first part is a collection of dosimetry cross sections and the second part contains a collection of benchmark spectra. For ease of use in dosimetry applications, both cross sections and spectra are distributed in multigroup form. Each of these two parts is in the ENDF/B-V Format as a separate computer file. I) The dosimetry cross section library contains the following data: (1) The entire ENDF/B-V Dosimetry Library (Mod. 1) in the form of 620-group averaged cross
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the degree of randomness achieved by various generation methods. To account for the weight function involved in the Monte Carlo calculation, the Metropolis method is used. The results of the experiment show no regular pattern in the generated numbers, indicating that the program generators are reasonably good, while the experimental results follow the expected statistical distributions. Some applications of the Monte Carlo method in physics are then given. The physical problems are chosen so that the models have known solutions, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show good agreement for the models considered
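The abstract above mentions the Metropolis method for handling the weight function but gives no code. The following is a minimal illustrative sketch, not the paper's experiment: the target weight function (an unnormalized Gaussian) and all parameters are assumptions, and sample moments are checked against their exact values.

```python
import math, random

random.seed(1)

def target_weight(x):
    # Unnormalized weight function: a standard normal, exp(-x^2 / 2).
    return math.exp(-0.5 * x * x)

def metropolis(n_steps, step=1.0, burn_in=1000):
    x = 0.0
    w = target_weight(x)
    samples = []
    for i in range(n_steps + burn_in):
        x_new = x + random.uniform(-step, step)
        w_new = target_weight(x_new)
        # Accept with probability min(1, w_new / w): this enforces detailed
        # balance with respect to the (unnormalized) target weight.
        if w_new >= w or random.random() < w_new / w:
            x, w = x_new, w_new
        if i >= burn_in:
            samples.append(x)
    return samples

samples = metropolis(200_000)
mean = sum(samples) / len(samples)                            # exact value: 0
second_moment = sum(s * s for s in samples) / len(samples)    # exact value: 1
```

Note that only ratios of the weight function appear, so the normalization constant is never needed; that is the point of the method.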
Shell model Monte Carlo methods
International Nuclear Information System (INIS)
We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications, including the ground-state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs
Extending canonical Monte Carlo methods
Velazquez, L.; Curilef, S.
2010-02-01
In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for extending the available Monte Carlo methods based on the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities, C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which we apply to extend two well-known Monte Carlo methods: Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These algorithms are employed to study the anomalous thermodynamic behavior of Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions. They successfully reduce the exponential divergence of the decorrelation time τ with increasing system size N to a weak power-law divergence τ ∝ N^α, with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Use of Monte Carlo Methods in brachytherapy
International Nuclear Information System (INIS)
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because of the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources: small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding calculations, and obtaining dose distributions around applicators. (Author)
Sánchez-Álvaro, E
2000-01-01
The process e+e- → ZZ is studied at LEP at center-of-mass energies near 183 and 189 GeV. Cross sections are measured and found to be in agreement with the standard model expectations. Limits on anomalous ZZZ and ZZγ couplings are set. (6 refs).
Experience with the Monte Carlo Method
International Nuclear Information System (INIS)
Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles laboratory experiments in many respects. Moreover, Monte Carlo simulations can provide insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown of how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and for the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfalls of condensed
Guideline for radiation transport simulation with the Monte Carlo method
International Nuclear Information System (INIS)
Today, photon and neutron transport calculations with the Monte Carlo method have progressed with advanced Monte Carlo codes and high-speed computers; "Monte Carlo simulation" is now a more suitable expression than "Monte Carlo calculation." As Monte Carlo codes become friendlier and computer performance grows, most shielding problems will be solved with Monte Carlo codes and high-speed computers. Because those codes provide standard input data for some problems, the essential techniques of the Monte Carlo method and its variance reduction techniques may lose the attention of general Monte Carlo users. In this paper, essential techniques of the Monte Carlo method and variance reduction techniques, such as the importance sampling method, selection of the estimator, and biasing techniques, are described to afford a better understanding of the Monte Carlo method and Monte Carlo codes. (author)
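The importance sampling method named above can be sketched on a toy problem. This example is an illustrative assumption, not from the paper: both estimators converge to the integral of x·eˣ over [0, 1], which is exactly 1, but drawing samples from a density that resembles the integrand sharply reduces the variance.

```python
import math, random

random.seed(2)
N = 200_000

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Crude Monte Carlo for I = integral of x * e^x over [0, 1] (exactly 1):
# average f(U) for U ~ Uniform(0, 1).
crude = [u * math.exp(u) for u in (random.random() for _ in range(N))]
crude_mean, crude_var = mean_var(crude)

# Importance sampling with density g(x) = 2x on (0, 1): sample X = sqrt(U)
# and average f(X) / g(X) = e^X / 2, which fluctuates far less than f(U).
weighted = [math.exp(math.sqrt(random.random())) / 2.0 for _ in range(N)]
is_mean, is_var = mean_var(weighted)
```

Here the weighted estimator's per-sample variance is roughly an order of magnitude below the crude one, so the same accuracy needs far fewer histories; this is the mechanism behind the biasing techniques the abstract describes.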
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concepts of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed-balance algorithms, starting from a conventional algorithm that satisfies detailed balance.
Extending canonical Monte Carlo methods: II
International Nuclear Information System (INIS)
We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. The size dependence of the decorrelation time τ is reduced to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14–0.18
Introduction to the Monte Carlo methods
International Nuclear Information System (INIS)
Codes illustrating the use of Monte Carlo methods in high energy physics, such as the inverse transformation method, the rejection method, particle propagation through the nucleus, particle interaction with the nucleus, etc., are presented. A set of useful random number generation algorithms is given (for the binomial distribution, the Poisson distribution, the β-distribution, the γ-distribution, and the normal distribution). 5 figs., 1 tab
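The inverse transformation and rejection methods mentioned above can be sketched in a few lines. The distributions below are chosen for illustration (they are assumptions, not examples from the lecture): an exponential variate obtained by inverting the CDF, and a sine-shaped density sampled by rejection under a uniform envelope.

```python
import math, random

random.seed(3)
N = 100_000

# Inverse transformation method: if U ~ Uniform(0, 1), then X = -ln(U) / lam
# is exponentially distributed with rate lam (here lam = 2, mean 1/2).
lam = 2.0
exp_samples = [-math.log(random.random()) / lam for _ in range(N)]
exp_mean = sum(exp_samples) / N        # should approach 1 / lam = 0.5

# Rejection method: sample from density f(x) = sin(x) / 2 on [0, pi] using a
# uniform envelope; accept the candidate x when v < sin(x).
sin_samples = []
while len(sin_samples) < N:
    x = random.uniform(0.0, math.pi)
    if random.random() < math.sin(x):
        sin_samples.append(x)
sin_mean = sum(sin_samples) / N        # symmetric density: mean is pi / 2
```

Inversion needs an invertible CDF; rejection only needs the density up to a constant and a bounding envelope, which is why the two methods are usually taught together.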
The Monte Carlo method the method of statistical trials
Shreider, YuA
1966-01-01
The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensional ...
The Moment Guided Monte Carlo Method
Degond, Pierre; Dimarco, Giacomo; Pareschi, Lorenzo
2009-01-01
In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which makes it possible to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non-equilibrium term. The basic idea, on which the method relies, consists in guiding the particle positions and velocities through the moment equations so that the concurrent solution of the moment and kinetic models furnishes the same macroscopic quantities.
New Dynamic Monte Carlo Renormalization Group Method
Lacasse, Martin-D.; Vinals, Jorge; Grant, Martin
1992-01-01
The dynamical critical exponent of the two-dimensional spin-flip Ising model is evaluated by a Monte Carlo renormalization group method involving a transformation in time. The results agree very well with a finite-size scaling analysis performed on the same data. The value of $z = 2.13 \\pm 0.01$ is obtained, which is consistent with most recent estimates.
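A minimal single-spin-flip Metropolis simulation of the 2D Ising model, of the kind underlying such dynamic Monte Carlo studies, can be sketched as follows. The lattice size, temperatures, and sweep counts are illustrative assumptions, far smaller than anything used to extract a critical exponent.

```python
import math, random

random.seed(4)
L = 16   # lattice side; tiny, for illustration only

def sweep(spins, beta):
    # One Metropolis sweep: attempt L*L randomly chosen single-spin flips.
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nn    # energy change of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

def magnetization(beta, sweeps=200):
    spins = [[1] * L for _ in range(L)]    # ordered start
    for _ in range(sweeps):
        sweep(spins, beta)
    return abs(sum(sum(row) for row in spins)) / (L * L)

m_cold = magnetization(beta=1.0)   # well below T_c (beta_c ≈ 0.44): ordered
m_hot = magnetization(beta=0.1)    # well above T_c: disordered
```

Near the critical point this local dynamics decorrelates slowly (critical slowing down), which is exactly why the dynamical exponent z in the abstract is the quantity of interest.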
Monte Carlo methods for preference learning
DEFF Research Database (Denmark)
Viappiani, P.
2012-01-01
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Monte Carlo methods for applied scientists
Dimov, Ivan T
2007-01-01
The Monte Carlo method is inherently parallel and the extensive and rapid development in parallel computers, computational clusters and grids has resulted in renewed and increasing interest in this method. At the same time there has been an expansion in the application areas and the method is now widely used in many important areas of science including nuclear and semiconductor physics, statistical mechanics and heat and mass transfer. This book attempts to bridge the gap between theory and practice concentrating on modern algorithmic implementation on parallel architecture machines. Although
by means of FLUKA Monte Carlo method
Directory of Open Access Journals (Sweden)
Ermis Elif Ebru
2015-01-01
Calculations of the gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close accordance with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
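FLUKA itself is a large production code; as a hedged illustration of the physics being exercised, attenuation through matter can be estimated in a few lines of Monte Carlo by sampling exponential free paths and comparing with the Beer-Lambert law. The attenuation coefficient and slab thickness below are arbitrary assumptions, not values from the paper.

```python
import math, random

random.seed(5)

mu = 0.2           # assumed linear attenuation coefficient, 1/cm
thickness = 5.0    # assumed slab thickness, cm (so mu * thickness = 1)
N = 200_000

transmitted = 0
for _ in range(N):
    # Free path to first interaction is exponential with mean 1 / mu.
    path = -math.log(random.random()) / mu
    if path > thickness:
        transmitted += 1

mc_fraction = transmitted / N
analytic = math.exp(-mu * thickness)   # Beer-Lambert: e^-1 ≈ 0.368
```

A real mass attenuation calculation tracks energy-dependent cross sections and secondary particles, which is what FLUKA adds over this sketch.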
The Moment Guided Monte Carlo Method
Degond, Pierre; Pareschi, Lorenzo
2009-01-01
In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which makes it possible to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non-equilibrium term. The basic idea, on which the method relies, consists in guiding the particle positions and velocities through the moment equations so that the concurrent solution of the moment and kinetic models furnishes the same macroscopic quantities.
Reactor perturbation calculations by Monte Carlo methods
International Nuclear Information System (INIS)
Whilst Monte Carlo methods are useful for reactor calculations involving complicated geometry, it is difficult to apply them to the calculation of perturbation worths because of the large amount of computing time needed to obtain good accuracy. Various ways of overcoming these difficulties are investigated in this report, with the problem of estimating absorbing control rod worths particularly in mind. As a basis for discussion a method of carrying out multigroup reactor calculations by Monte Carlo methods is described. Two methods of estimating a perturbation worth directly, without differencing two quantities of like magnitude, are examined closely but are passed over in favour of a third method based on a correlation technique. This correlation method is described, and demonstrated by a limited range of calculations for absorbing control rods in a fast reactor. In these calculations control rod worths of between 1% and 7% in reactivity are estimated to an accuracy better than 10% (3 standard errors) in about one hour's computing time on the English Electric KDF9 digital computer. (author)
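The correlation technique rests on driving the perturbed and unperturbed cases with the same random numbers, so that the small difference is estimated directly rather than by differencing two noisy quantities. A toy sketch under assumed parameters (a one-parameter response, not the report's reactor model):

```python
import math, random

random.seed(6)
N = 100_000
a1, a2 = 1.0, 1.05   # unperturbed and perturbed parameter (assumed toy model)

def g(u, a):
    # Toy response with known mean: E[g(U, a)] = (1 - e^-a) / a for U ~ U(0,1).
    return math.exp(-a * u)

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Independent sampling: each case would get its own random numbers, so the
# difference inherits the full per-sample variance of each case.
indep = [g(random.random(), a2) for _ in range(N)]
_, var_indep = mean_var(indep)

# Correlated sampling: the SAME random number drives both cases, so each
# per-sample difference is tiny and the variance of the difference collapses.
diffs = []
for _ in range(N):
    u = random.random()
    diffs.append(g(u, a2) - g(u, a1))
d_corr, var_corr = mean_var(diffs)

exact = (1 - math.exp(-a2)) / a2 - (1 - math.exp(-a1)) / a1   # ≈ -0.013
```

The variance of the correlated difference is orders of magnitude below the per-sample variance of either case alone, which is why small worths become estimable at all.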
Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
In neutral-particle radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the game) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented, and the necessity of biasing the game is demonstrated; a biased simulation is carried out. Finally, current developments (for instance, the rewriting of programs) are presented; two of the motivations are the advent of vector computing and photon and neutron transport in void media
Introduction to Monte-Carlo method
International Nuclear Information System (INIS)
We recall first some well-known facts about random variables and sampling. Then we define the Monte-Carlo method in the case where one wants to compute a given integral. Afterwards, we turn to discrete Markov chains, for which we define random walks, and apply them to finite difference approximations of diffusion equations. Finally we consider Markov chains with continuous state (but discrete time), transition probabilities and random walks, which are the main subject of this work. The applications are: diffusion and advection equations, and the linear transport equation with scattering
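The link between random walks and finite-difference diffusion problems can be illustrated with a one-dimensional discrete Laplace equation, where the absorption probabilities of a symmetric walk reproduce the solution. This is a standard gambler's-ruin example, not one taken from the text.

```python
import random

random.seed(7)

# Discrete Laplace equation u'' = 0 on the grid {0, ..., M} with boundary
# values u(0) = 0 and u(M) = 1.  A symmetric random walk started at k is run
# to absorption and scores the boundary value where it stops; the sample
# mean estimates u(k), whose exact value here is k / M.
M = 10

def estimate_u(k, n_walks=20_000):
    total = 0
    for _ in range(n_walks):
        pos = k
        while 0 < pos < M:
            pos += random.choice((-1, 1))
        if pos == M:
            total += 1
    return total / n_walks

u3 = estimate_u(3)   # exact solution: 3 / 10 = 0.3
```

The same construction generalizes to higher dimensions and to the parabolic case, which is how the random walks in the lecture attach to finite-difference diffusion schemes.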
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Accelerated Monte Carlo Methods for Coulomb Collisions
Rosin, Mark; Ricketson, Lee; Dimits, Andris; Caflisch, Russel; Cohen, Bruce
2014-03-01
We present a new highly efficient multi-level Monte Carlo (MLMC) simulation algorithm for Coulomb collisions in a plasma. The scheme, initially developed and used successfully for applications in financial mathematics, is applied here to kinetic plasmas for the first time. The method is based on a Langevin treatment of the Landau-Fokker-Planck equation and has a rich history derived from the works of Einstein and Chandrasekhar. The MLMC scheme successfully reduces the computational cost of achieving an RMS error ε in the numerical solution of collisional plasma problems from O(ε⁻³) - for the standard state-of-the-art Langevin and binary collision algorithms - to a theoretically optimal O(ε⁻²) scaling, when used in conjunction with an underlying Milstein discretization of the Langevin equation. In the test case presented here, the method accelerates simulations by factors of up to 100. We summarize the scheme, present some tricks for improving its efficiency yet further, and discuss the method's range of applicability. Work performed for US DOE by LLNL under contract DE-AC52-07NA27344 and by UCLA under grant DE-FG02-05ER25710.
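The multi-level idea can be sketched on a far simpler problem than Coulomb collisions. The toy example below is an assumption for illustration (a geometric Brownian motion with an Euler scheme, not the paper's Langevin/Milstein setup): fine and coarse paths share the same Brownian increments, and the level corrections are summed telescopically.

```python
import math, random

random.seed(8)
r, sigma, T, S0 = 0.05, 0.2, 1.0, 1.0   # assumed SDE parameters

def level_estimate(level, n_samples):
    # Mean of P_l - P_{l-1}: fine and coarse Euler paths of the SAME
    # Brownian motion (the coarse step consumes two fine increments).
    n_fine = 2 ** level
    dt = T / n_fine
    total = 0.0
    for _ in range(n_samples):
        s_fine = s_coarse = S0
        dw_pair = 0.0
        for step in range(n_fine):
            dw = random.gauss(0.0, math.sqrt(dt))
            s_fine += r * s_fine * dt + sigma * s_fine * dw
            if level == 0:
                continue
            dw_pair += dw
            if step % 2 == 1:
                s_coarse += r * s_coarse * (2 * dt) + sigma * s_coarse * dw_pair
                dw_pair = 0.0
        total += s_fine - (s_coarse if level > 0 else 0.0)
    return total / n_samples

# Telescoping sum: E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}],
# with fewer samples spent on the expensive fine levels.
L_max = 5
mlmc = sum(level_estimate(l, 40_000 // 2 ** l + 2_000) for l in range(L_max + 1))
exact = S0 * math.exp(r * T)   # E[S_T] = S0 * e^(rT) for this SDE
```

Because the coupled differences have small variance, most samples are taken on cheap coarse levels; this is the source of the cost reduction the abstract quantifies.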
Extending canonical Monte Carlo methods: II
Velazquez, L.; Curilef, S.
2010-04-01
We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0. Now, we improve this methodology by including the finite-size effects that reduce the precision of a direct determination of the microcanonical caloric curve β(E) = ∂S(E)/∂E, as well as by carrying out a better implementation of the MC schemes. We show that, despite the modifications considered, the extended canonical MC methods lead to an impressive overcoming of the so-called supercritical slowing down observed close to the region of the temperature-driven first-order phase transition. In this case, the size dependence of the decorrelation time τ is reduced from an exponential growth to a weak power-law behavior, τ(N) ∝ N^α, as is shown in the particular case of the 2D seven-state Potts model where the exponent α = 0.14-0.18.
Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia
Energy Technology Data Exchange (ETDEWEB)
Granero Cabanero, D.
2015-07-01
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because of the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources: small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding calculations, and obtaining dose distributions around applicators. (Author)
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, that is, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
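Importance sampling for rare events can be sketched in a few lines. In this assumed toy example (not one from the book), the tail probability P(Z > 4) of a standard normal, essentially invisible to crude Monte Carlo at this sample size, is estimated by sampling from a shifted normal and reweighting.

```python
import math, random

random.seed(9)
N = 100_000
c = 4.0   # threshold: estimate p = P(Z > c) for standard normal Z

# Importance sampling: draw from N(c, 1) so the rare region is hit about half
# the time, then reweight each hit by the likelihood ratio
# phi(y) / phi(y - c) = exp(-c*y + c^2 / 2).
total = 0.0
for _ in range(N):
    y = random.gauss(c, 1.0)
    if y > c:
        total += math.exp(-c * y + 0.5 * c * c)
p_is = total / N

p_exact = 0.5 * math.erfc(c / math.sqrt(2.0))   # ≈ 3.17e-5
```

Crude sampling would see on average only about three exceedances in these 100,000 draws; the tilted estimator instead achieves a relative error well under 10%, which is the rare-event gain the book formalizes.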
Higgs bosons in ZZ production from pp collisions
International Nuclear Information System (INIS)
The contribution to pp → ZZ + X from gluon fusion (gg → ZZ) is determined and combined with the contributions from the subprocesses qq̄ → ZZ, WW → ZZ, and ZZ → ZZ to find the total Higgs signal in ZZ production. 10 refs., 6 figs
Combinatorial nuclear level density by a Monte Carlo method
Cerf, N.
1993-01-01
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning t...
Neutron transport calculations using Quasi-Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Moskowitz, B.S.
1997-07-01
This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("Quasi-Monte Carlo method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.
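The effect described in the abstract can be reproduced on a trivial integration problem. The sketch below uses a smooth one-dimensional integrand chosen for illustration (the paper's problems are neutron transport): a base-2 van der Corput quasirandom sequence is compared against the RMS error of repeated pseudorandom runs at the same sample count.

```python
import math, random

random.seed(10)

def van_der_corput(n, base=2):
    # Radical inverse of n: reflect the base-b digits of n about the radix
    # point, giving a low-discrepancy quasirandom sequence in [0, 1).
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, digit = divmod(n, base)
        q += digit / denom
    return q

def f(x):
    return math.exp(x)   # smooth test integrand: integral over [0, 1] is e - 1

N = 4096
exact = math.e - 1.0

# Quasi-Monte Carlo: one deterministic low-discrepancy point set.
qmc_err = abs(sum(f(van_der_corput(i)) for i in range(N)) / N - exact)

# Standard Monte Carlo: root mean square error over repeated runs,
# mirroring the comparison described in the abstract.
runs = 20
sq_err = 0.0
for _ in range(runs):
    est = sum(f(random.random()) for _ in range(N)) / N
    sq_err += (est - exact) ** 2
rms_pseudo = math.sqrt(sq_err / runs)
```

For smooth integrands the quasirandom error decays close to O(1/N) versus O(1/√N) for pseudorandom points, so at N = 4096 the gap is already more than an order of magnitude.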
Monte Carlo method for solving a parabolic problem
Directory of Open Access Journals (Sweden)
Tian Yi
2016-01-01
In this paper, we present a numerical method based on random sampling for a parabolic problem. This method combines the Crank-Nicolson method and the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve the linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.
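Solving a sparse linear system by Monte Carlo, as the abstract describes, is commonly done by estimating a Neumann series with random walks. The sketch below is an assumption-laden miniature: a tiny iteration matrix with spectral radius well below one, not an actual Crank-Nicolson matrix.

```python
import random

random.seed(11)

# Toy fixed-point form x = H x + f, i.e. (I - H) x = f, with a contraction H
# (NOT a real Crank-Nicolson system, which would first need this splitting).
H = [[0.2, 0.1],
     [0.3, 0.2]]
f = [1.0, 1.0]
n = len(f)

def walk_estimate(i, steps=20):
    # One random walk estimates x_i = sum_k (H^k f)_i: the next index is
    # chosen uniformly, and the weight carries the factor H[state][nxt] * n
    # to keep the estimate of each Neumann-series term unbiased.
    weight, total, state = 1.0, f[i], i
    for _ in range(steps):
        nxt = random.randrange(n)
        weight *= H[state][nxt] * n
        total += weight * f[nxt]
        state = nxt
    return total

n_walks = 50_000
x_mc = [sum(walk_estimate(i) for _ in range(n_walks)) / n_walks
        for i in range(n)]
# Exact solution of (I - H) x = f is (0.9/0.61, 1.1/0.61) ≈ (1.475, 1.803).
```

Each component of the solution can be estimated independently and in parallel, which is one practical attraction of Monte Carlo linear solvers for the large sparse systems such discretizations produce.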
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
International Nuclear Information System (INIS)
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Energy Technology Data Exchange (ETDEWEB)
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Ground-state properties of LiH by reptation quantum Monte Carlo methods.
Ospadov, Egor; Oblinsky, Daniel G; Rothstein, Stuart M
2011-05-01
We apply reptation quantum Monte Carlo to calculate one- and two-electron properties for ground-state LiH, including all tensor components for static polarizabilities and hyperpolarizabilities to fourth-order in the field. The importance sampling is performed with a large (QZ4P) STO basis set single determinant, directly obtained from commercial software, without incurring the overhead of optimizing many-parameter Jastrow-type functions of the inter-electronic and internuclear distances. We present formulas for the electrical response properties free from the finite-field approximation, which can be problematic for the purposes of stochastic estimation. The α, γ, A and C polarizability values are reasonably consistent with recent determinations reported in the literature, where they exist. A sum rule is obeyed for components of the B tensor, but B(zz,zz) as well as β(zzz) differ from what was reported in the literature. PMID:21445452
Quantum Monte Carlo methods algorithms for lattice models
Gubernatis, James; Werner, Philipp
2016-01-01
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...
Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules
Lester, William A; Reynolds, PJ
1994-01-01
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential. Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n
On the Markov Chain Monte Carlo (MCMC) method
Indian Academy of Sciences (India)
Rajeeva L Karandikar
2006-04-01
Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
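The idea of sampling from an indirectly specified distribution can be illustrated with a minimal random-walk Metropolis sampler (a toy sketch in Python, not from the article; the Gaussian proposal and step size are illustrative choices):

```python
import math
import random

def metropolis(log_density, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis: sample from a target known only up to
    a normalizing constant, via its log-density."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)   # symmetric proposal
        # Accept with probability min(1, pi(proposal)/pi(x))
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, specified only through an unnormalized log-density.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because only differences of the log-density enter the acceptance test, the target never needs to be normalized, which is precisely what "specified indirectly" allows.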
A Particle Population Control Method for Dynamic Monte Carlo
Sweezy, Jeremy; Nolen, Steve; Adams, Terry; Zukaitis, Anthony
2014-06-01
A general particle population control method has been derived from splitting and Russian roulette for dynamic Monte Carlo particle transport. A well-known particle population control method, known as the particle population comb, has been shown to be a special case of this general method. This general method has been incorporated in Los Alamos National Laboratory's Monte Carlo Application Toolkit (MCATK), and examples of its use are shown for both super-critical and sub-critical systems.
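The splitting and Russian-roulette games that the general method unifies can be sketched as follows (an illustrative toy, not MCATK code; the weight-window bounds w_low and w_high are assumed parameters):

```python
import random

def population_control(particles, w_low=0.5, w_high=2.0, rng=random.random):
    """Apply splitting (heavy particles) and Russian roulette (light
    particles) so that surviving weights fall inside [w_low, w_high]."""
    survivors = []
    for w in particles:                      # each particle is just a weight here
        if w > w_high:                       # split into n copies of weight w/n
            n = int(w / w_high) + 1
            survivors.extend([w / n] * n)
        elif w < w_low:                      # roulette: survive with prob w/w_low
            if rng() < w / w_low:
                survivors.append(w_low)      # survivor carries the boosted weight
        else:
            survivors.append(w)
    return survivors

random.seed(1)
before = [0.01, 0.3, 1.0, 5.0, 12.0]
after = population_control(before)
```

Splitting preserves total weight exactly, while roulette preserves it only in expectation; together they keep the game unbiased while controlling the population size.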
Problems in radiation shielding calculations with Monte Carlo methods
International Nuclear Information System (INIS)
The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems encountered in solving large shielding systems that include radiation streaming. The Monte Carlo coupling technique was developed to treat such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique for detectors located outside the radiation streaming path was still too large. To obtain more accurate results for such detectors, and also for a multi-legged-duct streaming problem, a practicable ''Prism Scattering technique'' is proposed in this study. (author)
Monte Carlo methods and applications in nuclear physics
International Nuclear Information System (INIS)
Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs
Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo
Energy Technology Data Exchange (ETDEWEB)
Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)
2011-07-01
This book presents some numerical probabilistic simulation methods together with their convergence rates. It combines mathematical precision with numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After some recalls of the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. They then devote a chapter to non-asymptotic estimates of Monte Carlo method errors. This chapter recalls the central limit theorem and quantifies its convergence rate, and it introduces the Log-Sobolev and concentration inequalities, which have been intensively studied in recent years; the chapter ends with some variance reduction techniques. In order to establish rigorously the simulation results for stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essentials of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence rates of the algorithms are rigorously proved. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)
Application of biasing techniques to the contributon Monte Carlo method
International Nuclear Information System (INIS)
Recently, a new Monte Carlo method called the Contributon Monte Carlo Method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfactory results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table
Simulation and the Monte Carlo Method, Student Solutions Manual
Rubinstein, Reuven Y
2012-01-01
This accessible new edition explores the major topics in Monte Carlo simulation. Simulation and the Monte Carlo Method, Second Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over twenty-five years ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, suc
A residual Monte Carlo method for discrete thermal radiative diffusion
International Nuclear Information System (INIS)
Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems
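The conventional 1/√N error law that the residual method improves upon is easy to demonstrate numerically (a generic sketch, unrelated to the radiative diffusion application itself):

```python
import random

def mc_estimate(n, seed):
    """Plain Monte Carlo estimate of the integral of x^2 on [0, 1] (exact: 1/3)."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n)) / n

def rms_error(n, trials=200):
    """Root-mean-square error over many independent repetitions."""
    errs = [(mc_estimate(n, seed) - 1.0 / 3.0) ** 2 for seed in range(trials)]
    return (sum(errs) / trials) ** 0.5

# Quadrupling the sample size should roughly halve the RMS error (1/sqrt(N)).
e1, e2 = rms_error(100), rms_error(400)
ratio = e1 / e2
```

Quadrupling the number of histories roughly halves the RMS error, in line with the 1/√N rate quoted above; the residual method's exp(-bN) rate decays far faster.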
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
International Nuclear Information System (INIS)
Full core calculations are very useful and important in reactor physics analysis, especially in computing full-core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward an RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate geometries more complex than repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions are discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
Comparison between Monte Carlo method and deterministic method
International Nuclear Information System (INIS)
A fast critical assembly consists of a lattice of plates of sodium, plutonium or uranium, resulting in a high inhomogeneity. The inhomogeneity in the lattice should be evaluated carefully to determine the bias factor accurately. Deterministic procedures are generally used for the lattice calculation. To reduce the required calculation time, various one-dimensional lattice models have been developed previously to replace multi-dimensional models. In the present study, calculations are made for a two-dimensional model and results are compared with those obtained with one-dimensional models in terms of the average microscopic cross section of a lattice and diffusion coefficient. Inhomogeneity in a lattice affects the effective cross section and distribution of neutrons in the lattice. The background cross section determined by the method proposed by Tone is used here to calculate the effective cross section, and the neutron distribution is determined by the collision probability method. Several other methods have been proposed to calculate the effective cross section. The present study also applies the continuous energy Monte Carlo method to the calculation. A code based on this method is employed to evaluate several one-dimensional models. (Nogami, K.)
Computing Functionals of Multidimensional Diffusions via Monte Carlo Methods
Jan Baldeaux; Eckhard Platen
2012-01-01
We discuss suitable classes of diffusion processes, for which functionals relevant to finance can be computed via Monte Carlo methods. In particular, we construct exact simulation schemes for processes from this class. However, should the finance problem under consideration require e.g. continuous monitoring of the processes, the simulation algorithm can easily be embedded in a multilevel Monte Carlo scheme. We choose to introduce the finance problems under the benchmark approach, and find that this approach allows us to exploit conveniently the analytical tractability of these diffusion processes.
Computing Greeks with Multilevel Monte Carlo Methods using Importance Sampling
Euget, Thomas
2012-01-01
This paper presents a new efficient way to reduce the variance of estimators of popular payoffs and Greeks encountered in financial mathematics. The idea is to combine Importance Sampling with the Multilevel Monte Carlo method recently introduced by M.B. Giles. So far, Importance Sampling had been proved successful in combination with the standard Monte Carlo method. We will show the efficiency of our approach on the estimation of prices of financial derivatives and then on the estimation of Greeks (i.e. sensitivities...)
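The multilevel Monte Carlo telescoping estimator referred to above can be sketched as follows (an illustrative toy for a call payoff under geometric Brownian motion, not the paper's implementation; the level and path counts are simple assumptions rather than Giles's optimal allocation):

```python
import math
import random

def euler_pair(S0, r, sigma, T, n_fine, rng):
    """Euler scheme for geometric Brownian motion on a fine grid and, from the
    SAME Brownian increments, on a coarse grid with half as many steps.
    Returns (fine terminal value, coarse terminal value)."""
    dt = T / n_fine
    s_f = s_c = S0
    dw_c = 0.0
    for i in range(n_fine):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s_f += r * s_f * dt + sigma * s_f * dw
        dw_c += dw
        if i % 2 == 1:                       # every two fine steps = one coarse step
            s_c += r * s_c * (2 * dt) + sigma * s_c * dw_c
            dw_c = 0.0
    return s_f, s_c

def mlmc(payoff, S0, r, sigma, T, levels=4, n0=2000, seed=0):
    """Multilevel estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(levels + 1):
        n_steps = 2 ** (level + 1)
        n_paths = n0 * 2 ** (levels - level)   # fewer paths on finer levels
        acc = 0.0
        for _ in range(n_paths):
            fine, coarse = euler_pair(S0, r, sigma, T, n_steps, rng)
            acc += payoff(fine) - (payoff(coarse) if level > 0 else 0.0)
        total += acc / n_paths
    return total

price = mlmc(lambda s: max(s - 100.0, 0.0), S0=100.0, r=0.05, sigma=0.2, T=1.0)
```

The key point is that the fine and coarse paths on each level share the same Brownian increments, so the level differences have small variance and need few samples; the telescoping sum still targets the finest discretization.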
A New Method for Parallel Monte Carlo Tree Search
Mirsoleimani, S. Ali; Plaat, Aske; Herik, Jaap van den; Vermaseren, Jos
2016-01-01
In recent years there has been much interest in the Monte Carlo tree search algorithm, a new, adaptive, randomized optimization algorithm. In fields as diverse as Artificial Intelligence, Operations Research, and High Energy Physics, research has established that Monte Carlo tree search can find good solutions without domain-dependent heuristics. However, practice shows that reaching high performance on large parallel machines is not as successful as expected. This paper proposes a new method...
Monte Carlo methods and models in finance and insurance
Korn, Ralf
2010-01-01
Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applicat
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Monte Carlo methods for the self-avoiding walk
Energy Technology Data Exchange (ETDEWEB)
Janse van Rensburg, E J [Department of Mathematics and Statistics, York University, Toronto, ON M3J 1P3 (Canada)], E-mail: rensburg@yorku.ca
2009-08-14
The numerical simulation of self-avoiding walks remains a significant component in the study of random objects in lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions however, and I review specific Monte Carlo methods for improved sampling including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov Chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)
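Of the dynamic algorithms mentioned, the pivot move is perhaps the easiest to sketch (a toy 2D square-lattice version; the symmetry list and walk length are illustrative choices):

```python
import random

# A subset of the square-lattice point symmetries (rotations and reflections).
SYMMETRIES = [lambda x, y: (y, -x), lambda x, y: (-x, -y), lambda x, y: (-y, x),
              lambda x, y: (x, -y), lambda x, y: (-x, y)]

def pivot_step(walk, rng):
    """One pivot move: apply a random lattice symmetry to the tail of the walk
    about a random pivot site, accepting only if the result is self-avoiding."""
    k = rng.randrange(1, len(walk) - 1)
    px, py = walk[k]
    op = rng.choice(SYMMETRIES)
    head = walk[:k + 1]
    tail = []
    for (x, y) in walk[k + 1:]:
        dx, dy = op(x - px, y - py)
        tail.append((px + dx, py + dy))
    occupied = set(head)
    for p in tail:
        if p in occupied:
            return walk                      # reject: move is not self-avoiding
        occupied.add(p)
    return head + tail

rng = random.Random(42)
walk = [(i, 0) for i in range(20)]           # start from a straight rod
for _ in range(500):
    walk = pivot_step(walk, rng)
```

Rejected moves simply keep the current walk, so self-avoidance is an invariant of the chain; because the symmetries are lattice isometries, nearest-neighbour steps are preserved as well.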
Monte Carlo Methods for Tempo Tracking and Rhythm Quantization
Cemgil, A T; 10.1613/jair.1121
2011-01-01
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest that sequential methods yield better results. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr...
Monte Carlo method application to shielding calculations
International Nuclear Information System (INIS)
CANDU spent fuel discharged from the reactor core contains Pu, so it must be stressed in two directions: tracing for the fuel reactivity in order to prevent critical mass formation and personnel protection during the spent fuel manipulation. The basic tasks accomplished by the shielding calculations in a nuclear safety analysis consist in dose rates calculations in order to prevent any risks both for personnel protection and impact on the environment during the spent fuel manipulation, transport and storage. To perform photon dose rates calculations the Monte Carlo MORSE-SGC code incorporated in SAS4 sequence from SCALE system was used. The paper objective was to obtain the photon dose rates to the spent fuel transport cask wall, both in radial and axial directions. As source of radiation one spent CANDU fuel bundle was used. All the geometrical and material data related to the transport cask were considered according to the shipping cask type B model, whose prototype has been realized and tested in the Institute for Nuclear Research Pitesti. (authors)
Quantum Monte Carlo diagonalization method as a variational calculation
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1997-05-01
A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and diagonalization method. This method overcomes the limitation of the conventional shell model diagonalization and can extremely widen the feasibility of shell model calculations with realistic interactions for spectroscopic study of nuclear structure. (author)
Auxiliary-field quantum Monte Carlo methods in nuclei
Alhassid, Y
2016-01-01
Auxiliary-field quantum Monte Carlo methods enable the calculation of thermal and ground state properties of correlated quantum many-body systems in model spaces that are many orders of magnitude larger than those that can be treated by conventional diagonalization methods. We review recent developments and applications of these methods in nuclei using the framework of the configuration-interaction shell model.
Observations on variational and projector Monte Carlo methods
International Nuclear Information System (INIS)
Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed
A Multivariate Time Series Method for Monte Carlo Reactor Analysis
International Nuclear Information System (INIS)
A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for the acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit from the signal processing discipline with the neutron multiplication eigenvalue problem of the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three-dimensional spatial modeling of the initial core of a pressurized water reactor.
Monte Carlo methods for the reliability analysis of Markov systems
International Nuclear Information System (INIS)
This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
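The forward simulation underlying such estimates can be sketched for the simplest Markov system, a single repairable component (an illustrative toy with assumed failure and repair rates, not the paper's weighted scheme):

```python
import math
import random

def simulate_unavailability(lam, mu, t_end, n_hist=20000, seed=0):
    """Forward Monte Carlo for a single repairable component with failure
    rate lam and repair rate mu: estimate P(failed at time t_end)."""
    rng = random.Random(seed)
    failed_count = 0
    for _ in range(n_hist):
        t, up = 0.0, True
        while True:
            rate = lam if up else mu
            t += rng.expovariate(rate)       # exponential time to next transition
            if t > t_end:
                break
            up = not up                      # fail or get repaired
        if not up:
            failed_count += 1
    return failed_count / n_hist

lam, mu, t_end = 0.1, 1.0, 5.0
mc = simulate_unavailability(lam, mu, t_end)
exact = lam / (lam + mu) * (1.0 - math.exp(-(lam + mu) * t_end))
```

For this two-state process the time-dependent unavailability has the closed form λ/(λ+μ)·(1−e^(−(λ+μ)t)), which the analog simulation reproduces within statistical error; the adjoint weighting discussed above aims to reduce exactly this statistical error.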
Introduction to Monte Carlo methods: sampling techniques and random numbers
International Nuclear Information System (INIS)
The Monte Carlo method describes a very broad area of science, in which many processes, physical systems and phenomena that are statistical in nature and are difficult to solve analytically are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions. As the number of individual events (called histories) is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. Assuming that the behavior of the physical system can be described by probability density functions, the Monte Carlo simulation can proceed by sampling from these probability density functions, which necessitates a fast and effective way to generate random numbers uniformly distributed on the interval (0,1). Particles are generated within the source region and are transported by sampling from probability density functions through the scattering media until they are absorbed or escape the volume of interest. The outcomes of these random samplings, or trials, must be accumulated or tallied in an appropriate manner to produce the desired result, but the essential characteristic of Monte Carlo is the use of random sampling techniques to arrive at a solution of the physical problem. The major components of Monte Carlo methods for random sampling for a given event are described in the paper
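Turning uniform random numbers on (0,1) into samples from a given probability density function is commonly done by the inverse-transform technique, sketched here for the exponential distribution (an illustrative example, not taken from the paper):

```python
import math
import random

def sample_exponential(lam, rng):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(1-U)/lam follows the exponential distribution with rate lam,
    because the exponential CDF F(x) = 1 - exp(-lam*x) is inverted at U."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(0)
lam = 2.0
samples = [sample_exponential(lam, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)           # should approach 1/lam = 0.5
```

Exponential sampling of this kind is exactly what drives the distance-to-next-collision step in particle transport codes.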
Frequency domain optical tomography using a Monte Carlo perturbation method
Yamamoto, Toshihiro; Sakamoto, Hiroki
2016-04-01
A frequency domain Monte Carlo method is applied to near-infrared optical tomography, where an intensity-modulated light source with a given modulation frequency is used to reconstruct optical properties. The frequency domain reconstruction technique allows for better separation between the scattering and absorption properties of inclusions, even for ill-posed inverse problems that suffer from cross-talk between the scattering and absorption reconstructions. The frequency domain Monte Carlo calculation for light transport in an absorbing and scattering medium has thus far been analyzed mostly for the reconstruction of optical properties in simple layered tissues. This study applies a Monte Carlo calculation algorithm, which can handle complex-valued particle weights for solving a frequency domain transport equation, to optical tomography in two-dimensional heterogeneous tissues. The Jacobian matrix that is needed to reconstruct the optical properties is obtained by a first-order "differential operator" technique, which involves less variance than the conventional "correlated sampling" technique. The numerical examples in this paper indicate that the newly proposed Monte Carlo method provides reconstructed results for the scattering and absorption coefficients that compare favorably with the results obtained from conventional deterministic or Monte Carlo methods.
Monte Carlo Form-Finding Method for Tensegrity Structures
Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping
2010-05-01
In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.
Extending the alias Monte Carlo sampling method to general distributions
International Nuclear Information System (INIS)
The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equally probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
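The discrete alias method as originally derived can be sketched as follows (Vose's formulation of the table construction; variable names are illustrative, and the paper's continuous extension is not shown):

```python
import random

def build_alias(probs):
    """Vose's alias method: O(n) setup producing two tables that allow
    O(1) sampling from an arbitrary discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l     # column s: keep s or fall through to l
        scaled[l] -= 1.0 - scaled[s]         # l donates the deficit of column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                  # leftovers are exactly 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng):
    i = rng.randrange(len(prob))                        # pick a column uniformly...
    return i if rng.random() < prob[i] else alias[i]    # ...then flip its biased coin

rng = random.Random(3)
target = [0.1, 0.2, 0.3, 0.4]
prob, alias = build_alias(target)
n = 200_000
counts = [0] * len(target)
for _ in range(n):
    counts[alias_sample(prob, alias, rng)] += 1
freqs = [c / n for c in counts]
```

Setup is O(n) and each sample costs one uniform index draw plus one comparison, which is what gives the method table-lookup accuracy at equal-probable-bin speed in the discrete case.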
MOSFET GATE CURRENT MODELLING USING MONTE-CARLO METHOD
Voves, J.; Vesely, J.
1988-01-01
The new technique for determining the probability of hot-electron travel through the gate oxide is presented. The technique is based on the Monte Carlo method and is used in MOSFET gate current modelling. The calculated values of gate current are compared with experimental results from direct measurements on MOSFET test chips.
Application of equivalence methods on Monte Carlo method based homogenization multi-group constants
International Nuclear Information System (INIS)
The multi-group constants generated via the continuous energy Monte Carlo method do not satisfy the equivalence between the reference calculation and the diffusion calculation applied in reactor core analysis. To satisfy the equivalence theory, the general equivalence theory (GET) and the super homogenization method (SPH) were applied to the Monte Carlo based group constants, and a simplified reactor core and the C5G7 benchmark were examined with the Monte Carlo constants. The results show that the calculation precision of the group constants is improved, and that GET and SPH are good candidates for the equivalence treatment of Monte Carlo homogenization. (authors)
A separable shadow Hamiltonian hybrid Monte Carlo method
Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.
2009-11-01
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
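For context, plain HMC (the baseline that SHMC and S2HMC improve on) can be sketched in a few lines (a one-dimensional toy, not the PROTOMOL implementation; target and parameters are illustrative):

```python
import math
import random

def leapfrog(q, p, grad_u, eps, n_steps):
    """Leapfrog/Verlet integration of Hamiltonian dynamics for H = U(q) + p^2/2."""
    p -= 0.5 * eps * grad_u(q)               # initial half kick
    for _ in range(n_steps - 1):
        q += eps * p
        p -= eps * grad_u(q)
    q += eps * p
    p -= 0.5 * eps * grad_u(q)               # final half kick
    return q, p

def hmc(u, grad_u, q0, n_samples, eps=0.2, n_steps=10, seed=0):
    """Hybrid/Hamiltonian Monte Carlo: MD proposal plus a Metropolis test on
    the total energy, which corrects the integrator's discretization error."""
    rng = random.Random(seed)
    q, out = q0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)              # fresh momentum each iteration
        q_new, p_new = leapfrog(q, p, grad_u, eps, n_steps)
        dh = (u(q_new) + 0.5 * p_new * p_new) - (u(q) + 0.5 * p * p)
        if math.log(rng.random()) < -dh:     # accept with prob min(1, e^{-dH})
            q = q_new
        out.append(q)
    return out

# Sample a standard normal: U(q) = q^2/2, grad U(q) = q.
samples = hmc(lambda q: 0.5 * q * q, lambda q: q, q0=0.0, n_samples=20_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The acceptance test on the energy error ΔH is exactly where the shadow-Hamiltonian idea enters: the leapfrog integrator conserves a shadow Hamiltonian far better than H itself, so sampling from it keeps acceptance high as the system grows.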
Monte Carlo methods for pricing financial options
Indian Academy of Sciences (India)
N Bolia; S Juneja
2005-04-01
Pricing financial options is amongst the most important and challenging problems in the modern financial industry. Except in the simplest cases, the prices of options do not have a simple closed-form solution, and efficient computational methods are needed to determine them. Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the underlying space of assets has a large dimensionality, as the performance of other numerical methods typically suffers from the 'curse of dimensionality'. However, even Monte Carlo techniques can be quite slow as the problem size increases, motivating research in variance reduction techniques to increase the efficiency of the simulations. In this paper, we review some of the popular variance reduction techniques and their application to pricing options. We particularly focus on the recent Monte Carlo techniques proposed to tackle the difficult problem of pricing American options. These include: regression-based methods, random tree methods and stochastic mesh methods. Further, we show how importance sampling, a popular variance reduction technique, may be combined with these methods to enhance their effectiveness. We also briefly review the evolving options market in India.
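As a minimal illustration of the variance reduction techniques the paper surveys, the sketch below prices a European call under Black-Scholes dynamics with antithetic variates. The parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def bs_call_mc(S0, K, r, sigma, T, n_paths, antithetic=True, seed=0):
    """Monte Carlo price of a European call under Black-Scholes dynamics,
    optionally with antithetic variates for variance reduction."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    if antithetic:
        z = np.concatenate([z, -z])        # pair each draw with its mirror image
    # Terminal asset price under risk-neutral geometric Brownian motion
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(len(payoff))

# At-the-money call: S0=100, K=100, r=5%, sigma=20%, T=1y
price, se = bs_call_mc(100, 100, 0.05, 0.2, 1.0, 100_000)
```

The antithetic pair (z, -z) cancels the odd part of the payoff's dependence on the driving noise, which typically shrinks the standard error at no extra sampling cost.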
On the Convergence of Adaptive Sequential Monte Carlo Methods
Beskos, Alexandros; Jasra, Ajay; Kantas, Nikolas; Thiery, Alexandre
2013-01-01
In several implementations of Sequential Monte Carlo (SMC) methods it is natural, and important in terms of algorithmic efficiency, to exploit the information of the history of the samples to optimally tune their subsequent propagations. In this article we provide a carefully formulated asymptotic theory for a class of such adaptive SMC methods. The theoretical framework developed here will cover, under assumptions, several commonly used SMC algorithms. There are only limited results a...
Bayesian Monte Carlo method for nuclear data evaluation
International Nuclear Information System (INIS)
A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using the nuclear model code TALYS and the experimental nuclear reaction database EXFOR. The method is applied to all nuclides at the same time. First, the global predictive power of TALYS is numerically assessed, which makes it possible to set the prior space of nuclear model solutions. Next, the method gradually zooms in on particular experimental data per nuclide, until for each specific target nuclide its existing experimental data can be used for weighted Monte Carlo sampling. To connect to the various schools of uncertainty propagation in applied nuclear science, the result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by its EXFOR-based weight. (orig.)
A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification
Wu, Keyi; Li, Jinglai
2016-09-01
In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithms, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitudes of speedup over the standard Monte Carlo methods.
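MMC is an adaptive importance sampling scheme; the reweighting idea underlying it can be illustrated with a fixed (non-adaptive) mean-shifted proposal for a Gaussian tail probability. This is a generic sketch, not the authors' MMC algorithm or their surrogate acceleration; the target event and the shift are illustrative choices.

```python
import numpy as np
from math import erf, sqrt

# Estimate the rare-event probability P(Y > 4) for Y ~ N(0, 1) by sampling
# from a mean-shifted proposal N(4, 1) and reweighting by the density ratio.
# MMC generalizes this: it builds the biasing distribution adaptively so the
# whole range of the performance parameter y is sampled nearly uniformly.
rng = np.random.default_rng(1)
n, shift = 100_000, 4.0
y = rng.standard_normal(n) + shift           # draws from the proposal N(shift, 1)
w = np.exp(-shift * y + 0.5 * shift**2)      # weight = phi(y) / phi(y - shift)
p_is = np.mean(w * (y > 4.0))                # reweighted indicator estimate

p_exact = 0.5 * (1.0 - erf(4.0 / sqrt(2.0)))  # exact tail, about 3.17e-5
```

Plain Monte Carlo with the same budget would see only a handful of tail hits; the shifted proposal puts half its samples past the threshold and corrects for the bias through the weights.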
A new method for commissioning Monte Carlo treatment planning systems
Aljarrah, Khaled Mohammed
2005-11-01
The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation in the radiation treatment of cancer. However, the modeling of the individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial-and-error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii in a water phantom, with measurement data. Different cost functions were studied to choose the appropriate function for the data comparison, and the beam parameters were determined in light of these comparisons. Under the assumption that linacs of the same type are identical in their geometries and differ only in their initial phase-space parameters, the results of this method can serve as source data to commission other machines of the same type.
Non-analogue Monte Carlo method, application to neutron simulation
International Nuclear Information System (INIS)
With most traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Nowadays, only the Monte Carlo method offers such possibilities. However, with significant attenuation the natural (analogue) simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions; these methods require the user to supply some parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield low figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; we then show how to calculate the importance function for general geometry in multigroup cases. We present a completely automatic biasing technique in which the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic scattering and for multigroup problems with anisotropic scattering. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without the splitting and Russian roulette techniques, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if we add splitting and Russian roulette.
Efficient Monte Carlo methods for continuum radiative transfer
Juvela, M
2005-01-01
We discuss the efficiency of Monte Carlo methods in solving continuum radiative transfer problems. The sampling of the radiation field and convergence of dust temperature calculations in the case of optically thick clouds are both studied. For spherically symmetric clouds we find that the computational cost of Monte Carlo simulations can be reduced, in some cases by orders of magnitude, with simple importance weighting schemes. This is particularly true for models consisting of cells of different sizes for which the run times would otherwise be determined by the size of the smallest cell. We present a new idea of extending importance weighting to scattered photons. This is found to be useful in calculations of scattered flux and could be important for three-dimensional models when observed intensity is needed only for one general direction of observations. Convergence of dust temperature calculations is studied for models with optical depths 10-10000. We examine acceleration methods where radiative interactio...
Multi-way Monte Carlo Method for Linear Systems
Wu, Tao; Gleich, David F.
2016-01-01
We study the Monte Carlo method for solving a linear system of the form $x = Hx + b$. A sufficient condition for the method to work is $\|H\| < 1$, which greatly limits the usability of this method. We improve this condition by proposing a new multi-way Markov random walk, which is a generalization of the standard Markov random walk. Under our new framework we prove that the necessary and sufficient condition for our method to work is the spectral radius $\rho(H^{+}) < 1$, which is a weake...
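The standard single-walk Ulam-von Neumann estimator that this work generalizes can be sketched as follows. The uniform transition probabilities, stopping probability, and the small test matrix are illustrative choices, not from the paper.

```python
import numpy as np

def mc_solve(H, b, i, n_walks=20_000, p_stop=0.3, seed=0):
    """Estimate component x[i] of x = H x + b by the Ulam-von Neumann
    random walk: transitions are chosen uniformly, the ratio H/P is folded
    into the walk weight, and the walk is absorbed with probability p_stop.
    Each visited state tallies weight * b[state], so the walk's expected
    score is the Neumann series b + Hb + H^2 b + ... evaluated at i."""
    rng = np.random.default_rng(seed)
    n, est = len(b), 0.0
    for _ in range(n_walks):
        state, weight, total = i, 1.0, b[i]
        while rng.random() > p_stop:                     # survive w.p. 1 - p_stop
            nxt = rng.integers(n)                        # uniform transition, P = 1/n
            weight *= H[state, nxt] * n / (1.0 - p_stop) # importance correction
            state = nxt
            total += weight * b[state]
        est += total
    return est / n_walks

H = np.array([[0.1, 0.2],
              [0.3, 0.1]])          # ||H|| < 1, so the classic condition holds
b = np.array([1.0, 2.0])
x_exact = np.linalg.solve(np.eye(2) - H, b)
x0 = mc_solve(H, b, 0)
```

When $\|H\| \ge 1$ the walk weights blow up and this estimator diverges, which is exactly the limitation the multi-way walk in the paper is designed to relax.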
Parallel Monte Carlo Synthetic Acceleration methods for discrete transport problems
Slattery, Stuart R.
This work researches and develops Monte Carlo Synthetic Acceleration (MCSA) methods as a new class of solution techniques for discrete neutron transport and fluid flow problems. Monte Carlo Synthetic Acceleration methods use a traditional Monte Carlo process to approximate the solution to the discrete problem as a means of accelerating traditional fixed-point methods. To apply these methods to neutronics and fluid flow and determine the feasibility of these methods on modern hardware, three complementary research and development exercises are performed. First, solutions to the SPN discretization of the linear Boltzmann neutron transport equation are obtained using MCSA with a difficult criticality calculation for a light water reactor fuel assembly used as the driving problem. To enable MCSA as a solution technique a group of modern preconditioning strategies are researched. MCSA when compared to conventional Krylov methods demonstrated improved iterative performance over GMRES by converging in fewer iterations when using the same preconditioning. Second, solutions to the compressible Navier-Stokes equations were obtained by developing the Forward-Automated Newton-MCSA (FANM) method for nonlinear systems based on Newton's method. Three difficult fluid benchmark problems in both convective and driven flow regimes were used to drive the research and development of the method. For 8 out of 12 benchmark cases, it was found that FANM had better iterative performance than the Newton-Krylov method by converging the nonlinear residual in fewer linear solver iterations with the same preconditioning. Third, a new domain decomposed algorithm to parallelize MCSA aimed at leveraging leadership-class computing facilities was developed by utilizing parallel strategies from the radiation transport community. The new algorithm utilizes the Multiple-Set Overlapping-Domain strategy in an attempt to reduce parallel overhead and add a natural element of replication to the algorithm. It
Monte Carlo methods and applications for the nuclear shell model
Dean, D. J.; White, J A
1998-01-01
The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sdpf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.
Efficient Monte Carlo methods for light transport in scattering media
Jarosz, Wojciech
2008-01-01
In this dissertation we focus on developing accurate and efficient Monte Carlo methods for synthesizing images containing general participating media. Participating media such as clouds, smoke, and fog are ubiquitous in the world and are responsible for many important visual phenomena which are of interest to computer graphics as well as related fields. When present, the medium participates in lighting interactions by scattering or absorbing photons as they travel through the scene. Though th...
Calculating atomic and molecular properties using variational Monte Carlo methods
International Nuclear Information System (INIS)
The authors compute a number of properties for the 1¹S, 2¹S, and 2³S states of helium as well as the ground states of H2 and H3+ using variational Monte Carlo. These are in good agreement with previous calculations (where available). Electric-response constants for the ground states of helium, H2 and H3+ are computed as derivatives of the total energy. The method used to calculate these quantities is discussed in detail.
Monte Carlo Methods and Applications for the Nuclear Shell Model
International Nuclear Information System (INIS)
The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems
Calculations of pair production by Monte Carlo methods
International Nuclear Information System (INIS)
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs
Calculations of pair production by Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Bottcher, C.; Strayer, M.R.
1991-01-01
We describe some of the technical design issues associated with the production of particle-antiparticle pairs in very large accelerators. To answer these questions requires extensive calculation of Feynman diagrams, in effect multi-dimensional integrals, which we evaluate by Monte Carlo methods on a variety of supercomputers. We present some portable algorithms for generating random numbers on vector and parallel architecture machines. 12 refs., 14 figs.
Higgs/ZZ searches in the 3 leptons + X channels
Kasmi, Azeddine
2009-05-01
The mechanism of spontaneously broken symmetries is one of the key problems in particle physics. Hence understanding the Higgs mechanism, by which the fundamental particles gain mass, is one of the primary goals of the LHC. Another area of great interest is ZZ diboson production. In the Standard Model (SM), the triple neutral gauge couplings (ZZZ and ZZγ) are absent, and ZZ searches provide a test for any gauge-coupling anomalies and hence possible new physics beyond the SM. Production of ZZ dibosons is an irreducible background for Higgs production with a 4-lepton decay mode (particularly at high mass). To maximize the sensitivity of Higgs searches, the 3l+X channels were considered as they have a higher acceptance than the 4l channel due to inefficiencies in lepton reconstruction. I pursued an exclusive search for the Higgs/ZZ signal in the 3l+X channel using clustering algorithms for finding unidentified electrons. The motivations for a cluster-based algorithm are: 1) no assumption of a cluster width is required; 2) the cluster-centric algorithm has greater η coverage than the standard electron identification methods; and 3) the cluster-based algorithm does not split the cluster in the crack regions. The background in the 3l+X channel is very challenging. In this work, I present a set of selection criteria along with a likelihood method for particle identification to achieve an acceptable signal-to-background ratio.
Comparison of deterministic and Monte Carlo methods in shielding design
International Nuclear Information System (INIS)
In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor, which is extrapolated from high to low energies or applied under unknown geometrical conditions, and can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods calculate low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with a slab shield have been defined, allowing comparison between the capability of both Monte Carlo and deterministic methods in day-to-day shielding calculations, using sensitivity analysis of significant parameters such as energy and geometrical conditions. (authors)
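The deterministic point-kernel approach the authors compare against MCNP can be sketched in a few lines. The attenuation coefficient and the linear build-up factor below are illustrative assumptions, not MicroShield's tabulated values.

```python
import numpy as np

def point_kernel_flux(S, mu, t, r):
    """Photon flux from an isotropic point source of strength S (photons/s)
    at distance r (cm) behind a slab of thickness t (cm).
    Uncollided flux: geometric 1/(4 pi r^2) spreading times exponential
    attenuation exp(-mu*t). A simple linear build-up factor B = 1 + mu*t
    crudely restores the scattered component that pure attenuation ignores;
    it is this kind of extrapolated correction that can fail at low energy."""
    uncollided = S * np.exp(-mu * t) / (4.0 * np.pi * r**2)
    B = 1.0 + mu * t                   # linear build-up approximation
    return uncollided, B * uncollided

# Illustrative numbers: 1e9 photons/s, mu = 0.06 /cm, 20 cm slab, 1 m away
unc, built = point_kernel_flux(S=1e9, mu=0.06, t=20.0, r=100.0)
```

A Monte Carlo code tracks the scattered photons explicitly instead of folding them into B, which is why the comparison is most interesting exactly where the build-up tabulations are extrapolated.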
A new lattice Monte Carlo method for simulating dielectric inhomogeneity
Duan, Xiaozheng; Wang, Zhen-Gang; Nakamura, Issei
We present a new lattice Monte Carlo method for simulating systems involving dielectric contrast between different species, by modifying an algorithm originally proposed by Maggs et al. The original algorithm is known to generate attractive interactions between particles whose dielectric constant differs from that of the solvent. Here we show that this attractive force is spurious, arising from an incorrectly biased statistical weight caused by the particle motion during the Monte Carlo moves. We propose a new, simple algorithm to resolve this erroneous sampling. We demonstrate the application of our algorithm by simulating an uncharged polymer in a solvent with a different dielectric constant. Further, we show that the electrostatic fields in ionic crystals obtained from our simulations with a relatively small simulation box correspond well with results from the analytical solution. Thus, our Monte Carlo method avoids the need for the Ewald summation used in conventional simulation methods for charged systems. This work was supported by the National Natural Science Foundation of China (21474112 and 21404103). We are grateful to the Computing Center of Jilin Province for essential support.
A new hybrid method--combined heat flux method with Monte-Carlo method to analyze thermal radiation
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2006-01-01
A new hybrid method, the Monte-Carlo-Heat-Flux (MCHF) method, is presented to analyze the radiative heat transfer of a participating medium in a three-dimensional rectangular enclosure by combining the Monte Carlo method with the heat flux method. Its accuracy and reliability were demonstrated by comparing the computational results with exact results from the classical "Zone Method".
Finite population-size effects in projection Monte Carlo methods
International Nuclear Information System (INIS)
Projection (Green's function and diffusion) Monte Carlo techniques sample a wave function by a stochastic iterative procedure. It is shown that these methods converge to a stationary distribution which is unexpectedly biased, i.e., differs from the exact ground state wave function, and that this bias occurs because of the introduction of a replication procedure. It is demonstrated that these biased Monte Carlo algorithms lead to a modified effective mass which is equal to the desired mass only in the limit of an infinite population of walkers. In general, the bias scales as 1/N for a population of walkers of size N. Various strategies to reduce this bias are considered. (authors). 29 refs., 3 figs
Monte Carlo methods in electron transport problems. Pt. 1
International Nuclear Information System (INIS)
The condensed-history Monte Carlo method for charged-particle transport is reviewed and discussed, starting from a general form of the Boltzmann equation (Part I). The physics of the electronic interactions, together with some pedagogic examples, will be introduced in Part II. The lecture is directed at potential users of the method, for whom it can be a useful introduction to the subject matter, and aims to establish the basis of the work on the computer code RECORD, which is at present in a development stage.
Improved criticality convergence via a modified Monte Carlo iteration method
Energy Technology Data Exchange (ETDEWEB)
Booth, Thomas E [Los Alamos National Laboratory; Gubernatis, James E [Los Alamos National Laboratory
2009-01-01
Nuclear criticality calculations with Monte Carlo codes are normally done using a power iteration method to obtain the dominant eigenfunction and eigenvalue. In the last few years it has been shown that the power iteration method can be modified to obtain the first two eigenfunctions. This modified power iteration method directly subtracts out the second eigenfunction and thus only powers out the third and higher eigenfunctions. The result is a convergence rate to the dominant eigenfunction of |k3|/k1 instead of |k2|/k1. One difficulty is that the second eigenfunction contains particles of both positive and negative weights that must sum somehow to maintain the second eigenfunction. Summing negative and positive weights can be done using point detector mechanics, but this sometimes can be quite slow. We show that an approximate cancellation scheme is sufficient to accelerate the convergence to the dominant eigenfunction. A second difficulty is that for some problems the Monte Carlo implementation of the modified power method has some stability problems. We also show that a simple method deals with this in an effective, but ad hoc manner.
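For contrast with the modified method described above, a standard power iteration, converging at the slower |k2|/k1 rate, looks like this. The small test matrix is an arbitrary illustrative choice, not a criticality operator.

```python
import numpy as np

def power_iteration(A, n_iter=200):
    """Standard power iteration: repeatedly apply A and renormalize.
    Converges to the dominant eigenpair at a rate set by |k2|/k1; the
    modified method in the abstract subtracts the second eigenfunction
    each cycle to reach the faster |k3|/k1 rate instead."""
    x = np.ones(A.shape[0])
    for _ in range(n_iter):
        y = A @ x
        k = np.linalg.norm(y)     # eigenvalue estimate (positive dominant mode)
        x = y / k                 # renormalize to avoid over/underflow
    return k, x

# Eigenvalues of this matrix are 5 and 2, so the error shrinks like (2/5)^n
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
k1, v1 = power_iteration(A)
```

In the Monte Carlo setting each application of A is a fission generation, so shaving the convergence ratio translates directly into fewer inactive cycles.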
Condensed history Monte Carlo methods for photon transport problems
International Nuclear Information System (INIS)
We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods - called Condensed History (CH) methods - have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models - one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes - can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions
Iridium 192 dosimetric study by Monte-Carlo method
International Nuclear Information System (INIS)
The Monte Carlo method was applied to the dosimetry of iridium-192 in water and in air; an iridium-platinum alloy seed, enveloped in a platinum can, is used as the source. The radioactive decay of this nuclide and the transport of the particles emitted from the seed source in the can and in the irradiated medium are simulated successively. The photon energy spectra outside the source, as well as dose distributions, are given. The Phi(d) function is calculated and our results are compared with various experimental values.
Research on Monte Carlo simulation method of industry CT system
International Nuclear Information System (INIS)
There are a series of radiation physics problems in the design and production of an industry CT system (ICTS), including limit quality index analysis and the effects of scattering, detector efficiency and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of them involve very low-probability events, so direct simulation is very difficult, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-important sampling (PFPAIS) is introduced on the basis of auto-important sampling. Then, on the basis of PFPAIS, a dedicated ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is shown to simulate the ICTS more exactly and effectively. Furthermore, the effects of various kinds of disturbances on the ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide research on the radiation physics problems in ICTS. (author)
The macro response Monte Carlo method for electron transport
Svatos, M M
1999-01-01
This thesis demonstrates the feasibility of basing dose calculations for electrons in radiotherapy on first-principles single scatter physics, in a calculation time that is comparable to or better than current electron Monte Carlo methods. The macro response Monte Carlo (MRMC) method achieves run times that have potential to be much faster than conventional electron transport methods such as condensed history. The problem is broken down into two separate transport calculations. The first stage is a local, single scatter calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position, and trajectory after leaving the local geometry, a small sphere or "kugel." A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25-8 MeV) and sizes (0.025 to 0.1 cm in radius). The second transport stage is a global calculation, in which steps that conform to the size of the kugels in the...
'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods
International Nuclear Information System (INIS)
The techniques for data processing, combined with the development of fast and more powerful computers, make the Monte Carlo method one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. This paper used two computational exposure models, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 industrial X-ray unit. The results obtained in the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The results show that the CDO can be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostic'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)
Application of Monte Carlo methods in tomotherapy and radiation biophysics
Hsiao, Ya-Yun
Helical tomotherapy is an attractive treatment for cancer therapy because highly conformal dose distributions can be achieved while the on-board megavoltage CT provides simultaneous images for accurate patient positioning. The convolution/superposition (C/S) dose calculation methods typically used for tomotherapy treatment planning may overestimate skin (superficial) doses by 3-13%. Although more accurate than C/S methods, Monte Carlo (MC) simulations are too slow for routine clinical treatment planning. However, the computational requirements of MC can be reduced by developing a source model for the parts of the accelerator that do not change from patient to patient. This source model then becomes the starting point for additional simulations of the penetration of radiation through the patient. In the first section of this dissertation, a source model for a helical tomotherapy unit is constructed by condensing information from MC simulations into a series of analytical formulas. The percentage depth dose and beam profiles computed using the source model agree within 2% of measurements for a wide range of field sizes, which suggests that the proposed source model provides an adequate representation of the tomotherapy head for dose calculations. Monte Carlo methods are a versatile technique for simulating many physical, chemical and biological processes. In the second major part of this thesis, a new methodology is developed to simulate the induction of DNA damage by low-energy photons. First, the PENELOPE Monte Carlo radiation transport code is used to estimate the spectrum of initial electrons produced by photons. The initial spectrum of electrons is then combined with DNA damage yields for monoenergetic electrons from the fast Monte Carlo damage simulation (MCDS) developed earlier by Semenenko and Stewart (Purdue University). Single- and double-strand break yields predicted by the proposed methodology are in good agreement (1%) with the results of published
A new DNB design method using the system moment method combined with Monte Carlo simulation
International Nuclear Information System (INIS)
A new statistical method of core thermal design for pressurized water reactors is presented. It not only quantifies the DNBR parameter uncertainty by the system moment method, but also combines the DNBR parameter uncertainty with the correlation uncertainty using a Monte Carlo technique. The randomizing function for the Monte Carlo simulation was expressed as a reciprocal multiplication of the DNBR parameter and correlation uncertainty factors. The results of comparisons with conventional methods show that the DNBR limit calculated by this method is in good agreement with that of the SCU method, with less computational effort, and it is considered applicable to current DNB design.
The macro response Monte Carlo method for electron transport
Energy Technology Data Exchange (ETDEWEB)
Svatos, M M
1998-09-01
The main goal of this thesis was to prove the feasibility of basing electron depth dose calculations in a phantom on first-principles single scatter physics, in an amount of time that is equal to or better than current electron Monte Carlo methods. The Macro Response Monte Carlo (MRMC) method achieves run times that are on the order of conventional electron transport methods such as condensed history, with the potential to be much faster. This is possible because MRMC is a local-to-global method, meaning the problem is broken down into two separate transport calculations. The first stage is a local, in this case single scatter, calculation, which generates probability distribution functions (PDFs) to describe the electron's energy, position and trajectory after leaving the local geometry, a small sphere or "kugel". A number of local kugel calculations were run for calcium and carbon, creating a library of kugel data sets over a range of incident energies (0.25 MeV - 8 MeV) and sizes (0.025 cm to 0.1 cm in radius). The second transport stage is a global calculation, where steps that conform to the size of the kugels in the library are taken through the global geometry. For each step, the appropriate PDFs from the MRMC library are sampled to determine the electron's new energy, position and trajectory. The electron is immediately advanced to the end of the step and then chooses another kugel to sample, which continues until transport is completed. The MRMC global stepping code was benchmarked as a series of subroutines inside the Peregrine Monte Carlo code. It was compared to Peregrine's class II condensed history electron transport package, EGS4, and MCNP for depth dose in simple phantoms having density inhomogeneities. Since the kugels completed in the library were of relatively small size, the zoning of the phantoms was scaled down from a clinical size, so that the energy deposition algorithms for spreading dose across 5-10 zones per kugel could
A CNS calculation line based on a Monte Carlo method
International Nuclear Information System (INIS)
Full text: The design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. Decisions taken in this sense affect not only the neutron flux in the source neighborhood, which can be evaluated by a standard empirical method, but also the neutron flux values at experimental positions far away from the neutron source. At long distances from the neutron source, very time-consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to obtain accurate figures. Standard and typical quantities such as average neutron flux, neutron current, angular flux, and luminosity are very difficult to evaluate at positions located several meters away from the neutron source. The Monte Carlo method is a unique and powerful tool for transporting neutrons, and its use in a bootstrap scheme appears to be an appropriate solution for this type of system. The proper use of MCNP as the main tool leads to a fast and reliable method for performing calculations in a relatively short time with low statistical errors. The design goal is to evaluate the performance of the neutron sources, their beam tubes and neutron guides at specific experimental locations in the reactor hall as well as in the neutron or experimental hall. In this work, the calculation methodology used to design Cold, Thermal and Hot Neutron Sources and their associated Neutron Beam Transport Systems, based on the use of the MCNP code, is presented. This work also presents some changes made to the cross section libraries in order to cope with cryogenic moderators such as liquid hydrogen and liquid deuterium. (author)
Hybrid Deterministic-Monte Carlo Methods for Neutral Particle Transport
International Nuclear Information System (INIS)
In the history of transport analysis methodology for nuclear systems, there have been two fundamentally different methods, i.e., deterministic and Monte Carlo (MC) methods. Even though these two methods have coexisted for the past 60 years and are complementary to each other, they have never been combined in the same computer code. Recently, however, researchers have started to consider combining these two methods in a single code to make use of the strengths of both algorithms and avoid their weaknesses. Although advanced modern deterministic techniques such as the method of characteristics (MOC) can solve a multigroup transport equation very accurately, there are still uncertainties in the MOC solutions due to inaccuracy of the multigroup cross section data caused by approximations in the process of multigroup cross section generation, i.e., equivalence theory, interference effects, etc. Conversely, the MC method can handle the resonance shielding effect accurately when sufficiently many neutron histories are used, but it requires a long calculation time. There has also been research on combining a multigroup transport solver and a continuous-energy transport solver in one code system depending on the energy range. This paper proposes a hybrid deterministic-MC method in which a multigroup MOC method is used for the high and low energy ranges and a continuous-energy MC method is used for the intermediate resonance energy range, for efficient and accurate transport analysis
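The proposed energy-range split can be stated in a few lines. This sketch is ours, with illustrative resonance-range bounds that are not taken from the paper:

```python
# Hypothetical energy-range dispatch for the hybrid scheme: multigroup MOC
# outside the resonance range, continuous-energy MC inside it. The bounds
# below are illustrative only, not values from the paper.
RESONANCE_RANGE_EV = (1.0, 1.0e4)

def solver_for(energy_ev):
    """Select the transport solver for a neutron of the given energy (eV)."""
    lo, hi = RESONANCE_RANGE_EV
    return "continuous-energy MC" if lo <= energy_ev <= hi else "multigroup MOC"
```

Thermal neutrons (for example 0.025 eV) and fast neutrons (MeV range) would be handled by the multigroup MOC solver, while neutrons in the intermediate resonance range would be handled by the continuous-energy MC solver.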
International Nuclear Information System (INIS)
White dwarf pulsators (ZZ Ceti variables) occur in the extension of the radial pulsation envelope ionization instability strip to the observed luminosities of 3 x 10-3 L sub solar, according to van Horn. Investigations were underway to see if the driving mechanisms of hydrogen and helium ionization can cause radial pulsations as they do for the Cepheids, the RR Lyrae variables, and the delta Scuti variables. Masses used in this study are 0.60 and 0.75 M sub solar for T/sub e/ between 10,000 K and 14,000 K, the observed range in T/sub e/. Helium-rich surface compositions such as Y = 0.78, Z = 0.02, as well as Y = 0.28, Z = 0.02, were used in spite of observations showing only hydrogen lines in the spectrum. The deep layers are pure carbon, and several transition compositions are included. The models show radial pulsation instabilities for many overtone modes at periods between about 0.3 and 3 seconds. The driving mechanism is mostly helium ionization at 40,000 and 150,000 K. The blue edge at about 14,000 K is probably due to the driving region becoming too shallow, and the red edge at 10,000 K is due to so much convection in the pulsation driving region that no radiative luminosity is available for modulation by the γ and κ effects. It is speculated that the very long observed periods (100 to 1000 sec) of ZZ Ceti variables are not due to nonradial pulsations, but are possibly aliases due to data undersampling. 4 references
The derivation of Particle Monte Carlo methods for plasma modeling from transport equations
Longo, Savino
2008-01-01
We analyze here, in some detail, the derivation of the Particle and Monte Carlo methods of plasma simulation, such as Particle in Cell (PIC), Monte Carlo (MC) and Particle in Cell / Monte Carlo (PIC/MC), from formal manipulation of transport equations.
Methods for variance reduction in Monte Carlo simulations
Bixler, Joel N.; Hokr, Brett H.; Winblad, Aidan; Elpers, Gabriel; Zollars, Byron; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, due to the probabilistic nature of these simulations, large numbers of photons are often required in order to generate relevant results. Here, we present methods for reduction of the variance of the dose distribution in a computational volume. The dose distribution is computed by tracing a large number of rays and tracking the absorption and scattering of the rays within the discrete voxels that comprise the volume. Variance reduction is shown here using quasi-random sampling, interaction forcing for weakly scattering media, and dose smoothing via bilateral filtering. These methods, along with the corresponding performance enhancements, are detailed here.
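Of the three techniques listed, interaction forcing is the easiest to show in isolation. The sketch below is our illustration, not the authors' code: every photon is forced to interact inside a weakly scattering slab by sampling the interaction depth from a truncated exponential, and the photon weight is scaled by the true interaction probability, which keeps the estimator unbiased.

```python
import math
import random

def forced_interaction(mu, slab_length, rng):
    """Force an interaction within [0, L]: sample the depth from the
    exponential distribution truncated to the slab, and return (depth, weight),
    where the weight is the true probability of interacting in the slab."""
    p_int = 1.0 - math.exp(-mu * slab_length)
    u = rng.random()
    depth = -math.log(1.0 - u * p_int) / mu   # inverse CDF of truncated exp
    return depth, p_int

rng = random.Random(1)
mu, slab_length = 0.01, 1.0    # weakly scattering: only ~1% interact per unit
samples = [forced_interaction(mu, slab_length, rng) for _ in range(10000)]
```

Without forcing, roughly 99% of histories would pass through without scoring; with forcing, every history contributes, at a constant weight of about 0.00995.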
Radiative heat transfer by the Monte Carlo method
Hartnett †, James P; Cho, Young I; Greene, George A; Taniguchi, Hiroshi; Yang, Wen-Jei; Kudo, Kazuhiko
1995-01-01
This book presents the basic principles and applications of radiative heat transfer used in energy, space, and geo-environmental engineering, and can serve as a reference book for engineers and scientists in research and development. A PC disk containing software for numerical analyses by the Monte Carlo method is included to provide hands-on practice in analyzing actual radiative heat transfer problems. Advances in Heat Transfer is designed to fill the information gap between regularly scheduled journals and university-level textbooks by providing in-depth review articles over a broader scope than journals or texts usually allow. Key features: offers solution methods for the integro-differential formulation to help avoid difficulties; includes a computer disk for numerical analyses by PC; discusses energy absorption by gas and scattering effects by particles; treats non-gray radiative gases; provides example problems for direct applications in energy, space, and geo-environmental engineering.
Modelling a gamma irradiation process using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2011-07-01
In gamma irradiation services, the evaluation of absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. Through such models, the prediction of the dose delivered to a specific product, irradiated in a specific position and during a certain period of time, becomes possible, provided the models are validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to simulate product irradiations, and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)
Monte Carlo Methods for Rough Free Energy Landscapes: Population Annealing and Parallel Tempering
Machta, Jon; Ellis, Richard S.
2011-01-01
Parallel tempering and population annealing are both effective methods for simulating equilibrium systems with rough free energy landscapes. Parallel tempering, also known as replica exchange Monte Carlo, is a Markov chain Monte Carlo method while population annealing is a sequential Monte Carlo method. Both methods overcome the exponential slowing associated with high free energy barriers. The convergence properties and efficiency of the two methods are compared. For large systems, populatio...
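As a concrete illustration of replica exchange, here is a minimal parallel-tempering sketch (our own, on a 1-D double-well potential, not from the paper): each replica performs local Metropolis updates at its own inverse temperature, and neighbouring replicas periodically attempt a state swap with the standard exchange acceptance probability.

```python
import math
import random

def energy(x):
    return (x * x - 1.0) ** 2            # double-well potential, minima at x = +/-1

def parallel_tempering(betas, n_sweeps=3000, seed=2):
    """Replica-exchange MC: local Metropolis moves plus neighbour swaps
    accepted with probability min(1, exp((beta_i - beta_j)(E_i - E_j)))."""
    rng = random.Random(seed)
    xs = [0.0] * len(betas)              # one replica per inverse temperature
    for _ in range(n_sweeps):
        for i, beta in enumerate(betas): # local Metropolis update per replica
            prop = xs[i] + rng.uniform(-0.5, 0.5)
            d_e = energy(prop) - energy(xs[i])
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                xs[i] = prop
        for i in range(len(betas) - 1):  # attempt swaps between neighbours
            d = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
            if d >= 0 or rng.random() < math.exp(d):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

replicas = parallel_tempering([4.0, 2.0, 1.0, 0.5])
```

The hot replicas cross the free energy barrier at x = 0 easily and feed decorrelated states down to the cold replicas, which is what overcomes the exponential slowing described above.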
Reactor physics analysis method based on Monte Carlo homogenization
International Nuclear Information System (INIS)
Background: Many new concepts of nuclear energy systems with complicated geometric structures and diverse energy spectra have been put forward to meet the future demand of the nuclear energy market. The traditional deterministic neutronics analysis method has been challenged in two aspects: one is the capability of generic geometry processing; the other is the multi-spectrum applicability of the multi-group cross section libraries. The Monte Carlo (MC) method excels in its suitability for arbitrary geometry and spectrum, but faces the problems of long computation time and slow convergence. Purpose: This work aims to find a novel scheme that takes advantage of both the deterministic core analysis method and the MC method. Methods: A new two-step core analysis scheme is proposed to combine the geometry modeling capability and continuous-energy cross section libraries of the MC method with the higher computational efficiency of the deterministic method. First, MC simulations are performed for each assembly, and the assembly-homogenized multi-group cross sections are tallied at the same time. Then, the core diffusion calculations can be done with these multi-group cross sections. Results: The new scheme can achieve high efficiency while maintaining acceptable precision. Conclusion: The new scheme can be used as an effective tool for the design and analysis of innovative nuclear energy systems, which has been verified by numerical tests. (authors)
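The first step of the scheme, tallying homogenized multigroup cross sections, amounts to a flux-weighted collapse. Here is a minimal sketch with made-up pointwise data (the real scheme tallies these quantities during the assembly MC run rather than collapsing stored tables):

```python
# Hypothetical pointwise data: energy grid (eV), cross section (barns), flux weight.
energies = [0.1, 1.0, 10.0, 100.0, 1e3, 1e4]
sigma    = [30.0, 10.0, 5.0, 80.0, 4.0, 3.0]   # illustrative resonance bump at 100 eV
flux     = [5.0, 4.0, 3.0, 2.0, 1.5, 1.0]

group_bounds = [0.1, 10.0, 1e4]   # two groups: [0.1, 10) eV and [10, 1e4] eV

def collapse(energies, sigma, flux, bounds):
    """Flux-weighted multigroup collapse: sigma_g = sum(sigma*phi) / sum(phi)."""
    groups = []
    for lo, hi in zip(bounds, bounds[1:]):
        num = den = 0.0
        for e, s, f in zip(energies, sigma, flux):
            if lo <= e < hi or (e == hi == bounds[-1]):  # close the top group
                num += s * f
                den += f
        groups.append(num / den)
    return groups

sigma_g = collapse(energies, sigma, flux, group_bounds)
```

The collapsed group constants preserve the flux-weighted reaction rate within each group, which is exactly what the subsequent core diffusion calculation needs.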
Interacting multiagent systems kinetic equations and Monte Carlo methods
Pareschi, Lorenzo
2014-01-01
The description of emerging collective phenomena and self-organization in systems composed of large numbers of individuals has gained increasing interest from various research communities in biology, ecology, robotics and control theory, as well as sociology and economics. Applied mathematics is concerned with the construction, analysis and interpretation of mathematical models that can shed light on significant problems of the natural sciences as well as our daily lives. To this set of problems belongs the description of the collective behaviour of complex systems composed of a large enough number of individuals. Examples of such systems are interacting agents in a financial market, potential voters during political elections, or groups of animals with a tendency to flock or herd. Among other possible approaches, this book provides a step-by-step introduction to mathematical modelling based on a mesoscopic description and the construction of efficient simulation algorithms by Monte Carlo methods. The ar...
International Nuclear Information System (INIS)
We discuss progress in quasi-Monte Carlo methods for the numerical calculation of integrals or expected values and explain why these methods are more efficient than classic Monte Carlo methods. Quasi-Monte Carlo methods are found to be particularly efficient if the integrands have a low effective dimension. We therefore also discuss the concept of effective dimension and show, using the example of a stochastic optimization model from the energy industry, that such models can possess a low effective dimension. Modern quasi-Monte Carlo methods are therefore very promising for such models.
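A quick way to see the quasi-Monte Carlo advantage is to integrate a smooth function with a low-discrepancy Halton sequence and with plain pseudorandom sampling. The example below is ours, not from the paper:

```python
import math
import random

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in the given base;
    pairing bases 2 and 3 gives a 2-D Halton point set."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def integrand(x, y):
    return math.exp(-x * y)     # smooth 2-D test integrand on the unit square

n = 4096
# Quasi-Monte Carlo estimate using 2-D Halton points.
qmc = sum(integrand(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n
# Plain Monte Carlo estimate with the same number of points.
rng = random.Random(3)
mc = sum(integrand(rng.random(), rng.random()) for _ in range(n)) / n
```

The exact value of the integral is about 0.79660; the low-discrepancy points typically land much closer to it than the pseudorandom ones at the same sample count, reflecting the near-O(1/n) error of QMC versus the O(1/sqrt(n)) error of plain MC for smooth, low-dimensional integrands.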
Markov chain Monte Carlo methods: an introductory example
Klauenberg, Katy; Elster, Clemens
2016-02-01
When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may currently be hindered by the difficulty of assessing the convergence of MCMC output and thus of assuring the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
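The "few lines of software code" mentioned above look roughly like this random-walk Metropolis-Hastings sketch (our illustration, targeting a standard normal rather than the metrology posterior of the paper):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=4):
    """Random-walk Metropolis-Hastings: propose x' = x + U(-step, step) and
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_samples):
        prop = x + rng.uniform(-step, step)
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)          # the current state is recorded either way
    return chain

# Target: standard normal density up to a constant (log-density -x^2/2).
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

Because only the ratio of target densities enters the acceptance test, the normalizing constant of the posterior is never needed, which is what makes the algorithm so convenient for Bayesian uncertainty evaluation.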
Search for $ZW/ZZ \\to \\ell^+ \\ell^-$ + Jets Production in $p\\bar{p}$ Collisions at CDF
Energy Technology Data Exchange (ETDEWEB)
Ketchum, Wesley Robert [Univ. of Chicago, IL (United States)
2012-12-01
The Standard Model of particle physics describes weak interactions mediated by massive gauge bosons that interact with each other in well-defined ways. Observations of the production and decay of WW, WZ, and ZZ boson pairs are an opportunity to check that these self-interactions agree with the Standard Model predictions. Furthermore, final states that include quarks are very similar to the most prominent final state of Higgs bosons produced in association with a W or Z boson. Diboson production in which WW is a significant component has been observed at the Tevatron collider in semi-hadronic decay modes. We present a search for ZW and ZZ production in a final state containing two charged leptons and two jets using 8.9 fb^{-1} of data recorded with the CDF detector at the Tevatron. We select events by identifying those that contain two charged leptons, two hadronic jets, and low missing transverse energy (E_{T}). We increase our acceptance by using a wide suite of high-p_{T} lepton triggers and by relaxing many lepton identification requirements. We develop a new method for calculating corrections to jet energies based on whether the originating parton was a quark or a gluon, to improve the agreement between data and the Monte Carlo simulations used to model our diboson signal and dominant backgrounds. We also make use of neural-network-based discriminants that are trained to pick out jets originating from b quarks and light-flavor quarks, thereby increasing our sensitivity to Z → b$\bar{b}$ and W/Z → q$\bar{q}'$ decays, respectively. The number of signal events is extracted through a simultaneous fit to the dijet mass spectrum in three channels: a heavy-flavor tagged channel, a light-flavor tagged channel, and an untagged channel. We measure σ_{ZW/ZZ} = 2.5^{+2.0}_{-1.0} pb, which is consistent with the SM cross section of 5.1 pb. We establish an upper limit on the cross section of σ_{ZW/ZZ} < 6.1 pb
On polarization parameters of spin-$1$ particles and anomalous couplings in $e^+e^-\\to ZZ/Z\\gamma$
Rahaman, Rafiqul
2016-01-01
We propose a complete set of asymmetries to construct the polarization density matrix for a massive spin-$1$ particle at colliders. We study their sensitivity to the anomalous trilinear gauge couplings of neutral gauge bosons in $e^+e^-\to ZZ/Z\gamma$ processes with unpolarized initial beams. We use these polarization asymmetries, along with the cross-section, to obtain simultaneous limits on all the anomalous couplings using the Markov chain Monte Carlo (MCMC) method. For an $e^+e^-$ collider running at $500$ GeV center-of-mass energy and $100$ fb$^{-1}$ of integrated luminosity, the simultaneous limits on the anomalous couplings are of the order of $1$-$3\times 10^{-3}$.
A Comparison of Advanced Monte Carlo Methods for Open Systems: CFCMC vs CBMC
A. Torres-Knoop; S.P. Balaji; T.J.H. Vlugt; D. Dubbeldam
2014-01-01
Two state-of-the-art simulation methods for computing adsorption properties in porous materials like zeolites and metal-organic frameworks are compared: the configurational bias Monte Carlo (CBMC) method and the recently proposed continuous fractional component Monte Carlo (CFCMC) method. We show th
Formulation and Application of Quantum Monte Carlo Method to Fractional Quantum Hall Systems
Suzuki, Sei; Nakajima, Tatsuya
2003-01-01
Quantum Monte Carlo method is applied to fractional quantum Hall systems. The use of the linear programming method enables us to avoid the negative-sign problem in the Quantum Monte Carlo calculations. The formulation of this method and the technique for avoiding the sign problem are described. Some numerical results on static physical quantities are also reported.
LISA data analysis using Markov chain Monte Carlo methods
International Nuclear Information System (INIS)
The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions
Seriation in paleontological data using markov chain Monte Carlo methods.
Directory of Open Access Journals (Sweden)
Kai Puolamäki
2006-02-01
Given a collection of fossil sites with data about the taxa that occur at each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.
Limit theorems for weighted samples with applications to sequential Monte Carlo methods
Douc, R.; Moulines, E.
2008-01-01
In the last decade, sequential Monte Carlo methods (SMC) emerged as a key tool in computational statistics [see, e.g., Sequential Monte Carlo Methods in Practice (2001) Springer, New York, Monte Carlo Strategies in Scientific Computing (2001) Springer, New York, Complex Stochastic Systems (2001) 109–173]. These algorithms approximate a sequence of distributions by a sequence of weighted empirical measures associated to a weighted population of particles, which are generated recursively. ...
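One recursion step of such a weighted particle approximation can be sketched as follows (our illustration: a single reweight-resample step for a Gaussian prior and Gaussian likelihood, not code from the paper):

```python
import math
import random

def resample(particles, weights, rng):
    """Multinomial resampling: draw N particles with probability proportional
    to their weights, returning an equally weighted population."""
    total = sum(weights)
    draws = sorted(rng.random() * total for _ in range(len(particles)))
    out, cum, i = [], 0.0, 0
    for p, w in zip(particles, weights):
        cum += w
        while i < len(draws) and draws[i] <= cum:
            out.append(p)
            i += 1
    while i < len(draws):          # guard against floating-point round-off
        out.append(particles[-1])
        i += 1
    return out

rng = random.Random(5)
# One SMC step: sample from the prior, reweight by a likelihood, resample.
particles = [rng.gauss(0.0, 1.0) for _ in range(1000)]   # prior N(0, 1)
obs = 0.5
weights = [math.exp(-0.5 * (obs - p) ** 2) for p in particles]  # N(obs | p, 1)
particles = resample(particles, weights, rng)
post_mean = sum(particles) / len(particles)
```

For this conjugate pair the exact posterior is N(0.25, 0.5), so the resampled population's mean should sit near 0.25; iterating propagate-reweight-resample steps over a sequence of distributions is precisely the recursion the abstract describes.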
International Nuclear Information System (INIS)
This work introduces a new approach for calculating the sensitivity of generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The GEneralized Adjoint Responses in Monte Carlo (GEAR-MC) method has enabled the calculation of high resolution sensitivity coefficients for multiple, generalized neutronic responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here and proof of principle is demonstrated by calculating sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications. (author)
Diffusion Monte Carlo methods applied to Hamaker Constant evaluations
Hongo, Kenta
2016-01-01
We applied diffusion Monte Carlo (DMC) methods to evaluate Hamaker constants of liquids for wettability studies, using a liquid molecule of practical size, Si$_6$H$_{12}$ (cyclohexasilane). The evaluated constant would be justified in the sense that it lies within the expected dependence on molecular weights among similar kinds of molecules, though no reference experimental values are available for this molecule. Comparing the DMC with vdW-DFT evaluations, we clarified that some of the vdW-DFT evaluations could not describe the correct asymptotic decays and hence Hamaker constants, even though they gave reasonable binding lengths and energies, and vice versa for the rest of the vdW-DFTs. We also found an advantage of DMC for this practical purpose over CCSD(T), because of the large amount of BSSE/CBS corrections required for the latter under the limitation of basis set size applicable to a liquid molecule of practical size, while the former is free from such limitations to the extent that only the nodal structure of...
Dose calculation of 6 MV Truebeam using Monte Carlo method
International Nuclear Information System (INIS)
The purpose of this work is to simulate the dosimetric characteristics of a 6 MV Varian Truebeam linac using the Monte Carlo method and to investigate the usability of the phase space file and the accuracy of the simulation. With the phase space file at the linac window supplied by Varian as a source, the patient-dependent part was simulated. Dose distributions in a water phantom with a 10 cm × 10 cm field were calculated and compared with measured data for validation. An evident time reduction was obtained, from the 4-5 h that a whole simulation costs on the same computer to around 48 minutes. Good agreement between simulations and measurements in water was observed. Dose differences are less than 3% for depth doses in the build-up region and for dose profiles inside the 80% field size, and the agreement in the penumbra is also good. This demonstrates that simulation using the existing phase space file as the EGSnrc source is efficient. The dose differences between calculated and measured data meet the requirements for dose calculation. (authors)
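The 3% depth-dose criterion used above amounts to a pointwise percent-difference check between the simulated and measured curves. A minimal sketch with hypothetical normalized depth-dose values (not the measured data of the paper):

```python
# Hypothetical depth-dose values (normalized to d_max) for simulation vs measurement.
depths_cm = [0.5, 1.0, 1.5, 2.0, 3.0, 5.0, 10.0]
simulated = [0.81, 0.95, 1.00, 0.98, 0.91, 0.79, 0.55]
measured  = [0.80, 0.94, 1.00, 0.97, 0.90, 0.78, 0.56]

def max_percent_diff(a, b):
    """Largest local dose difference, in percent of the measured value."""
    return max(abs(x - y) / y * 100.0 for x, y in zip(a, b))

worst = max_percent_diff(simulated, measured)
passes = worst < 3.0     # the 3% depth-dose criterion
```

Clinical validation protocols typically complement this local-difference check with distance-to-agreement or gamma-index tests in high-gradient regions such as the penumbra.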
Medical Imaging Image Quality Assessment with Monte Carlo Methods
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Kalyvas, N. I.; Martini, Niki; Koukou, Vaia; Valais, I. G.; Kandarakis, I. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on aluminum (Al) foil substrates, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed by using various subsets (3 to 21) and iterations (1 to 20), as well as by using various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration, whereas they remain almost constant thereafter. MTF improves by using lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations.
Gas Swing Options: Introduction and Pricing using Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Václavík Tomáš
2016-02-01
Motivated by the changing nature of the natural gas industry in the European Union, driven by the liberalisation process, we focus on the introduction and pricing of gas swing options. These options are embedded in typical gas sales agreements in the form of offtake flexibility concerning volume and time. The gas swing option is actually a set of several American puts on a spread between the prices of two or more energy commodities. This fact, together with the fact that the energy markets are fundamentally different from traditional financial security markets, is important for our choice of valuation technique. Due to the specific features of the energy markets, the existing analytic approximations for spread option pricing are hardly applicable to our framework. That is why we employ Monte Carlo methods to model the spot price dynamics of the underlying commodities. The price of an arbitrarily chosen gas swing option is then computed in accordance with the concept of risk-neutral expectations. Finally, our result is compared with the real payoff from the option realised at the time of the option execution and the maximum ex-post payoff that the buyer could have generated had he known the future, discounted to the original time of the option pricing.
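The building block of the valuation, a risk-neutral Monte Carlo expectation of a discounted payoff, can be sketched for a single European put under geometric Brownian motion. This is a deliberate simplification: the swing option itself involves American exercise on a spread, and the paper's spot-price model is richer than GBM.

```python
import math
import random

def mc_european_put(s0, strike, r, sigma, t_mat, n_paths=100000, seed=6):
    """Risk-neutral Monte Carlo price of a European put under GBM:
    average the discounted payoff over simulated terminal prices."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t_mat
    vol = sigma * math.sqrt(t_mat)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(strike - s_t, 0.0)
    return math.exp(-r * t_mat) * payoff_sum / n_paths

# Illustrative parameters, not calibrated to any gas market.
price = mc_european_put(s0=20.0, strike=22.0, r=0.02, sigma=0.3, t_mat=1.0)
```

Pricing the actual swing contract layers onto this the early-exercise decision (e.g., via least-squares Monte Carlo) and the volume constraints of the gas sales agreement.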
Monte Carlo method with complex-valued weights for frequency domain analyses of neutron noise
International Nuclear Information System (INIS)
Highlights: • The transport equation of the neutron noise is solved with the Monte Carlo method. • A new Monte Carlo algorithm where complex-valued weights are treated is developed. • The Monte Carlo algorithm is verified by comparing with analytical solutions. • The results with the Monte Carlo method are compared with the diffusion theory. - Abstract: A Monte Carlo algorithm to solve the transport equation of the neutron noise in the frequency domain has been developed to extend the conventional diffusion theory of the neutron noise to transport theory. In this paper, the neutron noise is defined as the stationary fluctuation of the neutron flux around its mean value, and is induced by perturbations of the macroscopic cross sections. Since the transport equation of the neutron noise is a complex equation, a Monte Carlo technique for treating complex-valued weights, recently proposed for neutron leakage-corrected calculations, has been introduced to solve the complex equation. To cancel the positive and negative values of complex-valued weights, an algorithm similar to the power iteration method has been implemented. The newly developed Monte Carlo algorithm is benchmarked against analytical solutions in an infinite homogeneous medium. The neutron noise spatial distributions have been obtained both with the newly developed Monte Carlo method and with the conventional diffusion method for an infinitely long homogeneous cylinder. The results with the Monte Carlo method agree well with those of the diffusion method. However, near the noise source induced by a high-frequency perturbation, significant differences are found between the diffusion method and the Monte Carlo method. The newly developed Monte Carlo algorithm is expected to contribute to the improvement of calculation accuracy of the neutron noise
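The core idea of carrying complex-valued weights through a Monte Carlo estimate can be shown on a toy problem (our illustration, not the noise-transport algorithm itself): each sample contributes a complex weight, and the sample mean estimates a complex-valued integral.

```python
import cmath
import random

def mc_complex(omega, n=200000, seed=7):
    """Monte Carlo with complex-valued weights: each sampled x in [0, 1]
    carries the complex weight exp(i*omega*x); the sample mean estimates
    the integral of exp(i*omega*x) over [0, 1]."""
    rng = random.Random(seed)
    total = 0j
    for _ in range(n):
        total += cmath.exp(1j * omega * rng.random())
    return total / n

omega = 3.0
estimate = mc_complex(omega)
exact = (cmath.exp(1j * omega) - 1.0) / (1j * omega)   # closed-form integral
```

Real and imaginary parts are accumulated by the same histories, so partial cancellation between positive and negative weight contributions occurs naturally, which is the effect the power-iteration-like cancellation algorithm above manages in the transport setting.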
Quantum Monte Carlo methods and lithium cluster properties. [Atomic clusters
Energy Technology Data Exchange (ETDEWEB)
Owen, R.K.
1990-12-01
Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, and are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom was computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.
Latent uncertainties of the precalculated track Monte Carlo method
International Nuclear Information System (INIS)
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank has been missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated with the corresponding general-purpose MC codes under the same conditions. A latent uncertainty metric was defined, and the analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a “ground truth” benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the
Development of 3d reactor burnup code based on Monte Carlo method and exponential Euler method
International Nuclear Information System (INIS)
Burnup analysis plays a key role in fuel breeding, transmutation and post-processing in nuclear reactors. Burnup codes based on one-dimensional and two-dimensional transport methods have difficulty meeting accuracy requirements. A three-dimensional burnup analysis code based on the Monte Carlo method and the exponential Euler method has been developed. The coupled code combines the advantages of the Monte Carlo method in neutron transport calculations for complex geometries with those of FISPACT in fast and precise inventory calculations, while the resonance self-shielding effect in the inventory calculation can also be considered. The IAEA benchmark test problem was adopted for code validation. Good agreement was shown in the comparison with other participants' results. (authors)
Applications of Monte Carlo methods in nuclear science and engineering
International Nuclear Information System (INIS)
With the advent of inexpensive computing power over the past two decades and the development of variance reduction techniques, applications of Monte Carlo radiation transport techniques have proliferated dramatically. The motivation for variance reduction techniques is computational efficiency. Typical variance reduction techniques worth mentioning here are: importance sampling, implicit capture, energy and angular biasing, Russian roulette, the exponential transform, the next-event estimator, the weight window generator, the range rejection technique (only for charged particles), etc. Applications of Monte Carlo in radiation transport include nuclear safeguards, accelerator applications, homeland security, nuclear criticality, health physics, radiological safety, radiography, radiotherapy physics, radiation standards, nuclear medicine (dosimetry and imaging), etc. In health care, Monte Carlo particle transport techniques offer exciting tools for radiotherapy research (cancer treatments involving photons, electrons, neutrons, protons, pions and other heavy ions), where they play an increasingly important role. Research and applications of Monte Carlo techniques in radiotherapy span a very wide range, from fundamental studies of cross sections and development of particle transport algorithms to clinical evaluation of treatment plans for a variety of radiotherapy modalities. A recent development is voxel-based Monte Carlo radiotherapy treatment planning involving external electron beams and patient data in the form of DICOM (Digital Imaging and Communications in Medicine) images. Articles relevant to the INIS are indexed separately
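Two of the variance reduction techniques named above, implicit capture and Russian roulette, can be sketched in a toy 1-D rod transmission problem. The cross sections, scattering ratio, and roulette cutoff below are illustrative assumptions; production codes such as MCNP implement far more elaborate versions.

```python
import math, random

SIG_T, C, L = 1.0, 0.7, 3.0   # total cross section, scattering ratio, rod length (toy values)

def history(rng, implicit=False, w_cut=0.25):
    """One particle history in a 1-D rod; returns the transmitted weight."""
    x, mu, w = 0.0, 1.0, 1.0
    while True:
        x += mu * (-math.log(rng.random()) / SIG_T)   # sample a free flight
        if x >= L:
            return w                  # transmitted: score the carried weight
        if x < 0.0:
            return 0.0                # leaked back out of the entrance face
        if implicit:
            w *= C                    # implicit capture: absorb in expectation only
            if w < w_cut:             # Russian roulette on low-weight histories
                if rng.random() < 0.5:
                    return 0.0
                w *= 2.0              # survivors are boosted, keeping the estimator unbiased
        elif rng.random() > C:
            return 0.0                # analog absorption terminates the history
        mu = 1.0 if rng.random() < 0.5 else -1.0   # isotropic scattering in the rod model

def estimate(n, implicit, seed=7):
    rng = random.Random(seed)
    scores = [history(rng, implicit) for _ in range(n)]
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, var
```

Both estimators agree within statistics; for transmission-type tallies the non-analog version typically, though not provably in this sketch, yields lower variance per history.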
Method of tallying adjoint fluence and calculating kinetics parameters in Monte Carlo codes
International Nuclear Information System (INIS)
A method of using the iterated fission probability to estimate the adjoint fluence during particle simulation, and of using it as the weighting function to calculate the kinetics parameters βeff and Λ in Monte Carlo codes, is introduced in this paper. Implementations of this method in the continuous-energy Monte Carlo code MCNP and the multi-group Monte Carlo code MCMG are both elaborated. Verification results show that, with negligible additional computing cost, the adjoint fluence tallied by MCMG matches well with the result computed by ANISN, and the kinetics parameters calculated by MCNP agree very well with benchmarks. This method is proved to be reliable, and the function of calculating kinetics parameters in Monte Carlo codes is carried out effectively, which could be the basis for the use of Monte Carlo codes in the analysis of the transient behavior of nuclear reactors. (authors)
Simulating Compton scattering using Monte Carlo method: COSMOC library
Czech Academy of Sciences Publication Activity Database
Adámek, K.; Bursa, Michal
Opava: Silesian University, 2014 - (Stuchlík, Z.), s. 1-10. (Publications of the Institute of Physics. 7). ISBN 9788075101266. ISSN 2336-5668. [RAGtime /14.-16./. Opava (CZ), 18.09.2012-22.09.2012] Institutional support: RVO:67985815 Keywords: Monte Carlo * Compton scattering * C++ Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
Analysis of some splitting and roulette algorithms in shield calculations by the Monte Carlo method
International Nuclear Information System (INIS)
Different schemes of using the splitting and roulette methods in the calculation of radiation transport in nuclear facility shields by the Monte Carlo method are considered. The efficiency of the considered schemes is estimated through example test calculations
Review of quantum Monte Carlo methods and results for Coulombic systems
Energy Technology Data Exchange (ETDEWEB)
Ceperley, D.
1983-01-27
The various Monte Carlo methods for calculating ground state energies are briefly reviewed. Then a summary of the charged systems that have been studied with Monte Carlo is given. These include the electron gas, small molecules, a metal slab and many-body hydrogen.
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Non-Standard ZZ Production with Leptonic Decays at the Large Hadron Collider
Sun, Hao
2012-04-01
The prospects of anomalous ZZγ and ZZZ triple gauge boson couplings are investigated at the Large Hadron Collider (LHC) through an excess of events in ZZ diboson production. Two such channels are selected and the tree-level results including leptonic final states are discussed: ZZ → l1^-l1^+l2^-l2^+ and ZZ → l^-l^+νν̄ (l, l1,2 = e, μ). The results in the full finite-width method are compared in detail with those of the narrow width approximation (NWA) method. Besides the Z boson transverse momentum distributions, the azimuthal angle between the fermions from the Z boson decays, ΔΦ, and their separations in the pseudorapidity-azimuthal angle plane, ΔR, as well as the sensitivity to anomalous couplings, are displayed at the 14 TeV LHC.
A Residual Monte Carlo Method for Spatially Discrete, Angularly Continuous Radiation Transport
International Nuclear Information System (INIS)
Residual Monte Carlo provides exponential convergence of statistical error with respect to the number of particle histories. In the past, residual Monte Carlo has been applied to a variety of angularly discrete radiation-transport problems. Here, we apply residual Monte Carlo to spatially discrete, angularly continuous transport. By maintaining angular continuity, our method avoids the deficiencies of angular discretizations, such as ray effects. For planar geometry and step differencing, we use the corresponding integral transport equation to calculate an angularly independent residual from the scalar flux in each stage of residual Monte Carlo. We then demonstrate that the resulting residual Monte Carlo method does indeed converge exponentially to within machine precision of the exact step differenced solution.
Gallagher, Kerry; Sambridge, Malcolm; Drijkoningen, Guy
In providing a method for solving non-linear optimization problems, Monte Carlo techniques avoid the need for linearization but, in practice, are often prohibitive because of the large number of models that must be considered. A new class of methods known as Genetic Algorithms has recently been devised in the field of Artificial Intelligence. We outline the basic concept of genetic algorithms and discuss three examples. We show that, in locating an optimal model, the new technique is far superior in performance to Monte Carlo techniques in all cases considered. However, Monte Carlo integration is still regarded as an effective method for the subsequent model appraisal.
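The contrast drawn above can be sketched with equal evaluation budgets. The fitness here is a toy "onemax" bit-count, an assumption standing in for the real geophysical misfit; the GA uses truncation selection, one-point crossover, and bit-flip mutation.

```python
import random

N_BITS = 60   # toy model space: bit-strings; fitness is a stand-in, not a seismic misfit

def fitness(bits):
    return sum(bits)

def random_search(n_evals, rng):
    """Plain Monte Carlo search: best of n_evals independent random models."""
    return max(fitness([rng.randint(0, 1) for _ in range(N_BITS)]) for _ in range(n_evals))

def genetic_search(n_gens, pop_size, rng, p_mut=0.02):
    """Minimal genetic algorithm: truncation selection, crossover, mutation."""
    pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_BITS)             # one-point crossover
            children.append([bit ^ (rng.random() < p_mut)   # bit-flip mutation
                             for bit in a[:cut] + b[cut:]])
        pop = children
    return max(fitness(ind) for ind in pop)

# Equal budgets: 50 generations x 40 individuals vs 2000 random models.
ga_best = genetic_search(50, 40, random.Random(1))
mc_best = random_search(2000, random.Random(2))
```

On this toy problem the GA reliably ends far closer to the optimum of 60 than the best of the same number of purely random models, mirroring the paper's conclusion.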
Gamma ray energy loss spectra simulation in NaI detectors with the Monte Carlo method
International Nuclear Information System (INIS)
With the aim of studying and applying the Monte Carlo method, a computer code was developed to calculate the pulse height spectra and detector efficiencies for gamma rays incident on NaI(Tl) crystals. The basic detection processes in NaI(Tl) detectors are given together with an outline of Monte Carlo methods and a general review of relevant published works. A detailed description of the application of Monte Carlo methods to γ-ray detection in NaI(Tl) detectors is given. Comparisons are made with published calculated and experimental data. (Author)
Library Design in Combinatorial Chemistry by Monte Carlo Methods
Falcioni, Marco; Deem, Michael W.
2000-01-01
Strategies for searching the space of variables in combinatorial chemistry experiments are presented, and a random energy model of combinatorial chemistry experiments is introduced. The search strategies, derived by analogy with the computer modeling technique of Monte Carlo, effectively search the variable space even in combinatorial chemistry experiments of modest size. Efficient implementations of the library design and redesign strategies are feasible with current experimental capabilities.
Quasi-Monte Carlo methods for lattice systems. A first look
International Nuclear Information System (INIS)
We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
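The N^(-1/2) versus N^(-1) contrast can be demonstrated on a trivial 1-D integral. This sketch uses the base-2 van der Corput sequence, the simplest low-discrepancy sequence, rather than the lattice-theory setup of the paper; the integrand is an illustrative assumption.

```python
import math, random

def van_der_corput(i, base=2):
    """i-th term of the van der Corput low-discrepancy sequence (radical inverse)."""
    q, denom = 0.0, 1.0
    while i:
        i, r = divmod(i, base)
        denom *= base
        q += r / denom
    return q

EXACT = math.e - 1.0   # the integral of e^x over [0, 1]

def mc_error(n, seed=0):
    """Error of a plain Monte Carlo estimate with n pseudo-random points."""
    rng = random.Random(seed)
    return abs(sum(math.exp(rng.random()) for _ in range(n)) / n - EXACT)

def qmc_error(n):
    """Error of the quasi-Monte Carlo estimate with n van der Corput points."""
    return abs(sum(math.exp(van_der_corput(i)) for i in range(1, n + 1)) / n - EXACT)
```

For this smooth integrand, quadrupling N cuts the QMC error roughly fourfold (close to 1/N), while the plain MC error only halves on average.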
Frequency-domain deviational Monte Carlo method for linear oscillatory gas flows
Ladiges, Daniel R.; Sader, John E.
2015-10-01
Oscillatory non-continuum low Mach number gas flows are often generated by nanomechanical devices in ambient conditions. These flows can be simulated using a range of particle based Monte Carlo techniques, which in their original form operate exclusively in the time-domain. Recently, a frequency-domain weight-based Monte Carlo method was proposed [D. R. Ladiges and J. E. Sader, "Frequency-domain Monte Carlo method for linear oscillatory gas flows," J. Comput. Phys. 284, 351-366 (2015)] that exhibits superior statistical convergence when simulating oscillatory flows. This previous method used the Bhatnagar-Gross-Krook (BGK) kinetic model and contains a "virtual-time" variable to maintain the inherent time-marching nature of existing Monte Carlo algorithms. Here, we propose an alternative frequency-domain deviational Monte Carlo method that facilitates the use of a wider range of molecular models and more efficient collision/relaxation operators. We demonstrate this method with oscillatory Couette flow and the flow generated by an oscillating sphere, utilizing both the BGK kinetic model and hard sphere particles. We also discuss how oscillatory motion of arbitrary time-dependence can be simulated using computationally efficient parallelization. As in the weight-based method, this deviational frequency-domain Monte Carlo method is shown to offer improved computational speed compared to the equivalent time-domain technique.
Growing lattice animals and Monte-Carlo methods
Reich, G. R.; Leath, P. L.
1980-01-01
We consider the search problems which arise in Monte-Carlo studies involving growing lattice animals. A new periodic hashing scheme (based on a periodic cell) especially suited to these problems is presented which takes advantage both of the connected geometric structure of the animals and of the traversal-oriented nature of the search. The scheme is motivated by a physical analogy and tested numerically on compact and on ramified animals. In both cases the performance is found to be more efficient than random hashing, to a degree depending on the compactness of the animals.
Study of the quantitative analysis approach of maintenance by the Monte Carlo simulation method
International Nuclear Information System (INIS)
This study examines the quantitative evaluation of the maintenance activities of a nuclear power plant by the Monte Carlo simulation method. First, the concept of the quantitative evaluation of maintenance, whose examination was advanced in the Japan Society of Maintenology and the International Institute of Universality (IUU), was organized. A basic examination for the quantitative evaluation of maintenance was then carried out for a simple feed water system by the Monte Carlo simulation method. (author)
An irreversible Markov-chain Monte Carlo method with skew detailed balance conditions
International Nuclear Information System (INIS)
An irreversible Markov-chain Monte Carlo (MCMC) method based on a skew detailed balance condition is discussed. Some recent theoretical works concerned with the irreversible MCMC method are reviewed and the irreversible Metropolis-Hastings algorithm for the method is described. We apply the method to ferromagnetic Ising models in two and three dimensions. Relaxation dynamics of the order parameter and the dynamical exponent are studied in comparison to those of the conventional reversible MCMC method with the detailed balance condition. We also examine how the efficiency of the exchange Monte Carlo method is affected by the combined use of the irreversible MCMC method.
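For reference, the conventional reversible baseline named in this abstract is single-spin-flip Metropolis for the 2-D Ising model; the skew-detailed-balance variant itself is not sketched here. Lattice size, coupling, and sweep count are illustrative.

```python
import math, random

def metropolis_ising(L=16, beta=0.3, sweeps=200, seed=3):
    """Single-spin-flip Metropolis for the 2-D Ising model: the conventional
    reversible MCMC satisfying the detailed balance condition."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]                  # start fully magnetized
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2.0 * s[i][j] * nb                  # energy cost of flipping spin (i, j)
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                s[i][j] = -s[i][j]                   # accept with prob min(1, e^{-beta*dE})
    m = sum(sum(row) for row in s) / (L * L)
    return abs(m)
```

Well above the critical coupling (beta_c ≈ 0.4407) the order parameter stays near 1; well below, it relaxes toward 0. The irreversible variants aim to shorten exactly this relaxation.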
Dynamical Monte Carlo methods for plasma-surface reactions
Guerra, Vasco; Marinov, Daniil
2016-08-01
Different dynamical Monte Carlo algorithms to investigate molecule formation on surfaces are developed, evaluated and compared with the deterministic approach based on reaction-rate equations. These include a null event algorithm, the n-fold way/BKL algorithm and a ‘hybrid’ variant of the latter. NO2 formation by NO oxidation on Pyrex and O recombination on silica with the formation of O2 are taken as case studies. The influence of the grid size on the CPU calculation time and the accuracy of the results is analysed. The role of Langmuir–Hinshelwood recombination involving two physisorbed atoms and the effect of back diffusion and its inclusion in a deterministic formulation are investigated and discussed. It is shown that dynamical Monte Carlo schemes are flexible, simple to implement, easily describe elementary processes that are not straightforward to include in deterministic simulations, can run very efficiently if appropriately chosen, and give highly reliable results. Moreover, the present approach provides a relatively simple procedure to describe fully coupled surface and gas-phase chemistries.
MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks
Directory of Open Access Journals (Sweden)
Zhaoyan Jin
2013-10-01
Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes on networks by power iteration and has a high computational cost. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that the Monte Carlo based approximate computation of the HITS ranking substantially reduces computing resources while keeping high accuracy, and is significantly better than related work.
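For context, this is the deterministic power iteration that the paper's Monte Carlo algorithms approximate; a minimal sketch on a hypothetical three-node graph, not code from the paper.

```python
def hits(adj, n_iter=50):
    """Power-iteration HITS. adj maps each node to its list of out-neighbours;
    authority scores come from in-links, hub scores from out-links."""
    nodes = list(adj)
    auth = dict.fromkeys(nodes, 1.0)
    hub = dict.fromkeys(nodes, 1.0)
    for _ in range(n_iter):
        # authority update: sum of hub scores of nodes linking in
        auth = {v: sum(hub[u] for u in nodes if v in adj[u]) for v in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5
        auth = {v: a / norm for v, a in auth.items()}
        # hub update: sum of authority scores of nodes linked to
        hub = {u: sum(auth[v] for v in adj[u]) for u in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5
        hub = {u: h / norm for u, h in hub.items()}
    return auth, hub

# toy graph (an illustration, not from the paper): a -> b, a -> c, b -> c
auth, hub = hits({'a': ['b', 'c'], 'b': ['c'], 'c': []})
```

The Monte Carlo approach replaces these full matrix-vector sweeps with sampled random walks, which is where the claimed resource savings come from.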
Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method
Institute of Scientific and Technical Information of China (English)
XI Jia-mi; YANG Geng-she
2008-01-01
The advantages of the improved Monte-Carlo method and the feasibility of applying the proposed approach to reliability analysis of tunnel surrounding rock stability are discussed. On the basis of a deterministic analysis of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method considers the randomness of the related parameters and therefore satisfies the correlation among them. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific method for discriminating and checking surrounding rock stability.
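The basic Monte Carlo reliability computation described above can be sketched with a toy limit state g = R - S. The normal statistics for resistance R and load S below are assumed for illustration only, not taken from any real tunnel data.

```python
import random

def reliability(n=100_000, seed=11):
    """Crude Monte Carlo reliability estimate for the toy limit state g = R - S:
    sample the random parameters, count limit-state violations."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        R = rng.gauss(10.0, 1.5)   # resistance, e.g. rock-mass strength (assumed stats)
        S = rng.gauss(6.0, 1.0)    # load, e.g. induced stress (assumed stats)
        if R - S <= 0.0:           # limit state violated
            fails += 1
    return 1.0 - fails / n         # reliability = P(g > 0)
```

Correlated parameters, the refinement the improved method addresses, would be handled by sampling from a joint distribution, for instance via a Cholesky factor of the covariance matrix.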
Calculation of gamma-ray families by Monte Carlo method
International Nuclear Information System (INIS)
An extensive Monte Carlo calculation of gamma-ray families was carried out under appropriate model parameters which are currently used in high energy cosmic ray phenomenology. Characteristics of gamma-ray families are systematically investigated by comparison of the calculated results with experimental data obtained at mountain altitudes. The main point of discussion is examining the validity of Feynman scaling in the fragmentation region of multiple meson production. It is concluded that the experimental data cannot be reproduced under the assumption of the scaling law when primary cosmic rays are dominated by protons. Other possibilities concerning the primary composition and an increase of the interaction cross section are also examined. These assumptions are consistent with the experimental data only when we introduce an intense dominance of heavy primaries in the E0 > 10^15 eV region and a very strong increase of the interaction cross section (say, σ ∝ E0^0.06) simultaneously
Advanced computational methods for nodal diffusion, Monte Carlo, and S(sub N) problems
Martin, W. R.
1993-01-01
This document describes progress on five efforts for improving effectiveness of computational methods for particle diffusion and transport problems in nuclear engineering: (1) Multigrid methods for obtaining rapidly converging solutions of nodal diffusion problems. An alternative line relaxation scheme is being implemented into a nodal diffusion code. Simplified P2 has been implemented into this code. (2) Local Exponential Transform method for variance reduction in Monte Carlo neutron transport calculations. This work yielded predictions for both 1-D and 2-D x-y geometry better than conventional Monte Carlo with splitting and Russian Roulette. (3) Asymptotic Diffusion Synthetic Acceleration methods for obtaining accurate, rapidly converging solutions of multidimensional SN problems. New transport differencing schemes have been obtained that allow solution by the conjugate gradient method, and the convergence of this approach is rapid. (4) Quasidiffusion (QD) methods for obtaining accurate, rapidly converging solutions of multidimensional SN Problems on irregular spatial grids. A symmetrized QD method has been developed in a form that results in a system of two self-adjoint equations that are readily discretized and efficiently solved. (5) Response history method for speeding up the Monte Carlo calculation of electron transport problems. This method was implemented into the MCNP Monte Carlo code. In addition, we have developed and implemented a parallel time-dependent Monte Carlo code on two massively parallel processors.
On the feasibility of a homogenised multi-group Monte Carlo method in reactor analysis
International Nuclear Information System (INIS)
The use of homogenised multi-group cross sections to speed up Monte Carlo calculation has been studied to some extent, but the method is not widely implemented in modern calculation codes. This paper presents a calculation scheme in which homogenised material parameters are generated using the PSG continuous-energy Monte Carlo reactor physics code and used by MORA, a new full-core Monte Carlo code entirely based on homogenisation. The theory of homogenisation and its implementation in the Monte Carlo method are briefly introduced. The PSG-MORA calculation scheme is put to practice in two fundamentally different test cases: a small sodium-cooled fast reactor (JOYO) and a large PWR core. It is shown that the homogenisation results in a dramatic increase in efficiency. The results are in a reasonably good agreement with reference PSG and MCNP5 calculations, although fission source convergence becomes a problem in the PWR test case. (authors)
New methods for the Monte Carlo simulation of neutron noise experiments in ADS
International Nuclear Information System (INIS)
This paper presents two improvements to speed up the Monte-Carlo simulation of neutron noise experiments. The first one is to separate the actual Monte Carlo transport calculation from the digital signal processing routines, while the second is to introduce non-analogue techniques to improve the efficiency of the Monte Carlo calculation. For the latter method, adaptations to the theory of neutron noise experiments were made to account for the distortion of the higher-moments of the calculated neutron noise. Calculations were performed to test the feasibility of the above outlined scheme and to demonstrate the advantages of the application of the track length estimator. It is shown that the modifications improve the efficiency of these calculations to a high extent, which turns the Monte Carlo method into a powerful tool for the development and design of on-line reactivity measurement systems for ADS
Quantum trajectory Monte Carlo method describing the coherent dynamics of highly charged ions
International Nuclear Information System (INIS)
We present a theoretical framework for studying dynamics of open quantum systems. Our formalism gives a systematic path from Hamiltonians constructed by first principles to a Monte Carlo algorithm. Our Monte Carlo calculation can treat the build-up and time evolution of coherences. We employ a reduced density matrix approach in which the total system is divided into a system of interest and its environment. An equation of motion for the reduced density matrix is written in the Lindblad form using an additional approximation to the Born-Markov approximation. The Lindblad form allows the solution of this multi-state problem in terms of Monte Carlo sampling of quantum trajectories. The Monte Carlo method is advantageous in terms of computer storage compared to direct solutions of the equation of motion. We apply our method to discuss coherence properties of the internal state of a Kr35+ ion subject to spontaneous radiative decay. Simulations exhibit clear signatures of coherent transitions
Seeking CP violating couplings in ZZ production at LEP2
Biebel, Jochen
1998-02-01
The effects of CP violating anomalous ZZZ and γZZ vertices in ZZ production are determined. We present the differential cross-section for e+e--->ZZ with dependence on the spins of the Z bosons. It is shown that from the different spin combinations those with one longitudinally and one transversally polarized Z in the final state are the most sensitive to CP violating anomalous couplings.
Seeking $CP$ Violating Couplings in $ZZ$ Production at LEP2
Biebel, J
1999-01-01
The effects of CP violating anomalous ZZZ and γZZ vertices in ZZ production are determined. We present the differential cross-section for e+e- -> ZZ with dependence on the spins of the Z bosons. It is shown that from the different spin combinations those with one longitudinally and one transversally polarized Z in the final state are the most sensitive to CP violating anomalous couplings.
International Nuclear Information System (INIS)
Theoretical consideration is made of the possibility of accelerating, and judging the convergence of, a conventional Monte Carlo iterative calculation when it is used for a weak neutron interaction problem. The clue for this consideration is provided by several application analyses using the OECD/NEA source convergence benchmark problems. Some practical procedures are proposed to realize these acceleration and judgment methods in practical applications using a Monte Carlo code. (author)
Inference in Kingman's Coalescent with Particle Markov Chain Monte Carlo Method
Chen, Yifei; Xie, Xiaohui
2013-01-01
We propose a new algorithm to do posterior sampling of Kingman's coalescent, based upon the Particle Markov Chain Monte Carlo methodology. Specifically, the algorithm is an instantiation of the Particle Gibbs Sampling method, which alternately samples coalescent times conditioned on coalescent tree structures, and tree structures conditioned on coalescent times via the conditional Sequential Monte Carlo procedure. We implement our algorithm as a C++ package, and demonstrate its utility via a ...
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
Energy Technology Data Exchange (ETDEWEB)
Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)
2014-06-15
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the
A new method for the calculation of diffusion coefficients with Monte Carlo
International Nuclear Information System (INIS)
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods. (author)
Hybrid Monte-Carlo method for simulating neutron and photon radiography
International Nuclear Information System (INIS)
We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.
Hybrid Monte-Carlo method for simulating neutron and photon radiography
Wang, Han; Tang, Vincent
2013-11-01
We present a Hybrid Monte-Carlo method (HMCM) for simulating neutron and photon radiographs. HMCM utilizes the combination of a Monte-Carlo particle simulation for calculating incident film radiation and a statistical post-processing routine to simulate film noise. Since the method relies on MCNP for transport calculations, it is easily generalized to most non-destructive evaluation (NDE) simulations. We verify the method's accuracy through ASTM International's E592-99 publication, Standard Guide to Obtainable Equivalent Penetrameter Sensitivity for Radiography of Steel Plates [1]. Potential uses for the method include characterizing alternative radiological sources and simulating NDE radiographs.
Combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation
Energy Technology Data Exchange (ETDEWEB)
Saleur, H.; Derrida, B.
1985-07-01
In this paper we develop a method which combines the transfer matrix and Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite-size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
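The basic Monte Carlo ingredient of such a study, estimating the probability that a finite strip percolates at site-occupation probability p, can be sketched as follows. This is a minimal illustration only; the grid sizes, trial counts and seed are arbitrary choices, and it does not reproduce the authors' transfer-matrix machinery.

```python
import random
from collections import deque

def spans(grid):
    """Breadth-first search: do occupied sites connect the top row of the
    strip to the bottom row (4-neighbour site percolation)?"""
    rows, cols = len(grid), len(grid[0])
    seen = {(0, c) for c in range(cols) if grid[0][c]}
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if r == rows - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def spanning_probability(p, rows=32, cols=8, trials=200, seed=1):
    """Monte Carlo estimate of the probability that a rows x cols strip
    percolates when each site is occupied independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(cols)] for _ in range(rows)]
        hits += spans(grid)
    return hits / trials
```

A finite-size scaling study would repeat such estimates for several strip widths and locate the crossing of the spanning-probability curves.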
A New Method for the Calculation of Diffusion Coefficients with Monte Carlo
Dorval, Eric
2014-06-01
This paper presents a new Monte Carlo-based method for the calculation of diffusion coefficients. One distinctive feature of this method is that it does not resort to the computation of transport cross sections directly, although their functional form is retained. Instead, a special type of tally derived from a deterministic estimate of Fick's Law is used for tallying the total cross section, which is then combined with a set of other standard Monte Carlo tallies. Some properties of this method are presented by means of numerical examples for a multi-group 1-D implementation. Calculated diffusion coefficients are in general good agreement with values obtained by other methods.
Spin-orbit interactions in electronic structure quantum Monte Carlo methods
Melton, Cody A.; Zhu, Minyi; Guo, Shi; Ambrosetti, Alberto; Pederiva, Francesco; Mitas, Lubos
2016-04-01
We develop a generalization of the fixed-phase diffusion Monte Carlo method for Hamiltonians which explicitly depend on particle spins, such as spin-orbit interactions. The method is formulated in a zero-variance manner and is similar to the treatment of nonlocal operators in commonly used static-spin calculations. Tests on atomic and molecular systems show that it is very accurate, on par with the fixed-node method. This opens electronic structure quantum Monte Carlo methods to a vast research area of quantum phenomena in which spin-related interactions play an important role.
The S_N/Monte Carlo response matrix hybrid method
International Nuclear Information System (INIS)
A hybrid method has been developed to iteratively couple S_N and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts at coupling and results in a general and relatively efficient method. We demonstrate the method with some simple examples
International Nuclear Information System (INIS)
The Monte Carlo method is widely used for solving the neutron transport equation. Basically, the Monte Carlo method treats continuous angle, space and energy. It gives very accurate solutions when sufficiently many particle histories are used, but the computation times are long. To reduce computation time, a discrete Monte Carlo method, called the Discrete Transport Monte Carlo (DTMC) method, was proposed. It uses discrete space but continuous angle in mono-energy one-dimensional problems, and uses the lumped linear-discontinuous (LLD) equation to construct the probabilities of leakage, scattering, and absorption. LLD may cause negative angular fluxes in highly scattering problems, so a two-scatter variance reduction method is applied to DTMC and yields very accurate solutions in various problems. In transport Monte Carlo calculations, a particle history does not end at a scattering event, so highly scattering problems remain expensive. To further reduce computation time, the Discrete Diffusion Monte Carlo (DDMC) method is implemented. DDMC uses the diffusion equation to construct the probabilities and has no scattering events, so it takes very short computation times compared with DTMC and agrees very well with cell-centered diffusion results. Since diffusion results may be poor near boundaries, a hybrid method calculates boundary regions with DTMC and the remaining regions with DDMC. In this thesis, the DTMC, DDMC and hybrid methods and their results for several problems are presented. The results show that DDMC and DTMC agree well with deterministic diffusion and transport results, respectively. The hybrid method gives transport-like results in problems where diffusion results are poor. The computation time of the hybrid method lies between those of DDMC and DTMC, as expected
Progress on burnup calculation methods coupling Monte Carlo and depletion codes
Energy Technology Data Exchange (ETDEWEB)
Leszczynski, Francisco [Comision Nacional de Energia Atomica, San Carlos de Bariloche, RN (Argentina). Centro Atomico Bariloche]. E-mail: lesinki@cab.cnea.gob.ar
2005-07-01
Several methods of burnup calculation coupling Monte Carlo and depletion codes, investigated and applied by the author in recent years, are described here, together with some benchmark results and future possibilities. The methods are: depletion calculations at the cell level with WIMS or other cell codes, using the resulting concentrations of fission products, poisons and actinides in Monte Carlo calculations for fixed burnup distributions obtained from diffusion codes; the same as the first, but using a method of coupling Monte Carlo (MCNP) and a depletion code (ORIGEN) at the cell level to obtain the nuclide concentrations, to be used in full-reactor calculations with a Monte Carlo code; and full calculation of the system with Monte Carlo and depletion codes, in several steps. All these methods were used for different research-reactor problems, and some comparisons with experimental results for regular lattices were performed. In this work, a summary of these studies is presented, and the advantages and problems found are discussed. A brief description of the methods adopted and of the MCQ system for coupling the MCNP and ORIGEN codes is also included. (author)
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Wollaeger, Ryan T; Graziani, Carlo; Couch, Sean M; Jordan, George C; Lamb, Donald Q; Moses, Gregory A
2013-01-01
We explore the application of Implicit Monte Carlo (IMC) and Discrete Diffusion Monte Carlo (DDMC) to radiation transport in strong fluid outflows with structured opacity. The IMC method of Fleck & Cummings is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking Monte Carlo particles through optically thick materials. The DDMC method of Densmore accelerates an IMC computation where the domain is diffusive. Recently, Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent neutrino transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally grey DDMC method. In this article we rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. The method described is suitable for a large variety of non-mono...
New simpler method of matching NLO corrections with parton shower Monte Carlo
Jadach, S.; Placzek, W.; Sapeta, S.(CERN PH-TH, CH-1211, Geneva 23, Switzerland); Siodmok, A.; Skrzypek, M.
2016-01-01
Next steps in the development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC) factorization scheme, which was recently fully defined for the first time. Preliminary numerical results for the Higgs-boson production process are also presented.
New simpler method of matching NLO corrections with parton shower Monte Carlo
Jadach, S; Sapeta, S; Siodmok, A; Skrzypek, M
2016-01-01
Next steps in the development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC) factorization scheme, which was recently fully defined for the first time. Preliminary numerical results for the Higgs-boson production process are also presented.
International Nuclear Information System (INIS)
The computer program MCVIEW calculates the radiation view factor between surfaces for three-dimensional geometries. MCVIEW was developed to calculate view factors as input data for heat transfer analysis programs such as TRUMP, HEATING-5 and HEATING-6. The paper gives a brief illustration of the Monte Carlo calculation method for the view factor. The second section presents comparisons between the Monte Carlo method and other methods such as area integration, line integration and the cross-string method, concerning calculation error and computer execution time. The third section provides a user's input guide for MCVIEW. (author)
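The Monte Carlo view-factor calculation that a code like MCVIEW performs can be illustrated, under simplifying assumptions, with the double-area estimator for two coaxial parallel unit squares. The function name and sampling choices below are illustrative sketches, not taken from MCVIEW:

```python
import math
import random

def view_factor_parallel_squares(h, samples=20000, seed=42):
    """Monte Carlo estimate of the view factor F(1->2) between two coaxial
    parallel unit squares a distance h apart, using the double-area integral
    F = (1/A1) * integral over both areas of cos(t1)*cos(t2) / (pi * r^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x1, y1 = rng.random(), rng.random()   # point on emitting square (z = 0)
        x2, y2 = rng.random(), rng.random()   # point on receiving square (z = h)
        r2 = (x2 - x1) ** 2 + (y2 - y1) ** 2 + h * h
        cos1 = cos2 = h / math.sqrt(r2)       # both surface normals point along z
        total += cos1 * cos2 / (math.pi * r2)  # A1 = A2 = 1, so no area factors
    return total / samples
```

For unit squares one unit apart the analytic view factor is close to 0.2, which the estimator reproduces to within its statistical error; increasing the separation drives the estimate toward zero, as expected.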
International Nuclear Information System (INIS)
The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method treats the geometry exactly, but is time-consuming for deep-penetration problems. The discrete ordinates method has great computational efficiency, but it is costly in computer memory and suffers from ray effects. Neither the discrete ordinates method nor the Monte Carlo method alone suffices for shielding calculations of large, complex nuclear facilities. To solve this problem, a Monte Carlo and discrete ordinates bidirectional coupling method has been developed. The bidirectional coupling is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of discrete ordinates, combining the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated with the coupling method. The results show satisfactory agreement with MCNP and TORT calculations, proving the correctness of the program. (authors)
Metric conjoint segmentation methods : A Monte Carlo comparison
Vriens, M; Wedel, M; Wilms, T
1996-01-01
The authors compare nine metric conjoint segmentation methods. Four methods concern two-stage procedures in which the estimation of conjoint models and the partitioning of the sample are performed separately; in five, the estimation and segmentation stages are integrated. The methods are compared co
International Nuclear Information System (INIS)
Computer development has a bearing on the choice of methods and their possible uses. The authors discuss the possible uses of the diffusion and transport theories and their limitations. Most of the problems encountered in regard to criticality involve fissile materials in simple or multiple assemblies. These entail the use of methods of calculation based on different principles. There are approximate methods of calculation, but very often, for economic reasons or with a view to practical application, a high degree of accuracy is required in determining the reactivity of the assemblies in question, and the methods based on the Monte Carlo principle are then the most valid. When these methods are used, accuracy is linked with the calculation time, so that the usefulness of the codes derives from their speed. With a view to carrying out the work in the best conditions, depending on the geometry and the nature of the materials involved, various codes must be used. Four principal codes are described, as are their variants; some typical possibilities and certain fundamental results are presented. Finally the accuracies of the various methods are compared. (author)
The factorization method for Monte Carlo simulations of systems with a complex action
Ambjørn, J.; Anagnostopoulos, K. N.; Nishimura, J.; Verbaarschot, J. J. M.
2004-03-01
We propose a method for Monte Carlo simulations of systems with a complex action. The method has the advantages of being in principle applicable to any such system and provides a solution to the overlap problem. In some cases, like in the IKKT matrix model, a finite size scaling extrapolation can provide results for systems whose size would make it prohibitive to simulate directly.
Remarkable moments in the history of neutron transport Monte Carlo methods
International Nuclear Information System (INIS)
I highlight a few results from the past of the neutron and photon transport Monte Carlo methods which have caused me a great pleasure for their ingenuity and wittiness and which certainly merit to be remembered even when tricky methods are not needed anymore. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Bredenstein, A.
2006-05-08
In this work we provide precision calculations for the processes γγ → 4 fermions and H → WW/ZZ → 4 fermions. At a γγ collider precise theoretical predictions are needed for the γγ → WW → 4f processes because of their large cross section. These processes allow a measurement of the gauge-boson couplings γWW and γγWW. Furthermore, the reaction γγ → H → WW/ZZ → 4f arises through loops of virtual charged, massive particles. Thus, the coupling γγH can be measured and Higgs bosons with a relatively large mass could be produced. For masses M_H ≳ 135 GeV the Higgs boson predominantly decays into W- or Z-boson pairs and subsequently into four leptons. The kinematical reconstruction of these decays is influenced by quantum corrections, especially real photon radiation. Since off-shell effects of the gauge bosons have to be taken into account below M_H ≈ 2M_W/Z, the inclusion of the decays of the gauge bosons is important. In addition, the spin and the CP properties of the Higgs boson can be determined by considering angular and energy distributions of the decay fermions. For a comparison of theoretical predictions with experimental data, Monte Carlo generators are useful tools. We construct such programs for the processes γγ → WW → 4f and H → WW/ZZ → 4f. On the one hand, they provide the complete predictions at lowest order of perturbation theory. On the other hand, they contain quantum corrections, which can be classified into real corrections, connected with photon bremsstrahlung, and virtual corrections. Whereas the virtual quantum corrections to γγ → WW → 4f are calculated in the double-pole approximation, i.e. only doubly-resonant contributions are taken into account, we calculate the complete O(α) corrections for the H → WW/ZZ
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Martin, William R [University of Michigan; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL
2012-01-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.
Directory of Open Access Journals (Sweden)
GHAREHPETIAN, G. B.
2009-06-01
The analysis of the risk of partial and total blackouts has a crucial role in determining safe limits in power system design, operation and upgrade. Due to the huge cost of blackouts, it is very important to improve risk assessment methods. In this paper, Monte Carlo simulation (MCS) is used to analyze the risk, and the Gaussian Mixture Method (GMM) is used to estimate the probability density function (PDF) of the load curtailment, in order to improve the power system risk assessment method. In this improved method, the PDF and a suggested index are used to analyze the risk of loss of load. The effect of considering the number of generation units of power plants in the risk analysis has been studied too. The improved risk assessment method has been applied to the IEEE 118-bus system and the network of Khorasan Regional Electric Company (KREC), and the PDF of the load curtailment has been determined for both systems. The effect of various network loadings, transmission unavailability, transmission capacity and generation unavailability conditions on blackout risk has been investigated too.
Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport
International Nuclear Information System (INIS)
Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations
Measurement of the ZZ production cross section with ATLAS
Energy Technology Data Exchange (ETDEWEB)
Ellinghaus, Frank; Schmitz, Simon; Tapprogge, Stefan [Institut fuer Physik, Johannes Gutenberg-Universitaet Mainz (Germany); Collaboration: ATLAS-Collaboration
2015-07-01
The study of ZZ production has an excellent potential to test the electroweak sector of the Standard Model, where Z boson pairs can be produced via non-resonant processes or via Higgs decays. A deviation from the Standard Model expectation for the ZZ production cross section would be an indication of new physics. This could manifest itself in so-called triple gauge couplings via ZZZ or ZZγ, which the Standard Model forbids at tree level. The measurement of the ZZ production cross section is based on an integrated luminosity of 20.3 fb{sup -1} of proton-proton collision data at √(s) = 8 TeV recorded with the ATLAS detector in 2012. Measurements of differential cross sections as well as searches for triple gauge couplings have been performed. This talk presents the measurement and analysis details of ZZ production in the ZZ → 4l channel.
External individual monitoring: experiments and simulations using Monte Carlo Method
International Nuclear Information System (INIS)
In this work, we have evaluated the possibility of applying the Monte Carlo simulation technique to photon dosimetry in external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X-ray spectra were generated by impinging electrons on a tungsten target. The produced photon beam was then filtered through a beryllium window and additional filters to obtain radiation of the desired qualities. This procedure, used to simulate radiation fields produced by an X-ray tube, was validated by comparing characteristics such as the half-value layer, which was also experimentally measured, the mean photon energy, and the spectral resolution of the simulated spectra with those of reference spectra established by international standards. In the modeling of the thermoluminescent dosimeters, two improvements have been introduced. The first was the inclusion of 6% of air in the composition of the CaF2:NaCl detector, due to the difference between measured and calculated values of its density. Comparison between simulated and experimental results also showed that the self-attenuation of emitted light in the readout process of the fluorite dosimeter must be taken into account; in the second improvement, the light attenuation coefficient of the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm-1, was therefore introduced. Conversion coefficients Cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl methacrylate (PMMA) walls, for the reference narrow and wide X-ray spectrum series [ISO 4037-1], and also for the wide spectra implemented and used in routine at Laboratorio de Dosimetria. Simulations of radiation backscattered by the PMMA slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results. Therefore, the PMMA slab water phantom, which can be constructed easily and at low cost, can
Quasi-Monte Carlo methods for lattice systems. A first look
Energy Technology Data Exchange (ETDEWEB)
Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik
2013-02-15
We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
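The improvement from N^(-1/2) toward N^(-1) can be demonstrated on a toy one-dimensional integral by replacing pseudo-random points with a van der Corput low-discrepancy sequence. This is a minimal sketch, unrelated to the authors' lattice code; the integrand and sample sizes are arbitrary choices:

```python
import random

def van_der_corput(n, base=2):
    """n-th element of the van der Corput low-discrepancy sequence: the
    digits of n in the given base, mirrored about the radix point."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def integrate(points):
    """Sample mean of f(x) = 3x^2; the exact integral over [0, 1] is 1."""
    return sum(3.0 * x * x for x in points) / len(points)

N = 4096
qmc_err = abs(integrate([van_der_corput(i) for i in range(1, N + 1)]) - 1.0)

# Root-mean-square error of ordinary Monte Carlo over several independent seeds
sq_errs = []
for seed in range(20):
    rng = random.Random(seed)
    sq_errs.append((integrate([rng.random() for _ in range(N)]) - 1.0) ** 2)
mc_rms = (sum(sq_errs) / len(sq_errs)) ** 0.5
```

For smooth integrands like this one the low-discrepancy error is far below the Monte Carlo root-mean-square error at the same N, reflecting the faster asymptotic decay.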
Monte Carlo boundary methods for RF-heating of fusion plasma
International Nuclear Information System (INIS)
A fusion plasma can be heated by launching an electromagnetic wave into the plasma with a frequency close to the cyclotron frequency of a minority ion species. This heating process creates a non-Maxwellian distribution function that is difficult to solve numerically in toroidal geometry. Solutions have previously been found using the Monte Carlo code FIDO, but the computations are rather time consuming. Therefore, methods to speed up the computations using Monte Carlo boundary methods have been studied. Ion cyclotron frequency heating mainly perturbs the high-velocity distribution, while the low-velocity distribution remains approximately Maxwellian. A hybrid model is therefore proposed, assuming a Maxwellian at low velocities and calculating the high-velocity distribution with a Monte Carlo method. Three different methods to treat the boundary between the low- and high-velocity regimes are presented. A Monte Carlo code HYBRID has been developed to test the most promising method, the 'modified differential equation' method, for a one-dimensional problem. The results show good agreement with analytical solutions
MEASURING THE EVOLUTIONARY RATE OF COOLING OF ZZ Ceti
Energy Technology Data Exchange (ETDEWEB)
Mukadam, Anjum S.; Fraser, Oliver; Riecken, T. S.; Kronberg, M. E. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Bischoff-Kim, Agnes [Georgia College and State University, Milledgeville, GA 31061 (United States); Corsico, A. H. [Facultad de Ciencias Astronomicas y Geofisicas, Universidad Nacional de La Plata (Argentina); Montgomery, M. H.; Winget, D. E.; Hermes, J. J.; Winget, K. I.; Falcon, Ross E.; Reaves, D. [Department of Astronomy, University of Texas at Austin, Austin, TX 78759 (United States); Kepler, S. O.; Romero, A. D. [Universidade Federal do Rio Grande do Sul, Porto Alegre 91501-970, RS (Brazil); Chandler, D. W. [Meyer Observatory, Central Texas Astronomical Society, 3409 Whispering Oaks, Temple, TX 76504 (United States); Kuehne, J. W. [McDonald Observatory, Fort Davis, TX 79734 (United States); Sullivan, D. J. [Victoria University of Wellington, P.O. Box 600, Wellington (New Zealand); Von Hippel, T. [Embry-Riddle Aeronautical University, 600 South Clyde Morris Boulevard, Daytona Beach, FL 32114 (United States); Mullally, F. [SETI Institute, NASA Ames Research Center, MS 244-30, Moffet Field, CA 94035 (United States); Shipman, H. [Delaware Asteroseismic Research Center, Mt. Cuba Observatory, Greenville, DE 19807 (United States); and others
2013-07-01
We have finally measured the evolutionary rate of cooling of the pulsating hydrogen atmosphere (DA) white dwarf ZZ Ceti (Ross 548), as reflected by the drift rate of the 213.13260694 s period. Using 41 yr of time-series photometry from 1970 November to 2012 January, we determine the rate of change of this period with time to be dP/dt = (5.2 ± 1.4) × 10^-15 s s^-1 employing the O - C method and (5.45 ± 0.79) × 10^-15 s s^-1 using a direct nonlinear least squares fit to the entire lightcurve. We adopt the dP/dt obtained from the nonlinear least squares program as our final determination, but augment the corresponding uncertainty to a more realistic value, ultimately arriving at the measurement of dP/dt = (5.5 ± 1.0) × 10^-15 s s^-1. After correcting for proper motion, the evolutionary rate of cooling of ZZ Ceti is computed to be (3.3 ± 1.1) × 10^-15 s s^-1. This value is consistent within uncertainties with the measurement of (4.19 ± 0.73) × 10^-15 s s^-1 for another similar pulsating DA white dwarf, G 117-B15A. Measuring the cooling rate of ZZ Ceti helps us refine our stellar structure and evolutionary models, as cooling depends mainly on the core composition and stellar mass. Calibrating white dwarf cooling curves with this measurement will reduce the theoretical uncertainties involved in white dwarf cosmochronometry. Should the 213.13 s period be trapped in the hydrogen envelope, then our determination of its drift rate compared to the expected evolutionary rate suggests an additional source of stellar cooling. Attributing the excess cooling to the emission of axions imposes a constraint on the mass of the hypothetical axion particle.
Methods of Monte Carlo biasing using two-dimensional discrete ordinates adjoint flux
Energy Technology Data Exchange (ETDEWEB)
Tang, J.S.; Stevens, P.N.; Hoffman, T.J.
1976-06-01
Methods of biasing three-dimensional deep penetration Monte Carlo calculations using importance functions obtained from a two-dimensional discrete ordinates adjoint calculation have been developed. The important distinction was made between the applications of the point value and the event value to alter the random walk in Monte Carlo analysis of radiation transport. The biasing techniques developed are the angular probability biasing which alters the collision kernel using the point value as the importance function and the path length biasing which alters the transport kernel using the event value as the importance function. Source location biasings using the step importance function and the scalar adjoint flux obtained from the two-dimensional discrete ordinates adjoint calculation were also investigated. The effects of the biasing techniques to Monte Carlo calculations have been investigated for neutron transport through a thick concrete shield with a penetrating duct. Source location biasing, angular probability biasing, and path length biasing were employed individually and in various combinations. Results of the biased Monte Carlo calculations were compared with the standard Monte Carlo and discrete ordinates calculations.
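The flavor of path-length biasing can be conveyed in a much simpler setting than the paper's adjoint-driven scheme: estimating transmission through a thick, purely absorbing one-dimensional slab, where stretching the sampled free paths and carrying the likelihood-ratio weight dramatically reduces variance. This is a hedged sketch with arbitrary parameter values, not the authors' method:

```python
import math
import random

def transmission(sigma, thickness, n, stretch=1.0, seed=7):
    """Estimate the slab transmission probability exp(-sigma*thickness) for a
    purely absorbing medium.  With stretch=1 this is analog sampling of the
    exponential free path; with stretch<1 paths are drawn from the flatter
    density stretch*sigma*exp(-stretch*sigma*s), and each crossing is scored
    with the likelihood-ratio weight (true pdf / biased pdf) at the sample."""
    rng = random.Random(seed)
    sig_b = stretch * sigma
    total = 0.0
    for _ in range(n):
        s = -math.log(1.0 - rng.random()) / sig_b     # sampled free path
        if s > thickness:                              # particle crosses the slab
            total += (sigma / sig_b) * math.exp(-(sigma - sig_b) * s)
    return total / n
```

For a ten-mean-free-path slab the true answer exp(-10) is about 4.5e-5, so analog sampling scores only a handful of histories out of 10^5, while the stretched estimator scores many weighted crossings and converges far faster.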
Markov Chain Monte Carlo methods in computational statistics and econometrics
Czech Academy of Sciences Publication Activity Database
Volf, Petr
Plzeň : University of West Bohemia in Pilsen, 2006 - (Lukáš, L.), s. 525-530 ISBN 978-80-7043-480-2. [Mathematical Methods in Economics 2006. Plzeň (CZ), 13.09.2006-15.09.2006] R&D Projects: GA ČR GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords : Random search * MCMC * optimization Subject RIV: BB - Applied Statistics, Operational Research
The application of Monte Carlo method to electron and photon beams transport
International Nuclear Information System (INIS)
The application of a Monte Carlo method to study the transport of electron and photon beams in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculations for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs
International Nuclear Information System (INIS)
An approach to (normalized) infinite dimensional integrals, including normalized oscillatory integrals, through a sequence of evaluations in the spirit of the Monte Carlo method for probability measures is proposed. In this approach the normalization through the partition function is included in the definition. For suitable sequences of evaluations, the ('classical') expectation values of cylinder functions are recovered.
Magnot, Jean-Pierre
2012-01-01
An approach to (normalized) infinite dimensional integrals, including normalized oscillatory integrals, through a sequence of evaluations in the spirit of the Monte Carlo method for probability measures is proposed. In this approach the normalization through the partition function is included in the definition. For suitable sequences of evaluations, the ("classical") expectation values of cylinder functions are recovered.
Lowest-order relativistic corrections of helium computed using Monte Carlo methods
International Nuclear Information System (INIS)
We have calculated the lowest-order relativistic effects for the three lowest states of the helium atom with symmetry 1S, 1P, 1D, 3S, 3P, and 3D using variational Monte Carlo methods and compact, explicitly correlated trial wave functions. Our values are in good agreement with the best results in the literature.
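The variational Monte Carlo machinery behind such calculations, Metropolis sampling of |ψ|² and averaging the local energy, can be shown in a one-electron analogue: a hydrogen-like atom with trial function exp(-αr) in atomic units. This is a toy sketch to illustrate the method, not the authors' correlated helium calculation:

```python
import math
import random

def local_energy(r, alpha):
    """Local energy of the trial function exp(-alpha*r) for a hydrogen-like
    atom in atomic units: E_L = -alpha**2/2 + (alpha - 1)/r."""
    return -0.5 * alpha * alpha + (alpha - 1.0) / r

def vmc_energy(alpha, steps=20000, step_size=0.5, seed=3):
    """Random-walk Metropolis sampling of |psi|^2 = exp(-2*alpha*r); the
    average of the local energy is the variational energy estimate."""
    rng = random.Random(seed)
    x, y, z, r = 1.0, 0.0, 0.0, 1.0
    total = 0.0
    for _ in range(steps):
        nx = x + step_size * (rng.random() - 0.5)
        ny = y + step_size * (rng.random() - 0.5)
        nz = z + step_size * (rng.random() - 0.5)
        nr = math.sqrt(nx * nx + ny * ny + nz * nz)
        # accept with probability min(1, |psi(new)|^2 / |psi(old)|^2)
        if nr > 1e-12 and rng.random() < math.exp(-2.0 * alpha * (nr - r)):
            x, y, z, r = nx, ny, nz, nr
        total += local_energy(r, alpha)
    return total / steps
```

At α = 1 the trial function is exact, the local energy is a constant -0.5 hartree, and the estimator has zero variance; for other α the analytic variational energy E(α) = α²/2 - α makes the sketch easy to check.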
The information-based complexity of approximation problem by adaptive Monte Carlo methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MW^r_{p,α}(T^d), 1 < p < ∞, in the norm of L_q(T^d), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of the pseudo-s-scale, we determine the exact asymptotic orders of this problem.
On the use of the continuous-energy Monte Carlo method for lattice physics applications
International Nuclear Information System (INIS)
This paper is a general overview of the Serpent Monte Carlo reactor physics burnup calculation code. The Serpent code is a project carried out at VTT Technical Research Centre of Finland, in an effort to extend the use of the continuous-energy Monte Carlo method to lattice physics applications, including group constant generation for coupled full-core reactor simulator calculations. The main motivation of going from deterministic transport methods to Monte Carlo simulation is the capability to model any fuel or reactor type using the same fundamental neutron interaction data without major approximations. This capability is considered important especially for the development of next-generation reactor technology, which often lies beyond the modeling capabilities of conventional LWR codes. One of the main limiting factors for the Monte Carlo method is still today the prohibitively long computing time, especially in burnup calculation. The Serpent code uses certain dedicated calculation techniques to overcome this limitation. The overall running time is reduced significantly, in some cases by almost two orders of magnitude. The main principles of the calculation methods and the general capabilities of the code are introduced. The results section presents a collection of validation cases in which Serpent calculations are compared to reference MCNP4C and CASMO-4E results. (author)
A Monte Carlo Green's function method for three-dimensional neutron transport
International Nuclear Information System (INIS)
This paper describes a Monte Carlo transport kernel capability, which has recently been incorporated into the RACER continuous-energy Monte Carlo code. The kernels represent a Green's function method for neutron transport from a fixed-source volume out to a particular volume of interest. This method is a very powerful transport technique. Also, since kernels are evaluated numerically by Monte Carlo, the problem geometry can be arbitrarily complex, yet exact. This method is intended for problems where an ex-core neutron response must be determined for a variety of reactor conditions. Two examples are ex-core neutron detector response and vessel critical weld fast flux. The response is expressed in terms of neutron transport kernels weighted by a core fission source distribution. In these types of calculations, the response must be computed for hundreds of source distributions, but the kernels only need to be calculated once. The advance described in this paper is that the kernels are generated with a highly accurate three-dimensional Monte Carlo transport calculation instead of an approximate method such as line-of-sight attenuation theory or a synthesized three-dimensional discrete ordinates solution.
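The kernel-reuse idea can be sketched in a toy problem: estimate, once and by Monte Carlo, a leakage kernel per source region in a purely absorbing 1D slab, then evaluate the detector response for many source distributions by simple weighting. This is an illustrative sketch only; the slab model, cross section and region positions are invented and are not the RACER implementation.

```python
import random

random.seed(0)

def reach_prob(x0, n_hist=5000, sigma=1.0):
    """Monte Carlo estimate of the probability that a particle born at x0
    leaks past x = 1 in a purely absorbing 1D slab (single free flight)."""
    hits = 0
    for _ in range(n_hist):
        mu = random.uniform(-1.0, 1.0)      # isotropic direction cosine
        if mu <= 0.0:
            continue                        # moving away from the detector
        flight = random.expovariate(sigma)  # exponential path length
        if x0 + mu * flight > 1.0:
            hits += 1
    return hits / n_hist

# Kernels are computed once, by Monte Carlo, one per source region.
regions = [0.1, 0.3, 0.5, 0.7, 0.9]        # invented region positions
K = [reach_prob(x) for x in regions]

def response(source):
    """Detector response for any source distribution, reusing K."""
    return sum(s * k for s, k in zip(source, K))

r_uniform = response([1.0] * 5)
r_peaked = response([0.0, 0.0, 0.0, 1.0, 3.0])
```

The point of the technique survives even in this toy: `K` is computed once with the expensive transport step, while `response` can be re-evaluated for hundreds of source distributions at negligible cost.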
Transport properties of electrons in GaAs using random techniques (Monte-Carlo Method)
International Nuclear Information System (INIS)
We study the transport properties of electrons in GaAs using random techniques (Monte-Carlo method). With a simple non-parabolic band model for this semiconductor we obtain the stationary electron transport characteristics as a function of the electric field in this material, checking these theoretical results against the experimental ones given by several authors. (Author)
An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.
Kim, Seock-Ho
2001-01-01
Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…
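The Rasch-model estimation evaluated above can be illustrated with a minimal MCMC sketch. For brevity it uses random-walk Metropolis rather than the Gibbs sampler examined in the paper, samples a single examinee's theta with known item difficulties, and every item and response value below is invented.

```python
import math, random

random.seed(1)

def rasch_p(theta, b):
    """Probability of a correct response under the 1PL (Rasch) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_post(theta, responses, difficulties):
    """Log posterior: Bernoulli likelihood plus a standard-normal prior."""
    lp = -0.5 * theta * theta
    for x, b in zip(responses, difficulties):
        p = rasch_p(theta, b)
        lp += math.log(p) if x else math.log(1.0 - p)
    return lp

def sample_theta(responses, difficulties, n_iter=5000, step=0.8):
    """Random-walk Metropolis draws from p(theta | responses)."""
    theta, draws = 0.0, []
    lp = log_post(theta, responses, difficulties)
    for _ in range(n_iter):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_post(prop, responses, difficulties)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta)
    return draws

# An examinee answering the eight easiest of ten items correctly
# should receive a positive ability estimate.
b = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2, 2.5]
x = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
draws = sample_theta(x, b)
est = sum(draws[1000:]) / len(draws[1000:])   # posterior mean after burn-in
```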
Kim, Jee-Seon; Bolt, Daniel M.
2007-01-01
The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…
Stability of few-body systems and quantum Monte-Carlo methods
International Nuclear Information System (INIS)
Quantum Monte-Carlo methods are well suited to study the stability of few-body systems. Their capabilities are illustrated by studying the critical stability of the hydrogen molecular ion whose nuclei and electron interact through the Yukawa potential, and the stability of small helium clusters. Refs. 16 (author)
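Stability studies of this kind rest on variational Monte Carlo machinery, which can be sketched on the hydrogen atom (with the plain Coulomb interaction, not the Yukawa potential of the paper). With the trial wavefunction ψ = e^{-αr}, the local energy in atomic units is E_L = -α²/2 + (α - 1)/r, so at α = 1 every sample gives exactly -1/2 hartree with zero variance — a useful self-check for the sampler.

```python
import math, random

random.seed(12)

def vmc_energy(alpha, n=20000, step=0.6):
    """Variational Monte Carlo for hydrogen: Metropolis sampling of |psi|^2
    with psi = exp(-alpha * r), averaging the local energy."""
    pos, r = [1.0, 0.0, 0.0], 1.0
    es = []
    for i in range(n):
        new = [p + random.uniform(-step, step) for p in pos]
        rn = math.sqrt(sum(c * c for c in new))
        # |psi|^2 ratio for the move: exp(-2 alpha (r_new - r_old)).
        if rn > 1e-9 and random.random() < math.exp(-2.0 * alpha * (rn - r)):
            pos, r = new, rn
        if i >= n // 4:                       # discard burn-in
            es.append(-0.5 * alpha * alpha + (alpha - 1.0) / r)
    return sum(es) / len(es)

e = vmc_energy(1.0)   # exact trial function: every local energy is -0.5
```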
A Monte-Carlo-Based Network Method for Source Positioning in Bioluminescence Tomography
Zhun Xu; Xiaolei Song; Xiaomeng Zhang; Jing Bai
2007-01-01
We present an approach based on the improved Levenberg Marquardt (LM) algorithm of backpropagation (BP) neural network to estimate the light source position in bioluminescent imaging. For solving the forward problem, the table-based random sampling algorithm (TBRS), a fast Monte Carlo simulation method ...
Analysis of the distribution of characteristic X-ray production using Monte Carlo methods

International Nuclear Information System (INIS)
The Monte Carlo method has been applied for the simulation of electron trajectories in a bulk sample, and therefore for the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the gaussian model. (Author)
A variance-reduced electrothermal Monte Carlo method for semiconductor device simulation
Energy Technology Data Exchange (ETDEWEB)
Muscato, Orazio; Di Stefano, Vincenza [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) Leibniz-Institut im Forschungsverbund Berlin e.V., Berlin (Germany)
2012-11-01
This paper is concerned with electron transport and heat generation in semiconductor devices. An improved version of the electrothermal Monte Carlo method is presented. This modification has better approximation properties due to reduced statistical fluctuations. The corresponding transport equations are provided and results of numerical experiments are presented.
Stabilizing Canonical-Ensemble Calculations in the Auxiliary-Field Monte Carlo Method
Gilbreth, C N
2014-01-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
International Nuclear Information System (INIS)
We present a new, nondestructive, method for determining chemical potentials in Monte Carlo and molecular dynamics simulations. The method estimates a value for the chemical potential such that one has a balance between fictitious successful creation and destruction trials in which the Monte Carlo method is used to determine success or failure of the creation/destruction attempts; we thus call the method a detailed balance method. The method allows one to obtain estimates of the chemical potential for a given species in any closed ensemble simulation; the closed ensemble is paired with a ''natural'' open ensemble for the purpose of obtaining creation and destruction probabilities. We present results for the Lennard-Jones system and also for an embedded atom model of liquid palladium, and compare to previous results in the literature for these two systems. We are able to obtain an accurate estimate of the chemical potential for the Lennard-Jones system at higher densities than reported in the literature
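The detailed-balance estimator itself is not reproduced here, but its simpler relative, the Widom test-particle insertion, conveys how a chemical potential is extracted from fictitious creation trials: μ_ex = -kT ln⟨exp(-βΔU)⟩ over random insertions. All parameters below (density, box size, temperature, trial count) are invented for illustration.

```python
import math, random

random.seed(2)

def lj(r2):
    """Lennard-Jones pair energy in reduced units, given squared distance."""
    inv6 = 1.0 / r2 ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def widom_mu_ex(coords, box, beta, n_trials=2000):
    """Excess chemical potential from Widom test-particle insertions."""
    acc = 0.0
    for _ in range(n_trials):
        test = [random.uniform(0.0, box) for _ in range(3)]
        du = 0.0
        for p in coords:
            # Minimum-image convention in a cubic periodic box.
            r2 = sum(min(abs(t - q), box - abs(t - q)) ** 2
                     for t, q in zip(test, p))
            if r2 > 1e-12:
                du += lj(r2)
        acc += math.exp(-beta * du)
    return -math.log(acc / n_trials) / beta

# A dilute random configuration; at this density mu_ex is small.
box, n = 10.0, 20
coords = [[random.uniform(0.0, box) for _ in range(3)] for _ in range(n)]
mu = widom_mu_ex(coords, box, beta=1.0)
```

The detailed-balance method of the abstract refines this idea by balancing creation against destruction trials, which extends the usable density range.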
Sequential Monte Carlo methods for nonlinear discrete-time filtering
Bruno, Marcelo GS
2013-01-01
In these notes, we introduce particle filtering as a recursive importance sampling method that approximates the minimum-mean-square-error (MMSE) estimate of a sequence of hidden state vectors in scenarios where the joint probability distribution of the states and the observations is non-Gaussian and, therefore, closed-form analytical expressions for the MMSE estimate are generally unavailable. We begin the notes with a review of Bayesian approaches to static (i.e., time-invariant) parameter estimation. In the sequel, we describe the solution to the problem of sequential state estimation in line
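The recursive importance sampling described above is most often met as the bootstrap particle filter. A minimal sketch for an assumed linear-Gaussian toy model (where the exact MMSE filter would be a Kalman filter, making this purely illustrative) is:

```python
import math, random

random.seed(3)

def particle_filter(obs, n_part=500, q=1.0, r=1.0):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + v_t, y_t = x_t + w_t."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n_part)]
    estimates = []
    for y in obs:
        # Propagate through the state equation (proposal = prior).
        parts = [0.9 * x + random.gauss(0.0, math.sqrt(q)) for x in parts]
        # Weight by the Gaussian observation likelihood.
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        tot = sum(w)
        w = [wi / tot for wi in w] if tot > 0 else [1.0 / n_part] * n_part
        # MMSE estimate: the posterior mean.
        estimates.append(sum(wi * x for wi, x in zip(w, parts)))
        # Multinomial resampling to fight weight degeneracy.
        parts = random.choices(parts, weights=w, k=n_part)
    return estimates

# Track a simulated trajectory.
x, obs, truth = 0.0, [], []
for _ in range(50):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    truth.append(x)
    obs.append(x + random.gauss(0.0, 1.0))
est = particle_filter(obs)
rmse = math.sqrt(sum((e - t) ** 2 for e, t in zip(est, truth)) / 50)
```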
Markov chain Monte Carlo methods in directed graphical models
DEFF Research Database (Denmark)
Højbjerre, Malene
Directed graphical models present data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models...... tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman, and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both...
An energy transfer method for 4D Monte Carlo dose calculation
Siebers, Jeffrey V; Zhong, Hualiang
2008-01-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: Particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy ...
TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging
International Nuclear Information System (INIS)
Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation speeds on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual
Constrained-Realization Monte-Carlo Method for Hypothesis Testing
Theiler, J; Theiler, James; Prichard, Dean
1996-01-01
We compare two theoretically distinct approaches to generating artificial (or ``surrogate'') data for testing hypotheses about a given data set. The first and more straightforward approach is to fit a single ``best'' model to the original data, and then to generate surrogate data sets that are ``typical realizations'' of that model. The second approach concentrates not on the model but directly on the original data; it attempts to constrain the surrogate data sets so that they exactly agree with the original data for a specified set of sample statistics. Examples of these two approaches are provided for two simple cases: a test for deviations from a gaussian distribution, and a test for serial dependence in a time series. Additionally, we consider tests for nonlinearity in time series based on a Fourier transform (FT) method and on more conventional autoregressive moving-average (ARMA) fits to the data. The comparative performance of hypothesis testing schemes based on these two approaches is found to depend ...
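The Fourier-transform (FT) surrogate approach mentioned above can be sketched directly: randomize the phases of the discrete Fourier transform while keeping the magnitudes, so every surrogate exactly matches the original power spectrum (and hence mean and variance) while destroying any nonlinear structure. The O(n²) DFT below is for illustration only; a real implementation would use an FFT.

```python
import cmath, math, random

random.seed(4)

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                 for k in range(n)) / n).real
            for t in range(n)]

def ft_surrogate(x):
    """Phase-randomized surrogate: same power spectrum, random phases."""
    n = len(x)
    X = dft(x)
    S = [0j] * n
    S[0] = X[0]                      # keep the mean
    for k in range(1, (n + 1) // 2):
        phi = random.uniform(0.0, 2.0 * math.pi)
        S[k] = abs(X[k]) * cmath.exp(1j * phi)
        S[n - k] = S[k].conjugate()  # conjugate symmetry keeps output real
    if n % 2 == 0:
        S[n // 2] = X[n // 2]        # Nyquist bin must stay real
    return idft(S)

x = [math.sin(0.3 * t) + random.gauss(0.0, 0.2) for t in range(64)]
s = ft_surrogate(x)

def var(v):
    m = sum(v) / len(v)
    return sum((vi - m) ** 2 for vi in v) / len(v)
```

By Parseval's theorem the surrogate's variance equals the original's up to roundoff, which is exactly the "constrained realization" property the paper contrasts with model-based surrogates.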
International Nuclear Information System (INIS)
Highlights: • The subdivision combines both advantages of uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key aspects of dominating Monte Carlo particle transport simulation performance for large-scale whole reactor models. In such cases, spatial subdivision is an easily-established and high-potential method to improve the run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally or partially occupied, or not occupied at all, by CSG objects. The most important point is that, at each stage of subdivision, a conception of quality factor based on a cost estimation function is derived to evaluate the qualities of the subdivision schemes. Only the scheme with optimal quality factor will be chosen as the final subdivision strategy for generating the grid model. Eventually, the model built with the optimal quality factor will be efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by FDS Team. Testing cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly when using the new method, even as cases reached whole reactor core model sizes
A Hamiltonian Monte–Carlo method for Bayesian inference of supermassive black hole binaries
International Nuclear Information System (INIS)
We investigate the use of a Hamiltonian Monte–Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov Chain Monte–Carlo (MCMC) methods, such as Metropolis–Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte–Carlo treats the inverse likelihood surface as a ‘gravitational potential’ and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms due to the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10^6 iteration chain from ∼10^9 to ∼10^6. The result is an implementation of the Hamiltonian Monte–Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC. (paper)
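A minimal Hamiltonian Monte–Carlo sketch — leapfrog integration of Hamilton's equations followed by a Metropolis accept/reject on the total energy — for a one-dimensional standard-normal target, with a hand-coded gradient rather than the fitted gradients of the paper:

```python
import math, random

random.seed(5)

def hmc_draws(grad_logp, logp, n=2000, eps=0.2, steps=10):
    """Hamiltonian Monte Carlo with leapfrog integration."""
    q, out = 0.0, []
    for _ in range(n):
        p = random.gauss(0.0, 1.0)              # fresh momentum draw
        q_new, p_new = q, p
        p_new += 0.5 * eps * grad_logp(q_new)   # half step in momentum
        for _ in range(steps - 1):
            q_new += eps * p_new                # full step in position
            p_new += eps * grad_logp(q_new)     # full step in momentum
        q_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(q_new)   # final half step
        # Metropolis accept/reject on the change in total energy H.
        h_old = -logp(q) + 0.5 * p * p
        h_new = -logp(q_new) + 0.5 * p_new * p_new
        if math.log(random.random()) < h_old - h_new:
            q = q_new
        out.append(q)
    return out

# Standard normal target: log p(q) = -q^2/2, gradient -q.
draws = hmc_draws(lambda q: -q, lambda q: -0.5 * q * q)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, acceptance stays high even for the long trajectories that let the chain escape random-walk behaviour — the property the paper exploits.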
Energy Technology Data Exchange (ETDEWEB)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J. [Universite de Franche-Comte, Equipe IRMA/ENISYS/FEMTO-ST, UMR6174 CNRS, 25 - Montbeliard (France); Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M. [Universite de Franche-Comte, Equipe AND/LIFC, 90 - Belfort (France)
2009-01-15
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out; solutions not only based on the enhancement of computer power, or on the 'biasing' used for relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural network, C.B.R. - case-based reasoning - or other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only Radiation Protection or Medical Dosimetry. (authors)
MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
Full Text Available The article is an example of using the simulation software @Risk, designed for simulation in a Microsoft Excel spreadsheet, and demonstrates the possibility of its usage as a universal method of solving problems. Simulation is experimenting with computer models based on the real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general represents the modelled system by using mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance investment costs) and random inputs (for instance demand), which are transformed by the model into outputs (for instance the mean value of profit). In a simulation experiment the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong to the quantitative tools which can be used as a support for decision making.
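A spreadsheet-free sketch of the same idea: a controlled input (investment cost) and a random input (demand) are transformed into an output (profit), whose distribution is then estimated by repeated sampling. All numbers below are invented for illustration.

```python
import random, statistics

random.seed(6)

def simulate_profit(n=20000):
    """Monte Carlo over random demand with fixed controlled inputs."""
    invest = 10000.0      # controlled input: investment cost
    margin = 12.0         # controlled input: profit per unit sold
    profits = []
    for _ in range(n):
        # Random input: normally distributed demand, truncated at zero.
        demand = max(0.0, random.gauss(1000.0, 200.0))
        profits.append(margin * demand - invest)
    return profits

p = simulate_profit()
mean_profit = statistics.fmean(p)     # output: mean value of profit
```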
Survival in severe alpha-1-antitrypsin deficiency (PiZZ
Directory of Open Access Journals (Sweden)
Nilsson Jan-Åke
2010-04-01
Full Text Available Abstract Background Previous studies of the natural history of alpha-1-antitrypsin (AAT) deficiency are mostly based on highly selected patients. The aim of this study was to analyse the mortality of PiZZ individuals. Methods Data from 1339 adult PiZZ individuals from the Swedish National AAT Deficiency Registry, followed from 1991 to 2008, were analysed. Forty-three percent of these individuals were identified by respiratory symptoms (respiratory cases), 32% by liver diseases and other diseases (non-respiratory cases) and 25% by screening (screened cases). Smoking status was divided into two groups: 737 (55%) smokers and 602 (45%) never-smokers. Results During the follow-up 315 individuals (24%) died. The standardised mortality rate (SMR) for the respiratory cases was 4.70 (95% Confidence Interval (CI) 4.10-5.40), 3.0 (95% CI 2.35-3.70) for the non-respiratory cases and 2.30 (95% CI 1.46-3.46) for the screened cases. The smokers had a higher mortality risk than never-smokers, with an SMR of 4.80 (95% CI 4.20-5.50) for the smokers and 2.80 (95% CI 2.30-3.40) for the never-smokers. The Rate Ratio (RR) was 1.70 (95% CI 1.35-2.20). Also among the screened cases, the mortality risk for the smokers was significantly higher than in the general Swedish population (SMR 3.40, 95% CI 1.98-5.40). Conclusion Smokers with severe AAT deficiency, irrespective of mode of identification, have a significantly higher mortality risk than the general Swedish population.
Energy Technology Data Exchange (ETDEWEB)
Henniger, Juergen; Jakobi, Christoph [Technische Univ. Dresden (Germany). Arbeitsgruppe Strahlungsphysik (ASP)
2015-07-01
From a mathematical point of view, Monte Carlo methods are the numerical solution of certain integrals and integral equations using a random experiment. There are several advantages compared to classical stepwise integration. The time required for computing increases only moderately with increasing dimension for multi-dimensional problems. The only requirements for the integral kernel are its capability of being integrated in the considered integration area and the possibility of an algorithmic representation. These are the important properties of Monte Carlo methods that allow their application in every scientific area. Besides that, Monte Carlo algorithms are often more intuitive than conventional numerical integration methods. The contribution demonstrates these facts using the example of dead time corrections for counting measurements.
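As a concrete instance of the dead-time example, one can simulate a non-paralyzable counter by Monte Carlo and recover the true rate with the standard correction n/(1 - nτ). The rates and dead time below are invented for illustration.

```python
import random

random.seed(7)

def measured_rate(true_rate, tau, t_total=1000.0):
    """Simulate a non-paralyzable detector: events arriving during the
    dead time tau after a registered count are simply lost."""
    t, last, counts = 0.0, -1e9, 0
    while True:
        t += random.expovariate(true_rate)   # Poisson arrival process
        if t > t_total:
            break
        if t - last >= tau:                  # detector is live again
            counts += 1
            last = t
    return counts / t_total

true_rate, tau = 100.0, 0.002                # counts/s, seconds
m = measured_rate(true_rate, tau)            # biased low by dead time
corrected = m / (1.0 - m * tau)              # non-paralyzable correction
```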
Energy Technology Data Exchange (ETDEWEB)
Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)
2015-09-15
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
A Method for Estimating Annual Energy Production Using Monte Carlo Wind Speed Simulation
Directory of Open Access Journals (Sweden)
Birgir Hrafnkelsson
2016-04-01
Full Text Available A novel Monte Carlo (MC) approach is proposed for the simulation of wind speed samples to assess the wind energy production potential of a site. The Monte Carlo approach is based on historical wind speed data and preserves the effect of autocorrelation and seasonality in wind speed observations. No distributional assumptions are made, and this approach is relatively simple in comparison to simulation methods that aim at including the autocorrelation and seasonal effects. Annual energy production (AEP) is simulated by transforming the simulated wind speed values via the power curve of the wind turbine at the site. The proposed Monte Carlo approach is generic and is applicable for all sites provided that a sufficient amount of wind speed data and information on the power curve are available. The simulated AEP values based on the Monte Carlo approach are compared to both actual AEP and to simulated AEP values based on a modified Weibull approach for wind speed simulation using data from the Burfell site in Iceland. The comparison reveals that the simulated AEP values based on the proposed Monte Carlo approach have a distribution that is in close agreement with actual AEP from two test wind turbines at the Burfell site, while the simulated AEP of the Weibull approach is such that the P50 and the scale are substantially lower and the P90 is higher. Thus, the Weibull approach yields AEP that is not in line with the actual variability in AEP, while the Monte Carlo approach gives a realistic estimate of the distribution of AEP.
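A stripped-down sketch of the pipeline: simulate autocorrelated hourly wind speeds (here a simple AR(1) stand-in, not the authors' resampling method), push them through an invented power curve, and collect AEP samples from which quantiles like P50 can be read off.

```python
import random, statistics

random.seed(8)

def power_curve(v):
    """Simplified turbine power curve in kW (illustrative numbers only)."""
    if v < 3.0 or v > 25.0:     # below cut-in or above cut-out
        return 0.0
    if v >= 12.0:               # rated-power region
        return 2000.0
    return 2000.0 * ((v - 3.0) / 9.0) ** 3   # cubic ramp-up

def simulate_aep(n_years=100):
    """AEP samples from AR(1) hourly wind speeds (mean ~8 m/s)."""
    aeps = []
    for _ in range(n_years):
        v, energy = 8.0, 0.0
        for _ in range(8760):                # hours in a year
            v = max(0.0, 0.9 * v + 0.8 + random.gauss(0.0, 1.2))
            energy += power_curve(v)         # kWh for this hour
        aeps.append(energy / 1e6)            # GWh per year
    return aeps

aep = simulate_aep()
p50 = statistics.median(aep)                 # the P50 of the AEP distribution
```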
A recursive Monte Carlo method for estimating importance functions in deep penetration problems
International Nuclear Information System (INIS)
A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems
Shaw, W. T.; Luu, T.; Brickman, N.
2009-01-01
With financial modelling requiring a better understanding of model risk, it is helpful to be able to vary assumptions about underlying probability distributions in an efficient manner, preferably without the noise induced by resampling distributions managed by Monte Carlo methods. This paper presents differential equations and solution methods for the functions of the form Q(x) = F^{-1}(G(x)), where F and G are cumulative distribution functions. Such functions allow the direct recycling of Mont...
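Using only the standard library, the recycling function Q(x) = F^{-1}(G(x)) can be demonstrated with G the standard normal CDF and F the Exp(1) CDF: one fixed set of normal draws is converted into exponential draws with no fresh sampling noise, which is the effect the paper's quantile mechanics make efficient.

```python
import math, random, statistics

random.seed(9)
norm = statistics.NormalDist()   # standard normal, stdlib (Python 3.8+)

def q_normal_to_expo(x):
    """Q(x) = F^{-1}(G(x)) with G = standard normal CDF and
    F = Exp(1) CDF, whose inverse is -log(1 - u)."""
    return -math.log(1.0 - norm.cdf(x))

# One fixed set of normal draws, recycled into exponential draws.
z = [norm.inv_cdf(random.random()) for _ in range(10000)]
e = [q_normal_to_expo(zi) for zi in z]
mean_e = statistics.fmean(e)     # should sit near the Exp(1) mean of 1
```

Changing the distributional assumption (swap in a different F) reuses the same `z`, so comparisons between models are not blurred by independent resampling.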
Zhang, Xiaofeng
2012-03-01
Image formation in fluorescence diffuse optical tomography is critically dependent on construction of the Jacobian matrix. For clinical and preclinical applications, because of the highly heterogeneous characteristics of the medium, Monte Carlo methods are frequently adopted to construct the Jacobian. Conventional adjoint Monte Carlo methods typically compute the Jacobian by multiplying the photon density fields radiated from the source at the excitation wavelength and from the detector at the emission wavelength. Nonetheless, this approach assumes that the source and the detector in Green's function are reciprocal, which is invalid in general. This assumption is particularly questionable in small animal imaging, where the mean free path length of photons is typically only one order of magnitude smaller than the representative dimension of the medium. We propose a new method that does not rely on the reciprocity of the source and the detector by tracing photon propagation entirely from the source to the detector. This method relies on the perturbation Monte Carlo theory to account for the differences in optical properties of the medium at the excitation and the emission wavelengths. Compared to the adjoint methods, the proposed method is more valid in reflecting the physical process of photon transport in diffusive media and is more efficient in constructing the Jacobian matrix for densely sampled configurations.
Search for non-standard model signatures in the WZ/ZZ final state at CDF run II
Energy Technology Data Exchange (ETDEWEB)
Norman, Matthew [Univ. of California, San Diego, CA (United States)
2009-01-01
This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb^{-1} of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.
Estimation of magnetocaloric properties by using Monte Carlo method for AMRR cycle
Arai, R.; Tamura, R.; Fukuda, H.; Li, J.; Saito, A. T.; Kaji, S.; Nakagome, H.; Numazawa, T.
2015-12-01
In order to achieve a wide refrigerating temperature range in magnetic refrigeration, it is effective to layer multiple materials with different Curie temperatures. It is crucial to have a detailed understanding of physical properties of materials to optimize the material selection and the layered structure. In the present study, we discuss methods for estimating a change in physical properties, particularly the Curie temperature when some of the Gd atoms are substituted for non-magnetic elements for material design, based on Gd as a ferromagnetic material which is a typical magnetocaloric material. For this purpose, whilst making calculations using the S=7/2 Ising model and the Monte Carlo method, we made a specific heat measurement and a magnetization measurement of Gd-R alloy (R = Y, Zr) to compare experimental values and calculated ones. The results showed that the magnetic entropy change, specific heat, and Curie temperature can be estimated with good accuracy using the Monte Carlo method.
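The Monte Carlo side of such Curie-temperature studies can be illustrated with a Metropolis simulation of the simpler spin-1/2 two-dimensional Ising model (not the S = 7/2 model of the paper): the magnetization per spin is near 1 well below the critical temperature (Tc ≈ 2.27 in reduced units) and collapses well above it. Lattice size and sweep counts are invented for speed.

```python
import math, random

random.seed(10)

def ising_magnetization(L, T, sweeps=400):
    """Metropolis sampling of the 2D Ising model on an L x L periodic
    lattice; returns the mean absolute magnetization per spin."""
    s = [[1] * L for _ in range(L)]
    beta = 1.0 / T
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j] +
                  s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2 * s[i][j] * nb            # energy cost of flipping
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                s[i][j] = -s[i][j]
        if sweep >= sweeps // 2:             # discard burn-in
            mags.append(abs(sum(sum(row) for row in s)) / (L * L))
    return sum(mags) / len(mags)

m_cold = ising_magnetization(12, 1.5)   # well below Tc: ordered
m_hot = ising_magnetization(12, 4.0)    # well above Tc: disordered
```

Scanning temperature and locating where the magnetization (or the specific-heat peak) collapses is, in miniature, how a Curie temperature is estimated from such simulations.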
Time-step limits for a Monte Carlo Compton-scattering method
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffery D [Los Alamos National Laboratory; Warsa, James S [Los Alamos National Laboratory; Lowrie, Robert B [Los Alamos National Laboratory
2008-01-01
Compton scattering is an important aspect of radiative transfer in high energy density applications. In this process, the frequency and direction of a photon are altered by colliding with a free electron. The change in frequency of a scattered photon results in an energy exchange between the photon and target electron and energy coupling between radiation and matter. Canfield, Howard, and Liang have presented a Monte Carlo method for simulating Compton scattering that models the photon-electron collision kinematics exactly. However, implementing their technique in multiphysics problems that include the effects of radiation-matter energy coupling typically requires evaluating the material temperature at its beginning-of-time-step value. This explicit evaluation can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and present time-step limits that avoid instabilities and nonphysical oscillations by considering a spatially independent, purely scattering radiative-transfer problem. Examining a simplified problem is justified because it isolates the effects of Compton scattering, and existing Monte Carlo techniques can robustly model other physics (such as absorption, emission, sources, and photon streaming). Our analysis begins by simplifying the equations that are solved via Monte Carlo within each time step using the Fokker-Planck approximation. Next, we linearize these approximate equations about an equilibrium solution such that the resulting linearized equations describe perturbations about this equilibrium. We then solve these linearized equations over a time step and determine the corresponding eigenvalues, quantities that can predict the behavior of solutions generated by a Monte Carlo simulation as a function of time-step size and other physical parameters. With these results, we develop our time-step limits. This approach is similar to our recent investigation of time discretizations for the
Energy Technology Data Exchange (ETDEWEB)
Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)
2012-07-01
Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)
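The figure of merit used to compare such sensitivity methods is the standard Monte Carlo one, FOM = 1/(R²T), where R is the relative standard deviation and T the runtime; the timings below are made-up numbers purely to show how a large FOM ratio arises:

```python
def figure_of_merit(rel_std_dev, runtime_s):
    """Monte Carlo figure of merit, FOM = 1 / (R^2 * T): higher is better,
    and FOM is roughly constant for a given method on a given problem."""
    return 1.0 / (rel_std_dev ** 2 * runtime_s)

# Hypothetical timings: a method reaching R = 1% in 1000 s versus one
# reaching the same precision in 10 s has a 100x larger figure of merit.
fom_slow = figure_of_merit(0.01, 1000.0)
fom_fast = figure_of_merit(0.01, 10.0)
```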
Research of Monte Carlo method used in simulation of different maintenance processes
International Nuclear Information System (INIS)
The paper introduces two kinds of Monte Carlo methods used to simulate an equipment life process under the minimal-maintenance condition: a method that generates the intervals of lifetime, and a method of time scale conversion. The paper also analyzes the characteristics and the scope of application of the two methods. Using the concept of a service age reduction factor, a model of the equipment's life process under the incomplete-maintenance condition is established, and a life process simulation method applicable to this situation is developed. (authors)
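A minimal sketch of a life-process simulation with a service age reduction factor, assuming a Kijima-style virtual-age model and Weibull lifetimes (the abstract does not specify the lifetime distribution, so the Weibull choice and all parameter values here are illustrative):

```python
import math
import random

def simulate_failures(beta, eta, age_reduction, n_failures, rng):
    """Successive failure times of one unit under incomplete maintenance.

    Lifetimes follow a Weibull(beta, eta) distribution; after each repair the
    unit's virtual age is multiplied by the service age reduction factor
    `age_reduction` in [0, 1] (0 = good-as-new, 1 = minimal repair). The next
    failure time is drawn by inverting the conditional Weibull survival
    function S(t | v) = exp(-((t/eta)**beta - (v/eta)**beta)).
    """
    virtual_age, clock, times = 0.0, 0.0, []
    for _ in range(n_failures):
        u = 1.0 - rng.random()  # uniform in (0, 1], safe for log()
        next_age = (virtual_age ** beta - eta ** beta * math.log(u)) ** (1.0 / beta)
        clock += next_age - virtual_age
        times.append(clock)
        virtual_age = age_reduction * next_age
    return times

rng = random.Random(42)
times = simulate_failures(beta=2.0, eta=100.0, age_reduction=0.5,
                          n_failures=5, rng=rng)
```

With `age_reduction=1.0` this reduces to minimal repair (the first case the paper treats), and with `0.0` to perfect repair, so the factor interpolates between the two regimes.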
A measurement of the $ZZ \rightarrow \ell^{-}\ell^{+}\nu\bar{\nu}$ production cross section
AUTHOR|(INSPIRE)INSPIRE-00236623
This thesis presents a measurement of the $ZZ$ diboson production cross section using the dataset from proton-proton collisions at $\sqrt{s} = 8$ TeV collected in 2012 by the ATLAS experiment at the Large Hadron Collider at CERN, corresponding to a total integrated luminosity of $20.3$ fb$^{-1}$. The ATLAS detector and its component subsystems are described, with particular focus on the subsystems which have the largest impact on the analysis. Events are selected by requiring one pair of opposite-sign, same-flavour leptons and large missing transverse momentum, consistent with on-shell $ZZ$ production. Two separate measurements of the $pp \rightarrow ZZ$ cross section are made, first in a restricted ("fiducial") phase space in each of the decay modes $ZZ \rightarrow e^{-}e^{+}\nu\bar{\nu}$ and $ZZ \rightarrow \mu^{-}\mu^{+}\nu\bar{\nu}$
Application of the subgroup method to multigroup Monte Carlo calculations
Martin, Nicolas
This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational path between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data involved in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computation; on the other hand, this representation preserves the quality of the physical laws present in the ENDF format. Because of its low computational cost, the multigroup Monte Carlo approach is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) the consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of probability-table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm. (3) The derivation of a model for taking into account anisotropic
Kianoush Fathi Vajargah
2014-01-01
The accuracy of Monte Carlo (MC) and quasi-Monte Carlo (QMC) methods decreases in problems of high dimension. The objective of this study was therefore to present an optimal method to increase the accuracy of the answer; the larger the problem, the greater the accuracy gain. To this end, this study combined the two previous methods, QMC and MC, and presented a hybrid method with efficiency higher than that of either method alone.
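One simple way to hybridize the two approaches (a sketch only; the paper's exact combination scheme is not reproduced here) is to draw the leading coordinates of each integration point from a low-discrepancy Halton sequence and the remaining coordinates from a pseudo-random generator:

```python
import random

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`; one
    coordinate of a Halton low-discrepancy sequence."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

PRIMES = (2, 3, 5, 7, 11, 13)

def hybrid_estimate(f, n, qmc_dims, total_dims, seed=0):
    """Hybrid QMC/MC integration over the unit hypercube: the first
    `qmc_dims` coordinates come from a Halton sequence, the rest from a
    pseudo-random generator."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(1, n + 1):
        x = [halton(i, PRIMES[d]) for d in range(qmc_dims)]
        x += [rng.random() for _ in range(total_dims - qmc_dims)]
        total += f(x)
    return total / n

# The integral of sum(x) over the 10-dimensional unit cube is exactly 5.0.
approx = hybrid_estimate(sum, n=2000, qmc_dims=4, total_dims=10)
```

The QMC coordinates converge faster than the MC rate in the dimensions where low-discrepancy points help most, while the MC coordinates keep the scheme workable in high dimension.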
Tang, Jin-Bao; Sun, Xi-Feng; Yang, Hong-Ming; Zhang, Bao-Gang; Li, Zhi-Jian; Lin, Zhi-Juan; Gao, Zhi-Qin
2013-05-01
The site specificity and bioactivity retention of antibodies immobilized on a solid substrate are crucial requirements for solid phase immunoassays. A fusion protein between an immunoglobulin G (IgG)-binding protein (ZZ protein) and a polystyrene-binding peptide (PS-tag) was constructed, and then used to develop a simple method for the oriented immobilization of the ZZ protein onto a PS support by the specific attachment of the PS-tag onto a hydrophilic PS. The orientation of intact IgG was achieved via the interaction of the ZZ protein and the constant fragment (Fc), thereby displaying the Fab fragment for antigen binding. The interaction between rabbit IgG anti-horseradish peroxidase (anti-HRP) and its binding partner HRP was analyzed. Results showed that the oriented ZZ-PS-tag yielded an IgG-binding activity that is fivefold higher than that produced by the passive immobilization of the ZZ protein. The advantage of the proposed immunoassay strategy was demonstrated through an enzyme-linked immunosorbent assay, in which monoclonal mouse anti-goat IgG and HRP-conjugated rabbit F(ab')2 anti-goat IgG were used to detect goat IgG. The ZZ-PS-tag presented a tenfold higher sensitivity and a wider linear range than did the passively immobilized ZZ protein. The proposed approach may be an attractive strategy for a broad range of applications involving the oriented immobilization of intact IgGs onto PS supports, in which only one type of phi-PS (ZZ-PS-tag) surface is used. PMID:23601284
International Nuclear Information System (INIS)
A zero-variance (ZV) Monte Carlo transport method is a theoretical construct that, if it could be implemented on a practical computer, would produce the exact result after any number of histories. Unfortunately, ZV methods are impractical; to implement them, one must have complete knowledge of a certain adjoint flux, and acquiring this knowledge is an infinitely greater task than solving the original criticality or source-detector problem. (In fact, the adjoint flux itself yields the desired result, with no need of a Monte Carlo simulation.) Nevertheless, ZV methods are of practical interest because it is possible to approximate them in ways that yield efficient variance-reduction schemes. Such implementations must be done carefully; for example, one must not change the mean of the final answer. The goal of variance reduction is to estimate the true mean with greater efficiency. In this paper, we describe new ZV methods for Monte Carlo criticality and source-detector problems. These methods have the same requirements (and disadvantages) as described earlier. However, their implementation is very different. Thus, the concept of approximating them to obtain practical variance-reduction schemes opens new possibilities. In previous ZV methods, (a) a single characteristic parameter (the k-eigenvalue or a detector response) of a forward transport problem is sought; (b) the exact solution of an adjoint problem must be known for all points in phase-space; and (c) a non-analog process, defined in terms of the adjoint solution, transports forward Monte Carlo particles from the source to the detector (in criticality problems, from the fission region, where a generation n fission neutron is born, back to the fission region, where generation n+1 fission neutrons are born). In the non-analog transport process, Monte Carlo particles (a) are born in the source region with weight equal to the desired characteristic parameter, (b) move through the system by an altered transport
Energy Technology Data Exchange (ETDEWEB)
Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1994-12-31
The application of a Monte Carlo method to the study of electron and photon beam transport in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculation for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.
Multilevel Monte Carlo methods for computing failure probability of porous media flow systems
Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.
2016-08-01
We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider the sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve to the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and the multilevel Monte Carlo method. We also identify issues in the process of practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, in combination with both standard and multilevel Monte Carlo.
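The multilevel structure referred to above rests on the telescoping identity E[Q_L] = E[Q_0] + Σ E[Q_l − Q_{l−1}]. The sketch below demonstrates that identity on a toy failure-probability sampler (the level-dependent bias is a stand-in for a real discretized flow solve; everything here is illustrative, not the paper's model):

```python
import random

def mlmc_estimate(sampler, num_levels, samples_per_level, seed=0):
    """Multilevel Monte Carlo telescoping estimator. `sampler(level, rng)`
    evaluates the quantity of interest at a given resolution level; fine and
    coarse evaluations in a correction term are coupled by reusing the same
    random seed (same input realization, different resolution)."""
    total = 0.0
    for level in range(num_levels):
        n = samples_per_level[level]
        acc = 0.0
        for i in range(n):
            s = seed * 7_368_787 + level * 10_000_019 + i
            fine = sampler(level, random.Random(s))
            coarse = sampler(level - 1, random.Random(s)) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

def failure_indicator(level, rng):
    """Toy failure-probability sampler: the discretization bias shrinks as
    the level is refined, mimicking a grid-refined flow solve."""
    x = rng.random()             # uncertain input realization
    bias = 0.1 / 2 ** level      # finer level -> smaller bias
    return 1.0 if x + bias < 0.5 else 0.0

est = mlmc_estimate(failure_indicator, num_levels=4,
                    samples_per_level=[8000, 4000, 2000, 1000])
```

Because the coupled fine/coarse indicators differ only on a narrow slice of input space, the correction terms have small variance and can be estimated with far fewer (expensive) fine-level samples — the mechanism that makes MLMC cheaper than standard MC at the same accuracy.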
A step beyond the Monte Carlo method in economics: Application of multivariate normal distribution
Kabaivanov, S.; Malechkova, A.; Marchev, A.; Milev, M.; Markovska, V.; Nikolova, K.
2015-11-01
In this paper we discuss the numerical algorithm of Milev-Tagliani [25] used for pricing of discrete double barrier options. The problem can be reduced to accurate valuation of an n-dimensional path integral with the probability density function of a multivariate normal distribution. The efficient solution of this problem with the Milev-Tagliani algorithm is a step beyond the classical application of Monte Carlo for option pricing. We explore continuous and discrete monitoring of asset path pricing, compare the error of frequently applied quantitative methods such as the Monte Carlo method, and finally analyze the accuracy of the Milev-Tagliani algorithm by presenting the research and results of Hong, Lee and Li [16].
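For context, the classical Monte Carlo baseline that such path-integral algorithms are measured against can be sketched as follows (a plain GBM simulation of a discretely monitored double knock-out call; the contract numbers are hypothetical):

```python
import math
import random

def mc_double_barrier_call(s0, strike, lower, upper, r, sigma, t,
                           n_monitor, n_paths, seed=0):
    """Plain Monte Carlo price of a discretely monitored double knock-out
    call under geometric Brownian motion: the payoff max(S_T - K, 0) is lost
    if the asset lies outside (lower, upper) at any monitoring date."""
    rng = random.Random(seed)
    dt = t / n_monitor
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s, alive = s0, True
        for _ in range(n_monitor):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if not (lower < s < upper):
                alive = False   # knocked out at a monitoring date
                break
        if alive:
            payoff_sum += max(s - strike, 0.0)
    return math.exp(-r * t) * payoff_sum / n_paths

# With barriers pushed far away the price approaches the vanilla call value;
# narrow barriers knock out most paths and the price collapses toward zero.
p_wide = mc_double_barrier_call(100, 100, 1e-6, 1e6, 0.05, 0.2, 1.0, 125, 4000)
p_narrow = mc_double_barrier_call(100, 100, 90, 110, 0.05, 0.2, 1.0, 125, 4000)
```

The slow O(n^-1/2) convergence of this estimator, especially with many monitoring dates, is precisely what motivates deterministic path-integral schemes such as the Milev-Tagliani algorithm.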
Polarization imaging of multiply-scattered radiation based on integral-vector Monte Carlo method
International Nuclear Information System (INIS)
A new integral-vector Monte Carlo method (IVMCM) is developed to analyze the transfer of polarized radiation in 3D multiple scattering particle-laden media. The method is based on a 'successive order of scattering series' expression of the integral formulation of the vector radiative transfer equation (VRTE) for application of efficient statistical tools to improve convergence of Monte Carlo calculations of integrals. After validation against reference results in plane-parallel layer backscattering configurations, the model is applied to a cubic container filled with uniformly distributed monodispersed particles and irradiated by a monochromatic narrow collimated beam. 2D lateral images of effective Mueller matrix elements are calculated in the case of spherical and fractal aggregate particles. Detailed analysis of multiple scattering regimes, which are very similar for unpolarized radiation transfer, allows identifying the sensitivity of polarization imaging to size and morphology.
Monte Carlo Methods Development and Applications in Conformational Sampling of Proteins
DEFF Research Database (Denmark)
Tian, Pengfei
… sufficient to provide an accurate structural and dynamical description of certain properties of proteins; (2) it is difficult to obtain correct statistical weights of the samples generated, due to lack of equilibrium sampling. In this dissertation I present several new methodologies based on Monte Carlo sampling methods to address these two problems. First of all, a novel technique has been developed for reliably estimating diffusion coefficients for use in the enhanced sampling of molecular simulations. A broad applicability of this method is illustrated by studying various simulation problems such as protein folding and aggregation. Second, by combining Monte Carlo sampling with a flexible probabilistic model of NMR chemical shifts, a series of simulation strategies are developed to accelerate the equilibrium sampling of free energy landscapes of proteins. Finally, a novel approach is presented to …
Bratchenko, M I
2001-01-01
A novel method of Monte Carlo simulation of small-angle reflection of charged particles from solid surfaces has been developed. Instead of atomic-scale simulation of particle-surface collisions, the method treats the reflection macroscopically as a 'condensed history' event. Statistical parameters of reflection are sampled from theoretical distributions over energy and angles. An efficient sampling algorithm based on a combination of the inverse probability distribution function method and the rejection method has been proposed and tested. As an example of application, the results of statistical modeling of particle flux enhancement near the bottom of a vertical Wehner cone are presented and compared with a simple geometrical model of specular reflection.
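The combination of inverse-CDF and rejection sampling mentioned above can be sketched on a stand-in density (the actual reflection distributions of the paper are not reproduced; f(x) ∝ x·exp(−x) on [0, 8] is chosen only because it admits an exponential envelope invertible in closed form):

```python
import math
import random

def sample_angle_variable(rng):
    """Sample from f(x) ~ x*exp(-x) on [0, 8] by combining the inverse-CDF
    method for a truncated-exponential envelope g(x) ~ exp(-x) with a
    rejection step that accounts for the remaining factor x."""
    c = 1.0 - math.exp(-8.0)
    while True:
        # Inverse CDF of the envelope truncated to [0, 8]
        x = -math.log(1.0 - c * rng.random())
        # Accept with probability f(x) / (M * g(x)) = x / 8, since x <= 8
        if rng.random() * 8.0 < x:
            return x

rng = random.Random(7)
samples = [sample_angle_variable(rng) for _ in range(5000)]
mean = sum(samples) / len(samples)  # near 2, the Gamma(2,1) mean
```

Inversion handles the cheap, dominant shape of the distribution; the rejection step corrects for the part that cannot be inverted analytically — the same division of labor as in the proposed algorithm.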
A vectorized Monte Carlo method with pseudo-scattering for neutron transport analysis
International Nuclear Information System (INIS)
A vectorized Monte Carlo method has been developed for neutron transport analysis on the vector supercomputer HITAC S810. In this method, a multi-particle tracking algorithm is adopted and fundamental processing such as pseudo-random number generation is modified to use the vector processor effectively. The flight analysis of this method is characterized by a new algorithm with pseudo-scattering. This algorithm was verified by comparing its results with those of the conventional one. The method achieved a speed-up factor of 10: about 7 times from vectorization and 1.5 times from the new flight-analysis algorithm.
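Pseudo-scattering makes every flight step have the same computational form regardless of material, which is what lets many particles be tracked in lockstep on a vector processor. A scalar sketch of the closely related delta-tracking idea (the specific HITAC implementation is not reproduced here):

```python
import math
import random

def pseudo_scatter_flight(sigma_at, sigma_max, rng, x_max=1e9):
    """Flight sampling with pseudo-scattering (delta tracking): distances
    are drawn with a majorant cross section sigma_max regardless of the
    local material, and a collision is accepted as real with probability
    sigma(x)/sigma_max; rejected ("pseudo-scattering") events change
    nothing and the flight continues."""
    x = 0.0
    while x < x_max:
        x += -math.log(1.0 - rng.random()) / sigma_max
        if rng.random() * sigma_max < sigma_at(x):
            return x
    return x_max

# Consistency check in a homogeneous medium with sigma = 0.5: accepted
# flight distances must be exponential with mean 1/0.5 = 2, even though
# every step is sampled with the majorant sigma_max = 2.
rng = random.Random(3)
dists = [pseudo_scatter_flight(lambda x: 0.5, 2.0, rng) for _ in range(4000)]
mean_dist = sum(dists) / len(dists)
```

Because no geometry lookups are needed between pseudo-collisions, the inner loop is identical for all particles — the property a vectorized multi-particle tracker exploits.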
Monte-Carlo method for electron transport in a material with an electromagnetic field
International Nuclear Information System (INIS)
The precise mathematical and physical foundations of the Monte-Carlo method for electron transport in an electromagnetic field are established. The condensed-histories method given by M.J. Berger is generalized to the case where an electromagnetic field exists in the material region. The full continuous-slowing-down method and the coupled method of continuous slowing down and catastrophic collisions are compared. Using the approximation of a homogeneous electric field, the thickness of material needed to shield the supra-thermal electrons produced by a laser-irradiated target is evaluated.
A study of orientational disorder in ND4Cl by the reverse Monte Carlo method
International Nuclear Information System (INIS)
The total structure factor for deuterated ammonium chloride measured by neutron diffraction has been modeled using the reverse Monte Carlo method. The results show that the orientational disorder of the ammonium ions consists of a local librational motion with an average angular amplitude α = 17 deg and reorientations of ammonium ions by 90 deg jumps around two-fold axes. Reorientations around three-fold axes have a very low probability
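The reverse Monte Carlo procedure behind such a fit accepts atomic moves that improve agreement with the measured structure factor, and worse moves with probability exp(−Δχ²/2). The sketch below shows that acceptance rule on a deliberately abstract toy problem (all the callables are placeholders; a real RMC run would compare a computed structure factor with the neutron-diffraction one):

```python
import math
import random

def rmc_step(config, chi2_old, target, model_fn, chi2_fn, move_fn, rng):
    """One reverse Monte Carlo step: propose a random move; accept it if it
    improves agreement with the data, or with probability
    exp(-(chi2_new - chi2_old)/2) otherwise."""
    trial = move_fn(config, rng)
    chi2_new = chi2_fn(model_fn(trial), target)
    if chi2_new <= chi2_old or rng.random() < math.exp((chi2_old - chi2_new) / 2.0):
        return trial, chi2_new
    return config, chi2_old

# Toy demonstration: drive the mean of ten numbers toward a "measured" value.
target = 3.0
model_fn = lambda c: sum(c) / len(c)
chi2_fn = lambda m, t: (m - t) ** 2 / 0.01   # measurement sigma = 0.1
def move_fn(c, rng):
    c = list(c)
    c[rng.randrange(len(c))] += rng.uniform(-0.5, 0.5)
    return c

rng = random.Random(1)
config = [0.0] * 10
chi2 = chi2_fn(model_fn(config), target)
for _ in range(3000):
    config, chi2 = rmc_step(config, chi2, target, model_fn, chi2_fn, move_fn, rng)
```

The occasional acceptance of worse configurations is what lets RMC explore the full ensemble of structures consistent with the data rather than converging to a single best fit.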
The massive Schwinger model on the lattice studied via a local Hamiltonian Monte-Carlo method
International Nuclear Information System (INIS)
A local Hamiltonian Monte-Carlo method is used to study the massive Schwinger model. A non-vanishing quark condensate is found and the dependence of the condensate and the string tension on the background field is calculated. These results reproduce well the expected continuum results. We study also the first-order phase transition which separates the weak and strong coupling regimes and find evidence for the behaviour conjectured by Coleman. (author)
Study of the tritium production in a 1-D blanket model with Monte Carlo methods
Cubí Ricart, Álvaro
2015-01-01
In this work a method to collapse a 3D geometry into a one-dimensional model of a fusion reactor blanket is developed and tested. Using this model, neutron and photon fluxes and their energy deposition will be obtained with a Monte Carlo code. These results will allow calculation of the TBR and the thermal power of the blanket, and will be able to be integrated into the AINA code.
Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA
International Nuclear Information System (INIS)
Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by secondary radiation of other elements, so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities, and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments makes it possible to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)
R and D on automatic modeling methods for Monte Carlo codes FLUKA
International Nuclear Information System (INIS)
FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create geometry models before calculation; however, describing the geometry models manually is time-consuming and error-prone. This study developed an automatic modeling method which can automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)
Multilevel markov chain monte carlo method for high-contrast single-phase flow problems
Efendiev, Yalchin R.
2014-12-19
In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
Calculation of neutron cross-sections in the unresolved resonance region by the Monte Carlo method
International Nuclear Information System (INIS)
The Monte-Carlo method is used to produce neutron cross-sections and cross-section probability functions in the unresolved energy region, and a corresponding Fortran programme (ONERS) is described. Using average resonance parameters, the code generates statistical distributions of level widths and spacings between resonances for s- and p-waves. Some neutron cross-sections for U238 and U235 are shown as examples.
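The statistical sampling such codes perform from average resonance parameters can be sketched as follows, assuming the standard choices of a Wigner-surmise spacing distribution and Porter-Thomas widths (the abstract does not state which distributions ONERS uses, so these are the conventional assumptions):

```python
import math
import random

def sample_ladder(mean_spacing, mean_width, n_levels, rng):
    """Statistical resonance ladder for the unresolved region: level
    spacings from the Wigner surmise P(s) = (pi*s/2)*exp(-pi*s^2/4)
    via its inverse CDF s = sqrt(-4*ln(u)/pi) (unit mean spacing), and
    neutron widths from a Porter-Thomas distribution (chi-squared with
    one degree of freedom)."""
    energies, widths, e = [], [], 0.0
    for _ in range(n_levels):
        s = math.sqrt(-4.0 * math.log(1.0 - rng.random()) / math.pi)
        e += s * mean_spacing
        energies.append(e)
        widths.append(mean_width * rng.gauss(0.0, 1.0) ** 2)  # chi^2, 1 dof
    return energies, widths

rng = random.Random(5)
energies, widths = sample_ladder(mean_spacing=20.0, mean_width=0.05,
                                 n_levels=4000, rng=rng)
```

Averaging cross sections reconstructed from many such sampled ladders yields the cross-section probability distributions that characterize the unresolved resonance region.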
International Nuclear Information System (INIS)
Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend to be less effective in Monte Carlo calculations which require the minimization of the variance everywhere. Recently, "Local" Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters which will minimize the maximum variance in a Monte Carlo transport calculation.
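The exponential transform underlying the LET can be sketched in its simplest one-dimensional form: flight distances are drawn from a stretched exponential and the particle weight carries the ratio of the analog to the biased density, which keeps the estimator unbiased (the local, space- and energy-dependent parameters of the LET are not modeled here):

```python
import math
import random

def biased_flight(sigma_t, p, rng):
    """Exponential-transform flight sampling: draw the distance from the
    stretched density sigma_b * exp(-sigma_b * x) with sigma_b =
    sigma_t * (1 - p), p in [0, 1) being the transform parameter, and
    return the unbiasing weight (analog density / biased density)."""
    sigma_b = sigma_t * (1.0 - p)
    x = -math.log(1.0 - rng.random()) / sigma_b
    weight = (sigma_t * math.exp(-sigma_t * x)) / (sigma_b * math.exp(-sigma_b * x))
    return x, weight

# Unbiasedness check: the weighted sample mean of the flight distance must
# reproduce the analog mean free path 1/sigma_t despite the biased sampling.
rng = random.Random(11)
n = 20000
est = sum(x * w for x, w in (biased_flight(1.0, 0.5, rng) for _ in range(n))) / n
```

Stretching the path (p > 0) pushes particles deeper into the problem, which is exactly the mechanism used for deep-penetration and detector-region problems; the LET's contribution is choosing p locally from a low-order adjoint (or, in this paper, forward) solution.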
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlo codes can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo methods are particularly suited is the study of secondary radiation produced as albedo in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the
Quantifying and reducing uncertainty in life cycle assessment using the Bayesian Monte Carlo method
International Nuclear Information System (INIS)
The traditional life cycle assessment (LCA) does not perform quantitative uncertainty analysis. However, without characterizing the associated uncertainty, the reliability of assessment results cannot be understood or ascertained. In this study, the Bayesian method, in combination with the Monte Carlo technique, is used to quantify and update the uncertainty in LCA results. A case study of applying the method to comparison of alternative waste treatment options in terms of global warming potential due to greenhouse gas emissions is presented. In the case study, the prior distributions of the parameters used for estimating emission inventory and environmental impact in LCA were based on the expert judgment from the intergovernmental panel on climate change (IPCC) guideline and were subsequently updated using the likelihood distributions resulting from both national statistic and site-specific data. The posterior uncertainty distribution of the LCA results was generated using Monte Carlo simulations with posterior parameter probability distributions. The results indicated that the incorporation of quantitative uncertainty analysis into LCA revealed more information than the deterministic LCA method, and the resulting decision may thus be different. In addition, in combination with the Monte Carlo simulation, calculations of correlation coefficients facilitated the identification of important parameters that had major influence to LCA results. Finally, by using national statistic data and site-specific information to update the prior uncertainty distribution, the resultant uncertainty associated with the LCA results could be reduced. A better informed decision can therefore be made based on the clearer and more complete comparison of options
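The prior-update-propagate sequence described above can be sketched with a conjugate normal model (all numbers below are hypothetical stand-ins for an IPCC-style prior and site-specific measurements of an emission factor; the study's actual distributions and impact model are not reproduced):

```python
import random

def normal_update(prior_mean, prior_var, data, data_var):
    """Conjugate Bayesian update of a normally distributed parameter
    (e.g. an emission factor): combine the prior with the likelihood of
    the observed site-specific data (known observation variance)."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / data_var)
    return post_mean, post_var

# Hypothetical prior (guideline-based) updated with four site measurements
# of an emission factor in kg CO2-eq per tonne of waste treated.
post_mean, post_var = normal_update(prior_mean=25.0, prior_var=16.0,
                                    data=[21.0, 22.5, 20.8, 22.1], data_var=4.0)

# Monte Carlo propagation of the posterior uncertainty through a (trivial)
# impact model: GWP = emission_factor * treated_mass.
rng = random.Random(0)
treated_mass = 100.0  # tonnes, assumed known exactly here
gwp = [treated_mass * rng.gauss(post_mean, post_var ** 0.5) for _ in range(5000)]
gwp_mean = sum(gwp) / len(gwp)
```

The posterior variance is necessarily smaller than the prior variance, which is the quantitative sense in which updating with site-specific data "reduces" the uncertainty carried into the LCA result.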
The discrete angle technique combined with the subgroup Monte Carlo method
International Nuclear Information System (INIS)
We are investigating the use of the discrete angle technique for taking anisotropic scattering into account in a subgroup (or multiband) Monte Carlo algorithm implemented in the DRAGON lattice code. In order to use the same input library data already available for deterministic methods, only Legendre moments of the isotopic transfer cross sections are available, typically computed by the GROUPR module of NJOY. However, the direct use of these data in a Monte Carlo algorithm is impractical, due to the occurrence of negative parts in these distributions. To deal with this limitation, Legendre expansions are consistently converted by a moment method into sums of Dirac-delta distributions. These probability tables can then be used directly to sample the scattering cosine. In the proposed approach, the same moment method is used to compute probability tables for both the scattering angle and the resonant cross sections. The applicability of the moment approach must however be thoroughly investigated, due to the presence of incoherent Legendre moments. When Dirac angles cannot be computed, the discrete angle technique is replaced by legacy semi-analytic methods. We provide numerical examples to illustrate the methodology by comparison with SN and legacy Monte Carlo codes on several benchmarks from the ICSBEP. (author)
Investigation of neutral particle leakages in lacunary media to speed up Monte Carlo methods
International Nuclear Information System (INIS)
This research aims at optimizing the calculation methods used for long-duration penetration problems in radiation protection when vacuum media are involved. After recalling the main notions of transport theory, the numerical methods used to solve the transport equation, the fundamentals of the Monte Carlo method, and the problems related to long-duration penetration, the report focuses on the problem of leakage through vacuum. It describes the biasing introduced in the TRIPOLI code and reports the search for an optimal bias in cylindrical configurations using the JANUS code. It then reports the application to a simple straight tube.
Ebru Ermis, Elif; Celiktas, Cuneyt
2015-07-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close accordance with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
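Once a mass attenuation coefficient μ/ρ is known (whether from FLUKA or NIST tables), it enters the Beer-Lambert law. A minimal sketch, with an illustrative (not NIST) coefficient:

```python
import math

def transmitted_fraction(mass_atten_cm2_per_g, density_g_per_cm3, thickness_cm):
    """Beer-Lambert attenuation for a narrow gamma beam:
    I/I0 = exp(-(mu/rho) * rho * x), with the mass attenuation
    coefficient mu/rho in cm^2/g."""
    return math.exp(-mass_atten_cm2_per_g * density_g_per_cm3 * thickness_cm)

# Illustrative numbers only: mu/rho = 0.06 cm^2/g at some gamma energy,
# NaI density 3.67 g/cm^3, a 1-inch (2.54 cm) crystal.
f = transmitted_fraction(0.06, 3.67, 2.54)
```

Comparing such transmitted fractions computed from FLUKA-derived versus NIST coefficients is one direct way to express the level of agreement the study reports.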
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005-01-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
Application of Macro Response Monte Carlo method for electron spectrum simulation
International Nuclear Information System (INIS)
During the past years, several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the computation time of electron transport for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase-space input for other simulation programs. The technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the primary electron's final state, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against Geant4 spectra. The results showed an agreement better than 6% in the spectral peak energies and that the MRMC code is up to 12 times faster than Geant4 simulations.
International Nuclear Information System (INIS)
Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and the contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source, and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF correction method was also evaluated by simulation studies for (1) a myocardial source, (2) a uniform source in the lungs and (3) a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)
International Nuclear Information System (INIS)
A Monte Carlo method for calculating the distribution of angular deflections of fast charged particles passing through a thin layer of matter is described on the basis of the Moliere theory of multiple scattering. The distribution of angular deflections obtained from the calculations is compared with the Moliere theory. The proposed method is useful for calculating electron transport in matter by the Monte Carlo method. (author)
Search for WZ+ZZ Production with Missing Transverse Energy and b Jets at CDF
Energy Technology Data Exchange (ETDEWEB)
Poprocki, Stephen [Cornell Univ., Ithaca, NY (United States)
2013-01-01
Observation of diboson processes at hadron colliders is an important milestone on the road to discovery or exclusion of the standard model Higgs boson. Since the decay processes are closely related, methods, tools, and insights obtained through the more common diboson decays can be incorporated into low-mass standard model Higgs searches. The combined WW + WZ + ZZ diboson cross section has been measured at the Tevatron in hadronic decay modes. In this thesis we take this one step closer to the Higgs by measuring just the WZ + ZZ cross section, exploiting a novel artificial neural network based b-jet tagger to separate the WW background. The number of signal events is extracted from data events with large missing transverse energy using a simultaneous fit in events with and without two jets consistent with B hadron decays. Using 5.2 fb^{-1} of data from the CDF II detector, we measure a cross section of σ(p$\bar{p}$ → WZ, ZZ) = 5.8^{+3.6}_{-3.0} pb, in agreement with the standard model.
Simulation of clinical X-ray tube using the Monte Carlo Method - PENELOPE code
International Nuclear Information System (INIS)
Breast cancer is the most common type of cancer among women. The main strategy for increasing the long-term survival of patients with this disease is early detection of the tumor, and mammography is the most appropriate method for this purpose. Despite the reduction in cancer deaths, there is great concern about the damage caused by ionizing radiation to breast tissue. To evaluate these quantities, a mammography unit was modeled and depth spectra were obtained using the Monte Carlo method (PENELOPE code). The average energies of the spectra in depth and the half-value layer of the mammography output spectrum were obtained. (author)
International Nuclear Information System (INIS)
The perturbation source method may be a powerful Monte Carlo means of calculating small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out its peculiarities, discuss the dependence on certain transport games and on the generation procedures of the auxiliary particles, and draw conclusions for improving this method.
Comparing Subspace Methods for Closed Loop Subspace System Identification by Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
David Di Ruscio
2009-10-01
Full Text Available A novel, promising bootstrap subspace system identification algorithm for both open and closed loop systems is presented. An outline of the SSARX algorithm by Jansson (2003) is given and a modified SSARX algorithm is presented. Some methods which are consistent for closed loop subspace system identification presented in the literature are discussed and compared to a recently published subspace algorithm which works for both open and closed loop data, i.e., the DSR_e algorithm, as well as to the bootstrap method. Experimental comparisons are performed by Monte Carlo simulations.
Energy Technology Data Exchange (ETDEWEB)
Datema, C.P. E-mail: c.datema@iri.tudelft.nl; Bom, V.R.; Eijk, C.W.E. van
2002-08-01
Experiments were carried out to investigate the possible use of neutron backscattering for the detection of landmines buried in the soil. Several landmines, buried in a sand-pit, were positively identified. A series of Monte Carlo simulations were performed to study the complexity of the neutron backscattering process and to optimize the geometry of a future prototype. The results of these simulations indicate that this method shows great potential for the detection of non-metallic landmines (with a plastic casing), for which so far no reliable method has been found.
A study of potential energy curves from the model space quantum Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)
2015-12-07
We report on the first application of the model space quantum Monte Carlo (MSQMC) to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2} to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs in a wide range obviating problems concerning quasi-degeneracy.
Ermis Elif Ebru; Celiktas Cuneyt
2015-01-01
Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close accordance with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.
International Nuclear Information System (INIS)
Reliability assessments based on probabilistic fracture mechanics can give insight into the effects of changes in design parameters, operational conditions and maintenance schemes. Although they are often not capable of providing absolute reliability values, these methods at least allow the ranking of different solutions among alternatives. Due to the variety of possible solutions for design, operation and maintenance problems, numerous probabilistic reliability assessments have to be carried out. This is a laborious task, especially for crack-containing welds of nuclear pipes subjected to fatigue. The objective of this paper is to compare the Monte Carlo simulation method and a newly developed approximative approach using the Markov process ansatz for this task.
On the Calculation of Reactor Time Constants Using the Monte Carlo Method
International Nuclear Information System (INIS)
Full-core reactor dynamics calculation involves the coupled modelling of thermal hydraulics and the time-dependent behaviour of core neutronics. The reactor time constants include prompt neutron lifetimes, neutron reproduction times, effective delayed neutron fractions and the corresponding decay constants, typically divided into six or eight precursor groups. The calculation of these parameters is traditionally carried out using deterministic lattice transport codes, which also produce the homogenised few-group constants needed for resolving the spatial dependence of neutron flux. In recent years, there has been a growing interest in the production of simulator input parameters using the stochastic Monte Carlo method, which has several advantages over deterministic transport calculation. This paper reviews the methodology used for the calculation of reactor time constants. The calculation techniques are put to practice using two codes, the PSG continuous-energy Monte Carlo reactor physics code and MORA, a new full-core Monte Carlo neutron transport code entirely based on homogenisation. Both codes are being developed at the VTT Technical Research Centre of Finland. The results are compared to other codes and experimental reference data in the CROCUS reactor kinetics benchmark calculation. (author)
Uncertainty Assessment of the Core Thermal-Hydraulic Analysis Using the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Choi, Sun Rock; Yoo, Jae Woon; Hwang, Dae Hyun; Kim, Sang Ji [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2010-10-15
In the core thermal-hydraulic design of a sodium cooled fast reactor, uncertainty factor analysis is a critical issue for assuring safe and reliable operation. Deviations from the nominal values need to be quantitatively considered by statistical thermal design methods. Hot channel factors (HCF) were employed to evaluate the uncertainty in early designs such as the CRBRP. The improved thermal design procedure (ISTP) calculates the overall uncertainty based on the root-sum-square technique and sensitivity analyses of each design parameter. Another way to consider the uncertainties is to use the Monte Carlo method (MCM). In this method, all the input uncertainties are randomly sampled according to their probability density functions and the resulting distribution of the output quantity is analyzed. It directly estimates the uncertainty effects and propagation characteristics for the present thermal-hydraulic model. However, it requires a large computation time to get a reliable result because the accuracy depends on the sampling size. In this paper, the analysis of uncertainty factors using the Monte Carlo method is described. As a benchmark model, the ORNL 19-pin test is employed to validate the current uncertainty analysis method. The thermal-hydraulic calculation is conducted using the MATRA-LMR program, which was developed at KAERI based on the subchannel approach. The results are compared with those of the hot channel factors and the improved thermal design procedure.
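The MCM sampling scheme described above can be sketched as follows. This is a minimal stand-in, not MATRA-LMR: the single-channel energy balance, the nominal values and the standard deviations are all illustrative assumptions.

```python
import random
import statistics

def coolant_exit_temp(t_in, power, flow, cp):
    """Toy single-channel energy balance: T_out = T_in + Q / (m_dot * cp)."""
    return t_in + power / (flow * cp)

rng = random.Random(42)
samples = []
for _ in range(20_000):
    # Each input is sampled from its (assumed) probability density function.
    t_in  = rng.gauss(360.0, 2.0)     # inlet temperature, deg C
    power = rng.gauss(5.0e4, 1.5e3)   # channel power, W
    flow  = rng.gauss(0.25, 0.005)    # mass flow rate, kg/s
    samples.append(coolant_exit_temp(t_in, power, flow, cp=1250.0))

samples.sort()
mean = statistics.fmean(samples)
p95 = samples[int(0.95 * len(samples))]  # empirical 95th percentile
```

The empirical percentile plays the role of the design-limit quantile; an RSS-style procedure would instead combine the individual input sensitivities analytically, which is why the MC result serves as its reference.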
A CNS calculation line based on a Monte-Carlo method
International Nuclear Information System (INIS)
The neutronic design of the moderator cell of a Cold Neutron Source (CNS) involves many different considerations regarding geometry, location, and materials. The decisions taken in this sense affect not only the neutron flux in the source neighbourhood, which can be evaluated by a standard deterministic method, but also the neutron flux values in experimental positions far away from the neutron source. At long distances from the CNS, very time consuming 3D deterministic methods or Monte Carlo transport methods are necessary in order to get accurate figures of standard and typical magnitudes such as average neutron flux, neutron current, angular flux, and luminosity. The Monte Carlo method is a unique and powerful tool to calculate the transport of neutrons and photons. Its use in a bootstrap scheme appears to be an appropriate solution for this type of systems. The use of MCNP as the main neutronic design tool leads to a fast and reliable method to perform calculations in a relatively short time with low statistical errors, if the proper scheme is applied. The design goal is to evaluate the performance of the CNS, its beam tubes and neutron guides, at specific experimental locations in the reactor hall and in the neutron or experimental hall. In this work, the calculation methodology used to design a CNS and its associated Neutron Beam Transport Systems (NBTS), based on the use of the MCNP code, is presented. (author)
Research on Reliability Modelling Method of Machining Center Based on Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
Chuanhai Chen
2013-03-01
Full Text Available The aim of this study is to get the reliability of series system and analyze the reliability of machining center. So a modified method of reliability modelling based on Monte Carlo simulation for series system is proposed. The reliability function, which is built by the classical statistics method based on the assumption that machine tools were repaired as good as new, may be biased in the real case. The reliability functions of subsystems are established respectively and then the reliability model is built according to the reliability block diagram. Then the fitting reliability function of machine tools is established using the failure data of sample generated by Monte Carlo simulation, whose inverse reliability function is solved by the linearization technique based on radial basis function. Finally, an example of the machining center is presented using the proposed method to show its potential application. The analysis results show that the proposed method can provide an accurate reliability model compared with the conventional method.
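The series-system part of such a simulation can be sketched directly: sample a life for each subsystem, take the minimum, and compare the empirical reliability against the analytic product. The Weibull parameters below are illustrative assumptions, not fitted machining-centre failure data.

```python
import math
import random

# Assumed Weibull (shape, scale-in-hours) parameters for three subsystems.
SUBSYSTEMS = [(1.3, 4000.0), (0.9, 6000.0), (1.1, 9000.0)]

def sample_series_life(rng):
    """A series system fails as soon as any subsystem fails, so the system
    life is the minimum of the sampled subsystem lives (inverse-CDF
    sampling of the Weibull distribution)."""
    lives = [scale * (-math.log(rng.random())) ** (1.0 / shape)
             for shape, scale in SUBSYSTEMS]
    return min(lives)

def mc_reliability(t, n=100_000, seed=7):
    rng = random.Random(seed)
    return sum(sample_series_life(rng) > t for _ in range(n)) / n

def exact_reliability(t):
    # For independent subsystems, R_sys(t) is the product of the R_i(t).
    return math.prod(math.exp(-(t / scale) ** shape)
                     for shape, scale in SUBSYSTEMS)

r_mc, r_exact = mc_reliability(1000.0), exact_reliability(1000.0)
```

The MC estimate is what remains available when, as in the paper, the subsystem models are too irregular for a closed-form product.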
International Nuclear Information System (INIS)
The transmission/escape probability (TEP) method for neutral particle transport has recently been introduced and implemented for the calculation of 2-D neutral atom transport in the edge plasma and divertor regions of tokamaks. The results of an evaluation of the accuracy of the approximations made in the calculation of the basic TEP transport parameters are summarized. Comparisons of the TEP and Monte Carlo calculations for model problems using tokamak experimental geometries and for the analysis of measured neutral densities in DIII-D are presented. The TEP calculations are found to agree rather well with Monte Carlo results, for the most part, but the need for a few extensions of the basic TEP transport methodology and for inclusion of molecular effects and a better wall reflection model in the existing code is suggested by the study. (author)
Probing Neutral Gauge Boson Self-interactions in ZZ Production at the Tevatron
Baur, Ulrich
2001-01-01
We present an analysis of ZZ production at the upgraded Fermilab Tevatron for general ZZZ and ZZ-gamma couplings. Achievable limits on these couplings are shown to be a significant improvement over the limits currently obtained by LEP II.
Emulation of higher-order tensors in manifold Monte Carlo methods for Bayesian Inverse Problems
Lan, Shiwei; Bui-Thanh, Tan; Christie, Mike; Girolami, Mark
2016-03-01
The Bayesian approach to Inverse Problems relies predominantly on Markov Chain Monte Carlo methods for posterior inference. The typical nonlinear concentration of posterior measure observed in many such Inverse Problems presents severe challenges to existing simulation based inference methods. Motivated by these challenges the exploitation of local geometric information in the form of covariant gradients, metric tensors, Levi-Civita connections, and local geodesic flows have been introduced to more effectively locally explore the configuration space of the posterior measure. However, obtaining such geometric quantities usually requires extensive computational effort and, despite their effectiveness, limits the applicability of these geometrically-based Monte Carlo methods. In this paper we explore one way to address this issue by the construction of an emulator of the model from which all geometric objects can be obtained in a much more computationally feasible manner. The main concept is to approximate the geometric quantities using a Gaussian Process emulator which is conditioned on a carefully chosen design set of configuration points, which also determines the quality of the emulator. To this end we propose the use of statistical experiment design methods to refine a potentially arbitrarily initialized design online without destroying the convergence of the resulting Markov chain to the desired invariant measure. The practical examples considered in this paper provide a demonstration of the significant improvement possible in terms of computational loading suggesting this is a promising avenue of further development.
International Nuclear Information System (INIS)
We develop a 'Local' Exponential Transform method which distributes the particles nearly uniformly across the system in Monte Carlo transport calculations. An exponential approximation to the continuous transport equation is used in each mesh cell to formulate biasing parameters. The biasing parameters, which resemble those of the conventional exponential transform, tend to produce a uniform sampling of the problem geometry when applied to a forward Monte Carlo calculation, and thus they help to minimize the maximum variance of the flux. Unlike the conventional exponential transform, the biasing parameters are spatially dependent, and are automatically determined from a forward diffusion calculation. We develop two versions of the forward Local Exponential Transform method, one with spatial biasing only, and one with spatial and angular biasing. The method is compared to conventional geometry splitting/Russian roulette for several sample one-group problems in X-Y geometry. The forward Local Exponential Transform method with angular biasing is found to produce better results than geometry splitting/Russian roulette in terms of minimizing the maximum variance of the flux. (orig.)
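The conventional exponential transform that the record builds on can be illustrated in one dimension: stretch the sampled flight paths toward the region of interest and correct each score with an importance-sampling weight. The slab problem, cross section and stretch factor below are illustrative assumptions, not the paper's X-Y test cases.

```python
import math
import random

def transmission(sigma, d, n, stretch=1.0, seed=3):
    """Estimate the uncollided transmission exp(-sigma*d) through a slab.
    stretch < 1 applies the exponential transform: flight paths are sampled
    from the biased density sigma* exp(-sigma* x) with sigma* = stretch*sigma,
    and each transmitted particle carries the weight
    (sigma/sigma*) * exp(-(sigma - sigma*) * x).
    stretch = 1 reduces to the analog (unbiased-density) game."""
    rng = random.Random(seed)
    s_star = stretch * sigma
    total = 0.0
    for _ in range(n):
        x = -math.log(rng.random()) / s_star   # biased free-flight sample
        if x > d:
            total += (sigma / s_star) * math.exp(-(sigma - s_star) * x)
    return total / n

sigma, d = 1.0, 10.0                  # ten mean free paths: a rare event
exact = math.exp(-sigma * d)
analog = transmission(sigma, d, n=50_000)             # most histories score 0
biased = transmission(sigma, d, n=50_000, stretch=0.2)
```

With stretch < 1 the sampled flights are longer, so many histories reach the far side, each with a small weight; the analog run scores only a handful of histories for the same n. The paper's "local" variant replaces the single global stretch parameter with spatially dependent biasing from a forward diffusion calculation.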
Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Shaoyun Ge
2014-01-01
Full Text Available In this paper we treat the reliability assessment problem of an active distribution system with low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process
International Nuclear Information System (INIS)
A computation code for the direct simulation Monte Carlo (DSMC) method was developed in order to analyze atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of the gadolinium atom were calculated for a model with five low-lying states. The calculation results were compared with experiments using laser absorption spectroscopy. Two types of DSMC simulations, differing in the inelastic collision procedure, were carried out. It was concluded that energy transfer is forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author)
Integration of the adjoint gamma quantum transport equation by the Monte Carlo method
International Nuclear Information System (INIS)
A comparative description and analysis of the direct and adjoint algorithms for calculating gamma-quantum transmission in shielding using the Monte Carlo method has been carried out. Adjoint estimations for a number of monoenergetic sources have been considered. A brief description of the ''COMETA'' program for the BESM-6 computer, realizing the direct and adjoint algorithms, is presented. The program is modular, which allows it to be extended by joining new module-units. Results of the solution by the adjoint branch of two analog problems, compared to the analytical data, are presented. These results confirm the high efficiency of the ''COMETA'' program.
Microlens assembly error analysis for light field camera based on Monte Carlo method
Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping
2016-08-01
This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images with the coupling distance errors, movement errors and rotation errors that could appear during microlens installation. By examining these images, sub-aperture images and refocused images, we found that the images present different degrees of blurring and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, obscured images and other distortions that result in unclear refocused images.
International Nuclear Information System (INIS)
We present a hierarchical Bayesian method for estimating the density and size distribution of subclad-flaws in French Pressurized Water Reactor (PWR) vessels. This model takes into account in-service inspection (ISI) data, a flaw size-dependent probability of detection (different functions are considered) with a threshold of detection, and a flaw sizing error distribution (different distributions are considered). The resulting model is identified through a Markov Chain Monte Carlo (MCMC) algorithm. The article includes discussion for choosing the prior distribution parameters and an illustrative application is presented highlighting the model's ability to provide good parameter estimates even when a small number of flaws are observed
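A minimal stand-in for such an MCMC identification reduces the model to a single flaw rate with a known, size-independent probability of detection, sampled with a random-walk Metropolis algorithm. Everything here is illustrative: the likelihood, the Gamma prior and all numbers are assumptions, not the paper's hierarchical model.

```python
import math
import random

def log_post(lam, n_obs, pod, exposure, a=1.0, b=0.1):
    """Log-posterior for a flaw rate lam when only a fraction pod of flaws
    is detected: n_obs ~ Poisson(lam * pod * exposure), Gamma(a, b) prior
    on lam. Additive constants are dropped (they cancel in Metropolis)."""
    if lam <= 0.0:
        return -math.inf
    mu = lam * pod * exposure
    return (n_obs * math.log(mu) - mu) + ((a - 1.0) * math.log(lam) - b * lam)

def metropolis(n_obs=12, pod=0.6, exposure=1.0, steps=20_000, seed=5):
    rng = random.Random(seed)
    lam, lp = 10.0, log_post(10.0, n_obs, pod, exposure)
    chain = []
    for _ in range(steps):
        cand = lam + rng.gauss(0.0, 2.0)          # random-walk proposal
        lp_c = log_post(cand, n_obs, pod, exposure)
        if math.log(rng.random()) < lp_c - lp:    # Metropolis accept rule
            lam, lp = cand, lp_c
        chain.append(lam)
    return chain[5_000:]                          # discard burn-in

chain = metropolis()
post_mean = sum(chain) / len(chain)
```

For this conjugate toy case the posterior is Gamma(a + n_obs, b + pod * exposure), so the chain's mean can be checked against a closed form; the paper's model needs MCMC precisely because no such closed form exists once sizing errors and size-dependent detection enter.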
Percolation conductivity of Penrose tiling by the transfer-matrix Monte Carlo method
Babalievski, Filip V.
1992-03-01
A generalization of the Derrida and Vannimenus transfer-matrix Monte Carlo method has been applied to calculations of the percolation conductivity in a Penrose tiling. Strips with a length of ~10^4 and widths from 3 to 19 have been used. Disregarding the differences for smaller strip widths (up to 7), the results show that the percolative conductivity of a Penrose tiling has a value very close to that of a square lattice. The estimate for the percolation transport exponent once more confirms the universality conjecture for the 0-1 distribution of resistors.
Forward-walking Green's function Monte Carlo method for correlation functions
International Nuclear Information System (INIS)
The forward-walking Green's Function Monte Carlo method is used to compute expectation values for the transverse Ising model in (1 + 1)D, and the results are compared with exact values. The magnetisation Mz and the correlation function pz (n) are computed. The algorithm reproduces the exact results, and convergence for the correlation functions seems almost as rapid as for local observables such as the magnetisation. The results are found to be sensitive to the trial wavefunction, however, especially at the critical point. Copyright (1999) CSIRO Australia
Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy
International Nuclear Information System (INIS)
Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. The treatment time calculation for a prescribed dose is made manually. A Monte-Carlo method Python library written at Madagascar INSTN is experimentally used to calculate the dose distribution in the tumour and around it. A first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the CT-scan image of the patient for individual, more accurate treatment-time calculation at a prescribed dose.
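The innermost step of such a library is sampling photon interaction sites around the source. A hedged sketch of that step alone, reduced to first-interaction distances from an isotropic point source in water (the attenuation coefficient is an assumed water-like value at the Cs-137 energy, and scattering, buildup and energy deposition are all omitted):

```python
import math
import random

def first_interaction_radii(mu, n, seed=11):
    """Toy point-source kernel: the distance from the source to a photon's
    first interaction is sampled from an exponential distribution with
    linear attenuation coefficient mu (1/cm). Illustrative stand-in for a
    full dose-distribution calculation."""
    rng = random.Random(seed)
    return [-math.log(rng.random()) / mu for _ in range(n)]

mu = 0.086                     # assumed water-like value at ~662 keV, 1/cm
radii = first_interaction_radii(mu, n=100_000)
# Fraction of photons interacting within 5 cm of the source;
# analytically this is 1 - exp(-mu * 5).
frac_5cm = sum(r < 5.0 for r in radii) / len(radii)
```

Binning the sampled radii into spherical shells and dividing by shell volume gives the familiar combination of inverse-square and exponential falloff that a dose-distribution curve around a point source follows.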
Directory of Open Access Journals (Sweden)
S.V. Kryuchkov
2015-03-01
Full Text Available The power of elliptically polarized electromagnetic radiation absorbed by band-gap graphene in the presence of a constant magnetic field is calculated. The linewidth of the cyclotron absorption is shown to be non-zero even if scattering is absent. The calculations are performed analytically with the Boltzmann kinetic equation and confirmed numerically with the Monte Carlo method. The dependence of the cyclotron absorption linewidth on temperature, applicable to band-gap graphene in the absence of collisions, is determined analytically.
Kienle, Alwin; Hibst, Raimund
1996-05-01
Treatment of leg telangiectasia with a pulsed laser is investigated theoretically. The Monte Carlo method is used to calculate light propagation and absorption in the epidermis, dermis and the ectatic blood vessel. Calculations are made for different diameters and depths of the vessel in the dermis. In addition, the scattering and absorption coefficients of the dermis are varied. On the basis of the considered damage model, it is found that for vessels with diameters between 0.3 mm and 0.5 mm, wavelengths of about 600 nm are optimal for achieving selective photothermolysis.
DEFF Research Database (Denmark)
Anders, Annett; Nishijima, Kazuyoshi
The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach takes its basis in the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option prices. In Anders & Nishijima (2011) the LSM is adapted for a real-time operational decision problem; however, it is found that further improvement is required with regard to computational efficiency in order to facilitate its use in practice. This is the focus of the present paper. The idea behind the...
Fast Monte Carlo Electron-Photon Transport Method and Application in Accurate Radiotherapy
Hao, Lijuan; Sun, Guangyao; Zheng, Huaqing; Song, Jing; Chen, Zhenping; Li, Gui
2014-06-01
The Monte Carlo (MC) method is the most accurate computational method for dose calculation, but its wide application in clinical accurate radiotherapy is hindered by its poor convergence speed and long computation time. In MC dose calculation research, the main task is to speed up computation while maintaining high precision. The purpose of this paper is to enhance the calculation speed of the MC method for electron-photon transport with high precision and ultimately to reduce the accurate radiotherapy dose calculation time on a normal computer to the level of several hours, which meets the requirement of clinical dose verification. Based on the existing Super Monte Carlo Simulation Program (SuperMC), developed by the FDS Team, a fast MC method for electron-photon coupled transport is presented with focus on two aspects: firstly, by simplifying and optimizing the physical model of electron-photon transport, the calculation speed is increased with a slight reduction of calculation accuracy; secondly, a variety of MC calculation acceleration methods are applied, for example, making use of information obtained in previous calculations to avoid repeated simulation of particles with identical histories, and applying proper variance reduction techniques to accelerate the MC convergence rate. The fast MC method was tested on many simple physical models and clinical cases including nasopharyngeal carcinoma, peripheral lung tumor and cervical carcinoma. The results show that the fast MC method for electron-photon transport is fast enough to meet the requirement of clinical accurate radiotherapy dose verification. Later, the method will be applied to the Accurate/Advanced Radiation Therapy System ARTS as an MC dose verification module.
The Asteroseismology of ZZ Ceti star GD1212
Guifang, Lin; Jie, Su
2015-01-01
The ZZ Ceti star GD 1212 was detected to have 19 independent modes by the two-wheel-controlled Kepler spacecraft in 2014. By asymptotic analysis, we identify most of the pulsation modes. We find two complete triplets and four doublets, which are interpreted as rotation-split modes with $l=1$. Of the other five modes, the four modes $f_{13}$, $f_{15}$, $f_{16}$ and $f_{4}$ are identified as modes with $l=2$, and the mode $f_{7}$ is identified as one with $l=1$. Meanwhile we derive a mean rotation period of $6.65\pm0.21$ h for GD 1212 from the rotational splitting. Using the method of matching the observed periods to theoretical ones, we obtain the best-fitting model with the four parameters $M_{\rm *}/M_{\odot} = 0.775$, $T_{\rm eff} = 11400$ K, $\log (M_{\rm H}/M_{\rm *}) = -5.0$ and $\log (M_{\rm He}/M_{\rm *}) = -2.5$ for GD 1212. We find that, due to the gradient of the C/O abundance in the white dwarf interior, some modes cannot propagate to the stellar interior, which leads...
Numerical simulation of C/O spectroscopy in logging by Monte-Carlo method
International Nuclear Information System (INIS)
A numerical simulation of the C/O spectroscopy ratio in logging by the Monte-Carlo method is made in this paper. Agreeing well with the measured spectra, the simulated spectra can meet the requirements of logging practice. Various C/O ratios affected by different formation oil saturations, borehole oil fractions, casing sizes and concrete ring thicknesses are investigated. In order to achieve accurate results when processing the spectra, this paper presents a new method for unfolding the C/O inelastic gamma spectroscopy, together with an analysis of the spectra using this method; the result agrees with the facts. These rules and this method can be used for calibration and logging interpretation. (authors)
Institute of Scientific and Technical Information of China (English)
LIU Bang-gui; ZHANG Kai-cheng; LI Ying
2007-01-01
The kinetic Monte Carlo (KMC) method, based on transition-state theory and well known for simulating atomic epitaxial growth of thin films and nanostructures, was recently used to simulate the nanoferromagnetism and magnetization dynamics of nanomagnets with giant magnetic anisotropy. We present a brief introduction to the KMC method and show how to reformulate it for nanoscale spin systems. Sufficiently large magnetic anisotropy, observed experimentally and shown theoretically in terms of first-principles calculations, is not only essential to stabilize spin orientation but also necessary for forming the transition-state barriers during spin reversals in spin KMC simulation. We show two applications of the spin KMC method: monatomic spin chains and spin-polarized-current-controlled composite nanomagnets with giant magnetic anisotropy. This spin KMC method can be applied to other anisotropic nanomagnets and composite nanomagnets as long as their magnetic anisotropy energies are large enough.
Differential Monte Carlo method for computing seismogram envelopes and their partial derivatives
Takeuchi, Nozomu
2016-05-01
We present an efficient method that is applicable to waveform inversions of seismogram envelopes for structural parameters describing scattering properties in the Earth. We developed a differential Monte Carlo method that can simultaneously compute synthetic envelopes and their partial derivatives with respect to structural parameters, which greatly reduces the required CPU time. Our method has no theoretical limitations and can be applied to problems with anisotropic scattering in a heterogeneous background medium. The effects of S-wave polarity directions and phase differences between SH and SV components are taken into account. Several numerical examples are presented to show that the intrinsic and scattering attenuation at the depth range of the asthenosphere have different impacts on the observed seismogram envelopes, suggesting that our method can potentially be applied to inversions for scattering properties in the deep Earth.
International Nuclear Information System (INIS)
Measurement of grating pitch by optical diffraction is one of the few methods currently available for establishing traceability to the definition of the meter on the nanoscale; therefore, understanding all aspects of the measurement is imperative for accurate dissemination of the SI meter. A method for evaluating the component of measurement uncertainty associated with coherent scattering in the diffractometer instrument is presented. The model equation for grating pitch calibration by optical diffraction is an example where Monte Carlo (MC) methods can vastly simplify evaluation of measurement uncertainty. This paper includes discussion of the practical aspects of implementing MC methods for evaluation of measurement uncertainty in grating pitch calibration by diffraction. Downloadable open-source software is demonstrated. (technical design note)
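The abstract above notes that MC methods can vastly simplify uncertainty evaluation for the grating pitch model equation. As a hedged illustration (not the paper's software), the sketch below propagates Gaussian input uncertainties through the first-order diffraction relation p = m·λ/sin θ in the style of GUM Supplement 1; the wavelength, angle, and their standard uncertainties are made-up placeholder values.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Assumed model and values for illustration only: first-order diffraction
# pitch p = m * lambda / sin(theta), with Gaussian standard uncertainties.
m = 1
lam = rng.normal(632.8e-9, 0.1e-12, N)         # laser wavelength (m)
theta = rng.normal(np.deg2rad(39.2), 1e-5, N)  # diffraction angle (rad)

p = m * lam / np.sin(theta)                    # sampled pitch values (m)

# Monte Carlo summary: mean, standard uncertainty, 95 % coverage interval
mean = p.mean()
u = p.std(ddof=1)
lo, hi = np.percentile(p, [2.5, 97.5])
print(f"p = {mean*1e9:.4f} nm, u = {u*1e9:.5f} nm, "
      f"95% interval [{lo*1e9:.4f}, {hi*1e9:.4f}] nm")
```

The same propagation would otherwise require the law-of-propagation sensitivity coefficients; the MC route needs only repeated evaluation of the model equation.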
International Nuclear Information System (INIS)
Computed tomography (CT) is one of the most used techniques in medical diagnosis, and its use has become one of the main sources of exposure of the population to ionising radiation. This work concentrates on paediatric patients, since children exhibit higher radiosensitivity than adults. Nowadays, patient doses are estimated using two standard CT dose index (CTDI) phantoms as a reference to calculate CTDI volume (CTDIvol) values. This study aims to improve knowledge about the radiation exposure of children and to better assess the accuracy of the CTDIvol method. The effectiveness of the CTDIvol method for patient dose estimation was investigated through a sensitivity study, taking into account the doses obtained by three methods: measured CTDIvol, CTDIvol values simulated with the Monte Carlo (MC) code MCNPX, and the recently proposed Size-Specific Dose Estimate (SSDE) method. In order to assess organ doses, MC simulations were executed with paediatric voxel phantoms. (authors)
Biases in approximate solution to the criticality problem and alternative Monte Carlo method
International Nuclear Information System (INIS)
The solution to the problem of criticality for the neutron transport equation using the source iteration method is addressed. In particular, the question of convergence of the iterations is examined. It is concluded that slow convergence problems will occur in cases where the optical thickness of the space region in question is large. Furthermore, it is shown that in general the final result of the iterative process is strongly affected by insufficient accuracy of the individual iterations. To avoid these problems, a modified method of solution is suggested. This modification is based on the results of the theory of positive operators. The criticality problem is solved by means of the Monte Carlo method by constructing special random variables so that the differences between the observed and exact results are arbitrarily small. The efficiency of the method is discussed and some numerical results are presented.
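The slow convergence the abstract attributes to optically thick regions can be seen in the source (power) iteration itself. Below is a minimal sketch with an illustrative 2x2 operator standing in for the condensed transport problem; the matrix entries are invented for demonstration, and the convergence rate is governed by the ratio of the operator's two eigenvalues (the dominance ratio).

```python
import numpy as np

# Illustrative stand-in for the condensed transport/fission operator;
# the values are invented, not from the paper.
M = np.array([[0.60, 0.05],
              [0.30, 0.80]])

def k_eff(M, tol=1e-10, max_iter=10_000):
    """Power (source) iteration for the dominant eigenvalue k and mode phi."""
    phi = np.ones(M.shape[0])
    k = 1.0
    for _ in range(max_iter):
        psi = M @ phi
        k_new = np.linalg.norm(psi) / np.linalg.norm(phi)
        phi = psi / np.linalg.norm(psi)
        if abs(k_new - k) < tol:
            return k_new, phi
        k = k_new
    return k, phi

k, phi = k_eff(M)
print(k)  # dominant eigenvalue of M
```

When the two eigenvalues are close (dominance ratio near 1, the analogue of a large optical thickness), the iteration error shrinks only by that ratio per sweep, which is the slow-convergence behaviour the paper addresses.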
International Nuclear Information System (INIS)
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (1) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (2) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59-64Ni and of a heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets. (author)
On the solution to the problem of criticality by an alternative Monte Carlo method
International Nuclear Information System (INIS)
The contribution deals with the solution to the problem of criticality for the neutron transport equation. The problem is transformed into an equivalent one on a suitable set of complex functions, and the existence and uniqueness of its solution are shown. The source iteration method of solution is then discussed. It is pointed out that the final result of the iterative process is strongly affected by the fact that the individual iterations are not computed with sufficient accuracy. To avoid this problem, a modified method of solution is suggested and presented. The modification is based on results of the theory of positive operators; the criticality problem is solved by the Monte Carlo method by constructing a special random process and random variable so that the differences between the results obtained and the exact ones are arbitrarily small. The efficiency of this alternative method is analysed as well. (Author)
A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry
International Nuclear Information System (INIS)
The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aiming to solve the modeling challenges of multi-physics coupling simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Process, was recently developed and integrated into MCAM 5.2. This method can convert in both directions between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are generated and output. When converting from a SuperMC model to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model. The results showed that the method is correct and effective. (author)
Alhassid, Y; Liu, S; Mukherjee, A; Nakada, H
2014-01-01
The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes $^{59-64}$Ni and of a heavy deformed rare-earth nucleus $^{162}$Dy and found them to be in close agreement with various experimental data sets.
Werner, M J; Sornette, D
2009-01-01
In meteorology, engineering and computer sciences, data assimilation is routinely employed as the optimal way to combine noisy observations with prior model information for obtaining better estimates of a state, and thus better forecasts, than can be achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant to the seismic gap hypothesis, models of characteristic earthquakes and to recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating ar...
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-10-01
Full Text Available Data assimilation techniques have received growing attention because of their capability to improve prediction. Among the various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process with the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that considers the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process has been propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. The control state variables for filtering are soil moisture content and overland flow, and streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of its confidence intervals depending on the process noise. The improvement of LRPF forecasts over SIR is found particularly for rapidly varying high flows, owing to the preservation of sample diversity by the kernel, even when particle impoverishment takes place.
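For readers unfamiliar with the SIR filter that the LRPF is compared against, a minimal sketch on a toy 1-D state-space model is given below; the model, noise levels, and particle count are illustrative assumptions, not the WEP hydrologic setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 1-D state-space model (illustrative): x_t = 0.9 x_{t-1} + process noise,
# y_t = x_t + observation noise.
T, N = 50, 1000
q, r = 0.5, 1.0                  # process and observation noise std

# Simulate truth and noisy observations
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t-1] + q * rng.standard_normal()
    y[t] = x_true[t] + r * rng.standard_normal()

# SIR (sequential importance resampling) particle filter
particles = rng.standard_normal(N)
est = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + q * rng.standard_normal(N)  # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)          # weight by likelihood
    w /= w.sum()
    est[t] = np.dot(w, particles)                             # posterior mean
    idx = rng.choice(N, size=N, p=w)                          # resample (SIR)
    particles = particles[idx]

rmse = np.sqrt(np.mean((est - x_true) ** 2))
print(f"filter RMSE: {rmse:.2f}")
```

The resampling step is exactly where particle impoverishment arises: repeated selection of a few high-weight particles collapses diversity, which is what the paper's regularization/move step is designed to counteract.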
Monte Carlo method for polarized radiative transfer in gradient-index media
International Nuclear Information System (INIS)
Light transfer in gradient-index media generally follows curved ray trajectories, which cause a light beam to converge or diverge during transfer and induce rotation of the polarization ellipse even when the medium is transparent. Furthermore, the combined process of scattering and transfer along curved ray paths makes the problem more complex. In this paper, a Monte Carlo method is presented to simulate polarized radiative transfer in gradient-index media that support only planar ray trajectories. The ray equation is solved to second order to address the effects induced by curved ray trajectories. Three types of test cases are presented to verify the performance of the method: a transparent medium, a Mie scattering medium with an assumed gradient-index distribution, and Rayleigh scattering with a realistic atmospheric refractive index profile. It is demonstrated that atmospheric refraction has a significant effect on long-distance polarized light transfer. - Highlights: • A Monte Carlo method for polarized radiative transfer in gradient-index media. • The effect of curved ray paths on polarized radiative transfer is considered. • The importance of atmospheric refraction for polarized light transfer is demonstrated
The applicability of certain Monte Carlo methods to the analysis of interacting polymers
Energy Technology Data Exchange (ETDEWEB)
Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)
1998-05-01
The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivot algorithm of Madras and Sokal with Metropolis rejection to locate the phase transition, which is known to occur at {beta}{sub crit} {approx} 0.99, and to recalculate the known value of the critical exponent {nu} {approx} 0.58 of the system for {beta} = {beta}{sub crit}. Although the pivot-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of {nu}. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but these are only of limited effectiveness. Their estimate of {beta}{sub crit} using smaller values of N is 1.01 {+-} 0.01, and the estimate for {nu} at this value of {beta} is 0.59 {+-} 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions can in practice fail to sample phase space accurately and thus not allow accurate estimation of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
Analysis of uncertainty quantification method by comparing Monte Carlo method and Wilks' formula
International Nuclear Information System (INIS)
An analysis of the uncertainty quantification related to LBLOCA using Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LOCA phenomena were determined based on previous PIRT results and documentation from the BEMUSE project. Calculations were conducted for 3,500 cases within two weeks of CPU time on a 14-PC cluster system. The Monte Carlo exercise shows that the 95% upper-limit PCT value can be obtained well, with a 95% confidence level, using Wilks' formula, although one must accept a 5% risk of PCT under-prediction. The results also show that the statistical fluctuation of the limit value using Wilks' first-order formula is as large as the uncertainty value itself. It is therefore desirable to use an order of Wilks' formula higher than second order to estimate a reliable safety margin of the design features. It is also shown that, with ever-increasing computational capability, the Monte Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame.
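The sample sizes implied by Wilks' formula at the 95%/95% level can be reproduced directly. The sketch below computes the smallest N for which the p-th largest of N code runs bounds the 95% quantile with 95% confidence, using the standard one-sided binomial form of the formula.

```python
import math

def wilks_n(gamma=0.95, beta=0.95, order=1):
    """Smallest N such that the order-th largest of N samples bounds the
    gamma-quantile with confidence beta (one-sided Wilks formula)."""
    n = order
    while True:
        # Confidence = P(at least `order` of n samples exceed the quantile)
        conf = 1.0 - sum(math.comb(n, k) * (1 - gamma)**k * gamma**(n - k)
                         for k in range(order))
        if conf >= beta:
            return n
        n += 1

print([wilks_n(order=p) for p in (1, 2, 3)])  # → [59, 93, 124]
```

This makes the abstract's point concrete: moving from first to second or third order raises the required number of runs from 59 to 93 or 124, in exchange for a tighter, less fluctuating tolerance limit.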
Self-optimizing Monte Carlo method for nuclear well logging simulation
Liu, Lianyan
1997-09-01
In order to increase the efficiency of Monte Carlo simulation for nuclear well logging problems, a new method has been developed for variance reduction. With this method, an importance map is generated as a by-product of the regular Monte Carlo calculation, and the importance map is later used to conduct splitting and Russian roulette for particle population control. By adopting a spatial mesh system that is independent of the physical geometrical configuration, the method allows superior user-friendliness. The new method is incorporated into the general-purpose Monte Carlo code MCNP4A through a patch file. Two nuclear well logging problems, a neutron porosity tool and a gamma-ray lithology density tool, are used to test its performance. The calculations are sped up over analog simulation by factors of 120 and 2600 for the neutron porosity tool and the gamma-ray lithology density log, respectively. The new method performs better than MCNP's cell-based weight window by a factor of 4~6, according to the converged figures of merit. An indirect comparison indicates that the new method also outperforms the AVATAR process for gamma-ray density tool problems. Even though it takes quite some time to generate a reasonable importance map from an analog run, a good initial map can create significant CPU time savings. This makes the method especially suitable for nuclear well logging problems, since one or several reference importance maps are usually available for a given tool. The study shows that the spatial mesh sizes should be chosen according to the mean free path. The overhead of the importance map generator is 6% and 14% for the neutron and gamma-ray cases. The learning ability towards a correct importance map is also demonstrated. Although false learning may happen, physical judgement can help diagnose it with contributon maps. Calibration and analysis are performed for the neutron tool and the gamma-ray tool. Due to the fact that a very
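The splitting/Russian-roulette population control driven by an importance map can be sketched as follows. This is a generic weight-window routine, not the paper's MCNP4A patch; the window width and survival-weight factors are illustrative choices.

```python
import random

def weight_window(particle, w_low, c_survive=2.0, c_split=2.0):
    """Apply a weight-window check to one particle, given as a dict {'w': weight}.
    Returns the list of surviving particles.  Illustrative sketch only:
    w_low would come from an importance map; the upper bound is c_split * w_low.
    """
    w = particle['w']
    w_high = c_split * w_low
    if w < w_low:                              # below window: Russian roulette
        w_survive = min(w_low * c_survive, w_high)
        if random.random() < w / w_survive:    # survive with prob w / w_survive
            particle['w'] = w_survive
            return [particle]
        return []                              # killed
    if w > w_high:                             # above window: split
        n = int(w / w_high) + 1
        return [{'w': w / n} for _ in range(n)]
    return [particle]                          # inside the window: unchanged
```

Both branches conserve expected weight (roulette survivors are reweighted up, split daughters share the parent weight), which is what keeps the game unbiased while concentrating particles in important regions.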
Monte Carlo simulation methods of determining red bone marrow dose from external radiation
International Nuclear Information System (INIS)
Objective: To provide evidence for a more reasonable method of determining red bone marrow dose by analyzing and comparing existing simulation methods. Methods: Using the Monte Carlo simulation software MCNPX, the absorbed dose to the red bone marrow of the Rensselaer Polytechnic Institute (RPI) adult female voxel phantom was calculated by 4 different methods: direct energy deposition, dose response function (DRF), the King-Spiers factor method, and mass-energy absorption coefficients (MEAC). The radiation sources were defined as infinite plate sources with energies ranging from 20 keV to 10 MeV, and 23 sources with different energies were simulated in total. The source was placed right next to the front of the RPI model to achieve a homogeneous anteroposterior radiation scenario. The results of the different methods for the different photon energies were compared. Results: When the photon energy was lower than 100 keV, the direct energy deposition method gave the highest result, while the MEAC and King-Spiers factor methods gave more reasonable results. When the photon energy was higher than 150 keV, taking into account the higher absorption ability of red bone marrow at higher photon energies, the result of the King-Spiers factor method was larger than those of the other methods. Conclusions: The King-Spiers factor method may be the most reasonable method to estimate the red bone marrow dose from external radiation. (authors)
Wind Turbine Placement Optimization by means of the Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
S. Brusca
2014-01-01
Full Text Available This paper defines a new procedure for optimising wind farm turbine placement by means of the Monte Carlo simulation method. To verify the algorithm's accuracy, an experimental wind farm was tested in a wind tunnel. On the basis of experimental measurements, the error on wind farm power output was less than 4%. The optimization maximises an energy production criterion, with the wind turbines' ground positions used as independent variables. Moreover, the mathematical model takes into account annual wind intensities and directions as well as wind turbine interaction. The optimization of a wind farm on a real site was carried out using measured wind data, dominant wind direction, and intensity data as inputs to run the Monte Carlo simulations. There were 30 turbines in the wind park, each rated at 20 kW; this choice was based on wind farm economics. The site was proportionally divided into 100 square cells, taking into account a minimum windward and crosswind distance between the turbines. The results highlight that the dominant wind intensity factor tends to overestimate the annual energy production by about 8%. Thus, the proposed method leads to a more precise annual energy evaluation and to a more optimal placement of the wind turbines.
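As a hedged illustration of the placement idea (not the authors' model), the sketch below performs a Monte Carlo search over a 10x10 cell grid for up to 30 turbines of 20 kW with a minimum spacing constraint; the wake penalty and single dominant wind direction are invented toy stand-ins for the paper's energy production criterion.

```python
import random

random.seed(1)

N_TURBINES, GRID = 30, 10
MIN_SPACING = 2                      # minimum Manhattan cell distance (assumed)

def farm_power(layout):
    """Rated power minus a simple wake penalty for upwind neighbours
    (dominant wind assumed along +y; penalty model is illustrative)."""
    power = 0.0
    for (x, y) in layout:
        wake = sum(1 for (x2, y2) in layout if x2 == x and 0 < y - y2 <= 3)
        power += 20.0 * (0.9 ** wake)        # 20 kW turbines
    return power

def random_layout():
    """One Monte Carlo trial: greedily place turbines on shuffled cells."""
    cells = [(x, y) for x in range(GRID) for y in range(GRID)]
    random.shuffle(cells)
    layout = []
    for c in cells:
        if all(abs(c[0]-t[0]) + abs(c[1]-t[1]) >= MIN_SPACING for t in layout):
            layout.append(c)
        if len(layout) == N_TURBINES:
            break
    return layout

best = max((random_layout() for _ in range(2000)), key=farm_power)
print(f"best layout power: {farm_power(best):.1f} kW")
```

Each Monte Carlo trial is a feasible random layout; keeping the best-scoring one is the simplest form of the stochastic search, which a real study would extend with measured wind roses and a validated wake model.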
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-04-01
Full Text Available Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that considers the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process has been propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in a multi-core computing environment via the open message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. Improvement of model efficiency and preservation of particle diversity are found for the lagged regularized particle filter.
International Nuclear Information System (INIS)
Even with state-of-the-art treatment planning systems, the photon dose calculation can be erroneous under certain circumstances. In these cases Monte Carlo methods promise a higher accuracy. We have used the photon transport code CHILD of the GSF-Forschungszentrum, which was developed to calculate dose in diagnostic radiation protection matters. The code was refined for application in radiotherapy with high-energy photon irradiation and is intended to serve for dose verification in individual cases. The irradiation phantom can be entered as any desired 3D matrix or be generated automatically from an individual CT database. The particle transport takes into account pair production and the photoelectric and Compton effects, with certain approximations. Efficiency is increased by the method of 'fractional photons'. The generated secondary electrons are followed by the unscattered continuous-slowing-down approximation (CSDA). The developed Monte Carlo code Monaco Matrix was tested on simple homogeneous and heterogeneous phantoms through comparisons with simulations of the well-known but slower EGS4 code. The use of a point source with a direction-independent energy spectrum, as the simplest model of the radiation field from the accelerator head, is shown to be sufficient for simulating actual accelerator depth dose curves. Good agreement (<2%) was found for depth dose curves in water and in bone. With complex test phantoms and comparisons with EGS4-calculated dose profiles, some drawbacks in the code were found. Thus, implementation of electron multiple scattering should lead to step-by-step improvement of the algorithm. (orig.)
Monteray Mark-I: Computer program (PC-version) for shielding calculation with Monte Carlo method
International Nuclear Information System (INIS)
A computer program for gamma-ray shielding calculation using the Monte Carlo method has been developed. The program is written in the WATFOR77 language. MONTERAY MARK-I was originally developed by James Wood; the program was modified by the authors so that the modified version is easily executed. Applying the Monte Carlo method, the program follows gamma photon transport in an infinite planar shield of various thicknesses. A gamma photon is followed until it escapes from the shield or its energy falls below the cut-off energy. The pair production process is treated as a pure absorption process, i.e. the annihilation photons generated in the process are neglected in the calculation. The output data calculated by the program are the total albedo, build-up factor, and photon spectra. The calculated build-up factors for slab lead and water media with a 6 MeV parallel-beam gamma source agree with published data. Hence the program is adequate as a shielding design tool for studying gamma radiation transport in various media
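In the spirit of the program described above, a toy 1-D slab Monte Carlo estimating transmission and a build-up factor might look like the following; the slab thickness, absorption probability, and isotropic-scattering model are illustrative simplifications (the real code is energy dependent and also computes albedo and spectra).

```python
import math
import random

random.seed(0)

def slab_transmission(T=2.0, p_abs=0.3, n=200_000):
    """Fraction of photons transmitted through a slab of thickness T mean
    free paths.  Each collision either absorbs (prob p_abs) or scatters
    isotropically; energy dependence is ignored (toy model)."""
    transmitted = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                          # depth (mfp), direction cosine
        while True:
            x += mu * -math.log(random.random())  # exponential free flight
            if x >= T:
                transmitted += 1
                break
            if x < 0 or random.random() < p_abs:
                break                             # escaped backwards or absorbed
            mu = 2.0 * random.random() - 1.0      # isotropic scatter
    return transmitted / n

t = slab_transmission()
buildup = t / math.exp(-2.0)   # ratio of total to uncollided transmission
print(f"transmission {t:.3f}, build-up factor {buildup:.2f}")
```

The build-up factor exceeds 1 because scattered photons add to the uncollided exponential attenuation, which is exactly the quantity the program tabulates against published data.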
International Nuclear Information System (INIS)
Although resonance neutron captures for 238U in water-moderated lattices are known to occur near moderator-fuel interfaces, the sharply attenuated spatial captures here have not been calculated by multigroup transport or Monte Carlo methods. Advances in computer speed and capacity have restored interest in applying Monte Carlo methods to evaluate spatial resonance captures in fueled lattices. Recently published studies have placed complete reliance on the ostensible precision of the Monte Carlo approach without auxiliary confirmation that resonance processes were followed adequately or that the Monte Carlo method was applied appropriately. Other methods of analysis that have evolved from early resonance integral theory have provided a basis for an alternative approach to determine radial resonance captures in fuel rods. A generalized method has been formulated and confirmed by comparison with published experiments of high spatial resolution for radial resonance captures in metallic uranium rods. The same analytical method has been applied to uranium-oxide fuels. The generalized method defined a spatial effective resonance cross section that is a continuous function of distance from the moderator-fuel interface and enables direct calculation of precise radial resonance capture distributions in fuel rods. This generalized method is used as a reference for comparison with two recent independent studies that have employed different Monte Carlo codes and cross-section libraries. Inconsistencies in the Monte Carlo application or in how pointwise cross-section libraries are sampled may exist. It is shown that refined Monte Carlo solutions with improved spatial resolution would not asymptotically approach the reference spatial capture distributions
Direct simulation Monte Carlo calculation of rarefied gas drag using an immersed boundary method
Jin, W.; Kleijn, C. R.; van Ommen, J. R.
2016-06-01
For simulating rarefied gas flows around a moving body, an immersed boundary method is presented here in conjunction with the Direct Simulation Monte Carlo (DSMC) method in order to allow the movement of a three dimensional immersed body on top of a fixed background grid. The simulated DSMC particles are reflected exactly at the landing points on the surface of the moving immersed body, while the effective cell volumes are taken into account for calculating the collisions between molecules. The effective cell volumes are computed by utilizing the Lagrangian intersecting points between the immersed boundary and the fixed background grid with a simple polyhedra regeneration algorithm. This method has been implemented in OpenFOAM and validated by computing the drag forces exerted on steady and moving spheres and comparing the results to that from conventional body-fitted mesh DSMC simulations and to analytical approximations.
A Monte Carlo simulation based inverse propagation method for stochastic model updating
Bao, Nuo; Wang, Chunjie
2015-08-01
This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters have been selected by implementing the F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) has then been developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the amount of calculation, and rapid random sampling becomes possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid of particle-swarm and Nelder-Mead simplex optimization methods, and thus a better correlation between simulation and test is achieved. Numerical examples of a three-degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
DEFF Research Database (Denmark)
Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.
2002-01-01
A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this...... from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly...... focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently...
Simulating rotationally inelastic collisions using a Direct Simulation Monte Carlo method
Schullian, O; Vaeck, N; van der Avoird, A; Heazlewood, B R; Rennick, C J; Softley, T P
2015-01-01
A new approach to simulating rotational cooling using a direct simulation Monte Carlo (DSMC) method is described and applied to the rotational cooling of ammonia seeded into a helium supersonic jet. The method makes use of ab initio rotational state changing cross sections calculated as a function of collision energy. Each particle in the DSMC simulations is labelled with a vector of rotational populations that evolves with time. Transfer of energy into translation is calculated from the mean energy transfer for this population at the specified collision energy. The simulations are compared with a continuum model for the on-axis density, temperature and velocity; rotational temperature as a function of distance from the nozzle is in accord with expectations from experimental measurements. The method could be applied to other types of gas mixture dynamics under non-uniform conditions, such as buffer gas cooling of NH$_3$ by He.
International Nuclear Information System (INIS)
In fixed-source problems, such as neutron deep-penetration calculations with the Monte Carlo method, applying variance reduction is essential for obtaining a high figure of merit (FOM) and a reliable result. However, the MCNP inputs published in the literature are often far from optimal. The items of greatest concern are the method for setting the lower weight bound in the weight-window technique and the choice of exclusion radius for a point estimator. In the literature, the lower weight bound is typically set by engineering judgment or by the weight-window generator in MCNP; in the latter case the bound is used without any tuning. Because of abnormally large lower weight bounds, many neutrons are killed needlessly by Russian roulette. The adjoint-flux method for setting the lower weight bound should be adopted as the standard variance reduction approach, moving the Monte Carlo calculation from the art of engineering judgment to the science of the adjoint method. (author)
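The roulette mechanism at issue can be sketched directly. The bound and survival weight below are arbitrary illustrative numbers, not an MCNP weight-window prescription: the game preserves the expected weight (it is unbiased), but an oversized lower bound kills the vast majority of histories and wastes the information they carried.

```python
import random

def russian_roulette(weight, w_low, w_survive):
    """If a particle's weight falls below the lower bound w_low, kill it
    with probability 1 - weight/w_survive; survivors continue with
    weight w_survive, so the expected weight is preserved."""
    if weight >= w_low:
        return weight
    if random.random() < weight / w_survive:
        return w_survive
    return 0.0

random.seed(2)
n = 200_000
# A low-weight particle facing an abnormally large lower bound:
outcomes = [russian_roulette(0.01, w_low=0.1, w_survive=0.2) for _ in range(n)]
mean_weight = sum(outcomes) / n           # unbiased: stays near 0.01
kill_fraction = outcomes.count(0.0) / n   # ...but ~95% of histories die
```

The estimate of the mean stays unbiased, yet almost every history is terminated; a well-chosen (adjoint-informed) bound would roulette far fewer particles in regions that still matter to the tally.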
Hull, Anthony B.; Ambruster, C.; Jewell, E.
2012-01-01
Simple Monte Carlo simulations can assist both the cultural astronomy researcher while the Research Design is developed and the eventual evaluators of research products. Following the method we describe allows assessment of the probability of false positives associated with a site. Even seemingly evocative alignments may be meaningless, depending on the site characteristics and the number of degrees of freedom the researcher allows. In many cases, an observer may have to limit comments to "it is nice and it might be culturally meaningful", rather than saying "it is impressive so it must mean something". We describe a basic language with an associated set of attributes to be cataloged. These can be used to set up simple Monte Carlo simulations for a site. Without corroborating cultural evidence, or trends with similar attributes (for example a number of sites showing the same anticipatory date), the Monte Carlo simulation can be used as a filter to establish the likelihood that the observed alignment phenomena are the result of random factors. Such analysis may temper any eagerness to prematurely attribute cultural meaning to an observation. For the most complete description of an archaeological site, we urge researchers to capture the site attributes in a manner which permits statistical analysis. We also encourage cultural astronomers to record that which does not work, and that which may seem to align but has no discernible meaning. Properly reporting situational information as tenets of the research design will reduce the subjective nature of archaeoastronomical interpretation. Examples from field work will be discussed.
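A minimal version of such a false-positive filter can be scripted directly: scatter random feature azimuths and count how often at least one lands within tolerance of an astronomically meaningful direction. The target azimuths and tolerance below are placeholder values a researcher would replace with the site-specific catalog.

```python
import random

def false_positive_rate(n_features, targets_deg, tol_deg,
                        trials=20_000, seed=42):
    """Estimate the chance that >= 1 of n randomly oriented site features
    falls within tol_deg of any of the target azimuths purely by chance."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        azimuths = [rng.uniform(0.0, 360.0) for _ in range(n_features)]
        if any(abs((a - t + 180.0) % 360.0 - 180.0) <= tol_deg
               for a in azimuths for t in targets_deg):
            hits += 1
    return hits / trials

# Illustrative target azimuths (e.g. solstice/equinox rise-set directions):
targets = [60.0, 90.0, 120.0, 240.0, 270.0, 300.0]
rate = false_positive_rate(n_features=5, targets_deg=targets, tol_deg=2.0)
```

With six targets, a +/-2 degree tolerance, and five free features, roughly 29% of purely random sites would show an "alignment", which is the sobering number such a filter is meant to produce.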
Application of Monte Carlo method for dose calculation in thyroid follicle
International Nuclear Information System (INIS)
The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. Its principal advantage over deterministic methods is the ability to handle complex geometry. Several computational codes use the Monte Carlo method to simulate particle transport and can model energy deposition in organs and/or tissues, as well as in models of individual cells of the human body. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is therefore of fundamental importance to dosimetry, because these cells are radiosensitive to ionizing radiation, in particular to radioisotopes of iodine, a great amount of which may be released into the environment in the event of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles with diameters varying from 30 to 500 μm, for the Auger electrons, internal conversion electrons and beta particles emitted by iodine-131 and the short-lived iodines (132, 133, 134 and 135). The MCNP4C simulations show that, on average, 25% of the total dose absorbed by the colloid is due to iodine-131 and 75% to the short-lived iodines; for the follicular cells the split is 13% for iodine-131 and 87% for the short-lived iodines. The contributions of low-energy particles, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare doses obtained with the MCNP4C, EPOTRAN and EGS4 codes and with deterministic methods. (author)
International Nuclear Information System (INIS)
The standard implementation of the differential operator (Taylor series) perturbation method for Monte Carlo criticality problems has previously been shown to have a wide range of applicability. In this method, the unperturbed fission distribution is used as a fixed source to estimate the change in the keff eigenvalue of a system due to a perturbation. A new method, based on the deterministic perturbation theory assumption that the flux distribution (rather than the fission source distribution) is unchanged after a perturbation, is proposed in this paper. Dubbed the F-A method, the new method is implemented within the framework of the standard differential operator method by making tallies only in perturbed fissionable regions and combining the standard differential operator estimate of their perturbations according to the deterministic first-order perturbation formula. The F-A method, developed to extend the range of applicability of the differential operator method rather than as a replacement, was more accurate than the standard implementation for positive and negative density perturbations in a thin shell at the exterior of a computational Godiva model. The F-A method was also more accurate than the standard implementation at estimating reactivity worth profiles of samples with a very small positive reactivity worth (compared to actual measurements) in the Zeus critical assembly, but it was less accurate for a sample with a small negative reactivity worth
A combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation
Saleur, H.; Derrida, B.
1985-01-01
In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
A combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation
International Nuclear Information System (INIS)
In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
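The strip/bar transfer-matrix machinery is beyond a short sketch, but the core Monte Carlo ingredient, estimating a spanning probability on finite samples and watching it jump across the threshold, fits in a few lines. Lattice size and trial counts below are deliberately small and illustrative.

```python
import random
from collections import deque

def spans(grid):
    """True if occupied sites connect the top row to the bottom row
    (breadth-first search over nearest neighbours)."""
    L = len(grid)
    seen = [[False] * L for _ in range(L)]
    q = deque()
    for j in range(L):
        if grid[0][j]:
            seen[0][j] = True
            q.append((0, j))
    while q:
        i, j = q.popleft()
        if i == L - 1:
            return True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L and grid[ni][nj] and not seen[ni][nj]:
                seen[ni][nj] = True
                q.append((ni, nj))
    return False

def spanning_probability(L, p, trials, seed=0):
    """Monte Carlo estimate of the top-bottom spanning probability on an
    L x L site-percolation lattice with occupation probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(L)] for _ in range(L)]
        hits += spans(grid)
    return hits / trials

below = spanning_probability(24, 0.45, 300)  # below the 2D site threshold ~0.5927
above = spanning_probability(24, 0.70, 300)  # above it
```

Repeating this for several sizes L and applying finite-size scaling to the crossing curves is what yields threshold and exponent estimates in practice.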
Harries, Tim J
2015-01-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelisation method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion onto, and the growth of, the protostars. We detail the resu...
Sadi, M; Dabir, B
2003-01-01
The Monte Carlo method is one of the most powerful techniques for modeling processes such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. Because Monte Carlo calculations are based on random number generation and reaction probabilities, the number of algorithm repetitions (the number of initial molecules in the reactor volume selected for modelling) is very important. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results unless the number of molecules is large enough, because otherwise the selected volume is not representative of the whole system.
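The point about the repetition count is easy to demonstrate on the initiation step alone: treating initiator decomposition as a Bernoulli trial per molecule, the estimated conversion fluctuates strongly for a small ensemble and settles near the true probability only for a large one. The decomposition probability below is an arbitrary illustrative value.

```python
import random

def simulate_initiation(n_molecules, p_decompose, seed):
    """One Monte Carlo pass over the initiation step: each initiator
    molecule decomposes with probability p_decompose; return the
    estimated decomposed fraction."""
    rng = random.Random(seed)
    decomposed = sum(rng.random() < p_decompose for _ in range(n_molecules))
    return decomposed / n_molecules

small = simulate_initiation(100, 0.3, seed=7)        # noisy estimate
large = simulate_initiation(1_000_000, 0.3, seed=7)  # ~0.3 to 3 decimals
```

The statistical error scales as 1/sqrt(N), so each extra decimal digit of accuracy costs a hundredfold more molecules, which is the trade-off the abstract highlights.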
Monte Carlo methods for localization of cones given multielectrode retinal ganglion cell recordings.
Sadeghi, K; Gauthier, J L; Field, G D; Greschner, M; Agne, M; Chichilnisky, E J; Paninski, L
2013-01-01
It has recently become possible to identify cone photoreceptors in primate retina from multi-electrode recordings of ganglion cell spiking driven by visual stimuli of sufficiently high spatial resolution. In this paper we present a statistical approach to the problem of identifying the number, locations, and color types of the cones observed in this type of experiment. We develop an adaptive Markov Chain Monte Carlo (MCMC) method that explores the space of cone configurations, using a Linear-Nonlinear-Poisson (LNP) encoding model of ganglion cell spiking output, while analytically integrating out the functional weights between cones and ganglion cells. This method provides information about our posterior certainty about the inferred cone properties, and additionally leads to improvements in both the speed and quality of the inferred cone maps, compared to earlier "greedy" computational approaches. PMID:23194406
Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa
We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using a contribution degree and sum all the effects. Application to practical models confirms that, at the 5% risk level, there is no difference between results obtained with quantitative relations and results obtained with the proposed method.
Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot
International Nuclear Information System (INIS)
This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), of which six DOF are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model which involves 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of the Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results for the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.
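The estimation machinery can be illustrated with a one-dimensional stand-in: a Metropolis sampler recovering a single hypothetical geometric-offset parameter from noisy synthetic measurements. The 54-parameter kinematic model of the IWR is far beyond this sketch; all values below are invented.

```python
import math
import random

def metropolis(loglike, x0, step, n, seed=0):
    """Plain Metropolis MCMC: Gaussian random-walk proposals,
    accept with probability min(1, exp(loglike(x') - loglike(x)))."""
    rng = random.Random(seed)
    x, lp = x0, loglike(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = loglike(xp)
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Synthetic "measurement poses": a true offset of 0.5 observed with noise.
true_offset = 0.5
rng = random.Random(1)
data = [true_offset + rng.gauss(0.0, 0.1) for _ in range(200)]

# Gaussian measurement model => quadratic log-likelihood in the offset.
loglike = lambda m: -sum((d - m) ** 2 for d in data) / (2 * 0.1 ** 2)

chain = metropolis(loglike, x0=0.0, step=0.05, n=5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

The marginal posterior summarized by `posterior_mean` (and by the spread of the retained chain) is the 1-D analogue of the marginal posterior distributions reported for the 54 error parameters.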
International Nuclear Information System (INIS)
This report provides absorbed dose rate and photon fluence rate distributions in rock salt around 30 testwise emplaced canisters containing high-level radioactive material (HAW project) and around a single canister containing radioactive material of a lower activity level (INHAW experiment). The site of this test emplacement was located in test galleries at the 800-m-level in the Asse salt mine. The data given were calculated using a Monte Carlo method simulating photon transport in complex geometries of differently composed materials. The aim of these calculations was to enable determination of the dose absorbed in any arbitrary sample of salt to be further examined in the future with sufficient reliability. The geometry of the test arrangement, the materials involved and the calculational method are characterised and the results are shortly described and some figures presenting selected results are shown. In the appendices, the results for emplacement of the highly radioactive canisters are given in tabular form. (orig.)
Using neutron source distinguish mustard gas bomb from the others with Monte Carlo simulation method
International Nuclear Information System (INIS)
After Japan's defeat, the chemical weapons left behind in China have injured people continually, causing grave losses because people were unaware of the danger. Mustard gas bombs account for most of these accidents. It is difficult to distinguish a mustard gas bomb from an ordinary bomb by external examination because, after being buried in the earth for a long time, leakage, erosion and rust are severe. A non-contact measurement method, neutron-induced γ-ray spectroscopy, is therefore very important. In this paper the Monte Carlo method was used to compute the γ spectrum produced when a neutron source irradiates a mustard gas bomb. The characteristic lines of Cl, S, Fe and the other elements can be picked out clearly. The results provide a useful reference for analyzing such γ spectra. (authors)
Heat-Flux Analysis of Solar Furnace Using the Monte Carlo Ray-Tracing Method
International Nuclear Information System (INIS)
An understanding of the concentrated solar flux is critical for the analysis and design of solar-energy-utilization systems. The current work focuses on the development of an algorithm that uses the Monte Carlo ray-tracing method with excellent flexibility and expandability; the method considers both solar limb darkening and the surface slope error of the reflectors in analyzing the solar flux. A comparison of the modeling results with measurements at the solar furnace of the Korea Institute of Energy Research (KIER) shows good agreement within a measurement uncertainty of 10%. The model evaluates the concentration performance of the KIER solar furnace with a tracking accuracy of 2 mrad and a maximum attainable concentration ratio of 4400 suns. Flux variations according to measurement position and flux distributions depending on acceptance angles provide detailed information for the design of chemical reactors or secondary concentrators.
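A stripped-down, flat-facet version of such a ray trace can be sketched as follows. For simplicity it uses a uniform solar disc (i.e. it ignores the limb darkening that the paper's algorithm includes) and a Gaussian mirror slope error doubled on reflection; focal length, error magnitudes and ray count are illustrative.

```python
import math
import random

def trace_to_focus(n_rays, focal_len, slope_err_mrad,
                   sun_half_angle_mrad=4.65, seed=0):
    """Trace rays from an ideal flat concentrator element to the focal
    plane. Each ray direction combines the finite solar disc (sampled
    uniformly, no limb darkening) with a Gaussian slope error, doubled
    by reflection. Returns hit positions (m) on the focal plane."""
    rng = random.Random(seed)
    hits = []
    for _ in range(n_rays):
        sun = rng.uniform(-sun_half_angle_mrad, sun_half_angle_mrad)
        slope = rng.gauss(0.0, slope_err_mrad)
        angle = (sun + 2.0 * slope) * 1e-3   # mrad -> rad
        hits.append(focal_len * math.tan(angle))
    return hits

spots = trace_to_focus(50_000, focal_len=2.0, slope_err_mrad=2.0)
mean_x = sum(spots) / len(spots)
rms = (sum(h * h for h in spots) / len(spots)) ** 0.5   # focal-spot size
```

Binning `spots` over the focal plane gives the simulated flux map that would then be compared against flux measurements, as done for the KIER furnace.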
Intra-operative radiation therapy optimization using the Monte Carlo method
International Nuclear Information System (INIS)
The problem addressed with reference to the treatment-head optimization has been the choice of the proper design of the head of a new 12 MeV linear accelerator, so as to obtain the required dose uniformity on the target volume while keeping the dose rate sufficiently high and the photon production and the beam impact on the head walls within acceptable limits. The second part of the optimization work, concerning the TPS, is based on the rationale that the TPSs generally used in radiotherapy rely on semi-empirical algorithms whose accuracy can be inadequate, particularly when irregular surfaces and/or inhomogeneities, such as air cavities or bone, are present. The Monte Carlo method, on the contrary, is capable of accurately calculating the dose distribution under almost all circumstances. Furthermore, it offers the advantage that the simulation of radiation transport in the patient can start from the beam data obtained by transport through the specific treatment head used. Therefore the Monte Carlo simulations, which at present are not yet widely used for routine treatment planning due to the required computing time, can be employed as a benchmark and as an optimization tool for conventional TPSs. (orig.)
Intra-operative radiation therapy optimization using the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Rosetti, M. [ENEA, Bologna (Italy); Benassi, M.; Bufacchi, A.; D'Andrea, M. [Ist. Regina Elena, Rome (Italy); Bruzzaniti, V. [ENEA, S. Maria di Galeria (Rome) (Italy)
2001-07-01
The problem addressed with reference to the treatment-head optimization has been the choice of the proper design of the head of a new 12 MeV linear accelerator, so as to obtain the required dose uniformity on the target volume while keeping the dose rate sufficiently high and the photon production and the beam impact on the head walls within acceptable limits. The second part of the optimization work, concerning the TPS, is based on the rationale that the TPSs generally used in radiotherapy rely on semi-empirical algorithms whose accuracy can be inadequate, particularly when irregular surfaces and/or inhomogeneities, such as air cavities or bone, are present. The Monte Carlo method, on the contrary, is capable of accurately calculating the dose distribution under almost all circumstances. Furthermore, it offers the advantage that the simulation of radiation transport in the patient can start from the beam data obtained by transport through the specific treatment head used. Therefore the Monte Carlo simulations, which at present are not yet widely used for routine treatment planning due to the required computing time, can be employed as a benchmark and as an optimization tool for conventional TPSs. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Ghassoun, Jillali; Jehoauni, Abdellatif [Nuclear physics and Techniques Lab., Faculty of Science, Semlalia, Marrakech (Morocco)
2000-01-01
In practice, estimating the flux from the Fredholm integral equation requires truncating the Neumann series. The truncation order N must be large in order to obtain a good estimate, but a large N entails a very long computation time, so the conditional Monte Carlo method is used to reduce the time without degrading the quality of the estimate. In previous work, only weakly diffusing media were considered in order to obtain rapid convergence, which permitted truncating the Neumann series after about 20 terms. In the most practical shields, however, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a poor estimate of the flux; higher orders are then needed for a good estimate. We suggest two simple techniques based on conditional Monte Carlo: a simple density for sampling the steps of the random walk, and a modified stretching-factor density depending on a biasing parameter, which stretches or shrinks the original random walk so that the chain ends at a given point of interest. We also obtained a simple empirical formula giving the neutron flux for a medium characterized only by its scattering probability. The results are compared with the exact analytic solution; we obtain good agreement together with a good acceleration of the convergence of the calculations. (author)
International Nuclear Information System (INIS)
In practice, estimating the flux from the Fredholm integral equation requires truncating the Neumann series. The truncation order N must be large in order to obtain a good estimate, but a large N entails a very long computation time, so the conditional Monte Carlo method is used to reduce the time without degrading the quality of the estimate. In previous work, only weakly diffusing media were considered in order to obtain rapid convergence, which permitted truncating the Neumann series after about 20 terms. In the most practical shields, however, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a poor estimate of the flux; higher orders are then needed for a good estimate. We suggest two simple techniques based on conditional Monte Carlo: a simple density for sampling the steps of the random walk, and a modified stretching-factor density depending on a biasing parameter, which stretches or shrinks the original random walk so that the chain ends at a given point of interest. We also obtained a simple empirical formula giving the neutron flux for a medium characterized only by its scattering probability. The results are compared with the exact analytic solution; we obtain good agreement together with a good acceleration of the convergence of the calculations. (author)
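The truncation effect is easy to reproduce in the simplest possible setting. For a one-group infinite medium with scattering probability c, the Neumann series for the collision density sums to 1/(1-c), and a random walk truncated at N terms underestimates it exactly when c is large. This toy is ours, not the authors' conditional-MC scheme; it only demonstrates why 20 terms fail for strongly scattering shields.

```python
import random

def truncated_flux(c, n_terms, n_walks, seed=0):
    """Random-walk estimate of sum_{k=0}^{n_terms-1} c^k, i.e. the
    Neumann series (truncated at n_terms) for the collision density in
    an infinite medium with scattering probability c."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walks):
        events = 1                                   # the source event
        while events < n_terms and rng.random() < c:
            events += 1                              # survived a collision
        total += events
    return total / n_walks

weak   = truncated_flux(0.50, 20, 100_000)      # ~2.0: truncation harmless
strong = truncated_flux(0.95, 20, 100_000)      # ~12.8, far below the true 20
full   = truncated_flux(0.95, 10_000, 100_000)  # ~20.0 with enough terms
```

For c = 0.5 the first 20 terms capture essentially all of 1/(1-c) = 2; for c = 0.95 they capture only (1-0.95^20)/0.05 of the true value 20, which is the bias the abstract's high-order techniques are designed to remove cheaply.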
Bianco, F. B.; Modjaz, M.; Oh, S. M.; Fierroz, D.; Liu, Y. Q.; Kewley, L.; Graur, O.
2016-07-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity calibrators, based on the original IDL code of Kewley and Dopita (2002) with updates from Kewley and Ellison (2008), and expanded to include more recently developed calibrators. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios (referred to as indicators) in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo sampling, better characterizes the statistical oxygen abundance confidence region, including the effect of the propagation of observational uncertainties. These uncertainties are likely to dominate the error budget in the case of distant galaxies, hosts of cosmic explosions. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 15 metallicity calibrators simultaneously, as well as for E(B-V), and estimates their median values and their 68% confidence regions. We provide the option of outputting the full Monte Carlo distributions and their kernel density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies (z …). The code is available at https://github.com/nyusngroup/pyMCZ.
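The sampling idea can be reduced to a single indicator: draw each line flux from a Gaussian set by its measurement error, push every draw through a calibration, and read off the median and the 16th-84th percentiles. The flux values below are invented; the calibration shown is the linear N2 relation of Pettini & Pagel (2004), one representative of the calibrator families pyMCZ handles, and this sketch is not the pyMCZ code itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative measured line fluxes and 1-sigma uncertainties (cgs):
f_nii, df_nii = 1.2e-16, 0.1e-16   # [N II] 6584
f_ha,  df_ha  = 8.0e-16, 0.3e-16   # H-alpha

# Monte Carlo: resample each flux from its Gaussian error distribution.
n = 100_000
nii = rng.normal(f_nii, df_nii, n)
ha = rng.normal(f_ha, df_ha, n)
good = (nii > 0) & (ha > 0)        # discard unphysical negative draws
n2 = np.log10(nii[good] / ha[good])

# Linear N2 calibration (Pettini & Pagel 2004): 12+log(O/H) = 8.90 + 0.57 N2
oh = 8.90 + 0.57 * n2

median = np.median(oh)
lo, hi = np.percentile(oh, [16, 84])   # 68% confidence region
```

The resulting synthetic distribution (here summarized by its median and 68% interval) is the per-calibrator output that pyMCZ produces for up to 15 calibrators at once.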
On stochastic error and computational efficiency of the Markov Chain Monte Carlo method
Li, Jun
2014-01-01
In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
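One of these relations, that for a fixed number of retained samples a longer sampling interval decorrelates the samples and shrinks the variance of the ensemble average, can be checked on a synthetic AR(1) chain standing in as a minimal model of correlated MCMC output:

```python
import random
import statistics

def ar1_chain(n, rho, seed=0):
    """Correlated samples mimicking consecutive MCMC output:
    an AR(1) process with lag-1 correlation rho and unit variance."""
    rng = random.Random(seed)
    x, out = 0.0, []
    s = (1.0 - rho * rho) ** 0.5
    for _ in range(n):
        x = rho * x + s * rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def mean_of_thinned(rho, interval, n_keep, seed):
    """Ensemble average over n_keep samples taken every `interval` cycles."""
    chain = ar1_chain(interval * n_keep, rho, seed)
    kept = chain[::interval][:n_keep]
    return sum(kept) / n_keep

# Empirical variance of the ensemble average over many independent chains,
# same retained sample size (100), two different sampling intervals:
reps = 400
v1 = statistics.pvariance(
    [mean_of_thinned(0.9, 1, 100, s) for s in range(reps)])
v20 = statistics.pvariance(
    [mean_of_thinned(0.9, 20, 100, s + reps) for s in range(reps)])
```

For rho = 0.9 the theoretical variance of the mean is roughly (1/n)(1+rho_k)/(1-rho_k) with rho_k = rho^interval, so thinning by 20 cuts it by an order of magnitude here, at the cost of generating (but not storing) the intermediate cycles.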
International Nuclear Information System (INIS)
The criticality service of the CEA chose the Monte Carlo method because of its advantages over analytical codes. In this paper the authors present the advantages and the weaknesses of this method, and some studies undertaken to remedy these weaknesses are presented.
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Energy Technology Data Exchange (ETDEWEB)
Peplow, Douglas E. [ORNL; Miller, Thomas Martin [ORNL; Patton, Bruce W [ORNL; Wagner, John C [ORNL
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
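The payoff of steering particles toward a detector can be seen in the simplest deep-penetration problem: estimating the transmission probability e^(-mu L) through a thick absorber. The exponential path-stretching below is a classic variance-reduction device, not the goal-based, deterministically computed importance functions of the paper, but it illustrates why biased sampling plus likelihood-ratio weights beats analog Monte Carlo for rare detector responses.

```python
import math
import random

def analog(mu, L, n, rng):
    """Analog MC: sample free paths from Exp(mu), count those beyond L."""
    return sum(rng.expovariate(mu) > L for _ in range(n)) / n

def stretched(mu, L, n, rng, stretch=5.0):
    """Path stretching: sample from the flatter density Exp(mu/stretch)
    and multiply each score by the likelihood ratio f(x)/g(x)."""
    mu_b = mu / stretch
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(mu_b)
        if x > L:
            total += (mu / mu_b) * math.exp(-(mu - mu_b) * x)
    return total / n

truth = math.exp(-10.0)   # transmission through 10 mean free paths ~ 4.5e-5
est_is = stretched(1.0, 10.0, 100_000, random.Random(0))
est_an = analog(1.0, 10.0, 100_000, random.Random(1))
```

With 10^5 histories the analog run scores only a handful of hits (enormous relative error), while the stretched run estimates the same probability to a few percent; hybrid deterministic/Monte Carlo methods automate the construction of such biasing, tuned to the detector response, in full 3-D problems.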
Theory and applications of the fission matrix method for continuous-energy Monte Carlo
International Nuclear Information System (INIS)
Highlights: • The fission matrix method is implemented into the MCNP Monte Carlo code. • Eigenfunctions and eigenvalues of power distributions are shown and studied. • Source convergence acceleration is demonstrated for a fuel storage vault problem. • Forward flux eigenmodes and relative uncertainties are shown for a reactor problem. • Eigenmodes expansions are performed during source convergence for a reactor problem. - Abstract: The fission matrix method can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission distribution. It can also be used to accelerate the convergence of power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. These aspects of the method are here both theoretically justified and demonstrated, and then used to investigate fundamental properties of the transport equation for a continuous-energy physics treatment. Implementation into the MCNP6 Monte Carlo code is also discussed, including a sparse representation of the fission matrix, which permits much larger and more accurate representations. Properties of the calculated eigenvalue spectrum of a 2D PWR problem are discussed: for a fine enough mesh and a sufficient degree of sampling, the spectrum both converges and has a negligible imaginary component. Calculation of the fundamental mode of the fission matrix for a fuel storage vault problem shows how convergence can be accelerated by over a factor of ten given a flat initial distribution. Forward fluxes and the relative uncertainties for a 2D PWR are shown, both of which qualitatively agree with expectation. Lastly, eigenmode expansions are performed during source convergence of the 2D PWR
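The core of the method fits in a few lines once the matrix itself has been tallied: power iteration on the fission matrix yields k-eff and the fundamental-mode fission source. The 3-region matrix below is invented for illustration, not tallied from a Monte Carlo run.

```python
import numpy as np

# Toy fission matrix: F[i, j] = expected next-generation fission neutrons
# born in region i per fission neutron born in region j (assumed values).
F = np.array([[0.6, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.6]])

def fundamental_mode(F, iters=200):
    """Power iteration on the fission matrix: returns the dominant
    eigenvalue (k-eff) and the fundamental-mode fission source
    (normalized to sum to 1)."""
    s = np.full(F.shape[0], 1.0 / F.shape[0])
    k = 1.0
    for _ in range(iters):
        s = F @ s
        k = s.sum()     # eigenvalue estimate (previous s summed to 1)
        s /= k
    return k, s

k_eff, source = fundamental_mode(F)
```

The full eigenvalue spectrum, including the dominance ratio and higher modes used for perturbation theory and convergence acceleration, follows from `np.linalg.eig(F)` on the same matrix; for this symmetric toy matrix the dominant eigenvalue is 0.6 + 0.4 cos(pi/4).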
Coherent-wave Monte Carlo method for simulating light propagation in tissue
Kraszewski, Maciej; Pluciński, Jerzy
2016-03-01
Simulating the propagation and scattering of coherent light in turbid media, such as biological tissue, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g. finite-difference or finite-element methods) require a large amount of computer memory and long computation times, which makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, allows one to simulate only the propagation of light averaged over the ensemble of turbid-medium realizations, which makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g. a laser beam) in biological tissue, which we call the Coherent-Wave Monte Carlo method. The method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the memory and computation time required for simulation. We present the theoretical basis of the proposed method and its comparison with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.
A spectroscopically predicted new ZZ Ceti variable - GD 165
International Nuclear Information System (INIS)
The paper reports the discovery that the DA white dwarf GD 165, recently identified as possibly having a brown dwarf companion, is also a ZZ Ceti variable. The analysis of two independent photometric observations shows brightness variations of up to 0.1 mag on time scales of 200-1800 s. Fourier analysis of the light curves indicates that, as is true in general for ZZ Ceti variables, multiple periodicities are present on both nights and that the period structure of the star can change on a time scale of 2 days. On the first run, Fourier analysis shows a multiperiodic low-frequency spectrum, while on the second run the low-frequency spectrum is dominated by an 1800 s period, the longest ever observed in a ZZ Ceti white dwarf. A high-frequency peak of variable amplitude corresponding to a period of 120 s is present in both runs as well. The time variability of the period structure is interpreted as interaction between pulsation modes, as is seen in other ZZ Ceti variables. 15 refs
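The kind of Fourier analysis described can be sketched on a synthetic light curve containing the two reported time scales. Amplitudes, sampling cadence and noise level below are invented for illustration, not GD 165 photometry.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-hour light curve sampled every 10 s: an 1800 s and a
# 120 s periodicity plus white photometric noise (all values invented).
t = np.arange(0.0, 7200.0, 10.0)
flux = (0.05 * np.sin(2 * np.pi * t / 1800.0)
        + 0.02 * np.sin(2 * np.pi * t / 120.0)
        + rng.normal(0.0, 0.005, t.size))

# Amplitude spectrum via the real FFT; normalize so a pure sinusoid of
# amplitude A that falls on a frequency bin shows up with height ~A.
amp = np.abs(np.fft.rfft(flux)) * 2.0 / t.size
freq = np.fft.rfftfreq(t.size, d=10.0)

peak_period = 1.0 / freq[1:][np.argmax(amp[1:])]   # skip the DC bin
```

The dominant peak recovers the long (1800 s) period while the 120 s signal appears as a secondary high-frequency peak, mirroring the structure reported in the two observing runs.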
Future hadron physics: WW, WZ and ZZ final states
International Nuclear Information System (INIS)
A review is made of some interesting topics in future running at hadron colliders: the search for heavy top quarks and possible exotic isosinglet quarks; the search for a heavy Higgs boson; the search for possible strong interactions in the electroweak symmetry-breaking sector. They all lead to the study of final states containing two heavy gauge bosons WW, WZ or ZZ. (author)
Study of the $ZZ$ diboson production at CDF II
Energy Technology Data Exchange (ETDEWEB)
Bauce, Matteo [Univ. of Padua (Italy)
2013-01-01
The subject of this Thesis is the production of a pair of massive Z vector bosons in proton-antiproton collisions at the Tevatron, at the center-of-mass energy √s = 1.96 TeV. We measure the ZZ production cross section in two different leptonic decay modes: into four charged leptons (e or μ) and into two charged leptons plus two neutrinos. The results are based on the whole dataset collected by the Collider Detector at Fermilab (CDF), corresponding to 9.7 fb^{-1} of data. The combination of the two cross section measurements gives σ(p$\\bar{p}$→ZZ) = 1.38^{+0.28}_{-0.27} pb, the most precise ZZ cross section measurement at the Tevatron to date. We further investigate the four-lepton final state, searching for production of the scalar Higgs particle in the decay H →ZZ(*) →ℓℓℓ'ℓ'. No evidence of its production is seen in the data, hence a 95% Confidence Level upper limit is set on its production cross section as a function of the Higgs particle mass, mH, in the range from 120 to 300 GeV/c^{2}.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
International Nuclear Information System (INIS)
We present a new Monte Carlo method based upon the theoretical proposal of Claverie and Soto. By contrast with other Quantum Monte Carlo methods used so far, the present approach uses a pure diffusion process without any branching. The many-fermion problem (with the specific constraint due to the Pauli principle) receives a natural solution in the framework of this method: in particular, there is neither the fixed-node approximation nor the nodal release problem that occur in other approaches (see, e.g., Ref. 8 for a recent account). We give some numerical results for simple systems in order to illustrate the numerical feasibility of the proposed algorithm
Application of multi-stage Monte Carlo method for solving machining optimization problems
Directory of Open Access Journals (Sweden)
Miloš Madić
2014-08-01
Enhancing overall machining performance implies optimization of machining processes, i.e. determination of the optimal combination of machining parameters. Optimization of machining processes is an active field of research in which different optimization methods are used to determine an optimal combination of different machining parameters. In this paper, the multi-stage Monte Carlo (MC) method was employed to determine optimal combinations of machining parameters for six machining processes: drilling, turning, turn-milling, abrasive waterjet machining, electrochemical discharge machining and electrochemical micromachining. The optimization solutions obtained using the multi-stage MC method were compared with those of past researchers obtained using meta-heuristic optimization methods, e.g. the genetic algorithm, the simulated annealing algorithm, the artificial bee colony algorithm and the teaching-learning-based optimization algorithm. The obtained results prove the applicability and suitability of the multi-stage MC method for solving machining optimization problems with up to four independent variables. Specific features, merits and drawbacks of the MC method are also discussed.
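The staged search described in the abstract above can be sketched as follows. The objective function is an illustrative stand-in (the paper's machining models are not reproduced here), and the stage count, sample counts and shrink factor are all assumed parameters:

```python
import random

def multistage_mc_minimize(f, bounds, stages=5, samples=2000, shrink=0.5, seed=1):
    """Multi-stage Monte Carlo search: sample the region uniformly, keep
    the best point found so far, shrink the search region around it, and
    repeat for a fixed number of stages."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    best_x, best_f = None, float("inf")
    for _ in range(stages):
        for _ in range(samples):
            x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        # shrink the search region around the incumbent best point
        for i in range(len(bounds)):
            half = (hi[i] - lo[i]) * shrink / 2.0
            lo[i] = max(bounds[i][0], best_x[i] - half)
            hi[i] = min(bounds[i][1], best_x[i] + half)
    return best_x, best_f

# Illustrative two-parameter objective (not from the paper), with its
# minimum value 1.0 at (0.3, 0.7):
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 1.0
x_opt, f_opt = multistage_mc_minimize(f, [(0.0, 1.0), (0.0, 1.0)])
```

Each stage reuses the previous stage's best point as the center of a smaller region, which is what lets plain uniform sampling converge on problems with a handful of variables.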
Calculation of neutron importance function in fissionable assemblies using Monte Carlo method
International Nuclear Information System (INIS)
The purpose of the present work is to develop an efficient method for calculating the neutron importance function in fissionable assemblies, for all criticality conditions, using the Monte Carlo method. The neutron importance function plays an important role in perturbation theory and reactor dynamics calculations. Usually this function is determined by calculating the adjoint flux, i.e. by solving the adjoint transport equation with deterministic methods; in complex geometries, however, these calculations are very difficult. In this article, considering the capabilities of the MCNP code for problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance is introduced for calculating the neutron importance function in sub-critical, critical and supercritical conditions. To this end a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in 1- and 2-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries is shown by calculating the neutron importance in the MNSR research reactor
Generation of organic scintillators response function for fast neutrons using the Monte Carlo method
International Nuclear Information System (INIS)
A computer program (DALP), written in Fortran-4-G, has been developed using the Monte Carlo method to simulate the experimental techniques leading to the distribution of pulse heights due to monoenergetic neutrons reaching an organic scintillator. The pulse height distribution has been calculated for two different systems: 1) monoenergetic neutrons from a point source reaching the flat face of a cylindrical organic scintillator; 2) environmental monoenergetic neutrons randomly reaching either the flat or the curved face of the cylindrical organic scintillator. The computer program was developed for the NE-213 liquid organic scintillator, but can easily be adapted to any other kind of organic scintillator. With this program one can determine the pulse height distribution for neutron energies ranging from 15 keV to 10 MeV. (Author)
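As a minimal illustration of the sampling idea behind such response-function codes (not the DALP program itself): for elastic n-p scattering the proton recoil energy is uniformly distributed between zero and the neutron energy, which gives the characteristic rectangular first-collision pulse-height spectrum of a hydrogenous scintillator. Light output, carbon scattering and detector resolution are all ignored in this sketch:

```python
import random

def recoil_spectrum(e_n, n_hist=100000, bins=20, seed=2):
    """Sample first-collision proton-recoil energies for monoenergetic
    neutrons on hydrogen (isotropic center-of-mass scattering implies a
    uniform recoil energy in [0, e_n]) and histogram them."""
    rng = random.Random(seed)
    hist = [0] * bins
    for _ in range(n_hist):
        e_p = rng.uniform(0.0, e_n)              # recoil energy, MeV
        hist[min(int(bins * e_p / e_n), bins - 1)] += 1
    return hist

h = recoil_spectrum(2.0)   # 2 MeV incident neutrons
```

A full simulation like DALP adds multiple scattering, the C(n,n) channel, light-output conversion and geometry, but the flat single-scatter spectrum above is the underlying building block.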
King, Julian; Mortlock, Daniel; Webb, John; Murphy, Michael
2010-11-01
Recent attempts to constrain cosmological variation in the fine structure constant, α, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for Δα/α, the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.
King, Julian A; Webb, John K; Murphy, Michael T
2009-01-01
Recent attempts to constrain cosmological variation in the fine structure constant, alpha, using quasar absorption lines have yielded two statistical samples which initially appear to be inconsistent. One of these samples was subsequently demonstrated to not pass consistency tests; it appears that the optimisation algorithm used to fit the model to the spectra failed. Nevertheless, the results of the other hinge on the robustness of the spectral fitting program VPFIT, which has been tested through simulation but not through direct exploration of the likelihood function. We present the application of Markov Chain Monte Carlo (MCMC) methods to this problem, and demonstrate that VPFIT produces similar values and uncertainties for (Delta alpha)/(alpha), the fractional change in the fine structure constant, as our MCMC algorithm, and thus that VPFIT is reliable.
The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy
Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F
2010-01-01
Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH) Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed-Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+-activity measurement in order to infer indirect infor...
International Nuclear Information System (INIS)
According to the seed-source dose parameter formalism recommended by AAPM TG-43U1, dose parameter formulas for a 125I-103Pd seed source, and for composite seed sources of various radionuclides, can be obtained. The dose rate constant, radial dose function and anisotropy function of the 125I-103Pd composite seed source are calculated by the Monte Carlo method, and empirical equations are obtained for the radial dose function and anisotropy function by curve fitting. Comparisons with the relevant data recommended by the AAPM are performed. For the single source, the dose rate constant is 0.959 cGy·h-1·U-1, deviating by 0.6093% from the AAPM value. (authors)
Monte Carlo study of living polymers with the bond-fluctuation method
Rouault, Yannick; Milchev, Andrey
1995-06-01
The highly efficient bond-fluctuation method for Monte Carlo simulations of both static and dynamic properties of polymers is applied to a system of living polymers. Parallel to the stochastic movements of monomers, which result in Rouse dynamics of the macromolecules, the polymer chains break, or associate at chain ends with other chains and single monomers, in a process of equilibrium polymerization. We study the changes in equilibrium properties, such as the molecular-weight distribution, average chain length, radius of gyration, and specific heat, with varying density and temperature of the system. The results of our numerical experiments indicate very good agreement with the recently suggested description in terms of the mean-field approximation. The coincidence of the specific-heat maximum position at kBT=V/4 in both theory and simulation suggests the use of calorimetric measurements for determining the scission-recombination energy V in real experiments.
Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method
International Nuclear Information System (INIS)
We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models, based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of electric properties of matter with 'virtual experiments' built by using some models where the physical concepts can be presented at different formalization levels
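A microscopic rule of the kind the abstract describes can be sketched in a few lines: each lattice site contributes a free carrier with probability exp(-Eg/2kT), and the resistance is taken as inversely proportional to the sampled carrier count. The narrow gap of 0.1 eV and the site count are toy values chosen so the sampling is meaningful at room temperature; they are not parameters from the article, and mobility and doping are ignored:

```python
import math
import random

def mc_resistance(temp_k, e_gap_ev=0.1, n_sites=200000, seed=3):
    """Toy Monte Carlo model of intrinsic conduction: each site hosts a
    thermally generated carrier with probability exp(-Eg / 2kT); the
    'resistance' is the reciprocal of the sampled carrier number."""
    k_b = 8.617e-5                                # Boltzmann constant, eV/K
    rng = random.Random(seed)
    p = math.exp(-e_gap_ev / (2.0 * k_b * temp_k))
    carriers = sum(1 for _ in range(n_sites) if rng.random() < p)
    return 1.0 / max(carriers, 1)

r_300 = mc_resistance(300.0)
r_400 = mc_resistance(400.0)
```

Even this crude rule reproduces the qualitative behaviour the abstract targets: the semiconductor's resistance falls as temperature rises, opposite to a metal.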
Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods
Ferraioli, Luigi; Plagnol, Eric
2012-01-01
We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...
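The second algorithm variant above can be sketched against a stand-in target: a 2-D correlated Gaussian whose known inverse covariance plays the role of the Fisher information matrix. The LISA Pathfinder dynamical model and the annealing stage are not reproduced; the target, step scale and chain length are assumptions:

```python
import numpy as np

def mh_fisher(n_steps=20000, step_scale=0.6, seed=4):
    """Metropolis-Hastings sampler proposing jumps in the eigen-space of
    the (here exactly known) Fisher information matrix of a correlated
    2-D Gaussian target."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, 0.8], [0.8, 1.0]])      # assumed target covariance
    fisher = np.linalg.inv(cov)                   # stands in for the Fisher matrix
    log_post = lambda x: -0.5 * x @ fisher @ x
    vals, vecs = np.linalg.eigh(fisher)
    directions = vecs / np.sqrt(vals)             # eigen-directions scaled ~ 1/sqrt(eigenvalue)
    x = np.zeros(2)
    lp = log_post(x)
    chain = np.empty((n_steps, 2))
    accepted = 0
    for i in range(n_steps):
        prop = x + directions @ rng.normal(scale=step_scale, size=2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
            x, lp = prop, lp_prop
            accepted += 1
        chain[i] = x
    return chain, accepted / n_steps

chain, acc_rate = mh_fisher()
```

Scaling each proposal direction by the inverse square root of the Fisher eigenvalue shapes the proposal like the posterior, which is the mechanism behind the higher acceptance rate reported in the abstract.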
MAMONT program for neutron field calculation by the Monte Carlo method
International Nuclear Information System (INIS)
The MAMONT program (MAthematical MOdelling of Neutron Trajectories), designed for three-dimensional calculation of neutron transport by analogue and nonanalogue Monte Carlo methods in the energy range from 15 MeV down to thermal energies, is described. The program is written in FORTRAN and runs on the BESM-6 computer. The group constants of the library module are compiled from the ENDL-83, ENDF/B-4 and JENDL-2 files. Layered spherical, cylindrical and rectangular configurations can be calculated. Accumulation and averaging of slowing-down kinetics functionals (averaged logarithmic energy losses, slowing-down time, free paths, number of collisions, age), diffusion parameters, leakage spectra and fluxes, as well as the formation of separate isotopes over zones, are performed in the course of the calculation. 16 tabs
Absorbed dose measurements in mammography using Monte Carlo method and ZrO2+PTFE dosemeters
International Nuclear Information System (INIS)
Mammography is a central tool for breast cancer diagnosis. In addition, screening programs are conducted periodically to examine asymptomatic women in certain age groups; these programs have shown a reduction in breast cancer mortality. Early detection of breast cancer is achieved through mammography, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To obtain a clear image at minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed at General Hospital No. 1 IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO2+PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; this calculation was completed by computing the absorbed dose. (author)
Directory of Open Access Journals (Sweden)
Ertekin Öztekin Öztekin
2015-12-01
The distances of bolts to each other and the distances of bolts to the edges of connection plates are designed according to the minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading types, bolt types and plate thicknesses were taken as variable parameters. The Monte Carlo Simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. All resulting reliability index values for those distances are presented in graphics and tables. The results obtained from this study were compared with the values proposed by some structural codes, and some evaluations of those comparisons were made. Finally, it is emphasized that it would be incorrect to use the same bolt distances in both traditional designs and designs at higher reliability levels.
Bianco, Federica B; Oh, Seung Man; Fierroz, David; Liu, Yuqian; Kewley, Lisa; Graur, Or
2015-01-01
We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In additi...
Efficiency determination of whole-body counter by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
The purpose of this investigation was to develop an analytical microcomputer model to evaluate whole-body counter efficiency. The model is based on a modified Snyder model. A stretcher-type geometry was used, along with the Monte Carlo method and a Sinclair-type microcomputer. Experimental measurements were performed using two phantoms, one representing an adult and the other a 5-year-old child. The phantoms were made of acrylic, and 99mTc, 131I and 42K were the radioisotopes utilized. Results showed a close agreement between experimental and predicted data for energies ranging from 250 keV to 2 MeV, but some discrepancies were found at lower energies. (author)
Investigation of physical regularities in gamma gamma logging of oil wells by Monte Carlo method
International Nuclear Information System (INIS)
Some results are given of calculations by the Monte Carlo method of specific problems of gamma-gamma density logging. The paper considers the influence of probe length and volume density of the rocks; the angular distribution of the scattered radiation incident on the instrument; the spectra of the radiation being recorded and of the source radiation; depths of surveys, the effect of the mud cake, the possibility of collimating the source radiation; the choice of source, initial collimation angles, the optimum angle of recording scattered gamma-radiation and the radiation discrimination threshold; and the possibility of determining the mineralogical composition of rocks in sections of oil wells and of identifying once-scattered radiation. (author)
Application of Monte Carlo method in modelling physical and physico-chemical processes
International Nuclear Information System (INIS)
The seminar was held on September 9 and 10, 1982 at the Faculty of Nuclear Science and Technical Engineering of the Czech Technical University in Prague. The participants heard 11 papers, of which 7 were entered into INIS. The papers dealt with the use of the Monte Carlo method for modelling the transport and scattering of gamma radiation in layers of materials, the application of low-energy gamma radiation to the determination of secondary X-radiation flux, the determination of self-absorption corrections for a 4π chamber, modelling the response function of a scintillation detector, and the optimization of geometrical configuration in measuring material density using backscattered gamma radiation. The possibility of optimizing the modelling with regard to computer time was studied, and the participants were informed about computerized nuclear data libraries. (M.D.)
Simulation of nuclear material identification system based on Monte Carlo sampling method
International Nuclear Information System (INIS)
Background: Because of the hazards of radioactivity, nuclear material identification is sometimes a difficult problem. Purpose: To reflect the particle transport processes in nuclear fission and to demonstrate the effectiveness of the signatures of the Nuclear Materials Identification System (NMIS), based on physical principles and experimental statistical data. Methods: We established a Monte Carlo simulation model of the nuclear material identification system and acquired three channels of time-domain pulse signals. Results: Auto-Correlation Functions (AC), Cross-Correlation Functions (CC), Auto Power Spectral Densities (APSD) and Cross Power Spectral Densities (CPSD) between channels yield several signatures that reveal characteristics of the nuclear material. Conclusions: The simulation results indicate that this approach can help in further studying the features of the system. (authors)
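The correlation signatures named in the abstract above can be computed from simulated pulse trains as below. The toy signals (one channel a delayed copy of the other, plus noise) are illustrative stand-ins, not NMIS data:

```python
import numpy as np

def correlation_signatures(a, b):
    """Cross-correlation (CC) and cross power spectral density (CPSD)
    of two mean-subtracted detector-channel signals."""
    a = a - a.mean()
    b = b - b.mean()
    cc = np.correlate(a, b, mode="full") / len(a)
    cpsd = np.fft.rfft(a) * np.conj(np.fft.rfft(b))
    return cc, cpsd

# Toy pulse trains: channel b is channel a delayed by 5 samples plus noise.
rng = np.random.default_rng(5)
a = (rng.random(4096) < 0.05).astype(float)     # sparse random pulse train
b = np.roll(a, 5) + 0.01 * rng.normal(size=4096)
cc, cpsd = correlation_signatures(a, b)
delay = (len(a) - 1) - int(np.argmax(cc))        # recovered delay of b behind a
```

The peak position of the cross-correlation recovers the inter-channel delay, which is the kind of timing signature such systems exploit; auto-correlations and APSDs follow by setting b equal to a.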
An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks
Kim, Stacy
2011-01-01
Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk are under-sampled, such as the optically thick disk interior, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and re-emitted toward the preferred locations, at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracy and calculation speed.
International Nuclear Information System (INIS)
A mathematical model of particle transport was built by sampling the interaction points of narrow-beam γ photons in a medium according to the principles of interaction between γ photons and matter, and a computer procedure was written in LabWindows/CVI to simulate the transport of γ photons in the medium and to record the γ-photon transmission probability and the corresponding thickness of medium, which was then used to calculate narrow-beam γ-ray mass attenuation coefficients of the absorbing medium. The results show that the Monte Carlo method is a feasible way to calculate narrow-beam γ-ray mass attenuation coefficients of an absorbing medium. (authors)
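The core of such a simulation is sampling exponentially distributed interaction depths and counting photons that traverse the slab; the transmitted fraction estimates exp(-μt), from which the attenuation coefficient is recovered (dividing by density then gives the mass attenuation coefficient). The sketch below, with assumed μ and thickness, shows only this sampling step, not the full LabWindows/CVI procedure:

```python
import math
import random

def mc_attenuation(mu, thickness, n=200000, seed=6):
    """Estimate a linear attenuation coefficient by Monte Carlo: sample
    exponential free paths and count photons whose first interaction
    depth exceeds the slab thickness (narrow-beam geometry, so any
    interaction removes the photon from the beam)."""
    rng = random.Random(seed)
    transmitted = sum(
        1 for _ in range(n)
        if -math.log(1.0 - rng.random()) / mu > thickness
    )
    frac = transmitted / n                 # estimates exp(-mu * thickness)
    return -math.log(frac) / thickness     # recovered attenuation coefficient

mu_est = mc_attenuation(mu=0.5, thickness=2.0)   # assumed test values, 1/cm and cm
```

Repeating the estimate for several thicknesses and fitting the log-transmission against thickness is the natural extension, mirroring how the recorded probability-versus-thickness data are used in the abstract.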
A Monte Carlo method for critical systems in infinite volume: the planar Ising model
Herdeiro, Victor
2016-01-01
In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
International Nuclear Information System (INIS)
The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La2/3Ca1/3MnO3, which depends on the Mn ion vacancies as the external magnetic field increases. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, and it had L=30 umc (units of magnetic cells) for its dimension in the x–y plane and was d=12 umc in thickness. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the external applied magnetic field response. The system that was considered contains mixed-valence bonds: Mn3+eg’–O–Mn3+eg, Mn3+eg–O–Mn4+d3 and Mn3+eg’–O–Mn4+d3. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions TC (Curie temperature) and TMI (metal–insulator temperature) are similar, whereas with the increase in the vacancy percentage, TMI presented lower values than TC. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below TMI. Resistivity loops were also observed, which shows a direct correlation with the hysteresis loops of magnetization at temperatures below TC. - Highlights: • Changes in the resistivity of FM materials as a function of the temperature and external magnetic field can be obtained by the Monte Carlo method, Metropolis algorithm, classical Heisenberg and Kronig–Penney approximation for magnetic clusters. • Increases in the magnetoresistive effect were observed at temperatures below TMI by the vacancies effect. • The resistive hysteresis loop presents two peaks that are directly associated with the coercive field in the magnetic
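A compact sketch of the Metropolis/classical-Heisenberg machinery with random vacancies is given below. The lattice size, temperature scale, unit nearest-neighbour exchange and absence of anisotropy and external field are all simplifying assumptions; the actual manganite Hamiltonian and mixed-valence bond structure of the article are not reproduced:

```python
import math
import random

def heisenberg_metropolis(L=8, temp=0.5, j_ex=1.0, vac_frac=0.1,
                          sweeps=300, seed=7):
    """Metropolis sweeps over classical Heisenberg spins on an L x L
    layer with a random fraction of vacancies (site removed); returns
    |magnetization| per occupied site."""
    rng = random.Random(seed)
    occ = [[rng.random() >= vac_frac for _ in range(L)] for _ in range(L)]
    spins = [[(0.0, 0.0, 1.0) for _ in range(L)] for _ in range(L)]  # ordered start

    def rand_spin():                       # uniform direction on the sphere
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        return (s * math.cos(phi), s * math.sin(phi), z)

    def local_field(i, j):                 # sum of occupied neighbour spins
        h = [0.0, 0.0, 0.0]
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = (i + di) % L, (j + dj) % L
            if occ[ni][nj]:
                for k in range(3):
                    h[k] += spins[ni][nj][k]
        return h

    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                if not occ[i][j]:
                    continue
                new = rand_spin()
                h = local_field(i, j)
                de = -j_ex * sum((new[k] - spins[i][j][k]) * h[k] for k in range(3))
                if de <= 0.0 or rng.random() < math.exp(-de / temp):
                    spins[i][j] = new      # Metropolis acceptance

    m = [sum(spins[i][j][k] for i in range(L) for j in range(L) if occ[i][j])
         for k in range(3)]
    n_occ = sum(occ[i][j] for i in range(L) for j in range(L))
    return math.sqrt(sum(c * c for c in m)) / n_occ
```

Running the same loop at low and high temperature shows the ferromagnetic-to-disordered crossover that underlies the magnetization curves discussed in the abstract; adding an anisotropy term and an external-field term to `de` follows the same pattern.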
Development of a software package for solid-angle calculations using the Monte Carlo method
International Nuclear Information System (INIS)
Solid-angle calculations play an important role in the absolute calibration of radioactivity measurement systems and in the determination of the activity of radioactive sources, which are often complicated. In the present paper, a software package is developed to provide a convenient tool for solid-angle calculations in nuclear physics. The proposed software calculates solid angles using the Monte Carlo method, in which a new type of variance reduction technique was integrated. The package, developed under the environment of Microsoft Foundation Classes (MFC) in Microsoft Visual C++, has a graphical user interface, in which, the visualization function is integrated in conjunction with OpenGL. One advantage of the proposed software package is that it can calculate the solid angle subtended by a detector with different geometric shapes (e.g., cylinder, square prism, regular triangular prism or regular hexagonal prism) to a point, circular or cylindrical source without any difficulty. The results obtained from the proposed software package were compared with those obtained from previous studies and calculated using Geant4. It shows that the proposed software package can produce accurate solid-angle values with a greater computation speed than Geant4. -- Highlights: • This software package (SAC) can give accurate solid-angle values. • SAC calculate solid angles using the Monte Carlo method and it has higher computation speed than Geant4. • A simple but effective variance reduction technique which was put forward by the authors has been applied in SAC. • A visualization function and a graphical user interface are also integrated in SAC
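The basic Monte Carlo kernel of such a package is easy to state for the simplest case: sample isotropic directions from an on-axis point source and count those that cross a circular detector face. The variance-reduction technique and the GUI of the package above are not reproduced; the geometry values are assumptions, and the on-axis disk has a closed-form answer to check against:

```python
import math
import random

def solid_angle_mc(distance, radius, n=500000, seed=8):
    """Monte Carlo solid angle of a circular disk subtended at an
    on-axis point: sample isotropic directions, count disk crossings.
    By symmetry only the polar angle needs sampling."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)           # isotropic: cos(theta) uniform
        if cos_t <= 0.0:
            continue                              # points away from the disk
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        if distance * sin_t / cos_t <= radius:    # ray crosses the disk face
            hits += 1
    return 4.0 * math.pi * hits / n

omega = solid_angle_mc(distance=10.0, radius=5.0)
# analytic on-axis result: 2*pi*(1 - d / sqrt(d^2 + r^2))
exact = 2.0 * math.pi * (1.0 - 10.0 / math.sqrt(10.0 ** 2 + 5.0 ** 2))
```

Cylinders, prisms and extended sources change only the hit test (and add sampling of the source position), which is why a single MC kernel can serve all the detector shapes listed in the abstract.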
Application of a Monte Carlo method for modeling debris flow run-out
Luna, B. Quan; Cepeda, J.; Stumpf, A.; van Westen, C. J.; Malet, J. P.; van Asch, T. W. J.
2012-04-01
A probabilistic framework based on a Monte Carlo method for the modeling of debris-flow hazards is presented. The framework is based on a dynamic model combined with an explicit representation of the different parameter uncertainties. The probability distribution of these parameters is determined from an extensive database of back-calibrated past events collected from different authors. The uncertainty in these inputs can be simulated and used to increase confidence in certain extreme run-out distances. In the Monte Carlo procedure, the input parameters of the numerical models simulating the propagation and stoppage of debris flows are randomly selected, and model runs are performed using the randomly generated input values. This allows estimating the probability density function of the output variables characterizing the destructive power of the debris flow (for instance depth, velocities and impact pressures) at any point along the path. To demonstrate the implementation of this method, a continuum two-dimensional dynamic simulation model that solves the conservation equations of mass and momentum was applied (MassMov2D). This general methodology facilitates the consistent combination of physical models with the available observations. The probabilistic model presented can be considered a framework that can accommodate any existing one- or two-dimensional dynamic model. The resulting probabilistic spatial model can serve as a basis for hazard mapping and spatial risk assessment. The outlined procedure provides a useful way for experts to produce hazard or risk maps for the typical case where historical records are poorly documented or even completely lacking, as well as to derive confidence limits on the proposed zoning.
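The Monte Carlo wrapper idea can be sketched by replacing the 2-D dynamic model (MassMov2D) with a trivial sliding-block run-out relation L = H / tan(phi_eff). The relation, the drop height and the friction-angle distribution below are illustrative assumptions, not the paper's calibrated values; in the real framework each draw would instead drive a full dynamic simulation:

```python
import math
import random

def runout_distribution(n_runs=20000, seed=9):
    """Draw friction parameters from an assumed distribution, push each
    draw through a (here trivial) run-out relation, and summarize the
    resulting distribution of run-out distances."""
    rng = random.Random(seed)
    drop_height = 200.0                     # m, assumed release height
    runouts = []
    for _ in range(n_runs):
        phi = rng.gauss(11.0, 2.0)          # effective friction angle, deg (assumed)
        phi = min(max(phi, 5.0), 20.0)      # truncate to a plausible range
        runouts.append(drop_height / math.tan(math.radians(phi)))
    runouts.sort()
    mean = sum(runouts) / n_runs
    p95 = runouts[int(0.95 * n_runs)]       # extreme run-out with 95% bound
    return mean, p95

mean_runout, p95_runout = runout_distribution()
```

The spread between the mean and the 95th percentile is exactly the kind of confidence statement on extreme run-out distances that the framework is built to provide.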
International Nuclear Information System (INIS)
A description of the equations in the fluid frame was given recently. A simplification of the collision term is obtained, but the streaming term then has to include angular deviation and the Doppler shift. We choose the latter description, which is more convenient for our purpose. We introduce some notation and recall some facts about stochastic kernels and the Monte Carlo method. We show how to apply the Monte Carlo method to a transport equation with an arbitrary streaming term; in particular, we show that the track-length estimator is unbiased. We review some properties of the radiation hydrodynamics equations and show how energy conservation is obtained. We then apply the Monte Carlo method explained in Section 2 to the particular case of the transfer equation in the fluid frame. Finally, we describe a physical example and give some numerical results
A Monte-Carlo Method for Estimating Stellar Photometric Metallicity Distributions
Gu, Jiayin; Jing, Yingjie; Zuo, Wenbo
2016-01-01
Based on the Sloan Digital Sky Survey (SDSS), we develop a new Monte Carlo-based method to estimate the photometric metallicity distribution function (MDF) for stars in the Milky Way. Compared with other photometric calibration methods, this method enables a more reliable determination of the MDF, in particular at the metal-poor and metal-rich ends. We present a comparison of our new method with a previous polynomial-based approach, and demonstrate its superiority. As an example, we apply this method to main-sequence stars with $0.2
International Nuclear Information System (INIS)
In general, there are two ways to calculate effective doses. The first is to use deterministic methods such as the point-kernel method, which is implemented in Visiplan or Microshield. These calculations are very fast, but for complex geometries with shielding composed of more than one material they are not very precise. Nevertheless, such programs are sufficient for ALARA optimisation calculations. On the other side there are Monte Carlo methods, which are quite precise in comparison with reality, but whose calculation times are usually very long. Deterministic-type programs have one disadvantage: in multilayer stratified-slab shielding problems there is usually an option to choose a buildup factor (BUF) for only one material, even if the shielding is composed of different materials. In the literature, different formulas for multilayer BUF approximation have been proposed. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared on a simple modelled geometry: a point source behind single- and double-slab shielding. For buildup calculations the Geometric Progression method (a feature of the newest version of Visiplan) was chosen, because it shows lower deviations than Taylor fitting. (authors)
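For reference, the Geometric Progression buildup-factor parametrization mentioned above (the ANSI/ANS-6.4.3 form) can be sketched as follows; the coefficient values in the example call are illustrative, not tabulated data.

```python
import math

def gp_buildup(b, c, a, xk, d, x):
    """Geometric-progression (GP) buildup factor (ANSI/ANS-6.4.3 form) for a
    point isotropic source at x mean free paths; b, c, a, xk, d are material-
    and energy-dependent fitting coefficients read from the standard's tables."""
    k = c * x ** a + d * (math.tanh(x / xk - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0))
    if abs(k - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x            # limiting case K = 1
    return 1.0 + (b - 1.0) * (k ** x - 1.0) / (k - 1.0)

# Illustrative (not tabulated) coefficients for a 5-mfp slab.
print(gp_buildup(b=2.0, c=1.3, a=-0.1, xk=15.0, d=0.05, x=5.0))
```

The multilayer approximation formulas examined in the paper combine such single-material factors; which combination rule best matches MCNP is exactly the question studied.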
Verification of the spectral history correction method with fully coupled Monte Carlo code BGCore
International Nuclear Information System (INIS)
Recently, a new method for accounting for burnup history effects on few-group cross sections was developed and implemented in the reactor dynamic code DYN3D. The method relies on tracking the local Pu-239 density, which serves as an indicator of burnup spectral history. The validity of the method was demonstrated in PWR and VVER applications. However, the spectrum variation in a BWR core is more pronounced due to the stronger coolant density change. Therefore, the purpose of the current work is to further investigate the applicability of the method to BWR analysis. The proposed methodology was verified against the recently developed BGCore system, which couples Monte Carlo neutron transport with depletion and thermal hydraulic solvers and is thus capable of providing a reference solution for 3D simulations. The results clearly show that neglecting the spectral history effects leads to a very large deviation (e.g. 2000 pcm in reactivity) from the reference solution. However, a very good agreement between DYN3D and BGCore is observed (on the order of 200 pcm in reactivity) when the Pu-correction method is applied. (author)
Directory of Open Access Journals (Sweden)
Kaisheng Yao
2004-11-01
We present a method for sequentially estimating time-varying noise parameters. Noise parameters are sequences of time-varying mean vectors representing the noise power in the log-spectral domain. The proposed sequential Monte Carlo method generates a set of particles in compliance with the prior distribution given by clean speech models. The noise parameters in this model evolve according to random walk functions, and the model uses extended Kalman filters to update the weight of each particle as a function of observed noisy speech signals, speech model parameters, and the evolved noise parameters in each particle. Finally, the updated noise parameter is obtained by means of minimum mean square error (MMSE) estimation on these particles. For efficient computation, residual resampling and Metropolis-Hastings smoothing are used. The proposed sequential estimation method is applied to noisy speech recognition and speech enhancement under strongly time-varying noise conditions. In both scenarios, it outperforms some alternative methods.
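A minimal bootstrap particle filter in the same spirit can be sketched as follows; it tracks a scalar random-walk noise mean with plain likelihood weights and multinomial resampling, whereas the paper's method uses extended Kalman updates per particle and residual resampling. All numbers are illustrative.

```python
import math
import random

random.seed(0)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Synthetic "noise parameter": a slowly drifting mean observed in Gaussian noise.
true_mean, obs_sigma, walk_sigma = 0.0, 0.5, 0.1
n_particles, particles = 500, [random.gauss(0.0, 1.0) for _ in range(500)]

estimates = []
for t in range(100):
    true_mean += random.gauss(0.0, walk_sigma)
    y = true_mean + random.gauss(0.0, obs_sigma)
    # Propagate each particle with the random-walk evolution model.
    particles = [p + random.gauss(0.0, walk_sigma) for p in particles]
    # Weight by the observation likelihood (the paper instead runs an
    # extended Kalman update per particle; plain bootstrap weights here).
    weights = [gauss_pdf(y, p, obs_sigma) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # MMSE estimate = weighted posterior mean over the particle set.
    estimates.append(sum(w * p for w, p in zip(weights, particles)))
    # Multinomial resampling (residual resampling would cut variance further).
    particles = random.choices(particles, weights=weights, k=n_particles)

print(f"final truth {true_mean:.2f}, final estimate {estimates[-1]:.2f}")
```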
International Nuclear Information System (INIS)
It is noted that the analog Monte Carlo method has low calculation efficiency for deep-penetration problems such as radiation shielding analysis. In order to increase the calculation efficiency, variance reduction techniques have been introduced and applied to shielding calculations. To optimize the variance reduction technique, the hybrid Monte Carlo method was introduced. To determine the parameters used in the hybrid Monte Carlo method, the adjoint flux should be calculated by deterministic methods. In this study, the collision probability method is applied to calculate the adjoint flux. The solution of the integral transport equation in the collision probability method is modified to calculate the adjoint flux approximately even for complex and arbitrary geometries, and a C++ program was developed for the calculation. Using the calculated adjoint flux, importance parameters of each cell in the shielding material are determined and used for variance reduction of the transport calculation. In order to evaluate the calculation efficiency of the proposed method, shielding calculations were performed with MCNPX 2.7. The results show that the proposed method can efficiently increase the figure of merit (FOM) of the transport calculation. It is expected that the proposed method can be utilized to improve calculation efficiency in thick-shielding problems
International Nuclear Information System (INIS)
Bayesian analysis of Laser Interferometer Space Antenna (LISA) data sets based on Markov chain Monte Carlo methods has been shown to be a challenging problem, in part due to the complicated structure of the likelihood function, consisting of several isolated local maxima that dramatically reduces the efficiency of sampling techniques. Here we introduce a new fully Markovian algorithm, a delayed rejection Metropolis-Hastings Markov chain Monte Carlo method, to efficiently explore these kinds of structures, and we demonstrate its performance on selected LISA data sets containing a known number of stellar-mass binary signals embedded in Gaussian stationary noise.
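A delayed-rejection Metropolis-Hastings step can be sketched on a toy bimodal target standing in for a likelihood with isolated maxima; the target and the two proposal scales are assumptions for illustration.

```python
import math
import random

random.seed(1)

def log_target(x):
    # Bimodal toy "likelihood" with two isolated maxima, as in the LISA problem.
    a = -0.5 * (x + 3.0) ** 2
    b = -0.5 * (x - 3.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def accept_prob(log_ratio):
    return math.exp(min(0.0, log_ratio))

def log_q(y, center, sigma):
    # Log-density of the Gaussian proposal (up to a constant).
    return -0.5 * ((y - center) / sigma) ** 2

def dr_mh(n_steps, sigma1=1.0, sigma2=8.0):
    """Delayed-rejection MH: after rejecting a local first-stage move (sigma1),
    try a bolder second-stage move (sigma2) with the DR acceptance rule."""
    x, chain = 0.0, []
    for _ in range(n_steps):
        y1 = random.gauss(x, sigma1)
        a1 = accept_prob(log_target(y1) - log_target(x))
        if random.random() < a1:
            x = y1
        else:
            y2 = random.gauss(x, sigma2)
            a1_rev = accept_prob(log_target(y1) - log_target(y2))
            num = log_target(y2) + log_q(y1, y2, sigma1) + math.log(max(1e-300, 1.0 - a1_rev))
            den = log_target(x) + log_q(y1, x, sigma1) + math.log(max(1e-300, 1.0 - a1))
            if random.random() < accept_prob(num - den):
                x = y2
        chain.append(x)
    return chain

chain = dr_mh(20_000)
frac_right = sum(1 for x in chain if x > 0) / len(chain)
print(f"fraction of samples in right-hand mode: {frac_right:.2f}")
```

The bold second stage is what lets the chain hop between isolated maxima while the DR correction keeps the overall kernel reversible.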
International Nuclear Information System (INIS)
The Albedo method, applied to criticality calculations for nuclear reactors, is characterized by following the neutron currents, allowing detailed analysis of the physical phenomena of neutron interaction with the core-reflector set through the determination of the probabilities of reflection, absorption, and transmission, and thus a detailed assessment of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results presented in dissertations applying the method to thermal reactors and shielding, the Albedo methodology is described for the criticality analysis of thermal reactors using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo code KENO IV, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons absorbed in the core without ever having entered the reflector was analyzed. As references for comparison and analysis of the results obtained by the Albedo method, the one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the diffusion method were used. The keff results determined by the Albedo method for the analyzed reactor type showed excellent agreement: relative errors smaller than 0.78% with respect to ANISN and smaller than 0.35% with respect to the diffusion method, demonstrating the effectiveness of the Albedo method for criticality analysis. The ease of application, simplicity and clarity of the Albedo method make it a valuable instrument for neutronic calculations in both non-multiplying and multiplying media. (author)
Report on some methods of determining the state of convergence of Monte Carlo risk estimates
International Nuclear Information System (INIS)
The Department of the Environment is developing a methodology for assessing potential sites for the disposal of low and intermediate level radioactive wastes. Computer models are used to simulate the groundwater transport of radioactive materials from a disposal facility back to man. Monte Carlo methods are being employed to conduct a probabilistic risk assessment (PRA) of potential sites. The models calculate time histories of annual radiation dose to the critical group population; the annual radiation dose to the critical group in turn specifies the annual individual risk. The distribution of dose is generally highly skewed, and many simulation runs are required to predict the level of confidence in the risk estimate, i.e. to determine whether the risk estimate is converged. This report describes some statistical methods for determining the state of convergence of the risk estimate, including the Shapiro-Wilk test, calculation of skewness and kurtosis, and normal probability plots. A method for forecasting the number of samples needed before the risk estimate is converged is presented. Three case studies were conducted to examine the performance of some of these techniques. (author)
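The moment-based diagnostics mentioned (skewness and kurtosis of the dose samples versus those of batch means of the risk estimate) can be sketched as follows, with a lognormal toy distribution standing in for the skewed PRA dose output.

```python
import random

def skewness(xs):
    n, mean = len(xs), sum(xs) / len(xs)
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    n, mean = len(xs), sum(xs) / len(xs)
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

random.seed(3)
# Highly skewed "annual dose" samples, mimicking a PRA output distribution.
doses = [random.lognormvariate(0.0, 1.5) for _ in range(5000)]

# Batch means of the risk estimate: by the central limit theorem these should
# approach normality (skewness and excess kurtosis -> 0) as the estimate converges.
batch = 50
means = [sum(doses[i:i + batch]) / batch for i in range(0, len(doses), batch)]
print(f"raw skewness {skewness(doses):.2f} -> batch-mean skewness {skewness(means):.2f}")
```

Watching these moments (or a Shapiro-Wilk statistic on the batch means) approach their normal values is one practical convergence signal for a skewed risk estimate.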
Convex-based void filling method for CAD-based Monte Carlo geometry modeling
International Nuclear Information System (INIS)
Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results showed improvements provided by CVF for both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems according to CAD models. Automatic void filling is one of the main functions in CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need all of the problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides all of the problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time
Multiple-scaling methods for Monte Carlo simulations of radiative transfer in cloudy atmosphere
International Nuclear Information System (INIS)
Two multiple-scaling methods for Monte Carlo simulations were derived from the integral radiative transfer equation for calculating radiance in cloudy atmospheres accurately and rapidly. The first is to truncate the sharp forward peaks of the phase functions for each order of scattering adaptively. The truncated functions for forward peaks are approximated as quadratic functions; only one prescribed parameter is used to set the maximum truncation fraction for various phase functions. The second is to increase extinction coefficients in optically thin regions for each order of scattering adaptively, which enhances the collision chance in regions where samples are rare. Several one-dimensional and three-dimensional cloud fields were selected to validate the methods. The numerical results demonstrate that the bias errors were below 0.2% for almost all directions except the glory direction (less than 0.4%), and higher numerical efficiency could be achieved when quadratic functions were used. The second method could decrease radiance noise to 0.60% for cumulus and accelerate convergence in optically thin regions. In general, the main advantage of the proposed methods is that the atmospheric optical quantities can be modified adaptively for each order of scattering and important contributions sampled according to the specific atmospheric conditions.
An energy transfer method for 4D Monte Carlo dose calculation.
Siebers, Jeffrey V; Zhong, Hualiang
2008-09-01
This article presents a new method for four-dimensional Monte Carlo dose calculations which properly addresses dose mapping for deforming anatomy. The method, called the energy transfer method (ETM), separates the particle transport and particle scoring geometries: particle transport takes place in the typical rectilinear coordinate system of the source image, while energy deposition scoring takes place in a desired reference image via use of deformable image registration. Dose is the energy deposited per unit mass in the reference image. ETM has been implemented into DOSXYZnrc and compared with a conventional dose interpolation method (DIM) on deformable phantoms. For voxels whose contents merge in the deforming phantom, the doses calculated by ETM are exactly the same as an analytical solution, in contrast to the DIM, which has an average 1.1% dose discrepancy in the beam direction with a maximum error of 24.9% found in the penumbra of a 6 MV beam. The observed DIM error persists even if voxel subdivision is used. The ETM is computationally efficient and will be useful for 4D dose addition and for benchmarking alternative 4D dose addition algorithms. PMID:18841862
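The separation of transport and scoring geometries can be illustrated with a minimal sketch: energy deposits tallied in source-image voxels are scored in reference-image voxels through a deformation map, and dose is energy per unit mass in the reference image. The deposits, the voxel map, and the masses below are all invented for illustration.

```python
# Deposits: (source_voxel, energy) pairs produced by particle transport
# in the rectilinear source image (hypothetical values).
deposits = [(0, 2.0), (1, 1.0), (1, 1.5), (2, 0.5)]

# Deformable registration: source voxel -> reference voxel. Here source
# voxels 1 and 2 merge into reference voxel 1 (deforming anatomy).
source_to_ref = {0: 0, 1: 1, 2: 1}
ref_mass = {0: 1.0, 1: 2.0}   # mass per reference voxel [g], assumed

energy = {v: 0.0 for v in ref_mass}
for voxel, e in deposits:
    # ETM: score the energy in the reference image, not the transport image.
    energy[source_to_ref[voxel]] += e

dose = {v: energy[v] / ref_mass[v] for v in ref_mass}
print(dose)  # {0: 2.0, 1: 1.5} -- energy per unit mass in the reference image
```

Because energy, not dose, is mapped, merged voxels are handled exactly, which is the failure mode of dose interpolation noted above.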
Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method
Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.
2008-06-01
An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges, and the results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
International Nuclear Information System (INIS)
At the present time a Monte Carlo transport computer code is being designed and implemented at Lawrence Livermore National Laboratory to include the transport of: neutrons, photons, electrons and light charged particles as well as the coupling between all species of particles, e.g., photon induced electron emission. Since this code is being designed to handle all particles this approach is called the ''All Particle Method''. The code is designed as a test bed code to include as many different methods as possible (e.g., electron single or multiple scattering) and will be data driven to minimize the number of methods and models ''hard wired'' into the code. This approach will allow changes in the Livermore nuclear and atomic data bases, used to describe the interaction and production of particles, to be used to directly control the execution of the program. In addition this approach will allow the code to be used at various levels of complexity to balance computer running time against the accuracy requirements of specific applications. This paper describes the current design philosophy and status of the code. Since the treatment of neutrons and photons used by the All Particle Method code is more or less conventional, emphasis in this paper is placed on the treatment of electron, and to a lesser degree charged particle, transport. An example is presented in order to illustrate an application in which the ability to accurately transport electrons is important. 21 refs., 1 fig
A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison
International Nuclear Information System (INIS)
Physical analyses of the LWR potential performances with regard to fuel utilization require an important part of the work to be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4 to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4 in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. This benchmark shows two main points. First, independent replicas are an appropriate method to achieve a fair variance estimation when the dominance ratio is near 1. Second, the diffusion operator with two energy groups gives satisfactory results compared to TRIPOLI-4, even with a highly heterogeneous neutron flux map and a harder spectrum
Implementation of the probability table method in a continuous-energy Monte Carlo code system
Energy Technology Data Exchange (ETDEWEB)
Sutton, T.M.; Brown, F.B. [Lockheed Martin Corp., Schenectady, NY (United States)
1998-10-01
RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.
International Nuclear Information System (INIS)
A domain decomposed Monte Carlo communication kernel is used to carry out performance tests to establish the feasibility of using Monte Carlo techniques for practical Light Water Reactor (LWR) core analyses. The results of the prototype code are interpreted in the context of simplified performance models which elucidate key scaling regimes of the parallel algorithm.
International Nuclear Information System (INIS)
The study of particle coagulation and sintering processes is important in a variety of research studies ranging from cell fusion and dust motion to aerosol formation applications. These processes are traditionally simulated using either Monte-Carlo methods or integro-differential equations for particle number density functions. In this paper, we present a computational technique for cases where we believe that accurate closed evolution equations for a finite number of moments of the density function exist in principle, but are not explicitly available. The so-called equation-free computational framework is then employed to numerically obtain the solution of these unavailable closed moment equations by exploiting (through intelligent design of computational experiments) the corresponding fine-scale (here, Monte-Carlo) simulation. We illustrate the use of this method by accelerating the computation of evolving moments of uni- and bivariate particle coagulation and sintering through short simulation bursts of a constant-number Monte-Carlo scheme.
Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks
Ben Hammouda, Chiheb
2015-05-12
In biochemical systems, stochastic effects can be caused by the presence of small numbers of certain reactant molecules. In this setting, discrete state-space and stochastic simulation approaches have proved to be more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases the dynamics of fast and slow time scales can be well separated, a situation characterized by what is called stiffness. For such problems, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. Therefore, implicit tau-leap approximations were developed to improve the numerical stability and provide more efficient simulation algorithms for these systems. One of the interesting tasks for SRNs is to approximate the expected values of some observables of the process at a certain fixed time T. This can be achieved using Monte Carlo (MC) techniques. However, in a recent work, Anderson and Higham (2013) proposed a more computationally efficient method which combines the multi-level Monte Carlo (MLMC) technique with explicit tau-leap schemes. In this MSc thesis, we propose a new fast stochastic algorithm, particularly designed to address stiff systems, for approximating the expected values of some observables of SRNs. In fact, we take advantage of the idea of MLMC techniques and drift-implicit tau-leap approximation to construct a drift-implicit MLMC tau-leap estimator. In addition to accurately estimating the expected values of a given observable of SRNs at a final time T, our proposed estimator ensures numerical stability at a lower cost than the MLMC explicit tau-leap algorithm, for systems including simultaneously fast and slow species. The key contribution of our work is the coupling of two drift-implicit tau-leap paths, which is the basic brick for
Search for the Higgs Boson in the H→ ZZ(*)→4μ Channel in CMS Using a Multivariate Analysis
International Nuclear Information System (INIS)
This note presents a Higgs boson search analysis in the CMS detector of the LHC accelerator (CERN, Geneva, Switzerland) in the H→ ZZ(*)→4μ channel, using a multivariate method. This analysis, based on a Higgs boson mass dependent likelihood constructed from discriminant variables, provides a significant improvement of the Higgs boson discovery potential in a wide mass range with respect to the official analysis published by CMS, which is based on orthogonal cuts independent of the Higgs boson mass. (Author) 8 refs
Comparison of ISO-GUM and Monte Carlo Method for Evaluation of Measurement Uncertainty
International Nuclear Information System (INIS)
To supplement the ISO-GUM method for the evaluation of measurement uncertainty, a simulation program using the Monte Carlo method (MCM) was developed, and the MCM and GUM methods were compared. The results are as follows: (1) even under a non-normal probability distribution of the measurement, MCM provides an accurate coverage interval; (2) even if a probability distribution that emerged from combining a few non-normal distributions looks normal, there are cases in which the actual distribution is not normal, and the non-normality can be determined from the probability distribution of the combined variance; and (3) if type-A standard uncertainties are involved in the evaluation of measurement uncertainty, GUM generally yields an undervalued coverage interval. However, this problem can be solved by the Bayesian evaluation of type-A standard uncertainty. In this case, the effective degrees of freedom for the combined variance are not required in the evaluation of expanded uncertainty, and the appropriate coverage factor for the 95% level of confidence was determined to be 1.96
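The GUM-versus-MCM comparison can be illustrated with a toy measurement model: a product of a normal input and a rectangular (uniform) input, with all values assumed for illustration.

```python
import math
import random

# Measurement model: Y = X1 * X2, with X1 normal and X2 rectangular (uniform).
x1_mean, x1_u = 10.0, 0.2            # normal input: mean and standard uncertainty
x2_lo, x2_hi = 0.9, 1.1              # rectangular distribution limits
x2_mean = (x2_lo + x2_hi) / 2
x2_u = (x2_hi - x2_lo) / (2 * math.sqrt(3))   # std. uncertainty of a rectangle

# ISO-GUM: first-order propagation, u^2(y) = (c1*u1)^2 + (c2*u2)^2
# with sensitivity coefficients c1 = x2_mean, c2 = x1_mean.
u_gum = math.hypot(x2_mean * x1_u, x1_mean * x2_u)

# MCM: sample the inputs, build the output distribution, read off the interval.
random.seed(5)
ys = sorted(random.gauss(x1_mean, x1_u) * random.uniform(x2_lo, x2_hi)
            for _ in range(200_000))
lo, hi = ys[int(0.025 * len(ys))], ys[int(0.975 * len(ys))]
print(f"GUM 95% half-width: {1.96 * u_gum:.3f}; MCM 95% interval: [{lo:.2f}, {hi:.2f}]")
```

Because the dominant input is rectangular, the MCM interval comes out narrower than the normal-based GUM interval with k = 1.96, which is exactly the kind of discrepancy the paper's comparison is about.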
Testing planetary transit detection methods with grid-based Monte-Carlo simulations.
Bonomo, A. S.; Lanza, A. F.
The detection of extrasolar planets by means of the transit method is a rapidly growing field of modern astrophysics. The periodic light dips produced by the passage of a planet in front of its parent star can be used to reveal the presence of the planet itself, to measure its orbital period and relative radius, as well as to perform studies on the outer layers of the planet by analysing the light of the star passing through the planet's atmosphere. We have developed a new method to detect transits of Earth-sized planets in front of solar-like stars that allows us to reduce the impact of stellar microvariability on transit detection. A large Monte Carlo numerical experiment has been designed to test the performance of our approach in comparison with other transit detection methods for stars of different magnitudes and planets of different radius and orbital period, as will be observed by the space experiments CoRoT and Kepler. The large computational load of this experiment has been managed by means of the Grid infrastructure of the COMETA consortium.
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: the collided components of the scalar flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from the Monte Carlo based codes MCNPX and FLUKA.
Yong, Wang; Wen-Gan, Ma; Xiao-Zhou, Li; Lei, Guo
2016-01-01
In this paper we present the full NLO QCD + NLO EW corrections to $Z$-boson pair production in association with a hard jet at the LHC. The subsequent $Z$-boson leptonic decays are included by adopting both the naive NWA and MadSpin methods for comparison. Since $ZZ+{\\rm jet}$ production is an important background for single Higgs boson production and new physics searches at hadron colliders, high-accuracy theoretical predictions for the hadronic production of $ZZ+{\\rm jet}$ are necessary. We present numerical results for the integrated cross section and various kinematic distributions of the final particles, and conclude that it is necessary to take into account the spin correlation and finite-width effects from the $Z$-boson leptonic decays. We also find that the NLO EW correction is quantitatively non-negligible in matching the experimental accuracy at the LHC, and is particularly significant in the high transverse momentum region.
Guerra, Marta L.
2009-02-23
We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^{-p}. Theoretically we find that the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^{(p+2)/2} T^{-d/2}, with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.
Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method
Directory of Open Access Journals (Sweden)
KRSTIVOJEVIC, J. P.
2015-08-01
The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the current transformer performance, the uncertain input data for the CT model were obtained by applying the MC method. In this way, different levels of remanent flux in the CT core are taken into consideration. The algorithm for REF protection, based on phase comparison in the time domain, is tested on the generated CT secondary currents. On the basis of the obtained results, a method of adjusting the triggering threshold in order to ensure safe operation during transients, and thereby improve the algorithm security, has been proposed. The obtained results indicate that power transformer REF protection would be enhanced by using the proposed adjustment of the triggering threshold in the algorithm based on phase comparison in the time domain.
Monte Carlo Methods for Top-k Personalized PageRank Lists and Name Disambiguation
Avrachenkov, Konstantin; Nemirovsky, Danil A; Smirnova, Elena; Sokol, Marina
2010-01-01
We study the problem of quick detection of top-k Personalized PageRank lists. This problem has a number of important applications, such as finding local cuts in large graphs, estimation of similarity distance, and name disambiguation. In particular, we apply our results to construct efficient algorithms for the person name disambiguation problem. We argue that two observations are important when finding top-k Personalized PageRank lists. Firstly, it is crucial to quickly detect the top-k most important neighbours of a node, while the exact order within the top-k list, as well as the exact values of PageRank, are far less crucial. Secondly, a small number of wrong elements in a top-k list does not really degrade its quality, but tolerating them can lead to significant computational savings. Based on these two key observations we propose Monte Carlo methods for fast detection of top-k Personalized PageRank lists. We provide performance evaluation of the proposed methods and supply stopping criteria. Then, we apply ...
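The abstract's key idea, that only top-k membership matters rather than exact PageRank values, can be illustrated with a minimal sketch: simulate restart random walks from the seed node and rank nodes by visit counts. The toy graph, walk count, and restart probability below are illustrative assumptions, not values from the paper.

```python
import random
from collections import Counter

def topk_ppr_mc(graph, seed, alpha=0.15, walks=20000, k=3, rng=None):
    """Estimate the top-k Personalized PageRank neighbours of `seed`
    by simulating restart random walks and counting node visits."""
    rng = rng or random.Random(0)
    visits = Counter()
    for _ in range(walks):
        node = seed
        # Continue the walk until the restart (teleport) event fires.
        while rng.random() > alpha:
            nbrs = graph.get(node)
            if not nbrs:              # dangling node: restart the walk
                break
            node = rng.choice(nbrs)
            visits[node] += 1
    # Rank by visit count; the seed itself is excluded from the list.
    return [n for n, _ in visits.most_common() if n != seed][:k]

# Toy graph: node 0 is tightly linked to 1 and 2, and only loosely to 3.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(topk_ppr_mc(g, seed=0, k=2))
```

The estimate is deliberately crude: visit counts converge slowly to PageRank values, but the *set* of top-k nodes stabilizes much earlier, which is exactly the observation the paper exploits.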
Use of Monte Carlo Bootstrap Method in the Analysis of Sample Sufficiency for Radioecological Data
International Nuclear Information System (INIS)
There are operational difficulties in obtaining samples for radioecological studies. Population data may no longer be available during the study, and obtaining new samples may not be possible. These problems sometimes force the researcher to work with a small number of data, making it difficult to know whether the number of samples will be sufficient to estimate the desired parameter. Hence, an analysis of sample sufficiency is critical. Classical statistical methods are not well suited to analyzing sample sufficiency in radioecology, because naturally occurring radionuclides are randomly distributed in soil, outliers usually arise, and there are gaps with missing values. The present work applies the Monte Carlo bootstrap method to the analysis of sample sufficiency, with quantitative estimation of a single variable: the specific activity of a natural radioisotope present in plants. The pseudo-population was a small sample of 14 values of the specific activity of 226Ra in forage palm (Opuntia spp.). A computational procedure was implemented in the R software to calculate the required number of sample values. The resampling process with replacement took the 14 values of the original sample and produced 10,000 bootstrap samples for each round. The estimated average θ was then calculated for samples with 2, 5, 8, 11 and 14 values randomly selected. The results showed that if the researcher works with only 11 sample values, the average parameter will lie within a confidence interval with 90% probability. (Author)
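The resampling scheme described above can be sketched in a few lines (in Python here rather than R). The 14 activity values below are illustrative stand-ins, since the abstract does not reproduce the actual 226Ra data, and the percentile construction is one common bootstrap variant:

```python
import random
import statistics

def bootstrap_mean_ci(sample, n_boot=10000, level=0.90, rng=None):
    """Percentile-bootstrap confidence interval for the sample mean."""
    rng = rng or random.Random(42)
    means = sorted(
        statistics.mean(rng.choices(sample, k=len(sample)))  # resample with replacement
        for _ in range(n_boot)
    )
    lo = means[int((1 - level) / 2 * n_boot)]
    hi = means[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi

# Illustrative stand-ins for the 14 specific-activity values (the actual
# 226Ra data set is not reproduced in the abstract).
activity = [12.1, 9.8, 11.4, 10.7, 13.2, 9.9, 12.8, 10.1,
            11.9, 10.5, 12.4, 11.1, 10.9, 11.6]
lo, hi = bootstrap_mean_ci(activity)
print(f"90% bootstrap CI for the mean: [{lo:.2f}, {hi:.2f}]")
```

Repeating the same computation with subsamples of 2, 5, 8 or 11 values, as the study does, shows how the interval width shrinks as the sample grows, which is the basis of the sufficiency judgment.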
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
International Nuclear Information System (INIS)
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile
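The potential-refinement idea can be illustrated with a toy stand-in. The sketch below uses iterative Boltzmann inversion, a simpler relative of the IMC scheme, on an exactly solvable one-pair system: the pair-distance distribution is computed directly from the Boltzmann weight, and the coarse-grained potential is corrected by V ← V + kT ln(p_current/p_target). The grid and the hidden "fine-grained" potential are invented for illustration; the real IMC method instead uses a Newton-type inversion with cross-correlations of distribution functions.

```python
import math

def boltzmann_dist(V, beta=1.0):
    """Exact pair-distance distribution for a single pair: p(r) ∝ exp(-βV(r))."""
    w = [math.exp(-beta * v) for v in V]
    Z = sum(w)
    return [x / Z for x in w]

# Hidden "fine-grained" potential that generates the target distribution.
r = [0.7 + 0.05 * i for i in range(27)]
V_true = [4.0 * ((0.8 / x) ** 12 - (0.8 / x) ** 6) for x in r]
p_target = boltzmann_dist(V_true)

# Refine the coarse-grained potential from a zero initial guess with the
# iterative Boltzmann inversion update V <- V + kT ln(p_current / p_target).
V = [0.0] * len(r)
for _ in range(5):
    p = boltzmann_dist(V)
    V = [v + math.log(pc / pt) for v, pc, pt in zip(V, p, p_target)]

err = max(abs(a - b) for a, b in zip(boltzmann_dist(V), p_target))
print(f"max distribution mismatch after refinement: {err:.2e}")
```

In this toy the distribution depends on the potential exactly through the Boltzmann weight, so the update converges essentially in one step; in a real CG simulation each iteration requires a fresh sampling run to measure the current radial distribution function.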
Institute of Scientific and Technical Information of China (English)
ZHANG Jun; GUO Fan
2015-01-01
Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertain tooth modification amount variations on the dynamic behavior of helical planetary gears, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation processes onto the tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
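The combination of a fitted response surface and Monte Carlo sampling can be sketched as follows. The quadratic surface coefficients, nominal modification amounts, and error scatter below are invented placeholders (the paper's fitted regression model is not reproduced in the abstract); the point is that pushing normally distributed inputs through a quadratic surface yields a visibly non-normal output, echoing the paper's finding:

```python
import random
import statistics

def dte_response(x1, x2):
    """Hypothetical fitted quadratic response surface: DTE fluctuation
    as a function of two tooth modification amounts (μm)."""
    return (1.0 - 0.05 * x1 - 0.03 * x2
            + 0.004 * x1 * x1 + 0.003 * x2 * x2 + 0.002 * x1 * x2)

rng = random.Random(7)
# Shift manufacturing/installation scatter onto the modification amounts:
# nominal 10 μm and 8 μm with normally distributed errors.
samples = [dte_response(rng.gauss(10, 1.5), rng.gauss(8, 1.5))
           for _ in range(50000)]
mean = statistics.mean(samples)
# A positive mean-median gap signals the right skew (non-normal output)
# produced by the quadratic terms of the surface.
skew_proxy = mean - statistics.median(samples)
print(f"mean DTE fluctuation: {mean:.4f}, mean - median: {skew_proxy:.4f}")
```

Evaluating the cheap surrogate instead of the full dynamic model is what makes such Monte Carlo uncertainty studies affordable: each of the 50,000 samples costs one polynomial evaluation rather than one dynamics simulation.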
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto
2015-12-01
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730-3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This largely traded financial product allows us to well identify the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least square Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios and thereby accelerates the convergence of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
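The abstract's key computational point, avoiding nested (inner-scenario) simulation by regressing future values on the current state, can be sketched for a stylized swap. Everything numeric below (a Vasicek short rate standing in for Hull-White, a flat default intensity, a quadratic regression basis) is an illustrative assumption, not the paper's calibration:

```python
import math
import random

def lstsq3(X, y):
    """Solve the 3x3 normal equations of a quadratic-basis least-squares fit."""
    A = [[sum(xi[j] * xi[k] for xi in X) for k in range(3)] for j in range(3)]
    b = [sum(xi[j] * yi for xi, yi in zip(X, y)) for j in range(3)]
    for c in range(3):                  # Gaussian elimination with partial pivoting
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for k in range(c, 3):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    out = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        out[r] = (b[r] - sum(A[r][k] * out[k] for k in range(r + 1, 3))) / A[r][r]
    return out

rng = random.Random(1)
steps, paths, T = 10, 4000, 5.0
dt = T / steps
kappa, theta, sigma, r0, K = 0.5, 0.03, 0.01, 0.02, 0.03  # Vasicek rate, fixed leg
lam, R = 0.02, 0.4                                         # default intensity, recovery

# Simulate short-rate paths (Euler discretization of the Vasicek model).
rates = []
for _ in range(paths):
    p = [r0]
    for _ in range(steps):
        r = p[-1]
        p.append(r + kappa * (theta - r) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
    rates.append(p)

cva = 0.0
for i in range(1, steps):
    t = i * dt
    X, y = [], []
    for p in rates:
        disc, val = 1.0, 0.0
        for j in range(i, steps):       # realized discounted remaining swap cashflows
            disc *= math.exp(-p[j] * dt)
            val += disc * (p[j] - K) * dt
        X.append((1.0, p[i], p[i] ** 2))
        y.append(val)
    c = lstsq3(X, y)   # LSMC regression: E[value | r_t] with no inner scenarios
    epe = sum(max(c[0] + c[1] * x[1] + c[2] * x[2], 0.0) for x in X) / paths
    cva += (1 - R) * lam * math.exp(-lam * t) * dt * epe   # unilateral CVA increment
print(f"CVA per unit notional: {cva:.6f}")
```

One regression per exposure date replaces an entire inner simulation per outer path, which is exactly the convergence-speed advantage the abstract attributes to least square Monte Carlo.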
Simulation of Watts Bar initial startup tests with continuous energy Monte Carlo methods
International Nuclear Information System (INIS)
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients. (author)
Kadoura, Ahmad
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, in particular Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data existing in the literature; both models showed close agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with experimental ones, a good fit with small deviations was obtained. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
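The canonical-ensemble Monte Carlo machinery described above can be sketched generically. The block below is a bare-bones NVT Metropolis sampler for a Lennard-Jones fluid in reduced units (ε = σ = 1); the particle count, box size, temperature, and sweep budget are arbitrary small values for illustration, not the thesis' methane setup:

```python
import math
import random

def pair_e(d2):
    """Lennard-Jones pair energy in reduced units (epsilon = sigma = 1)."""
    inv6 = 1.0 / d2 ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def one_energy(pos, i, L):
    """Interaction energy of particle i with all others (minimum image)."""
    e = 0.0
    for j, pj in enumerate(pos):
        if j == i:
            continue
        d2 = 0.0
        for a in range(3):
            dx = pos[i][a] - pj[a]
            dx -= L * round(dx / L)   # periodic boundary, minimum image
            d2 += dx * dx
        e += pair_e(d2)
    return e

def metropolis_nvt(n=27, L=6.0, T=2.0, sweeps=300, dmax=0.2, seed=3):
    """Canonical (NVT) Metropolis sampling of an LJ fluid; returns the
    mean potential energy per particle and the move acceptance rate."""
    rng = random.Random(seed)
    pos = [[(i % 3 + 0.5) * L / 3, (i // 3 % 3 + 0.5) * L / 3,
            (i // 9 + 0.5) * L / 3] for i in range(n)]   # simple cubic start
    acc, samples = 0, []
    for s in range(sweeps):
        for i in range(n):
            old, e_old = pos[i][:], one_energy(pos, i, L)
            pos[i] = [x + rng.uniform(-dmax, dmax) for x in old]
            de = one_energy(pos, i, L) - e_old
            if de <= 0 or rng.random() < math.exp(-de / T):
                acc += 1               # accept the trial displacement
            else:
                pos[i] = old           # reject: restore the old position
        if s >= sweeps // 2:           # sample after half the sweeps (equilibration)
            samples.append(sum(one_energy(pos, k, L) for k in range(n)) / (2 * n))
    return sum(samples) / len(samples), acc / (sweeps * n)

u, rate = metropolis_nvt()
print(f"<U>/N = {u:.3f}, acceptance = {rate:.2f}")
```

Pressure estimation (as in the thesis) would add a virial accumulator to the same loop; the Gibbs-ensemble runs below the critical temperature additionally require volume-exchange and particle-transfer moves between two boxes.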
Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions
Ricketson, Lee
2013-10-01
We discuss the use of multi-level Monte Carlo (MLMC) schemes, originally introduced by Giles for financial applications, for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε^-3), for standard Langevin methods and binary collision algorithms, to the theoretically optimal scaling O(ε^-2) for the Milstein discretization, and to O(ε^-2 (log ε)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
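The MLMC idea, telescoping the expectation over a hierarchy of time-step refinements with coupled coarse/fine paths, can be sketched on a scalar SDE. Geometric Brownian motion stands in here for the Langevin collision dynamics, and the level and sample counts are arbitrary illustrative choices rather than the variance-optimal allocation:

```python
import math
import random

def mlmc_gbm(levels=4, n0=2000, T=1.0, x0=1.0, mu=0.05, sig=0.2, seed=11):
    """Multi-level Monte Carlo estimate of E[X_T] for geometric Brownian
    motion under Euler-Maruyama; level l uses 2**l steps, and fine/coarse
    paths on each correction level share the same Brownian increments."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(levels + 1):
        nf = 2 ** l
        dt = T / nf
        n_samp = max(n0 // 2 ** l, 100)      # fewer samples on costlier levels
        total = 0.0
        for _ in range(n_samp):
            xf = xc = x0
            dw_sum = 0.0
            for step in range(nf):
                dw = math.sqrt(dt) * rng.gauss(0, 1)
                xf += mu * xf * dt + sig * xf * dw
                dw_sum += dw
                if l > 0 and step % 2 == 1:  # coarse path: step size 2*dt
                    xc += mu * xc * 2 * dt + sig * xc * dw_sum
                    dw_sum = 0.0
            # Level 0 contributes E[X^0]; level l>0 contributes E[X^l - X^(l-1)].
            total += xf if l == 0 else xf - xc
        est += total / n_samp
    return est

est = mlmc_gbm()
print(f"MLMC estimate: {est:.4f} (exact E[X_T] = {math.exp(0.05):.4f})")
```

Because the coupled difference X^l - X^(l-1) has small variance, most samples can be spent on the cheap coarse levels; this allocation is the source of the cost reduction from O(ε^-3) toward O(ε^-2) quoted in the abstract.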
Adjoint-based deviational Monte Carlo methods for phonon transport calculations
Péraud, Jean-Philippe M.; Hadjiconstantinou, Nicolas G.
2015-06-01
In the field of linear transport, adjoint formulations exploit linearity to derive powerful reciprocity relations between a variety of quantities of interest. In this paper, we develop an adjoint formulation of the linearized Boltzmann transport equation for phonon transport. We use this formulation for accelerating deviational Monte Carlo simulations of complex, multiscale problems. Benefits include significant computational savings via direct variance reduction, or by enabling formulations which allow more efficient use of computational resources, such as formulations which provide high resolution in a particular phase-space dimension (e.g., spectral). We show that the proposed adjoint-based methods are particularly well suited to problems involving a wide range of length scales (e.g., nanometers to hundreds of microns) and lead to computational methods that can calculate quantities of interest with a cost that is independent of the system characteristic length scale, thus removing the traditional stiffness of kinetic descriptions. Applications to problems of current interest, such as simulation of transient thermoreflectance experiments or spectrally resolved calculation of the effective thermal conductivity of nanostructured materials, are presented and discussed in detail.
Systematic hierarchical coarse-graining with the inverse Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Lyubartsev, Alexander P., E-mail: alexander.lyubartsev@mmk.su.se [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Naômé, Aymeric, E-mail: aymeric.naome@unamur.be [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Vercauteren, Daniel P., E-mail: daniel.vercauteren@unamur.be [UCPTS Division, University of Namur, 61 Rue de Bruxelles, B 5000 Namur (Belgium); Laaksonen, Aatto, E-mail: aatto@mmk.su.se [Division of Physical Chemistry, Arrhenius Laboratory, Stockholm University, S 106 91 Stockholm (Sweden); Science for Life Laboratory, 17121 Solna (Sweden)
2015-12-28
We outline our coarse-graining strategy for linking micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained in detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at less accurate level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package MagiC is developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (bacterial LiaR regulator bound to a 26 base pairs DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair-potentials are used directly as look-up tables but here we have fitted them to five Gaussians and a repulsive wall. Results show stable association between DNA and the model protein as well as similar position fluctuation profile.
Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Godfrey, Andrew T [ORNL; Gehin, Jess C [ORNL; Bekar, Kursat B [ORNL; Celik, Cihangir [ORNL
2014-01-01
The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numeric reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.
Czech Academy of Sciences Publication Activity Database
Zapoměl, Jaroslav; Ferfecki, Petr; Kozánek, Jan
2014-01-01
Roč. 8, č. 1 (2014), s. 129-138. ISSN 1802-680X Institutional support: RVO:61388998 Keywords : uncertain parameters of rigid motors * magnetorheological dampers * force transmission * Monte Carlo method Subject RIV: BI - Acoustics http://www.kme.zcu.cz/acm/acm/article/view/247/275
International Nuclear Information System (INIS)
Monte Carlo criticality calculation allows one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burn-up profile, complete reactor core, ...) may induce biased estimations for keff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is automated, the sampling of fission sites is modified, and the random walk of neutrons is altered using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to keff and Shannon entropy. It relies on modeling stationary series by an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to every output of an iterative Monte Carlo. The methods developed in this thesis are tested on different test cases. (author)
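The transient-suppression idea can be mimicked with a much simpler stand-in than the thesis' Student-bridge test: advance a candidate cut point until the remaining series shows low lag-1 autocorrelation and no significant shift between its two halves. The synthetic keff-like series and all thresholds below are invented for illustration.

```python
import random
import statistics

def detect_transient(series, alpha=3.0, rho_max=0.4):
    """Simplified stand-in for automated transient detection in an
    iterative Monte Carlo output series (keff, Shannon entropy): find
    the earliest cut after which the tail has low lag-1 autocorrelation
    AND no significant first-half/second-half mean shift."""
    n = len(series)
    for cut in range(0, n - 40, 5):
        tail = series[cut:]
        m = len(tail) // 2
        mu1 = statistics.mean(tail[:m])
        mu2 = statistics.mean(tail[m:])
        mu = statistics.mean(tail)
        var = statistics.pvariance(tail)
        # Sample lag-1 autocorrelation: large while the transient persists.
        rho = sum((a - mu) * (b - mu) for a, b in zip(tail, tail[1:]))
        rho /= len(tail) * var
        se = (var / m) ** 0.5
        if abs(rho) < rho_max and abs(mu1 - mu2) < alpha * se:
            return cut
    return n - 40

# Synthetic keff-like series: decaying initialization transient + noise.
rng = random.Random(9)
series = [1.0 + 0.05 * 0.97 ** i + rng.gauss(0, 0.001) for i in range(400)]
cut = detect_transient(series)
print(f"discarded initialization cycles: {cut}")
```

The thesis' AR(1)/Student-bridge formulation is statistically sharper; this sketch only shows the overall structure (scan cut points, test the tail for stationarity, discard everything before the first accepted cut).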
Monte-Carlo Methods for Pricing European-style Options
Institute of Scientific and Technical Information of China (English)
张丽虹
2015-01-01
We discuss Monte-Carlo methods for pricing European options. Based on the famous Black-Scholes model and risk-neutral valuation, we first discuss in detail how to use the Monte-Carlo simulation method to price standard European options. Methods to improve the accuracy of the Monte-Carlo simulation, including the introduction of control variates and antithetic variates, are also discussed. Finally, we apply the proposed Monte-Carlo methods to price standard European options, European binary options, European lookback options and European Asian options, and discuss the advantages and disadvantages of the respective methods.
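The variance-reduction step discussed in the abstract can be sketched for a standard European call: each standard normal draw z is paired with -z and the two discounted payoffs are averaged. The parameter values are arbitrary; for this setting the closed-form Black-Scholes price is about 10.45, which the estimate should approach:

```python
import math
import random

def bs_call_mc(S0, K, r, sigma, T, n=100000, antithetic=True, seed=5):
    """Monte Carlo price of a European call under Black-Scholes dynamics,
    optionally pairing each draw z with -z (antithetic variates)."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0, 1)
        payoff = max(S0 * math.exp(drift + vol * z) - K, 0.0)
        if antithetic:
            # Average with the mirrored path; negative correlation between
            # the two payoffs reduces the variance of the estimator.
            payoff = 0.5 * (payoff + max(S0 * math.exp(drift - vol * z) - K, 0.0))
        total += payoff
    return disc * total / n

price = bs_call_mc(100, 100, 0.05, 0.2, 1.0)
print(f"MC call price: {price:.3f}")
```

A control-variate variant would instead subtract a correlated quantity with known expectation (for example, the terminal stock price, whose risk-neutral mean is S0·e^(rT)) from the payoff; the lookback and Asian options mentioned in the abstract require simulating full price paths rather than only the terminal value.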
Algorithms for modeling radioactive decays of π- and μ-mesons by the Monte-Carlo method
International Nuclear Information System (INIS)
Effective algorithms for modeling the decays μ → eννγ and π → eνγ by the Monte-Carlo method are described. The algorithms developed made it possible to considerably reduce the time needed to calculate the efficiency of decay detection. They were used for modeling in experiments on the study of rare pion and muon decays
Institute of Scientific and Technical Information of China (English)
Jiang Wei; Xiang Haige
2004-01-01
This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.
DISCOVERY OF A ZZ CETI IN THE KEPLER MISSION FIELD
International Nuclear Information System (INIS)
We report the discovery of the first identified pulsating DA white dwarf, WD J1916+3938 (Kepler ID 4552982), in the field of the Kepler mission. This ZZ Ceti star was first identified through ground-based, time-series photometry, and follow-up spectroscopy confirms that it is a hydrogen-atmosphere white dwarf with T_eff = 11,129 ± 115 K and log g = 8.34 ± 0.06, placing it within the empirical ZZ Ceti instability strip. The object shows up to 0.5% amplitude variability at several periods between 800 and 1450 s. Extended Kepler observations of WD J1916+3938 could yield the best light curve, to date, of any pulsating white dwarf, allowing us to directly study the interior of an evolved object representative of the fate of the majority of stars in our Galaxy.
Verification of Burned Core Modeling Method for Monte Carlo Simulation of HANARO
International Nuclear Information System (INIS)
The reactor core has been managed well by the HANARO core management system called HANAFMS. The heterogeneity of the irradiation device and core made the neutronic analysis difficult and sometimes doubtful. To overcome this deficiency, MCNP was utilized in the neutron transport calculation of HANARO. For the most part, an MCNP model assuming that all fuels are fresh fuel assemblies showed acceptable analysis results for the design of experimental devices and facilities. However, it sometimes revealed insufficient results for designs that require good accuracy, such as neutron transmutation doping (NTD), because it did not consider the flux variation induced by depletion of the fuel. In this study, a previously proposed depleted-core modeling method was applied to build a burned-core model of HANARO and verified through a comparison of the calculated results from the depleted-core model with those from an experiment. The modeling method was verified by comparing the neutron flux distribution obtained by the zirconium activation method and the reaction rate of ³⁰Si(n,γ)³¹Si obtained by a resistivity measurement method. As a result, the reaction rate of ³⁰Si(n,γ)³¹Si agreed well, with about a 3% difference. It was therefore concluded that the modeling method and the resulting depleted-core model developed in this study can be a very reliable tool for the design of the planned experimental facility and a prediction of its performance in HANARO.
Verification of Burned Core Modeling Method for Monte Carlo Simulation of HANARO
Energy Technology Data Exchange (ETDEWEB)
Cho, Dongkeun; Kim, Myongseop [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
The reactor core has been managed well by the HANARO core management system called HANAFMS. The heterogeneity of the irradiation device and core made the neutronic analysis difficult and sometimes doubtful. To overcome this deficiency, MCNP was utilized in the neutron transport calculation of HANARO. For the most part, an MCNP model assuming that all fuels are fresh fuel assemblies showed acceptable analysis results for the design of experimental devices and facilities. However, it sometimes revealed insufficient results for designs that require good accuracy, such as neutron transmutation doping (NTD), because it did not consider the flux variation induced by depletion of the fuel. In this study, a previously proposed depleted-core modeling method was applied to build a burned-core model of HANARO and verified through a comparison of the calculated results from the depleted-core model with those from an experiment. The modeling method was verified by comparing the neutron flux distribution obtained by the zirconium activation method and the reaction rate of ³⁰Si(n,γ)³¹Si obtained by a resistivity measurement method. As a result, the reaction rate of ³⁰Si(n,γ)³¹Si agreed well, with about a 3% difference. It was therefore concluded that the modeling method and the resulting depleted-core model developed in this study can be a very reliable tool for the design of the planned experimental facility and a prediction of its performance in HANARO.
Energy Technology Data Exchange (ETDEWEB)
Agudelo-Giraldo, J.D. [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia); Restrepo-Parra, E., E-mail: erestrepopa@unal.edu.co [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia); Restrepo, J. [Grupo de Magnetismo y Simulación, Instituto de Física, Universidad de Antioquia, A.A. 1226, Medellín (Colombia)
2015-10-01
The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La2/3Ca1/3MnO3, which depend on the Mn ion vacancies as the external magnetic field increases. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, with L=30 umc (units of magnetic cells) for its dimension in the x-y plane and a thickness of d=12 umc. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the response to the external applied magnetic field. The system that was considered contains mixed-valence bonds: Mn^(3+eg')-O-Mn^(3+eg), Mn^(3+eg)-O-Mn^(4+d3) and Mn^(3+eg')-O-Mn^(4+d3). The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions T_C (Curie temperature) and T_MI (metal-insulator temperature) are similar, whereas with the increase in the vacancy percentage, T_MI presented lower values than T_C. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below T_MI. Resistivity loops were also observed, which show a direct correlation with the hysteresis loops of magnetization at temperatures below T_C. - Highlights: • Changes in the resistivity of FM materials as a function of the temperature and external magnetic field can be obtained by the Monte Carlo method, Metropolis algorithm, classical Heisenberg and Kronig-Penney approximation for magnetic clusters. • Increases in the magnetoresistive effect were observed at temperatures below T_MI by the vacancies effect. • The resistive hysteresis
International Nuclear Information System (INIS)
This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. - Highlights: ► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. ► 3D Dose distributions for NPC patients have been verified by the Monte Carlo method. ► Doses predicted by the Monte Carlo method matched closely with those by the TPS. ► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. ► Critical organ doses should be confirmed to avoid overdose to normal organs
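The efficiency-map construction described above amounts to a pixel-wise fluence ratio followed by particle reweighting. The toy 2×2 fluence grids and the particle list below are invented to show the mechanics only; a real phase-space file carries far more per-particle state than the (x, y, weight) triple used here:

```python
def efficiency_map(imrt_fluence, open_fluence):
    """Pixel-wise ratio of IMRT to open-field energy fluence (the
    efficiency map of the abstract); zero wherever the open field is zero."""
    return [[i / o if o > 0 else 0.0 for i, o in zip(ri, ro)]
            for ri, ro in zip(imrt_fluence, open_fluence)]

def reweight(particles, eff):
    """Rescale open-field phase-space particle weights by the efficiency
    map value at each particle's pixel (x, y, weight)."""
    return [(x, y, w * eff[y][x]) for x, y, w in particles]

# Toy 2x2 fields: the IMRT field blocks the right column and attenuates
# the lower-left pixel to half intensity.
open_f = [[100.0, 100.0], [100.0, 100.0]]
imrt_f = [[100.0, 0.0], [50.0, 0.0]]
eff = efficiency_map(imrt_f, open_f)
parts = [(0, 0, 1.0), (1, 0, 1.0), (0, 1, 1.0), (1, 1, 1.0)]
print(reweight(parts, eff))  # → [(0, 0, 1.0), (1, 0, 0.0), (0, 1, 0.5), (1, 1, 0.0)]
```

Reusing the open-field phase-space file with redistributed weights avoids rerunning the expensive accelerator-head simulation for every patient-specific intensity pattern, which is the practical point of the measurement-based approach.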
Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
Wada, Takao
2014-07-01
A particle motion considering thermophoretic force is simulated by using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The problem of thermophoresis simulation is the computation time, which is proportional to the collision frequency. Note that the time step interval becomes very small for simulations considering the motion of large particles. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, where the collision weight factor is the number of molecules colliding with a particle in one collision event. A large time step interval is made possible by the collision weight factor; it is about a million times longer than the conventional time step interval of the DSMC method when the particle size is 1 μm. Therefore, the computation time is reduced to about one-millionth. We simulate the graphite particle motion considering thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module) with the above collision weight factor, where DSMC-Neutrals is commercial software adopting the DSMC method. The size and the shape of the particle are 1 μm and spherical, respectively. Particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.
Quantifying uncertainties in pollutant mapping studies using the Monte Carlo method
Tan, Yi; Robinson, Allen L.; Presto, Albert A.
2014-12-01
Routine air monitoring provides accurate measurements of annual average concentrations of air pollutants, but the low density of monitoring sites limits its capability to capture intra-urban variation. Pollutant mapping studies measure air pollutants at a large number of sites during short periods. However, their short duration can cause substantial uncertainty in reproducing annual mean concentrations. In order to quantify this uncertainty for existing sampling strategies and investigate methods to improve future studies, we conducted Monte Carlo experiments with nationwide monitoring data from the EPA Air Quality System. Typical fixed sampling designs have much larger uncertainties than previously assumed, and produce accurate estimates of annual average pollution concentrations approximately 80% of the time. Mobile sampling has difficulty estimating long-term exposures for individual sites, but performs better for site groups. The accuracy and the precision of a given design decrease when data variation increases, indicating challenges at sites intermittently impacted by local sources such as traffic. Correcting measurements with reference sites does not completely remove the uncertainty associated with short-duration sampling. Using reference sites with the addition method can better account for temporal variations than the multiplication method. We propose feasible methods for future mapping studies to reduce uncertainties in estimating annual mean concentrations. Future fixed sampling studies should conduct two separate 1-week long sampling periods in all 4 seasons. Mobile sampling studies should estimate annual mean concentrations for exposure groups with five or more sites. Fixed and mobile sampling designs have comparable probabilities in ordering two sites, so they may have similar capabilities in predicting pollutant spatial variations. Simulated sampling designs have large uncertainties in reproducing seasonal and diurnal variations at individual
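The Monte Carlo experiment described above can be sketched on synthetic data: generate a year of daily concentrations, repeatedly draw a short-duration sampling design, and count how often the design's estimate lands within a tolerance of the true annual mean. The seasonal amplitude, noise model, and 25% tolerance below are invented placeholders, not values from the paper:

```python
import math
import random

def sampling_uncertainty(days=364, n_trials=2000, weeks=4, seed=2):
    """Fraction of short-duration sampling designs whose estimate falls
    within 25% of the true annual mean of a synthetic seasonal series."""
    rng = random.Random(seed)
    # Synthetic daily concentrations: seasonal cycle plus lognormal noise.
    conc = [10 + 4 * math.sin(2 * math.pi * d / days)
            + rng.lognormvariate(0, 0.5) for d in range(days)]
    truth = sum(conc) / days
    q = days // weeks
    hits = 0
    for _ in range(n_trials):
        sampled = []
        for s in range(weeks):            # one 1-week window per season
            start = s * q + rng.randrange(q - 7)
            sampled.extend(conc[start:start + 7])
        est = sum(sampled) / len(sampled)
        hits += abs(est - truth) <= 0.25 * truth
    return hits / n_trials

frac = sampling_uncertainty()
print(f"fraction of designs within 25% of the annual mean: {frac:.2f}")
```

Spreading the windows across seasons lets the seasonal cycle largely cancel; dropping `weeks` to 1 or 2 in the same experiment shows how concentrating the sampling period inflates the error, which is the kind of design comparison the paper performs on real EPA data.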
Analysis of the Tandem Calibration Method for Kerma Area Product Meters Via Monte Carlo Simulations
International Nuclear Information System (INIS)
The IAEA recommends that uncertainties of dosimetric measurements in diagnostic radiology for risk assessment and quality assurance should be less than 7% on the confidence level of 95%. This accuracy is difficult to achieve with kerma area product (KAP) meters currently used in clinics. The reasons range from the high energy dependence of KAP meters to the wide variety of configurations in which KAP meters are used and calibrated. The tandem calibration method introduced by Poeyry, Komppa and Kosunen in 2005 has the potential to make the calibration procedure simpler and more accurate compared to the traditional beam-area method. In this method, two positions of the reference KAP meter are of interest: (a) a position close to the field KAP meter and (b) a position 20 cm above the couch. In the close position, the distance between the two KAP meters should be at least 30 cm to reduce the effect of back scatter. For the other position, which is recommended for the beam-area calibration method, the distance of 70 cm between the KAP meters was used in this study. The aim of this work was to complement existing experimental data comparing the two configurations with Monte Carlo (MC) simulations. In a geometry consisting of a simplified model of the VacuTec 70157 type KAP meter, the MCNP code was used to simulate the kerma area product, PKA, for the two (close and distant) reference planes. It was found that PKA values for the tube voltage of 40 kV were about 2.5% lower for the distant plane than for the close one. For higher tube voltages, the difference was smaller. The difference was mainly caused by attenuation of the X ray beam in air. Since the problem with high uncertainties in PKA measurements is also caused by the current design of X ray machines, possible solutions are discussed. (author)
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
International Nuclear Information System (INIS)
Monte Carlo codes GEANT 4 and MUSIC have been used to calculate background components of low-level HPGe gamma-ray spectrometers operating in a shallow underground laboratory. The simulated background gamma-ray spectra were found to be comparable with spectra measured at the Ogoya underground laboratory, which operates at a depth of 270 m w.e. (water equivalent). The Monte Carlo simulations proved to be a useful approach for estimating the background characteristics of HPGe spectrometers before their construction. (author)
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate
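The multilevel Monte Carlo telescoping sum used above can be illustrated on a toy problem. This is a minimal sketch that substitutes an Euler-Maruyama discretization of geometric Brownian motion for the multiphase flow solver; the level counts and sample allocations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem standing in for the flow simulations: estimate E[S_T] for
# dS = mu*S dt + sigma*S dW. Level l uses 2**l time steps; coarse and fine
# paths on each level share the same Brownian increments (the coupling).
mu, sigma, T, S0 = 0.05, 0.2, 1.0, 1.0

def level_estimator(l, n_samples):
    """Mean of P_l - P_{l-1} over n_samples coupled paths (P_{-1} := 0)."""
    nf = 2 ** l
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), (n_samples, nf))
    Sf = np.full(n_samples, S0)
    for i in range(nf):
        Sf = Sf * (1 + mu * dt + sigma * dW[:, i])
    if l == 0:
        return Sf.mean()
    Sc = np.full(n_samples, S0)
    dtc = T / (nf // 2)
    for i in range(nf // 2):  # coarse path sums two fine increments
        Sc = Sc * (1 + mu * dtc + sigma * (dW[:, 2 * i] + dW[:, 2 * i + 1]))
    return (Sf - Sc).mean()

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with most
# samples on the cheap coarse level and few on the expensive fine levels.
samples_per_level = [40_000, 10_000, 2_500, 600]
estimate = sum(level_estimator(l, n) for l, n in enumerate(samples_per_level))
print(f"MLMC estimate of E[S_T]: {estimate:.3f}  (exact: {S0 * np.exp(mu * T):.3f})")
```

The variance of the level corrections decays with level, which is what lets MLMC shift work to coarse (here, coarse-time-step; in the paper, coarse-grid multiscale) solves.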
Dosimetric validation of Acuros XB with Monte Carlo methods for photon dose calculations
International Nuclear Information System (INIS)
Purpose: The dosimetric accuracy of the recently released Acuros XB advanced dose calculation algorithm (Varian Medical Systems, Palo Alto, CA) is investigated for single radiation fields incident on homogeneous and heterogeneous geometries, and a comparison is made to the analytical anisotropic algorithm (AAA). Methods: Ion chamber measurements for the 6 and 18 MV beams within a range of field sizes (from 4.0x4.0 to 30.0x30.0 cm2) are used to validate Acuros XB dose calculations within a unit density phantom. The dosimetric accuracy of Acuros XB in the presence of lung, low-density lung, air, and bone is determined using BEAMnrc/DOSXYZnrc calculations as a benchmark. Calculations using the AAA are included for reference to a current superposition/convolution standard. Results: Basic open field tests in a homogeneous phantom reveal an Acuros XB agreement with measurement to within ±1.9% in the inner field region for all field sizes and energies. Calculations on a heterogeneous interface phantom were found to agree with Monte Carlo calculations to within ±2.0%(σMC=0.8%) in lung (ρ=0.24 g cm-3) and within ±2.9%(σMC=0.8%) in low-density lung (ρ=0.1 g cm-3). In comparison, differences of up to 10.2% and 17.5% in lung and low-density lung were observed in the equivalent AAA calculations. Acuros XB dose calculations performed on a phantom containing an air cavity (ρ=0.001 g cm-3) were found to be within the range of ±1.5% to ±4.5% of the BEAMnrc/DOSXYZnrc calculated benchmark (σMC=0.8%) in the tissue above and below the air cavity. A comparison of Acuros XB dose calculations performed on a lung CT dataset with a BEAMnrc/DOSXYZnrc benchmark shows agreement within ±2%/2mm and indicates that the remaining differences are primarily a result of differences in physical material assignments within a CT dataset. Conclusions: By considering the fundamental particle interactions in matter based on theoretical interaction cross sections, the Acuros XB algorithm is
Evaluation of functioning of an extrapolation chamber using Monte Carlo method
International Nuclear Information System (INIS)
The extrapolation chamber is a parallel-plate chamber of variable volume based on the Bragg-Gray theory. It determines the absorbed dose absolutely and with high accuracy by extrapolating the measured ionization current to a null distance between the electrodes. This chamber is used for dosimetry of external beta rays in radiation protection. This paper presents a simulation evaluating the functioning of a PTW extrapolation chamber type 23392, using the MCNPX Monte Carlo code. In the simulation, the fluence in the air collector cavity of the chamber was obtained. The influence of the materials composing the chamber on its response to a beta radiation beam was also analysed, and the contributions of primary and secondary radiation were compared. The energy deposition in the air collector cavity was calculated for different depths. The component with the highest energy deposition is the polymethyl methacrylate block. The energy deposition in the air collector cavity is greatest at a chamber depth of 2500 μm, with a value of 9.708E-07 MeV. The fluence in the air collector cavity decreases with depth; its value is 1.758E-04 1/cm2 at a chamber depth of 500 μm. The values reported are per individual electron and photon history. Graphics of the simulated parameters are presented in the paper. (Author)
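The extrapolation principle the chamber implements can be sketched numerically: the ionization current is measured at several electrode separations and fit linearly, and the slope as the gap tends to zero yields the absorbed dose rate via the Bragg-Gray relation. The currents below are illustrative values, not PTW 23392 data.

```python
import numpy as np

# Illustrative ionization currents measured at several electrode gaps; for a
# well-behaved chamber the current is nearly linear in the gap, and the slope
# dI/dx extrapolated to zero gap enters the Bragg-Gray dose expression.
gaps = np.array([0.5, 1.0, 1.5, 2.0, 2.5])            # mm
currents = np.array([0.52, 1.01, 1.55, 2.06, 2.54])   # pA

slope, intercept = np.polyfit(gaps, currents, 1)      # linear extrapolation
print(f"dI/dx at null distance: {slope:.3f} pA/mm (intercept {intercept:.3f} pA)")
```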
Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method
International Nuclear Information System (INIS)
In terms of the stochastic process of the quantum-mechanical version of the Markov chain Monte Carlo method (MCMC), we analytically derive macroscopically deterministic flow equations of order parameters such as spontaneous magnetization in infinite-range (d(= ∞)-dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle-point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by means of computer simulations for finite-size systems and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (DRT), we derive the zero-temperature flow equation of the image restoration measure, which shows 'non-monotonic' behaviour in its time evolution.
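The classical limit mentioned above can be sketched for the infinite-range Ising model, where the deterministic order-parameter flow is dm/dt = -m + tanh(m/T). This is a minimal illustration of a finite-N Glauber simulation tracking that flow, not the quantum (Trotter-decomposed) calculation of the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Infinite-range Ising model under Glauber (heat-bath) dynamics. With J = 1
# and field h = m (the mean field), the deterministic flow of the
# magnetization is dm/dt = -m + tanh(m/T), time measured in sweeps.
N, T, t_max, m0 = 5000, 0.5, 3.0, 0.2
spins = np.where(rng.random(N) < (1 + m0) / 2, 1.0, -1.0)

for _ in range(int(t_max * N)):           # t_max sweeps of single-spin updates
    i = rng.integers(N)
    h = spins.mean()                      # mean field from all spins
    p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))   # heat-bath probability
    spins[i] = 1.0 if rng.random() < p_up else -1.0

# Integrate the deterministic flow with Euler steps for comparison.
m_det, n_steps = m0, 3000
for _ in range(n_steps):
    m_det += (t_max / n_steps) * (-m_det + np.tanh(m_det / T))

print(f"MC magnetization: {spins.mean():.2f}, deterministic flow: {m_det:.2f}")
```

Below the critical temperature (T < 1 here) both trajectories flow toward the stable saddle-point magnetization, as the steady-state result in the abstract predicts.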
Analysis of probabilistic short run marginal cost using Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Gutierrez-Alcaraz, G.; Navarrete, N.; Tovar-Hernandez, J.H.; Fuerte-Esquivel, C.R. [Inst. Tecnologico de Morelia, Michoacan (Mexico). Dept. de Ing. Electrica y Electronica; Mota-Palomino, R. [Inst. Politecnico Nacional (Mexico). Escuela Superior de Ingenieria Mecanica y Electrica
1999-11-01
The structure of the Electricity Supply Industry is undergoing dramatic changes to provide new service options. The main aim of this restructuring is to allow generating units the freedom of selling electricity to anybody they wish at a price determined by market forces. Several methodologies have been proposed in order to quantify different costs associated with those new services offered by electrical utilities operating under a deregulated market. The new wave of pricing is heavily influenced by economic principles designed to price products to elastic market segments on the basis of marginal costs. Hence, spot pricing provides the economic structure for many new services. At the same time, the pricing is influenced by uncertainties associated with the electric system state variables which define its operating point. In this paper, nodal probabilistic short run marginal costs are calculated, considering the load, the production cost and the availability of generators as random variables. The effect of the electrical network is evaluated using linearized models. A thermal economic dispatch is used to simulate each operational condition generated by the Monte Carlo method on a small fictitious power system in order to assess the effect of the random variables on energy trading. First, this is carried out by introducing each random variable one by one, and finally by considering the random interaction of all of them.
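A minimal sketch of the Monte Carlo part of such a study, with a hypothetical three-unit merit-order dispatch standing in for the thermal economic dispatch; the outage rates, costs, load statistics, and value of lost load are illustrative, and the network model is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical units: (capacity MW, marginal cost $/MWh, forced outage rate).
units = [(100.0, 12.0, 0.05), (80.0, 20.0, 0.08), (60.0, 35.0, 0.10)]

def srmc_one_draw():
    """Merit-order dispatch for one random system state; the short-run
    marginal cost is the cost of the marginal unit (or VOLL if short)."""
    load = rng.normal(150.0, 20.0)            # random load
    served = 0.0
    for cap, cost, outage_rate in sorted(units, key=lambda u: u[1]):
        if rng.random() < outage_rate:
            continue                          # unit unavailable this draw
        served += cap
        if served >= load:
            return cost
    return 1000.0  # illustrative value of lost load when capacity is short

draws = np.array([srmc_one_draw() for _ in range(20_000)])
print(f"expected SRMC: {draws.mean():.1f} $/MWh, "
      f"P(SRMC > 20 $/MWh): {np.mean(draws > 20):.3f}")
```

The empirical distribution of `draws` is the probabilistic marginal cost the paper computes; introducing the random variables one at a time, as the authors do, just means freezing the others at their expected values.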
Evaluation of the scattered radiation components produced in a gamma camera using Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Polo, Ivon Oramas, E-mail: ivonoramas67@gmail.com [Department of Nuclear Engineering, Faculty of Nuclear Sciences and Technologies, Higher Institute of Applied Science and Technology (InSTEC), La Habana (Cuba)
2014-07-01
Introduction: this paper presents a simulation for evaluating the scattered radiation components produced in a PARK gamma camera using the Monte Carlo code SIMIND. It simulates a whole-body study with the MDP (Methylene Diphosphonate) radiopharmaceutical based on the Zubal anthropomorphic phantom, with some spinal lesions. Methods: the simulation compared 3 configurations for the detected photons. The corresponding energy spectra were obtained using a Low Energy High Resolution collimator. The parameters related to the interactions and the fraction of events in the energy window, the simulated events of the spectrum and the scatter events were calculated. Results: the simulation confirmed that images free of the influence of scattering events have a higher number of valid recorded events, which improves their statistical quality. A comparison among different collimators was made: the parameters and detector energy spectrum were calculated for each simulation configuration with these collimators using {sup 99m}Tc. Conclusion: the simulation corroborated that the LEHS collimator has higher sensitivity and the HEHR collimator lower sensitivity when used with low-energy photons. (author)
Research of photon beam dose deposition kernel based on Monte Carlo method
International Nuclear Information System (INIS)
The Monte Carlo program BEAMnrc was used to simulate the 6 MV photon beam of a Siemens accelerator, and the BEAMdp program to analyse the energy spectrum distribution and mean energy from the phase-space data of different field sizes. Beam sources were then built with either the full energy spectrum or a mono-energetic source, and the DOSXYZnrc program was used to calculate the dose deposition kernels at dmax in a standard water phantom for the different beam sources and to compare the resulting kernels. The results show that the dose difference using the energy-spectrum source is small, with a maximum percentage dose discrepancy of 1.47%, but it is large using the mono-energetic source, at 6.28%. The maximum dose difference between the kernels derived from the energy-spectrum source and the mono-energetic source of the same field is larger than 9%, up to 13.2%. Thus dose deposition depends on photon energy, and using only a mono-energetic source can lead to larger errors because of the spectral distribution of the accelerator beam. A more accurate approach is to use the deposition kernel of the energy-spectrum source. (authors)
Velazquez, L.; Castro-Palacio, J. C.
2015-03-01
Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review about ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L ×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the occurrence of temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L ) ∝(1/L ) z with exponent z ≃0 .26 ±0 .02. Discussed is the compatibility of these results with the continuous character of temperature-driven phase transition when L →+∞ .
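The reweighting idea behind the multihistogram method can be illustrated in its simplest, single-histogram form: samples drawn at one inverse temperature are reweighted to estimate averages at a nearby one. This is a sketch on a toy harmonic degree of freedom (where the exact answer is known), not the Potts-model calculation of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Canonical samples of a single harmonic degree of freedom at beta0:
# x ~ N(0, 1/beta0), energy E = x^2/2, so <E>_beta = 1/(2*beta) exactly.
beta0 = 1.0
x = rng.normal(0.0, np.sqrt(1.0 / beta0), 100_000)
E = 0.5 * x ** 2

def reweighted_mean_energy(beta):
    """Ferrenberg-Swendsen-style reweighting of the beta0 samples to beta."""
    w = np.exp(-(beta - beta0) * E)
    return np.sum(w * E) / np.sum(w)

for beta in (0.8, 1.0, 1.2):
    print(f"beta={beta}: reweighted <E> = {reweighted_mean_energy(beta):.3f}, "
          f"exact = {1 / (2 * beta):.3f}")
```

The multihistogram method combines several such runs at different temperatures, which controls the loss of statistics that a single histogram suffers far from beta0.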
A Monte-Carlo Method for Making SDSS $u$-Band Magnitudes More Accurate
Gu, Jiayin; Zuo, Wenbo; Jing, Yingjie; Wu, Zhenyu; Ma, Jun; Zhou, Xu
2016-01-01
We develop a new Monte-Carlo-based method to convert the SDSS (Sloan Digital Sky Survey) $u$-band magnitude to the SCUSS (South Galactic Cap of $u$-band Sky Survey) $u$-band magnitude. Because the SCUSS $u$-band measurements are more accurate, the converted $u$-band magnitude becomes more accurate compared with the original SDSS $u$-band magnitude, in particular at the faint end. The average $u$ (both SDSS and SCUSS) magnitude error of numerous main-sequence stars with $0.2
A new method for RGB to CIELAB color space transformation based on Markov chain Monte Carlo
Chen, Yajun; Liu, Ding; Liang, Junli
2013-10-01
During printing quality inspection, the inspection of color error is an important task. However, the RGB color space is device-dependent, so RGB colors captured by a CCD camera must usually be transformed into the CIELAB color space, which is perceptually uniform and device-independent. To cope with this problem, a Markov chain Monte Carlo (MCMC) based algorithm for the RGB to CIELAB color space transformation is proposed in this paper. Firstly, the modeling color targets and testing color targets are established, used respectively in the modeling and performance testing processes. Secondly, we derive a Bayesian model for estimating the coefficients of a polynomial that describes the relation between the RGB and CIELAB color spaces. Thirdly, a Markov chain is set up based on the Gibbs sampling algorithm (one of the MCMC algorithms) to estimate the coefficients of the polynomial. Finally, the color difference of the testing color targets is computed to evaluate the performance of the proposed method. The experimental results showed that the nonlinear polynomial regression based on the MCMC algorithm is effective; its performance is similar to the least squares approach, and it can accurately model the RGB to CIELAB color space conversion and guarantee the color error evaluation for a printing quality inspection system.
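The MCMC regression step can be sketched as follows. For brevity this uses random-walk Metropolis rather than the Gibbs sampler of the paper, on a synthetic one-channel quadratic map; the coefficients, noise level, and proposal scale are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for the calibration targets: a known quadratic map from
# a normalized RGB channel r to L*, observed with Gaussian noise.
true_coeffs = np.array([5.0, 60.0, 30.0])          # L* = 5 + 60 r + 30 r^2
r = rng.random(200)
X = np.column_stack([np.ones_like(r), r, r ** 2])
y = X @ true_coeffs + rng.normal(0.0, 2.0, 200)

def log_post(beta, sigma=2.0):
    """Log posterior with flat prior on the polynomial coefficients."""
    resid = y - X @ beta
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

beta = np.zeros(3)
lp = log_post(beta)
chain = []
for step in range(20_000):
    prop = beta + rng.normal(0.0, 0.3, 3)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis acceptance
        beta, lp = prop, lp_prop
    if step >= 10_000:                             # discard burn-in
        chain.append(beta)

est = np.mean(chain, axis=0)
print("posterior mean coefficients:", np.round(est, 1))
```

The posterior-mean polynomial is then evaluated on the held-out testing targets; in the paper the same scheme runs over all three RGB channels with cross terms.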
A model of carbon ion interactions in water using the classical trajectory Monte Carlo method
International Nuclear Information System (INIS)
In this paper, model calculations for interactions of C6+ ions with energies from 1 keV u-1 to 1 MeV u-1 in water are presented. The calculations were carried out using the classical trajectory Monte Carlo method, taking into account the dynamic screening of the target core. The total cross sections (TCS) for electron capture and ionisation, and the singly and doubly differential cross sections (SDCS and DDCS) for ionisation, were calculated for the five potential energy levels of the water molecule. The peaks in the DDCS for electron capture to the continuum and for the binary-encounter collision were obtained for 500-keV u-1 carbon ions. The calculated SDCS agree reasonably well with the z2-scaled proton data for 500 keV u-1 and 1 MeV u-1 projectiles, but a large deviation of up to 8-fold was observed for 100-keV u-1 projectiles. The TCS for ionisation are in agreement with the values calculated from the first Born approximation (FBA) in the highest-energy region investigated, but become smaller than the FBA values in the lower-energy region. (authors)
Assessment of the Contrast to Noise Ratio in PET Scanners with Monte Carlo Methods
Michail, C. M.; Karpetas, G. E.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the contrast to noise ratio (CNR) of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction. The PET scanner simulated was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on aluminum (Al) foil substrates immersed in an 18F-FDG bath solution. Image quality was assessed in terms of the CNR, which was estimated from coronal reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE) OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3, 15 and 21) and various iterations (2 to 20). CNR values were found to decrease as both iterations and subsets increase, and two iterations were found to be optimal. The simulated PET evaluation method, based on the TLC plane source, can be useful in image quality assessment of PET scanners.
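The CNR figure of merit itself is straightforward to compute from region-of-interest statistics. This is a sketch on a synthetic slice standing in for the reconstructed plane-source images; the counts, ROI placements, and strip contrast are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical reconstructed slice: uniform noisy background with a
# higher-uptake strip standing in for the TLC plane source.
image = rng.normal(10.0, 2.0, (128, 128))      # background counts
image[60:68, :] += 15.0                        # plane-source strip

signal_roi = image[60:68, 40:90]               # ROI on the strip
background_roi = image[20:40, 40:90]           # ROI on the background

# CNR: contrast between the ROIs normalized by the background noise.
cnr = (signal_roi.mean() - background_roi.mean()) / background_roi.std()
print(f"CNR = {cnr:.1f}")
```

Recomputing this on images reconstructed with different iteration/subset settings gives exactly the comparison reported in the abstract.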
Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics
International Nuclear Information System (INIS)
Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.
Tracer diffusion in an ordered alloy: application of the path probability and Monte Carlo methods
International Nuclear Information System (INIS)
The tracer diffusion technique has been extensively utilized to investigate diffusion phenomena and has contributed a great deal to the understanding of the phenomena. However, except for self diffusion and impurity diffusion, the meaning of tracer diffusion is not yet satisfactorily understood. Here we try to extend the understanding to concentrated alloys. Our major interest is directed towards understanding the physical factors which control diffusion, through the comparison of results obtained by the Path Probability Method (PPM) and those by the Monte Carlo simulation method (MCSM). Both the PPM and the MCSM are basically in the same category of statistical mechanical approaches applicable to random processes. The advantage of the Path Probability Method in dealing with phenomena which occur in crystalline systems has been well established. However, the approximations which are inevitably introduced to make the analytical treatment tractable, although their meaning may be well established in equilibrium statistical mechanics, sometimes introduce unwarranted consequences whose origin is often hard to trace. On the other hand, the MCSM, which can be carried out in a parallel fashion to the PPM, provides, with care, numerically exact results, so a side-by-side comparison can give insight into the effect of approximations in the PPM. It was found that in the pair approximation of the CVM, the distribution in the completely random state is regarded as homogeneous (without fluctuations), and hence the fluctuation in distribution is not well represented in the PPM. These examples thus show clearly how the comparison of analytical results with carefully carried out MCSM calculations guides the progress of theoretical treatments and gives insight into the mechanism of diffusion.
Evaluation of Monte Carlo Codes Regarding the Calculated Detector Response Function in NDP Method
International Nuclear Information System (INIS)
The basis of the NDP method is the irradiation of a sample with a thermal or cold neutron beam and the subsequent release of charged particles from neutron-induced exoergic charged-particle reactions. Neutrons interact with the nuclei of elements and release mono-energetic charged particles, e.g. alpha particles or protons, and recoil atoms. The depth profile of the analyzed element can be obtained by a linear transformation of the measured energy spectrum using the stopping power of the sample material. A few micrometers of the material can be analyzed nondestructively, and a depth resolution on the order of 10 nm can be obtained, depending on the material type. One of the first steps of the analytical process in the NDP method is the channel-energy calibration, normally made by experimental measurement of a NIST Standard Reference Material sample (SRM-93a). In this study, several Monte Carlo (MC) codes were used to calculate the Si detector response function when the detector records the energetic charged particles emitted from an analytical sample. These MC codes were also used to calculate the depth distributions of some light elements (10B, 3He, 6Li, etc.) in SRM-93a and SRM-2137 samples, and the calculated profiles were compared with the experimental and SIMS profiles. Several popular MC neutron transport codes were tried and tested for calculating the detector response function in the NDP method. The simulations were modeled on the real CN-NDP system, which is part of the Cold Neutron Activation Station (CONAS) at HANARO (KAERI). The MC simulations are very successful at predicting the alpha peaks in the measured energy spectrum; the net area difference between the measured and predicted alpha peaks is less than 1%. A possible explanation for the remaining discrepancies is the use of an inadequate cross-section data set in the MC codes for the transport of low-energy lithium atoms inside the silicon substrate.
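The linear energy-to-depth transformation mentioned above can be sketched directly. The 1472 keV line is the alpha energy of the 10B(n,α)7Li reaction; the constant stopping-power value and the measured energies below are illustrative (a real analysis uses the energy-dependent stopping power of the sample).

```python
import numpy as np

# A particle born at depth x with energy E0 leaves the sample with
# E(x) = E0 - S*x for an (assumed) constant stopping power S, so the
# measured energy axis maps linearly to depth.
E0 = 1472.0          # keV, 10B(n,alpha)7Li alpha line
S = 230.0            # keV/um, illustrative constant stopping power

measured_energies = np.array([1472.0, 1380.0, 1242.0, 1012.0])  # keV
depths = (E0 - measured_energies) / S                           # micrometers
for E, x in zip(measured_energies, depths):
    print(f"E = {E:7.1f} keV  ->  depth = {x:.2f} um")
```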
International Nuclear Information System (INIS)
A method of modelling the dynamic motion of multileaf collimators (MLCs) for intensity-modulated radiation therapy (IMRT) was developed and implemented into the Monte Carlo simulation. The simulation of the dynamic MLCs (DMLCs) was based on randomizing leaf positions during a simulation so that the number of particle histories being simulated for each possible leaf position was proportional to the monitor units delivered to that position. This approach was incorporated into an EGS4 Monte Carlo program, and was evaluated in simulating the DMLCs for Varian accelerators (Varian Medical Systems, Palo Alto, CA, USA). The MU index of each segment, which was specified in the DMLC-control data, was used to compute the cumulative probability distribution function (CPDF) for the leaf positions. This CPDF was then used to sample the leaf positions during a real-time simulation, which allowed for either the step-and-shoot or sweeping-leaf motion in the beam delivery. Dose intensity maps for IMRT fields were computed using the above Monte Carlo method, with its accuracy verified by film measurements. The DMLC simulation improved the operational efficiency by eliminating the need to simulate multiple segments individually. More importantly, the dynamic motion of the leaves could be simulated more faithfully by using the above leaf-position sampling technique in the Monte Carlo simulation. (author)
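The CPDF leaf-position sampling can be sketched as follows; the control points and MU weights are hypothetical, and only one leaf with sweeping motion is modeled.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical DMLC control points: normalized cumulative MU index and one
# leaf's position (cm) at each segment boundary.
mu_index = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
leaf_pos = np.array([-5.0, -2.0, 0.0, 2.0, 5.0])

def sample_leaf_position():
    """Sample a leaf position for one particle history: histories fall into a
    segment with probability proportional to the MU delivered there, and the
    position is interpolated linearly within the segment (sweeping motion)."""
    u = rng.random()                                   # uniform in the CPDF
    seg = np.searchsorted(mu_index, u, side="right") - 1
    seg = min(seg, len(mu_index) - 2)
    frac = (u - mu_index[seg]) / (mu_index[seg + 1] - mu_index[seg])
    return leaf_pos[seg] + frac * (leaf_pos[seg + 1] - leaf_pos[seg])

positions = np.array([sample_leaf_position() for _ in range(50_000)])
# The fraction of histories in a segment approaches its MU weight:
frac_seg2 = np.mean((positions >= -2.0) & (positions < 0.0))
print(f"fraction of histories in segment 2: {frac_seg2:.2f} (MU weight 0.30)")
```

Step-and-shoot delivery is the special case where the leaf position is held constant within each segment instead of interpolated.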
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao; Shindo, Ryuichi; Shiozawa, Shusaku [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment
1995-10-01
In the case of a shielding analysis of a geometry having thick and complicated structures with a Monte Carlo code, it is a serious problem that it takes too much computer time to obtain results with good statistics. Therefore, it is very important to reduce variances in the calculation. In this study, a method to determine the importance function in 3-dimensional Monte Carlo calculations with geometry splitting and Russian roulette was developed for the shielding analysis of thick and complicated core shielding structures. Only two essential importance ratio curves for one material enable us to determine the importance function easily in the shielding calculation. The validity of this method was confirmed through a simple benchmark calculation. From the comparison with the result obtained by using the weight window (W-W), it was shown that the present method can give an accurate result on the same level as the W-W method with less trial and error. This method was then applied to an actual reactor core shielding analysis to confirm its applicability to a 3-dimensional thick and complicated structure. Using this method, the variance-reduced calculation can be easily realized with the developed importance determination procedure, especially when parameter survey calculations are required to determine the shield thickness in the design of a thick and complicated structure. Accordingly, it became easier to use the Monte Carlo method as a powerful tool for reactor core shielding design. (author).
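The geometry-splitting/Russian-roulette mechanism that such an importance function drives can be sketched as follows; the importance values are illustrative, not derived from the importance ratio curves of the method.

```python
import random

random.seed(7)

def split_or_roulette(weight, imp_old, imp_new):
    """A particle crossing from a cell of importance imp_old to imp_new is
    split (ratio > 1) or rouletted (ratio < 1) so that the expected total
    weight is preserved; returns the list of daughter weights."""
    ratio = imp_new / imp_old
    if ratio >= 1.0:
        n = int(ratio)                      # split into n (or n+1) daughters
        if random.random() < ratio - n:     # handle non-integer ratios
            n += 1
        return [weight / ratio] * n
    # Russian roulette: survive with probability ratio, at increased weight
    return [weight / ratio] if random.random() < ratio else []

# Expected weight is conserved in both directions:
mean_split = sum(sum(split_or_roulette(1.0, 1.0, 4.0)) for _ in range(10_000)) / 10_000
mean_roul = sum(sum(split_or_roulette(1.0, 4.0, 1.0)) for _ in range(10_000)) / 10_000
print(f"mean weight after splitting (ratio 4):   {mean_split:.3f}")
print(f"mean weight after roulette  (ratio 1/4): {mean_roul:.3f}")
```

Splitting multiplies histories in important regions (deep in the shield) while roulette kills them in unimportant ones, which is how the variance is reduced without biasing the estimate.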
MC-Net: a method for the construction of phylogenetic networks based on the Monte-Carlo method
Directory of Open Access Journals (Sweden)
Eslahchi Changiz
2010-08-01
Full Text Available Abstract Background A phylogenetic network is a generalization of phylogenetic trees that allows the representation of conflicting signals or alternative evolutionary histories in a single diagram. There are several methods for constructing these networks, some of which are based on distances among taxa. In practice, distance-based methods are faster than the alternatives. Neighbor-Net (N-Net) is a distance-based method: it produces a circular ordering from a distance matrix and then constructs a collection of weighted splits from that ordering. The SplitsTree program uses these weighted splits to draw a phylogenetic network. In general, finding an optimal circular ordering is an NP-hard problem; N-Net is a heuristic for it based on the neighbor-joining algorithm. Results In this paper, we present a heuristic algorithm for finding an optimal circular ordering based on the Monte-Carlo method, called the MC-Net algorithm. To show that MC-Net performs better than N-Net, we apply both algorithms to different data sets, draw the phylogenetic networks corresponding to their outputs using SplitsTree, and compare the results. Conclusions We find that the circular ordering produced by MC-Net is closer to optimal than that of N-Net. Furthermore, the networks that SplitsTree draws from MC-Net's output are simpler than those from N-Net.
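A Monte Carlo search over circular orderings can be sketched with a simulated-annealing loop; the cost function below (sum of distances between circularly adjacent taxa) is an illustrative stand-in, not the actual MC-Net objective:

```python
import math, random

def tour_cost(order, d):
    """Sum of distances between circularly adjacent taxa (illustrative cost)."""
    return sum(d[order[i]][order[(i + 1) % len(order)]] for i in range(len(order)))

def mc_circular_ordering(d, steps=20000, t0=1.0, seed=0):
    """Monte Carlo (simulated annealing) search for a good circular ordering."""
    rng = random.Random(seed)
    order = list(range(len(d)))
    best, best_cost = order[:], tour_cost(order, d)
    for k in range(steps):
        t = max(t0 * (1.0 - k / steps), 1e-9)      # linear cooling schedule
        i, j = sorted(rng.sample(range(len(d)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # segment reversal
        dc = tour_cost(cand, d) - tour_cost(order, d)
        if dc < 0 or rng.random() < math.exp(-dc / t):  # Metropolis acceptance
            order = cand
            if tour_cost(order, d) < best_cost:
                best, best_cost = order[:], tour_cost(order, d)
    return best, best_cost
```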
Harries, Tim J.
2015-01-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since the new method is computationally demanding we have developed two ...
Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan
2015-10-01
Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas in high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most previously proposed methods, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of the intensity, interferometric phase and coherence of each region are explored respectively and included as region terms. Roofs are not considered directly, as in most cases they merge with walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms are taken into consideration, together with an edge term related to the contours of the layover and corner line. In the optimization step, special transition kernels are designed in order to achieve convergent reconstruction outputs and avoid local extrema. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.
ZZ XCOM, Photon Cross-Section Library for Personal Computer
International Nuclear Information System (INIS)
1 - Description of program or function: Format: The input file FDAT produces the binary file UDAT (direct-access unformatted). This file is then used by the program XCOM1 to retrieve and display the photon cross-sections and attenuation coefficients. Number of groups: Photon cross-section data files (partial interaction coefficients and total attenuation coefficients) for 100 elements in the energy range 1 keV to 100 GeV. Materials: H, He, Li, Be, B, C, N, O, F, Ne, Na, Mg, Al, Si, P, S, Cl, Ar, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, Ge, As, Se, Br, Kr, Rb, Sr, Y, Zr, Nb, Mo, Tc, Ru, Rh, Pd, Ag, Cd, In, Sn, Sb, Te, I, Xe, Cs, Ba, La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, Hf, Ta, W, Re, Os, Ir, Pt, Au, Hg, Tl, Pb, Bi, Po, At, Rn, Fr, Ra, Ac, Th, Pa, U, Np, Pu, Am, Cm, Bk, Cf, Es, Fm. Origin: Several sources. It is based on an experimental database consisting of 21000 data points from 512 literature sources; the same sources as DLC-136/PHOTX. Weighting spectrum: The weighting factors, i.e., the fractions by weight of the atomic constituents, are calculated from the chemical formula entered by the user. The National Institute of Standards and Technology, through its Office of Standard Reference Data, has long maintained and published compilations of measured and evaluated photon cross sections. This compilation of XCOM Version 1.2, released on personal computer media, represents best values as determined in 1987. XCOM1 (Version 1.3, copyright 1991) is similar to XCOM but uses the direct-access unformatted database file UDAT. 2 - Method of solution: The data from the National Institute of Standards and Technology are in binary files for 100 elements covering the energy range 1 keV to 100 GeV. The reactions considered are coherent and incoherent scattering, photoelectric absorption, and pair production. The XCOM data are derived from the same source as DLC-0136/ZZ-PHOTX
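The "fractions by weight" weighting described above is the standard mixture rule for mass attenuation coefficients; a minimal sketch (the elemental values below are approximate 1 MeV numbers quoted for illustration, not values read from the XCOM files):

```python
def mixture_mu_over_rho(mass_fractions, elemental):
    """XCOM-style mixture rule: (mu/rho)_compound = sum_i w_i * (mu/rho)_i,
    where w_i are the fractions by weight of the atomic constituents.
    Inputs may be given as unnormalized masses; they are normalized here."""
    total = sum(mass_fractions.values())
    return sum(w / total * elemental[el] for el, w in mass_fractions.items())

# Water, H2O: weight fractions computed from the chemical formula
water = {"H": 2 * 1.008, "O": 15.999}           # proportional to constituent mass
mu_1mev = {"H": 0.1263, "O": 0.0637}            # cm^2/g at 1 MeV (approximate)
mu_water = mixture_mu_over_rho(water, mu_1mev)  # close to 0.0707 cm^2/g
```

The compound value always lies between the smallest and largest elemental coefficients, weighted toward the heavier constituent.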
Self Consistent Monte Carlo Method to Study CSR Effects in Bunch Compressors
International Nuclear Information System (INIS)
In this paper we report on the results of a self-consistent calculation of CSR effects on a particle bunch moving through the benchmark Zeuthen bunch compressors. The theoretical framework is based on a 4D Vlasov-Maxwell approach including shielding from the vacuum chamber. We calculate the fields in the lab frame, where time is the independent variable, and evolve the phase space density/points in the beam frame, where arc length, s, along a reference orbit, is the independent variable. Some details are given in [2], where we also discuss three approaches: the unperturbed source model (UPS), the self-consistent Monte Carlo (SCMC) method, and the method of local characteristics. Results for the UPS have been presented for 5 GeV before [3]; here we compare them with our new results from the SCMC and study the 500 MeV case. Our work using the method of characteristics is in progress. The SCMC algorithm begins by randomly generating an initial ensemble of beam frame phase space points according to a given initial phase space density. The algorithm then reduces to laying out one arc length step. Assume that at arc length s we know the location of the phase space points and the history of the source prior to s. We then (1) create a smooth representation of the lab-frame charge and current densities, ρL and JL, (2) calculate the fields at s from the history of ρL and JL, and (3) move the beam-frame phase space points according to the beam-frame equations of motion. This is then iterated. The UPS calculation is similar except that the fields are calculated from a function of s computed a priori from the beam-frame equations of motion without the self-fields. The phase space points are then evolved according to the equations of motion with these "unperturbed" fields. In the UPS we use a Gaussian initial density which evolves under the linear beam-frame equations as a Gaussian. This gives us an analytic formula for the source, which significantly speeds up the field
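Step (1), building a smooth density from discrete phase-space points, can be illustrated with a standard cloud-in-cell deposition on a periodic 1-D grid; this is a generic sketch, not the authors' smoothing scheme:

```python
def deposit_cic(positions, weights, nx, dx):
    """Cloud-in-cell deposition: each point particle shares its weight
    linearly between the two nearest grid nodes, giving a smooth grid
    density whose integral equals the total particle weight."""
    rho = [0.0] * nx
    for x, w in zip(positions, weights):
        j = int(x / dx)               # index of the node to the left
        f = x / dx - j                # fractional distance to that node
        rho[j % nx] += w * (1.0 - f) / dx
        rho[(j + 1) % nx] += w * f / dx
    return rho
```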
Modeling radiation from the atmosphere of Io with Monte Carlo methods
Gratiy, Sergey
Conflicting observations regarding the dominance of either sublimation or volcanism as the source of the atmosphere on Io and disparate reports on the extent of its spatial distribution and the absolute column abundance invite the development of detailed computational models capable of improving our understanding of Io's unique atmospheric structure and origin. To validate a global numerical model of Io's atmosphere against astronomical observations requires a 3-D spherical-shell radiative transfer (RT) code to simulate disk-resolved images and disk-integrated spectra from the ultraviolet to the infrared spectral region. In addition, comparison of simulated and astronomical observations provides important information to improve existing atmospheric models. In order to achieve this goal, a new 3-D spherical-shell forward/backward photon Monte Carlo code capable of simulating radiation from absorbing/emitting and scattering atmospheres with an underlying emitting and reflecting surface was developed. A new implementation of calculating atmospheric brightness in scattered sunlight is presented utilizing the notion of an "effective emission source" function. This allows for the accumulation of the scattered contribution along the entire path of a ray and the calculation of the atmospheric radiation when both scattered sunlight and thermal emission contribute to the observed radiation---which was not possible in previous models. A "polychromatic" algorithm was developed for application with the backward Monte Carlo method and was implemented in the code. It allows one to calculate radiative intensity at several wavelengths simultaneously, even when the scattering properties of the atmosphere are a function of wavelength. The application of the "polychromatic" method improves the computational efficiency because it reduces the number of photon bundles traced during the simulation. A 3-D gas dynamics model of Io's atmosphere, including both sublimation and volcanic
Automating methods to improve precision in Monte-Carlo event generation for particle colliders
Energy Technology Data Exchange (ETDEWEB)
Gleisberg, Tanju
2008-07-01
The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key for the systematic improvement of precision and confidence of theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels of the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Furthermore, a special treatment for dealing with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving the precision and the efficiency. Part II was addressed to the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method the following components are provided: 1. the corresponding (m+1)-parton tree-level matrix elements, 2. a number of dipole subtraction terms to remove
International Nuclear Information System (INIS)
The equivalence theorem providing a relation between a homogeneous and a heterogeneous medium has been used in resonance calculations for heterogeneous systems. The accuracy of a resonance calculation based on the equivalence theorem depends on how accurately the fuel collision probability is expressed by the rational terms. The fuel collision probability is related to the Dancoff factor in closely packed lattices. The calculation of the Dancoff factor is one of the most difficult problems in core analysis because the actual configuration of fuel elements in the lattice is very complex. Most reactor physics codes currently in use are based on a roughly calculated black Dancoff factor, for which the total cross section of the fuel is assumed to be infinite. Even the black Dancoff factors have not been calculated accurately, though many methods have been proposed. The equivalence theorem based on the black Dancoff factor inevitably causes some errors due to the approximations involved in the Dancoff factor calculation and in the derivation of the fuel collision probability, but these errors have not been evaluated seriously before. In this study, a Monte Carlo program - G-DANCOFF - was developed to calculate not only the traditional black Dancoff factor but also the grey Dancoff factor, for which the medium is described realistically. G-DANCOFF calculates the Dancoff factor from its collision probability definition for an arbitrary arrangement of cylindrical fuel pins in full three-dimensional fashion. G-DANCOFF was verified by comparing black Dancoff factors calculated for geometries where accurate solutions are available. With 100,000 neutron histories, the results of G-DANCOFF agreed with previous results within a maximum of 1%, and in most cases within 0.2%. G-DANCOFF also provides graphical information on particle tracks, which makes it possible to check the Dancoff factor calculation independently. The effects of the Dancoff factor on the criticality calculation
International Nuclear Information System (INIS)
A method for tuning parameters in Monte Carlo generators is described and applied to a specific case. The method works in the following way: each observable is generated several times using different values of the parameters to be tuned. The output is then approximated by some analytic form to describe the dependence of the observables on the parameters. This approximation is used to find the values of the parameter that give the best description of the experimental data. This results in significantly faster fitting compared to an approach in which the generator is called iteratively. As an application, we employ this method to fit the parameters of the unintegrated gluon density used in the Cascade Monte Carlo generator, using inclusive deep inelastic data measured by the H1 Collaboration. We discuss the results of the fit, its limitations, and its strong points. (orig.)
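The tuning procedure just described, sample the generator at a few parameter values, approximate each observable's parameter dependence analytically, then fit, can be sketched as follows (a generic polynomial-approximation sketch, not the actual tuning code; the quadratic model and grid scan are assumptions for brevity):

```python
import numpy as np

def fit_response(param_values, predictions, deg=2):
    """Fit a polynomial in the generator parameter to each observable bin,
    using generator runs at a handful of parameter values."""
    return [np.polyfit(param_values, [p[b] for p in predictions], deg)
            for b in range(len(predictions[0]))]

def best_parameter(coeffs, data, errors, grid):
    """Scan the analytic approximation over a parameter grid and return
    the value minimizing chi^2 against the measured data."""
    chi2 = [sum(((np.polyval(c, p) - d) / e) ** 2
                for c, d, e in zip(coeffs, data, errors)) for p in grid]
    return grid[int(np.argmin(chi2))]
```

Because the fit only evaluates cheap polynomials, the minimization never calls the generator again, which is exactly the speed-up the abstract claims over iterative generator calls.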
Energy Technology Data Exchange (ETDEWEB)
Clouet, J.F.; Samba, G. [CEA Bruyeres-le-Chatel, 91 (France)
2005-07-01
We use asymptotic analysis to study the diffusion limit of the Symbolic Implicit Monte-Carlo (SIMC) method for the transport equation. For standard SIMC with piecewise constant basis functions, we demonstrate mathematically that the solution converges to the solution of a wrong diffusion equation. Nevertheless, a simple extension to piecewise linear basis functions makes it possible to obtain the correct solution. This improvement allows calculations in an opaque medium on a mesh resolving the diffusion scale, which is much larger than the transport scale. Even so, the huge number of particles necessary to obtain a correct answer makes this computation time-consuming. We have therefore derived from this asymptotic study a hybrid method coupling a deterministic calculation in the opaque medium with a Monte-Carlo calculation in the transparent medium. This method gives exactly the same results as the previous one, but at a much lower cost. We present numerical examples which illustrate the analysis. (authors)
International Nuclear Information System (INIS)
This analysis is part of the report 'Implementation of the geometry module of the 05R code in another Monte Carlo code', chapter 6.0: establishment of future activity related to geometry in the Monte Carlo method. The introduction points out some problems in solving complex three-dimensional models, which induce the need for developing more efficient geometry modules for Monte Carlo calculations. The second part includes the formulation of the problem and the geometry module. Two fundamental questions must be solved: (1) for a given point, determine the material region or boundary to which it belongs, and (2) for a given direction, determine all crossing points with material regions. The third part deals with a possible connection to Monte Carlo calculations for computer simulation of geometric objects. R-function theory enables the creation of a geometry module based on the same logic as constructive geometry codes (complex regions are built from elementary regions by set operations). R-functions can efficiently replace functions of three-valued logic in all significant models. They are all the more appropriate since three-valued logic is not typical of digital computers, which operate in two-valued logic. This shows that there is a need for work in this field. It is shown that an interactive code for computer modeling of geometric objects can be developed in parallel with the development of the geometry module
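The "elementary regions plus set operations" logic can be made concrete with the classical Rvachev R-functions, where a region is the point set f(x, y) >= 0 and Boolean operations become arithmetic on the defining functions (an illustrative sketch, not code from the report):

```python
import math

def r_and(f1, f2):
    """R-conjunction: nonnegative exactly where both f1 >= 0 and f2 >= 0."""
    return f1 + f2 - math.sqrt(f1 * f1 + f2 * f2)

def r_or(f1, f2):
    """R-disjunction: nonnegative exactly where f1 >= 0 or f2 >= 0."""
    return f1 + f2 + math.sqrt(f1 * f1 + f2 * f2)

def half_disc(x, y):
    """Example composite region: unit disc intersected with the right
    half-plane, built from two elementary regions by one set operation."""
    return r_and(1.0 - x * x - y * y, x)
```

For point location (question (1) above), testing the sign of the composite function answers "inside or outside" for arbitrarily nested unions and intersections.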
Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)
International Nuclear Information System (INIS)
Monte Carlo simulation is well suited for solving the Boltzmann neutron transport equation in an inhomogeneous media for complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes in a desktop PC. The interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is rapidly growing. This is due to the massive parallelism provided by the latest GPU technologies which is the most promising solution to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. Results obtained in this work suggest that a speedup of several orders of magnitude is possible using the state-of-the-art GPU technologies. (author)
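The data-parallel structure that makes fixed-source Monte Carlo a good GPU fit can be shown with a vectorized toy problem, one history per lane; NumPy stands in here for CUDA threads, and the purely absorbing slab is a simplifying assumption of this sketch:

```python
import numpy as np

def transmit_fraction(sigma_t, thickness, n=100_000, seed=1):
    """Vectorized Monte Carlo: fraction of monodirectional neutrons
    crossing a purely absorbing slab. Every history is independent, so
    the same pattern maps onto one GPU thread per history."""
    rng = np.random.default_rng(seed)
    path = rng.exponential(1.0 / sigma_t, n)   # free flight to first collision
    return np.count_nonzero(path > thickness) / n
```

The analytic answer is exp(-sigma_t * thickness), so the sketch doubles as a quick correctness check for the parallel kernel.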
Harries, Tim J.
2015-04-01
We present a set of new numerical methods that are relevant to calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically thick limits. Since the new method is computationally demanding we have developed two new methods that speed up the algorithm. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelization method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion on to, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
Directory of Open Access Journals (Sweden)
M. Kotbi
2013-03-01
Full Text Available The choice of appropriate interaction models is among the major disadvantages of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are, however, accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criteria. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show a good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
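The HRMC idea, adding an energy penalty to the usual RMC chi-square acceptance test, fits in a few lines; this is a generic Metropolis-style sketch, and the weighting w of the penalty term is a hypothetical knob, not a value from the paper:

```python
import math, random

def hrmc_accept(d_chi2, d_energy, kT, w, rng=random.random):
    """Hybrid Reverse Monte Carlo acceptance test for a trial atomic move.

    d_chi2   : change in the chi^2 misfit against experimental data
    d_energy : change in potential energy (the HRMC penalty term)
    kT, w    : temperature and penalty weight (illustrative parameters)
    """
    arg = -0.5 * d_chi2 - w * d_energy / kT
    return arg >= 0 or rng() < math.exp(arg)
```

With w = 0 this reduces to plain RMC; a positive w disfavors moves that fit the data but are energetically implausible, which is what suppresses the artificial satellite peaks.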
International Nuclear Information System (INIS)
This work determines the detection efficiency for 125I and 131I in the thyroid with the identiFINDER detector using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Finally, simulations of the detector geometry with a point source were performed to find the correction factors at 5 cm, 15 cm and 25 cm, together with those for the detector-phantom arrangement, for method validation and the final efficiency calculation. These showed that if the Monte Carlo simulation is performed at a greater distance than that used in the laboratory measurements the efficiency is overestimated, while at a shorter distance it is underestimated; the simulation should therefore be performed at the same distance at which the real measurement will be made. The efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)
Environmental dose rate assessment of ITER using the Monte Carlo method
Directory of Open Access Journals (Sweden)
Karimian Alireza
2014-01-01
Full Text Available Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak are exposed to a wide range of neutrons and photons around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, assessment of ITER dose rates is particularly important. The aim of this study is to assess the amount of radiation in ITER during its normal operation in the radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated using the medical internal radiation dose phantom. Our study is based on deuterium-tritium plasma burning with 14.1 MeV neutron production, and also photon radiation due to neutron activation. As our results show, the total equivalent dose rate on the outside of the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose rate limit during the normal operation of ITER. The equivalent dose rates of the radiosensitive organs show that the maximum dose rate belongs to the kidney. These data may help calculate how long staff can stay in such an environment before the equivalent dose rates reach the whole-body dose limits.
Nanothermodynamics of large iron clusters by means of a flat histogram Monte Carlo method
International Nuclear Information System (INIS)
The thermodynamics of iron clusters of various sizes, from 76 to 2452 atoms, typical of the catalyst particles used for carbon nanotube growth, has been explored by a flat histogram Monte Carlo (MC) algorithm (called the σ-mapping), developed by Soudan et al. [J. Chem. Phys. 135, 144109 (2011), Paper I]. This method provides the classical density of states, gp(Ep), in the configurational space, in terms of the potential energy of the system, with good and well controlled convergence properties, particularly in the melting phase transition zone which is of interest in this work. To describe the system, an iron potential has been implemented, called “corrected EAM” (cEAM), which approximates the MEAM potential of Lee et al. [Phys. Rev. B 64, 184102 (2001)] with an accuracy better than 3 meV/at, and a five times larger computational speed. The main simplification concerns the angular dependence of the potential, with a small impact on accuracy, while the screening coefficients Sij are computed exactly with a fast algorithm. With this potential, ergodic explorations of the clusters can be performed efficiently in a reasonable computing time, at least in the upper half of the solid zone and above. Problems of ergodicity exist in the lower half of the solid zone, but routes to overcome them are discussed. The solid-liquid (melting) phase transition temperature Tm is plotted in terms of the cluster atom number Nat. The standard Nat^(-1/3) linear dependence (Pawlow law) is observed for Nat > 300, allowing an extrapolation up to the bulk metal at 1940 ± 50 K. For Nat < 150, a strong divergence from the Pawlow law is observed. The melting transition, which begins at the surface, is characterized by a Lindemann-Berry index and an atomic density analysis. Several new features are obtained for the thermodynamics of cEAM clusters, compared to the Rydberg pair potential clusters studied in Paper I
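The Pawlow extrapolation quoted above is a straight-line fit of Tm against Nat^(-1/3), with the intercept giving the bulk melting temperature; a minimal least-squares sketch (the data points in the test are synthetic, for illustration only):

```python
def pawlow_extrapolate(sizes, tm):
    """Least-squares line Tm = a + b * N**(-1/3) through (size, Tm) points.
    Returns (a, b); the intercept a is the extrapolated bulk melting
    temperature, reached as N -> infinity."""
    xs = [n ** (-1.0 / 3.0) for n in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(tm) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, tm))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```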
Romania Monte Carlo Methods Application to CANDU Spent Fuel Comparative Analysis
International Nuclear Information System (INIS)
Romania has a single NPP, at Cernavoda, with 5 PHWR reactors of CANDU6 type of 705 MW(e) each; Unit 1 has been operational since December 1996, Unit 2 is under construction, and the remaining Units 3-5 are being preserved. The worldwide development of nuclear energy is accompanied by the accumulation of huge quantities of spent nuclear fuel. In view of the possible impact on the population and the environment, the spent fuel characteristics must be well known in all activities of the nuclear fuel cycle, namely transportation, storage, reprocessing and disposal. The aim of this paper is to apply Monte Carlo methods to CANDU spent fuel analysis, starting from the discharge moment, followed by spent fuel transport after a defined cooling period, and finishing with the intermediate dry storage. Three CANDU fuels have been considered as radiation sources: the standard 37-rod fuel bundle with natural UO2 and with SEU fuel, and a 43-rod fuel bundle with SEU fuel. After a criticality calculation using the KENO-VI code, the criticality coefficient and the actinide and fission product concentrations are obtained. Using the ORIGEN-S code, the photon source profiles are calculated and the spent fuel characteristics are estimated. For the shielding calculations the MORSE-SGC code has been used. For spent fuel transport, the photon dose rates at the shipping cask wall and in air, at different distances from the cask, are estimated. The shielding calculation for the spent fuel intermediate dry storage is performed, and the photon dose rates at the storage basket wall (the active element of the Cernavoda NPP intermediate dry storage) are obtained. A comparison between the 3 types of CANDU fuels is presented. (authors)
Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues
International Nuclear Information System (INIS)
The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL
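The difference from a point estimate is that each input is sampled from a distribution and the output is a distribution of risk, from which percentiles can be read off; a toy sketch (the distributions and slope factor below are invented for illustration, not INEL values):

```python
import random

def mc_risk(n=50_000, seed=42):
    """Monte Carlo risk estimate: sample the input distributions instead
    of multiplying single point values, then report the median and the
    95th percentile of the resulting risk distribution."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        conc = rng.lognormvariate(0.0, 0.5)   # contaminant concentration
        intake = rng.uniform(1.0, 3.0)        # intake rate
        slope = 1e-3                          # fixed slope factor
        risks.append(conc * intake * slope)
    risks.sort()
    return risks[int(0.5 * n)], risks[int(0.95 * n)]
```

A conservative point estimate would multiply upper-bound inputs and land well above the 95th percentile, which is the refinement the probabilistic approach offers.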
Energy Technology Data Exchange (ETDEWEB)
Cho, S; Shin, E H; Kim, J; Ahn, S H; Chung, K; Kim, D-H; Han, Y; Choi, D H [Samsung Medical Center, Seoul (Korea, Republic of)
2015-06-15
Purpose: To evaluate the shielding wall design for protecting patients, staff and members of the general public from secondary neutrons, using a simple analytic solution and the multi-Monte Carlo codes MCNPX, ANISN and FLUKA. Methods: Analytical and multi-Monte Carlo calculations were performed for the proton facility (Sumitomo Heavy Industry Ltd.) at Samsung Medical Center in Korea. The NCRP-144 analytical evaluation methods, which produce conservative estimates of the dose equivalent values for the shielding, were used for the analytical evaluations. The radiation transport was then simulated with the multi-Monte Carlo codes. The neutron dose at each evaluation point is obtained as the product of the simulated value and the neutron dose coefficient introduced in ICRP-74. Results: The evaluation points at the accelerator control room and the control room entrance are mainly influenced by the proton beam loss point. The neutron dose equivalent at the accelerator control room evaluation point is 0.651, 1.530, 0.912 and 0.943 mSv/yr, and at the entrance of the cyclotron room 0.465, 0.790, 0.522 and 0.453 mSv/yr, as calculated by the NCRP-144 formalism, ANISN, FLUKA and MCNP, respectively. Most of the MCNPX and FLUKA results, which use the complicated geometry, were smaller than the ANISN results. Conclusion: The neutron shielding for a proton therapy facility has been evaluated by the analytic model and multi-Monte Carlo methods. We confirmed that the shielding provides adequate protection for areas readily accessible to people when the proton facility is operated.
Directory of Open Access Journals (Sweden)
Jimin Liang
2010-01-01
During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating the photon transport process inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, consisting of the simulation of photon transport both in tissues and in free space. Specifically, a simplified model of the lens system is used to represent the camera lens of the optical imaging system, and the Monte Carlo method is employed to describe the energy transfer from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered, to establish the correspondence between points on the tissue surface and on the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and effective. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.
Energy Technology Data Exchange (ETDEWEB)
Somasundaram, E.; Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97332-5902 (United States)
2013-07-01
This paper presents the work done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila. The project aims to develop an integrated hybrid code that seamlessly takes advantage of deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross-section library, and source definitions. Tortilla can also read importance functions (such as the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods use the results of an adjoint deterministic calculation to bias the particle transport through techniques such as source biasing, survival biasing, transport biasing, and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here. (authors)
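The survival-biasing/weight-window game mentioned above can be illustrated compactly. This is a generic sketch of the standard roulette-and-splitting rules, not code from Tortilla; the window bounds and the survival-weight convention are assumptions.

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Play the weight-window game on one particle.

    Below the window: Russian roulette (kill, or restore to a survival weight).
    Above the window: split into copies whose weights fall inside the window.
    Returns the list of weights of the particles that continue tracking.
    """
    w_survive = 0.5 * (w_low + w_high)       # assumed survival-weight convention
    if weight < w_low:
        if rng.random() < weight / w_survive:
            return [w_survive]               # survives with boosted weight
        return []                            # killed by roulette
    if weight > w_high:
        n_split = int(weight / w_high) + 1
        return [weight / n_split] * n_split  # split into n_split copies
    return [weight]                          # inside the window: untouched

# The game is "fair": the expected total weight is preserved.
rng = random.Random(42)
trials = 200_000
mean_out = sum(sum(apply_weight_window(0.01, 0.1, 1.0, rng))
               for _ in range(trials)) / trials   # should be ~0.01
```

Weight conservation in expectation is what makes the scheme unbiased; the adjoint-based importance function only decides where the window bounds sit in phase space.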
International Nuclear Information System (INIS)
The Isotope Production and Application Division of Bhabha Atomic Research Center developed 32P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed 32P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the 32P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and the radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values of the surface dose rate measured using the two independent experimental methods agree with each other within 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by Monte Carlo and by the extrapolation chamber method is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the 32P patch source obtained by three independent methods agree with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed 32P patch sources for contact brachytherapy applications. - Highlights: • Surface dose rates of newly developed 32P patch sources of 25 mm nominal diameter were measured experimentally using an extrapolation chamber and Gafchromic EBT2 film. A Monte Carlo model of the 32P patch source along with the extrapolation chamber was also developed. • The surface dose rates to tissue (cGy/min) measured using extrapolation chamber and
IMRT dose delivery effects in radiotherapy treatment planning using Monte Carlo methods
Tyagi, Neelam
Inter- and intra-leaf transmission and head scatter can play significant roles in Intensity Modulated Radiation Therapy (IMRT)-based treatment deliveries. In order to calculate the dose accurately in the IMRT planning process, it is therefore important that the detailed geometry of the multi-leaf collimator (MLC), in addition to the other components in the accelerator treatment head, be accurately modeled. In this thesis, Monte Carlo (MC) methods have been used to model the treatment head of a Varian linear accelerator. A comprehensive model of the Varian 120-leaf MLC has been developed within the DPM MC code and has been verified against measurements in homogeneous and heterogeneous phantom geometries under different IMRT delivery circumstances. The accuracy of the MLC model in simulating details of the leaf geometry has been established over a range of arbitrarily shaped fields and IMRT fields. A sensitivity analysis of the effect of the electron-on-target parameters and the structure of the flattening filter on the accuracy of calculated dose distributions has been conducted. Adjustment of the electron-on-target parameters for optimal agreement with measurements was an iterative process, with the final parameters representing a tradeoff between small (3×3 cm²) and large (40×40 cm²) field sizes. A novel method based on adaptive kernel density estimation in the phase-space simulation process is also presented as an alternative to particle recycling. Using this model, dosimetric differences between MLC-based static (SMLC) and dynamic (DMLC) deliveries have been investigated. Differences between SMLC and DMLC, possibly related to fluence and/or spectral changes, appear to vary systematically with the density of the medium. The effect of fluence modulation due to leaf sequencing shows differences of up to 10% between plans developed with 1% and 10% fluence intervals for both SMLC- and DMLC-delivered sequences. Dose differences between planned and delivered leaf sequences
Evaluation of Monte Carlo Codes Regarding the Calculated Detector Response Function in NDP Method
Energy Technology Data Exchange (ETDEWEB)
Tuan, Hoang Sy Minh; Sun, Gwang Min; Park, Byung Gun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-10-15
The basis of NDP is the irradiation of a sample with a thermal or cold neutron beam and the subsequent release of charged particles through neutron-induced exoergic charged-particle reactions. Neutrons interact with the nuclei of certain elements and release mono-energetic charged particles, e.g. alpha particles or protons, and recoil atoms. The depth profile of the analyzed element can be obtained by a linear transformation of the measured energy spectrum using the stopping power of the sample material. A few micrometers of the material can be analyzed nondestructively, and a depth resolution on the order of 10 nm can be obtained, depending on the material type, with the NDP method. In the NDP method, one of the first steps of the analytical process is the channel-energy calibration. This calibration is normally made by measuring a NIST Standard Reference Material sample (SRM-93a). In this study, several Monte Carlo (MC) codes were used to calculate the Si detector response function when the detector counts the energetic charged particles emitted from an analytical sample. In addition, these MC codes were used to calculate the depth distributions of some light elements ({sup 10}B, {sup 3}He, {sup 6}Li, etc.) in SRM-93a and SRM-2137 samples. The calculated profiles were compared with the experimental profiles and with SIMS profiles. In this study, some popular MC neutron transport codes were tried and tested for calculating the detector response function in the NDP method. The simulations were modeled on the real CN-NDP system, which is part of the Cold Neutron Activation Station (CONAS) at HANARO (KAERI). The MC simulations are very successful at predicting the alpha peaks in the measured energy spectrum: the net area difference between the measured and predicted alpha peaks is less than 1%. A possible explanation for the remaining discrepancies might be the use of an inadequate cross-section data set in the MC codes for the transport of low-energy lithium atoms inside the silicon substrate.
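The energy-to-depth transformation described above amounts, in its simplest constant-stopping-power form, to one division. The snippet below is a sketch; the measured energy and the stopping-power value are illustrative stand-ins, though 1472 keV is the familiar main-branch alpha energy of the 10B(n,alpha)7Li reaction.

```python
def depth_nm(e0_kev, e_measured_kev, stopping_kev_per_nm):
    """Linear energy-to-depth transform for NDP: a particle born with energy
    E0 and detected with energy E lost (E0 - E) on its way out of the sample,
    so its emission depth is (E0 - E) / S for a constant stopping power S."""
    return (e0_kev - e_measured_kev) / stopping_kev_per_nm

# 10B(n,alpha)7Li main branch emits a 1472 keV alpha; the measured energy and
# the stopping power below are illustrative numbers, not evaluated data.
depth = depth_nm(1472.0, 1400.0, 0.23)   # ~313 nm below the surface
```

In practice the stopping power varies with energy, so real NDP analysis integrates over stopping-power tables rather than using a single constant S.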
Energy Technology Data Exchange (ETDEWEB)
Matsumiya, T. [Nippon Steel Corporation, Tokyo (Japan)
1996-08-20
The Monte Carlo method was used to simulate an equilibrium phase diagram and the structural formation during transformation and recrystallization. In simulating the Cu-A equilibrium diagram, the calculation used a simulation cell built by stacking 24 face-centered-cubic unit cells, each containing four lattice points, in each of the three directions, for a total of 24{sup 3}{times}4 lattice points. Although this method can in principle reveal the existence of an unknown phase as a result of the calculation, problems remain in the handling of lattice relaxation and in the simulation of phase diagrams spanning phases with different crystal structures. In the simulation of transformation and recrystallization, the correspondence of 1 MCS to real time when the lattice spacing is increased, and the handling of nucleation, were discussed. As a result, it was estimated that in three-dimensional grain growth the average grain size is proportional to the 1/3 power of the number of MCS, and that the real time corresponding to 1 MCS is proportional to the cube of the lattice spacing. 11 refs., 8 figs., 2 tabs.
A Bayesian analysis of rare B decays with advanced Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Beaujean, Frederik
2012-11-12
Searching for new physics in rare B meson decays governed by b {yields} s transitions, we perform a model-independent global fit of the short-distance couplings C{sub 7}, C{sub 9}, and C{sub 10} of the {Delta}B=1 effective field theory. We assume the standard-model set of b {yields} s{gamma} and b {yields} sl{sup +}l{sup -} operators with real-valued C{sub i}. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B{yields}K{sup *}{gamma}, B{yields}K{sup (*)}l{sup +}l{sup -}, and B{sub s}{yields}{mu}{sup +}{mu}{sup -} decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit reveals a flipped-sign solution in addition to a standard-model-like solution for the couplings C{sub i}. The two solutions are related
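The core PMC ingredient, importance sampling from a mixture proposal adapted to a multimodal posterior, can be sketched on a toy bimodal target. The target, the proposal placement, and all numbers below are invented for illustration and have nothing to do with the actual B-decay posterior.

```python
import math
import random

def target(x):
    """Unnormalised stand-in for a bimodal posterior: modes at +-4."""
    return math.exp(-0.5 * (x - 4.0) ** 2) + math.exp(-0.5 * (x + 4.0) ** 2)

def mixture_pdf(x, means, sigma):
    """Equal-weight Gaussian mixture proposal density."""
    norm = sigma * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - m) / sigma) ** 2) / norm
               for m in means) / len(means)

def is_estimate(f, means, sigma, n, seed=7):
    """Self-normalised importance sampling of E[f(x)] under the target.
    Each draw is independent, which is what makes PMC massively parallel."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(rng.choice(means), sigma)
        w = target(x) / mixture_pdf(x, means, sigma)   # importance weight
        num += w * f(x)
        den += w
    return num / den

# Proposal components already sitting near the two modes, as PMC adaptation
# (or a Markov-chain prescan with clustering) would place them.
est = is_estimate(abs, [-4.0, 4.0], 1.5, 50_000)   # E[|x|] is ~4 here
```

If the proposal misses a mode, the weights degenerate; that is exactly the initialization problem the hierarchical-clustering step in the abstract is meant to solve.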
International Nuclear Information System (INIS)
The Monte Carlo method can be used to compute the gamma-ray backscattering albedo. This method was used by Raso to compute the angular differential albedo, and Raso's results were used by Chilton and Huddleston to adjust their well-known albedo formula. Here, an efficient estimator is proposed to compute the double-differential (angular and energy) albedo from gamma-ray histories simulated in matter by the three-dimensional Monte Carlo transport code TRIPOLI. A detailed physical albedo analysis can be carried out in this way. The double-differential angular and energy gamma-ray albedo is calculated for iron for initial gamma-ray energies of 8, 3, 1, and 0.5 MeV
International Nuclear Information System (INIS)
The TH-PPL CT teaching instrument, developed at Tsinghua University, uses a 137Cs standard radiation source shielded by a lead canister. This paper simulates and analyses the irradiation rate around the lead canister by a method that combines Monte Carlo calculation with practical measurement; the measurements validate the correctness of the simulation. The energy deposited in an ICRU sphere placed 50 mm from the lead canister is simulated, and the personal dose is calculated from it. The results confirm that the lead canister's shielding is safe and that the Monte Carlo method can be used in radioprotection analysis and in the optimal design of lead canisters for shielding radiation sources. (authors)
International Nuclear Information System (INIS)
The perturbation source method is used within the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between subtracted estimates even in cases where other methods fail, for example for geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. Formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered
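The benefit of positive correlation between the two subtracted estimates is easy to demonstrate on a toy problem. The sketch below uses plain correlated sampling (common random numbers) on slab transmission, a much simpler stand-in for the perturbation-source machinery; the cross sections and geometry are arbitrary.

```python
import math
import random

def diff_transmission(mu_a, mu_b, n, correlated, seed=5):
    """Estimate the small difference in transmission through a 1 cm purely
    absorbing slab between two nearby cross sections mu_a and mu_b.
    With correlated=True both histories reuse the same random number, which
    creates the positive correlation that shrinks the variance of the
    difference estimator."""
    rng = random.Random(seed)
    mean = sq = 0.0
    for _ in range(n):
        u_a = rng.random()
        u_b = u_a if correlated else rng.random()
        d = ((-math.log(u_a) / mu_a > 1.0) - (-math.log(u_b) / mu_b > 1.0))
        mean += d
        sq += d * d
    mean /= n
    return mean, sq / n - mean * mean   # (estimate, per-history variance)

d_corr, v_corr = diff_transmission(1.00, 1.05, 100_000, correlated=True)
d_ind, v_ind = diff_transmission(1.00, 1.05, 100_000, correlated=False)
# Exact difference: exp(-1.00) - exp(-1.05) ~ 0.0179
```

Both runs are unbiased, but the correlated difference estimator has a per-history variance more than an order of magnitude smaller, which is the effect the perturbation source method generalizes to geometry changes.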
International Nuclear Information System (INIS)
We have investigated Monte Carlo schemes for analyzing particle transport through media with exponentially varying time-dependent cross sections. For such media, the cross sections are represented in the form Σ(t) = Σ0·exp(-at) (1), or equivalently as Σ(x) = Σ0·exp(-bx) (2), where b = a/v and v is the particle speed. In the following discussion, the parameters a and b may be either positive, for exponentially decreasing cross sections, or negative, for exponentially increasing cross sections. For most time-dependent Monte Carlo applications, the time and spatial variations of the cross-section data are handled by a stepwise procedure: the cross sections are held constant over a small time interval Δt, the Monte Carlo random walk is performed over that interval, the cross sections are updated, and the process is repeated for a series of time intervals. Continuously varying spatial- or time-dependent cross sections can be treated in a rigorous Monte Carlo fashion using delta-tracking, but inefficiencies may arise if the range of cross-section variation is large. In this paper, we present a new method for sampling collision distances directly for cross sections that vary exponentially in space or time. The method is exact and efficient and has direct application to Monte Carlo radiation transport methods. To verify that the probability density function (PDF) is correct and that the random-sampling procedure yields correct results, numerical experiments were performed using a one-dimensional Monte Carlo code. The physical problem consisted of a beam source impinging on a purely absorbing infinite slab, with a slab thickness of 1 cm and Σ0 = 1 cm⁻¹. Monte Carlo calculations with 10 000 particles were run for a range of the exponential parameter b from -5 to +20 cm⁻¹. Two separate Monte Carlo calculations were run for each choice of b: a continuously varying case using the random-sampling procedures described earlier, and a 'conventional' case where the
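A minimal sketch of the direct-sampling idea, under the assumption that the method inverts the optical-depth CDF (the standard construction for an exponentially varying Σ; the function names are mine). The verification problem mirrors the one in the abstract: a beam on a purely absorbing 1 cm slab with Σ0 = 1 cm⁻¹.

```python
import math
import random

def sample_distance(sigma0, b, rng):
    """Directly sample a collision distance for Sigma(x) = sigma0*exp(-b*x)
    by inverting the optical depth tau(s) = sigma0*(1 - exp(-b*s))/b.
    For b > 0 the total optical depth is finite (sigma0/b), so the particle
    may never collide; math.inf is returned in that case."""
    u = -math.log(rng.random())          # sampled optical depth
    if b == 0.0:
        return u / sigma0                # constant cross-section limit
    arg = 1.0 - b * u / sigma0
    if arg <= 0.0:
        return math.inf                  # streams out without colliding
    return -math.log(arg) / b

def transmitted(sigma0, b, slab_cm, n, seed=1):
    """Fraction of beam particles crossing a purely absorbing slab."""
    rng = random.Random(seed)
    return sum(sample_distance(sigma0, b, rng) > slab_cm
               for _ in range(n)) / n

mc = transmitted(1.0, 5.0, 1.0, 100_000)
exact = math.exp(-(1.0 - math.exp(-5.0)) / 5.0)   # analytic transmission
```

Unlike the stepwise Δt procedure, each flight here needs a single logarithm and no step-size control, and unlike delta-tracking it needs no majorant cross section.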
Simulation of the self-powered detector sensitivity using the Monte Carlo method
International Nuclear Information System (INIS)
This work presents a Monte Carlo simulation of cobalt self-powered detectors, determining the detector sensitivities to the neutron field. Several detectors whose results have been published were simulated in order to check the performance of the simulation. Furthermore, the variation of the sensitivity with the geometric parameters and with the irradiation time in the reactor was evaluated. (author)
Farges, Olivier; Bézian, Jean Jacques; Bru, Hélène; El Hafi, Mouna; Fournier, Richard; Spiesser, Christophe
2015-01-01
The rapidity and accuracy of algorithms evaluating yearly collected energy are an important issue in the context of optimizing concentrated solar power plants (CSP). Over the last ten years, several research groups have concentrated their efforts on the development of such sophisticated tools: approximations are required to decrease the CPU time, while closely checking that the corresponding loss in accuracy remains acceptable. Here we present an alternative approach using Monte Carlo methods (MCM). T...
International Nuclear Information System (INIS)
Knowing the depth dose along the central axis is fundamental for the accurate planning of medical treatment systems involving ionizing radiation. With the evolution of informatics, various computational tools that use the Monte Carlo method, such as GEANT4 and MCNPX, can be used to simulate such situations. This paper compares the two tools for this type of application
Das, Arnab; Chakrabarti, Bikas K.
2008-12-01
Here we discuss the annealing behavior of an infinite-range ±J Ising spin glass in the presence of a transverse field, using a zero-temperature quantum Monte Carlo method. Within the simulation scheme, we demonstrate that quantum annealing not only helps in finding the ground state of a classical spin glass, but can also help in simulating the ground state of a quantum spin glass much more efficiently, in particular when the transverse field is low.
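As a classical analogue of the annealing schedule discussed above, the sketch below thermally anneals a small infinite-range ±J glass and checks the result against brute-force enumeration. This is ordinary simulated annealing, not the zero-temperature quantum Monte Carlo of the paper; the system size, couplings, and schedule are arbitrary.

```python
import itertools
import math
import random

def energy(spins, J):
    """Energy of an infinite-range +-J Ising glass: E = -sum_{i<j} J_ij s_i s_j."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(J, steps=4000, t_hi=3.0, t_lo=0.01, rng=None):
    """Metropolis single-spin flips on a geometric temperature schedule,
    standing in for the transverse-field schedule of the quantum version."""
    rng = rng or random.Random()
    n = len(J)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    e = best = energy(s, J)
    for k in range(steps):
        t = t_hi * (t_lo / t_hi) ** (k / (steps - 1))
        i = rng.randrange(n)
        de = 2.0 * s[i] * sum(J[i][j] * s[j] for j in range(n) if j != i)
        if de <= 0.0 or rng.random() < math.exp(-de / t):
            s[i] = -s[i]
            e += de
            best = min(best, e)
    return best

# Random +-1 couplings on 8 spins: small enough to enumerate all 2^8 states.
rng = random.Random(11)
n = 8
J = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = rng.choice((-1.0, 1.0))

exact = min(energy(c, J) for c in itertools.product((-1, 1), repeat=n))
found = min(anneal(J, rng=random.Random(r)) for r in range(20))
```

With a few restarts the annealer reaches the enumerated ground-state energy on a system this small; the interesting regime of the paper is of course where enumeration is impossible.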
Makarevich, K. O.; Minenko, V. F.; Verenich, K. A.; Kuten, S. A.
2016-05-01
This work models dental radiographic examinations in order to assess patients' absorbed organ doses and effective doses. X-ray spectra are simulated with the TASMIP empirical model. Doses are assessed by the Monte Carlo method, using the MCNP code with the ICRP voxel phantoms. The results of the assessment of doses to individual organs and of effective doses for different types of dental examination and different X-ray tube settings are presented.
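Once the Monte Carlo run has produced organ equivalent doses, the effective dose is a tissue-weighted sum. The w_T values below are the ICRP-103 tissue weighting factors for the listed tissues, but the organ doses are made up for illustration, and only a subset of tissues is summed.

```python
# ICRP-103 tissue weighting factors w_T for a few tissues relevant to a
# dental exposure, paired with made-up organ equivalent doses H_T (mSv).
# A real effective dose sums over all ICRP-103 tissues; this is a partial sum.
w_T = {"red_bone_marrow": 0.12, "remainder": 0.12,
       "thyroid": 0.04, "salivary_glands": 0.01,
       "brain": 0.01, "skin": 0.01}
H_T = {"red_bone_marrow": 0.02, "remainder": 0.02,
       "thyroid": 0.05, "salivary_glands": 0.30,
       "brain": 0.04, "skin": 0.03}   # illustrative values only

effective_dose_msv = sum(w_T[t] * H_T[t] for t in w_T)   # E = sum w_T * H_T
```

With these invented organ doses the partial sum comes to about 0.0105 mSv, i.e. the salivary glands dominate the organ dose but the marrow and remainder terms dominate the weighted sum.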
Measurements of the ZZ production cross sections in the $2\\ell2\
Khachatryan, Vardan; Tumasyan, Armen; Adam, Wolfgang; Bergauer, Thomas; Dragicevic, Marko; Erö, Janos; Friedl, Markus; Fruehwirth, Rudolf; Ghete, Vasile Mihai; Hartl, Christian; Hörmann, Natascha; Hrubec, Josef; Jeitler, Manfred; Kiesenhofer, Wolfgang; Knünz, Valentin; Krammer, Manfred; Krätschmer, Ilse; Liko, Dietrich; Mikulec, Ivan; Rabady, Dinyar; Rahbaran, Babak; Rohringer, Herbert; Schöfbeck, Robert; Strauss, Josef; Treberer-Treberspurg, Wolfgang; Waltenberger, Wolfgang; Wulz, Claudia-Elisabeth; Mossolov, Vladimir; Shumeiko, Nikolai; Suarez Gonzalez, Juan; Alderweireldt, Sara; Bansal, Sunil; Cornelis, Tom; De Wolf, Eddi A; Janssen, Xavier; Knutsson, Albert; Lauwers, Jasper; Luyckx, Sten; Ochesanu, Silvia; Rougny, Romain; Van De Klundert, Merijn; Van Haevermaet, Hans; Van Mechelen, Pierre; Van Remortel, Nick; Van Spilbeeck, Alex; Blekman, Freya; Blyweert, Stijn; D'Hondt, Jorgen; Daci, Nadir; Heracleous, Natalie; Keaveney, James; Lowette, Steven; Maes, Michael; Olbrechts, Annik; Python, Quentin; Strom, Derek; Tavernier, Stefaan; Van Doninck, Walter; Van Mulders, Petra; Van Onsem, Gerrit Patrick; Villella, Ilaria; Caillol, Cécile; Clerbaux, Barbara; De Lentdecker, Gilles; Dobur, Didar; Favart, Laurent; Gay, Arnaud; Grebenyuk, Anastasia; Léonard, Alexandre; Mohammadi, Abdollah; Perniè, Luca; Randle-conde, Aidan; Reis, Thomas; Seva, Tomislav; Thomas, Laurent; Vander Velde, Catherine; Vanlaer, Pascal; Wang, Jian; Zenoni, Florian; Adler, Volker; Beernaert, Kelly; Benucci, Leonardo; Cimmino, Anna; Costantini, Silvia; Crucy, Shannon; Dildick, Sven; Fagot, Alexis; Garcia, Guillaume; Mccartin, Joseph; Ocampo Rios, Alberto Andres; Ryckbosch, Dirk; Salva Diblen, Sinem; Sigamani, Michael; Strobbe, Nadja; Thyssen, Filip; Tytgat, Michael; Yazgan, Efe; Zaganidis, Nicolas; Basegmez, Suzan; Beluffi, Camille; Bruno, Giacomo; Castello, Roberto; Caudron, Adrien; Ceard, Ludivine; Da Silveira, Gustavo Gil; Delaere, Christophe; Du Pree, Tristan; Favart, Denis; Forthomme, Laurent; 
Giammanco, Andrea; Hollar, Jonathan; Jafari, Abideh; Jez, Pavel; Komm, Matthias; Lemaitre, Vincent; Nuttens, Claude; Pagano, Davide; Perrini, Lucia; Pin, Arnaud; Piotrzkowski, Krzysztof; Popov, Andrey; Quertenmont, Loic; Selvaggi, Michele; Vidal Marono, Miguel; Vizan Garcia, Jesus Manuel; Beliy, Nikita; Caebergs, Thierry; Daubie, Evelyne; Hammad, Gregory Habib; Aldá Júnior, Walter Luiz; Alves, Gilvan; Brito, Lucas; Correa Martins Junior, Marcos; Dos Reis Martins, Thiago; Mora Herrera, Clemencia; Pol, Maria Elena; Rebello Teles, Patricia; Carvalho, Wagner; Chinellato, Jose; Custódio, Analu; Da Costa, Eliza Melo; De Jesus Damiao, Dilson; De Oliveira Martins, Carley; Fonseca De Souza, Sandro; Malbouisson, Helena; Matos Figueiredo, Diego; Mundim, Luiz; Nogima, Helio; Prado Da Silva, Wanda Lucia; Santaolalla, Javier; Santoro, Alberto; Sznajder, Andre; Tonelli Manganote, Edmilson José; Vilela Pereira, Antonio; Bernardes, Cesar Augusto; Dogra, Sunil; Tomei, Thiago; De Moraes Gregores, Eduardo; Mercadante, Pedro G; Novaes, Sergio F; Padula, Sandra; Aleksandrov, Aleksandar; Genchev, Vladimir; Hadjiiska, Roumyana; Iaydjiev, Plamen; Marinov, Andrey; Piperov, Stefan; Rodozov, Mircho; Sultanov, Georgi; Vutova, Mariana; Dimitrov, Anton; Glushkov, Ivan; Litov, Leander; Pavlov, Borislav; Petkov, Peicho; Bian, Jian-Guo; Chen, Guo-Ming; Chen, He-Sheng; Chen, Mingshui; Cheng, Tongguang; Du, Ran; Jiang, Chun-Hua; Plestina, Roko; Romeo, Francesco; Tao, Junquan; Wang, Zheng; Asawatangtrakuldee, Chayanit; Ban, Yong; Li, Qiang; Liu, Shuai; Mao, Yajun; Qian, Si-Jin; Wang, Dayong; Xu, Zijun; Zou, Wei; Avila, Carlos; Cabrera, Andrés; Chaparro Sierra, Luisa Fernanda; Florez, Carlos; Gomez, Juan Pablo; Gomez Moreno, Bernardo; Sanabria, Juan Carlos; Godinovic, Nikola; Lelas, Damir; Polic, Dunja; Puljak, Ivica; Antunovic, Zeljko; Kovac, Marko; Brigljevic, Vuko; Kadija, Kreso; Luetic, Jelena; Mekterovic, Darko; Sudic, Lucija; Attikis, Alexandros; Mavromanolakis, Georgios; Mousa, Jehad; Nicolaou, 
Charalambos; Ptochos, Fotios; Razis, Panos A; Bodlak, Martin; Finger, Miroslav; Finger Jr, Michael; Assran, Yasser; Ellithi Kamel, Ali; Mahmoud, Mohammed; Radi, Amr; Kadastik, Mario; Murumaa, Marion; Raidal, Martti; Tiko, Andres; Eerola, Paula; Fedi, Giacomo; Voutilainen, Mikko; Härkönen, Jaakko; Karimäki, Veikko; Kinnunen, Ritva; Kortelainen, Matti J; Lampén, Tapio; Lassila-Perini, Kati; Lehti, Sami; Lindén, Tomas; Luukka, Panja-Riina; Mäenpää, Teppo; Peltola, Timo; Tuominen, Eija; Tuominiemi, Jorma; Tuovinen, Esa; Wendland, Lauri; Talvitie, Joonas; Tuuva, Tuure
2015-01-01
Measurements of the ZZ production cross sections in proton-proton collisions at center-of-mass energies of 7 and 8 TeV are presented. Candidate events for the leptonic decay mode $\\mathrm{ZZ} \\to 2\\ell2\
Prospects for Measuring Neutral Gauge Boson Couplings in ZZ Production with the ATLAS Detector
Hassani, S
2002-01-01
$ZZ$ production at the LHC provides an opportunity to probe the neutral gauge boson self-interaction in a direct way. The possibility to detect anomalous $ZZZ$ and $ZZ\gamma$ couplings is investigated in the context of the ATLAS detector. The expected limits on these couplings will improve the limits currently obtained by the LEP experiments by three orders of magnitude.
Signatures of the anomalous $Z\\gamma$ and $ZZ$ production at the lepton and hadron Colliders
Gounaris, George J; Renard, F M
2000-01-01
The possible forms of the $ZZZ$, $ZZ\gamma$ and $Z\gamma\gamma$ vertices which may be induced by some New Physics interactions are critically examined. Their signatures and the possibilities to study them, through $ZZ$ and $Z\gamma$ production at the $e^-e^+$ colliders LEP and LC and at the hadronic colliders Tevatron and LHC, are investigated.
Pulsations in ZZ Ceti variable white dwarf stars
Córsico, Alejandro Hugo
2003-01-01
The subject of this thesis is the study of the pulsational properties of ZZ Ceti variable stars from a theoretical-numerical point of view. One of the most important motivations for studying pulsating stars in general lies in the possibility of extracting information about their internal structure and evolutionary state through the analysis of their pulsation spectrum. This technique is essentially analogous to the well-known seismology in geophysics. The basic principle (...)
ZZ production at hadron colliders in NNLO QCD
Cascioli, F; Grazzini, M; Kallweit, S; Maierhöfer, P; von Manteuffel, A; Pozzorini, S; Rathlev, D; Tancredi, L; Weihs, E
2014-01-01
We report on the first calculation of next-to-next-to-leading order (NNLO) QCD corrections to the inclusive production of ZZ pairs at hadron colliders. Numerical results are presented for pp collisions with centre-of-mass energy ($\\sqrt{s}$) ranging from 7 to 14 TeV. The NNLO corrections increase the NLO result by an amount varying from $11\\%$ to $17\\%$ as $\\sqrt{s}$ goes from 7 to 14 TeV. The loop-induced gluon fusion contribution provides about $60\\%$ of the total NNLO effect. When going from NLO to NNLO the scale uncertainties do not decrease and remain at the $\\pm 3\\%$ level.
Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N
2015-09-01
The Isotope Production and Application Division of Bhabha Atomic Research Center developed (32)P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed (32)P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the (32)P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and the radiochromic films are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values of the surface dose rate measured using the two independent experimental methods agree with each other within 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by Monte Carlo and by the extrapolation chamber method is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the (32)P patch source obtained by three independent methods agree with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed (32)P patch sources for contact brachytherapy applications. PMID:26086681
Monte-Carlo method simulation of the Bremsstrahlung mirror reflection experiment
International Nuclear Information System (INIS)
Full text: To detect gamma-ray mirror reflection from a macroscopically smooth surface, a search experiment with a 330-meter flight distance is in progress at the microtron MT-22S. The measured slip angles (i.e., the angles between the incident ray and the reflector surface) do not exceed tens of microradians. At such angles the reflection effect could easily be veiled by unfavourable background conditions. The process therefore needed to be simulated by the Monte Carlo method as accurately as possible, and a corresponding computer program was developed. The first operating mode of the MT-22S generates 13 MeV electrons that are incident on a Bremsstrahlung target, so gamma-ray energies were simulated in the range 0.01-12.5 MeV, distributed according to the known Schiff formula. When a gamma-quantum was incident on the reflector, one of two cases followed. If its slip angle exceeded the critical one, the gamma-quantum was absorbed by the reflector and the program started to simulate the next event. Otherwise, the program replaced the incident gamma-quantum trajectory parameters by the reflected ones. The gamma-quantum trajectory behind the reflector was traced to the detector, and any gamma-quantum that reached the detector was registered. Since every simulated gamma-quantum has a random energy, the critical slip angle of each simulated event was evaluated by the formula: α_crit = (eh/E)·√(Z N_A ρ / (π A m)). Table values of the absorption coefficients were used for random simulation of gamma-quanta absorption in the air, and it was assumed that any gamma-quantum interaction with air results in its disappearance. The dependence of the detected gamma-quanta energy and vertical-angle distributions on flight distance (120 and 330 m), collimator gap height (10, 20 and 50 μm) and reflector inclination (20 and 40 μrad) was studied with the help of the developed program
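The critical-angle formula above can be evaluated numerically; it is algebraically equivalent to θ_c ≈ ħω_p/E with the free-electron plasma energy ħω_p ≈ 28.82·√(ρZ/A) eV (ρ in g/cm³). The reflector material below is an assumption for illustration (the abstract does not name one).

```python
import math

# hbar*omega_p = 28.82 * sqrt(rho * Z / A) eV is the standard free-electron
# plasma-energy estimate used in X-ray optics (rho in g/cm^3).
PLASMON_COEFF_EV = 28.82

def critical_angle_rad(e_gamma_ev, rho, z, a):
    """Critical slip angle for mirror reflection, theta_c ~ hbar*omega_p / E;
    an equivalent form of the alpha_crit formula quoted in the abstract."""
    return PLASMON_COEFF_EV * math.sqrt(rho * z / a) / e_gamma_ev

# Example: a silicon reflector (rho = 2.33 g/cm^3, Z = 14, A = 28.09) and a
# 1 MeV photon -- tens of microradians, consistent with the angles above.
theta = critical_angle_rad(1.0e6, 2.33, 14.0, 28.09)
```

The 1/E dependence is why the effect is only observable at such small slip angles for MeV photons, and why the background suppression discussed in the abstract matters.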
International Nuclear Information System (INIS)
The present report describes a computer code, DEEP, which calculates organ dose equivalents and the effective dose equivalent for external photon exposure by the Monte Carlo method. MORSE-CG, a Monte Carlo radiation transport code, is incorporated into DEEP to simulate photon transport phenomena in and around a human body. The code treats an anthropomorphic phantom represented by mathematical formulae, and the user can choose the phantom sex: male, female or unisex. The phantom can wear personal dosimeters, whose location and dimensions the user can specify. This document includes instructions and a sample problem for the code, as well as a general description of the dose calculation, the human phantom and the computer code. (author)
Medhat, M. E.; Demir, Nilgun; Akar Tarim, Urkiye; Gurler, Orhan
2014-08-01
Monte Carlo simulations with FLUKA and Geant4 were performed to study the mass attenuation of various types of soil at photon energies of 59.5, 356.5, 661.6, 1173.2 and 1332.5 keV. Appreciable variations are noted for all parameters when the photon energy and the chemical composition of the sample are changed. The simulation results were compared with experimental data and with the XCOM program. The simulations show that the calculated mass attenuation coefficients were closer to the experimental values than those obtained theoretically from the XCOM database for the same soil samples. The results indicate that Geant4 and FLUKA can be applied to estimate mass attenuation for various biological materials at different energies. The Monte Carlo method may be employed for additional calculations of the photon attenuation characteristics of soil samples collected from other places.
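As a minimal illustration of how a Monte Carlo transmission run relates to the mass attenuation coefficient, the sketch below samples exponential free paths through a slab and inverts the Beer-Lambert law. The attenuation coefficient `mu`, density and thickness are hypothetical inputs, standing in for the physics a full FLUKA/Geant4 simulation would supply.

```python
import math
import random

def transmitted_fraction(mu, thickness_cm, n_photons=100000, seed=1):
    """Analog Monte Carlo estimate of narrow-beam transmission.

    mu is the linear attenuation coefficient (1/cm) of the slab; here it
    is a known input rather than something derived from nuclear data.
    """
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_photons):
        # free path length sampled from the exponential distribution
        path = -math.log(1.0 - rng.random()) / mu
        if path > thickness_cm:
            passed += 1
    return passed / n_photons

def mass_attenuation(i_over_i0, density_g_cm3, thickness_cm):
    # Beer-Lambert: I/I0 = exp(-(mu/rho) * rho * t), solved for mu/rho
    return -math.log(i_over_i0) / (density_g_cm3 * thickness_cm)
```

Running the transmission estimate and feeding the measured I/I0 into `mass_attenuation` recovers mu/rho, which is the quantity compared against XCOM in the study.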
Energy Technology Data Exchange (ETDEWEB)
Cirrone, G.A.P., E-mail: cirrone@lns.infn.it [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Bucciolini, M. [Department of 'Fisiopatologia Clinica', University of Florence, V.le Morgagni 85, I-50134 Florence (Italy); Bruzzi, M. [Energetic Department, University of Florence, Via S. Marta 3, I-50139 Florence (Italy); Candiano, G. [Laboratorio di Tecnologie Oncologiche HSR, Giglio Contrada, Pietrapollastra-Pisciotto, 90015 Cefalu, Palermo (Italy); Civinini, C. [National Institute for Nuclear Physics INFN, Section of Florence, Via G. Sansone 1, Sesto Fiorentino, I-50019 Florence (Italy); Cuttone, G. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Guarino, P. [Nuclear Engineering Department, University of Palermo, Via... Palermo (Italy); Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Lo Presti, D. [Physics Department, University of Catania, Via S. Sofia 64, I-95123 Catania (Italy); Mazzaglia, S.E. [Laboratori Nazionali del Sud - National Institute for Nuclear Physics INFN (INFN-LNS), Via S.Sofia 64, 95100 Catania (Italy); Pallotta, S. [Department of 'Fisiopatologia Clinica', University of Florence, V.le Morgagni 85, I-50134 Florence (Italy); Randazzo, N. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); Sipala, V. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); Physics Department, University of Catania, Via S. Sofia 64, I-95123 Catania (Italy); Stancampiano, C. [National Institute for Nuclear Physics INFN, Section of Catania, Via S.Sofia 64, 95123 Catania (Italy); and others
2011-12-01
In this paper the use of the Filtered Back Projection (FBP) algorithm to reconstruct tomographic images from high-energy (200-250 MeV) proton beams is investigated. The algorithm has been studied in detail with a Monte Carlo approach, and image quality has been analysed and compared with the total absorbed dose. A proton Computed Tomography (pCT) apparatus, developed by our group, has been fully simulated exploiting the power of the Geant4 Monte Carlo toolkit. From the simulation of the apparatus, a set of tomographic images of a test phantom has been reconstructed using FBP at different absorbed dose values. The images have been evaluated in terms of homogeneity, noise, contrast, and spatial and density resolution.
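A minimal parallel-beam version of the FBP algorithm referred to above can be sketched as follows. This is only a schematic of the filter-then-backproject idea, with a simple ramp filter and nearest-neighbour interpolation; an actual pCT reconstruction must additionally handle curved proton paths and is considerably more involved.

```python
import numpy as np

def fbp_reconstruct(sinogram, angles_deg):
    """Minimal parallel-beam filtered back projection.

    sinogram: array of shape (n_angles, n_detectors) of line integrals.
    Returns an (n_detectors x n_detectors) reconstruction.
    """
    n_angles, n_det = sinogram.shape
    # ramp filter applied to each projection in the Fourier domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # back-project every filtered projection onto the image grid
    recon = np.zeros((n_det, n_det))
    centre = n_det // 2
    ys, xs = np.mgrid[0:n_det, 0:n_det] - centre
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # detector coordinate of each pixel for this view (nearest bin)
        t = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + centre
        valid = (t >= 0) & (t < n_det)
        recon[valid] += proj[t[valid]]
    return recon * np.pi / (2 * n_angles)
```

A sinogram whose every view contains a spike in the central detector bin reconstructs to a point at the image centre, which is a quick sanity check on the geometry.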
Implementation of a Monte Carlo method to model photon conversion for solar cells
International Nuclear Information System (INIS)
A physical model describing different photon conversion mechanisms is presented in the context of photovoltaic applications. To solve the resulting system of equations, a Monte Carlo ray-tracing model is implemented, which takes into account the coupling of the photon transport phenomena to the non-linear rate equations describing luminescence. It also separates the generation of rays from the two very different sources of photons involved (the sun and the luminescence centers). The Monte Carlo simulator presented in this paper is proposed as a tool to help in the evaluation of candidate materials for up- and down-conversion. Some application examples are presented, exploring the range of values that the most relevant parameters describing the converter should have in order to give significant gain in photocurrent
Verification of Transformer Restricted Earth Fault Protection by using the Monte Carlo Method
KRSTIVOJEVIC, J. P.; DJURIC, M. B.
2015-01-01
The results of a comprehensive investigation of the influence of current transformer (CT) saturation on restricted earth fault (REF) protection during power transformer magnetization inrush are presented. Since the inrush current during switch-on of an unloaded power transformer is stochastic, its values are obtained by: (i) laboratory measurements and (ii) calculations based on input data obtained by Monte Carlo (MC) simulation. To make a detailed assessment of the curre...
Alavirad, Hamzeh; Malekjani, Mohammad
2013-01-01
We constrain holographic dark energy (HDE) with a time-varying gravitational coupling constant in the framework of the modified Friedmann equations, using cosmological data from type Ia supernovae, baryon acoustic oscillations, the cosmic microwave background radiation and the X-ray gas mass fraction. Applying a Markov Chain Monte Carlo (MCMC) simulation, we obtain the best-fit values of the model and cosmological parameters within the $1\sigma$ confidence level (CL) in a flat universe as: $\Omega_{\rm b}h^...
Stochastic method for accommodation of equilibrating basins in kinetic Monte Carlo simulations
Van Siclen, Clinton DeW.
2008-01-01
A computationally simple way to accommodate 'basins' of trapping sites in standard kinetic Monte Carlo simulations is presented. By assuming the system is effectively equilibrated in the basin, the residence time (time spent in the basin before escape) and the probabilities for transition to states outside the basin may be calculated. This is demonstrated for point defect diffusion over a periodic grid of sites containing a complex basin.
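The basin treatment described above can be sketched as follows, assuming Boltzmann occupation of the equilibrated basin sites. The site energies and escape rates are hypothetical inputs, and the paper's exact formulation may differ; this only illustrates how equilibrium occupations turn per-site escape rates into one effective residence time and one escape-channel choice.

```python
import math
import random

def basin_escape(site_energies, escape_rates, kT=1.0, seed=None):
    """One accelerated KMC step out of an equilibrated basin.

    site_energies: energy of each trapping site in the basin.
    escape_rates:  escape_rates[i] is a dict {external_state: rate}
                   for basin site i.
    Returns (residence_time, chosen_external_state).
    """
    rng = random.Random(seed)
    # Boltzmann occupation probabilities over the basin sites
    weights = [math.exp(-e / kT) for e in site_energies]
    z = sum(weights)
    occupation = [w / z for w in weights]
    # effective escape rate to each external state
    effective = {}
    for p, rates in zip(occupation, escape_rates):
        for state, k in rates.items():
            effective[state] = effective.get(state, 0.0) + p * k
    total = sum(effective.values())
    # residence time is exponential with the total effective rate
    residence = -math.log(1.0 - rng.random()) / total
    # pick the escape channel with probability proportional to its rate
    r, acc = rng.random() * total, 0.0
    for state, k in effective.items():
        acc += k
        if r <= acc:
            return residence, state
    return residence, state  # guard against floating-point round-off
```

For a symmetric two-site basin with unit escape rates the mean residence time is 1 and both escape channels are visited, which matches the intuition that the basin behaves like a single super-state.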
The Metropolis Monte Carlo method with CUDA enabled Graphic Processing Units
International Nuclear Information System (INIS)
We present a CPU-GPU system for runtime acceleration of large molecular simulations using GPU computation and memory swaps. The memory architecture of the GPU can be used both as a container for simulation data stored on the graphics card and as a floating-point code target, providing an effective means for the manipulation of atomistic or molecular data on the GPU. To take full advantage of this mechanism, efficient GPU realizations of the algorithms used to perform atomistic and molecular simulations are essential. Our system implements a versatile molecular engine, including inter-molecule interactions and orientational variables, for performing the Metropolis Monte Carlo (MMC) algorithm, which is one type of Markov chain Monte Carlo. By combining memory objects with floating-point code fragments we have implemented an MMC parallel engine that entirely avoids the communication of molecular data at runtime. Our runtime acceleration system is a forerunner of a new class of CPU-GPU algorithms exploiting memory concepts combined with threading to avoid bus bandwidth and communication costs. The testbed molecular system used here is a condensed-phase system of oligopyrrole chains. A benchmark shows a size-scaling speedup of 60 for systems with 210,000 pyrrole monomers. Our implementation can easily be combined with MPI to connect several CPU-GPU duets in parallel. -- Highlights: • We parallelize the Metropolis Monte Carlo (MMC) algorithm on one CPU-GPU duet. • The Adaptive Tempering Monte Carlo employs MMC and profits from this CPU-GPU implementation. • Our benchmark shows a size-scaling speedup of 62 for systems with 225,000 particles. • The testbed involves a polymeric system of oligopyrroles in the condensed phase. • The CPU-GPU parallelization includes dipole-dipole and Mie-Jones classic potentials.
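For reference, the Metropolis acceptance rule that the CPU-GPU engine parallelizes can be shown on a toy 1-D Ising ring; the molecular details of the oligopyrrole testbed (dipole interactions, orientational moves) are beyond this sketch.

```python
import math
import random

def metropolis_ising_1d(n_spins, beta, n_steps, seed=0):
    """Plain CPU Metropolis Monte Carlo on a 1-D Ising ring (J = 1).

    A minimal serial reference for the MMC acceptance rule; the paper's
    engine applies the same rule to molecular degrees of freedom.
    """
    rng = random.Random(seed)
    spins = [1] * n_spins
    energy = -float(n_spins)          # all-up ring: E = -sum(s_i * s_{i+1})
    for _ in range(n_steps):
        i = rng.randrange(n_spins)
        left, right = spins[i - 1], spins[(i + 1) % n_spins]
        d_e = 2.0 * spins[i] * (left + right)   # energy change of flipping i
        # Metropolis rule: accept downhill moves always,
        # uphill moves with probability exp(-beta * dE)
        if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
            spins[i] = -spins[i]
            energy += d_e
    return spins, energy
```

Tracking the energy incrementally (adding dE on each accepted move) is the same bookkeeping trick a large-scale MMC engine uses to avoid recomputing the full energy every step.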
International Nuclear Information System (INIS)
The electron drift velocity, W, and the first Townsend ionization coefficient, α, are calculated for nitrogen over a range of E/P0 values, where E/P0 is the ratio of electric field to pressure and the pressure P0 is reduced to 0 deg. C. The spherical harmonic expansion calculation predicts α values which are 50-100% larger than those predicted by the Monte Carlo calculation. The predicted drift velocities agree to within 10-20%. (Auth.)
Application of Monte Carlo Method to Phase Separation Dynamics of Complex Systems
Okabe, Yutaka; Miyajima, Tsukasa; Ito, Toshiro; Kawakatsu, Toshihiro
1999-01-01
We report the application of Monte Carlo simulation to phase separation dynamics. First, we deal with phase separation under shear flow: the thermal effect on the phase separation is discussed, and the anisotropic growth exponents in the late stage are estimated. Next, we study the effect of surfactants on three-component solvents: we obtain a mixture of macrophase and microphase separation, and investigate the dynamics of both.
A Monte Carlo method based on antithetic variates for network reliability computations
El Khadiri, Mohamed; Rubino, Gerardo
1992-01-01
The exact evaluation of the usual reliability measures of communication networks is seriously limited by the excessive computational time usually needed to obtain them. In the general case, the computation of almost all the interesting reliability metrics is an NP-hard problem. An alternative approach is to estimate them by means of Monte Carlo simulation, which makes it possible to deal with larger models than those that can be evaluated exactly. In this paper, we propose an algorithm much more per...
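The antithetic-variates idea behind such estimators can be sketched for a toy four-link network; the network topology and link up-probability below are hypothetical, not taken from the paper. Each uniform draw u is paired with its antithetic counterpart 1-u, inducing negative correlation between the paired estimates and thus reducing the variance of their average.

```python
import random

def works(up):
    # hypothetical two-path network: source and terminal are connected
    # if both links of either path are up
    return (up[0] and up[1]) or (up[2] and up[3])

def reliability_antithetic(p_up, n_pairs, seed=0):
    """Estimate source-terminal reliability with antithetic variates."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = [rng.random() for _ in range(4)]
        s1 = works([x < p_up for x in u])          # primary replicate
        s2 = works([1.0 - x < p_up for x in u])    # antithetic replicate
        total += (s1 + s2) / 2.0
    return total / n_pairs
```

For this topology the exact reliability is 1 - (1 - p^2)^2, so the estimator is easy to validate against a closed form before applying the same machinery to networks too large for exact evaluation.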
Directory of Open Access Journals (Sweden)
Wagner Fernando Delfino Angelotti
2008-01-01
Full Text Available The paper presents an introductory and general discussion of quantum Monte Carlo methods, some fundamental algorithms, concepts and their applicability. In order to introduce the quantum Monte Carlo method, preliminary concepts associated with Monte Carlo techniques are discussed.
International Nuclear Information System (INIS)
Different codes have been used for Monte Carlo calculations in radiation therapy. In this study, a new Monte Carlo Simulation Program (MCSP) was developed to study the effects of the physical parameters of photons emitted from a Siemens Primus clinical linear accelerator (LINAC) on the dose distribution in water. MCSP was written to model the interactions of photons with matter, taking into account mainly two interactions: Compton (incoherent) scattering and the photoelectric effect. The photons arriving at the water phantom surface, emitted from a point source, were bremsstrahlung photons, whose energy distribution must be known in order to follow them. Bremsstrahlung photons with a maximum energy of 6 MeV (6 MV photon mode) were taken into account; in the 6 MV photon mode, photon energies were sampled from Mohan's experimental energy spectrum (Mohan et al. 1985). In order to investigate the performance and accuracy of the simulation, measured and calculated (MCSP) percentage depth dose curves and dose profiles were compared. The Monte Carlo results showed good agreement with the experimental measurements.
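The way a program like MCSP chooses between its two modelled interactions can be illustrated by sampling in proportion to the partial cross sections. The numeric cross-section values below are placeholders, since in a real code they depend on the photon energy and the medium.

```python
import random

def sample_interaction(sigma_compton, sigma_photo, rng):
    """Choose the next photon interaction from its partial cross sections.

    The probability of Compton scattering is sigma_compton / sigma_total;
    otherwise the photoelectric effect occurs.
    """
    total = sigma_compton + sigma_photo
    return 'compton' if rng.random() * total < sigma_compton else 'photoelectric'

def tally(n, sigma_c, sigma_p, seed=0):
    # count how often each interaction type is sampled over n photons
    rng = random.Random(seed)
    counts = {'compton': 0, 'photoelectric': 0}
    for _ in range(n):
        counts[sample_interaction(sigma_c, sigma_p, rng)] += 1
    return counts
```

Over many histories the sampled fractions converge to the cross-section ratios, which is the discrete analogue of the physics the depth-dose comparison validates.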
The effects of weekly augmentation therapy in patients with PiZZ α1-antitrypsin deficiency
Directory of Open Access Journals (Sweden)
Schmid ST
2012-09-01
Full Text Available ST Schmid,1 J Koepke,1 M Dresel,1 A Hattesohl,1 E Frenzel,2 J Perez,3 DA Lomas,4 E Miranda,5 T Greulich,1 S Noeske,1 M Wencker,6 H Teschler,6 C Vogelmeier,1 S Janciauskiene,2,* AR Koczulla1,* 1Department of Internal Medicine, Division for Pulmonary Diseases, University Hospital Marburg, Marburg, Germany; 2Department of Respiratory Medicine, Hannover Medical School, Hannover, Germany; 3Department of Cellular Biology, University of Malaga, Malaga, Spain; 4Department of Medicine, Cambridge Institute for Medical Research, University of Cambridge, Cambridge, United Kingdom; 5Department of Biology and Biotechnology, Istituto Pasteur – Fondazione Cenci Bolognetti, Sapienza University of Rome, Rome, Italy; 6Department of Pneumology, West German Lung Clinic, Essen University Hospital, Essen, Germany. *These authors contributed equally to this work. Background: The major concept behind augmentation therapy with human α1-antitrypsin (AAT) is to raise the levels of AAT in patients with protease inhibitor phenotype ZZ (Glu342Lys)-inherited AAT deficiency and to protect lung tissues from proteolysis and progression of emphysema. Objective: To evaluate the short-term effects of augmentation therapy (Prolastin®) on plasma levels of AAT, C-reactive protein, and chemokines/cytokines. Materials and methods: Serum and exhaled breath condensate were collected from individuals with protease inhibitor phenotype ZZ AAT deficiency-related emphysema (n = 12) on the first, third, and seventh day after the infusion of intravenous Prolastin. Concentrations of total and polymeric AAT, interleukin-8 (IL-8), monocyte chemotactic protein-1, IL-6, tumor necrosis factor-α, vascular endothelial growth factor, and C-reactive protein were determined. Blood neutrophils and primary epithelial cells were also exposed to Prolastin (1 mg/mL). Results: There were significant fluctuations in serum (but not in exhaled breath condensate) levels of AAT polymers, IL-8, monocyte chemotactic protein-1, IL
ZZ GEFF-2-MATXS, Coupled Neutron-Gamma Fusion Neutronics Library in MATXS Format
International Nuclear Information System (INIS)
1 - Description of program or function: This library for fusion neutronics calculations, to be used in conjunction with the TRANSX code, is the MATXS-format version of ZZ-GEFF-2-GENDF, from which it has been derived by means of the MATXSR module of NJOY. It has a 175-neutron, 42-photon VITAMIN-J group structure with the standard weighting function: Maxwellian (at the temperature to which the material is referenced) + 1/E + fission spectrum + 1/E + fusion peak + 1/E. It includes 93 materials from 1-H-1 to Bi-209 - almost all from EFF-2 basic data, but Ag-107, Ag-109, natural Cd, the 6 Hf isotopes and the 4 W isotopes have been taken from JEF-2.2 - at 3 temperatures and 6 dilution cross-section values; 10 thermal groups are provided below 3 eV. Neutron cross sections and diffusion matrices, photon and gas production, kerma and DPA are given. The library includes H in H2O, metallic Be and graphite, for which an accurate treatment with S(alpha, beta) matrices has been provided for the thermal scattering region, while all remaining materials have been treated with the 'free gas' approximation. Photon interaction data are taken from GEPDL - which is included in ZZ-GEFF-2-GENDF - a 42-group photon interaction P8 library based upon EPDL-90; in order to couple this P8 photon section (GEPDL) with the P5 neutron part (GEFF-2), the latter has been padded with zeroes for the P6 to P8 contributions. 2 - Method of solution: NJOY has been used to process both GEFF-2 and GEPDL; since the processing activities lasted over a couple of years, the less important materials were processed with version 91.13 of the code and version 91.38 was used to process all the remaining materials, including the whole of GEPDL. Some very important modifications have been introduced in the code as far as the kerma computation is concerned, in order to solve problems due to physical inconsistencies of the data and non-standard formats of some materials, e.g. the lump reaction MT 10
Jin, Shengye; Tamura, Masayuki
2013-10-01
The Monte Carlo Ray Tracing (MCRT) method is a versatile application for simulating the radiative transfer regime of the Sun-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration, such as a forest area. Owing to its robustness to changes in the complexity of the 3-D scene, the MCRT method is also employed to simulate the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling within vegetation, one basic step is setting up the canopy scene. A 3-D scanning application can represent canopy structure as accurately as possible, but it is time-consuming. Botanical growth functions can model single-tree growth but cannot express the interaction among trees. The L-system is also a functionally controlled tree-growth simulation model, but it requires a large amount of computing memory; additionally, it only models the current tree pattern rather than tree growth while the radiative transfer regime is being simulated. Therefore, it is much more constructive to use regular solids such as ellipsoids, cones and cylinders to represent a single canopy. Considering the allelopathy phenomenon seen in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle-packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the tree count (N) of the 3-D scene are declared first, similarly to a random open-forest image. Accordingly, we randomly generate each canopy radius (rc), then set the circle centre coordinates on the XY-plane while the circle-packing algorithm keeps the circles separate from each other. To model an individual tree, we employ Ishikawa's regressive tree-growth model to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
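The stochastic circle-packing step described above can be sketched with simple rejection sampling. Drawing radii uniformly in a fixed range is an assumption standing in for radii derived from the declared canopy coverage, and the extent and counts below are hypothetical.

```python
import math
import random

def pack_circles(n_trees, extent, r_min, r_max, max_tries=10000, seed=0):
    """Random non-overlapping circle packing for a canopy scene.

    Candidate crowns are placed uniformly inside a square plot of side
    `extent`; a candidate is rejected if it overlaps any accepted crown,
    mimicking the allelopathy ('each tree repels others in its domain')
    assumption.  Returns a list of (x, y, r) tuples.
    """
    rng = random.Random(seed)
    circles = []
    tries = 0
    while len(circles) < n_trees and tries < max_tries:
        tries += 1
        r = rng.uniform(r_min, r_max)
        x = rng.uniform(r, extent - r)   # keep the crown inside the plot
        y = rng.uniform(r, extent - r)
        if all(math.hypot(x - cx, y - cy) >= r + cr
               for cx, cy, cr in circles):
            circles.append((x, y, r))
    return circles
```

Each accepted circle then serves as the footprint on which a regular solid (ellipsoid, cone or cylinder) is erected using the tree-growth parameters.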