WorldWideScience

Sample records for carlo methods zz

  1. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...
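
    As an illustrative aside (not taken from the book itself): the Buffon's needle problem mentioned above is the classic introductory Monte Carlo experiment. A minimal Python sketch, assuming a needle no longer than the line spacing, estimates pi from the fraction of random drops that cross a line.

        import math
        import random

        def buffon_pi(n_drops=1_000_000, needle_len=1.0, line_gap=2.0):
            """Estimate pi via Buffon's needle; assumes needle_len <= line_gap."""
            hits = 0
            for _ in range(n_drops):
                # distance from the needle centre to the nearest line, and the needle angle
                x = random.uniform(0.0, line_gap / 2.0)
                theta = random.uniform(0.0, math.pi / 2.0)  # pi is used only to sample the angle
                if x <= (needle_len / 2.0) * math.sin(theta):
                    hits += 1
            # P(crossing) = 2 * needle_len / (pi * line_gap), so invert for pi
            return 2.0 * needle_len * n_drops / (line_gap * hits)

        print(buffon_pi())   # prints an estimate close to 3.14159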

  2. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
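
    To make the importance-sampling idea above concrete, here is a minimal, hedged Python sketch (not from the paper): it estimates a posterior expectation for a hypothetical one-dimensional unnormalised target using a wide Gaussian proposal and self-normalised weights.

        import numpy as np

        rng = np.random.default_rng(0)

        def unnorm_target(x):
            # hypothetical unnormalised 1-D posterior density (always positive)
            return np.exp(-0.5 * (x - 1.0) ** 2) * (1.0 + 0.1 * np.cos(3.0 * x))

        def importance_estimate(f, n=100_000):
            x = rng.normal(loc=0.0, scale=2.0, size=n)          # draws from the proposal q
            q = np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2.0 * np.pi))
            w = unnorm_target(x) / q                            # unnormalised importance weights
            return np.sum(w * f(x)) / np.sum(w)                 # self-normalised estimator

        print(importance_estimate(lambda x: x))                 # estimate of the posterior mean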

  3. Monte Carlo methods

    CERN Document Server

    Kalos, Melvin H

    2008-01-01

    This introduction to Monte Carlo methods seeks to identify and study the unifying elements that underlie their effective application. Initial chapters provide a short treatment of the probability and statistics needed as background, enabling those without experience in Monte Carlo techniques to apply these ideas to their research.The book focuses on two basic themes: The first is the importance of random walks as they occur both in natural stochastic systems and in their relationship to integral and differential equations. The second theme is that of variance reduction in general and importance sampling in particular as a technique for efficient use of the methods. Random walks are introduced with an elementary example in which the modeling of radiation transport arises directly from a schematic probabilistic description of the interaction of radiation with matter. Building on this example, the relationship between random walks and integral equations is outlined

  4. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of the various generation methods. To account for the weight function involved in the Monte Carlo integral, the Metropolis method is used. From the results of the experiment, one can see that there is no regular pattern in the numbers generated, showing that the generators are reasonably good, while the experimental results follow the expected statistical distribution law. Further, some applications of the Monte Carlo method in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that good agreement is obtained for the models considered
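
    As a hedged illustration of the Metropolis weighting mentioned above (not the code used in the paper), the following Python sketch samples from an unnormalised weight function with a symmetric random-walk proposal and checks the sample moments.

        import math
        import random

        def metropolis(weight, x0=0.0, step=1.0, n_samples=50_000):
            """Generic 1-D Metropolis sampler for an unnormalised weight function."""
            x, chain = x0, []
            for _ in range(n_samples):
                y = x + random.uniform(-step, step)              # symmetric proposal
                if random.random() < min(1.0, weight(y) / weight(x)):
                    x = y                                        # accept the move
                chain.append(x)                                  # keep the (possibly repeated) state
            return chain

        samples = metropolis(lambda x: math.exp(-0.5 * x * x))   # target: standard normal shape
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        print(mean, var)                                         # should be close to 0 and 1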

  5. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati

  6. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  7. Metropolis Methods for Quantum Monte Carlo Simulations

    OpenAIRE

    Ceperley, D. M.

    2003-01-01

    Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo ({\it i.e.} diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...

  8. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo Methods. 2. The Markov Chain Case. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at Cornell University. His research interests include mathematical analysis, probability theory and its application and statistics. He enjoys writing for Resonance. His spare time is ...

  9. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo Methods. 3. Statistical Concepts. K B Athreya, Mohan Delampady and T Krishnan. K B Athreya is a Professor at Cornell University. His research interests include mathematical analysis, probability theory and its application and statistics. He enjoys writing for Resonance.

  10. Shell model Monte Carlo methods

    International Nuclear Information System (INIS)

    Koonin, S.E.

    1996-01-01

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  11. Monte Carlo Methods in ICF

    Science.gov (United States)

    Zimmerman, George B.

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  12. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, G.B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics

  13. Monte Carlo methods in ICF

    International Nuclear Information System (INIS)

    Zimmerman, George B.

    1997-01-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials

  14. Extending canonical Monte Carlo methods

    International Nuclear Information System (INIS)

    Velazquez, L; Curilef, S

    2010-01-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C_α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model

  15. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
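
    For orientation only (an illustrative sketch, not the report's own example): the transport mechanics such a "cook book" code handles can be reduced to a 1-D slab with hypothetical one-group cross sections, flight lengths sampled from the total cross section, and collisions resolved as absorption or isotropic scattering.

        import math
        import random

        SIGMA_T, SIGMA_S, THICKNESS = 1.0, 0.6, 3.0              # hypothetical one-group data

        def transmitted_fraction(histories=100_000):
            transmitted = 0
            for _ in range(histories):
                x, mu = 0.0, 1.0                                  # position and direction cosine
                while True:
                    x += mu * (-math.log(1.0 - random.random()) / SIGMA_T)  # sample the flight length
                    if x >= THICKNESS:
                        transmitted += 1                          # escaped through the back face
                        break
                    if x < 0.0:
                        break                                     # leaked back out the front face
                    if random.random() >= SIGMA_S / SIGMA_T:
                        break                                     # collision was an absorption
                    mu = 2.0 * random.random() - 1.0              # isotropic scattering in 1-D
            return transmitted / histories

        print(transmitted_fraction())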

  16. Handbook of Monte Carlo methods

    National Research Council Canada - National Science Library

    Kroese, Dirk P; Taimre, Thomas; Botev, Zdravko I

    2011-01-01

    ... in rapid succession, the staggering number of related techniques, ideas, concepts and algorithms makes it difficult to maintain an overall picture of the Monte Carlo approach. This book attempts to encapsulate the emerging dynamics of this field of study"--

  17. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  18. Hybrid Monte Carlo methods in computational finance

    NARCIS (Netherlands)

    Leitao Rodriguez, A.

    2017-01-01

    Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the

  19. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    time Technical Consultant to Systat Software Asia-Pacific (P) Ltd., in Bangalore, where the technical work for the development of the statistical software Systat takes place. His research interests have been in statistical pattern recognition and biostatistics. Keywords. Markov chain, Monte Carlo sampling, Markov chain Monte ...

  20. Markov Chain Monte Carlo Methods

    Indian Academy of Sciences (India)

    ter of the 20th century, due to rapid developments in computing technology ... early part of this development saw a host of Monte ... These iterative Monte Carlo procedures typically generate a random sequence with the Markov property such that the Markov chain is ergodic with a limiting distribution coinciding with the ...

  1. Monte Carlo methods for particle transport

    CERN Document Server

    Haghighat, Alireza

    2015-01-01

    The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...

  2. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  3. Bayesian statistics and Monte Carlo methods

    Science.gov (United States)

    Koch, K. R.

    2018-03-01

    The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to the point estimation by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes' theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived where the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves computing a considerable number of derivatives, and errors of the linearization are avoided. The Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.

  4. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.

  5. Adiabatic optimization versus diffusion Monte Carlo methods

    Science.gov (United States)

    Jarret, Michael; Jordan, Stephen P.; Lackey, Brad

    2016-10-01

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.

  6. A keff calculation method by Monte Carlo

    International Nuclear Information System (INIS)

    Shen, H; Wang, K.

    2008-01-01

    The effective multiplication factor (k_eff) is defined as the ratio between the numbers of neutrons in successive generations, a definition adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, which should exclude the effect of neutron reactions such as (n, 2n) and (n, 3n). This article discusses the Monte Carlo method for k_eff calculation based on the second definition. A new code has been developed and the results are presented. (author)
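
    To illustrate the second definition in the abstract (a toy sketch, not the new code described there): in an infinite homogeneous one-group medium there is no leakage, so k reduces to the fission production rate divided by the absorption rate. The cross sections below are hypothetical.

        import random

        SIGMA_F, SIGMA_C, NU = 0.08, 0.10, 2.43           # hypothetical fission, capture, mean yield
        SIGMA_A = SIGMA_F + SIGMA_C                       # total absorption cross section

        def k_inf_estimate(histories=200_000):
            produced = 0
            for _ in range(histories):
                if random.random() < SIGMA_F / SIGMA_A:   # the absorption ends in fission
                    # sample an integer number of secondaries with mean NU
                    produced += int(NU) + (random.random() < (NU - int(NU)))
            # k = production rate / (leakage rate + absorption rate); leakage is zero here
            return produced / histories

        print(k_inf_estimate())                           # analytic value: NU*SIGMA_F/SIGMA_A ≈ 1.08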

  7. Monte Carlo method in neutron activation analysis

    International Nuclear Information System (INIS)

    Majerle, M.; Krasa, A.; Svoboda, O.; Wagner, V.; Adam, J.; Peetermans, S.; Slama, O.; Stegajlov, V.I.; Tsupko-Sitnikov, V.M.

    2009-01-01

    Neutron activation detectors are a useful technique for neutron flux measurements in spallation experiments. The study of the usefulness and the accuracy of this method in such experiments was performed with the help of the Monte Carlo codes MCNPX and FLUKA

  8. Monte Carlo method for random surfaces

    International Nuclear Information System (INIS)

    Berg, B.

    1985-01-01

    Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)

  9. Introduction to the Monte Carlo methods

    International Nuclear Information System (INIS)

    Uzhinskij, V.V.

    1993-01-01

    Codes illustrating the use of Monte Carlo methods in high energy physics, such as the inverse transformation method, the rejection method, particle propagation through the nucleus, particle interaction with the nucleus, etc., are presented. A set of useful algorithms of random number generators is given (the binomial distribution, the Poisson distribution, β-distribution, γ-distribution and normal distribution). 5 figs., 1 tab
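
    As a hedged sketch of two of the sampling techniques named above (not the codes from the report): the inverse transformation method applied to an exponential free-path distribution, and a generic acceptance-rejection sampler for a bounded density on [0, 1].

        import math
        import random

        def sample_free_path(sigma_total):
            """Inverse transform: p(s) = sigma*exp(-sigma*s) has CDF F(s) = 1 - exp(-sigma*s)."""
            u = random.random()
            return -math.log(1.0 - u) / sigma_total

        def sample_rejection(pdf, pdf_max):
            """Acceptance-rejection for a density on [0, 1] bounded above by pdf_max."""
            while True:
                x, u = random.random(), random.random()
                if u * pdf_max <= pdf(x):
                    return x

        paths = [sample_free_path(2.0) for _ in range(100_000)]
        print(sum(paths) / len(paths))                        # mean free path, close to 1/2.0
        print(sample_rejection(lambda x: 3.0 * x * x, 3.0))   # one draw from p(x) = 3x^2 on [0, 1]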

  10. Monte Carlo methods for shield design calculations

    International Nuclear Information System (INIS)

    Grimstone, M.J.

    1974-01-01

    A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)

  11. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensional ...

  12. Monte Carlo methods for preference learning

    DEFF Research Database (Denmark)

    Viappiani, P.

    2012-01-01

    Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system’s belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.

  13. Markov Chain Monte Carlo Methods-Simple Monte Carlo

    Indian Academy of Sciences (India)


  14. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  15. by means of FLUKA Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Ermis Elif Ebru

    2015-01-01

    Full Text Available Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close accordance with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.

  16. Monte Carlo method in radiation transport problems

    International Nuclear Information System (INIS)

    Dejonghe, G.; Nimal, J.C.; Vergnaud, T.

    1986-11-01

    In neutral radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented. The necessity of biasing the play is demonstrated, and a biased simulation is carried out. Finally, current developments (such as the rewriting of programs) are presented, motivated by several factors, two of which are the advent of vector computing and photon and neutron transport in void media [fr

  17. Monte carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  18. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  19. Methods for Monte Carlo simulations of biomacromolecules.

    Science.gov (United States)

    Vitalis, Andreas; Pappu, Rohit V

    2009-01-01

    The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.

  20. Generalized hybrid Monte Carlo - CMFD methods for fission source convergence

    International Nuclear Information System (INIS)

    Wolters, Emily R.; Larsen, Edward W.; Martin, William R.

    2011-01-01

    In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)

  1. Self-test Monte Carlo method

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1996-01-01

    The Self-Test Monte Carlo (STMC) method resolves the main problems in using algebraic pseudo-random numbers for Monte Carlo (MC) calculations: that they can interfere with MC algorithms and lead to erroneous results, and that such an error often cannot be detected without a known exact solution. STMC is based on the good randomness of about 10^10 bits available from physical noise or transcendental numbers like π = 3.14.... Various bit modifiers are available to get more bits for applications that demand more than 10^10 random bits, such as lattice quantum chromodynamics (QCD). These modifiers are designed so that a) each of them gives a bit sequence comparable in randomness to the original if used separately from each other, and b) their mutual interference when used jointly in a single MC calculation is adjustable. Intermediate data of the MC calculation itself are used to quantitatively test and adjust the mutual interference of the modifiers with respect to the MC algorithm. STMC is free of systematic error and gives reliable statistical error. Also it can be easily implemented on vector and parallel supercomputers. (author)

  2. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  3. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

    The Monte Carlo method is a statistical method based on random sampling, which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random interaction process between photons and the forest canopy was designed using the Monte Carlo method.

  4. Use of Monte Carlo Methods in brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Granero Cabanero, D.

    2015-07-01

    The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding calculations, or obtaining dose distributions around applicators. (Author)

  5. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas. Examples include a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo Methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...

  6. Monte Carlo methods for pricing financial options

    Indian Academy of Sciences (India)

    Monte Carlo methods have increasingly become a popular computational tool to price complex financial options, especially when the underlying space of assets has a large dimensionality, as the performance of other numerical methods typically suffers from the 'curse of dimensionality'. However, even Monte-Carlo ...

  7. Approximating Sievert Integrals to Monte Carlo Methods to calculate ...

    African Journals Online (AJOL)

    Radiation dose rates along the transverse axis of a miniature 192Ir source were calculated using the Sievert Integral (considered simple and inaccurate), and by the sophisticated and accurate Monte Carlo method. Using data obtained by the Monte Carlo method as a benchmark and applying least squares regression curve ...

  8. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  9. Monte Carlo method for solving a parabolic problem

    Directory of Open Access Journals (Sweden)

    Tian Yi

    2016-01-01

    Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. This method combines the use of the Crank-Nicolson method and the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method to obtain a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve the linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.
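
    The final step described above, solving the sparse linear system by Monte Carlo, can be sketched with a Neumann-Ulam random-walk estimator. This is an illustrative scheme under the assumption that the system is written as x = Hx + b with the spectral radius of H below one; it is not necessarily the estimator used in the paper.

        import numpy as np

        rng = np.random.default_rng(1)

        def mc_solve_component(H, b, i, walks=20_000, stop_prob=0.3):
            """Estimate component i of the solution of x = H x + b by random walks."""
            n = len(b)
            total = 0.0
            for _ in range(walks):
                state, weight, score = i, 1.0, b[i]
                while rng.random() >= stop_prob:                 # continue the walk
                    nxt = rng.integers(n)                        # uniform transition
                    weight *= H[state, nxt] / ((1.0 / n) * (1.0 - stop_prob))
                    state = nxt
                    score += weight * b[state]                   # accumulate the Neumann series
                total += score
            return total / walks

        H = np.array([[0.1, 0.2], [0.3, 0.1]])                   # small illustrative system
        b = np.array([1.0, 2.0])
        print([mc_solve_component(H, b, i) for i in range(2)])
        print(np.linalg.solve(np.eye(2) - H, b))                 # deterministic reference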

  10. Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules

    CERN Document Server

    Lester, William A; Reynolds, PJ

    1994-01-01

    This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n

  11. Multiple histogram method and static Monte Carlo sampling

    NARCIS (Netherlands)

    Inda, M.A.; Frenkel, D.

    2004-01-01

    We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From

  12. Quantum Monte Carlo method for attractive Coulomb potentials

    NARCIS (Netherlands)

    Kole, J.S.; Raedt, H. De

    2001-01-01

    Starting from an exact lower bound on the imaginary-time propagator, we present a path-integral quantum Monte Carlo method that can handle singular attractive potentials. We illustrate the basic ideas of this quantum Monte Carlo algorithm by simulating the ground state of hydrogen and helium.

  13. On the Markov Chain Monte Carlo (MCMC) method

    Indian Academy of Sciences (India)

    Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.

  14. ZZ production at high transverse momenta beyond NLO QCD

    CERN Document Server

    Campanario, Francisco; Sapeta, Sebastian

    2015-01-01

    We study the production of the four-lepton final state $l^+ l^- l^+ l^-$, predominantly produced by a pair of electroweak Z bosons, ZZ. Using the LoopSim method, we merge NLO QCD results for ZZ and ZZ+jet and obtain approximate NNLO predictions for ZZ production. The exact gluon-fusion loop-squared contribution to the ZZ process is also included. On top of that, we add to our merged sample the gluon-fusion ZZ+jet contributions from the gluon-gluon channel, which is formally of N^3LO and provides approximate results at NLO for the gluon-fusion mechanism. The predictions are obtained with the VBFNLO package and include the leptonic decays of the Z bosons with all off-shell and spin-correlation effects, as well as virtual photon contributions. We compare our predictions with existing results for the total inclusive cross section at NNLO and find a very good agreement. Then, we present results for differential distributions for two experimental setups, one used in searches for anomalous triple gauge boson couplin...

  15. Monte Carlo methods and applications in nuclear physics

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs

  16. Monte Carlo methods and applications in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon-interaction, charge and magnetic form factors, the coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  17. Molecular dynamics algorithms for quantum Monte Carlo methods

    Science.gov (United States)

    Miura, Shinichi

    2009-11-01

    In the present Letter, novel molecular dynamics methods compatible with corresponding quantum Monte Carlo methods are developed. One is a variational molecular dynamics method that is a molecular dynamics analog of the variational quantum Monte Carlo method. The other is a variational path integral molecular dynamics method, which is based on the path integral molecular dynamics method for finite temperature systems by Tuckerman et al. [M. Tuckerman, B.J. Berne, G.J. Martyna, M.L. Klein, J. Chem. Phys. 99 (1993) 2796]. These methods are applied to model systems including liquid helium-4 and are demonstrated to work satisfactorily for the tested ground state calculations.

  18. Stochastic simulation and Monte-Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)

    2011-07-01

    This book presents some numerical probabilistic simulation methods together with their convergence rates. It combines mathematical precision and numerical developments, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After some recalls of the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. Then, they develop a chapter on non-asymptotic estimates of Monte-Carlo method errors. This chapter recalls the central limit theorem and makes its convergence rate precise. It introduces the Log-Sobolev and concentration inequalities, which have been studied extensively in recent years. This chapter ends with some variance reduction techniques. In order to demonstrate in a rigorous way the simulation results for stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essential basics of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence rate results of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)

  19. Simulation and the Monte Carlo Method, Student Solutions Manual

    CERN Document Server

    Rubinstein, Reuven Y

    2012-01-01

    This accessible new edition explores the major topics in Monte Carlo simulation Simulation and the Monte Carlo Method, Second Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over twenty-five years ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, suc

  20. Application of biasing techniques to the contributon Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Dubi, A.; Gerstl, S.A.W.

    1980-01-01

    Recently, a new Monte Carlo method called the contributon Monte Carlo method was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfying results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.

  1. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising for determining accurate level densities in a large energy range for nuclear reaction calculations

  2. Monte Carlo methods for the self-avoiding walk

    International Nuclear Information System (INIS)

    Janse van Rensburg, E J

    2009-01-01

    The numerical simulation of self-avoiding walks remains a significant component in the study of random objects on lattices. In this review, I give a comprehensive overview of the current state of Monte Carlo simulations of models of self-avoiding walks. The self-avoiding walk model is revisited, and the motivations for Monte Carlo simulations of this model are discussed. Efficient sampling of self-avoiding walks remains an elusive objective, but significant progress has been made over the last three decades. The model still poses challenging numerical questions, however, and I review specific Monte Carlo methods for improved sampling, including general Monte Carlo techniques such as Metropolis sampling, umbrella sampling and multiple Markov chain sampling. In addition, specific static and dynamic algorithms for walks are presented, and I give an overview of recent innovations in this field, including algorithms such as flatPERM, flatGARM and flatGAS. (topical review)
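
    A hedged, minimal example of the simplest (and least efficient) sampling strategy for this model, not one of the advanced algorithms reviewed above: grow square-lattice walks step by step and discard any walk that intersects itself, which yields uniform samples over n-step self-avoiding walks.

        import random

        def sample_saw(n_steps):
            """Simple sampling: retry until an n-step walk with no self-intersection is grown."""
            while True:
                walk = [(0, 0)]
                visited = {(0, 0)}
                for _ in range(n_steps):
                    x, y = walk[-1]
                    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                    nxt = (x + dx, y + dy)
                    if nxt in visited:
                        break                                    # self-intersection: start over
                    walk.append(nxt)
                    visited.add(nxt)
                else:
                    return walk                                  # completed without overlap

        walks = [sample_saw(10) for _ in range(2_000)]
        r2 = sum(w[-1][0] ** 2 + w[-1][1] ** 2 for w in walks) / len(walks)
        print(r2)                                                # mean squared end-to-end distance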

  3. A Multivariate Time Series Method for Monte Carlo Reactor Analysis

    International Nuclear Information System (INIS)

    Taro Ueki

    2008-01-01

    A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed Coarse Mesh Projection Method (CMPM) and can be implemented using the coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three space-dimensional modeling of the initial core of a pressurized water reactor

  4. Introduction to Monte Carlo methods: sampling techniques and random numbers

    International Nuclear Information System (INIS)

    Bhati, Sharda; Patni, H.K.

    2009-01-01

    The Monte Carlo method describes a very broad area of science, in which many processes, physical systems and phenomena that are statistical in nature and are difficult to solve analytically are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions. As the number of individual events (called histories) is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. Assuming that the behavior of the physical system can be described by probability density functions, the Monte Carlo simulation can proceed by sampling from these probability density functions, which necessitates a fast and effective way to generate random numbers uniformly distributed on the interval (0,1). Particles are generated within the source region and are transported by sampling from probability density functions through the scattering media until they are absorbed or escape the volume of interest. The outcomes of these random samplings or trials must be accumulated or tallied in an appropriate manner to produce the desired result, but the essential characteristic of Monte Carlo is the use of random sampling techniques to arrive at a solution of the physical problem. The major components of Monte Carlo methods for random sampling for a given event are described in the paper

  5. Monte Carlo methods for pricing financial options

    Indian Academy of Sciences (India)


    Now we discuss the antithetic variate method. Like the control variate method, it is very easy to implement in fairly general situations. It also combines well with other variance reduction techniques. The essential idea behind this method is straightforward. Consider two continuous random variables X1 and X2. The variance ...
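
    A hedged Python sketch of the antithetic variate idea in the option-pricing setting of this record (the contract parameters are hypothetical, not from the article): each standard normal draw Z is paired with -Z, and the paired estimator keeps the same mean while typically reducing the standard error.

        import numpy as np

        rng = np.random.default_rng(2)
        S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0        # hypothetical European call

        def call_payoff(z):
            """Discounted Black-Scholes payoff as a function of the driving normal draw."""
            ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
            return np.exp(-r * T) * np.maximum(ST - K, 0.0)

        def price(n=200_000, antithetic=True):
            z = rng.standard_normal(n)
            if antithetic:
                samples = 0.5 * (call_payoff(z) + call_payoff(-z))   # antithetic pairs
            else:
                samples = call_payoff(z)
            return samples.mean(), samples.std(ddof=1) / np.sqrt(n)  # estimate and standard error

        print(price(antithetic=False))
        print(price(antithetic=True))                            # same mean, smaller standard error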

  6. Application of Monte Carlo Method to Steady State Heat Conduction ...

    African Journals Online (AJOL)

    The Monte Carlo method was used in modelling steady state heat conduction problems. The method uses the fixed and the floating random walks to determine the temperature in the domain of definition of the heat conduction equation, at a single point directly. A heat conduction problem with an irregularly shaped geometry ...
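
    A hedged sketch of the fixed-random-walk idea on a regular square grid (the boundary condition and grid are hypothetical, not the irregular geometry of the article): the temperature at an interior node is the expected boundary temperature at the point where a symmetric walk first leaves the domain.

        import random

        N = 20                                        # grid nodes 0..N; interior is 1..N-1

        def boundary_temp(i, j):
            # hypothetical Dirichlet data: the top edge is held at 100, the rest at 0
            return 100.0 if j == N else 0.0

        def temperature(i0, j0, walks=50_000):
            total = 0.0
            for _ in range(walks):
                i, j = i0, j0
                while 0 < i < N and 0 < j < N:        # walk until a boundary node is reached
                    di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                    i, j = i + di, j + dj
                total += boundary_temp(i, j)
            return total / walks

        print(temperature(N // 2, N // 2))            # centre value; the exact answer is 25 by symmetry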

  7. A Monte Carlo adapted finite element method for dislocation ...

    Indian Academy of Sciences (India)

    P Zakian

    2017-10-10

    ... simulations are proposed. Various comparisons are examined to illustrate the capability of both methods for random simulation of faults. Keywords: Monte Carlo simulation; stochastic modeling; split node technique; finite element method; earthquake fault dislocation.

  8. Monte Carlo methods of PageRank computation

    NARCIS (Netherlands)

    Litvak, Nelli

    2004-01-01

    We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink ...
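
    A hedged sketch of one common Monte Carlo PageRank estimator in this spirit (the tiny graph is hypothetical and the estimator is illustrative, not necessarily the exact one analyzed in the paper): walks started from every page follow outgoing links and stop with probability 1 - c at each step, and PageRank is estimated from where the walks end.

        import random

        graph = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}           # hypothetical link structure

        def mc_pagerank(graph, c=0.85, runs_per_page=2_000):
            ends = {p: 0 for p in graph}
            total = 0
            for start in graph:
                for _ in range(runs_per_page):
                    page = start
                    while random.random() < c:                   # continue following links
                        links = graph[page]
                        if not links:                            # dangling page: stop the walk
                            break
                        page = random.choice(links)
                    ends[page] += 1
                    total += 1
            return {p: ends[p] / total for p in graph}           # normalised end-point frequencies

        print(mc_pagerank(graph))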

  9. Monte Carlo Form-Finding Method for Tensegrity Structures

    Science.gov (United States)

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  10. Continuous energy Monte Carlo method based lattice homogenization

    International Nuclear Information System (INIS)

    Li Mancang; Yao Dong; Wang Kan

    2014-01-01

    Based on the Monte Carlo code MCNP, the continuous energy Monte Carlo multi-group constants generation code MCMC has been developed. The track length scheme has been used as the foundation of cross section generation. The scattering matrix and Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants. The super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have been implemented into MCMC. The numerical results showed that generating the homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy in continuum, thus providing more accurate parameters. Besides, the same code and data library can be used for a wide range of applications due to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)

  11. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-30

    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  12. Improved Monte Carlo methods for fermions

    International Nuclear Information System (INIS)

    DeGrand, T.A.; Dreitlein, J.; Toms, D.J.

    1983-01-01

    We describe an improved version of the Kuti-Von Neumann-Ulam algorithm useful for fermion contributions in lattice field theories. This is done by sampling the Neumann series for the propagator, which may be thought of as a sum over a set of weighted paths between two points on the lattice. Rather than selecting paths by a locally determined random walk, we average over sets of paths globally preselected for their importance in evaluating the few needed elements of the inverse. We also describe a method for the calculation of ratios of fermion determinants which is considerably less time consuming than the conventional one. (orig.)
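
    For reference, the unimproved idea the authors start from, sampling the Neumann series with a locally determined random walk, can be sketched as follows; the matrix, the stopping probability and the collision estimator are illustrative assumptions and this is not the globally preselected-path scheme of the record.

        import random

        def neumann_inverse_element(H, i, j, n_walks=200_000, p_stop=0.3, seed=11):
            """Von Neumann-Ulam estimate of one element of (I - H)^{-1}.

            (I - H)^{-1} = I + H + H^2 + ... is a sum over weighted paths; each
            walk samples one path.  At every step the walk stops with probability
            p_stop, otherwise it jumps to column b with probability proportional
            to |H[a][b]| and multiplies its weight by H[a][b] divided by that
            transition probability.  The accumulated weight is scored whenever
            the walk sits on state j (collision estimator).  The estimator
            converges when the Neumann series does.
            """
            rng = random.Random(seed)
            n = len(H)
            row_sums = [sum(abs(v) for v in row) for row in H]
            total = 0.0
            for _ in range(n_walks):
                a, w, score = i, 1.0, 0.0
                while True:
                    if a == j:
                        score += w
                    if row_sums[a] == 0.0 or rng.random() < p_stop:
                        break
                    # sample the next column proportional to |H[a][b]|
                    b = rng.choices(range(n), weights=[abs(v) for v in H[a]])[0]
                    prob = (1.0 - p_stop) * abs(H[a][b]) / row_sums[a]
                    w *= H[a][b] / prob
                    a = b
                total += score
            return total / n_walks

        # Small test matrix with spectral radius < 1; compare with the exact inverse.
        H = [[0.1, 0.2, 0.0],
             [0.0, 0.1, 0.3],
             [0.2, 0.0, 0.1]]
        print(neumann_inverse_element(H, 0, 2))   # element (0, 2) of (I - H)^{-1}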

  13. A separable shadow Hamiltonian hybrid Monte Carlo method

    Science.gov (United States)

    Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.

    2009-11-01

    Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
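
    For orientation, a minimal sketch of the baseline HMC move that SHMC and S2HMC refine (a leapfrog trajectory followed by a Metropolis accept/reject step) is given below for a one-dimensional Gaussian target; it is not the shadow-Hamiltonian variant, and all tuning values are illustrative.

        import math
        import random

        def hmc_gaussian(n_samples=5_000, step=0.2, n_leapfrog=20, seed=5):
            """Plain hybrid (Hamiltonian) Monte Carlo for a 1-D standard normal target."""
            rng = random.Random(seed)
            U = lambda q: 0.5 * q * q            # potential energy = -log target density
            grad_U = lambda q: q
            q, samples, accepted = 0.0, [], 0
            for _ in range(n_samples):
                p = rng.gauss(0.0, 1.0)                     # fresh momentum each move
                q_new, p_new = q, p
                p_new -= 0.5 * step * grad_U(q_new)         # leapfrog: initial half kick
                for _ in range(n_leapfrog):
                    q_new += step * p_new                   # drift
                    p_new -= step * grad_U(q_new)           # kick
                p_new += 0.5 * step * grad_U(q_new)         # undo the extra half kick
                dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p * p)
                if dH <= 0 or rng.random() < math.exp(-dH): # Metropolis accept/reject
                    q = q_new
                    accepted += 1
                samples.append(q)
            return samples, accepted / n_samples

        samples, rate = hmc_gaussian()
        print(rate, sum(samples) / len(samples))            # acceptance rate, sample mean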

  14. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous energy Monte Carlo codes. (author)

  15. Copper precipitation in iron: a comparison between metropolis Monte Carlo and lattice kinetic Monte Carlo methods

    CERN Document Server

    Khrushcheva, O; Malerba, L; Becquart, C S; Domain, C; Hou, M

    2003-01-01

    Several variants are possible in the suite of programs forming multiscale predictive tools to estimate the yield strength increase caused by irradiation in RPV steels. For instance, at the atomic scale, both the Metropolis and the lattice kinetic Monte Carlo methods (MMC and LKMC respectively) allow predicting copper precipitation under irradiation conditions. Since these methods are based on different physical models, the present contribution discusses their consistency on the basis of a realistic case study. A cascade debris in iron containing 0.2% of copper was modelled by molecular dynamics with the DYMOKA code, which is part of the REVE suite. We use this debris as input for both the MMC and the LKMC simulations. Thermal motion and lattice relaxation can be avoided in the MMC, making the model closer to the LKMC (LMMC method). The predictions and the complementarity of the three methods for modelling the same phenomenon are then discussed.

  16. Monte Carlo Methods in ICF (LIRPP Vol. 13)

    Science.gov (United States)

    Zimmerman, George B.

    2016-10-01

    Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved by about 50% in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.

  17. Difficult Sudoku Puzzles Created by Replica Exchange Monte Carlo Method

    OpenAIRE

    Watanabe, Hiroshi

    2013-01-01

    An algorithm to create difficult Sudoku puzzles is proposed. An Ising spin-glass like Hamiltonian describing the difficulty of puzzles is defined, and difficult puzzles are created by minimizing the energy of the Hamiltonian. We adopt the replica exchange Monte Carlo method with simultaneous temperature adjustments to search lower energy states efficiently, and we succeed in creating a puzzle which is, to the best of our knowledge, the hardest ever created under our definition. (Added on Mar. 11, the ...

  18. Comparison of deterministic and Monte Carlo methods in shielding design.

    Science.gov (United States)

    Oliveira, A D; Oliveira, C

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor that is extrapolated from high to low energies or with unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capability of both Monte Carlo and deterministic methods in a day-by-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions.

  19. Comparison of deterministic and Monte Carlo methods in shielding design

    International Nuclear Information System (INIS)

    Oliveira, A. D.; Oliveira, C.

    2005-01-01

    In shielding calculations, deterministic methods have some advantages and also some disadvantages relative to other kinds of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions, while the disadvantages are related to the often-used build-up factor that is extrapolated from high to low energies or with unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how well some deterministic methods perform in calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. The commercial software MicroShield 5.05 has been used as the deterministic code, while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shields have been defined, allowing comparison between the capability of both Monte Carlo and deterministic methods in a day-by-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions. (authors)

  20. Advanced Markov chain Monte Carlo methods learning from past samples

    CERN Document Server

    Liang, Faming; Carrol, Raymond J

    2010-01-01

    This book provides comprehensive coverage of simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms that address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. This book includes the multicanonical algorithm, dynamic weighting, dynamically weight

  1. Monte Carlo methods for medical physics a practical introduction

    CERN Document Server

    Schuemann, Jan; Paganetti, Harald

    2018-01-01

    The Monte Carlo (MC) method, established as the gold standard to predict results of physical processes, is now fast becoming a routine clinical tool for applications that range from quality control to treatment verification. This book provides a basic understanding of the fundamental principles and limitations of the MC method in the interpretation and validation of results for various scenarios. It shows how user-friendly and speed optimized MC codes can achieve online image processing or dose calculations in a clinical setting. It introduces this essential method with emphasis on applications in hardware design and testing, radiological imaging, radiation therapy, and radiobiology.

  2. Diagrammatic Monte Carlo method as applied to the polaron problem

    International Nuclear Information System (INIS)

    Mishchenko, A.S.

    2005-01-01

    Exact numerical solution methods for the problem of a few particles interacting with one another and with several bosonic excitation modes are presented. The diagrammatic Monte Carlo method allows the exact calculation of the Green function, and the stochastic optimization technique provides an analytic continuation. Results unobtainable by conventional methods are discussed, including the properties of excited states in the self-trapping phenomenon, the optical spectra of polarons in all coupling regimes, the validity analysis of the exciton models, and the photoemission spectra of a phonon-coupled hole

  3. Comparison of Monte Carlo method and deterministic method for neutron transport calculation

    International Nuclear Information System (INIS)

    Mori, Takamasa; Nakagawa, Masayuki

    1987-01-01

    The report outlines major features of the Monte Carlo method by citing various applications of the method and techniques used for Monte Carlo codes. Major areas of its application include analysis of measurements on fast critical assemblies, nuclear fusion reactor neutronics analysis, criticality safety analysis, evaluation by the VIM code, and calculation for shielding. Major techniques used for Monte Carlo codes include the random walk method, geometry specification (combinatorial geometry, 1st, 2nd and 4th degree surfaces and lattice geometry), nuclear data representation, estimation methods (track length, collision, analog (absorption), surface crossing, point), and variance reduction (Russian roulette, splitting, exponential transform, importance sampling, corrected sampling). Major features of the Monte Carlo method are as follows: 1) neutron source distributions and systems of complex geometry can be simulated accurately, 2) physical quantities such as the neutron flux in a region, on a surface or at a point can be evaluated, and 3) calculation requires less time. (Nogami, K.)

  4. Monte Carlo methods in electron transport problems. Pt. 1

    International Nuclear Information System (INIS)

    Cleri, F.

    1989-01-01

    The condensed-history Monte Carlo method for charged particle transport is reviewed and discussed, starting from a general form of the Boltzmann equation (Part I). The physics of the electronic interactions, together with some pedagogic examples, will be introduced in Part II. The lecture is directed to potential users of the method, for whom it can be a useful introduction to the subject matter, and aims to establish the basis of the work on the computer code RECORD, which is at present in a development stage

  5. Multiparameter estimation along quantum trajectories with sequential Monte Carlo methods

    Science.gov (United States)

    Ralph, Jason F.; Maskell, Simon; Jacobs, Kurt

    2017-11-01

    This paper proposes an efficient method for the simultaneous estimation of the state of a quantum system and the classical parameters that govern its evolution. This hybrid approach benefits from efficient numerical methods for the integration of stochastic master equations for the quantum system, and efficient parameter estimation methods from classical signal processing. The classical techniques use sequential Monte Carlo (SMC) methods, which aim to optimize the selection of points within the parameter space, conditioned by the measurement data obtained. We illustrate these methods using a specific example, an SMC sampler applied to a nonlinear system, the Duffing oscillator, where the evolution of the quantum state of the oscillator and three Hamiltonian parameters are estimated simultaneously.
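
    The generic SMC recipe of propagating, weighting and resampling particles can be sketched as follows for a classical toy state-space model; this is not the quantum-trajectory sampler of the paper, and the model and all parameters are illustrative assumptions.

        import math
        import random

        def bootstrap_particle_filter(observations, n_particles=500,
                                      a=0.9, q_std=1.0, r_std=1.0, seed=2):
            """Minimal bootstrap (sequential Monte Carlo) filter for the toy model
                x_t = a * x_{t-1} + process noise,   y_t = x_t + measurement noise.

            At each time step the particles are propagated through the dynamics,
            weighted by the measurement likelihood, and then resampled.
            """
            rng = random.Random(seed)
            particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
            estimates = []
            for y in observations:
                # propagate through the (assumed) dynamics
                particles = [a * x + rng.gauss(0.0, q_std) for x in particles]
                # weight by the likelihood of the observation
                weights = [math.exp(-0.5 * ((y - x) / r_std) ** 2) for x in particles]
                total = sum(weights)
                if total == 0.0:                      # guard against weight underflow
                    weights = [1.0 / n_particles] * n_particles
                else:
                    weights = [w / total for w in weights]
                estimates.append(sum(w * x for w, x in zip(weights, particles)))
                # multinomial resampling
                particles = rng.choices(particles, weights=weights, k=n_particles)
            return estimates

        # Synthetic data from the same model, then filter it.
        rng = random.Random(0)
        truth, ys = [0.0], []
        for _ in range(50):
            truth.append(0.9 * truth[-1] + rng.gauss(0.0, 1.0))
            ys.append(truth[-1] + rng.gauss(0.0, 1.0))
        print(bootstrap_particle_filter(ys)[-5:])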

  6. Condensed history Monte Carlo methods for photon transport problems

    International Nuclear Information System (INIS)

    Bhan, Katherine; Spanier, Jerome

    2007-01-01

    We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods - called Condensed History (CH) methods - have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models - one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes - can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions

  7. Uniform distribution and quasi-Monte Carlo methods discrepancy, integration and applications

    CERN Document Server

    Kritzer, Peter; Pillichshammer, Friedrich; Winterhof, Arne

    2014-01-01

    The survey articles in this book focus on number theoretic point constructions, uniform distribution theory, and quasi-Monte Carlo methods. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules enjoy increasing popularity, with many fruitful applications in mathematical practice, as for example in finance, computer graphics, and biology.
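
    A minimal sketch contrasting a plain Monte Carlo estimate with a quasi-Monte Carlo estimate built from a two-dimensional Halton point set; the integrand and sample sizes are illustrative assumptions.

        import random

        def halton(index, base):
            """Van der Corput radical inverse of `index` in the given base."""
            f, result = 1.0, 0.0
            while index > 0:
                f /= base
                result += f * (index % base)
                index //= base
            return result

        def integrate(f, n):
            """Compare plain Monte Carlo with a 2-D Halton quasi-Monte Carlo rule."""
            rng = random.Random(0)
            mc = sum(f(rng.random(), rng.random()) for _ in range(n)) / n
            qmc = sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n
            return mc, qmc

        # Integral of x*y over the unit square (exact value 0.25).
        print(integrate(lambda x, y: x * y, 10_000))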

  8. DCA opacity results computed by Monte Carlo Methods

    International Nuclear Information System (INIS)

    Wilson, B.G.; Albritton, J.R.; Liberman, D.A.

    1991-01-01

    The authors present the Monte Carlo methods employed by the code ENRICO for obtaining detailed configuration accounting calculations of LTE opacity. Sample calculations of some mid-Z elements, all at experimentally accessible conditions of 60 eV temperature and one one-hundredth of solid density, are presented to illustrate the phenomenon of transition array breakup. The prediction of systematic trends in transition array breakup is proposed as a means of testing the ion stage balance produced by codes. The importance of including detailed level transitions in arrays, at least on the level of the UTA approximation, is presented, and a novel approximation for explicitly incorporating the individual transitions between configurations is discussed

  9. Entropic sampling in the path integral Monte Carlo method

    International Nuclear Information System (INIS)

    Vorontsov-Velyaminov, P N; Lyubartsev, A P

    2003-01-01

    We have extended the entropic sampling Monte Carlo method to the case of path integral representation of a quantum system. A two-dimensional density of states is introduced into path integral form of the quantum canonical partition function. Entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced
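
    The Wang-Landau entropic sampling step itself can be sketched on a small classical system, as below for a 2-D Ising lattice; the record's extension to the path-integral representation is not reproduced here, and the lattice size, the stage schedule and the omission of the histogram flatness test are simplifying assumptions.

        import math
        import random

        def wang_landau_ising(L=4, n_stages=12, sweeps_per_stage=5_000, seed=9):
            """Entropic (Wang-Landau) estimate of ln g(E) for a small 2-D Ising
            model with periodic boundaries.

            A random walk in energy space accepts a spin flip with probability
            min(1, g(E_old)/g(E_new)) and multiplies g at the current energy by a
            modification factor, which is reduced each stage.
            """
            rng = random.Random(seed)
            N = L * L
            spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

            def local_field(i, j):
                return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                        spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

            E = -sum(spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
                     for i in range(L) for j in range(L))
            ln_g = {}                       # running estimate of ln g(E)
            ln_f = 1.0                      # ln of the modification factor
            for _ in range(n_stages):
                for _ in range(sweeps_per_stage * N):
                    i, j = rng.randrange(L), rng.randrange(L)
                    dE = 2 * spins[i][j] * local_field(i, j)
                    E_new = E + dE
                    if rng.random() < math.exp(ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0)):
                        spins[i][j] = -spins[i][j]
                        E = E_new
                    ln_g[E] = ln_g.get(E, 0.0) + ln_f    # update current energy level
                ln_f *= 0.5                              # tighten the modification factor
            return ln_g

        ln_g = wang_landau_ising()
        base = min(ln_g.values())
        print({E: round(v - base, 2) for E, v in sorted(ln_g.items())})   # relative ln g(E)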

  10. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    The present paper considers the sequential decision optimization problem. This is an important class of decision problems in engineering. Important examples include decision problems on the quality control of manufactured products and engineering components, timing of the implementation of climate change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which... To demonstrate the use and advantages, two numerical examples are provided on the quality control of manufactured products.
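
    The least squares Monte Carlo idea is easiest to see in its classic optimal-stopping form (Longstaff-Schwartz valuation of a Bermudan put), sketched below; this is not the Anders and Nishijima scheme, and the market model and all parameter values are illustrative assumptions.

        import numpy as np

        def longstaff_schwartz_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                                   n_steps=50, n_paths=20_000, seed=4):
            """Least squares Monte Carlo (Longstaff-Schwartz) value of a Bermudan put.

            Simulate paths forward, then work backwards: a regression of the
            discounted future cash flows on the current asset price approximates
            the continuation value and hence the exercise decision.
            """
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            disc = np.exp(-r * dt)
            # simulate geometric Brownian motion paths
            z = rng.standard_normal((n_paths, n_steps))
            increments = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
            S = S0 * np.exp(np.cumsum(increments, axis=1))
            S = np.hstack([np.full((n_paths, 1), S0), S])

            cashflow = np.maximum(K - S[:, -1], 0.0)       # value if held to expiry
            for t in range(n_steps - 1, 0, -1):
                cashflow *= disc
                itm = K - S[:, t] > 0.0                     # regress on in-the-money paths
                if not np.any(itm):
                    continue
                coeffs = np.polyfit(S[itm, t], cashflow[itm], 2)
                continuation = np.polyval(coeffs, S[itm, t])
                exercise = np.maximum(K - S[itm, t], 0.0)
                cashflow[itm] = np.where(exercise > continuation, exercise, cashflow[itm])
            return disc * np.mean(cashflow)

        print(longstaff_schwartz_put())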

  11. 'Odontologic dosimetric card' experiments and simulations using Monte Carlo methods

    International Nuclear Information System (INIS)

    Menezes, C.J.M.; Lima, R. de A.; Peixoto, J.E.; Vieira, J.W.

    2008-01-01

    The techniques for data processing, combined with the development of fast and more powerful computers, make the Monte Carlo method one of the most widely used tools in radiation transport simulation. For applications in diagnostic radiology, this method generally uses anthropomorphic phantoms to evaluate the absorbed dose to patients during exposure. In this paper, Monte Carlo techniques were used to simulate a testing device designed for intra-oral X-ray equipment performance evaluation, called the Odontologic Dosimetric Card (CDO, from 'Cartao Dosimetrico Odontologico' in Portuguese), for different thermoluminescent detectors. Two computational exposure models were used, RXD/EGS4 and CDO/EGS4. In the first model, the simulation results are compared with experimental data obtained under similar conditions. The second model presents the same characteristics as the testing device studied (CDO). For the irradiations, the X-ray spectra were generated with the IPEM Report 78 spectrum processor. The attenuated spectrum was obtained for IEC 61267 qualities and various additional filters for a Pantak 320 industrial X-ray unit. The results obtained for the study of the copper filters used in the determination of the kVp were compared with experimental data, validating the model proposed for the characterization of the CDO. The results show that the CDO will be utilized in quality assurance programs in order to guarantee that the equipment fulfills the requirements of Norm SVS No. 453/98 MS (Brazil), 'Directives of Radiation Protection in Medical and Dental Radiodiagnostics'. We conclude that EGS4 is a suitable Monte Carlo code to simulate thermoluminescent dosimeters and the experimental procedures employed in the routine of a quality control laboratory in diagnostic radiology. (author)

  12. Research on Monte Carlo simulation method of industry CT system

    International Nuclear Information System (INIS)

    Li Junli; Zeng Zhi; Qui Rui; Wu Zhen; Li Chunyan

    2010-01-01

    There is a series of radiation physics problems in the design and production of industrial CT systems (ICTS), including limit quality index analysis and the effects of scattering, detector efficiency and crosstalk on the system. Usually the Monte Carlo (MC) method is applied to resolve these problems. Most of them involve very low probabilities, so direct simulation is very difficult, and existing MC methods and programs cannot meet the needs. To resolve these difficulties, particle flux point auto-importance sampling (PFPAIS) is proposed on the basis of auto-importance sampling. Then, on the basis of PFPAIS, a dedicated ICTS simulation method, MCCT, is realized. Compared with existing MC methods, MCCT is shown to simulate the ICTS more exactly and effectively. Furthermore, the effects of various disturbances of the ICTS are simulated and analyzed by MCCT. To some extent, MCCT can guide the research of radiation physics problems in ICTS. (author)

  13. Diagrammatic Monte Carlo method as applied to the polaron problems

    International Nuclear Information System (INIS)

    Mishchenko, Andrei S

    2005-01-01

    Numerical methods are presented that yield exact solutions to the problem of a few particles interacting with one another and with several bosonic excitation branches. The diagrammatic Monte Carlo method allows the exact calculation of the Matsubara Green function, and the stochastic optimization technique provides an approximation-free analytic continuation. In this review, results unobtainable by conventional methods are discussed, including the properties of excited states in the self-trapping phenomenon, the optical spectra of polarons in all coupling regimes, the validity range analysis of the Frenkel and Wannier approximations relevant to the exciton, and the peculiarities of photoemission spectra of a lattice-coupled hole in a Mott insulator. (reviews of topical problems)

  14. Hybrid Monte-Carlo method for ICF calculations

    Energy Technology Data Exchange (ETDEWEB)

    Clouet, J.F.; Samba, G. [CEA Bruyeres-le-Chatel, 91 (France)

    2003-07-01

    (...) conduction and ray-tracing for laser description. Radiation transport is usually solved by a Monte-Carlo method. In coupling the diffusion approximation and the transport description, the difficult part comes from the need for an implicit discretization of the emission-absorption terms: this problem was solved by using the symbolic Monte-Carlo method. This means that at each step of the simulation a matrix is computed by a Monte-Carlo method which accounts for the radiation energy exchange between the cells. Because of the time step limitation by hydrodynamic motion, energy exchange is limited to a small number of cells and the matrix remains sparse. This matrix is added to the usual diffusion matrix for thermal and radiative conduction: finally we arrive at a non-symmetric linear system to invert. A generalized Marshak condition describes the coupling between transport and diffusion. In this paper we present the principles of the method and a numerical simulation of an ICF hohlraum. We shall illustrate the benefits of the method by comparing the results with full implicit Monte-Carlo calculations. In particular we shall show how the spectral cut-off evolves during the propagation of the radiative front in the gold wall. Several issues are still to be addressed (robust algorithm for spectral cut-off calculation, coupling with ALE capabilities): we shall briefly discuss these problems. (authors)

  15. Hybrid Monte-Carlo method for ICF calculations

    International Nuclear Information System (INIS)

    Clouet, J.F.; Samba, G.

    2003-01-01

    (...) conduction and ray-tracing for laser description. Radiation transport is usually solved by a Monte-Carlo method. In coupling the diffusion approximation and the transport description, the difficult part comes from the need for an implicit discretization of the emission-absorption terms: this problem was solved by using the symbolic Monte-Carlo method. This means that at each step of the simulation a matrix is computed by a Monte-Carlo method which accounts for the radiation energy exchange between the cells. Because of the time step limitation by hydrodynamic motion, energy exchange is limited to a small number of cells and the matrix remains sparse. This matrix is added to the usual diffusion matrix for thermal and radiative conduction: finally we arrive at a non-symmetric linear system to invert. A generalized Marshak condition describes the coupling between transport and diffusion. In this paper we present the principles of the method and a numerical simulation of an ICF hohlraum. We shall illustrate the benefits of the method by comparing the results with full implicit Monte-Carlo calculations. In particular we shall show how the spectral cut-off evolves during the propagation of the radiative front in the gold wall. Several issues are still to be addressed (robust algorithm for spectral cut-off calculation, coupling with ALE capabilities): we shall briefly discuss these problems. (authors)

  16. Periods of ZZ Ceti variables

    International Nuclear Information System (INIS)

    Cox, A.N.; Hodson, S.W.; Starrfield, S.G.

    1979-01-01

    White dwarf pulsators (ZZ Ceti variables) occur in the extension of the radial pulsation envelope ionization instability strip to the observed luminosities of 3 x 10^-3 L_sun, according to van Horn. Investigations were underway to see if the driving mechanisms of hydrogen and helium ionization can cause radial pulsations as they do for the Cepheids, the RR Lyrae variables, and the delta Scuti variables. Masses used in this study are 0.60 and 0.75 M_sun for T_e between 10,000 K and 14,000 K, the observed range in T_e. Helium-rich surface compositions like Y = 0.78, Z = 0.02 as well as Y = 0.28, Z = 0.02 were used in spite of observations showing only hydrogen lines in the spectrum. The deep layers are pure carbon, and several transition compositions are included. The models show radial pulsation instabilities for many overtone modes at periods between about 0.3 and 3 seconds. The driving mechanism is mostly helium ionization at 40,000 and 150,000 K. The blue edge at about 14,000 K is probably due to the driving region becoming too shallow, and the red edge at 10,000 K is due to so much convection in the pulsation driving region that no radiative luminosity is available for modulation by the γ and κ effects. It is speculated that the very long observed periods (100 to 1000 s) of ZZ Ceti variables are not due to nonradial pulsations, but are possibly aliases due to data undersampling. 4 references

  17. Radiative heat transfer by the Monte Carlo method

    CERN Document Server

    Hartnett †, James P; Cho, Young I; Greene, George A; Taniguchi, Hiroshi; Yang, Wen-Jei; Kudo, Kazuhiko

    1995-01-01

    This book presents the basic principles and applications of radiative heat transfer used in energy, space, and geo-environmental engineering, and can serve as a reference book for engineers and scientists in research and development. A PC disk containing software for numerical analyses by the Monte Carlo method is included to provide hands-on practice in analyzing actual radiative heat transfer problems. Advances in Heat Transfer is designed to fill the information gap between regularly scheduled journals and university-level textbooks by providing in-depth review articles over a broader scope than journals or texts usually allow. Key features: offers solution methods for the integro-differential formulation to help avoid difficulties; includes a computer disk for numerical analyses by PC; discusses energy absorption by gas and scattering effects by particles; treats non-gray radiative gases; provides example problems for direct applications in energy, space, and geo-environmental engineering.

  18. Multi-pass Monte Carlo simulation method in nuclear transmutations.

    Science.gov (United States)

    Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M

    2016-12-01

    Monte Carlo methods, in their direct brute-force simulation incarnation, bring realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, which are determined by the geometry of the system, the components and the microscopic interaction cross-sections. However, the relative weights of the components of the system change along with the steps of the simulation. A natural measure would be adjusting the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10^25 or 10^26 members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of order 1/10^25. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10^28 steps in order to have some significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities, leading to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the wide-ranging subject of Monte Carlo simulations vis-à-vis nuclear reactors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Optimal mesh hierarchies in Multilevel Monte Carlo methods

    KAUST Repository

    Von Schwerin, Erik

    2016-01-08

    I will discuss how to choose optimal mesh hierarchies in Multilevel Monte Carlo (MLMC) simulations when computing the expected value of a quantity of interest depending on the solution of, for example, an Ito stochastic differential equation or a partial differential equation with stochastic data. I will consider numerical schemes based on uniform discretization methods with general approximation orders and computational costs. I will compare optimized geometric and non-geometric hierarchies and discuss how enforcing some domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. I will also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. This talk presents joint work with N. Collier, A.-L. Haji-Ali, F. Nobile, and R. Tempone.
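
    A minimal sketch of an MLMC estimator on a fixed geometric hierarchy (M = 2) for the expected terminal value of a geometric Brownian motion; the per-level sample sizes are fixed for brevity rather than optimised as discussed in the talk, and all parameter values are illustrative assumptions.

        import math
        import random

        def mlmc_gbm(n_levels=5, n0=20_000, mu=0.05, sigma=0.2, X0=1.0, T=1.0, seed=6):
            """Multilevel Monte Carlo estimate of E[X_T] for dX = mu*X dt + sigma*X dW
            using Euler-Maruyama on a geometric hierarchy of meshes (M = 2).

            Level l uses 2^l time steps; coarse and fine paths on each level share
            the same Brownian increments so the level corrections have small variance.
            """
            rng = random.Random(seed)

            def level_estimator(l, n_samples):
                nf = 2 ** l                    # fine steps on level l
                hf = T / nf
                total = 0.0
                for _ in range(n_samples):
                    xf, xc = X0, X0
                    dw_coarse = 0.0
                    for step in range(nf):
                        dw = rng.gauss(0.0, math.sqrt(hf))
                        xf += mu * xf * hf + sigma * xf * dw
                        dw_coarse += dw
                        if l > 0 and step % 2 == 1:      # coarse path uses 2*hf steps
                            xc += mu * xc * 2 * hf + sigma * xc * dw_coarse
                            dw_coarse = 0.0
                    total += xf - (xc if l > 0 else 0.0)
                return total / n_samples

            estimate = 0.0
            for l in range(n_levels):
                n_l = max(n0 // 4 ** l, 100)   # fewer samples on expensive fine levels
                estimate += level_estimator(l, n_l)
            return estimate

        # E[X_T] = X0 * exp(mu * T) for geometric Brownian motion.
        print(mlmc_gbm(), math.exp(0.05))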

  20. Interacting multiagent systems kinetic equations and Monte Carlo methods

    CERN Document Server

    Pareschi, Lorenzo

    2014-01-01

    The description of emerging collective phenomena and self-organization in systems composed of large numbers of individuals has gained increasing interest from various research communities in biology, ecology, robotics and control theory, as well as sociology and economics. Applied mathematics is concerned with the construction, analysis and interpretation of mathematical models that can shed light on significant problems of the natural sciences as well as our daily lives. To this set of problems belongs the description of the collective behaviours of complex systems composed by a large enough number of individuals. Examples of such systems are interacting agents in a financial market, potential voters during political elections, or groups of animals with a tendency to flock or herd. Among other possible approaches, this book provides a step-by-step introduction to the mathematical modelling based on a mesoscopic description and the construction of efficient simulation algorithms by Monte Carlo methods. The ar...

  1. Quantum Monte Carlo method for models of molecular nanodevices

    Science.gov (United States)

    Arrachea, Liliana; Rozenberg, Marcelo J.

    2005-07-01

    We introduce a quantum Monte Carlo technique to calculate exactly at finite temperatures the Green function of a fermionic quantum impurity coupled to a bosonic field. While the algorithm is general, we focus on the single impurity Anderson model coupled to a Holstein phonon as a schematic model for a molecular transistor. We compute the density of states at the impurity in a large range of parameters, to demonstrate the accuracy and efficiency of the method. We also obtain the conductance of the impurity model and analyze different regimes. The results show that even in the case when the effective attractive phonon interaction is larger than the Coulomb repulsion, a Kondo-like conductance behavior might be observed.

  2. Recursive Monte Carlo method for deep-penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.; Greenspan, E.

    1980-01-01

    The Recursive Monte Carlo (RMC) method developed for estimating importance function distributions in deep-penetration problems is described. Unique features of the method, including the ability to infer the importance function distribution pertaining to many detectors from, essentially, a single M.C. run and the ability to use the history tape created for a representative region to calculate the importance function in identical regions, are illustrated. The RMC method is applied to the solution of two realistic deep-penetration problems - a concrete shield problem and a Tokamak major penetration problem. It is found that the RMC method can provide the importance function distributions, required for importance sampling, with accuracy that is suitable for an efficient solution of the deep-penetration problems considered. The use of the RMC method improved, by one to three orders of magnitude, the solution efficiency of the two deep-penetration problems considered: a concrete shield problem and a Tokamak major penetration problem. 8 figures, 4 tables

  3. Simulation of Rossi-α method with analog Monte-Carlo method

    International Nuclear Information System (INIS)

    Lu Yuzhao; Xie Qilin; Song Lingli; Liu Hangang

    2012-01-01

    An analog Monte Carlo code for simulating the Rossi-α method, based on Geant4, was developed. The prompt neutron decay constants α of six metallic uranium configurations at Oak Ridge National Laboratory were calculated. α was also calculated by the Burst-Neutron method, and the result was consistent with that of the Rossi-α method. There is a difference between the results of the analog Monte Carlo simulation and the experiment, and the reason for the difference is the gaps between uranium layers. The influence of the gaps decreases as the sub-criticality deepens. The relative difference between the results of the analog Monte Carlo simulation and the experiment changes from 19% to 0.19%. (authors)

  4. Underwater Optical Wireless Channel Modeling Using Monte-Carlo Method

    Science.gov (United States)

    Saini, P. Sri; Prince, Shanthi

    2011-10-01

    At present, there is a lot of interest in the functioning of the marine environment. Unmanned or Autonomous Underwater Vehicles (UUVs or AUVs) are used in the exploration of underwater resources, pollution monitoring, disaster prevention, etc. Underwater, where radio waves do not propagate, acoustic communication is being used. However, underwater communication is moving towards optical communication, which has higher bandwidth than acoustic communication but comparatively shorter range. Underwater Optical Wireless Communication (OWC) is mainly affected by the absorption and scattering of the optical signal. In coastal waters, both inherent and apparent optical properties (IOPs and AOPs) are influenced by a wide array of physical, biological and chemical processes leading to optical variability. Scattering has two effects: attenuation of the signal and Inter-Symbol Interference (ISI) of the signal; the Inter-Symbol Interference is, however, ignored in the present paper. Therefore, in order to have an efficient underwater OWC link it is necessary to model the channel accurately. In this paper, the underwater optical channel is modeled using the Monte Carlo method. The Monte Carlo approach provides the most general and most flexible technique for numerically solving the equations of radiative transfer. The attenuation coefficient of the light signal is studied as a function of the absorption (a) and scattering (b) coefficients. It has been observed that for pure sea water and for low-chlorophyll conditions the blue wavelength is absorbed least, whereas for a chlorophyll-rich environment the red wavelength signal is absorbed less than the blue and green wavelengths.

  5. Crop canopy BRDF simulation and analysis using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.

    2006-01-01

    The authors design the random process of interaction between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of a crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and

  6. On the Markov Chain Monte Carlo (MCMC) method

    Indian Academy of Sciences (India)

    In this article, we give an introduction to Monte Carlo techniques with special emphasis on Markov Chain Monte Carlo (MCMC). Since the latter needs Markov chains with state space that is R or R^d, and most textbooks on Markov chains do not discuss such chains, we have included a short appendix that gives basic ...

  7. Markov chain Monte Carlo methods: an introductory example

    Science.gov (United States)

    Klauenberg, Katy; Elster, Clemens

    2016-02-01

    When the Guide to the Expression of Uncertainty in Measurement (GUM) and methods from its supplements are not applicable, the Bayesian approach may be a valid and welcome alternative. Evaluating the posterior distribution, estimates or uncertainties involved in Bayesian inferences often requires numerical methods to avoid high-dimensional integrations. Markov chain Monte Carlo (MCMC) sampling is such a method—powerful, flexible and widely applied. Here, a concise introduction is given, illustrated by a simple, typical example from metrology. The Metropolis-Hastings algorithm is the most basic and yet flexible MCMC method. Its underlying concepts are explained and the algorithm is given step by step. The few lines of software code required for its implementation invite interested readers to get started. Diagnostics to evaluate the performance and common algorithmic choices are illustrated to calibrate the Metropolis-Hastings algorithm for efficiency. Routine application of MCMC algorithms may be hindered currently by the difficulty to assess the convergence of MCMC output and thus to assure the validity of results. An example points to the importance of convergence and initiates discussion about advantages as well as areas of research. Available software tools are mentioned throughout.
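
    In the spirit of the few lines of code the authors mention, a generic random-walk Metropolis-Hastings sampler can be sketched as follows; the standard normal target and the tuning values are illustrative assumptions, not the metrology example of the paper.

        import math
        import random

        def metropolis_hastings(log_target, x0=0.0, n_samples=50_000,
                                proposal_std=1.0, seed=8):
            """Random-walk Metropolis-Hastings sampler (minimal illustration).

            A Gaussian random-walk proposal is symmetric, so the acceptance ratio
            reduces to the ratio of target densities (compared on the log scale
            to avoid underflow).
            """
            rng = random.Random(seed)
            x = x0
            samples = []
            for _ in range(n_samples):
                x_prop = x + rng.gauss(0.0, proposal_std)
                log_alpha = log_target(x_prop) - log_target(x)
                if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
                    x = x_prop
                samples.append(x)
            return samples

        # Example target: a standard normal (log density up to a constant).
        samples = metropolis_hastings(lambda x: -0.5 * x * x)
        burn_in = samples[5_000:]
        print(sum(burn_in) / len(burn_in))   # estimated mean, should be near 0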

  8. Statistical Analysis of a Class: Monte Carlo and Multiple Imputation Spreadsheet Methods for Estimation and Extrapolation

    Science.gov (United States)

    Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael

    2017-01-01

    The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…

  9. Monte Carlo methods for flux expansion solutions of transport problems

    International Nuclear Information System (INIS)

    Spanier, J.

    1999-01-01

    Adaptive Monte Carlo methods, based on the use of either correlated sampling or importance sampling, to obtain global solutions to certain transport problems have recently been described. The resulting learning algorithms are capable of achieving geometric convergence when applied to the estimation of a finite number of coefficients in a flux expansion representation of the global solution. However, because of the nonphysical nature of the random walk simulations needed to perform importance sampling, conventional transport estimators and source sampling techniques require modification to be used successfully in conjunction with such flux expansion methods. It is shown how these problems can be overcome. First, the traditional path length estimators in wide use in particle transport simulations are generalized to include rather general detector functions (which, in this application, are the individual basis functions chosen for the flux expansion). Second, it is shown how to sample from the signed probabilities that arise as source density functions in these applications, without destroying the zero variance property needed to ensure geometric convergence to zero error

  10. Seriation in paleontological data using markov chain Monte Carlo methods.

    Directory of Open Access Journals (Sweden)

    Kai Puolamäki

    2006-02-01

    Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.

  11. LISA data analysis using Markov chain Monte Carlo methods

    International Nuclear Information System (INIS)

    Cornish, Neil J.; Crowder, Jeff

    2005-01-01

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions

  12. Search for $ZW/ZZ \\to \\ell^+ \\ell^-$ + Jets Production in $p\\bar{p}$ Collisions at CDF

    Energy Technology Data Exchange (ETDEWEB)

    Ketchum, Wesley Robert [Univ. of Chicago, IL (United States)

    2012-12-01

    The Standard Model of particle physics describes weak interactions mediated by massive gauge bosons that interact with each other in well-defined ways. Observations of the production and decay of WW, WZ, and ZZ boson pairs are an opportunity to check that these self-interactions agree with the Standard Model predictions. Furthermore, final states that include quarks are very similar to the most prominent final state of Higgs bosons produced in association with a W or Z boson. Diboson production where WW is a significant component has been observed at the Tevatron collider in semi-hadronic decay modes. We present a search for ZW and ZZ production in a final state containing two charged leptons and two jets using 8.9 fb^-1 of data recorded with the CDF detector at the Tevatron. We select events by identifying those that contain two charged leptons, two hadronic jets, and low missing transverse energy (E_T). We increase our acceptance by using a wide suite of high-pT lepton triggers and by relaxing many lepton identification requirements. We develop a new method for calculating corrections to jet energies based on whether the originating parton was a quark or gluon to improve the agreement between data and the Monte Carlo simulations used to model our diboson signal and dominant backgrounds. We also make use of neural-network-based discriminants that are trained to pick out jets originating from b quarks and light-flavor quarks, thereby increasing our sensitivity to Z → b$\bar{b}$ and W/Z → q$\bar{q}'$ decays, respectively. The number of signal events is extracted through a simultaneous fit to the dijet mass spectrum in three channels: a heavy-flavor tagged channel, a light-flavor tagged channel, and an untagged channel. We measure σ_ZW/ZZ = 2.5 +2.0/-1.0 pb, which is consistent with the SM cross section of 5.1 pb. We establish an upper limit on the cross section of σ_ZW/ZZ < 6.1 pb

  13. Usefulness of the Monte Carlo method in reliability calculations

    International Nuclear Information System (INIS)

    Lanore, J.M.; Kalli, H.

    1977-01-01

    Three examples of reliability Monte Carlo programs developed in the LEP (Laboratory for Radiation Shielding Studies in the Nuclear Research Center at Saclay) are presented. First, an uncertainty analysis is given for a simplified spray system; a Monte Carlo program PATREC-MC has been written to solve the problem with the system components given in the fault tree representation. The second program MONARC 2 has been written to solve the problem of complex systems reliability by the Monte Carlo simulation, here again the system (a residual heat removal system) is in the fault tree representation. Third, the Monte Carlo program MONARC was used instead of the Markov diagram to solve the simulation problem of an electric power supply including two nets and two stand-by diesels

  14. Quantum Monte Carlo methods and lithium cluster properties

    Energy Technology Data Exchange (ETDEWEB)

    Owen, R.K.

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) (0.1981), 0.1895(9) (0.1874(4)), 0.1530(34) (0.1599(73)), 0.1664(37) (0.1724(110)), 0.1613(43) (0.1675(110)) Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) (0.0203(12)), 0.0188(10) (0.0220(21)), 0.0247(8) (0.0310(12)), 0.0253(8) (0.0351(8)) Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  15. Quantum Monte Carlo methods and lithium cluster properties

    Energy Technology Data Exchange (ETDEWEB)

    Owen, Richard Kent [Univ. of California, Berkeley, CA (United States)

    1990-12-01

    Properties of small lithium clusters with sizes ranging from n = 1 to 5 atoms were investigated using quantum Monte Carlo (QMC) methods. Cluster geometries were found from complete active space self consistent field (CASSCF) calculations. A detailed development of the QMC method leading to the variational QMC (V-QMC) and diffusion QMC (D-QMC) methods is shown. The many-body aspect of electron correlation is introduced into the QMC importance sampling electron-electron correlation functions by using density dependent parameters, which are shown to increase the amount of correlation energy obtained in V-QMC calculations. A detailed analysis of D-QMC time-step bias is made and is found to be at least linear with respect to the time-step. The D-QMC calculations determined the lithium cluster ionization potentials to be 0.1982(14) [0.1981], 0.1895(9) [0.1874(4)], 0.1530(34) [0.1599(73)], 0.1664(37) [0.1724(110)], 0.1613(43) [0.1675(110)] Hartrees for lithium clusters n = 1 through 5, respectively; in good agreement with experimental results shown in the brackets. Also, the binding energies per atom were computed to be 0.0177(8) [0.0203(12)], 0.0188(10) [0.0220(21)], 0.0247(8) [0.0310(12)], 0.0253(8) [0.0351(8)] Hartrees for lithium clusters n = 2 through 5, respectively. The lithium cluster one-electron density is shown to have charge concentrations corresponding to nonnuclear attractors. The overall shape of the electronic charge density also bears a remarkable similarity with the anisotropic harmonic oscillator model shape for the given number of valence electrons.

  16. Research of pulse formation neutron detector efficiency by Monte Carlo method

    International Nuclear Information System (INIS)

    Zhang Jianmin; Deng Li; Xie Zhongsheng; Yu Weidong; Zhong Zhenqian

    2001-01-01

    A study on the detection efficiency of the neutron detectors used in oil logging, carried out by the Monte Carlo method, is presented. The detection efficiency of the thermal and epithermal neutron detectors used in oil logging was calculated by the Monte Carlo method using the MCNP code. The calculation results were satisfactory

  17. Safety assessment of infrastructures using a new Bayesian Monte Carlo method

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Demirbilek, Z.

    2011-01-01

    A recently developed Bayesian Monte Carlo (BMC) method and its application to safety assessment of structures are described in this paper. We use a one-dimensional BMC method that was proposed in 2009 by Rajabalinejad in order to develop a weighted logical dependence between successive Monte Carlo

  18. Latent uncertainties of the precalculated track Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Renaud, Marc-André; Seuntjens, Jan [Medical Physics Unit, McGill University, Montreal, Quebec H3G 1A4 (Canada); Roberge, David [Département de radio-oncologie, Centre Hospitalier de l’Université de Montréal, Montreal, Quebec H2L 4M1 (Canada)

    2015-01-15

    Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of D_max. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of

  19. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  20. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    DEFF Research Database (Denmark)

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint...

  1. A Monte Carlo adapted finite element method for dislocation ...

    Indian Academy of Sciences (India)

    Mean and standard deviation values, as well as the probability density function, of ground surface responses due to the dislocation are computed. Based on analytical and numerical calculations of the dislocation, two approaches to Monte Carlo simulation are proposed. Various comparisons are examined to illustrate the capability ...

  2. A Monte Carlo adapted finite element method for dislocation ...

    Indian Academy of Sciences (India)

    Dislocation modelling of an earthquake fault is of great importance due to the fact that ground surface response may be predicted by the model. However, geological features of a fault cannot be measured exactly, and therefore these features and data involve uncertainties. This paper presents a Monte Carlo based random ...

  3. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  4. Gamma ray energy loss spectra simulation in NaI detectors with the Monte Carlo method

    International Nuclear Information System (INIS)

    Vieira, W.J.

    1982-01-01

    With the aim of studying and applying the Monte Carlo method, a computer code was developed to calculate the pulse height spectra and detector efficiencies for gamma rays incident on NaI(Tl) crystals. The basic detection processes in NaI(Tl) detectors are given, together with an outline of Monte Carlo methods and a general review of relevant published work. A detailed description of the application of Monte Carlo methods to gamma-ray detection in NaI(Tl) detectors is given. Comparisons are made with published calculated and experimental data. (Author) [pt

  5. Parallelism in continuous energy Monte Carlo method for neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Uenohara, Yuji (Nuclear Engineering Lab., Toshiba Corp. (Japan))

    1993-04-01

    The continuous energy Monte Carlo code VIM was implemented on a prototype highly parallel computer called PRODIGY developed by TOSHIBA Corporation. The author tried to distribute nuclear data to the processing elements (PEs) for the purpose of studying domain decomposition for the velocity space. Eigenvalue problems for a 1-D plate-cell infinite lattice mockup of ZPR-6-7 were examined. For the geometrical space, the PEs were assigned to domains corresponding to nuclear fuel bundles in a typical boiling water reactor. The author estimated the parallelization efficiencies for both a highly parallel and a massively parallel computer. Communication overhead arising from neutron transport was negligible owing to the heavy computing load of the Monte Carlo simulations. In the case of highly parallel computers, the communication overheads scarcely affected the parallelization efficiency. In the case of massively parallel computers, the control of PEs resulted in considerable communication overheads. (orig.)

  6. Parallelism in continuous energy Monte Carlo method for neutron transport

    International Nuclear Information System (INIS)

    Uenohara, Yuji

    1993-01-01

    The continuous energy Monte Carlo code VIM was implemented on a prototype highly parallel computer called PRODIGY developed by TOSHIBA Corporation. The author tried to distribute nuclear data to the processing elements (PEs) for the purpose of studying domain decomposition for the velocity space. Eigenvalue problems for a 1-D plate-cell infinite lattice mockup of ZPR-6-7 were examined. For the geometrical space, the PEs were assigned to domains corresponding to nuclear fuel bundles in a typical boiling water reactor. The author estimated the parallelization efficiencies for both a highly parallel and a massively parallel computer. Communication overhead arising from neutron transport was negligible owing to the heavy computing load of the Monte Carlo simulations. In the case of highly parallel computers, the communication overheads scarcely affected the parallelization efficiency. In the case of massively parallel computers, the control of PEs resulted in considerable communication overheads. (orig.)

  7. Strings, Projected Entangled Pair States, and variational Monte Carlo methods

    OpenAIRE

    Schuch, Norbert; Wolf, Michael M.; Verstraete, Frank; Cirac, J. Ignacio

    2007-01-01

    We introduce string-bond states, a class of states obtained by placing strings of operators on a lattice, which encompasses the relevant states in Quantum Information. For string-bond states, expectation values of local observables can be computed efficiently using Monte Carlo sampling, making them suitable for a variational algorithm which extends DMRG to higher-dimensional and irregular systems. Numerical results demonstrate the applicability of these states to the simulation of many-body s...

  8. Bayesian specification analysis and estimation of simultaneous equation models using Monte Carlo methods

    NARCIS (Netherlands)

    A. Zellner (Arnold); L. Bauwens (Luc); H.K. van Dijk (Herman)

    1988-01-01

    textabstractBayesian procedures for specification analysis or diagnostic checking of modeling assumptions for structural equations of econometric models are developed and applied using Monte Carlo numerical methods. Checks on the validity of identifying restrictions, exogeneity assumptions and other

  9. Review of Monte Carlo methods for particle multiplicity evaluation

    International Nuclear Information System (INIS)

    Armesto, Nestor

    2005-01-01

    I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided

  10. Frequency domain Monte Carlo simulation method for cross power spectral density driven by periodically pulsed spallation neutron source using complex-valued weight Monte Carlo

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro

    2014-01-01

    Highlights: • The cross power spectral density in ADS has correlated and uncorrelated components. • A frequency domain Monte Carlo method to calculate the uncorrelated one is developed. • The method solves the Fourier transformed transport equation. • The method uses complex-valued weights to solve the equation. • The new method reproduces well the CPSDs calculated with time domain MC method. - Abstract: In an accelerator driven system (ADS), pulsed spallation neutrons are injected at a constant frequency. The cross power spectral density (CPSD), which can be used for monitoring the subcriticality of the ADS, is composed of the correlated and uncorrelated components. The uncorrelated component is described by a series of the Dirac delta functions that occur at the integer multiples of the pulse repetition frequency. In the present paper, a Monte Carlo method to solve the Fourier transformed neutron transport equation with a periodically pulsed neutron source term has been developed to obtain the CPSD in ADSs. Since the Fourier transformed flux is a complex-valued quantity, the Monte Carlo method introduces complex-valued weights to solve the Fourier transformed equation. The Monte Carlo algorithm used in this paper is similar to the one that was developed by the author of this paper to calculate the neutron noise caused by cross section perturbations. The newly-developed Monte Carlo algorithm is benchmarked to the conventional time domain Monte Carlo simulation technique. The CPSDs are obtained both with the newly-developed frequency domain Monte Carlo method and the conventional time domain Monte Carlo method for a one-dimensional infinite slab. The CPSDs obtained with the frequency domain Monte Carlo method agree well with those with the time domain method. The higher order mode effects on the CPSD in an ADS with a periodically pulsed neutron source are discussed

  11. Estimativa da produtividade em soldagem pelo Método de Monte Carlo Productivity estimation in welding by Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    José Luiz Ferreira Martins

    2011-09-01

    Full Text Available The aim of this article is to analyze the feasibility of using the Monte Carlo method to estimate productivity in the welding of industrial carbon-steel piping based on small samples. The study was carried out through the analysis of a reference sample containing productivity data for 160 joints welded by the coated-electrode (SMAW) process at REDUC (the Duque de Caxias refinery), using the ControlTub 5.3 software. From these data, samples with 10, 15 and 20 elements, respectively, were drawn at random and Monte Carlo simulations were run. Comparing the results of the 160-element sample with the data generated by simulation shows that good results can be obtained when the Monte Carlo method is used to estimate welding productivity. In the Brazilian construction industry, on the other hand, the average productivity value is normally used as a productivity indicator and is based on historical data from other projects, collected and evaluated only after project completion, which is a limitation. This article presents a tool for evaluating execution in real time, allowing estimates to be adjusted and productivity to be monitored during the project. Likewise, in bidding, budgeting and schedule estimation, this technique allows estimates other than the commonly used average productivity to be adopted; as an alternative, three criteria are suggested: optimistic, average and pessimistic productivity.
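    A minimal sketch of the resampling idea behind this kind of study is given below: small samples are drawn at random from a reference productivity data set and the Monte Carlo distribution of the sample mean is used to derive pessimistic, average and optimistic estimates. The reference data, the number of trials and the percentile choices are illustrative assumptions, not the REDUC data or the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the 160-joint reference productivity sample
# (e.g. welded joints per man-hour); the real REDUC data are not reproduced here.
reference = rng.normal(loc=1.8, scale=0.4, size=160).clip(min=0.2)

def simulate_mean_productivity(sample_size, n_trials=10_000):
    """Monte Carlo estimate of the distribution of mean productivity
    obtained from small random samples of the reference data."""
    draws = rng.choice(reference, size=(n_trials, sample_size), replace=True)
    return draws.mean(axis=1)

for n in (10, 15, 20):
    means = simulate_mean_productivity(n)
    # Pessimistic / average / optimistic estimates as low, central and high percentiles
    p10, p50, p90 = np.percentile(means, [10, 50, 90])
    print(f"n={n:2d}: pessimistic={p10:.2f}  average={p50:.2f}  optimistic={p90:.2f}")
```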

  12. Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-01-01

    Full Text Available Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high computational cost, and some of the volume measurement methods based on 3D reconstruction have low accuracy. Another method for measuring the volume of objects uses the Monte Carlo method, which performs volume measurement using random points. The Monte Carlo method only requires information on whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.

  13. Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.

    Science.gov (United States)

    Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari

    2014-01-01

    Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes with a high computational cost, and some of the volume measurement methods based on 3D reconstruction have low accuracy. Another method for measuring the volume of objects uses the Monte Carlo method, which performs volume measurement using random points. The Monte Carlo method only requires information on whether random points fall inside or outside an object and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
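    The point-counting idea behind Monte Carlo volume measurement can be sketched as follows: random points are drawn uniformly in a bounding box and the fraction falling inside the object scales the box volume. Here an analytic ellipsoid stands in for the inside/outside test that the papers above derive from binary camera images, so the object, the box and the sample size are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def inside(points):
    """Indicator for a stand-in object: an ellipsoid with semi-axes 3, 2, 1 cm.
    In the actual method this test comes from the binary camera images."""
    x, y, z = points.T
    return (x / 3.0) ** 2 + (y / 2.0) ** 2 + z ** 2 <= 1.0

# Bounding box enclosing the object (cm)
lo = np.array([-3.0, -2.0, -1.0])
hi = np.array([3.0, 2.0, 1.0])
box_volume = np.prod(hi - lo)

n = 1_000_000
points = rng.uniform(lo, hi, size=(n, 3))
fraction = inside(points).mean()

volume = fraction * box_volume
# Standard error of a binomial proportion, propagated to the volume estimate
std_err = box_volume * np.sqrt(fraction * (1 - fraction) / n)

print(f"MC volume = {volume:.3f} +/- {std_err:.3f} cm^3")
print(f"exact     = {4/3 * np.pi * 3 * 2 * 1:.3f} cm^3")
```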

  14. The calculation of neutron flux using Monte Carlo method

    Science.gov (United States)

    Günay, Mehtap; Bardakçı, Hilal

    2017-09-01

    In this study, a hybrid reactor system was designed using 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 fluids, the ENDF/B-VII.0 evaluated nuclear data library and the 9Cr2WVTa structural material. The fluids were used in the liquid first wall, liquid second wall (blanket) and shield zones of a fusion-fission hybrid reactor system. The neutron flux was calculated as a function of mixture composition, radial position and energy spectrum in the designed hybrid reactor system for the selected fluids, library and structural material. Three-dimensional nucleonic calculations were performed using the Monte Carlo code MCNPX-2.7.0, its most recent version.

  15. Simulation of thermochromotographic processes by the Monte-Carlo method

    International Nuclear Information System (INIS)

    Zvara, I.

    1983-01-01

    A simplified microscopic model is proposed for the gas adsorption thermochromatography in open columns with laminar flow of the carrier gas. This model describes the downstream migration of a sample molecule as a rather small number of some effective random displacements and sequences of adsorption-desorption events that occur without changing the coordinates. The relevant probability density distributions are thereby derived. Based on this model, a computer program has been developed for simulating thermochromatographic zone profiles by employing the Monte-Carlo technique. The program is versatile in accounting for a wide range of experimental conditions and for treating various properties of the species to be separated. Some results of these simulations are given to demonstrate the influence of several parameters on the zone profile

  16. New Zero-Variance Methods for Monte Carlo Criticality and Source-Detector Problems

    International Nuclear Information System (INIS)

    Larsen, Edward W.; Densmore, Jeffery D.

    2001-01-01

    A zero-variance (ZV) Monte Carlo transport method is a theoretical construct that, if it could be implemented on a practical computer, would produce the exact result after any number of histories. Unfortunately, ZV methods are impractical; nevertheless, ZV methods are of practical interest because it is possible to approximate them in ways that yield efficient variance-reduction schemes. New ZV methods for Monte Carlo criticality and source-detector problems are described. Although these methods have the same requirements and disadvantages of earlier methods, their implementation is very different; thus, the concept of approximating them to obtain practical variance-reduction schemes opens new possibilities. The relationships between the new ZV schemes, conventional ZV schemes, and recently proposed variational variance-reduction techniques are discussed. The goal is the development of more efficient Monte Carlo variance-reduction methods

  17. Markov chain Monte Carlo methods for state-space models with point process observations.

    Science.gov (United States)

    Yuan, Ke; Girolami, Mark; Niranjan, Mahesan

    2012-06-01

    This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.

  18. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    International Nuclear Information System (INIS)

    Badal, A; Zbijewski, W; Bolch, W; Sechopoulos, I

    2014-01-01

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the virtual

  19. Non-analogue Monte Carlo method, application to neutron simulation; Methode de Monte Carlo non analogue, application a la simulation des neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Morillon, B.

    1996-12-31

    With most traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. With significant attenuation, however, analog simulation remains inefficient: it becomes necessary to use biasing techniques for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has long used such techniques successfully with different approximate adjoint solutions; these methods require the user to determine certain parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper describes the most important biasing techniques of the Monte Carlo code Tripoli; it then shows how to calculate the importance function for general geometries in multigroup cases. We present a completely automatic biasing technique in which the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function with the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic collisions and for multigroup problems with anisotropic collisions. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without splitting and Russian roulette, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.

  20. New simpler method of matching NLO corrections with parton shower Monte Carlo

    CERN Document Server

    Jadach, Stanislaw; Sapeta, Sebastian; Siodmok, Andrzej Konrad; Skrzypek, Maciej

    2016-01-01

    Next steps in the development of the KrkNLO method of implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC) factorization scheme, which was recently fully defined for the first time. Preliminary numerical results for the Higgs-boson production process are also presented.

  1. Study of thermodynamic and structural properties of a flexible homopolymer chain using advanced Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Hammou Amine Bouziane

    2013-03-01

    Full Text Available We study the thermodynamic and structural properties of a flexible homopolymer chain using both the multicanonical Monte Carlo method and the Wang-Landau method. In this work, we focus on the coil-globule transition. Starting from a completely random chain, we have obtained a globule for different chain sizes. The implementation of these advanced Monte Carlo methods allowed us to obtain a flat histogram in energy space and to calculate various thermodynamic quantities such as the density of states, the free energy and the specific heat. Structural quantities such as the radius of gyration were also calculated.
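    As a hedged illustration of the flat-histogram idea, the sketch below runs the Wang-Landau algorithm on a small 2D Ising lattice rather than on the homopolymer model of the paper: the logarithm of the density of states is built up by penalizing already-visited energies until the energy histogram is roughly flat. The lattice size, flatness criterion and stopping threshold are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4                              # small 2D Ising lattice as a stand-in system
spins = rng.choice([-1, 1], size=(L, L))

def local_energy(s, i, j):
    # Energy of the bonds touching site (i, j), periodic boundaries, J = 1
    return -s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
                       s[i, (j + 1) % L] + s[i, (j - 1) % L])

def total_energy(s):
    e = 0
    for i in range(L):
        for j in range(L):
            e += -s[i, j] * (s[(i + 1) % L, j] + s[i, (j + 1) % L])
    return e

E = total_energy(spins)
ln_g = {}                          # running estimate of ln(density of states)
hist = {}                          # energy histogram used for the flatness check
ln_f = 1.0                         # modification factor, reduced (here halved) over time

while ln_f > 1e-3:
    for _ in range(10_000):
        i, j = rng.integers(L, size=2)
        dE = -2 * local_energy(spins, i, j)   # energy change if spin (i, j) flips
        E_new = E + dE
        # Wang-Landau acceptance: favour energies visited rarely so far
        if rng.random() < np.exp(ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0)):
            spins[i, j] *= -1
            E = E_new
        ln_g[E] = ln_g.get(E, 0.0) + ln_f
        hist[E] = hist.get(E, 0) + 1
    counts = np.array(list(hist.values()))
    if counts.min() > 0.8 * counts.mean():    # crude flatness criterion
        hist = {}
        ln_f /= 2.0

print("visited energy levels:", sorted(ln_g))
```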

  2. Development of three-dimensional program based on Monte Carlo and discrete ordinates bidirectional coupling method

    International Nuclear Information System (INIS)

    Han Jingru; Chen Yixue; Yuan Longjun

    2013-01-01

    The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method treats the geometry exactly, but is time-consuming for deep-penetration problems. The discrete ordinates method has great computational efficiency, but it is costly in computer memory and suffers from the ray effect. Neither the discrete ordinates method nor the Monte Carlo method alone is sufficient for shielding calculations of large, complex nuclear facilities. In order to solve this problem, a Monte Carlo and discrete ordinates bidirectional coupling method was developed. The bidirectional coupling is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of the discrete ordinates calculation. The coupling method combines the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated with the coupling method. The results are compared with MCNP and TORT, and satisfactory agreement is obtained, proving the correctness of the program. (authors)

  3. Selection of Investment Projects by Monte Carlo Method in Risk Condition

    Directory of Open Access Journals (Sweden)

    M. E.

    2017-12-01

    Full Text Available The Monte Carlo method (also known as Monte Carlo simulation) was proposed by Nicholas Metropolis, S. Ulam and John von Neumann in the 1950s. The method can be widely applied to the analysis of investment projects thanks to advantages recognized both by practitioners and by the academic community. A balance model of a project with discounted financial flows has been implemented for Microsoft Excel and Google Docs spreadsheet solutions. The Monte Carlo method was applied to projects whose net present value (NPV) parameters have low and high correlation, in the environment of the MS Excel/Google Docs spreadsheets. A distinct gradation of risk was identified. The necessity of accounting for correlation effects and of using multivariate simulation during project selection has been demonstrated.
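    A minimal sketch of such an NPV simulation is shown below, assuming a hypothetical project with five annual cash flows whose uncertainties are modelled as correlated normal variables; the cash-flow figures, discount rate and correlation structure are illustrative assumptions rather than the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical project: initial outlay and 5 annual net cash flows (expected values)
outlay = 1000.0
mean_cf = np.array([250.0, 280.0, 300.0, 320.0, 340.0])
sigma = 0.25 * mean_cf             # assumed 25 % relative uncertainty
rate = 0.08                        # discount rate

def npv_samples(rho, n=100_000):
    """Monte Carlo NPV samples with AR(1)-type correlation rho between years."""
    corr = rho ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
    cov = np.outer(sigma, sigma) * corr
    cf = rng.multivariate_normal(mean_cf, cov, size=n)
    discount = 1.0 / (1.0 + rate) ** np.arange(1, 6)
    return cf @ discount - outlay

for rho in (0.1, 0.9):             # low- vs. highly-correlated cash flows
    npv = npv_samples(rho)
    print(f"rho={rho}: mean NPV={npv.mean():7.1f}  P(NPV<0)={np.mean(npv < 0):.3f}")
```

The comparison of the two correlation settings illustrates the point made in the abstract: the mean NPV barely changes, but the spread, and hence the risk of a negative NPV, grows with the correlation.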

  4. Application of Monte Carlo methods for dead time calculations for counting measurements

    International Nuclear Information System (INIS)

    Henniger, Juergen; Jakobi, Christoph

    2015-01-01

    From a mathematical point of view Monte Carlo methods are the numerical solution of certain integrals and integral equations using a random experiment. There are several advantages compared to the classical stepwise integration. The time required for computing increases for multi-dimensional problems only moderately with increasing dimension. The only requirements for the integral kernel are its capability of being integrated in the considered integration area and the possibility of an algorithmic representation. These are the important properties of Monte Carlo methods that allow the application in every scientific area. Besides that Monte Carlo algorithms are often more intuitive than conventional numerical integration methods. The contribution demonstrates these facts using the example of dead time corrections for counting measurements.
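    As a simple illustration of this kind of Monte Carlo dead-time study (not the authors' implementation), the sketch below simulates a Poisson pulse train, applies a non-paralyzable dead time, and compares the measured rate with the classical correction m = n/(1 + n·τ); the rate and dead-time values are assumed.

```python
import numpy as np

rng = np.random.default_rng(3)

true_rate = 5.0e4        # true event rate (1/s), hypothetical
dead_time = 2.0e-6       # non-paralyzable dead time (s)
t_total = 10.0           # measurement time (s)

# Homogeneous Poisson process on [0, t_total]
n_events = rng.poisson(true_rate * t_total)
arrivals = np.sort(rng.uniform(0.0, t_total, n_events))

# Count only events arriving after the detector has recovered
counted = 0
next_live = 0.0
for t in arrivals:
    if t >= next_live:
        counted += 1
        next_live = t + dead_time

measured_rate = counted / t_total
# Classical non-paralyzable prediction: m = n / (1 + n*tau)
predicted = true_rate / (1.0 + true_rate * dead_time)
print(f"Monte Carlo measured rate: {measured_rate:.0f} 1/s")
print(f"Analytic prediction:       {predicted:.0f} 1/s")
```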

  5. Metric conjoint segmentation methods : A Monte Carlo comparison

    NARCIS (Netherlands)

    Vriens, M; Wedel, M; Wilms, T

    The authors compare nine metric conjoint segmentation methods. Four methods concern two-stage procedures in which the estimation of conjoint models and the partitioning of the sample are performed separately; in five, the estimation and segmentation stages are integrated. The methods are compared

  6. Adjoint Weighting Methods Applied to Monte Carlo Simulations of Applications and Experiments in Nuclear Criticality

    Energy Technology Data Exchange (ETDEWEB)

    Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-11

    The goals of this project are to develop Monte Carlo radiation transport methods and simulation software for engineering analysis that are robust, efficient and easy to use; and provide computational resources to assess and improve the predictive capability of radiation transport methods and nuclear data.

  7. Research on applying neutron transport Monte Carlo method in materials with continuously varying cross sections

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Zhang, Xisi

    2011-01-01

    In the traditional Monte Carlo method, the material properties in a given cell are assumed to be constant, but this is no longer applicable for continuously varying materials, where the material's nuclear cross sections vary over the particle's flight path. Three Monte Carlo methods, the substepping method, the delta-tracking method and the direct sampling method, are therefore discussed in this paper for solving problems with continuously varying materials. After verification and comparison of these methods in 1-D models, their basic characteristics are discussed, and the delta-tracking method is chosen as the main method for problems with continuously varying materials, especially 3-D problems. To overcome the drawbacks of the original delta-tracking method, an improved delta-tracking method is proposed in this paper to make the method more efficient for problems where the material's cross sections vary sharply over the particle's flight path. For practical calculations, the improved delta-tracking method was implemented in the 3-D Monte Carlo code RMC developed by the Department of Engineering Physics, Tsinghua University. Two problems based on the Godiva system were constructed and calculated with both the improved delta-tracking method and the substepping method, and the results prove the effectiveness of the improved delta-tracking method. (author)
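    The delta-tracking idea for continuously varying cross sections can be sketched in one dimension as follows: flight distances are sampled with a majorant cross section, and a collision at the sampled point is accepted with probability Σt(x)/Σmaj, otherwise the flight continues as a virtual collision. The linear cross-section profile, slab thickness and history count below are illustrative assumptions, and the sketch is not the RMC implementation described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(11)

thickness = 4.0                        # slab thickness (cm)

def sigma_t(x):                        # total cross section varying with depth (1/cm)
    return 0.2 + 0.15 * x              # hypothetical linear profile

sigma_maj = sigma_t(thickness)         # majorant cross section over the whole slab

def transmitted(n_histories=100_000):
    hits = 0
    for _ in range(n_histories):
        x = 0.0
        while True:
            x += -np.log(rng.random()) / sigma_maj   # flight sampled with the majorant
            if x >= thickness:
                hits += 1                            # escaped without a real collision
                break
            if rng.random() < sigma_t(x) / sigma_maj:
                break                                # real collision (history ends here)
            # otherwise: virtual collision, keep flying from x
    return hits / n_histories

mc = transmitted()
exact = np.exp(-(0.2 * thickness + 0.15 * thickness ** 2 / 2))  # exp(-integral of sigma_t)
print(f"delta-tracking transmission: {mc:.4f}   analytic: {exact:.4f}")
```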

  8. Development of Continuous-Energy Eigenvalue Sensitivity Coefficient Calculation Methods in the Shift Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, Christopher M [ORNL; Martin, William R [University of Michigan; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL

    2012-01-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the SHIFT Monte Carlo code within the Scale code package. The methods were used for several simple test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods.

  9. External individual monitoring: experiments and simulations using Monte Carlo Method

    International Nuclear Information System (INIS)

    Guimaraes, Carla da Costa

    2005-01-01

    In this work, we have evaluated the possibility of applying the Monte Carlo simulation technique to photon dosimetry for external individual monitoring. The GEANT4 toolkit was employed to simulate experiments with radiation monitors containing TLD-100 and CaF2:NaCl thermoluminescent detectors. As a first step, X-ray spectra were generated by impinging electrons on a tungsten target. The produced photon beam was then filtered through a beryllium window and additional filters to obtain radiation with the desired qualities. This procedure, used to simulate the radiation fields produced by an X-ray tube, was validated by comparing characteristics such as the half-value layer, which was also experimentally measured, the mean photon energy and the spectral resolution of the simulated spectra with those of reference spectra established by international standards. In the construction of the thermoluminescent dosimeter, two improvements were introduced. The first was the inclusion of 6% of air in the composition of the CaF2:NaCl detector, due to the difference between measured and calculated values of its density. Comparison between simulated and experimental results also showed that the self-attenuation of the emitted light in the readout process of the fluorite dosimeter must be taken into account. In the second approach, the light attenuation coefficient of the CaF2:NaCl compound, estimated by simulation to be 2.20(25) mm^-1, was therefore introduced. Conversion coefficients Cp from air kerma to personal dose equivalent were calculated using a slab water phantom with polymethyl methacrylate (PMMA) walls, for the reference narrow and wide X-ray spectrum series [ISO 4037-1], and also for the wide spectra implemented and used routinely at the Laboratorio de Dosimetria. Simulations of radiation backscattered by the PMMA slab water phantom and by a slab phantom of ICRU tissue-equivalent material produced very similar results. Therefore, the PMMA slab water phantom that can be easily constructed with low

  10. Quasi-Monte Carlo methods for lattice systems. A first look

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik

    2013-02-15

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.

  11. Quasi-Monte Carlo methods: applications to modeling of light transport in tissue

    Science.gov (United States)

    Schafer, Steven A.

    1996-05-01

    Monte Carlo modeling of light propagation can accurately predict the distribution of light in scattering materials. A drawback of Monte Carlo methods is that they converge inversely with the square root of the number of iterations. Theoretical considerations suggest that convergence which scales inversely with the first power of the number of iterations is possible. We have previously shown that one can obtain at least a portion of that improvement by using van der Corput sequences in place of a conventional pseudo-random number generator. Here, we present our further analysis, and show that quasi-Monte Carlo methods do have limited applicability to light scattering problems. We also discuss potential improvements which may increase the applicability.
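    A hedged sketch of the comparison is given below: a base-2 van der Corput sequence replaces the pseudo-random generator for a simple one-dimensional integral, and the error is tabulated against the sample size. The integrand and sample sizes are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        k, f, x = i + 1, 1.0 / base, 0.0
        while k > 0:
            k, r = divmod(k, base)
            x += r * f
            f /= base
        seq[i] = x
    return seq

f = lambda x: np.exp(-x)              # simple 1-D test integrand on [0, 1]
exact = 1.0 - np.exp(-1.0)

rng = np.random.default_rng(5)
for n in (10**3, 10**4, 10**5):
    mc = f(rng.random(n)).mean()
    qmc = f(van_der_corput(n)).mean()
    print(f"N={n:6d}  MC error={abs(mc - exact):.2e}  QMC error={abs(qmc - exact):.2e}")
```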

  12. A new method to assess the statistical convergence of monte carlo solutions

    International Nuclear Information System (INIS)

    Forster, R.A.

    1991-01-01

    Accurate Monte Carlo confidence intervals (CIs), which are formed with an estimated mean and an estimated standard deviation, can only be created when the number of particle histories N becomes large enough so that the central limit theorem can be applied. The Monte Carlo user has a limited number of marginal methods to assess the fulfillment of this condition, such as statistical error reduction proportional to 1/√N with error magnitude guidelines and third and fourth moment estimators. A new method is presented here to assess the statistical convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores. Related work in this area includes the derivation of analytic score distributions for a two-state Monte Carlo problem. Score distribution histograms have been generated to determine when a small number of histories accounts for a large fraction of the result. This summary describes initial studies of empirical Monte Carlo history score PDFs created from score histograms of particle transport simulations. 7 refs., 1 fig
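    The idea of inspecting the empirical history-score PDF can be sketched as below, where a synthetic heavy-tailed score distribution stands in for the tally scores of a real transport simulation; the diagnostic printed is how large a fraction of the total score is carried by the single largest history as N grows, one of the warning signs mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def history_scores(n):
    """Synthetic heavy-tailed history scores standing in for a difficult tally:
    most histories score little, rare ones score very large values."""
    return rng.pareto(a=1.8, size=n)

for n in (10**3, 10**4, 10**5, 10**6):
    s = history_scores(n)
    largest_fraction = s.max() / s.sum()
    mean = s.mean()
    rel_err = s.std(ddof=1) / np.sqrt(n) / mean
    print(f"N={n:7d}  mean={mean:6.3f}  rel.err={rel_err:.3f}  "
          f"largest history carries {100 * largest_fraction:5.1f}% of the total")

# A well-converged tally should show the largest-history fraction shrinking with N;
# np.histogram(s, bins=...) gives the empirical score PDF itself for inspection.
```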

  13. Improving Power System Risk Evaluation Method Using Monte Carlo Simulation and Gaussian Mixture Method

    Directory of Open Access Journals (Sweden)

    GHAREHPETIAN, G. B.

    2009-06-01

    Full Text Available The analysis of the risk of partial and total blackouts plays a crucial role in determining safe limits in power system design, operation and upgrades. Because of the huge cost of blackouts, it is very important to improve risk assessment methods. In this paper, Monte Carlo simulation (MCS) is used to analyze the risk, and the Gaussian Mixture Method (GMM) is used to estimate the probability density function (PDF) of the load curtailment, in order to improve the power system risk assessment method. In this improved method, the PDF and a suggested index are used to analyze the risk of loss of load. The effect of considering the number of generation units of power plants in the risk analysis is also studied. The improved risk assessment method has been applied to the IEEE 118-bus system and the network of the Khorasan Regional Electric Company (KREC), and the PDF of the load curtailment has been determined for both systems. The effect of various network loadings, transmission unavailability, transmission capacity and generation unavailability conditions on blackout risk has also been investigated.
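    A minimal sketch of combining Monte Carlo samples with a Gaussian mixture fit is shown below, using synthetic load-curtailment samples in place of a real power-system contingency simulation and scikit-learn's GaussianMixture for the PDF estimate; the cluster locations, weights and component count are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)

# Synthetic Monte Carlo samples of load curtailment (MW): most sampled states shed
# no load, two clusters of partial and severe curtailment stand in for real results.
n = 20_000
curtailment = np.concatenate([
    np.zeros(int(0.90 * n)),
    rng.normal(150.0, 40.0, int(0.07 * n)).clip(min=0),
    rng.normal(600.0, 120.0, int(0.03 * n)).clip(min=0),
])

gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(curtailment.reshape(-1, 1))

# Smooth PDF of the load curtailment from the fitted mixture
grid = np.linspace(0.0, 1000.0, 201).reshape(-1, 1)
pdf = np.exp(gmm.score_samples(grid))

loss_of_load_prob = np.mean(curtailment > 0)
print(f"P(load curtailment > 0) = {loss_of_load_prob:.3f}")
print(f"mixture weights: {np.round(gmm.weights_, 3)}")
```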

  14. Asymptotic equilibrium diffusion analysis of time-dependent Monte Carlo methods for grey radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2004-01-01

    The equations of nonlinear, time-dependent radiative transfer are known to yield the equilibrium diffusion equation as the leading-order solution of an asymptotic analysis when the mean-free path and mean-free time of a photon become small. We apply this same analysis to the Fleck-Cummings, Carter-Forest, and N'kaoua Monte Carlo approximations for grey (frequency-independent) radiative transfer. Although Monte Carlo simulation usually does not require the discretizations found in deterministic transport techniques, Monte Carlo methods for radiative transfer require a time discretization due to the nonlinearities of the problem. If an asymptotic analysis of the equations used by a particular Monte Carlo method yields an accurate time-discretized version of the equilibrium diffusion equation, the method should generate accurate solutions if a time discretization is chosen that resolves temperature changes, even if the time steps are much larger than the mean-free time of a photon. This analysis is of interest because in many radiative transfer problems, it is a practical necessity to use time steps that are large compared to a mean-free time. Our asymptotic analysis shows that: (i) the N'kaoua method has the equilibrium diffusion limit, (ii) the Carter-Forest method has the equilibrium diffusion limit if the material temperature change during a time step is small, and (iii) the Fleck-Cummings method does not have the equilibrium diffusion limit. We include numerical results that verify our theoretical predictions

  15. Numerical solution to the problem of criticality by Monte Carlo method

    International Nuclear Information System (INIS)

    Kyncl, J.

    1989-04-01

    A new method for the numerical solution of the criticality problem is proposed. The method is based on results of the Krein and Rutman theory. The Monte Carlo method is used, and the random process is chosen in such a way that the differences between the results obtained and the exact ones can be made arbitrarily small. The method can be applied to both analog and non-analog random processes. (author). 8 refs.

  16. Research on reactor physics analysis method based on Monte Carlo homogenization

    International Nuclear Information System (INIS)

    Ye Zhimin; Zhang Peng

    2014-01-01

    In order to meet the demands of the nuclear energy market in the future, many new concepts for nuclear energy systems have been put forward. The traditional deterministic neutronics analysis method has been challenged in two respects: one is the ability to handle generic geometry; the other is the multi-spectrum applicability of multigroup cross section libraries. Owing to its strong geometry modeling capability and its use of continuous-energy cross section libraries, the Monte Carlo method has been widely used in reactor physics calculations, and more and more research on the Monte Carlo method has been carried out. Neutronics-thermal hydraulics coupling analysis based on the Monte Carlo method has been realized. However, it still faces the problems of long computation times and slow convergence, which make it unsuitable for reactor core fuel management simulations. Drawing on the deterministic core analysis method, a new two-step core analysis scheme is proposed in this work. First, Monte Carlo simulations are performed for each assembly, and the assembly-homogenized multigroup cross sections are tallied at the same time. Second, core diffusion calculations are done with these multigroup cross sections. The new scheme achieves high efficiency while maintaining acceptable precision, so it can be used as an effective tool for the design and analysis of innovative nuclear energy systems. Numerical tests have been performed in this work to verify the new scheme. (authors)

  17. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    have primarily been based on a Bayesian paradigm, i.e. prior information on the parameters is a prerequisite, but questions about undesirable side effects from the priors are raised.     We present a method, based on MCMC methods, that approximates profile log-likelihood functions in directed graphical...... a tendency to foetal loss is heritable. The data possess a complicated dependence structure due to replicate pregnancies for the same woman, and a given family pattern. We conclude that a tendency to foetal loss is heritable. The model is of great interest in genetic epidemiology, because it considers both...

  18. Power Analysis for Complex Mediational Designs Using Monte Carlo Methods

    Science.gov (United States)

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2010-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…

  19. A variance-reduced electrothermal Monte Carlo method for semiconductor device simulation

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio; Di Stefano, Vincenza [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) Leibniz-Institut im Forschungsverbund Berlin e.V., Berlin (Germany)

    2012-11-01

    This paper is concerned with electron transport and heat generation in semiconductor devices. An improved version of the electrothermal Monte Carlo method is presented. This modification has better approximation properties due to reduced statistical fluctuations. The corresponding transport equations are provided and results of numerical experiments are presented.

  20. Evaluation of gamma-ray attenuation properties of bismuth borate glass systems using Monte Carlo method

    Science.gov (United States)

    Tarim, Urkiye Akar; Ozmutlu, Emin N.; Yalcin, Sezai; Gundogdu, Ozcan; Bradley, D. A.; Gurler, Orhan

    2017-11-01

    A Monte Carlo method was developed to investigate radiation shielding properties of bismuth borate glass. The mass attenuation coefficients and half-value layer parameters were determined for different fractional amounts of Bi2O3 in the glass samples for the 356, 662, 1173 and 1332 keV photon energies. A comparison of the theoretical and experimental attenuation coefficients is presented.
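    The relation between the attenuation coefficient, the transmitted fraction and the half-value layer used in such studies can be illustrated with a narrow-beam Monte Carlo sketch: free paths are sampled from an exponential distribution and a photon counts as transmitted if its first interaction lies beyond the slab. The attenuation coefficient below is an assumed illustrative value, not one of the paper's results.

```python
import numpy as np

rng = np.random.default_rng(9)

mu = 0.46          # illustrative linear attenuation coefficient (1/cm) at 662 keV
thickness = 1.0    # glass thickness (cm)

# Narrow-beam (good-geometry) Monte Carlo: a photon is transmitted only if its
# first interaction point lies beyond the slab.
n = 1_000_000
free_paths = rng.exponential(scale=1.0 / mu, size=n)
transmission_mc = np.mean(free_paths > thickness)

transmission_exact = np.exp(-mu * thickness)
hvl = np.log(2.0) / mu

print(f"MC transmission  : {transmission_mc:.4f}")
print(f"exp(-mu*t)       : {transmission_exact:.4f}")
print(f"half-value layer : {hvl:.3f} cm")
```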

  1. Generation of triangulated random surfaces by the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analog of the Polyakov string is considered. An algorithm is proposed which enables one to study the model by the Monte Carlo method in the grand canonical ensemble. Preliminary results on the determination of the critical index γ are presented

  2. Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water

    Science.gov (United States)

    Gergely, John Robert

    2009-01-01

    Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…

  3. Analysis of the distribution of X-ray characteristic production using the Monte Carlo methods

    International Nuclear Information System (INIS)

    Del Giorgio, Marcelo; Brizuela, Horacio; Riveros, J.A.

    1987-01-01

    The Monte Carlo method has been applied for the simulation of electron trajectories in a bulk sample, and therefore for the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the gaussian model. (Author) [es

  4. A micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations

    DEFF Research Database (Denmark)

    Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław

    2017-01-01

    We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...

  5. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  6. User's guide to Monte Carlo methods for evaluating path integrals

    Science.gov (United States)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
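    A minimal sketch of the kind of lattice path-integral calculation described here is given below: the Euclidean action of the harmonic oscillator (with m = ω = ħ = 1) is sampled with single-site Metropolis updates and ⟨x²⟩ is estimated after thermalization and thinning. The lattice parameters, proposal width and thinning interval are illustrative choices, and the sketch does not include the over-relaxation step discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

N, a = 64, 0.5                 # lattice sites and spacing (m = omega = hbar = 1)
x = np.zeros(N)                # the path, periodic in Euclidean time
step = 1.0                     # Metropolis proposal width

def action_diff(x, i, new):
    """Change of the discretized action when site i is moved to `new`."""
    old = x[i]
    left, right = x[(i - 1) % N], x[(i + 1) % N]
    kinetic = ((new - left) ** 2 + (right - new) ** 2
               - (old - left) ** 2 - (right - old) ** 2) / (2 * a)
    potential = a * (new ** 2 - old ** 2) / 2
    return kinetic + potential

x2_samples = []
for sweep in range(10_000):
    for i in range(N):
        proposal = x[i] + step * (2 * rng.random() - 1)
        if rng.random() < np.exp(-action_diff(x, i, proposal)):
            x[i] = proposal
    if sweep > 1_000 and sweep % 10 == 0:      # thermalization and thinning
        x2_samples.append(np.mean(x ** 2))

x2 = np.array(x2_samples)
# The naive error below ignores residual autocorrelation; a full analysis would
# estimate the autocorrelation time as described in the abstract.
print(f"<x^2> = {x2.mean():.3f} +/- {x2.std(ddof=1) / np.sqrt(len(x2)):.3f}")
# Continuum, zero-temperature value is 0.5; the finite lattice spacing shifts
# the result slightly.
```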

  7. Applications of Malliavin calculus to Monte Carlo methods in finance

    OpenAIRE

    Eric Fournié; Jean-Michel Lasry; Pierre-Louis Lions; Jérôme Lebuchoux; Nizar Touzi

    1999-01-01

    This paper presents an original probabilistic method for the numerical computations of Greeks (i.e. price sensitivities) in finance. Our approach is based on the integration-by-parts formula, which lies at the core of the theory of variational stochastic calculus, as developed in the Malliavin calculus. The Greeks formulae, both with respect to initial conditions and for smooth perturbations of the local volatility, are provided for general discontinuous path-dependent payoff functional...

  8. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.

    2007-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the

  9. TH-E-18A-01: Developments in Monte Carlo Methods for Medical Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Badal, A [U.S. Food and Drug Administration (CDRH/OSEL), Silver Spring, MD (United States); Zbijewski, W [Johns Hopkins University, Baltimore, MD (United States); Bolch, W [University of Florida, Gainesville, FL (United States); Sechopoulos, I [Emory University, Atlanta, GA (United States)

    2014-06-15

    Monte Carlo simulation methods are widely used in medical physics research and are starting to be implemented in clinical applications such as radiation therapy planning systems. Monte Carlo simulations offer the capability to accurately estimate quantities of interest that are challenging to measure experimentally while taking into account the realistic anatomy of an individual patient. Traditionally, practical application of Monte Carlo simulation codes in diagnostic imaging was limited by the need for large computational resources or long execution times. However, recent advancements in high-performance computing hardware, combined with a new generation of Monte Carlo simulation algorithms and novel postprocessing methods, are allowing for the computation of relevant imaging parameters of interest such as patient organ doses and scatter-to-primary ratios in radiographic projections in just a few seconds using affordable computational resources. Programmable Graphics Processing Units (GPUs), for example, provide a convenient, affordable platform for parallelized Monte Carlo executions that yield simulation times on the order of 10^7 x-rays/s. Even with GPU acceleration, however, Monte Carlo simulation times can be prohibitive for routine clinical practice. To reduce simulation times further, variance reduction techniques can be used to alter the probabilistic models underlying the x-ray tracking process, resulting in lower variance in the results without biasing the estimates. Other complementary strategies for further reductions in computation time are denoising of the Monte Carlo estimates and estimating (scoring) the quantity of interest at a sparse set of sampling locations (e.g. at a small number of detector pixels in a scatter simulation) followed by interpolation. Beyond reduction of the computational resources required for performing Monte Carlo simulations in medical imaging, the use of accurate representations of patient anatomy is crucial to the

  10. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that converts bidirectionally between CAD models and primitive solids. • The method was improved from a conversion method between CAD models and half spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic modeling method for accurate, prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed that can convert bidirectionally between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are then generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of the method were demonstrated.

  11. Fission yield covariances for JEFF: A Bayesian Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Leray Olivier

    2017-01-01

    Full Text Available The JEFF library does not contain fission yield covariances, but simply best estimates and uncertainties. This situation is not unique as all libraries are facing this deficiency, firstly due to the lack of a defined format. An alternative approach is to provide a set of random fission yields, themselves reflecting covariance information. In this work, these random files are obtained combining the information from the JEFF library (fission yields and uncertainties and the theoretical knowledge from the GEF code. Examples of this method are presented for the main actinides together with their impacts on simple burn-up and decay heat calculations.

  12. Optimal Spatial Subdivision method for improving geometry navigation performance in Monte Carlo particle transport simulation

    International Nuclear Information System (INIS)

    Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin

    2015-01-01

    Highlights: • The subdivision combines both advantages of uniform and non-uniform schemes. • The grid models were proved to be more efficient than traditional CSG models. • Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision. • Efficiency gains were obtained for realistic whole reactor core models. - Abstract: Geometry navigation is one of the key factors dominating the performance of Monte Carlo particle transport simulation for large-scale whole-reactor models. In such cases, spatial subdivision is an easily established method with high potential for improving run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally occupied, partially occupied, or not occupied at all by CSG objects. Most importantly, at each stage of subdivision a quality factor based on a cost-estimation function is derived to evaluate the candidate subdivision schemes, and only the scheme with the optimal quality factor is chosen as the final subdivision strategy for generating the grid model. The model built with the optimal quality factor is thus efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by the FDS Team. Test cases were used to highlight the performance gains that could be achieved. Results showed that Monte Carlo simulation runtime could be reduced significantly with the new method, even for cases reaching whole-reactor-core model sizes.
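
    The recursive, cost-driven subdivision described above can be sketched in a simplified one-dimensional form as follows; the cost model (a weighted sum of the number of grids and the average number of cells overlapping a grid), the candidate split counts and the termination depth are illustrative assumptions, not the SuperMC implementation.

        # Toy 1-D version: each CSG cell is an interval (lo, hi).
        def cost(grids, cells):
            # Assumed cost model: traversal cost grows with the number of grids,
            # search cost with the average number of cells overlapping a grid.
            overlaps = [sum(1 for lo, hi in cells if lo < g_hi and hi > g_lo)
                        for g_lo, g_hi in grids]
            return 0.3 * len(grids) + sum(overlaps) / len(grids)

        def subdivide(lo, hi, cells, depth=0, max_depth=6):
            candidates = []
            for n in (1, 2, 4):                      # candidate uniform splits
                edges = [lo + (hi - lo) * i / n for i in range(n + 1)]
                grids = list(zip(edges[:-1], edges[1:]))
                candidates.append((cost(grids, cells), grids))
            best_cost, best_grids = min(candidates, key=lambda c: c[0])
            if depth >= max_depth or len(best_grids) == 1:
                return [(lo, hi)]                    # stop: keep this grid as-is
            result = []
            for g_lo, g_hi in best_grids:            # recurse into the chosen scheme
                inside = [(a, b) for a, b in cells if a < g_hi and b > g_lo]
                result += subdivide(g_lo, g_hi, inside, depth + 1, max_depth)
            return result

        cells = [(0.0, 2.0), (1.5, 3.0), (8.0, 9.0)]
        print(subdivide(0.0, 10.0, cells))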

  13. Evaluation of Investment Risks in CBA with Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jana Korytárová

    2015-01-01

    Full Text Available Investment decisions are at the core of any development strategy. Economic growth and welfare depend on productive capital, infrastructure, human capital, knowledge, total factor productivity and the quality of institutions. The decision-making process for the selection of suitable projects in the public sector is in some respects more difficult than in the private sector. Evaluating projects on the basis of their financial profitability, where the basic parameter is the value of the potential profit, can be misleading in these cases. One of the basic objectives of the allocation of public resources is respect for the 3E principle (Economy, Effectiveness, Efficiency) over the whole life cycle. The life cycle of an investment project consists of four main phases. The first, pre-investment, phase is very important for the decision on whether to accept or reject a public project for realization. A well-designed feasibility study as well as a cost-benefit analysis (CBA) in this phase are important preconditions for the future success of the project. The future financial and economic cash flows (CF), which represent the fundamental basis for the calculation of economic effectiveness indicators, are formed and modelled in these documents. This paper deals with the possibility of calculating the financial and economic efficiency of public investment projects more accurately by means of simulation methods.
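
    A minimal sketch of the kind of simulation the paper refers to: the investment outlay, the triangular distribution of annual net benefits and the discount rate below are hypothetical, and the output is simply the simulated distribution of the net present value (NPV) and the probability that it is negative.

        import numpy as np

        rng = np.random.default_rng(42)
        n_sims, years, rate = 100_000, 10, 0.05
        investment = 1_000_000.0                                  # assumed outlay

        # annual net benefit drawn from an assumed triangular distribution
        benefits = rng.triangular(80_000, 130_000, 200_000, size=(n_sims, years))
        discount = (1.0 + rate) ** -np.arange(1, years + 1)
        npv = benefits @ discount - investment

        print(f"mean NPV: {npv.mean():,.0f}")
        print(f"P(NPV < 0): {np.mean(npv < 0):.3f}")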

  14. MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Gabriela Ižaríková

    2015-12-01

    Full Text Available The article is an example of using the software simulation @Risk designed for simulation in Microsoft Excel spread sheet, demonstrated the possibility of its usage in order to show a universal method of solving problems. The simulation is experimenting with computer models based on the real production process in order to optimize the production processes or the system. The simulation model allows performing a number of experiments, analysing them, evaluating, optimizing and afterwards applying the results to the real system. A simulation model in general is presenting modelling system by using mathematical formulations and logical relations. In the model is possible to distinguish controlled inputs (for instance investment costs and random outputs (for instance demand, which are by using a model transformed into outputs (for instance mean value of profit. In case of a simulation experiment at the beginning are chosen controlled inputs and random (stochastic outputs are generated randomly. Simulations belong into quantitative tools, which can be used as a support for a decision making.

  15. Multilevel Monte Carlo and improved timestepping methods in atmospheric dispersion modelling

    Science.gov (United States)

    Katsiolides, Grigoris; Müller, Eike H.; Scheichl, Robert; Shardlow, Tony; Giles, Michael B.; Thomson, David J.

    2018-02-01

    A common way to simulate the transport and spread of pollutants in the atmosphere is via stochastic Lagrangian dispersion models. Mathematically, these models describe turbulent transport processes with stochastic differential equations (SDEs). The computational bottleneck is the Monte Carlo algorithm, which simulates the motion of a large number of model particles in a turbulent velocity field; for each particle, a trajectory is calculated with a numerical timestepping method. Choosing an efficient numerical method is particularly important in operational emergency-response applications, such as tracking radioactive clouds from nuclear accidents or predicting the impact of volcanic ash clouds on international aviation, where accurate and timely predictions are essential. In this paper, we investigate the application of the Multilevel Monte Carlo (MLMC) method to simulate the propagation of particles in a representative one-dimensional dispersion scenario in the atmospheric boundary layer. MLMC can be shown to result in asymptotically superior computational complexity and reduced computational cost when compared to the Standard Monte Carlo (StMC) method, which is currently used in atmospheric dispersion modelling. To reduce the absolute cost of the method also in the non-asymptotic regime, it is equally important to choose the best possible numerical timestepping method on each level. To investigate this, we also compare the standard symplectic Euler method, which is used in many operational models, with two improved timestepping algorithms based on SDE splitting methods.
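
    The multilevel idea can be illustrated with a deliberately simple sketch: coupled fine and coarse Euler-Maruyama paths of a toy SDE (not an atmospheric dispersion model) share the same Brownian increments, and the level corrections are summed into one MLMC estimate; the SDE, the quantity of interest and the per-level sample counts are arbitrary choices for demonstration.

        import numpy as np

        rng = np.random.default_rng(1)
        T, x0 = 1.0, 0.0
        drift, sigma = lambda x: -x, lambda x: 1.0   # toy SDE, not a dispersion model

        def level_difference(level, n_samples, m0=4):
            # Coupled Euler-Maruyama estimate of E[P_l - P_{l-1}] with P = X(T)^2.
            nf = m0 * 2 ** level                      # fine time steps on this level
            dt_f = T / nf
            sums = np.zeros(2)
            for _ in range(n_samples):
                dw = rng.normal(0.0, np.sqrt(dt_f), nf)
                xf = x0
                for k in range(nf):                   # fine path
                    xf += drift(xf) * dt_f + sigma(xf) * dw[k]
                if level == 0:
                    sums += [xf ** 2, 0.0]
                    continue
                xc, dt_c = x0, 2 * dt_f
                for k in range(nf // 2):              # coarse path, same Brownian increments
                    xc += drift(xc) * dt_c + sigma(xc) * (dw[2 * k] + dw[2 * k + 1])
                sums += [xf ** 2, xc ** 2]
            return (sums[0] - sums[1]) / n_samples

        levels, samples = 4, [40000, 10000, 2500, 600, 150]
        estimate = sum(level_difference(l, samples[l]) for l in range(levels + 1))
        print("MLMC estimate of E[X(T)^2]:", estimate)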

  16. A quantitative ELISA for monitoring the secretion of ZZ-fusion proteins using SpA domain as immunodetection reporter system.

    Science.gov (United States)

    Mergulhão, F J; Monteiro, G A; Cabral, J M; Taipa, M A

    2001-11-01

    A sandwich-type enzyme-linked immunosorbent assay (ELISA) was established for monitoring the secretion of ZZ-fusion proteins. Two antibodies, a monoclonal mouse anti-human proinsulin and a rabbit anti-bovine IgG (strongly binding to the ZZ-domain), were used to quantify the secretion of recombinant human ZZ-proinsulin to the growth medium of Escherichia coli cultures. The method reported here combines the advantages of sandwich-type ELISA assays, namely high sensitivity, specificity, and throughput, with the possibility of quantifying small protein molecules (e.g., peptides). A further advantage of gene fusion techniques integrating both downstream processing and product detection and quantitation is highlighted. The method is capable of detecting levels of 0.05 ng of ZZ-proinsulin.

  17. Using Monte Carlo Methods for the Valuation of Intangible Assets in Sports Economics

    Directory of Open Access Journals (Sweden)

    Majewski Sebastian

    2017-12-01

    Full Text Available This paper indicates the possibilities of using Monte Carlo simulation methods in monitoring the value of players' performance rights. The authors have formulated a hypothesis that connecting Monte Carlo methods (MC) with econometric models of the player's life cycle could give club managers another source of information for the decision process. The MC method in finance is usually used to value an option price on the basis of an assumed distribution of price changes. In this approach, the method was used to determine the hypothetical future value of footballers' performance rights. Using econometric models of the player's life cycle, we could observe and analyse the phase in the life cycle of a football player and determine volatility.

  18. Investigation of Compton scattering correction methods in cardiac SPECT by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Silva, A.M. Marques da; Furlan, A.M.; Robilotta, C.C.

    2001-01-01

    The goal of this work was the use of Monte Carlo simulations to investigate the effects of two scattering correction methods: dual energy window (DEW) and dual photopeak window (DPW), in quantitative cardiac SPECT reconstruction. MCAT torso-cardiac phantom, with 99m Tc and non-uniform attenuation map was simulated. Two different photopeak windows were evaluated in DEW method: 15% and 20%. Two 10% wide subwindows centered symmetrically within the photopeak were used in DPW method. Iterative ML-EM reconstruction with modified projector-backprojector for attenuation correction was applied. Results indicated that the choice of the scattering and photopeak windows determines the correction accuracy. For the 15% window, fitted scatter fraction gives better results than k = 0.5. For the 20% window, DPW is the best method, but it requires parameters estimation using Monte Carlo simulations. (author)

  19. Probability-neighbor method of accelerating geometry treatment in reactor Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Li, Zeguang; Xu, Qi; Wang, Kan; Yu, Ganglin

    2011-01-01

    The probability neighbor method (PNM) is proposed in this paper to accelerate the geometry treatment of Monte Carlo (MC) simulation and is validated in the self-developed reactor Monte Carlo code RMC. During MC simulation by either the ray-tracking or the delta-tracking method, large amounts of time are spent finding out which cell a particle is located in. The traditional way is to search the cells one by one in a sequence defined previously. However, this procedure becomes very time-consuming when the system contains a large number of cells. Considering that particles have different probabilities of entering different cells, the PNM optimizes the searching sequence, i.e., the cells with larger probability are searched preferentially. The PNM has been implemented in the RMC code, and the numerical results show that considerable geometry-treatment time is saved in MC calculations for complicated systems; the method is especially effective in delta-tracking simulation. (author)
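
    The core idea, searching cells in order of decreasing empirical entry probability, can be illustrated with the hypothetical sketch below; the cell membership tests and the hit counters are invented stand-ins and have nothing to do with the actual RMC data structures.

        from collections import defaultdict

        class ProbabilityNeighborLocator:
            """Search cells ordered by how often particles were found in them before."""
            def __init__(self, cells):
                self.cells = cells                 # {name: membership_test(point)}
                self.hits = defaultdict(int)

            def locate(self, point):
                order = sorted(self.cells, key=lambda c: self.hits[c], reverse=True)
                for name in order:                 # most probable cells first
                    if self.cells[name](point):
                        self.hits[name] += 1       # update empirical probabilities
                        return name
                raise ValueError("point is outside all cells")

        # toy 1-D geometry: three slabs
        locator = ProbabilityNeighborLocator({
            "slab_a": lambda x: 0.0 <= x < 1.0,
            "slab_b": lambda x: 1.0 <= x < 2.0,
            "slab_c": lambda x: 2.0 <= x < 3.0,
        })
        for x in (0.5, 1.2, 1.7, 0.9, 1.1):
            print(x, locator.locate(x))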

  20. A recursive Monte Carlo method for estimating importance functions in deep penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.

    1980-04-01

    A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep-penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution based on importance sampling of neutron deep-penetration problems in those systems.

  1. Iterative Determination of Distributions by the Monte Carlo Method in Problems with an External Source

    OpenAIRE

    Makai, Mihály; Szatmáry, Zoltán

    2013-01-01

    In the Monte Carlo (MC) method statistical noise is usually present. Statistical noise may become dominant in the calculation of a distribution, usually obtained by iteration, but is less important in calculating integrals. The subject of the present work is the role of statistical noise in iterations involving stochastic simulation (the MC method). Convergence is checked by comparing two consecutive solutions in the iteration. The statistical noise may randomize or pervert the convergence. We study the p...

  2. Generation of gamma-ray streaming kernels through cylindrical ducts via Monte Carlo method

    International Nuclear Information System (INIS)

    Kim, Dong Su

    1992-02-01

    Since radiation streaming through penetrations is often the critical consideration in protection against exposure of personnel in a nuclear facility, it has been of great concern in radiation shielding design and analysis. Several methods have been developed and applied to the analysis of radiation streaming in the past, such as the ray analysis method, the single scattering method, the albedo method, and the Monte Carlo method. The first three may be used for order-of-magnitude calculations and where sufficient margin is available, whereas the Monte Carlo method is accurate but requires a lot of computing time. This study developed a Monte Carlo method and constructed a data library of solutions, obtained with the Monte Carlo method, for radiation streaming through a straight cylindrical duct in concrete walls due to a broad, mono-directional, monoenergetic gamma-ray beam of unit intensity. The solution, named the plane streaming kernel, is the average dose rate at the duct outlet and was evaluated for 20 source energies from 0 to 10 MeV, 36 source incident angles from 0 to 70 degrees, 5 duct radii from 10 to 30 cm, and 16 wall thicknesses from 0 to 100 cm. It was demonstrated that the average dose rate due to an isotropic point source at arbitrary positions can be well approximated using the plane streaming kernels with acceptable error. Thus, the library of plane streaming kernels can be used for the accurate and efficient analysis of radiation streaming through a straight cylindrical duct in concrete walls due to arbitrary distributions of gamma-ray sources.

  3. Radial-based tail methods for Monte Carlo simulations of cylindrical interfaces

    Science.gov (United States)

    Goujon, Florent; Bêche, Bruno; Malfreyt, Patrice; Ghoufi, Aziz

    2018-03-01

    In this work, we implement for the first time the radial-based tail methods for Monte Carlo simulations of cylindrical interfaces. The efficiency of this method is then evaluated through the calculation of surface tension and coexisting properties. We show that the inclusion of tail corrections during the course of the Monte Carlo simulation impacts the coexisting and the interfacial properties. We establish that the long range corrections to the surface tension are the same order of magnitude as those obtained from planar interface. We show that the slab-based tail method does not amend the localization of the Gibbs equimolar dividing surface. Additionally, a non-monotonic behavior of surface tension is exhibited as a function of the radius of the equimolar dividing surface.

  4. Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations

    DEFF Research Database (Denmark)

    Pettersen, E. E.; Demazire, C.; Jareteg, K.

    2015-01-01

    This paper deals with the development of a novel method for performing Monte Carlo calculations of the effect, on the neutron flux, of stationary fluctuations in macroscopic cross-sections. The basic principle relies on the formulation of two equivalent problems in the frequency domain: one... Formulating these equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real... Unlike stationary dynamic calculations, the presented method does not require any modification of the Monte Carlo code.

  5. A computer programme for perturbation calculations by correlated sampling Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Asaoka, Takumi

    1979-11-01

    The perturbation calculation method based on the Monte Carlo approach has been improved with the use of the correlated sampling technique and incorporated into the general-purpose Monte Carlo code MORSE. Two methods, the similar flight path and the identical flight path methods, have been adopted for evaluating the reactivity change. In the conventional perturbation method, only the first-order term of the perturbation formula was taken into account, but the present method can estimate up to the second-order term. Through the Monte Carlo games, neutrons passing through perturbed regions in both the unperturbed and perturbed systems are followed in such a way as to have a strong correlation not only for the first but also for the second generation. In this article, the perturbation formula is derived from the integral transport equation to estimate the reactivity change. The calculation flow and input/output format are explained for the user of the present computer programme. In the Appendices, the FORTRAN listing of the main subroutines modified from the original code is shown in addition to an output example. (author)
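
    Correlated sampling in this spirit can be illustrated, far more crudely, by estimating a small change in a slab transmission probability with common random numbers; the slab thickness and cross-sections are hypothetical and the sketch is unrelated to the MORSE implementation, but it shows why the correlated estimator of the difference has a much smaller statistical error than two independent runs.

        import math, random

        random.seed(7)
        thickness, sigma, sigma_pert = 2.0, 0.50, 0.51   # hypothetical slab and cross-sections
        n = 100_000

        def transmitted(sig, u):
            return (-math.log(u) / sig) > thickness      # analogue free-flight test

        corr, indep = [], []
        for _ in range(n):
            u = random.random()
            corr.append(transmitted(sigma_pert, u) - transmitted(sigma, u))     # common u
            indep.append(transmitted(sigma_pert, random.random())
                         - transmitted(sigma, random.random()))                 # independent u's

        def mean_and_err(xs):
            m = sum(xs) / len(xs)
            var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
            return m, math.sqrt(var / len(xs))

        print("correlated :", mean_and_err(corr))
        print("independent:", mean_and_err(indep))
        print("analytic   :", math.exp(-sigma_pert * thickness) - math.exp(-sigma * thickness))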

  6. Development of continuous-energy eigenvalue sensitivity coefficient calculation methods in the shift Monte Carlo Code

    International Nuclear Information System (INIS)

    Perfetti, C.; Martin, W.; Rearden, B.; Williams, M.

    2012-01-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)

  7. Development of continuous-energy eigenvalue sensitivity coefficient calculation methods in the shift Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, C.; Martin, W. [Univ. of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109-2104 (United States); Rearden, B.; Williams, M. [Oak Ridge National Laboratory, Reactor and Nuclear Systems Div., Bldg. 5700, P.O. Box 2008, Oak Ridge, TN 37831-6170 (United States)

    2012-07-01

    Three methods for calculating continuous-energy eigenvalue sensitivity coefficients were developed and implemented into the Shift Monte Carlo code within the SCALE code package. The methods were used for two small-scale test problems and were evaluated in terms of speed, accuracy, efficiency, and memory requirements. A promising new method for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was developed and produced accurate sensitivity coefficients with figures of merit that were several orders of magnitude larger than those from existing methods. (authors)

  8. The application of Monte Carlo method to electron and photon beams transport; Zastosowanie metody Monte Carlo do analizy transportu elektronow i fotonow

    Energy Technology Data Exchange (ETDEWEB)

    Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)

    1994-12-31

    The application of a Monte Carlo method to study the transport of electron and photon beams in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculations for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.

  9. Application de la methode des sous-groupes au calcul Monte-Carlo multigroupe

    Science.gov (United States)

    Martin, Nicolas

    This thesis is dedicated to the development of a Monte Carlo neutron transport solver based on the subgroup (or multiband) method. In this formalism, cross sections for resonant isotopes are represented in the form of probability tables over the whole energy spectrum. This study is intended to test and validate this approach in lattice physics and criticality-safety applications. The probability table method seems promising since it introduces an alternative computational way between the legacy continuous-energy representation and the multigroup method. In the first case, the amount of data invoked in continuous-energy Monte Carlo calculations can be very large and tends to slow down the overall computational time, although this model preserves the quality of the physical laws present in the ENDF format. Due to its cheap computational cost, the multigroup Monte Carlo approach is usually the basis of production codes in criticality-safety studies. However, the use of a multigroup representation of the cross sections implies a preliminary calculation to take into account self-shielding effects for resonant isotopes, which is generally performed by deterministic lattice codes relying on the collision probability method. Using cross-section probability tables over the whole energy range makes it possible to take self-shielding effects into account directly, and can be employed in both lattice physics and criticality-safety calculations. Several aspects have been thoroughly studied: (1) The consistent computation of probability tables with an energy grid comprising only 295 or 361 groups; the CALENDF moment approach led to probability tables suitable for a Monte Carlo code. (2) The combination of the probability table sampling for the energy variable with the delta-tracking rejection technique for the space variable, and its impact on the overall efficiency of the proposed Monte Carlo algorithm. (3) The derivation of a model for taking into account anisotropic
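
    Item (2) above, combining probability-table sampling with delta-tracking, can be caricatured as follows; the band probabilities, cross-sections and the independent resampling at every tentative collision site are illustrative assumptions, so the printout only checks the internal consistency of the sketch (exponential flights governed by the band-averaged cross-section), not the physics of any particular library.

        import math, random

        random.seed(3)
        # Hypothetical probability table for one energy group:
        # (band probability, total cross-section in that band) [1/cm]
        table = [(0.6, 0.2), (0.3, 1.0), (0.1, 5.0)]
        sigma_maj = max(sig for _, sig in table)     # majorant for delta tracking

        def sample_band_xs():
            r, acc = random.random(), 0.0
            for prob, sig in table:
                acc += prob
                if r < acc:
                    return sig
            return table[-1][1]

        def flight_distance():
            # Delta (Woodcock) tracking: sample against the majorant and reject
            # fictitious collisions with probability 1 - sigma/sigma_maj.
            x = 0.0
            while True:
                x += -math.log(random.random()) / sigma_maj
                sigma = sample_band_xs()             # resample the band at each tentative site
                if random.random() < sigma / sigma_maj:
                    return x

        samples = [flight_distance() for _ in range(50_000)]
        mean_sigma = sum(p * s for p, s in table)
        print("mean flight distance:", sum(samples) / len(samples))
        print("1 / <sigma>         :", 1.0 / mean_sigma)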

  10. Higgs to ZZ to 4 leptons via VBF analysis

    CERN Document Server

    Elveren, Botan

    2013-01-01

    The purpose of this study is to investigate the properties of the SM Higgs boson, for masses starting from 100 GeV, in the H->ZZ->4l decay channel. For this purpose we worked to reduce the background and increase the event selection efficiency.

  11. Convergence Studies on Monte Carlo Methods for Pricing Mortgage-Backed Securities

    Directory of Open Access Journals (Sweden)

    Tao Pang

    2015-05-01

    Full Text Available Monte Carlo methods are widely used simulation tools for market practitioners from trading to risk management. When pricing complex instruments, like mortgage-backed securities (MBS), strong path-dependency and high dimensionality make the Monte Carlo method the most suitable, if not the only, numerical method. In practice, while simulation processes in option-adjusted valuation can be relatively easy to implement, it is a well-known challenge that the convergence and the desired accuracy can only be achieved at the cost of lengthy computational times. In this paper, we study the convergence of Monte Carlo methods in calculating the option-adjusted spread (OAS), effective duration (DUR) and effective convexity (CNVX) of MBS instruments. We further define two new concepts, absolute convergence and relative convergence, and show that while the convergence of OAS requires thousands of simulation paths (absolute convergence), only hundreds of paths may be needed to obtain the desired accuracy for effective duration and effective convexity (relative convergence). These results suggest that practitioners can reduce the computational time substantially without sacrificing simulation accuracy.

  12. Search for non-standard model signatures in the WZ/ZZ final state at CDF run II

    Energy Technology Data Exchange (ETDEWEB)

    Norman, Matthew [Univ. of California, San Diego, CA (United States)

    2009-01-01

    This thesis discusses a search for non-Standard Model physics in heavy diboson production in the dilepton-dijet final state, using 1.9 fb⁻¹ of data from the CDF Run II detector. New limits are set on the anomalous coupling parameters for ZZ and WZ production based on limiting the production cross-section at high ŝ. Additionally, limits are set on the direct decay of new physics to ZZ and WZ diboson pairs. The nature and parameters of the CDF Run II detector are discussed, as are the influences that it has on the methods of our analysis.

  13. An improved method for storing and retrieving tabulated data in a scalar Monte Carlo code

    International Nuclear Information System (INIS)

    Hollenbach, D.F.; Reynolds, K.H.; Dodds, H.L.; Landers, N.F.; Petrie, L.M.

    1990-01-01

    The KENO-Va code is a production-level criticality safety code used to calculate the k_eff of a system. The code is stochastic in nature, using a Monte Carlo algorithm to track individual particles one at a time through the system. The advent of computers with vector processors has generated interest in improving KENO-Va to take advantage of the potential speed-up associated with these new processors. Unfortunately, the original Monte Carlo algorithm and method of storing and retrieving cross-section data are not adaptable to vector processing. This paper discusses an alternate method for storing and retrieving data that not only is readily vectorizable but also improves the efficiency of the current scalar code.

  14. Sink strength simulations using the Monte Carlo method: Applied to spherical traps

    Science.gov (United States)

    Ahlgren, T.; Bukonte, L.

    2017-12-01

    The sink strength is an important parameter for the mean-field rate equations used to simulate temporal changes in the microstructure of materials. However, there are noteworthy discrepancies between sink strengths obtained by Monte Carlo and analytical methods. In this study, we show the reasons for these differences. We present the equations to estimate the statistical error for sink strength calculations and show the way to determine the sink strengths for multiple traps. We develop a novel, very fast Monte Carlo method to obtain sink strengths. The results show that, in addition to the well-known dependence of the sink strength on the trap concentration, the trap radius and the total sink strength, the sink strength also depends on the defect diffusion jump length and the total trap volume fraction. Taking these factors into account allows us to obtain a very accurate analytic expression for the sink strength of spherical traps.

  15. Calculation of the Feynman integrals by means of the Monte Carlo method

    International Nuclear Information System (INIS)

    Filinov, V.S.

    1986-01-01

    The Monte Carlo method (the Metropolis algorithm), which is employed extensively in lattice gauge theories and quantum mechanics, was applicable only to the Euclidean version of the Feynman path integrals, i.e. it was valid for evaluating the integrals of real functions. In the present work the Monte Carlo method is extended to the evaluation of the integrals of complex-valued functions. The Feynman path integrals representing the time-dependent Green function of the one-dimensional non-stationary Schroedinger equation have been calculated for the harmonic oscillator and for particle motion in barrier- and well-type potential fields. The numerical results are in reasonable agreement with the analytical estimates, in spite of the presence of singularities in the Green functions. (orig.)

  16. Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport

    International Nuclear Information System (INIS)

    Lynch, J.E.

    1985-01-01

    Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. These results are compared to published benchmark solutions and to new discrete ordinate S-N results, for both spatially integrated radiation-material energies versus time and to new spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature. 7 refs., 4 figs

  17. Path-integral Monte Carlo method for the local Z2 Berry phase.

    Science.gov (United States)

    Motoyama, Yuichi; Todo, Synge

    2013-02-01

    We present a loop cluster algorithm Monte Carlo method for calculating the local Z_2 Berry phase of the quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on the worldlines with fixed spin configurations at the imaginary-time boundaries. The "complex weight problem" caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulation on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to estimate precisely the quantum critical point.

  18. Endophytic Bacillus subtilis ZZ120 and its potential application in ...

    African Journals Online (AJOL)

    An endophytic bacterial strain ZZ120 that was isolated from healthy stems of Prunus mume (family: Rosaceae) was identified as Bacillus subtilis based on biochemical and physiological assays and 16s rRNA, rpoB and tetB-yyaO / yyaR genes analysis. Both the culture filtrate and the n-butanol extract of strain ZZ120 showed ...

  19. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    Science.gov (United States)

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo, to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
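
    Drawing one realization of a log-Gaussian conductivity field through a discrete Karhunen-Loève decomposition can be sketched as below; the exponential covariance, correlation length, grid and truncation order are assumptions made for the illustration and are not the settings used in the study.

        import numpy as np

        rng = np.random.default_rng(0)
        n, corr_len, variance = 200, 0.2, 1.0
        x = np.linspace(0.0, 1.0, n)

        # exponential covariance matrix and its discrete Karhunen-Loeve modes
        C = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
        eigvals, eigvecs = np.linalg.eigh(C)
        idx = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

        m = 30                                         # truncation order (assumed)
        xi = rng.standard_normal(m)                    # the random KL coefficients
        log_k = eigvecs[:, :m] @ (np.sqrt(eigvals[:m]) * xi)
        conductivity = np.exp(log_k)                   # log-Gaussian hydraulic conductivity
        print(conductivity.min(), conductivity.max())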

  20. Evaluation of equivalent doses in 18F PET/CT using the Monte Carlo method with MCNPX code

    International Nuclear Information System (INIS)

    Belinato, Walmir; Santos, William Souza; Perini, Ana Paula; Neves, Lucio Pereira; Souza, Divanizia N.

    2017-01-01

    The present work used the Monte Carlo method, specifically the Monte Carlo N-Particle code MCNPX, to simulate the interaction of radiation involving photons and particles, such as positrons and electrons, with virtual adult anthropomorphic simulators in PET/CT scans and to determine the absorbed and equivalent doses in adult male and female patients.

  1. Validation of uncertainty of weighing in the preparation of radionuclide standards by Monte Carlo Method

    International Nuclear Information System (INIS)

    Cacais, F.L.; Delgado, J.U.; Loayza, V.M.

    2016-01-01

    In preparing solutions for the production of radionuclide metrology standards, it is necessary to measure the quantity activity by mass. The gravimetric method by elimination is applied to perform weighings with smaller uncertainties. In this work, the uncertainty calculation approach implemented by Lourenco and Bobin according to the ISO GUM for the elimination method is validated by the Monte Carlo method. The results obtained by both uncertainty calculation methods were consistent, indicating that the conditions for the application of the ISO GUM in the preparation of radioactive standards were fulfilled. (author)
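
    A GUM-type Monte Carlo propagation for a mass-by-difference (elimination) weighing can be sketched as follows; the balance readings, their standard uncertainties, the normal input distributions and the buoyancy factor are hypothetical values, not those of the cited work, and the first-order GUM result is printed only for comparison.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 1_000_000

        # hypothetical balance readings (g) and standard uncertainties for weighing
        # by elimination: dispensed mass = reading before - reading after
        before = rng.normal(20.41234, 0.00005, n)
        after = rng.normal(15.32816, 0.00005, n)
        buoyancy = rng.normal(1.00010, 0.00002, n)     # assumed air-buoyancy correction factor

        mass = (before - after) * buoyancy

        print(f"mass = {mass.mean():.5f} g, u(mass) = {mass.std(ddof=1):.6f} g")
        # first-order (GUM) propagation for comparison
        u_gum = ((0.00005**2 + 0.00005**2) * 1.00010**2
                 + ((20.41234 - 15.32816) * 0.00002) ** 2) ** 0.5
        print(f"GUM approximation: u = {u_gum:.6f} g")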

  2. Application of Monte Carlo method in determination of secondary characteristic X radiation in XFA

    International Nuclear Information System (INIS)

    Roubicek, P.

    1982-01-01

    Secondary characteristic radiation is excited by primary radiation from the X-ray tube and by secondary radiation of other elements, so that excitations of several orders result. The Monte Carlo method was used to consider all these possibilities, and the resulting flux of characteristic radiation was simulated for samples of silicate raw materials. A comparison of the results of these computations with experiments makes it possible to determine the effect of sample preparation on the characteristic radiation flux. (M.D.)

  3. BRAND program complex for neutron-physical experiment simulation by the Monte-Carlo method

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.

    1984-01-01

    The possibilities of the BRAND program complex for neutron and γ-radiation transport simulation by the Monte Carlo method are briefly described. The complex includes the following modules: a geometry module, a source module, a detector module, and modules for simulating the particle direction vector after an interaction and the free path. The complex is written in the FORTRAN language and implemented on the BESM-6 computer.

  4. Calculation of ion stopping in dense plasma by the Monte-Carlo method

    Science.gov (United States)

    Kodanova, S. K.; Ramazanov, T. S.; Issanova, M. K.; Bastykova, N. Kh; Golyatina, R. I.; Maiorov, S. A.

    2018-01-01

    In this paper, the Monte-Carlo method was used to simulate ion trajectories in the dense plasma of inertial confinement fusion. The results of the computer simulation are numerical data on dynamic characteristics such as energy loss, penetration depth, effective particle range, stopping and straggling. Based on the results of this work, a program for three-dimensional visualization of ion trajectories in the dense plasma of inertial confinement fusion was developed.

  5. A new fuzzy Monte Carlo method for solving SLAE with ergodic fuzzy Markov chains

    Directory of Open Access Journals (Sweden)

    Maryam Gharehdaghi

    2015-05-01

    Full Text Available In this paper we introduce a new fuzzy Monte Carlo method for solving systems of linear algebraic equations (SLAE) over possibility theory and max-min algebra. To solve the SLAE, we first define a fuzzy estimator and prove that it is an unbiased estimator of the solution. To prove unbiasedness, we apply ergodic fuzzy Markov chains. This new approach works even for cases in which the coefficient matrix has a norm greater than one.
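
    For orientation, the classical crisp (non-fuzzy) Monte Carlo scheme for SLAE on which such methods build can be sketched as follows: to solve x = Hx + b with the spectral radius of H below one, random walks on the index set accumulate products of H entries divided by the transition probabilities (an Ulam-von Neumann collision estimator). The matrix, source vector and walk counts are illustrative.

        import numpy as np

        rng = np.random.default_rng(9)
        H = np.array([[0.1, 0.2, -0.1],
                      [0.0, 0.3,  0.2],
                      [0.2, -0.1, 0.1]])     # row sums of |H| well below 1
        b = np.array([1.0, 2.0, 0.5])

        # transition probabilities proportional to |H|, plus a stopping probability
        P = np.abs(H)
        p_stop = 1.0 - P.sum(axis=1)

        def estimate_component(i, n_walks=100_000):
            total = 0.0
            for _ in range(n_walks):
                state, weight, score = i, 1.0, b[i]
                while True:
                    r = rng.random()
                    if r < p_stop[state]:
                        break                          # walk is absorbed
                    acc = p_stop[state]
                    for j in range(len(b)):            # pick next state j ~ P[state, j]
                        acc += P[state, j]
                        if r < acc:
                            break
                    weight *= H[state, j] / P[state, j]
                    state = j
                    score += weight * b[state]         # collision estimator contribution
                total += score
            return total / n_walks

        x_mc = np.array([estimate_component(i) for i in range(3)])
        print("Monte Carlo:", x_mc)
        print("direct     :", np.linalg.solve(np.eye(3) - H, b))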

  6. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.

  7. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time- consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which could automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  8. Investigation of Multicritical Phenomena in ANNNI Model by Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    A. K. Murtazaev

    2012-01-01

    Full Text Available The anisotropic Ising model with competing interactions is investigated over a wide range of temperatures and |J1/J| parameters by means of Monte Carlo methods. Static critical exponents of the magnetization, susceptibility, heat capacity, and correlation radius are calculated in the neighborhood of the Lifshitz point. From the results obtained, a phase diagram is plotted, the coordinates of the Lifshitz point are determined, and the character of the multicritical behavior of the system is identified.
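
    A bare-bones Metropolis sketch for an Ising chain with a competing next-nearest-neighbour coupling along the chain (a one-dimensional caricature of the axial competing-interaction situation) is given below; the couplings, lattice size, temperature and sweep count are arbitrary and the code is not the simulation used in the paper.

        import math, random

        random.seed(11)
        L, J, J1, T = 64, 1.0, -0.4, 1.5        # nearest / competing next-nearest couplings
        spins = [random.choice((-1, 1)) for _ in range(L)]

        def delta_energy(i):
            s = spins[i]
            nn = spins[(i - 1) % L] + spins[(i + 1) % L]
            nnn = spins[(i - 2) % L] + spins[(i + 2) % L]
            # E = -J * sum s_i s_{i+1} - J1 * sum s_i s_{i+2}; flipping s_i changes it by:
            return 2.0 * s * (J * nn + J1 * nnn)

        for sweep in range(5000):
            for _ in range(L):
                i = random.randrange(L)
                dE = delta_energy(i)
                if dE <= 0.0 or random.random() < math.exp(-dE / T):
                    spins[i] = -spins[i]        # Metropolis acceptance

        print("magnetization per spin:", sum(spins) / L)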

  9. Multilevel and Multi-index Monte Carlo methods for the McKean–Vlasov equation

    KAUST Repository

    Haji-Ali, Abdul-Lateef

    2017-09-12

    We address the approximation of functionals depending on a system of particles, described by stochastic differential equations (SDEs), in the mean-field limit when the number of particles approaches infinity. This problem is equivalent to estimating the weak solution of the limiting McKean–Vlasov SDE. To that end, our approach uses systems with finite numbers of particles and a time-stepping scheme. In this case, there are two discretization parameters: the number of time steps and the number of particles. Based on these two parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that, in the best case, the optimal work complexity of MLMC to estimate the functional in one typical setting with an error tolerance of TOL is obtained when using the partitioning estimator and the Milstein time-stepping scheme. We also consider a method that uses the recent Multi-index Monte Carlo method and show an improved work complexity in the same typical setting. Our numerical experiments are carried out on the so-called Kuramoto model, a system of coupled oscillators.

  10. Methods for Monte Carlo simulation of the exospheres of the moon and Mercury

    Science.gov (United States)

    Hodges, R. R., Jr.

    1980-01-01

    A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable alternative, is described in detail, with separate discussions of the methods of specification of physical processes as probabilistic functions. Proof of the validity of the Monte Carlo exosphere simulation method is provided in the form of a comparison of analytic and Monte Carlo solutions to three classical, and analytically tractable, exosphere problems. One of the key phenomena in moonlike exosphere simulations, the distribution of velocities of the atoms leaving a regolith, depends mainly on the nature of collisions of free atoms with rocks. It is shown that on the moon and Mercury, elastic collisions of helium atoms with a Maxwellian distribution of vibrating, bound atoms produce a nearly Maxwellian distribution of helium velocities, despite the absence of speeds in excess of escape in the impinging helium velocity distribution.

  11. Calculation of dose conversion coefficients for the radionuclides in soil using the Monte Carlo method

    International Nuclear Information System (INIS)

    Balos, Y.; Timurtuerkan, E. B.; Yorulmaz, N.; Bozkurt, A.

    2009-01-01

    In determining the radiation background of a region, it is important to carry out environmental radioactivity measurements in soil, water and air, to determine their contribution to the dose rate in air. This study aims to determine the dose conversion coefficients (in (nGy/h)/(Bq/kg)) that are used to convert radionuclide activity concentration in soil (in Bq/kg) to dose rate in air (in nGy/h) using the Monte Carlo method. An isotropic source which emits monoenergetic photons is assumed to be uniformly distributed in soil. The doses deposited by photons in the organs and tissues of a mathematical phantom are determined with the Monte Carlo package MCNP. The organ doses are then used, together with radiation weighting factors and organ weighting factors, to obtain effective doses for the energy range of 100 keV-3 MeV, which in turn are used to determine the dose rates in air per unit of specific activity.

  12. Application of Macro Response Monte Carlo method for electron spectrum simulation

    International Nuclear Information System (INIS)

    Perles, L.A.; Almeida, A. de

    2007-01-01

    During the past years, several variance reduction techniques for Monte Carlo electron transport have been developed in order to reduce the computation time of electron transport for absorbed dose distributions. We have implemented the Macro Response Monte Carlo (MRMC) method to evaluate the electron spectrum, which can be used as a phase space input for other simulation programs. This technique uses probability distributions for electron histories previously simulated in spheres (called kugels). These probabilities are used to sample the primary electron final state, as well as the creation of secondary electrons and photons. We have compared the MRMC electron spectra simulated in a homogeneous phantom against the Geant4 spectra. The results showed an agreement better than 6% in the spectra peak energies and that the MRMC code is up to 12 times faster than Geant4 simulations.

  13. SOLVATION STRUCTURE DETERMINATION OF Ni2+ ION IN WATER BY MEANS OF MONTE CARLO METHOD

    Directory of Open Access Journals (Sweden)

    Tutik Arindah

    2010-06-01

    Full Text Available The determination of the solvation structure of the Ni2+ ion in water has been achieved using the Monte Carlo method in the canonical (NVT) ensemble. A simulation of one Ni2+ ion in 215 H2O molecules has been performed under NVT conditions (298.15 K). The results showed that the number of H2O molecules surrounding the Ni2+ ion was 8 in the first shell and 17 in the second shell, the interaction energy of Ni2+-H2O was -68.7 kcal/mol in the first shell and -9.8 kcal/mol in the second shell, and there were two O-Ni2+-O angles, i.e. 74° and 142°. According to these results, the solvation structure of the Ni2+ ion in water was square antiprismatic.   Keywords: Water simulation, Monte Carlo simulation

  14. Numerical simulation of the blast impact problem using the Direct Simulation Monte Carlo (DSMC) method

    International Nuclear Information System (INIS)

    Sharma, Anupam; Long, Lyle N.

    2004-01-01

    A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shocktube problem and against experiments on interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented

  15. Numerical simulation of the blast impact problem using the Direct Simulation Monte Carlo (DSMC) method

    Science.gov (United States)

    Sharma, Anupam; Long, Lyle N.

    2004-10-01

    A particle approach using the Direct Simulation Monte Carlo (DSMC) method is used to solve the problem of blast impact with structures. A novel approach to model the solid boundary condition for particle methods is presented. The solver is validated against an analytical solution of the Riemann shocktube problem and against experiments on interaction of a planar shock with a square cavity. Blast impact simulations are performed for two model shapes, a box and an I-shaped beam, assuming that the solid body does not deform. The solver uses domain decomposition technique to run in parallel. The parallel performance of the solver on two Beowulf clusters is also presented.

  16. On computing efficiency of Monte-Carlo methods in solving Dirichlet's problem

    International Nuclear Information System (INIS)

    Androsenko, P.A.; Lomtev, V.L.

    1990-01-01

    Algorithms of the Monte-Carlo method based on boundary random walks and on the application of Fredholm series, intended for the solution of the stationary and non-stationary boundary-value Dirichlet problem for the Laplace equation, are presented. The code systems BRANDB, BRANDBT and BRANDF, which realize the above algorithms and allow the calculation of the solution values and their derivatives for three-dimensional geometrical systems, are described. The results of computing experiments on solving a number of problems in systems with convex and non-convex geometries are presented, and conclusions are drawn on the computing efficiency of the methods involved. 13 refs.; 4 figs.; 2 tabs
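
    A compact illustration of a boundary-random-walk estimator, here the classical walk-on-spheres method for the Laplace equation on the unit disk with Dirichlet data g, is given below; the domain, the boundary data and the stopping tolerance are chosen only for the example and the sketch is unrelated to the BRAND codes themselves.

        import math, random

        random.seed(4)

        def g(x, y):                              # Dirichlet boundary data on the unit circle
            return x * x - y * y                  # harmonic, so u(x, y) = g(x, y) inside

        def walk_on_spheres(x, y, eps=1e-4):
            # One walk: jump to a uniform point on the largest circle inside the domain.
            while True:
                r = 1.0 - math.hypot(x, y)        # distance to the boundary of the unit disk
                if r < eps:
                    return g(x, y)                # close enough to the boundary: score g
                theta = random.uniform(0.0, 2.0 * math.pi)
                x += r * math.cos(theta)
                y += r * math.sin(theta)

        n = 20_000
        x0, y0 = 0.3, 0.4
        estimate = sum(walk_on_spheres(x0, y0) for _ in range(n)) / n
        print("Monte Carlo:", estimate, " exact:", g(x0, y0))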

  17. Determination of factors through Monte Carlo method for Fricke dosimetry from 192Ir sources for brachytherapy

    International Nuclear Information System (INIS)

    David, Mariano Gazineu; Salata, Camila; Almeida, Carlos Eduardo

    2014-01-01

    The Laboratorio de Ciencias Radiologicas develops a methodology for the determination of the absorbed dose to water by the Fricke chemical dosimetry method for 192Ir high-dose-rate brachytherapy sources and has compared its results with the laboratory of the National Research Council Canada. This paper describes the determination of the correction factors by the Monte Carlo method with the PENELOPE code. Values for all factors are presented, with a maximum difference of 0.22% with respect to their determination by an alternative approach. (author)

  18. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    Noack, K.

    1982-01-01

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we have formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and have derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on the generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.

  19. Geant4 based Monte Carlo simulation for verifying the modified sum-peak method.

    Science.gov (United States)

    Aso, Tsukasa; Ogata, Yoshimune; Makino, Ryuta

    2018-04-01

    The modified sum-peak method can practically estimate radioactivity by using solely the peak and sum-peak count rates. In order to efficiently verify the method under various experimental conditions, a Geant4-based Monte Carlo simulation of a high-purity germanium detector system was applied. The energy spectra in the detector were simulated for a 60Co point source at various source-to-detector distances. The calculated radioactivity shows good agreement with the number of decays in the simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
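
    For context, the classical (unmodified) sum-peak relations for a two-photon cascade such as 60Co, assuming negligible angular correlation, dead time and background, are recalled below; the modified method of the paper dispenses with the total count rate T, so this is background material rather than the authors' formula.

        % N_1, N_2 : full-energy-peak count rates,  N_{12} : sum-peak count rate,
        % T : total count rate,  \epsilon_{p,i} / \epsilon_{t,i} : peak / total efficiencies.
        \[
          N_1 = A\,\epsilon_{p1}(1-\epsilon_{t2}),\qquad
          N_2 = A\,\epsilon_{p2}(1-\epsilon_{t1}),\qquad
          N_{12} = A\,\epsilon_{p1}\epsilon_{p2},
        \]
        \[
          T = A\left[1-(1-\epsilon_{t1})(1-\epsilon_{t2})\right]
          \quad\Longrightarrow\quad
          A = T + \frac{N_1 N_2}{N_{12}}.
        \]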

  20. Remarks on a financial inverse problem by means of Monte Carlo Methods

    Science.gov (United States)

    Cuomo, Salvatore; Di Somma, Vittorio; Sica, Federica

    2017-10-01

    Estimating the price of a barrier option is a typical inverse problem. In this paper we present a numerical and statistical framework for a market with a risk-free interest rate and a risky asset, the latter described by a Geometric Brownian Motion (GBM). After approximating the risky asset with a numerical method, we find the final option price by following an approach based on sequential Monte Carlo methods. All theoretical results are applied to the case of an option whose underlying is a real stock.
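
    A hedged sketch of the forward problem that such an inverse calibration would wrap around: pricing a discretely monitored down-and-out call under GBM by plain Monte Carlo; the spot, strike, barrier, rate, volatility and daily monitoring are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(2)
        s0, k, barrier = 100.0, 100.0, 85.0       # spot, strike, down-and-out barrier
        r, sigma, T = 0.03, 0.25, 1.0             # risk-free rate, volatility, maturity
        n_paths, n_steps = 20_000, 252
        dt = T / n_steps

        z = rng.standard_normal((n_paths, n_steps))
        log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
        paths = s0 * np.exp(log_paths)

        alive = (paths > barrier).all(axis=1)     # knocked out if the barrier is ever breached
        payoff = np.where(alive, np.maximum(paths[:, -1] - k, 0.0), 0.0)
        price = np.exp(-r * T) * payoff.mean()
        stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
        print(f"down-and-out call ≈ {price:.3f} ± {stderr:.3f}")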

  1. Shield calculation of research reactor IAN-R1 by Monte Carlo method

    International Nuclear Information System (INIS)

    Puerta, J.; Buritica, D.A.; Cardenas, H.F.

    1993-01-01

    Using the Monte Carlo Method a computer program has been developed to simulate the neutron radiation transport and determine the basic parameters in shielding calculations. The program has been tested comparing dose conversion factors with kerma factors issued by the international commission on radiation units and measurements (ICRU) on its report (No 26 of 1987) giving errors less than ten percent showing the goodness of the method. The program computer transmitted backscattered and absorbed flux on energy less on each collision. when neutrons are produced by region source with knowing energy; results are given like conversion factors and reliability of this program allows a wide application on radiological and medical physics

  2. Experimental results and Monte Carlo simulations of a landmine localization device using the neutron backscattering method

    CERN Document Server

    Datema, C P; Eijk, C W E

    2002-01-01

    Experiments were carried out to investigate the possible use of neutron backscattering for the detection of landmines buried in the soil. Several landmines, buried in a sand-pit, were positively identified. A series of Monte Carlo simulations were performed to study the complexity of the neutron backscattering process and to optimize the geometry of a future prototype. The results of these simulations indicate that this method shows great potential for the detection of non-metallic landmines (with a plastic casing), for which so far no reliable method has been found.

  3. Subtraction method for NLO corrections in Monte-Carlo event generators for leptoproduction

    International Nuclear Information System (INIS)

    Collins, J.

    2000-01-01

    In the case of the gluon-fusion process in deep-inelastic leptoproduction, I explicitly show how to incorporate NLO corrections in a Monte-Carlo event generator by a subtraction method. I calculate the parton densities to be used by the event generator in terms of MS-bar densities. The method is generalizable. A particular motivation for treating the gluon-fusion process is to treat diffractive deep-inelastic scattering properly, since in diffractive scattering the gluon density dominates the quark densities. I also propose a modified algorithm for treating parton kinematics in event generators; the new algorithm results in much simpler formulae for the NLO corrections. (author)

  4. Search for WZ+ZZ Production with Missing Transverse Energy and b Jets at CDF

    Energy Technology Data Exchange (ETDEWEB)

    Poprocki, Stephen [Cornell Univ., Ithaca, NY (United States)

    2013-01-01

    Observation of diboson processes at hadron colliders is an important milestone on the road to discovery or exclusion of the standard model Higgs boson. Since the decay processes happen to be closely related, methods, tools, and insights obtained through the more common diboson decays can be incorporated into low-mass standard model Higgs searches. The combined WW + WZ + ZZ diboson cross section has been measured at the Tevatron in hadronic decay modes. In this thesis we take this one step closer to the Higgs by measuring just the WZ + ZZ cross section, exploiting a novel artificial-neural-network-based b-jet tagger to separate the WW background. The number of signal events is extracted from data events with large missing ET using a simultaneous fit in events with and without two jets consistent with B hadron decays. Using 5.2 fb⁻¹ of data from the CDF II detector, we measure a cross section of σ(pp̄ → WZ, ZZ) = 5.8 +3.6 -3.0 pb, in agreement with the standard model.

  5. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations; assuming no radiation scattering in the vicinity of detectors minimized artifacts in the reconstructed image.

  6. A method based on Monte Carlo simulation for the determination of the G(E) function.

    Science.gov (United States)

    Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie

    2015-02-01

    The G(E) function method is a spectrometric method for exposure dose estimation; this paper describes a Monte Carlo based method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of various monoenergetic gamma rays in the region of 40-3200 keV, and the corresponding energy deposited in an air sphere within the energy region of the full-energy peak, were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to give the G(E) function value at energy E in the spectrum. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that the dose rates calculated using the G(E) function determined by the authors' method agree well with the values obtained by an ionisation chamber, with a maximum deviation of 6.31%.
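
    The procedure described above (absorbed dose rate divided by the full-energy-peak counts at each simulated energy, followed by curve fitting) can be illustrated with a short numerical sketch. The energies, dose rates, and peak counts below are invented placeholders rather than the authors' MCNP results, and a log-log polynomial fit stands in for the 1stOpt fitting step.

```python
import numpy as np

# Hypothetical Monte Carlo results for monoenergetic sources (placeholders, not
# the authors' MCNP output): gamma energy E in keV, absorbed dose rate in air
# per source particle (arbitrary units), and counts in the full-energy peak.
E_keV = np.array([40, 60, 100, 200, 400, 662, 1173, 1332, 2000, 3200])
dose_rate = np.array([0.8, 1.1, 1.6, 2.9, 5.5, 8.6, 14.2, 15.9, 22.5, 33.0])
peak_counts = np.array([9.1e4, 8.2e4, 6.5e4, 4.1e4, 2.3e4, 1.5e4,
                        8.9e3, 7.9e3, 5.2e3, 3.1e3])

# G(E): dose rate divided by full-energy-peak counts at each simulated energy.
G = dose_rate / peak_counts

# Determine the coefficients of G(E) by curve fitting (a log-log polynomial
# here, standing in for the 1stOpt fitting step of the paper).
coeffs = np.polyfit(np.log10(E_keV), np.log10(G), deg=3)
G_fit = lambda E: 10.0 ** np.polyval(coeffs, np.log10(E))

for E, g in zip(E_keV, G):
    print(f"E = {E:6.0f} keV   G tabulated = {g:.3e}   G fitted = {G_fit(E):.3e}")
```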

  7. The adaptation method in the Monte Carlo simulation for computed tomography

    Directory of Open Access Journals (Sweden)

    Hyounggun Lee

    2015-06-01

    Full Text Available The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between the number of iterations and the simulations was estimated, and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times shorter than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which shows that the adaptive method is highly effective for simulations that require a large number of iterations. Assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  8. Simulação do equilíbrio: o método de Monte Carlo Equilibrium simulation: Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Alejandro López-Castillo

    2007-01-01

    Full Text Available We make several simulations using the Monte Carlo method in order to obtain the chemical equilibrium for several first-order reactions and one second-order reaction. We study several direct, reverse and consecutive reactions. These simulations show the fluctuations and relaxation time and help to understand the solution of the corresponding differential equations of chemical kinetics. This work was done in an undergraduate physical chemistry course at UNIFIEO.

  9. Vectorization of continuous energy Monte Carlo method for neutron transport calculation

    International Nuclear Information System (INIS)

    Mori, Takamasa; Nakagawa, Masayuki; Sasaki, Makoto

    1992-01-01

    The vectorization method was studied to achieve a high efficiency for the precise physics model used in the continuous energy Monte Carlo method. The collision analysis task was reconstructed on the basis of the event based algorithm, and the stack-driven zone-selection method was applied to the vectorization of random walk simulation. These methods were installed into the vectorized continuous energy MVP code for general purpose uses. Performance of the present method was evaluated by comparison with conventional scalar codes VIM and MCNP for two typical problems. The MVP code achieved a vectorization ratio of more than 95% and a computation speed faster by a factor of 8∼22 on the FACOM VP-2600 vector supercomputer compared with the conventional scalar codes. (author)

  10. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a simulation of low penetration and a simulation of high penetration. The load-shedding strategy and the simulation process are described in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
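
    As a rough illustration of what a Monte Carlo reliability assessment of a distribution system involves, the sketch below samples component failures and repair times over many simulated years and accumulates SAIFI/SAIDI-like indices. The feeder sections, failure rates, and customer counts are invented, and the paper's DG modelling, FMEA procedure, and load-shedding strategy are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feeder sections: (failure rate per year, mean repair time in hours,
# number of customers interrupted when the section fails). Invented numbers.
sections = [(0.10, 4.0, 400), (0.25, 6.0, 250), (0.15, 3.0, 150)]
years, n_customers = 1000, 800

interruptions = 0.0   # customer interruptions accumulated over all simulated years
outage_hours = 0.0    # customer-hours of interruption

for _ in range(years):
    for lam, mttr, affected in sections:
        n_fail = rng.poisson(lam)                  # failures of this section this year
        repair = rng.exponential(mttr, n_fail)     # repair duration of each failure
        interruptions += n_fail * affected
        outage_hours += repair.sum() * affected

saifi = interruptions / (years * n_customers)   # interruptions per customer per year
saidi = outage_hours / (years * n_customers)    # hours per customer per year
print(f"SAIFI ~ {saifi:.3f} int/cust/yr, SAIDI ~ {saidi:.2f} h/cust/yr")
```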

  11. A Monte-Carlo study of landmines detection by neutron backscattering method

    International Nuclear Information System (INIS)

    Maucec, M.; De Meijer, R.J.

    2000-01-01

    The use of Monte-Carlo simulations for modelling a simplified landmine detector system with a 252Cf neutron source is presented in this contribution. Different aspects and a variety of external conditions affecting the localisation and identification of a buried suspicious object (such as a landmine) have been tested. Results of sensitivity calculations confirm that landmine detection methods based on the analysis of backscattered neutron radiation are applicable in higher-density formations with a pore-water mass fraction of <15%. (author)

  12. Determination of partial structure factors by reverse Monte Carlo modelling—a test of the method

    Science.gov (United States)

    Gruner, S.; Akinlade, O.; Hoyer, W.

    2006-05-01

    The reverse Monte Carlo modelling technique is commonly applied for the analysis of the atomic structure of liquid and amorphous substances. In particular, partial structure factors of multi-component alloys can be determined using this method. In the present study we use the example of the liquid Ni33Ge67 alloy to investigate the impact of different input data on the result of RMC modelling. It was found that even two experimental structure factors might be sufficient to obtain reliable partial structure factors if the contrast between them is high enough.

  13. Reliability analysis of PWR thermohydraulic design by the Monte Carlo method

    International Nuclear Information System (INIS)

    Silva Junior, H.C. da; Berthoud, J.S.; Carajilescov, P.

    1977-01-01

    The operating power level of a PWR is limited by the occurrence of DNB. Without affecting the safety and performance of the reactor, it is possible to admit failure of a certain number of core channels. The thermohydraulic design, however, is affected by a great number of uncertainties of a deterministic or statistical nature. In the present work, the Monte Carlo method is applied to yield the probability that the number F of channels undergoing boiling crisis will not exceed a previously given number F*. This probability is obtained as a function of the reactor power level. (Author) [pt
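
    The probability statement in this abstract lends itself to a direct Monte Carlo sketch: sample the uncertain thermohydraulic parameters for every channel, count the channels whose DNB ratio falls below one, and tally the fraction of trials in which F does not exceed F*. The channel model and the parameter distributions below are invented for illustration and are not the correlations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_channels, n_trials, F_star = 200, 20_000, 3
power_levels = [0.9, 1.0, 1.1]        # fraction of nominal reactor power

for p in power_levels:
    exceed = 0
    for _ in range(n_trials):
        # Invented uncertainties: channel peaking factor and DNB correlation error.
        peaking = rng.normal(1.0, 0.05, n_channels)
        corr_err = rng.normal(1.0, 0.08, n_channels)
        # Toy DNB ratio: thermal margin shrinks as local power (p * peaking) grows.
        dnbr = 1.4 * corr_err / (p * peaking)
        F = np.count_nonzero(dnbr < 1.0)     # channels reaching boiling crisis
        if F > F_star:
            exceed += 1
    print(f"power = {p:.1f} x nominal:  P(F <= F*) ~ {1.0 - exceed / n_trials:.4f}")
```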

  14. The future of new calculation concepts in dosimetry based on the Monte Carlo Methods

    International Nuclear Information System (INIS)

    Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.

    2009-01-01

    Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion of some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for the relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - or other computer science techniques) that have already been used successfully for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)

  15. Application of the direct simulation Monte Carlo method to the full shuttle geometry

    Science.gov (United States)

    Bird, G. A.

    1990-01-01

    A new set of programs has been developed for the application of the direct simulation Monte Carlo (or DSMC) method to rarefied gas flows with complex three-dimensional boundaries. The programs are efficient in terms of the computational load and also in terms of the effort required to set up particular cases. This efficiency is illustrated through computations of the flow about the Shuttle Orbiter. The general flow features are illustrated for altitudes from 170 to 100 km. Also, the computed lift-drag ratio during re-entry is compared with flight measurements.

  16. Shuttle vertical fin flowfield by the direct simulation Monte Carlo method

    Science.gov (United States)

    Hueser, J. E.; Brock, F. J.; Melfi, L. T.

    1985-01-01

    The flow properties in a model flowfield simulating the shuttle vertical fin were determined using the Direct Simulation Monte Carlo method. The case analyzed corresponds to an orbit height of 225 km with the freestream velocity vector orthogonal to the fin surface. Contour plots of the flowfield distributions of density, temperature, velocity and flow angle are presented. The results also include the mean molecular collision frequency (which reaches 1/60 sec near the surface), the collision frequency density (which approaches 7 x 10^18 per cu m per sec at the surface) and the mean free path (19 m at the surface).

  17. Three-dimensional hypersonic rarefied flow calculations using direct simulation Monte Carlo method

    Science.gov (United States)

    Celenligil, M. Cevdet; Moss, James N.

    1993-01-01

    A summary of three-dimensional simulations of hypersonic rarefied flows, performed in an effort to understand the highly nonequilibrium flows about space vehicles entering the Earth's atmosphere and to obtain a realistic estimation of the aerothermal loads, is presented. Calculations are performed using the direct simulation Monte Carlo method with a five-species reacting gas model, which accounts for rotational and vibrational internal energies. Results are obtained for the external flows about various bodies in the transitional flow regime. For the cases considered, convective heating, flowfield structure and overall aerodynamic coefficients are presented, and comparisons are made with the available experimental data. The agreement between the calculated and measured results is very good.

  18. A numerical study of rays in random media. [Monte Carlo method simulation

    Science.gov (United States)

    Youakim, M. Y.; Liu, C. H.; Yeh, K. C.

    1973-01-01

    Statistics of electromagnetic rays in a random medium are studied numerically by the Monte Carlo method. Two dimensional random surfaces with prescribed correlation functions are used to simulate the random media. Rays are then traced in these sample media. Statistics of the ray properties such as the ray positions and directions are computed. Histograms showing the distributions of the ray positions and directions at different points along the ray path as well as at given points in space are given. The numerical experiment is repeated for different cases corresponding to weakly and strongly random media with isotropic and anisotropic irregularities. Results are compared with those derived from theoretical investigations whenever possible.

  19. Enhanced least squares Monte Carlo method for real-time decision optimizations for evolving natural hazards

    DEFF Research Database (Denmark)

    Anders, Annett; Nishijima, Kazuyoshi

    The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach is based on the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option...... prices. In Anders & Nishijima (2011) the LSM is adapted for a real-time operational decision problem; however, it is found that further improvement of the computational efficiency is required in order to make the approach practical. This is the focus of the present paper. The idea behind

  20. Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process

    International Nuclear Information System (INIS)

    Nishimura, Akihiko

    1995-01-01

    A computation code based on the direct simulation Monte Carlo (DSMC) method was developed in order to analyze atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of the gadolinium atom were calculated for a model with five low-lying states. The calculated results were compared with experimental results obtained by laser absorption spectroscopy. Two types of DSMC simulations, differing in the inelastic collision procedure, were carried out. It was concluded that energy transfer is forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author)

  1. Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations

    DEFF Research Database (Denmark)

    Pettersen, E. E.; Demazire, C.; Jareteg, K.

    2015-01-01

    that corresponds to the real part of the neutron balance, and one that corresponds to the imaginary part. The two equivalent problems are in nature similar to two subcritical systems driven by external neutron sources, and can thus be treated as such in a Monte Carlo framework. The definition of these two...... equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real...

  2. Monte-Carlo Method Python Library for dose distribution Calculation in Brachytherapy

    International Nuclear Information System (INIS)

    Randriantsizafy, R.D.; Ramanandraibe, M.J.; Raboanary, R.

    2007-01-01

    Cs-137 brachytherapy treatment has been performed in Madagascar since 2005. Treatment time calculation for the prescribed dose is made manually. A Monte-Carlo method Python library written at Madagascar INSTN is used experimentally to calculate the dose distribution in the tumour and around it. A first validation of the code was done by comparing the library curves with the Nucletron company curves. To reduce the duration of the calculation, a grid of PCs was set up with a listener patch running on each PC. The library will be used to model the dose distribution in the patient CT scan images for individual, more accurate treatment time calculation for a prescribed dose.

  3. Monte Carlo method for polarized radiative transfer in gradient-index media

    International Nuclear Information System (INIS)

    Zhao, J.M.; Tan, J.Y.; Liu, L.H.

    2015-01-01

    Light transfer in gradient-index media generally follows curved ray trajectories, which will cause light beam to converge or diverge during transfer and induce the rotation of polarization ellipse even when the medium is transparent. Furthermore, the combined process of scattering and transfer along curved ray path makes the problem more complex. In this paper, a Monte Carlo method is presented to simulate polarized radiative transfer in gradient-index media that only support planar ray trajectories. The ray equation is solved to the second order to address the effect induced by curved ray trajectories. Three types of test cases are presented to verify the performance of the method, which include transparent medium, Mie scattering medium with assumed gradient index distribution, and Rayleigh scattering with realistic atmosphere refractive index profile. It is demonstrated that the atmospheric refraction has significant effect for long distance polarized light transfer. - Highlights: • A Monte Carlo method for polarized radiative transfer in gradient index media. • Effect of curved ray paths on polarized radiative transfer is considered. • Importance of atmospheric refraction for polarized light transfer is demonstrated
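
    The second-order treatment of curved ray trajectories mentioned here amounts to integrating the ray equation d/ds(n dr/ds) = ∇n along the arc length. The fragment below traces a single ray through a medium with a hypothetical linear refractive-index gradient using a midpoint (second-order) step; polarization and scattering are omitted, so it only illustrates the ray-bending part of the algorithm.

```python
import numpy as np

def n_and_grad(r):
    """Hypothetical gradient-index profile: n decreases linearly with height z."""
    n0, g = 1.00030, 2.5e-5            # invented surface index and vertical gradient (1/km)
    n = n0 - g * r[1]                  # r = (x, z) in km
    return n, np.array([0.0, -g])

def trace_ray(r0, d0, ds=0.5, steps=2000):
    """Second-order (midpoint) integration of the ray equation d/ds(n dr/ds) = grad n."""
    r = np.asarray(r0, dtype=float)
    d = np.asarray(d0, dtype=float)
    d /= np.linalg.norm(d)
    path = [r.copy()]
    for _ in range(steps):
        r_mid = r + 0.5 * ds * d                     # midpoint position
        n_mid, gn_mid = n_and_grad(r_mid)
        # dt/ds = (grad n - (grad n . t) t) / n, evaluated at the midpoint
        d = d + ds * (gn_mid - np.dot(gn_mid, d) * d) / n_mid
        d /= np.linalg.norm(d)
        r = r + ds * d
        path.append(r.copy())
    return np.array(path)

path = trace_ray(r0=(0.0, 0.0), d0=(1.0, 0.02))
print("final ray point (x, z) in km:", path[-1])
```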

  4. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
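
    For readers unfamiliar with the SIR baseline that the LRPF is compared against, a minimal sequential importance resampling filter for a one-dimensional toy state-space model is sketched below; the WEP hydrologic model, the lagged aggregation, and the MCMC regularization move of the LRPF are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy nonlinear state-space model (not the WEP hydrologic model).
def propagate(x, q):            # state transition with process noise std q
    return 0.8 * x + 2.0 * np.sin(0.5 * x) + rng.normal(0.0, q, x.shape)

def observe(x):                 # observation operator
    return 0.5 * x

T, N, q, r = 50, 1000, 0.5, 0.8           # steps, particles, process/obs noise std
x_true, y = 1.0, []
for _ in range(T):                         # generate synthetic truth and observations
    x_true = propagate(np.array([x_true]), q)[0]
    y.append(observe(x_true) + rng.normal(0.0, r))

particles = rng.normal(0.0, 2.0, N)
estimates = []
for t in range(T):
    particles = propagate(particles, q)                        # forecast step
    w = np.exp(-0.5 * ((y[t] - observe(particles)) / r) ** 2)  # Gaussian likelihood
    w /= w.sum()
    estimates.append(np.sum(w * particles))                    # posterior mean
    idx = rng.choice(N, size=N, p=w)                           # multinomial resampling
    particles = particles[idx]

print("last filtered estimate vs truth:", estimates[-1], x_true)
```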

  5. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    Science.gov (United States)

    Tickner

    2000-10-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal.

  6. Dosimetry of Beta-Emitting Radionuclides at the Tissular Level Using Monte Carlo Methods

    International Nuclear Information System (INIS)

    Coulot, J.; Lavielle, F.; Faggiano, A.; Bellon, N.; Aubert, B.; Schlumberger, M.; Ricard, M.

    2005-01-01

    Standard macroscopic methods used to assess the dose in nuclear medicine are limited to cases of homogeneous radionuclide distributions and provide dose estimations at the organ level. In a few applications, like radioimmunotherapy, the mean dose to an organ is not suitable to explain clinical observations, and knowledge of the dose at the tissular level is mandatory. Therefore, one must determine how particles lose their energy and what is the best way to represent tissues. The Monte Carlo method is appropriate for solving the problem of particle transport, but the question of the geometric representation of biology remains. In this paper, we describe a software package (CLUSTER3D) that is able to randomly build biologically representative sphere-cluster geometries using a statistical description of tissues. These geometries are then used by our Monte Carlo code, called DOSE3D, to perform particle transport. First results obtained on thyroid models highlight the need for cellular and tissular data to take into account actual radionuclide distributions in tissues. The flexibility and reliability of the method make it a useful tool to study the energy deposition at various cellular and tissular levels in any configuration

  7. The applicability of certain Monte Carlo methods to the analysis of interacting polymers

    Energy Technology Data Exchange (ETDEWEB)

    Krapp, Jr., Donald M. [Univ. of California, Berkeley, CA (United States)

    1998-05-01

    The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at βcrit ~ 0.99, and to recalculate the known value of the critical exponent η ~ 0.58 of the system for β = βcrit. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of η. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of βcrit using smaller values of N is 1.01 ± 0.01, and the estimate for η at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions, can in practice fail to sample phase space accurately and thus not allow accurate estimations of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
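
    A compact version of the pivot-plus-Metropolis scheme discussed here can be written for a square lattice (the thesis itself uses a hexagonal lattice): nearest-neighbour contacts between non-bonded monomers carry energy -1, a pivot move rotates the tail of the walk about a random site, and the move is accepted with the Metropolis rule only if the new walk remains self-avoiding. The walk length, β, and sweep count below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
ROTS = [np.array([[0, -1], [1, 0]]), np.array([[-1, 0], [0, -1]]),
        np.array([[0, 1], [-1, 0]])]          # 90, 180, 270 degree rotations

def energy(walk):
    """Count nearest-neighbour contacts between non-bonded monomers (energy -1 each)."""
    occupied = {tuple(p): i for i, p in enumerate(walk)}
    e = 0
    for i, p in enumerate(walk):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j = occupied.get((p[0] + dx, p[1] + dy))
            if j is not None and abs(i - j) > 1:
                e -= 1
    return e // 2                              # each contact is counted twice

def pivot_metropolis(N=60, beta=0.99, sweeps=20_000):
    walk = np.column_stack([np.arange(N), np.zeros(N, int)])   # straight-rod start
    e = energy(walk)
    for _ in range(sweeps):
        k = rng.integers(1, N - 1)             # pivot site
        R = ROTS[rng.integers(3)]
        new = walk.copy()
        new[k + 1:] = walk[k] + (walk[k + 1:] - walk[k]) @ R.T
        if len({tuple(p) for p in new}) < N:   # self-avoidance violated: reject
            continue
        e_new = energy(new)
        if rng.random() < np.exp(-beta * (e_new - e)):  # Metropolis acceptance
            walk, e = new, e_new
    return walk, e

walk, e = pivot_metropolis()
print("end-to-end distance:", np.linalg.norm(walk[-1] - walk[0]), "energy:", e)
```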

  8. On solution to the problem of criticality by alternative Monte Carlo method

    International Nuclear Information System (INIS)

    Kyncl, J.

    2005-03-01

    The problem of criticality for the neutron transport equation is analyzed. The problem is transformed into an equivalent problem in a suitable set of complex functions, and the existence and uniqueness of its solution are demonstrated. The source iteration method is discussed. It is shown that the final result of the iterative process is strongly affected by the insufficient accuracy of the individual iterations. A modified method based on the theory of positive operators is suggested to circumvent this problem; the criticality problem is solved by the Monte Carlo method, constructing a special random process and variable so that the difference between the result and the true value can be made arbitrarily small. The efficiency of this alternative method is analysed

  9. On solution to the problem of criticality by alternative MONTE CARLO method

    International Nuclear Information System (INIS)

    Kyncl, J.

    2005-01-01

    The contribution deals with the solution of the criticality problem for the neutron transport equation. The problem is transformed into an equivalent one in a suitable set of complex functions, and the existence and uniqueness of its solution are shown. The source iteration method of solution is then discussed. It is pointed out that the final result of the iterative process is strongly affected by the fact that the individual iterations are not computed with sufficient accuracy. To avoid this problem a modified method of solution is suggested and presented. The modification is based on results of the theory of positive operators, and the criticality problem is solved by the Monte Carlo method by constructing a special random process and variable so that the differences between the results obtained and the exact ones are arbitrarily small. The efficiency of this alternative method is analysed as well (Author)

  10. Asteroseismology of Kepler ZZ Ceti Stars with Fully Evolutionary Models

    Science.gov (United States)

    Romero, A. D.; Córsico, A. H.; Castanheira, B. G.; De Gerónimo, F. C.; Kepler, S. O.; Althaus, L. G.; Koester, D.; Kawka, A.; Gianninas, A.; Bonato, C.

    2017-03-01

    Recently the Kepler spacecraft observed ZZ Ceti stars, giving the opportunity to study their variability over long baselines. We present a study of the pulsational properties of two ZZ Ceti stars observed with the Kepler spacecraft, GD 1212 and SDSS J113655.17+040952.6, based on a grid of full evolutionary models of DA white dwarf stars characterized by detailed and consistent inner chemical profiles. For J113655.17+040952.6 we found values of gravity and effective temperature in good agreement with spectroscopy. For GD 1212 the asteroseismological fits show a stellar mass higher than the spectroscopic value, but in agreement with the determinations from photometry coupled with parallax.

  11. Model of electronic energy relaxation in the test-particle Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Roblin, P.; Rosengard, A. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes d`Enrichissement; Nguyen, T.T. [Compagnie Internationale de Services en Informatique (CISI) - Centre d`Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1994-12-31

    We previously presented a new test-particle Monte Carlo method (1) (which we call PTMC), an iterative method for solving the Boltzmann equation, now improved and very well suited to collisional steady gas flows. Here, we apply a statistical method, described by Anderson (2), to treat electronic-translational energy transfer by a collisional process in atomic uranium vapor. For our study, only three levels of its multiple energy states are considered: 0 and 620 cm⁻¹, plus an average level grouping the upper levels. After presenting two-dimensional results, we apply this model to the evaporation of uranium by electron bombardment and show that the PTMC results, for given initial electronic temperatures, are in good agreement with experimental radial velocity measurements. (author). 12 refs., 1 fig.

  12. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems

    DEFF Research Database (Denmark)

    Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.

    2002-01-01

    A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity embedded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach...... from the sample will have a finite spatial coherence that cannot be accounted for by MC simulation. To estimate this intensity distribution adequately we have developed a novel method for modeling a focused Gaussian beam in MC simulation. This approach is valid for a softly as well as for a strongly...... focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently...

  13. Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions

    International Nuclear Information System (INIS)

    Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.

    2010-01-01

    A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
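
    The sampling scheme described here can be mimicked in a few lines: each input quantity is drawn from its assigned probability density (Gaussian for a resonance energy, lognormal for a resonance strength), the rate is recomputed for every sample, the 0.16, 0.50, and 0.84 quantiles of the resulting distribution give the low, median, and high rates, and a lognormal is matched to the output. The resonance parameters below are invented and the constant factors of the narrow-resonance formula are dropped; they are not the inputs of Papers II and III.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented narrow-resonance inputs: resonance energy Er (MeV, Gaussian uncertainty)
# and resonance strength wg (MeV, lognormal "factor" uncertainty).
resonances = [  # (Er_mean, Er_sigma, wg_median, wg_factor_uncertainty)
    (0.150, 0.002, 1.0e-7, 1.3),
    (0.420, 0.005, 4.0e-6, 1.2),
]
T9, n_samples = 0.5, 100_000          # temperature in GK, Monte Carlo samples
kT = 0.086173 * T9                    # MeV

rates = np.zeros(n_samples)
for Er0, dEr, wg0, fu in resonances:
    Er = rng.normal(Er0, dEr, n_samples)                  # Gaussian pdf
    wg = wg0 * rng.lognormal(0.0, np.log(fu), n_samples)  # lognormal pdf
    rates += wg * np.exp(-Er / kT)    # narrow-resonance term (constant factors dropped)

low, med, high = np.quantile(rates, [0.16, 0.50, 0.84])
# Lognormal approximation of the output rate distribution.
mu, sigma = np.mean(np.log(rates)), np.std(np.log(rates))
print(f"low/median/high rate: {low:.3e} / {med:.3e} / {high:.3e}")
print(f"lognormal parameters: mu = {mu:.3f}, sigma = {sigma:.3f}")
```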

  14. Quantum Monte Carlo methods and strongly correlated electrons on honeycomb structures

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Thomas C.

    2010-12-16

    In this thesis we apply recently developed, as well as sophisticated quantum Monte Carlo methods to numerically investigate models of strongly correlated electron systems on honeycomb structures. The latter are of particular interest owing to their unique properties when simulating electrons on them, like the relativistic dispersion, strong quantum fluctuations and their resistance against instabilities. This work covers several projects including the advancement of the weak-coupling continuous time quantum Monte Carlo and its application to zero temperature and phonons, quantum phase transitions of valence bond solids in spin-1/2 Heisenberg systems using projector quantum Monte Carlo in the valence bond basis, and the magnetic field induced transition to a canted antiferromagnet of the Hubbard model on the honeycomb lattice. The emphasis lies on two projects investigating the phase diagram of the SU(2) and the SU(N)-symmetric Hubbard model on the hexagonal lattice. At sufficiently low temperatures, condensed-matter systems tend to develop order. An exception are quantum spin-liquids, where fluctuations prevent a transition to an ordered state down to the lowest temperatures. Previously elusive in experimentally relevant microscopic two-dimensional models, we show by means of large-scale quantum Monte Carlo simulations of the SU(2) Hubbard model on the honeycomb lattice, that a quantum spin-liquid emerges between the state described by massless Dirac fermions and an antiferromagnetically ordered Mott insulator. This unexpected quantum-disordered state is found to be a short-range resonating valence bond liquid, akin to the one proposed for high temperature superconductors. Inspired by the rich phase diagrams of SU(N) models we study the SU(N)-symmetric Hubbard Heisenberg quantum antiferromagnet on the honeycomb lattice to investigate the reliability of 1/N corrections to large-N results by means of numerically exact QMC simulations. We study the melting of phases

  15. Application of Monte Carlo method for dose calculation in thyroid follicle

    International Nuclear Information System (INIS)

    Silva, Frank Sinatra Gomes da

    2008-02-01

    The Monte Carlo method is an important tool to simulate the interaction of radioactive particles with biological media. The principal advantage of the method, when compared with deterministic methods, is the ability to simulate complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they have the capacity to simulate energy deposition in models of organs and/or tissues, as well as in models of the cells of the human body. The calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is thus of fundamental importance to dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in the case of nuclear accidents. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles, for Auger electrons, internal conversion electrons and beta particles, from iodine-131 and short-lived iodines (131, 132, 133, 134 and 135), with diameters varying from 30 to 500 μm. The results obtained from simulation with the MCNP4C code show that, on average, 25% of the total dose absorbed by the colloid is due to iodine-131 and 75% to the short-lived iodines. For follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from particles with low energies, like Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare the doses obtained by the codes MCNP4C, EPOTRAN and EGS4 and by deterministic methods. (author)

  16. The effect of a number of selective points in modeling of polymerization reacting Monte Carlo method: studying the initiation reaction

    CERN Document Server

    Sadi, M; Dabir, B

    2003-01-01

    The Monte Carlo method is one of the most powerful techniques for modelling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. The number of algorithm repetitions (the selected volume of the reactor used for the modelling, which determines the number of initial molecules) is very important in this method. In the Monte Carlo method, calculations are based on random number generation and the determination of reaction probabilities, so the number of algorithm repetitions is very important. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the result was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not big enough, because in that case the selected volume would not be representative of the whole system.

  17. Design space development for the extraction process of Danhong injection using a Monte Carlo simulation method.

    Directory of Open Access Journals (Sweden)

    Xingchu Gong

    Full Text Available A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte-Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. Higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10,000 simulation times, 0.01 calculation step length, a significance level value of 0.35 for adding or removing terms in a stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 to attain the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g of W/M ratio, 1.25-1.63 h of extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes.

  18. Design space development for the extraction process of Danhong injection using a Monte Carlo simulation method.

    Science.gov (United States)

    Gong, Xingchu; Li, Yao; Chen, Huali; Qu, Haibin

    2015-01-01

    A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte-Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. Higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10,000 simulation times, 0.01 calculation step length, a significance level value of 0.35 for adding or removing terms in a stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 to attain the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g of W/M ratio, 1.25-1.63 h of extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes.
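
    The probability-based design space calculation can be sketched as follows: for each candidate combination of the CPPs, the fitted quadratic model predicts a CQA, prediction uncertainty is added by random sampling, and the fraction of samples meeting the acceptance criterion is recorded; settings with a probability above 0.95 belong to the design space. The quadratic coefficients, noise level, and acceptance limit below are placeholders, not the published Danhong models.

```python
import numpy as np

rng = np.random.default_rng(5)

def predicted_yield(wm, t, n):
    """Placeholder quadratic model for one CQA (an active-ingredient yield, mg/g)."""
    return (36.0 + 0.9 * (wm - 6.0) + 4.0 * (t - 1.0) + 3.0 * (n - 1)
            - 0.05 * (wm - 6.0) ** 2 - 1.0 * (t - 1.0) ** 2)

def probability_of_success(wm, t, n, n_sim=10_000, noise_sd=2.0, limit=40.0):
    """Fraction of Monte Carlo samples in which the predicted CQA meets its criterion."""
    samples = predicted_yield(wm, t, n) + rng.normal(0.0, noise_sd, n_sim)
    return np.mean(samples >= limit)

# Scan the CPP grid: W/M ratio (g/g), extraction time (h), extraction number.
for wm in np.arange(6.0, 12.1, 1.0):
    for t in np.arange(1.0, 2.01, 0.25):
        p = probability_of_success(wm, t, n=2)
        flag = "in design space" if p > 0.95 else ""
        print(f"W/M={wm:4.1f}  t={t:4.2f} h  n=2  P={p:.3f}  {flag}")
```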

  19. Assessment of probabilistic distributed factors influencing renewable energy supply for hotels using Monte-Carlo methods

    International Nuclear Information System (INIS)

    Meschede, Henning; Dunkelberg, Heiko; Stöhr, Fabian; Peesel, Ron-Hendrik; Hesselbach, Jens

    2017-01-01

    This paper investigates the use of renewable energies to supply hotels in island regions. The aim is to evaluate the effect of weather and occupancy fluctuations on the sensitivity of investment criteria. The sensitivity of the chosen energy system is examined using a Monte Carlo simulation considering stochastic weather data, occupancy rates and energy needs. For this purpose, algorithms based on measured data are developed and applied to a case study on the Canary Islands. The results underline that electricity use in hotels is by far the largest contributor to their overall energy cost. For the investigated hotel on the Canary Islands, the optimal share of renewable electricity generation is found to be 63%, split into 67% photovoltaic and 33% wind power. Furthermore, a battery is used to balance the differences between day and night. It is found that the results are sensitive to weather fluctuations and to economic parameters to about the same degree. The results underline the risk caused by using reference time series for designing energy systems. The Monte Carlo method helps to define the mean of the annuity more precisely and to rate the risk of fluctuating weather and occupancy better. - Highlights: • An approach to generate synthetic weather data was pointed out. • A methodology to create synthetic energy demand data for hotels was developed. • The influence on the sensitivity of renewable energy systems was analysed. • Fluctuations in weather data have a greater impact on the economy than occupancy.

  20. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of the equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
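
    The trade-off described here is easy to reproduce numerically: run a Metropolis chain of fixed length (fixed CPU budget), thin it with different sampling intervals, and compare the variance of the ensemble average across independent repeats. The target below is a simple one-dimensional Gaussian, chosen only to make the correlation structure explicit; it is not the system studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def metropolis_chain(n_steps, step=0.5):
    """Random-walk Metropolis sampling of a standard normal target."""
    x, chain = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        prop = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-0.5 * (prop**2 - x**2)):  # Metropolis acceptance
            x = prop
        chain[i] = x
    return chain

n_steps, n_repeats = 10_000, 100
chains = [metropolis_chain(n_steps) for _ in range(n_repeats)]

# For a fixed chain length, thin with different sampling intervals and compare
# the variance of the ensemble average <x> across independent repeats.
for interval in (1, 5, 20, 100):
    means = [c[::interval].mean() for c in chains]
    print(f"interval {interval:4d}: sample size {n_steps // interval:6d}, "
          f"variance of the mean = {np.var(means):.2e}")
```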

  1. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot

    International Nuclear Information System (INIS)

    Wang Yongbo; Wu Huapeng; Handroos, Heikki

    2011-01-01

    This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), among which six DOF are contributed by the parallel mechanism and the rest are from the serial mechanism. In this paper, a kinematic error model which involves 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of the Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results of the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.

  2. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot

    Energy Technology Data Exchange (ETDEWEB)

    Wang Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machine, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu Huapeng; Handroos, Heikki [Laboratory of Intelligent Machine, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2011-10-15

    This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), among which six DOF are contributed by the parallel mechanism and the rest are from the serial mechanism. In this paper, a kinematic error model which involves 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of the Markov Chain Monte Carlo (MCMC) approach. The computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results of the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.
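
    A generic version of the estimation step, not the 54-parameter IWR kinematic error model, is sketched below: simulated measurements are generated from assumed 'true' geometric errors, and a random-walk Metropolis sampler draws from the posterior of the error parameters, whose marginal means serve as the calibration estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy forward kinematics: the measured pose offset depends linearly on two
# geometric error parameters through pose-dependent coefficients (invented model).
def forward(theta, poses):
    return poses @ theta               # poses: (n, 2), theta: (2,)

n_poses, sigma_meas = 30, 0.05
poses = rng.uniform(-1.0, 1.0, (n_poses, 2))
theta_true = np.array([0.12, -0.35])   # "true" geometric errors (unknown in practice)
y = forward(theta_true, poses) + rng.normal(0.0, sigma_meas, n_poses)

def log_post(theta):
    """Gaussian likelihood plus a broad Gaussian prior on the error parameters."""
    resid = y - forward(theta, poses)
    return -0.5 * np.sum(resid**2) / sigma_meas**2 - 0.5 * np.sum(theta**2)

theta, samples = np.zeros(2), []
lp = log_post(theta)
for i in range(20_000):
    prop = theta + rng.normal(0.0, 0.02, 2)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis acceptance
        theta, lp = prop, lp_prop
    if i > 5_000:                                    # discard burn-in
        samples.append(theta.copy())

samples = np.array(samples)
print("posterior means:", samples.mean(axis=0), "true:", theta_true)
```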

  3. The Linked Neighbour List (LNL) method for fast off-lattice Monte Carlo simulations of fluids

    Science.gov (United States)

    Mazzeo, M. D.; Ricci, M.; Zannoni, C.

    2010-03-01

    We present a new algorithm, called linked neighbour list (LNL), useful to substantially speed up off-lattice Monte Carlo simulations of fluids by avoiding the computation of the molecular energy before every attempted move. We introduce a few variants of the LNL method targeted to minimise memory footprint or augment memory coherence and cache utilisation. Additionally, we present a few algorithms which drastically accelerate neighbour finding. We test our methods on the simulation of a dense off-lattice Gay-Berne fluid subjected to periodic boundary conditions observing a speedup factor of about 2.5 with respect to a well-coded implementation based on a conventional link-cell. We provide several implementation details of the different key data structures and algorithms used in this work.

  4. Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method

    Science.gov (United States)

    Boyd, Iain D.

    1991-01-01

    A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.

  5. DSMC calculations for the double ellipse. [direct simulation Monte Carlo method

    Science.gov (United States)

    Moss, James N.; Price, Joseph M.; Celenligil, M. Cevdet

    1990-01-01

    The direct simulation Monte Carlo (DSMC) method involves the simultaneous computation of the trajectories of thousands of simulated molecules in simulated physical space. Rarefied flow about the double ellipse for test case 6.4.1 has been calculated with the DSMC method of Bird. The gas is assumed to be nonreacting nitrogen flowing at a 30 degree incidence with respect to the body axis, and for the surface boundary conditions, the wall is assumed to be diffuse with full thermal accommodation and at a constant wall temperature of 620 K. A parametric study is presented that considers the effect of variations of computational domain, gas model, cell size, and freestream density on surface quantities.

  6. New one-flavor hybrid Monte Carlo simulation method for lattice fermions with γ5 hermiticity

    International Nuclear Information System (INIS)

    Ogawa, Kenji

    2011-01-01

    We propose a new method for Hybrid Monte Carlo (HMC) simulations with odd numbers of dynamical fermions on the lattice. It employs a different approach from polynomial or rational HMC. In this method, γ5 hermiticity of the lattice Dirac operators is crucial and it can be applied to Wilson, domain-wall, and overlap fermions. We compare HMC simulations with two degenerate flavors and (1+1) degenerate flavors using optimal domain-wall fermions. The ratio of the efficiency, (number of accepted trajectories)/(simulation time), is about 3:2. The relation between the pseudofermion action of chirally symmetric lattice fermions in the four-dimensional (overlap) and five-dimensional (domain-wall) representations is also analyzed.

  7. Reproduction of the coincidence effect in gamma ray spectrometry by using Monte Carlo method

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, J. K.; Lee, S. H.

    2001-01-01

    Scintillation detectors, such as NaI(Tl), and semiconductor detectors, such as HPGe, are used for the measurement and assessment of radiation type and activity, based on the measured energy spectrum. Corrections are needed for true coincidence, which occurs when more than two photons are emitted at the same time, and for random coincidence, which arises from the measuring system as the radiation intensity increases. For accurate assessment, measurements are performed with an adequate measuring system, and coincidence corrections are applied in both hardware and software. Since measurements for radiation assessment are in general subject to limitations and difficulties, computational simulation is used instead; simulation has many advantages over measurement in technical, time and financial terms, and is therefore widely used in its place. In this study, a method to reproduce the coincidence effect using the Monte Carlo method is proposed.
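
    True-coincidence summing for a two-step gamma cascade can be reproduced with a very small Monte Carlo in the spirit of this record: for each decay, both photons are tracked with assumed full-energy detection probabilities, and simultaneous detection fills a sum peak while depleting the single-photon peaks. The cascade energies and efficiencies below are illustrative and do not correspond to a specific detector.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two-photon cascade (Co-60-like energies in keV) and hypothetical
# full-energy-peak detection efficiencies for each photon.
E1, E2 = 1173.0, 1332.0
eff1, eff2 = 0.03, 0.025
n_decays = 2_000_000

hit1 = rng.random(n_decays) < eff1     # photon 1 deposits its full energy
hit2 = rng.random(n_decays) < eff2     # photon 2 deposits its full energy

peak1 = np.count_nonzero(hit1 & ~hit2)          # counts left in the E1 peak
peak2 = np.count_nonzero(hit2 & ~hit1)          # counts left in the E2 peak
sum_peak = np.count_nonzero(hit1 & hit2)        # true-coincidence sum peak at E1+E2

print(f"E1 peak: {peak1}  (counts lost to summing: {sum_peak})")
print(f"E2 peak: {peak2}")
print(f"sum peak at {E1 + E2:.0f} keV: {sum_peak}")
```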

  8. Adaptive Splitting Integrators for Enhancing Sampling Efficiency of Modified Hamiltonian Monte Carlo Methods in Molecular Simulation.

    Science.gov (United States)

    Akhmatskaya, Elena; Fernández-Pendás, Mario; Radivojević, Tijana; Sanz-Serna, J M

    2017-10-24

    The modified Hamiltonian Monte Carlo (MHMC) methods, i.e., importance sampling methods that use modified Hamiltonians within a Hybrid Monte Carlo (HMC) framework, often outperform in sampling efficiency standard techniques such as molecular dynamics (MD) and HMC. The performance of MHMC may be enhanced further through the rational choice of the simulation parameters and by replacing the standard Verlet integrator with more sophisticated splitting algorithms. Unfortunately, it is not easy to identify the appropriate values of the parameters that appear in those algorithms. We propose a technique, that we call MAIA (Modified Adaptive Integration Approach), which, for a given simulation system and a given time step, automatically selects the optimal integrator within a useful family of two-stage splitting formulas. Extended MAIA (or e-MAIA) is an enhanced version of MAIA, which additionally supplies a value of the method-specific parameter that, for the problem under consideration, keeps the momentum acceptance rate at a user-desired level. The MAIA and e-MAIA algorithms have been implemented, with no computational overhead during simulations, in MultiHMC-GROMACS, a modified version of the popular software package GROMACS. Tests performed on well-known molecular models demonstrate the superiority of the suggested approaches over a range of integrators (both standard and recently developed), as well as their capacity to improve the sampling efficiency of GSHMC, a noticeable method for molecular simulation in the MHMC family. GSHMC combined with e-MAIA shows a remarkably good performance when compared to MD and HMC coupled with the appropriate adaptive integrators.

  9. Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Peplow, Douglas E. [ORNL; Miller, Thomas Martin [ORNL; Patton, Bruce W [ORNL; Wagner, John C [ORNL

    2013-01-01

    The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.

  10. Theory and applications of the fission matrix method for continuous-energy Monte Carlo

    International Nuclear Information System (INIS)

    Carney, Sean; Brown, Forrest; Kiedrowski, Brian; Martin, William

    2014-01-01

    Highlights: • The fission matrix method is implemented into the MCNP Monte Carlo code. • Eigenfunctions and eigenvalues of power distributions are shown and studied. • Source convergence acceleration is demonstrated for a fuel storage vault problem. • Forward flux eigenmodes and relative uncertainties are shown for a reactor problem. • Eigenmodes expansions are performed during source convergence for a reactor problem. - Abstract: The fission matrix method can be used to provide estimates of the fundamental mode fission distribution, the dominance ratio, the eigenvalue spectrum, and higher mode forward and adjoint eigenfunctions of the fission distribution. It can also be used to accelerate the convergence of power method iterations and to provide basis functions for higher-order perturbation theory. The higher-mode fission sources can be used to determine higher-mode forward fluxes and tallies, and work is underway to provide higher-mode adjoint-weighted fluxes and tallies. These aspects of the method are here both theoretically justified and demonstrated, and then used to investigate fundamental properties of the transport equation for a continuous-energy physics treatment. Implementation into the MCNP6 Monte Carlo code is also discussed, including a sparse representation of the fission matrix, which permits much larger and more accurate representations. Properties of the calculated eigenvalue spectrum of a 2D PWR problem are discussed: for a fine enough mesh and a sufficient degree of sampling, the spectrum both converges and has a negligible imaginary component. Calculation of the fundamental mode of the fission matrix for a fuel storage vault problem shows how convergence can be accelerated by over a factor of ten given a flat initial distribution. Forward fluxes and the relative uncertainties for a 2D PWR are shown, both of which qualitatively agree with expectation. Lastly, eigenmode expansions are performed during source convergence of the 2D PWR
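
    The core operation of the fission matrix method, extracting the fundamental mode, the eigenvalue, and the dominance ratio from a region-to-region fission matrix, can be illustrated with a small dense example. The matrix below is an invented 1D slab-like coupling matrix, not one tallied by MCNP6.

```python
import numpy as np

# Toy fission matrix F: F[i, j] = expected fission neutrons born in region i per
# fission neutron born in region j (invented coupling, not an MCNP tally).
n = 20
idx = np.arange(n)
F = 1.02 * np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2 / 2.0**2)
F /= F.sum(axis=0).max()                 # loose normalization, keeps k near 1

def power_iteration(F, tol=1e-12, max_iter=100_000):
    """Fundamental eigenvalue and fission-source eigenvector by power iteration."""
    s = np.ones(F.shape[0]) / F.shape[0]
    k = 0.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum() / s.sum()    # generation-to-generation source ratio
        s_new /= s_new.sum()
        if abs(k_new - k) < tol:
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

k_eff, source = power_iteration(F)
eigvals = np.sort(np.abs(np.linalg.eigvals(F)))[::-1]
print(f"k from power iteration: {k_eff:.6f}   (direct eigenvalue: {eigvals[0]:.6f})")
print(f"dominance ratio k2/k1:  {eigvals[1] / eigvals[0]:.4f}")
print("fundamental fission-source shape:", np.round(source, 4))
```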

  11. Coherent-wave Monte Carlo method for simulating light propagation in tissue

    Science.gov (United States)

    Kraszewski, Maciej; Pluciński, Jerzy

    2016-03-01

    Simulating the propagation and scattering of coherent light in turbid media, such as biological tissue, is a complex problem. Numerical methods for solving the Helmholtz or wave equation (e.g., finite-difference or finite-element methods) require a large amount of computer memory and long computation times, which makes them impractical for simulating laser beam propagation into deep layers of tissue. Another group of methods, based on the radiative transfer equation, can simulate only light propagation averaged over the ensemble of turbid-medium realizations, which makes them unsuitable for simulating phenomena connected to the coherence properties of light. We propose a new method for simulating the propagation of coherent light (e.g., a laser beam) in biological tissue, which we call the coherent-wave Monte Carlo method. The method is based on direct computation of the optical interaction between scatterers inside the random medium, which reduces the amount of memory and computation time required for the simulation. We present the theoretical basis of the proposed method and compare it with finite-difference methods for simulating light propagation in scattering media in the Rayleigh approximation regime.

  12. Learning Algorithm of Boltzmann Machine Based on Spatial Monte Carlo Integration Method

    Directory of Open Access Journals (Sweden)

    Muneki Yasuda

    2018-04-01

    Full Text Available The machine learning techniques for Markov random fields are fundamental in various fields involving pattern recognition, image processing, sparse modeling, and earth science, and a Boltzmann machine is one of the most important models in Markov random fields. However, the inference and learning problems in the Boltzmann machine are NP-hard. The investigation of an effective learning algorithm for the Boltzmann machine is one of the most important challenges in the field of statistical machine learning. In this paper, we study Boltzmann machine learning based on the (first-order) spatial Monte Carlo integration method, referred to as the 1-SMCI learning method, which was proposed in the author’s previous paper. In the first part of this paper, we compare the method with the maximum pseudo-likelihood estimation (MPLE) method using theoretical and numerical approaches, and show that the 1-SMCI learning method is more effective than MPLE. In the latter part, we compare the 1-SMCI learning method with other effective methods, ratio matching and minimum probability flow, using a numerical experiment, and show that the 1-SMCI learning method outperforms them.
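
    For context, maximum-likelihood learning of a fully visible Boltzmann machine needs the model expectations <x_i x_j>, which is where sampling-based approximations enter. The sketch below estimates those expectations with plain Gibbs sampling and takes one gradient step; it is only the generic sampling baseline that methods such as MPLE and the 1-SMCI scheme aim to improve on, not the 1-SMCI algorithm itself, and all sizes and hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_data, n_sweeps = 8, 200, 500

# Toy "data" and an initial coupling matrix (symmetric, zero diagonal).
data = rng.choice([-1.0, 1.0], size=(n_data, n))
W = 0.01 * rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def gibbs_correlations(W, n_sweeps, rng):
    """Estimate model expectations <x_i x_j> by Gibbs sampling, P(x) ~ exp(x^T W x / 2)."""
    x = rng.choice([-1.0, 1.0], size=n)
    corr = np.zeros((n, n))
    for _ in range(n_sweeps):
        for i in range(n):
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * W[i] @ x))  # P(x_i = +1 | rest)
            x[i] = 1.0 if rng.random() < p_plus else -1.0
        corr += np.outer(x, x)
    return corr / n_sweeps

data_corr = data.T @ data / n_data           # <x_i x_j> under the data
model_corr = gibbs_correlations(W, n_sweeps, rng)

lr = 0.05
W += lr * (data_corr - model_corr)           # one log-likelihood gradient ascent step
np.fill_diagonal(W, 0.0)
print("max |gradient| element:", np.abs(data_corr - model_corr).max())
```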

  13. Study of the $ZZ$ diboson production at CDF II

    Energy Technology Data Exchange (ETDEWEB)

    Bauce, Matteo [Univ. of Padua (Italy)

    2013-01-01

    The subject of this thesis is the production of a pair of massive Z vector bosons in proton-antiproton collisions at the Tevatron, at the center-of-mass energy √s = 1.96 TeV. We measure the ZZ production cross section in two different leptonic decay modes: into four charged leptons (e or μ) and into two charged leptons plus two neutrinos. The results are based on the whole dataset collected by the Collider Detector at Fermilab (CDF), corresponding to 9.7 fb-1 of data. The combination of the two cross section measurements gives σ(p$\bar{p}$ → ZZ) = 1.38 +0.28/-0.27 pb, which is the most precise ZZ cross section measurement at the Tevatron to date. We further investigate the four-lepton final state, searching for the production of the scalar Higgs particle in the decay H →ZZ(*) →ℓℓℓ'ℓ'. No evidence of its production has been seen in the data, hence a 95% confidence level upper limit was set on its production cross section as a function of the Higgs particle mass, mH, in the range from 120 to 300 GeV/c2.

  14. Forwards and Backwards Modelling of Ashfall Hazards in New Zealand by Monte Carlo Methods

    Science.gov (United States)

    Hurst, T.; Smith, W. D.; Bibby, H. M.

    2003-12-01

    We have developed a technique for quantifying the probability of particular thicknesses of airfall ash from a volcanic eruption at any given site, using Monte Carlo methods, for hazards planning and insurance purposes. We use an established program (ASHFALL) to model individual eruptions, where the likely thickness of ash deposited at selected sites depends on the location of the volcano, eruptive volume, column height and ash size, and the wind conditions. A Monte Carlo formulation then allows us to simulate the variations in eruptive volume and in wind conditions by analysing repeat eruptions, each time allowing the parameters to vary randomly according to known or assumed distributions. Actual wind velocity profiles are used, with randomness included by selection of a starting date. We show how this method can handle the effects of multiple volcanic sources by aggregation, each source with its own characteristics. This follows a similar procedure which we have used for earthquake hazard assessment. The result is estimates of the frequency with which any given depth of ash is likely to be deposited at the selected site, accounting for all volcanoes that might affect it. These numbers are expressed as annual probabilities or as mean return periods. We can also use this method for obtaining an estimate of how often and how large the eruptions from a particular volcano have been. Results from ash cores in Auckland can give useful bounds for the likely total volumes erupted from the volcano Mt Egmont/Mt Taranaki, 280 km away, during the last 140,000 years, information difficult to obtain from local tephra stratigraphy.
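
    A stripped-down sketch of this kind of Monte Carlo aggregation is given below: each simulated year draws an eruption occurrence, an eruptive volume, and wind conditions, and a hypothetical ash_thickness function stands in for the ASHFALL dispersion model (whose real inputs also include column height and grain size). Annual exceedance probabilities and return periods are then simple counts; every distribution and number here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_years = 200_000
annual_eruption_prob = 1.0 / 500.0        # hypothetical eruption frequency
site_bearing_deg = 90.0                   # site lies due east of the volcano
thresholds_mm = [1.0, 10.0, 100.0]

def ash_thickness(volume_km3, wind_speed, wind_dir_deg):
    """Hypothetical stand-in for ASHFALL: thicker deposits for larger volumes
    and when the wind blows toward the site."""
    alignment = max(0.0, np.cos(np.radians(wind_dir_deg - site_bearing_deg)))
    return 1000.0 * volume_km3 * alignment * wind_speed / (wind_speed + 10.0)

exceed = np.zeros(len(thresholds_mm))
for _ in range(n_years):
    if rng.random() > annual_eruption_prob:
        continue
    volume = rng.lognormal(mean=np.log(0.1), sigma=1.0)    # km^3, invented
    wind_speed = rng.gamma(shape=2.0, scale=5.0)           # m/s, invented
    wind_dir = rng.uniform(0.0, 360.0)                     # degrees, invented
    t = ash_thickness(volume, wind_speed, wind_dir)
    exceed += [t >= th for th in thresholds_mm]

for th, count in zip(thresholds_mm, exceed):
    p = count / n_years
    rp = (1.0 / p) if p > 0 else float("inf")
    print(f">= {th:6.1f} mm : annual probability {p:.2e}, return period {rp:,.0f} yr")
```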

  15. Contrasting Accreting White Dwarf Pulsators with the ZZ Ceti Stars

    Science.gov (United States)

    Mukadam, A. S.; Szkody, P.; Gänsicke, B. T.; Pala, A.

    2017-03-01

    Understanding the similarities and differences between the accreting white dwarf pulsators and their non-interacting counterparts, the ZZ Ceti stars, will eventually help us deduce how accretion affects pulsations. ZZ Ceti stars pulsate in a narrow instability strip in the range 10800-12300 K due to H ionization in their pure H envelopes; their pulsation characteristics depend on their temperature and stellar mass. Models of accreting white dwarfs are found to be pulsationally unstable due to the H/HeI ionization zone, and even show a second instability strip around 15000 K due to HeII ionization. Both these strips are expected to merge for a He abundance higher than 0.48 to form a broad instability strip, which is consistent with the empirical determination of 10500-16000 K. Accreting pulsators undergo outbursts, during which the white dwarf is heated to temperatures well beyond the instability strip and is observed to cease pulsations. The white dwarf then cools to quiescence in a few years as its outer layers cool more than a million times faster than the evolutionary rate. This provides us with an exceptional opportunity to track the evolution of pulsations from the blue edge to quiescence in a few years, while ZZ Ceti stars evolve on Myr timescales. Some accreting pulsators have also been observed to cease pulsations without any apparent evidence of an outburst. This is a distinct difference between this class of pulsators and the non-interacting ZZ Ceti stars. While the ZZ Ceti instability strip is well sampled, the strip for the accreting white dwarfs is sparsely sampled and we hereby add two new potential discoveries to improve the statistics.

  16. The FLUKA code for application of Monte Carlo methods to promote high precision ion beam therapy

    CERN Document Server

    Parodi, K; Cerutti, F; Ferrari, A; Mairani, A; Paganetti, H; Sommerer, F

    2010-01-01

    Monte Carlo (MC) methods are increasingly being utilized to support several aspects of commissioning and clinical operation of ion beam therapy facilities. In this contribution two emerging areas of MC applications are outlined. The value of MC modeling to promote accurate treatment planning is addressed via examples of application of the FLUKA code to proton and carbon ion therapy at the Heidelberg Ion Beam Therapy Center in Heidelberg, Germany, and at the Proton Therapy Center of Massachusetts General Hospital (MGH) Boston, USA. These include generation of basic data for input into the treatment planning system (TPS) and validation of the TPS analytical pencil-beam dose computations. Moreover, we review the implementation of PET/CT (Positron-Emission-Tomography / Computed-Tomography) imaging for in-vivo verification of proton therapy at MGH. Here, MC is used to calculate irradiation-induced positron-emitter production in tissue for comparison with the β+-activity measurement in order to infer indirect infor...

  17. The Calculation of Thermal Conductivities by Three Dimensional Direct Simulation Monte Carlo Method.

    Science.gov (United States)

    Zhao, Xin-Peng; Li, Zeng-Yao; Liu, He; Tao, Wen-Quan

    2015-04-01

    A three-dimensional direct simulation Monte Carlo (DSMC) method with the variable soft sphere (VSS) collision model is implemented to solve the Boltzmann equation and to obtain the heat flux between two parallel plates (Fourier flow). The gaseous thermal conductivity of nitrogen is derived from Fourier's law under local equilibrium conditions at temperatures from 270 to 1800 K and pressures from 0.5 to 100,000 Pa, and compared with experimental data and with the Eucken relation from Chapman-Enskog (CE) theory. The present results are consistent with the experimental data but are much higher than those given by the Eucken relation, especially at high temperature. The contribution of the internal energy of the molecules to the gaseous thermal conductivity becomes significant as the temperature increases.
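
    The post-processing step referred to above is simple: once DSMC provides a steady heat flux between the plates, Fourier's law yields the conductivity, which can then be compared with the Eucken estimate. The sketch below uses invented plate temperatures, spacing, and sampled heat flux (not values from the paper); the nitrogen viscosity and heat capacity are typical room-temperature figures.

```python
# Thermal conductivity from a (hypothetical) DSMC-sampled heat flux via Fourier's law,
# compared with the Eucken relation k = mu * (Cv_molar + 9R/4) / M.
R = 8.314           # J/(mol K)
M = 0.028           # kg/mol, nitrogen
Cv_molar = 2.5 * R  # diatomic gas without vibration, near room temperature
mu = 1.78e-5        # Pa s, nitrogen near 300 K

T_hot, T_cold = 320.0, 280.0   # plate temperatures (K), invented
gap = 1.0e-3                   # plate separation (m), invented
q_dsmc = 1.05e3                # DSMC-sampled heat flux (W/m^2), invented

k_dsmc = q_dsmc * gap / (T_hot - T_cold)      # Fourier's law: q = k * dT/dx
k_eucken = mu * (Cv_molar + 2.25 * R) / M

print(f"k from DSMC heat flux : {k_dsmc:.4f} W/(m K)")
print(f"k from Eucken relation: {k_eucken:.4f} W/(m K)")
```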

  18. Monte Carlo method for critical systems in infinite volume: The planar Ising model.

    Science.gov (United States)

    Herdeiro, Victor; Doyon, Benjamin

    2016-10-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated with generating critical distributions on finite lattices. It exploits scale invariance combined with ideas of the renormalization group to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane predictions. We accurately reproduce planar two-, three-, and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
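
    For comparison, the conventional approach that such a scheme improves on is a plain Metropolis simulation of the Ising model on a finite periodic lattice at the critical temperature, where boundary effects contaminate the observables. The sketch below is that standard finite-lattice baseline (single-spin Metropolis on a small lattice), not the holographic boundary construction proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 32
beta = np.log(1.0 + np.sqrt(2.0)) / 2.0     # critical inverse temperature, square lattice
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins, beta, rng):
    """One Metropolis sweep with periodic boundary conditions."""
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for _ in range(200):          # thermalization
    sweep(spins, beta, rng)

mags, energies = [], []
for _ in range(500):          # measurement sweeps
    sweep(spins, beta, rng)
    mags.append(abs(spins.mean()))
    e = -np.sum(spins * (np.roll(spins, 1, 0) + np.roll(spins, 1, 1))) / (L * L)
    energies.append(e)

print("<|m|> =", np.mean(mags), "  <E>/site =", np.mean(energies))
```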

  19. Investigation of a V15 magnetic molecular nanocluster by the Monte Carlo method

    International Nuclear Information System (INIS)

    Khizriev, K. Sh.; Dzhamalutdinova, I. S.; Taaev, T. A.

    2013-01-01

    Exchange interactions in a V 15 magnetic molecular nanocluster are considered, and the process of magnetization reversal for various values of the set of exchange constants is analyzed by the Monte Carlo method. It is shown that the best agreement between the field dependence of susceptibility and experimental results is observed for the following set of exchange interaction constants in a V 15 magnetic molecular nanocluster: J = 500 K, J′ = 150 K, J″ = 225 K, J 1 = 50 K, and J 2 = 50 K. It is observed for the first time that, in a strong magnetic field, for each of the three transitions from low-spin to high-spin states, the heat capacity exhibits two closely spaced maxima

  20. Generation of organic scintillators response function for fast neutrons using the Monte Carlo method

    International Nuclear Information System (INIS)

    Mazzaro, A.C.

    1979-01-01

    A computer program (DALP) in Fortran-4-G language has been developed using the Monte Carlo method to simulate the experimental techniques leading to the distribution of pulse heights due to monoenergetic neutrons reaching an organic scintillator. The pulse height distribution has been calculated for two different systems: 1) monoenergetic neutrons from a point source reaching the flat face of a cylindrical organic scintillator; 2) environmental monoenergetic neutrons randomly reaching either the flat or the curved face of the cylindrical organic scintillator. The computer program has been developed for the NE-213 liquid organic scintillator, but can be easily adapted to any other kind of organic scintillator. With this program one can determine the pulse height distribution for neutron energies ranging from 15 keV to 10 MeV. (Author) [pt

  1. Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Capizzo, M C; Sperandeo-Mineo, R M; Zarcone, M [UoP-PERG, University of Palermo Physics Education Research Group and Dipartimento di Fisica e Tecnologie Relative, Universita di Palermo (Italy)], E-mail: sperandeo@difter.unipa.it

    2008-05-15

    We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models, based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of electric properties of matter with 'virtual experiments' built by using some models where the physical concepts can be presented at different formalization levels.

  2. Investigation of Reliabilities of Bolt Distances for Bolted Structural Steel Connections by Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Ertekin Öztekin Öztekin

    2015-12-01

    Full Text Available The distances of bolts to each other and to the edge of connection plates are designed according to minimum and maximum boundary values proposed by structural codes. In this study, the reliabilities of those distances were investigated. For this purpose, loading types, bolt types, and plate thicknesses were taken as variable parameters. The Monte Carlo simulation (MCS) method was used in the reliability computations performed for all combinations of those parameters. At the end of the study, all reliability index values for those distances are presented in graphics and tables. The results obtained from this study were compared with the values proposed by some structural codes, and some evaluations of those comparisons were made. Finally, it was emphasized that using the same bolt distances in both traditional designs and higher-reliability designs would be incorrect.
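
    A minimal sketch of this kind of Monte Carlo simulation reliability computation: sample random load and resistance variables, count limit-state violations, and convert the failure probability into a reliability index β. The limit-state function and distribution parameters below are invented placeholders, not the bolt-spacing or edge-distance checks of the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 1_000_000

# Hypothetical limit state g = R - S (resistance minus load effect); failure when g < 0.
resistance = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)   # kN, invented
load = rng.normal(loc=180.0, scale=30.0, size=n)                     # kN, invented

g = resistance - load
p_f = np.mean(g < 0.0)                       # Monte Carlo failure probability
beta = -norm.ppf(p_f)                        # corresponding reliability index

print(f"failure probability P_f = {p_f:.2e}")
print(f"reliability index beta  = {beta:.2f}")
```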

  3. An Approach in Radiation Therapy Treatment Planning: A Fast, GPU-Based Monte Carlo Method.

    Science.gov (United States)

    Karbalaee, Mojtaba; Shahbazi-Gahrouei, Daryoush; Tavakoli, Mohammad B

    2017-01-01

    An accurate and fast radiation dose calculation is essential for successful radiotherapy. The aim of this study was to implement a new graphics processing unit (GPU) based radiation therapy treatment planning method for accurate and fast dose calculation in radiotherapy centers. A program was written to run in parallel on a GPU. The code was validated against EGSnrc/DOSXYZnrc. Moreover, a semi-automatic, rotary, asymmetric phantom was designed and produced using bone, lung, and soft-tissue equivalent materials. All measurements were performed using a Mapcheck dosimeter. The accuracy of the code was validated using the experimental data obtained from the anthropomorphic phantom as the gold standard. The findings showed that, compared with DOSXYZnrc in the virtual phantom, the results agreed for most of the voxels (>95%); the GPU-based Monte Carlo method for dose calculation may therefore be useful in routine radiation therapy centers as the core and main component of a treatment planning verification system.

  4. Neutrino emission spectra of collapsing degenerate stellar cores - Calculations by the Monte Carlo method

    International Nuclear Information System (INIS)

    Levitan, Iu.L.; Sobol, I.M.; Khlopov, M.Iu.; Chechetkin, V.M.

    1982-01-01

    The variation of the hard part of the neutrino emission spectra of collapsing degenerate stellar cores with matter having a small optical depth to neutrinos is analyzed. The interaction of neutrinos with the degenerate matter is determined by processes of neutrino scattering on nuclei (without a change in neutrino energy) and neutrino scattering on degenerate electrons, in which the neutrino energy can only decrease. The neutrino emission spectrum of a collapsing stellar core is calculated by the Monte Carlo method for a central density of 10 trillion g/cm3 in the initial stage of the onset of opacity and for a central density of 60 trillion g/cm3 in the stage of deep collapse. In the latter case, the calculation of the spectrum without allowance for the effects of neutrino degeneracy in the central part of the collapsing stellar core corresponds to the maximum possible suppression of the hard part of the neutrino emission spectrum.

  5. Absorbed dose measurements in mammography using Monte Carlo method and ZrO2+PTFE dosemeters

    International Nuclear Information System (INIS)

    Duran M, H. A.; Hernandez O, M.; Salas L, M. A.; Hernandez D, V. M.; Vega C, H. R.; Pinedo S, A.; Ventura M, J.; Chacon, F.; Rivera M, T.

    2009-10-01

    Mammography is a central tool for breast cancer diagnosis. In addition, screening programs are conducted periodically to examine asymptomatic women in certain age groups; these programs have shown a reduction in breast cancer mortality. Early detection of breast cancer is achieved through mammography, which contrasts the glandular and adipose tissue with a probable calcification. The parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot, and anode-filter combination. To achieve a clear image with a minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed in the General Hospital No. 1 IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO 2 +PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; the calculation was completed by computing the absorbed dose. (author)

  6. A study of the dielectric and magnetic properties of multiferroic materials using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    A. Sosa

    2012-03-01

    Full Text Available A study of the dielectric and magnetic properties of multiferroic materials using the Monte Carlo (MC) method is presented. Two different systems are considered: the first, ferroelectric-antiferromagnetic (FE-AFM), recently studied by X. S. Gao and J. M. Liu, and the second, antiferroelectric-ferromagnetic (AFE-FM). Based on the DIFFOUR-Ising hybrid microscopic model developed by Janssen, a Hamiltonian that takes into account the magnetoelectric coupling in both ferroic phases is proposed. The obtained results show that the existence of such coupling modifies the ferroelectric and magnetic ordering in both phases. Additionally, it is shown that the presence of a magnetic or an electric field influences the electric polarization and the magnetization, respectively, making evident the magnetoelectric effect.

  7. Investigation of physical regularities in gamma gamma logging of oil wells by Monte Carlo method

    International Nuclear Information System (INIS)

    Gulin, Yu.A.

    1973-01-01

    Some results are given of calculations by the Monte Carlo method of specific problems of gamma-gamma density logging. The paper considers the influence of probe length and volume density of the rocks; the angular distribution of the scattered radiation incident on the instrument; the spectra of the radiation being recorded and of the source radiation; depths of surveys, the effect of the mud cake, the possibility of collimating the source radiation; the choice of source, initial collimation angles, the optimum angle of recording scattered gamma-radiation and the radiation discrimination threshold; and the possibility of determining the mineralogical composition of rocks in sections of oil wells and of identifying once-scattered radiation. (author)

  8. Numerical solution of DGLAP equations using Laguerre polynomials expansion and Monte Carlo method.

    Science.gov (United States)

    Ghasempour Nesheli, A; Mirjalili, A; Yazdanpanah, M M

    2016-01-01

    We investigate the numerical solutions of the DGLAP evolution equations at the LO and NLO approximations, using the Laguerre polynomials expansion. The theoretical framework is based on Furmanski et al.'s articles. What makes this paper different from other works is that all calculations, at every stage of extracting the evolved parton distributions, are done numerically. The techniques employed for the numerical solutions, based on the Monte Carlo method, yield all results within a reasonable wall-clock time. The algorithms are implemented in FORTRAN and the employed coding ideas can be used in other numerical computations as well. Our results for the evolved parton densities are in good agreement with some phenomenological models. They also indicate better behavior with respect to the results of similar numerical calculations.

  9. Calibration of lung counter using a CT model of Torso phantom and Monte Carlo method

    International Nuclear Information System (INIS)

    Zhang Binquan; Ma Jizeng; Yang Duanjie; Liu Liye; Cheng Jianping

    2006-01-01

    Tomography images of a Torso phantom were obtained from a CT scan. The Torso phantom represents the trunk of an adult man 170 cm tall and weighing 65 kg. After these images were segmented, cropped, and resized, a three-dimensional voxel phantom was created. The voxel phantom includes more than 2 million voxels, each 2.73 mm x 2.73 mm x 3 mm in size. This model could be used for the calibration of a lung counter with the Monte Carlo method. On the assumption that radioactive material was homogeneously distributed throughout the lung, counting efficiencies of an HPGe detector at different positions were calculated for different values of the adipose mass fraction (AMF) in the soft tissue of the chest. The results showed that the counting efficiencies of the lung counter changed by up to 67% for the 17.5 keV γ ray and 20% for the 25 keV γ ray when the AMF changed from 0 to 40%. (authors)

  10. Sequential Monte Carlo Localization Methods in Mobile Wireless Sensor Networks: A Review

    Directory of Open Access Journals (Sweden)

    Ammar M. A. Abu Znaid

    2017-01-01

    Full Text Available The advancement of digital technology has increased the deployment of wireless sensor networks (WSNs in our daily life. However, locating sensor nodes is a challenging task in WSNs. Sensing data without an accurate location is worthless, especially in critical applications. The pioneering technique in range-free localization schemes is a sequential Monte Carlo (SMC method, which utilizes network connectivity to estimate sensor location without additional hardware. This study presents a comprehensive survey of state-of-the-art SMC localization schemes. We present the schemes as a thematic taxonomy of localization operation in SMC. Moreover, the critical characteristics of each existing scheme are analyzed to identify its advantages and disadvantages. The similarities and differences of each scheme are investigated on the basis of significant parameters, namely, localization accuracy, computational cost, communication cost, and number of samples. We discuss the challenges and direction of the future research work for each parameter.
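
    The core loop of range-free SMC localization, in the spirit of the Monte Carlo localization scheme such surveys build on, can be sketched in a few lines: predict particle positions within the node's maximum speed, weight them by consistency with the anchors currently heard, and resample. The sketch below is a simplified toy with binary connectivity, a uniform motion model, and a made-up deployment; it does not reproduce any specific scheme from the survey.

```python
import numpy as np

rng = np.random.default_rng(5)
area, radio_range, v_max, n_particles = 100.0, 25.0, 5.0, 2000

anchors = rng.uniform(0.0, area, size=(6, 2))       # known anchor positions (invented)
true_pos = np.array([40.0, 60.0])                   # unknown node position (invented)
particles = rng.uniform(0.0, area, size=(n_particles, 2))
estimate = particles.mean(axis=0)

for step in range(20):
    true_pos = np.clip(true_pos + rng.uniform(-v_max, v_max, 2), 0.0, area)
    heard = np.linalg.norm(anchors - true_pos, axis=1) <= radio_range  # observed connectivity

    # Prediction: each particle moves at most v_max since the last step.
    particles = np.clip(particles + rng.uniform(-v_max, v_max, particles.shape), 0.0, area)

    # Weighting: keep particles whose anchor connectivity matches the observation.
    d = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
    consistent = ((d <= radio_range) == heard).all(axis=1)
    weights = consistent.astype(float)
    if weights.sum() == 0:                            # degenerate case: reinitialize
        particles = rng.uniform(0.0, area, size=(n_particles, 2))
        continue
    weights /= weights.sum()

    # Resampling and position estimate.
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx]
    estimate = particles.mean(axis=0)

print("true position     :", true_pos)
print("estimated position:", estimate)
print("error             :", np.linalg.norm(estimate - true_pos))
```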

  11. Finite-Temperature Variational Monte Carlo Method for Strongly Correlated Electron Systems

    Science.gov (United States)

    Takai, Kensaku; Ido, Kota; Misawa, Takahiro; Yamaji, Youhei; Imada, Masatoshi

    2016-03-01

    A new computational method for finite-temperature properties of strongly correlated electrons is proposed by extending the variational Monte Carlo method originally developed for the ground state. The method is based on the path integral in the imaginary-time formulation, starting from the infinite-temperature state that is well approximated by a small number of certain random initial states. Lower temperatures are progressively reached by the imaginary-time evolution. The algorithm follows the framework of the quantum transfer matrix and finite-temperature Lanczos methods, but we extend them to treat much larger system sizes without the negative sign problem by optimizing the truncated Hilbert space on the basis of the time-dependent variational principle (TDVP). This optimization algorithm is equivalent to the stochastic reconfiguration (SR) method that has been frequently used for the ground state to optimally truncate the Hilbert space. The obtained finite-temperature states allow an interpretation based on the thermal pure quantum (TPQ) state instead of the conventional canonical-ensemble average. Our method is tested for the one- and two-dimensional Hubbard models and its accuracy and efficiency are demonstrated.

  12. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Sukanta Deb

    Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems, by Sukanta Deb (Resonance, General Article, August 2014). Keywords: variational methods, Monte Carlo techniques, harmonic oscillators, quantum mechanical systems.
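
    As a concrete companion to the article's topic, the sketch below applies the variational Monte Carlo recipe to the 1D harmonic oscillator (with ħ = m = ω = 1) and the Gaussian trial function ψ_α(x) = exp(-α x²): sample |ψ_α|² with Metropolis moves and average the local energy E_L(x) = α + x²(1/2 - 2α²), which attains the exact ground-state value 1/2 at α = 1/2. This is a generic textbook example, not code from the article.

```python
import numpy as np

rng = np.random.default_rng(6)

def vmc_energy(alpha, n_steps=200_000, step=1.0):
    """Variational MC estimate of <H> for the trial function psi(x) = exp(-alpha x^2)."""
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis test on |psi|^2 = exp(-2 alpha x^2)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(alpha + x * x * (0.5 - 2.0 * alpha**2))  # local energy
    return np.mean(energies)

for alpha in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"alpha = {alpha:.1f}  ->  <E> ~ {vmc_energy(alpha):.4f}")
# The minimum, <E> = 0.5, is reached at alpha = 0.5 (the exact ground state).
```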

  13. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Arnab Chakraborty. We describe the mathematics behind the Markov chain Monte Carlo method of ... Keywords: Gibbs sampling, Markov chain Monte Carlo, Bayesian inference, stationary distribution, convergence, image restoration.
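
    Since the article's keywords centre on Gibbs sampling, here is a minimal, generic Gibbs sampler for a bivariate normal target with correlation ρ, where each full conditional is N(ρ·other, 1 − ρ²). It is only a toy illustration of the MCMC idea, not material from the article.

```python
import numpy as np

rng = np.random.default_rng(7)
rho = 0.8                       # target: standard bivariate normal with correlation rho
n_samples, burn_in = 50_000, 1_000

x = y = 0.0
samples = []
for i in range(n_samples + burn_in):
    # Full conditionals of the bivariate normal: x|y ~ N(rho*y, 1-rho^2), and symmetrically.
    x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))
    if i >= burn_in:
        samples.append((x, y))

samples = np.array(samples)
print("sample means      :", samples.mean(axis=0))          # ~ (0, 0)
print("sample correlation:", np.corrcoef(samples.T)[0, 1])   # ~ rho
```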

  14. Hybrid and Parallel Domain-Decomposition Methods Development to Enable Monte Carlo for Reactor Analyses

    International Nuclear Information System (INIS)

    Wagner, John C.; Mosher, Scott W.; Evans, Thomas M.; Peplow, Douglas E.; Turner, John A.

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which

  15. Hybrid and parallel domain-decomposition methods development to enable Monte Carlo for reactor analyses

    International Nuclear Information System (INIS)

    Wagner, J.C.; Mosher, S.W.; Evans, T.M.; Peplow, D.E.; Turner, J.A.

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform 'real' commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the 'gold standard' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method

  16. Energy conservation in radiation hydrodynamics. Application to the Monte-Carlo method used for photon transport in the fluid frame

    International Nuclear Information System (INIS)

    Mercier, B.; Meurant, G.; Tassart, J.

    1985-04-01

    The description of the equations in the fluid frame has been done recently. A simplification of the collision term is obtained, but the streaming term now has to include angular deviation and the Doppler shift. We choose the latter description which is more convenient for our purpose. We introduce some notations and recall some facts about stochastic kernels and the Monte-Carlo method. We show how to apply the Monte-Carlo method to a transport equation with an arbitrary streaming term; in particular we show that the track length estimator is unbiased. We review some properties of the radiation hydrodynamics equations, and show how energy conservation is obtained. Then, we apply the Monte-Carlo method explained in section 2 to the particular case of the transfer equation in the fluid frame. Finally, we describe a physical example and give some numerical results

  17. Inhomogeneous broadening of PAC spectra with Vzz and η joint probability distribution functions

    Science.gov (United States)

    Evenson, W. E.; Adams, M.; Bunker, A.; Hodges, J.; Matheson, P.; Park, T.; Stufflebeam, M.; Zacate, M. O.

    2013-05-01

    The perturbed angular correlation (PAC) spectrum, G2(t), is broadened by the presence of randomly distributed defects in crystals due to a distribution of electric field gradients (EFGs) experienced by probe nuclei. Heuristic approaches to fitting spectra that exhibit such inhomogeneous broadening (ihb) consider only the distribution of EFG magnitudes Vzz, but the physical effect actually depends on the joint probability distribution function (pdf) of Vzz and EFG asymmetry parameter η. The difficulty in determining the joint pdf leads us to more appropriate representations of the EFG coordinates, and to express the joint pdf as the product of two approximately independent pdfs describing each coordinate separately. We have pursued this case in detail using as an initial illustration of the method a simple point defect model with nuclear spin I = 5/2 in several cubic lattices, where G2(t) is primarily induced by a defect trapped in the first neighbor shell of a probe and broadening is due to defects distributed at random outside the first neighbor shell. Effects such as lattice relaxation are ignored in this simple test of the method. The simplicity of our model is suitable for gaining insight into ihb with more than Vzz alone. We simulate ihb in this simple case by averaging the net EFGs of 20,000 random defect arrangements, resulting in a broadened average G2(t). The 20,000 random cases provide a distribution of EFG components which are first transformed to Czjzek coordinates and then further into the full Czjzek half plane by conformal mapping. The topology of this transformed space yields an approximately separable joint pdf for the EFG components. We then fit the nearly independent pdfs and reconstruct G2(t) as a function of defect concentration. We report results for distributions of defects on simple cubic, face-centered cubic, and body-centered cubic lattices. The method explored here for analyzing ihb is applicable to more realistic cases.

  18. Multi-Index Monte Carlo and stochastic collocation methods for random PDEs

    KAUST Repository

    Nobile, Fabio

    2016-01-09

    In this talk we consider the problem of computing statistics of the solution of a partial differential equation with random data, where the random coefficient is parametrized by means of a finite or countable sequence of terms in a suitable expansion. We describe and analyze a Multi-Index Monte Carlo (MIMC) method and a Multi-Index Stochastic Collocation (MISC) method. The former is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2). In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally effective. We finally show the effectiveness of MIMC and MISC with some computational tests, including tests with a countably infinite number of random parameters.
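
    To keep the vocabulary concrete, the sketch below is a plain multilevel Monte Carlo estimator, the method that MIMC generalizes with mixed differences: it estimates E[S_T] for a geometric Brownian motion with Euler-Maruyama, coupling each fine path to a coarse path driven by the same Brownian increments and summing the level corrections. The problem parameters and sample allocations are arbitrary, and none of the adaptive optimization discussed in the talk is attempted.

```python
import numpy as np

rng = np.random.default_rng(8)
S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0     # GBM parameters (arbitrary)

def euler_paths(n_paths, n_steps):
    """Terminal values of coupled fine/coarse Euler-Maruyama paths (coarse uses n_steps // 2)."""
    dt = T / n_steps
    fine = np.full(n_paths, S0)
    coarse = np.full(n_paths, S0)
    for _ in range(n_steps // 2):
        dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        fine += mu * fine * dt + sigma * fine * dW1
        fine += mu * fine * dt + sigma * fine * dW2
        coarse += mu * coarse * 2 * dt + sigma * coarse * (dW1 + dW2)  # same Brownian increments
    return fine, coarse

# (level, fine time steps, sample count); each level halves the time step of the previous one.
levels = [(0, 2, 200_000), (1, 4, 50_000), (2, 8, 12_500), (3, 16, 3_125)]
estimate = 0.0
for level, n_steps, n_paths in levels:
    fine, coarse = euler_paths(n_paths, n_steps)
    if level == 0:
        estimate += fine.mean()                 # E[P_0]
    else:
        estimate += (fine - coarse).mean()      # E[P_l - P_{l-1}]

print("MLMC estimate of E[S_T]:", estimate)
print("exact value            :", S0 * np.exp(mu * T))
```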

  19. Calculation of neutron importance function in fissionable assemblies using Monte Carlo method

    International Nuclear Information System (INIS)

    Feghhi, S.A.H.; Shahriari, M.; Afarideh, H.

    2007-01-01

    The purpose of the present work is to develop an efficient solution method for the calculation of the neutron importance function in fissionable assemblies for all criticality conditions, based on Monte Carlo calculations. The neutron importance function has an important role in perturbation theory and reactor dynamics calculations. Usually this function can be determined by calculating the adjoint flux while solving the adjoint-weighted transport equation based on deterministic methods. However, in complex geometries these calculations are very complicated. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance has been introduced for calculating the neutron importance function in sub-critical, critical and super-critical conditions. For this purpose, a computer program has been developed. The results of the method have been benchmarked with ANISN code calculations in 1 and 2 group modes for simple geometries. The correctness of these results has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries has been shown by the calculation of neutron importance in the Miniature Neutron Source Reactor (MNSR) research reactor.

  20. Applications of Monte Carlo method to nonlinear regression of rheological data

    Science.gov (United States)

    Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo

    2018-02-01

    In rheological studies, it is often necessary to determine the parameters of rheological models from experimental data. Since both the rheological data and the values of the parameters vary on a logarithmic scale and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. A gradient-based method such as LM is apt to be caught in local minima, which give unphysical values of the parameters whenever the initial guess is far from the global optimum. Although this problem could be solved by simulated annealing (SA), this Monte Carlo (MC) method needs an adjustable parameter that is usually determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective values of the parameters of the most complicated rheological models, such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and the zero-shear viscosity as a function of concentration and molecular weight.
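
    A minimal sketch of this kind of simplified annealing search: fit a two-parameter power-law viscosity model η = K·(shear rate)^(n-1) to synthetic noisy data, working with the logarithm of K and with log residuals so that the wide dynamic range typical of rheological data is handled naturally. The data, cooling schedule, and model are illustrative stand-ins; the paper's actual targets (Carreau-Yasuda model, relaxation spectra) involve more parameters.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic "experimental" steady-shear data from a power-law fluid (invented truth).
K_true, n_true = 50.0, 0.4
shear_rate = np.logspace(-2, 3, 30)
eta_data = K_true * shear_rate**(n_true - 1.0) * np.exp(0.05 * rng.standard_normal(30))

def objective(log_params):
    """Sum of squared log-residuals: the log scale suits data spanning many decades."""
    K, n = np.exp(log_params[0]), log_params[1]
    model = K * shear_rate**(n - 1.0)
    return np.sum((np.log(eta_data) - np.log(model))**2)

# Simplified simulated annealing on (log K, n).
params = np.array([np.log(1.0), 1.0])        # deliberately poor initial guess
best, best_f = params.copy(), objective(params)
f_cur, temp = best_f, 1.0
for step in range(20_000):
    trial = params + rng.normal(0.0, 0.05, size=2)
    f_trial = objective(trial)
    if f_trial < f_cur or rng.random() < np.exp((f_cur - f_trial) / temp):
        params, f_cur = trial, f_trial
        if f_cur < best_f:
            best, best_f = params.copy(), f_cur
    temp *= 0.9995                            # geometric cooling schedule

print(f"fitted K = {np.exp(best[0]):.2f} (true {K_true}), n = {best[1]:.3f} (true {n_true})")
```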

  1. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo

    International Nuclear Information System (INIS)

    Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.

    2009-01-01

    In general, there are two ways to calculate effective doses. The first is to use deterministic methods such as the point kernel method, which is implemented in Visiplan or Microshield. These calculations are very fast, but in terms of result precision they are not very suitable for complex geometries with shielding composed of more than one material. Nevertheless, such programs are sufficient for ALARA optimisation calculations. On the other side, there are Monte Carlo methods, which are quite precise in comparison with reality, but the calculation time is usually very long. Deterministic programs have one disadvantage: usually the buildup factor (BUF) can be chosen for only one material in multilayer stratified-slab shielding problems, even if the shielding is composed of different materials. Different formulas for multilayer BUF approximation have been proposed in the literature. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single and double slab shielding. The Geometric Progression method (a feature of the newest version of Visiplan) was chosen for the buildup calculations because it shows lower deviations than Taylor fitting. (authors)

  2. Buildup factors for multilayer shieldings in deterministic methods and their comparison with Monte Carlo

    International Nuclear Information System (INIS)

    Listjak, M.; Slavik, O.; Kubovcova, D.; Vermeersch, F.

    2008-01-01

    In general, there are two ways to calculate effective doses. The first is to use deterministic methods such as the point kernel method, which is implemented in Visiplan or Microshield. These calculations are very fast, but in terms of result precision they are not very suitable for complex geometries with shielding composed of more than one material. Nevertheless, such programs are sufficient for ALARA optimisation calculations. On the other side, there are Monte Carlo methods, which are quite precise in comparison with reality, but the calculation time is usually very long. Deterministic programs have one disadvantage: usually the buildup factor (BUF) can be chosen for only one material in multilayer stratified-slab shielding problems, even if the shielding is composed of different materials. Different formulas for multilayer BUF approximation have been proposed in the literature. The aim of this paper was to examine these formulas and compare them with MCNP calculations. First, the results of Visiplan and Microshield were compared. A simple geometry was modelled: a point source behind single and double slab shielding. The Geometric Progression method (a feature of the newest version of Visiplan) was chosen for the buildup calculations because it shows lower deviations than Taylor fitting. (authors)

  3. Studying stellar binary systems with the Laser Interferometer Space Antenna using delayed rejection Markov chain Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Trias, Miquel [Departament de Fisica, Universitat de les Illes Balears, Cra. Valldemossa Km. 7.5, E-07122 Palma de Mallorca (Spain); Vecchio, Alberto; Veitch, John, E-mail: miquel.trias@uib.e, E-mail: av@star.sr.bham.ac.u, E-mail: jveitch@star.sr.bham.ac.u [School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)

    2009-10-21

    Bayesian analysis of Laser Interferometer Space Antenna (LISA) data sets based on Markov chain Monte Carlo methods has been shown to be a challenging problem, in part due to the complicated structure of the likelihood function, consisting of several isolated local maxima, that dramatically reduces the efficiency of the sampling techniques. Here we introduce a new fully Markovian algorithm, a delayed rejection Metropolis-Hastings Markov chain Monte Carlo method, to efficiently explore these kinds of structures, and we demonstrate its performance on selected LISA data sets containing a known number of stellar-mass binary signals embedded in Gaussian stationary noise.

  4. Contribution to the solution of the multigroup Boltzmann equation by deterministic methods and the Monte Carlo method; Contribution a la resolution de l'equation de Boltzmann en multigroupe par les methodes deterministes et Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Li, M

    1998-08-01

    In this thesis, two methods for solving the multigroup Boltzmann equation have been studied: the interface-current method and the Monte Carlo method. A new version of the interface-current (IC) method has been developed in the TDT code at SERMA, where the interface currents are represented by piecewise constant functions in solid angle. The convergence of this method to the collision probability (CP) method has been tested. Since the tracking technique is used for both the IC and CP methods, it is necessary to normalize the collision probabilities obtained by this technique. Several methods for this purpose have been studied and implemented in our code; we have compared their performances and chosen the best one as the standard choice. The transfer matrix treatment has been a long-standing difficulty for the multigroup Monte Carlo method: when the cross sections are converted into multigroup form, important negative parts appear in the angular transfer laws represented by low-order Legendre polynomials. Several methods based on the preservation of the first moments, such as the discrete-angles method and the equally-probable step function method, have been studied and implemented in the TRIMARAN-II code. Since none of these methods was satisfactory, a new method, the non-equally-probable step function method, has been proposed and implemented in our code. These methods have been compared in several respects: the preservation of the required moments, the calculation of a criticality problem, and the calculation of a neutron-transfer-in-water problem. The results showed that the new method is the best one in all these comparisons, and we have proposed that it should be the standard choice for the multigroup transfer matrix. (author) 76 refs.

  5. On the choice of beam polarization in e{sup +}e{sup -} → ZZ/Zγ and anomalous triple gauge-boson couplings

    Energy Technology Data Exchange (ETDEWEB)

    Rahaman, Rafiqul; Singh, Ritesh K. [Indian Institute of Science Education and Research Kolkata, Department of Physical Sciences, Mohanpur (India)

    2017-08-15

    The anomalous trilinear gauge couplings of Z and γ are studied in e{sup +}e{sup -} → ZZ/Zγ with longitudinal beam polarizations using a complete set of polarization asymmetries for the Z boson. We quantify the goodness of the beam polarization in terms of the likelihood and find the best choice of e{sup -} and e{sup +} polarizations to be (+0.16, -0.16), (+0.09, -0.10) and (+0.12, -0.12) for ZZ, Zγ and combined processes, respectively. Simultaneous limits on anomalous couplings are obtained for these choices of beam polarizations using Markov-Chain-Monte-Carlo (MCMC) for an e{sup +}e{sup -} collider running at √(s) = 500 GeV and L = 100 fb{sup -1}. We find the simultaneous limits for these beam polarizations to be comparable with each other and also comparable with the unpolarized beam case. (orig.)

  6. Efficient Data-Worth Analysis Using a Multilevel Monte Carlo Method Applied in Oil Reservoir Simulations

    Science.gov (United States)

    Lu, D.; Ricciuto, D. M.; Evans, K. J.

    2017-12-01

    Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC that requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce the computational cost with the use of multifidelity approximations. As the data-worth analysis involves a great deal of expectation estimation, the cost savings from MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and are consistent with the estimation obtained from the standard MC. But compared to the standard MC, the MLMC greatly reduces the computational costs in the uncertainty reduction estimation, with up to 600 days of cost savings when one processor is used.

  7. An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Evans, Katherine

    2018-03-01

    Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC that requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of the MLMC in the assessment can be outstanding. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and consistent with the standard MC estimation. But compared to the standard MC, the MLMC greatly reduces the computational costs.

  8. A study of Monte Carlo methods for weak approximations of stochastic particle systems in the mean-field?

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-01-08

    I discuss using single-level and multilevel Monte Carlo methods to compute quantities of interest of a stochastic particle system in the mean-field. In this context, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity. I start by recalling the results of applying different versions of Multilevel Monte Carlo (MLMC) for particle systems, both with respect to time steps and the number of particles and using a partitioning estimator. Next, I expand on these results by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates.

  9. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    Science.gov (United States)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

    An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near-continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to reduce the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over a single-processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.

  10. Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods

    Science.gov (United States)

    Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.

    1994-01-01

    Two different approaches, the direct simulation Monte Carlo (DSMC) method based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which are based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.

  11. Earthquake forecasting based on data assimilation: sequential Monte Carlo methods for renewal point processes

    Directory of Open Access Journals (Sweden)

    M. J. Werner

    2011-02-01

    Full Text Available Data assimilation is routinely employed in meteorology, engineering and computer sciences to optimally combine noisy observations with prior model information for obtaining better estimates of a state, and thus better forecasts, than achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant for the seismic gap hypothesis, models of characteristic earthquakes and recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating arbitrary posterior distributions. We perform extensive numerical simulations to demonstrate the feasibility and benefits of forecasting earthquakes based on data assimilation.

  12. Application of Monte Carlo Method to Design a Delayed Neutron Counting System

    International Nuclear Information System (INIS)

    Ahn, Gil Hoon; Park, Il Jin; Kim, Jung Soo; Min, Gyung Sik

    2006-01-01

    The quantitative determination of fissile materials in environmental samples is becoming more and more important because of the increasing demand for nuclear nonproliferation. A number of methods have been proposed for screening environmental samples to measure fissile material content. Among them, delayed neutron counting (DNC), a nondestructive neutron activation analysis (NAA) method requiring no chemical preparation, has numerous advantages over other screening techniques. Fissile materials such as 239 Pu and 235 U can be made to undergo fission in an intense neutron field. Some of the fission products emit neutrons referred to as 'delayed neutrons' because they are emitted after a brief decay period following irradiation. Counting these delayed neutrons provides a simple method for determining the total fissile content in the sample. In delayed neutron counting, the chemical bonding environment of a fissile atom has no effect on the measurement process; therefore, NAA is virtually immune to the 'matrix' effects that complicate other methods. The present study aims at the design of a DNC system. First, the neutron detector, the gamma-ray shielding material, and the neutron thermalizing material should be selected. Next, the thicknesses of the gamma-ray shielding material and the neutron thermalizing material should be optimized using MCNPX, a well-known and widely used Monte Carlo radiation transport code, to find the following

  13. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
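
    A minimal sketch of the direct Monte Carlo quantification idea described above: sample the basic events, evaluate the site-level Boolean logic on each sample, and average, so that no rare-event approximation or truncation limit is needed. The two-unit fault-tree logic and the failure probabilities are hypothetical, not the six-unit model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical basic-event failure probabilities (deliberately high, the regime
# where the rare-event approximation of cut-set quantification breaks down).
p = {"seismic_A": 0.30, "seismic_B": 0.25, "diesel_1": 0.40, "diesel_2": 0.40}

n = 200_000
samples = {name: rng.random(n) < prob for name, prob in p.items()}

# Toy two-unit "site damage" logic: a unit is damaged if its seismic failure
# AND its diesel generator failure occur; the site-level event needs both units.
unit1 = samples["seismic_A"] & samples["diesel_1"]
unit2 = samples["seismic_B"] & samples["diesel_2"]
site = unit1 & unit2

est = site.mean()
err = site.std(ddof=1) / np.sqrt(n)
print(f"site-level damage probability ~ {est:.4f} +/- {err:.4f}")
```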

  14. Estimation of the Thermal Process in the Honeycomb Panel by a Monte Carlo Method

    Science.gov (United States)

    Gusev, S. A.; Nikolaev, V. N.

    2018-01-01

    A new Monte Carlo method for estimating the thermal state of heat insulation containing honeycomb panels is proposed in the paper. The heat transfer in the honeycomb panel is described by a boundary value problem for a parabolic equation with a discontinuous diffusion coefficient and boundary conditions of the third kind. To obtain an approximate solution, it is proposed to use a smoothing of the diffusion coefficient. After that, the resulting problem is solved on the basis of a probabilistic representation, namely the expectation of a functional of the diffusion process corresponding to the boundary value problem. The solution process is thus reduced to numerical statistical modelling of a large number of trajectories of the diffusion process corresponding to the parabolic problem. The Euler method was used earlier for this purpose, but it requires a large computational effort. In this paper the method is modified by using a combination of the Euler method and the random walk on moving spheres method. The new approach allows us to significantly reduce the computational costs.
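
    The authors combine an Euler scheme with a random walk on moving spheres for the parabolic problem; the sketch below only illustrates the plain (elliptic) walk-on-spheres idea on a Laplace boundary value problem in the unit square, to show why sphere-sized jumps are cheaper than many small Euler steps. The domain, Dirichlet data and tolerance are hypothetical and are not the honeycomb-panel setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def dist_to_boundary(x):
    """Distance from point x to the boundary of the unit square [0,1]^2."""
    return min(x[0], 1.0 - x[0], x[1], 1.0 - x[1])

def boundary_value(x, eps):
    """Hypothetical Dirichlet data: u = 1 on the top edge of the square, 0 elsewhere."""
    return 1.0 if x[1] >= 1.0 - eps else 0.0

def walk_on_spheres(x0, eps=1e-3):
    """One trajectory: jump to a uniform point on the largest inscribed circle
    until the walker is within eps of the boundary."""
    x = np.array(x0, dtype=float)
    while True:
        r = dist_to_boundary(x)
        if r < eps:
            return boundary_value(x, eps)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x += r * np.array([np.cos(theta), np.sin(theta)])

# Estimate u(0.5, 0.5) for Laplace's equation; the exact value is 0.25 by symmetry.
n = 20_000
u = np.mean([walk_on_spheres((0.5, 0.5)) for _ in range(n)])
print("estimated u(0.5, 0.5) =", u)
```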

  15. Application of Monte Carlo method for dose calculation in thyroid follicle; Aplicacao de metodo Monte Carlo para calculos de dose em foliculos tiroideanos

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Frank Sinatra Gomes da

    2008-02-15

    The Monte Carlo method is an important tool for simulating the interaction of radioactive particles with biological media. The principal advantage of the method compared with deterministic methods is its ability to handle complex geometries. Several computational codes use the Monte Carlo method to simulate particle transport, and they are capable of simulating energy deposition in models of organs and/or tissues, as well as in models of cells of the human body. Thus, the calculation of the absorbed dose to thyroid follicles (composed of colloid and follicular cells) is of fundamental importance for dosimetry, because these cells are radiosensitive to ionizing radiation exposure, in particular exposure to radioisotopes of iodine, since a great amount of radioiodine may be released into the environment in the case of a nuclear accident. The goal of this work was to use the particle transport code MCNP4C to calculate absorbed doses in models of thyroid follicles, for Auger electrons, internal conversion electrons and beta particles, for iodine-131 and the short-lived iodines (131, 132, 133, 134 and 135), with follicle diameters varying from 30 to 500 μm. The results obtained from the simulation with the MCNP4C code show that, on average, 25% of the total dose absorbed by the colloid is due to iodine-131 and 75% to the short-lived iodines. For follicular cells, these percentages were 13% for iodine-131 and 87% for the short-lived iodines. The contributions from particles with low energies, such as Auger and internal conversion electrons, should not be neglected when assessing the absorbed dose at the cellular level. Agglomerative hierarchical clustering was used to compare the doses obtained by the codes MCNP4C, EPOTRAN and EGS4 and by deterministic methods. (author)

  16. Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks

    KAUST Repository

    Ben Hammouda, Chiheb

    2015-05-12

    In biochemical systems, stochastic effects can be caused by the presence of small numbers of certain reactant molecules. In this setting, discrete state-space and stochastic simulation approaches have proved to be more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases the dynamics of fast and slow time scales can be well separated, which is characterized by what is called stiffness. For such problems, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. Therefore, implicit tau-leap approximations were developed to improve the numerical stability and provide more efficient simulation algorithms for these systems. One of the interesting tasks for SRNs is to approximate the expected values of some observables of the process at a certain fixed time T. This can be achieved using Monte Carlo (MC) techniques. However, in a recent work, Anderson and Higham (2013) proposed a more computationally efficient method which combines the multi-level Monte Carlo (MLMC) technique with explicit tau-leap schemes. In this MSc thesis, we propose a new fast stochastic algorithm, particularly designed to address stiff systems, for approximating the expected values of some observables of SRNs. In fact, we take advantage of the idea of MLMC techniques and the drift-implicit tau-leap approximation to construct a drift-implicit MLMC tau-leap estimator. In addition to accurately estimating the expected values of a given observable of SRNs at a final time T, our proposed estimator ensures numerical stability with a lower cost than the MLMC explicit tau-leap algorithm, for systems including simultaneously fast and slow species. The key contribution of our work is the coupling of two drift-implicit tau-leap paths, which is the basic brick for

  17. Simulation of neutral gas flow in a tokamak divertor using the Direct Simulation Monte Carlo method

    International Nuclear Information System (INIS)

    Gleason-González, Cristian; Varoutis, Stylianos; Hauer, Volker; Day, Christian

    2014-01-01

    Highlights: • Sub-divertor gas flow calculations in tokamaks by coupling the B2-EIRENE code and the DSMC method. • The results include pressure, temperature, bulk velocity and particle fluxes in the sub-divertor. • A gas recirculation effect towards the plasma chamber through the vertical targets is found. • Comparison between the DSMC method and the ITERVAC code reveals very good agreement. - Abstract: This paper presents a new and innovative scientific and engineering approach for describing sub-divertor gas flows of fusion devices by coupling the B2-EIRENE (SOLPS) code and the Direct Simulation Monte Carlo (DSMC) method. The present study exemplifies this with a computational investigation of the neutral gas flow in ITER's sub-divertor region. The numerical results include the flow fields and contours of the overall quantities of practical interest, such as the pressure, the temperature and the bulk velocity, assuming helium as the model gas. Moreover, the study unravels a gas recirculation effect located behind the vertical targets, viz. neutral particles flowing towards the plasma chamber. Comparison between calculations performed with the DSMC method and the ITERVAC code reveals very good agreement along the main sub-divertor ducts

  18. The EURADOS-KIT training course on Monte Carlo methods for the calibration of body counters

    International Nuclear Information System (INIS)

    Breustedt, B.; Broggio, D.; Gomez-Ros, J.M.; Lopez, M.A.; Leone, D.; Poelz, S.; Marzocchi, O.; Shutt, A.

    2016-01-01

    Monte Carlo (MC) methods are numerical simulation techniques that can be used to extend the scope of calibrations performed in in vivo monitoring laboratories. These methods allow calibrations to be carried out for a much wider range of body shapes and sizes than would be feasible using physical phantoms. Unfortunately, this powerful technique is nowadays still used mainly in research institutions. In 2013, EURADOS and the in vivo monitoring laboratory of Karlsruhe Institute of Technology (KIT) organized a three-day training course to disseminate knowledge on the application of MC methods for in vivo monitoring. It was intended as a hands-on course centered around an exercise which guided the participants step by step through the calibration process using a simplified version of KIT's equipment. Only introductory lectures on in vivo monitoring and voxel models were given. The course was based on MC codes of the MCNP family, which are widespread in the community. The strong involvement of the participants and the working atmosphere in the classroom, as well as the formal evaluation of the course, showed that the approach chosen was appropriate. Participants liked the hands-on approach and the extensive course materials for the exercise. (authors)

  19. Bridging the gap between quantum Monte Carlo and F12-methods

    Science.gov (United States)

    Chinnamsetty, Sambasiva Rao; Luo, Hongjun; Hackbusch, Wolfgang; Flad, Heinz-Jürgen; Uschmajew, André

    2012-06-01

    Tensor product approximation of pair-correlation functions opens a new route from quantum Monte Carlo (QMC) to explicitly correlated F12 methods. Thereby one benefits from stochastic optimization techniques used in QMC to get optimal pair-correlation functions which typically recover more than 85% of the total correlation energy. Our approach incorporates, in particular, core and core-valence correlation which are poorly described by homogeneous and isotropic ansatz functions usually applied in F12 calculations. We demonstrate the performance of the tensor product approximation by applications to atoms and small molecules. It turns out that the canonical tensor format is especially suitable for the efficient computation of two- and three-electron integrals required by explicitly correlated methods. The algorithm uses a decomposition of three-electron integrals, originally introduced by Boys and Handy and further elaborated by Ten-no in his 3d numerical quadrature scheme, which enables efficient computations in the tensor format. Furthermore, our method includes the adaptive wavelet approximation of tensor components where convergence rates are given in the framework of best N-term approximation theory.

  20. Bridging the gap between quantum Monte Carlo and F12-methods

    International Nuclear Information System (INIS)

    Chinnamsetty, Sambasiva Rao; Luo, Hongjun; Hackbusch, Wolfgang; Flad, Heinz-Jürgen; Uschmajew, André

    2012-01-01

    Graphical abstract: Tensor product approximation of pair-correlation functions: τ(x,y) ≈ Σ_{k=1}^{κ} u_k^{(1)}(x_1,y_1) u_k^{(2)}(x_2,y_2) u_k^{(3)}(x_3,y_3). The figure (omitted) shows the pair-correlation function τ(x,y), restricted to |x·y| = |x||y|, of the He atom and the corresponding tensor product approximation errors. - Abstract: Tensor product approximation of pair-correlation functions opens a new route from quantum Monte Carlo (QMC) to explicitly correlated F12 methods. Thereby one benefits from stochastic optimization techniques used in QMC to get optimal pair-correlation functions which typically recover more than 85% of the total correlation energy. Our approach incorporates, in particular, core and core-valence correlation which are poorly described by homogeneous and isotropic ansatz functions usually applied in F12 calculations. We demonstrate the performance of the tensor product approximation by applications to atoms and small molecules. It turns out that the canonical tensor format is especially suitable for the efficient computation of two- and three-electron integrals required by explicitly correlated methods. The algorithm uses a decomposition of three-electron integrals, originally introduced by Boys and Handy and further elaborated by Ten-no in his 3d numerical quadrature scheme, which enables efficient computations in the tensor format. Furthermore, our method includes the adaptive wavelet approximation of tensor components where convergence rates are given in the framework of best N-term approximation theory.

  1. Louis Leon Thurstone in Monte Carlo: creating error bars for the method of paired comparison

    Science.gov (United States)

    Montag, Ethan D.

    2003-12-01

    The method of paired comparison is often used in experiments where perceptual scale values for a collection of stimuli are desired, such as in experiments analyzing image quality. Thurstone's Case V of his Law of Comparative Judgments is often used as the basis for analyzing data produced in paired comparison experiments. However, methods for determining confidence intervals and critical distances for significant differences based on Thurstone's Law have been elusive leading some to abandon the simple analysis provided by Thurstone's formulation. In order to provide insight into this problem of determining error, Monte Carlo simulations of paired comparison experiments were performed based on the assumptions of uniformly normal, independent, and uncorrelated responses from stimulus pair presentations. The results from these multiple simulations show that the variation in the distribution of experimental results of paired comparison experiments can be well predicted as a function of stimulus number and the number of observations. Using these results, confidence intervals and critical values for comparisons can be made using traditional statistical methods. In addition the results from simulations can be used to analyze goodness-of-fit techniques.
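
    The simulation idea can be reproduced in a few lines: generate paired-comparison choice counts under the Case V assumptions, rebuild the scale from the z-transformed proportions for each replicate, and read the error bars off the spread of the replicates. The true scale values, number of observations and clipping threshold below are hypothetical, not the stimulus sets used in the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

true_scale = np.array([0.0, 0.3, 0.6, 1.2])   # hypothetical stimulus scale values
n_stim, n_obs, n_sims = len(true_scale), 30, 2000

# Case V choice probabilities: P(i preferred over j) = Phi((S_i - S_j) / sqrt(2))
p_ij = norm.cdf((true_scale[:, None] - true_scale[None, :]) / np.sqrt(2))

recovered = np.empty((n_sims, n_stim))
for s in range(n_sims):
    wins = rng.binomial(n_obs, p_ij)           # simulated paired-comparison counts
    prop = np.clip(wins / n_obs, 0.01, 0.99)   # avoid infinite z-scores
    np.fill_diagonal(prop, 0.5)
    z = norm.ppf(prop)
    recovered[s] = z.mean(axis=1)              # Case V scale estimate (up to sqrt(2))

# Spread over the simulated experiments gives confidence intervals / error bars
print("std of recovered scale values:", recovered.std(axis=0).round(3))
print("95% half-widths:", (1.96 * recovered.std(axis=0)).round(3))
```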

  2. Analysis of large solid propellant rocket engine exhaust plumes using the direct simulation Monte Carlo method

    Science.gov (United States)

    Hueser, J. E.; Brock, F. J.; Melfi, L. T., Jr.; Bird, G. A.

    1984-01-01

    A new solution procedure has been developed to analyze the flowfield properties in the vicinity of the Inertial Upper Stage/Spacecraft during the 1st stage (SRMI) burn. Continuum methods are used to compute the nozzle flow and the exhaust plume flowfield as far as the boundary where the breakdown of translational equilibrium leaves these methods invalid. The Direct Simulation Monte Carlo (DSMC) method is applied everywhere beyond this breakdown boundary. The flowfield distributions of density, velocity, temperature, relative abundance, surface flux density, and pressure are discussed for each species for 2 sets of boundary conditions: vacuum and freestream. The interaction of the exhaust plume and the freestream with the spacecraft and the 2-stream direct interaction are discussed. The results show that the low density, high velocity, counter flowing free-stream substantially modifies the flowfield properties and the flux density incident on the spacecraft. A freestream bow shock is observed in the data, located forward of the high density region of the exhaust plume into which the freestream gas does not penetrate. The total flux density incident on the spacecraft, integrated over the SRM1 burn interval is estimated to be of the order of 10 to the 22nd per sq m (about 1000 atomic layers).

  3. Mass change distribution inverted from space-borne gravimetric data using a Monte Carlo method

    Science.gov (United States)

    Zhou, X.; Sun, X.; Wu, Y.; Sun, W.

    2017-12-01

    Mass estimation plays a key role in using temporal satellite gravimetric data to quantify terrestrial water storage change. GRACE (Gravity Recovery and Climate Experiment) only observes the low-degree gravity field changes, which can be used to estimate the total surface density or equivalent water height (EWH) variation, with a limited spatial resolution of 300 km. There are several methods to estimate the mass variation in an arbitrary region, such as averaging kernels, forward modelling and mass concentration (mascon). The mascon method can isolate the local mass from the gravity change at a large scale by solving the observation equation (objective function), which represents the relationship between the unknown masses and the measurements. To avoid unreasonable local masses inverted from the smoothed gravity change map, regularization has to be used in the inversion. We herein give a Markov chain Monte Carlo (MCMC) method to objectively determine the regularization parameter for the non-negative mass inversion problem. We first apply this approach to the mass inversion from synthetic data. Results show that MCMC can effectively reproduce the local mass variation while taking the GRACE measurement error into consideration. We then use MCMC to estimate the groundwater change rate of the North China Plain from the GRACE gravity change rate from 2003 to 2014, under the assumption of continuous groundwater loss in this region. The inversion results show that the groundwater loss rate in the North China Plain is 7.6±0.2 Gt/yr during the past 12 years, which is consistent with previous research.
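
    A bare-bones sketch of the Markov chain Monte Carlo ingredient: random-walk Metropolis sampling of a non-negative mass vector under a Gaussian data misfit plus a quadratic regularization term. The linear forward operator, noise level and fixed regularization weight are toy placeholders (in the paper the regularization parameter itself is determined objectively and the forward operator is the GRACE smoothing kernel).

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear inverse problem d = G m + noise, with a non-negative mass vector m.
n_obs, n_mass = 40, 10
G = rng.normal(size=(n_obs, n_mass))            # hypothetical smoothing kernel
m_true = np.abs(rng.normal(1.0, 0.5, n_mass))
sigma = 0.2
d = G @ m_true + rng.normal(0.0, sigma, n_obs)

lam = 1.0                                        # regularization weight (fixed here)

def log_post(m):
    if np.any(m < 0):                            # non-negativity constraint
        return -np.inf
    misfit = np.sum((d - G @ m) ** 2) / (2.0 * sigma**2)
    return -misfit - lam * np.sum(m**2)          # Gaussian-like prior as regularizer

# Random-walk Metropolis over the mass vector
m = np.full(n_mass, 1.0)
lp = log_post(m)
chain = []
for it in range(20_000):
    prop = m + rng.normal(0.0, 0.05, n_mass)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        m, lp = prop, lp_prop
    if it > 5_000:
        chain.append(m.copy())

print("posterior mean vs truth:")
print(np.mean(chain, axis=0).round(2))
print(m_true.round(2))
```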

  4. Uniparental chicken offsprings derived from oogenesis of chicken primordial germ cells (ZZ).

    Science.gov (United States)

    Liu, Chunhai; Chang, Il-Kuk; Khazanehdari, Kamal A; Thomas, Shruti; Varghese, Preetha; Baskar, Vijaya; Alkhatib, Razan; Li, Wenhai; Kinne, Jörg; McGrew, Michael J; Wernery, Ulrich

    2017-03-01

    Cloning (somatic cell nuclear transfer) in avian species has proven unachievable due to the physical structure of the avian oocyte. Here, the sexual differentiation of primordial germ cells with the genetic sex ZZ (ZZ PGCs) was investigated in female germline chimeric chicken hosts with the aim of producing uniparental offspring. ZZ PGCs were expanded in culture and transplanted into same-sex and opposite-sex chicken embryos which were partially sterilized using irradiation. All tested chimeric roosters (ZZ/ZZ) showed germline transmission, with transmission rates of 3.2%-91.4%. Unexpectedly, functional oogenesis of chicken ZZ PGCs was found in three chimeric hens, resulting in a transmission rate of 2.3%-27.8%. Matings were conducted between the germline chimeras (ZZ/ZZ and ZZ/ZW) which derived from the same ZZ PGC line. Paternal uniparental chicken offspring were obtained with a transmission rate of up to 28.4% and, as expected, all uniparental offspring were phenotypically male (ZZ). Genotype analysis of the uniparental offspring was performed using 13 microsatellite markers. The genotype profile showed that the uniparental offspring were 100% genetically identical to the donor ZZ PGC line and shared 69.2%-88.5% identity with the donor bird. Homozygosity of the tested birds varied from 61.5% to 84.6%, which was higher than that of the donor bird (38.5%). These results demonstrate that male avian ZZ PGCs can differentiate into functional ova in an ovary, and that uniparental avian clones are possible. This technology suggests novel approaches for generating genetically similar flocks of birds and for the conservation of avian genetic resources. © The Authors 2017. Published by Oxford University Press on behalf of the Society for the Study of Reproduction. All rights reserved.

  5. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles

    KAUST Repository

    Guerra, Marta L.

    2009-02-23

    We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^-p. Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.

  6. Monte Carlo Methods Development and Applications in Conformational Sampling of Proteins

    DEFF Research Database (Denmark)

    Tian, Pengfei

    ... are not sufficient to provide an accurate structural and dynamical description of certain properties of proteins, and (2) it is difficult to obtain correct statistical weights of the samples generated, due to lack of equilibrium sampling. In this dissertation I present several new methodologies based on Monte Carlo ... such as protein folding and aggregation. Second, by combining Monte Carlo sampling with a flexible probabilistic model of NMR chemical shifts, a series of simulation strategies are developed to accelerate the equilibrium sampling of the free energy landscapes of proteins. Finally, a novel approach is presented to predict the structure of a functional amyloid protein, by using intramolecular evolutionary restraints in Monte Carlo simulations.

  7. Studies of criticality Monte Carlo method convergence: use of a deterministic calculation and automated detection of the transient

    International Nuclear Information System (INIS)

    Jinaphanh, A.

    2012-01-01

    Monte Carlo criticality calculations allow one to estimate the effective multiplication factor as well as local quantities such as local reaction rates. Some configurations presenting weak neutronic coupling (high burnup profiles, complete reactor cores, ...) may induce biased estimates of k_eff or reaction rates. In order to improve the robustness of the iterative Monte Carlo methods, a coupling with a deterministic code was studied. An adjoint flux is obtained by a deterministic calculation and then used in the Monte Carlo: the initial guess is then automated, the sampling of fission sites is modified, and the random walk of neutrons is modified using splitting and Russian roulette strategies. An automated convergence detection method has been developed. It locates and suppresses the transient due to the initialization in an output series, applied here to k_eff and the Shannon entropy. It relies on modelling the stationary series by an order-1 autoregressive process and applying statistical tests based on a Student bridge statistic. This method can easily be extended to any output of an iterative Monte Carlo calculation. The methods developed in this thesis are tested on different test cases. (author)
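
    The sketch below is a deliberately simplified stand-in for the transient-detection idea (it is not the Student-bridge test of the thesis): the leading part of a k_eff-like output series is discarded block by block until the head block's mean is statistically compatible with the mean of the remaining tail, using an AR(1) fit of the tail to correct the effective sample size. All tuning constants and the synthetic series are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic k_eff-like series: an exponential transient followed by AR(1) noise
n = 600
transient = 0.05 * np.exp(-np.arange(n) / 40.0)
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 0.002)
series = 1.0 + transient + noise

def first_stationary_index(x, block=50):
    """Simplified transient detector: drop leading blocks until the candidate
    head block's mean is compatible with the remaining tail's mean (two-sample
    comparison with an AR(1) effective-sample-size correction)."""
    for start in range(0, len(x) - 2 * block, block):
        head, tail = x[start:start + block], x[start + block:]
        rho = np.corrcoef(tail[:-1], tail[1:])[0, 1]          # AR(1) estimate
        ess = len(tail) * (1 - rho) / (1 + rho)               # effective samples
        se = np.sqrt(head.var(ddof=1) / block + tail.var(ddof=1) / max(ess, 2.0))
        if abs(head.mean() - tail.mean()) < 2.0 * se:         # ~95% criterion
            return start
    return len(x) // 2

cut = first_stationary_index(series)
print("discard first", cut, "generations; stationary mean =", series[cut:].mean())
```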

  8. Generation of triangulated random surfaces by means of the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analogue of the Polyakov string is considered in the work. An algorithm is proposed which enables one to study the model by means of the Monte Carlo method in the grand canonical ensemble. Preliminary results are presented on the evaluation of the critical index γ

  9. Application of the Monte Carlo method for investigation of dynamical parameters of rotors supported by magnetorheological squeeze film damping devices

    Czech Academy of Sciences Publication Activity Database

    Zapoměl, Jaroslav; Ferfecki, Petr; Kozánek, Jan

    2014-01-01

    Vol. 8, No. 1 (2014), pp. 129-138 ISSN 1802-680X Institutional support: RVO:61388998 Keywords: uncertain parameters of rigid rotors * magnetorheological dampers * force transmission * Monte Carlo method Subject RIV: BI - Acoustics http://www.kme.zcu.cz/acm/acm/article/view/247/275

  10. Search for the Higgs Boson in the H→ ZZ(*)→4μ Channel in CMS Using a Multivariate Analysis

    International Nuclear Information System (INIS)

    Alonso Diaz, A.

    2007-01-01

    This note presents a Higgs boson search analysis in the CMS detector at the LHC accelerator (CERN, Geneva, Switzerland) in the H→ZZ(*)→4μ channel, using a multivariate method. This analysis, based on a Higgs-boson-mass-dependent likelihood constructed from discriminant variables, provides a significant improvement of the Higgs boson discovery potential in a wide mass range with respect to the official analysis published by CMS, which is based on orthogonal cuts independent of the Higgs boson mass. (Author) 8 refs

  11. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both the canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the methane critical temperature. The results were collected and compared to experimental data existing in the literature; both models showed excellent agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Upon comparing the results with the experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be, hence further applications to more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid all kinds of problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
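
    For readers who want to see the basic machinery, the following is a minimal canonical (NVT) Metropolis Monte Carlo loop for Lennard-Jones particles in reduced units, estimating the pressure from the virial. The particle number, density, temperature and step sizes are illustrative and are not the settings used in the thesis; a Gibbs-ensemble run would additionally require volume and particle-exchange moves.

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal NVT Metropolis Monte Carlo for Lennard-Jones particles (reduced units)
N, rho, T = 64, 0.5, 2.0                  # particles, density, temperature
L = (N / rho) ** (1.0 / 3.0)              # periodic box length
pos = rng.random((N, 3)) * L

def energy_virial(i, positions):
    """Energy and virial of particle i with all others (minimum-image convention)."""
    d = positions - positions[i]
    d -= L * np.round(d / L)
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf                        # skip self-interaction
    inv6 = 1.0 / r2 ** 3
    e = np.sum(4.0 * (inv6 ** 2 - inv6))
    w = np.sum(24.0 * (2.0 * inv6 ** 2 - inv6))
    return e, w

n_steps, delta, vir_samples = 20_000, 0.15, []
for step in range(n_steps):
    i = rng.integers(N)
    e_old, _ = energy_virial(i, pos)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.uniform(-delta, delta, 3)) % L
    e_new, _ = energy_virial(i, trial)
    if e_new <= e_old or rng.random() < np.exp(-(e_new - e_old) / T):
        pos = trial
    if step > n_steps // 2 and step % 50 == 0:
        # total virial; each pair appears twice in the per-particle sums
        vir_samples.append(sum(energy_virial(j, pos)[1] for j in range(N)) / 2.0)

P = rho * T + np.mean(vir_samples) / (3.0 * L ** 3)
print(f"estimated reduced pressure P* ~ {P:.3f}")
```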

  12. Application of Monte Carlo Method to Test Fingerprinting System for Dry Storage Canister

    International Nuclear Information System (INIS)

    Ahn, Gil Hoon; Park, Il-Jin; Min, Gyung Sik

    2006-01-01

    Since 1992, dry storage canisters have been used for the long-term disposition of CANDU spent fuel bundles at Wolsong. Periodic inspection of the dual seals is currently the only measure that exists to verify that the contents have not been altered. Verification of the spent nuclear fuel in dry storage is thus one of the important safeguards tasks, because the spent fuel contains significant quantities of fissile material. Traditional non-destructive analysis and assay techniques to verify the contents are ineffective due to the shielding by the spent fuel and the canister wall, the straggling position of the detector, etc. Manual measurement of the radiation levels in the re-verification tubes that run along the length of the canister, which provides a radiation profile within the canister, is presently the most reliable method for ensuring that the stored materials are still present. Therefore, a gamma-ray fingerprinting method has been used in Korea after a canister is sealed, to provide continuity of knowledge that the canister contents remain as loaded. The present study aims at testing the current fingerprinting system using MCNPX, a well-known and widely used Monte Carlo radiation transport code, which may be useful in the verification measures for spent fuel subject to the final disposal guidance criterion (4 kg of Pu, 0.5 SQ)

  13. Systematic hierarchical coarse-graining with the inverse Monte Carlo method

    International Nuclear Information System (INIS)

    Lyubartsev, Alexander P.; Naômé, Aymeric; Vercauteren, Daniel P.; Laaksonen, Aatto

    2015-01-01

    We outline our coarse-graining strategy for linking the micro- and mesoscales of soft matter and biological systems. The method is based on effective pairwise interaction potentials obtained from detailed ab initio or classical atomistic Molecular Dynamics (MD) simulations, which can be used in simulations at a less detailed level after scaling up the size. The effective potentials are obtained by applying the inverse Monte Carlo (IMC) method [A. P. Lyubartsev and A. Laaksonen, Phys. Rev. E 52(4), 3730–3737 (1995)] on a chosen subset of degrees of freedom described in terms of radial distribution functions. An in-house software package, MagiC, has been developed to obtain the effective potentials for arbitrary molecular systems. In this work we compute effective potentials to model DNA-protein interactions (the bacterial LiaR regulator bound to a 26 base pair DNA fragment) at physiological salt concentration at a coarse-grained (CG) level. Normally the IMC CG pair potentials are used directly as look-up tables, but here we have fitted them to five Gaussians and a repulsive wall. Results show a stable association between the DNA and the model protein as well as a similar position fluctuation profile

  14. Revisiting the hybrid quantum Monte Carlo method for Hubbard and electron-phonon models

    Science.gov (United States)

    Beyl, Stefan; Goth, Florian; Assaad, Fakher F.

    2018-02-01

    A unique feature of the hybrid quantum Monte Carlo (HQMC) method is the potential to simulate negative sign free lattice fermion models with subcubic scaling in system size. Here we will revisit the algorithm for various models. We will show that for the Hubbard model the HQMC suffers from ergodicity issues and unbounded forces in the effective action. Solutions to these issues can be found in terms of a complexification of the auxiliary fields. This implementation of the HQMC that does not attempt to regularize the fermionic matrix so as to circumvent the aforementioned singularities does not outperform single spin flip determinantal methods with cubic scaling. On the other hand we will argue that there is a set of models for which the HQMC is very efficient. This class is characterized by effective actions free of singularities. Using the Majorana representation, we show that models such as the Su-Schrieffer-Heeger Hamiltonian at half filling and on a bipartite lattice belong to this class. For this specific model subcubic scaling is achieved.

  15. Cu-Au Alloys Using Monte Carlo Simulations and the BFS Method for Alloys

    Science.gov (United States)

    Bozzolo, Guillermo; Good, Brian; Ferrante, John

    1996-01-01

    Semi-empirical methods have shown considerable promise in aiding the calculation of many properties of materials. Materials used in engineering applications have defects that occur for various reasons, including processing. In this work we present the first application of the BFS method for alloys to describe some aspects of microstructure due to processing for the Cu-Au system (CuAu, CuAu3, and Cu3Au). We use finite-temperature Monte Carlo calculations in order to show the influence of 'heat treatment' on the low-temperature phase of the alloy. Although relatively simple, the system has enough features that it can be used as a first test of the reliability of the technique. The main questions to be answered in this work relate to the existence of low-temperature ordered structures for specific concentrations, for example, the ability to distinguish between rather similar phases for equiatomic alloys (CuAu I and CuAu II, the latter characterized by an antiphase boundary separating two identical phases).

  16. Deterministic alternatives to the full configuration interaction quantum Monte Carlo method for strongly correlated systems

    Science.gov (United States)

    Tubman, Norm; Whaley, Birgitta

    The development of exponential scaling methods has seen great progress in tackling larger systems than previously thought possible. One such technique, full configuration interaction quantum Monte Carlo, allows exact diagonalization through stochastically sampling of determinants. The method derives its utility from the information in the matrix elements of the Hamiltonian, together with a stochastic projected wave function, which are used to explore the important parts of Hilbert space. However, a stochastic representation of the wave function is not required to search Hilbert space efficiently and new deterministic approaches have recently been shown to efficiently find the important parts of determinant space. We shall discuss the technique of Adaptive Sampling Configuration Interaction (ASCI) and the related heat-bath Configuration Interaction approach for ground state and excited state simulations. We will present several applications for strongly correlated Hamiltonians. This work was supported through the Scientific Discovery through Advanced Computing (SciDAC) program funded by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences.

  17. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This heavily traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence between the two first-to-default times.
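
    The least-squares Monte Carlo step can be illustrated compactly: simulate short-rate paths, and at each exposure date regress the remaining discounted swap cashflows on the current rate to obtain a conditional expected value, from which the expected positive exposure and a unilateral CVA follow. The Vasicek dynamics, flat default intensity, payer-swap payoff and all parameter values below are toy stand-ins for the Hull-White / mean-reverting-intensity setup of the paper, and the bilateral, copula-based part is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy unilateral CVA via least-squares Monte Carlo (LSMC)
n_paths, n_steps, dt = 20_000, 20, 0.25
kappa, theta, sigma, r0 = 0.5, 0.03, 0.01, 0.02   # Vasicek short-rate parameters
K, notional = 0.03, 1.0                           # fixed swap rate, notional
lam, recovery = 0.02, 0.4                         # flat default intensity, recovery

# Simulate short-rate paths
r = np.full((n_steps + 1, n_paths), r0)
for t in range(n_steps):
    r[t + 1] = r[t] + kappa * (theta - r[t]) * dt \
               + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

# Pathwise cashflows of a payer swap paying (r - K) each period, discounted to 0
disc = np.exp(-np.cumsum(r[1:], axis=0) * dt)
cashflow = (r[1:] - K) * dt * notional * disc

cva = 0.0
for t in range(1, n_steps):
    # LSMC: regress remaining discounted cashflows (re-discounted to time t)
    # on a polynomial basis in the current short rate
    future = cashflow[t:].sum(axis=0) / disc[t - 1]
    basis = np.vander(r[t], 3)                    # quadratic polynomial basis
    coef, *_ = np.linalg.lstsq(basis, future, rcond=None)
    value = basis @ coef                          # conditional expected swap value
    epe = np.mean(np.maximum(value, 0.0))         # expected positive exposure
    pd = np.exp(-lam * (t - 1) * dt) - np.exp(-lam * t * dt)   # default prob in bucket
    cva += (1.0 - recovery) * epe * pd

print(f"toy unilateral CVA ~ {cva:.5f} per unit notional")
```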

  18. Simulation of Watts Bar Unit 1 Initial Startup Tests with Continuous Energy Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Godfrey, Andrew T [ORNL; Gehin, Jess C [ORNL; Bekar, Kursat B [ORNL; Celik, Cihangir [ORNL

    2014-01-01

    The Consortium for Advanced Simulation of Light Water Reactors is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications. One component of the testing and validation plan for VERA is the comparison of neutronics results to a set of continuous energy Monte Carlo solutions for a range of pressurized water reactor geometries using the SCALE component KENO-VI developed by Oak Ridge National Laboratory. Recent improvements in data, methods, and parallelism have enabled KENO, previously utilized predominantly as a criticality safety code, to demonstrate excellent capability and performance for reactor physics applications. The highly detailed and rigorous KENO solutions provide a reliable numerical reference for VERA neutronics and also demonstrate the most accurate predictions achievable by modeling and simulation tools for comparison to operating plant data. This paper demonstrates the performance of KENO-VI for the Watts Bar Unit 1 Cycle 1 zero power physics tests, including reactor criticality, control rod worths, and isothermal temperature coefficients.

  19. Volume Measurement Algorithm for Food Product with Irregular Shape using Computer Vision based on Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Joko Siswantoro

    2014-11-01

    Full Text Available Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle, but this method is inaccurate and considered destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was applied to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
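
    The core Monte Carlo step is easy to sketch: sample points uniformly in a bounding box and estimate the volume from the fraction of points classified as inside the object. Here a hypothetical analytic ellipsoid replaces the real inside-test of the paper, where a point would be kept only if its projection falls inside the object silhouette in every one of the five camera views.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical object: an ellipsoid with semi-axes a, b, c (in cm)
a, b, c = 3.0, 2.0, 1.5
box = np.array([a, b, c])          # half-extents of the bounding box
n = 1_000_000

pts = rng.uniform(-box, box, size=(n, 3))
inside = (pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 + (pts[:, 2] / c) ** 2 <= 1.0

vol_box = np.prod(2 * box)
frac = inside.mean()
vol_est = frac * vol_box
vol_err = vol_box * np.sqrt(frac * (1 - frac) / n)
print(f"estimated volume {vol_est:.2f} +/- {vol_err:.2f} cm^3 "
      f"(exact {4.0 / 3.0 * np.pi * a * b * c:.2f})")
```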

  20. Application of the measurement-based Monte Carlo method in nasopharyngeal cancer patients for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Yeh, C.Y.; Lee, C.C.; Chao, T.C.; Lin, M.H.; Lai, P.A.; Liu, F.H.; Tung, C.J.

    2014-01-01

    This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. - Highlights: ► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. ► 3D Dose distributions for NPC patients have been verified by the Monte Carlo method. ► Doses predicted by the Monte Carlo method matched closely with those by the TPS. ► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. ► Critical organ doses should be confirmed to avoid overdose to normal organs

  1. Asteroseismology of the ZZ Ceti Star WD 0246+326

    Science.gov (United States)

    Li, C.; Fu, J.; Fox-Machado, L.; Su, J.

    2017-03-01

    Asteroseismology is the unique tool to explore the internal structures of pulsating stars. Time-series photometric observations were made for the pulsating DA white dwarf (ZZ Ceti star) WD 0246+326 in 2014 in a bi-site observation campaign. A few frequencies were detected, including several multiplets. Complemented by earlier observed frequencies from the literature, the frequencies are identified as either l = 1 or l = 2 modes. From the multiplets, the rotation period of WD 0246+326 is derived. The value of the average period spacing of the l = 1 modes indicates that WD 0246+326 may be a massive ZZ Ceti star. Theoretical models were constructed to constrain the stellar mass and the effective temperature by fitting the frequencies of the eigenmodes of the models to the observed frequencies.

  2. DISCOVERY OF A ZZ CETI IN THE KEPLER MISSION FIELD

    International Nuclear Information System (INIS)

    Hermes, J. J.; Winget, D. E.; Mullally, Fergal; Howell, Steve B.; Oestensen, R. H.; Bloemen, S.; Williams, Kurtis A.; Telting, John; Southworth, John; Everett, Mark

    2011-01-01

    We report the discovery of the first identified pulsating DA white dwarf, WD J1916+3938 (Kepler ID 4552982), in the field of the Kepler mission. This ZZ Ceti star was first identified through ground-based, time-series photometry, and follow-up spectroscopy confirms that it is a hydrogen-atmosphere white dwarf with T eff = 11,129 ± 115 K and log g = 8.34 ± 0.06, placing it within the empirical ZZ Ceti instability strip. The object shows up to 0.5% amplitude variability at several periods between 800 and 1450 s. Extended Kepler observations of WD J1916+3938 could yield the best light curve, to date, of any pulsating white dwarf, allowing us to directly study the interior of an evolved object representative of the fate of the majority of stars in our Galaxy.

  3. A Research and Study Course for learning the concept of discrete randomvariable using Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Vicente D. Estruch

    2017-08-01

    Full Text Available The concept of random variable is a mathematical construct that presents some theoretical complexity. However, learning this concept can be facilitated if it is presented as the end of a sequential process of modeling a real event. More specifically, to learn the concept of discrete random variable, Monte Carlo simulation can provide an extremely useful tool, because in the process of modeling/simulation one can approach the theoretical concept of random variable while the random variable is observed "in action". This paper presents a Research and Study Course (RSC) based on a series of activities related to random variables, such as training and the introduction of simulation elements, followed by the construction of the model, which is the substantial part of the activity, generating a random variable and its probability function. Starting from a simple situation related to the reproduction and survival of the litter of a rodent, with random components, the model that represents the real situation posed is built step by step, obtaining an "original" random variable. In the intermediate stages of the construction of the model, the discrete uniform and binomial distributions play a fundamental role. The trajectory of these stages allows reinforcing the concept of random variable while exploring the possibilities offered by Monte Carlo methods to simulate real cases, and the simplicity of implementing these methods by means of the Matlab© programming language.
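
    In the same spirit as the rodent-litter activity (the paper works in Matlab; this is an equivalent Python sketch with a hypothetical litter-size range and survival probability), a discrete random variable can be built by simulation and its probability function estimated empirically:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)

# Hypothetical model: litter size uniform on {4,...,8}, each pup survives
# independently with probability p_survive; X = number of survivors.
n_sims = 100_000
p_survive = 0.7

litter = rng.integers(4, 9, n_sims)            # discrete uniform litter size 4..8
survivors = rng.binomial(litter, p_survive)    # binomial survival of each pup

# Empirical probability function of the resulting random variable X
counts = Counter(survivors)
for k in sorted(counts):
    print(f"P(X = {k}) ~ {counts[k] / n_sims:.4f}")
print("sample mean =", survivors.mean(), " (theory: 6 * 0.7 = 4.2)")
```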

  4. The Calculation Of Titanium Buildup Factor Based On Monte Carlo Method

    International Nuclear Information System (INIS)

    Has, Hengky Istianto; Achmad, Balza; Harto, Andang Widi

    2001-01-01

    The objective of a radioactive-waste container is to reduce radiation emission to the environment. For that purpose, we need a material with the ability to shield that radiation and to last for 10,000 years. Titanium is one of the materials that can be used to make containers. Unfortunately, its buildup factor, which is an important parameter in designing radiation shielding, has not been calculated. Therefore, the calculation of the titanium buildup factor as a function of other parameters is needed. The buildup factor can be determined either experimentally or by simulation. The purpose of this study is to determine the titanium buildup factor using a simulation program based on the Monte Carlo method. Monte Carlo is a stochastic method and is therefore well suited to calculating nuclear radiation, which naturally has a random character. A simulation program is also able to give results when experiments cannot be performed because of their limitations. The simulation shows that the buildup factor and the dose buildup factor increase with titanium thickness and, conversely, are lower at higher photon energies. The photon energy used in the simulation ranged from 0.2 MeV to 2.0 MeV with a 0.2 MeV step size, while the thickness ranged from 0.2 cm to 3.0 cm with a step size of 0.2 cm. The highest buildup factor is β = 1.4540 ± 0.047229 at 0.2 MeV photon energy with a titanium thickness of 3.0 cm. The lowest is β = 1.0123 ± 0.000650 at 2.0 MeV photon energy with a 0.2 cm thickness of titanium. For the dose buildup factor, the highest value is β D = 1.3991 ± 0.013999 at 0.2 MeV photon energy with a titanium thickness of 3.0 cm and the lowest is β D = 1.0042 ± 0.000597 at 2.0 MeV with a titanium thickness of 0.2 cm. For the photon energies and titanium thicknesses used in the simulation, the buildup factor as a function of photon energy E and titanium thickness T can be formulated as β = 1.1264 e^(-0.0855E) e^(0.0584T)
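
    As a quick check of the fitted expression quoted at the end of the abstract (as reconstructed above, so the coefficients should be treated as illustrative), the formula can be evaluated over the simulated energy/thickness grid:

```python
import numpy as np

# Fitted buildup-factor expression from the abstract, as reconstructed:
# beta(E, T) = 1.1264 * exp(-0.0855 * E) * exp(0.0584 * T), E in MeV, T in cm.
E = np.arange(0.2, 2.01, 0.2)          # photon energy grid used in the simulation
T = np.arange(0.2, 3.01, 0.2)          # titanium thickness grid

beta = 1.1264 * np.exp(-0.0855 * E[:, None]) * np.exp(0.0584 * T[None, :])

print("fitted beta at E=0.2 MeV, T=3.0 cm:", round(float(beta[0, -1]), 4))
print("fitted beta at E=2.0 MeV, T=0.2 cm:", round(float(beta[-1, 0]), 4))
```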

  5. Correlation between vacancies and magnetoresistance changes in FM manganites using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Agudelo-Giraldo, J.D. [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia); Restrepo-Parra, E., E-mail: erestrepopa@unal.edu.co [PCM Computational Applications, Universidad Nacional de Colombia-Sede Manizales, Km. 9 vía al aeropuerto, Manizales (Colombia); Restrepo, J. [Grupo de Magnetismo y Simulación, Instituto de Física, Universidad de Antioquia, A.A. 1226, Medellín (Colombia)

    2015-10-01

    The Metropolis algorithm and the classical Heisenberg approximation were implemented by the Monte Carlo method to design a computational approach to the magnetization and resistivity of La{sub 2/3}Ca{sub 1/3}MnO{sub 3}, which depends on the Mn ion vacancies as the external magnetic field increases. This compound is ferromagnetic, and it exhibits the colossal magnetoresistance (CMR) effect. The monolayer was built with L×L×d dimensions, and it had L=30 umc (units of magnetic cells) for its dimension in the x–y plane and was d=12 umc in thickness. The Hamiltonian that was used contains interactions between first neighbors, the magnetocrystalline anisotropy effect and the external applied magnetic field response. The system that was considered contains mixed-valence bonds: Mn{sup 3+eg’}–O–Mn{sup 3+eg}, Mn{sup 3+eg}–O–Mn{sup 4+d3} and Mn{sup 3+eg’}–O–Mn{sup 4+d3}. The vacancies were placed randomly in the sample, replacing any type of Mn ion. The main result shows that without vacancies, the transitions T{sub C} (Curie temperature) and T{sub MI} (metal–insulator temperature) are similar, whereas with the increase in the vacancy percentage, T{sub MI} presented lower values than T{sub C}. This situation is caused by the competition between the external magnetic field, the vacancy percentage and the magnetocrystalline anisotropy, which favors the magnetoresistive effect at temperatures below T{sub MI}. Resistivity loops were also observed, which shows a direct correlation with the hysteresis loops of magnetization at temperatures below T{sub C}. - Highlights: • Changes in the resistivity of FM materials as a function of the temperature and external magnetic field can be obtained by the Monte Carlo method, Metropolis algorithm, classical Heisenberg and Kronig–Penney approximation for magnetic clusters. • Increases in the magnetoresistive effect were observed at temperatures below T{sub MI} by the vacancies effect. • The resistive hysteresis
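
    A stripped-down version of the Metropolis/classical-Heisenberg ingredient is sketched below: single-spin updates on a cubic lattice with nearest-neighbour exchange, an external field along z, and a random fraction of non-magnetic vacancies. The exchange constant, field, temperature, lattice size and vacancy fraction are illustrative and do not reproduce the mixed-valence couplings, anisotropy term or resistivity calculation of the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

# Classical Heisenberg ferromagnet on an L x L x L cubic lattice with vacancies
L, J, H, T, vac_frac = 8, 1.0, 0.1, 1.5, 0.05
occupied = rng.random((L, L, L)) > vac_frac                  # vacancy mask

spins = rng.normal(size=(L, L, L, 3))
spins /= np.linalg.norm(spins, axis=-1, keepdims=True)
spins[~occupied] = 0.0                                       # vacancies carry no spin

def local_field(s, i, j, k):
    """Exchange field from the six nearest neighbours (periodic) plus external field."""
    nb = sum(s[(i + di) % L, (j + dj) % L, (k + dk) % L]
             for di, dj, dk in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                (0, -1, 0), (0, 0, 1), (0, 0, -1)])
    return J * nb + np.array([0.0, 0.0, H])

for sweep in range(100):
    for _ in range(L ** 3):
        i, j, k = rng.integers(L, size=3)
        if not occupied[i, j, k]:
            continue
        new = rng.normal(size=3)
        new /= np.linalg.norm(new)
        h = local_field(spins, i, j, k)
        dE = -np.dot(new - spins[i, j, k], h)                # E = -S . h
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j, k] = new

m = np.linalg.norm(spins.sum(axis=(0, 1, 2))) / occupied.sum()
print(f"magnetization per occupied site at T={T}: {m:.3f}")
```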

  6. Comprehensive benchmarking of Markov chain Monte Carlo methods for dynamical systems.

    Science.gov (United States)

    Ballnus, Benjamin; Hug, Sabine; Hatz, Kathrin; Görlitz, Linus; Hasenauer, Jan; Theis, Fabian J

    2017-06-24

    In quantitative biology, mathematical models are used to describe and analyze biological processes. The parameters of these models are usually unknown and need to be estimated from experimental data using statistical methods. In particular, Markov chain Monte Carlo (MCMC) methods have become increasingly popular as they allow for a rigorous analysis of parameter and prediction uncertainties without the need for assuming parameter identifiability or removing non-identifiable parameters. A broad spectrum of MCMC algorithms have been proposed, including single- and multi-chain approaches. However, selecting and tuning sampling algorithms suited for a given problem remains challenging and a comprehensive comparison of different methods is so far not available. We present the results of a thorough benchmarking of state-of-the-art single- and multi-chain sampling methods, including Adaptive Metropolis, Delayed Rejection Adaptive Metropolis, Metropolis adjusted Langevin algorithm, Parallel Tempering and Parallel Hierarchical Sampling. Different initialization and adaptation schemes are considered. To ensure a comprehensive and fair comparison, we consider problems with a range of features such as bifurcations, periodical orbits, multistability of steady-state solutions and chaotic regimes. These problem properties give rise to various posterior distributions including uni- and multi-modal distributions and non-normally distributed mode tails. For an objective comparison, we developed a pipeline for the semi-automatic comparison of sampling results. The comparison of MCMC algorithms, initialization and adaptation schemes revealed that overall multi-chain algorithms perform better than single-chain algorithms. In some cases this performance can be further increased by using a preceding multi-start local optimization scheme. These results can inform the selection of sampling methods and the benchmark collection can serve for the evaluation of new algorithms. Furthermore, our
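
    Of the single-chain samplers compared above, Adaptive Metropolis is the simplest to sketch: a random-walk Metropolis whose proposal covariance is periodically re-estimated from the chain history (Haario-style scaling). The banana-shaped toy posterior and tuning constants below are illustrative, not the ODE-model posteriors used in the benchmark.

```python
import numpy as np

rng = np.random.default_rng(11)

def log_post(x):
    """Toy banana-shaped log-posterior in two dimensions."""
    return -0.5 * (x[0] ** 2 / 10.0 + (x[1] + 0.1 * x[0] ** 2 - 3.0) ** 2)

n_iter, d = 20_000, 2
eps, sd = 1e-6, 2.4 ** 2 / d         # standard Adaptive Metropolis scaling
x = np.zeros(d)
lp = log_post(x)
chain = np.empty((n_iter, d))
cov = np.eye(d)

for t in range(n_iter):
    prop = rng.multivariate_normal(x, sd * cov + eps * np.eye(d))
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        x, lp = prop, lp_prop
    chain[t] = x
    if t > 1_000 and t % 100 == 0:   # periodically refresh the proposal covariance
        cov = np.cov(chain[: t + 1].T)

print("posterior mean estimate:", chain[5_000:].mean(axis=0).round(2))
```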

  7. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    Science.gov (United States)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.

  8. Study on the dominant reaction path in nucleosynthesis during stellar evolution by means of the Monte Carlo method

    International Nuclear Information System (INIS)

    Yamamoto, K.; Hashizume, K.; Wada, T.; Ohta, M.; Suda, T.; Nishimura, T.; Fujimoto, M. Y.; Kato, K.; Aikawa, M.

    2006-01-01

    We propose a Monte Carlo method to study the reaction paths in nucleosynthesis during stellar evolution. Determination of reaction paths is important to obtain the physical picture of stellar evolution. The combination of network calculation and our method gives us a better understanding of physical picture. We apply our method to the case of the helium shell flash model in the extremely metal poor star

  9. Electroweak corrections to H->ZZ/WW->4 leptons

    International Nuclear Information System (INIS)

    Bredenstein, A.; Denner, A.; Dittmaier, S.; Weber, M.M.

    2006-01-01

    We provide predictions for the decays H → ZZ → 4ℓ and H → WW → 4ℓ including the complete electroweak O(α) corrections and improvements by higher-order final-state radiation and two-loop corrections proportional to G_μ^2 M_H^4. The gauge-boson resonances are described in the complex-mass scheme. We find corrections at the level of 1-8% for the partial widths

  10. New ZZ Ceti Stars from the LAMOST Survey

    Science.gov (United States)

    Su, Jie; Fu, Jianning; Lin, Guifang; Chen, Fangfang; Khokhuntod, Pongsak; Li, Chunqian

    2017-09-01

    The spectroscopic sky survey carried out by the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) provides the largest stellar spectra library in the world to date. A large number of new DA white dwarfs have been identified based on the LAMOST spectra. The effective temperature (T_eff) and surface gravity (log g) of most DA white dwarfs were determined and published in the catalogs. We selected ZZ Ceti candidates from the published catalogs by considering whether their T_eff values are situated in the ZZ Ceti instability strip. Follow-up time-series photometric observations of the candidates were performed in 2015 and 2016. Four stars, LAMOST J004628.31+343319.90, LAMOST J062159.49+252335.9, LAMOST J010302.46+433756.2, and LAMOST J013033.90+273757.9, are finally confirmed to be new ZZ Ceti stars. They show dominant peaks with amplitudes rising above the 99.9% confidence level in the amplitude spectra. As LAMOST J004628.31+343319.90 has an estimated mass of ~0.40 M_⊙ and LAMOST J013033.90+273757.9 has a mass of ~0.45 M_⊙ derived from their log g values, these two stars are inferred to be potential helium-core white dwarfs.

  11. Hybrid method for fast Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with tumor-like heterogeneities.

    Science.gov (United States)

    Zhu, Caigang; Liu, Quan

    2012-01-01

    We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.

  12. Review and comparison of effective delayed neutron fraction calculation methods with Monte Carlo codes

    OpenAIRE

    Bécares, V.; Pérez Martín, S.; Vázquez Antolín, Miriam; Villamarín, D.; Martín Fuertes, Francisco; González Romero, E.M.; Merino Rodríguez, Iván

    2014-01-01

    The calculation of the effective delayed neutron fraction, beff , with Monte Carlo codes is a complex task due to the requirement of properly considering the adjoint weighting of delayed neutrons. Nevertheless, several techniques have been proposed to circumvent this difficulty and obtain accurate Monte Carlo results for beff without the need of explicitly determining the adjoint flux. In this paper, we make a review of some of these techniques; namely we have analyzed two variants of what we...
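
    The abstract is truncated before the analyzed techniques are named, so the snippet below illustrates just one widely used adjoint-free approach (the prompt-k ratio, β_eff ≈ 1 − k_p/k), offered as a hedged example rather than the variants studied in the paper; the function and its inputs are hypothetical.

        def beta_eff_prompt_method(k_total, k_prompt, sigma_k=0.0, sigma_kp=0.0):
            """Estimate beta_eff from two criticality runs: one with all neutrons
            (k_total) and one with delayed-neutron production switched off (k_prompt).
            Returns (beta_eff, crude 1-sigma uncertainty from first-order propagation,
            assuming uncorrelated k estimates)."""
            beta = 1.0 - k_prompt / k_total
            sigma = (k_prompt / k_total) * ((sigma_kp / k_prompt) ** 2 +
                                            (sigma_k / k_total) ** 2) ** 0.5
            return beta, sigma

        print(beta_eff_prompt_method(1.00120, 0.99410, 0.00005, 0.00005))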

  13. Constant-pH Hybrid Nonequilibrium Molecular Dynamics–Monte Carlo Simulation Method

    Science.gov (United States)

    2016-01-01

    A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD–MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD–MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems. PMID:26300709
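
    As a rough illustration of the two-step scheme described above (and not the authors' implementation), the sketch below first applies an inexpensive Metropolis test based on the intrinsic pKa and only then generates a costly neMD switch, accepted on its residual nonequilibrium work. The helper names, the sign conventions, and the way the residual work is supplied are all simplifying assumptions.

        import math, random

        LN10 = math.log(10.0)

        def metropolis(delta_kt):
            """Accept with probability min(1, exp(-delta)), delta given in kT units."""
            return delta_kt <= 0.0 or random.random() < math.exp(-delta_kt)

        def attempt_titration(site, pH, beta_work_of_switch):
            """One hybrid two-step move for a single titratable site (schematic).

            site                : dict with keys 'protonated' (bool) and 'pKa_intrinsic'
            beta_work_of_switch : callable running the neMD switch and returning the
                                  residual nonequilibrium work in kT units
            """
            # Step 1: inexpensive inherent-pKa Metropolis test
            sign = +1.0 if site['protonated'] else -1.0   # +1: try to deprotonate
            dG_intrinsic = sign * LN10 * (site['pKa_intrinsic'] - pH)
            if not metropolis(dG_intrinsic):
                return False                              # no costly switch generated

            # Step 2: short neMD switching trajectory, accepted on its residual work
            if metropolis(beta_work_of_switch(site)):
                site['protonated'] = not site['protonated']
                return True
            return False

        # Toy usage: a 'switch' whose residual work is just Gaussian noise around 2 kT
        site = {'protonated': True, 'pKa_intrinsic': 4.0}
        print(attempt_titration(site, pH=7.0,
                                beta_work_of_switch=lambda s: random.gauss(2.0, 1.0)))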

  14. Constant-pH Hybrid Nonequilibrium Molecular Dynamics-Monte Carlo Simulation Method.

    Science.gov (United States)

    Chen, Yunjie; Roux, Benoît

    2015-08-11

    A computational method is developed to carry out explicit solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states that are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance arising from such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum reversal prescription. To achieve higher efficiency, the original neMD-MC scheme is separated into two steps, reducing the need for generating a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly attributed via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent pKa neMD-MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because of the simple linear increase in the computational cost relative to the number of titratable sites, the present method is naturally able to treat extremely large systems.

  15. ZZγ production in the NLO QCD+EW accuracy at the LHC

    Science.gov (United States)

    Yong, Wang; Ren-You, Zhang; Wen-Gan, Ma; Xiao-Zhou, Li; Shao-Ming, Wang; Huan-Yu, Bi

    2017-08-01

    In this paper we present the first study of the impact of the O(α) electroweak (EW) correction to the pp → ZZγ + X process at the CERN Large Hadron Collider. The subsequent Z-boson leptonic decays are considered at the leading order using the MadSpin method, which takes into account the spin-correlation and off-shell effects from the Z-boson decays. We provide numerical results for the integrated cross section and the kinematic distributions of this process. In coping with final-state photon-jet separation in the QCD real emission and photon-induced processes, we adopt both the Frixione isolated-photon plus jets algorithm and the phenomenological quark-to-photon fragmentation function method for comparison. We find that the next-to-leading order (NLO) EW correction to ZZγ production can be sizeable and amounts to about -7% of the integrated cross section, and provides a non-negligible contribution to the kinematic distributions, particularly in the high energy region. We conclude that the NLO EW correction should be included in precision theoretical predictions in order to match future experimental accuracy.

  16. Multilevel Monte Carlo methods using ensemble level mixed MsFEM for two-phase flow and transport simulations

    KAUST Repository

    Efendiev, Yalchin R.

    2013-08-21

    In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate
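
    For readers unfamiliar with the multilevel idea invoked above, the following is a minimal, generic MLMC telescoping estimator, not the ensemble-level MsFEM machinery of the paper: coarse-level samples are corrected by averaged fine-minus-coarse differences computed on the same random realizations. The solver interface sample_qoi and the toy demo are hypothetical.

        import random

        def mlmc_estimate(sample_qoi, n_samples_per_level):
            """Telescoping multilevel Monte Carlo estimator (generic sketch).

            sample_qoi(level, seed) -> quantity of interest computed with the
            (hypothetical) level-'level' solver for the random realization 'seed';
            level 0 is the coarsest/cheapest approximation.
            n_samples_per_level : list of sample counts N_0, N_1, ..., N_L
            """
            estimate = 0.0
            for level, n_l in enumerate(n_samples_per_level):
                corr = 0.0
                for _ in range(n_l):
                    seed = random.random()          # same realization on both levels
                    fine = sample_qoi(level, seed)
                    coarse = sample_qoi(level - 1, seed) if level > 0 else 0.0
                    corr += fine - coarse
                estimate += corr / n_l
            return estimate

        def demo(level, seed):
            """Hypothetical solver: converges to pi as the level increases."""
            return 3.14159 * (1.0 - 0.5 ** (level + 1)) * (1.0 + 0.01 * (seed - 0.5))

        print(mlmc_estimate(demo, [1000, 200, 50]))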

  17. Sustainable Queuing-Network Design for Airport Security Based on the Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Xiangqian Xu

    2018-01-01

    Full Text Available The design of airport queuing networks is currently a significant research field. Many factors must be considered in order to achieve optimized strategies, including the passenger flow volume, boarding time, and boarding order of passengers. Optimizing these factors leads to the sustainable development of the queuing network, which currently faces a few difficulties. In particular, the high variance in checkpoint lines can be extremely costly to passengers as they arrive unduly early or possibly miss their scheduled flights. In this article, the Monte Carlo method is used to design the queuing network so as to achieve sustainable development. Thereafter, a network diagram is used to determine the critical working point and to design a structurally and functionally sustainable network. Finally, a case study of a sustainable queuing-network design for an airport is conducted to verify the efficiency of the proposed model. Specifically, three sustainable queuing-network design solutions are proposed, all of which not only maintain the same standards of security, but also increase checkpoint throughput and reduce the variance of passenger waiting time.
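
    As a hedged, much-simplified illustration of Monte Carlo queue analysis (a single lane via the Lindley recursion, not the paper's full queuing network), the sketch below estimates the mean and variance of passenger waiting times; all arrival and service parameters are assumptions.

        import random, statistics

        def simulate_checkpoint(n_passengers=10000, mean_interarrival=12.0,
                                mean_service=10.0, seed=1):
            """Single-lane security checkpoint via the Lindley recursion (a toy model).
            Times in seconds; exponential interarrival and service times are an
            illustrative assumption."""
            rng = random.Random(seed)
            waits, w = [], 0.0
            for _ in range(n_passengers):
                service = rng.expovariate(1.0 / mean_service)
                interarrival = rng.expovariate(1.0 / mean_interarrival)
                w = max(0.0, w + service - interarrival)   # wait of the next passenger
                waits.append(w)
            return statistics.mean(waits), statistics.pvariance(waits)

        mean_wait, var_wait = simulate_checkpoint()
        print(f"mean wait {mean_wait:.1f} s, variance {var_wait:.1f} s^2")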

  18. Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods

    International Nuclear Information System (INIS)

    Kramer, Richard

    2010-01-01

    Full text. Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organ and tissues of interest. Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and will show dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)

  19. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    International Nuclear Information System (INIS)

    Kramer, Richard

    2011-01-01

    Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  20. Dose rate evaluation of body phantom behind ITER bio-shield wall using Monte Carlo method

    International Nuclear Information System (INIS)

    Beheshti, A.; Jabbari, I.; Karimian, A.; Abdi, M.

    2012-01-01

    One of the most critical risks to humans in a reactor environment is radiation exposure. Around the tokamak hall, personnel are exposed to a wide range of particles, including neutrons and photons. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion research and engineering project and the most advanced experimental tokamak fusion reactor. Assessment of dose rates and of the photon radiation due to neutron activation of the solid structures in ITER is important from the radiological point of view. The dosimetry considered here is therefore based on Deuterium-Tritium (DT) plasma burning with neutron production at 14.1 MeV. The aim of this study is to assess the amount of radiation received by a person behind the bio-shield wall during normal operation of ITER, considering neutron activation and delayed gammas. To this end, the ITER system and its components were simulated by the Monte Carlo method. To increase the accuracy and precision of the absorbed dose assessment, a body phantom was included in the simulation. The results show that the total dose rate just outside the bio-shield wall of the tokamak hall is less than ten percent of the annual occupational dose limit during normal operation of ITER, and they make it possible to estimate how long personnel can remain in that environment before the body absorbs dangerous levels of radiation. (authors)

  1. Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods

    Science.gov (United States)

    Kramer, Richard

    2011-08-01

    Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.

  2. Evaluation of functioning of an extrapolation chamber using Monte Carlo method

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Alfonso Laguardia, R.

    2015-01-01

    The extrapolation chamber is a variable-volume parallel-plate chamber based on the Bragg-Gray theory. It determines the absorbed dose absolutely and with high accuracy by extrapolating the ionization current measured down to a null distance between the electrodes. This chamber is used for the dosimetry of external beta rays for radiation protection. This paper presents a simulation to evaluate the functioning of a PTW type 23392 extrapolation chamber using the MCNPX Monte Carlo code. In the simulation, the fluence in the air collector cavity of the chamber was obtained. The influence of the materials that compose the chamber on its response to the beta radiation beam was also analysed. A comparison of the contributions of primary and secondary radiation was performed. The energy deposition in the air collector cavity was calculated for different depths. The component with the highest energy deposition is the polymethyl methacrylate block. The energy deposition in the air collector cavity is greatest for a chamber depth of 2500 μm, with a value of 9.708E-07 MeV. The fluence in the air collector cavity decreases with depth; its value is 1.758E-04 cm⁻² for a chamber depth of 500 μm. The values reported are for individual electron and photon histories. Graphics of the simulated parameters are presented in the paper. (Author)

  3. Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, Richard [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)

    2010-07-01

    Full text. Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organ and tissues of interest. Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and will show dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)

  4. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    International Nuclear Information System (INIS)

    Hall, Howard L.

    2012-01-01

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.
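
    To make the idea of Monte Carlo thermochromatography modeling concrete, here is a toy Zvara-style random walk in which a molecule is carried down a column with a linear temperature gradient and delayed by wall adsorption; it is a sketch under stated assumptions (the adsorption enthalpy, gas velocity, and collision frequency are all invented), not the model used in this work.

        import math, random

        R_GAS = 8.314  # J/(mol K)

        def deposition_positions(n_molecules=2000, length=100.0, dz=1.0, t_exp=3600.0,
                                 v_gas=0.5, hits_per_cm=1.0e5, dH_ads=-80.0e3,
                                 tau0=1.0e-12, T_start=1000.0, grad=-8.0, seed=7):
            """Toy random walk for gas thermochromatography. A molecule drifts down a
            column with a linear temperature gradient; in each segment it is delayed by
            wall adsorption with mean sojourn time tau0*exp(-dH_ads/(R*T)). Returns the
            position [cm] of each molecule when the experiment time runs out."""
            rng = random.Random(seed)
            positions = []
            for _ in range(n_molecules):
                z, t = 0.0, 0.0
                while z < length and t < t_exp:
                    T = max(T_start + grad * z, 150.0)            # local temperature [K]
                    tau = tau0 * math.exp(-dH_ads / (R_GAS * T))  # mean sticking time [s]
                    n_hits = hits_per_cm * dz                     # expected wall hits in segment
                    delay = max(0.0, rng.gauss(n_hits * tau, math.sqrt(n_hits) * tau))
                    t += dz / v_gas + delay                       # gas transit + adsorption delay
                    z += dz
                positions.append(min(z, length))
            return positions

        pos = deposition_positions()
        print(f"mean deposition position: {sum(pos) / len(pos):.1f} cm")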

  5. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    International Nuclear Information System (INIS)

    Garrison, J.R.; Hanson, D.E.; Hall, H.L.

    2012-01-01

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation. (author)

  6. Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method

    Science.gov (United States)

    Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.

    2018-01-01

    Improving the quality of products increases the requirements for the accuracy of the dimensions and shape of workpiece surfaces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tools for such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and non-uniform arrangement of control points are considered and compared. It is revealed that when the number of control points decreases, the arithmetic mean decreases, while the standard deviation of the measurement error and the probability of a measurement α-error increase. In general, it is established that the number of control points can be reduced several-fold while maintaining the required measurement accuracy.
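
    A minimal sketch of the kind of statistical modeling described above, assuming a perfectly flat surface probed with Gaussian noise: it shows how the apparent (peak-to-valley) flatness estimate and its spread change with the number of control points. The noise level and point counts are illustrative, not the authors' data.

        import random, statistics

        def flatness_error_stats(n_points, n_trials=2000, probe_sigma=0.002, seed=0):
            """Monte Carlo interval estimate of the flatness measurement error for a
            perfectly flat surface probed at n_points locations with Gaussian probe
            noise (mm)."""
            rng = random.Random(seed)
            errors = []
            for _ in range(n_trials):
                z = [rng.gauss(0.0, probe_sigma) for _ in range(n_points)]
                errors.append(max(z) - min(z))   # peak-to-valley estimate of a flat part
            return statistics.mean(errors), statistics.stdev(errors)

        for n in (4, 9, 16, 25):
            m, s = flatness_error_stats(n)
            print(f"{n:3d} points: mean apparent flatness {m*1000:.2f} um, sd {s*1000:.2f} um")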

  7. Evaluation of the scattered radiation components produced in a gamma camera using Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Polo, Ivon Oramas, E-mail: ivonoramas67@gmail.com [Department of Nuclear Engineering, Faculty of Nuclear Sciences and Technologies, Higher Institute of Applied Science and Technology (InSTEC), La Habana (Cuba)

    2014-07-01

    Introduction: this paper presents a simulation for the evaluation of the scattered radiation components produced in a PARK gamma camera using the Monte Carlo code SIMIND. It simulates a whole-body study with the MDP (methylene diphosphonate) radiopharmaceutical based on the Zubal anthropomorphic phantom with some spinal lesions. Methods: the simulation was done by comparing 3 configurations for the detected photons. The corresponding energy spectra were obtained using a Low Energy High Resolution collimator. The parameters related to the interactions, the fraction of events in the energy window, the simulated events of the spectrum and the scatter events were calculated. Results: the simulation confirmed that the images free of the influence of scattering events have a higher number of valid recorded events and better statistical quality. A comparison among different collimators was made. The parameters and detector energy spectrum were calculated for each simulation configuration with these collimators using 99mTc. Conclusion: the simulation corroborated that the LEHS collimator has higher sensitivity and the HEHR collimator has lower sensitivity when they are used with low energy photons. (author)

  8. Deterministic flows of order-parameters in stochastic processes of quantum Monte Carlo method

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi

    2010-01-01

    In terms of the stochastic process of the quantum-mechanical version of the Markov chain Monte Carlo method (MCMC), we analytically derive macroscopically deterministic flow equations of order parameters, such as the spontaneous magnetization, in infinite-range (d = ∞ dimensional) quantum spin systems. By means of the Trotter decomposition, we consider the transition probability of Glauber-type dynamics of microscopic states for the corresponding (d + 1)-dimensional classical system. Under the static approximation, differential equations with respect to macroscopic order parameters are explicitly obtained from the master equation that describes the microscopic law. In the steady state, we show that the equations are identical to the saddle-point equations for the equilibrium state of the same system. The equation for the dynamical Ising model is recovered in the classical limit. We also check the validity of the static approximation by making use of computer simulations for finite-size systems and discuss several possible extensions of our approach to disordered spin systems for statistical-mechanical informatics. In particular, we use our procedure to evaluate the decoding process of Bayesian image restoration. With the assistance of the concept of dynamical replica theory (DRT), we derive the zero-temperature flow equation of the image restoration measure, which shows some 'non-monotonic' behaviour in its time evolution.

  9. Application of the Monte Carlo method to estimate doses in a radioactive waste drum environment

    International Nuclear Information System (INIS)

    Rodenas, J.; Garcia, T.; Burgos, M.C.; Felipe, A.; Sanchez-Mayoral, M.L.

    2002-01-01

    During refuelling operations in a nuclear power plant, filtration is used to remove non-soluble radionuclides contained in the water from the reactor pool. The filter cartridges accumulate a high radioactivity, so they are usually placed into a drum. When the operation ends, the drum is filled with concrete and stored along with other drums containing radioactive wastes. Operators working in the refuelling plant near these radwaste drums can receive high dose rates. It is therefore convenient to estimate those doses in order to prevent risks and to apply the ALARA criterion for dose reduction to workers. The Monte Carlo method has been applied, using the MCNP 4B code, to simulate the drum containing the contaminated filters and to estimate the doses produced in the drum environment. In the paper, an analysis of the results obtained with the MCNP code has been performed. Thus, the influence of the distance from the drum and of interposed shielding barriers on the evaluated doses has been studied. The source term has also been analysed to check the importance of the isotope composition. Two different geometric models have been considered in order to simplify the calculations. The results have been compared with dose measurements in the plant in order to validate the calculation procedure. This work has been developed at the Nuclear Engineering Department of the Polytechnic University of Valencia in collaboration with IBERINCO, within the framework of an R&D project sponsored by IBERINCO

  10. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Science.gov (United States)

    Lu, Dan; Ricciuto, Daniel; Walker, Anthony; Safta, Cosmin; Munger, William

    2017-09-01

    Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.
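
    DREAM itself evolves multiple chains with differential-evolution proposals; as a hedged baseline, the sketch below shows the plain random-walk Metropolis loop that such adaptive schemes improve upon, applied to a toy calibration problem. The model, data, and step sizes are invented for illustration.

        import math, random

        def metropolis_calibrate(log_posterior, theta0, step_sizes, n_steps=50000, seed=42):
            """Minimal random-walk Metropolis sampler. log_posterior(theta) must return
            the unnormalized log posterior density of the parameter vector."""
            rng = random.Random(seed)
            theta, lp, chain = list(theta0), log_posterior(theta0), []
            for _ in range(n_steps):
                proposal = [t + rng.gauss(0.0, s) for t, s in zip(theta, step_sizes)]
                lp_new = log_posterior(proposal)
                if math.log(rng.random()) < lp_new - lp:   # Metropolis acceptance
                    theta, lp = proposal, lp_new
                chain.append(list(theta))
            return chain

        # Toy example: calibrate the mean and log-sigma of a Gaussian "model" to data
        data = [1.2, 0.8, 1.1, 0.9, 1.3]
        def log_post(theta):
            mu, log_sig = theta
            sig = math.exp(log_sig)
            return -len(data) * log_sig - sum((d - mu) ** 2 for d in data) / (2 * sig ** 2)

        samples = metropolis_calibrate(log_post, [0.0, 0.0], [0.1, 0.1])
        burned = samples[10000:]
        print(sum(s[0] for s in burned) / len(burned))  # posterior mean of mu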

  11. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    Science.gov (United States)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
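
    The reweighting idea referred to above can be illustrated with the simpler single-histogram (Ferrenberg-Swendsen) estimator, shown below as a hedged sketch rather than the full multihistogram machinery: samples generated at one inverse temperature are reweighted to estimate an average at a nearby one. The sample values are made up.

        import math

        def reweight(energies, observables, beta0, beta):
            """Single-histogram reweighting: estimate <A> at inverse temperature beta
            from samples generated at beta0, stabilized by subtracting the maximum
            exponent before exponentiating."""
            exps = [-(beta - beta0) * e for e in energies]
            m = max(exps)
            weights = [math.exp(x - m) for x in exps]
            norm = sum(weights)
            return sum(a * w for a, w in zip(observables, weights)) / norm

        # Toy usage with made-up samples: the energy itself as the observable
        E = [-10.2, -9.8, -10.5, -9.9, -10.1]
        print(reweight(E, E, beta0=1.00, beta=1.05))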

  12. Bayesian calibration of terrestrial ecosystem models: a study of advanced Markov chain Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    D. Lu

    2017-09-01

    Full Text Available Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC sampling provides a comprehensive framework to estimate model parameters and associated uncertainties using their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. The calibration of DREAM results in a better model fit and predictive performance compared to the popular adaptive Metropolis (AM scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions while AM only identifies one mode. The application suggests that DREAM is very suitable to calibrate complex terrestrial ecosystem models, where the uncertain parameter size is usually large and existence of local optima is always a concern. In addition, this effort justifies the assumptions of the error model used in Bayesian calibration according to the residual analysis. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the consequent constructed likelihood function can alleviate the underestimation of parameter uncertainty that is usually caused by using uncorrelated error models.

  13. Tracer diffusion in an ordered alloy: application of the path probability and Monte Carlo methods

    International Nuclear Information System (INIS)

    Sato, Hiroshi; Akbar, S.A.; Murch, G.E.

    1984-01-01

    The tracer diffusion technique has been extensively utilized to investigate diffusion phenomena and has contributed a great deal to the understanding of these phenomena. However, except for self-diffusion and impurity diffusion, the meaning of tracer diffusion is not yet satisfactorily understood. Here we try to extend this understanding to concentrated alloys. Our major interest is directed towards understanding the physical factors which control diffusion, through the comparison of results obtained by the Path Probability Method (PPM) with those obtained by the Monte Carlo simulation method (MCSM). Both the PPM and the MCSM are basically in the same category of statistical mechanical approaches applicable to random processes. The advantage of the Path Probability Method in dealing with phenomena which occur in crystalline systems has been well established. However, the approximations which are inevitably introduced to make the analytical treatment tractable, although their meaning may be well established in equilibrium statistical mechanics, sometimes introduce unwarranted consequences whose origin is often hard to trace. On the other hand, the MCSM, which can be carried out in a fashion parallel to the PPM, provides, with care, numerically exact results. Thus a side-by-side comparison can give insight into the effect of the approximations in the PPM. It was found that in the pair approximation of the CVM, the distribution in the completely random state is regarded as homogeneous (without fluctuations), and hence the fluctuation in the distribution is not well represented in the PPM. These examples thus show clearly how the comparison of analytical results with carefully carried out calculations by the MCSM guides the progress of theoretical treatments and gives insight into the mechanism of diffusion

  14. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    Science.gov (United States)

    Zhang, D.; Liao, Q.

    2016-12-01

    Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated into the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool for generating samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials using the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the differing importance of the parameters, which is relevant when the stochastic space has a high random dimension. Furthermore, in cases of low regularity, such as a discontinuous or non-smooth relation between the input parameters and the output responses, we introduce an additional transformation process to improve the accuracy of the surrogate model. Once the surrogate system is built, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of

  15. Evaluation of radiation dose to patients in intraoral dental radiography using Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Park, Il; Kim, Kyeong Ho; Oh, Seung Chul; Song, Ji Young [Dept. of Nuclear Engineering, Kyung Hee University, Yongin (Korea, Republic of)

    2016-11-15

    The use of dental radiographic examinations is common, although the radiation dose resulting from dental radiography is relatively small. It is nevertheless necessary to evaluate the radiation dose from dental radiography for radiation safety purposes. The objectives of the present study were to develop a dosimetry method for intraoral dental radiography using a Monte Carlo based radiation transport code and to calculate organ doses and effective doses to patients from different types of intraoral radiographs. The radiological properties of the dental radiography equipment were characterized for the evaluation of patient radiation dose. The properties, including the x-ray energy spectrum, were simulated using the MCNP code. Organ doses and effective doses to patients were calculated by MCNP simulation with computational adult phantoms. At the typical equipment settings (60 kVp, 7 mA, and 0.12 sec), the entrance air kerma was 1.79 mGy and the measured half value layer was 1.82 mm. The half value layer calculated by MCNP simulation agreed well with the measured value. Effective doses from intraoral radiographs ranged from 1 μSv for the maxilla premolar to 3 μSv for the maxilla incisor. The oral cavity layer (23-82 μSv) and the salivary glands (10-68 μSv) received relatively high radiation doses. The thyroid also received a high radiation dose (3-47 μSv) in these examinations. The dosimetry method developed and the radiation doses evaluated in this study can be utilized for policy making, patient dose management, and the development of low-dose equipment. In addition, this study can ultimately contribute to decreasing the radiation dose to patients for radiation safety.
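
    The effective dose quoted above is, by definition, the tissue-weighted sum E = Σ_T w_T H_T. The sketch below evaluates that sum for a few ICRP Publication 103 tissue weighting factors with hypothetical organ doses; it omits the 'remainder tissue' contribution (w_T = 0.12 spread over 13 tissues, which is how the oral mucosa would enter) and does not reproduce the paper's numbers.

        # A few ICRP Publication 103 tissue weighting factors (dimensionless)
        W_T = {'thyroid': 0.04, 'salivary_glands': 0.01, 'brain': 0.01, 'skin': 0.01}

        def effective_dose(organ_doses_usv):
            """E = sum_T w_T * H_T over the tissues provided (uSv)."""
            return sum(W_T[t] * h for t, h in organ_doses_usv.items())

        # Hypothetical organ equivalent doses for one intraoral projection (uSv)
        print(effective_dose({'thyroid': 30.0, 'salivary_glands': 40.0,
                              'brain': 5.0, 'skin': 2.0}))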

  16. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    Full Text Available A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for the flux distribution. The advanced level set method is implemented to simulate the time-dependent movements of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged particles and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator will serve as an accurate prediction tool for some MEMS fabrication processes.

  17. Application of the Monte Carlo method to the study of the response of an organic liquid scintillator irradiated by photons

    International Nuclear Information System (INIS)

    Dupre, Corinne.

    1982-10-01

    The Monte Carlo method was applied to simulate the transport of a photon beam in an organic liquid scintillation detector. The interactions of secondary gamma rays and electrons with the detector and its peripheral material components, such as the Pyrex glass container, are included. The calculated pulse-height spectra and detector efficiency are compared with measured results. The calculation and programming methods are presented, together with results concerning cobalt and cesium sources [fr

  18. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling

    Science.gov (United States)

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586

  19. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling.

    Science.gov (United States)

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β (+) emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.

  20. Improvement of the symbolic Monte-Carlo method for the transport equation: P1 extension and coupling with diffusion

    Energy Technology Data Exchange (ETDEWEB)

    Clouet, J.F.; Samba, G. [CEA Bruyeres-le-Chatel, 91 (France)

    2005-07-01

    We use asymptotic analysis to study the diffusion limit of the Symbolic Implicit Monte-Carlo (SIMC) method for the transport equation. For standard SIMC with piecewise constant basis functions, we demonstrate mathematically that the solution converges to the solution of an incorrect diffusion equation. Nevertheless, a simple extension to piecewise linear basis functions makes it possible to obtain the correct solution. This improvement allows calculations in an opaque medium on a mesh resolving the diffusion scale, which is much larger than the transport scale. However, the huge number of particles necessary to obtain a correct answer makes this computation time consuming. We have therefore derived from this asymptotic study a hybrid method coupling a deterministic calculation in the opaque medium with a Monte-Carlo calculation in the transparent medium. This method gives exactly the same results as the previous one but at a much lower cost. We present numerical examples which illustrate the analysis. (authors)

  1. Development of a consistent Monte Carlo-deterministic transport methodology based on the method of characteristics and MCNP5

    International Nuclear Information System (INIS)

    Karriem, Z.; Ivanov, K.; Zamonsky, O.

    2011-01-01

    This paper presents work that has been performed to develop an integrated Monte Carlo-deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general geometry transport problems. The essential developments presented are the use of MCNP as a geometry construction and ray tracing tool for the MOC, the verification of the ray-tracing indexing scheme that was developed to represent the MCNP geometry in the MOC, and the verification of the prototype 2-D MOC flux solver. (author)

  2. A method for tuning parameters of Monte Carlo generators and a determination of the unintegrated gluon density

    International Nuclear Information System (INIS)

    Bacchetta, Alessandro; Jung, Hannes; Kutak, Krzysztof

    2010-02-01

    A method for tuning parameters in Monte Carlo generators is described and applied to a specific case. The method works in the following way: each observable is generated several times using different values of the parameters to be tuned. The output is then approximated by some analytic form to describe the dependence of the observables on the parameters. This approximation is used to find the values of the parameter that give the best description of the experimental data. This results in significantly faster fitting compared to an approach in which the generator is called iteratively. As an application, we employ this method to fit the parameters of the unintegrated gluon density used in the Cascade Monte Carlo generator, using inclusive deep inelastic data measured by the H1 Collaboration. We discuss the results of the fit, its limitations, and its strong points. (orig.)
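
    As a toy, single-parameter illustration of the tuning strategy described above (not the actual Cascade/unintegrated-gluon fit), the sketch below interpolates one observable bin with an exact quadratic through three generator runs and then scans the parameter for the best chi-square against a made-up data point.

        def quadratic_through(points):
            """Exact quadratic through three (parameter, value) points via Lagrange
            interpolation, used as the analytic approximation of one observable bin
            as a function of a single tuning parameter p."""
            (x0, y0), (x1, y1), (x2, y2) = points
            def f(p):
                return (y0 * (p - x1) * (p - x2) / ((x0 - x1) * (x0 - x2)) +
                        y1 * (p - x0) * (p - x2) / ((x1 - x0) * (x1 - x2)) +
                        y2 * (p - x0) * (p - x1) / ((x2 - x0) * (x2 - x1)))
            return f

        # Hypothetical generator output for one bin at three parameter settings,
        # plus the measured value and its uncertainty for that bin
        bin_model = quadratic_through([(0.5, 12.0), (1.0, 9.0), (1.5, 7.5)])
        data, sigma = 8.2, 0.4

        # Scan the parameter and pick the value minimizing chi^2 for this single bin
        best_p = min((p / 100.0 for p in range(50, 151)),
                     key=lambda p: ((bin_model(p) - data) / sigma) ** 2)
        print(best_p, bin_model(best_p))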

  3. An equivalence relation and grey Dancoff factor calculated by monte Carlo method for irregular fuel assemblies

    International Nuclear Information System (INIS)

    Kim, Hyeong Heon

    2000-02-01

    The equivalence theorem, providing a relation between a homogeneous and a heterogeneous medium, has been used in resonance calculations for heterogeneous systems. The accuracy of a resonance calculation based on the equivalence theorem depends on how accurately the fuel collision probability is expressed by the rational terms. The fuel collision probability is related to the Dancoff factor in closely packed lattices. The calculation of the Dancoff factor is one of the most difficult problems in core analysis because the actual configuration of fuel elements in the lattice is very complex. Most reactor physics codes currently used are based on a roughly calculated black Dancoff factor, where the total cross section of the fuel is assumed to be infinite. Even the black Dancoff factors have not been calculated accurately, though many methods have been proposed. The equivalence theorem based on the black Dancoff factor inevitably causes some errors due to the approximations involved in the Dancoff factor calculation and in the derivation of the fuel collision probability, but these have not been evaluated seriously before. In this study, a Monte Carlo program - G-DANCOFF - was developed to calculate not only the traditional black Dancoff factor but also the grey Dancoff factor, where the medium is described realistically. G-DANCOFF calculates the Dancoff factor based on its collision probability definition for an arbitrary arrangement of cylindrical fuel pins in a full three-dimensional fashion. G-DANCOFF was verified by comparing black Dancoff factors calculated for geometries where accurate solutions are available. With 100,000 neutron histories, the results calculated by G-DANCOFF matched previous results within a maximum of 1%, and in most cases to within 0.2%. G-DANCOFF also provides graphical information on particle tracks, which makes it possible to calculate the Dancoff factor independently. The effects of the Dancoff factor on the criticality calculation

  4. Automating methods to improve precision in Monte-Carlo event generation for particle colliders

    Energy Technology Data Exchange (ETDEWEB)

    Gleisberg, Tanju

    2008-07-01

    The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are a key ingredient for the systematic improvement of the precision and confidence of theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson to massless gauge bosons, required for a number of channels in the Higgs boson search at the LHC, and anomalous gauge couplings parameterizing a number of models beyond the SM. Furthermore, a special treatment to deal with complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving both precision and efficiency. Part II was devoted to the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real-correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method, the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove

  5. Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)

    International Nuclear Information System (INIS)

    Pellegrino, Esteban

    2011-01-01

    Monte Carlo simulation is well suited to solving the Boltzmann neutron transport equation in inhomogeneous media and for complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes on a desktop PC. Interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is growing rapidly. This is due to the massive parallelism provided by the latest GPU technologies, which is the most promising solution to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. The results obtained in this work suggest that a speedup of several orders of magnitude is possible using state-of-the-art GPU technologies. (author) [es

  6. Study of the validity of a combined potential model using the Hybrid Reverse Monte Carlo method in Fluoride glass system

    Directory of Open Access Journals (Sweden)

    M. Kotbi

    2013-03-01

    Full Text Available The need to choose appropriate interaction models is among the major disadvantages of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. However, the RMC results are accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criterion. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The aim of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
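
    The energy-penalty acceptance rule that defines HRMC can be sketched as below; this is a schematic form (weighting conventions between the chi-square and the potential-energy terms vary between implementations), the numbers in the example are invented, and the combined Coulomb + Lennard-Jones model is only assumed to supply the energies.

        import math, random

        def hrmc_accept(chi2_old, chi2_new, u_old, u_new, kT, rng=random):
            """Hybrid Reverse Monte Carlo acceptance: the usual RMC chi-square test on
            the fit to experimental data plus an energy penalty from an interatomic
            potential (u_old/u_new)."""
            delta = 0.5 * (chi2_new - chi2_old) + (u_new - u_old) / kT
            return delta <= 0.0 or rng.random() < math.exp(-delta)

        # Example: a move that slightly worsens the fit but lowers the potential energy
        print(hrmc_accept(chi2_old=105.0, chi2_new=106.0, u_old=-3.20, u_new=-3.26, kT=0.025))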

  7. An Investigation of the Performance of the Unified Monte Carlo Method of Neutron Cross Section Data Evaluation

    International Nuclear Information System (INIS)

    Capote, Roberto; Smith, Donald L.

    2008-01-01

    The Unified Monte Carlo method (UMC) has been suggested to avoid certain limitations and approximations inherent to the well-known Generalized Least Squares (GLS) method of nuclear data evaluation. This contribution reports on an investigation of the performance of the UMC method in comparison with the GLS method. This is accomplished by applying both methods to simple examples with few input values that were selected to explore various features of the evaluation process that impact upon the quality of an evaluation. Among the issues explored are: i) convergence of UMC results with the number of Monte Carlo histories and the ranges of sampled values; ii) a comparison of Monte Carlo sampling using the Metropolis scheme and a brute force approach; iii) the effects of large data discrepancies; iv) the effects of large data uncertainties; v) the effects of strong or weak model or experimental data correlations; and vi) the impact of ratio data and integral data. Comparisons are also made of the evaluated results for these examples when the input values are first transformed to comparable logarithmic values prior to performing the evaluation. Some general conclusions that are applicable to more realistic evaluation exercises are offered
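
    The flavour of the comparison can be conveyed with a toy Python sketch (an assumed single-quantity setup, not the authors' test cases): two discrepant measurements are evaluated once by the GLS inverse-variance average and once by brute-force Monte Carlo sampling of the posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

data = np.array([10.0, 14.0])      # two discrepant measurements (invented)
unc = np.array([1.0, 1.0])         # their standard uncertainties

# GLS evaluation: inverse-variance weighted average
w = 1.0 / unc**2
gls_mean = np.sum(w * data) / np.sum(w)
gls_std = np.sqrt(1.0 / np.sum(w))

# Brute-force Monte Carlo: sample candidates from a broad prior and weight
# each sample by its likelihood given the data.
prior = rng.uniform(0.0, 30.0, 500_000)
loglike = -0.5 * np.sum(((data[None, :] - prior[:, None]) / unc[None, :])**2, axis=1)
weights = np.exp(loglike - loglike.max())
umc_mean = np.average(prior, weights=weights)
umc_std = np.sqrt(np.average((prior - umc_mean)**2, weights=weights))

print(f"GLS               : {gls_mean:.2f} +/- {gls_std:.2f}")
print(f"UMC (brute force) : {umc_mean:.2f} +/- {umc_std:.2f}")
```

    In this linear, Gaussian toy case the two evaluations agree, which is the regime in which the UMC and GLS approaches are expected to coincide; the interesting differences arise for the nonlinear, discrepant, or non-Gaussian cases the record explores.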

  8. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A.

    2014-08-01

    This work determines the detection efficiency of the identiFINDER detector for 125I and 131I in the thyroid using the Monte Carlo method. The suitability of the calibration method was analyzed by comparing the results of the direct Monte Carlo approach with the corrected one; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which minimized the uncertainties of the estimates. Finally, detector geometry-point source simulations were performed to obtain the correction factors at 5 cm, 15 cm and 25 cm, together with those corresponding to the detector-phantom arrangement used for method validation and the final efficiency calculation. These show that if the Monte Carlo model simulates a larger source-detector distance than the one used in the laboratory measurements, the efficiency is overestimated, while a shorter simulated distance leads to an underestimation; the simulation should therefore use the same distance at which the measurement will actually be performed. Efficiency curves and the minimum detectable activity for the measurement of 131I and 125I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an attractive way to compensate for the lack of standard solutions and phantoms, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones remain calibrated for the measurement of iodine in the thyroid. (author)

  9. Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues

    International Nuclear Information System (INIS)

    Harris, G.; Van Horn, R.

    1996-06-01

    The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL
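
    The contrast between a point estimate and a Monte Carlo analysis can be sketched in a few lines of Python; the exposure equation and input distributions below are generic, made-up illustrations, not INEL data.

```python
import numpy as np

# Point estimate vs. Monte Carlo propagation through a generic risk equation:
# risk = concentration * intake_rate * exposure_duration * slope_factor / body_weight
rng = np.random.default_rng(42)
N = 100_000

point = 0.5 * 2.0 * 350.0 * 1e-3 / 70.0          # single "reasonable maximum" estimate

conc = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=N)    # mg/L, assumed
intake = rng.triangular(1.0, 2.0, 3.0, size=N)               # L/day, assumed
duration = rng.uniform(200.0, 350.0, size=N)                 # days/year, assumed
slope = 1e-3                                                 # (mg/kg-day)^-1, fixed
weight = rng.normal(70.0, 10.0, size=N)                      # kg, assumed

risk = conc * intake * duration * slope / weight

print(f"point estimate      : {point:.2e}")
print(f"Monte Carlo mean    : {risk.mean():.2e}")
print(f"Monte Carlo 95th pct: {np.percentile(risk, 95):.2e}")
```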

  10. Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues

    Energy Technology Data Exchange (ETDEWEB)

    Harris, G.; Van Horn, R.

    1996-06-01

    The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.

  11. Dosimetry of 252Cf medical sources using Monte-Carlo methods

    International Nuclear Information System (INIS)

    Wierzbicki, J.G.; Roberts, W.; Rivard, M.J.; Fontanesi, J.

    1996-01-01

    Dosimetric measurements are the only way to calibrate radioactive sources. Dose distributions around sources have also been determined experimentally, but in recent years dramatic technological developments have made computer simulations an attractive method for dose distribution studies. Monte Carlo simulations are especially useful if the radiation field has several components with different biological properties. The dose in the vicinity of a 252Cf source has five components: primary fast neutrons, primary photons, secondary 2.2 MeV photons from the H(n,γ) reaction, protons from the 14N(n,p) reaction, and products of the boron neutron capture reaction if the tumor is augmented by 10B. The RBE values for these components are different, and their independent determination is essential for 252Cf brachytherapy. We used MCNP, a neutron-photon transport code, to calculate all five components of the total dose. The 252Cf medical source is 2.3 cm long and has a diameter of 2.8 mm. To construct along-away tables, we divided the volume into cells using concentric cylinders of the source length and planes perpendicular to the source. The code simulated both neutron and photon histories using cross sections provided by its library. The neutron/photon energy spectra and kerma values for the particular components were based on the most recent data available. Neutron/photon fluxes and dose rates were obtained for all cells. Based on these data, along-away tables were constructed for all components of the total dose, which will be entered into the treatment planning computer and used for total dose calculations with the appropriate RBE multipliers. Similar calculations may also be done for a 252Cf source of any design
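
    The final step described above, combining the separately tallied components with RBE multipliers, is simple enough to sketch in Python; the component dose rates and RBE values below are placeholders, not the paper's along-away data.

```python
# Combine the five 252Cf dose-rate components at one (along, away) grid point
# into a total biologically weighted dose rate (all numbers invented).
components = {                       # Gy/h
    "fast_neutron":            0.40,
    "primary_photon":          0.10,
    "capture_photon_2.2MeV":   0.05,   # from H(n, gamma)
    "proton_14N":              0.02,   # from 14N(n, p)
    "boron_capture":           0.08,   # only if the tumour is loaded with 10B
}

rbe = {                              # assumed RBE multipliers, for illustration only
    "fast_neutron":            6.0,
    "primary_photon":          1.0,
    "capture_photon_2.2MeV":   1.0,
    "proton_14N":              3.0,
    "boron_capture":           4.0,
}

physical_dose = sum(components.values())
weighted_dose = sum(components[k] * rbe[k] for k in components)

print(f"physical dose rate : {physical_dose:.3f} Gy/h")
print(f"RBE-weighted dose  : {weighted_dose:.3f} Gy-eq/h")
```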

  12. Bayesian prediction of future ice sheet volume using local approximation Markov chain Monte Carlo methods

    Science.gov (United States)

    Davis, A. D.; Heimbach, P.; Marzouk, Y.

    2017-12-01

    We develop a Bayesian inverse modeling framework for predicting future ice sheet volume with associated formal uncertainty estimates. Marine ice sheets are drained by fast-flowing ice streams, which we simulate using a flowline model. Flowline models depend on geometric parameters (e.g., basal topography), parameterized physical processes (e.g., calving laws and basal sliding), and climate parameters (e.g., surface mass balance), most of which are unknown or uncertain. Given observations of ice surface velocity and thickness, we define a Bayesian posterior distribution over static parameters, such as basal topography. We also define a parameterized distribution over variable parameters, such as future surface mass balance, which we assume are not informed by the data. Hyperparameters are used to represent climate change scenarios, and sampling their distributions mimics internal variation. For example, a warming climate corresponds to increasing mean surface mass balance but an individual sample may have periods of increasing or decreasing surface mass balance. We characterize the predictive distribution of ice volume by evaluating the flowline model given samples from the posterior distribution and the distribution over variable parameters. Finally, we determine the effect of climate change on future ice sheet volume by investigating how changing the hyperparameters affects the predictive distribution. We use state-of-the-art Bayesian computation to address computational feasibility. Characterizing the posterior distribution (using Markov chain Monte Carlo), sampling the full range of variable parameters and evaluating the predictive model is prohibitively expensive. Furthermore, the required resolution of the inferred basal topography may be very high, which is often challenging for sampling methods. Instead, we leverage regularity in the predictive distribution to build a computationally cheaper surrogate over the low dimensional quantity of interest (future ice
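
    The core computational idea, replacing most evaluations of an expensive forward model by a cheap local surrogate inside an MCMC loop, can be caricatured in Python as follows; the toy forward model, likelihood, neighbourhood rule, and tuning values are all assumptions for illustration and are far simpler than the authors' ice-sheet framework.

```python
import math
import random

def expensive_model(theta):
    """Stand-in for a costly flowline-model evaluation (invented)."""
    return math.sin(theta) + 0.1 * theta**2

OBS, NOISE = 0.8, 0.1
store = []                                   # cached (theta, model) evaluations

def model_value(theta, radius=0.05):
    """Use a crude local surrogate from two cached neighbours when possible."""
    near = [(t, y) for t, y in store if abs(t - theta) < radius]
    if len(near) >= 2:
        (t0, y0), (t1, y1) = near[0], near[-1]
        if t1 != t0:
            return y0 + (y1 - y0) * (theta - t0) / (t1 - t0)
    y = expensive_model(theta)               # otherwise pay for a real evaluation
    store.append((theta, y))
    return y

def log_post(theta):
    return -0.5 * ((model_value(theta) - OBS) / NOISE) ** 2 - 0.5 * theta**2

theta, lp = 0.0, log_post(0.0)
samples = []
for _ in range(20_000):                      # Metropolis random walk
    prop = theta + random.gauss(0.0, 0.2)
    lp_prop = log_post(prop)
    if random.random() < math.exp(min(0.0, lp_prop - lp)):
        theta, lp = prop, lp_prop
    samples.append(theta)

print(f"posterior mean ~ {sum(samples) / len(samples):.3f}, "
      f"expensive model calls: {len(store)}")
```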

  13. Modeling of continuous free-radical butadiene-styrene copolymerization process by the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    T. A. Mikhailova

    2016-01-01

    Full Text Available This paper proposes an algorithm, based on the Monte Carlo method, for modeling the continuous low-temperature free-radical emulsion copolymerization of butadiene and styrene. This process is the cornerstone of the industrial production of butadiene-styrene synthetic rubber, the most widespread large-capacity general-purpose rubber. The algorithm is based on simulating the growth of each macromolecule of the formed copolymer and tracking the processes it undergoes. The modeling accounts for the residence-time distribution of particles in the system, which makes it possible to study the process as it proceeds in a battery of serially connected polymerization reactors, each of which is treated as a continuous stirred tank reactor. Since the process is continuous, the continuous addition of fresh portions of the reaction mixture to the first reactor of the battery is taken into account. The constructed model makes it possible to study the molecular-weight and viscosity characteristics of the copolymerization product, to predict the mass content of butadiene and styrene in the copolymer, and to calculate the molecular-weight distribution of the product at any moment of the process. Computational experiments were used to analyze how the operating mode, including the regulator introduced during the process, affects the characteristics of the formed butadiene-styrene copolymer. Because the process involves monomers of two types, the model also allows the compositional heterogeneity of the product to be studied, i.e., the calculation of the composition distribution and of the distribution of macromolecules by size and structure. On the basis of the proposed algorithm, a software tool was created that tracks changes in the characteristics of the resulting product over time.
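
    A much-simplified Python sketch of the "grow each macromolecule" idea follows; it uses an assumed terminal-model scheme with invented reactivity ratios and termination probability, and omits the residence-time distribution and emulsion kinetics treated in the record.

```python
import random

R_BB, R_SS = 1.4, 0.5        # assumed reactivity ratios (butadiene, styrene)
F_B = 0.7                    # assumed mole fraction of butadiene in the feed
P_STOP = 0.002               # assumed per-step probability of chain termination

def grow_chain():
    """Grow one copolymer chain monomer by monomer with Monte Carlo choices."""
    chain = ['B' if random.random() < F_B else 'S']
    while random.random() > P_STOP:
        # terminal-model probability that the growing radical adds butadiene
        if chain[-1] == 'B':
            p_b = R_BB * F_B / (R_BB * F_B + (1 - F_B))
        else:
            p_b = F_B / (F_B + R_SS * (1 - F_B))
        chain.append('B' if random.random() < p_b else 'S')
    return chain

chains = [grow_chain() for _ in range(5_000)]
lengths = [len(c) for c in chains]
frac_b = sum(c.count('B') for c in chains) / sum(lengths)

print(f"number-average chain length        ~ {sum(lengths) / len(chains):.0f}")
print(f"butadiene fraction in the copolymer ~ {frac_b:.2f}")
```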

  14. Environmental dose rate assessment of ITER using the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    Karimian Alireza

    2014-01-01

    Full Text Available Exposure to radiation is one of the main sources of risk to staff employed in reactor facilities. The staff of a tokamak is exposed to a wide range of neutron and photon fields around the tokamak hall. The International Thermonuclear Experimental Reactor (ITER) is a nuclear fusion engineering project and the most advanced experimental tokamak in the world. From the radiobiological point of view, the assessment of ITER dose rates is particularly important. The aim of this study is the assessment of the amount of radiation in ITER during its normal operation, in a radial direction from the plasma chamber to the tokamak hall. To achieve this goal, the ITER system and its components were simulated by the Monte Carlo method using the MCNPX 2.6.0 code. Furthermore, the equivalent dose rates of some radiosensitive organs of the human body were calculated by using the medical internal radiation dose phantom. Our study is based on deuterium-tritium plasma burning with 14.1 MeV neutron production and also on photon radiation due to neutron activation. As our results show, the total equivalent dose rate outside the bioshield wall of the tokamak hall is about 1 mSv per year, which is less than the annual occupational dose rate limit during the normal operation of ITER. The equivalent dose rates of the radiosensitive organs show that the maximum dose rate belongs to the kidney. The data may help calculate how long the staff can stay in such an environment before the equivalent dose rates reach the whole-body dose limits.

  15. A validation of direct grey Dancoff factors results for cylindrical cells in cluster geometry by the Monte Carlo method

    International Nuclear Information System (INIS)

    Rodrigues, Leticia Jenisch; Bogado, Sergio; Vilhena, Marco T.

    2008-01-01

    The WIMS code is a well-known and one of the most widely used codes for nuclear core physics calculations. Recently, the PIJM module of the WIMS code was modified to allow the calculation of Grey Dancoff factors, for partially absorbing materials, using the alternative definition in terms of escape and collision probabilities. Grey Dancoff factors for the Canadian CANDU-37 and CANFLEX assemblies were calculated with PIJM at five symmetrically distinct fuel pin positions. The results, obtained via the Direct Method, i.e., by direct calculation of escape and collision probabilities, were satisfactory when compared with those from the literature. On the other hand, the PIJMC module was developed to calculate escape and collision probabilities using the Monte Carlo method. Modifications in this module were performed to determine Black Dancoff factors, considering perfectly absorbing fuel rods. In this work, we proceed further in the task of validating the Direct Method by the Monte Carlo approach. To this end, the PIJMC routine is modified to compute Grey Dancoff factors using the cited alternative definition. Results are reported for the mentioned CANDU-37 and CANFLEX assemblies obtained with PIJMC, at the same fuel pin positions as with PIJM. Good agreement is observed between the results from the Monte Carlo and Direct methods

  16. Evaluation of Monte Carlo Codes Regarding the Calculated Detector Response Function in NDP Method

    Energy Technology Data Exchange (ETDEWEB)

    Tuan, Hoang Sy Minh; Sun, Gwang Min; Park, Byung Gun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    The basis of NDP is the irradiation of a sample with a thermal or cold neutron beam and the subsequent release of charged particles due to neutron-induced exoergic charged-particle reactions. Neutrons interact with the nuclei of certain elements and release mono-energetic charged particles, e.g. alpha particles or protons, and recoil atoms. The depth profile of the analyzed element can be obtained by a linear transformation of the measured energy spectrum using the stopping power of the sample material. A few micrometers of the material can be analyzed nondestructively, and depth resolution on the order of 10 nm can be obtained, depending on the material, with the NDP method. In the NDP method, one of the first steps of the analytical process is a channel-energy calibration. This calibration is normally made with an experimental measurement of a NIST Standard Reference Material sample (SRM-93a). In this study, several Monte Carlo (MC) codes were used to calculate the Si detector response function when the detector records the energetic charged particles emitted from an analytical sample. In addition, these MC codes were also used to calculate the depth distributions of some light elements (10B, 3He, 6Li, etc.) in SRM-93a and SRM-2137 samples. These calculated profiles were compared with the experimental profiles and SIMS profiles. In this study, some popular MC neutron transport codes were tried and tested to calculate the detector response function in the NDP method. The simulations were modeled on the real CN-NDP system, which is part of the Cold Neutron Activation Station (CONAS) at HANARO (KAERI). The MC simulations are very successful at predicting the alpha peaks in the measured energy spectrum; the net area difference between the measured and predicted alpha peaks is less than 1%. A possible explanation for any remaining discrepancies might be the use of a poor cross-section data set in the MC codes for the transport of low-energy lithium atoms inside the silicon substrate.
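
    The energy-to-depth conversion at the heart of NDP can be illustrated with a minimal Python sketch under a constant-stopping-power approximation; the stopping-power value is an assumed number, not taken from this record.

```python
# Convert the measured residual energy of an emitted charged particle to the
# depth of the emitting atom below the sample surface (straight-path,
# constant-stopping-power approximation).

E0_KEV = 1472.0              # alpha energy from the dominant 10B(n, alpha)7Li branch (keV)
STOPPING_KEV_PER_NM = 0.23   # assumed mean stopping power of the alpha in the matrix

def depth_nm(measured_energy_kev):
    """Depth of origin: energy lost divided by the (assumed constant) stopping power."""
    return (E0_KEV - measured_energy_kev) / STOPPING_KEV_PER_NM

for e in (1472.0, 1400.0, 1300.0):
    print(f"E = {e:6.1f} keV  ->  depth ~ {depth_nm(e):6.1f} nm")
```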

  17. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB = 1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B → K*γ, B → K(*)l+l-, and Bs → μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well-separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues: a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit
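
    A bare-bones Python sketch of the adaptive importance-sampling idea behind Population Monte Carlo follows, on a toy bimodal 1-D "posterior" that echoes the multimodality issue mentioned above; the target, mixture form, adaptation rule, and all numbers are simplified assumptions, not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    """Toy posterior with two well-separated modes at -3 and +3."""
    return np.logaddexp(-0.5 * (x + 3.0)**2, -0.5 * (x - 3.0)**2)

# initial two-component Gaussian mixture proposal
means, stds, alphas = np.array([-1.0, 1.0]), np.array([2.0, 2.0]), np.array([0.5, 0.5])
N = 20_000

for it in range(8):
    comp = rng.choice(2, size=N, p=alphas)              # pick a mixture component
    x = rng.normal(means[comp], stds[comp])             # draw the sample
    dens = sum(a * np.exp(-0.5 * ((x - m) / s)**2) / (s * np.sqrt(2 * np.pi))
               for a, m, s in zip(alphas, means, stds)) # mixture proposal density
    w = np.exp(log_target(x)) / dens                    # importance weights
    w /= w.sum()
    for k in range(2):                                  # simplified component update
        sel = comp == k
        wk = w[sel]
        if wk.sum() > 0:
            means[k] = np.average(x[sel], weights=wk)
            stds[k] = max(np.sqrt(np.average((x[sel] - means[k])**2, weights=wk)), 0.1)
            alphas[k] = wk.sum()
    alphas /= alphas.sum()

print("adapted means :", np.round(means, 2))
print("adapted alphas:", np.round(alphas, 2))
```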

  18. Numerical investigation of turbomolecular pumps using the direct simulation Monte Carlo method with moving surfaces

    NARCIS (Netherlands)

    Versluis, R.; Dorsman, R.; Thielen, L.; Roos, M.E.

    2009-01-01

    A new approach for performing numerical direct simulation Monte Carlo (DSMC) simulations on turbomolecular pumps in the free molecular and transitional flow regimes is described. The chosen approach is to use surfaces that move relative to the grid to model the effect of rotors and stators on a gas

  19. RMCSANS-modelling the inter-particle term of small angle scattering data via the reverse Monte Carlo method

    International Nuclear Information System (INIS)

    Gereben, O; Pusztai, L; McGreevy, R L

    2010-01-01

    A new reverse Monte Carlo (RMC) method has been developed for creating three-dimensional structures in agreement with small angle scattering data. Extensive tests, using computer generated quasi-experimental data for aggregation processes via constrained RMC and Langevin molecular dynamics, were performed. The software is capable of fitting several consecutive time frames of scattering data, and movie-like visualization of the structure (and its evolution) either during or after the simulation is also possible.

  20. Comparison of Monte Carlo and fuzzy math simulation methods for quantitative microbial risk assessment.

    Science.gov (United States)

    Davidson, Valerie J; Ryks, Joanne

    2003-10-01

    The objective of food safety risk assessment is to quantify levels of risk for consumers as well as to design improved processing, distribution, and preparation systems that reduce exposure to acceptable limits. Monte Carlo simulation tools have been used to deal with the inherent variability in food systems, but these tools require substantial data for estimates of probability distributions. The objective of this study was to evaluate the use of fuzzy values to represent uncertainty. Fuzzy mathematics and Monte Carlo simulations were compared to analyze the propagation of uncertainty through a number of sequential calculations in two different applications: estimation of biological impacts and economic cost in a general framework and survival of Campylobacter jejuni in a sequence of five poultry processing operations. Estimates of the proportion of a population requiring hospitalization were comparable, but using fuzzy values and interval arithmetic resulted in more conservative estimates of mortality and cost, in terms of the intervals of possible values and mean values, compared to Monte Carlo calculations. In the second application, the two approaches predicted the same reduction in mean concentration (-4 log CFU/ml of rinse), but the limits of the final concentration distribution were wider for the fuzzy estimate (-3.3 to 5.6 log CFU/ml of rinse) compared to the probability estimate (-2.2 to 4.3 log CFU/ml of rinse). Interval arithmetic with fuzzy values considered all possible combinations in calculations and maximum membership grade for each possible result. Consequently, fuzzy results fully included distributions estimated by Monte Carlo simulations but extended to broader limits. When limited data defines probability distributions for all inputs, fuzzy mathematics is a more conservative approach for risk assessment than Monte Carlo simulations.
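
    The reason the fuzzy/interval result brackets the Monte Carlo spread can be seen in a tiny Python comparison; the two uncertain factors and their ranges below are made-up illustrations, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# two uncertain inputs (e.g. exposure and dose-response) as triangular numbers
a = rng.triangular(0.8, 1.0, 1.2, size=N)
b = rng.triangular(1.5, 2.0, 2.5, size=N)
mc = a * b                                   # Monte Carlo propagation of the product

# interval arithmetic on the same supports: worst-case combinations of endpoints,
# the building block of the alpha-cut (fuzzy) calculation
lo = min(0.8 * 1.5, 0.8 * 2.5, 1.2 * 1.5, 1.2 * 2.5)
hi = max(0.8 * 1.5, 0.8 * 2.5, 1.2 * 1.5, 1.2 * 2.5)

print(f"Monte Carlo: mean {mc.mean():.2f}, 1st-99th pct "
      f"[{np.percentile(mc, 1):.2f}, {np.percentile(mc, 99):.2f}]")
print(f"Interval   : [{lo:.2f}, {hi:.2f}]  (wider, worst-case bounds)")
```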

  1. Validating a virtual source model based in Monte Carlo Method for profiles and percent deep doses calculation

    Energy Technology Data Exchange (ETDEWEB)

    Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)

    2017-07-01

    The Monte Carlo method for radiation transport has been adapted for medical physics applications and, more specifically, has received increasing attention in clinical treatment planning with the development of more efficient computer simulation techniques. In linear accelerator modeling by the Monte Carlo method, the phase space data file (phsp) is widely used. However, to obtain precise results, detailed information about the accelerator head is necessary, and the supplier commonly does not provide all the required data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications: it is the most efficient method for particle generation and can provide accuracy similar to that obtained when the phsp is used. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for the calculation of profiles and percent depth doses. Two different open field sizes (40 x 40 cm² and 40√2 x 40√2 cm²) were used, and two different source-to-surface distances (SSD) were applied: the standard 100 cm and a custom SSD of 370 cm, which is used in radiotherapy treatments with total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. This model is easy to build and test. (author)

  2. Modeling dose-rate on/over the surface of cylindrical radio-models using Monte Carlo methods

    International Nuclear Information System (INIS)

    Xiao Xuefu; Ma Guoxue; Wen Fuping; Wang Zhongqi; Wang Chaohui; Zhang Jiyun; Huang Qingbo; Zhang Jiaqiu; Wang Xinxing; Wang Jun

    2004-01-01

    Objective: To determine the dose rates on and above the surface of 10 cylindrical radio-models belonging to the Metrology Station of the Radio-Geological Survey of CNNC. Methods: The dose rates on and above the surface of the 10 cylindrical radio-models were modeled using the well-known Monte Carlo code MCNP and were also measured with a high-pressure gas ionization chamber dose-rate meter. The dose-rate values modeled with the MCNP code were compared with those obtained by the authors in the present experimental measurements and with those obtained previously by other workers. Some factors causing the discrepancy between the MCNP results and the data obtained by other methods are discussed in this paper. Results: The dose rates on and above the surface of the 10 cylindrical radio-models obtained with the MCNP code were in good agreement with those obtained by other workers using the theoretical method; the discrepancy was generally within ±5%, and the maximum discrepancy was less than 10%. Conclusions: Provided that every input required by the Monte Carlo code is correct, the dose rates on and above the surface of the cylindrical radio-models computed with the code are accurate to within an uncertainty of about 3%

  3. Measurements of the ZZ production cross sections in the $2\\ell2\

    CERN Document Server

    Khachatryan, Vardan; Tumasyan, Armen; Adam, Wolfgang; Bergauer, Thomas; Dragicevic, Marko; Erö, Janos; Friedl, Markus; Fruehwirth, Rudolf; Ghete, Vasile Mihai; Hartl, Christian; Hörmann, Natascha; Hrubec, Josef; Jeitler, Manfred; Kiesenhofer, Wolfgang; Knünz, Valentin; Krammer, Manfred; Krätschmer, Ilse; Liko, Dietrich; Mikulec, Ivan; Rabady, Dinyar; Rahbaran, Babak; Rohringer, Herbert; Schöfbeck, Robert; Strauss, Josef; Treberer-Treberspurg, Wolfgang; Waltenberger, Wolfgang; Wulz, Claudia-Elisabeth; Mossolov, Vladimir; Shumeiko, Nikolai; Suarez Gonzalez, Juan; Alderweireldt, Sara; Bansal, Sunil; Cornelis, Tom; De Wolf, Eddi A; Janssen, Xavier; Knutsson, Albert; Lauwers, Jasper; Luyckx, Sten; Ochesanu, Silvia; Rougny, Romain; Van De Klundert, Merijn; Van Haevermaet, Hans; Van Mechelen, Pierre; Van Remortel, Nick; Van Spilbeeck, Alex; Blekman, Freya; Blyweert, Stijn; D'Hondt, Jorgen; Daci, Nadir; Heracleous, Natalie; Keaveney, James; Lowette, Steven; Maes, Michael; Olbrechts, Annik; Python, Quentin; Strom, Derek; Tavernier, Stefaan; Van Doninck, Walter; Van Mulders, Petra; Van Onsem, Gerrit Patrick; Villella, Ilaria; Caillol, Cécile; Clerbaux, Barbara; De Lentdecker, Gilles; Dobur, Didar; Favart, Laurent; Gay, Arnaud; Grebenyuk, Anastasia; Léonard, Alexandre; Mohammadi, Abdollah; Perniè, Luca; Randle-conde, Aidan; Reis, Thomas; Seva, Tomislav; Thomas, Laurent; Vander Velde, Catherine; Vanlaer, Pascal; Wang, Jian; Zenoni, Florian; Adler, Volker; Beernaert, Kelly; Benucci, Leonardo; Cimmino, Anna; Costantini, Silvia; Crucy, Shannon; Dildick, Sven; Fagot, Alexis; Garcia, Guillaume; Mccartin, Joseph; Ocampo Rios, Alberto Andres; Ryckbosch, Dirk; Salva Diblen, Sinem; Sigamani, Michael; Strobbe, Nadja; Thyssen, Filip; Tytgat, Michael; Yazgan, Efe; Zaganidis, Nicolas; Basegmez, Suzan; Beluffi, Camille; Bruno, Giacomo; Castello, Roberto; Caudron, Adrien; Ceard, Ludivine; Da Silveira, Gustavo Gil; Delaere, Christophe; Du Pree, Tristan; Favart, Denis; Forthomme, Laurent; Giammanco, Andrea; Hollar, Jonathan; Jafari, Abideh; Jez, Pavel; Komm, Matthias; Lemaitre, Vincent; Nuttens, Claude; Pagano, Davide; Perrini, Lucia; Pin, Arnaud; Piotrzkowski, Krzysztof; Popov, Andrey; Quertenmont, Loic; Selvaggi, Michele; Vidal Marono, Miguel; Vizan Garcia, Jesus Manuel; Beliy, Nikita; Caebergs, Thierry; Daubie, Evelyne; Hammad, Gregory Habib; Aldá Júnior, Walter Luiz; Alves, Gilvan; Brito, Lucas; Correa Martins Junior, Marcos; Dos Reis Martins, Thiago; Mora Herrera, Clemencia; Pol, Maria Elena; Rebello Teles, Patricia; Carvalho, Wagner; Chinellato, Jose; Custódio, Analu; Melo Da Costa, Eliza; De Jesus Damiao, Dilson; De Oliveira Martins, Carley; Fonseca De Souza, Sandro; Malbouisson, Helena; Matos Figueiredo, Diego; Mundim, Luiz; Nogima, Helio; Prado Da Silva, Wanda Lucia; Santaolalla, Javier; Santoro, Alberto; Sznajder, Andre; Tonelli Manganote, Edmilson José; Vilela Pereira, Antonio; Bernardes, Cesar Augusto; Dogra, Sunil; Tomei, Thiago; De Moraes Gregores, Eduardo; Mercadante, Pedro G; Novaes, Sergio F; Padula, Sandra; Aleksandrov, Aleksandar; Genchev, Vladimir; Hadjiiska, Roumyana; Iaydjiev, Plamen; Marinov, Andrey; Piperov, Stefan; Rodozov, Mircho; Sultanov, Georgi; Vutova, Mariana; Dimitrov, Anton; Glushkov, Ivan; Litov, Leander; Pavlov, Borislav; Petkov, Peicho; Bian, Jian-Guo; Chen, Guo-Ming; Chen, He-Sheng; Chen, Mingshui; Cheng, Tongguang; Du, Ran; Jiang, Chun-Hua; Plestina, Roko; Romeo, Francesco; Tao, Junquan; Wang, Zheng; Asawatangtrakuldee, Chayanit; Ban, Yong; Li, Qiang; Liu, Shuai; Mao, Yajun; 
Qian, Si-Jin; Wang, Dayong; Xu, Zijun; Zou, Wei; Avila, Carlos; Cabrera, Andrés; Chaparro Sierra, Luisa Fernanda; Florez, Carlos; Gomez, Juan Pablo; Gomez Moreno, Bernardo; Sanabria, Juan Carlos; Godinovic, Nikola; Lelas, Damir; Polic, Dunja; Puljak, Ivica; Antunovic, Zeljko; Kovac, Marko; Brigljevic, Vuko; Kadija, Kreso; Luetic, Jelena; Mekterovic, Darko; Sudic, Lucija; Attikis, Alexandros; Mavromanolakis, Georgios; Mousa, Jehad; Nicolaou, Charalambos; Ptochos, Fotios; Razis, Panos A; Bodlak, Martin; Finger, Miroslav; Finger Jr, Michael; Assran, Yasser; Ellithi Kamel, Ali; Mahmoud, Mohammed; Radi, Amr; Kadastik, Mario; Murumaa, Marion; Raidal, Martti; Tiko, Andres; Eerola, Paula; Fedi, Giacomo; Voutilainen, Mikko; Härkönen, Jaakko; Karimäki, Veikko; Kinnunen, Ritva; Kortelainen, Matti J; Lampén, Tapio; Lassila-Perini, Kati; Lehti, Sami; Lindén, Tomas; Luukka, Panja-Riina; Mäenpää, Teppo; Peltola, Timo; Tuominen, Eija; Tuominiemi, Jorma; Tuovinen, Esa; Wendland, Lauri; Talvitie, Joonas; Tuuva, Tuure; Besancon, Marc; Couderc, Fabrice; Dejardin, Marc; Denegri, Daniel; Fabbro, Bernard; Faure, Jean-Louis; Favaro, Carlotta; Ferri, Federico; Ganjour, Serguei; Givernaud, Alain; Gras, Philippe; Hamel de Monchenault, Gautier; Jarry, Patrick; Locci, Elizabeth; Malcles, Julie; Rander, John; Rosowsky, André; Titov, Maksym; Baffioni, Stephanie; Beaudette, Florian; Busson, Philippe; Charlot, Claude; Dahms, Torsten; Dalchenko, Mykhailo; Dobrzynski, Ludwik; Filipovic, Nicolas; Florent, Alice; Granier de Cassagnac, Raphael; Mastrolorenzo, Luca; Miné, Philippe; Mironov, Camelia; Naranjo, Ivo Nicolas; Nguyen, Matthew; Ochando, Christophe; Ortona, Giacomo; Paganini, Pascal; Regnard, Simon; Salerno, Roberto; Sauvan, Jean-Baptiste; Sirois, Yves; Veelken, Christian; Yilmaz, Yetkin; Zabi, Alexandre; Agram, Jean-Laurent; Andrea, Jeremy; Aubin, Alexandre; Bloch, Daniel; Brom, Jean-Marie; Chabert, Eric Christian; Collard, Caroline; Conte, Eric; Fontaine, Jean-Charles; Gelé, Denis; Goerlach, Ulrich; Goetzmann, Christophe; Le Bihan, Anne-Catherine; Skovpen, Kirill; Van Hove, Pierre; Gadrat, Sébastien; Beauceron, Stephanie; Beaupere, Nicolas; Bernet, Colin; Boudoul, Gaelle; Bouvier, Elvire; Brochet, Sébastien; Carrillo Montoya, Camilo Andres; Chasserat, Julien; Chierici, Roberto; Contardo, Didier; Depasse, Pierre; El Mamouni, Houmani; Fan, Jiawei; Fay, Jean; Gascon, Susan; Gouzevitch, Maxime; Ille, Bernard; Kurca, Tibor; Lethuillier, Morgan; Mirabito, Laurent; Perries, Stephane; Ruiz Alvarez, José David; Sabes, David; Sgandurra, Louis; Sordini, Viola; Vander Donckt, Muriel; Verdier, Patrice; Viret, Sébastien; Xiao, Hong; Tsamalaidze, Zviad; Autermann, Christian; Beranek, Sarah; Bontenackels, Michael; Edelhoff, Matthias; Feld, Lutz; Heister, Arno; Hindrichs, Otto; Klein, Katja; Ostapchuk, Andrey; Preuten, Marius; Raupach, Frank; Sammet, Jan; Schael, Stefan; Schulte, Jan-Frederik; Weber, Hendrik; Wittmer, Bruno; Zhukov, Valery; Ata, Metin; Brodski, Michael; Dietz-Laursonn, Erik; Duchardt, Deborah; Erdmann, Martin; Fischer, Robert; Güth, Andreas; Hebbeker, Thomas; Heidemann, Carsten; Hoepfner, Kerstin; Klingebiel, Dennis; Knutzen, Simon; Kreuzer, Peter; Merschmeyer, Markus; Meyer, Arnd; Millet, Philipp; Olschewski, Mark; Padeken, Klaas; Papacz, Paul; Reithler, Hans; Schmitz, Stefan Antonius; Sonnenschein, Lars; Teyssier, Daniel; Thüer, Sebastian; Weber, Martin; Cherepanov, Vladimir; Erdogan, Yusuf; Flügge, Günter; Geenen, Heiko; Geisler, Matthias; Haj Ahmad, Wael; Hoehle, Felix; Kargoll, Bastian; Kress, Thomas; Kuessel, 
Yvonne; Künsken, Andreas; Lingemann, Joschka; Nowack, Andreas; Nugent, Ian Michael; Pooth, Oliver; Stahl, Achim; Aldaya Martin, Maria; Asin, Ivan; Bartosik, Nazar; Behr, Joerg; Behrens, Ulf; Bell, Alan James; Bethani, Agni; Borras, Kerstin; Burgmeier, Armin; Cakir, Altan; Calligaris, Luigi; Campbell, Alan; Choudhury, Somnath; Costanza, Francesco; Diez Pardos, Carmen; Dolinska, Ganna; Dooling, Samantha; Dorland, Tyler; Eckerlin, Guenter; Eckstein, Doris; Eichhorn, Thomas; Flucke, Gero; Garay Garcia, Jasone; Geiser, Achim; Gunnellini, Paolo; Hauk, Johannes; Hempel, Maria; Jung, Hannes; Kalogeropoulos, Alexis; Kasemann, Matthias; Katsas, Panagiotis; Kieseler, Jan; Kleinwort, Claus; Korol, Ievgen; Krücker, Dirk; Lange, Wolfgang; Leonard, Jessica; Lipka, Katerina; Lobanov, Artur; Lohmann, Wolfgang; Lutz, Benjamin; Mankel, Rainer; Marfin, Ihar; Melzer-Pellmann, Isabell-Alissandra; Meyer, Andreas Bernhard; Mittag, Gregor; Mnich, Joachim; Mussgiller, Andreas; Naumann-Emme, Sebastian; Nayak, Aruna; Ntomari, Eleni; Perrey, Hanno; Pitzl, Daniel; Placakyte, Ringaile; Raspereza, Alexei; Ribeiro Cipriano, Pedro M; Roland, Benoit; Ron, Elias; Sahin, Mehmet Özgür; Salfeld-Nebgen, Jakob; Saxena, Pooja; Schoerner-Sadenius, Thomas; Schröder, Matthias; Seitz, Claudia; Spannagel, Simon; Vargas Trevino, Andrea Del Rocio; Walsh, Roberval; Wissing, Christoph; Blobel, Volker; Centis Vignali, Matteo; Draeger, Arne-Rasmus; Erfle, Joachim; Garutti, Erika; Goebel, Kristin; Görner, Martin; Haller, Johannes; Hoffmann, Malte; Höing, Rebekka Sophie; Junkes, Alexandra; Kirschenmann, Henning; Klanner, Robert; Kogler, Roman; Lange, Jörn; Lapsien, Tobias; Lenz, Teresa; Marchesini, Ivan; Ott, Jochen; Peiffer, Thomas; Perieanu, Adrian; Pietsch, Niklas; Poehlsen, Jennifer; Pöhlsen, Thomas; Rathjens, Denis; Sander, Christian; Schettler, Hannes; Schleper, Peter; Schlieckau, Eike; Schmidt, Alexander; Seidel, Markus; Sola, Valentina; Stadie, Hartmut; Steinbrück, Georg; Troendle, Daniel; Usai, Emanuele; Vanelderen, Lukas; Vanhoefer, Annika; Barth, Christian; Baus, Colin; Berger, Joram; Böser, Christian; Butz, Erik; Chwalek, Thorsten; De Boer, Wim; Descroix, Alexis; Dierlamm, Alexander; Feindt, Michael; Frensch, Felix; Giffels, Manuel; Gilbert, Andrew; Hartmann, Frank; Hauth, Thomas; Husemann, Ulrich; Katkov, Igor; Kornmayer, Andreas; Kuznetsova, Ekaterina; Lobelle Pardo, Patricia; Mozer, Matthias Ulrich; Müller, Thomas; Müller, Thomas; Nürnberg, Andreas; Quast, Gunter; Rabbertz, Klaus; Röcker, Steffen; Simonis, Hans-Jürgen; Stober, Fred-Markus Helmut; Ulrich, Ralf; Wagner-Kuhr, Jeannine; Wayand, Stefan; Weiler, Thomas; Wolf, Roger; Anagnostou, Georgios; Daskalakis, Georgios; Geralis, Theodoros; Giakoumopoulou, Viktoria Athina; Kyriakis, Aristotelis; Loukas, Demetrios; Markou, Athanasios; Markou, Christos; Psallidas, Andreas; Topsis-Giotis, Iasonas; Agapitos, Antonis; Kesisoglou, Stilianos; Panagiotou, Apostolos; Saoulidou, Niki; Stiliaris, Efstathios; Aslanoglou, Xenofon; Evangelou, Ioannis; Flouris, Giannis; Foudas, Costas; Kokkas, Panagiotis; Manthos, Nikolaos; Papadopoulos, Ioannis; Paradas, Evangelos; Strologas, John; Bencze, Gyorgy; Hajdu, Csaba; Hidas, Pàl; Horvath, Dezso; Sikler, Ferenc; Veszpremi, Viktor; Vesztergombi, Gyorgy; Zsigmond, Anna Julia; Beni, Noemi; Czellar, Sandor; Karancsi, János; Molnar, Jozsef; Palinkas, Jozsef; Szillasi, Zoltan; Makovec, Alajos; Raics, Peter; Trocsanyi, Zoltan Laszlo; Ujvari, Balazs; Swain, Sanjay Kumar; Beri, Suman Bala; Bhatnagar, Vipin; Gupta, Ruchi; Bhawandeep, Bhawandeep; Kalsi, Amandeep 
Kaur; Kaur, Manjit; Kumar, Ramandeep; Mittal, Monika; Nishu, Nishu; Singh, Jasbir; Kumar, Ashok; Kumar, Arun; Ahuja, Sudha; Bhardwaj, Ashutosh; Choudhary, Brajesh C; Kumar, Ajay; Malhotra, Shivali; Naimuddin, Md; Ranjan, Kirti; Sharma, Varun; Banerjee, Sunanda; Bhattacharya, Satyaki; Chatterjee, Kalyanmoy; Dutta, Suchandra; Gomber, Bhawna; Jain, Sandhya; Jain, Shilpi; Khurana, Raman; Modak, Atanu; Mukherjee, Swagata; Roy, Debarati; Sarkar, Subir; Sharan, Manoj; Abdulsalam, Abdulla; Dutta, Dipanwita; Kumar, Vineet; Mohanty, Ajit Kumar; Pant, Lalit Mohan; Shukla, Prashant; Topkar, Anita; Aziz, Tariq; Banerjee, Sudeshna; Bhowmik, Sandeep; Chatterjee, Rajdeep Mohan; Dewanjee, Ram Krishna; Dugad, Shashikant; Ganguly, Sanmay; Ghosh, Saranya; Guchait, Monoranjan; Gurtu, Atul; Kole, Gouranga; Kumar, Sanjeev; Maity, Manas; Majumder, Gobinda; Mazumdar, Kajari; Mohanty, Gagan Bihari; Parida, Bibhuti; Sudhakar, Katta; Wickramage, Nadeesha; Bakhshiansohi, Hamed; Behnamian, Hadi; Etesami, Seyed Mohsen; Fahim, Ali; Goldouzian, Reza; Khakzad, Mohsen; Mohammadi Najafabadi, Mojtaba; Naseri, Mohsen; Paktinat Mehdiabadi, Saeid; Rezaei Hosseinabadi, Ferdos; Safarzadeh, Batool; Zeinali, Maryam; Felcini, Marta; Grunewald, Martin; Abbrescia, Marcello; Calabria, Cesare; Chhibra, Simranjit Singh; Colaleo, Anna; Creanza, Donato; De Filippis, Nicola; De Palma, Mauro; Fiore, Luigi; Iaselli, Giuseppe; Maggi, Giorgio; Maggi, Marcello; My, Salvatore; Nuzzo, Salvatore; Pompili, Alexis; Pugliese, Gabriella; Radogna, Raffaella; Selvaggi, Giovanna; Sharma, Archana; Silvestris, Lucia; Venditti, Rosamaria; Verwilligen, Piet; Abbiendi, Giovanni; Benvenuti, Alberto; Bonacorsi, Daniele; Braibant-Giacomelli, Sylvie; Brigliadori, Luca; Campanini, Renato; Capiluppi, Paolo; Castro, Andrea; Cavallo, Francesca Romana; Codispoti, Giuseppe; Cuffiani, Marco; Dallavalle, Gaetano-Marco; Fabbri, Fabrizio; Fanfani, Alessandra; Fasanella, Daniele; Giacomelli, Paolo; Grandi, Claudio; Guiducci, Luigi; Marcellini, Stefano; Masetti, Gianni; Montanari, Alessandro; Navarria, Francesco; Perrotta, Andrea; Primavera, Federica; Rossi, Antonio; Rovelli, Tiziano; Siroli, Gian Piero; Tosi, Nicolò; Travaglini, Riccardo; Albergo, Sebastiano; Cappello, Gigi; Chiorboli, Massimiliano; Costa, Salvatore; Giordano, Ferdinando; Potenza, Renato; Tricomi, Alessia; Tuve, Cristina; Barbagli, Giuseppe; Ciulli, Vitaliano; Civinini, Carlo; D'Alessandro, Raffaello; Focardi, Ettore; Gallo, Elisabetta; Gonzi, Sandro; Gori, Valentina; Lenzi, Piergiulio; Meschini, Marco; Paoletti, Simone; Sguazzoni, Giacomo; Tropiano, Antonio; Benussi, Luigi; Bianco, Stefano; Fabbri, Franco; Piccolo, Davide; Ferretti, Roberta; Ferro, Fabrizio; Lo Vetere, Maurizio; Robutti, Enrico; Tosi, Silvano; Dinardo, Mauro Emanuele; Fiorendi, Sara; Gennai, Simone; Gerosa, Raffaele; Ghezzi, Alessio; Govoni, Pietro; Lucchini, Marco Toliman; Malvezzi, Sandra; Manzoni, Riccardo Andrea; Martelli, Arabella; Marzocchi, Badder; Menasce, Dario; Moroni, Luigi; Paganoni, Marco; Pedrini, Daniele; Ragazzi, Stefano; Redaelli, Nicola; Tabarelli de Fatis, Tommaso; Buontempo, Salvatore; Cavallo, Nicola; Di Guida, Salvatore; Fabozzi, Francesco; Iorio, Alberto Orso Maria; Lista, Luca; Meola, Sabino; Merola, Mario; Paolucci, Pierluigi; Azzi, Patrizia; Bacchetta, Nicola; Bisello, Dario; Branca, Antonio; Carlin, Roberto; Checchia, Paolo; Dall'Osso, Martino; Dorigo, Tommaso; Galanti, Mario; Gasparini, Fabrizio; Gasparini, Ugo; Gonella, Franco; Gozzelino, Andrea; Kanishchev, Konstantin; Lacaprara, Stefano; Margoni, Martino; 
Meneguzzo, Anna Teresa; Pazzini, Jacopo; Pozzobon, Nicola; Ronchese, Paolo; Simonetto, Franco; Torassa, Ezio; Tosi, Mia; Zotto, Pierluigi; Zucchetta, Alberto; Zumerle, Gianni; Gabusi, Michele; Ratti, Sergio P; Re, Valerio; Riccardi, Cristina; Salvini, Paola; Vitulo, Paolo; Biasini, Maurizio; Bilei, Gian Mario; Ciangottini, Diego; Fanò, Livio; Lariccia, Paolo; Mantovani, Giancarlo; Menichelli, Mauro; Saha, Anirban; Santocchia, Attilio; Spiezia, Aniello; Androsov, Konstantin; Azzurri, Paolo; Bagliesi, Giuseppe; Bernardini, Jacopo; Boccali, Tommaso; Broccolo, Giuseppe; Castaldi, Rino; Ciocci, Maria Agnese; Dell'Orso, Roberto; Donato, Silvio; Fiori, Francesco; Foà, Lorenzo; Giassi, Alessandro; Grippo, Maria Teresa; Ligabue, Franco; Lomtadze, Teimuraz; Martini, Luca; Messineo, Alberto; Moon, Chang-Seong; Palla, Fabrizio; Rizzi, Andrea; Savoy-Navarro, Aurore; Serban, Alin Titus; Spagnolo, Paolo; Squillacioti, Paola; Tenchini, Roberto; Tonelli, Guido; Venturi, Andrea; Verdini, Piero Giorgio; Vernieri, Caterina; Barone, Luciano; Cavallari, Francesca; D'imperio, Giulia; Del Re, Daniele; Diemoz, Marcella; Jorda, Clara; Longo, Egidio; Margaroli, Fabrizio; Meridiani, Paolo; Micheli, Francesco; Organtini, Giovanni; Paramatti, Riccardo; Rahatlou, Shahram; Rovelli, Chiara; Santanastasio, Francesco; Soffi, Livia; Traczyk, Piotr; Amapane, Nicola; Arcidiacono, Roberta; Argiro, Stefano; Arneodo, Michele; Bellan, Riccardo; Biino, Cristina; Cartiglia, Nicolo; Casasso, Stefano; Costa, Marco; De Remigis, Paolo; Degano, Alessandro; Demaria, Natale; Finco, Linda; Mariotti, Chiara; Maselli, Silvia; Migliore, Ernesto; Monaco, Vincenzo; Musich, Marco; Obertino, Maria Margherita; Pacher, Luca; Pastrone, Nadia; Pelliccioni, Mario; Pinna Angioni, Gian Luca; Romero, Alessandra; Ruspa, Marta; Sacchi, Roberto; Solano, Ada; Staiano, Amedeo; Tamponi, Umberto; Belforte, Stefano; Candelise, Vieri; Casarsa, Massimo; Cossutti, Fabio; Della Ricca, Giuseppe; Gobbo, Benigno; La Licata, Chiara; Marone, Matteo; Schizzi, Andrea; Umer, Tomo; Zanetti, Anna; Chang, Sunghyun; Kropivnitskaya, Anna; Nam, Soon-Kwon; Kim, Dong Hee; Kim, Gui Nyun; Kim, Min Suk; Kong, Dae Jung; Lee, Sangeun; Oh, Young Do; Park, Hyangkyu; Sakharov, Alexandre; Son, Dong-Chul; Kim, Tae Jeong; Kim, Jae Yool; Moon, Dong Ho; Song, Sanghyeon; Choi, Suyong; Gyun, Dooyeon; Hong, Byung-Sik; Jo, Mihee; Kim, Hyunchul; Kim, Yongsun; Lee, Byounghoon; Lee, Kyong Sei; Park, Sung Keun; Roh, Youn; Yoo, Hwi Dong; Choi, Minkyoo; Kim, Ji Hyun; Park, Inkyu; Ryu, Geonmo; Ryu, Min Sang; Choi, Young-Il; Choi, Young Kyu; Goh, Junghwan; Kim, Donghyun; Kwon, Eunhyang; Lee, Jongseok; Yu, Intae; Juodagalvis, Andrius; Komaragiri, Jyothsna Rani; Md Ali, Mohd Adli Bin; Casimiro Linares, Edgar; Castilla-Valdez, Heriberto; De La Cruz-Burelo, Eduard; Heredia-de La Cruz, Ivan; Hernandez-Almada, Alberto; Lopez-Fernandez, Ricardo; Sánchez Hernández, Alberto; Carrillo Moreno, Salvador; Vazquez Valencia, Fabiola; Pedraza, Isabel; Salazar Ibarguen, Humberto Antonio; Morelos Pineda, Antonio; Krofcheck, David; Butler, Philip H; Reucroft, Steve; Ahmad, Ashfaq; Ahmad, Muhammad; Hassan, Qamar; Hoorani, Hafeez R; Khan, Wajid Ali; Khurshid, Taimoor; Shoaib, Muhammad; Bialkowska, Helena; Bluj, Michal; Boimska, Bożena; Frueboes, Tomasz; Górski, Maciej; Kazana, Malgorzata; Nawrocki, Krzysztof; Romanowska-Rybinska, Katarzyna; Szleper, Michal; Zalewski, Piotr; Brona, Grzegorz; Bunkowski, Karol; Cwiok, Mikolaj; Dominik, Wojciech; Doroba, Krzysztof; Kalinowski, Artur; Konecki, Marcin; Krolikowski, Jan; Misiura, 
Maciej; Olszewski, Michal; Bargassa, Pedrame; Beirão Da Cruz E Silva, Cristóvão; Faccioli, Pietro; Ferreira Parracho, Pedro Guilherme; Gallinaro, Michele; Lloret Iglesias, Lara; Nguyen, Federico; Rodrigues Antunes, Joao; Seixas, Joao; Varela, Joao; Vischia, Pietro; Afanasiev, Serguei; Golutvin, Igor; Karjavin, Vladimir; Konoplyanikov, Viktor; Korenkov, Vladimir; Kozlov, Guennady; Lanev, Alexander; Malakhov, Alexander; Matveev, Viktor; Mitsyn, Valeri Valentinovitch; Moisenz, Petr; Palichik, Vladimir; Perelygin, Victor; Shmatov, Sergey; Skatchkov, Nikolai; Smirnov, Vitaly; Tikhonenko, Elena; Zarubin, Anatoli; Golovtsov, Victor; Ivanov, Yury; Kim, Victor; Levchenko, Petr; Murzin, Victor; Oreshkin, Vadim; Smirnov, Igor; Sulimov, Valentin; Uvarov, Lev; Vavilov, Sergey; Vorobyev, Alexey; Vorobyev, Andrey; Andreev, Yuri; Dermenev, Alexander; Gninenko, Sergei; Golubev, Nikolai; Kirsanov, Mikhail; Krasnikov, Nikolai; Pashenkov, Anatoli; Tlisov, Danila; Toropin, Alexander; Epshteyn, Vladimir; Gavrilov, Vladimir; Lychkovskaya, Natalia; Popov, Vladimir; Pozdnyakov, Ivan; Safronov, Grigory; Semenov, Sergey; Spiridonov, Alexander; Stolin, Viatcheslav; Vlasov, Evgueni; Zhokin, Alexander; Andreev, Vladimir; Azarkin, Maksim; Dremin, Igor; Kirakosyan, Martin; Leonidov, Andrey; Mesyats, Gennady; Rusakov, Sergey V; Vinogradov, Alexey; Belyaev, Andrey; Boos, Edouard; Dubinin, Mikhail; Dudko, Lev; Ershov, Alexander; Gribushin, Andrey; Klyukhin, Vyacheslav; Kodolova, Olga; Lokhtin, Igor; Obraztsov, Stepan; Petrushanko, Sergey; Savrin, Viktor; Snigirev, Alexander; Azhgirey, Igor; Bayshev, Igor; Bitioukov, Sergei; Kachanov, Vassili; Kalinin, Alexey; Konstantinov, Dmitri; Krychkine, Victor; Petrov, Vladimir; Ryutin, Roman; Sobol, Andrei; Tourtchanovitch, Leonid; Troshin, Sergey; Tyurin, Nikolay; Uzunian, Andrey; Volkov, Alexey; Adzic, Petar; Ekmedzic, Marko; Milosevic, Jovan; Rekovic, Vladimir; Alcaraz Maestre, Juan; Battilana, Carlo; Calvo, Enrique; Cerrada, Marcos; Chamizo Llatas, Maria; Colino, Nicanor; De La Cruz, Begona; Delgado Peris, Antonio; Domínguez Vázquez, Daniel; Escalante Del Valle, Alberto; Fernandez Bedoya, Cristina; Fernández Ramos, Juan Pablo; Flix, Jose; Fouz, Maria Cruz; Garcia-Abia, Pablo; Gonzalez Lopez, Oscar; Goy Lopez, Silvia; Hernandez, Jose M; Josa, Maria Isabel; Navarro De Martino, Eduardo; Pérez-Calero Yzquierdo, Antonio María; Puerta Pelayo, Jesus; Quintario Olmeda, Adrián; Redondo, Ignacio; Romero, Luciano; Senghi Soares, Mara; Albajar, Carmen; de Trocóniz, Jorge F; Missiroli, Marino; Moran, Dermot; Brun, Hugues; Cuevas, Javier; Fernandez Menendez, Javier; Folgueras, Santiago; Gonzalez Caballero, Isidro; Brochero Cifuentes, Javier Andres; Cabrillo, Iban Jose; Calderon, Alicia; Duarte Campderros, Jordi; Fernandez, Marcos; Gomez, Gervasio; Graziano, Alberto; Lopez Virto, Amparo; Marco, Jesus; Marco, Rafael; Martinez Rivero, Celso; Matorras, Francisco; Munoz Sanchez, Francisca Javiela; Piedra Gomez, Jonatan; Rodrigo, Teresa; Rodríguez-Marrero, Ana Yaiza; Ruiz-Jimeno, Alberto; Scodellaro, Luca; Vila, Ivan; Vilar Cortabitarte, Rocio; Abbaneo, Duccio; Auffray, Etiennette; Auzinger, Georg; Bachtis, Michail; Baillon, Paul; Ball, Austin; Barney, David; Benaglia, Andrea; Bendavid, Joshua; Benhabib, Lamia; Benitez, Jose F; Bloch, Philippe; Bocci, Andrea; Bonato, Alessio; Bondu, Olivier; Botta, Cristina; Breuker, Horst; Camporesi, Tiziano; Cerminara, Gianluca; Colafranceschi, Stefano; D'Alfonso, Mariarosaria; D'Enterria, David; Dabrowski, Anne; David Tinoco Mendes, Andre; De Guio, Federico; De 
Roeck, Albert; De Visscher, Simon; Di Marco, Emanuele; Dobson, Marc; Dordevic, Milos; Dorney, Brian; Dupont-Sagorin, Niels; Elliott-Peisert, Anna; Franzoni, Giovanni; Funk, Wolfgang; Gigi, Dominique; Gill, Karl; Giordano, Domenico; Girone, Maria; Glege, Frank; Guida, Roberto; Gundacker, Stefan; Guthoff, Moritz; Hammer, Josef; Hansen, Magnus; Harris, Philip; Hegeman, Jeroen; Innocente, Vincenzo; Janot, Patrick; Kousouris, Konstantinos; Krajczar, Krisztian; Lecoq, Paul; Lourenco, Carlos; Magini, Nicolo; Malgeri, Luca; Mannelli, Marcello; Marrouche, Jad; Masetti, Lorenzo; Meijers, Frans; Mersi, Stefano; Meschi, Emilio; Moortgat, Filip; Morovic, Srecko; Mulders, Martijn; Orsini, Luciano; Pape, Luc; Perez, Emmanuelle; Petrilli, Achille; Petrucciani, Giovanni; Pfeiffer, Andreas; Pimiä, Martti; Piparo, Danilo; Plagge, Michael; Racz, Attila; Rolandi, Gigi; Rovere, Marco; Sakulin, Hannes; Schäfer, Christoph; Schwick, Christoph; Sharma, Archana; Siegrist, Patrice; Silva, Pedro; Simon, Michal; Sphicas, Paraskevas; Spiga, Daniele; Steggemann, Jan; Stieger, Benjamin; Stoye, Markus; Takahashi, Yuta; Treille, Daniel; Tsirou, Andromachi; Veres, Gabor Istvan; Wardle, Nicholas; Wöhri, Hermine Katharina; Wollny, Heiner; Zeuner, Wolfram Dietrich; Bertl, Willi; Deiters, Konrad; Erdmann, Wolfram; Horisberger, Roland; Ingram, Quentin; Kaestli, Hans-Christian; Kotlinski, Danek; Langenegger, Urs; Renker, Dieter; Rohe, Tilman; Bachmair, Felix; Bäni, Lukas; Bianchini, Lorenzo; Buchmann, Marco-Andrea; Casal, Bruno; Chanon, Nicolas; Dissertori, Günther; Dittmar, Michael; Donegà, Mauro; Dünser, Marc; Eller, Philipp; Grab, Christoph; Hits, Dmitry; Hoss, Jan; Lustermann, Werner; Mangano, Boris; Marini, Andrea Carlo; Marionneau, Matthieu; Martinez Ruiz del Arbol, Pablo; Masciovecchio, Mario; Meister, Daniel; Mohr, Niklas; Musella, Pasquale; Nägeli, Christoph; Nessi-Tedaldi, Francesca; Pandolfi, Francesco; Pauss, Felicitas; Perrozzi, Luca; Peruzzi, Marco; Quittnat, Milena; Rebane, Liis; Rossini, Marco; Starodumov, Andrei; Takahashi, Maiko; Theofilatos, Konstantinos; Wallny, Rainer; Weber, Hannsjoerg Artur; Amsler, Claude; Canelli, Maria Florencia; Chiochia, Vincenzo; De Cosa, Annapaola; Hinzmann, Andreas; Hreus, Tomas; Kilminster, Benjamin; Lange, Clemens; Millan Mejias, Barbara; Ngadiuba, Jennifer; Pinna, Deborah; Robmann, Peter; Ronga, Frederic Jean; Taroni, Silvia; Verzetti, Mauro; Yang, Yong; Cardaci, Marco; Chen, Kuan-Hsin; Ferro, Cristina; Kuo, Chia-Ming; Lin, Willis; Lu, Yun-Ju; Volpe, Roberta; Yu, Shin-Shan; Chang, Paoti; Chang, You-Hao; Chang, Yu-Wei; Chao, Yuan; Chen, Kai-Feng; Chen, Po-Hsun; Dietz, Charles; Grundler, Ulysses; Hou, George Wei-Shu; Kao, Kai-Yi; Liu, Yueh-Feng; Lu, Rong-Shyang; Majumder, Devdatta; Petrakou, Eleni; Tzeng, Yeng-Ming; Wilken, Rachel; Asavapibhop, Burin; Singh, Gurpreet; Srimanobhas, Norraphat; Suwonjandee, Narumon; Adiguzel, Aytul; Bakirci, Mustafa Numan; Cerci, Salim; Dozen, Candan; Dumanoglu, Isa; Eskut, Eda; Girgis, Semiray; Gokbulut, Gul; Gurpinar, Emine; Hos, Ilknur; Kangal, Evrim Ersin; Kayis Topaksu, Aysel; Onengut, Gulsen; Ozdemir, Kadri; Ozturk, Sertac; Polatoz, Ayse; Sunar Cerci, Deniz; Tali, Bayram; Topakli, Huseyin; Vergili, Mehmet; Akin, Ilina Vasileva; Bilin, Bugra; Bilmis, Selcuk; Gamsizkan, Halil; Isildak, Bora; Karapinar, Guler; Ocalan, Kadir; Sekmen, Sezen; Surat, Ugur Emrah; Yalvac, Metin; Zeyrek, Mehmet; Albayrak, Elif Asli; Gülmez, Erhan; Kaya, Mithat; Kaya, Ozlem; Yetkin, Taylan; Cankocak, Kerem; Vardarlı, Fuat Ilkehan; Levchuk, Leonid; Sorokin, Pavel; Brooke, 
James John; Clement, Emyr; Cussans, David; Flacher, Henning; Goldstein, Joel; Grimes, Mark; Heath, Greg P; Heath, Helen F; Jacob, Jeson; Kreczko, Lukasz; Lucas, Chris; Meng, Zhaoxia; Newbold, Dave M; Paramesvaran, Sudarshan; Poll, Anthony; Sakuma, Tai; Seif El Nasr-storey, Sarah; Senkin, Sergey; Smith, Vincent J; Bell, Ken W; Belyaev, Alexander; Brew, Christopher; Brown, Robert M; Cockerill, David JA; Coughlan, John A; Harder, Kristian; Harper, Sam; Olaiya, Emmanuel; Petyt, David; Shepherd-Themistocleous, Claire; Thea, Alessandro; Tomalin, Ian R; Williams, Thomas; Womersley, William John; Worm, Steven; Baber, Mark; Bainbridge, Robert; Buchmuller, Oliver; Burton, Darren; Colling, David; Cripps, Nicholas; Dauncey, Paul; Davies, Gavin; Della Negra, Michel; Dunne, Patrick; Ferguson, William; Fulcher, Jonathan; Futyan, David; Hall, Geoffrey; Iles, Gregory; Jarvis, Martyn; Karapostoli, Georgia; Kenzie, Matthew; Lane, Rebecca; Lucas, Robyn; Lyons, Louis; Magnan, Anne-Marie; Malik, Sarah; Mathias, Bryn; Nash, Jordan; Nikitenko, Alexander; Pela, Joao; Pesaresi, Mark; Petridis, Konstantinos; Raymond, David Mark; Rogerson, Samuel; Rose, Andrew; Seez, Christopher; Sharp, Peter; Tapper, Alexander; Vazquez Acosta, Monica; Virdee, Tejinder; Zenz, Seth Conrad; Cole, Joanne; Hobson, Peter R; Khan, Akram; Kyberd, Paul; Leggat, Duncan; Leslie, Dawn; Reid, Ivan; Symonds, Philip; Teodorescu, Liliana; Turner, Mark; Dittmann, Jay; Hatakeyama, Kenichi; Kasmi, Azeddine; Liu, Hongxuan; Scarborough, Tara; Charaf, Otman; Cooper, Seth; Henderson, Conor; Rumerio, Paolo; Avetisyan, Aram; Bose, Tulika; Fantasia, Cory; Lawson, Philip; Richardson, Clint; Rohlf, James; St John, Jason; Sulak, Lawrence; Alimena, Juliette; Berry, Edmund; Bhattacharya, Saptaparna; Christopher, Grant; Cutts, David; Demiragli, Zeynep; Dhingra, Nitish; Ferapontov, Alexey; Garabedian, Alex; Heintz, Ulrich; Kukartsev, Gennadiy; Laird, Edward; Landsberg, Greg; Luk, Michael; Narain, Meenakshi; Segala, Michael; Sinthuprasith, Tutanon; Speer, Thomas; Swanson, Joshua; Breedon, Richard; Breto, Guillermo; Calderon De La Barca Sanchez, Manuel; Chauhan, Sushil; Chertok, Maxwell; Conway, John; Conway, Rylan; Cox, Peter Timothy; Erbacher, Robin; Gardner, Michael; Ko, Winston; Lander, Richard; Mulhearn, Michael; Pellett, Dave; Pilot, Justin; Ricci-Tam, Francesca; Shalhout, Shalhout; Smith, John; Squires, Michael; Stolp, Dustin; Tripathi, Mani; Wilbur, Scott; Yohay, Rachel; Cousins, Robert; Everaerts, Pieter; Farrell, Chris; Hauser, Jay; Ignatenko, Mikhail; Rakness, Gregory; Takasugi, Eric; Valuev, Vyacheslav; Weber, Matthias; Burt, Kira; Clare, Robert; Ellison, John Anthony; Gary, J William; Hanson, Gail; Heilman, Jesse; Ivova Rikova, Mirena; Jandir, Pawandeep; Kennedy, Elizabeth; Lacroix, Florent; Long, Owen Rosser; Luthra, Arun; Malberti, Martina; Olmedo Negrete, Manuel; Shrinivas, Amithabh; Sumowidagdo, Suharyo; Wimpenny, Stephen; Branson, James G; Cerati, Giuseppe Benedetto; Cittolin, Sergio; D'Agnolo, Raffaele Tito; Holzner, André; Kelley, Ryan; Klein, Daniel; Kovalskyi, Dmytro; Letts, James; Macneill, Ian; Olivito, Dominick; Padhi, Sanjay; Palmer, Christopher; Pieri, Marco; Sani, Matteo; Sharma, Vivek; Simon, Sean; Tu, Yanjun; Vartak, Adish; Welke, Charles; Würthwein, Frank; Yagil, Avraham; Barge, Derek; Bradmiller-Feld, John; Campagnari, Claudio; Danielson, Thomas; Dishaw, Adam; Dutta, Valentina; Flowers, Kristen; Franco Sevilla, Manuel; Geffert, Paul; George, Christopher; Golf, Frank; Gouskos, Loukas; Incandela, Joe; Justus, Christopher; Mccoll, 
Nickolas; Richman, Jeffrey; Stuart, David; To, Wing; West, Christopher; Yoo, Jaehyeok; Apresyan, Artur; Bornheim, Adolf; Bunn, Julian; Chen, Yi; Duarte, Javier; Mott, Alexander; Newman, Harvey B; Pena, Cristian; Pierini, Maurizio; Spiropulu, Maria; Vlimant, Jean-Roch; Wilkinson, Richard; Xie, Si; Zhu, Ren-Yuan; Azzolini, Virginia; Calamba, Aristotle; Carlson, Benjamin; Ferguson, Thomas; Iiyama, Yutaro; Paulini, Manfred; Russ, James; Vogel, Helmut; Vorobiev, Igor; Cumalat, John Perry; Ford, William T; Gaz, Alessandro; Krohn, Michael; Luiggi Lopez, Eduardo; Nauenberg, Uriel; Smith, James; Stenson, Kevin; Wagner, Stephen Robert; Alexander, James; Chatterjee, Avishek; Chaves, Jorge; Chu, Jennifer; Dittmer, Susan; Eggert, Nicholas; Mirman, Nathan; Nicolas Kaufman, Gala; Patterson, Juliet Ritchie; Ryd, Anders; Salvati, Emmanuele; Skinnari, Louise; Sun, Werner; Teo, Wee Don; Thom, Julia; Thompson, Joshua; Tucker, Jordan; Weng, Yao; Winstrom, Lucas; Wittich, Peter; Winn, Dave; Abdullin, Salavat; Albrow, Michael; Anderson, Jacob; Apollinari, Giorgio; Bauerdick, Lothar AT; Beretvas, Andrew; Berryhill, Jeffrey; Bhat, Pushpalatha C; Bolla, Gino; Burkett, Kevin; Butler, Joel Nathan; Cheung, Harry; Chlebana, Frank; Cihangir, Selcuk; Elvira, Victor Daniel; Fisk, Ian; Freeman, Jim; Gao, Yanyan; Gottschalk, Erik; Gray, Lindsey; Green, Dan; Grünendahl, Stefan; Gutsche, Oliver; Hanlon, Jim; Hare, Daryl; Harris, Robert M; Hirschauer, James; Hooberman, Benjamin; Jindariani, Sergo; Johnson, Marvin; Joshi, Umesh; Klima, Boaz; Kreis, Benjamin; Kwan, Simon; Linacre, Jacob; Lincoln, Don; Lipton, Ron; Liu, Tiehui; Lykken, Joseph; Maeshima, Kaori; Marraffino, John Michael; Martinez Outschoorn, Verena Ingrid; Maruyama, Sho; Mason, David; McBride, Patricia; Merkel, Petra; Mishra, Kalanand; Mrenna, Stephen; Nahn, Steve; Newman-Holmes, Catherine; O'Dell, Vivian; Prokofyev, Oleg; Sexton-Kennedy, Elizabeth; Sharma, Seema; Soha, Aron; Spalding, William J; Spiegel, Leonard; Taylor, Lucas; Tkaczyk, Slawek; Tran, Nhan Viet; Uplegger, Lorenzo; Vaandering, Eric Wayne; Vidal, Richard; Whitbeck, Andrew; Whitmore, Juliana; Yang, Fan; Acosta, Darin; Avery, Paul; Bortignon, Pierluigi; Bourilkov, Dimitri; Carver, Matthew; Curry, David; Das, Souvik; De Gruttola, Michele; Di Giovanni, Gian Piero; Field, Richard D; Fisher, Matthew; Furic, Ivan-Kresimir; Hugon, Justin; Konigsberg, Jacobo; Korytov, Andrey; Kypreos, Theodore; Low, Jia Fu; Matchev, Konstantin; Mei, Hualin; Milenovic, Predrag; Mitselmakher, Guenakh; Muniz, Lana; Rinkevicius, Aurelijus; Shchutska, Lesya; Snowball, Matthew; Sperka, David; Yelton, John; Zakaria, Mohammed; Hewamanage, Samantha; Linn, Stephan; Markowitz, Pete; Martinez, German; Rodriguez, Jorge Luis; Adams, Todd; Askew, Andrew; Bochenek, Joseph; Diamond, Brendan; Haas, Jeff; Hagopian, Sharon; Hagopian, Vasken; Johnson, Kurtis F; Prosper, Harrison; Veeraraghavan, Venkatesh; Weinberg, Marc; Baarmand, Marc M; Hohlmann, Marcus; Kalakhety, Himali; Yumiceva, Francisco; Adams, Mark Raymond; Apanasevich, Leonard; Berry, Douglas; Betts, Russell Richard; Bucinskaite, Inga; Cavanaugh, Richard; Evdokimov, Olga; Gauthier, Lucie; Gerber, Cecilia Elena; Hofman, David Jonathan; Kurt, Pelin; O'Brien, Christine; Sandoval Gonzalez, Irving Daniel; Silkworth, Christopher; Turner, Paul; Varelas, Nikos; Bilki, Burak; Clarida, Warren; Dilsiz, Kamuran; Haytmyradov, Maksat; Merlo, Jean-Pierre; Mermerkaya, Hamit; Mestvirishvili, Alexi; Moeller, Anthony; Nachtman, Jane; Ogul, Hasan; Onel, Yasar; Ozok, Ferhat; Penzo, Aldo; Rahmat, Rahmat; 
Sen, Sercan; Tan, Ping; Tiras, Emrah; Wetzel, James; Yi, Kai; Barnett, Bruce Arnold; Blumenfeld, Barry; Bolognesi, Sara; Fehling, David; Gritsan, Andrei; Maksimovic, Petar; Martin, Christopher; Swartz, Morris; Baringer, Philip; Bean, Alice; Benelli, Gabriele; Bruner, Christopher; Gray, Julia; Kenny III, Raymond Patrick; Malek, Magdalena; Murray, Michael; Noonan, Daniel; Sanders, Stephen; Sekaric, Jadranka; Stringer, Robert; Wang, Quan; Wood, Jeffrey Scott; Chakaberia, Irakli; Ivanov, Andrew; Kaadze, Ketino; Khalil, Sadia; Makouski, Mikhail; Maravin, Yurii; Saini, Lovedeep Kaur; Skhirtladze, Nikoloz; Svintradze, Irakli; Gronberg, Jeffrey; Lange, David; Rebassoo, Finn; Wright, Douglas; Baden, Drew; Belloni, Alberto; Calvert, Brian; Eno, Sarah Catherine; Gomez, Jaime; Hadley, Nicholas John; Kellogg, Richard G; Kolberg, Ted; Lu, Ying; Mignerey, Alice; Pedro, Kevin; Skuja, Andris; Tonjes, Marguerite; Tonwar, Suresh C; Apyan, Aram; Barbieri, Richard; Busza, Wit; Cali, Ivan Amos; Chan, Matthew; Di Matteo, Leonardo; Gomez Ceballos, Guillelmo; Goncharov, Maxim; Gulhan, Doga; Klute, Markus; Lai, Yue Shi; Lee, Yen-Jie; Levin, Andrew; Luckey, Paul David; Paus, Christoph; Ralph, Duncan; Roland, Christof; Roland, Gunther; Stephans, George; Sumorok, Konstanty; Velicanu, Dragos; Veverka, Jan; Wyslouch, Bolek; Yang, Mingming; Zanetti, Marco; Zhukova, Victoria; Dahmes, Bryan; Gude, Alexander; Kao, Shih-Chuan; Klapoetke, Kevin; Kubota, Yuichi; Mans, Jeremy; Nourbakhsh, Shervin; Pastika, Nathaniel; Rusack, Roger; Singovsky, Alexander; Tambe, Norbert; Turkewitz, Jared; Acosta, John Gabriel; Oliveros, Sandra; Avdeeva, Ekaterina; Bloom, Kenneth; Bose, Suvadeep; Claes, Daniel R; Dominguez, Aaron; Gonzalez Suarez, Rebeca; Keller, Jason; Knowlton, Dan; Kravchenko, Ilya; Lazo-Flores, Jose; Meier, Frank; Ratnikov, Fedor; Snow, Gregory R; Zvada, Marian; Dolen, James; Godshalk, Andrew; Iashvili, Ia; Kharchilava, Avto; Kumar, Ashish; Rappoccio, Salvatore; Alverson, George; Barberis, Emanuela; Baumgartel, Darin; Chasco, Matthew; Massironi, Andrea; Morse, David Michael; Nash, David; Orimoto, Toyoko; Trocino, Daniele; Wang, Ren-Jie; Wood, Darien; Zhang, Jinzhong; Hahn, Kristan Allan; Kubik, Andrew; Mucia, Nicholas; Odell, Nathaniel; Pollack, Brian; Pozdnyakov, Andrey; Schmitt, Michael Henry; Stoynev, Stoyan; Sung, Kevin; Velasco, Mayda; Won, Steven; Brinkerhoff, Andrew; Chan, Kwok Ming; Drozdetskiy, Alexey; Hildreth, Michael; Jessop, Colin; Karmgard, Daniel John; Kellams, Nathan; Lannon, Kevin; Lynch, Sean; Marinelli, Nancy; Musienko, Yuri; Pearson, Tessa; Planer, Michael; Ruchti, Randy; Smith, Geoffrey; Valls, Nil; Wayne, Mitchell; Wolf, Matthias; Woodard, Anna; Antonelli, Louis; Brinson, Jessica; Bylsma, Ben; Durkin, Lloyd Stanley; Flowers, Sean; Hart, Andrew; Hill, Christopher; Hughes, Richard; Kotov, Khristian; Ling, Ta-Yung; Luo, Wuming; Puigh, Darren; Rodenburg, Marissa; Winer, Brian L; Wolfe, Homer; Wulsin, Howard Wells; Driga, Olga; Elmer, Peter; Hardenbrook, Joshua; Hebda, Philip; Koay, Sue Ann; Lujan, Paul; Marlow, Daniel; Medvedeva, Tatiana; Mooney, Michael; Olsen, James; Piroué, Pierre; Quan, Xiaohang; Saka, Halil; Stickland, David; Tully, Christopher; Werner, Jeremy Scott; Zuranski, Andrzej; Brownson, Eric; Malik, Sudhir; Mendez, Hector; Ramirez Vargas, Juan Eduardo; Barnes, Virgil E; Benedetti, Daniele; Bortoletto, Daniela; De Mattia, Marco; Gutay, Laszlo; Hu, Zhen; Jha, Manoj; Jones, Matthew; Jung, Kurt; Kress, Matthew; Leonardo, Nuno; Miller, David Harry; Neumeister, Norbert; Radburn-Smith, Benjamin 
Charles; Shi, Xin; Shipsey, Ian; Silvers, David; Svyatkovskiy, Alexey; Wang, Fuqiang; Xie, Wei; Xu, Lingshan; Zablocki, Jakub; Parashar, Neeti; Stupak, John; Adair, Antony; Akgun, Bora; Ecklund, Karl Matthew; Geurts, Frank J.M.; Li, Wei; Michlin, Benjamin; Padley, Brian Paul; Redjimi, Radia; Roberts, Jay; Zabel, James; Betchart, Burton; Bodek, Arie; Covarelli, Roberto; de Barbaro, Pawel; Demina, Regina; Eshaq, Yossof; Ferbel, Thomas; Garcia-Bellido, Aran; Goldenzweig, Pablo; Han, Jiyeon; Harel, Amnon; Khukhunaishvili, Aleko; Korjenevski, Sergey; Petrillo, Gianluca; Vishnevskiy, Dmitry; Ciesielski, Robert; Demortier, Luc; Goulianos, Konstantin; Mesropian, Christina; Arora, Sanjay; Barker, Anthony; Chou, John Paul; Contreras-Campana, Christian; Contreras-Campana, Emmanuel; Duggan, Daniel; Ferencek, Dinko; Gershtein, Yuri; Gray, Richard; Halkiadakis, Eva; Hidas, Dean; Kaplan, Steven; Lath, Amitabh; Panwalkar, Shruti; Park, Michael; Patel, Rishi; Salur, Sevil; Schnetzer, Steve; Sheffield, David; Somalwar, Sunil; Stone, Robert; Thomas, Scott; Thomassen, Peter; Walker, Matthew; Rose, Keith; Spanier, Stefan; York, Andrew; Bouhali, Othmane; Castaneda Hernandez, Alfredo; Eusebi, Ricardo; Flanagan, Will; Gilmore, Jason; Kamon, Teruki; Khotilovich, Vadim; Krutelyov, Vyacheslav; Montalvo, Roy; Osipenkov, Ilya; Pakhotin, Yuriy; Perloff, Alexx; Roe, Jeffrey; Rose, Anthony; Safonov, Alexei; Suarez, Indara; Tatarinov, Aysen; Ulmer, Keith; Akchurin, Nural; Cowden, Christopher; Damgov, Jordan; Dragoiu, Cosmin; Dudero, Phillip Russell; Faulkner, James; Kovitanggoon, Kittikul; Kunori, Shuichi; Lee, Sung Won; Libeiro, Terence; Volobouev, Igor; Appelt, Eric; Delannoy, Andrés G; Greene, Senta; Gurrola, Alfredo; Johns, Willard; Maguire, Charles; Mao, Yaxian; Melo, Andrew; Sharma, Monika; Sheldon, Paul; Snook, Benjamin; Tuo, Shengquan; Velkovska, Julia; Arenton, Michael Wayne; Boutle, Sarah; Cox, Bradley; Francis, Brian; Goodell, Joseph; Hirosky, Robert; Ledovskoy, Alexander; Li, Hengne; Lin, Chuanzhe; Neu, Christopher; Wood, John; Clarke, Christopher; Harr, Robert; Karchin, Paul Edmund; Kottachchi Kankanamge Don, Chamath; Lamichhane, Pramod; Sturdy, Jared; Belknap, Donald; Carlsmith, Duncan; Cepeda, Maria; Dasu, Sridhara; Dodd, Laura; Duric, Senka; Friis, Evan; Hall-Wilton, Richard; Herndon, Matthew; Hervé, Alain; Klabbers, Pamela; Lanaro, Armando; Lazaridis, Christos; Levine, Aaron; Loveless, Richard; Mohapatra, Ajit; Ojalvo, Isabel; Perry, Thomas; Pierro, Giuseppe Antonio; Polese, Giovanni; Ross, Ian; Sarangi, Tapas; Savin, Alexander; Smith, Wesley H; Taylor, Devin; Vuosalo, Carl; Woods, Nathaniel

    2015-10-29

    Measurements of the ZZ production cross sections in proton-proton collisions at center-of-mass energies of 7 and 8 TeV are presented. Candidate events for the leptonic decay mode $\mathrm{ZZ} \to 2\ell 2\nu$

  4. Asteroseismology of ZZ Ceti stars with fully evolutionary white dwarf models. I. The impact of the uncertainties from prior evolution on the period spectrum

    Science.gov (United States)

    De Gerónimo, F. C.; Althaus, L. G.; Córsico, A. H.; Romero, A. D.; Kepler, S. O.

    2017-03-01

    Context. ZZ Ceti stars are pulsating white dwarfs with a carbon-oxygen core built up during the core helium burning and thermally pulsing Asymptotic Giant Branch phases. Through the interpretation of their pulsation periods by means of asteroseismology, details about their origin and evolution can be inferred. The whole pulsation spectrum exhibited by ZZ Ceti stars strongly depends on the inner chemical structure. At present, there are several processes affecting the chemical profiles that are still not accurately determined. Aims: We present a study of the impact of the current uncertainties in white dwarf formation and evolution on the expected pulsation properties of ZZ Ceti stars. Methods: Our analysis is based on a set of carbon-oxygen core white dwarf models with masses 0.548 and 0.837 M⊙ that are derived from full evolutionary computations from the ZAMS to the ZZ Ceti domain. We considered models in which we varied the number of thermal pulses, the amount of overshooting, and the 12C(α,γ)16O reaction rate within their uncertainties. Results: We explore the impact of these major uncertainties in the prior evolution on the chemical structure and expected pulsation spectrum. We find that these uncertainties yield significant changes in the g-mode pulsation periods. Conclusions: We conclude that the uncertainties in the white dwarf progenitor evolution should be taken into account in detailed asteroseismological analyses of these pulsating stars.

  5. Inhomogeneous broadening of PAC spectra with V_zz and η joint probability distribution functions

    Energy Technology Data Exchange (ETDEWEB)

    Evenson, W. E.; Adams, M.; Bunker, A.; Hodges, J.; Matheson, P.; Park, T.; Stufflebeam, M. [Utah Valley University, Department of Physics (United States); Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States)

    2013-05-15

    The perturbed angular correlation (PAC) spectrum, G₂(t), is broadened by the presence of randomly distributed defects in crystals due to a distribution of electric field gradients (EFGs) experienced by probe nuclei. Heuristic approaches to fitting spectra that exhibit such inhomogeneous broadening (ihb) consider only the distribution of EFG magnitudes V_zz, but the physical effect actually depends on the joint probability distribution function (pdf) of V_zz and the EFG asymmetry parameter η. The difficulty in determining the joint pdf leads us to more appropriate representations of the EFG coordinates, and to express the joint pdf as the product of two approximately independent pdfs describing each coordinate separately. We have pursued this case in detail using as an initial illustration of the method a simple point defect model with nuclear spin I = 5/2 in several cubic lattices, where G₂(t) is primarily induced by a defect trapped in the first neighbor shell of a probe and broadening is due to defects distributed at random outside the first neighbor shell. Effects such as lattice relaxation are ignored in this simple test of the method. The simplicity of our model is suitable for gaining insight into ihb with more than V_zz alone. We simulate ihb in this simple case by averaging the net EFGs of 20,000 random defect arrangements, resulting in a broadened average G₂(t). The 20,000 random cases provide a distribution of EFG components which are first transformed to Czjzek coordinates and then further into the full Czjzek half plane by conformal mapping. The topology of this transformed space yields an approximately separable joint pdf for the EFG components. We then fit the nearly independent pdfs and reconstruct G₂(t) as a function of defect concentration. We report results for distributions of defects on simple cubic, face-centered cubic, and body-centered cubic lattices. The method explored here for analyzing ihb is

  6. Simulation of Jack-Up Overturning Using the Monte Carlo Method with Artificially Increased Significant Wave Height

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2010-01-01

    For non-linear processes the mean out-crossing rate depends non-linearly on the response level r, and a good estimate can be found using the First Order Reliability Method (FORM), see e.g. Jensen and Capul (2006). The FORM analysis also shows that the reliability index is strictly inversely proportional to the significant wave height, irrespective of the non-linearity in the system. However, the FORM analysis only gives an approximation to the mean out-crossing rate. A more exact result can be obtained by Monte Carlo simulations, but the necessary length of the time domain simulations for very low out-crossing rates might be prohibitively long. In such cases the property mentioned above for the FORM reliability index can be assumed valid in the Monte Carlo simulations, making it possible to increase the out-crossing rates and thus reduce the necessary length of the time domain simulations by applying a larger significant wave height.

  7. Monte-Carlo method simulation of the Bremsstrahlung mirror reflection experiment

    International Nuclear Information System (INIS)

    Aliev, F.K.; Muminov, A.T.; Skvortsov, V.V.; Osmanov, B.S.

    2004-01-01

    Full text: To detect gamma-ray mirror reflection from a macroscopically smooth surface, a search experiment at the microtron MT-22S with a 330 meter flight distance is in progress. The measured slip angles (i.e. angles between the incident ray and the reflector surface) do not exceed a few tens of microradians. At such angles the reflection effect could easily be masked by unfavorable background conditions. That is why the process needed to be simulated by the Monte Carlo method as accurately as possible, and a corresponding computer program was developed. The first operating mode of the MT-22S generates 13 MeV electrons that are incident on a bremsstrahlung target. The gamma-ray energies were therefore simulated in the range 0.01-12.5 MeV, distributed according to the known Schiff formula. When a gamma quantum was incident on the reflector, one of two cases followed. If its slip angle was larger than the critical one, the gamma quantum was absorbed by the reflector and the program proceeded to the next event. Otherwise the program replaced the incident trajectory parameters with the reflected ones. The trajectory beyond the reflector was traced to the detector, and any gamma quantum reaching the detector was registered. Since every simulated gamma quantum has a random energy, the critical slip angle of each event was evaluated from the formula α_crit = (eh/E)·√(Z·N_A·ρ/(π·A·m)). Tabulated absorption coefficients were used for the random simulation of gamma-quantum absorption in air, and it was assumed that any gamma-quantum interaction with the air removed it from the beam. The dependence of the detected gamma-quantum energy and vertical-angle distributions on the flight distance (120 and 330 m), the gap height of the gap collimator (10, 20 and 50 μm) and the inclination of the reflector plane (20 and 40 μrad) was studied with the help of the developed program
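
    The reflect/absorb decision described above can be sketched in a few lines. The snippet below is not the authors' program: the critical slip angle is taken from the standard grazing-incidence estimate (plasma energy divided by photon energy), which has the same structural form √(Z·N_A·ρ/A)/E as the formula quoted in the abstract, and the reflector material, the flat stand-in for the Schiff spectrum and the angular range are illustrative assumptions.

```python
import math
import random

def critical_angle_rad(energy_eV, rho_g_cm3, Z, A):
    """Critical grazing angle for total external reflection (small-angle regime)."""
    plasma_energy_eV = 28.8 * math.sqrt(rho_g_cm3 * Z / A)  # common approximation
    return plasma_energy_eV / energy_eV

def trace_photon(energy_eV, slip_angle_rad, rho=2.2, Z=14.0, A=28.0):
    """Return 'reflected' or 'absorbed' for one photon hitting the mirror (assumed Si-like glass)."""
    if slip_angle_rad > critical_angle_rad(energy_eV, rho, Z, A):
        return "absorbed"          # too steep: the photon enters the reflector
    return "reflected"             # specular reflection: the vertical angle is flipped

# Crude estimate of the reflected fraction for a bremsstrahlung-like photon sample.
random.seed(1)
n, reflected = 100_000, 0
for _ in range(n):
    E = random.uniform(0.01e6, 12.5e6)     # eV, flat stand-in for the Schiff spectrum
    angle = random.uniform(0.0, 40e-6)     # slip angles up to 40 microradian
    reflected += trace_photon(E, angle) == "reflected"
print(f"reflected fraction: {reflected / n:.3f}")
```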

  8. A calibration method for whole-body counters, using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Ishikawa, T.; Matsumoto, M.; Uchiyama, M.

    1996-01-01

    A Monte Carlo simulation code was developed to estimate the counting efficiencies in whole-body counting for various body sizes. The code consists of mathematical models and parameters which are categorised into three groups: a geometrical model for the phantom and detectors, a photon transport model, and a detection system model. Photon histories were simulated with these models. The counting efficiencies for five 137Cs block phantoms of different sizes were calculated by the code and compared with those measured with a whole-body counter at NIRS (Japan). The phantoms corresponded to a newborn, a 5 month old, a 6 year old, an 11 year old and an adult. The differences between the measured and calculated values were within 6%. For the adult phantom, the difference was 0.5%. The results suggest that the Monte Carlo simulation code can be used to estimate the counting efficiencies for various body sizes. (Author)
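
    The core idea of estimating a counting efficiency as the fraction of simulated photon histories that reach the detector can be illustrated with the purely geometric toy below. It is only the solid-angle part of such a calculation; the actual code described above also models photon transport and attenuation in the phantom, and the source-detector dimensions used here are illustrative assumptions, not the NIRS geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_efficiency(n_photons, detector_radius_cm, source_distance_cm):
    # Isotropic emission: cos(theta) uniform on [-1, 1]; azimuth irrelevant by symmetry.
    cos_theta = rng.uniform(-1.0, 1.0, n_photons)
    forward = cos_theta > 0.0
    # Radial offset where the ray crosses the detector plane at z = source_distance.
    tan_theta = np.sqrt(1.0 - cos_theta[forward] ** 2) / cos_theta[forward]
    hits = source_distance_cm * tan_theta <= detector_radius_cm
    return hits.sum() / n_photons

d, R = 30.0, 5.0
mc = geometric_efficiency(1_000_000, detector_radius_cm=R, source_distance_cm=d)
analytic = 0.5 * (1.0 - d / np.sqrt(d**2 + R**2))  # solid-angle fraction of a disk on axis
print(f"Monte Carlo: {mc:.5f}   analytic: {analytic:.5f}")
```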

  9. A calibration method for whole-body counters, using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, T.; Matsumoto, M.; Uchiyama, M. [National Inst. of Radiological Sciences, Chiba (Japan)

    1996-11-01

    A Monte Carlo simulation code was developed to estimate the counting efficiencies in whole-body counting for various body sizes. The code consists of mathematical models and parameters which are categorised into three groups: a geometrical model for the phantom and detectors, a photon transport model, and a detection system model. Photon histories were simulated with these models. The counting efficiencies for five 137Cs block phantoms of different sizes were calculated by the code and compared with those measured with a whole-body counter at NIRS (Japan). The phantoms corresponded to a newborn, a 5 month old, a 6 year old, an 11 year old and an adult. The differences between the measured and calculated values were within 6%. For the adult phantom, the difference was 0.5%. The results suggest that the Monte Carlo simulation code can be used to estimate the counting efficiencies for various body sizes. (Author).

  10. Simulation based sequential Monte Carlo methods for discretely observed Markov processes

    OpenAIRE

    Neal, Peter

    2014-01-01

    Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...

  11. Monte Carlo simulation of air sampling methods for the measurement of radon decay products.

    Science.gov (United States)

    Sima, Octavian; Luca, Aurelian; Sahagia, Maria

    2017-08-01

    A stochastic model of the processes involved in the measurement of the activity of the 222Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the 222Rn decay product concentrations in the air are realistically evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.
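
    A minimal sketch of this kind of propagation of distributions is shown below. The measurement model (concentration = counts / (efficiency × collection × sampled volume)) and all of the distributions and numerical values are illustrative assumptions standing in for the authors' detailed stochastic model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

counts     = rng.poisson(lam=1500, size=n)        # counting statistics
efficiency = rng.normal(0.25, 0.01, size=n)       # detector efficiency
collection = rng.normal(0.95, 0.02, size=n)       # filter collection efficiency
flow_rate  = rng.normal(20.0, 0.5, size=n)        # sampling flow rate, L/min
t_sampling = 10.0                                 # sampling time in minutes, assumed exact

volume_m3 = flow_rate * t_sampling / 1000.0       # sampled air volume
conc = counts / (efficiency * collection * volume_m3)   # activity concentration (toy units)

mean, std = conc.mean(), conc.std(ddof=1)
lo, hi = np.percentile(conc, [2.5, 97.5])
print(f"concentration: {mean:.0f} +/- {std:.0f}   95% interval: [{lo:.0f}, {hi:.0f}]")
```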

  12. The codes WAV3BDY and WAV4BDY and the variational Monte Carlo method

    International Nuclear Information System (INIS)

    Schiavilla, R.

    1987-01-01

    A description of the codes WAV3BDY and WAV4BDY, which generate the variational ground state wave functions of the A=3 and 4 nuclei, is given, followed by a discussion of the Monte Carlo integration technique, which is used to calculate expectation values and transition amplitudes of operators, and for whose implementation WAV3BDY and WAV4BDY are well suited

  13. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    Science.gov (United States)

    Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.

    2014-06-01

    Physical analyses of the potential performance of LWRs with regard to fuel utilization require that an important part of the work be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a large number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice is selected (to increase the conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. Under these conditions two main subjects are discussed: the Monte Carlo variance calculation and the assessment of the two-energy-group diffusion operator for the core calculation.

  14. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.

  15. The effects of weekly augmentation therapy in patients with PiZZ α1-antitrypsin deficiency

    Directory of Open Access Journals (Sweden)

    Schmid ST

    2012-09-01

    Full Text Available ST Schmid,1 J Koepke,1 M Dresel,1 A Hattesohl,1 E Frenzel,2 J Perez,3 DA Lomas,4 E Miranda,5 T Greulich,1 S Noeske,1 M Wencker,6 H Teschler,6 C Vogelmeier,1 S Janciauskiene,2,* AR Koczulla1,*1Department of Internal Medicine, Division for Pulmonary Diseases, University Hospital Marburg, Marburg, Germany; 2Department of Respiratory Medicine, Hannover Medical School, Hannover, Germany; 3Department of Cellular Biology, University of Malaga, Malaga, Spain; 4Department of Medicine, Cambridge Institute for Medical Research, University of Cambridge, Cambridge, United Kingdom; 5Department of Biology and Biotechnology, Istituto Pasteur – Fondazione Cenci Bolognetti, Sapienza University of Rome, Rome, Italy; 6Department of Pneumology, West German Lung Clinic, Essen University Hospital, Essen, Germany*These authors contributed equally to this workBackground: The major concept behind augmentation therapy with human α1-antitrypsin (AAT is to raise the levels of AAT in patients with protease inhibitor phenotype ZZ (Glu342Lys-inherited AAT deficiency and to protect lung tissues from proteolysis and progression of emphysema.Objective: To evaluate the short-term effects of augmentation therapy (Prolastin® on plasma levels of AAT, C-reactive protein, and chemokines/cytokines.Materials and methods: Serum and exhaled breath condensate were collected from individuals with protease inhibitor phenotype ZZ AAT deficiency-related emphysema (n = 12 on the first, third, and seventh day after the infusion of intravenous Prolastin. Concentrations of total and polymeric AAT, interleukin-8 (IL-8, monocyte chemotactic protein-1, IL-6, tumor necrosis factor-α, vascular endothelial growth factor, and C-reactive protein were determined. Blood neutrophils and primary epithelial cells were also exposed to Prolastin (1 mg/mL.Results: There were significant fluctuations in serum (but not in exhaled breath condensate levels of AAT polymers, IL-8, monocyte chemotactic protein-1, IL

  16. Comparison of finite element and finite difference methods for 2D and 3D calculations with Monte Carlo method results for idealized cases of a heavy water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Grant, Carlos; Marconi, Javier; Serra, Oscar [Comision Nacional de Energia Atomica, Buenos Aires (Argentina)]. E-mail: grant@cnea.gov.ar; Mollerach, Ricardo; Fink, Jose [Nucleoelectrica Argentina S.A., Buenos Aires (Argentina)]. E-mail: RMollerach@na-sa.com.ar; JFink@na-sa.com.ar

    2005-07-01

    Nowadays, the increased calculation capacity of modern computers allows us to evaluate the 2D and 3D flux and power distributions of a nuclear reactor in a reasonable amount of time using a Monte Carlo method. This method gives results that can be considered the most reliable evaluation of flux and power distributions, with a great amount of detail. This is the reason why these results can be considered as benchmark cases that can be used for the validation of other methods. For this purpose, idealized models of the ATUCHA I reactor were calculated using the Monte Carlo code MCNP5. 2D and 3D cases with and without control rods and with channels without fuel elements were analyzed. All of them were modeled using a finite element code (DELFIN) and a finite difference code (PUMA). In both cases two energy groups were used. (author)

  17. A CUMULATIVE MIGRATION METHOD FOR COMPUTING RIGOROUS TRANSPORT CROSS SECTIONS AND DIFFUSION COEFFICIENTS FOR LWR LATTICES WITH MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi

    2016-05-01

    A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only recently published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
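
    For reference, the commonly applied “out-scatter” transport correction mentioned above amounts to the simple formula Σ_tr = Σ_t − μ̄·Σ_s with D = 1/(3Σ_tr). The snippet below just writes that approximation out; the one-group hydrogen-like numbers are illustrative assumptions, and the paper's point is precisely that this shortcut can be inaccurate compared with the rigorous (cumulative migration) definition.

```python
def outscatter_diffusion(sigma_t, sigma_s, mu_bar):
    """Out-scatter approximation: Sigma_tr = Sigma_t - mu_bar * Sigma_s, D = 1/(3*Sigma_tr)."""
    sigma_tr = sigma_t - mu_bar * sigma_s          # transport cross section (1/cm)
    return sigma_tr, 1.0 / (3.0 * sigma_tr)        # diffusion coefficient (cm)

sigma_t, sigma_s = 1.20, 1.15                      # 1/cm, assumed one-group values
mu_bar = 2.0 / (3.0 * 1.0)                         # average scattering cosine, A = 1 (hydrogen)
sigma_tr, D = outscatter_diffusion(sigma_t, sigma_s, mu_bar)
print(f"Sigma_tr = {sigma_tr:.4f} 1/cm, D = {D:.4f} cm")
```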

  18. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor, with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation. This is the basis of radiotherapy techniques. Institutes that work with high dose rate applications use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the information necessary for planning brachytherapy in patients. The objective of this work is, using Monte Carlo techniques, to develop a computer program - ISODOSE - which allows isodose curves to be determined around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs, in general, are very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent in analytic solutions such as, for instance, the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)

  19. Application of voxel phantoms and Monte Carlo methods to internal and external dosimetry

    International Nuclear Information System (INIS)

    Hunt, J.G.; Santos, D. de S.; Silva, F.C. da; Dantas, B.M.; Azeredo, A.; Malatova, I.; Foltanova, S.

    2000-01-01

    Voxel phantoms and the Monte Carlo technique are applied to the calibration of in vivo measurement systems, Specific Effective Energy calculations, and dose calculations due to external sources of radiation. The main advantages of voxel phantoms are their high level of detail of body structures and the ease with which their physical dimensions can be changed. For the simulation of in vivo measurement systems for calibration purposes, a voxel phantom with a format of 871 'slices' each of 277 x 148 picture elements was used. The Monte Carlo technique is used to simulate the tissue contamination, to transport the photons through the tissues and to simulate the detection of the radiation. For benchmarking, the program was applied to obtain calibration factors for the in vivo measurement of 241Am, natural U and 137Cs deposited in various tissues or in the whole body, as measured with a NaI or germanium detector. The calculated and real activities were found to be in good agreement in all cases. For the calculation of Specific Effective Energies (SEEs) and the calculation of dose received from external sources, the Yale voxel phantom with a format of 493 'slices' each of 87 x 147 picture elements was used. The Monte Carlo program was developed to calculate external doses due to environmental, occupational or accidental exposures. The program calculates tissue and effective dose for the following geometries: cloud immersion, ground contamination, X-ray irradiation, point source irradiation or others. The benchmarking results for the external source are in good agreement with the measured values. The results obtained for the SEEs are compatible with the ICRP values. (author)

  20. Study on the Development of New BWR Core Analysis Scheme Based on the Continuous Energy Monte Carlo Burn-up Calculation Method

    OpenAIRE

    東條, 匡志; tojo, masashi

    2007-01-01

    In this study, a BWR core calculation method is developed. The continuous energy Monte Carlo burn-up calculation code is newly applied to BWR assembly calculations at the production level. The applicability of the present new calculation method is verified through tracking calculations of a commercial BWR. The mechanisms and quantitative effects of error propagation, of the spatial discretization and of the temperature distribution in the fuel pellet on the Monte Carlo burn-up calculations are clarified...

  1. Review of the theory and applications of Monte Carlo methods. Proceedings of a seminar-workshop, Oak Ridge, Tennessee, April 21-23, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Trubey, D.K.; McGill, B.L. (eds.)

    1980-08-01

    This report consists of 24 papers which were presented at the seminar on Theory and Application of Monte Carlo Methods, held in Oak Ridge on April 21-23, plus a summary of the three-man panel discussion which concluded the seminar and two papers which were not given orally. These papers constitute a current statement of the state of the art of the theory and application of Monte Carlo methods for radiation transport problems in shielding and reactor physics.

  2. Evaluation of occupational exposure in interventionist procedures using Monte Carlo Method

    International Nuclear Information System (INIS)

    Santos, William S.; Neves, Lucio P.; Perini, Ana P.; Caldas, Linda V.E.; Belinato, Walmir; Maia, Ana F.

    2014-01-01

    This study presents a computational model of exposure for a patient, cardiologist and nurse in a typical scenario of cardiac interventional procedures. A set of conversion coefficients (CC) for effective dose (E) in terms of kerma-area product (KAP) was calculated for all individuals involved, using seven different energy spectra and eight beam projections. The CC for the entrance skin dose (ESD) of the patient, normalized to the KAP, was also calculated. All individuals were represented by anthropomorphic phantoms incorporated in a radiation transport code based on Monte Carlo simulation. (author)

  3. Monte Carlo method for determining free-energy differences and transition state theory rate constants

    International Nuclear Information System (INIS)

    Voter, A.F.

    1985-01-01

    We present a new Monte Carlo procedure for determining the Helmholtz free-energy difference between two systems that are separated in configuration space. Unlike most standard approaches, no integration over intermediate potentials is required. A Metropolis walk is performed for each system, and the average Metropolis acceptance probability for a hypothetical step along a probe vector into the other system is accumulated. Either classical or quantum free energies may be computed, and the procedure is also ideally suited for evaluating generalized transition state theory rate constants. As an application we determine the relative free energies of three configurations of a tungsten dimer on the W(110) surface
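
    The acceptance-ratio idea described above (accumulate, during a Metropolis walk in each system, the average acceptance probability for a hypothetical trial step along a probe vector into the other system) can be illustrated with a deliberately simple 1D example. This is a schematic sketch, not the paper's code: the two "systems" are harmonic wells separated in configuration space, chosen so the estimate can be checked against the exact free-energy difference.

```python
import numpy as np

rng = np.random.default_rng(3)
kT = 1.0
kA, xA = 1.0, 0.0          # spring constant and centre of system A (assumed)
kB, xB = 4.0, 5.0          # system B, displaced by the probe vector s = xB - xA
s = xB - xA

UA = lambda x: 0.5 * kA * (x - xA) ** 2
UB = lambda x: 0.5 * kB * (x - xB) ** 2

def metropolis_average(U_self, U_other, x0, shift, n_steps=200_000, dx=1.0):
    """Sample U_self with Metropolis; accumulate <min(1, exp(-(U_other(x+shift)-U_self(x))/kT))>."""
    x, acc_sum = x0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-dx, dx)
        if rng.random() < np.exp(-(U_self(x_new) - U_self(x)) / kT):
            x = x_new
        acc_sum += min(1.0, np.exp(-(U_other(x + shift) - U_self(x)) / kT))
    return acc_sum / n_steps

mAB = metropolis_average(UA, UB, xA, +s)    # walk in A, probe step into B
mBA = metropolis_average(UB, UA, xB, -s)    # walk in B, probe step into A

dF_est = -kT * np.log(mAB / mBA)            # F_B - F_A from the acceptance ratio
dF_exact = -0.5 * kT * np.log(kA / kB)      # exact result for harmonic wells
print(f"estimated dF = {dF_est:.3f},  exact dF = {dF_exact:.3f}")
```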

  4. Effective dose in individuals from exposure to patients treated with 131I using the Monte Carlo method

    International Nuclear Information System (INIS)

    Carvalho Junior, Alberico B. de; Silva, Ademir X.

    2007-01-01

    In this work, using the Visual Monte Carlo code and the voxel phantom FAX, we set up irradiation scenarios similar to the treatments used in nuclear medicine, with the intention of estimating the effective dose in individuals exposed to patients treated with 131I. We considered specific situations, such as doses to others while sleeping, using public or private transportation, or being in a cinema for a few hours. In the situations considered, the value of the effective dose did not exceed 0.05 mSv, demonstrating that, for the parameters considered, the patient could be released without receiving radiation protection instructions. (author)

  5. Improvement and performance evaluation of the perturbation source method for an exact Monte Carlo perturbation calculation in fixed source problems

    Science.gov (United States)

    Sakamoto, Hiroki; Yamamoto, Toshihiro

    2017-09-01

    This paper presents improvement and performance evaluation of the "perturbation source method", which is one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can be easily extended to an exact perturbation method. A transport equation for calculating an exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed source calculation for the unperturbed system. A set of perturbation particle is started at the collision point in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for a particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.

  6. Simulation of diffuse photon migration in tissue by a Monte Carlo method derived from the optical scattering of spheroids.

    Science.gov (United States)

    Hart, Vern P; Doyle, Timothy E

    2013-09-01

    A Monte Carlo method was derived from the optical scattering properties of spheroidal particles and used for modeling diffuse photon migration in biological tissue. The spheroidal scattering solution used a separation of variables approach and numerical calculation of the light intensity as a function of the scattering angle. A Monte Carlo algorithm was then developed which utilized the scattering solution to determine successive photon trajectories in a three-dimensional simulation of optical diffusion and resultant scattering intensities in virtual tissue. Monte Carlo simulations using isotropic randomization, Henyey-Greenstein phase functions, and spherical Mie scattering were additionally developed and used for comparison to the spheroidal method. Intensity profiles extracted from diffusion simulations showed that the four models differed significantly. The depth of scattering extinction varied widely among the four models, with the isotropic, spherical, spheroidal, and phase function models displaying total extinction at depths of 3.62, 2.83, 3.28, and 1.95 cm, respectively. The results suggest that advanced scattering simulations could be used as a diagnostic tool by distinguishing specific cellular structures in the diffused signal. For example, simulations could be used to detect large concentrations of deformed cell nuclei indicative of early stage cancer. The presented technique is proposed to be a more physical description of photon migration than existing phase function methods. This is attributed to the spheroidal structure of highly scattering mitochondria and elongation of the cell nucleus, which occurs in the initial phases of certain cancers. The potential applications of the model and its importance to diffusive imaging techniques are discussed.
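
    One of the comparison models mentioned above is the Henyey-Greenstein phase function. The generic sketch below shows the standard inverse-CDF sampling of the scattering deflection cosine used in photon-migration Monte Carlo codes; it is not the authors' spheroidal-scattering code, and the anisotropy factor g is an assumed, merely typical, value.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_hg_cos_theta(g, n):
    """Draw cos(theta) from the Henyey-Greenstein phase function with anisotropy g."""
    xi = rng.random(n)
    if abs(g) < 1e-6:                        # isotropic limit
        return 2.0 * xi - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)

g = 0.9                                      # typical of forward-peaked tissue scattering
mu = sample_hg_cos_theta(g, 1_000_000)
print(f"sampled <cos theta> = {mu.mean():.4f}  (expected {g})")   # <cos theta> = g for HG
```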

  7. Application of the direct simulation Monte Carlo method to nanoscale heat transfer between a soot particle and the surrounding gas

    International Nuclear Information System (INIS)

    Yang, M.; Liu, F.; Smallwood, G.J.

    2004-01-01

    Laser-Induced Incandescence (LII) technique has been widely used to measure soot volume fraction and primary particle size in flames and engine exhaust. Currently there is lack of quantitative understanding of the shielding effect of aggregated soot particles on its conduction heat loss rate to the surrounding gas. The conventional approach for this problem would be the application of the Monte Carlo (MC) method. This method is based on simulation of the trajectories of individual molecules and calculation of the heat transfer at each of the molecule/molecule collisions and the molecule/particle collisions. As the first step toward calculating the heat transfer between a soot aggregate and the surrounding gas, the Direct Simulation Monte Carlo (DSMC) method was used in this study to calculate the heat transfer rate between a single spherical aerosol particle and its cooler surrounding gas under different conditions of temperature, pressure, and the accommodation coefficient. A well-defined and simple hard sphere model was adopted to describe molecule/molecule elastic collisions. A combination of the specular reflection and completely diffuse reflection model was used to consider molecule/particle collisions. The results obtained by DSMC are in good agreement with the known analytical solution of heat transfer rate for an isolated, motionless sphere in the free-molecular regime. Further the DSMC method was applied to calculate the heat transfer in the transition regime. Our present DSMC results agree very well with published DSMC data. (author)

  8. Fast protein loop sampling and structure prediction using distance-guided sequential chain-growth Monte Carlo method.

    Directory of Open Access Journals (Sweden)

    Ke Tang

    2014-04-01

    Full Text Available Loops in proteins are flexible regions connecting regular secondary structures. They are often involved in protein functions through interacting with other molecules. The irregularity and flexibility of loops make their structures difficult to determine experimentally and challenging to model computationally. Conformation sampling and energy evaluation are the two key components in loop modeling. We have developed a new method for loop conformation sampling and prediction based on a chain growth sequential Monte Carlo sampling strategy, called Distance-guided Sequential chain-Growth Monte Carlo (DISGRO). With an energy function designed specifically for loops, our method can efficiently generate high quality loop conformations with low energy that are enriched with near-native loop structures. The average minimum global backbone RMSD for 1,000 conformations of 12-residue loops is 1.53 Å, with a lowest energy RMSD of 2.99 Å, and an average ensemble RMSD of 5.23 Å. A novel geometric criterion is applied to speed up calculations. The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about 10 cpu minutes for 12-residue loops, compared to ca 180 cpu minutes using the FALCm method. Test results on benchmark datasets show that DISGRO performs comparably or better than previous successful methods, while requiring far less computing time. DISGRO is especially effective in modeling longer loops (10-17 residues).

  9. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    Science.gov (United States)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of uncertainties of the temperature measurement by standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing the multivariate Gaussian distribution for input quantities. This allows taking into account the correlations among resistances at the defining fixed points. Assumption of Gaussian probability density function is acceptable, with respect to the several sources of uncertainties of resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate suitability of the method by validation of its results.
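
    The propagation-of-distributions procedure described above can be sketched as follows: draw correlated Gaussian resistances, push each draw through the calibration model, and summarise the resulting temperature distribution. The two-input model below (resistance ratio W = R_t / R_tpw and a constant sensitivity dT/dW) is a deliberately simplified stand-in for the full ITS-90 reference and deviation functions, and all numerical values, uncertainties and the correlation coefficient are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2017)
n = 500_000

# Assumed means, standard uncertainties (ohm) and correlation between the SPRT
# reading R_t and the triple-point-of-water reading R_tpw.
mean = np.array([35.0, 25.0])
u    = np.array([4e-4, 2e-4])
r_corr = 0.6
cov = np.array([[u[0]**2,            r_corr * u[0] * u[1]],
                [r_corr * u[0] * u[1], u[1]**2           ]])

R_t, R_tpw = rng.multivariate_normal(mean, cov, size=n).T
W = R_t / R_tpw                                  # resistance ratio
dT_dW = 250.0                                    # K per unit W, assumed constant sensitivity
T = dT_dW * (W - 1.0)                            # toy calibration function

lo, hi = np.percentile(T, [2.5, 97.5])
print(f"T = {T.mean():.3f} K, u(T) = {T.std(ddof=1) * 1000:.2f} mK, "
      f"95% interval: [{lo:.3f}, {hi:.3f}] K")
```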

  10. Evaluation of high packing density powder X-ray screens by Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Liaparinos, P. [Department of Medical Physics, Medical School, University of Patras, 26500 Patras (Greece); Kandarakis, I.; Cavouras, D. [Department of Medical Instruments Technology, Technological Educational Institution of Athens, Ag. Spyridonos Street, Aigaleo, 12210 Athens (Greece); Kalivas, N. [Greek Atomic Energy Commission, 15310 Athens (Greece); Delis, H. [Department of Medical Physics, Medical School, University of Patras, 26500 Patras (Greece); Panayiotakis, G. [Department of Medical Physics, Medical School, University of Patras, 26500 Patras (Greece)], E-mail: panayiot@upatras.gr

    2007-09-21

    Phosphor materials are employed in intensifying screens of both digital and conventional X-ray imaging detectors. High packing density powder screens have been developed (e.g. screens in ceramic form) exhibiting high-resolution and light emission properties, and thus contributing to improved image transfer characteristics and higher radiation to light conversion efficiency. For the present study, a custom Monte Carlo simulation program was used in order to examine the performance of ceramic powder screens, under various radiographic conditions. The model was developed using Mie scattering theory for the description of light interactions, based on the physical characteristics (e.g. complex refractive index, light wavelength) of the phosphor material. Monte Carlo simulations were carried out assuming: (a) X-ray photon energy ranging from 18 up to 49 keV, (b) Gd2O2S:Tb phosphor material with packing density of 70% and grain size of 7 μm and (c) phosphor thickness ranging between 30 and 70 mg/cm2. The variation of the Modulation Transfer Function (MTF) and the Luminescence Efficiency (LE) with respect to the X-ray energy and the phosphor thickness was evaluated. Both aforementioned imaging characteristics were shown to take high values at 49 keV X-ray energy and 70 mg/cm2 phosphor thickness. It was found that high packing density screens may be appropriate for use in medical radiographic systems.

  11. Evaluation of high packing density powder X-ray screens by Monte Carlo methods

    Science.gov (United States)

    Liaparinos, P.; Kandarakis, I.; Cavouras, D.; Kalivas, N.; Delis, H.; Panayiotakis, G.

    2007-09-01

    Phosphor materials are employed in intensifying screens of both digital and conventional X-ray imaging detectors. High packing density powder screens have been developed (e.g. screens in ceramic form) exhibiting high-resolution and light emission properties, and thus contributing to improved image transfer characteristics and higher radiation to light conversion efficiency. For the present study, a custom Monte Carlo simulation program was used in order to examine the performance of ceramic powder screens, under various radiographic conditions. The model was developed using Mie scattering theory for the description of light interactions, based on the physical characteristics (e.g. complex refractive index, light wavelength) of the phosphor material. Monte Carlo simulations were carried out assuming: (a) X-ray photon energy ranging from 18 up to 49 keV, (b) Gd 2O 2S:Tb phosphor material with packing density of 70% and grain size of 7 μm and (c) phosphor thickness ranging between 30 and 70 mg/cm 2. The variation of the Modulation Transfer Function (MTF) and the Luminescence Efficiency (LE) with respect to the X-ray energy and the phosphor thickness was evaluated. Both aforementioned imaging characteristics were shown to take high values at 49 keV X-ray energy and 70 mg/cm 2 phosphor thickness. It was found that high packing density screens may be appropriate for use in medical radiographic systems.

  12. Accounting for inhomogeneous broadening in nano-optics by electromagnetic modeling based on Monte Carlo methods

    Science.gov (United States)

    Gudjonson, Herman; Kats, Mikhail A.; Liu, Kun; Nie, Zhihong; Kumacheva, Eugenia; Capasso, Federico

    2014-01-01

    Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable. PMID:24469797
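
    The broadening effect described above can be reproduced with a toy stand-in for the full FDTD-plus-Monte-Carlo procedure: model the sharp response of a single element as a Lorentzian resonance, randomly perturb the resonance frequency from element to element (polydispersity), and average the spectra over many Monte Carlo draws. The Lorentzian line shape, the 2% spread and all other parameters are illustrative assumptions; only the qualitative broadening is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

freq = np.linspace(0.8, 1.2, 1000)              # normalised frequency axis
f0, gamma = 1.0, 0.005                          # nominal resonance and half-width
sigma_poly = 0.02                               # element-to-element spread (2 %)

def lorentzian(f, f_res, width):
    return width**2 / ((f - f_res)**2 + width**2)

single = lorentzian(freq, f0, gamma)            # spectrum of one ideal element
draws = rng.normal(f0, sigma_poly, size=5000)   # Monte Carlo sample of resonance positions
ensemble = np.mean([lorentzian(freq, fr, gamma) for fr in draws], axis=0)

def fwhm(f, y):
    above = f[y >= 0.5 * y.max()]
    return above[-1] - above[0]

print(f"single-element FWHM: {fwhm(freq, single):.4f}, ensemble FWHM: {fwhm(freq, ensemble):.4f}")
```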

  13. Evaluation of high packing density powder X-ray screens by Monte Carlo methods

    International Nuclear Information System (INIS)

    Liaparinos, P.; Kandarakis, I.; Cavouras, D.; Kalivas, N.; Delis, H.; Panayiotakis, G.

    2007-01-01

    Phosphor materials are employed in intensifying screens of both digital and conventional X-ray imaging detectors. High packing density powder screens have been developed (e.g. screens in ceramic form) exhibiting high-resolution and light emission properties, and thus contributing to improved image transfer characteristics and higher radiation to light conversion efficiency. For the present study, a custom Monte Carlo simulation program was used in order to examine the performance of ceramic powder screens, under various radiographic conditions. The model was developed using Mie scattering theory for the description of light interactions, based on the physical characteristics (e.g. complex refractive index, light wavelength) of the phosphor material. Monte Carlo simulations were carried out assuming: (a) X-ray photon energy ranging from 18 up to 49 keV, (b) Gd 2 O 2 S:Tb phosphor material with packing density of 70% and grain size of 7 μm and (c) phosphor thickness ranging between 30 and 70 mg/cm 2 . The variation of the Modulation Transfer Function (MTF) and the Luminescence Efficiency (LE) with respect to the X-ray energy and the phosphor thickness was evaluated. Both aforementioned imaging characteristics were shown to take high values at 49 keV X-ray energy and 70 mg/cm 2 phosphor thickness. It was found that high packing density screens may be appropriate for use in medical radiographic systems

  14. Calculation of extended shields in the Monte Carlo method using importance function (BRAND and DD code systems)

    International Nuclear Information System (INIS)

    Androsenko, A.A.; Androsenko, P.A.; Kagalenko, I.Eh.; Mironovich, Yu.N.

    1992-01-01

    Consideration is given to a technique and algorithms for constructing neutron trajectories in the Monte Carlo method, taking into account data on the solution of the adjoint transport equation. When simulating the transport part of the transfer kernel, use is made of a piecewise-linear approximation of the free path length density along the particle motion direction. The approach has been implemented in programs within the framework of the BRAND code system. The importance is calculated in the multigroup P1 approximation within the framework of the DD-30 code system. The efficiency of the developed computation technique is demonstrated by solving two model problems. 4 refs.; 2 tabs

  15. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials commonly used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows answers to be obtained by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (γ-ray beams, 1-10 MeV) and HIMAC and ISIS-800 (high energy neutrons, 20-800 MeV) transport in iron and concrete. The results were then compared with experimental data.

  16. A line-by-line hybrid unstructured finite volume/Monte Carlo method for radiation transfer in 3D non-gray medium

    Science.gov (United States)

    Sun, Hai-Feng; Sun, Feng-Xian; Xia, Xin-Lin

    2018-01-01

    A hybrid method combining the unstructured finite volume method and the Monte Carlo method, and incorporating the line-by-line model, has been developed to simulate radiative transfer in highly spectral and inhomogeneous media. In this method, the unstructured finite volume method is adopted to solve the spectral radiative transfer equation at wave numbers or spectral locations determined by the Monte Carlo method. The Monte Carlo method takes effect by first defining the monotonic random number relations corresponding to the spectral emitted power density of every discretized element of the medium concerned, and then by inverting these relations with predefined random numbers to recover the spectral locations. Through this Monte Carlo method, the actual number of spectral locations at which the spectral radiative transfer equations are solved may be reduced: the spectral locations with higher spectral emissive powers are more likely to be selected. To increase the performance of the presented method, the total variation diminishing scheme on unstructured grids is adopted in treating the spectral radiative intensity at the interfaces between control volumes. In addition, the discretized radiative transfer equation is implicitly and iteratively solved by an algebraic multi-grid solution approach to accelerate the convergence of the equation. The presented method was applied to 3D homogeneous and inhomogeneous cases for validation and performance studies. Results show that for both cases the presented method agrees well with pure Monte Carlo benchmark solutions, with an acceptable number of spectral locations and computing time.
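
    The Monte Carlo spectral-selection step described above is essentially inverse-CDF sampling: build the cumulative distribution of the spectral emitted power over the discretised spectrum, then invert it with uniform random numbers so that wavenumbers with higher emissive power are chosen more often. The sketch below shows only that step; the Planck-weighted toy absorption spectrum is an assumption standing in for real line-by-line data, and the radiative transfer solve itself is not included.

```python
import numpy as np

rng = np.random.default_rng(1)

wavenumber = np.linspace(200.0, 4000.0, 5000)             # cm^-1, discretised spectrum
kappa = 0.5 + 0.5 * np.sin(0.01 * wavenumber) ** 2        # toy spectral absorption coefficient

def planck_spectral(eta_cm, T):
    """Blackbody spectral emissive power per unit wavenumber (arbitrary scale)."""
    c2 = 1.4388                                           # cm*K, second radiation constant
    return eta_cm**3 / np.expm1(c2 * eta_cm / T)

emitted = kappa * planck_spectral(wavenumber, T=1500.0)   # spectral emitted power density
cdf = np.cumsum(emitted)
cdf /= cdf[-1]                                            # monotonic random-number relation

def pick_spectral_locations(n):
    """Invert the CDF: return indices of wavenumbers at which the RTE would be solved."""
    return np.searchsorted(cdf, rng.random(n))

idx = pick_spectral_locations(20)
print(np.sort(wavenumber[idx]))
```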

  17. Monte-Carlo method - codes for the study of criticality problems (on IBM 7094)

    Energy Technology Data Exchange (ETDEWEB)

    Moreau, J.; Rabot, H.; Robin, C. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires]

    1965-07-01

    The two codes presented in this report allow the multiplication constant to be determined for media containing fissile materials in very varied and finely divided forms; they are based on the Monte Carlo method. The first code applies to x, y, z geometries; the volume to be studied must be divisible into parallelepipeds, the media within each parallelepiped being bounded by non-intersecting surfaces. The second code is intended for r, θ, z geometries. The results include an analysis of the collisions in each medium. Applications and examples with information on computing time and accuracy are given. (authors)

  18. Two New ZZ Ceti Stars from the LAMOST Survey

    Science.gov (United States)

    Su, J.; Fu, J.; Khokhuntod, P.; Lin, G.

    2017-03-01

    We report the progress of our search for new pulsating white dwarfs. A few pulsating DA white dwarf candidates were selected from the published catalogs based on the data of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) survey. We carried out follow-up photometric observations of the candidates to ascertain whether they are pulsators. Two candidates, J004628.31+343319.90 and J062159.50+252335.90, have been identified as new pulsating DA white dwarfs (ZZ Ceti stars). J004628.31+343319.90 has a main frequency at 2114.4 μHz and J062159.50+252335.90 has a lower main frequency at 1204.6 μHz.

  19. Asteroseismology of the ZZ Ceti and DAZ GD133

    Science.gov (United States)

    Fu, J.-N.; Vauclair, G.; Su, J.

    2017-09-01

    GD 133 is a DAZ white dwarf with an atmosphere polluted by heavy elements accreted from a debris disk, which is formed by the disruption of rocky planetesimals on orbits bringing them within the white dwarf tidal radius. Reaching such orbits implies the potential presence of a perturbing planet. GD 133 is a ZZ Ceti pulsator close to the blue edge of the instability strip. The presence of a planet could be revealed by the periodic variation of the observed pulsation periods induced by the orbital motion of the white dwarf. We started a multi-site photometric follow-up aimed at detecting the signature of this potential planet. As a partial result of this work in progress, we give the parameters of a preliminary best-fit model derived from asteroseismology.

  20. Asteroseismology of the ZZ Ceti and DAZ GD133

    Directory of Open Access Journals (Sweden)

    Fu J.-N.

    2017-01-01

    Full Text Available GD 133 is a DAZ white dwarf with an atmosphere polluted by heavy elements accreted from a debris disk, which is formed by the disruption of rocky planetesimals on orbits bringing them within the white dwarf tidal radius. Reaching such orbits implies the potential presence of a perturbing planet. GD 133 is a ZZ Ceti pulsator close to the blue edge of the instability strip. The presence of a planet could be revealed by the periodic variation of the observed pulsation periods induced by the orbital motion of the white dwarf. We started a multi-site photometric follow-up aimed at detecting the signature of this potential planet. As a partial result of this work in progress, we give the parameters of a preliminary best-fit model derived from asteroseismology.

  1. Investigation of the spectral reflectance and bidirectional reflectance distribution function of sea foam layer by the Monte Carlo method.

    Science.gov (United States)

    Ma, L X; Wang, F Q; Wang, C A; Wang, C C; Tan, J Y

    2015-11-20

    Spectral properties of sea foam greatly affect ocean color remote sensing and aerosol optical thickness retrieval from satellite observation. This paper presents a combined Mie theory and Monte Carlo method to investigate the visible and near-infrared spectral reflectance and bidirectional reflectance distribution function (BRDF) of sea foam layers. A three-layer model of the sea foam is developed in which each layer is composed of large air bubbles coated with pure water. A pseudo-continuous model and Mie theory for coated spheres are used to determine the effective radiative properties of sea foam. The one-dimensional Cox-Munk surface roughness model is used to calculate the slope density functions of the wind-blown ocean surface. A Monte Carlo method is used to solve the radiative transfer equation. Effects of foam layer thickness, bubble size, wind speed, solar zenith angle, and wavelength on the spectral reflectance and BRDF are investigated. Comparisons between previous theoretical results and experimental data demonstrate the feasibility of the proposed method. Sea foam can significantly increase the spectral reflectance and BRDF of the sea surface. The absorption coefficient of seawater near the surface is not the only parameter that influences the spectral reflectance; the effects of bubble size, foam layer thickness, and solar zenith angle cannot be neglected either.
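
    The Monte Carlo solution of the radiative transfer equation in a foam layer boils down to tracking photons through collisions until they are absorbed, transmitted, or escape back through the top. A stripped-down, plane-parallel sketch with isotropic scattering (no Mie phase function, bubble coating, or Cox-Munk surface, all of which the paper includes) is shown below; the optical thickness and single-scattering albedo are illustrative:

        import numpy as np

        def slab_reflectance(tau_total, albedo, n_photons=200_000, seed=0):
            """Toy 1-D Monte Carlo: photons enter a layer of optical thickness
            tau_total; collisions scatter isotropically with the given albedo."""
            rng = np.random.default_rng(seed)
            reflected = 0
            for _ in range(n_photons):
                tau, mu = 0.0, 1.0                       # start at the top, heading down
                while True:
                    tau += mu * -np.log(rng.random())    # optical path to next collision
                    if tau < 0.0:
                        reflected += 1                   # escaped through the top
                        break
                    if tau > tau_total:
                        break                            # transmitted through the bottom
                    if rng.random() > albedo:
                        break                            # absorbed
                    mu = 2.0 * rng.random() - 1.0        # isotropic re-emission
            return reflected / n_photons

        print(slab_reflectance(tau_total=5.0, albedo=0.9))

    Replacing the isotropic phase function with the Mie phase function of coated bubbles and adding the rough air-water interface turns this toy into the kind of model used in the paper.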

  2. Analysis of the economic viability of a rural tourism enterprise in Brazil: an application of the Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Fernando Rodrigues Amorim

    2017-12-01

    Full Text Available The acquisition of projects aimed at rural tourism represents an alternative for generating income. The objective of this study was to evaluate the viability of purchasing a farm structured as a hostel, located in Joanópolis, in the interior of São Paulo, Brazil. The method was exploratory, based on a case study of the economic viability of this project. Since this viability is subject to uncertainties and risks, the Monte Carlo method was used to analyze the associated probabilities. The data, both primary and secondary, were obtained through the Department of Tourism of the city of Joanópolis. The calculations cover one year of operation, laid out as a cash flow with the monthly expenses of the hostel. From the results it was concluded that buying the hostel is feasible in the realistic and optimistic scenarios and under the Monte Carlo analysis of the project's total NPV values.
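
    The Monte Carlo step in such a feasibility study amounts to sampling uncertain monthly revenues and costs many times and looking at the resulting NPV distribution. A generic sketch follows; the purchase price, discount rate, and revenue/expense distributions are purely illustrative, not the study's data:

        import numpy as np

        rng = np.random.default_rng(42)

        def npv(cash_flows, monthly_rate):
            """Net present value of a series of monthly cash flows."""
            months = np.arange(1, len(cash_flows) + 1)
            return np.sum(cash_flows / (1.0 + monthly_rate) ** months)

        def simulate_npv(n_runs=10_000, purchase_price=500_000.0, monthly_rate=0.008):
            """Sample uncertain monthly revenue and expenses for one year and
            return the distribution of the project NPV (illustrative figures)."""
            results = np.empty(n_runs)
            for i in range(n_runs):
                revenue = rng.normal(loc=60_000.0, scale=12_000.0, size=12)   # per month
                expenses = rng.normal(loc=35_000.0, scale=5_000.0, size=12)   # per month
                results[i] = npv(revenue - expenses, monthly_rate) - purchase_price
            return results

        npvs = simulate_npv()
        print(f"mean NPV = {npvs.mean():.0f}, P(NPV > 0) = {(npvs > 0).mean():.2%}")

    The fraction of runs with positive NPV is the kind of probabilistic statement that a single deterministic cash flow cannot provide.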

  3. Characterization of the water filters cartridges from the iea-r1 reactor using the Monte Carlo method

    International Nuclear Information System (INIS)

    Costa, Priscila; Potiens Junior, Ademar J.

    2015-01-01

    Filter cartridges are part of the primary water treatment system of the IEA-R1 Research Reactor and, when saturated, they are replaced and become radioactive waste. The IEA-R1 is located at the Nuclear and Energy Research Institute (IPEN), in Sao Paulo, Brazil. Primary characterization is the main step of radioactive waste management, in which the physical, chemical and radiological properties are determined. It is a very important step because the information obtained at this point enables the choice of the appropriate management process and the definition of final disposal options. In this paper, a non-destructive method for primary characterization is presented, using the Monte Carlo method associated with gamma spectrometry. Gamma spectrometry allows the identification of radionuclides and their activity values. The detection efficiency is an important parameter, which is related to the photon energy, the detector geometry and the matrix of the sample to be analyzed. Due to the difficulty of obtaining a standard source with the same geometry as the filter cartridge, another technique is necessary to calibrate the detector. The technique described in this paper uses the Monte Carlo method for primary characterization of the IEA-R1 filter cartridges. (author)
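
    The role of the Monte Carlo model here is to supply the detection efficiency that cannot be measured with a standard source of the right geometry. Stripped of the real cartridge and detector description, the geometric part of such an efficiency estimate can be sketched as follows (point source facing a circular detector window, no attenuation or detector response, all dimensions hypothetical):

        import numpy as np

        def geometric_efficiency(source_to_detector_cm, detector_radius_cm,
                                 n_photons=1_000_000, seed=7):
            """Fraction of isotropically emitted photons whose direction
            intersects a circular detector face (solid-angle geometry only)."""
            rng = np.random.default_rng(seed)
            # Isotropic emission: direction cosine uniform in [-1, 1]
            # (the azimuth is irrelevant by symmetry).
            cos_t = rng.uniform(-1.0, 1.0, n_photons)
            forward = cos_t[cos_t > 0.0]                  # heading toward the detector
            r_plane = source_to_detector_cm * np.sqrt(1.0 - forward**2) / forward
            hits = np.count_nonzero(r_plane <= detector_radius_cm)
            return hits / n_photons

        print(geometric_efficiency(source_to_detector_cm=10.0, detector_radius_cm=3.0))

    A full model such as the one in the paper also transports photons through the cartridge matrix and the detector crystal, so the simulated full-energy-peak efficiency, not just the solid angle, is what calibrates the spectrometry.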

  4. Comparisons of Wilks’ and Monte Carlo Methods in Response to the 10CFR50.46(c) Proposed Rulemaking

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Hongbin [Idaho National Lab. (INL), Idaho Falls, ID (United States); Szilard, Ronaldo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-10-01

    The Nuclear Regulatory Commission (NRC) is proposing a new rulemaking on emergency core cooling system/loss-of-coolant accident (LOCA) performance analysis. In the proposed rulemaking, designated as 10CFR50.46(c), the US NRC put forward an equivalent cladding oxidation criterion as a function of cladding pre-transient hydrogen content. The proposed rulemaking imposes more restrictive and burnup-dependent cladding embrittlement criteria; consequently, nearly all the fuel rods in a reactor core need to be analyzed under LOCA conditions to demonstrate compliance with the safety limits. New analysis methods are required to provide a thorough characterization of the reactor core in order to identify the locations of the limiting rods as well as to quantify the safety margins under LOCA conditions. With the new analysis method presented in this work, the limiting transient case and the limiting rods can be easily identified to quantify the safety margins in response to the proposed new rulemaking. In this work, the best-estimate plus uncertainty (BEPU) analysis capability for large break LOCA with the new cladding embrittlement criteria using the RELAP5-3D code is established and demonstrated with a reduced set of uncertainty parameters. Both the direct Monte Carlo method and Wilks’ nonparametric statistical method can be used to perform uncertainty quantification. Wilks’ method has become the de facto industry standard for uncertainty quantification in BEPU LOCA analyses. Despite its widespread adoption by the industry, the use of small sample sizes to infer a statement of compliance with the existing 10CFR50.46 rule has been a major cause of unrealized operational margin in today’s BEPU methods. Moreover, the debate on the proper interpretation of Wilks’ theorem in the context of safety analyses is not fully resolved yet, even more than two decades after its introduction in the frame of safety analyses in the nuclear industry. This represents both a regulatory
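
    The Wilks’ approach referenced above rests on a nonparametric tolerance-limit argument: it fixes the number of code runs needed so that the largest (or k-th largest) calculated output bounds the 95th percentile of the true output distribution with 95% confidence, regardless of the distribution's shape. A small sketch of that sample-size calculation, independent of any particular safety code, is:

        from math import comb

        def wilks_confidence(n, k, coverage=0.95):
            """Confidence that the k-th largest of n runs bounds the `coverage`
            quantile of the output (one-sided, nonparametric)."""
            return sum(comb(n, j) * coverage**j * (1 - coverage)**(n - j)
                       for j in range(0, n - k + 1))

        def wilks_sample_size(k=1, coverage=0.95, confidence=0.95):
            """Smallest n such that the k-th largest output is a 95/95-type bound."""
            n = k
            while wilks_confidence(n, k, coverage) < confidence:
                n += 1
            return n

        print([wilks_sample_size(k) for k in (1, 2, 3)])   # -> [59, 93, 124]

    The direct Monte Carlo alternative simply runs the code enough times (thousands rather than tens) to estimate the percentile itself, which is where the trade-off between sampling burden and unrealized margin discussed in the record comes from.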

  5. Adiabatic properties of ZZ Ceti pulsating white dwarfs; Proprietes Adiabatiques des Naines Blanches Pulsantes de Type ZZ Ceti

    Science.gov (United States)

    Brassard, Pierre

    1992-01-01

    The purpose of this thesis is to study the properties of the nonradial oscillations of ZZ Ceti stars, also called variable DA stars, in the context of the adiabatic theory of small oscillations. For this type of star, these oscillations are observable as periodic variations of the luminosity. From an analysis of stellar models, consisting mainly of computing and interpreting the oscillation periods of the models, we seek a better knowledge of the fundamental physical properties of ZZ Ceti stars. We first develop various tools for undertaking this study. After presenting the basic mathematical formalism describing the nonradial oscillations of a star, we discuss the difficulties that may be encountered in computing the Brunt-Vaisala frequency, a fundamental quantity for the calculation of the oscillation periods. We then develop a simple theoretical model that allows the structure of the computed (or observed) periods to be analyzed and interpreted in terms of the structural properties of the star. We also present the entirely original numerical tools used to compute our periods from stellar models. Finally, we present the overall results of the analysis of our models and discuss the interpretation of the observed periods and of their rates of change in terms of the structure of the star and the composition of its core, respectively. These results represent the most complete study to date of the seismology of white dwarfs.

  6. Simulation of the functioning of a gamma camera using Monte Carlo method; Simulacion del funcionamiento de una camara gamma mediante metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Oramas Polo, I.

    2014-07-01

    This paper presents the simulation of the Park Isocam II gamma camera with the Monte Carlo code SIMIND. This simulation allows a detailed assessment of the functioning of the gamma camera. The parameters evaluated by means of the simulation are: the intrinsic uniformity with different window amplitudes, the system uniformity, the extrinsic spatial resolution, the maximum count rate, the intrinsic sensitivity, the system sensitivity, the energy resolution and the pixel size. The results of the simulation are compared and evaluated against the specifications of the manufacturer of the gamma camera, taking into account the National Protocol for Quality Control of Nuclear Medicine Instruments of the Cuban Medical Equipment Control Center. The simulation reported here demonstrates the validity of the SIMIND Monte Carlo code for evaluating the performance of the Park Isocam II gamma camera, and as a result a computational model of the camera has been obtained. (Author)

  7. Application of monte carlo method in confirming the shield of ore grade online analyzer

    International Nuclear Information System (INIS)

    Gong Yalin; Liu Hui; Zhang Wei; Shang Qingmin; Song Qingfeng; Wu Zhiqiang; Li Yanfeng; Zhao Zhonghua

    2010-01-01

    Because of the potential harm of radioactive material, the safety of radionuclide gauges must be carefully considered in the configuration design, and the dose around the gauge must satisfy the standards enacted by the country. The ore grade online analyzer is a kind of nuclear gauge. Because its structure and the energies of the source particles are complex, the doses at different positions around the ore grade online analyzer are simulated and calculated with the Monte Carlo code MCNP, and the material and thickness of the shielding are then chosen to satisfy the radiation safety standards and installation requirements. The doses at the corresponding positions around the analyzer are measured with a dose monitor to check whether they agree with the simulated doses. The results show that the simulated doses closely match the measured doses, and that simulation with MCNP is a reasonable way to design the shielding of gauges containing a sealed source. (authors)

  8. Modeling of radiation-induced bystander effect using Monte Carlo methods

    Science.gov (United States)

    Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

    2009-03-01

    Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, or even biological organisms when irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under sparsely populated cell conditions. This model, based on our previous experiment in which cells were sparsely located in a round dish, focuses mainly on the spatial characteristics. The simulation results agree well with the experimental data. Moreover, another bystander effect experiment is also computed with this model, and the model succeeds in predicting its results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.

  9. Research on output signal of piezoelectric lead zirconate titanate detector using Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Takechi, Seiji, E-mail: takechi@elec.eng.osaka-cu.ac.jp [Graduate School of Engineering, Osaka City University, Osaka 558-8585 (Japan); Mitsuhashi, Tomoaki; Miura, Yoshinori [Graduate School of Engineering, Osaka City University, Osaka 558-8585 (Japan); Miyachi, Takashi; Kobayashi, Masanori; Okudaira, Osamu [Planetary Exploration Research Center, Chiba Institute of Technology, Narashino, Chiba 275-0016 (Japan); Shibata, Hiromi [The Institute of Scientific and Industrial Research, Osaka University, Ibaraki, Osaka 567-0047 (Japan); Fujii, Masayuki [Famscience Co., Ltd., Tsukubamirai, Ibaraki 300-2435 (Japan); Okada, Nagaya [Honda Electronics Co., Ltd., Toyohashi, Aichi 441-3193 (Japan); Murakami, Takeshi; Uchihori, Yukio [National Institute of Radiological Sciences, Chiba 263-8555 (Japan)

    2017-06-21

    The response of a radiation detector fabricated from piezoelectric lead zirconate titanate (PZT) was studied. The response signal due to a single 400 MeV/n xenon (Xe) ion was assumed to have a simple form that was composed of two variables, the amplitude and time constant. These variables were estimated by comparing two output waveforms obtained from a computer simulation and an experiment on Xe beam irradiation. Their values appeared to be dependent on the beam intensity. - Highlights: • The performance of PZT detector was studied by irradiation of a 400 MeV/n Xe beam. • Monte Carlo simulation was used to examine the formation process of the output. • The response signal due to a single Xe ion was assumed to have a simple form. • The form was composed of two variables, the amplitude and time constant. • These variables appeared to be dependent on the beam intensity.

  10. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    Energy Technology Data Exchange (ETDEWEB)

    McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent’s coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
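
    The sampling loop described above ("assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output") is generic and can be sketched with a placeholder model standing in for NETPATH; the input distributions and the synthetic travel-time expression below are purely illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def travel_time_model(mixing_fraction, calcite_mmol, delta_c14):
            """Placeholder for a NETPATH-style calculation; returns a synthetic
            carbon-14 travel time (years) from the sampled inputs."""
            return 5000.0 * mixing_fraction + 800.0 * calcite_mmol - 20.0 * delta_c14

        def monte_carlo_uncertainty(n_runs=5000):
            """Sample each uncertain input, run the model, summarize the ensemble."""
            out = np.empty(n_runs)
            for i in range(n_runs):
                mixing = rng.beta(2.0, 2.0)            # source-water fraction in [0, 1]
                calcite = rng.normal(1.5, 0.3)         # mmol/kg calcite dissolved
                d14c = rng.normal(10.0, 2.0)           # isotopic shift, per mil
                out[i] = travel_time_model(mixing, calcite, d14c)
            return out.mean(), out.std(), np.percentile(out, [5, 95])

        print(monte_carlo_uncertainty())

    The one-at-a-time sensitivity analysis mentioned in the record corresponds to perturbing a single input of such a model while holding the others at their central values and standardizing the response.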

  11. Unitary Dynamics of Strongly Interacting Bose Gases with the Time-Dependent Variational Monte Carlo Method in Continuous Space

    Science.gov (United States)

    Carleo, Giuseppe; Cevolani, Lorenzo; Sanchez-Palencia, Laurent; Holzmann, Markus

    2017-07-01

    We introduce the time-dependent variational Monte Carlo method for continuous-space Bose gases. Our approach is based on the systematic expansion of the many-body wave function in terms of multibody correlations and is essentially exact up to adaptive truncation. The method is benchmarked by comparison to an exact Bethe ansatz or existing numerical results for the integrable Lieb-Liniger model. We first show that the many-body wave function achieves high precision for ground-state properties, including energy and first-order as well as second-order correlation functions. Then, we study the out-of-equilibrium, unitary dynamics induced by a quantum quench in the interaction strength. Our time-dependent variational Monte Carlo results are benchmarked by comparison to exact Bethe ansatz results available for a small number of particles, and are also compared to quench action results available for noninteracting initial states. Moreover, our approach allows us to study large particle numbers and general quench protocols, previously inaccessible beyond the mean-field level. Our results suggest that it is possible to find correlated initial states for which the long-term dynamics of local density fluctuations is close to the predictions of a simple Boltzmann ensemble.

  12. CMS discovery potential for the Higgs boson in the H → ZZ* → 4e± decay channel, contribution to the construction of the CMS electromagnetic calorimeter

    International Nuclear Information System (INIS)

    Puljak, I.

    2000-01-01

    The subject of this thesis is the study of the CMS (Compact Muon Solenoid) potential for the Higgs boson search through the H→ZZ*→4e± channel. The theoretical arguments and the experimental data from the electroweak precision measurements, combined with the direct search results, tend to prefer an intermediate-mass Higgs boson, for which this channel is expected to be used in the Higgs boson search at the LHC. After indicating the importance of the electromagnetic calorimeter in the electron reconstruction process, the mechanical structure and the optical properties of the alveolar containers are described. A system for the quality control of the alveolar structures is developed, consisting of the production process monitoring system, precise geometrical measurements and optical quality control. For the optical quality control, an apparatus is constructed for measuring the reflectivity and the diffusivity of the raw material before production and of the alveolar structure after the complete production process. The developed quality control system ensures that the properties of the alveolar containers remain at a level that does not degrade the performance of the electromagnetic calorimeter. The evaluation of the CMS potential for the Higgs search through its four-electron decay consists of signal and background studies at the particle level and of reconstruction studies including a precise detector description. To combine the Monte Carlo generated events with recent theoretical calculations, the distributions of the Higgs transverse momentum predicted by the parton shower model and by the soft gluon resummation calculations are compared. Agreement is found at low transverse momentum, while for agreement at higher values the parton shower model can be adjusted. The evaluation of the Zbb-bar background is done by properly modeling the phase-space generation, and up-to-date theoretical results and Monte Carlo simulations are used for two other

  13. Monte-Carlo method for studying the slowing down of neutrons in a thin plate of hydrogenated matter

    International Nuclear Information System (INIS)

    Ribon, P.; Michaudon, A.

    1965-01-01

    Studies of the interaction of slow neutrons with atomic nuclei by time-of-flight methods are made with a pulsed neutron source having a broad energy spectrum. Measurement accuracy requires a high intensity and an output time that is as short as possible and well defined. If the neutron source is a target bombarded by the beam of a pulsed accelerator, it is usually necessary to slow down the neutrons to obtain a sufficient intensity at low energies. The purpose of the Monte-Carlo method described in this paper is to study the slowing-down properties, mainly the intensity and the output time distribution of the slowed-down neutrons. The choice of the method and of the parameters studied is explained, as well as the principles, some calculations and the program organization. A few results given as examples were obtained with this program, whose limits are principally due to simplifying physical hypotheses. (author)

  14. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Song Hyun Kim

    2015-08-01

    Full Text Available The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. Also, the local packing fraction results show that the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
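
    In its standard infinite-medium form, chord length sampling alternates exponentially distributed matrix chords with chords through the spheres, rather than placing the spheres explicitly. A minimal sketch of that sampling (with no finite-medium boundary correction, which is exactly what the paper addresses) might be:

        import numpy as np

        def sample_ray_segments(radius, packing_fraction, n_segments=10, seed=3):
            """Alternately sample matrix chords (exponential, mean 4R(1-p)/(3p))
            and sphere chords (l = 2R*sqrt(u)) along a ray in an infinite medium."""
            rng = np.random.default_rng(seed)
            mean_matrix_chord = 4.0 * radius * (1.0 - packing_fraction) / (3.0 * packing_fraction)
            segments = []
            for _ in range(n_segments):
                segments.append(("matrix", rng.exponential(mean_matrix_chord)))
                segments.append(("sphere", 2.0 * radius * np.sqrt(rng.random())))
            return segments

        for material, length in sample_ray_segments(radius=0.05, packing_fraction=0.1)[:6]:
            print(f"{material:6s} {length:.4f} cm")

    Near a boundary the exponential matrix-chord assumption breaks down, because a sphere center cannot sit outside the medium; the correction proposed in the paper modifies the sampling in that region.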

  15. Evaluation of the shield calculation adequacy of radiotherapy rooms through Monte Carlo Method and experimental measures

    International Nuclear Information System (INIS)

    Meireles, Ramiro Conceicao

    2016-01-01

    The shielding calculation methodology for radiotherapy facilities adopted in Brazil and in several other countries is that described in Publication 151 of the National Council on Radiation Protection and Measurements (NCRP 151). This methodology, however, employs several approximations that can impact both the construction cost and the radiological safety of the facility. Although the methodology is well established through widespread use, some of the parameters employed in it have not undergone a detailed assessment of the impact of the various approximations considered. In this work the MCNP5 Monte Carlo code was used to evaluate the above-mentioned approximations. TVL values were obtained for photons in conventional concrete (2.35 g/cm³) at energies of 6, 10 and 25 MeV, first considering an isotropic radiation source impinging perpendicularly on the barriers, and subsequently a lead head shielding emitting a beam shaped as a truncated pyramid. Safety margins of the primary barriers, taking into account the head shielding emitting a pyramid-shaped photon beam at energies of 6, 10, 15 and 18 MeV, were assessed. A study was conducted considering the attenuation provided by the patient's body at energies of 6, 10, 15 and 18 MeV, leading to new attenuation factors. Experimental measurements were performed in a real radiotherapy room in order to map the leakage radiation emitted by the accelerator head shielding, and the results obtained were employed in the Monte Carlo simulation as well as to validate the entire study. The study results indicate that the TVL values provided by NCRP (2005) show discrepancies in comparison with the values obtained by simulation, and that some barriers may be calculated with insufficient thickness. Furthermore, the simulation results show that the additional safety margins considered when calculating the width of the primary
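
    For context, the TVL values under discussion enter the NCRP 151 primary-barrier sizing in a simple way: the required transmission is B = P d² / (W U T), the number of tenth-value layers is n = log₁₀(1/B), and the thickness is TVL₁ + (n − 1)·TVLe. A sketch with illustrative inputs (not taken from the thesis) is:

        from math import log10

        def primary_barrier_thickness(P_weekly, d_m, W, U, T, tvl1_cm, tvle_cm):
            """NCRP 151-style primary barrier sizing: required transmission
            B = P d^2 / (W U T), then thickness = TVL1 + (n - 1) * TVLe."""
            B = P_weekly * d_m**2 / (W * U * T)
            n_tvl = log10(1.0 / B)
            return max(tvl1_cm + (n_tvl - 1.0) * tvle_cm, 0.0)

        # Illustrative inputs: 0.1 mSv/week design goal, 6 m to the point of
        # interest, 450 Gy/week workload, use factor 0.25, occupancy 1, and
        # nominal 6 MV TVLs in ordinary concrete of about 37 cm / 33 cm.
        print(primary_barrier_thickness(P_weekly=1e-4, d_m=6.0, W=450.0, U=0.25,
                                        T=1.0, tvl1_cm=37.0, tvle_cm=33.0), "cm")

    Small changes in the TVL values therefore translate almost linearly into centimeters of concrete, which is why the discrepancies found by the simulation matter for both cost and safety.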

  16. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy; Uso de tecnicas Monte Carlo para determinacao de curvas de isodose em braquiterapia

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor with the objective of causing the necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation; this is the basis of radiotherapy techniques. Institutes that work with high dose rate applications use sophisticated computer programs to calculate the dose necessary to achieve the necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the necessary information for planning brachytherapy in patients. The objective of this work is, using Monte Carlo techniques, to develop a computer program - ISODOSE - which allows the determination of isodose curves around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs are, in general, very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions such as, for instance, the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and of the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed a quite detailed drawing of the isodose curves around linear sources. (author)

  17. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    Science.gov (United States)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
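
    The sequential Metropolis updating scheme mentioned above, stripped of the GPU parallelization and of the full electrolyte Hamiltonian, reduces to a per-particle accept/reject sweep. The toy CPU sketch below uses a truncated Lennard-Jones pair energy instead of the Coulomb energy (which would require Ewald-type summation) just to show the structure of one sweep:

        import numpy as np

        rng = np.random.default_rng(1)
        N, L, BETA, STEP = 64, 6.0, 1.0, 0.2        # particles, box size, 1/kT, max move

        def particle_energy(ri, positions, i, box):
            """Energy of particle i with all others (minimum image, soft-cored LJ)."""
            d = positions - ri
            d -= box * np.round(d / box)
            r2 = np.einsum("ij,ij->i", d, d)
            r2[i] = np.inf                          # skip self-interaction
            r2 = np.clip(r2, 0.64, None)            # soft core to avoid overflow
            inv6 = 1.0 / r2**3
            return np.sum(4.0 * (inv6**2 - inv6))

        positions = rng.random((N, 3)) * L

        def metropolis_sweep(positions):
            """One sequential sweep: propose and accept/reject a move per particle."""
            accepted = 0
            for i in range(N):
                old = positions[i].copy()
                e_old = particle_energy(old, positions, i, L)
                trial = (old + STEP * (rng.random(3) - 0.5)) % L
                e_new = particle_energy(trial, positions, i, L)
                delta = e_new - e_old
                if delta <= 0.0 or rng.random() < np.exp(-BETA * delta):
                    positions[i] = trial
                    accepted += 1
            return accepted / N

        print(metropolis_sweep(positions))

    On a GPU, the paper's point is that many such energy evaluations can be carried out by SIMD threads without approximating the long-range part of the interaction.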

  18. Calculating infinite-medium α-eigenvalue spectra with Monte Carlo using a transition rate matrix method

    Energy Technology Data Exchange (ETDEWEB)

    Betzler, Benjamin R., E-mail: betzlerbr@ornl.gov [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Kiedrowski, Brian C., E-mail: bckiedro@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States); Brown, Forrest B., E-mail: fbrown@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS A143, Los Alamos, NM 87545 (United States); Martin, William R., E-mail: wrm@umich.edu [Department of Nuclear Engineering and Radiological Sciences, University of Michigan, 2355 Bonisteel Boulevard, Ann Arbor, MI 48109 (United States)

    2015-12-15

    Highlights: • A transition rate matrix method for calculating α-eigenvalues is formulated. • Verification of this method is performed using multigroup infinite-medium problems. • Applications to continuous-energy media examine the slowing down of neutrons. • The effect of the α-eigenvalue spectrum on the short-time flux behavior is discussed. - Abstract: The time-dependent behavior of the energy spectrum in neutron transport was investigated with a formulation, based on continuous-time Markov processes, for computing α eigenvalues and eigenvectors in an infinite medium. For this, a research Monte Carlo code called “TORTE” (To Obtain Real Time Eigenvalues) was created and used to estimate elements of a transition rate matrix. TORTE is capable of using both multigroup and continuous-energy nuclear data, and verification was performed. Eigenvalue spectra for infinite homogeneous mixtures were obtained, and an eigenfunction expansion was used to investigate transient behavior of the neutron energy spectrum.

  19. Monte Carlo simulation methods in moment-based scale-bridging algorithms for thermal radiative-transfer problems

    International Nuclear Information System (INIS)

    Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.

    2015-01-01

    We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm

  20. Radiation field characterization of a BNCT research facility using Monte Carlo method - code MCNP-4B

    International Nuclear Information System (INIS)

    Hernandez, Antonio Carlos

    2002-01-01

    Boron Neutron Capture Therapy - BNCT - is a selective cancer treatment and arises as an alternative therapy when the usual techniques - surgery, chemotherapy or radiotherapy - show no satisfactory results. The main purpose of this work is to design a facility for BNCT studies. This facility relies on the use of an Am-Be neutron source and on a set of moderators, filters and shielding which will provide the best neutron/gamma beam characteristics for these BNCT studies, i.e., high-intensity thermal and/or epithermal neutron fluxes with the minimum feasible gamma-ray and fast-neutron contamination. A computational model of the experiment was used to obtain the radiation field at the sample irradiation position. The calculations have been performed with the MCNP 4B Monte Carlo code and the results obtained can be regarded as satisfactory, i.e., a thermal neutron fluence N_T = 1.35x10^8 n/cm^2, a fast neutron dose of 5.86x10^-10 Gy/N_T and a gamma-ray dose of 8.30x10^-14 Gy/N_T. (author)

  1. Monte Carlo simulation studies of spatial resolution in magnification mammography using the edge method

    Science.gov (United States)

    Koutalonis, M.; Delis, H.; Spyrou, G.; Costaridou, L.; Tzanakos, G.; Panayiotakis, G.

    2009-05-01

    Small focal spots are essential when magnification is performed in mammography, as they are the only way to reduce geometric unsharpness. The effect of focal spot size on spatial resolution in contact mammography or under magnification has been investigated experimentally but, due to construction limitations, only a small range of focal spot sizes has been studied. In this study, a Monte Carlo simulation model is utilized in order to examine the effect of a wide range of focal spots on spatial resolution under magnification conditions. A thick sharp edge made of lead was imaged under various conditions and the corresponding spatial resolution was calculated through the Modulation Transfer Function. Results demonstrate that increasing the degree of magnification from 1.0 to 2.0 induces a degradation of spatial resolution which varies from 49% for a 0.04 mm focal spot to 53.2% for a 0.14 mm one. Larger focal spots cause higher degradation even for low magnification. Focal spots larger than 0.10 mm are considered appropriate only for low degrees of magnification according to the IAEA regulations that require a spatial resolution for mammography higher than 12 lp/mm. However, for high degrees of magnification the focal spot size should be even smaller. The construction of a microfocus of 0.04 mm would result in acceptable values of spatial resolution even for degrees of magnification up to 1.9.
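
    The edge method used here follows a standard chain: the imaged edge gives an edge spread function (ESF), its derivative is the line spread function (LSF), and the normalized Fourier transform of the LSF is the MTF. A self-contained sketch on a synthetic Gaussian-blurred edge (the blur width is arbitrary, not a simulated focal spot) is:

        import numpy as np
        from math import erf, sqrt

        def mtf_from_edge(esf, pixel_mm):
            """Edge method: differentiate the edge spread function (ESF) to get the
            line spread function (LSF), then take the normalized |FFT| as the MTF."""
            lsf = np.gradient(esf)
            lsf *= np.hanning(lsf.size)                    # damp truncation artifacts
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # spatial frequency in lp/mm
            return freqs, mtf / mtf[0]

        # Synthetic ESF: an ideal lead edge blurred by a Gaussian (sigma = 0.02 mm).
        pixel_mm, sigma = 0.01, 0.02
        x = np.arange(-2.0, 2.0, pixel_mm)
        esf = np.array([0.5 * (1.0 + erf(v / (sigma * sqrt(2.0)))) for v in x])
        freqs, mtf = mtf_from_edge(esf, pixel_mm)
        print("MTF at 12 lp/mm:", round(float(np.interp(12.0, freqs, mtf)), 3))

    In the paper the ESF comes from the simulated images at each focal spot size and magnification, and the 12 lp/mm criterion quoted above is read off the resulting MTF curves.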

  2. Monte Carlo simulation studies of spatial resolution in magnification mammography using the edge method

    Energy Technology Data Exchange (ETDEWEB)

    Koutalonis, M; Delis, H; Costaridou, L; Panayiotakis, G [University of Patras, Department of Medical Physics, 26500 Rio-Patras (Greece); Spyrou, G [Academy of Athens, Biomedical Research Foundation, 11527 Athens (Greece); Tzanakos, G [University of Athens, Department of Physics, 15771 Athens (Greece)], E-mail: mkoutalonis@med.upatras.gr

    2009-05-15

    Small focal spots are essential when magnification is performed in mammography, as they are the only way to reduce geometric unsharpness. The effect of focal spot size on spatial resolution in contact mammography or under magnification has been investigated experimentally but, due to construction limitations, only a small range of focal spot sizes has been studied. In this study, a Monte Carlo simulation model is utilized in order to examine the effect of a wide range of focal spots on spatial resolution under magnification conditions. A thick sharp edge made of lead was imaged under various conditions and the corresponding spatial resolution was calculated through the Modulation Transfer Function. Results demonstrate that increasing the degree of magnification from 1.0 to 2.0 induces a degradation of spatial resolution which varies from 49% for a 0.04 mm focal spot to 53.2% for a 0.14 mm one. Larger focal spots cause higher degradation even for low magnification. Focal spots larger than 0.10 mm are considered appropriate only for low degrees of magnification according to the IAEA regulations that require a spatial resolution for mammography higher than 12 lp/mm. However, for high degrees of magnification the focal spot size should be even smaller. The construction of a microfocus of 0.04 mm would result in acceptable values of spatial resolution even for degrees of magnification up to 1.9.

  3. Efficient 3D Kinetic Monte Carlo Method for Modeling of Molecular Structure and Dynamics

    DEFF Research Database (Denmark)

    Panshenskov, Mikhail; Solov'yov, Ilia; Solov'yov, Andrey V.

    2014-01-01

    Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and material sciences. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with tailored properties, for example, bacteria colonies of cells or nanodevices with desired properties. Theoretical studies and simulations provide an important tool for unraveling the principles of self-organization and, therefore, have recently gained an increasing interest. The present article features an extension of a popular code MBN EXPLORER (MesoBioNano Explorer) aiming to provide a universal approach to study self-assembly phenomena in biology and nanoscience. In particular, this extension involves a highly parallelized module of MBN EXPLORER that allows simulating stochastic processes using the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it for studying an exemplary system.

  4. Size dependence study of the ordering temperature in the Fast Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Velasquez, E. A., E-mail: eavelas@gmail.com [Universidad de San Buenaventura Seccional Medellin, Grupo de Investigacion en Modelamiento y Simulacion Computacional, Facultad de Ingenierias (Colombia); Mazo-Zuluaga, J., E-mail: johanmazo@gmail.com [Universidad de Antioquia, Grupo de Estado Solido, Grupo de Instrumentacion Cientifica y Microelectronica, Instituto de Fisica-FCEN (Colombia); Mejia-Lopez, J., E-mail: jmejia@puc.cl [Universidad de Antioquia, Instituto de Fisica-FCEN (Colombia)

    2013-02-15

    Based on the framework of the Fast Monte Carlo approach, we study the diameter dependence of the ordering temperature in magnetic nanostructures of cylindrical shape. For the purposes of this study, Fe cylindrical-shaped samples of different sizes (20 nm height, 30-100 nm in diameter) have been chosen, and their magnetic properties have been computed as functions of the scaled temperature. Two main sets of results are obtained: (a) the ordering temperature of the nanostructures follows a linear scaling relationship as a function of the scaling factor x, for all the studied sizes. This finding rules out a scaling relation T'_c = x^{3\eta} T_c (where \eta is a scaling exponent, and T'_c and T_c are the scaled and true ordering temperatures) that has been proposed in the literature, and suggests that temperature should scale linearly with the scaling factor x. (b) For the nanostructures, there are three different order-disorder magnetic transition modes depending on the system's size, in very good agreement with previous experimental reports.

  5. Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange

    Science.gov (United States)

    Hula, Andreas; Montague, P. Read; Dayan, Peter

    2015-01-01

    Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent’s preference for equity with their partner, beliefs about the partner’s appetite for equity, beliefs about the partner’s model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference. PMID:26053429
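
    The Monte Carlo tree search variant used here is tied to the IPOMDP machinery of the task, but the core idea it inherits, balancing exploration and exploitation of simulated playouts with an upper-confidence rule, can be illustrated in isolation. The sketch below is a flat Monte Carlo search with UCB1 selection over a single hypothetical decision (a toy stand-in, not the authors' algorithm):

        import math, random

        def ucb1_search(actions, simulate, n_iterations=2000, c=1.4, seed=0):
            """Flat Monte Carlo with UCB1 selection: the exploration/exploitation
            trade-off that drives the selection step of Monte Carlo tree search."""
            rng = random.Random(seed)
            counts = {a: 0 for a in actions}
            totals = {a: 0.0 for a in actions}
            for t in range(1, n_iterations + 1):
                def ucb(a):
                    if counts[a] == 0:
                        return float("inf")
                    return totals[a] / counts[a] + c * math.sqrt(math.log(t) / counts[a])
                a = max(actions, key=ucb)
                reward = simulate(a, rng)              # random rollout / playout
                counts[a] += 1
                totals[a] += reward
            return max(actions, key=lambda a: counts[a])

        # Hypothetical one-shot "investment" choice with noisy payoffs.
        def toy_simulator(action, rng):
            means = {"keep": 0.4, "return_half": 0.6, "return_most": 0.7}
            return means[action] + rng.gauss(0.0, 0.3)

        print(ucb1_search(["keep", "return_half", "return_most"], toy_simulator))

    A full tree search applies the same rule recursively at every node of the interaction, with the partner's inferred beliefs folded into the simulator.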

  6. Radiation dose performance in the triple-source CT based on a Monte Carlo method

    Science.gov (United States)

    Yang, Zhenyu; Zhao, Jun

    2012-10-01

    A multiple-source structure is promising in the development of computed tomography, for it could effectively eliminate motion artifacts in cardiac scanning and other time-critical implementations requiring high temporal resolution. However, concerns about dose performance cast a shadow over this technique, as few reports evaluating the dose performance of multiple-source CT have been published. Our experiments focus on the dose performance of one specific multiple-source CT geometry, the triple-source CT scanner, whose theory and implementation have already been well established and verified by our previous work. We have modeled the triple-source CT geometry with the help of the EGSnrc Monte Carlo radiation transport code system, and simulated the CT examinations of a digital chest phantom with our modified version of the software, using an x-ray spectrum according to the data of a physical tube. The single-source CT geometry is also modeled and tested for evaluation and comparison. The absorbed dose of each organ is calculated according to its real physical characteristics. Results show that the absorbed radiation dose of organs with the triple-source CT is almost equal to that with the single-source CT system. Given its advantage in temporal resolution, the triple-source CT would be a better choice in x-ray cardiac examinations.

  7. Monte Carlo Planning Method Estimates Planning Horizons during Interactive Social Exchange.

    Directory of Open Access Journals (Sweden)

    Andreas Hula

    2015-06-01

    Full Text Available Reciprocating interactions represent a central feature of all human exchanges. They have been the target of various recent experiments, with healthy participants and psychiatric populations engaging as dyads in multi-round exchanges such as a repeated trust task. Behaviour in such exchanges involves complexities related to each agent's preference for equity with their partner, beliefs about the partner's appetite for equity, beliefs about the partner's model of their partner, and so on. Agents may also plan different numbers of steps into the future. Providing a computationally precise account of the behaviour is an essential step towards understanding what underlies choices. A natural framework for this is that of an interactive partially observable Markov decision process (IPOMDP). However, the various complexities make IPOMDPs inordinately computationally challenging. Here, we show how to approximate the solution for the multi-round trust task using a variant of the Monte-Carlo tree search algorithm. We demonstrate that the algorithm is efficient and effective, and therefore can be used to invert observations of behavioural choices. We use generated behaviour to elucidate the richness and sophistication of interactive inference.

  8. A Monte Carlo simulation method for assessing biotransformation effects on groundwater fuel hydrocarbon plume lengths

    International Nuclear Information System (INIS)

    McNab, W.W. Jr.

    2000-01-01

    Biotransformation of dissolved groundwater hydrocarbon plumes emanating from leaking underground fuel tanks should, in principle, result in plume length stabilization over relatively short distances, thus diminishing the environmental risk. However, because the behavior of hydrocarbon plumes is usually poorly constrained at most leaking underground fuel tank sites in terms of release history, groundwater velocity, dispersion, as well as the biotransformation rate, demonstrating such a limitation in plume length is problematic. Biotransformation signatures in the aquifer geochemistry, most notably elevated bicarbonate, may offer a means of constraining the relationship between plume length and the mean biotransformation rate. In this study, modeled plume lengths and spatial bicarbonate differences among a population of synthetic hydrocarbon plumes, generated through Monte Carlo simulation of an analytical solute transport model, are compared to field observations from six underground storage tank (UST) sites at military bases in California. Simulation results indicate that the relationship between plume length and the distribution of bicarbonate is best explained by biotransformation rates that are consistent with ranges commonly reported in the literature. This finding suggests that bicarbonate can indeed provide an independent means for evaluating limitations in hydrocarbon plume length resulting from biotransformation. (Author)
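
    The Monte Carlo part of such a study wraps an analytical transport solution in a sampling loop. A generic sketch using a standard one-dimensional steady-state solution with advection, longitudinal dispersion and first-order decay (a Bear-type solution, not necessarily the exact model of the paper) and purely illustrative parameter distributions is:

        import numpy as np

        rng = np.random.default_rng(11)

        def plume_length(v, decay, alpha_x, c0, c_threshold):
            """Distance at which a 1-D steady-state plume with advection, longitudinal
            dispersion and first-order decay falls to the threshold concentration."""
            root = np.sqrt(1.0 + 4.0 * decay * alpha_x / v)
            return 2.0 * alpha_x * np.log(c_threshold / c0) / (1.0 - root)

        def simulate_plume_lengths(n=10_000):
            """Sample uncertain transport and biotransformation parameters and
            collect the resulting plume-length population (illustrative ranges)."""
            v = rng.lognormal(mean=np.log(30.0), sigma=0.5, size=n)     # m/yr
            decay = rng.lognormal(mean=np.log(1.0), sigma=0.7, size=n)  # 1/yr
            alpha_x = rng.uniform(1.0, 10.0, size=n)                    # m
            c0 = rng.uniform(1.0, 20.0, size=n)                         # mg/L at source
            return plume_length(v, decay, alpha_x, c0, c_threshold=0.005)

        lengths = simulate_plume_lengths()
        print(np.percentile(lengths, [50, 90, 99]))

    Comparing such a synthetic population of plume lengths (and the associated bicarbonate production) with field observations is what constrains the plausible biotransformation rates.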

  9. DSMC calculations for the delta wing. [Direct Simulation Monte Carlo method

    Science.gov (United States)

    Celenligil, M. Cevdet; Moss, James N.

    1990-01-01

    Results are reported from three-dimensional direct simulation Monte Carlo (DSMC) computations, using a variable-hard-sphere molecular model, of hypersonic flow on a delta wing. The body-fitted grid is made up of deformed hexahedral cells divided into six tetrahedral subcells with well defined triangular faces; the simulation is carried out for 9000 time steps using 150,000 molecules. The uniform freestream conditions include M = 20.2, T = 13.32 K, rho = 0.00001729 kg/cu m, and T(wall) = 620 K, corresponding to lambda = 0.00153 m and Re = 14,000. The results are presented in graphs and briefly discussed. It is found that, as the flow expands supersonically around the leading edge, an attached leeside flow develops around the wing, and the near-surface density distribution has a maximum downstream from the stagnation point. Coefficients calculated include C(H) = 0.067, C(DP) = 0.178, C(DF) = 0.110, C(L) = 0.714, and C(D) = 1.089. The calculations required 56 h of CPU time on the NASA Langley Voyager CRAY-2 supercomputer.

  10. Evaluation of occupational exposure in interventionist procedures using Monte Carlo Method; Avaliacao das exposicoes dos envolvidos em procedimentos intervencionistas usando metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Santos, William S.; Neves, Lucio P.; Perini, Ana P.; Caldas, Linda V.E., E-mail: williathan@yahoo.com.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Belinato, Walmir; Maia, Ana F. [Universidade Federal de Sergipe (UFS), Sao Cristovao, SE (Brazil). Departamento de Fisica

    2014-07-01

    This study presents a computational exposure model for a patient, a cardiologist and a nurse in a typical scenario of cardiac interventional procedures. A set of conversion coefficients (CC) for effective dose (E) in terms of kerma-area product (KAP) was calculated for all individuals involved, using seven different energy spectra and eight beam projections. The CC was also calculated for the entrance skin dose (ESD) normalized to the KAP for the patient. All individuals were represented by anthropomorphic phantoms incorporated in a radiation transport code based on Monte Carlo simulation. (author)

  11. Calibration of the identiFINDER detector for the iodine measurement in thyroid using the Monte Carlo method; Calibracion del detector identiFINDER para la medicion de yodo en tiroides utilizando el metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ramos M, D.; Yera S, Y.; Lopez B, G. M.; Acosta R, N.; Vergara G, A., E-mail: dayana@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones, Calle 20 No. 4113 e/ 41 y 47, Playa, 10600 La Habana (Cuba)

    2014-08-15

    This work is based on the determination of the detection efficiency of the identiFINDER detector for {sup 125}I and {sup 131}I in the thyroid using the Monte Carlo method. The suitability of the calibration method is analyzed by comparing the results of the direct Monte Carlo method with those of the corrected method; the latter was chosen because its differences from the real efficiency stayed below 10%. To simulate the detector, its geometric parameters were optimized using a tomographic study, which allowed the uncertainties of the estimates to be minimized. Simulations of the detector and point-source geometry were then performed to find the correction factors at 5 cm, 15 cm and 25 cm, along with those corresponding to the detector-simulator arrangement used for the method validation and the final calculation of the efficiency. These show that, in the Monte Carlo implementation, simulating at a greater distance than that used in the laboratory measurements overestimates the efficiency, while simulating at a shorter distance underestimates it; the simulation should therefore be performed at the same distance at which the measurement will actually be made. Efficiency curves and minimum detectable activities for the measurement of {sup 131}I and {sup 125}I were also obtained. Overall, the Monte Carlo methodology was implemented for the identiFINDER calibration with the purpose of estimating the measured activity of iodine in the thyroid. This method is an ideal way to compensate for the lack of standard solutions and simulators, ensuring that the capabilities of the Internal Contamination Laboratory of the Centro de Proteccion e Higiene de las Radiaciones are always calibrated for iodine measurement in the thyroid. (author)

  12. A Benchmark Approach of Counterparty Credit Exposure of Bermudan Option under Lévy Process : The Monte Carlo-COS Method

    NARCIS (Netherlands)

    Shen, Y.; Van der Weide, J.A.M.; Anderluh, J.H.M.

    2013-01-01

    An advanced method, which we call the Monte Carlo-COS method, is proposed for computing the counterparty credit exposure profile of Bermudan options under Lévy processes. The different exposure profiles and exercise intensities under the different measures, P and Q, are discussed. Since the COS method [1

  13. A Novel Technique to Reconstruct the $Z$ mass in $WZ/ZZ$ Events with Lepton(s), Missing Transverse Energy and Three Jets at CDFII

    Energy Technology Data Exchange (ETDEWEB)

    Trovato, Marco; Vernieri, Caterina

    2012-01-01

    Observing WZ/ZZ production at the Tevatron in a final state with a lepton, missing transverse energy and jets is extremely difficult because of the low signal rate and the huge background. In an attempt to increase the acceptance we study the sample where three high-energy jets are reconstructed, in which about 1/3 of the diboson signal events are expected to end up. Rather than choosing the two E{sub T}-leading jets to detect a Z signal, we make use of the information carried by all jets. To quantify the potential of our method, we estimate the probability of observing an inclusive diboson signal at the three standard deviation level (P{sub 3{sigma}}) to be about four times larger than when using the two leading jets only. Aiming at applying the method to the search for the exclusive WZ/ZZ {yields} {ell}{nu}q{bar q} channel in the three-jet sample, we analyzed separately the sample with at least one b-tagged jet and the sample with no tags. In the WZ/ZZ {yields} {ell}{nu}b{bar b} search, we observe a modest improvement in sensitivity over the option of building the Z mass from the two leading jets in E{sub T}. Studies for improving the method further are on-going.

  14. An Auxiliary Variable Method for Markov Chain Monte Carlo Algorithms in High Dimension

    Directory of Open Access Journals (Sweden)

    Yosra Marnissi

    2018-02-01

    Full Text Available In this paper, we are interested in Bayesian inverse problems where either the data fidelity term or the prior distribution is Gaussian or driven from a hierarchical Gaussian model. Generally, Markov chain Monte Carlo (MCMC) algorithms allow us to generate sets of samples that are employed to infer some relevant parameters of the underlying distributions. However, when the parameter space is high-dimensional, the performance of stochastic sampling algorithms is very sensitive to existing dependencies between parameters. In particular, this problem arises when one aims to sample from a high-dimensional Gaussian distribution whose covariance matrix does not present a simple structure. Another challenge is the design of Metropolis–Hastings proposals that make use of information about the local geometry of the target density in order to speed up the convergence and improve mixing properties in the parameter space, while not being too computationally expensive. These two contexts are mainly related to the presence of two heterogeneous sources of dependencies stemming either from the prior or the likelihood in the sense that the related covariance matrices cannot be diagonalized in the same basis. In this work, we address these two issues. Our contribution consists of adding auxiliary variables to the model in order to dissociate the two sources of dependencies. In the new augmented space, only one source of correlation remains directly related to the target parameters, the other sources of correlations being captured by the auxiliary variables. Experiments are conducted on two practical image restoration problems, namely the recovery of multichannel blurred images embedded in Gaussian noise and the recovery of a signal corrupted by mixed Gaussian noise. Experimental results indicate that adding the proposed auxiliary variables makes the sampling problem simpler since the new conditional distribution no longer contains highly heterogeneous

  15. Neutronic design and performance analysis of Korean ITER TBM by Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Chang Hyo; Han, Beom Seok; Park, Ho Jin [Seoul Nat. Univ., Seoul (Korea, Republic of)

    2006-01-15

    The objective of this project is to develop a neutronic design of the Korean TBM (Test Blanket Module) which will be installed in ITER (International Thermonuclear Experimental Reactor). This project is intended to analyze the neutronic design and nuclear performance of the Korean ITER TBM through the transport calculation of MCCARD. In detail, we will conduct numerical experiments for developing the neutronic design of the Korean ITER TBM and improving its nuclear performance. The results of the numerical experiments produced in this project will be utilized for a design optimization of the Korean ITER TBM. In this project, we proposed the neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket. In order to investigate the behavior of neutrons and photons in the fusion blanket, Monte Carlo transport calculation was conducted with MCCARD. In addition, to optimize the neutronic performance of the fusion blanket, we introduced a design concept using a graphite reflector and a Pb multiplier. Through various numerical experiments, it was verified that these design concepts can be utilized efficiently to improve neutronic performance and resolve many drawbacks. The graphite-reflected HCML blanket can provide neutronic performance far better than the non-reflected blanket, and a slightly-enriched Li breeder can satisfy the tritium self-sufficiency. The HCSB blanket design concept with a graphite reflector and a Pb multiplier was proposed. According to the results of the neutronic analyses, the graphite-reflected HCSB blanket with a Pb multiplier can provide neutronic performance comparable with that of the conventional HCSB blanket.

  16. Thermodynamics and simulation of hard-sphere fluid and solid: Kinetic Monte Carlo method versus standard Metropolis scheme.

    Science.gov (United States)

    Ustinov, E A

    2017-01-21

    The paper aims at a comparison of techniques based on the kinetic Monte Carlo (kMC) and the conventional Metropolis Monte Carlo (MC) methods as applied to the hard-sphere (HS) fluid and solid. In the case of the kMC, an alternative representation of the chemical potential is explored [E. A. Ustinov and D. D. Do, J. Colloid Interface Sci. 366, 216 (2012)], which does not require any external procedure like the Widom test particle insertion method. A direct evaluation of the chemical potential of the fluid and solid without thermodynamic integration is achieved by molecular simulation in an elongated box with an external potential imposed on the system in order to reduce the particle density in the vicinity of the box ends. The existence of rarefied zones allows one to determine the chemical potential of the crystalline phase and substantially increases its accuracy for the disordered dense phase in the central zone of the simulation box. This method is applicable to both the Metropolis MC and the kMC, but in the latter case, the chemical potential is determined with higher accuracy at the same conditions and the number of MC steps. Thermodynamic functions of the disordered fluid and crystalline face-centered cubic (FCC) phase for the hard-sphere system have been evaluated with the kinetic MC and the standard MC coupled with the Widom procedure over a wide range of density. The melting transition parameters have been determined by the point of intersection of the pressure-chemical potential curves for the disordered HS fluid and FCC crystal using the Gibbs-Duhem equation as a constraint. A detailed thermodynamic analysis of the hard-sphere fluid has provided a rigorous verification of the approach, which can be extended to more complex systems.
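
    As a hedged illustration of the baseline scheme compared in this record, the sketch below implements a plain Metropolis move for hard spheres in a periodic box: since hard spheres have no finite interaction energy, a trial displacement is accepted only if it creates no overlap. The particle number, box size and step size are arbitrary, and neither the kinetic MC variant nor the chemical-potential evaluation of the paper is reproduced.

```python
import numpy as np

# Minimal sketch of a standard Metropolis move for hard spheres in a cubic
# periodic box: a trial displacement is accepted only if it creates no
# overlap, since hard spheres have no finite interaction energy.
rng = np.random.default_rng(1)
N, L, sigma = 64, 6.0, 1.0                    # particles, box edge, sphere diameter
pos = rng.random((N, 3)) * L                  # random (possibly overlapping) start; fine for a sketch

def overlaps(i, trial):
    d = pos - trial
    d -= L * np.round(d / L)                  # minimum-image convention
    r2 = np.einsum('ij,ij->i', d, d)
    r2[i] = np.inf                            # ignore the displaced particle itself
    return np.any(r2 < sigma**2)

accepted = 0
for step in range(10000):
    i = rng.integers(N)
    trial = (pos[i] + 0.1 * (rng.random(3) - 0.5)) % L
    if not overlaps(i, trial):
        pos[i] = trial
        accepted += 1
print("acceptance fraction:", accepted / 10000)
```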

  17. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    International Nuclear Information System (INIS)

    Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V

    2006-01-01

    Based on digital image analysis and the inverse Monte-Carlo method, the proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)

  18. Seismic wavefield imaging in the Tokyo metropolitan area, Japan, based on the replica exchange Monte Carlo method

    Science.gov (United States)

    Kano, Masayuki; Nagao, Hiromichi; Nagata, Kenji; Ito, Shin-ichi; Sakai, Shin'ichi; Nakagawa, Shigeki; Hori, Muneo; Hirata, Naoshi

    2017-04-01

    Earthquakes sometimes cause serious disasters not only directly by the ground motion itself but also secondarily through infrastructure damage, particularly in densely populated urban areas. To reduce these secondary disasters, it is important to rapidly evaluate seismic hazards by analyzing the seismic responses of individual structures to the input ground motions. Such input motions are estimated utilizing an array of seismometers that are distributed more sparsely than the structures. We propose a methodology that integrates physics-based and data-driven approaches in order to obtain the seismic wavefield to be input into seismic response analysis. This study adopts the replica exchange Monte Carlo (REMC) method, one of the Markov chain Monte Carlo (MCMC) methods, for the estimation of the seismic wavefield together with the one-dimensional local subsurface structure and source information. Numerical tests show that the REMC method is able to search the parameters related to the source and the local subsurface structure in a broader parameter space than the Metropolis method, an ordinary MCMC method. The REMC method reproduces the seismic wavefield well, consistent with the true one. In contrast, ordinary kriging, a classical data-driven interpolation method for spatial data, is hardly able to reproduce the true wavefield even at low frequencies. This indicates that it is essential to take both physics-based and data-driven approaches into consideration for seismic wavefield imaging. The REMC method is then applied to the actual waveforms observed by the dense seismic array MeSO-net (Metropolitan Seismic Observation network), in which 296 accelerometers operate continuously at intervals of several kilometers in the Tokyo metropolitan area, Japan. The estimated wavefield within a frequency band of 0.10-0.20 Hz is fully consistent with the observed waveforms. Further investigation suggests that the seismic wavefield is successfully
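
    A minimal sketch of the replica exchange (parallel tempering) mechanism used above, with the geophysical forward model replaced by a toy bimodal log-density; the temperature ladder and proposal scale are arbitrary assumptions, purely to illustrate the temperature-swap step.

```python
import numpy as np

# Replica exchange (parallel tempering) on a bimodal 1-D toy target.
rng = np.random.default_rng(2)

def log_target(x):
    return np.logaddexp(-0.5 * (x - 4.0)**2, -0.5 * (x + 4.0)**2)

betas = np.array([1.0, 0.5, 0.25, 0.1])       # inverse temperatures; replica 0 is the "cold" chain
x = np.zeros(len(betas))
cold_samples = []
for step in range(20000):
    # within-temperature Metropolis updates
    for k, b in enumerate(betas):
        prop = x[k] + rng.normal(scale=1.0)
        if np.log(rng.random()) < b * (log_target(prop) - log_target(x[k])):
            x[k] = prop
    # attempt a swap between a random pair of neighbouring temperatures
    k = rng.integers(len(betas) - 1)
    dlog = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
    if np.log(rng.random()) < dlog:
        x[k], x[k + 1] = x[k + 1], x[k]
    cold_samples.append(x[0])
print("fraction of cold-chain samples near +4:", np.mean(np.array(cold_samples) > 0))
```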

  19. Simulation of the Interaction of X-rays with a Gas in an Ionization Chamber by the Monte Carlo Method

    International Nuclear Information System (INIS)

    Grau Carles, A.; Garcia Gomez-Tejedor, G.

    2001-01-01

    The final objective of any ionization chamber is the measurement of the amount of energy, or radiation dose, absorbed by the gas inside the chamber. The final value depends on the composition of the gas, its density and temperature, the ionization chamber geometry, and the type and intensity of the radiation. We describe a Monte Carlo simulation method which allows one to compute the dose absorbed by the gas for an X-ray beam. Verification of the model has been carried out by simulating the attenuation of standard X-ray radiation through the half-value layers established in the ISO 4037 report, while assuming a Weibull-type energy distribution for the incident photons. (Author) 6 refs
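
    As a rough, hedged sketch of the ingredients mentioned above (not the authors' code), the snippet below samples incident photon energies from a Weibull distribution and attenuates the beam through an absorber by sampling exponential free paths; the Weibull parameters and the attenuation-coefficient function are invented placeholders, not ISO 4037 data.

```python
import numpy as np

# Toy beam-attenuation sketch: photon energies are drawn from a Weibull
# distribution and each photon "survives" an absorber of thickness t if its
# sampled free path exceeds t. mu(E) is a made-up placeholder.
rng = np.random.default_rng(3)
n_photons = 100_000
shape, scale = 2.0, 50.0                       # assumed Weibull parameters (keV scale)
energies = scale * rng.weibull(shape, n_photons)

def mu(E_keV):                                 # placeholder attenuation coefficient, 1/cm
    return 5.0 * (30.0 / np.maximum(E_keV, 1.0))**3 + 0.2

t = 0.5                                        # absorber thickness, cm
free_paths = rng.exponential(1.0 / mu(energies))
transmitted = free_paths > t
print("transmitted fraction:", transmitted.mean())
print("mean energy before/after:", energies.mean(), energies[transmitted].mean())
```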

  20. Absorbed dose measurements in mammography using Monte Carlo method and ZrO{sub 2}+PTFE dosemeters

    Energy Technology Data Exchange (ETDEWEB)

    Duran M, H. A.; Hernandez O, M. [Departamento de Investigacion en Polimeros y Materiales, Universidad de Sonora, Blvd. Luis Encinas y Rosales s/n, Col. Centro, 83190 Hermosillo, Sonora (Mexico); Salas L, M. A.; Hernandez D, V. M.; Vega C, H. R. [Unidad Academica de Estudios Nucleares, Universidad Autonoma de Zacatecas, Cipres 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Pinedo S, A.; Ventura M, J.; Chacon, F. [Hospital General de Zona No. 1, IMSS, Interior Alameda 45, 98000 Zacatecas (Mexico); Rivera M, T. [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada, IPN, Av. Legaria 694, Col. Irrigacion, 11500 Mexico D. F.(Mexico)], e-mail: hduran20_1@hotmail.com

    2009-10-15

    Mammography is a central tool for breast cancer diagnosis. In addition, programs are conducted periodically to screen asymptomatic women in certain age groups; these programs have been shown to reduce breast cancer mortality. Early detection of breast cancer is achieved through mammography, which contrasts glandular and adipose tissue with any probable calcification. The exposure parameters used for mammography are based on the thickness and density of the breast; their values depend on the voltage, current, focal spot and anode-filter combination. To achieve a clear image at a minimum dose, appropriate irradiation conditions must be chosen. The risk associated with mammography should not be ignored. This study was performed at the General Hospital No. 1 IMSS in Zacatecas. A glucose phantom was used, and the air kerma at the entrance of the breast was measured with ZrO{sub 2}+PTFE thermoluminescent dosemeters and calculated using Monte Carlo methods; the calculation was completed by computing the absorbed dose. (author)

  1. Modeling of neutron and photon transport in iron and concrete radiation shields by using Monte Carlo method

    CERN Document Server

    Žukauskaitė, A; Plukienė, R; Ridikas, D

    2007-01-01

    Particle accelerators and other high energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method obtains answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (AVF cyclotron of the Research Center for Nuclear Physics, Osaka University, Japan) – γ-ray beams (1-10 MeV), HIMAC (heavy-ion synchrotron of the National Institute of Radiological Sciences in Chiba, Japan) and ISIS-800 (ISIS intensive spallation neutron source facility of the Rutherford Appleton Laboratory, UK) – high energy neutron (20-800 MeV) transport in iron and concrete. The calculation results were then compared with experimental data.

  2. Monte Carlo and least-squares methods applied in unfolding of X-ray spectra measured with cadmium telluride detectors

    Energy Technology Data Exchange (ETDEWEB)

    Moralles, M. [Centro do Reator de Pesquisas, Instituto de Pesquisas Energeticas e Nucleares, Caixa Postal 11049, CEP 05422-970, Sao Paulo SP (Brazil)], E-mail: moralles@ipen.br; Bonifacio, D.A.B. [Centro do Reator de Pesquisas, Instituto de Pesquisas Energeticas e Nucleares, Caixa Postal 11049, CEP 05422-970, Sao Paulo SP (Brazil); Bottaro, M.; Pereira, M.A.G. [Instituto de Eletrotecnica e Energia, Universidade de Sao Paulo, Av. Prof. Luciano Gualberto, 1289, CEP 05508-010, Sao Paulo SP (Brazil)

    2007-09-21

    Spectra of calibration sources and X-ray beams were measured with a cadmium telluride (CdTe) detector. The response function of the detector was simulated using the GEANT4 Monte Carlo toolkit. Trapping of charge carriers was taken into account using the Hecht equation in the active zone of the CdTe crystal, associated with a continuous function to model the drop in charge collection efficiency near the metallic contacts and borders. The rise time discrimination is approximated by a cut on the depth of the interaction relative to the cathode and corrections that depend on the pulse amplitude. The least-squares method with truncation was employed to unfold X-ray spectra typically used in medical diagnostics, and the results were compared with reference data.
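
    The Hecht equation referred to above can be illustrated with a short sketch: for a planar detector, an interaction at depth z from the cathode yields a charge collection efficiency that depends on the electron and hole drift lengths. The detector thickness, bias and mobility-lifetime products below are illustrative assumptions, not the values fitted in the paper.

```python
import numpy as np

# Hecht charge-collection-efficiency model for a planar CdTe-like detector.
d = 0.1                         # detector thickness, cm (assumed)
V = 500.0                       # bias voltage, V (assumed)
E = V / d                       # electric field, V/cm
mutau_e, mutau_h = 3e-3, 1e-4   # assumed mobility-lifetime products, cm^2/V
lam_e, lam_h = mutau_e * E, mutau_h * E   # electron/hole drift lengths, cm

def hecht_cce(z):
    """Charge collection efficiency for an interaction at depth z from the cathode."""
    return (lam_e / d) * (1.0 - np.exp(-(d - z) / lam_e)) \
         + (lam_h / d) * (1.0 - np.exp(-z / lam_h))

for zi in np.linspace(0.0, d, 6):
    print(f"z = {zi:.3f} cm  ->  CCE = {hecht_cce(zi):.3f}")
```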

  3. Monte Carlo and least-squares methods applied in unfolding of X-ray spectra measured with cadmium telluride detectors

    Science.gov (United States)

    Moralles, M.; Bonifácio, D. A. B.; Bottaro, M.; Pereira, M. A. G.

    2007-09-01

    Spectra of calibration sources and X-ray beams were measured with a cadmium telluride (CdTe) detector. The response function of the detector was simulated using the GEANT4 Monte Carlo toolkit. Trapping of charge carriers was taken into account using the Hecht equation in the active zone of the CdTe crystal, associated with a continuous function to model the drop in charge collection efficiency near the metallic contacts and borders. The rise time discrimination is approximated by a cut on the depth of the interaction relative to the cathode and corrections that depend on the pulse amplitude. The least-squares method with truncation was employed to unfold X-ray spectra typically used in medical diagnostics, and the results were compared with reference data.

  4. The Monte Carlo method as a tool for statistical characterisation of differential and additive phase shifting algorithms

    International Nuclear Information System (INIS)

    Miranda, M; Dorrio, B V; Blanco, J; Diz-Bugarin, J; Ribas, F

    2011-01-01

    Several metrological applications base their measurement principle on the phase sum or difference between two patterns, an original s(r,φ) and a modified t(r,φ+Δφ). Additive or differential phase shifting algorithms directly recover the sum 2φ+Δφ or the difference Δφ of the phases without requiring prior calculation of the individual phases. These algorithms can be constructed, for example, from a suitable combination of known phase shifting algorithms. Little has been written on the design, analysis and error compensation of these new two-stage algorithms. Previously we have used computer simulation to study, in a linear approach or with a filter process in reciprocal space, the response of several families of these algorithms to the main error sources. In this work we present an error analysis that uses Monte Carlo simulation to achieve results in good agreement with those obtained with spatial and temporal methods.
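
    As a hedged illustration of this kind of Monte Carlo error characterisation (applied here to a plain four-step algorithm rather than the two-stage additive/differential algorithms of the paper), the sketch below adds Gaussian intensity noise to simulated fringe signals and accumulates the statistics of the recovered-phase error; all signal and noise parameters are arbitrary.

```python
import numpy as np

# Monte Carlo characterisation of a standard four-step phase-shifting
# estimator under additive intensity noise.
rng = np.random.default_rng(8)
true_phase, A, B, noise_sigma, n_trials = 0.7, 1.0, 0.8, 0.02, 20_000
shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi   # nominal 90-degree steps

errors = np.empty(n_trials)
for k in range(n_trials):
    I = A + B * np.cos(true_phase + shifts) + rng.normal(0.0, noise_sigma, 4)
    est = np.arctan2(I[3] - I[1], I[0] - I[2])               # standard 4-step estimator
    errors[k] = np.angle(np.exp(1j * (est - true_phase)))    # wrapped phase error
print(f"bias = {errors.mean():+.2e} rad,  std = {errors.std():.2e} rad")
```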

  5. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    International Nuclear Information System (INIS)

    Nasser, Hassan; Cessac, Bruno; Marre, Olivier

    2013-01-01

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)

  6. Numerical simulations of a coupled radiative-conductive heat transfer model using a modified Monte Carlo method

    KAUST Repository

    Kovtanyuk, Andrey E.

    2012-01-01

    Radiative-conductive heat transfer in a medium bounded by two reflecting and radiating plane surfaces is considered. This process is described by a nonlinear system of two differential equations: an equation of the radiative heat transfer and an equation of the conductive heat exchange. The problem is characterized by anisotropic scattering of the medium and by specularly and diffusely reflecting boundaries. For the computation of solutions of this problem, two approaches based on iterative techniques are considered. First, a recursive algorithm based on some modification of the Monte Carlo method is proposed. Second, the diffusion approximation of the radiative transfer equation is utilized. Numerical comparisons of the approaches proposed are given in the case of isotropic scattering. © 2011 Elsevier Ltd. All rights reserved.

  7. A Study of The Standard Model Higgs, WW and ZZ Production in Dilepton Plus Missing Transverse Energy Final State at CDF Run II

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Shih-Chieh [Univ. of California, San Diego, CA (United States)

    2008-01-01

    We report on a search for Standard Model (SM) production of Higgs to WW* in the two-charged-lepton (e, μ) plus two-neutrino final state in p$\bar{p}$ collisions at a center of mass energy √s = 1.96 TeV. The data were collected with the CDF II detector at the Fermilab Tevatron and correspond to an integrated luminosity of 1.9 fb$^{-1}$. The Matrix Element method is developed to calculate the event probability and to construct a likelihood ratio discriminator. There are 522 candidates observed, with an expectation of 513 ± 41 background events and 7.8 ± 0.6 signal events for a Higgs mass of 160 GeV/c$^2$ calculated at next-to-next-to-leading logarithmic level. The observed 95% C.L. upper limit is 0.8 pb, which is 2.0 times the SM prediction, while the median expected limit is 3.1$^{+1.3}_{-0.9}$ with systematics included. Results for 9 other Higgs mass hypotheses ranging from 110 GeV/c$^2$ to 200 GeV/c$^2$ are also presented. The same dilepton plus large missing transverse energy (ET) final state is used in the SM ZZ production search and the WW production study. The observed significance of the ZZ → ℓℓνν channel is 1.2σ. It adds extra significance to the ZZ → 4ℓ channel and leads to strong evidence of ZZ production with 4.4σ significance. The potential improvement of the anomalous triple gauge coupling measurement by using the Matrix Element method in WW production is also studied.

  8. Development of methodology for characterization of cartridge filters from the IEA-R1 using the Monte Carlo method

    International Nuclear Information System (INIS)

    Costa, Priscila

    2014-01-01

    The Cuno filter is part of the water processing circuit of the IEA-R1 reactor and, when saturated, it is replaced and becomes radioactive waste, which must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, which has a region called the dead layer or inactive layer. A difference between theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study the MCNP-4C code was used to obtain the detector calibration efficiency for the geometry of the Cuno filter, and the influence of the dead layer and of cascade summing in the HPGe detector was studied. The dead layer values were corrected by varying the thickness and the radius of the germanium crystal. The detector has an active detection volume of 75.83 cm³, according to information provided by the manufacturer. Nevertheless, the results showed that the actual active volume is smaller than the specified one, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks, from which three radionuclides were identified in the filter: 108mAg, 110mAg and 60Co. From the calibration efficiency obtained by the Monte Carlo method, the activity estimated for these radionuclides is on the order of MBq. (author)
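
    A small sketch of how a Monte Carlo full-energy-peak efficiency is typically combined with a measured peak to estimate an activity, in the spirit of the record above; all numbers are placeholders rather than the Cuno-filter results.

```python
# Combine a Monte Carlo efficiency with a measured peak to estimate an activity.
emitted_photons   = 1_000_000       # photons sampled in the MC run (placeholder)
full_peak_counts  = 8_400           # counts scored in the full-energy peak (placeholder)
efficiency = full_peak_counts / emitted_photons   # MC full-energy-peak efficiency

net_count_rate  = 2.5               # measured net peak count rate, counts/s (placeholder)
branching_ratio = 0.999             # gamma emission probability (illustrative)
activity_bq = net_count_rate / (efficiency * branching_ratio)
print(f"efficiency = {efficiency:.4f},  estimated activity = {activity_bq:.1f} Bq")
```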

  9. Uncertainty Determination for Aeroheating in Uranus and Saturn Probe Entries by the Monte Carlo Method

    Science.gov (United States)

    Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.

    2013-01-01

    The 2013-2022 Decadal Survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work examines the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) Computational Fluid Dynamics Code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters have the most influence on the uncertainty. A review of the present best practices for input parameters (e.g., transport coefficients and vibrational relaxation times) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty being determined primarily by diffusion and the H2 recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as much as 18%. The catalytic wall model can contribute a more than threefold change in heat flux and a 20% variation in the film coefficient. Therefore, coupled material response/fluid dynamics models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver depended on shock temperature/velocity, changing from boundary layer thermal conductivity to diffusivity and then to shock layer ionization rate as velocity increases. While
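
    The following sketch illustrates, in a generic and hedged way, the Monte Carlo uncertainty study described above: inputs are sampled, propagated through a model (here a made-up response function standing in for the CFD code), and ranked by their correlation with the output. None of the distributions or the model correspond to the actual DPLR inputs.

```python
import numpy as np

# Generic Monte Carlo input-uncertainty propagation with correlation ranking.
rng = np.random.default_rng(4)
n = 2000
inputs = {
    "diffusion_factor":   rng.normal(1.0, 0.10, n),     # assumed 10% 1-sigma spread
    "recombination_rate": rng.lognormal(0.0, 0.30, n),  # assumed log-normal spread
    "relaxation_time":    rng.normal(1.0, 0.20, n),     # assumed 20% 1-sigma spread
}

def fake_heat_flux(d, r, t):        # placeholder response surface, not a CFD model
    return 100.0 * d**0.8 * (1.0 + 0.05 * np.log(r)) / t**0.1

q = fake_heat_flux(*inputs.values())
print(f"mean = {q.mean():.1f}, 2-sigma = {2 * q.std():.1f} ({200 * q.std() / q.mean():.1f}%)")
for name, x in inputs.items():
    print(f"corr(q, {name}) = {np.corrcoef(x, q)[0, 1]:+.2f}")
```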

  10. Search for ZZ resonances in the $2\ell 2\nu$ final state in proton-proton collisions at 13 TeV

    CERN Document Server

    Sirunyan, Albert M; CMS Collaboration; Adam, Wolfgang; Ambrogi, Federico; Asilar, Ece; Bergauer, Thomas; Brandstetter, Johannes; Brondolin, Erica; Dragicevic, Marko; Erö, Janos; Escalante Del Valle, Alberto; Flechl, Martin; Friedl, Markus; Fruehwirth, Rudolf; Ghete, Vasile Mihai; Grossmann, Johannes; Hrubec, Josef; Jeitler, Manfred; König, Axel; Krammer, Natascha; Krätschmer, Ilse; Liko, Dietrich; Madlener, Thomas; Mikulec, Ivan; Pree, Elias; Rad, Navid; Rohringer, Herbert; Schieck, Jochen; Schöfbeck, Robert; Spanring, Markus; Spitzbart, Daniel; Taurok, Anton; Waltenberger, Wolfgang; Wittmann, Johannes; Wulz, Claudia-Elisabeth; Zarucki, Mateusz; Chekhovsky, Vladimir; Mossolov, Vladimir; Suarez Gonzalez, Juan; De Wolf, Eddi A; Di Croce, Davide; Janssen, Xavier; Lauwers, Jasper; Van De Klundert, Merijn; Van Haevermaet, Hans; Van Mechelen, Pierre; Van Remortel, Nick; Abu Zeid, Shimaa; Blekman, Freya; D'Hondt, Jorgen; De Bruyn, Isabelle; De Clercq, Jarne; Deroover, Kevin; Flouris, Giannis; Lontkovskyi, Denys; Lowette, Steven; Marchesini, Ivan; Moortgat, Seth; Moreels, Lieselotte; Python, Quentin; Skovpen, Kirill; Tavernier, Stefaan; Van Doninck, Walter; Van Mulders, Petra; Van Parijs, Isis; Beghin, Diego; Bilin, Bugra; Brun, Hugues; Clerbaux, Barbara; De Lentdecker, Gilles; Delannoy, Hugo; Dorney, Brian; Fasanella, Giuseppe; Favart, Laurent; Goldouzian, Reza; Grebenyuk, Anastasia; Kalsi, Amandeep Kaur; Lenzi, Thomas; Luetic, Jelena; Maerschalk, Thierry; Marinov, Andrey; Seva, Tomislav; Starling, Elizabeth; Vander Velde, Catherine; Vanlaer, Pascal; Vannerom, David; Yonamine, Ryo; Zenoni, Florian; Cornelis, Tom; Dobur, Didar; Fagot, Alexis; Gul, Muhammad; Khvastunov, Illia; Poyraz, Deniz; Roskas, Christos; Salva Diblen, Sinem; Tytgat, Michael; Verbeke, Willem; Zaganidis, Nicolas; Bakhshiansohi, Hamed; Bondu, Olivier; Brochet, Sébastien; Bruno, Giacomo; Caputo, Claudio; Caudron, Adrien; David, Pieter; De Visscher, Simon; Delaere, Christophe; Delcourt, Martin; Francois, Brieuc; Giammanco, Andrea; Komm, Matthias; Krintiras, Georgios; Lemaitre, Vincent; Magitteri, Alessio; Mertens, Alexandre; Musich, Marco; Piotrzkowski, Krzysztof; Quertenmont, Loic; Saggio, Alessia; Vidal Marono, Miguel; Wertz, Sébastien; Zobec, Joze; Aldá Júnior, Walter Luiz; Alves, Fábio Lúcio; Alves, Gilvan; Brito, Lucas; Correa Martins Junior, Marcos; Correia Silva, Gilson; Hensel, Carsten; Moraes, Arthur; Pol, Maria Elena; Rebello Teles, Patricia; Belchior Batista Das Chagas, Ewerton; Carvalho, Wagner; Chinellato, Jose; Coelho, Eduardo; Melo Da Costa, Eliza; Da Silveira, Gustavo Gil; De Jesus Damiao, Dilson; Fonseca De Souza, Sandro; Huertas Guativa, Lina Milena; Malbouisson, Helena; Melo De Almeida, Miqueias; Mora Herrera, Clemencia; Mundim, Luiz; Nogima, Helio; Sanchez Rosas, Luis Junior; Santoro, Alberto; Sznajder, Andre; Thiel, Mauricio; Tonelli Manganote, Edmilson José; Torres Da Silva De Araujo, Felipe; Vilela Pereira, Antonio; Ahuja, Sudha; Bernardes, Cesar Augusto; Tomei, Thiago; De Moraes Gregores, Eduardo; Mercadante, Pedro G; Novaes, Sergio F; Padula, Sandra; Romero Abad, David; Ruiz Vargas, José Cupertino; Aleksandrov, Aleksandar; Hadjiiska, Roumyana; Iaydjiev, Plamen; Misheva, Milena; Rodozov, Mircho; Shopova, Mariana; Sultanov, Georgi; Dimitrov, Anton; Litov, Leander; Pavlov, Borislav; Petkov, Peicho; Fang, Wenxing; Gao, Xuyang; Yuan, Li; Ahmad, Muhammad; Bian, Jian-Guo; Chen, Guo-Ming; Chen, He-Sheng; Chen, Mingshui; Chen, Ye; Jiang, Chun-Hua; Leggat, Duncan; Liao, Hongbo; Liu, Zhenan; Romeo, Francesco; 
Shaheen, Sarmad Masood; Spiezia, Aniello; Tao, Junquan; Wang, Chunjie; Wang, Zheng; Yazgan, Efe; Yu, Taozhe; Zhang, Huaqiao; Zhang, Sijing; Zhao, Jingzhou; Ban, Yong; Chen, Geng; Li, Jing; Li, Qiang; Liu, Shuai; Mao, Yajun; Qian, Si-Jin; Wang, Dayong; Xu, Zijun; Zhang, Fengwangdong; Wang, Yi; Avila, Carlos; Cabrera, Andrés; Chaparro Sierra, Luisa Fernanda; Florez, Carlos; González Hernández, Carlos Felipe; Ruiz Alvarez, José David; Segura Delgado, Manuel Alejandro; Courbon, Benoit; Godinovic, Nikola; Lelas, Damir; Puljak, Ivica; Ribeiro Cipriano, Pedro M; Sculac, Toni; Antunovic, Zeljko; Kovac, Marko; Brigljevic, Vuko; Ferencek, Dinko; Kadija, Kreso; Mesic, Benjamin; Starodumov, Andrei; Susa, Tatjana; Ather, Mohsan Waseem; Attikis, Alexandros; Mavromanolakis, Georgios; Mousa, Jehad; Nicolaou, Charalambos; Ptochos, Fotios; Razis, Panos A; Rykaczewski, Hans; Finger, Miroslav; Finger Jr, Michael; Carrera Jarrin, Edgar; Assran, Yasser; Elgammal, Sherif; Mahrous, Ayman; Bhowmik, Sandeep; Dewanjee, Ram Krishna

    2017-01-01

    A search for heavy resonances decaying to a pair of Z bosons is performed using data collected with the CMS detector at the LHC. Events are selected by requiring two oppositely charged leptons (electrons or muons), consistent with the decay of a Z boson, and large missing transverse momentum, which is interpreted as arising from the decay of a second Z boson to two neutrinos. The analysis uses data from proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$. The hypothesis of a spin-2 bulk graviton (X) decaying to a pair of Z bosons is examined for 600 $\\le m_\\mathrm{X} \\le$ 2500 GeV and upper limits at 95% confidence level are set on the product of the production cross section and branching fraction of X $\\to$ ZZ ranging from 100 to 4 fb. For bulk graviton models characterized by a curvature scale parameter $\\tilde{k} =$ 0.5 in the extra dimension, the region $m_\\mathrm{X} < $ 800 GeV is excluded, providing the most stringent limit report...

  11. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis

    Science.gov (United States)

    Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

    2014-05-01

    Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution
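
    A rough sketch of the pipeline described above (not the authors' WMCLR code): a small set of expert-judgement points is resampled with weights into a larger synthetic dataset, which is then fitted with a logistic regression to give a damage probability as a function of depth and velocity. All data and weights are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Weighted Monte Carlo resampling of a tiny "questionnaire" dataset followed
# by a logistic-regression damage curve. All numbers are invented.
rng = np.random.default_rng(5)
depth    = np.array([0.2, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0])   # m
velocity = np.array([0.1, 0.3, 0.5, 0.8, 1.0, 1.2, 1.5, 2.0])   # m/s
damaged  = np.array([0,   0,   0,   1,   1,   1,   1,   1])     # expert judgement
weights  = np.array([1.0, 1.0, 2.0, 2.0, 1.5, 1.0, 1.0, 0.5])   # expert confidence weights

# weighted Monte Carlo resampling to a larger synthetic dataset
idx = rng.choice(len(depth), size=500, p=weights / weights.sum())
X = np.column_stack([depth[idx] + rng.normal(0, 0.05, 500),
                     velocity[idx] + rng.normal(0, 0.05, 500)])
y = damaged[idx]

model = LogisticRegression().fit(X, y)
probe = np.array([[1.2, 0.9]])                    # depth 1.2 m, velocity 0.9 m/s
print("P(damage) =", model.predict_proba(probe)[0, 1])
```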

  12. Application of the extended boundary condition method to Monte Carlo simulations of scattering of waves by two-dimensional random rough surfaces

    Science.gov (United States)

    Tsang, L.; Lou, S. H.; Chan, C. H.

    1991-01-01

    The extended boundary condition method is applied to Monte Carlo simulations of two-dimensional random rough surface scattering. The numerical results are compared with those for one-dimensional random rough surfaces obtained from the finite-element method. It is found that the mean scattered intensity from two-dimensional rough surfaces differs from the one-dimensional result for rough surfaces with large slopes.

  13. A novel method combining Monte Carlo-FEM simulations and experiments for simultaneous evaluation of the ultrathin film mass density and Young's modulus

    Czech Academy of Sciences Publication Activity Database

    Zapoměl, Jaroslav; Stachiv, Ivo; Ferfecki, P.

    66-67, January (2016), s. 223-231 ISSN 0888-3270 R&D Projects: GA ČR GAP107/12/0800 Institutional support: RVO:61388998 ; RVO:68378271 Keywords : film measurement * finite element method * Monte Carlo probabilistic method * resonance frequency Subject RIV: BI - Acoustics; BI - Acoustics (FZU-D) Impact factor: 4.116, year: 2016

  14. Dose estimation in the crystalline lens of industrial radiography personnel using Monte Carlo Method

    International Nuclear Information System (INIS)

    Lima, Alexandre Roza de

    2014-01-01

    The International Commission on Radiological Protection, ICRP, in its publication 103, reviewed recent epidemiological evidence and indicated that, for the eye lens, the absorbed dose threshold for induction of late detriment is around 0.5 Gy. On this basis, on April 21, 2011, the ICRP recommended changes to the occupational dose limit in planned exposure situations, reducing the eye lens equivalent dose limit from 150 mSv to 20 mSv per year, averaged over a period of 5 years, with the exposure not to exceed 50 mSv in any single year. This paper presents estimates of the eye lens dose, Hp(10), effective dose and doses to important organs of the body received by industrial gamma radiography workers during planned or accidental exposure situations. The computer program Visual Monte Carlo was used and two relevant scenarios were postulated. The first is a planned exposure situation in which the operator is directly exposed to radiation during the operation. Twelve radiographic exposures per day for 250 days per year were considered, which leads to an exposure of 36,000 seconds, or 10 hours, per year. The simulation was carried out using the following parameters: a 192Ir source with 1.0 TBq of activity and a source/operator distance varying from 5 m to 10 m at three different heights of 0.2 m, 1.0 m and 2.0 m. The eye lens doses were estimated to be between 16.9 mSv/year and 66.9 mSv/year, and the Hp(10) doses were between 17.7 mSv/year and 74.2 mSv/year. For the accidental exposure scenario, the same radionuclide and activity were used, but in this case the doses were calculated with and without a collimator. The heights above ground considered were 1.0 m, 1.5 m and 2.0 m, the source/operator distance was 40 cm, and the exposure time was 74 seconds. The eye lens doses at 1.5 m were 12.3 mGy and 0.28 mGy without and with a collimator, respectively. Three conclusions resulted from this work. The first was that the estimated doses show that the new

  15. Capture and detection of DNA hybrids on paper via the anchoring of antibodies with fusions of carbohydrate binding modules and ZZ-domains.

    Science.gov (United States)

    Rosa, Ana M M; Louro, A Filipa; Martins, Sofia A M; Inácio, João; Azevedo, Ana M; Prazeres, D Miguel F

    2014-05-06

    Microfluidic paper-based analytical devices (μPADs) fabricated by wax-printing are suitable platforms for the development of simple and affordable molecular diagnostic assays for infectious diseases, especially in resource-limited settings. Paper devices can be modified for biological assays by adding appropriate reagents to the test areas. For this purpose, the use of affinity immobilization strategies can be a good solution for bioactive paper fabrication. This paper describes a methodology to capture labeled-DNA strands and hybrids on paper via the anchoring of antibodies with a fusion protein that combines a family 3 carbohydrate binding module (CBM) from Clostridium thermocellum, with high affinity to cellulose, and the ZZ fragment of the staphylococcal protein A, which recognizes IgG antibodies via their Fc portion. Antibodies immobilized via CBM-ZZ were able to capture appropriately labeled (biotin, fluorescein) DNA strands and DNA hybrids. The ability of an antibody specific to biotin to discriminate complementary from noncomplementary, biotin-labeled targets was demonstrated in both spot and microchannel assays. Hybridization was detected by fluorescence emission of the fluorescein-labeled DNA probe. The efficiency of the capture of labeled-DNA by antibodies immobilized on paper via the CBM-ZZ construct was significantly higher when compared with a physical adsorption method where antibodies were simply spotted on paper without the intermediation of other molecules. The experimental proof of concept of wax-printed μPADs functionalized with CBM-ZZ for DNA detection at room temperature presented in this study constitutes an important step toward the development of easy to use and affordable molecular diagnostic tests.

  16. Assessment of the effectiveness of attenuation of leaded aprons through TLD dosimetry and Monte Carlo simulation method

    Energy Technology Data Exchange (ETDEWEB)

    Olaya D, H.; Diaz M, J. A.; Martinez O, S. A. [Universidad Pedagogica y Tecnologica de Colombia, Grupo de Fisica Nuclear Aplicada y Simulacion, 150003 Tunja, Boyaca (Colombia); Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2016-10-15

    Experimental setups were performed using a continuous-emission Pantak DXT-3000 X-ray unit and three types of leaded aprons with thicknesses of 0.25, 0.5 and 0.75 mm, coated with Mylar fiber on their surface. Each apron was located at a distance of 2.5 m from the focus in order to cover a radiation field one meter in diameter. Aluminum filtration was added at the beam output in order to reproduce the narrow-beam qualities N-40 (E{sub effective} = 33 keV), N-80 (E{sub effective} = 65 keV) and N-100 (E{sub effective} = 83 keV) according to the ISO 4037 standard (1-3). Ten TLD dosimeters were fixed on the surface of each lead apron, five in front and five behind with respect to the X-ray beam, and were calibrated for a Harshaw 4500 thermoluminescent reader system in order to assess the attenuation of each apron. Dosimeter readings were performed and the attenuation coefficients were calculated for each effective energy of the X-ray qualities. In order to confirm the ISO 4037 effective-energy method and to evaluate the effectiveness of lead aprons over the energy range of each medical practice, a Monte Carlo simulation was performed using the GEANT4 code, calculating the fluence and absorbed dose in each of the dosimeters; the linear attenuation coefficients were then calculated and compared with the experimental data and with values reported by the National Institute of Standards and Technology (NIST). Finally, the results of the theoretical calculations and the experimental measurements are consistent. This work will serve as a basis for assessing other personalized leaded protections. (Author)
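
    A short sketch of the data reduction implied above: the linear attenuation coefficient of each apron follows from the dose measured in front of and behind it, mu = -ln(I/I0)/t. The TLD readings used here are illustrative numbers only.

```python
import numpy as np

# Linear attenuation coefficient of an apron from front/behind dose readings.
thickness_mm = np.array([0.25, 0.50, 0.75])           # lead-equivalent thickness
dose_front   = np.array([1.00, 1.00, 1.00])           # relative TLD reading, front (placeholder)
dose_behind  = np.array([0.18, 0.05, 0.015])          # relative TLD reading, behind (placeholder)

transmission = dose_behind / dose_front
mu = -np.log(transmission) / (thickness_mm / 10.0)    # linear attenuation coefficient, 1/cm
for t, tr, m in zip(thickness_mm, transmission, mu):
    print(f"{t:.2f} mm: transmission = {100*tr:.1f}%,  mu = {m:.1f} cm^-1")
```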

  17. Analysis of the neutrons dispersion in a semi-infinite medium based in transport theory and the Monte Carlo method

    International Nuclear Information System (INIS)

    Arreola V, G.; Vazquez R, R.; Guzman A, J. R.

    2012-10-01

    In this work, a comparative analysis of results for neutron dispersion in a non-multiplying semi-infinite medium is presented. One of the boundaries of this medium is located at the origin of coordinates, where a neutron source in beam form, i.e., μ0 = 1, is also located. The neutron dispersion is studied with the Monte Carlo statistical method and with one-dimensional, one-group transport theory. The application of transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained by applying the MCNPX code. The dispersion in light water and heavy water was studied. A first notable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, while for light water it is at less than ten transport mean free paths; the differences between the two methods are larger in the light water case. A second notable result is that both distributions behave similarly at small mean free paths, while at large mean free paths the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current directed toward the source is demonstrated, opposite in sense to the high-energy neutron current coming from the source itself. (Author)
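
    As a hedged illustration of the statistical side of this comparison, the sketch below runs a one-group, one-dimensional Monte Carlo random walk for a beam source (mu0 = 1) entering a semi-infinite medium, with distances in mean free paths; the scattering ratio is an assumed value and the code is not MCNPX.

```python
import numpy as np

# One-group, 1-D Monte Carlo random walk: beam source at x = 0 into a
# scattering/absorbing half-space, distances in mean free paths.
rng = np.random.default_rng(6)
c = 0.9                                   # scattering probability per collision (assumption)
n_hist, depth_tally = 100_000, np.zeros(50)

for _ in range(n_hist):
    x, mu = 0.0, 1.0                      # start at the surface, directed inward
    while True:
        x += mu * rng.exponential(1.0)    # free flight, path length in mfp units
        if x < 0.0:                       # leaked back out through the surface
            break
        b = int(x)                        # tally the collision density vs depth
        if b < depth_tally.size:
            depth_tally[b] += 1
        if rng.random() > c:              # absorbed
            break
        mu = 2.0 * rng.random() - 1.0     # isotropic scattering in the lab system

print("depth bin with maximum collision density:", depth_tally.argmax())
```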

  18. Post-DFT methods for Earth materials: Quantum Monte Carlo simulations of (Mg,Fe)O (Invited)

    Science.gov (United States)

    Driver, K. P.; Militzer, B.; Cohen, R. E.

    2013-12-01

    (Mg,Fe)O is a major mineral phase in Earth's lower mantle that plays a key role in determining the structural and dynamical properties of the deep Earth. A pressure-induced spin-pairing transition of Fe has been the subject of numerous theoretical and experimental studies due to its consequential effects on lower mantle physics. The standard density functional theory (DFT) method does not treat strongly correlated electrons properly, and results can depend on the choice of exchange-correlation functional. DFT+U offers significant improvement over standard DFT for treating strongly correlated electrons. Indeed, DFT+U calculations and experiments have narrowed the ambient spin transition to between 40 and 60 GPa in (Mg,Fe)O. However, DFT+U is not an ideal method, due to its dependence on the Hubbard U parameter, among other approximations. In order to further clarify details of the spin transition, it is necessary to use methods that explicitly treat the effects of electron exchange and correlation, such as quantum Monte Carlo (QMC). Here, we will discuss methods of going beyond standard DFT and present QMC results on the (Mg,Fe)O elastic properties and spin-transition pressure in order to benchmark DFT+U results.

  19. AN APPLICATION OF VALUE AT RISK BY MONTE CARLO SIMULATION METHOD IN PORTFOLIOS OF ISE30 INDEX AND GOVERNMENT SECURITIES

    Directory of Open Access Journals (Sweden)

    OKTAY TAŞ

    2013-06-01

    Full Text Available As a result of the rapid development of information technologies and globalization, growing trading volumes and the variety of securities have increased the exposure of economic units to risk. Therefore, topics related to risk management have drawn substantial attention from both academics and practitioners. In recent years, the Value at Risk (VaR) approach, one of the methods used for evaluating risk, has gained widespread acceptance. In the first part of this study, the types of risk, the definition and historical development of the Value at Risk approach, its methods and the differences among these methods are explained. In the next section, the VaR of portfolios of the ISE 30 index and government securities is estimated at the 99% and 95% confidence levels for 1-day and 10-day periods by the Monte Carlo simulation method, which is one of the nonparametric estimation approaches to VaR. For each portfolio, simulations are performed based on the normal and Student-t distributions and the results are compared. In the last section, the study is evaluated as a whole and the empirical results are discussed.
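
    A minimal Monte Carlo VaR sketch in the spirit of this study: 1-day portfolio returns are simulated under normal and variance-matched Student-t assumptions and the loss quantiles are read off, with the 10-day figure obtained by square-root-of-time scaling. The portfolio value, volatility and degrees of freedom are illustrative, not the ISE 30 estimates.

```python
import numpy as np

# Monte Carlo VaR under normal and variance-matched Student-t return models.
rng = np.random.default_rng(7)
value, mu_d, sigma_d, nu, n = 1_000_000.0, 0.0005, 0.02, 5, 100_000

ret_normal = rng.normal(mu_d, sigma_d, n)
ret_t      = mu_d + sigma_d * rng.standard_t(nu, n) * np.sqrt((nu - 2) / nu)  # unit-variance t

for label, ret in [("normal", ret_normal), ("student-t", ret_t)]:
    for conf in (0.95, 0.99):
        var_1d = -np.quantile(ret, 1.0 - conf) * value
        print(f"{label:9s} {int(conf*100)}% 1-day VaR = {var_1d:,.0f},"
              f"  10-day (sqrt-time) = {var_1d * np.sqrt(10):,.0f}")
```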

  20. The Dynamic Monte Carlo Method for Transient Analysis of Nuclear Reactors

    NARCIS (Netherlands)

    Sjenitzer, B.L.

    2013-01-01

    In this thesis, a new method for the analysis of power transients in a nuclear reactor is developed, which is more accurate than present state-of-the-art methods. Transient analysis is an important tool when designing nuclear reactors, since it predicts the behaviour of a reactor during changing