WorldWideScience

Sample records for chain monte carlo

  1. Parallel Markov chain Monte Carlo simulations.

    Science.gov (United States)

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with an increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  2. Handbook of Markov chain Monte Carlo

    CERN Document Server

    Brooks, Steve

    2011-01-01

    ""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.

  3. On nonlinear Markov chain Monte Carlo

    CERN Document Server

    Andrieu, Christophe; Doucet, Arnaud; Del Moral, Pierre; 10.3150/10-BEJ307

    2011-01-01

    Let $\mathscr{P}(E)$ be the space of probability measures on a measurable space $(E,\mathcal{E})$. In this paper we introduce a class of nonlinear Markov chain Monte Carlo (MCMC) methods for simulating from a probability measure $\pi\in\mathscr{P}(E)$. Nonlinear Markov kernels (see [Feynman–Kac Formulae: Genealogical and Interacting Particle Systems with Applications (2004) Springer]) $K:\mathscr{P}(E)\times E\rightarrow\mathscr{P}(E)$ can be constructed to, in some sense, improve over MCMC methods. However, such nonlinear kernels cannot be simulated exactly, so approximations of the nonlinear kernels are constructed using auxiliary or potentially self-interacting chains. Several nonlinear kernels are presented and it is demonstrated that, under some conditions, the associated approximations exhibit a strong law of large numbers; our proof technique is via the Poisson equation and Foster–Lyapunov conditions. We investigate the performance of our approximations with some simulations.

  4. Information-Geometric Markov Chain Monte Carlo Methods Using Diffusions

    Directory of Open Access Journals (Sweden)

    Samuel Livingstone

    2014-06-01

    Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed.

  5. On the Markov Chain Monte Carlo (MCMC) method

    Indian Academy of Sciences (India)

    Rajeeva L Karandikar

    2006-04-01

    Markov Chain Monte Carlo (MCMC) is a popular method used to generate samples from arbitrary distributions, which may be specified indirectly. In this article, we give an introduction to this method along with some examples.
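The Metropolis construction underlying introductions like this one fits in a few lines. The sketch below samples a standard normal whose density is known only up to a constant; the target, step size, and sample count are illustrative choices, not taken from the article.

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=0):
    """Minimal random-walk Metropolis sampler (illustrative sketch).

    log_target: log-density of the target, known only up to a constant.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)  # symmetric random-walk proposal
        # Accept with probability min(1, pi(proposal) / pi(x)).
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Example: sample from a standard normal via its unnormalized log-density.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(draws) / len(draws)
```

The same skeleton handles any target specified indirectly through an unnormalized density, which is the point the abstract makes.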

  6. Event-chain Monte Carlo for classical continuous spin models

    Science.gov (United States)

    Michel, Manon; Mayer, Johannes; Krauth, Werner

    2015-10-01

    We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.

  7. Cosmological Markov Chain Monte Carlo simulation with Cmbeasy

    CERN Document Server

    Müller, C M

    2004-01-01

    We introduce a Markov Chain Monte Carlo simulation and data analysis package for the cosmological computation package Cmbeasy. We have taken special care in implementing an adaptive step algorithm for the Markov Chain Monte Carlo in order to improve convergence. Data analysis routines are provided which allow one to test models of the Universe against up-to-date measurements of the Cosmic Microwave Background, Supernovae Ia and Large Scale Structure. The observational data is provided with the software for convenient usage. The package is publicly available as part of the Cmbeasy software at www.cmbeasy.org.

  8. Exploring Mass Perception with Markov Chain Monte Carlo

    Science.gov (United States)

    Cohen, Andrew L.; Ross, Michael G.

    2009-01-01

    Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal…

  9. Lifting—A nonreversible Markov chain Monte Carlo algorithm

    Science.gov (United States)

    Vucelja, Marija

    2016-12-01

    Markov chain Monte Carlo algorithms are invaluable tools for exploring stationary properties of physical systems, especially in situations where direct sampling is unfeasible. Common implementations of Monte Carlo algorithms employ reversible Markov chains. Reversible chains obey detailed balance and thus ensure that the system will eventually relax to equilibrium, though detailed balance is not necessary for convergence to equilibrium. We review nonreversible Markov chains, which violate detailed balance and yet still relax to a given target stationary distribution. In particular cases, nonreversible Markov chains are substantially better at sampling than the conventional reversible Markov chains with up to a square root improvement in the convergence time to the steady state. One kind of nonreversible Markov chain is constructed from the reversible ones by enlarging the state space and by modifying and adding extra transition rates to create nonreversible moves. Because of the augmentation of the state space, such chains are often referred to as lifted Markov chains. We illustrate the use of lifted Markov chains for efficient sampling on several examples. The examples include sampling on a ring, sampling on a torus, the Ising model on a complete graph, and the one-dimensional Ising model. We also provide a pseudocode implementation, review related work, and discuss the applicability of such methods.
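The lifting idea can be sketched on the ring example mentioned above: the state space is enlarged with a direction variable, moves follow the current direction, and a rejection flips the direction instead of resampling. This satisfies skew detailed balance, so the marginal stationary distribution is still the target. The target weights below are an arbitrary illustrative choice, not the paper's pseudocode.

```python
import random

def lifted_ring_sampler(weights, n_steps, seed=1):
    """Lifted (nonreversible) Metropolis on a ring of len(weights) states.

    State = (site, direction). A move in the current direction is accepted
    with the usual Metropolis ratio; a rejection flips the direction.
    """
    rng = random.Random(seed)
    n = len(weights)
    i, direction = 0, 1
    visits = [0] * n
    for _ in range(n_steps):
        j = (i + direction) % n
        if rng.random() < min(1.0, weights[j] / weights[i]):
            i = j                      # accepted move in the current direction
        else:
            direction = -direction     # rejected: flip the lifting variable
        visits[i] += 1
    return visits

weights = [1.0, 2.0, 3.0, 2.0, 1.0]
visits = lifted_ring_sampler(weights, 200000)
freqs = [v / sum(visits) for v in visits]
```

On a ring this scheme suppresses the diffusive back-and-forth of the reversible walk, which is the source of the convergence-time improvement the abstract describes.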

  10. Quantum Monte Carlo Study of Random Antiferromagnetic Heisenberg Chain

    OpenAIRE

    Todo, Synge; Kato, Kiyoshi; Takayama, Hajime

    1998-01-01

    Effects of randomness on the spin-1/2 and 1 antiferromagnetic Heisenberg chains are studied using the quantum Monte Carlo method with the continuous-time loop algorithm. We precisely calculated the uniform susceptibility, string order parameter, spatial and temporal correlation length, and the dynamical exponent, and obtained a phase diagram. The generalization of the continuous-time loop algorithm for the systems with higher-S spins is also presented.

  11. Efficient Word Alignment with Markov Chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Östling Robert

    2016-10-01

    We present EFMARAL, a new system for efficient and accurate word alignment using a Bayesian model with Markov Chain Monte Carlo (MCMC) inference. Through careful selection of data structures and model architecture we are able to surpass the fast_align system, commonly used for performance-critical word alignment, both in computational efficiency and alignment accuracy. Our evaluation shows that a phrase-based statistical machine translation (SMT) system produces translations of higher quality when using word alignments from EFMARAL than from fast_align, and that translation quality is on par with what is obtained using GIZA++, a tool requiring orders of magnitude more processing time. More generally we hope to convince the reader that Monte Carlo sampling, rather than being viewed as a slow method of last resort, should actually be the method of choice for the SMT practitioner and others interested in word alignment.

  12. Optimized nested Markov chain Monte Carlo sampling: theory

    Energy Technology Data Exchange (ETDEWEB)

    Coe, Joshua D [Los Alamos National Laboratory; Shaw, M Sam [Los Alamos National Laboratory; Sewell, Thomas D [U. MISSOURI

    2009-01-01

    Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. Implications for ab initio or density functional theory (DFT) treatment are discussed.
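The composite-move construction can be illustrated with scalar potentials: an inner Metropolis chain samples a cheap reference potential, and the whole sub-chain is then accepted with a single test on the difference between the full and reference energy changes. The harmonic and quartic potentials below are toy stand-ins (with the inverse temperature absorbed into the potentials), not the high-pressure molecular-fluid models of the paper.

```python
import math
import random

rng = random.Random(2)

def u_ref(x):   # cheap reference potential (harmonic)
    return 0.5 * x * x

def u_full(x):  # "full" potential: reference plus an anharmonic correction
    return 0.5 * x * x + 0.1 * x ** 4

def nested_step(x, n_inner=10, step=1.0):
    """One composite move: run an inner Metropolis chain on u_ref, then
    accept the whole sub-chain with a Metropolis test on the difference
    between the full and reference energy changes (nested MCMC sketch)."""
    y = x
    for _ in range(n_inner):                  # inner chain targets exp(-u_ref)
        z = y + rng.gauss(0.0, step)
        if math.log(rng.random()) < u_ref(y) - u_ref(z):
            y = z
    # Composite acceptance corrects the reference samples to the full target.
    d = (u_full(y) - u_full(x)) - (u_ref(y) - u_ref(x))
    return y if math.log(rng.random()) < -d else x

x, samples = 0.0, []
for _ in range(20000):
    x = nested_step(x)
    samples.append(x)
```

Because the inner chain is reversible with respect to the reference distribution, the composite test is exact for any number of inner steps; lengthening the inner walk decorrelates consecutive full-energy evaluations, which is the efficiency gain the abstract describes.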

  13. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    Science.gov (United States)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.

  14. Monte Carlo properties of the hydrocarbon chains of phospholipid molecules

    Science.gov (United States)

    Zhurkin, D. V.; Rabinovich, A. L.

    2015-02-01

    Properties of 65 hydrocarbon chain molecules in the unperturbed state are investigated using the Monte Carlo method at temperatures of 293, 303, and 313 K. Chains with the general structure CH3-(CH2)a-(CH=CH-CH2)d-(CH2)b-CH3 are considered. The number of carbon atoms in a skeleton N = 16, 18, 20, and 22; the number of cis-double bonds d = 0, 1, ..., 6. Conformations are generated by continuously varying the angles of internal rotation around single C-C bonds in the range of 0°-360°, the interdependence of each set of three consecutive angles along the chain is allowed for, and essential sampling is performed. Different properties of the molecules are considered: the average maximum projections of hydrocarbon chains on their main axes of inertia, average squares of the radii of inertia, and relative fluctuations in the squares of the radii of inertia. The dependence of the calculated characteristics on the structural parameters of the chains is investigated.
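As a much cruder stand-in for the conformational sampling described above, the sketch below averages the squared radius of gyration over freely jointed chains (independent random bond directions, no torsional correlations or excluded volume); the chain length and sample count are arbitrary illustrative choices.

```python
import math
import random

rng = random.Random(3)

def random_chain(n_bonds, bond_length=1.0):
    """One conformation of a freely jointed chain: each bond is an
    independent uniform random unit vector in 3D."""
    pos = [(0.0, 0.0, 0.0)]
    for _ in range(n_bonds):
        # Uniform direction on the sphere via the z / phi parametrization.
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        x0, y0, z0 = pos[-1]
        pos.append((x0 + bond_length * r * math.cos(phi),
                    y0 + bond_length * r * math.sin(phi),
                    z0 + bond_length * z))
    return pos

def radius_of_gyration_sq(pos):
    """Squared radius of gyration about the centre of mass."""
    n = len(pos)
    cx = sum(p[0] for p in pos) / n
    cy = sum(p[1] for p in pos) / n
    cz = sum(p[2] for p in pos) / n
    return sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
               for p in pos) / n

# Average over conformations; for a freely jointed chain <Rg^2> ~ N b^2 / 6.
rg2 = [radius_of_gyration_sq(random_chain(100)) for _ in range(2000)]
mean_rg2 = sum(rg2) / len(rg2)
```

Real hydrocarbon chains have fixed bond angles and correlated torsions, so their averages differ; the freely jointed model only illustrates the generate-and-average structure of such a Monte Carlo study.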

  15. Markov Chain Monte-Carlo Models of Starburst Clusters

    Science.gov (United States)

    Melnick, Jorge

    2015-01-01

    There are a number of stochastic effects that must be considered when comparing models to observations of starburst clusters: the IMF is never fully populated; the stars can never be strictly coeval; stars rotate and their photometric properties depend on orientation; a significant fraction of massive stars are in interacting binaries; and the extinction varies from star to star. The probability distributions of each of these effects are not a priori known, but must be extracted from the observations. Markov Chain Monte-Carlo methods appear to provide the best statistical approach. Here I present an example of stochastic age effects upon the upper mass limit of the IMF of the Arches cluster as derived from near-IR photometry.

  16. Reversible jump Markov chain Monte Carlo for deconvolution.

    Science.gov (United States)

    Kang, Dongwoo; Verotta, Davide

    2007-06-01

    To solve the problem of estimating an unknown input function to a linear time invariant system we propose an adaptive non-parametric method based on reversible jump Markov chain Monte Carlo (RJMCMC). We use piecewise polynomial functions (splines) to represent the input function. The RJMCMC algorithm allows the exploration of a large space of competing models, in our case the collection of splines corresponding to alternative positions of breakpoints, and it is based on the specification of transition probabilities between the models. RJMCMC determines: the number and the position of the breakpoints, and the coefficients determining the shape of the spline, as well as the corresponding posterior distribution of breakpoints, number of breakpoints, coefficients and arbitrary statistics of interest associated with the estimation problem. Simulation studies show that the RJMCMC method can obtain accurate reconstructions of complex input functions, and obtains better results compared with standard non-parametric deconvolution methods. Applications to real data are also reported.

  17. Markov Chain Monte Carlo Bayesian Learning for Neural Networks

    Science.gov (United States)

    Goodrich, Michael S.

    2011-01-01

    Conventional training methods for neural networks involve starting at a random location in the solution space of the network weights, navigating an error hypersurface to reach a minimum, and sometimes using stochastic techniques (e.g., genetic algorithms) to avoid entrapment in a local minimum. It is further typically necessary to preprocess the data (e.g., normalization) to keep the training algorithm on course. Conversely, Bayesian-based learning is an epistemological approach concerned with formally updating the plausibility of competing candidate hypotheses, thereby obtaining a posterior distribution for the network weights conditioned on the available data and a prior distribution. In this paper, we developed a powerful methodology for estimating the full residual uncertainty in network weights, and therefore network predictions, by using a modified Jeffery's prior combined with a Metropolis Markov Chain Monte Carlo method.

  18. HYDRA: a Java library for Markov Chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Gregory R. Warnes

    2002-03-01

    Hydra is an open-source, platform-neutral library for performing Markov Chain Monte Carlo. It implements the logic of standard MCMC samplers within a framework designed to be easy to use, extend, and integrate with other software tools. In this paper, we describe the problem that motivated our work, outline our goals for the Hydra project, and describe the current features of the Hydra library. We then provide a step-by-step example of using Hydra to simulate from a mixture model drawn from cancer genetics, first using a variable-at-a-time Metropolis sampler and then a Normal Kernel Coupler. We conclude with a discussion of future directions for Hydra.

  19. Variational Monte Carlo investigation of SU (N ) Heisenberg chains

    Science.gov (United States)

    Dufour, Jérôme; Nataf, Pierre; Mila, Frédéric

    2015-05-01

    Motivated by recent experimental progress in the context of ultracold multicolor fermionic atoms in optical lattices, we have investigated the properties of the SU(N) Heisenberg chain with totally antisymmetric irreducible representations, the effective model of Mott phases with m particles per site. Using Gutzwiller projected fermionic wave functions, we have been able to verify these predictions for a representative number of cases with N ≤ 10 and m ≤ N/2, and we have shown that the opening of a gap is associated with a spontaneous dimerization or trimerization depending on the values of m and N. We have also investigated the marginal cases where Abelian bosonization did not lead to any prediction. In these cases, variational Monte Carlo predicts that the ground state is critical with exponents consistent with conformal field theory.

  20. de Finetti Priors using Markov chain Monte Carlo computations.

    Science.gov (United States)

    Bacallado, Sergio; Diaconis, Persi; Holmes, Susan

    2015-07-01

    Recent advances in Monte Carlo methods allow us to revisit work by de Finetti who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis-Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible, or three-way models for discrete exponential families using polynomial priors and Gröbner bases.

  1. Extended Ensemble Monte Carlo

    OpenAIRE

    Iba, Yukito

    2000-01-01

    "Extended Ensemble Monte Carlo" is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...

  2. Accelerating Monte Carlo Markov chains with proxy and error models

    Science.gov (United States)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
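The two-stage structure can be sketched with scalar densities in the style of delayed-acceptance Metropolis: a cheap proxy screens every proposal, and only survivors trigger an exact evaluation with a second test that corrects the proxy bias, so the exact target is preserved. The proxy and exact densities below are toy stand-ins, not the groundwater models or error model of the paper.

```python
import math
import random

rng = random.Random(4)

def log_proxy(x):   # cheap, deliberately biased approximation of the target
    return -0.5 * (x / 1.2) ** 2

def log_exact(x):   # "exact" (expensive) log-density: standard normal
    return -0.5 * x * x

def two_stage_step(x, step=1.5):
    """Two-stage Metropolis: stage 1 screens the proposal with the proxy;
    only survivors pay for an exact evaluation."""
    y = x + rng.gauss(0.0, step)
    # Stage 1: ordinary Metropolis test on the proxy density.
    if math.log(rng.random()) >= log_proxy(y) - log_proxy(x):
        return x, False                      # rejected cheaply, no exact call
    # Stage 2: correct for the proxy / exact mismatch.
    log_b = (log_exact(y) - log_exact(x)) - (log_proxy(y) - log_proxy(x))
    if math.log(rng.random()) < log_b:
        return y, True
    return x, True                           # exact model evaluated, rejected

x, samples, exact_calls = 0.0, [], 0
for _ in range(30000):
    x, used_exact = two_stage_step(x)
    samples.append(x)
    exact_calls += used_exact
```

Proposals that the proxy rejects never touch the exact model, which is how a two-stage set-up affords longer chains at comparable computational cost.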

  3. Physical time scale in kinetic Monte Carlo simulations of continuous-time Markov chains.

    Science.gov (United States)

    Serebrinsky, Santiago A

    2011-03-01

    We rigorously establish a physical time scale for a general class of kinetic Monte Carlo algorithms for the simulation of continuous-time Markov chains. This class of algorithms encompasses rejection-free (or BKL) and rejection (or "standard") algorithms. For rejection algorithms, it was formerly considered that the availability of a physical time scale (instead of Monte Carlo steps) was empirical, at best. Use of Monte Carlo steps as a time unit now becomes completely unnecessary.
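A rejection-free kinetic Monte Carlo step always executes an event and advances the physical clock by an exponential waiting time set by the total escape rate. The two-state sketch below (rates chosen arbitrarily) illustrates how time-weighted averages recover the stationary occupancies of the continuous-time Markov chain.

```python
import math
import random

rng = random.Random(5)

# Two-state system with transition rates k01 (0 -> 1) and k10 (1 -> 0).
k01, k10 = 2.0, 1.0

state, t = 0, 0.0
time_in_state = [0.0, 0.0]
for _ in range(50000):
    rate = k01 if state == 0 else k10     # total escape rate from this state
    # Rejection-free (BKL/Gillespie-style) move: the event always fires, and
    # the physical time advances by an exponential waiting time, mean 1/rate.
    dt = -math.log(1.0 - rng.random()) / rate
    time_in_state[state] += dt
    t += dt
    state = 1 - state

# Time-weighted occupancy; stationarity gives p0 = k10 / (k01 + k10) = 1/3.
p0 = time_in_state[0] / t
```

Averaging over physical dwell times, rather than counting Monte Carlo steps, is exactly the distinction between a physical time scale and "Monte Carlo steps" that the abstract addresses.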

  4. Seriation in paleontological data using markov chain Monte Carlo methods.

    Directory of Open Access Journals (Sweden)

    Kai Puolamäki

    2006-02-01

    Given a collection of fossil sites with data about the taxa that occur in each site, the task in biochronology is to find good estimates for the ages or ordering of sites. We describe a full probabilistic model for fossil data. The parameters of the model are natural: the ordering of the sites, the origination and extinction times for each taxon, and the probabilities of different types of errors. We show that the posterior distributions of these parameters can be estimated reliably by using Markov chain Monte Carlo techniques. The posterior distributions of the model parameters can be used to answer many different questions about the data, including seriation (finding the best ordering of the sites) and outlier detection. We demonstrate the usefulness of the model and estimation method on synthetic data and on real data on large late Cenozoic mammals. As an example, for the sites with a large number of occurrences of common genera, our methods give orderings whose correlation with geochronologic ages is 0.95.

  5. Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters.

    Science.gov (United States)

    Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J

    2012-10-01

    Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley.
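The two-phase strategy (learn a covariance structure, then use it to shape proposals) can be sketched on a toy correlated Gaussian target. The target, the classic 2.38²/d proposal scaling, and the phase lengths below are illustrative choices standing in for the paper's linear mixed model and hybrid Gibbs sampler.

```python
import math
import random

rng = random.Random(6)

def log_target(x, y):
    # Correlated bivariate normal, rho = 0.9 (a stand-in for a posterior
    # with strongly dependent variance components).
    return -0.5 * (x * x - 1.8 * x * y + y * y) / (1.0 - 0.81)

def metropolis(n, proposal, x0=(0.0, 0.0)):
    """Random-walk Metropolis with an arbitrary symmetric step generator."""
    x, y = x0
    out = []
    for _ in range(n):
        dx, dy = proposal()
        if math.log(rng.random()) < log_target(x + dx, y + dy) - log_target(x, y):
            x, y = x + dx, y + dy
        out.append((x, y))
    return out

# Learning phase: crude isotropic proposal, just to probe the target's shape.
phase1 = metropolis(5000, lambda: (rng.gauss(0, 0.5), rng.gauss(0, 0.5)))
n1 = len(phase1)
mx = sum(p[0] for p in phase1) / n1
my = sum(p[1] for p in phase1) / n1
c00 = sum((p[0] - mx) ** 2 for p in phase1) / n1
c11 = sum((p[1] - my) ** 2 for p in phase1) / n1
c01 = sum((p[0] - mx) * (p[1] - my) for p in phase1) / n1

# Second phase: proposals from the learned covariance (scaled by 2.38^2/d),
# via a hand-rolled 2x2 Cholesky factor.
s = 2.38 ** 2 / 2.0
l00 = math.sqrt(s * c00)
l10 = s * c01 / l00
l11 = math.sqrt(max(s * c11 - l10 * l10, 1e-12))

def adapted():
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return l00 * z1, l10 * z1 + l11 * z2

phase2 = metropolis(20000, adapted, x0=phase1[-1])
```

Shaping proposals to the learned covariance lets the chain move along the correlated ridge instead of against it, which is the mixing improvement the abstract reports.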

  6. Markov chain Monte Carlo: an introduction for epidemiologists.

    Science.gov (United States)

    Hamra, Ghassan; MacLehose, Richard; Richardson, David

    2013-04-01

    Markov Chain Monte Carlo (MCMC) methods are increasingly popular among epidemiologists. The reason for this may in part be that MCMC offers an appealing approach to handling some difficult types of analyses. Additionally, MCMC methods are those most commonly used for Bayesian analysis. However, epidemiologists are still largely unfamiliar with MCMC. They may lack familiarity either with the implementation of MCMC or with interpretation of the resultant output. As with tutorials outlining the calculus behind maximum likelihood in previous decades, a simple description of the machinery of MCMC is needed. We provide an introduction to conducting analyses with MCMC, and show that, given the same data and under certain model specifications, the results of an MCMC simulation match those of methods based on standard maximum-likelihood estimation (MLE). In addition, we highlight examples of instances in which MCMC approaches to data analysis provide a clear advantage over MLE. We hope that this brief tutorial will encourage epidemiologists to consider MCMC approaches as part of their analytic tool-kit.

  7. Searching for efficient Markov chain Monte Carlo proposal kernels.

    Science.gov (United States)

    Yang, Ziheng; Rodríguez, Carlos E

    2013-11-26

    Markov chain Monte Carlo (MCMC) or the Metropolis-Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis-Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals.
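A Bactrian kernel replaces the single Gaussian proposal with an equal mixture of two Gaussians centred away from zero, so steps very close to the current value become rare while the kernel stays symmetric and the plain Metropolis test is unchanged. The sketch below uses m = 0.95 and an arbitrary scale on a standard-normal target; it is an illustration of the kernel's shape, not the authors' code.

```python
import math
import random

rng = random.Random(7)

def bactrian_step(sigma=1.0, m=0.95):
    """Draw from a Bactrian proposal: an equal mixture of two normals
    centred at +/- m*sigma with sd sigma*sqrt(1 - m^2), so the overall
    proposal variance is sigma^2 but near-zero steps are avoided."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * m * sigma + rng.gauss(0.0, sigma * math.sqrt(1.0 - m * m))

def metropolis_bactrian(log_target, n, x0=0.0, sigma=2.2):
    x, out = x0, []
    for _ in range(n):
        y = x + bactrian_step(sigma)   # symmetric kernel: plain Metropolis test
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        out.append(x)
    return out

draws = metropolis_bactrian(lambda x: -0.5 * x * x, 20000)
```

Because the mixture is symmetric about zero, the Hastings correction cancels exactly as for a Gaussian random walk; only the step-size distribution changes.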

  8. Asteroid mass estimation using Markov-Chain Monte Carlo techniques

    Science.gov (United States)

    Siltala, Lauri; Granvik, Mikael

    2016-10-01

    Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid using astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations into the OpenOrb asteroid-orbit-computation software: the very rough 'marching' approximation, in which the asteroid orbits are fixed at a given epoch, reducing the problem to a one-dimensional estimation of the mass; an implementation of the Nelder-Mead simplex method; and, most significantly, a Markov-Chain Monte Carlo (MCMC) approach. We will introduce each of these algorithms with particular focus on the MCMC algorithm, and present example results for both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans, particularly in connection with ESA's Gaia mission.

  9. A comparison of strategies for Markov chain Monte Carlo computation in quantitative genetics

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Ibánez-Escriche, Noelia; Sorensen, Daniel

    2008-01-01

    In quantitative genetics, Markov chain Monte Carlo (MCMC) methods are indispensable for statistical inference in non-standard models like generalized linear models with genetic random effects or models with genetically structured variance heterogeneity. A particular challenge for MCMC applications...

  10. Monte Carlo methods

    OpenAIRE

    Bardenet, R.

    2012-01-01

    ISBN: 978-2-7598-1032-1. Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Monte Carlo Markov chain (MCMC) methods. We give intuition on the theoretic...

  11. Event-chain Monte Carlo algorithms for three- and many-particle interactions

    Science.gov (United States)

    Harland, J.; Michel, M.; Kampmann, T. A.; Kierfeld, J.

    2017-02-01

    We generalize the rejection-free event-chain Monte Carlo algorithm from many-particle systems with pairwise interactions to systems with arbitrary three- or many-particle interactions. We introduce generalized lifting probabilities between particles and obtain a general set of equations for lifting probabilities, the solution of which guarantees maximal global balance. We validate the resulting three-particle event-chain Monte Carlo algorithms on three different systems by comparison with conventional local Monte Carlo simulations: i) a test system of three particles with a three-particle interaction that depends on the enclosed triangle area; ii) a hard-needle system in two dimensions, where needle interactions constitute three-particle interactions of the needle end points; iii) a semiflexible polymer chain with a bending energy, which constitutes a three-particle interaction of neighboring chain beads. The examples demonstrate that the generalization to many-particle interactions broadens the applicability of event-chain algorithms considerably.

  12. Event-chain Monte Carlo algorithm for continuous spin systems and its application

    Science.gov (United States)

    Nishikawa, Yoshihiko; Hukushima, Koji

    2016-09-01

    The event-chain Monte Carlo (ECMC) algorithm is described for hard-sphere systems and general potential systems, including interacting particle systems and continuous spin systems. Using the ECMC algorithm, large-scale equilibrium Monte Carlo simulations are performed for a three-dimensional chiral helimagnetic model under a magnetic field. It is found that the critical behavior of a phase transition changes with increasing magnetic field.

  13. The ATLAS Fast Monte Carlo Production Chain Project

    CERN Document Server

    Jansky, Roland Wolfgang; The ATLAS collaboration

    2015-01-01

    During the last years ATLAS has successfully deployed a new integrated simulation framework (ISF) which allows a flexible mixture of full and fast detector simulation techniques within the processing of one event. The resulting speed-up in detector simulation, of up to a factor of 100, makes the subsequent digitization and reconstruction the dominant contributions to the Monte Carlo (MC) production CPU cost. The slowest components of both digitization and reconstruction lie inside the Inner Detector: digitization because of the complex signal modeling needed to emulate the detector readout, and reconstruction because of the combinatorial nature of the problem to solve. Alternative fast approaches have been developed for these components: for the silicon-based detectors a simpler geometrical clustering approach has been deployed, replacing the charge drift emulation in the standard digitization modules, which achieves a very high accuracy in describing the standard output. For the Inner Detector track...

  14. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    DEFF Research Database (Denmark)

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint...

  15. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Kim, Jee-Seon; Bolt, Daniel M.

    2007-01-01

    The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…

  16. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled di

  17. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior distributi

  18. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Diks, C.G.H.; Robinson, B.A.; Hyman, J.M.; Higdon, D.

    2009-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate

  20. Teaching Markov Chain Monte Carlo: Revealing the Basic Ideas behind the Algorithm

    Science.gov (United States)

    Stewart, Wayne; Stewart, Sepideh

    2014-01-01

    For many scientists, researchers and students Markov chain Monte Carlo (MCMC) simulation is an important and necessary tool to perform Bayesian analyses. The simulation is often presented as a mathematical algorithm and then translated into an appropriate computer program. However, this can result in overlooking the fundamental and deeper…
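
    The basic idea behind the algorithm that this module aims to reveal fits in a few lines. Below is a minimal random-walk Metropolis sketch (our illustration, not code from the module) that samples an unnormalized standard-normal target:

    ```python
    import math
    import random

    def metropolis(log_target, x0, n_steps, step=1.0, seed=0):
        """Random-walk Metropolis: returns correlated draws whose
        stationary distribution is proportional to exp(log_target)."""
        rng = random.Random(seed)
        x, lp = x0, log_target(x0)
        samples = []
        for _ in range(n_steps):
            y = x + rng.gauss(0.0, step)   # symmetric proposal
            lp_y = log_target(y)
            # accept with probability min(1, f(y)/f(x))
            if rng.random() < math.exp(min(0.0, lp_y - lp)):
                x, lp = y, lp_y
            samples.append(x)
        return samples

    # Unnormalized log-density of a standard normal target
    draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
    ```

    The empirical mean and variance of `draws` should approach 0 and 1, which is the kind of sanity check a student can translate directly from the mathematical statement of the algorithm.
    
    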

  1. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

    Science.gov (United States)

    Kim, Seock-Ho

    2001-01-01

    Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…
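
    The Gibbs sampling idea used in this study, drawing each parameter from its full conditional in turn, can be illustrated on a toy target that is not the Rasch model (our sketch; the bivariate normal is chosen only because its conditionals are available in closed form):

    ```python
    import math
    import random

    def gibbs_bivariate_normal(rho, n_steps, seed=1):
        """Gibbs sampler for a bivariate normal with unit variances and
        correlation rho: each full conditional is itself univariate normal."""
        rng = random.Random(seed)
        s = math.sqrt(1.0 - rho * rho)
        x = y = 0.0
        out = []
        for _ in range(n_steps):
            x = rng.gauss(rho * y, s)   # x | y ~ N(rho * y, 1 - rho^2)
            y = rng.gauss(rho * x, s)   # y | x ~ N(rho * x, 1 - rho^2)
            out.append((x, y))
        return out

    samples = gibbs_bivariate_normal(rho=0.8, n_steps=20000)
    ```

    The empirical correlation of the draws recovers the target's correlation, which is the same logic by which the Gibbs chain recovers item and person parameters in the IRT setting.
    
    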

  2. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    Science.gov (United States)

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  3. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...

  4. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    Science.gov (United States)

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  5. A lattice Monte Carlo study of long chain conformations at solid-polymer melt interfaces

    NARCIS (Netherlands)

    Bitsanis, Ioannis A.; Brinke, Gerrit ten

    1993-01-01

    In this paper we present a comprehensive lattice Monte Carlo study of long chain conformations at solid-polymer melt interfaces. Segmental scale interfacial features, like the bond orientational distribution were found to be independent of surface-segment energetics, and statistically identical with

  6. Learning Bayesian network classifiers for credit scoring using Markov Chain Monte Carlo search

    NARCIS (Netherlands)

    Baesens, B.; Egmont-Petersen, M.; Castelo, R.; Vanthienen, J.

    2002-01-01

    In this paper, we will evaluate the power and usefulness of Bayesian network classifiers for credit scoring. Various types of Bayesian network classifiers will be evaluated and contrasted including unrestricted Bayesian network classifiers learnt using Markov Chain Monte Carlo (MCMC) search. The exp

  7. Dynamic Monte Carlo simulation of chain growth polymerization and its concentration effect

    Institute of Scientific and Technical Information of China (English)

    Lü Wenqi

    2005-01-01


  8. FREYA-a new Monte Carlo code for improved modeling of fission chains

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C A; Randrup, J; Vogt, R L

    2012-06-12

    A new simulation capability for modeling individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events, providing correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy-sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

  9. Event-chain Monte Carlo algorithms for hard-sphere systems.

    Science.gov (United States)

    Bernard, Etienne P; Krauth, Werner; Wilson, David B

    2009-11-01

    In this paper we present the event-chain algorithms, which are fast Markov-chain Monte Carlo methods for hard spheres and related systems. In a single move of these rejection-free methods, an arbitrarily long chain of particles is displaced, and long-range coherent motion can be induced. Numerical simulations show that event-chain algorithms clearly outperform the conventional Metropolis method. Irreversible versions of the algorithms, which violate detailed balance, improve the speed of the method even further. We also compare our method with a recent implementation of the molecular-dynamics algorithm.
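
    The mechanics of a rejection-free event-chain move are easiest to see in one dimension. The sketch below (our illustration, not the authors' code) moves hard rods on a ring: the active rod advances until it contacts its neighbor, and the remaining displacement "lifts" to that neighbor, so the whole chain of moves is always accepted.

    ```python
    import random

    def event_chain_move(pos, sigma, box, chain_len, rng):
        """One event-chain move for hard rods of length sigma on a ring of
        circumference box. The total displacement chain_len is shared by a
        sequence of rods; no move is ever rejected."""
        pos = sorted(pos)                 # circular order of the rods
        n = len(pos)
        k = rng.randrange(n)              # rod that starts the chain
        remaining = chain_len
        while remaining > 0.0:
            nxt = (k + 1) % n
            # free space ahead of rod k before it touches rod nxt
            free = max(0.0, (pos[nxt] - pos[k]) % box - sigma)
            step = min(remaining, free)
            pos[k] = (pos[k] + step) % box
            remaining -= step
            k = nxt                       # lifting: the hit rod carries on
        return pos
    ```

    Repeated moves never create overlaps and conserve the free volume, which is exactly the invariant a hard-sphere event-chain simulation relies on.
    
    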

  10. MC3: Multi-core Markov-chain Monte Carlo code

    Science.gov (United States)

    Cubillos, Patricio; Harrington, Joseph; Lust, Nate; Foster, AJ; Stemm, Madison; Loredo, Tom; Stevenson, Kevin; Campo, Chris; Hardin, Matt; Hardy, Ryan

    2016-10-01

    MC3 (Multi-core Markov-chain Monte Carlo) is a Bayesian statistics tool that can be executed from the shell prompt or interactively through the Python interpreter with single- or multiple-CPU parallel computing. It offers Markov-chain Monte Carlo (MCMC) posterior-distribution sampling for several algorithms, Levenberg-Marquardt least-squares optimization, and uniform non-informative, Jeffreys non-informative, or Gaussian-informative priors. MC3 can share the same value among multiple parameters and fix the value of parameters to constant values, and offers Gelman-Rubin convergence testing and correlated-noise estimation with time-averaging or wavelet-based likelihood estimation methods.
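
    The Gelman-Rubin test mentioned above compares between-chain and within-chain variance. A simplified single-parameter version (our sketch, not MC3's implementation) looks like this:

    ```python
    import random

    def gelman_rubin(chains):
        """Potential scale reduction factor R-hat for a list of
        equal-length scalar chains; values near 1 indicate convergence."""
        m = len(chains)
        n = len(chains[0])
        means = [sum(c) / n for c in chains]
        grand = sum(means) / m
        B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)  # between
        W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
                for c, mu in zip(chains, means)) / m              # within
        var_hat = (n - 1) / n * W + B / n
        return (var_hat / W) ** 0.5

    rng = random.Random(0)
    # Four well-mixed chains vs. four chains stuck around different means
    mixed = [[rng.gauss(0.0, 1.0) for _ in range(500)] for _ in range(4)]
    stuck = [[rng.gauss(mu, 1.0) for _ in range(500)]
             for mu in (0.0, 5.0, -5.0, 10.0)]
    r_mixed = gelman_rubin(mixed)
    r_stuck = gelman_rubin(stuck)
    ```

    Well-mixed chains give R-hat close to 1, while chains exploring different regions give a value well above 1, flagging non-convergence.
    
    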

  11. Cosmological constraints on generalized Chaplygin gas model: Markov Chain Monte Carlo approach

    OpenAIRE

    Xu, Lixin; Lu, Jianbo

    2010-01-01

    We use the Markov Chain Monte Carlo method to investigate global constraints on the generalized Chaplygin gas (GCG) model as the unification of dark matter and dark energy from the latest observational data: the Constitution dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a non-flat universe, the constraint results for the GCG model are, $\Ome...

  12. A new fuzzy Monte Carlo method for solving SLAE with ergodic fuzzy Markov chains

    Directory of Open Access Journals (Sweden)

    Maryam Gharehdaghi

    2015-05-01

    In this paper we introduce a new fuzzy Monte Carlo method for solving systems of linear algebraic equations (SLAE) over the possibility theory and max-min algebra. To solve the SLAE, we first define a fuzzy estimator and prove that it is an unbiased estimator of the solution. To prove unbiasedness, we apply ergodic fuzzy Markov chains. This new approach works even for coefficient matrices with norm greater than one.

  13. A Markov chain Monte Carlo method family in incomplete data analysis

    Directory of Open Access Journals (Sweden)

    Vasić Vladimir V.

    2003-01-01

    A Markov chain Monte Carlo method family is a collection of techniques for pseudorandom draws out of a probability distribution function. In recent years, these techniques have been the subject of intensive interest of many statisticians. Roughly speaking, the essence of a Markov chain Monte Carlo method family is generating one or more values of a random variable Z, which is usually multidimensional. Let P(Z) = f(Z) denote the density function of a random variable Z, which we will refer to as the target distribution. Instead of sampling directly from the distribution f, we generate a sequence [Z(1), Z(2), ..., Z(t), ...], in which each value is, in a way, dependent upon the previous value and whose stationary distribution is the target distribution. For a sufficiently large value of t, Z(t) is approximately a random draw from the distribution f. A Markov chain Monte Carlo method family is useful when direct sampling is difficult, but sampling each value in the chain is not.
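
    The construction described above, a dependent sequence Z(1), Z(2), ... whose stationary distribution is the target f, can be written down directly. A small sketch (ours, not the article's) builds such a chain on four states and checks that the long-run frequencies approach f:

    ```python
    import random

    def discrete_metropolis(f, n_steps, seed=0):
        """Metropolis chain Z(1), Z(2), ... on states 0..len(f)-1.
        Each Z(t+1) depends only on Z(t); the stationary distribution is f."""
        rng = random.Random(seed)
        k = len(f)
        z = 0
        counts = [0] * k
        for _ in range(n_steps):
            prop = (z + rng.choice((-1, 1))) % k   # symmetric ring proposal
            if rng.random() < f[prop] / f[z]:      # accept w.p. min(1, f'/f)
                z = prop
            counts[z] += 1
        return [c / n_steps for c in counts]

    target = [0.1, 0.2, 0.3, 0.4]
    freqs = discrete_metropolis(target, n_steps=200000)
    ```

    For large t the visit frequencies match the target distribution even though no draw was ever taken from f directly, which is precisely the point of the method family.
    
    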

  14. Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

    CERN Document Server

    Aver, Erik; Skillman, Evan D

    2010-01-01

    Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal-poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces...

  15. Exact likelihood-free Markov chain Monte Carlo for elliptically contoured distributions.

    Science.gov (United States)

    Muchmore, Patrick; Marjoram, Paul

    2015-08-01

    Recent results in Markov chain Monte Carlo (MCMC) show that a chain based on an unbiased estimator of the likelihood can have a stationary distribution identical to that of a chain based on exact likelihood calculations. In this paper we develop such an estimator for elliptically contoured distributions, a large family of distributions that includes and generalizes the multivariate normal. We then show how this estimator, combined with pseudorandom realizations of an elliptically contoured distribution, can be used to run MCMC in a way that replicates the stationary distribution of a likelihood based chain, but does not require explicit likelihood calculations. Because many elliptically contoured distributions do not have closed form densities, our simulation based approach enables exact MCMC based inference in a range of cases where previously it was impossible.
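
    The key result cited here, that a chain driven by an unbiased likelihood estimate has the same stationary distribution as an exact-likelihood chain, can be demonstrated in a toy setting. In the sketch below (our illustration; the normal target and lognormal noise model are assumptions, not the paper's estimator for elliptically contoured distributions), the density is only ever evaluated through a noisy unbiased estimate:

    ```python
    import math
    import random

    def pseudo_marginal_mcmc(n_steps, seed=0):
        """Pseudo-marginal Metropolis: the target density exp(-theta^2/2) is
        available only through a noisy but unbiased estimate, yet the
        stationary distribution is still the exact target."""
        rng = random.Random(seed)
        mu_u, s_u = -0.045, 0.3            # lognormal noise with mean exactly 1

        def density_hat(theta):
            noise = sum(rng.lognormvariate(mu_u, s_u) for _ in range(8)) / 8.0
            return math.exp(-0.5 * theta * theta) * noise

        theta, d = 0.0, density_hat(0.0)
        draws = []
        for _ in range(n_steps):
            prop = theta + rng.gauss(0.0, 1.0)
            d_prop = density_hat(prop)
            if rng.random() * d < d_prop:  # ratio of estimates; d not refreshed
                theta, d = prop, d_prop
            draws.append(theta)
        return draws

    draws = pseudo_marginal_mcmc(50000)
    ```

    Keeping the current estimate `d` fixed until a proposal is accepted is the essential trick; refreshing it each iteration would break the exactness property.
    
    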

  16. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
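
    The multilevel idea, many cheap low-resolution samples plus a few expensive corrections, can be sketched for a scalar toy problem (our illustration with an assumed Taylor surrogate as the "coarse grid"; the paper's levels are GMsFEM spaces):

    ```python
    import math
    import random

    def coarse(x):
        """Cheap level-0 surrogate: second-order Taylor expansion of exp."""
        return 1.0 + x + 0.5 * x * x

    def mlmc_two_level(n_coarse, n_fine, seed=0):
        """Two-level estimator of E[exp(X)], X ~ N(0,1), using the identity
        E[fine] = E[coarse] + E[fine - coarse]."""
        rng = random.Random(seed)
        # Level 0: many cheap coarse samples
        est0 = sum(coarse(rng.gauss(0.0, 1.0))
                   for _ in range(n_coarse)) / n_coarse
        # Level 1: the correction has small variance, so few samples suffice
        corr = 0.0
        for _ in range(n_fine):
            x = rng.gauss(0.0, 1.0)        # the same sample drives both levels
            corr += math.exp(x) - coarse(x)
        return est0 + corr / n_fine

    est = mlmc_two_level(n_coarse=200000, n_fine=2000)
    ```

    Because the correction term has much smaller variance than the fine-level quantity itself, far fewer expensive samples are needed, which is the cost-shifting argument made above for coarse versus fine grids. (The true value here is E[exp(X)] = e^0.5 ≈ 1.6487.)
    
    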

  17. Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.

    Science.gov (United States)

    Piels, Molly; Zibar, Darko

    2016-02-08

    The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation.

  18. Combined constraints on modified Chaplygin gas model from cosmological observed data: Markov Chain Monte Carlo approach

    OpenAIRE

    Lu, Jianbo; Xu, Lixin; Wu, Yabo; Liu, Molin

    2011-01-01

    We use the Markov Chain Monte Carlo method to investigate global constraints on the modified Chaplygin gas (MCG) model as the unification of dark matter and dark energy from the latest observational data: the Union2 dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a flat universe, the constraint results for the MCG model are, $\Omega_{b}h^{2}=0...

  19. Testing Homogeneity of Mixture of Skew-normal Distributions Via Markov Chain Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    Rahman Farnoosh; Morteza Ebrahimi

    2015-05-01

    The main purpose of this study is to introduce an optimal penalty function for testing homogeneity of a finite mixture of skew-normal distributions based on Markov Chain Monte Carlo (MCMC) simulation. In the present study the penalty function is considered as a parametric function in terms of the parameters of the mixture model, and a Bayesian approach is employed to estimate the parameters of the model. In order to examine the efficiency of the present study in comparison with previous approaches, some simulation studies are presented.

  20. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    A generalized likelihood uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining... the identifiability of the parameters and results in satisfactory multi-variable simulations and uncertainty estimates. However, the parameter uncertainty alone cannot explain the total uncertainty at all the sites, due to limitations in the distributed data included in the model calibration. The study also indicates...

  1. Fitting Spectral Energy Distributions of AGN - A Markov Chain Monte Carlo Approach

    CERN Document Server

    Rivera, Gabriela Calistro; Hennawi, Joseph F; Hogg, David W

    2014-01-01

    We present AGNfitter: a Markov Chain Monte Carlo algorithm developed to fit the spectral energy distributions (SEDs) of active galactic nuclei (AGN) with different physical models of AGN components. This code is well suited to determine in a robust way multiple parameters and their uncertainties, which quantify the physical processes responsible for the panchromatic nature of active galaxies and quasars. We describe the technicalities of the code and test its capabilities in the context of X-ray selected obscured AGN using multiwavelength data from the XMM-COSMOS survey.

  2. Combining Stochastics and Analytics for a Fast Monte Carlo Decay Chain Generator

    CERN Document Server

    Kazkaz, Kareem

    2011-01-01

    Various Monte Carlo programs, developed either by small groups or widely available, have been used to calculate the effects of decays of radioactive chains, from the original parent nucleus to the final stable isotopes. These chains include uranium, thorium, radon, and others, and generally have long-lived parent nuclei. Generating decays within these chains requires a certain amount of computing overhead related to simulating unnecessary decays, time-ordering the final results in post-processing, or both. We present a combination analytic/stochastic algorithm for creating a time-ordered set of decays with position and time correlations, and starting with an arbitrary source age. Thus the simulation costs are greatly reduced, while at the same time avoiding chronological post-processing. We discuss optimization methods within the approach to minimize calculation time.
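
    The stochastic part of such a generator reduces to summing exponential waiting times down the chain, which yields time-ordered decays by construction, with no chronological post-processing. A minimal sketch (ours, not the code described above, which additionally handles arbitrary source ages analytically):

    ```python
    import random

    def sample_decay_chain(mean_lifetimes, rng):
        """Walk one nucleus down a decay chain: each decay time is the
        previous time plus an exponential waiting time, so the returned
        decay times are already in chronological order."""
        t, times = 0.0, []
        for tau in mean_lifetimes:
            t += rng.expovariate(1.0 / tau)   # waiting time with mean tau
            times.append(t)
        return times

    rng = random.Random(42)
    # Hypothetical three-step chain with mean lifetimes 5, 1, 10 (arb. units)
    chains = [sample_decay_chain([5.0, 1.0, 10.0], rng) for _ in range(20000)]
    ```

    Averaged over many nuclei, the mean time of the final decay equals the sum of the mean lifetimes, a quick consistency check on the sampler.
    
    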

  3. A Thermodynamic Model for Square-well Chain Fluid: Theory and Monte Carlo Simulation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A thermodynamic model for the freely jointed square-well chain fluids was developed based on the thermodynamic perturbation theory of Barker-Henderson, Zhang and Wertheim. In this derivation Zhang's expressions for square-well monomers, improved from the Barker-Henderson compressibility approximation, were adopted as the reference fluid, and Wertheim's polymerization method was used to obtain the free energy term due to the bond connectivity. An analytic expression for the Helmholtz free energy of the square-well chain fluids was obtained. The expression, without adjustable parameters, leads to thermodynamically consistent predictions of the compressibility factors, residual internal energy and constant-volume heat capacity for dimer, 4-mer, 8-mer and 16-mer square-well fluids. The results are in good agreement with the Monte Carlo simulation. To obtain the MC data needed for the residual internal energy and the constant-volume heat capacity, NVT MC simulations were performed for these square-well chain fluids.

  4. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms.

    Science.gov (United States)

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time.
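
    The spectral bound mentioned above is fully explicit for a two-state chain: total-variation distance to stationarity decays geometrically in the second-largest eigenvalue. The sketch below (our example, not marathon's API) iterates the distribution and compares against that geometric rate:

    ```python
    def tv_distance(p, q):
        """Total-variation distance between two distributions."""
        return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

    def step(dist, P):
        """One transition: row vector times transition matrix."""
        n = len(P)
        return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

    a, b = 0.3, 0.2
    P = [[1 - a, a], [b, 1 - b]]        # two-state chain
    pi = [b / (a + b), a / (a + b)]     # stationary distribution [0.4, 0.6]
    slem = 1 - a - b                    # second-largest eigenvalue (first is 1)

    dist, dists = [1.0, 0.0], []        # start concentrated on state 0
    for t in range(1, 21):
        dist = step(dist, P)
        dists.append(tv_distance(dist, pi))
    ```

    Here the distance after t steps equals the initial distance times slem^t exactly, so the mixing time to any threshold can be read off from the eigenvalue, which is the idea behind the spectral bound.
    
    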

  5. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations.

    Science.gov (United States)

    Farr, W M; Mandel, I; Stevens, D

    2015-06-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient 'global' proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently.

  6. Population synthesis of radio and gamma-ray millisecond pulsars using Markov Chain Monte Carlo techniques

    Science.gov (United States)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice

    2016-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries like the slot gap, outer gap, two pole caustic and pair starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).

  7. Markov Chain Monte Carlo simulation for projection of end stage renal disease patients in Greece.

    Science.gov (United States)

    Rodina-Theocharaki, A; Bliznakova, K; Pallikarakis, N

    2012-07-01

    End stage renal disease (ESRD) treatment methods are considered to be among the most expensive procedures for chronic conditions worldwide, and they also have a severe impact on patients' quality of life. During the last decade, Greece has been among the countries with the highest incidence and prevalence, while at the same time having the lowest kidney transplantation rates. Predicting the future number of patients on Renal Replacement Therapy (RRT) is essential for health care providers in order to achieve more effective resource management. In this study a Markov Chain Monte Carlo (MCMC) simulation is presented for predicting the future number of ESRD patients for the period 2009-2020 in Greece. The MCMC model comprises Monte Carlo sampling techniques applied to the probability distributions of the constructed Markov Chain. The model predicts that there will be 15,147 prevalent patients on RRT in Greece by 2020. Additionally, a cost-effectiveness analysis was performed on a scenario of gradually reducing the hemodialysis patients in favor of increasing the transplantation number by 2020. The proposed scenario showed net savings of 86.54 million Euros for the period 2009-2020 compared to the base-case prediction.
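
    The deterministic core of such a projection is a yearly transition matrix over treatment states. The sketch below uses illustrative placeholder probabilities and inflows (not the values fitted in the study, and without the Monte Carlo sampling over parameter distributions that the study layers on top):

    ```python
    # RRT states; transition probabilities and inflows are hypothetical.
    states = ["hemodialysis", "peritoneal dialysis", "transplant", "dead"]
    P = [
        [0.85, 0.02, 0.03, 0.10],
        [0.10, 0.78, 0.04, 0.08],
        [0.02, 0.00, 0.93, 0.05],
        [0.00, 0.00, 0.00, 1.00],       # death is absorbing
    ]
    inflow = [900.0, 150.0, 50.0, 0.0]  # hypothetical new patients per year

    counts = [8000.0, 1000.0, 2500.0, 0.0]   # hypothetical 2008 counts
    for year in range(2009, 2021):           # project 12 years forward
        counts = [sum(counts[i] * P[i][j] for i in range(4)) + inflow[j]
                  for j in range(4)]
    prevalent = sum(counts[:3])              # patients alive on RRT in 2020
    ```

    The MCMC study effectively repeats this projection many times with transition probabilities drawn from their distributions, producing a predictive distribution rather than a single number.
    
    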

  8. Detection of dispersed short tandem repeats using reversible jump Markov chain Monte Carlo.

    Science.gov (United States)

    Liang, Tong; Fan, Xiaodan; Li, Qiwei; Li, Shuo-Yen R

    2012-10-01

    Tandem repeats occur frequently in biological sequences. They are important for studying genome evolution and human disease. A number of methods have been designed to detect a single tandem repeat in a sliding window. In this article, we focus on the case that an unknown number of tandem repeat segments of the same pattern are dispersively distributed in a sequence. We construct a probabilistic generative model for the tandem repeats, where the sequence pattern is represented by a motif matrix. A Bayesian approach is adopted to compute this model. Markov chain Monte Carlo (MCMC) algorithms are used to explore the posterior distribution as an effort to infer both the motif matrix of tandem repeats and the location of repeat segments. Reversible jump Markov chain Monte Carlo (RJMCMC) algorithms are used to address the transdimensional model selection problem raised by the variable number of repeat segments. Experiments on both synthetic data and real data show that this new approach is powerful in detecting dispersed short tandem repeats. As far as we know, it is the first work to adopt RJMCMC algorithms in the detection of tandem repeats.

  9. Comparing variational Bayes with Markov chain Monte Carlo for Bayesian computation in neuroimaging.

    Science.gov (United States)

    Nathoo, F S; Lesperance, M L; Lawson, A B; Dean, C B

    2013-08-01

    In this article, we consider methods for Bayesian computation within the context of brain imaging studies. In such studies, the complexity of the resulting data often necessitates the use of sophisticated statistical models; however, the large size of these data can pose significant challenges for model fitting. We focus specifically on the neuroelectromagnetic inverse problem in electroencephalography, which involves estimating the neural activity within the brain from electrode-level data measured across the scalp. The relationship between the observed scalp-level data and the unobserved neural activity can be represented through an underdetermined dynamic linear model, and we discuss Bayesian computation for such models, where parameters represent the unknown neural sources of interest. We review the inverse problem and discuss variational approximations for fitting hierarchical models in this context. While variational methods have been widely adopted for model fitting in neuroimaging, they have received very little attention in the statistical literature, where Markov chain Monte Carlo is often used. We derive variational approximations for fitting two models: a simple distributed source model and a more complex spatiotemporal mixture model. We compare the approximations to Markov chain Monte Carlo using both synthetic data and a real electroencephalography dataset examining the evoked response related to face perception. The computational advantages of the variational method are demonstrated, and the accuracy of the resulting approximations is clarified.
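
One well-known accuracy trade-off of mean-field variational approximations, relative to MCMC, can be seen in a toy correlated bivariate Gaussian: a Gibbs sampler recovers the true marginal variance, while the factorized variational fixed point matches the mean but underestimates the variance. The correlation and sampler settings below are illustrative assumptions.

```python
import random

random.seed(8)

rho = 0.9                              # illustrative correlation
cond_sd = (1.0 - rho * rho) ** 0.5     # conditional std dev of each coordinate

# Gibbs sampler (an MCMC method) for the bivariate standard normal with corr rho.
x1 = x2 = 0.0
xs = []
for _ in range(60_000):
    x1 = random.gauss(rho * x2, cond_sd)   # x1 | x2 ~ N(rho*x2, 1 - rho^2)
    x2 = random.gauss(rho * x1, cond_sd)   # x2 | x1 ~ N(rho*x1, 1 - rho^2)
    xs.append(x1)
xs = xs[5_000:]                            # discard burn-in

mean = sum(xs) / len(xs)
mcmc_var = sum((v - mean) ** 2 for v in xs) / len(xs)   # ~ 1, the true marginal

# Mean-field VB factorizes q(x1)q(x2); for a Gaussian, each factor's variance is
# the inverse diagonal precision, here 1 - rho^2 -- smaller than the truth.
vb_var = 1.0 - rho * rho
```
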

  10. Spreaders and sponges define metastasis in lung cancer: a Markov chain Monte Carlo mathematical model.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Norton, Larry; Kuhn, Peter

    2013-05-01

    The classic view of metastatic cancer progression is that it is a unidirectional process initiated at the primary tumor site, progressing to variably distant metastatic sites in a fairly predictable, although not perfectly understood, fashion. A Markov chain Monte Carlo mathematical approach can determine a pathway diagram that classifies metastatic tumors as "spreaders" or "sponges" and orders the timescales of progression from site to site. In light of recent experimental evidence highlighting the potential significance of self-seeding of primary tumors, we use a Markov chain Monte Carlo (MCMC) approach, based on large autopsy data sets, to quantify the stochastic, systemic, and often multidirectional aspects of cancer progression. We quantify three types of multidirectional mechanisms of progression: (i) self-seeding of the primary tumor, (ii) reseeding of the primary tumor from a metastatic site (primary reseeding), and (iii) reseeding of metastatic tumors (metastasis reseeding). The model shows that the combined characteristics of the primary and the first metastatic site to which it spreads largely determine the future pathways and timescales of systemic disease.
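
The spreader/sponge idea can be caricatured with a toy transition matrix: a site that passes on more transition-probability mass than it receives acts as a spreader, and conversely for a sponge. The sites and numbers below are invented for illustration and are not taken from the autopsy data or the paper's classification procedure.

```python
# Hypothetical one-step transition probabilities between tumor sites; row i
# gives the chance that disease at site i next appears at site j.
SITES = ["lung", "lymph nodes", "adrenal", "liver"]
P = [
    [0.50, 0.25, 0.15, 0.10],
    [0.05, 0.85, 0.05, 0.05],
    [0.05, 0.10, 0.80, 0.05],
    [0.02, 0.08, 0.05, 0.85],
]

def classify(P):
    """Label each site by comparing outgoing vs incoming off-diagonal mass."""
    n = len(P)
    labels = {}
    for i, name in enumerate(SITES):
        out_mass = sum(P[i][j] for j in range(n) if j != i)
        in_mass = sum(P[j][i] for j in range(n) if j != i)
        labels[name] = "spreader" if out_mass > in_mass else "sponge"
    return labels

labels = classify(P)
```
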

  11. Dynamic Monte Carlo simulation of chain growth polymerization and its concentration effect

    Institute of Scientific and Technical Information of China (English)

    L(U) Wenqi; DING Jiandong

    2005-01-01

    Free radical polymerization and living ionic polymerization have been simulated via the dynamic Monte Carlo method with the bond-fluctuation model. Polymerization-related parameters such as monomer conversion, degree of polymerization, average molecular weight and its distribution are obtained by statistics. The simulation outputs are consistent with the corresponding theoretical predictions. The scaling relationships of coil size versus chain length are also confirmed at different volume fractions. Furthermore, the effect of diffusion on polymerization is revealed preliminarily in our simulations. The simulation approach thus proves feasible for investigating polymerization reactions, with the advantage that the configuration and diffusion of polymer chains can be examined together with the polymerization kinetics.

  12. Bayesian Inference for LISA Pathfinder using Markov Chain Monte Carlo Methods

    CERN Document Server

    Ferraioli, Luigi; Plagnol, Eric

    2012-01-01

    We present a parameter estimation procedure based on a Bayesian framework by applying a Markov Chain Monte Carlo algorithm to the calibration of the dynamical parameters of a space based gravitational wave detector. The method is based on the Metropolis-Hastings algorithm and a two-stage annealing treatment in order to ensure an effective exploration of the parameter space at the beginning of the chain. We compare two versions of the algorithm with an application to a LISA Pathfinder data analysis problem. The two algorithms share the same heating strategy but with one moving in coordinate directions using proposals from a multivariate Gaussian distribution, while the other uses the natural logarithm of some parameters and proposes jumps in the eigen-space of the Fisher Information matrix. The algorithm proposing jumps in the eigen-space of the Fisher Information matrix demonstrates a higher acceptance rate and a slightly better convergence towards the equilibrium parameter distributions in the application to...
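
A stripped-down sketch of Metropolis-Hastings with an initial annealing (heating) stage, on a one-dimensional toy posterior standing in for the detector's parameter space; the cooling schedule, proposal scale, and starting point are all assumptions of this sketch.

```python
import math
import random

random.seed(2)

def log_target(x):
    # Toy 1-D log-posterior (standard normal) standing in for the calibration posterior.
    return -0.5 * x * x

def metropolis_annealed(n_steps, n_anneal=2000, t_hot=10.0, x0=5.0):
    """Metropolis-Hastings whose first n_anneal steps run at an elevated,
    geometrically cooling temperature so the chain explores before sampling."""
    x, lp = x0, log_target(x0)
    samples = []
    for k in range(n_steps):
        temp = t_hot ** max(0.0, 1.0 - k / n_anneal)  # cools t_hot -> 1, then stays at 1
        y = x + random.gauss(0.0, 1.0)                # symmetric Gaussian proposal
        lp_y = log_target(y)
        if random.random() < math.exp(min(0.0, (lp_y - lp) / temp)):
            x, lp = y, lp_y
        if k >= n_anneal:                             # keep post-annealing samples only
            samples.append(x)
    return samples

samples = metropolis_annealed(12_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```
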

  13. Markov chain Monte Carlo inference for Markov jump processes via the linear noise approximation.

    Science.gov (United States)

    Stathopoulos, Vassilios; Girolami, Mark A

    2013-02-13

    Bayesian analysis for Markov jump processes (MJPs) is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding, thus its applicability is limited to a small class of problems. In this paper, we describe the application of Riemann manifold Markov chain Monte Carlo (MCMC) methods using an approximation to the likelihood of the MJP that is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient whereas the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.

  14. AN IMPROVED MARKOV CHAIN MONTE CARLO METHOD FOR MIMO ITERATIVE DETECTION AND DECODING

    Institute of Scientific and Technical Information of China (English)

    Han Xiang; Wei Jibo

    2008-01-01

    Recently, a new soft-in soft-out detection algorithm based on the Markov Chain Monte Carlo (MCMC) simulation technique for Multiple-Input Multiple-Output (MIMO) systems was proposed, which is shown to perform significantly better than its sphere decoding counterparts with relatively low complexity. However, the MCMC simulator is likely to get trapped in a fixed state when the channel SNR is high, so many repeated samples are observed and the accuracy of the A Posteriori Probability (APP) estimation deteriorates. To solve this problem, an improved version of the MCMC simulator, named the forced-dispersed MCMC algorithm, is proposed. The Gibbs sampler is monitored based on the a posteriori variance of each bit. Once a trapped state is detected, the sample is dispersed intentionally according to the a posteriori variance. Extensive simulation shows that, compared with the existing solution, the proposed algorithm enables the Markov chain to visit more states, which ensures near-optimal performance.

  15. Dynamic temperature selection for parallel-tempering in Markov chain Monte Carlo simulations

    CERN Document Server

    Vousden, Will; Mandel, Ilya

    2015-01-01

    Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multi-modal probability distributions. Most popular methods, such as Markov chain Monte Carlo sampling, perform poorly on strongly multi-modal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versions of the target distribution with reduced contrast levels. Gaps between modes can be traversed at higher temperatures, while individual modes can be efficiently explored at lower temperatures. In this paper, we investigate how one might choose the ladder of temperatures to achieve lower autocorrelation time for the sampler (and therefore more efficient sampling). In particular, we present a simple, easily-implemented algorithm for dynamically adapting the temperature configuration of a sampler while sampling in order to ...
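
A minimal parallel-tempering sketch on a bimodal toy target. The temperature ladder here is fixed rather than dynamically adapted as in the paper, and all settings are illustrative.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Bimodal toy target: equal mixture of unit Gaussians at -4 and +4,
    # evaluated stably via the log-sum-exp trick.
    a = -0.5 * (x - 4.0) ** 2
    b = -0.5 * (x + 4.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

TEMPS = (1.0, 4.0, 16.0)   # fixed geometric ladder (the paper adapts this on the fly)

def parallel_tempering(n_sweeps):
    xs = [0.0] * len(TEMPS)
    cold_chain = []
    for _ in range(n_sweeps):
        # One Metropolis update per temperature; hotter chains take larger steps.
        for i, t in enumerate(TEMPS):
            y = xs[i] + random.gauss(0.0, math.sqrt(t))
            a = (log_target(y) - log_target(xs[i])) / t
            if random.random() < math.exp(min(0.0, a)):
                xs[i] = y
        # Propose swapping the states of a random adjacent temperature pair.
        i = random.randrange(len(TEMPS) - 1)
        a = (1.0 / TEMPS[i] - 1.0 / TEMPS[i + 1]) * (log_target(xs[i + 1]) - log_target(xs[i]))
        if random.random() < math.exp(min(0.0, a)):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold_chain.append(xs[0])
    return cold_chain

chain = parallel_tempering(20_000)
```

The cold chain visits both modes because mode-to-mode crossings happen at high temperature and propagate down the ladder through swaps.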

  16. Monte Carlo Simulations of Density Profiles for Hard-Sphere Chain Fluids Confined Between Surfaces

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Covering a wide range of bulk densities, density profiles for hard-sphere chain fluids (HSCFs) with chain lengths of 3, 4, 8, 20, 32 and 64 confined between two surfaces were obtained by Monte Carlo simulations using the extended continuum configurational-bias (ECCB) method. It is shown that enrichment of beads near the surfaces occurs at high densities due to the bulk packing effect; on the contrary, depletion is revealed at low densities owing to the configurational entropic contribution. Comparisons with values calculated by the density functional theory presented by Cai et al. indicate good agreement between simulations and predictions. Compressibility factors of bulk HSCFs calculated using volume fractions at the surfaces were also used to test the reliability of various equations of state of HSCFs by different authors.

  17. Multi-Physics Markov Chain Monte Carlo Methods for Subsurface Flows

    Science.gov (United States)

    Rigelo, J.; Ginting, V.; Rahunanthan, A.; Pereira, F.

    2014-12-01

    For CO2 sequestration in deep saline aquifers, contaminant transport in subsurface, and oil or gas recovery, we often need to forecast flow patterns. Subsurface characterization is a critical and challenging step in flow forecasting. To characterize subsurface properties we establish a statistical description of the subsurface properties that are conditioned to existing dynamic and static data. A Markov Chain Monte Carlo (MCMC) algorithm is used in a Bayesian statistical description to reconstruct the spatial distribution of rock permeability and porosity. The MCMC algorithm requires repeatedly solving a set of nonlinear partial differential equations describing displacement of fluids in porous media for different values of permeability and porosity. The time needed for the generation of a reliable MCMC chain using the algorithm can be too long to be practical for flow forecasting. In this work we develop fast and effective computational methods for generating MCMC chains in the Bayesian framework for the subsurface characterization. Our strategy consists of constructing a family of computationally inexpensive preconditioners based on simpler physics as well as on surrogate models such that the number of fine-grid simulations is drastically reduced in the generated MCMC chains. In particular, we introduce a huff-puff technique as screening step in a three-stage multi-physics MCMC algorithm to reduce the number of expensive final stage simulations. The huff-puff technique in the algorithm enables a better characterization of subsurface near wells. We assess the quality of the proposed multi-physics MCMC methods by considering Monte Carlo simulations for forecasting oil production in an oil reservoir.
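
The multi-stage idea can be sketched as a two-stage (delayed-acceptance) Metropolis scheme in which a cheap surrogate screens proposals before the expensive fine model is evaluated. Both targets below are toy Gaussians standing in for the coarse and fine flow simulations; the proposal scale and surrogate width are assumptions.

```python
import math
import random

random.seed(7)

def log_fine(x):
    # Stand-in for the expensive fine-grid posterior (toy standard normal).
    return -0.5 * x * x

def log_coarse(x):
    # Cheap surrogate posterior: deliberately a bit wider than the fine one.
    return -0.5 * (x / 1.2) ** 2

def two_stage_mcmc(n_steps, x0=3.0):
    """Delayed-acceptance Metropolis: stage 1 screens with the surrogate, and
    stage 2 applies the exact correction so the chain still targets log_fine."""
    x = x0
    fine_evals = 0
    samples = []
    for _ in range(n_steps):
        y = x + random.gauss(0.0, 1.0)
        a1 = min(0.0, log_coarse(y) - log_coarse(x))
        if random.random() < math.exp(a1):
            fine_evals += 1  # the expensive model runs only for screened proposals
            a2 = min(0.0, (log_fine(y) - log_fine(x)) - (log_coarse(y) - log_coarse(x)))
            if random.random() < math.exp(a2):
                x = y
        samples.append(x)
    return samples, fine_evals

samples, fine_evals = two_stage_mcmc(20_000)
kept = samples[2_000:]
mean = sum(kept) / len(kept)
```

Proposals rejected by the surrogate never trigger a fine-model evaluation, which is where the computational savings come from.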

  18. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    Science.gov (United States)

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.

  19. Monte Carlo methods for electromagnetics

    CERN Document Server

    Sadiku, Matthew NO

    2009-01-01

    Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications.Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handing Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...

  20. PHAISTOS: a framework for Markov chain Monte Carlo simulation and inference of protein structure.

    Science.gov (United States)

    Boomsma, Wouter; Frellsen, Jes; Harder, Tim; Bottaro, Sandro; Johansson, Kristoffer E; Tian, Pengfei; Stovgaard, Kasper; Andreetta, Christian; Olsson, Simon; Valentin, Jan B; Antonov, Lubomir D; Christensen, Anders S; Borg, Mikael; Jensen, Jan H; Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper; Hamelryck, Thomas

    2013-07-15

    We present a new software framework for Markov chain Monte Carlo sampling for simulation, prediction, and inference of protein structure. The software package contains implementations of recent advances in Monte Carlo methodology, such as efficient local updates and sampling from probabilistic models of local protein structure. These models form a probabilistic alternative to the widely used fragment and rotamer libraries. Combined with an easily extendible software architecture, this makes PHAISTOS well suited for Bayesian inference of protein structure from sequence and/or experimental data. Currently, two force-fields are available within the framework: PROFASI and OPLS-AA/L, the latter including the generalized Born surface area solvent model. A flexible command-line and configuration-file interface allows users to quickly set up simulations with the desired configuration. PHAISTOS is released under the GNU General Public License v3.0. Source code and documentation are freely available from http://phaistos.sourceforge.net. The software is implemented in C++ and has been tested on Linux and OSX platforms.

  1. On stochastic error and computational efficiency of the Markov Chain Monte Carlo method

    KAUST Repository

    Li, Jun

    2014-01-01

    In Markov Chain Monte Carlo (MCMC) simulations, thermal equilibria quantities are estimated by ensemble average over a sample set containing a large number of correlated samples. These samples are selected in accordance with the probability distribution function, known from the partition function of equilibrium state. As the stochastic error of the simulation results is significant, it is desirable to understand the variance of the estimation by ensemble average, which depends on the sample size (i.e., the total number of samples in the set) and the sampling interval (i.e., cycle number between two consecutive samples). Although large sample sizes reduce the variance, they increase the computational cost of the simulation. For a given CPU time, the sample size can be reduced greatly by increasing the sampling interval, while having the corresponding increase in variance be negligible if the original sampling interval is very small. In this work, we report a few general rules that relate the variance with the sample size and the sampling interval. These results are observed and confirmed numerically. These variance rules are derived for the MCMC method but are also valid for the correlated samples obtained using other Monte Carlo methods. The main contribution of this work includes the theoretical proof of these numerical observations and the set of assumptions that lead to them. © 2014 Global-Science Press.
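
The trade-off between sample size and sampling interval can be illustrated with an AR(1) process standing in for correlated MCMC output: thinning by a large interval leaves nearly independent samples, so far fewer of them carry almost the same information. The autocorrelation parameter and interval below are illustrative.

```python
import random

random.seed(1)

# AR(1) series as a stand-in for correlated MCMC output: x_{k+1} = r*x_k + noise.
# Its lag-h autocorrelation is r**h, so a sampling interval of h decorrelates it.
r = 0.95
xs = [0.0]
for _ in range(200_000):
    xs.append(r * xs[-1] + random.gauss(0.0, 1.0))

def lag1_autocorr(seq):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(seq)
    m = sum(seq) / n
    num = sum((seq[i] - m) * (seq[i + 1] - m) for i in range(n - 1))
    den = sum((s - m) ** 2 for s in seq)
    return num / den

full = xs[1_000:]   # drop burn-in
thin = full[::20]   # sampling interval of 20 cycles: 20x fewer, weakly correlated samples
```

The thinned set is 20 times smaller, yet its samples are close to independent (lag-1 correlation near 0.95**20 ≈ 0.36), so the variance of an ensemble average barely degrades per retained sample.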

  2. First Passage Probability Estimation of Wind Turbines by Markov Chain Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.

    2013-01-01

    Markov Chain Monte Carlo simulation has received considerable attention within the past decade as reportedly one of the most powerful techniques for first passage probability estimation of dynamic systems. A very popular method in this direction, capable of estimating the probability of rare events with low computation cost, is subset simulation (SS). The idea of the method is to break a rare event into a sequence of more probable events which are easy to estimate based on conditional simulation techniques. Recently, two algorithms have been proposed in order to increase the efficiency of the method by modifying the conditional sampler. In this paper, the applicability of the original SS is compared to the recently introduced modifications of the method on a wind turbine model. The model incorporates a PID pitch controller which aims at keeping the rotational speed of the wind turbine rotor equal...

  3. Irreversible Markov chain Monte Carlo algorithm for self-avoiding walk

    Science.gov (United States)

    Hu, Hao; Chen, Xiaosong; Deng, Youjin

    2017-02-01

    We formulate an irreversible Markov chain Monte Carlo algorithm for the self-avoiding walk (SAW), which violates the detailed balance condition and satisfies the balance condition. Its performance improves significantly compared to that of the Berretti-Sokal algorithm, which is a variant of the Metropolis-Hastings method. The gained efficiency increases with spatial dimension (D), from approximately 10 times in 2D to approximately 40 times in 5D. We simulate the SAW on a 5D hypercubic lattice with periodic boundary conditions, for linear system sizes up to L = 128, and confirm that, as for the 5D Ising model, the finite-size scaling of the SAW is governed by renormalized exponents, ν* = 2/d and γ/ν* = d/2. The critical point is determined, which is approximately 8 times more precise than the best available estimate.
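
For contrast, the reversible baseline mentioned above, a Berretti-Sokal-type variable-length SAW sampler, can be sketched in a few lines; the fugacity below is an illustrative value kept below the critical one so walk lengths stay finite, and the 2-D lattice is chosen only for simplicity.

```python
import random

random.seed(5)

# Berretti-Sokal-style sampler on the square lattice: a walk of length N has
# weight BETA**N, and moves append or delete one step at the free end.
DIRS = ((1, 0), (-1, 0), (0, 1), (0, -1))
BETA = 0.15  # illustrative fugacity; 4*BETA < 1 keeps walks short

def berretti_sokal(n_steps):
    walk = [(0, 0)]
    occupied = {(0, 0)}
    lengths = []
    for _ in range(n_steps):
        if random.random() < 0.5:  # propose appending one step
            dx, dy = random.choice(DIRS)
            x, y = walk[-1]
            new = (x + dx, y + dy)
            # Reject if the step violates self-avoidance or the Metropolis test.
            if new not in occupied and random.random() < min(1.0, 4 * BETA):
                walk.append(new)
                occupied.add(new)
        elif len(walk) > 1:        # propose deleting the last step
            if random.random() < min(1.0, 1.0 / (4 * BETA)):
                occupied.discard(walk.pop())
        lengths.append(len(walk) - 1)
    return lengths

lengths = berretti_sokal(50_000)
mean_len = sum(lengths) / len(lengths)
```

The irreversible algorithm of the article replaces this reversible append/delete dynamics with moves that satisfy only global balance.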

  4. ASSESSING CONVERGENCE OF THE MARKOV CHAIN MONTE CARLO METHOD IN MULTIVARIATE CASE

    Directory of Open Access Journals (Sweden)

    Daniel Furtado Ferreira

    2012-01-01

    The formal convergence diagnosis of Markov Chain Monte Carlo (MCMC) is made using univariate and multivariate criteria. In 1998, a multivariate extension of the univariate criterion of multiple sequences was proposed. However, due to some problems with that multivariate criterion, an alternative form of calculation was proposed, in addition to two new alternative multivariate convergence criteria. In this study, two models were used: one related to a time series with two interventions and ARMA(2, 2) errors, and another related to a trivariate normal distribution, considering three different cases for the covariance matrix. In both cases, the Gibbs sampler and the proposed criteria to monitor convergence were used. Results revealed the proposed criteria to be adequate, besides being easy to implement.
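
For the univariate building block of such diagnostics, the multiple-sequence potential scale reduction factor can be computed as follows. This is a textbook sketch of the univariate criterion, not the article's multivariate extensions; the simulated chains are illustrative.

```python
import random

random.seed(3)

def gelman_rubin(chains):
    """Univariate potential scale reduction factor for m chains of length n."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    b = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between-chain variance
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m               # mean within-chain variance
    var_hat = (n - 1) / n * w + b / n                          # pooled variance estimate
    return (var_hat / w) ** 0.5

# Converged case: four chains sampling the same distribution.
good = [[random.gauss(0.0, 1.0) for _ in range(2_000)] for _ in range(4)]
# Non-converged case: one chain is stuck in a different region.
bad = good[:3] + [[random.gauss(10.0, 1.0) for _ in range(2_000)]]
```

Values near 1 indicate that between-chain and within-chain variability agree, the usual informal convergence signal.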

  5. On the reliability of NMR relaxation data analyses: a Markov Chain Monte Carlo approach.

    Science.gov (United States)

    Abergel, Daniel; Volpato, Andrea; Coutant, Eloi P; Polimeno, Antonino

    2014-09-01

    The analysis of NMR relaxation data is revisited along the lines of a Bayesian approach. Using a Markov Chain Monte Carlo strategy of data fitting, we investigate conditions under which relaxation data can be effectively interpreted in terms of internal dynamics. The limitations to the extraction of kinetic parameters that characterize internal dynamics are analyzed, and we show that extracting characteristic time scales shorter than a few tens of ps is very unlikely. However, using MCMC methods, reliable estimates of the marginal probability distributions and estimators (average, standard deviations, etc.) can still be obtained for subsets of the model parameters. Thus, unlike more conventional strategies of data analysis, the method avoids a model selection process. In addition, it indicates what information may be extracted from the data, but also what cannot.

  6. Of bugs and birds: Markov Chain Monte Carlo for hierarchical modeling in wildlife research

    Science.gov (United States)

    Link, W.A.; Cam, E.; Nichols, J.D.; Cooch, E.G.

    2002-01-01

    Markov chain Monte Carlo (MCMC) is a statistical innovation that allows researchers to fit far more complex models to data than is feasible using conventional methods. Despite its widespread use in a variety of scientific fields, MCMC appears to be underutilized in wildlife applications. This may be due to a misconception that MCMC requires the adoption of a subjective Bayesian analysis, or perhaps simply to its lack of familiarity among wildlife researchers. We introduce the basic ideas of MCMC and software BUGS (Bayesian inference using Gibbs sampling), stressing that a simple and satisfactory intuition for MCMC does not require extraordinary mathematical sophistication. We illustrate the use of MCMC with an analysis of the association between latent factors governing individual heterogeneity in breeding and survival rates of kittiwakes (Rissa tridactyla). We conclude with a discussion of the importance of individual heterogeneity for understanding population dynamics and designing management plans.

  7. Markov Chain Monte Carlo (MCMC) methods for parameter estimation of a novel hybrid redundant robot

    Energy Technology Data Exchange (ETDEWEB)

    Wang Yongbo, E-mail: yongbo.wang@hotmail.com [Laboratory of Intelligent Machine, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland); Wu Huapeng; Handroos, Heikki [Laboratory of Intelligent Machine, Lappeenranta University of Technology, FIN-53851 Lappeenranta (Finland)

    2011-10-15

    This paper presents a statistical method for the calibration of a redundantly actuated hybrid serial-parallel robot, the IWR (Intersector Welding Robot). The robot under study will be used to carry out welding, machining, and remote handling for the assembly of the vacuum vessel of the International Thermonuclear Experimental Reactor (ITER). The robot has ten degrees of freedom (DOF), of which six are contributed by the parallel mechanism and the rest by the serial mechanism. In this paper, a kinematic error model involving 54 unknown geometrical error parameters is developed for the proposed robot. Based on this error model, the mean values of the unknown parameters are statistically analyzed and estimated by means of a Markov Chain Monte Carlo (MCMC) approach. A computer simulation is conducted by introducing random geometric errors and measurement poses which represent the corresponding real physical behaviors. The simulation results for the marginal posterior distributions of the estimated model parameters indicate that our method is reliable and robust.

  8. Generalized event-chain Monte Carlo: constructing rejection-free global-balance algorithms from infinitesimal steps.

    Science.gov (United States)

    Michel, Manon; Kapfer, Sebastian C; Krauth, Werner

    2014-02-07

    In this article, we present an event-driven algorithm that generalizes the recent hard-sphere event-chain Monte Carlo method without introducing discretizations in time or in space. A factorization of the Metropolis filter and the concept of infinitesimal Monte Carlo moves are used to design a rejection-free Markov-chain Monte Carlo algorithm for particle systems with arbitrary pairwise interactions. The algorithm breaks detailed balance, but satisfies maximal global balance and performs better than the classic, local Metropolis algorithm in large systems. The new algorithm generates a continuum of samples of the stationary probability density. This allows us to compute the pressure and stress tensor as a byproduct of the simulation without any additional computations.
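
The event-chain idea is easiest to see for hard rods on a one-dimensional ring: a lifted rod moves until it collides, then hands the remaining displacement to the rod it hit, so no move is ever rejected. The parameters below are illustrative, and this 1-D analogue omits the factorized-filter machinery for general pairwise interactions.

```python
import random

random.seed(6)

# Event-chain move for hard rods of length SIGMA on a ring of circumference L.
N, SIGMA, L, ELL = 10, 0.5, 20.0, 3.0  # rods, rod length, ring length, chain displacement

def chain_move(x):
    i = random.randrange(N)
    remaining = ELL
    while remaining > 1e-12:
        j = (i + 1) % N
        gap = (x[j] - x[i]) % L - SIGMA  # free distance before the collision
        if gap < 0.0:
            gap = 0.0                    # guard against round-off at contact
        step = min(remaining, gap)
        x[i] = (x[i] + step) % L
        remaining -= step
        i = j                            # lift the displacement onto the hit rod
    return x

x = [k * (L / N) for k in range(N)]      # evenly spaced, overlap-free start
for _ in range(1_000):
    x = chain_move(x)
```

Every chain move advances the configuration by exactly ELL in total, and the hard-core constraint is never violated, which is the rejection-free property.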

  9. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble

  10. The bulk phase behavior of short polyelectrolyte chains: A Monte Carlo study

    Science.gov (United States)

    Orkoulas, Gerassimos; Kumar, Sanat

    2002-03-01

    While polyelectrolytes form an important class of materials in chemistry and biochemistry, their understanding at the theoretical level is still lacking. The strong Coulombic repulsions and attractions, the resulting Debye screening and the concomitant unlike-charge association (known as counterion condensation) render standard neutral polymer theories difficult to apply. Indeed, most theoretical and numerical investigations have been focused on the dilute-to-semidilute regime at somewhat weak Coulomb couplings. In this work, we consider a lattice model of flexible charged chains with an appropriate number of oppositely charged counterions to ensure electrical neutrality. Electrostatic interactions are explicitly taken into account and handled via Ewald sums in the simulations. The solvent is modeled as a uniform dielectric continuum. By performing grand canonical Monte Carlo simulations, we demonstrate that at strong Coulomb couplings (low reduced temperatures) the system separates into polymer rich and poor phases respectively. The type of phase coexistence occurring in these polyelectrolyte systems bears resemblances to the gas-liquid transition of the restricted primitive model of ionic solutions. The approach to the Flory theta point is studied by increasing the chain length. The critical point dependence on the chain length is found to be rather weak. A plausible explanation lies in the formation of long-lived network structures via a bead-counterion mechanism. Finally, the ability of the system to form gels via bead-counterion junctions is examined and analyzed.

  11. SEMI-BLIND CHANNEL ESTIMATION OF MULTIPLE-INPUT/MULTIPLE-OUTPUT SYSTEMS BASED ON MARKOV CHAIN MONTE CARLO METHODS

    Institute of Scientific and Technical Information of China (English)

    Jiang Wei; Xiang Haige

    2004-01-01

    This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.

  12. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  13. Parallel GPGPU Evaluation of Small Angle X-ray Scattering Profiles in a Markov Chain Monte Carlo Framework

    DEFF Research Database (Denmark)

    Antonov, Lubomir Dimitrov; Andreetta, Christian; Hamelryck, Thomas Wim

    2013-01-01

    Inference of protein structure from experimental data is of crucial interest in science, medicine and biotechnology. Low-resolution methods, such as small angle X-ray scattering (SAXS), play a major role in investigating important biological questions regarding the structure of proteins in soluti......, and implements a caching procedure employed in the partial forward model evaluations within a Markov chain Monte Carlo framework....

  14. A Markov Chain Monte Carlo Approach to Estimate AIDS after HIV Infection.

    Science.gov (United States)

    Apenteng, Ofosuhene O; Ismail, Noor Azina

    2015-01-01

    The spread of human immunodeficiency virus (HIV) infection and the resulting acquired immune deficiency syndrome (AIDS) is a major health concern in many parts of the world, and mathematical models are commonly applied to understand the spread of the HIV epidemic. To understand the spread of HIV and AIDS cases and their parameters in a given population, it is necessary to develop a theoretical framework that takes into account realistic factors. The current study used this framework to assess the interaction between individuals who developed AIDS after HIV infection and individuals who did not develop AIDS after HIV infection (pre-AIDS). We first investigated how probabilistic parameters affect the model in terms of the HIV and AIDS population over a period of time. We observed that there is a critical threshold parameter, R0, which determines the behavior of the model. If R0 ≤ 1, there is a unique disease-free equilibrium; if R0 > 1, the disease-free equilibrium is unstable. We also show how a Markov chain Monte Carlo (MCMC) approach could be used as a supplement to forecast the numbers of reported HIV and AIDS cases. An approach using a Monte Carlo analysis is illustrated to understand the impact of model-based predictions in light of uncertain parameters on the spread of HIV. Finally, to examine this framework and demonstrate how it works, a case study was performed of reported HIV and AIDS cases from an annual data set in Malaysia, and then we compared how these approaches complement each other. We conclude that HIV disease in Malaysia shows epidemic behavior, especially in the context of understanding and predicting emerging cases of HIV and AIDS.
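
The threshold behavior of R0 can be demonstrated with a bare-bones deterministic SIR update, a much simpler stand-in for the paper's HIV/AIDS compartmental model; all rates and population numbers below are illustrative.

```python
# Deterministic SIR model with daily Euler steps; R0 = beta / gamma.
def simulate_sir(beta, gamma=0.2, days=400, n=1_000_000.0, i0=100.0):
    s, i, r = n - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak, r

peak_sub, final_sub = simulate_sir(beta=0.1)  # R0 = 0.5 < 1: infection dies out
peak_sup, final_sup = simulate_sir(beta=0.5)  # R0 = 2.5 > 1: epidemic takes off
```

Below threshold the infected count only declines from its initial value; above threshold it grows to a large peak before burning out.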

  15. A Markov Chain Monte Carlo Algorithm for Infrasound Atmospheric Sounding: Application to the Humming Roadrunner experiment in New Mexico

    Science.gov (United States)

    Lalande, Jean-Marie; Waxler, Roger; Velea, Doru

    2016-04-01

    As infrasonic waves propagate over long ranges through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach has the advantage of searching for the full probability density function in the parameter space at a lower computational cost than the extensive parameter searches performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization and yield estimation.

  16. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    Science.gov (United States)

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
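The core kinetic Monte Carlo loop underlying such simulations can be sketched as follows. Events are selected with probability proportional to their Arrhenius rates and the clock advances by an exponentially distributed waiting time. The event names, barriers, and prefactor are made up for illustration and are not the paper's hierarchical classification scheme:

```python
import math
import random

random.seed(1)

# Illustrative barriers (eV) and attempt frequency; not taken from the paper.
kT = 0.025   # ~room temperature, in eV
nu0 = 1e13   # attempt frequency, 1/s
barriers = {"hop_A": 0.5, "hop_B": 0.6, "detach": 0.9}

def kmc_step(time):
    # Arrhenius rates for each available transition.
    rates = {e: nu0 * math.exp(-b / kT) for e, b in barriers.items()}
    total = sum(rates.values())
    # Pick an event with probability rate / total.
    r = random.random() * total
    acc = 0.0
    for event, rate in rates.items():
        acc += rate
        if r < acc:
            chosen = event
            break
    # Advance the clock by an exponentially distributed waiting time.
    time += -math.log(random.random()) / total
    return chosen, time

t = 0.0
counts = {e: 0 for e in barriers}
for _ in range(10000):
    event, t = kmc_step(t)
    counts[event] += 1
```

Low-barrier events dominate the event counts, which is precisely why the paper's classification scheme accepts sub-threshold fluctuations freely rather than simulating each one.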

  17. MORSE Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  18. Dynamical Models for NGC 6503 using a Markov Chain Monte Carlo Technique

    CERN Document Server

    Puglielli, David; Courteau, Stéphane

    2010-01-01

    We use Bayesian statistics and Markov chain Monte Carlo (MCMC) techniques to construct dynamical models for the spiral galaxy NGC 6503. The constraints include surface brightness profiles which display a Freeman Type II structure; HI and ionized gas rotation curves; the stellar rotation, which is nearly coincident with the ionized gas curve; and the line of sight stellar dispersion, with a sigma-drop at the centre. The galaxy models consist of a Sersic bulge, an exponential disc with an optional inner truncation and a cosmologically motivated dark halo. The Bayesian/MCMC technique yields the joint posterior probability distribution function for the input parameters. We examine several interpretations of the data: the Type II surface brightness profile may be due to dust extinction, to an inner truncated disc or to a ring of bright stars; and we test separate fits to the gas and stellar rotation curves to determine if the gas traces the gravitational potential. We test each of these scenarios for bar stability...

  19. Improving Markov Chain Monte Carlo algorithms in LISA Pathfinder Data Analysis

    Science.gov (United States)

    Karnesis, N.; Nofrarias, M.; Sopuerta, C. F.; Lobo, A.

    2012-06-01

    The LISA Pathfinder mission (LPF) aims to test key technologies for the future LISA mission. The LISA Technology Package (LTP) on-board LPF will consist of an exhaustive suite of experiments and its outcome will be crucial for the future detection of gravitational waves. In order to achieve maximum sensitivity, we need to have an understanding of every instrument on-board and parametrize the properties of the underlying noise models. The Data Analysis team has developed algorithms for parameter estimation of the system. A very promising one implemented for LISA Pathfinder data analysis is the Markov Chain Monte Carlo. A series of experiments are going to take place during flight operations and each experiment is going to provide us with essential information for the next in the sequence. Therefore, it is a priority to optimize and improve our tools available for data analysis during the mission. Using a Bayesian framework analysis allows us to apply prior knowledge for each experiment, which means that we can efficiently use our prior estimates for the parameters, making the method more accurate and significantly faster. This, together with other algorithm improvements, will lead us to our main goal, which is no other than creating a robust and reliable tool for parameter estimation during the LPF mission.

  20. Path-integral Monte Carlo simulations for electronic dynamics on molecular chains. II. Transport across impurities

    Science.gov (United States)

    Mühlbacher, Lothar; Ankerhold, Joachim

    2005-05-01

    Electron transfer (ET) across molecular chains including an impurity is studied based on a recently improved real-time path-integral Monte Carlo (PIMC) approach [L. Mühlbacher, J. Ankerhold, and C. Escher, J. Chem. Phys. 121 12696 (2004)]. The reduced electronic dynamics is studied for various bridge lengths and defect site energies. By determining intersite hopping rates from PIMC simulations up to moderate times, the relaxation process in the extreme long-time limit is captured within a sequential transfer model. The total transfer rate is extracted and shown to be enhanced for certain defect site energies. Superexchange turns out to be relevant for extreme gap energies only and then gives rise to different dynamical signatures for high- and low-lying defects. Further, it is revealed that the entire bridge compound approaches a steady state on a much shorter time scale than that related to the total transfer. This allows for a simplified description of ET along donor-bridge-acceptor systems in the long-time range.

  1. An Efficient Interpolation Technique for Jump Proposals in Reversible-Jump Markov Chain Monte Carlo Calculations

    CERN Document Server

    Farr, Will M

    2011-01-01

    Selection among alternative theoretical models given an observed data set is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot retain a memory of the favored locations in more than one parameter space at a time. Thus, a naive jump between parameter spaces is unlikely to be accepted in the MCMC algorithm and convergence is correspondingly slow. Here we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose inter-model jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in arbitrary dimensions. We show that our technique leads to dramatically improved convergence over naive jumps in an RJMCMC, and compare it ...
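For reference, a jump from parameters $\theta$ (dimension $n$) to $\theta'$ (dimension $n'$), completed by auxiliary variables $u, u'$, is accepted in standard reversible-jump MCMC with probability (the generic Green form, not specific to the kD-tree proposal described above):

```latex
\alpha = \min\left(1,\;
  \frac{\pi(\theta')\, g'(u')}{\pi(\theta)\, g(u)}
  \left| \det \frac{\partial(\theta', u')}{\partial(\theta, u)} \right|
\right)
```

Here $\pi$ is the posterior, $g$ and $g'$ are the densities of the auxiliary variables, and the Jacobian accounts for the deterministic dimension-matching map. The interpolation technique improves the proposal densities so that $\alpha$ is no longer vanishingly small for naive inter-model jumps.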

  2. Empirical Markov Chain Monte Carlo Bayesian analysis of fMRI data.

    Science.gov (United States)

    de Pasquale, F; Del Gratta, C; Romani, G L

    2008-08-01

    In this work an Empirical Markov Chain Monte Carlo Bayesian approach to analyse fMRI data is proposed. The Bayesian framework is appealing since complex models can be adopted in the analysis both for the image and noise model. Here, the noise autocorrelation is taken into account by adopting an AutoRegressive model of order one and a versatile non-linear model is assumed for the task-related activation. Model parameters include the noise variance and autocorrelation, activation amplitudes and the hemodynamic response function parameters. These are estimated at each voxel from samples of the Posterior Distribution. Prior information is included by means of a 4D spatio-temporal model for the interaction between neighbouring voxels in space and time. The results show that this model can provide smooth estimates from low SNR data while important spatial structures in the data can be preserved. A simulation study is presented in which the accuracy and bias of the estimates are addressed. Furthermore, some results on convergence diagnostic of the adopted algorithm are presented. To validate the proposed approach a comparison of the results with those from a standard GLM analysis, spatial filtering techniques and a Variational Bayes approach is provided. This comparison shows that our approach outperforms the classical analysis and is consistent with other Bayesian techniques. This is investigated further by means of the Bayes Factors and the analysis of the residuals. The proposed approach applied to Blocked Design and Event Related datasets produced reliable maps of activation.

  3. Smart pilot points using reversible-jump Markov-chain Monte Carlo

    Science.gov (United States)

    Jiménez, S.; Mariethoz, G.; Brauchler, R.; Bayer, P.

    2016-05-01

    Pilot points are typical means for calibration of highly parameterized numerical models. We propose a novel procedure based on estimating not only the pilot point values, but also their number and suitable locations. This is accomplished by a trans-dimensional Bayesian inversion procedure that also allows for uncertainty quantification. The utilized algorithm, reversible-jump Markov-Chain Monte Carlo (RJ-MCMC), is computationally demanding and this challenges the application for model calibration. We present a solution for fast, approximate simulation through the application of a Bayesian inversion. A fast pathfinding algorithm is used to estimate tracer travel times instead of doing a full transport simulation. This approach extracts the information from measured breakthrough curves, which is crucial for the reconstruction of aquifer heterogeneity. As a result, the "smart pilot points" can be tuned during thousands of rapid model evaluations. This is demonstrated for both a synthetic and a field application. For the selected synthetic layered aquifer, two different hydrofacies are reconstructed. For the field investigation, multiple fluorescent tracers were injected in different well screens in a shallow alluvial aquifer and monitored in a tomographic source-receiver configuration. With the new inversion procedure, a sand layer was identified and reconstructed with a high spatial resolution in 3-D. The sand layer was successfully validated through additional slug tests at the site. The promising results encourage further applications in hydrogeological model calibration, especially for cases with simulation of transport.

  4. Estimating stepwise debromination pathways of polybrominated diphenyl ethers with an analogue Markov Chain Monte Carlo algorithm.

    Science.gov (United States)

    Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An

    2014-11-01

    A stochastic process was developed to simulate the stepwise debromination pathways for polybrominated diphenyl ethers (PBDEs). The stochastic process uses an analogue Markov Chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles. The acceptance or rejection of the randomly drawn stepwise debromination reactions was determined by a maximum likelihood function. The experimental observations at certain time points were used as target profiles; therefore, the stochastic processes are capable of presenting the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated by adopting the experimental results of decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Inferences that were not obvious from experimental data were suggested by model simulations. For example, BDE206 has much higher accumulation in the first 30 min of sunlight exposure. By contrast, model simulation suggests that BDE206 and BDE207 had comparable yields from BDE209. The reason for the higher BDE206 level is that BDE207 has the highest depletion in producing octa products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was determined to be more efficient and robust. Due to the feature of only requiring experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g. microbial, photolytic, or joint effects in natural environments.

  5. Ascertainment correction for Markov chain Monte Carlo segregation and linkage analysis of a quantitative trait.

    Science.gov (United States)

    Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E

    2007-09-01

    Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci, and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci.

  6. Markov chain Monte Carlo based analysis of post-translationally modified VDAC gating kinetics.

    Science.gov (United States)

    Tewari, Shivendra G; Zhou, Yifan; Otto, Bradley J; Dash, Ranjan K; Kwok, Wai-Meng; Beard, Daniel A

    2014-01-01

    The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications is not understood. Herein, single channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. This developed method describes three distinct conducting states (open, half-open, and closed) of VDAC activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from relatively closed state to an open state. Model analyses of the nitrosated data suggest that faster reaction of nitric oxide with Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.

  7. Improving Hydrologic Data Assimilation by a Multivariate Particle Filter-Markov Chain Monte Carlo

    Science.gov (United States)

    Yan, H.; DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Data assimilation (DA) is a popular method for merging information from multiple sources (i.e. models and remote sensing), leading to improved hydrologic prediction. With the increasing availability of satellite observations (such as soil moisture) in recent years, DA is emerging in operational forecast systems. Although these techniques have seen widespread application, developmental research has continued to further refine their effectiveness. This presentation will examine potential improvements to the Particle Filter (PF) through the inclusion of multivariate correlation structures. Applications of the PF typically rely on univariate DA schemes (such as assimilating the outlet observed discharge), and multivariate schemes generally ignore the spatial correlation of the observations. In this study, a multivariate DA scheme is proposed by introducing geostatistics into the newly developed particle filter with Markov chain Monte Carlo (PF-MCMC) method. This new method is assessed by a case study over one of the basins with natural hydrologic processes in the Model Parameter Estimation Experiment (MOPEX), located in Arizona. The multivariate PF-MCMC method is used to assimilate the Advanced Scatterometer (ASCAT) grid (12.5 km) soil moisture retrievals and the observed streamflow in five gages (four inlet and one outlet) into the Sacramento Soil Moisture Accounting (SAC-SMA) model at the same scale (12.5 km), leading to greater skill in hydrologic predictions.

  8. Quantum Monte Carlo simulation

    OpenAIRE

    Wang, Yazhen

    2011-01-01

    Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...

  9. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  10. Enhancing multi-objective evolutionary algorithm performance with Markov Chain Monte Carlo

    Science.gov (United States)

    Shafii, M.; Vrugt, J. A.; Tolson, B.; Matott, L. S.

    2009-12-01

    Multi-Objective Evolutionary Algorithms (MOEAs) have emerged as successful optimization routines to solve complex and large-scale multi-objective model calibration problems. However, a common drawback of these methods is that they require a relatively high number of function evaluations to produce an accurate approximation of the Pareto front. This requirement can translate into very large computational costs in hydrologic model calibration problems. Most research efforts to address this computational burden are focused on introducing or improving the operators applied in the MOEAs structure. However, population initialization, usually done through Random Sampling (RS) or Latin Hypercube Sampling (LHS), can also affect the searching efficiency and the quality of MOEA results. This study presents a novel approach to generate the initial population of a MOEA (i.e. NSGA-II) by applying a Markov Chain Monte Carlo (MCMC) sampler. The basis of MCMC methods is a Markov chain generating a random walk through the search space, using a formal likelihood function to sample the high-probability-density regions of the parameter space. Therefore, these solutions, when used as an initial population, are capable of carrying quite valuable information into the MOEA process. Instead of running the MCMC sampler (i.e. DREAM) to convergence, it is applied for a relatively small and fixed number of function evaluations. The MCMC samples are then processed to identify and archive the non-dominated solutions and this archive is used as NSGA-II’s initial population. In order to analyze the applicability of this approach, it is used for a number of benchmark mathematical problems, as well as multi-objective calibration of a rainfall-runoff model (HYMOD). Initial results show promising MOEA improvement when it is initialized with an MCMC based initial population. Results will be presented that comprehensively compares MOEA results with and without an MCMC based initial population in terms of the

  11. Equilibrium Statistics: Monte Carlo Methods

    Science.gov (United States)

    Kröger, Martin

    Monte Carlo methods use random numbers, or ‘random’ sequences, to sample from a known shape of a distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo ‘moves’, required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
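Topic (ii) above, evaluating high-dimensional integrals, can be illustrated with a generic hit-or-miss estimate of the unit-ball volume; this is a textbook sketch, not an example from the chapter:

```python
import random

random.seed(2)

# Monte Carlo evaluation of a d-dimensional integral: the volume of the
# unit ball, estimated by uniform sampling in the hypercube [-1, 1]^d.
def ball_volume(d, n_samples=200000):
    hits = 0
    for _ in range(n_samples):
        if sum(random.uniform(-1, 1) ** 2 for _ in range(d)) <= 1.0:
            hits += 1
    # Fraction of hits times the hypercube volume 2^d.
    return (2.0 ** d) * hits / n_samples

vol3 = ball_volume(3)  # exact value: 4*pi/3 ≈ 4.18879
```

The statistical error shrinks as 1/sqrt(n_samples) regardless of dimension, which is why Monte Carlo outperforms grid quadrature for high-dimensional integrals.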

  12. Modelling heterotachy in phylogenetic inference by reversible-jump Markov chain Monte Carlo.

    Science.gov (United States)

    Pagel, Mark; Meade, Andrew

    2008-12-27

    The rate at which a given site in a gene sequence alignment evolves over time may vary. This phenomenon--known as heterotachy--can bias or distort phylogenetic trees inferred from models of sequence evolution that assume rates of evolution are constant. Here, we describe a phylogenetic mixture model designed to accommodate heterotachy. The method sums the likelihood of the data at each site over more than one set of branch lengths on the same tree topology. A branch-length set that is best for one site may differ from the branch-length set that is best for some other site, thereby allowing different sites to have different rates of change throughout the tree. Because rate variation may not be present in all branches, we use a reversible-jump Markov chain Monte Carlo algorithm to identify those branches in which reliable amounts of heterotachy occur. We implement the method in combination with our 'pattern-heterogeneity' mixture model, applying it to simulated data and five published datasets. We find that complex evolutionary signals of heterotachy are routinely present over and above variation in the rate or pattern of evolution across sites, that the reversible-jump method requires far fewer parameters than conventional mixture models to describe it, and serves to identify the regions of the tree in which heterotachy is most pronounced. The reversible-jump procedure also removes the need for a posteriori tests of 'significance' such as the Akaike or Bayesian information criterion tests, or Bayes factors. Heterotachy has important consequences for the correct reconstruction of phylogenies as well as for tests of hypotheses that rely on accurate branch-length information. These include molecular clocks, analyses of tempo and mode of evolution, comparative studies and ancestral state reconstruction. The model is available from the authors' website, and can be used for the analysis of both nucleotide and morphological data.

  13. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    Science.gov (United States)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2017-01-01

    Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provide a meaningful framework for interrogating Earth structure from receiver function data.
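The Gibbs-sampling idea used in the algorithm, drawing each parameter in turn from its full conditional distribution, can be sketched on a toy target where the conditionals are known exactly (a bivariate normal; this is a generic illustration, not the receiver-function model):

```python
import math
import random

random.seed(3)

# Gibbs sampler for a zero-mean bivariate normal with unit variances
# and correlation rho: both full conditionals are 1-D normals,
#   x | y ~ N(rho*y, 1 - rho^2),   y | x ~ N(rho*x, 1 - rho^2).
rho = 0.8
x, y = 0.0, 0.0
xs, ys = [], []
for _ in range(20000):
    x = random.gauss(rho * y, math.sqrt(1 - rho ** 2))  # draw x | y
    y = random.gauss(rho * x, math.sqrt(1 - rho ** 2))  # draw y | x
    xs.append(x)
    ys.append(y)

n = len(xs)
mean_x = sum(xs) / n
# For zero-mean, unit-variance marginals, E[x*y] estimates rho.
corr = sum(a * b for a, b in zip(xs, ys)) / n
```

In the receiver-function setting each "variable" is a model parameter (layer thickness, anisotropy orientation, etc.) and the conditionals are sampled via the forward model rather than in closed form.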

  14. Input estimation for drug discovery using optimal control and Markov chain Monte Carlo approaches.

    Science.gov (United States)

    Trägårdh, Magnus; Chappell, Michael J; Ahnmark, Andrea; Lindén, Daniel; Evans, Neil D; Gennemark, Peter

    2016-04-01

    Input estimation is employed in cases where it is desirable to recover the form of an input function which cannot be directly observed and for which there is no model for the generating process. In pharmacokinetic and pharmacodynamic modelling, input estimation in linear systems (deconvolution) is well established, while the nonlinear case is largely unexplored. In this paper, a rigorous definition of the input-estimation problem is given, and the choices involved in terms of modelling assumptions and estimation algorithms are discussed. In particular, the paper covers Maximum a Posteriori estimates using techniques from optimal control theory, and full Bayesian estimation using Markov Chain Monte Carlo (MCMC) approaches. These techniques are implemented using the optimisation software CasADi, and applied to two example problems: one where the oral absorption rate and bioavailability of the drug eflornithine are estimated using pharmacokinetic data from rats, and one where energy intake is estimated from body-mass measurements of mice exposed to monoclonal antibodies targeting the fibroblast growth factor receptor (FGFR) 1c. The results from the analysis are used to highlight the strengths and weaknesses of the methods used when applied to sparsely sampled data. The presented methods for optimal control are fast and robust, and can be recommended for use in drug discovery. The MCMC-based methods can have long running times and require more expertise from the user. The rigorous definition together with the illustrative examples and suggestions for software serve as a highly promising starting point for application of input-estimation methods to problems in drug discovery.

  15. Optimized nested Markov chain Monte Carlo sampling: application to the liquid nitrogen Hugoniot using density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Milton Sam [Los Alamos National Laboratory; Coe, Joshua D [Los Alamos National Laboratory; Sewell, Thomas D [UNIV OF MISSOURI-COLUMBIA

    2009-01-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.

  16. The Virtual Monte Carlo

    CERN Document Server

    Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas

    2003-01-01

    The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.

  17. A method to reduce the rejection rate in Monte Carlo Markov chains

    Science.gov (United States)

    Baldassi, Carlo

    2017-03-01

We present a method for Monte Carlo sampling on systems with discrete variables (focusing on the Ising case), introducing a prior on the candidate moves in a Metropolis–Hastings scheme which can significantly reduce the rejection rate; we call this the reduced-rejection-rate (RRR) method. The method employs the same probability distribution for the choice of the moves as rejection-free schemes such as the method proposed by Bortz, Kalos and Lebowitz (BKL) (1975 J. Comput. Phys. 17 10–8); however, it uses it as a prior in an otherwise standard Metropolis scheme: it is thus not fully rejection-free, but in a wide range of scenarios it is nearly so. This allows the method to be extended to cases for which rejection-free schemes become inefficient, in particular when the graph connectivity is not sparse, but the energy can nevertheless be expressed as a sum of two components, one of which is computed on a sparse graph and dominates the measure. As examples of such instances, we demonstrate that the method yields excellent results when performing Monte Carlo simulations of quantum spin models in the presence of a transverse field in the Suzuki–Trotter formalism, and when exploring the so-called robust ensemble which was recently introduced in Baldassi et al (2016 Proc. Natl Acad. Sci. 113 E7655–62). Our code for the Ising case is publicly available (RRR Monte Carlo code https://github.com/carlobaldassi/RRRMC.jl) and extensible to user-defined models: it provides efficient implementations of standard Metropolis, the RRR method, the BKL method (extended to the case of continuous energy spectra), and the waiting time method by Dall and Sibani (2001 Comput. Phys. Commun. 141 260–7).
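
The mechanics can be illustrated with a minimal sketch on a 1-D periodic Ising chain (a simplification for illustration; the authors' RRRMC.jl package implements the general method). Candidate spins are drawn from BKL-style weights w_i = min(1, exp(-beta*dE_i)); the Boltzmann and weight factors then cancel in the Metropolis correction, leaving only a ratio of normalization constants:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_E(sigma, i, J=1.0):
    """Energy change from flipping spin i in a 1-D periodic Ising chain."""
    N = len(sigma)
    left, right = sigma[(i - 1) % N], sigma[(i + 1) % N]
    return 2.0 * J * sigma[i] * (left + right)

def rrr_step(sigma, beta):
    """One Metropolis step with a BKL-style prior on the candidate move."""
    N = len(sigma)
    dE = np.array([delta_E(sigma, i) for i in range(N)])
    w = np.minimum(1.0, np.exp(-beta * dE))   # rejection-free move weights
    Z = w.sum()
    i = rng.choice(N, p=w / Z)                # propose spin i with prob w_i / Z
    sigma_new = sigma.copy()
    sigma_new[i] *= -1
    dE_new = np.array([delta_E(sigma_new, j) for j in range(N)])
    Z_new = np.minimum(1.0, np.exp(-beta * dE_new)).sum()
    # MH correction: exp(-beta*dE) and the weight factors cancel, leaving Z / Z'
    if rng.random() < min(1.0, Z / Z_new):
        return sigma_new
    return sigma

sigma = rng.choice([-1, 1], size=20)
for _ in range(1000):
    sigma = rrr_step(sigma, beta=0.5)
```

With a sparse graph the weights can be updated incrementally rather than recomputed, which is where the method's efficiency comes from.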

  18. Comparison of Bootstrapping and Markov Chain Monte Carlo for Copula Analysis of Hydrological Droughts

    Science.gov (United States)

    Yang, P.; Ng, T. L.; Yang, W.

    2015-12-01

Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be at the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought related variables. (2) The resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC
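
The bootstrapping side of such a comparison can be sketched simply. The example below (a simplification: one lognormal marginal parameter rather than the full copula fit, with synthetic data) builds a 95% percentile-bootstrap CI by refitting the estimator on resampled data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "drought severity" data; stand-in for observed drought realizations
data = rng.lognormal(mean=1.0, sigma=0.5, size=100)

def fit_mu(sample):
    """MLE of the lognormal log-scale parameter mu."""
    return np.log(sample).mean()

B = 2000
boot = np.array([fit_mu(rng.choice(data, size=len(data), replace=True))
                 for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])   # 95% percentile CI
print(f"95% CI for mu: ({lo:.3f}, {hi:.3f})")
```

The MCMC alternative would instead sample the posterior of the same parameter, optionally under an informative prior, and report the central 95% posterior interval.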

  19. Monte Carlo estimation of total variation distance of Markov chains on large spaces, with application to phylogenetics.

    Science.gov (United States)

    Herbei, Radu; Kubatko, Laura

    2013-03-26

    Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
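The idea of Monte Carlo estimation of total variation distance can be shown on a toy discrete space (the paper's setting is the much larger space of phylogenetic trees, with GPU acceleration). Assuming both probability mass functions can be evaluated, the identity TV(p,q) = E_{x~p}[max(0, 1 - q(x)/p(x))] turns TV into an expectation that plain sampling can estimate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two known discrete distributions on {0, ..., 4}
p = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
q = np.array([0.25, 0.25, 0.20, 0.15, 0.15])

exact_tv = 0.5 * np.abs(p - q).sum()

# MC estimator: TV(p, q) = E_{x~p}[ max(0, 1 - q(x)/p(x)) ]
n = 200_000
x = rng.choice(len(p), size=n, p=p)
est = np.maximum(0.0, 1.0 - q[x] / p[x]).mean()

print(exact_tv, est)   # estimate should be close to the exact value
```

On large state spaces the sum defining TV is intractable, but the expectation form only needs samples from p and pointwise density evaluations, which is what makes the approach scale.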

  20. Geochemical Characterization Using Geophysical Data and Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Chen, J.; Hubbard, S.; Rubin, Y.; Murray, C.; Roden, E.; Majer, E.

    2002-12-01

    if they were available from direct measurements or as variables otherwise. To estimate the geochemical parameters, we first assigned a prior model for each variable and a likelihood model for each type of data, which together define posterior probability distributions for each variable on the domain. Since the posterior probability distribution may involve hundreds of variables, we used a Markov Chain Monte Carlo (MCMC) method to explore each variable by generating and subsequently evaluating hundreds of realizations. Results from this case study showed that although geophysical attributes are not necessarily directly related to geochemical parameters, geophysical data could be very useful for providing accurate and high-resolution information about geochemical parameter distribution through their joint and indirect connections with hydrogeological properties such as lithofacies. This case study also demonstrated that MCMC methods were particularly useful for geochemical parameter estimation using geophysical data because they allow incorporation into the procedure of spatial correlation information, measurement errors, and cross correlations among different types of parameters.

  1. Monte Carlo and nonlinearities

    CERN Document Server

    Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian

    2016-01-01

The Monte Carlo method is widely used to numerically predict the behaviour of systems. However, its powerful incremental design assumes a strong premise which has severely limited its application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities onto a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real-world usability of this advance on four test cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...

  2. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
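Two of the fundamentals named in the outline are easy to make concrete: estimating π from the fraction of uniform points inside the quarter-circle, and inverse transform sampling, here applied to the exponential distribution (F⁻¹(u) = -ln(1-u)/λ):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Estimating pi: fraction of uniform points landing in the unit quarter-circle
x, y = rng.random(n), rng.random(n)
pi_hat = 4.0 * np.mean(x**2 + y**2 <= 1.0)

# Inverse transform sampling: Exp(lam) via F^{-1}(u) = -ln(1 - u) / lam
lam = 2.0
u = rng.random(n)
samples = -np.log1p(-u) / lam   # log1p(-u) is a numerically stable ln(1 - u)

print(pi_hat, samples.mean())   # ~3.14, sample mean ~1/lam = 0.5
```

By the Law of Large Numbers both averages converge to their true values, and the Central Limit Theorem gives the familiar O(1/sqrt(n)) error.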

  3. Monte Carlo simulations on magnetic behavior of a spin-chain system in triangular lattice doped with antiferromagnetic bonds

    Institute of Scientific and Technical Information of China (English)

    YAO Xiao-yan; LI Peng-lei; DONG Shuai; LIU Jun-ming

    2007-01-01

A three-dimensional Ising-like model doped with anti-ferromagnetic (AFM) bonds is proposed to investigate the magnetic properties of a doped triangular spin-chain system by using a Monte-Carlo simulation. The simulated results indicate that a steplike magnetization behavior is very sensitive to the concentration of AFM bonds. A low concentration of AFM bonds can suppress the stepwise behavior considerably, in accordance with doping experiments on Ca3Co2O6. The analysis of spin snapshots demonstrates that the AFM bond doping not only breaks the ferromagnetic ordered linear spin chains along the hexagonal c-axis but also has a great influence upon the spin configuration in the ab-plane.

  4. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    Science.gov (United States)

    Aslam, Kamran

This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included along with a discussion on ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
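The analytical layer can be sketched for a single game under the standard iid-point assumption (each point won independently with fixed probability p, as in the Newton-Keller model): sum the probabilities of winning to love, 4-1 and 4-2, plus reaching deuce and then winning two points in a row:

```python
from math import comb

def game_win_prob(p):
    """Probability of winning a tennis game, given per-point win probability p
    and iid points (the Newton-Keller style closed form)."""
    q = 1.0 - p
    p_deuce = p**2 / (1.0 - 2.0 * p * q)        # from deuce: win two in a row
    return (p**4                                 # win to love
            + comb(4, 1) * p**4 * q              # win 4-1
            + comb(5, 2) * p**4 * q**2           # win 4-2
            + comb(6, 3) * p**3 * q**3 * p_deuce)  # reach deuce (3-3), then win

print(game_win_prob(0.5))    # 0.5 by symmetry
print(game_win_prob(0.55))   # a small per-point edge is amplified at game level
```

The same composition extends recursively to sets and matches; the Monte Carlo simulation layer replaces the fixed p with sampled, possibly non-iid, point outcomes.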

  5. Nested Markov chain Monte Carlo sampling of a density functional theory potential: equilibrium thermodynamics of dense fluid nitrogen.

    Science.gov (United States)

    Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam

    2009-08-21

    An optimized variant of the nested Markov chain Monte Carlo [n(MC)(2)] method [J. Chem. Phys. 130, 164104 (2009)] is applied to fluid N(2). In this implementation of n(MC)(2), isothermal-isobaric (NPT) ensemble sampling on the basis of a pair potential (the "reference" system) is used to enhance the efficiency of sampling based on Perdew-Burke-Ernzerhof density functional theory with a 6-31G(*) basis set (PBE6-31G(*), the "full" system). A long sequence of Monte Carlo steps taken in the reference system is converted into a trial step taken in the full system; for a good choice of reference potential, these trial steps have a high probability of acceptance. Using decorrelated samples drawn from the reference distribution, the pressure and temperature of the full system are varied such that its distribution overlaps maximally with that of the reference system. Optimized pressures and temperatures then serve as input parameters for n(MC)(2) sampling of dense fluid N(2) over a wide range of thermodynamic conditions. The simulation results are combined to construct the Hugoniot of nitrogen fluid, yielding predictions in excellent agreement with experiment.
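The core of nested MCMC can be sketched with cheap 1-D toy potentials (an illustration only: the paper's "full" system is PBE6-31G(*) DFT and the ensemble is NPT, neither of which is reproduced here). A sub-chain of ordinary Metropolis steps in the reference system produces one trial move, which a single full-system test then corrects:

```python
import numpy as np

rng = np.random.default_rng(4)

def e_full(x):   # stand-in for the expensive (e.g. DFT) energy
    return 0.5 * x**2 + 0.1 * x**4

def e_ref(x):    # cheap reference potential approximating e_full
    return 0.5 * x**2

beta = 1.0

def ref_step(x, scale=0.5):
    """Ordinary Metropolis step in the reference system."""
    y = x + rng.normal(0.0, scale)
    if np.log(rng.random()) < -beta * (e_ref(y) - e_ref(x)):
        return y
    return x

def nested_step(x, n_ref=20):
    """Run a reference sub-chain, then accept its endpoint in the full system."""
    y = x
    for _ in range(n_ref):
        y = ref_step(y)
    # Correct for the reference bias: one full-system test for the whole sub-chain
    log_a = -beta * (e_full(y) - e_full(x)) + beta * (e_ref(y) - e_ref(x))
    if np.log(rng.random()) < log_a:
        return y
    return x

x, samples = 0.0, []
for _ in range(5000):
    x = nested_step(x)
    samples.append(x)
```

The closer the reference potential tracks the full one, the higher the acceptance of the composite trial move, which is exactly why the paper optimizes the reference (P, T) for maximal distribution overlap.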

  6. Monte Carlo simulation and equation of state for flexible charged hard-sphere chain fluids: polyampholyte and polyelectrolyte solutions.

    Science.gov (United States)

    Jiang, Hao; Adidharma, Hertanto

    2014-11-07

    The thermodynamic modeling of flexible charged hard-sphere chains representing polyampholyte or polyelectrolyte molecules in solution is considered. The excess Helmholtz energy and osmotic coefficients of solutions containing short polyampholyte and the osmotic coefficients of solutions containing short polyelectrolytes are determined by performing canonical and isobaric-isothermal Monte Carlo simulations. A new equation of state based on the thermodynamic perturbation theory is also proposed for flexible charged hard-sphere chains. For the modeling of such chains, the use of solely the structure information of monomer fluid for calculating the chain contribution is found to be insufficient and more detailed structure information must therefore be considered. Two approaches, i.e., the dimer and dimer-monomer approaches, are explored to obtain the contribution of the chain formation to the Helmholtz energy. By comparing with the simulation results, the equation of state with either the dimer or dimer-monomer approach accurately predicts the excess Helmholtz energy and osmotic coefficients of polyampholyte and polyelectrolyte solutions except at very low density. It also well captures the effect of temperature on the thermodynamic properties of these solutions.

  7. Markov chain Monte Carlo searches for Galactic binaries in Mock LISA Data Challenge 1B data sets

    CERN Document Server

    Trias, Miquel; Veitch, John

    2008-01-01

We are developing a Bayesian approach based on Markov chain Monte Carlo techniques to search for and extract information about white dwarf binary systems with the Laser Interferometer Space Antenna (LISA). Here we present results obtained by applying an initial implementation of this method to some of the data sets released in Round 1B of the Mock LISA Data Challenges. For Challenges 1B.1.1a and 1b the signals were recovered with parameters lying within the 95.5% posterior probability interval and the correlation between the true and recovered waveform is in excess of 99%. Results were not submitted for Challenge 1B.1.1c due to some convergence problems of the algorithms, although the signal was detected in a search over a 2 mHz band.

  8. Dark matter in disk galaxies I: a Markov Chain Monte Carlo method and application to DDO 154

    CERN Document Server

    Hague, Peter R

    2013-01-01

    We present a new method to constrain the dark matter halo density profiles of disk galaxies. Our algorithm employs a Markov Chain Monte Carlo (MCMC) approach to explore the parameter space of a general family of dark matter profiles. We improve upon previous analyses by considering a wider range of halo profiles and by explicitly identifying cases in which the data are insufficient to break the degeneracies between the model parameters. We demonstrate the robustness of our algorithm using artificial data sets and show that reliable estimates of the halo density profile can be obtained from data of comparable quality to those currently available for low surface brightness (LSB) galaxies. We present our results in terms of physical quantities which are constrained by the data, and find that the logarithmic slope of the halo density profile at the radius of the innermost data point of a measured rotation curve can be strongly constrained in LSB galaxies. High surface brightness galaxies require additional inform...

  9. Path-integral Monte-Carlo simulations for electronic dynamics on molecular chains: I. Sequential hopping and super exchange

    CERN Document Server

    Mühlbacher, L; Escher, C M

    2004-01-01

    An improved real-time quantum Monte Carlo procedure is presented and applied to describe the electronic transfer dynamics along molecular chains. The model consists of discrete electronic sites coupled to a thermal environment which is integrated out exactly within the path integral formulation. The approach is numerically exact and its results reduce to known analytical findings (Marcus theory, golden rule) in proper limits. Special attention is paid to the role of superexchange and sequential hopping at lower temperatures in symmetric donor-bridge-acceptor systems. In contrast to previous approximate studies, superexchange turns out to play a significant role only for extremely high lying bridges where the transfer is basically frozen or for extremely low temperatures where for weaker dissipation a description in terms of rate constants is no longer feasible. For bridges with increasing length an algebraic decrease of the yield is found for short as well as for longer bridges. The approach can be extended t...

  10. Fitting a distribution to censored contamination data using Markov Chain Monte Carlo methods and samples selected with unequal probabilities.

    Science.gov (United States)

    Williams, Michael S; Ebel, Eric D

    2014-11-18

    The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
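The weighted-bootstrap idea can be sketched directly (with hypothetical data: the values, selection probabilities and the plain weighted mean below are illustrative, not the paper's Most Probable Number example or its full censored-distribution fit). Observations collected with unequal selection probabilities are resampled with probability proportional to their inverse-probability weights:

```python
import numpy as np

rng = np.random.default_rng(10)

# Hypothetical contamination values and the probability each unit was selected
# (risk-based sampling oversamples units likely to be contaminated)
values = np.array([1.2, 0.8, 2.5, 3.1, 0.5, 1.9, 4.2, 0.9])
sel_prob = np.array([0.9, 0.2, 0.8, 0.9, 0.1, 0.5, 0.95, 0.2])

w = 1.0 / sel_prob          # inverse-probability (Horvitz-Thompson style) weights
w /= w.sum()

weighted_mean = (w * values).sum()

# Weighted bootstrap: resample with probabilities proportional to the weights
B = 5000
boot_means = np.array([rng.choice(values, size=len(values), p=w).mean()
                       for _ in range(B)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(weighted_mean, (lo, hi))
```

In the paper's framework the statistic refitted on each weighted resample is the censored-distribution fit itself rather than a simple mean.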

  11. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  12. Inverse modeling of cloud-aerosol interactions -- Part 2: Sensitivity tests on liquid phase clouds using a Markov Chain Monte Carlo based simulation approach

    NARCIS (Netherlands)

    Partridge, D.G.; Vrugt, J.A.; Tunved, P.; Ekman, A.M.L.; Struthers, H.; Sooroshian, A.

    2012-01-01

    This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov chain Monte Carlo (MCMC) algorithm to an adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted few have used statistical analysis tools t

  13. Recovery of Item Parameters in the Nominal Response Model: A Comparison of Marginal Maximum Likelihood Estimation and Markov Chain Monte Carlo Estimation.

    Science.gov (United States)

    Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun

    2002-01-01

    Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov Chain Monte Carlo (MCMC) with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)

  14. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    -dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high...

  15. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  16. Metropolis Methods for Quantum Monte Carlo Simulations

    OpenAIRE

    Ceperley, D. M.

    2003-01-01

Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: Variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
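All of these generalizations build on the same elementary kernel. A minimal random-walk Metropolis sampler for an arbitrary log-density (here targeting a standard normal, purely for illustration) looks like:

```python
import numpy as np

rng = np.random.default_rng(5)

def metropolis(log_pi, x0, n_steps, scale=1.0):
    """Random-walk Metropolis: symmetric Gaussian proposal,
    accept with probability min(1, pi(y)/pi(x))."""
    x = x0
    lp = log_pi(x)
    chain = np.empty(n_steps)
    for t in range(n_steps):
        y = x + rng.normal(0.0, scale)
        lp_y = log_pi(y)
        if np.log(rng.random()) < lp_y - lp:   # Metropolis acceptance test
            x, lp = y, lp_y
        chain[t] = x
    return chain

# Target: standard normal, log pi(x) = -x^2/2 up to a constant
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=50_000, scale=2.4)
print(chain.mean(), chain.var())   # ~0 and ~1
```

Variational, diffusion and path-integral Monte Carlo each replace the proposal and the accept/reject weight with physics-specific choices, but the detailed-balance structure above is common to all of them.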

  17. A Markov chain Monte Carlo (MCMC) methodology with bootstrap percentile estimates for predicting presidential election results in Ghana.

    Science.gov (United States)

    Nortey, Ezekiel N N; Ansah-Narh, Theophilus; Asah-Asante, Richard; Minkah, Richard

    2015-01-01

Although numerous studies exist on procedures for forecasting or predicting election results, in Ghana only opinion poll strategies have been used. To fill this gap, the paper develops Markov chain models for forecasting the 2016 presidential election results at the Regional, Zonal (i.e. Savannah, Coastal and Forest) and National levels using past presidential election results of Ghana. The methodology develops a model for prediction of the 2016 presidential election results in Ghana using the Markov chain Monte Carlo (MCMC) methodology with bootstrap estimates. The results were that the ruling NDC may marginally win the 2016 Presidential Elections but would not obtain more than the 50 % of votes needed to be declared an outright winner. This means that there is going to be a run-off election between the two giant political parties: the ruling NDC and the major opposition party, the NPP. The prediction for the 2016 Presidential run-off election between the NDC and the NPP was rather in favour of the major opposition party, the NPP, with a little over 50 % of the votes.
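The Markov chain plus bootstrap combination can be sketched with entirely hypothetical data (the winner sequences below are invented; the paper works with actual regional vote shares, not the simplified two-party winner model used here): estimate a transition matrix from past outcomes, forecast the next step, and attach a bootstrap percentile interval:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical per-region winners over past elections (0 = party A, 1 = party B);
# rows are regions, columns are successive elections
history = np.array([
    [0, 0, 1, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0],
])

def transition_matrix(seq):
    """Estimate a 2x2 transition matrix from winner sequences."""
    counts = np.zeros((2, 2))
    for row in seq:
        for a, b in zip(row[:-1], row[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

P = transition_matrix(history)
last = history[:, -1]
p_next = P[last, 1].mean()   # expected share of regions won by party B next time

# Bootstrap percentile interval: resample regions with replacement
boot = [transition_matrix(history[rng.choice(4, 4, replace=True)])[last, 1].mean()
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(p_next, (lo, hi))
```

The bootstrap resamples quantify how sensitive the forecast is to the short historical record, which is the role the percentile estimates play in the paper.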

  18. DREAM(D): an adaptive Markov chain Monte Carlo simulation algorithm to solve discrete, noncontinuous, posterior parameter estimation problems

    Directory of Open Access Journals (Sweden)

    J. A. Vrugt

    2011-04-01

Formal and informal Bayesian approaches are increasingly being used to treat forcing, model structural, parameter and calibration data uncertainty, and summarize hydrologic prediction uncertainty. This requires posterior sampling methods that approximate the (evolving) posterior distribution. We recently introduced the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, an adaptive Markov Chain Monte Carlo (MCMC) method that is especially designed to solve complex, high-dimensional and multimodal posterior probability density functions. The method runs multiple chains in parallel, and maintains detailed balance and ergodicity. Here, I present the latest algorithmic developments, and introduce a discrete sampling variant of DREAM that samples the parameter space at fixed points. The development of this new code, DREAM(D), has been inspired by the existing class of integer optimization problems, and the emerging class of experimental design problems. Such non-continuous parameter estimation problems are of considerable theoretical and practical interest. The theory developed herein is applicable to DREAM(ZS) (Vrugt et al., 2011) and MT-DREAM(ZS) (Laloy and Vrugt, 2011) as well. Two case studies involving a sudoku puzzle and a rainfall-runoff model calibration problem are used to illustrate DREAM(D).
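DREAM builds on the differential-evolution Markov chain (DE-MC) move of ter Braak, adding adaptive crossover, outlier handling and (in DREAM(D)) discrete sampling. The underlying proposal, in which each chain jumps along the difference of two other randomly chosen chains, can be sketched on a toy bimodal target (an illustration of the base move only, not the DREAM algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(7)

def log_pi(x):
    """Toy bimodal target: mixture of unit Gaussians at (3,3) and (-3,-3)."""
    return np.logaddexp(-0.5 * np.sum((x - 3.0) ** 2),
                        -0.5 * np.sum((x + 3.0) ** 2))

n_chains, d = 10, 2
gamma = 2.38 / np.sqrt(2 * d)            # standard DE-MC jump scale
X = rng.normal(0.0, 5.0, size=(n_chains, d))
lp = np.array([log_pi(x) for x in X])

for _ in range(3000):
    for i in range(n_chains):
        others = [j for j in range(n_chains) if j != i]
        r1, r2 = rng.choice(others, size=2, replace=False)
        # Differential-evolution proposal: jump along another pair's difference
        prop = X[i] + gamma * (X[r1] - X[r2]) + rng.normal(0.0, 1e-4, size=d)
        lp_prop = log_pi(prop)
        if np.log(rng.random()) < lp_prop - lp[i]:   # Metropolis acceptance
            X[i], lp[i] = prop, lp_prop
```

Because jump directions come from the current population, the proposal scale and orientation adapt automatically to the posterior, which is what lets DREAM handle multimodal targets.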

  19. Understanding for convergence monitoring for probabilistic risk assessment based on Markov Chain Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Joo Yeon; Jang, Han Ki; Jang, Sol Ah; Park, Tae Jin [Korean Association for Radiation Application, Seoul (Korea, Republic of)

    2014-04-15

There is a question of whether the simulation actually leads to draws from its target distribution; the most basic issue is whether such Markov chains can always be constructed and all chain values sampled from them. The problem to be solved is the determination of how many iterations are needed to achieve the target distribution. This problem is addressed by convergence monitoring. In this paper, two widely used methods in MCMC, autocorrelation and the potential scale reduction factor (PSRF), are characterized. There is no general agreement on the subject of convergence. Although it is generally agreed that running n parallel chains in practice is computationally inefficient and unnecessary, running multiple parallel chains is generally applied for convergence monitoring due to easy implementation. The main debate is the number of parallel chains needed. If the convergence properties of the chain are well understood then clearly a single chain suffices. Therefore, autocorrelation using a single chain and multiple parallel chains are tried and their results then compared with each other in this study. The following question is then answered from the two convergence results: have the Markov chain realizations achieved the target distribution?
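The PSRF diagnostic compares between-chain and within-chain variance; values near 1 indicate the parallel chains have mixed into the same distribution. A minimal implementation of the Gelman-Rubin statistic (the basic form, without the later rank-normalized refinements):

```python
import numpy as np

rng = np.random.default_rng(8)

def psrf(chains):
    """Potential scale reduction factor (Gelman-Rubin R-hat) for an
    (m chains) x (n samples) array of scalar draws."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

# Mixed chains: independent draws from the same target -> PSRF ~ 1
good = rng.normal(0.0, 1.0, size=(4, 1000))
# Unmixed chains: each stuck around its own offset -> PSRF >> 1
bad = rng.normal(0.0, 1.0, size=(4, 1000)) + np.array([[0.0], [3.0], [-3.0], [6.0]])

print(psrf(good), psrf(bad))
```

In practice the chains are started from overdispersed points, and sampling continues until PSRF drops below a threshold such as 1.1 for all monitored quantities.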

  20. Bayesian reconstruction of P(r) directly from two-dimensional detector images via a Markov chain Monte Carlo method.

    Science.gov (United States)

    Paul, Sudeshna; Friedman, Alan M; Bailey-Kellogg, Chris; Craig, Bruce A

    2013-04-01

The interatomic distance distribution, P(r), is a valuable tool for evaluating the structure of a molecule in solution and represents the maximum structural information that can be derived from solution scattering data without further assumptions. Most current instrumentation for scattering experiments (typically CCD detectors) generates a finely pixelated two-dimensional image. In continuation of the standard practice with earlier one-dimensional detectors, these images are typically reduced to a one-dimensional profile of scattering intensities, I(q), by circular averaging of the two-dimensional image. Indirect Fourier transformation methods are then used to reconstruct P(r) from I(q). Substantial advantages in data analysis, however, could be achieved by directly estimating the P(r) curve from the two-dimensional images. This article describes a Bayesian framework, using a Markov chain Monte Carlo method, for estimating the parameters of the indirect transform, and thus P(r), directly from the two-dimensional images. Using simulated detector images, it is demonstrated that this method yields P(r) curves nearly identical to the reference P(r). Furthermore, an approach for evaluating spatially correlated errors (such as those that arise from a detector point spread function) is evaluated. Accounting for these errors further improves the precision of the P(r) estimation. Experimental scattering data, where no ground truth reference P(r) is available, are used to demonstrate that this method yields a scattering and detector model that more closely reflects the two-dimensional data, as judged by smaller residuals in cross-validation, than P(r) obtained by indirect transformation of a one-dimensional profile. Finally, the method allows concurrent estimation of the beam center and Dmax, the longest interatomic distance in P(r), as part of the Bayesian Markov chain Monte Carlo method, reducing experimental effort and providing a well defined protocol for these

  1. Quantum speedup of Monte Carlo methods.

    Science.gov (United States)

    Montanaro, Ashley

    2015-09-08

    Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

  2. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.

  3. Monte Carlo integration on GPU

    OpenAIRE

    Kanzaki, J.

    2010-01-01

    We use a graphics processing unit (GPU) for fast computations of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on GPU. By using $W^{+}$ plus multi-gluon production processes at LHC, we test integrated cross sections and execution time for programs in FORTRAN and C on CPU and those on GPU. Integrated results agree with each other within statistical errors. Programs on the GPU run about 50 times faster than those in C...

  4. Markov Chain Monte Carlo Random Effects Modeling in Magnetic Resonance Image Processing Using the BRugs Interface to WinBUGS

    Directory of Open Access Journals (Sweden)

    David G. Gadian

    2011-10-01

    A common feature of many magnetic resonance image (MRI) data processing methods is the voxel-by-voxel (a voxel is a volume element) manner in which the processing is performed. In general, however, MRI data are expected to exhibit some level of spatial correlation, rendering an independent-voxels treatment inefficient in its use of the data. Bayesian random effect models are expected to be more efficient owing to their information-borrowing behaviour. To illustrate the Bayesian random effects approach, this paper outlines a Markov chain Monte Carlo (MCMC) analysis of a perfusion MRI dataset, implemented in R using the BRugs package. BRugs provides an interface to WinBUGS and its GeoBUGS add-on. WinBUGS is a widely used programme for performing MCMC analyses, with a focus on Bayesian random effect models. A simultaneous modeling of both voxels (restricted to a region of interest) and multiple subjects is demonstrated. Despite the low signal-to-noise ratio in the magnetic resonance signal intensity data, useful model signal intensity profiles are obtained. The merits of random effects modeling are discussed in comparison with the alternative approaches based on region-of-interest averaging and repeated independent voxels analysis. This paper focuses on perfusion MRI for the purpose of illustration, the main proposition being that random effects modeling is expected to be beneficial in many other MRI applications in which the signal-to-noise ratio is a limiting factor.

  5. Path-integral Monte Carlo simulations for electronic dynamics on molecular chains. I. Sequential hopping and super exchange

    Science.gov (United States)

    Mühlbacher, Lothar; Ankerhold, Joachim; Escher, Charlotte

    2004-12-01

    An improved real-time quantum Monte Carlo procedure is presented and applied to describe the electronic transfer dynamics along molecular chains. The model consists of discrete electronic sites coupled to a thermal environment which is integrated out exactly within the path integral formulation. The approach is numerically exact and its results reduce to known analytical findings (Marcus theory, golden rule) in proper limits. Special attention is paid to the role of superexchange and sequential hopping at lower temperatures in symmetric donor-bridge-acceptor systems. In contrast to previous approximate studies, superexchange turns out to play a significant role only for extremely high-lying bridges where the transfer is basically frozen or for extremely low temperatures where for weaker dissipation a description in terms of rate constants is no longer feasible. For bridges with increasing length an algebraic decrease of the yield is found for short as well as for long bridges. The approach can be extended to electronic systems with more complicated topologies including impurities and in the presence of external time-dependent forces.

  6. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    CERN Document Server

    Audren, Benjamin; Bird, Simeon; Haehnelt, Martin G.; Viel, Matteo

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Monte Carlo Markov Chains (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservat...

  7. Study on mapping Quantitative Trait Loci for animal complex binary traits using Bayesian-Markov chain Monte Carlo approach

    Institute of Scientific and Technical Information of China (English)

    LIU; Jianfeng; ZHANG; Yuan; ZHANG; Qin; WANG; Lixian; ZHANG; Jigang

    2006-01-01

    It is a challenging issue to map Quantitative Trait Loci (QTL) underlying complex discrete traits, which usually show discontinuous distributions and carry less information, using conventional statistical methods. The Bayesian-Markov chain Monte Carlo (Bayesian-MCMC) approach is a key procedure for mapping QTL for complex binary traits, providing a complete posterior distribution for the QTL parameters using all prior information. As a consequence, Bayesian estimates of all variables of interest can be obtained straightforwardly from their posterior samples simulated by the MCMC algorithm. In our study, the utility of Bayesian-MCMC is demonstrated using several simulated outbred animal full-sib families with different family structures for a complex binary trait underlain by both a QTL and a polygene. Under the identity-by-descent-based variance component random model, three MCMC-based samplers, namely Gibbs sampling, the Metropolis algorithm and reversible jump MCMC, were implemented to generate the joint posterior distribution of all unknowns, from which the QTL parameters were obtained by Bayesian statistical inference. The results showed that the Bayesian-MCMC approach works well and is robust under different family structures and QTL effects. As family size increases and the number of families decreases, the accuracy of the parameter estimates improves. When the true QTL has a small effect, an outbred-population experimental design with large family size is the optimal mapping strategy.

  8. Modeling the effects and uncertainties of contaminated sediment remediation scenarios in a Norwegian fjord by Markov chain Monte Carlo simulation.

    Science.gov (United States)

    Saloranta, Tuomo M; Armitage, James M; Haario, Heikki; Naes, Kristoffer; Cousins, Ian T; Barton, David N

    2008-01-01

    Multimedia environmental fate models are useful tools to investigate the long-term impacts of remediation measures designed to alleviate potential ecological and human health concerns in contaminated areas. Estimating and communicating the uncertainties associated with the model simulations is a critical task for demonstrating the transparency and reliability of the results. The Extended Fourier Amplitude Sensitivity Test (Extended FAST) method for sensitivity analysis and the Bayesian Markov chain Monte Carlo (MCMC) method for uncertainty analysis and model calibration have several advantages over methods typically applied for multimedia environmental fate models. Most importantly, the simulation results and their uncertainties can be anchored to the available observations and their uncertainties. We apply these techniques for simulating the historical fate of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the Grenland fjords, Norway, and for predicting the effects of different contaminated sediment remediation (capping) scenarios on the future levels of PCDD/Fs in cod and crab therein. The remediation scenario simulations show that a significant remediation effect is first seen when significant portions of the contaminated sediment areas are cleaned up, and that an increase in capping area leads to both earlier achievement of good fjord status and narrower uncertainty in the predicted timing for this.

  9. Application of Markov chain Monte Carlo analysis to biomathematical modeling of respirable dust in US and UK coal miners.

    Science.gov (United States)

    Sweeney, Lisa M; Parker, Ann; Haber, Lynne T; Tran, C Lang; Kuempel, Eileen D

    2013-06-01

    A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model.
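    A generic three-compartment structure of the kind described (alveolar deposition, competing clearance and interstitial translocation, irreversible lymph-node sequestration) can be sketched as coupled first-order kinetics. The rate constants and step size below are illustrative placeholders, not the calibrated values from the study:

    ```python
    def simulate(dep_rate, k_clear, k_interst, k_lymph, days, dt=1.0):
        """Forward-Euler integration of a generic three-compartment burden model.

        dA/dt = dep_rate - (k_clear + k_interst) * A   # alveolar region
        dI/dt = k_interst * A - k_lymph * I            # interstitium
        dL/dt = k_lymph * I                            # lymph nodes (irreversible)
        """
        A = I = L = 0.0
        for _ in range(int(days / dt)):
            dA = dep_rate - (k_clear + k_interst) * A
            dI = k_interst * A - k_lymph * I
            dL = k_lymph * I
            A += dA * dt
            I += dI * dt
            L += dL * dt
        return A, I, L

    # Illustrative placeholder rates (per day): the alveolar burden approaches
    # its steady state dep_rate / (k_clear + k_interst), while the lymph-node
    # burden grows monotonically because sequestration is irreversible.
    A, I, L = simulate(dep_rate=1.0, k_clear=0.02, k_interst=0.002,
                       k_lymph=0.0002, days=10000)
    ```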

  10. Bayesian parameter inference by Markov chain Monte Carlo with hybrid fitness measures: theory and test in apoptosis signal transduction network.

    Science.gov (United States)

    Murakami, Yohei; Takada, Shoji

    2013-01-01

    When model parameters in systems biology are not available from experiments, they need to be inferred so that the resulting simulation reproduces the experimentally known phenomena. For this purpose, Bayesian statistics with Markov chain Monte Carlo (MCMC) is a useful method. Conventional MCMC needs a likelihood to evaluate the posterior distribution of acceptable parameters, while approximate Bayesian computation (ABC) MCMC evaluates the posterior distribution with a qualitative fitness measure. However, neither algorithm can deal with a mixture of quantitative (i.e., likelihood) and qualitative fitness measures simultaneously. Here, to deal with this mixture, we formulated a Bayesian formula for hybrid fitness measures (HFM) and implemented it in MCMC (MCMC-HFM). We first tested MCMC-HFM on a kinetic toy model with a positive feedback. Inferring the kinetic parameters mainly related to the positive feedback, we found that MCMC-HFM reliably infers them using both qualitative and quantitative fitness measures. We then applied MCMC-HFM to a previously proposed apoptosis signal transduction network. For the kinetic parameters related to implicit positive feedbacks, which are important for the bistability and irreversibility of the output, MCMC-HFM reliably inferred these parameters. In particular, some kinetic parameters that have experimental estimates were inferred without using these data, and the results were consistent with experiments. Moreover, for some parameters, the mixed use of quantitative and qualitative fitness measures narrowed down the acceptable range of parameters.

  11. Application of the Markov Chain Monte Carlo method for snow water equivalent retrieval based on passive microwave measurements

    Science.gov (United States)

    Pan, J.; Durand, M. T.; Vanderjagt, B. J.

    2015-12-01

    The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it through a series of new states by comparing the posterior probability of the simulated snow microwave signals before and after each random-walk step. It thereby approximates the probability of the snow/soil parameters conditioned on the measured microwave TB signals at different bands. Although this method can solve for all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters to calculate the posterior probability. How the priors influence the SWE retrieval is a major concern. Therefore, in this paper, a sensitivity test is first carried out to study how accurate the snow emission models and how explicit the snow priors need to be to keep the SWE error within a certain bound. Synthetic TB simulated from the measured snow properties, plus a 2-K observation error, is used for this purpose. The test aims to provide guidance on applying MCMC under different circumstances. The method is then applied to snowpits at different sites, including Sodankyla, Finland; Churchill, Canada; and Colorado, USA, using TB measured by ground-based radiometers at different bands. Building on the previous work, the errors in these practical cases are studied, and the error sources are separated and quantified.
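    The random-walk update described above is, at its core, the Metropolis algorithm: perturb the current state and accept the move with probability given by the posterior ratio. A minimal sketch with a one-dimensional Gaussian pseudo-posterior standing in for the snow-parameter/TB likelihood (all numbers illustrative assumptions, not values from the study):

    ```python
    import random
    import math

    def metropolis(log_post, x0, step, n, seed=0):
        """Random-walk Metropolis: perturb the state, then accept the move
        with probability min(1, posterior_ratio)."""
        rng = random.Random(seed)
        x, lp = x0, log_post(x0)
        chain = []
        for _ in range(n):
            xp = x + step * rng.gauss(0.0, 1.0)   # random-walk proposal
            lpp = log_post(xp)
            if math.log(rng.random()) < lpp - lp:  # compare posteriors
                x, lp = xp, lpp
            chain.append(x)
        return chain

    # Hypothetical 1-D stand-in for the posterior: a Gaussian pseudo-likelihood
    # of a simulated TB around an observed 250 K with a 2-K observation error.
    log_post = lambda x: -0.5 * (x - 250.0) ** 2 / 2.0 ** 2
    chain = metropolis(log_post, x0=240.0, step=2.0, n=20000)
    mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
    ```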

  12. Markov Chain Monte Carlo Joint Analysis of Chandra X-Ray Imaging Spectroscopy and Sunyaev-Zel'dovich Effect Data

    Science.gov (United States)

    Bonamente, Massimiliano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.

    2004-01-01

    X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, and also of derivative quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.

  13. A trans-dimensional Bayesian Markov chain Monte Carlo algorithm for model assessment using frequency-domain electromagnetic data

    Science.gov (United States)

    Minsley, Burke J.

    2011-01-01

    A meaningful interpretation of geophysical measurements requires an assessment of the space of models that are consistent with the data, rather than just a single, ‘best’ model which does not convey information about parameter uncertainty. For this purpose, a trans-dimensional Bayesian Markov chain Monte Carlo (MCMC) algorithm is developed for assessing frequency-domain electromagnetic (FDEM) data acquired from airborne or ground-based systems. By sampling the distribution of models that are consistent with measured data and any prior knowledge, valuable inferences can be made about parameter values such as the likely depth to an interface, the distribution of possible resistivity values as a function of depth and non-unique relationships between parameters. The trans-dimensional aspect of the algorithm allows the number of layers to be a free parameter that is controlled by the data, where models with fewer layers are inherently favoured, which provides a natural measure of parsimony and a significant degree of flexibility in parametrization. The MCMC algorithm is used with synthetic examples to illustrate how the distribution of acceptable models is affected by the choice of prior information, the system geometry and configuration and the uncertainty in the measured system elevation. An airborne FDEM data set that was acquired for the purpose of hydrogeological characterization is also studied. The results compare favorably with traditional least-squares analysis, borehole resistivity and lithology logs from the site, and also provide new information about parameter uncertainty necessary for model assessment.

  14. Bayesian uncertainty quantification for flows in heterogeneous porous media using reversible jump Markov chain Monte Carlo methods

    KAUST Repository

    Mondal, A.

    2010-03-01

    In this paper, we study the uncertainty quantification in inverse problems for flows in heterogeneous porous media. Reversible jump Markov chain Monte Carlo (MCMC) algorithms are used for hierarchical modeling of channelized permeability fields. Within each channel, the permeability is assumed to have a lognormal distribution. Uncertainty quantification in history matching is carried out hierarchically by constructing geologic facies boundaries as well as permeability fields within each facies using dynamic data such as production data. The search with the Metropolis-Hastings algorithm results in a very low acceptance rate and, consequently, the computations are CPU demanding. To speed up the computations, we use a two-stage MCMC that utilizes upscaled models to screen the proposals. In our numerical results, we assume that the channels intersect the wells and the intersection locations are known. Our results show that the proposed algorithms are capable of capturing the channel boundaries and describe the permeability variations within the channels using dynamic production history at the wells. © 2009 Elsevier Ltd. All rights reserved.
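    The two-stage screening idea can be sketched generically (in the delayed-acceptance style of Christen and Fox): a cheap surrogate posterior filters proposals, and a second-stage ratio corrects for the screening so that the expensive posterior is still targeted exactly. The toy densities below are illustrative stand-ins for the upscaled and fine-scale flow models, not the paper's actual simulators:

    ```python
    import random
    import math

    def two_stage_mh(log_fine, log_coarse, x0, step, n, seed=0):
        """Two-stage Metropolis-Hastings with coarse-model screening.

        Stage 1 accepts with min(1, pi_c(y)/pi_c(x)); only then is the fine
        model evaluated, and stage 2 accepts with
        min(1, [pi_f(y) pi_c(x)] / [pi_f(x) pi_c(y)]), which restores exact
        sampling from the fine posterior for a symmetric proposal.
        """
        rng = random.Random(seed)
        x, lf, lc = x0, log_fine(x0), log_coarse(x0)
        chain, fine_evals = [], 0
        for _ in range(n):
            y = x + step * rng.gauss(0.0, 1.0)
            lcy = log_coarse(y)
            if math.log(rng.random()) < lcy - lc:        # stage 1: cheap screen
                lfy = log_fine(y)
                fine_evals += 1
                if math.log(rng.random()) < (lfy - lf) - (lcy - lc):  # stage 2
                    x, lf, lc = y, lfy, lcy
            chain.append(x)
        return chain, fine_evals

    # Toy stand-ins: the "fine" posterior is N(0, 1); the "coarse" screen is a
    # cheap, slightly biased N(0.2, 1.5**2) surrogate.
    fine = lambda x: -0.5 * x * x
    coarse = lambda x: -0.5 * (x - 0.2) ** 2 / 2.25
    chain, evals = two_stage_mh(fine, coarse, 0.0, 1.0, 30000)
    ```

    Proposals rejected at stage 1 never trigger a fine-model solve, which is where the CPU savings come from.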

  15. Multi-Resolution Markov-Chain-Monte-Carlo Approach for System Identification with an Application to Finite-Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, G; Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G

    2005-02-07

    Estimating unknown system configurations/parameters by combining system knowledge gained from a computer simulation model on one hand and from observed data on the other hand is challenging. An example of such an inverse problem is detecting and localizing potential flaws or changes in a structure by using a finite-element model and measured vibration/displacement data. We propose a probabilistic approach based on Bayesian methodology. This approach yields not only a single best-guess solution, but a posterior probability distribution over the parameter space. In addition, the Bayesian approach provides a natural framework to accommodate prior knowledge. A Markov chain Monte Carlo (MCMC) procedure is proposed to generate samples from the posterior distribution (an ensemble of likely system configurations given the data). The MCMC procedure proposed explores the parameter space at different resolutions (scales), resulting in a more robust and efficient procedure. The large-scale exploration steps are carried out using coarser-resolution finite-element models, yielding a considerable decrease in computational time, which can be crucial for large finite-element models. An application is given using synthetic displacement data from a simple cantilever beam, with MCMC exploration carried out at three different resolutions.

  16. Assimilation of Satellite Soil Moisture observation with the Particle Filter-Markov Chain Monte Carlo and Geostatistical Modeling

    Science.gov (United States)

    Moradkhani, Hamid; Yan, Hongxiang

    2016-04-01

    Soil moisture simulation and prediction are increasingly used to characterize agricultural droughts, but the process suffers from data scarcity and quality issues. Satellite soil moisture observations can be used to improve model predictions through data assimilation. Remote sensing products, however, are typically discontinuous in spatial and temporal coverage, while simulated soil moisture products are potentially biased due to errors in forcing data and parameters, and deficiencies of model physics. This study attempts to provide a detailed analysis of the joint and separate assimilation of streamflow and Advanced Scatterometer (ASCAT) surface soil moisture into a fully distributed hydrologic model, with the use of the recently developed particle filter-Markov chain Monte Carlo (PF-MCMC) method. A geostatistical model is introduced to overcome the satellite soil moisture discontinuity issue where satellite data do not cover the whole study region or are significantly biased, and where the dominant land cover is dense vegetation. The results indicate that joint assimilation of soil moisture and streamflow has minimal effect in improving the streamflow prediction; however, the surface soil moisture field is significantly improved. The combination of DA and the geostatistical approach can further improve the surface soil moisture prediction.

  17. Dynamic temperature selection for parallel tempering in Markov chain Monte Carlo simulations

    Science.gov (United States)

    Vousden, W. D.; Farr, W. M.; Mandel, I.

    2016-01-01

    Modern problems in astronomical Bayesian inference require efficient methods for sampling from complex, high-dimensional, often multimodal probability distributions. Most popular methods, such as MCMC sampling, perform poorly on strongly multimodal probability distributions, rarely jumping between modes or settling on just one mode without finding others. Parallel tempering addresses this problem by sampling simultaneously with separate Markov chains from tempered versions of the target distribution with reduced contrast levels. Gaps between modes can be traversed at higher temperatures, while individual modes can be efficiently explored at lower temperatures. In this paper, we investigate how one might choose the ladder of temperatures to achieve more efficient sampling, as measured by the autocorrelation time of the sampler. In particular, we present a simple, easily implemented algorithm for dynamically adapting the temperature configuration of a sampler while sampling. This algorithm dynamically adjusts the temperature spacing to achieve a uniform rate of exchanges between chains at neighbouring temperatures. We compare the algorithm to conventional geometric temperature configurations on a number of test distributions and on an astrophysical inference problem, reporting efficiency gains by a factor of 1.2-2.5 over a well-chosen geometric temperature configuration and by a factor of 1.5-5 over a poorly chosen configuration. On all of these problems, a sampler using the dynamical adaptations to achieve uniform acceptance ratios between neighbouring chains outperforms one that does not.
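    The mechanics of parallel tempering can be sketched in a few lines: each chain samples a tempered target pi(x)^beta, and neighbouring chains periodically attempt state swaps with acceptance min(1, exp((b_i - b_j)(L_j - L_i))), where L is the log target. The sketch below uses a fixed ladder and omits the paper's dynamic adaptation of the temperature spacing; the bimodal toy target and ladder values are illustrative assumptions:

    ```python
    import random
    import math

    def parallel_tempering(log_target, betas, step, n, seed=0):
        """Minimal parallel tempering: one random-walk Metropolis update per
        chain per sweep (targeting pi^beta), then one swap attempt between a
        random pair of neighbouring temperatures."""
        rng = random.Random(seed)
        xs = [0.0] * len(betas)
        cold = []
        for _ in range(n):
            for i, b in enumerate(betas):          # within-chain updates
                y = xs[i] + step * rng.gauss(0.0, 1.0)
                if math.log(rng.random()) < b * (log_target(y) - log_target(xs[i])):
                    xs[i] = y
            i = rng.randrange(len(betas) - 1)      # neighbour swap attempt
            dl = log_target(xs[i + 1]) - log_target(xs[i])
            if math.log(rng.random()) < (betas[i] - betas[i + 1]) * dl:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
            cold.append(xs[0])                     # beta = 1 chain is the target
        return cold

    def log_target(x):
        """Well-separated bimodal toy target (modes at -4 and +4)."""
        a, b = -0.5 * (x + 4.0) ** 2, -0.5 * (x - 4.0) ** 2
        m = max(a, b)
        return m + math.log(math.exp(a - m) + math.exp(b - m))

    # Hot chains (small beta) see a flattened landscape and carry the sampler
    # between modes that the cold chain alone would rarely cross.
    cold = parallel_tempering(log_target, betas=[1.0, 0.5, 0.25, 0.1],
                              step=1.5, n=40000)
    ```

    The dynamic scheme of Vousden et al. would additionally nudge the betas during sampling toward uniform swap rates between neighbours, rather than fixing them in advance.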

  18. Estimation of trace gas fluxes with objectively determined basis functions using reversible-jump Markov chain Monte Carlo

    Science.gov (United States)

    Lunt, Mark F.; Rigby, Matt; Ganesan, Anita L.; Manning, Alistair J.

    2016-09-01

    Atmospheric trace gas inversions often attempt to attribute fluxes to a high-dimensional grid using observations. To make this problem computationally feasible, and to reduce the degree of under-determination, some form of dimension reduction is usually performed. Here, we present an objective method for reducing the spatial dimension of the parameter space in atmospheric trace gas inversions. In addition to solving for a set of unknowns that govern emissions of a trace gas, we set out a framework that considers the number of unknowns to itself be an unknown. We rely on the well-established reversible-jump Markov chain Monte Carlo algorithm to use the data to determine the dimension of the parameter space. This framework provides a single-step process that solves for both the resolution of the inversion grid, as well as the magnitude of fluxes from this grid. Therefore, the uncertainty that surrounds the choice of aggregation is accounted for in the posterior parameter distribution. The posterior distribution of this transdimensional Markov chain provides a naturally smoothed solution, formed from an ensemble of coarser partitions of the spatial domain. We describe the form of the reversible-jump algorithm and how it may be applied to trace gas inversions. We build the system into a hierarchical Bayesian framework in which other unknown factors, such as the magnitude of the model uncertainty, can also be explored. A pseudo-data example is used to show the usefulness of this approach when compared to a subjectively chosen partitioning of a spatial domain. An inversion using real data is also shown to illustrate the scales at which the data allow for methane emissions over north-west Europe to be resolved.

  19. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    Science.gov (United States)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model...

  20. A MATLAB Package for Markov Chain Monte Carlo with a Multi-Unidimensional IRT Model

    Directory of Open Access Journals (Sweden)

    Yanyan Sheng

    2008-11-01

    Unidimensional item response theory (IRT) models are useful when each item is designed to measure some facet of a unified latent trait. In practical applications, items are not necessarily measuring the same underlying trait, and hence the more general multi-unidimensional model should be considered. This paper provides the requisite information and a description of software that implements the Gibbs sampler for such models with two item parameters and a normal ogive form. The software developed is the MATLAB package IRTmu2no. The package is flexible enough to allow a user to simulate binary response data with multiple dimensions, set the number of total or burn-in iterations, specify starting values or prior distributions for model parameters, check convergence of the Markov chain, and obtain Bayesian fit statistics. Illustrative examples are provided to demonstrate and validate the use of the software package.

  1. CosmoPMC: Cosmology Population Monte Carlo

    CERN Document Server

    Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren

    2011-01-01

    We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Monte-Carlo Markov chain (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.

  2. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
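    The telescoping identity at the heart of MLMC writes the finest-level expectation as a coarse estimate plus a sum of level-to-level corrections, E[P_L] = E[P_0] + sum over l of E[P_l - P_(l-1)], each correction estimated from coupled samples (the same random draw evaluated at two neighbouring levels). A hedged toy sketch in which a truncated series stands in for the PDE discretization hierarchy:

    ```python
    import random
    import math

    def mlmc(sampler, approx, n_per_level, L, seed=0):
        """Multilevel Monte Carlo telescoping estimator:
        E[P_L] ~ E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}],
        with each correction averaged over coupled samples."""
        rng = random.Random(seed)
        total = 0.0
        for l in range(L + 1):
            s = 0.0
            for _ in range(n_per_level[l]):
                x = sampler(rng)  # same draw used at both levels (coupling)
                s += approx(x, l) - (approx(x, l - 1) if l > 0 else 0.0)
            total += s / n_per_level[l]
        return total

    # Toy problem: approximate E[exp(X)], X ~ N(0, 1) (exact value sqrt(e)),
    # where "level l" truncates the exponential series after l + 3 terms, so
    # level-to-level corrections shrink rapidly and the cheap coarse level
    # absorbs most of the sampling effort.
    approx = lambda x, l: sum(x ** k / math.factorial(k) for k in range(l + 3))
    est = mlmc(lambda rng: rng.gauss(0.0, 1.0), approx,
               [40000, 8000, 2000, 500, 200], 4)
    ```

    The decreasing sample counts per level mirror the MLMC cost balance: correction variances fall with level, so fewer expensive fine-level samples are needed for a given total error.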

  3. Inversion of coupled carbon-nitrogen model parameters against multiple datasets using Markov chain Monte Carlo methodology

    Science.gov (United States)

    Yang, Y.; Zhou, X.; Weng, E.; Luo, Y.

    2010-12-01

    The Markov chain Monte Carlo (MCMC) method has been widely used to estimate terrestrial ecosystem model parameters. However, inverse analysis has mainly been applied to estimate parameters in terrestrial ecosystem carbon models, and not yet to invert terrestrial nitrogen model parameters. In this study, Bayesian probability inversion with the MCMC technique was applied to invert model parameters in a coupled carbon-nitrogen model, and the trained ecosystem model was then used to predict nitrogen pool sizes at the Duke Forest FACE site. We used datasets of soil respiration, nitrogen mineralization, nitrogen uptake, and carbon and nitrogen pools in wood, foliage, litterfall, microbes, the forest floor, and mineral soil under ambient and elevated CO2 plots from 1996-2005. Our results showed that the initial values of the C pools in leaf, wood, root, litter, microbial and forest floor compartments were well constrained. The transfer coefficients from the pools of leaf biomass, woody biomass, root biomass, litter and forest floor were also well constrained by the actual measurements. The observed datasets gave moderate information on the transfer coefficient from the slow soil carbon pool. The nitrogen-related parameters, such as the C:N ratios in plant, litter and soil, were also well constrained, as were the parameters describing nitrogen dynamics (i.e., nitrogen uptake, nitrogen loss, and nitrogen input through biological fixation and deposition). The carbon and nitrogen pool sizes predicted by the constrained ecosystem model were consistent with the observed values. Overall, these results suggest that the MCMC inversion technique is an effective method for synthesizing information from various sources to predict the responses of ecosystem carbon and nitrogen cycling to elevated CO2.

  4. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence.

    Science.gov (United States)

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and transitional family studies. The "missing heritability" has been suggested to be due to lack of studies focused on epistasis, also called gene-gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficient detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called "Epistasis Test in Meta-Analysis" (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates in ETMA, individual data analysis and conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene-gene interactions in the renin-angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html].

  5. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence.

    Directory of Open Access Journals (Sweden)

    Chin Lin

    Full Text Available Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and transitional family studies. The "missing heritability" has been suggested to be due to lack of studies focused on epistasis, also called gene-gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficient detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called "Epistasis Test in Meta-Analysis" (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates in ETMA, individual data analysis and conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene-gene interactions in the renin-angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html].

  6. Weighted maximum posterior marginals for random fields using an ensemble of conditional densities from multiple Markov chain Monte Carlo simulations.

    Science.gov (United States)

    Monaco, James Peter; Madabhushi, Anant

    2011-07-01

    The ability of classification systems to adjust their performance (sensitivity/specificity) is essential for tasks in which certain errors are more significant than others. For example, mislabeling cancerous lesions as benign is typically more detrimental than mislabeling benign lesions as cancerous. Unfortunately, methods for modifying the performance of Markov random field (MRF) based classifiers are noticeably absent from the literature, and thus most such systems restrict their performance to a single, static operating point (a paired sensitivity/specificity). To address this deficiency we present weighted maximum posterior marginals (WMPM) estimation, an extension of maximum posterior marginals (MPM) estimation. Whereas the MPM cost function penalizes each error equally, the WMPM cost function allows misclassifications associated with certain classes to be weighted more heavily than others. This creates a preference for specific classes, and consequently a means for adjusting classifier performance. Realizing WMPM estimation (like MPM estimation) requires estimates of the posterior marginal distributions. The most prevalent means for estimating these--proposed by Marroquin--utilizes a Markov chain Monte Carlo (MCMC) method. Though Marroquin's method (M-MCMC) yields estimates that are sufficiently accurate for MPM estimation, they are inadequate for WMPM. To more accurately estimate the posterior marginals we present an equally simple, but more effective extension of the MCMC method (E-MCMC). Assuming an identical number of iterations, E-MCMC as compared to M-MCMC yields estimates with higher fidelity, thereby 1) allowing a far greater number and diversity of operating points and 2) improving overall classifier performance. 
To illustrate the utility of WMPM and compare the efficacies of M-MCMC and E-MCMC, we integrate them into our MRF-based classification system for detecting cancerous glands in (whole-mount or quarter) histological sections of the prostate.
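    The weighting idea itself can be illustrated in a few lines. This sketch assumes hypothetical per-pixel posterior marginals and invented class weights; it shows only the decision rule, not the MRF model or the MCMC estimation of the marginals.

```python
import numpy as np

# Schematic weighted-MPM decision rule: given posterior marginals per pixel,
# up-weighting the "cancer" class trades specificity for sensitivity.
# The posteriors and weights below are invented for illustration.
posteriors = np.array([
    [0.70, 0.30],    # pixel strongly benign
    [0.55, 0.45],    # borderline pixel
    [0.20, 0.80],    # pixel strongly cancerous
])  # columns: P(benign | data), P(cancer | data)

def wmpm_labels(post, weights):
    """Label each pixel by the class maximizing weight * posterior marginal."""
    return np.argmax(post * weights, axis=1)

print(wmpm_labels(posteriors, np.array([1.0, 1.0])))   # unweighted MPM
print(wmpm_labels(posteriors, np.array([1.0, 1.5])))   # cancer errors weighted more
```

    With equal weights the borderline pixel is labeled benign; raising the cancer weight flips it, which is exactly the mechanism for moving along the sensitivity/specificity curve.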

  7. Estimation of soil salinity by using Markov Chain Monte Carlo simulation for multi-configuration electromagnetic induction measurements

    Science.gov (United States)

    Jadoon, K. Z.; Altaf, M. U.; McCabe, M. F.; Hoteit, I.; Moghadas, D.

    2014-12-01

    In arid and semi-arid regions, soil salinity has a major impact on agro-ecosystems, agricultural productivity, environment and sustainability. High levels of soil salinity adversely affect plant growth and productivity, soil and water quality, and may eventually result in soil erosion and land degradation. Being essentially a hazard, it's important to monitor and map soil salinity at an early stage to effectively use soil resources and maintain soil salinity level below the salt tolerance of crops. In this respect, low frequency electromagnetic induction (EMI) systems can be used as a noninvasive method to map the distribution of soil salinity at the field scale and at a high spatial resolution. In this contribution, an EMI system (the CMD Mini-Explorer) is used to estimate soil salinity using a Bayesian approach implemented via a Markov chain Monte Carlo (MCMC) sampling for inversion of multi-configuration EMI measurements. In-situ and EMI measurements were conducted across a farm where Acacia trees are irrigated with brackish water using a drip irrigation system. The electromagnetic forward model is based on the full solution of Maxwell's equation, and the subsurface is considered as a three-layer problem. In total, five parameters (electrical conductivity of three layers and thickness of top two layers) were inverted and modeled electrical conductivities were converted into the universal standard of soil salinity measurement (i.e. using the method of electrical conductivity of a saturated soil paste extract). Simulation results demonstrate that the proposed scheme successfully recovers soil salinity and reduces the uncertainties in the prior estimate. Analysis of the resulting posterior distribution of parameters indicates that electrical conductivity of the top two layers and the thickness of the first layer are well constrained by the EMI measurements. 
The proposed approach allows for quantitative mapping and monitoring of the spatial electrical conductivity

  8. Phase-coexistence simulations of fluid mixtures by the Markov Chain Monte Carlo method using single-particle models

    KAUST Repository

    Li, Jun

    2013-09-01

    We present a single-particle Lennard-Jones (L-J) model for CO2 and N2. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO2 and N2 agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models. That is, with properly selected parameters, the single-particle models have similar accuracy in predicting gas-phase properties as more complex, state-of-the-art molecular models. To further test these single-particle models, three binary mixtures of CH4, CO2 and N2 are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy in the gas phase as traditional models, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data is not available. © 2013 Elsevier Inc.

  9. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    Science.gov (United States)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
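    The kind of integration error at issue can be demonstrated on a toy store. The linear reservoir, forcing, and step sizes below are invented for illustration; the point is only that a first-order, explicit, fixed-step scheme with a coarse step produces a large error that a finer step removes.

```python
import math

# Linear reservoir dS/dt = P - S/tau: a minimal stand-in for one store of a
# lumped conceptual rainfall-runoff model (P, tau, and the step sizes are
# hypothetical; the exact solution is known in closed form).
P, tau, S0, T = 5.0, 2.0, 0.0, 2.0

def exact(t):
    return P * tau + (S0 - P * tau) * math.exp(-t / tau)

def euler(dt):
    """First-order, explicit, fixed-step integration of the water balance."""
    S = S0
    for _ in range(int(round(T / dt))):
        S += dt * (P - S / tau)
    return S

err_coarse = abs(euler(1.0) - exact(T))    # coarse (daily-sized) step
err_fine = abs(euler(0.01) - exact(T))     # well-resolved step
print(err_coarse, err_fine)
```

    With the coarse step the numerical error is of the same order as the signal itself; in an MCMC calibration such error is absorbed into the parameters and can distort the posterior, which is the failure mode the abstract describes.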

  10. Monte Carlo Hamiltonian: Linear Potentials

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER

    2002-01-01

    We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2; and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.

  11. Proton Upset Monte Carlo Simulation

    Science.gov (United States)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  12. Monte Carlo Particle Lists: MCPL

    CERN Document Server

    Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi

    2016-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
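    The idea of a portable binary particle list can be sketched as follows. This is not the actual MCPL layout (MCPL defines its own header and record format in the reference above); the magic string, field set, and packing below are invented for illustration of the round-trip concept.

```python
import struct, tempfile, os

# Schematic binary particle list. NOT the actual MCPL format: just fixed-size
# packed records of particle state behind a tiny invented header.
# Fields: PDG code (int32), position (3 doubles), direction (3 doubles),
# kinetic energy in MeV (double), statistical weight (double).
REC = struct.Struct("<i8d")

def write_particles(path, particles):
    with open(path, "wb") as f:
        f.write(struct.pack("<4sI", b"DEMO", len(particles)))  # magic + count
        for p in particles:
            f.write(REC.pack(*p))

def read_particles(path):
    with open(path, "rb") as f:
        magic, n = struct.unpack("<4sI", f.read(8))
        assert magic == b"DEMO"
        return [REC.unpack(f.read(REC.size)) for _ in range(n)]

particles = [
    (2112, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.5e-8, 1.0),   # a thermal neutron
    (22,   1.0, 2.0, 3.0, 0.0, 1.0, 0.0, 1.17,   0.5),   # a gamma
]
tmp = os.path.join(tempfile.mkdtemp(), "demo.bin")
write_particles(tmp, particles)
assert read_particles(tmp) == particles
```

    A fixed record size is what makes such files easy to stream, seek, and convert between simulation packages.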

  13. Markov Chain Monte Carlo estimation of species distributions: a case study of the swift fox in western Kansas

    Science.gov (United States)

    Sargeant, Glen A.; Sovada, Marsha A.; Slivinski, Christiane C.; Johnson, Douglas H.

    2005-01-01

    Accurate maps of species distributions are essential tools for wildlife research and conservation. Unfortunately, biologists often are forced to rely on maps derived from observed occurrences recorded opportunistically during observation periods of variable length. Spurious inferences are likely to result because such maps are profoundly affected by the duration and intensity of observation and by methods used to delineate distributions, especially when detection is uncertain. We conducted a systematic survey of swift fox (Vulpes velox) distribution in western Kansas, USA, and used Markov chain Monte Carlo (MCMC) image restoration to rectify these problems. During 1997–1999, we searched 355 townships (ca. 93 km²) 1–3 times each for an average cost of $7,315 per year and achieved a detection rate (probability of detecting swift foxes, if present, during a single search) of 0.69 (95% Bayesian confidence interval [BCI] = [0.60, 0.77]). Our analysis produced an estimate of the underlying distribution, rather than a map of observed occurrences, that reflected the uncertainty associated with estimates of model parameters. To evaluate our results, we analyzed simulated data with similar properties. Results of our simulations suggest negligible bias and good precision when probabilities of detection on ≥1 survey occasions (cumulative probabilities of detection) exceed 0.65. Although the use of MCMC image restoration has been limited by theoretical and computational complexities, alternatives do not possess the same advantages. Image models accommodate uncertain detection, do not require spatially independent data or a census of map units, and can be used to estimate species distributions directly from observations without relying on habitat covariates or parameters that must be estimated subjectively. These features facilitate economical surveys of large regions, the detection of temporal trends in distribution, and assessments of landscape-level relations between
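    The cumulative detection probability invoked above follows from independent searches: p_cum = 1 - (1 - p)^n. A minimal check using the reported single-search detection rate p = 0.69:

```python
def cumulative_detection(p_single, n_surveys):
    """Probability of detecting the species on at least one of n independent surveys."""
    return 1.0 - (1.0 - p_single) ** n_surveys

# With the reported p = 0.69, even two visits push cumulative detection well
# past the 0.65 threshold the simulations identify as sufficient.
p = 0.69
for n in (1, 2, 3):
    print(n, round(cumulative_detection(p, n), 3))
```

    This is why 1-3 searches per township sufficed: two searches already give a cumulative detection probability above 0.9.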

  14. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    Energy Technology Data Exchange (ETDEWEB)

    Audren, Benjamin; Lesgourgues, Julien [Institut de Théorie des Phénomènes Physiques, École PolytechniqueFédérale de Lausanne, CH-1015, Lausanne (Switzerland); Bird, Simeon [Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ, 08540 (United States); Haehnelt, Martin G. [Kavli Institute for Cosmology and Institute of Astronomy, Madingley Road, Cambridge, CB3 0HA (United Kingdom); Viel, Matteo, E-mail: benjamin.audren@epfl.ch, E-mail: julien.lesgourgues@cern.ch, E-mail: spb@ias.edu, E-mail: haehnelt@ast.cam.ac.uk, E-mail: viel@oats.inaf.it [INAF/Osservatorio Astronomico di Trieste, Via Tiepolo 11, 34143, Trieste (Italy)

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum, and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors, and assuming further conservatively that the uncorrelated error rises above 2% at k = 0.4 h/Mpc and z = 0.5, we find that a future Euclid-like cosmic shear/galaxy survey achieves a 1-σ error on M{sub ν} close to 32 meV/25 meV, sufficient for detecting the total neutrino mass with good significance. If the residual uncorrelated error indeed rises rapidly towards smaller scales in the non-linear regime, as we have assumed here, then the data on non-linear scales do not increase the sensitivity to the total neutrino mass. Assuming instead a ten times smaller theoretical error with the same scale dependence, the error on the total neutrino mass decreases moderately from σ(M{sub ν}) = 18 meV to 14 meV when mildly non-linear scales with 0.1 h/Mpc < k < 0.6 h/Mpc are included in the analysis of the galaxy survey data.

  15. Applications of Monte Carlo Methods in Calculus.

    Science.gov (United States)

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
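    The first application mentioned, Monte Carlo evaluation of Riemann sums, amounts to a randomized estimate of a definite integral. A minimal sketch (the integrand and interval are arbitrary examples):

```python
import random

random.seed(1)

def mc_integral(f, a, b, n=200_000):
    """Monte Carlo estimate of the integral of f over [a, b]:
    (b - a) times the average of f at uniformly random points."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Integral of x^2 over [0, 2] is 8/3 (approx. 2.667).
est = mc_integral(lambda x: x * x, 0.0, 2.0)
print(est)
```

    The same machinery extends directly to the other classroom uses listed: sampling a function at random points also yields Monte Carlo estimates of maxima, minima, and mean values.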

  16. Deciphering the Glacial-Interglacial Landscape History in Greenland Based on Markov Chain Monte Carlo Inversion of Existing 10Be-26Al Data

    DEFF Research Database (Denmark)

    Strunk, Astrid; Knudsen, Mads Faurschou; Larsen, Nicolaj Krog;

    investigate the landscape history in eastern and western Greenland by applying a novel Markov Chain Monte Carlo (MCMC) inversion approach to the existing 10Be-26Al data from these regions. The new MCMC approach allows us to constrain the most likely landscape history based on comparisons between simulated...... into account global changes in climate. The other free parameters include the glacial and interglacial erosion rates as well as the timing of the Holocene deglaciation. The model essentially simulates numerous different landscape scenarios based on these four parameters and zooms in on the most plausible...

  17. Distance-Guided Forward and Backward Chain-Growth Monte Carlo Method for Conformational Sampling and Structural Prediction of Antibody CDR-H3 Loops.

    Science.gov (United States)

    Tang, Ke; Zhang, Jinfeng; Liang, Jie

    2017-01-10

    Antibodies recognize antigens through the complementary determining regions (CDR) formed by six hypervariable loops crucial for the diversity of antigen specificities. Among the six CDR loops, the H3 loop is the most challenging to predict because of its much higher variation in sequence length and identity, resulting in a much larger and more complex structural space compared to the other five loops. We developed a novel method based on a chain-growth sequential Monte Carlo method, called distance-guided sequential chain-growth Monte Carlo for H3 loops (DiSGro-H3). The new method samples protein chains in both forward and backward directions. It can efficiently generate low-energy, near-native H3 loop structures using the conformation types predicted from the sequences of H3 loops. DiSGro-H3 performs significantly better than another ab initio method, RosettaAntibody, in both sampling and prediction, while taking less computational time. It performs comparably to template-based methods. As an ab initio method, DiSGro-H3 offers satisfactory accuracy while being able to predict any H3 loop without templates.
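    The chain-growth idea (without the distance guidance or backward growth of DiSGro-H3) can be illustrated with classic Rosenbluth sampling of self-avoiding walks on a square lattice: grow the chain one monomer at a time, choose only among unoccupied neighbors, and carry a weight that corrects for the bias. Lattice, chain length, and sample size here are illustrative.

```python
import random

random.seed(42)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def grow_chain(n):
    """Grow one n-step self-avoiding walk by sequential (chain-growth)
    Monte Carlo, returning its Rosenbluth weight (0 if growth got trapped)."""
    pos = (0, 0)
    occupied = {pos}
    weight = 1.0
    for _ in range(n):
        free = [(pos[0] + dx, pos[1] + dy) for dx, dy in MOVES
                if (pos[0] + dx, pos[1] + dy) not in occupied]
        if not free:                 # dead end: chain is trapped
            return 0.0
        weight *= len(free)          # Rosenbluth weight correction
        pos = random.choice(free)
        occupied.add(pos)
    return weight

# The mean Rosenbluth weight is an unbiased estimate of the number of
# self-avoiding walks of length n (exactly 284 for n = 5 on this lattice).
n, samples = 5, 20_000
est = sum(grow_chain(n) for _ in range(samples)) / samples
print(est)
```

    Replacing the uniform choice among free sites with an energy- and distance-biased choice (and adjusting the weight accordingly) is the step that turns this toy into a loop-sampling method of the DiSGro type.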

  18. Density matrix quantum Monte Carlo

    CERN Document Server

    Blunt, N S; Spencer, J S; Foulkes, W M C

    2013-01-01

    This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...

  19. Efficient kinetic Monte Carlo simulation

    Science.gov (United States)

    Schulze, Tim P.

    2008-02-01

    This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented—one that combines the use of inverted-list data structures with rejection Monte Carlo and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
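    The rejection ingredient can be sketched in a few lines: pick a site uniformly (constant time regardless of system size), accept with probability rate/r_max, and advance the clock per attempt. The site rates below are invented; this shows only the rejection step, not the inverted-list bookkeeping.

```python
import random, math

random.seed(7)

# Rejection kinetic Monte Carlo: O(1) event selection per attempt.
# Each site i has rate rates[i] <= r_max; pick a site uniformly, accept with
# probability rates[i]/r_max; time advances by Exp(n * r_max) per attempt.
rates = [0.2, 0.5, 1.0, 2.0]          # hypothetical site rates
r_max, n = max(rates), len(rates)

t, counts = 0.0, [0] * n
attempts = 200_000
for _ in range(attempts):
    t += -math.log(random.random()) / (n * r_max)   # clock advances every attempt
    i = random.randrange(n)
    if random.random() < rates[i] / r_max:
        counts[i] += 1                # event at site i fires

# Event frequencies should be proportional to the rates.
freqs = [c / counts[-1] for c in counts]
print(freqs, t)
```

    Including the time increment on rejected attempts is what keeps the simulated clock statistically exact; rejections cost time steps, not correctness.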

  20. A Quantum Monte Carlo Study on Mixed-Spin Chains of 1/2-1/2-1-1 and 3/2-3/2-1-1

    Institute of Scientific and Technical Information of China (English)

    XU Zhao-Xin; ZHANG Jun; YING He-Ping

    2003-01-01

    The ground-state and thermodynamic properties of quantum mixed-spin chains of 1/2-1/2-1-1 and 3/2-3/2-1-1 are investigated by a quantum Monte Carlo simulation with the loop-cluster algorithm. For the 1/2-1/2-1-1 chain, we find two phases separated by an energy-gap vanishing point in the ground state. For the 3/2-3/2-1-1 chain, the numerical results show two energy-gap vanishing points separated by different phases in its ground state. Our calculations indicate that all these ground-state phases can be understood by means of the valence-bond-solid picture, and that the thermodynamic behavior at finite temperatures is continuous as a function of the parameter α = J2/J1.

  1. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3) using a single-level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
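    The multilevel idea, in its simplest two-level, uniform-step form (not the adaptive hierarchy of the paper), can be sketched for geometric Brownian motion: many cheap coarse-step paths estimate the bulk of the expectation, and a smaller set of coupled fine/coarse pairs estimates the correction. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Geometric Brownian motion dS = mu*S dt + sig*S dW; E[S_T] = S0*exp(mu*T).
S0, mu, sig, T = 1.0, 0.05, 0.2, 1.0

def euler(n_steps, n_paths):
    """Plain forward-Euler paths, returning S_T samples."""
    dt = T / n_steps
    S = np.full(n_paths, S0)
    for _ in range(n_steps):
        S += mu * S * dt + sig * S * rng.normal(0.0, np.sqrt(dt), n_paths)
    return S

def euler_pair(n_fine, n_paths):
    """Coupled fine/coarse Euler paths sharing the same Brownian increments."""
    dt = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_fine))
    Sf = np.full(n_paths, S0)
    Sc = np.full(n_paths, S0)
    for k in range(n_fine):
        Sf += mu * Sf * dt + sig * Sf * dW[:, k]
        if k % 2 == 1:   # one coarse step per two fine steps, same noise
            Sc += mu * Sc * (2 * dt) + sig * Sc * (dW[:, k - 1] + dW[:, k])
    return Sf, Sc

# Level 0: many cheap coarse paths; level 1: few coupled fine-minus-coarse
# corrections. The telescoping sum estimates E[S_T] at the fine resolution.
level0 = euler(32, 400_000).mean()
Sf, Sc = euler_pair(64, 50_000)
est = level0 + (Sf - Sc).mean()
print(est, S0 * np.exp(mu * T))
```

    Because the coupled difference has small variance, far fewer paths are needed on the expensive level; iterating this over a hierarchy of levels yields the multilevel cost savings quoted in the abstract.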

  2. Sequence-based Parameter Estimation for an Epidemiological Temporal Aftershock Forecasting Model using Markov Chain Monte Carlo Simulation

    Science.gov (United States)

    Jalayer, Fatemeh; Ebrahimian, Hossein

    2014-05-01

    Introduction: The first few days after the occurrence of a strong earthquake, in the presence of an ongoing aftershock sequence, are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point-process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori, based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages, such as tuning the model to the specific sequence characteristics and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters, expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affect the seismicity in an epidemic-type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
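    The stochastic generation of events in a forecasting interval can be sketched with Lewis-Shedler thinning of a modified-Omori rate. This is a minimal temporal-only illustration with invented parameters, not the full spatio-temporal ETAS procedure with epidemic triggering.

```python
import random, math

random.seed(5)

# Thinning sampler for a nonhomogeneous Poisson process with a modified-Omori
# aftershock rate lambda(t) = K / (t + c)^p (parameters are illustrative,
# not calibrated to any real sequence).
K, c, p = 50.0, 0.1, 1.2

def omori_rate(t):
    return K / (t + c) ** p

def simulate(t_end):
    """One realization of event times on [0, t_end] by thinning."""
    times, t = [], 0.0
    while t < t_end:
        lam_max = omori_rate(t)   # the rate is decreasing, so it bounds the future
        t += -math.log(random.random()) / lam_max
        if t < t_end and random.random() < omori_rate(t) / lam_max:
            times.append(t)
    return times

# Sanity check: the mean event count matches the integrated rate.
t_end = 10.0
n_events = [len(simulate(t_end)) for _ in range(200)]
mean_n = sum(n_events) / len(n_events)
expected = K / (1 - p) * ((t_end + c) ** (1 - p) - c ** (1 - p))
print(mean_n, expected)
```

    Repeating such simulations for parameter sets drawn from the MCMC posterior yields the probability distribution over forecasted event counts described in the abstract.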

  3. Monte Carlo study of real time dynamics

    CERN Document Server

    Alexandru, Andrei; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C

    2016-01-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from the highly oscillatory phase of the path integral. In this letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and in principle applicable to quantum field theory, albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  4. Monte Carlo approach to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik

    2009-11-15

    The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)

  5. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  6. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  7. Approaching Chemical Accuracy with Quantum Monte Carlo

    OpenAIRE

    Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.

    2012-01-01

    International audience; A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreem...

  8. Monte Carlo methods for light propagation in biological tissues

    OpenAIRE

    Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine

    2016-01-01

    Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis–Hastings algori...

  9. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  10. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In house MATLAB scripts translating the treatment plan parameters to Monte Carlo...

  11. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    OpenAIRE

    Kleiss, R. H. P.; Lazopoulos, A.

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...

  12. Bayesian Markov-Chain-Monte-Carlo inversion of time-lapse crosshole GPR data to characterize the vadose zone at the Arrenaes Site, Denmark

    DEFF Research Database (Denmark)

    Scholer, Marie; Irving, James; Zibar, Majken Caroline Looms

    2012-01-01

    We examined to what extent time-lapse crosshole ground-penetrating radar traveltimes, measured during a forced infiltration experiment at the Arrenaes field site in Denmark, could help to quantify vadose zone hydraulic properties and their corresponding uncertainties using a Bayesian Markov-chain-Monte-Carlo inversion approach with different priors. The ground-penetrating radar (GPR) geophysical method has the potential to provide valuable information on the hydraulic properties of the vadose zone because of its strong sensitivity to soil water content. In particular, recent evidence has suggested that the stochastic inversion of crosshole GPR traveltime data can allow for a significant reduction in uncertainty regarding subsurface van Genuchten–Mualem (VGM) parameters. Much of the previous work on the stochastic estimation of VGM parameters from crosshole GPR data has considered the case of steady...

  13. Reconstruction of Exposure to m-Xylene from Human Biomonitoring Data Using PBPK Modelling, Bayesian Inference, and Markov Chain Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    Kevin McNally

    2012-01-01

    There are numerous biomonitoring programs, both recent and ongoing, to evaluate environmental exposure of humans to chemicals. Due to the lack of exposure and kinetic data, the correlation of biomarker levels with exposure concentrations leads to difficulty in utilizing biomonitoring data for biological guidance values. Exposure reconstruction or reverse dosimetry is the retrospective interpretation of external exposure consistent with biomonitoring data. We investigated the integration of physiologically based pharmacokinetic modelling, global sensitivity analysis, Bayesian inference, and Markov chain Monte Carlo simulation to obtain a population estimate of inhalation exposure to m-xylene. We used exhaled breath and venous blood m-xylene and urinary 3-methylhippuric acid measurements from a controlled human volunteer study in order to evaluate the ability of our computational framework to predict known inhalation exposures. We also investigated the importance of model structure and dimensionality with respect to its ability to reconstruct exposure.

  14. Reconstruction of Exposure to m-Xylene from Human Biomonitoring Data Using PBPK Modelling, Bayesian Inference, and Markov Chain Monte Carlo Simulation.

    Science.gov (United States)

    McNally, Kevin; Cotton, Richard; Cocker, John; Jones, Kate; Bartels, Mike; Rick, David; Price, Paul; Loizou, George

    2012-01-01

    There are numerous biomonitoring programs, both recent and ongoing, to evaluate environmental exposure of humans to chemicals. Due to the lack of exposure and kinetic data, the correlation of biomarker levels with exposure concentrations leads to difficulty in utilizing biomonitoring data for biological guidance values. Exposure reconstruction or reverse dosimetry is the retrospective interpretation of external exposure consistent with biomonitoring data. We investigated the integration of physiologically based pharmacokinetic modelling, global sensitivity analysis, Bayesian inference, and Markov chain Monte Carlo simulation to obtain a population estimate of inhalation exposure to m-xylene. We used exhaled breath and venous blood m-xylene and urinary 3-methylhippuric acid measurements from a controlled human volunteer study in order to evaluate the ability of our computational framework to predict known inhalation exposures. We also investigated the importance of model structure and dimensionality with respect to its ability to reconstruct exposure.

  15. RUN DMC: An efficient, parallel code for analyzing Radial Velocity Observations using N-body Integrations and Differential Evolution Markov chain Monte Carlo

    CERN Document Server

    Nelson, Benjamin E; Payne, Matthew J

    2013-01-01

    In the 20+ years of Doppler observations of stars, scientists have uncovered a diverse population of extrasolar multi-planet systems. A common technique for characterizing the orbital elements of these planets is Markov chain Monte Carlo (MCMC), using a Keplerian model with random walk proposals and paired with the Metropolis-Hastings algorithm. For approximately a couple of dozen planetary systems with Doppler observations, there are strong planet-planet interactions due to the system being in or near a mean-motion resonance (MMR). An N-body model is often required to accurately describe these systems. Further computational difficulties arise from exploring a high-dimensional parameter space ($\\sim$7 x number of planets) that can have complex parameter correlations. To surmount these challenges, we introduce a differential evolution MCMC (DEMCMC) applied to radial velocity data while incorporating self-consistent N-body integrations. Our Radial velocity Using N-body DEMCMC (RUN DMC) algorithm improves upon t...
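
    The differential-evolution proposal at the heart of such samplers is compact enough to sketch. The following Python toy is not the RUN DMC code; the 2-D Gaussian log-posterior, chain count, and jitter scale are all illustrative assumptions. It shows the characteristic update x' = x_i + gamma * (x_j - x_k) + jitter, followed by a Metropolis accept/reject:

```python
import math
import random

random.seed(1)

def log_post(x):
    # Toy 2-D standard-normal log-posterior (a stand-in for an N-body RV likelihood).
    return -0.5 * sum(xi * xi for xi in x)

def demc_step(chains, i, gamma=0.7):
    """One differential-evolution MCMC update of chain i.

    Proposal: x' = x_i + gamma * (x_j - x_k) + small jitter,
    with j and k two distinct other chains chosen at random.
    """
    j, k = random.sample([c for c in range(len(chains)) if c != i], 2)
    x = chains[i]
    prop = [xi + gamma * (xj - xk) + random.gauss(0.0, 1e-4)
            for xi, xj, xk in zip(x, chains[j], chains[k])]
    # Standard Metropolis accept/reject on the proposal.
    dlog = log_post(prop) - log_post(x)
    if dlog >= 0 or random.random() < math.exp(dlog):
        chains[i] = prop

# Ten parallel chains in 2 dimensions, updated in turn.
chains = [[random.gauss(0.0, 3.0) for _ in range(2)] for _ in range(10)]
for sweep in range(2000):
    for i in range(len(chains)):
        demc_step(chains, i)

mean0 = sum(c[0] for c in chains) / len(chains)
```

    Because the difference of two other chain states supplies the proposal, the population itself sets the proposal scale and orientation, which is what makes this family of kernels effective in correlated parameter spaces.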

  16. Bayes or bootstrap? A simulation study comparing the performance of Bayesian Markov chain Monte Carlo sampling and bootstrapping in assessing phylogenetic confidence.

    Science.gov (United States)

    Alfaro, Michael E; Zoller, Stefan; Lutzoni, François

    2003-02-01

    Bayesian Markov chain Monte Carlo sampling has become increasingly popular in phylogenetics as a method for both estimating the maximum likelihood topology and for assessing nodal confidence. Despite the growing use of posterior probabilities, the relationship between the Bayesian measure of confidence and the most commonly used confidence measure in phylogenetics, the nonparametric bootstrap proportion, is poorly understood. We used computer simulation to investigate the behavior of three phylogenetic confidence methods: Bayesian posterior probabilities calculated via Markov chain Monte Carlo sampling (BMCMC-PP), maximum likelihood bootstrap proportion (ML-BP), and maximum parsimony bootstrap proportion (MP-BP). We simulated the evolution of DNA sequence on 17-taxon topologies under 18 evolutionary scenarios and examined the performance of these methods in assigning confidence to correct monophyletic and incorrect monophyletic groups, and we examined the effects of increasing character number on support value. BMCMC-PP and ML-BP were often strongly correlated with one another but could provide substantially different estimates of support on short internodes. In contrast, BMCMC-PP correlated poorly with MP-BP across most of the simulation conditions that we examined. For a given threshold value, more correct monophyletic groups were supported by BMCMC-PP than by either ML-BP or MP-BP. When threshold values were chosen that fixed the rate of accepting incorrect monophyletic relationship as true at 5%, all three methods recovered most of the correct relationships on the simulated topologies, although BMCMC-PP and ML-BP performed better than MP-BP. BMCMC-PP was usually a less biased predictor of phylogenetic accuracy than either bootstrapping method. BMCMC-PP provided high support values for correct topological bipartitions with fewer characters than was needed for nonparametric bootstrap.
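
    The nonparametric bootstrap proportion compared above can be illustrated with a deliberately tiny sketch: resample alignment columns with replacement and count how often a grouping criterion holds. The three-taxon alignment and Hamming-distance criterion below are invented stand-ins for the ML/MP tree searches used in the study:

```python
import random

random.seed(0)

# Toy three-taxon alignment: A and B are similar, C is distant (invented data).
aln = {
    "A": "ACGTACGTACGTACGTACGT",
    "B": "ACGTACGAACGTACGTACGA",
    "C": "TTGTACGATCGATCGTTCGA",
}
ncols = len(aln["A"])

def hamming(s, t, cols):
    return sum(s[c] != t[c] for c in cols)

def ab_grouped(cols):
    # Support criterion: A and B are mutually closest under the resampled columns.
    dab = hamming(aln["A"], aln["B"], cols)
    return dab < hamming(aln["A"], aln["C"], cols) and dab < hamming(aln["B"], aln["C"], cols)

# Nonparametric bootstrap: resample alignment columns with replacement.
reps = 1000
support = sum(
    ab_grouped([random.randrange(ncols) for _ in range(ncols)])
    for _ in range(reps)
) / reps
```

    The fraction `support` plays the role of the bootstrap proportion: the share of pseudoreplicate datasets in which the grouping of interest is recovered.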

  17. An introduction to Monte Carlo methods

    NARCIS (Netherlands)

    Walter, J. -C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo sim

  18. Challenges of Monte Carlo Transport

    Energy Technology Data Exchange (ETDEWEB)

    Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-10

    These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite - Moonlight scales poorly. Interconnect specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  19. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    KAUST Repository

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  20. Bayesian phylogeny analysis via stochastic approximation Monte Carlo.

    Science.gov (United States)

    Cheon, Sooyoung; Liang, Faming

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time.

  1. The Monte Carlo method in quantum field theory

    CERN Document Server

    Morningstar, C

    2007-01-01

    This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
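
    The local Metropolis updating these lectures describe can be sketched for a free real scalar field on a 1-D periodic lattice. The action, lattice size, and proposal width below are illustrative choices, not the lecture material:

```python
import math
import random

random.seed(7)

# Local Metropolis updates for a real scalar field on a 1-D periodic lattice,
# Euclidean action S = sum_x [ (phi_{x+1} - phi_x)^2 / 2 + m2 * phi_x^2 / 2 ].
N, m2, delta = 32, 1.0, 1.0
phi = [0.0] * N

def local_action(i, val):
    # Terms of S that involve site i when phi[i] takes the value `val`.
    left, right = phi[(i - 1) % N], phi[(i + 1) % N]
    return 0.5 * ((val - left) ** 2 + (right - val) ** 2) + 0.5 * m2 * val * val

measurements = []
for sweep in range(4000):
    for i in range(N):
        new = phi[i] + random.uniform(-delta, delta)
        dS = local_action(i, new) - local_action(i, phi[i])
        if dS <= 0 or random.random() < math.exp(-dS):
            phi[i] = new
    if sweep >= 1000:  # measure <phi^2> after equilibration
        measurements.append(sum(p * p for p in phi) / N)

phi2 = sum(measurements) / len(measurements)  # infinite-lattice value: 1/sqrt(5) ~ 0.447
```

    Only the terms of the action touching the updated site enter the accept/reject step, which is what keeps local updating cheap; the autocorrelation issues the lectures address arise precisely from the locality of these moves.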

  2. Information Geometry and Sequential Monte Carlo

    CERN Document Server

    Sim, Aaron; Stumpf, Michael P H

    2012-01-01

    This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and Fitzhugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...

  3. Quantum Monte Carlo Calculations of Neutron Matter

    CERN Document Server

    Carlson, J; Ravenhall, D G

    2003-01-01

    Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne $v_8'$ two-nucleon interaction at densities up to one and a half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of neutron gas energy is ~ half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...

  4. Lattice gauge theories and Monte Carlo simulations

    CERN Document Server

    Rebbi, Claudio

    1983-01-01

    This volume is the most up-to-date review on Lattice Gauge Theories and Monte Carlo Simulations. It consists of two parts. Part one is an introductory lecture on lattice gauge theories in general, Monte Carlo techniques and the results to date. Part two consists of important original papers in this field. These selected reprints cover the following topics: Lattice Gauge Theories, General Formalism and Expansion Techniques, Monte Carlo Simulations, Phase Structures, Observables in Pure Gauge Theories, Systems with Bosonic Matter Fields, and Simulation of Systems with Fermions.

  5. Fast quantum Monte Carlo on a GPU

    CERN Document Server

    Lutsyshyn, Y

    2013-01-01

    We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent acceleration. Compared with single-core execution, the GPU-accelerated code runs over 100 times faster. The CUDA code is provided along with the package that is necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the latest Kepler architecture K20 GPU. Kepler-specific optimization is discussed.

  6. Monte Carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  7. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  8. Temporal relation between the ADC and DC potential responses to transient focal ischemia in the rat: a Markov chain Monte Carlo simulation analysis.

    Science.gov (United States)

    King, Martin D; Crowder, Martin J; Hand, David J; Harris, Neil G; Williams, Stephen R; Obrenovitch, Tihomir P; Gadian, David G

    2003-06-01

    Markov chain Monte Carlo simulation was used in a reanalysis of the longitudinal data obtained by Harris et al. (J Cereb Blood Flow Metab 20:28-36) in a study of the direct current (DC) potential and apparent diffusion coefficient (ADC) responses to focal ischemia. The main purpose was to provide a formal analysis of the temporal relationship between the ADC and DC responses, to explore the possible involvement of a common latent (driving) process. A Bayesian nonlinear hierarchical random coefficients model was adopted. DC and ADC transition parameter posterior probability distributions were generated using three parallel Markov chains created using the Metropolis algorithm. Particular attention was paid to the within-subject differences between the DC and ADC time course characteristics. The results show that the DC response is biphasic, whereas the ADC exhibits monophasic behavior, and that the two DC components are each distinguishable from the ADC response in their time dependencies. The DC and ADC changes are not, therefore, driven by a common latent process. This work demonstrates a general analytical approach to the multivariate, longitudinal data-processing problem that commonly arises in stroke and other biomedical research.
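
    Running several parallel Metropolis chains and checking their agreement can be illustrated on a toy problem. The unimodal log-posterior below is an invented stand-in for the study's hierarchical model; three chains are started from dispersed points and combined into a Gelman-Rubin-style potential scale reduction factor:

```python
import math
import random

random.seed(3)

def log_post(theta):
    # Toy unimodal log-posterior, N(2, 1) (a stand-in for a transition-parameter posterior).
    return -0.5 * (theta - 2.0) ** 2

def metropolis_chain(start, n, step=1.0):
    x, out = start, []
    for _ in range(n):
        prop = x + random.uniform(-step, step)
        dlog = log_post(prop) - log_post(x)
        if dlog >= 0 or random.random() < math.exp(dlog):
            x = prop
        out.append(x)
    return out

# Three parallel chains from dispersed starting points; keep the second half of each.
n = 5000
chains = [metropolis_chain(s, n)[n // 2:] for s in (-5.0, 2.0, 9.0)]

# Gelman-Rubin-style scale reduction factor from between- and within-chain variance.
m, L = len(chains), len(chains[0])
means = [sum(c) / L for c in chains]
grand = sum(means) / m
B = L / (m - 1) * sum((mu - grand) ** 2 for mu in means)
W = sum(sum((x - mu) ** 2 for x in c) / (L - 1) for c, mu in zip(chains, means)) / m
rhat = math.sqrt(((L - 1) / L * W + B / L) / W)
```

    A value of `rhat` close to 1 indicates that the dispersed chains have converged to the same distribution, the usual justification for pooling their draws into one posterior sample.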

  9. A gradient Markov chain Monte Carlo algorithm for computing multivariate maximum likelihood estimates and posterior distributions: mixture dose-response assessment.

    Science.gov (United States)

    Li, Ruochen; Englehardt, James D; Li, Xiaoguang

    2012-02-01

    Multivariate probability distributions, such as may be used for mixture dose-response assessment, are typically highly parameterized and difficult to fit to available data. However, such distributions may be useful in analyzing the large electronic data sets becoming available, such as dose-response biomarker and genetic information. In this article, a new two-stage computational approach is introduced for estimating multivariate distributions and addressing parameter uncertainty. The proposed first stage comprises a gradient Markov chain Monte Carlo (GMCMC) technique to find Bayesian posterior mode estimates (PMEs) of parameters, equivalent to maximum likelihood estimates (MLEs) in the absence of subjective information. In the second stage, these estimates are used to initialize a Markov chain Monte Carlo (MCMC) simulation, replacing the conventional burn-in period to allow convergent simulation of the full joint Bayesian posterior distribution and the corresponding unconditional multivariate distribution (not conditional on uncertain parameter values). When the distribution of parameter uncertainty is such a Bayesian posterior, the unconditional distribution is termed predictive. The method is demonstrated by finding conditional and unconditional versions of the recently proposed emergent dose-response function (DRF). Results are shown for the five-parameter common-mode and seven-parameter dissimilar-mode models, based on published data for eight benzene-toluene dose pairs. The common mode conditional DRF is obtained with a 21-fold reduction in data requirement versus MCMC. Example common-mode unconditional DRFs are then found using synthetic data, showing a 71% reduction in required data. The approach is further demonstrated for a PCB 126-PCB 153 mixture. Applicability is analyzed and discussed. Matlab(®) computer programs are provided.
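
    The two-stage idea, locating a posterior mode first and then initializing MCMC there in place of a conventional burn-in, can be sketched on a toy two-parameter posterior. The quadratic log-posterior and plain gradient ascent below are illustrative stand-ins for the gradient-based first stage, not the authors' GMCMC code:

```python
import math
import random

random.seed(11)

def log_post(x):
    # Toy 2-parameter log-posterior (a stand-in for a dose-response parameter posterior).
    return -0.5 * ((x[0] - 1.0) ** 2 + 4.0 * (x[1] + 2.0) ** 2)

def grad_log_post(x):
    return [-(x[0] - 1.0), -4.0 * (x[1] + 2.0)]

# Stage 1: gradient ascent to a posterior mode estimate (PME).
x = [5.0, 5.0]
for _ in range(200):
    g = grad_log_post(x)
    x = [xi + 0.1 * gi for xi, gi in zip(x, g)]
pme = x

# Stage 2: Metropolis chain initialized at the mode, replacing the usual burn-in.
cur, samples = list(pme), []
for _ in range(5000):
    prop = [c + random.gauss(0.0, 0.5) for c in cur]
    dlog = log_post(prop) - log_post(cur)
    if dlog >= 0 or random.random() < math.exp(dlog):
        cur = prop
    samples.append(list(cur))

mean0 = sum(s[0] for s in samples) / len(samples)
mean1 = sum(s[1] for s in samples) / len(samples)
```

    Starting the chain at the mode means every draw can be used, which mirrors the data-requirement reduction the abstract reports relative to running MCMC with a conventional burn-in.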

  10. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  11. Monte Carlo simulations for plasma physics

    Energy Technology Data Exchange (ETDEWEB)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2000-07-01

    Plasma behaviours are very complicated and the analyses are generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement with experimental results can be obtained. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  12. Improved Monte Carlo Renormalization Group Method

    Science.gov (United States)

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  13. Smart detectors for Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten

    2008-01-01

    Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...

  14. Monte Carlo methods for particle transport

    CERN Document Server

    Haghighat, Alireza

    2015-01-01

    The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...

  15. Quantum Monte Carlo Calculations of Light Nuclei

    CERN Document Server

    Pieper, Steven C

    2007-01-01

    During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.

  16. Monte Carlo Hamiltonian:Inverse Potential

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; CHENG Xiao-Ni; Helmut KRÖGER

    2004-01-01

    The Monte Carlo Hamiltonian method developed recently allows one to investigate the ground state and low-lying excited states of a quantum system, using a Monte Carlo (MC) algorithm with importance sampling. However, the conventional MC algorithm has some difficulties when applied to inverse potentials. We propose to use an effective potential and an extrapolation method to solve the problem. We present examples from the hydrogen system.

  17. The Feynman Path Goes Monte Carlo

    OpenAIRE

    Sauer, Tilman

    2001-01-01

    Path integral Monte Carlo (PIMC) simulations have become an important tool for the investigation of the statistical mechanics of quantum systems. I discuss some of the history of applying the Monte Carlo method to non-relativistic quantum systems in path-integral representation. The principal feasibility of the method was well established by the early eighties, and a number of algorithmic improvements have been introduced in the last two decades.

  18. Self-consistent kinetic lattice Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Horsfield, A.; Dunham, S.; Fujitani, Hideaki

    1999-07-01

    The authors present a brief description of a formalism for modeling point defect diffusion in crystalline systems using a Monte Carlo technique. The main approximations required to construct a practical scheme are briefly discussed, with special emphasis on the proper treatment of charged dopants and defects. This is followed by tight binding calculations of the diffusion barrier heights for charged vacancies. Finally, an application of the kinetic lattice Monte Carlo method to vacancy diffusion is presented.

  19. Monte Carlo Algorithms for Linear Problems

    OpenAIRE

    DIMOV, Ivan

    2000-01-01

    MSC Subject Classification: 65C05, 65U05. Monte Carlo methods are a powerful tool in many fields of mathematics, physics, and engineering. It is known that these methods give statistical estimates for a functional of the solution by performing random sampling of a certain chance variable whose mathematical expectation is the desired functional. Monte Carlo methods are methods for solving problems using random variables. In the book [16] edited by Yu. A. Shreider one can find the followin...
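
    The sampling idea described in this abstract can be illustrated with a minimal Python sketch (not from the cited work; the function name `mc_integrate` and the test integrand are illustrative): the integral of f over [0, 1] equals the expectation of f(U) for uniform U, so a sample mean estimates it, with a standard error that shrinks as 1/sqrt(n).

    ```python
    import random

    def mc_integrate(f, n=100_000, seed=0):
        """Estimate the integral of f over [0, 1] as the sample mean of
        f(U), U ~ Uniform(0, 1); E[f(U)] equals the integral."""
        rng = random.Random(seed)
        total = total_sq = 0.0
        for _ in range(n):
            y = f(rng.random())
            total += y
            total_sq += y * y
        mean = total / n
        var = total_sq / n - mean * mean     # sample variance of f(U)
        return mean, (var / n) ** 0.5        # estimate and its standard error

    est, se = mc_integrate(lambda x: x * x)  # true value is 1/3
    ```

    The returned standard error is exactly the "statistical estimate" character of the method: the answer comes with an uncertainty attached.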

  20. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    CERN Document Server

    Kleiss, R H

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
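
    To make the contrast with the independence assumption concrete, here is an illustrative sketch (not from the paper) of Quasi-Monte Carlo integration with a base-2 van der Corput low-discrepancy sequence; the point set and integrand are toy choices. Because the points are deterministic and correlated, the usual i.i.d. variance estimator does not apply — exactly the gap the abstract describes.

    ```python
    def van_der_corput(n, base=2):
        """n-th term of the base-b van der Corput low-discrepancy sequence
        (the radical inverse of n in the given base)."""
        q, bk = 0.0, 1.0 / base
        while n > 0:
            n, r = divmod(n, base)
            q += r * bk
            bk /= base
        return q

    def qmc_integrate(f, n=4096):
        # Equal-weight average over deterministic low-discrepancy points;
        # no i.i.d.-based error estimate is available for this sum.
        return sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n

    est = qmc_integrate(lambda x: x * x)  # true value is 1/3
    ```

    For smooth integrands the error decays roughly like log(n)/n (Koksma–Hlawka), much faster than the 1/sqrt(n) of plain Monte Carlo.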

  1. Efficient chain moves for Monte Carlo simulations of a wormlike DNA model: excluded volume, supercoils, site juxtapositions, knots, and comparisons with random-flight and lattice models.

    Science.gov (United States)

    Liu, Zhirong; Chan, Hue Sun

    2008-04-14

    We develop two classes of Monte Carlo moves for efficient sampling of wormlike DNA chains that can have significant degrees of supercoiling, a conformational feature that is key to many aspects of biological function including replication, transcription, and recombination. One class of moves entails reversing the coordinates of a segment of the chain along one, two, or three axes of an appropriately chosen local frame of reference. These transformations may be viewed as a generalization, to the continuum, of the Madras-Orlitsky-Shepp algorithm for cubic lattices. Another class of moves, termed T+/-2, allows for interconversions between chains with different lengths by adding or subtracting two beads (monomer units) to or from the chain. Length-changing moves are generally useful for conformational sampling with a given site juxtaposition, as has been shown in previous lattice studies. Here, the continuum T+/-2 moves are designed to enhance their acceptance rate in supercoiled conformations. We apply these moves to a wormlike model in which excluded volume is accounted for by a bond-bond repulsion term. The computed autocorrelation functions for the relaxation of bond length, bond angle, writhe, and branch number indicate that the new moves lead to significantly more efficient sampling than conventional bead displacements and crankshaft rotations. A close correspondence is found in the equilibrium ensemble between the map of writhe computed for pairs of chain segments and the map of site juxtapositions or self-contacts. To evaluate the more coarse-grained freely jointed chain (random-flight) and cubic lattice models that are commonly used in DNA investigations, twisting (torsional) potentials are introduced into these models. Conformational properties for a given superhelical density sigma may then be sampled by computing the writhe and using White's formula to relate the degree of twisting to writhe and sigma. Extensive comparisons of contact patterns and knot

  2. Distributed and Adaptive Darting Monte Carlo through Regenerations

    NARCIS (Netherlands)

    Ahn, S.; Chen, Y.; Welling, M.

    2013-01-01

    Darting Monte Carlo (DMC) is a MCMC procedure designed to effectively mix between multiple modes of a probability distribution. We propose an adaptive and distributed version of this method by using regenerations. This allows us to run multiple chains in parallel and adapt the shape of the jump regi

  3. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, Wies M.W.

    1994-01-01

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning estimates have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Monte Carlo Markov Chains. As an example, the method is applied to

  4. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, W.

    1998-01-01

    In order to obtain conditional maximum likelihood estimates, the conditioning constants are needed. Geyer and Thompson (1992) proposed a Markov chain Monte Carlo method that can be used to approximate these constants when they are difficult to calculate exactly. In the present paper, their method is

  5. AEOLUS: A MARKOV CHAIN MONTE CARLO CODE FOR MAPPING ULTRACOOL ATMOSPHERES. AN APPLICATION ON JUPITER AND BROWN DWARF HST LIGHT CURVES

    Energy Technology Data Exchange (ETDEWEB)

    Karalidi, Theodora; Apai, Dániel; Schneider, Glenn; Hanson, Jake R. [Steward Observatory, Department of Astronomy, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Pasachoff, Jay M., E-mail: tkaralidi@email.arizona.edu [Hopkins Observatory, Williams College, 33 Lab Campus Drive, Williamstown, MA 01267 (United States)

    2015-11-20

    Deducing the cloud cover and its temporal evolution from the observed planetary spectra and phase curves can give us major insight into the atmospheric dynamics. In this paper, we present Aeolus, a Markov chain Monte Carlo code that maps the structure of brown dwarf and other ultracool atmospheres. We validated Aeolus on a set of unique Jupiter Hubble Space Telescope (HST) light curves. Aeolus accurately retrieves the properties of the major features of the Jovian atmosphere, such as the Great Red Spot and a major 5 μm hot spot. Aeolus is the first mapping code validated on actual observations of a giant planet over a full rotational period. For this study, we applied Aeolus to J- and H-band HST light curves of 2MASS J21392676+0220226 and 2MASS J0136565+093347. Aeolus retrieves three spots at the top of the atmosphere (per observational wavelength) of these two brown dwarfs, with a surface coverage of 21% ± 3% and 20.3% ± 1.5%, respectively. The Jupiter HST light curves will be publicly available via ADS/VIZIR.

  6. Nonlinear calibration transfer based on hierarchical Bayesian models and Lagrange Multipliers: Error bounds of estimates via Monte Carlo - Markov Chain sampling.

    Science.gov (United States)

    Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris

    2017-01-25

    The calibration of analytical systems is time-consuming and the effort for daily calibration routines should therefore be minimized, while maintaining the analytical accuracy and precision. The 'calibration transfer' approach proposes to combine calibration data already recorded with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and desired analytical concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients for the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample of this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange Multipliers technique and Monte-Carlo Markov-Chain sampling. The latter provides realistic estimates for coefficients and predictions together with accurate error bounds by simulating known measurement errors and system fluctuations. Performance criteria for validation and optimal selection of a reduced set of calibration samples were developed and lead to a setup which maintains the analytical performance of a full calibration. Strategies for a rapid determination of problems occurring in a daily calibration routine are proposed, thereby opening the possibility of correcting the problem just in time.

  7. A comparison between Gauss-Newton and Markov chain Monte Carlo based methods for inverting spectral induced polarization data for Cole-Cole parameters

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
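
    The stochastic half of this comparison can be illustrated with a generic random-walk Metropolis sampler. This is a minimal sketch on a toy 1-D posterior, not the authors' SIP inversion; all names and settings are illustrative. The point it demonstrates is that the output is a set of draws characterizing the full posterior (mean, spread, marginals) rather than a single optimum, and that the result does not depend on the starting value once burn-in is discarded.

    ```python
    import math
    import random

    def metropolis(log_post, x0, n_steps=40_000, step=1.0, seed=1):
        """Random-walk Metropolis: returns draws whose empirical
        distribution approximates the posterior."""
        rng = random.Random(seed)
        x, lp = x0, log_post(x0)
        samples = []
        for _ in range(n_steps):
            xp = x + rng.gauss(0.0, step)
            lpp = log_post(xp)
            if math.log(rng.random()) < lpp - lp:   # accept/reject
                x, lp = xp, lpp
            samples.append(x)
        return samples[n_steps // 2:]   # discard first half as burn-in

    # Toy 1-D "posterior": a standard normal, started far from the mode.
    draws = metropolis(lambda x: -0.5 * x * x, x0=5.0)
    mean = sum(draws) / len(draws)
    var = sum((d - mean) ** 2 for d in draws) / len(draws)
    ```

    Here `mean` and `var` recover the target's mean 0 and variance 1; in a real inversion the same draws yield marginal distributions and uncertainty bounds for each Cole-Cole parameter.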

  8. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models was proposed for methane, nitrogen and carbon monoxide.

  9. A DNA sequence evolution analysis generalized by simulation and the markov chain monte carlo method implicates strand slippage in a majority of insertions and deletions.

    Science.gov (United States)

    Nishizawa, Manami; Nishizawa, Kazuhisa

    2002-12-01

    To study the mechanisms for local evolutionary changes in DNA sequences involving slippage-type insertions and deletions, an alignment approach is explored that can consider the posterior probabilities of alignment models. Various patterns of insertion and deletion that can link the ancestor and descendant sequences are proposed and evaluated by simulation and compared by the Markov chain Monte Carlo (MCMC) method. Analyses of pseudogenes reveal that the introduction of the parameters that control the probability of slippage-type events markedly augments the probability of the observed sequence evolution, arguing that a cryptic involvement of slippage occurrences is manifested as insertions and deletions of short nucleotide segments. Strikingly, approximately 80% of insertions in human pseudogenes and approximately 50% of insertions in murid pseudogenes are likely to be caused by the slippage-mediated process, as represented by BC in ABCD --> ABCBCD. We suggest that, in both human and murids, even very short repetitive motifs, such as CAGCAG, CACACA, and CCCC, have approximately 10- to 15-fold susceptibility to insertions and deletions, compared to nonrepetitive sequences. Our protocol, namely, indel-MCMC, thus seems to be a reasonable approach for statistical analyses of the early phase of microsatellite evolution.

  10. Random frog: an efficient reversible jump Markov Chain Monte Carlo-like approach for variable selection with applications to gene selection and disease classification.

    Science.gov (United States)

    Li, Hong-Dong; Xu, Qing-Song; Liang, Yi-Zeng

    2012-08-31

    The identification of disease-relevant genes represents a challenge in microarray-based disease diagnosis where the sample size is often limited. Among established methods, reversible jump Markov Chain Monte Carlo (RJMCMC) methods have proven to be quite promising for variable selection. However, the design and application of an RJMCMC algorithm requires, for example, special criteria for prior distributions. Also, the simulation from joint posterior distributions of models is computationally extensive, and may even be mathematically intractable. These disadvantages may limit the applications of RJMCMC algorithms. Therefore, the development of algorithms that possess the advantages of RJMCMC methods and are also efficient and easy to follow for selecting disease-associated genes is required. Here we report a RJMCMC-like method, called random frog that possesses the advantages of RJMCMC methods and is much easier to implement. Using the colon and the estrogen gene expression datasets, we show that random frog is effective in identifying discriminating genes. The top 2 ranked genes for colon and estrogen are Z50753, U00968, and Y10871_at, Z22536_at, respectively. (The source codes with GNU General Public License Version 2.0 are freely available to non-commercial users at: http://code.google.com/p/randomfrog/.).

  11. Assessment of myocardial metabolic rate of glucose by means of Bayesian ICA and Markov Chain Monte Carlo methods in small animal PET imaging

    Science.gov (United States)

    Berradja, Khadidja; Boughanmi, Nabil

    2016-09-01

    In dynamic cardiac PET FDG studies the assessment of myocardial metabolic rate of glucose (MMRG) requires the knowledge of the blood input function (IF). IF can be obtained by manual or automatic blood sampling and cross calibrated with PET. These procedures are cumbersome, invasive and generate uncertainties. The IF is contaminated by spillover of radioactivity from the adjacent myocardium and this could cause important error in the estimated MMRG. In this study, we show that the IF can be extracted from the images in a rat heart study with 18F-fluorodeoxyglucose (18F-FDG) by means of Independent Component Analysis (ICA) based on Bayesian theory and Markov Chain Monte Carlo (MCMC) sampling method (BICA). Images of the heart from rats were acquired with the Sherbrooke small animal PET scanner. A region of interest (ROI) was drawn around the rat image and decomposed into blood and tissue using BICA. The Statistical study showed that there is a significant difference (p < 0.05) between MMRG obtained with IF extracted by BICA with respect to IF extracted from measured images corrupted with spillover.

  12. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    KAUST Repository

    Kadoura, Ahmad Salim

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models was proposed for methane, nitrogen and carbon monoxide. © 2014 Elsevier Inc.
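
    The reweighting idea can be sketched on a toy system. The following illustrative Python example (not the authors' code; the two-level system and all names are toy choices) reuses Boltzmann samples generated at one inverse temperature to estimate the mean energy at a neighboring one, and checks against the analytic answer: each stored sample i receives weight exp(-(β' - β)·E_i), so no new simulation is needed at β'.

    ```python
    import math
    import random

    def sample_energies(beta, n=200_000, seed=2):
        """Stand-in for a stored canonical-ensemble chain: exact Boltzmann
        samples of a toy two-level system with energies E in {0, 1}."""
        rng = random.Random(seed)
        p1 = math.exp(-beta) / (1.0 + math.exp(-beta))
        return [1.0 if rng.random() < p1 else 0.0 for _ in range(n)]

    def reweighted_mean_energy(energies, beta_old, beta_new):
        """Estimate <E> at beta_new from samples generated at beta_old by
        reweighting with exp(-(beta_new - beta_old) * E_i)."""
        db = beta_new - beta_old
        w = [math.exp(-db * e) for e in energies]
        return sum(wi * e for wi, e in zip(w, energies)) / sum(w)

    E = sample_energies(beta=1.0)
    est = reweighted_mean_energy(E, beta_old=1.0, beta_new=1.2)
    exact = math.exp(-1.2) / (1.0 + math.exp(-1.2))  # analytic <E> at beta = 1.2
    ```

    The extrapolation is reliable only for neighboring conditions: as β' moves far from β the weights degenerate and the effective sample size collapses.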

  13. A Markov chain Monte Carlo (MCMC) methodology with bootstrap percentile estimates for predicting presidential election results in Ghana

    OpenAIRE

    Nortey, Ezekiel N. N.; Ansah-Narh, Theophilus; Asah-Asante, Richard; Minkah, Richard

    2015-01-01

    Although numerous literature exists on procedures for forecasting or predicting election results, in Ghana only opinion poll strategies have been used. To fill this gap, the paper develops Markov chain models for forecasting the 2016 presidential election results at the Regional, Zonal (i.e. Savannah, Coastal and Forest) and National levels using past presidential election results of Ghana. The methodology develops a model for prediction of the 2016 presidential election results...

  14. Meaningful timescales from Monte Carlo simulations of molecular systems

    CERN Document Server

    Costa, Liborio I

    2016-01-01

    A new Markov Chain Monte Carlo method for simulating the dynamics of molecular systems with atomistic detail is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.

  15. Monte Carlo Methods for Tempo Tracking and Rhythm Quantization

    CERN Document Server

    Cemgil, A T; 10.1613/jair.1121

    2011-01-01

    We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest that the sequential methods perform better. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr...
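
    A bootstrap particle filter of the kind compared here can be sketched on a toy linear-Gaussian tracking model. This minimal example is illustrative (the model, parameters, and names are not from the paper): propagate particles through the state dynamics, weight them by the likelihood of each new observation, then resample.

    ```python
    import math
    import random

    def particle_filter(ys, n_particles=2000, a=0.9, q=0.5, r=0.5, seed=3):
        """Bootstrap particle filter (sequential Monte Carlo) for the toy
        model x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r).
        Returns the filtered posterior mean at each step."""
        rng = random.Random(seed)
        parts = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
        means = []
        for y in ys:
            parts = [a * x + rng.gauss(0.0, q) for x in parts]          # predict
            ws = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in parts]   # weight
            tot = sum(ws)
            means.append(sum(w * x for w, x in zip(ws, parts)) / tot)
            parts = rng.choices(parts, weights=ws, k=n_particles)       # resample
        return means

    # Simulate a short trajectory and filter its noisy observations.
    rng = random.Random(4)
    xs, x = [], 0.0
    for _ in range(50):
        x = 0.9 * x + rng.gauss(0.0, 0.5)
        xs.append(x)
    ys = [xi + rng.gauss(0.0, 0.5) for xi in xs]
    est = particle_filter(ys)
    rmse = (sum((e - t) ** 2 for e, t in zip(est, xs)) / len(xs)) ** 0.5
    ```

    The filter runs online, one observation at a time, which is what makes sequential methods attractive for tempo tracking.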

  16. Approaching Chemical Accuracy with Quantum Monte Carlo

    CERN Document Server

    Petruzielo, F R; Umrigar, C J

    2012-01-01

    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space.

  17. Acceleration of Monte Carlo EM Algorithm

    Institute of Scientific and Technical Information of China (English)

    罗季

    2008-01-01

    The EM algorithm is a widely used data-augmentation algorithm for estimating posterior modes, but deriving a closed-form expression for the integral in its E-step is sometimes difficult, or even impossible, which limits its applicability. The Monte Carlo EM algorithm solves this problem by evaluating the E-step integral with Monte Carlo simulation, greatly broadening the method's applicability. However, both the EM algorithm and the Monte Carlo EM algorithm converge only linearly, at a rate governed by the fraction of missing information; when the proportion of missing data is high, convergence becomes very slow. The Newton-Raphson algorithm, in contrast, converges quadratically in the neighborhood of the posterior mode. This paper proposes an accelerated Monte Carlo EM algorithm that combines the Monte Carlo EM algorithm with the Newton-Raphson algorithm: the E-step is still realized by Monte Carlo simulation, and the resulting algorithm is proved to converge quadratically near the posterior mode. It thus retains the advantages of the Monte Carlo EM algorithm while improving its convergence rate. Numerical examples compare the results of the accelerated algorithm with those of the EM and Monte Carlo EM algorithms, further illustrating its merits.
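
    A minimal Monte Carlo EM sketch (illustrative, not the paper's accelerated algorithm) shows the basic scheme being accelerated: estimating the mean of right-censored normal data, with the intractable E-step expectation replaced by Monte Carlo imputation from a truncated normal. All names and settings are toy choices.

    ```python
    import random

    def mcem_censored_normal(obs, censored_count, c, n_iter=30, m=200, seed=5):
        """Monte Carlo EM for the mean of N(mu, 1) data right-censored at c.
        E-step: impute each censored value by the Monte Carlo mean of draws
        from N(mu, 1) truncated to (c, inf); M-step: average completed data."""
        rng = random.Random(seed)
        mu = sum(obs) / len(obs)          # crude start from uncensored data
        n = len(obs) + censored_count
        for _ in range(n_iter):
            # E-step: rejection sampling from the truncated normal.
            draws = []
            while len(draws) < m:
                z = rng.gauss(mu, 1.0)
                if z > c:
                    draws.append(z)
            imputed = sum(draws) / m
            # M-step: closed-form update given the completed data.
            mu = (sum(obs) + censored_count * imputed) / n
        return mu

    # Synthetic data: true mu = 2, censoring threshold c = 2.5.
    rng = random.Random(6)
    full = [rng.gauss(2.0, 1.0) for _ in range(500)]
    obs = [x for x in full if x <= 2.5]
    mu_hat = mcem_censored_normal(obs, len(full) - len(obs), 2.5)
    ```

    The iteration above converges linearly, which is the slowness the paper addresses by switching to Newton-Raphson updates near the mode.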

  18. Quantum Monte Carlo with Variable Spins

    CERN Document Server

    Melton, Cody A; Mitas, Lubos

    2016-01-01

    We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn$_2$ molecules, as well as the electron affinities of the 6$p$ row elements in close agreement with experiments.

  19. Adiabatic optimization versus diffusion Monte Carlo methods

    Science.gov (United States)

    Jarret, Michael; Jordan, Stephen P.; Lackey, Brad

    2016-10-01

    Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.

  20. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
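
    The Metropolis treatment of the traveling salesman problem mentioned above can be sketched as simulated annealing with 2-opt reversal moves. This is an illustrative toy example (city set, cooling schedule, and names are all assumptions, not from the book): uphill moves are accepted with Boltzmann probability at a geometrically decreasing temperature.

    ```python
    import math
    import random

    def anneal_tsp(cities, n_steps=20_000, t0=1.0, t1=1e-3, seed=7):
        """Metropolis-based simulated annealing for the traveling salesman
        problem using 2-opt segment-reversal proposals."""
        rng = random.Random(seed)
        tour = list(range(len(cities)))
        rng.shuffle(tour)

        def length(t):
            return sum(math.dist(cities[t[i]], cities[t[(i + 1) % len(t)]])
                       for i in range(len(t)))

        cur = length(tour)
        cool = (t1 / t0) ** (1.0 / n_steps)   # geometric cooling factor
        temp = t0
        for _ in range(n_steps):
            i, j = sorted(rng.sample(range(len(tour)), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            delta = length(cand) - cur
            # Metropolis rule: always accept downhill, sometimes uphill.
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                tour, cur = cand, cur + delta
            temp *= cool
        return tour, cur

    rng = random.Random(8)
    cities = [(rng.random(), rng.random()) for _ in range(20)]
    tour, tour_len = anneal_tsp(cities)
    ```

    At high temperature the walk explores freely; as the temperature drops it settles into a short tour, visualizing how the Metropolis method doubles as an optimizer.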

  1. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  2. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  3. Factor Graph Equalization Based on Markov Chain Monte Carlo

    Institute of Scientific and Technical Information of China (English)

    巩克现; 董政; 葛临东

    2012-01-01

    Iterative equalization based on factor graphs can correct nonlinear channel distortion, but its computational complexity is high. To address this, three effective methods for computing the posterior probability of the received signal, together with a parallel implementation, are proposed. In factor-graph equalization the equalizer and decoder work jointly in an iterative fashion, which improves overall system performance, but the computational complexity grows exponentially with the channel memory length. The proposed approach computes the required multidimensional integrals with a Markov chain Monte Carlo algorithm and implements parallel Gibbs sampling through factor-graph partitioning, reducing the computational complexity. Simulations demonstrate that the algorithm overcomes the nonlinear distortion of high-order modulation over satellite channels and is well suited to hardware or multi-core implementation.
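
    The parallel Gibbs sampling mentioned here builds on the basic serial Gibbs scheme, which can be sketched on a toy bivariate normal target (illustrative only, not the paper's equalizer): each coordinate is redrawn in turn from its exact conditional distribution given the others.

    ```python
    import random

    def gibbs_bivariate_normal(rho, n=50_000, seed=10):
        """Gibbs sampler for a zero-mean, unit-variance bivariate normal
        with correlation rho: alternately draw each coordinate from its
        exact conditional, x | y ~ N(rho * y, 1 - rho**2)."""
        rng = random.Random(seed)
        sd = (1.0 - rho * rho) ** 0.5
        x = y = 0.0
        out = []
        for _ in range(n):
            x = rng.gauss(rho * y, sd)
            y = rng.gauss(rho * x, sd)
            out.append((x, y))
        return out[n // 10:]   # drop burn-in

    draws = gibbs_bivariate_normal(0.8)
    m = len(draws)
    mx = sum(x for x, _ in draws) / m
    my = sum(y for _, y in draws) / m
    sxy = sum((x - mx) * (y - my) for x, y in draws) / m
    sx = (sum((x - mx) ** 2 for x, _ in draws) / m) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in draws) / m) ** 0.5
    corr = sxy / (sx * sy)
    ```

    Partitioning a factor graph for parallelism amounts to identifying groups of variables that are conditionally independent given the rest, so that the updates within one sweep can run concurrently.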

  4. Inverse modelling of cloud-aerosol interactions – Part 2: Sensitivity tests on liquid phase clouds using a Markov chain Monte Carlo based simulation approach

    Directory of Open Access Journals (Sweden)

    D. G. Partridge

    2012-03-01

    This paper presents a novel approach to investigating cloud-aerosol interactions by coupling a Markov chain Monte Carlo (MCMC) algorithm to an adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis tools to investigate the global sensitivity of a cloud model to input aerosol physicochemical parameters. Using numerically generated cloud droplet number concentration (CDNC) distributions (i.e. synthetic data) as cloud observations, this inverse modelling framework is shown to successfully estimate the correct calibration parameters and their underlying posterior probability distribution.

    The employed analysis method provides a new, integrative framework to evaluate the global sensitivity of the derived CDNC distribution to the input parameters describing the lognormal properties of the accumulation mode aerosol and the particle chemistry. To a large extent, results from prior studies are confirmed, but the present study also provides some additional insights. There is a transition in relative sensitivity from very clean marine Arctic conditions, where the lognormal aerosol parameters representing the accumulation mode aerosol number concentration and mean radius are found to be most important for determining the CDNC distribution, to very polluted continental environments (aerosol concentration in the accumulation mode >1000 cm−3), where particle chemistry is more important than both number concentration and size of the accumulation mode.

    The competition and compensation between the cloud model input parameters illustrates that if the soluble mass fraction is reduced, the aerosol number concentration, geometric standard deviation and mean radius of the accumulation mode must increase in order to achieve the same CDNC distribution.

    This study demonstrates that inverse modelling provides a flexible, transparent and
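
The inverse-modelling idea — using MCMC to recover calibration parameters and their posterior from synthetic observations — can be illustrated with a minimal random-walk Metropolis sampler on a toy Gaussian measurement model. This is not the cloud parcel model; the likelihood, parameter values, and step size are all illustrative:

```python
import math
import random

def log_likelihood(theta, data, sigma=1.0):
    # Toy Gaussian measurement model: y_i ~ N(theta, sigma**2)
    return -0.5 * sum((y - theta) ** 2 for y in data) / sigma ** 2

def metropolis(data, n_steps=20000, step=0.5, seed=1):
    """Random-walk Metropolis with a flat prior on theta."""
    rng = random.Random(seed)
    theta = 0.0
    ll = log_likelihood(theta, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.uniform(-step, step)
        ll_prop = log_likelihood(prop, data)
        # accept with probability min(1, exp(ll_prop - ll))
        if ll_prop >= ll or rng.random() < math.exp(ll_prop - ll):
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

data_rng = random.Random(42)
true_theta = 2.5  # "synthetic truth" to be recovered
data = [true_theta + data_rng.gauss(0.0, 1.0) for _ in range(50)]
chain = metropolis(data)
post_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

The retained chain approximates the posterior, so its histogram plays the role of the estimated posterior probability distribution in the study above.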

  5. Physiologically-based toxicokinetic model for cadmium using Markov-chain Monte Carlo analysis of concentrations in blood, urine, and kidney cortex from living kidney donors.

    Science.gov (United States)

    Fransson, Martin Niclas; Barregard, Lars; Sallsten, Gerd; Akerstrom, Magnus; Johanson, Gunnar

    2014-10-01

    The health effects of low-level chronic exposure to cadmium are increasingly recognized. To improve the risk assessment, it is essential to know the relation between cadmium intake, body burden, and biomarker levels of cadmium. We combined a physiologically-based toxicokinetic (PBTK) model for cadmium with a data set from healthy kidney donors to re-estimate the model parameters and to test the effects of gender and serum ferritin on systemic uptake. Cadmium levels in whole blood, blood plasma, kidney cortex, and urinary excretion from 82 men and women were used to calculate posterior distributions for model parameters using Markov-chain Monte Carlo analysis. For never- and ever-smokers combined, the daily systemic uptake was estimated at 0.0063 μg cadmium/kg body weight in men, with 35% increased uptake in women and a daily uptake of 1.2 μg for each pack-year per calendar year of smoking. The rate of urinary excretion from cadmium accumulated in the kidney was estimated at 0.000042 day(-1), corresponding to a half-life of 45 years in the kidneys. We have provided an improved model of cadmium kinetics. As the new parameter estimates derive from a single study with measurements in several compartments in each individual, these new estimates are likely to be more accurate than the previous ones, where the data used originated from unrelated data sets. The estimated urinary excretion of cadmium accumulated in the kidneys was much lower than previous estimates; neglecting this finding may result in a marked under-prediction of the true kidney burden.
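
The reported half-life follows directly from the first-order excretion rate via t½ = ln 2 / k. A quick check of the abstract's numbers:

```python
import math

k = 4.2e-5                          # urinary excretion rate from kidney, per day
half_life_days = math.log(2) / k    # first-order kinetics: t_1/2 = ln(2) / k
half_life_years = half_life_days / 365.25
# about 45 years, matching the half-life reported in the abstract
```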

  6. Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.

    1995-12-31

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.

  7. Monte Carlo simulations of organic photovoltaics.

    Science.gov (United States)

    Groves, Chris; Greenham, Neil C

    2014-01-01

    Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
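
Hopping transport of the kind described above is commonly modelled with kinetic Monte Carlo. A minimal 1D sketch with field-biased hop rates follows; the rates and bias are illustrative, not a calibrated organic-semiconductor model:

```python
import math
import random

def kmc_drift(n_hops, bias=0.2, seed=0):
    """1D kinetic Monte Carlo: a single carrier hops between neighbouring
    sites with field-biased rates; each waiting time is exponential in the
    total escape rate."""
    rng = random.Random(seed)
    rate_right = math.exp(bias)   # hop along the field
    rate_left = math.exp(-bias)   # hop against the field
    total = rate_right + rate_left
    x, t = 0, 0.0
    for _ in range(n_hops):
        t += rng.expovariate(total)            # time to the next hop
        if rng.random() < rate_right / total:  # choose hop direction
            x += 1
        else:
            x -= 1
    return x, t

x, t = kmc_drift(10000)  # net displacement drifts along the field
```

Real device simulations extend this pattern to a 3D energetically disordered lattice with Coulomb interactions between carriers, but the event loop has the same shape.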

  8. Monte Carlo dose distributions for radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica; Sanchez-Doblado, F. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica]|[Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Nunez, L. [Clinica Puerta de Hierro, Madrid (Spain). Servicio de Radiofisica; Arrans, R.; Sanchez-Calzado, J.A.; Errazquin, L. [Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Sanchez-Nieto, B. [Royal Marsden NHS Trust (United Kingdom). Joint Dept. of Physics]|[Inst. of Cancer Research, Sutton, Surrey (United Kingdom)

    2001-07-01

    The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetrical input data. This fact is especially important in small fields. However, the Monte Carlo method is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions obtained with a planning system and with Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)

  9. The Rational Hybrid Monte Carlo Algorithm

    CERN Document Server

    Clark, M A

    2006-01-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.

  10. The Rational Hybrid Monte Carlo algorithm

    Science.gov (United States)

    Clark, Michael

    2006-12-01

    The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.

  11. Monte Carlo Hamiltonian: Linear Potentials

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; Helmut KROEGER; et al.

    2002-01-01

    We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x)=|x|/2, and an asymmetric one, V(x)=∞ for x<0 and V(x)=x/2 for x≥0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.

  12. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  13. Efficiencies of dynamic Monte Carlo algorithms for off-lattice particle systems with a single impurity

    KAUST Repository

    Novotny, M.A.

    2010-02-01

    The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.
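
The essence of a rejection-free step — selecting a move with probability proportional to its rate (so every step succeeds) and advancing time by an exponential increment in the total rate — can be sketched as follows. The rates are toy values, not the off-lattice impurity system of the paper:

```python
import bisect
import random

def rejection_free_step(rates, rng):
    """One rejection-free move: pick a move with probability proportional
    to its rate (no rejections ever occur); the associated time increment
    is exponential in the total rate."""
    cum, total = [], 0.0
    for r in rates:
        total += r
        cum.append(total)
    move = bisect.bisect_left(cum, rng.random() * total)
    dt = rng.expovariate(total)
    return move, dt

rng = random.Random(0)
rates = [0.5, 0.1, 0.4]
counts = [0, 0, 0]
for _ in range(100000):
    move, _ = rejection_free_step(rates, rng)
    counts[move] += 1
freqs = [c / 100000 for c in counts]  # should approach rates / sum(rates)
```

The efficiency gain over rejection-based sampling is largest when most proposed moves would be rejected, e.g. at low temperature or, as here, for a single fast impurity among slow particles.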

  14. Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules

    CERN Document Server

    Lester, William A; Reynolds, PJ

    1994-01-01

    This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n

  15. Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia

    Energy Technology Data Exchange (ETDEWEB)

    Granero Cabanero, D.

    2015-07-01

    The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, in shielding calculations, or in obtaining dose distributions around applicators. (Author)

  16. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
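
A classic variance reduction technique is antithetic variates. The sketch below estimates the integral of exp(x) over [0, 1] (exactly e − 1) with plain sampling and with antithetic pairing of u with 1 − u, and compares the sample variances; it is a generic illustration, not an example from the book chapter:

```python
import math
import random

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rng = random.Random(0)
f = math.exp  # integrand; the true integral on [0, 1] is e - 1

# plain Monte Carlo: independent uniforms
plain = [f(rng.random()) for _ in range(100000)]

# antithetic variates: pair each u with 1 - u; for a monotone integrand
# the two evaluations are negatively correlated, cancelling much noise
anti = []
for _ in range(50000):
    u = rng.random()
    anti.append(0.5 * (f(u) + f(1.0 - u)))

m_plain, v_plain = mean_var(plain)
m_anti, v_anti = mean_var(anti)
```

Both estimators use the same number of function evaluations, but the antithetic per-observation variance is far smaller, which is exactly the kind of gain VRTs are designed to deliver.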

  17. Monte Carlo methods beyond detailed balance

    NARCIS (Netherlands)

    Schram, Raoul D.; Barkema, Gerard T.

    2015-01-01

    Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying

  18. A comparison of Monte Carlo generators

    CERN Document Server

    Golan, Tomasz

    2014-01-01

    A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the $\pi^+$ two-dimensional energy vs. cosine distribution.

  19. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  20. Monte Carlo Simulation of Counting Experiments.

    Science.gov (United States)

    Ogden, Philip M.

    A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
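
The binomial-subdivision construction described above is easy to reproduce: split the interval into many subintervals, each registering at most one count, and Poisson-like statistics (mean ≈ variance ≈ rate) emerge. A sketch, with illustrative rate and subdivision counts:

```python
import random

def simulate_counts(rate, n_sub, n_trials, seed=0):
    """Counts in a unit time interval built from the binomial construction:
    the interval is divided into n_sub subintervals, each holding at most
    one count with probability rate / n_sub (the Poisson limit is
    approached as n_sub grows)."""
    rng = random.Random(seed)
    p = rate / n_sub
    return [sum(1 for _ in range(n_sub) if rng.random() < p)
            for _ in range(n_trials)]

counts = simulate_counts(3.0, 500, 5000)
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
# Poisson-like statistics: mean and variance should both be close to 3.0
```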

  1. Atomistic Monte Carlo Simulation of Lipid Membranes

    Directory of Open Access Journals (Sweden)

    Daniel Wüstner

    2014-01-01

    Full Text Available Biological membranes are complex assemblies of many different molecules whose analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.

  2. Minimising biases in full configuration interaction quantum Monte Carlo.

    Science.gov (United States)

    Vigor, W A; Spencer, J S; Bearpark, M J; Thom, A J W

    2015-03-14

    We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective and recommend its use as a post-processing step.
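
For a two-state Markov chain, the stationary distribution mentioned above can be obtained directly. A sketch by power iteration on a generic 2×2 transition matrix (the probabilities are illustrative, not the FCIQMC Markov matrix):

```python
def stationary_two_state(p01, p10, tol=1e-12):
    """Stationary distribution of a two-state Markov chain with transition
    probabilities p01 (state 0 -> 1) and p10 (state 1 -> 0), obtained by
    power iteration of the transition matrix."""
    pi = [0.5, 0.5]
    while True:
        new = [pi[0] * (1.0 - p01) + pi[1] * p10,
               pi[0] * p01 + pi[1] * (1.0 - p10)]
        if abs(new[0] - pi[0]) < tol:
            return new
        pi = new

pi = stationary_two_state(0.3, 0.1)
# analytic answer: pi = (p10, p01) / (p01 + p10) = (0.25, 0.75)
```

The detailed-balance check pi[0] * p01 == pi[1] * p10 confirms the result, mirroring the way stationary solutions are used in the paper to quantify population dynamics.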

  3. Subtle Monte Carlo Updates in Dense Molecular Systems

    DEFF Research Database (Denmark)

    Bottaro, Sandro; Boomsma, Wouter; Johansson, Kristoffer E.;

    2012-01-01

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results...

  4. Minimising biases in full configuration interaction quantum Monte Carlo

    Science.gov (United States)

    Vigor, W. A.; Spencer, J. S.; Bearpark, M. J.; Thom, A. J. W.

    2015-03-01

    We show that Full Configuration Interaction Quantum Monte Carlo (FCIQMC) is a Markov chain in its present form. We construct the Markov matrix of FCIQMC for a two determinant system and hence compute the stationary distribution. These solutions are used to quantify the dependence of the population dynamics on the parameters defining the Markov chain. Despite the simplicity of a system with only two determinants, it still reveals a population control bias inherent to the FCIQMC algorithm. We investigate the effect of simulation parameters on the population control bias for the neon atom and suggest simulation setups to, in general, minimise the bias. We show that a reweighting scheme to remove the bias caused by population control, commonly used in diffusion Monte Carlo [Umrigar et al., J. Chem. Phys. 99, 2865 (1993)], is effective and recommend its use as a post-processing step.

  5. Monte Carlo radiation transport in external beam radiotherapy

    OpenAIRE

    Çeçen, Yiğit

    2013-01-01

    The use of Monte Carlo in radiation transport is an effective way to predict absorbed dose distributions. Monte Carlo modeling has contributed to a better understanding of photon and electron transport by radiotherapy physicists. The aim of this review is to introduce Monte Carlo as a powerful radiation transport tool. In this review, photon and electron transport algorithms for Monte Carlo techniques are investigated and a clinical linear accelerator model is studied for external beam radiotherapy.

  6. Nonequilibrium Candidate Monte Carlo Simulations with Configurational Freezing Schemes.

    Science.gov (United States)

    Giovannelli, Edoardo; Gellini, Cristina; Pietraperzia, Giangaetano; Cardini, Gianni; Chelli, Riccardo

    2014-10-14

    Nonequilibrium Candidate Monte Carlo simulation [Nilmeier et al., Proc. Natl. Acad. Sci. U.S.A. 2011, 108, E1009-E1018] is a tool devised to design Monte Carlo moves with high acceptance probabilities that connect uncorrelated configurations. Such moves are generated through nonequilibrium driven dynamics, producing candidate configurations accepted with a Monte Carlo-like criterion that preserves the equilibrium distribution. The probability of accepting a candidate configuration as the next sample in the Markov chain basically depends on the work performed on the system during the nonequilibrium trajectory and increases as that work decreases. It is thus strategically relevant to find ways of producing nonequilibrium moves with low work, namely moves where dissipation is as low as possible. This is the goal of our methodology, in which we combine Nonequilibrium Candidate Monte Carlo with the Configurational Freezing schemes developed by Nicolini et al. (J. Chem. Theory Comput. 2011, 7, 582-593). The idea is to limit the configurational sampling to the particles of a well-established region of the simulation sample, namely the region where dissipation occurs, while leaving the other particles fixed. This allows the system to relax faster around the region perturbed by the finite-time switching move and hence reduces the dissipated work, eventually enhancing the probability of accepting the generated move. Our combined approach significantly enhances configurational sampling, as shown by the case of a bistable dimer immersed in a dense fluid.

  7. An Introduction to Monte Carlo Simulation of Statistical physics Problem

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    A brief introduction to the technique of Monte Carlo simulations in statistical physics is presented. The topics covered include statistical ensembles, random and pseudo-random numbers, random sampling techniques, importance sampling, Markov chains, the Metropolis algorithm, continuous phase transitions, statistical errors from correlated and uncorrelated data, finite size scaling, the n-fold way, critical slowing down, blocking techniques, percolation, cluster algorithms, cluster counting, and histogram techniques...

  8. A Markov Chain Monte Carlo Inversion Approach for Assessing Reactive Chemistry Along a Flow Path with Application to Subsurface CO2 Injection

    Science.gov (United States)

    McNab, W. W.; Ramirez, A. L.; Johnson, J.

    2011-12-01

    A Markov Chain Monte Carlo (MCMC) modeling approach has been developed to identify the distribution of key reactive mineral phases along a flow path between a CO2 injector well and a monitor well at the Weyburn-Midale field in Saskatchewan. The method entailed postulating a spatially-correlated mineral distribution, consisting of calcite, dolomite, anhydrite, and K-feldspar, with specified volume fractions and intrinsic dissolution rates, in contact with an ambient brine composition along a 1-D flow path. Multiple forward reactive transport simulations for the column were run (using PHREEQC for this particular application), with simulated changes in brine chemistry compared with controlled test problem output or real field data. A composite likelihood function was calculated for a set of geochemical parameters consisting of pH and the concentrations of Ca2+, Mg2+, and Si that serve as potential indicators of dissolution reactions along the flow path. New realizations were proposed by replacing a small contiguous section of the column with a new distribution of minerals, but one which was still spatially correlated with the remainder of the column. Proposed realizations were accepted when the ratio of the composite likelihood function value to that of the prior proposal was greater than a random number selected from a uniform distribution between 0 and 1 (the well-known Metropolis-Hastings acceptance criterion). If a proposal was accepted, the modified mineral distribution served as the basis for a new distribution; otherwise the modification was rejected as a non-improvement. Application of the inverse modeling approach to a synthetic problem demonstrated nearly complete recovery of a specified initial mineral distribution (i.e., "synthetic truth") along the flow path, provided that parameter data were utilized across the entire column.
Partial recovery of the synthetic truth was still achievable as the amount of data available for inversion was reduced to
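
The acceptance rule described above — accept when the likelihood ratio exceeds a uniform random number on (0, 1) — is the standard Metropolis-Hastings step for a symmetric proposal, and can be sketched in isolation (the likelihood values below are illustrative):

```python
import math
import random

def mh_accept(log_like_new, log_like_old, rng):
    """Metropolis-Hastings rule for a symmetric proposal: accept when the
    likelihood ratio exceeds a uniform random number drawn from (0, 1)."""
    ratio = math.exp(min(0.0, log_like_new - log_like_old))
    return rng.random() < ratio

rng = random.Random(0)
improves = mh_accept(-1.0, -2.0, rng)       # better proposal: always accepted
worse = sum(mh_accept(-10.0, -1.0, rng)     # much worse: rarely accepted
            for _ in range(10000))
```

Working with log-likelihoods, as here, avoids underflow when the composite likelihood is a product over many observations.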

  9. Inverse modeling of cloud-aerosol interactions – Part 2: Sensitivity tests on liquid phase clouds using a Markov Chain Monte Carlo based simulation approach

    Directory of Open Access Journals (Sweden)

    D. G. Partridge

    2011-07-01

    Full Text Available This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov Chain Monte Carlo (MCMC) algorithm to a pseudo-adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis tools to investigate the sensitivity of a cloud model to input aerosol physiochemical parameters. Using synthetic data as observed values of the cloud droplet number concentration (CDNC) distribution, this inverse modelling framework is shown to successfully converge to the correct calibration parameters.

    The employed analysis method provides a new, integrative framework to evaluate the sensitivity of the derived CDNC distribution to the input parameters describing the lognormal properties of the accumulation mode and the particle chemistry. To a large extent, results from prior studies are confirmed, but the present study also provides some additional insightful findings. There is a clear transition from very clean marine Arctic conditions, where the aerosol parameters representing the mean radius and geometric standard deviation of the accumulation mode are found to be most important for determining the CDNC distribution, to very polluted continental environments (aerosol concentration in the accumulation mode >1000 cm−3), where particle chemistry is more important than both number concentration and size of the accumulation mode.

    The competition and compensation between the cloud model input parameters illustrate that if the soluble mass fraction is reduced, the number of particles, the geometric standard deviation, and the mean radius of the accumulation mode must all increase in order to achieve the same CDNC distribution.

    For more polluted aerosol conditions, with a reduction in soluble mass fraction the parameter correlation becomes weaker and more non-linear over the range of possible solutions

  10. Application and Evaluation of a Snowmelt Runoff Model in the Tamor River Basin, Eastern Himalaya Using a Markov Chain Monte Carlo (MCMC) Data Assimilation Approach

    Science.gov (United States)

    Panday, Prajjwal K.; Williams, Christopher A.; Frey, Karen E.; Brown, Molly E.

    2013-01-01

    Previous studies have drawn attention to substantial hydrological changes taking place in mountainous watersheds where hydrology is dominated by cryospheric processes. Modelling is an important tool for understanding these changes but is particularly challenging in mountainous terrain owing to scarcity of ground observations and uncertainty of model parameters across space and time. This study utilizes a Markov Chain Monte Carlo data assimilation approach to examine and evaluate the performance of a conceptual, degree-day snowmelt runoff model applied in the Tamor River basin in the eastern Nepalese Himalaya. The snowmelt runoff model is calibrated using daily streamflow from 2002 to 2006 with fairly high accuracy (average Nash-Sutcliffe metric approx. 0.84). The assimilation approach identifies the parameters to which the model is most sensitive (e.g. lapse rate and recession coefficient) and maximizes model fit and performance. The average snowmelt contribution to total runoff in the Tamor River basin for the 2002-2006 period is estimated to be 29.7+/-2.9% (which includes 4.2+/-0.9% from snowfall that promptly melts), whereas 70.3+/-2.6% is attributed to contributions from rainfall. On average, the elevation zone in the 4000-5500 m range contributes the most to basin runoff, averaging 56.9+/-3.6% of all snowmelt input and 28.9+/-1.1% of all rainfall input to runoff. Model simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall versus snowmelt compared with simulations using observed station precipitation. Model experiments indicate that the hydrograph itself does not constrain estimates of snowmelt versus rainfall contributions to total outflow but that this derives from the degree-day melting model. Lastly, we demonstrate that the data assimilation approach is
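
The conceptual degree-day model at the core of such studies computes melt as a degree-day factor times the temperature excess above a threshold. A minimal sketch follows; the factor and threshold values are illustrative defaults, not the calibrated Tamor basin parameters:

```python
def degree_day_melt(temps_c, ddf=5.0, t_crit=0.0):
    """Daily melt depth (mm) from a degree-day model:
    melt = DDF * max(T - T_crit, 0), with DDF in mm per degree-day.
    The DDF and threshold here are illustrative, not basin-calibrated."""
    return [ddf * max(t - t_crit, 0.0) for t in temps_c]

melt = degree_day_melt([-2.0, 0.5, 3.0])
# → [0.0, 2.5, 15.0] mm: no melt on the day below the threshold temperature
```

In an MCMC calibration, parameters such as ddf, t_crit, the temperature lapse rate, and the recession coefficient would be sampled, with simulated streamflow compared against observations in the likelihood.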

  11. Stochastic Engine Final Report: Applying Markov Chain Monte Carlo Methods with Importance Sampling to Large-Scale Data-Driven Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Glaser, R E; Johannesson, G; Sengupta, S; Kosovic, B; Carle, S; Franz, G A; Aines, R D; Nitao, J J; Hanley, W G; Ramirez, A L; Newmark, R L; Johnson, V M; Dyer, K M; Henderson, K A; Sugiyama, G A; Hickling, T L; Pasyanos, M E; Jones, D A; Grimm, R J; Levine, R A

    2004-03-11

    Accurate prediction of complex phenomena can be greatly enhanced through the use of data and observations to update simulations. The ability to create these data-driven simulations is limited by error and uncertainty in both the data and the simulation. The stochastic engine project addressed this problem through the development and application of a family of Markov Chain Monte Carlo methods utilizing importance sampling driven by forward simulators to minimize the time spent searching very large state spaces. The stochastic engine rapidly chooses among a very large number of hypothesized states and selects those that are consistent (within error) with all the information at hand. Predicted measurements from the simulator are used to estimate the likelihood of actual measurements, which in turn reduces the uncertainty in the original sample space via a conditional probability method called Bayesian inference. This highly efficient, staged Metropolis-type search algorithm allows us to address extremely complex problems and opens the door to solving many data-driven, nonlinear, multidimensional problems. A key challenge has been developing representation methods that integrate the local details of real data with the global physics of the simulations, enabling supercomputers to efficiently solve the problem. Development focused on large-scale problems, and on examining the mathematical robustness of the approach in diverse applications. Multiple data types were combined with large-scale simulations to evaluate systems with approximately 10^20,000 possible states (detecting underground leaks at the Hanford waste tanks). The probable uses of chemical process facilities were assessed using an evidence-tree representation and in-process updating.
Other applications included contaminant flow paths at the Savannah River Site, locating structural flaws in buildings, improving models for seismic travel times systems used to monitor nuclear proliferation, characterizing the source

  12. A simple new way to help speed up Monte Carlo convergence rates: Energy-scaled displacement Monte Carlo

    Science.gov (United States)

    Goldman, Saul

    1983-10-01

    A method we call energy-scaled displacement Monte Carlo (ESDMC), whose purpose is to improve sampling efficiency and thereby speed up convergence rates in Monte Carlo calculations, is presented. The method involves scaling the maximum displacement a particle may make on a trial move to the particle's configurational energy. The scaling is such that on the average, the most stable particles make the smallest moves and the most energetic particles the largest moves. The method is compared to Metropolis Monte Carlo (MMC) and Force Bias Monte Carlo (FBMC) by applying all three methods to a dense Lennard-Jones fluid at two temperatures, and to hot ST2 water. The functions monitored as the Markov chains developed were, for the Lennard-Jones case: melting, radial distribution functions, internal energies, and heat capacities. For hot ST2 water, we monitored energies and heat capacities. The results suggest that ESDMC samples configuration space more efficiently than either MMC or FBMC in these systems for the biasing parameters used here. The benefit from using ESDMC seemed greatest for the Lennard-Jones systems.
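
The core idea — scaling the maximum trial displacement to a particle's configurational energy — can be sketched with a simple linear mapping. Both the linear form and the displacement bounds below are illustrative assumptions; the paper's actual scaling function may differ:

```python
def scaled_max_displacement(energy, e_min, e_max, d_min=0.01, d_max=0.3):
    """Linear map from configurational energy to maximum trial displacement:
    the most stable particles (energy near e_min) get the smallest moves,
    the most energetic the largest. The linear form and the bounds are
    illustrative; the original method's scaling may differ."""
    if e_max == e_min:
        return 0.5 * (d_min + d_max)
    frac = (energy - e_min) / (e_max - e_min)
    frac = min(max(frac, 0.0), 1.0)   # clamp to [0, 1]
    return d_min + frac * (d_max - d_min)

d_mid = scaled_max_displacement(-1.0, e_min=-3.0, e_max=1.0)  # mid-range energy
d_low = scaled_max_displacement(-3.0, e_min=-3.0, e_max=1.0)  # most stable
```

In a full simulation each trial move would then displace the chosen particle by a uniform vector bounded by its scaled maximum, with the usual Metropolis acceptance test (plus any correction required to keep the sampling exact when the step size depends on the configuration).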

  13. An enhanced Monte Carlo outlier detection method.

    Science.gov (United States)

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method that establishes cross-prediction models based on determinate normal samples and analyzes the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that it outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
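
The underlying Monte Carlo cross-validation scheme - repeatedly resplitting the data, refitting a model, and examining each sample's held-out prediction errors - can be sketched as follows. The linear toy data and the 3x-median cutoff are illustrative assumptions, not the authors' procedure.

```python
import random
import statistics

random.seed(2)

# Toy data: y = 2x + 1 with small noise, plus two planted outliers.
xs = [i / 10 for i in range(30)]
ys = [2 * x + 1 + random.gauss(0, 0.05) for x in xs]
ys[5] += 3.0     # planted outlier
ys[20] -= 3.0    # planted outlier

def fit_line(pts):
    # ordinary least squares for y = a*x + b
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    a = sxy / sxx
    return a, my - a * mx

errors = {i: [] for i in range(len(xs))}
for _ in range(500):                  # Monte Carlo cross-validation rounds
    idx = list(range(len(xs)))
    random.shuffle(idx)
    train, held_out = idx[:20], idx[20:]
    a, b = fit_line([(xs[i], ys[i]) for i in train])
    for i in held_out:                # record each held-out sample's error
        errors[i].append(abs(ys[i] - (a * xs[i] + b)))

mean_err = {i: statistics.mean(e) for i, e in errors.items()}
cutoff = 3 * statistics.median(mean_err.values())   # illustrative threshold
outliers = sorted(i for i, e in mean_err.items() if e > cutoff)
# the planted points (indices 5 and 20) stand out by this criterion
```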

  14. Monte Carlo Simulation for Particle Detectors

    CERN Document Server

    Pia, Maria Grazia

    2012-01-01

    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  15. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
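
The telescoping MLMC estimator can be illustrated on a toy problem. Assuming a simple Ornstein-Uhlenbeck-type SDE discretized by the Euler scheme (not the homogenization setting of the article), levels differ in time step, and each coarse level reuses the fine level's noise so that the level corrections have small variance:

```python
import math
import random

random.seed(3)

# Toy SDE dX = -X dt + dW on [0,1] with X(0) = 1; we estimate E[X(1)] = e^-1.
def fine_only(level):
    # plain Euler path at step h = 2^-level
    n = 2 ** level
    h = 1.0 / n
    x = 1.0
    for _ in range(n):
        x += -x * h + random.gauss(0.0, math.sqrt(h))
    return x

def coupled_paths(level):
    # Euler paths at steps h (fine) and 2h (coarse) driven by the same noise
    n = 2 ** level
    h = 1.0 / n
    xf = xc = 1.0
    for _ in range(n // 2):
        dw1 = random.gauss(0.0, math.sqrt(h))
        dw2 = random.gauss(0.0, math.sqrt(h))
        xf += -xf * h + dw1
        xf += -xf * h + dw2
        xc += -xc * (2.0 * h) + (dw1 + dw2)   # coarse step reuses fine noise
    return xf, xc

# Telescoping sum: E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}], with
# geometrically fewer samples on the expensive fine levels.
L, N0 = 5, 40000
estimate = sum(fine_only(0) for _ in range(N0)) / N0
for level in range(1, L + 1):
    n_samples = max(N0 // 4 ** level, 100)
    corr = 0.0
    for _ in range(n_samples):
        xf, xc = coupled_paths(level)
        corr += xf - xc
    estimate += corr / n_samples
# estimate approximates e^-1 ≈ 0.368 up to fine-level bias and MC error
```

The cheap level-0 samples carry most of the variance, while the expensive correction terms shrink with level, which is the source of the speed-up over standard Monte Carlo.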

  16. Hybrid Monte Carlo with Chaotic Mixing

    CERN Document Server

    Kadakia, Nirag

    2016-01-01

    We propose a hybrid Monte Carlo (HMC) technique applicable to high-dimensional multivariate normal distributions that effectively samples along chaotic trajectories. The method is predicated on the freedom of choice of the HMC momentum distribution, and due to its mixing properties, exhibits sample-to-sample autocorrelations that decay far faster than those in the traditional hybrid Monte Carlo algorithm. We test the methods on distributions of varying correlation structure, finding that the proposed technique produces superior covariance estimates, is less reliant on step-size tuning, and can even function with sparse or no momentum re-sampling. The method presented here is promising for more general distributions, such as those that arise in Bayesian learning of artificial neural networks and in the state and parameter estimation of dynamical systems.
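
For reference, a standard HMC baseline (leapfrog integration with Gaussian momentum resampling, rather than the chaotic mixing proposed in the paper) might look like this for a bivariate normal target; step size and trajectory length are arbitrary illustrative choices:

```python
import math
import random

random.seed(4)

# Target: standard bivariate normal, potential U(x) = 0.5*(x1^2 + x2^2).
def grad_U(x):
    return [x[0], x[1]]

def hamiltonian(x, p):
    return 0.5 * sum(v * v for v in x) + 0.5 * sum(v * v for v in p)

def leapfrog(x, p, eps, n_steps):
    x, p = list(x), list(p)
    g = grad_U(x)
    for _ in range(n_steps):
        p = [pi - 0.5 * eps * gi for pi, gi in zip(p, g)]   # half kick
        x = [xi + eps * pi for xi, pi in zip(x, p)]         # drift
        g = grad_U(x)
        p = [pi - 0.5 * eps * gi for pi, gi in zip(p, g)]   # half kick
    return x, p

def hmc(n_samples=5000, eps=0.2, n_steps=10):
    x, out = [0.0, 0.0], []
    for _ in range(n_samples):
        p = [random.gauss(0, 1), random.gauss(0, 1)]        # momentum resampling
        x_new, p_new = leapfrog(x, p, eps, n_steps)
        # Metropolis correction for the leapfrog integration error
        if math.log(random.random()) < hamiltonian(x, p) - hamiltonian(x_new, p_new):
            x = x_new
        out.append(list(x))
    return out

samples = hmc()
var1 = sum(s[0] ** 2 for s in samples) / len(samples)   # target variance is 1
```

The freedom the paper exploits lies in the momentum resampling line: any distribution that leaves the target marginal invariant may be substituted there.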

  17. Composite biasing in Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-01-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...

  18. Accelerated Monte Carlo by Embedded Cluster Dynamics

    Science.gov (United States)

    Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.

    1991-07-01

    We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and Wolff are summarized and variations are suggested for the O( N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.

  19. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    Science.gov (United States)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis) as well as laterally asymmetric features (e.g. melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.

  20. An introduction to Monte Carlo methods

    Science.gov (United States)

    Walter, J.-C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called worm algorithm. We conclude with a discussion of dynamical effects such as thermalization and correlation time.
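
The Metropolis algorithm for the 2D Ising model mentioned above can be sketched in a few lines; lattice size, temperature and sweep counts are arbitrary choices for illustration:

```python
import math
import random

random.seed(5)

L = 16                       # 16x16 periodic square lattice
T = 5.0                      # well above Tc ≈ 2.269: disordered phase
spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def delta_E(i, j):
    # energy change for flipping spin (i, j); J = 1, periodic boundaries
    s = spins[i][j]
    nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
          + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    return 2 * s * nb

def sweep():
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = delta_E(i, j)
        # Metropolis acceptance: always accept downhill, Boltzmann uphill
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1

for _ in range(200):                     # thermalization
    sweep()
mags = []
for _ in range(400):                     # measurement
    sweep()
    mags.append(abs(sum(sum(row) for row in spins)) / (L * L))
mean_abs_m = sum(mags) / len(mags)       # small in the disordered phase
```

At this high temperature single-spin flips decorrelate quickly; near Tc the same code slows down dramatically, which is what motivates the cluster algorithms discussed in the text.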

  1. A Quantum Monte Carlo Study on Mixed-Spin Chains of 1/2-1/2-1-1 and 3/2-3/2-1-1

    Institute of Scientific and Technical Information of China (English)

    XU Zhao-Xin; ZHANG Jun; YING He-Ping

    2003-01-01

    The ground-state and thermodynamic properties of quantum mixed-spin chains of 1/2-1/2-1-1 and 3/2-3/2-1-1 are investigated by quantum Monte Carlo simulation with the loop-cluster algorithm. For the 1/2-1/2-1-1 chain, we find it has two phases separated by an energy-gap vanishing point in the ground state. For the 3/2-3/2-1-1 chain, the numerical results show two energy-gap vanishing points isolated by different phases in its ground state. Our calculations indicate that all these ground-state phases can be understood by means of the valence-bond-solid picture, and that the thermodynamic behavior at finite temperatures is continuous as a function of the parameter α=J2/J1.

  2. Monte Carlo methods for light propagation in biological tissues.

    Science.gov (United States)

    Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine

    2015-11-01

    Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis-Hastings algorithm. The resulting estimation methods are then compared to the so-called Wang-Prahl (or Wang) method. Finally, the formal representation allows us to derive a nonlinear optimization algorithm, close to Levenberg-Marquardt, that is used for the estimation of the scattering and absorption coefficients of the tissue from measurements.
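
The basic Monte Carlo photon random walk underlying such estimates can be sketched for a homogeneous slab; the optical coefficients, isotropic phase function and crude weight cutoff below are illustrative simplifications (production codes use anisotropic scattering and Russian roulette):

```python
import math
import random

random.seed(6)

# Toy photon random walk through a homogeneous scattering/absorbing slab,
# estimating the transmitted fraction of light.
mu_s, mu_a = 10.0, 1.0       # scattering / absorption coefficients (1/cm)
mu_t = mu_s + mu_a
thickness = 0.2              # slab thickness (cm)

def run_photon():
    z, uz, weight = 0.0, 1.0, 1.0        # photon enters at z = 0 going inward
    while True:
        step = -math.log(random.random()) / mu_t   # sampled free path length
        z += uz * step
        if z >= thickness:
            return weight                # transmitted through the slab
        if z < 0.0:
            return 0.0                   # escaped backwards (reflectance)
        weight *= mu_s / mu_t            # implicit capture (survival weighting)
        uz = random.uniform(-1.0, 1.0)   # isotropic scattering: new cos(theta)
        if weight < 1e-4:
            return 0.0                   # crude cutoff; real codes use roulette

n = 20000
transmitted = sum(run_photon() for _ in range(n)) / n
```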

  3. Monte Carlo Study of Real Time Dynamics on the Lattice

    Science.gov (United States)

    Alexandru, Andrei; Başar, Gökçe; Bedaque, Paulo F.; Vartak, Sohan; Warrington, Neill C.

    2016-08-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from a highly oscillatory phase of the path integral. In this Letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and, in principle, applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  4. An Overview of the Monte Carlo Application ToolKit (MCATK)

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-07

    MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies with the motivation of reducing costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, time-dependent algorithm, fission chain algorithms; MCATK geometry – mesh geometries, solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.

  5. Power System Probabilistic Transient Stability Assessment Based on Markov Chain Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    叶圣永; 王晓茹; 周曙; 刘志刚; 钱清泉

    2012-01-01

    A method for the probabilistic transient stability assessment of power systems is studied, in which the Markov chain Monte Carlo method is used to simulate load levels while accounting for the correlation between random samples, making the approach better suited to actual power systems. During the simulation, a transient stability assessment method based on AdaBoost-DT is proposed that takes fault information as its input features. Simulations of the New England 39-bus test system show that the Markov chain Monte Carlo method converges faster than the traditional Monte Carlo method, while AdaBoost-DT dramatically reduces simulation time and effectively predicts transient stability.

  6. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of fast critical assembly, core analyses of JMTR, simulation of pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  7. Suppression of the initial transient in Monte Carlo criticality simulations; Suppression du regime transitoire initial des simulations Monte-Carlo de criticite

    Energy Technology Data Exchange (ETDEWEB)

    Richet, Y

    2006-12-15

    Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) for a fissile system through iterations simulating neutron propagation (making a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimation, defined as the mean of the k-effective computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to detect stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Finally, the best-performing methodologies observed in these tests are selected and allow us to improve industrial Monte Carlo criticality calculations. (author)

  8. Status of Monte-Carlo Event Generators

    Energy Technology Data Exchange (ETDEWEB)

    Hoeche, Stefan; /SLAC

    2011-08-11

    Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.

  9. FAST CONVERGENT MONTE CARLO RECEIVER FOR OFDM SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Wu Lili; Liao Guisheng; Bao Zheng; Shang Yong

    2005-01-01

    The paper investigates the problem of the design of an optimal Orthogonal Frequency Division Multiplexing (OFDM) receiver against unknown frequency selective fading. A fast convergent Monte Carlo receiver is proposed. In the proposed method, Markov Chain Monte Carlo (MCMC) methods are employed for blind Bayesian detection without channel estimation. Meanwhile, by exploiting the characteristics of OFDM systems, two methods are employed to improve the convergence rate and enhance the efficiency of the MCMC algorithms. One is the integration of the posterior distribution function with respect to the associated channel parameters, which is involved in the derivation of the objective distribution function; the other is intra-symbol differential coding for the elimination of the bimodality problem resulting from the presence of unknown fading channels. Moreover, no matrix inversion is needed thanks to the orthogonality property of OFDM modulation, and hence the computational load is significantly reduced. Computer simulation results show the effectiveness of the fast convergent Monte Carlo receiver.

  10. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  11. Mosaic crystal algorithm for Monte Carlo simulations

    CERN Document Server

    Seeger, P A

    2002-01-01

    An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)

  12. Monte Carlo simulation for the transport beamline

    Energy Technology Data Exchange (ETDEWEB)

    Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)

    2013-07-26

    In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement an MC-based 3D treatment planning system in order to optimize the number of shots and the dose delivery.

  13. A note on simultaneous Monte Carlo tests

    DEFF Research Database (Denmark)

    Hahn, Ute

    In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I are considered, that reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and lower curve for some t in I. A construction of the acceptance region is proposed that complies with a given target level of rejection and yields exact p-values. The construction is based on pointwise quantiles, estimated from simulated realizations of X(t) under the null hypothesis.
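
A naive pointwise-quantile band (without any correction of the pointwise levels toward a target global level) can be sketched as follows; it also exhibits the multiple-testing inflation that motivates a more careful construction. The random-walk null model here is purely illustrative:

```python
import random

random.seed(7)

n_points = 20

def simulate_curve():
    # illustrative null model: cumulative sums of centred Gaussian noise
    x, curve = 0.0, []
    for _ in range(n_points):
        x += random.gauss(0, 1)
        curve.append(x)
    return curve

# acceptance band from pointwise 2.5% / 97.5% quantiles of 999 null curves
sims = [simulate_curve() for _ in range(999)]
lower, upper = [], []
k = 25
for t in range(n_points):
    vals = sorted(s[t] for s in sims)
    lower.append(vals[k - 1])
    upper.append(vals[-k])

def rejects(curve):
    return any(not (lo <= x <= hi) for x, lo, hi in zip(curve, lower, upper))

# naive pointwise bands reject a true null curve more often than 5% overall:
# the multiple-testing effect that a calibrated acceptance region removes
null_rej = sum(rejects(simulate_curve()) for _ in range(2000)) / 2000
```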

  14. A Monte Carlo algorithm for degenerate plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.

    2013-09-15

    A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
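
Initialising particles according to the Fermi-Dirac distribution, as described above, can be done by simple rejection sampling; the units, energy cutoff and degeneracy parameters below are illustrative assumptions:

```python
import math
import random

random.seed(8)

# Rejection sampling of particle energies from a Fermi-Dirac distribution,
# f(E) ∝ sqrt(E) / (exp((E - mu)/T) + 1), in hypothetical units with mu = 1.
mu, T = 1.0, 0.1          # T << mu: strongly degenerate
E_max = mu + 20.0 * T     # occupation is negligible beyond this cutoff

def fd(E):
    return math.sqrt(E) / (math.exp((E - mu) / T) + 1.0)

def sample_energy():
    bound = math.sqrt(E_max)          # envelope: f(E) <= sqrt(E) <= sqrt(E_max)
    while True:
        E = random.uniform(0.0, E_max)
        if random.random() * bound < fd(E):
            return E

energies = [sample_energy() for _ in range(20000)]
mean_E = sum(energies) / len(energies)
# as T -> 0 the mean kinetic energy tends to (3/5)*mu = 0.6
```

A full scheme along the lines of the abstract would then apply Pauli blocking during collisions, rejecting scattering events whose final states are already occupied.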

  15. Archimedes, the Free Monte Carlo simulator

    CERN Document Server

    Sellier, Jean Michel D

    2012-01-01

    Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004 and since then it has been improved with many new features like quantum corrections, magnetic fields, new materials, GUI, etc. This document represents the first attempt to have a complete manual. Many of the Physics models implemented are described and a detailed description is presented to make the user able to write his/her own input deck. Please, feel free to contact the author if you want to contribute to the project.

  16. Cluster hybrid Monte Carlo simulation algorithms

    Science.gov (United States)

    Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
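
The Wolff single-cluster move referred to above can be sketched as follows for the spin-1/2 Ising model; a hybrid scheme would simply interleave such cluster flips with Metropolis single-spin sweeps. Lattice size and step counts are illustrative:

```python
import math
import random

random.seed(9)

L = 16                                  # 16x16 periodic lattice
T = 2.269                               # near the 2D Ising critical point
p_add = 1.0 - math.exp(-2.0 / T)        # bond-activation probability, J = 1
spins = [[1] * L for _ in range(L)]

def wolff_step():
    # grow a single cluster from a random seed site, then flip it
    i, j = random.randrange(L), random.randrange(L)
    seed_spin = spins[i][j]
    cluster = {(i, j)}
    frontier = [(i, j)]
    while frontier:
        ci, cj = frontier.pop()
        for ni, nj in (((ci + 1) % L, cj), ((ci - 1) % L, cj),
                       (ci, (cj + 1) % L), (ci, (cj - 1) % L)):
            if (ni, nj) not in cluster and spins[ni][nj] == seed_spin \
                    and random.random() < p_add:
                cluster.add((ni, nj))
                frontier.append((ni, nj))
    for ci, cj in cluster:
        spins[ci][cj] = -seed_spin
    return len(cluster)

# a hybrid scheme would interleave these cluster flips with Metropolis
# single-spin sweeps; here we run Wolff on its own
sizes = [wolff_step() for _ in range(2000)]
mean_cluster = sum(sizes) / len(sizes)  # clusters are large near criticality
```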

  17. Introduction to Cluster Monte Carlo Algorithms

    Science.gov (United States)

    Luijten, E.

    This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.

  18. Exascale Monte Carlo R&D

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-24

    This presentation covers: (1) exascale computing - the different technologies and how to get there; (2) the high-performance proof-of-concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware-agnostic tool. MCMini demonstrates the possibility of GPGPU-based Monte Carlo methods - it shows great scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  19. Cell-veto Monte Carlo algorithm for long-range systems

    Science.gov (United States)

    Kapfer, Sebastian C.; Krauth, Werner

    2016-09-01

    We present a rigorous efficient event-chain Monte Carlo algorithm for long-range interacting particle systems. Using a cell-veto scheme within the factorized Metropolis algorithm, we compute each single-particle move with a fixed number of operations. For slowly decaying potentials such as Coulomb interactions, screening line charges allow us to take into account periodic boundary conditions. We discuss the performance of the cell-veto Monte Carlo algorithm for general inverse-power-law potentials, and illustrate how it provides a new outlook on one of the prominent bottlenecks in large-scale atomistic Monte Carlo simulations.

  20. Quasi-Monte Carlo methods for lattice systems: a first look

    CERN Document Server

    Jansen, K; Nube, A; Griewank, A; Müller-Preussker, M

    2013-01-01

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like 1/Sqrt(N), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to 1/N. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
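
The 1/sqrt(N) versus roughly 1/N error behaviour is easy to demonstrate on a one-dimensional toy integral, using the base-2 van der Corput sequence as the quasi-random point set (the lattice systems in the paper are of course far higher-dimensional):

```python
import math
import random

random.seed(10)

def van_der_corput(n, base=2):
    # n-th point of the van der Corput low-discrepancy sequence
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

f = math.sin                       # integrate sin(x) over [0,1]
exact = 1.0 - math.cos(1.0)
N = 4096

mc_est = sum(f(random.random()) for _ in range(N)) / N
qmc_est = sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N

mc_err = abs(mc_est - exact)       # pseudo-random: error ~ 1/sqrt(N)
qmc_err = abs(qmc_est - exact)     # quasi-random: error ~ log(N)/N
```

For this smooth integrand the quasi-random error is typically orders of magnitude below the pseudo-random one at the same N.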

  1. State-of-the-art Monte Carlo 1988

    Energy Technology Data Exchange (ETDEWEB)

    Soran, P.D.

    1988-06-28

    Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.

  2. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    Energy Technology Data Exchange (ETDEWEB)

    Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  3. Alternative Monte Carlo Approach for General Global Illumination

    Institute of Scientific and Technical Information of China (English)

    徐庆; 李朋; 徐源; 孙济洲

    2004-01-01

    An alternative Monte Carlo strategy for the computation of the global illumination problem is presented. The proposed approach provides a new and optimal way of solving Monte Carlo global illumination based on the zero-variance importance sampling procedure. A new importance-driven Monte Carlo global illumination algorithm in the framework of the new computing scheme was developed and implemented. Results obtained by rendering test scenes show that this new framework and the newly derived algorithm are effective and promising.

  4. Multiple Monte Carlo Testing with Applications in Spatial Point Processes

    DEFF Research Database (Denmark)

    Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute

    The rank envelope test (Myllymäki et al., Global envelope tests for spatial processes, arXiv:1307.0239 [stat.ME]) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: 1) a few univariate Monte Carlo tests, 2) a Monte Carlo test with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case, and it is accompanied by a p-value and a graphical interpretation which shows which subtest or which distances of the used test function...

  5. Metrics for Diagnosing Undersampling in Monte Carlo Tally Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, Christopher M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Div.; Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Div.

    2015-01-01

    This study explored the potential of using Markov chain convergence diagnostics to predict the prevalence and magnitude of biases due to undersampling in Monte Carlo eigenvalue and flux tally estimates. Five metrics were applied to two models of pressurized water reactor fuel assemblies and their potential for identifying undersampling biases was evaluated by comparing the calculated test metrics with known biases in the tallies. Three of the five undersampling metrics showed the potential to accurately predict the behavior of undersampling biases in the responses examined in this study.
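Convergence diagnostics of the kind applied in this study compare statistics across portions of the sampled chain. A toy illustration is a naive Geweke-style z-score (the actual metrics evaluated at ORNL are more involved; this is an assumed simplification for illustration):

```python
import math

def geweke_z(chain, first=0.1, last=0.5):
    """Geweke-style convergence diagnostic: z-score comparing the mean
    of the early part of a chain with the mean of the late part.  The
    naive variance estimate below assumes uncorrelated samples; real
    diagnostics use spectral variance estimates instead."""
    a = chain[: int(first * len(chain))]
    b = chain[int((1.0 - last) * len(chain)):]
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))
```

A |z| well above 2 flags a chain whose early samples are not yet representative of the stationary distribution, the same failure mode that biases undersampled tallies.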

  6. Monte Carlo simulation of AB-copolymers with saturating bonds

    DEFF Research Database (Denmark)

    Chertovich, A.C.; Ivanov, V.A.; Khokhlov, A.R.;

    2003-01-01

    Structural transitions in a single AB-copolymer chain where saturating bonds can be formed between A- and B-units are studied by means of Monte Carlo computer simulations using the bond fluctuation model. Three transitions are found, coil-globule, coil-hairpin and globule-hairpin, depending...... on the nature of a particular AB-sequence: statistical random sequence, diblock sequence and 'random-complementary' sequence (one-half of such an AB-sequence is random with Bernoulli statistics while the other half is complementary to the first one). The properties of random-complementary sequences are closer...

  7. Discrete range clustering using Monte Carlo methods

    Science.gov (United States)

    Chatterji, G. B.; Sridhar, B.

    1993-01-01

    For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
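The contrast between a basic Metropolis-style Monte Carlo and simulated annealing can be illustrated on a toy 1D clustering problem: propose relabeling one range point at a time and accept uphill moves with a Boltzmann probability at a (slowly decreasing) temperature. A minimal sketch under assumed simplifications, unrelated to the authors' actual code:

```python
import math
import random

def cost(points, labels, k):
    """Within-cluster sum of squared distances to cluster centroids."""
    total = 0.0
    for c in range(k):
        members = [p for p, lab in zip(points, labels) if lab == c]
        if members:
            mu = sum(members) / len(members)
            total += sum((p - mu) ** 2 for p in members)
    return total

def anneal_cluster(points, k, steps=20000, t0=1.0, cooling=0.9995):
    """Cluster 1D range values by relabeling one point per step,
    accepting uphill moves with Boltzmann probability at a slowly
    decreasing temperature (simulated annealing)."""
    labels = [random.randrange(k) for _ in points]
    c_cur, t = cost(points, labels, k), t0
    for _ in range(steps):
        i, new = random.randrange(len(points)), random.randrange(k)
        old = labels[i]
        labels[i] = new
        c_new = cost(points, labels, k)
        if c_new <= c_cur or random.random() < math.exp((c_cur - c_new) / t):
            c_cur = c_new      # accept the move
        else:
            labels[i] = old    # reject and restore the old label
        t *= cooling
    return labels
```

Fixing the temperature instead of cooling it recovers the basic Monte Carlo method the paper compares against.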

  8. THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE

    Energy Technology Data Exchange (ETDEWEB)

    WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory

    2007-01-10

MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and its excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics, particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C and discusses ongoing code development.

  9. Chemical application of diffusion quantum Monte Carlo

    Science.gov (United States)

    Reynolds, P. J.; Lester, W. A., Jr.

    1983-10-01

    The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.

  10. Quantum Monte Carlo Endstation for Petascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Lubos Mitas

    2011-01-26

NCSU research group has been focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as a part of the Endstation petaflop initiative for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects in application of these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining and enhancing the impact of these advanced computational approaches. In particular, we have developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments build upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We have developed and redesigned a number of code modules, such as evaluation of wave functions and orbitals, calculation of pfaffians, and introduction of backflow coordinates, together with the overall organization of the code and random-walker distribution over multicore architectures. We have addressed several bottlenecks, such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the period of the grant duration; it has resulted in 13

  11. Non-Arrhenius temperature dependence of the island density of one-dimensional Al chains on Si(100): A kinetic Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Albia, Jason R.; Albao, Marvin A., E-mail: maalbao@uplb.edu.ph [Institute of Mathematical Sciences and Physics, University of the Philippines Los Baños, Los Baños 4031 (Philippines)

    2015-03-15

    Classical nucleation theory predicts that the evolution of mean island density with temperature during growth in one-dimensional systems obeys the Arrhenius relation. In this study, kinetic Monte Carlo simulations of a suitable atomistic lattice-gas model were performed to investigate the experimentally observed non-Arrhenius scaling behavior of island density in the case of one-dimensional Al islands grown on Si(100). Previously, it was proposed that adatom desorption resulted in a transition temperature signaling the departure from classical predictions. Here, the authors demonstrate that desorption above the transition temperature is not possible. Instead, the authors posit that the existence of a transition temperature is due to a combination of factors such as reversibility of island growth, presence of C-defects, adatom diffusion rates, as well as detachment rates at island ends. In addition, the authors show that the anomalous non-Arrhenius behavior vanishes when adatom binds irreversibly with C-defects as observed in In on Si(100) studies.
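In kinetic Monte Carlo simulations of this kind, each event (deposition, diffusion, detachment) fires with an Arrhenius rate ν·exp(−E/kT), and the simulation clock advances by exponentially distributed waiting times set by the current total rate. A stripped-down illustration of that event loop for a single first-order process (not the authors' lattice-gas model):

```python
import math
import random

def arrhenius(nu, e_act, kt):
    """Arrhenius rate nu * exp(-E_act / kT) for an activated event."""
    return nu * math.exp(-e_act / kt)

def kmc_decay(n0, rate, t_end):
    """Minimal kinetic Monte Carlo loop for first-order decay A -> B:
    draw an exponential waiting time from the current total rate,
    fire one event, and repeat until the clock passes t_end."""
    n, t = n0, 0.0
    while n > 0:
        t += random.expovariate(rate * n)  # total rate scales with n
        if t > t_end:
            break
        n -= 1
    return n
```

Averaged over many runs, the survivor count approaches n0·exp(−rate·t_end), the analytical first-order result.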

  12. Monte Carlo studies of model Langmuir monolayers.

    Science.gov (United States)

    Opps, S B; Yang, B; Gray, C G; Sullivan, D E

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. 
In most of the present research, the water was treated as a hard

  13. Commensurabilities between ETNOs: a Monte Carlo survey

    CERN Document Server

    Marcos, C de la Fuente

    2016-01-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nin...

  14. Monte Carlo exploration of warped Higgsless models

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, JoAnne L.; Lillie, Benjamin; Rizzo, Thomas Gerard [Stanford Linear Accelerator Center, 2575 Sand Hill Rd., Menlo Park, CA, 94025 (United States)]. E-mail: rizzo@slac.stanford.edu

    2004-10-01

We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the SU(2)_L x SU(2)_R x U(1)_{B-L} gauge group in an AdS_5 bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, ≈ 10 TeV, in W_L^+ W_L^- elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned. (author)

  15. Monte Carlo Exploration of Warped Higgsless Models

    CERN Document Server

    Hewett, J L; Rizzo, T G

    2004-01-01

    We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the $SU(2)_L\\times SU(2)_R\\times U(1)_{B-L}$ gauge group in an AdS$_5$ bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, $\\simeq 10$ TeV, in $W_L^+W_L^-$ elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned.

  16. Experimental Monte Carlo Quantum Process Certification

    CERN Document Server

    Steffen, L; Fedorov, A; Baur, M; Wallraff, A

    2012-01-01

    Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data post-processing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data is compared directly to an ideal process using Monte Carlo sampling. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of two qubit gates, such as the cphase and the cnot gate, and three qubit gates, such as the Toffoli gate and two sequential cphase gates.

  17. Variable length trajectory compressible hybrid Monte Carlo

    CERN Document Server

    Nishimura, Akihiko

    2016-01-01

    Hybrid Monte Carlo (HMC) generates samples from a prescribed probability distribution in a configuration space by simulating Hamiltonian dynamics, followed by the Metropolis (-Hastings) acceptance/rejection step. Compressible HMC (CHMC) generalizes HMC to a situation in which the dynamics is reversible but not necessarily Hamiltonian. This article presents a framework to further extend the algorithm. Within the existing framework, each trajectory of the dynamics must be integrated for the same amount of (random) time to generate a valid Metropolis proposal. Our generalized acceptance/rejection mechanism allows a more deliberate choice of the integration time for each trajectory. The proposed algorithm in particular enables an effective application of variable step size integrators to HMC-type sampling algorithms based on reversible dynamics. The potential of our framework is further demonstrated by another extension of HMC which reduces the wasted computations due to unstable numerical approximations and corr...
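For reference, standard HMC with a fixed trajectory length, the baseline that this article's variable-length scheme generalizes, fits in a few lines. A sketch for a 1D standard-normal target (the implementation details here are illustrative, not the article's):

```python
import math
import random

def leapfrog(q, p, grad_u, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for H = U(q) + p^2/2."""
    p -= 0.5 * eps * grad_u(q)
    for _ in range(n_steps - 1):
        q += eps * p
        p -= eps * grad_u(q)
    q += eps * p
    p -= 0.5 * eps * grad_u(q)
    return q, p

def hmc(u, grad_u, q0, n_samples, eps=0.2, n_steps=10):
    """Fixed-trajectory-length HMC with a Metropolis correction."""
    q, samples = q0, []
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)                  # resample momentum
        h0 = u(q) + 0.5 * p * p
        q_new, p_new = leapfrog(q, p, grad_u, eps, n_steps)
        h1 = u(q_new) + 0.5 * p_new * p_new
        if random.random() < math.exp(min(0.0, h0 - h1)):
            q = q_new                               # accept the proposal
        samples.append(q)
    return samples

# Target: standard normal, U(q) = q^2 / 2, so grad U(q) = q.
samples = hmc(lambda q: 0.5 * q * q, lambda q: q, 0.0, 5000)
```

The article's contribution is precisely to relax the fixed `eps`/`n_steps` trajectory above while keeping the acceptance step valid.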

  18. Monte Carlo Implementation of Polarized Hadronization

    CERN Document Server

    Matevosyan, Hrayr H; Thomas, Anthony W

    2016-01-01

We study polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading-twist quark-to-quark transverse momentum dependent (TMD) splitting functions (SFs) for the elementary $q \to q'+h$ transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank two. Further, we demonstrate that all the current spectator-type model calculations of the leading-twist quark-to-quark TMD SFs violate the positivity constraints, and propose a quark-model-based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence o...

  19. Lunar Regolith Albedos Using Monte Carlos

    Science.gov (United States)

    Wilson, T. L.; Andersen, V.; Pinsky, L. S.

    2003-01-01

    The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.

  20. Gas discharges modeling by Monte Carlo technique

    Directory of Open Access Journals (Sweden)

    Savić Marija

    2010-01-01

The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas-phase ionizations by fast neutrals. In this paper we tried to build a Monte Carlo code that can be used to calculate secondary electron yields for different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sourc. Sci. Technol. 8 (1999) R1].

  1. Morse Monte Carlo Radiation Transport Code System

    Energy Technology Data Exchange (ETDEWEB)

    Emmett, M.B.

    1975-02-01

The report contains sections describing the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine whether the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)

  2. Variational Monte Carlo study of pentaquark states

    Energy Technology Data Exchange (ETDEWEB)

    Mark W. Paris

    2005-07-01

    Accurate numerical solution of the five-body Schrodinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.

  3. Monte Carlo simulation of neutron scattering instruments

    Energy Technology Data Exchange (ETDEWEB)

    Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.

    1998-12-01

A code package consisting of the Monte Carlo Library MCLIB, the executing code MC_RUN, the web application MC_Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC_RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.

  4. Accurate barrier heights using diffusion Monte Carlo

    CERN Document Server

    Krongchon, Kittithat; Wagner, Lucas K

    2016-01-01

    Fixed node diffusion Monte Carlo (DMC) has been performed on a test set of forward and reverse barrier heights for 19 non-hydrogen-transfer reactions, and the nodal error has been assessed. The DMC results are robust to changes in the nodal surface, as assessed by using different mean-field techniques to generate single determinant wave functions. Using these single determinant nodal surfaces, DMC results in errors of 1.5(5) kcal/mol on barrier heights. Using the large data set of DMC energies, we attempted to find good descriptors of the fixed node error. It does not correlate with a number of descriptors including change in density, but does correlate with the gap between the highest occupied and lowest unoccupied orbital energies in the mean-field calculation.

  5. Atomistic Monte Carlo simulation of lipid membranes

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction......, as assessed by calculation of molecular energies and entropies. We also show a transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss the potential...... of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol....

  6. Geometric Monte Carlo and Black Janus Geometries

    CERN Document Server

    Bak, Dongsu; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil

    2016-01-01

We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.

  7. Modeling neutron guides using Monte Carlo simulations

    CERN Document Server

    Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R

    2002-01-01

    Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guides' geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increases while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.

  8. Reporting Monte Carlo Studies in Structural Equation Modeling

    NARCIS (Netherlands)

    Boomsma, Anne

    2013-01-01

In structural equation modeling, Monte Carlo simulations have been used increasingly over the last two decades, as an inventory from the journal Structural Equation Modeling illustrates. Reaching out to a broad audience, this article provides guidelines for reporting Monte Carlo studies in that field.

  9. Quantum Monte Carlo Simulations : Algorithms, Limitations and Applications

    NARCIS (Netherlands)

    Raedt, H. De

    1992-01-01

    A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown

  10. Quantum Monte Carlo using a Stochastic Poisson Solver

    Energy Technology Data Exchange (ETDEWEB)

    Das, D; Martin, R M; Kalos, M H

    2005-05-06

Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is unfeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk on Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
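The classical Walk on Spheres idea is simple to sketch for the Laplace equation: from an interior point, repeatedly jump to a uniformly random point on the largest sphere that fits inside the domain; the exit point on the boundary is a sample of the harmonic measure. A toy 2D version on the unit disk (the paper's Green's-function variant for the Poisson equation with sources is more elaborate):

```python
import math
import random

def wos_sample(x, y, g, radius=1.0, eps=1e-4):
    """One Walk-on-Spheres sample of the harmonic function with
    boundary data g on a disk of the given radius: hop to a uniform
    point on the largest circle that fits, until within eps of the
    boundary, then evaluate g at the nearest boundary point."""
    while True:
        r = math.hypot(x, y)
        d = radius - r                      # distance to the boundary
        if d < eps:
            s = radius / r                  # project onto the boundary
            return g(x * s, y * s)
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(theta)
        y += d * math.sin(theta)

def wos_estimate(x, y, g, n=5000, **kw):
    """Average many independent walks to estimate u(x, y)."""
    return sum(wos_sample(x, y, g, **kw) for _ in range(n)) / n
```

For example, g(px, py) = px is the boundary trace of the harmonic function u(x, y) = x, so `wos_estimate(0.5, 0.0, lambda px, py: px)` should return approximately 0.5.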

  11. Efficiency and accuracy of Monte Carlo (importance) sampling

    NARCIS (Netherlands)

    Waarts, P.H.

    2003-01-01

Monte Carlo analysis is often regarded as the simplest and most accurate reliability method. Besides, it is the most transparent method. The only problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed.
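The low-probability limitation noted above is exactly what importance sampling addresses: shift the sampling density toward the rare event and reweight each hit by the likelihood ratio. A standard textbook example (not from this report) estimates P(X > 4) for X ~ N(0, 1), whose true value is about 3.17e-5:

```python
import math
import random

def crude_mc(n):
    """Crude Monte Carlo for P(X > 4), X ~ N(0,1): with the true
    probability near 3.2e-5, almost every sample misses the event."""
    return sum(random.gauss(0.0, 1.0) > 4.0 for _ in range(n)) / n

def importance_mc(n, shift=4.0):
    """Sample from the shifted proposal N(shift, 1) and reweight each
    hit by the likelihood ratio phi(x) / phi(x - shift),
    which simplifies to exp(shift^2 / 2 - shift * x)."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(shift, 1.0)
        if x > 4.0:
            total += math.exp(0.5 * shift * shift - shift * x)
    return total / n
```

With a few thousand samples the crude estimate is usually exactly zero, while the importance-sampled estimate is already accurate to a few percent.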

  12. The Monte Carlo Method. Popular Lectures in Mathematics.

    Science.gov (United States)

    Sobol', I. M.

    The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
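The booklet's theme, solving a deterministic problem by simulating random quantities, is captured by the textbook example of estimating pi from random points in the unit square (an illustrative sketch, not taken from the booklet):

```python
import random

def estimate_pi(n):
    """Estimate pi as four times the fraction of uniform random points
    in the unit square that land inside the quarter circle
    x^2 + y^2 <= 1."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n))
    return 4.0 * hits / n
```

The error shrinks as 1/sqrt(n), the characteristic Monte Carlo convergence rate, so 100,000 points already give two to three correct digits.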

  13. Forest canopy BRDF simulation using Monte Carlo method

    NARCIS (Netherlands)

    Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.

    2006-01-01

The Monte Carlo method is a random statistical method which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.

  14. QWalk: A Quantum Monte Carlo Program for Electronic Structure

    CERN Document Server

    Wagner, Lucas K; Mitas, Lubos

    2007-01-01

    We describe QWalk, a new computational package capable of performing Quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of Quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site http://www.qwalk.org

  15. QUANTUM MONTE-CARLO SIMULATIONS - ALGORITHMS, LIMITATIONS AND APPLICATIONS

    NARCIS (Netherlands)

    DERAEDT, H

    1992-01-01

    A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown

  16. Recent Developments in Quantum Monte Carlo: Methods and Applications

    Science.gov (United States)

    Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.

    2007-12-01

    The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.

  17. Sensitivity of Monte Carlo simulations to input distributions

    Energy Technology Data Exchange (ETDEWEB)

    RamoRao, B. S.; Srikanta Mishra, S.; McNeish, J.; Andrews, R. W.

    2001-07-01

    The sensitivity of the results of a Monte Carlo simulation to the shapes and moments of the probability distributions of the input variables is studied. An economical computational scheme is presented as an alternative to the replicate Monte Carlo simulations and is explained with an illustrative example. (Author) 4 refs.
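The kind of input-distribution sensitivity studied above can be sketched by pushing two distributions with the same support through a toy model. The quadratic response function and both input distributions here are illustrative assumptions, not the model from the paper:

```python
import random

def model(x):
    # Hypothetical response function; the paper's actual model is not specified.
    return x * x

def mc_mean(sampler, n):
    """Plain Monte Carlo estimate of E[model(X)] for a given input sampler."""
    return sum(model(sampler()) for _ in range(n)) / n

random.seed(42)
n = 200_000
# Same support [0, 1], different shapes: uniform vs. symmetric triangular.
uniform_est = mc_mean(lambda: random.uniform(0.0, 1.0), n)
triangular_est = mc_mean(lambda: random.triangular(0.0, 1.0, 0.5), n)

# Analytic values of E[x^2]: 1/3 under the uniform, 7/24 under the triangular,
# so the output mean is visibly sensitive to the input shape alone.
print(uniform_est, triangular_est)
```

Even with identical supports and (here) identical means, the two input shapes shift the output moment, which is the effect a replicate-simulation or surrogate scheme would quantify.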

  18. CERN Summer Student Report 2016 Monte Carlo Data Base Improvement

    CERN Document Server

    Caciulescu, Alexandru Razvan

    2016-01-01

    During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.

  19. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, C.

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  20. Subtle Monte Carlo Updates in Dense Molecular Systems.

    Science.gov (United States)

    Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper

    2012-02-14

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
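The paper's key ingredient, expressing interdependent degrees of freedom as correlations in a multivariate Gaussian, can be illustrated in two dimensions. This is only the correlated-sampling idea via a hand-written Cholesky factor, not the CRISP chain-closure solution itself; the correlation value is arbitrary:

```python
import math
import random

def correlated_pair(rho, rng):
    """Draw (x1, x2) from a 2-D standard Gaussian with correlation rho,
    using the Cholesky factor of [[1, rho], [rho, 1]]."""
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    x1 = z1
    x2 = rho * z1 + math.sqrt(1.0 - rho * rho) * z2
    return x1, x2

rng = random.Random(0)
rho = 0.8
n = 100_000
pairs = [correlated_pair(rho, rng) for _ in range(n)]

# The empirical correlation of the draws should be close to the target rho.
mean1 = sum(p[0] for p in pairs) / n
mean2 = sum(p[1] for p in pairs) / n
cov = sum((p[0] - mean1) * (p[1] - mean2) for p in pairs) / n
var1 = sum((p[0] - mean1) ** 2 for p in pairs) / n
var2 = sum((p[1] - mean2) ** 2 for p in pairs) / n
corr = cov / math.sqrt(var1 * var2)
print(round(corr, 2))
```

In an MC move generator, such correlated draws let a proposal perturb several coupled degrees of freedom coherently instead of one at a time.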

  1. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  2. Accelerated GPU based SPECT Monte Carlo simulations.

    Science.gov (United States)

    Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris

    2016-06-07

    Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.


  4. Fission Matrix Capability for MCNP Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k{sub eff}). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP[1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
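The power-iteration picture invoked above can be illustrated with a toy spatially discretized "fission matrix"; the 2x2 values below are invented for illustration, not taken from MCNP:

```python
def power_iteration(matrix, iters=200):
    """Dominant eigenvalue and eigenvector of a small dense matrix,
    mirroring the fission-source iteration described in the abstract."""
    n = len(matrix)
    v = [1.0] * n          # initial source guess
    lam = 0.0
    for _ in range(iters):
        # Apply the kernel: w = F v.
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)       # eigenvalue estimate (k_eff analogue)
        v = [x / lam for x in w]           # renormalized source
    return lam, v

# Toy symmetric kernel; its eigenvalues are 3 and 1, so the dominance
# ratio is 1/3 and the iteration converges quickly.
F = [[2.0, 1.0],
     [1.0, 2.0]]
k_eff, source = power_iteration(F)
print(round(k_eff, 6))  # -> 3.0
```

When the dominance ratio approaches 1, this loop needs many more iterations, which is exactly the slow-convergence regime the fission matrix method targets.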

  5. Vectorized Monte Carlo methods for reactor lattice analysis

    Science.gov (United States)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  6. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  7. Monte Carlo 2000 Conference : Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications

    CERN Document Server

    Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro

    2001-01-01

    This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.

  8. Calibration of environmental radionuclide transfer models using a Bayesian approach with Markov chain Monte Carlo simulations and model comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Nicoulaud-Gouin, V.; Giacalone, M.; Gonze, M.A. [Institut de Radioprotection et de Surete Nucleaire-PRP-ENV/SERIS/LM2E (France); Martin-Garin, A.; Garcia-Sanchez, L. [IRSN-PRP-ENV/SERIS/L2BT (France)

    2014-07-01

    Calibrating transfer models against observation data is a challenge, especially if parameter uncertainties are required and if competing models must be decided between. Two main calibration methods are generally used. In the frequentist approach, the unknown parameter of interest is assumed fixed and its estimation is based on the data only; in this category, the least-squares method has many restrictions for nonlinear models, and competing models must be nested in order to be compared. In Bayesian inference, the unknown parameter of interest is treated as random and its estimation is based on the data and on prior information. Compared to the frequentist method, it provides probability density functions and therefore pointwise estimation with credible intervals. In practical cases, however, Bayesian inference is a complex numerical integration problem, which explains its limited use in operational modelling, including radioecology. This study aims to illustrate the interest and feasibility of the Bayesian approach in radioecology, particularly for models based on ordinary differential equations with non-constant coefficients, which cover most radiological risk assessment models, notably those implemented in the Symbiose platform (Gonze et al., 2010). The Markov Chain Monte Carlo (MCMC) method (Metropolis et al., 1953) was used because the posterior expectations are intractable integrals. Sampling from the invariant distribution of the parameters was performed with the Metropolis-Hastings algorithm (Hastings, 1970). The GNU-MCSim software (Bois and Maszle, 2011), a Bayesian hierarchical framework, was used to deal with nonlinear differential models. Two case studies involving this type of model were investigated: an equilibrium-kinetic sorption model (EK) (e.g. van Genuchten et al., 1974), with experimental data concerning {sup 137}Cs and {sup 85}Sr sorption and desorption in different soils studied in stirred flow-through reactors. This model, generalizing the K{sub d} approach
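The Metropolis-Hastings step referenced in the abstract can be sketched as follows. A hypothetical one-dimensional standard-normal log-posterior stands in for the intractable radioecological posterior; the step size and chain length are illustrative choices:

```python
import math
import random

def log_post(theta):
    # Hypothetical log-posterior: a standard normal stands in for the
    # intractable posterior over transfer-model parameters.
    return -0.5 * theta * theta

def metropolis_hastings(n_samples, step=1.0, seed=1):
    """Random-walk Metropolis-Hastings targeting exp(log_post)."""
    rng = random.Random(seed)
    theta = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)   # symmetric proposal
        # Accept with probability min(1, pi(proposal) / pi(theta)).
        if math.log(rng.random()) < log_post(proposal) - log_post(theta):
            theta = proposal                      # accept
        samples.append(theta)                     # else keep current state
    return samples

draws = metropolis_hastings(100_000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(round(mean, 2), round(var, 2))
```

The retained chain approximates the posterior, so sample moments give the pointwise estimates and credible intervals the abstract mentions.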

  9. Iterative acceleration methods for Monte Carlo and deterministic criticality calculations

    Energy Technology Data Exchange (ETDEWEB)

    Urbatsch, T.J.

    1995-11-01

    If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.

  10. Evolutionary Sequential Monte Carlo Samplers for Change-Point Models

    Directory of Open Access Journals (Sweden)

    Arnaud Dufays

    2016-03-01

    Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the scope of SMC encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but they additionally provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference to draw realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well suited to multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.

  11. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  12. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
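The "computation of definite integrals" application mentioned in the abstract reduces to averaging the integrand over random points, the statistical trials of the title. A minimal sketch, with an illustrative integrand:

```python
import random

def mc_integral(f, a, b, n, seed=0):
    """Estimate the definite integral of f over [a, b] by averaging
    f at n uniformly random points (the 'statistical trials')."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of x^2 on [0, 1] is exactly 1/3.
estimate = mc_integral(lambda x: x * x, 0.0, 1.0, 100_000)
print(round(estimate, 3))
```

The statistical error shrinks as 1/sqrt(n) regardless of dimension, which is why the same trick scales to the multi-dimensional integrals the book treats next.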

  13. Monte Carlo simulations for heavy ion dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Geithner, O.

    2006-07-26

    Water-to-air stopping power ratio (s{sub w,air}) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variables. The lowest particle transport specific energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening its range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of stopping power ratios, s{sub w,air}, the influence of fragments and I-values on s{sub w,air} for carbon ion beams was investigated. The value of s{sub w,air} deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398, for an energy of 50 MeV/u. (orig.)

  14. Monte Carlo models of dust coagulation

    CERN Document Server

    Zsom, Andras

    2010-01-01

    The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative particles to simulate dust evolution. Simulations are performed using three different disk models in a local box (0D) located at 1 AU distance from the central star. We find that the dust evolution does not follow the previously assumed growth-fragmentation cycle, but growth is halted by bouncing before the fragmentation regime is reached. We call this the bouncing barrier which is an additional obstacle during the already complex formation process of planetesimals. The absence of the growth-fragmentation cycle and the halted growth has two important consequences for planet formation. 1) It is observed that disk atmospheres are dusty thr...

  15. Monte Carlo simulations of Protein Adsorption

    Science.gov (United States)

    Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges

    2008-03-01

    Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix to sheet or random coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, Canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.

  16. Commensurabilities between ETNOs: a Monte Carlo survey

    Science.gov (United States)

    de la Fuente Marcos, C.; de la Fuente Marcos, R.

    2016-07-01

    Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nine hypothesis; in particular, a number of objects may be trapped in the 5:3 and 3:1 mean motion resonances with a putative Planet Nine with semimajor axis ˜700 au.
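The survey's core operation, propagating semimajor-axis uncertainties through Kepler's third law and checking the resulting period ratios against small-integer commensurabilities, can be sketched as below. The object values, uncertainty, tolerance, and resonance-search limits are all invented for illustration, not taken from the paper:

```python
import random
from fractions import Fraction

def period_ratio(a_inner, a_outer):
    # Kepler's third law: P is proportional to a^(3/2), so the period
    # ratio follows from the semimajor axes alone.
    return (a_outer / a_inner) ** 1.5

def near_resonance(ratio, max_int=9, tol=0.01):
    """Closest p:q commensurability (q < p <= max_int) within tol, or None."""
    best = None  # (distance, p, q)
    for p in range(2, max_int + 1):
        for q in range(1, p):
            dist = abs(ratio - p / q)
            if dist < tol and (best is None or dist < best[0]):
                best = (dist, p, q)
    return None if best is None else Fraction(best[1], best[2])

# Monte Carlo over a hypothetical 1-sigma uncertainty in the outer axis.
rng = random.Random(3)
a1, a2, sigma = 150.0, 311.5, 2.0   # made-up ETNO-like values, in au
trials = 10_000
hits = sum(
    near_resonance(period_ratio(a1, rng.gauss(a2, sigma))) == Fraction(3, 1)
    for _ in range(trials)
)
print(hits / trials)  # fraction of samples compatible with a 3:1 commensurability
```

Repeating this over all object pairs and tallying which p:q ratios recur is the kind of pattern statistic the abstract describes.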

  17. Diffusion Monte Carlo in internal coordinates.

    Science.gov (United States)

    Petit, Andrew S; McCoy, Anne B

    2013-08-15

    An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H(3)(+) and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for the fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H(2)D(+) and D(2)H(+) despite both molecules being highly fluxional.
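The DMC machinery of diffusion plus branching can be shown in its simplest unguided form. This sketch uses a 1-D harmonic oscillator (exact ground-state energy 0.5 in atomic units) rather than H(3)(+), with no importance sampling and with all numerical choices (time step, population, gain) picked for illustration:

```python
import math
import random

def dmc_harmonic(n_walkers=2000, dt=0.05, steps=400, seed=7):
    """Bare-bones diffusion Monte Carlo for V(x) = x^2 / 2."""
    rng = random.Random(seed)
    walkers = [rng.uniform(-1.0, 1.0) for _ in range(n_walkers)]
    e_ref = 0.5
    history = []
    for _ in range(steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))     # diffusion step
            v = 0.5 * x * x
            w = math.exp(-dt * (v - e_ref))        # branching weight
            copies = int(w + rng.random())         # stochastic rounding
            new_walkers.extend([x] * min(copies, 3))
        walkers = new_walkers or [0.0]
        # Nudge the reference energy to keep the population stable.
        e_ref += 0.1 * math.log(n_walkers / len(walkers))
        history.append(e_ref)
    tail = history[len(history) // 2:]             # discard equilibration
    return sum(tail) / len(tail)

energy = dmc_harmonic()
print(round(energy, 2))  # close to the exact ground-state energy 0.5
```

The internal-coordinate extension in the paper changes how the diffusion step and kinetic-energy metric are expressed, but the branch-and-renormalize loop is the same.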

  18. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan

    2014-09-05

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding variance and weak error. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical results substantiate the above results and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients. © 2014, Springer Science+Business Media Dordrecht.
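The plain multilevel telescoping idea underlying CMLMC (without the continuation and Bayesian calibration machinery) can be sketched on a geometric-Brownian-motion toy problem; all parameters here are invented for illustration:

```python
import math
import random

def euler_pair(level, rng, s0=1.0, r=0.05, sigma=0.2, T=1.0):
    """One coupled (fine, coarse) pair of Euler paths of geometric
    Brownian motion driven by the same Brownian increments."""
    n_fine = 2 ** level
    dt = T / n_fine
    fine = coarse = s0
    dw_sum = 0.0
    for step in range(n_fine):
        dw = rng.gauss(0.0, math.sqrt(dt))
        fine += r * fine * dt + sigma * fine * dw
        dw_sum += dw
        if step % 2 == 1:   # coarse path advances every two fine steps
            coarse += r * coarse * (2 * dt) + sigma * coarse * dw_sum
            dw_sum = 0.0
    return fine, coarse

def mlmc_estimate(max_level=4, n_samples=20_000, seed=11):
    """Telescoping estimator E[P_0] + sum over l of E[P_l - P_(l-1)]."""
    rng = random.Random(seed)
    total = sum(euler_pair(0, rng)[0] for _ in range(n_samples)) / n_samples
    for level in range(1, max_level + 1):
        corr = sum(f - c for f, c in
                   (euler_pair(level, rng) for _ in range(n_samples)))
        total += corr / n_samples
    return total

est = mlmc_estimate()
print(round(est, 3))  # close to the exact mean exp(0.05) ~ 1.051
```

Because the coupled fine/coarse differences have small variance, most samples can be spent on cheap coarse levels; CMLMC additionally calibrates the per-level sample counts and stops at the tolerance.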

  19. Rare event simulation using Monte Carlo methods

    CERN Document Server

    Rubino, Gerardo

    2009-01-01

    In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas. For instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, can lead to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, the simulation of corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...

  20. Monte Carlo simulations for focusing elliptical guides

    Energy Technology Data Exchange (ETDEWEB)

    Valicu, Roxana [FRM2 Garching, Muenchen (Germany); Boeni, Peter [E20, TU Muenchen (Germany)

    2009-07-01

    The aim of the Monte Carlo simulations using the McStas programme was to improve the focusing of the neutron beam existing at PGAA (FRM II) by prolongation of the existing elliptic guide (currently coated with supermirrors with m = 3) with a new part. First we tried an initial length of the additional guide of 7.5 cm and neutron-guide coatings of supermirrors with m = 4, 5 and 6. The gain (calculated by dividing the intensity at the focal point after adding the guide by the intensity at the focal point with the initial guide) obtained for these coatings indicated that a coating with m = 5 would be appropriate for a first trial. The next step was to vary the length of the additional guide for this m value, choosing the length that gives the maximal gain. With the m value and the length of the guide fixed, we introduced an aperture 1 cm before the focal point and varied the radius of this aperture in order to obtain a focused beam. We observed a dramatic decrease in the size of the beam at the focal point after introducing this aperture. The simulation results, the gains obtained and the evolution of the beam size will be presented.

  1. Monte Carlo Production Management at CMS

    CERN Document Server

    Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni

    2015-01-01

    The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, assure the book-keeping of all the processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) has been developed and put in production in 2012. McM is based on recent server infrastructure technology (CherryPy + java) and relies on a CouchDB database back-end. This contribution will cover the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabi...

  2. Parallel Monte Carlo simulation of aerosol dynamics

    KAUST Repository

    Zhou, K.

    2014-01-01

    A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high order moments of the PSD needs a dramatically increased number of MC particles. © 2014 Kun Zhou et al.
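
    The stochastic half of such an operator-split scheme can be illustrated with a minimal Marcus-Lushnikov (Gillespie-type) coagulation step. The constant kernel, volume, and initial monomer count below are invented for the example; nucleation, surface growth, and the MPI parallelization of the abstract are not modeled.

```python
import numpy as np

rng = np.random.default_rng(6)
K = 1.0                                   # constant coagulation kernel (invented)
V = 1000.0                                # system volume (invented)
masses = list(np.ones(1000))              # start from 1000 monomers

t, t_end = 0.0, 1.0
while len(masses) > 1:
    n = len(masses)
    rate = K * n * (n - 1) / (2.0 * V)    # total pairwise coagulation rate
    t += rng.exponential(1.0 / rate)      # Gillespie waiting time
    if t > t_end:
        break
    i, j = sorted(rng.choice(n, size=2, replace=False))
    masses[i] += masses.pop(j)            # merge a uniformly chosen pair

# Smoluchowski mean-field prediction for a constant kernel:
# N(t) = N0 / (1 + K*(N0/V)*t/2), i.e. about 667 particles at t = 1
print(len(masses))
```

    For a constant kernel the particle count tracks the Smoluchowski mean-field decay closely even at this modest particle number, which is the abstract's observation that low-order moments need only a small MC population.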

  3. Monte Carlo Simulation of River Meander Modelling

    Science.gov (United States)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated in order to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The quasi-2D Ikeda (1989) flow solution is used with Monte Carlo simulation of the bank erosion coefficient.

  4. Monte Carlo Simulations of the Photospheric Process

    CERN Document Server

    Santana, Rodolfo; Hernandez, Roberto A; Kumar, Pawan

    2015-01-01

    We present a Monte Carlo (MC) code we wrote to simulate the photospheric process and to study the photospheric spectrum above the peak energy. Our simulations were performed with a photon to electron ratio $N_{\\gamma}/N_{e} = 10^{5}$, as determined by observations of the GRB prompt emission. We searched an exhaustive parameter space to determine if the photospheric process can match the observed high-energy spectrum of the prompt emission. If we do not consider electron re-heating, we determined that the best conditions to produce the observed high-energy spectrum are low photon temperatures and high optical depths. However, for these simulations, the spectrum peaks at an energy below 300 keV by a factor $\\sim 10$. For the cases we consider with higher photon temperatures and lower optical depths, we demonstrate that additional energy in the electrons is required to produce a power-law spectrum above the peak-energy. By considering electron re-heating near the photosphere, the spectrum for these simulations h...

  5. Finding Planet Nine: a Monte Carlo approach

    CERN Document Server

    Marcos, C de la Fuente

    2016-01-01

    Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30 degrees, and an argument of perihelion of 150 degrees. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study its visibility statistically in terms of various parameters and focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one out of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node and two of them are compatible with an apsidal antialignment scenario. In addition and after studying the current statistic...

  6. Parallel Monte Carlo Simulation of Aerosol Dynamics

    Directory of Open Access Journals (Sweden)

    Kun Zhou

    2014-02-01

    Full Text Available A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Near 60% parallel efficiency is achieved for the maximum testing case with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified through simulating various testing cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, low order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high order moments of the PSD needs a dramatically increased number of MC particles.

  7. Measuring Berry curvature with quantum Monte Carlo

    CERN Document Server

    Kolodrubetz, Michael

    2014-01-01

    The Berry curvature and its descendant, the Berry phase, play an important role in quantum mechanics. They can be used to understand the Aharonov-Bohm effect, define topological Chern numbers, and generally to investigate the geometric properties of a quantum ground state manifold. While Berry curvature has been well-studied in the regimes of few-body physics and non-interacting particles, its use in the regime of strong interactions is hindered by the lack of numerical methods to solve it. In this paper we fill this gap by implementing a quantum Monte Carlo method to solve for the Berry curvature, based on interpreting Berry curvature as a leading correction to imaginary time ramps. We demonstrate our algorithm using the transverse-field Ising model in one and two dimensions, the latter of which is non-integrable. Despite the fact that the Berry curvature gives information about the phase of the wave function, we show that our algorithm has no sign or phase problem for standard sign-problem-free Hamiltonians...

  8. An Introduction to Multilevel Monte Carlo for Option Valuation

    CERN Document Server

    Higham, Desmond J

    2015-01-01

    Monte Carlo is a simple and flexible tool that is widely used in computational finance. In this context, it is common for the quantity of interest to be the expected value of a random variable defined via a stochastic differential equation. In 2008, Giles proposed a remarkable improvement to the approach of discretizing with a numerical method and applying standard Monte Carlo. His multilevel Monte Carlo method offers an order of speed up given by the inverse of epsilon, where epsilon is the required accuracy. So computations can run 100 times more quickly when two digits of accuracy are required. The multilevel philosophy has since been adopted by a range of researchers and a wealth of practically significant results has arisen, most of which have yet to make their way into the expository literature. In this work, we give a brief, accessible, introduction to multilevel Monte Carlo and summarize recent results applicable to the task of option evaluation.
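
    The multilevel idea above can be sketched in a few lines for a European call under geometric Brownian motion: couple a fine and a coarse Euler-Maruyama path with the same Brownian increments, and sum the level corrections. The option parameters and per-level sample counts are invented; Giles's adaptive selection of levels and sample sizes is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0  # invented option parameters

def payoff(S):
    """Discounted European call payoff."""
    return np.exp(-r * T) * np.maximum(S - K, 0.0)

def level_estimator(level, n_paths, m0=2):
    """Estimate E[P_level - P_{level-1}] with coupled Euler-Maruyama paths."""
    nf = m0 ** (level + 1)                     # fine grid steps
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, nf))
    Sf = np.full(n_paths, S0)
    for i in range(nf):                        # fine path
        Sf = Sf + r * Sf * dt + sigma * Sf * dW[:, i]
    if level == 0:
        return payoff(Sf).mean()
    Sc = np.full(n_paths, S0)
    dWc = dW.reshape(n_paths, nf // m0, m0).sum(axis=2)  # same Brownian path
    for i in range(nf // m0):                  # coarse path, step m0*dt
        Sc = Sc + r * Sc * (m0 * dt) + sigma * Sc * dWc[:, i]
    return (payoff(Sf) - payoff(Sc)).mean()

# Telescoping sum over levels, with fewer samples on the finer levels
estimate = sum(level_estimator(l, n)
               for l, n in enumerate([20000, 10000, 5000, 2500]))
print(f"MLMC estimate: {estimate:.3f}")  # Black-Scholes reference is about 10.45
```

    The key saving is visible in the sample counts: most paths are simulated on the cheap coarse grids, while the expensive fine grids only estimate small corrections.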

  9. Using Supervised Learning to Improve Monte Carlo Integral Estimation

    CERN Document Server

    Tracey, Brendan; Alonso, Juan J

    2011-01-01

    Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than ...
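
    The control-variate flavor of this post-processing can be sketched as follows: fit a cheap surrogate to the sampled values on one fold, add its exactly known integral to the mean residual on the held-out fold, and average over folds. The integrand, the cubic surrogate, and the two-fold split are invented stand-ins, not the paper's exact StackMC procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

def f(x):
    """Toy integrand on [0, 1] (invented for the example)."""
    return np.exp(x) * np.sin(3.0 * x)

x = rng.uniform(0.0, 1.0, 400)
y = f(x)
plain = y.mean()  # plain Monte Carlo estimate of the integral

# Two-fold estimate: fit a cubic surrogate g on one fold, then use its
# exact integral plus the mean residual (f - g) on the other fold.
folds = [(slice(0, 200), slice(200, 400)), (slice(200, 400), slice(0, 200))]
parts = []
for train, test in folds:
    coeffs = np.polyfit(x[train], y[train], deg=3)        # surrogate g
    anti = np.polyint(coeffs)                              # antiderivative of g
    g_integral = np.polyval(anti, 1.0) - np.polyval(anti, 0.0)
    residual = y[test] - np.polyval(coeffs, x[test])
    parts.append(g_integral + residual.mean())
stackmc = float(np.mean(parts))

print(f"plain MC: {plain:.4f}   surrogate-corrected: {stackmc:.4f}")
# Exact value: (e*(sin 3 - 3*cos 3) + 3)/10, roughly 1.1457
```

    Because the surrogate is fitted on held-out data, the correction adds no bias, while the residual has much smaller variance than the raw samples.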

  10. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    Science.gov (United States)

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...

  11. A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT

    NARCIS (Netherlands)

    MIKOSCH, T; WANG, QA

    1995-01-01

    We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
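
    For reference, the classical Hill estimator that this method builds on can be written in a few lines. The Pareto test sample and the choice of k below are invented for illustration; the paper's bootstrap/Monte Carlo refinement for the correlation exponent is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def hill_estimator(x, k):
    """Classical Hill estimate of the tail index from the k largest points."""
    xs = np.sort(x)[::-1]                      # descending order statistics
    return 1.0 / (np.log(xs[:k]) - np.log(xs[k])).mean()

# Pareto sample with tail index alpha = 2, i.e. P(X > x) ~ x^(-2)
sample = rng.pareto(2.0, size=100_000) + 1.0
alpha_hat = hill_estimator(sample, k=2_000)
print(f"estimated tail index: {alpha_hat:.2f}")
```
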

  12. Monte Carlo techniques for analyzing deep penetration problems

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.

  13. EXTENDED MONTE CARLO LOCALIZATION ALGORITHM FOR MOBILE SENSOR NETWORKS

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A real-world localization system for wireless sensor networks that adapts to mobility and an irregular radio propagation model is considered. The traditional range-based techniques and recent range-free localization schemes are not well suited for localization in mobile sensor networks, while the probabilistic approach of Bayesian filtering with particle-based density representations provides a comprehensive solution to such localization problems. Monte Carlo localization is a Bayesian filtering method that approximates the mobile node's location by a set of weighted particles. In this paper, an enhanced Monte Carlo localization algorithm, Extended Monte Carlo Localization (Ext-MCL), is proposed for practical wireless network environments where the radio propagation model is irregular. Simulation results show that the proposal achieves better localization accuracy and a higher number of localizable nodes than previously proposed Monte Carlo localization schemes, not only for the ideal radio model but also for the irregular one.
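
    The weighted-particle idea underlying Monte Carlo localization can be sketched in one dimension: predict each particle through a motion model, reweight by the measurement likelihood, and resample. The motion model, noise levels, and range measurement below are invented for the example; Ext-MCL's handling of irregular radio propagation is not modeled.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000                                   # number of particles
true_pos, velocity = 0.0, 1.0
particles = rng.uniform(-10.0, 10.0, N)   # prior: uniform over the area
weights = np.full(N, 1.0 / N)

for step in range(30):
    true_pos += velocity
    # Predict: propagate each particle through the (noisy) motion model
    particles += velocity + rng.normal(0.0, 0.5, N)
    # Update: weight by the likelihood of a noisy range-style measurement
    z = true_pos + rng.normal(0.0, 1.0)
    weights = np.exp(-0.5 * (particles - z) ** 2)   # Gaussian likelihood
    weights /= weights.sum()
    # Resample (multinomial) to avoid weight degeneracy
    particles = particles[rng.choice(N, N, p=weights)]
    weights = np.full(N, 1.0 / N)

estimate = particles.mean()
print(f"true={true_pos:.1f}  estimate={estimate:.2f}")
```
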

  14. Monte Carlo simulations: Hidden errors from ``good'' random number generators

    Science.gov (United States)

    Ferrenberg, Alan M.; Landau, D. P.; Wong, Y. Joanna

    1992-12-01

    The Wolff algorithm is now accepted as the best cluster-flipping Monte Carlo algorithm for beating ``critical slowing down.'' We show how this method can yield incorrect answers due to subtle correlations in ``high quality'' random number generators.

  15. Monte-Carlo simulation-based statistical modeling

    CERN Document Server

    Chen, John

    2017-01-01

    This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.

  16. Accelerating Monte Carlo Renderers by Ray Histogram Fusion

    Directory of Open Access Journals (Sweden)

    Mauricio Delbracio

    2015-03-01

    Full Text Available This paper details the recently introduced Ray Histogram Fusion (RHF) filter for accelerating Monte Carlo renderers [M. Delbracio et al., Boosting Monte Carlo Rendering by Ray Histogram Fusion, ACM Transactions on Graphics, 33 (2014)]. In this filter, each pixel in the image is characterized by the colors of the rays that reach its surface. Pixels are compared using a statistical distance on the associated ray color distributions. Based on this distance, the filter decides whether two pixels can share their rays or not. The RHF filter is consistent: as the number of samples increases, more evidence is required to average two pixels. The algorithm provides a significant gain in PSNR, or equivalently accelerates the rendering process by using many fewer Monte Carlo samples without observable bias. Since the RHF filter depends only on the Monte Carlo samples' color values, it can be naturally combined with all rendering effects.

  17. New Monte Carlo method for the self-avoiding walk

    Science.gov (United States)

    Berretti, Alberto; Sokal, Alan D.

    1985-08-01

    We introduce a new Monte Carlo algorithm for the self-avoiding walk (SAW), and show that it is particularly efficient in the critical region (long chains). We also introduce new and more efficient statistical techniques. We employ these methods to extract numerical estimates for the critical parameters of the SAW on the square lattice. We find μ = 2.63820 ± 0.00004 ± 0.00030, γ = 1.352 ± 0.006 ± 0.025, ν = 0.7590 ± 0.0062 ± 0.0042, where the first error bar represents systematic error due to corrections to scaling (subjective 95% confidence limits) and the second represents statistical error (classical 95% confidence limits). These results are based on SAWs of average length ≈ 166, using 340 hours of CPU time on a CDC Cyber 170-730. We compare our results to previous work and indicate some directions for future research.
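
    The connective constant μ estimated above counts the exponential growth of the number c_n of n-step self-avoiding walks. A tiny exhaustive enumeration (not the paper's Monte Carlo algorithm) makes the quantity concrete: the ratio c_n/c_{n-1} approaches μ ≈ 2.638 as n grows.

```python
# Exhaustively count n-step self-avoiding walks on the square lattice.
def count_saws(n, pos=(0, 0), visited=frozenset({(0, 0)})):
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            total += count_saws(n - 1, nxt, visited | {nxt})
    return total

c = [count_saws(n) for n in range(1, 11)]
print(c)              # [4, 12, 36, 100, 284, 780, 2172, 5916, 16268, 44100]
print(c[-1] / c[-2])  # ratio c_10/c_9, a crude finite-n estimate of mu
```

    Enumeration is only feasible for small n (the counts grow like μ^n), which is exactly why Monte Carlo algorithms such as the one in this record are needed for long chains.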

  18. An Efficient Approach to Ab Initio Monte Carlo Simulation

    CERN Document Server

    Leiding, Jeff

    2013-01-01

    We present a Nested Markov Chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, is used to substantially decorrelate configurations at which the potential of interest is evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure is maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature \\beta^0), which is otherwise unconstrained. Local density approximation (LDA) results are presented for shocked states in argon at pressures from 4 to 60 GPa. Depending on the quality of the reference potential, the acceptance probability is enhanced by factors of 1.2-28 relative to unoptimized NMC sampling, and the procedure's efficiency is found to be competitive with that of standard ab initio...
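
    The nested acceptance step can be sketched on a toy 1-D problem: a cheap reference chain decorrelates the proposal, and the expensive potential is evaluated only at the endpoints of each sub-chain. The harmonic reference and quartic "accurate" potential below are invented stand-ins for the DFT setting of the abstract.

```python
import math
import random

random.seed(3)
beta = 1.0

def U_ref(x):
    """Cheap harmonic reference potential (invented)."""
    return 0.5 * x * x

def U_full(x):
    """Expensive target potential; a quartic term stands in for DFT."""
    return 0.5 * x * x + 0.1 * x ** 4

def sub_chain(x, n_steps=20, step=0.5):
    """Metropolis sampling of the reference system, started at x."""
    for _ in range(n_steps):
        y = x + random.uniform(-step, step)
        if random.random() < math.exp(min(0.0, -beta * (U_ref(y) - U_ref(x)))):
            x = y
    return x

x, samples = 0.0, []
for _ in range(5000):
    y = sub_chain(x)  # decorrelated proposal from the reference chain
    # Nested acceptance: correct for the reference/target mismatch
    log_a = -beta * ((U_full(y) - U_full(x)) - (U_ref(y) - U_ref(x)))
    if random.random() < math.exp(min(0.0, log_a)):
        x = y
    samples.append(x)

mean_x2 = sum(s * s for s in samples) / len(samples)
print(f"<x^2> under the corrected target: {mean_x2:.3f}")
```

    Because the reference chain satisfies detailed balance with respect to its own potential, the acceptance ratio depends only on the difference U_full - U_ref at the two endpoints, so the expensive potential is called once per outer step rather than once per move.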

  19. Monte Carlo Simulation of Magnetization Behaviour of Co Nanowires

    Institute of Scientific and Technical Information of China (English)

    ZHONG Ke-Hua; HUANG Zhi-Gao; FENG Qian; JIANG Li-Qin; YANG Yan-Min; CHEN Zhi-Gao

    2006-01-01

    Based on the Monte Carlo method, we simulate the magnetization curves with various magnetic field orientations for various single Co nanowires at room temperature. The simulated switching field as a function of the angle θ between the field and the wire axis agrees well with the experimental data. Correspondingly, the coercivity as a function of the angle θ is presented, which together with the switching field plays an important role in explaining the magnetic reversal mechanism. It is found that the angular dependence of the coercivity depends on the diameter of the nanowires, and that the coercivity and switching field versus θ deviate markedly from the prediction of the classical uniform rotation mode in the chain-of-spheres model. Furthermore, the magnetic reversal configurations show that magnetization reversal in wires with small diameters is a nucleation-propagation process, while in larger wires it resembles a curling spread process.

  20. A Monte Carlo Method for Calculating Initiation Probability

    Energy Technology Data Exchange (ETDEWEB)

    Greenman, G M; Procassini, R J; Clouse, C J

    2007-03-05

    A Monte Carlo method for calculating the probability of initiating a self-sustaining neutron chain reaction has been developed. In contrast to deterministic codes which solve a non-linear, adjoint form of the Boltzmann equation to calculate initiation probability, this new method solves the forward (standard) form of the equation using a modified source calculation technique. Results from this new method are compared with results obtained from several deterministic codes for a suite of historical test problems. The level of agreement between these code predictions is quite good, considering the use of different numerical techniques and nuclear data. A set of modifications to the historical test problems has also been developed which reduces the impact of neutron source ambiguities on the calculated probabilities.
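
    The quantity being computed can be illustrated with a branching-process cartoon: each neutron independently produces a random number of daughters, and the initiation probability is the chance the population diverges rather than dies out. The Poisson multiplicity and its mean are invented; this is a sketch of the concept, not the forward-transport method of the abstract.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
NU = 1.6  # mean daughters per neutron (invented, supercritical)

def survives(max_gen=60, cap=1000):
    """One chain: True if the population diverges, False if it dies out."""
    pop = 1
    for _ in range(max_gen):
        if pop == 0:
            return False
        if pop >= cap:        # a population this large essentially never dies
            return True
        pop = int(rng.poisson(NU, size=pop).sum())
    return pop > 0

n_trials = 20_000
p_mc = sum(survives() for _ in range(n_trials)) / n_trials

# Extinction probability q solves q = exp(NU*(q - 1)); iterate the fixed point
q = 0.5
for _ in range(200):
    q = math.exp(NU * (q - 1.0))
print(f"MC initiation probability: {p_mc:.3f}   analytic: {1.0 - q:.3f}")
```

    The fixed-point equation is the simple-branching analogue of the nonlinear adjoint equation mentioned in the abstract; the forward Monte Carlo estimate converges to the same survival probability.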

  1. Measuring Renyi entanglement entropy in quantum Monte Carlo simulations.

    Science.gov (United States)

    Hastings, Matthew B; González, Iván; Kallin, Ann B; Melko, Roger G

    2010-04-16

    We develop a quantum Monte Carlo procedure, in the valence bond basis, to measure the Renyi entanglement entropy of a many-body ground state as the expectation value of a unitary Swap operator acting on two copies of the system. An improved estimator involving the ratio of Swap operators for different subregions enables convergence of the entropy in a simulation time polynomial in the system size. We demonstrate convergence of the Renyi entropy to exact results for a Heisenberg chain. Finally, we calculate the scaling of the Renyi entropy in the two-dimensional Heisenberg model and confirm that the Néel ground state obeys the expected area law for systems up to linear size L=32.

  2. Study of the Transition Flow Regime using Monte Carlo Methods

    Science.gov (United States)

    Hassan, H. A.

    1999-01-01

    This NASA Cooperative Agreement presents a study of the Transition Flow Regime Using Monte Carlo Methods. The topics included in this final report are: 1) New Direct Simulation Monte Carlo (DSMC) procedures; 2) The DS3W and DS2A Programs; 3) Papers presented; 4) Miscellaneous Applications and Program Modifications; 5) Solution of Transitional Wake Flows at Mach 10; and 6) Turbulence Modeling of Shock-Dominated Flows with a k-Enstrophy Formulation.

  3. Monte Carlo Simulation of Optical Properties of Wake Bubbles

    Institute of Scientific and Technical Information of China (English)

    CAO Jing; WANG Jiang-An; JIANG Xing-Zhou; SHI Sheng-Wei

    2007-01-01

    Based on Mie scattering theory and the theory of multiple light scattering, the light scattering properties of air bubbles in a wake are analysed by Monte Carlo simulation. The results show that backscattering is enhanced obviously due to the existence of bubbles, especially with the increase of bubble density, and that it is feasible to use the Monte Carlo method to study the properties of light scattering by air bubbles.

  4. Successful combination of the stochastic linearization and Monte Carlo methods

    Science.gov (United States)

    Elishakoff, I.; Colombi, P.

    1993-01-01

    A combination of stochastic linearization and Monte Carlo techniques is presented for the first time in the literature. A system with separable nonlinear damping and nonlinear restoring force is considered. The proposed combination of energy-wise linearization with the Monte Carlo method yields an error under 5 percent, reducing the error of the conventional stochastic linearization by a factor of 4.6.

  5. Confidence and efficiency scaling in variational quantum Monte Carlo calculations

    Science.gov (United States)

    Delyon, F.; Bernu, B.; Holzmann, Markus

    2017-02-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.

  6. Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations

    CERN Document Server

    Delyon, François; Holzmann, Markus

    2016-01-01

    Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.

  7. Geometrical and Monte Carlo projectors in 3D PET reconstruction

    OpenAIRE

    Aguiar, Pablo; Rafecas López, Magdalena; Ortuno, Juan Enrique; Kontaxakis, George; Santos, Andrés; Pavía, Javier; Ros, Domènec

    2010-01-01

    Purpose: In the present work, the authors compare geometrical and Monte Carlo projectors in detail. The geometrical projectors considered were the conventional geometrical Siddon ray-tracer (S-RT) and the orthogonal distance-based ray-tracer (OD-RT), based on computing the orthogonal distance from the center of image voxel to the line-of-response. A comparison of these geometrical projectors was performed using different point spread function (PSF) models. The Monte Carlo-based method under c...

  8. Radiative Equilibrium and Temperature Correction in Monte Carlo Radiation Transfer

    OpenAIRE

    Bjorkman, J. E.; Wood, Kenneth

    2001-01-01

    We describe a general radiative equilibrium and temperature correction procedure for use in Monte Carlo radiation transfer codes with sources of temperature-independent opacity, such as astrophysical dust. The technique utilizes the fact that Monte Carlo simulations track individual photon packets, so we may easily determine where their energy is absorbed. When a packet is absorbed, it heats a particular cell within the envelope, raising its temperature. To enforce radiative equilibrium, the ...

  9. Chemical accuracy from quantum Monte Carlo for the Benzene Dimer

    OpenAIRE

    Azadi, Sam; Cohen, R. E

    2015-01-01

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory (DFT) using different van der Waals (vdW) functionals. In our QMC calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced (PD) geometry, and fin...

  10. Public Infrastructure for Monte Carlo Simulation: publicMC@BATAN

    CERN Document Server

    Waskita, A A; Akbar, Z; Handoko, L T; 10.1063/1.3462759

    2010-01-01

    The first cluster-based public computing system for Monte Carlo simulation in Indonesia is introduced. The system has been developed to enable the public to perform Monte Carlo simulations on a parallel computer through an integrated and user-friendly dynamic web interface. The beta version, called publicMC@BATAN, has been released and implemented for internal users at the National Nuclear Energy Agency (BATAN). In this paper the concept and architecture of publicMC@BATAN are presented.

  11. Monte Carlo methods and applications in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Monte Carlo methods for studying few- and many-body quantum systems are introduced, with special emphasis given to their applications in nuclear physics. Variational and Green's function Monte Carlo methods are presented in some detail. The status of calculations of light nuclei is reviewed, including discussions of the three-nucleon interaction, charge and magnetic form factors, the Coulomb sum rule, and studies of low-energy radiative transitions. 58 refs., 12 figs.

  12. Monte Carlo method for solving a parabolic problem

    Directory of Open Access Journals (Sweden)

    Tian Yi

    2016-01-01

    Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method with the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve this linear system. To illustrate the usefulness of this technique, we apply it to some test problems.
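
    A classic way to solve a linear system by random sampling, in the spirit of the second stage above, is the Ulam-von Neumann random-walk estimator for x = Hx + b when the spectral radius of H is small. The 3x3 system below is invented for illustration; the abstract's Crank-Nicolson discretization is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
# Invented system x = H x + b with spectral radius well below 1
H = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
b = np.array([1.0, 2.0, 3.0])
P = np.abs(H) / np.abs(H).sum(axis=1, keepdims=True)   # transition probs

def mc_component(i, n_walks=5000, max_len=30):
    """Estimate x_i by random walks scoring the Neumann series sum_k (H^k b)_i."""
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, b[i]
        for _ in range(max_len):
            nxt = rng.choice(3, p=P[state])
            weight *= H[state, nxt] / P[state, nxt]    # importance correction
            state = nxt
            score += weight * b[state]
        total += score
    return total / n_walks

x_mc = np.array([mc_component(i) for i in range(3)])
x_exact = np.linalg.solve(np.eye(3) - H, b)
print(x_mc, x_exact)
```

    Each walk is an unbiased sample of the Neumann series x = b + Hb + H²b + ..., and individual components of x can be estimated independently, which is the appeal of such solvers for very large sparse systems.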

  13. Monte Carlo Volcano Seismic Moment Tensors

    Science.gov (United States)

    Waite, G. P.; Brill, K. A.; Lanza, F.

    2015-12-01

    Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.

  14. Monte Carlo implementation of polarized hadronization

    Science.gov (United States)

    Matevosyan, Hrayr H.; Kotzinian, Aram; Thomas, Anthony W.

    2017-01-01

    We study the polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading twist quark-to-quark transverse-momentum-dependent (TMD) splitting functions (SFs) for elementary q →q'+h transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank 2. Further, we demonstrate that all the current spectator-type model calculations of the leading twist quark-to-quark TMD SFs violate the positivity constraints, and we propose a quark model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence of unphysical azimuthal modulations of the computed polarized FFs, and by precisely reproducing the earlier derived explicit results for rank-2 pions. Finally, we present the full results for pion unpolarized and Collins FFs, as well as the corresponding analyzing powers from high statistics MC simulations with a large number of produced hadrons for two different model input elementary SFs. The results for both sets of input functions exhibit the same general features of an opposite signed Collins function for favored and unfavored channels at large z and, at the same time, demonstrate the flexibility of the quark-jet framework by producing significantly different dependences of the results at mid to low z for the two model inputs.

  15. Quantum Monte Carlo with directed loops.

    Science.gov (United States)

    Syljuåsen, Olav F; Sandvik, Anders W

    2002-10-01

    We introduce the concept of directed loops in stochastic series expansion and path-integral quantum Monte Carlo methods. Using the detailed balance rules for directed loops, we show that it is possible to smoothly connect generally applicable simulation schemes (in which it is necessary to include backtracking processes in the loop construction) to more restricted loop algorithms that can be constructed only for a limited range of Hamiltonians (where backtracking can be avoided). The "algorithmic discontinuities" between general and special points (or regions) in parameter space can hence be eliminated. As a specific example, we consider the anisotropic S=1/2 Heisenberg antiferromagnet in an external magnetic field. We show that directed-loop simulations are very efficient for the full range of magnetic fields (zero to the saturation point) and anisotropies. In particular, for weak fields and anisotropies, the autocorrelations are significantly reduced relative to those of previous approaches. The backtracking probability vanishes continuously as the isotropic Heisenberg point is approached. For the XY model, we show that backtracking can be avoided for all fields extending up to the saturation field. The method is hence particularly efficient in this case. We use directed-loop simulations to study the magnetization process in the two-dimensional Heisenberg model at very low temperatures. For L×L lattices with L up to 64, we utilize the step structure in the magnetization curve to extract gaps between different spin sectors. Finite-size scaling of the gaps gives an accurate estimate of the transverse susceptibility in the thermodynamic limit: χ⊥ = 0.0659 ± 0.0002.

  16. Monte Carlo simulation of large electron fields

    Science.gov (United States)

    Faddegon, Bruce A.; Perl, Joseph; Asai, Makoto

    2008-03-01

    Two Monte Carlo systems, EGSnrc and Geant4, the latter with two different 'physics lists,' were used to calculate dose distributions in large electron fields used in radiotherapy. Source and geometry parameters were adjusted to match calculated results to measurement. Both codes were capable of accurately reproducing the measured dose distributions of the six electron beams available on the accelerator. Depth penetration matched the average measured with a diode and parallel-plate chamber to 0.04 cm or better. Calculated depth dose curves agreed to 2% with diode measurements in the build-up region, although for the lower beam energies there was a discrepancy of up to 5% in this region when calculated results are compared to parallel-plate measurements. Dose profiles at the depth of maximum dose matched to 2-3% in the central 25 cm of the field, corresponding to the field size of the largest applicator. A 4% match was obtained outside the central region. The discrepancy observed in the bremsstrahlung tail in published results that used EGS4 is no longer evident. Simulations with the different codes and physics lists used different source energies, incident beam angles, thicknesses of the primary foils, and distance between the primary and secondary foil. The true source and geometry parameters were not known with sufficient accuracy to determine which parameter set, including the energy of the source, was closest to the truth. These results underscore the requirement for experimental benchmarks of depth penetration and electron scatter for beam energies and foils relevant to radiotherapy.

  17. kmos: A lattice kinetic Monte Carlo framework

    Science.gov (United States)

    Hoffmann, Max J.; Matera, Sebastian; Reuter, Karsten

    2014-07-01

    Kinetic Monte Carlo (kMC) simulations have emerged as a key tool for microkinetic modeling in heterogeneous catalysis and other materials applications. Systems, where site-specificity of all elementary reactions allows a mapping onto a lattice of discrete active sites, can be addressed within the particularly efficient lattice kMC approach. To this end we describe the versatile kmos software package, which offers a most user-friendly implementation, execution, and evaluation of lattice kMC models of arbitrary complexity in one- to three-dimensional lattice systems, involving multiple active sites in periodic or aperiodic arrangements, as well as site-resolved pairwise and higher-order lateral interactions. Conceptually, kmos achieves a maximum runtime performance which is essentially independent of lattice size by generating code for the efficiency-determining local update of available events that is optimized for a defined kMC model. For this model definition and the control of all runtime and evaluation aspects kmos offers a high-level application programming interface. Usage proceeds interactively, via scripts, or a graphical user interface, which visualizes the model geometry, the lattice occupations and rates of selected elementary reactions, while allowing on-the-fly changes of simulation parameters. We demonstrate the performance and scaling of kmos with the application to kMC models for surface catalytic processes, where for given operation conditions (temperature and partial pressures of all reactants) central simulation outcomes are catalytic activity and selectivities, surface composition, and mechanistic insight into the occurrence of individual elementary processes in the reaction network.
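
    The efficiency-determining step of any lattice kMC code is the event selection. A minimal rejection-free (BKL/Gillespie-type) step can be sketched as follows; this is a generic illustration of the algorithm, not the kmos API:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free kMC step: pick event i with probability
    rate_i / total, then draw an exponential waiting time from the total rate."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total   # Exp(total) waiting time
    return i, dt

# Two competing surface processes: adsorption (rate 3.0) vs desorption (1.0).
rng = random.Random(42)
counts, t = [0, 0], 0.0
for _ in range(10000):
    event, dt = kmc_step([3.0, 1.0], rng)
    counts[event] += 1
    t += dt
```

    Selecting among the currently available events in proportion to their rates, and advancing the clock by an exponential waiting time, is what makes the scheme rejection-free; the engineering effort in a package like kmos goes into updating the list of available events in constant time after each step.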

  18. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
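
    The trade-off discussed above rests on the familiar 1/sqrt(N) behavior of the Monte Carlo standard error. A toy k-effective tally (the numbers are illustrative, not from any criticality code) makes the scaling concrete:

```python
import random
import statistics

def keff_tally(n_histories, rng):
    """Toy tally: each history contributes a noisy k-eff estimate."""
    samples = [0.95 + rng.gauss(0.0, 0.05) for _ in range(n_histories)]
    mean = statistics.mean(samples)
    std_err = statistics.stdev(samples) / n_histories ** 0.5
    return mean, std_err

rng = random.Random(7)
k_small, se_small = keff_tally(1000, rng)
k_large, se_large = keff_tally(100000, rng)
# 100x more histories -> roughly 10x smaller standard error.
```

    Because procedures add a statistics-dependent margin before comparison to the USL, driving the standard error toward zero maximizes production margin but, as the paper argues, is not the choice that minimizes the risk of misclassifying a supercritical configuration.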

  19. A Survey on Multilevel Monte Carlo for European Options

    Directory of Open Access Journals (Sweden)

    Masoud Moharamnejad

    2016-03-01

    Full Text Available One of the most widely applicable and common methods for pricing options is Monte Carlo simulation. Among the advantages of this method are its ease of use and its suitability for many types of options, including vanilla and exotic options. On the other hand, the root-mean-square error of a standard Monte Carlo estimator converges only as O(N^(-1/2)) in the number of samples N, so the method responds slowly to added computation: achieving an accuracy of ε with an Euler path discretization requires a computational cost of order O(ε^(-3)). Various variance reduction methods have therefore been proposed within the Monte Carlo framework to improve this behavior. One of the more recent, proposed by Giles in 2006, is the multilevel Monte Carlo method. This method reduces the computational complexity to O(ε^(-2) (log ε)^2) when used with Euler discretization and to O(ε^(-2)) when used with Milstein discretization, and it has the capacity to be combined with other variance reduction methods. In this article, multilevel Monte Carlo with Euler and Milstein discretization is compared with the standard Monte Carlo method, in terms of computational complexity, for pricing European call options.
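
    A compact sketch of the multilevel idea, using Euler discretization of a geometric Brownian motion and estimating E[S_T] rather than a discounted option payoff for brevity (all parameters are invented): the estimator telescopes a coarse level-0 estimate with coupled fine/coarse correction terms, whose variance shrinks with the step size.

```python
import math
import random

def euler_pair(s0, mu, sigma, T, n_fine, rng):
    """GBM path with n_fine Euler steps plus a coupled coarse path
    (n_fine // 2 steps) driven by the same Brownian increments."""
    dt = T / n_fine
    fine = coarse = s0
    prev_dw = 0.0
    for i in range(n_fine):
        dw = rng.gauss(0.0, math.sqrt(dt))
        fine += mu * fine * dt + sigma * fine * dw
        if i % 2 == 1:
            coarse += mu * coarse * (2 * dt) + sigma * coarse * (prev_dw + dw)
        prev_dw = dw
    return fine, coarse

def mlmc_mean(s0, mu, sigma, T, max_level, n_samples, rng):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]."""
    est = 0.0
    for level in range(max_level + 1):
        acc = 0.0
        for _ in range(n_samples):
            if level == 0:                      # one-step Euler estimate
                dw = rng.gauss(0.0, math.sqrt(T))
                acc += s0 * (1.0 + mu * T + sigma * dw)
            else:                               # coupled correction term
                fine, coarse = euler_pair(s0, mu, sigma, T, 2 ** level, rng)
                acc += fine - coarse
        est += acc / n_samples
    return est

rng = random.Random(123)
est = mlmc_mean(100.0, 0.05, 0.2, 1.0, 3, 20000, rng)
# exact E[S_T] = 100 * exp(0.05) ≈ 105.13, up to O(dt) Euler bias
```

    In a full implementation the number of samples per level is chosen from the estimated level variances, which is where the complexity gain over single-level Monte Carlo comes from; this sketch uses a fixed sample count per level for clarity.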

  20. Perturbation Monte Carlo methods for tissue structure alterations.

    Science.gov (United States)

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Spanier, Jerome

    2013-01-01

    This paper describes an extension of the perturbation Monte Carlo method to model light transport when the phase function is arbitrarily perturbed. Current perturbation Monte Carlo methods allow perturbation of both the scattering and absorption coefficients; however, the phase function cannot be varied. The more complex method we develop and test here is not limited in this way. We derive a rigorous perturbation Monte Carlo extension that can be applied to a large family of important biomedical light transport problems and demonstrate its greater computational efficiency compared with using conventional Monte Carlo simulations to produce forward transport problem solutions. The gains of the perturbation method occur because only a single baseline Monte Carlo simulation is needed to obtain forward solutions to other closely related problems whose input is described by perturbing one or more parameters from the input of the baseline problem. The new perturbation Monte Carlo methods are tested using tissue light scattering parameters relevant to epithelia, where many tumors originate. The tissue model has parameters for the number density and average size of three classes of scatterers: whole nuclei; organelles such as lysosomes and mitochondria; and small particles such as ribosomes or large protein complexes. When these parameters or the wavelength are varied, the scattering coefficient and the phase function vary. Perturbation calculations give accurate results over variations of ∼15-25% of the scattering parameters.
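
    The reuse of a single baseline simulation can be illustrated for the simplest case, an absorption perturbation, which the abstract notes is already handled by existing methods (the phase-function extension developed in the paper is considerably more involved). The toy below stores baseline path lengths and reweights them by a likelihood ratio instead of resimulating:

```python
import math
import random

def baseline_paths(n, rng):
    """Toy baseline simulation: store each photon's total path length."""
    return [rng.uniform(0.5, 2.0) for _ in range(n)]

def transmission(paths, mu_a):
    """Direct estimate: mean survival weight exp(-mu_a * L)."""
    return sum(math.exp(-mu_a * L) for L in paths) / len(paths)

def perturbed_transmission(paths, mu_a0, mu_a1):
    """Perturbation estimate: reweight the stored baseline (mu_a0) paths
    by exp(-(mu_a1 - mu_a0) * L) instead of rerunning the simulation."""
    return sum(math.exp(-mu_a0 * L) * math.exp(-(mu_a1 - mu_a0) * L)
               for L in paths) / len(paths)

rng = random.Random(5)
paths = baseline_paths(50000, rng)
direct = transmission(paths, 0.8)                     # fresh "simulation"
reweighted = perturbed_transmission(paths, 0.5, 0.8)  # baseline ran at 0.5
```

    The two estimates agree on the same stored paths, which is the point: one baseline run serves a whole family of perturbed problems at negligible extra cost.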

  1. Markov chain Monte Carlo and importance sampling for multiple targets tracking

    Institute of Scientific and Technical Information of China (English)

    龙云利; 徐晖; 安玮

    2011-01-01

    For the problem of tracking multiple targets in a dense clutter environment, this paper presents a tracking method based on Markov chain Monte Carlo with importance sampling (MCMC-IS). The joint association events are sampled by Markov chain Monte Carlo, and from these samples the marginal association probability of each measurement to each target is calculated. When solving for the joint association events, the probability density of single-target measurements is used for importance sampling, improving sampling efficiency. The MCMC importance sampling method overcomes the "combinatorial explosion" of joint probabilistic data association (JPDA) and achieves real-time tracking of multiple targets under heavy clutter. Simulation experiments comparing tracking precision and processing time verify the effectiveness of the method.

  2. Probing the segmental mobility and energy of the active zones of a protein chain (aspartic acid protease) by a coarse-grained bond-fluctuation Monte Carlo simulation

    Science.gov (United States)

    Pandey, Ras; Farmer, Barry

    2008-03-01

    A protein chain such as aspartic acid protease is described by a specific sequence of 99 residues, each with its own specific characteristics. In a coarse-grained description, the backbone of a protein chain is described by nodes tethered together by peptide bonds, where each node (the amino acid group) is characterized by molecular weight and hydrophobicity. A well-developed and mature computational modeling tool for polymer chains, the bond-fluctuation model, is used to study this specific protein chain with its constituent amino acid groups and their sequence. The relative magnitude of hydrophobicity is used to develop appropriate interaction potentials for these amino acid groups in explicit solvent. The Metropolis algorithm is used to move each node and solvent constituent. The local energy and mobility of each amino acid group are analyzed along with the global energy, mobility, and conformation of the protein chain. The effect of the solvent interaction and its concentration on these quantities will be presented.

  3. Monte Carlo Estimation of the Conditional Rasch Model. Research Report 94-09.

    Science.gov (United States)

    Akkermans, Wies M. W.

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning constants have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Markov chain Monte Carlo. As an example, the method is applied to the conditional estimation of both item and…

  4. Conformation of a coarse-grained protein chain (an aspartic acid protease) model in effective solvent by a bond-fluctuating Monte Carlo simulation

    Science.gov (United States)

    Pandey, R. B.; Farmer, B. L.

    2008-03-01

    In a coarse-grained description of a protein chain, all of the 20 amino acid residues can be broadly divided into three groups: hydrophobic (H), polar (P), and electrostatic (E). A protein can be described by nodes tethered in a chain, with each node representing an amino acid group. Aspartic acid protease consists of 99 residues in a well-defined sequence of H, P, and E nodes tethered together by fluctuating bonds. The protein chain is placed on a cubic lattice where empty lattice sites constitute an effective solvent medium. The amino acid groups (nodes) interact with the solvent (S) sites through appropriate attractive (PS) and repulsive (HS) interactions and execute their stochastic movement with the Metropolis algorithm. Variations of the root mean square displacement of the center of mass and of the center node of the protease chain, and of its gyration radius, with the time steps are examined for different solvent strengths. The structure of the protease swells on increasing the solvent interaction strength, which tends to enhance the relaxation time needed to reach the diffusive behavior of the chain. The equilibrium radius of gyration increases linearly with the solvent strength: a slow rate of increase in the weak-solvent regime is followed by faster swelling in stronger solvent. Variation of the gyration radius with the time steps suggests that the protein chain moves via contraction and expansion in a somewhat quasiperiodic pattern, particularly in strong solvent.
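
    The Metropolis rule used to move each node can be sketched on a one-dimensional toy energy (a single harmonic "tether" rather than the actual bond-fluctuation chain); equipartition then provides a built-in check, since the mean energy should approach 1/(2*beta):

```python
import math
import random

def metropolis_move(x, beta, step, rng, energy):
    """Propose a displacement; accept with probability min(1, exp(-beta*dE))."""
    x_new = x + rng.uniform(-step, step)
    dE = energy(x_new) - energy(x)
    if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
        return x_new
    return x

energy = lambda x: 0.5 * x * x        # harmonic "tether" for a single node
rng = random.Random(1)
beta, x, e_sum, n = 1.0, 0.0, 0.0, 0
for i in range(200000):
    x = metropolis_move(x, beta, 1.0, rng, energy)
    if i >= 1000:                     # discard a short equilibration
        e_sum += energy(x)
        n += 1
mean_E = e_sum / n                    # equipartition: should be ~ 1/(2*beta)
```

    In the lattice model the same accept/reject test is applied to node and solvent moves, with dE built from the PS and HS contact interactions instead of a harmonic term.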

  5. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. 
The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations.

  6. Monte Carlo Simulation for the Adsorption of Symmetric Triblock Copolymers

    Institute of Scientific and Technical Information of China (English)

    彭昌军; 李健康; 刘洪来; 胡英

    2004-01-01

    The adsorption behavior of symmetric triblock copolymers, Am/2BnAm/2, from a nonselective solvent at the solid-liquid interface has been studied by Monte Carlo simulations on a simple lattice model. Either segment A or segment B is attractive, while the other is non-attractive to the surface. Influences of the adsorption energy, bulk concentration, chain composition and chain length on the microstructure of adsorbed layers are presented. The results show that the total surface coverage and the adsorption amount increase monotonically as the bulk concentration increases. The larger the adsorption energy and the higher the fraction of adsorbing segments, the higher the total surface coverage. The product of the surface coverage and the proportion of non-attractive segments is nearly independent of the chain length, and the logarithm of the adsorption amount is a linear function of the reciprocal of the reduced temperature. When the adsorption energy is larger, the adsorption amount exhibits a maximum as the fraction of adsorbing segments increases. The adsorption isotherms of copolymers with different lengths of non-attractive segments can be mapped onto a single curve for a given adsorption energy. The adsorption layer thickness decreases as the adsorption energy and the fraction of adsorbing segments increase, but it increases as the length of the non-attractive segments increases. The tails mainly govern the adsorption layer thickness.

  7. Monte Carlo simulations of the HP model (the "Ising model" of protein folding)

    Science.gov (United States)

    Li, Ying Wai; Wüst, Thomas; Landau, David P.

    2011-09-01

    Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
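
    Wang-Landau sampling itself fits in a short sketch. Here it is applied to a trivially solvable toy, n independent spins in a field with E = -sum(s_i), whose exact density of states is binomial (1, 4, 6, 4, 1 for n = 4), rather than to the HP model with its specialized pull and bond-rebridging moves:

```python
import math
import random

def wang_landau(n_spins, ln_f_final, rng):
    """Wang-Landau estimate of ln g(E) for n_spins independent Ising spins
    in a field, E = -sum(s_i)."""
    spins = [1] * n_spins
    energies = range(-n_spins, n_spins + 1, 2)
    lng = {E: 0.0 for E in energies}
    hist = {E: 0 for E in energies}
    ln_f, E = 1.0, -n_spins
    while ln_f > ln_f_final:
        for _ in range(2000):
            i = rng.randrange(n_spins)
            E_new = E + 2 * spins[i]            # flipping s_i changes E by 2*s_i
            # accept with probability min(1, g(E)/g(E_new))
            if rng.random() < math.exp(lng[E] - lng[E_new]):
                spins[i] = -spins[i]
                E = E_new
            lng[E] += ln_f                      # penalize the visited level
            hist[E] += 1
        if min(hist.values()) > 0.8 * (sum(hist.values()) / len(hist)):
            ln_f /= 2.0                         # histogram flat: refine f
            hist = {k: 0 for k in hist}
    return lng

lng = wang_landau(4, 1e-5, random.Random(2))
# exact density of states for 4 spins: g(-4..4) = 1, 4, 6, 4, 1
ratio = math.exp(lng[0] - lng[-4])              # estimates g(0)/g(-4) = 6
```

    Once ln g(E) is known, thermodynamic averages at any temperature follow by reweighting, which is why a single Wang-Landau run can map the whole sequence of folding and adsorption "transitions".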

  8. Improved short adjacent repeat identification using three evolutionary Monte Carlo schemes.

    Science.gov (United States)

    Xu, Jin; Li, Qiwei; Li, Victor O K; Li, Shuo-Yen Robert; Fan, Xiaodan

    2013-01-01

    This paper employs three Evolutionary Monte Carlo (EMC) schemes to solve the Short Adjacent Repeat Identification Problem (SARIP), which aims to identify the common repeat units shared by multiple sequences. The three EMC schemes, i.e., Random Exchange (RE), Best Exchange (BE), and crossover are implemented on a parallel platform. The simulation results show that compared with the conventional Markov Chain Monte Carlo (MCMC) algorithm, all three EMC schemes can not only shorten the computation time via speeding up the convergence but also improve the solution quality in difficult cases. Moreover, we observe that the performances of different EMC schemes depend on the degeneracy degree of the motif pattern.
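
    The Random Exchange (RE) scheme can be sketched for a toy double-well target, where a single low-temperature Metropolis chain would stay trapped in one well but exchanges with hotter chains restore mixing (the energy function and temperature ladder below are invented, not from the SARIP setting):

```python
import math
import random

def energy(x):
    """Double well with minima at x = ±1 and a barrier of height 8 at x = 0."""
    return 8.0 * (x * x - 1.0) ** 2

def metropolis(x, beta, rng):
    x_new = x + rng.uniform(-0.5, 0.5)
    dE = energy(x_new) - energy(x)
    if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
        return x_new
    return x

def random_exchange(xs, betas, rng):
    """Random Exchange: pick an adjacent pair of chains and swap their states
    with probability min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    i = rng.randrange(len(xs) - 1)
    d = (betas[i] - betas[i + 1]) * (energy(xs[i]) - energy(xs[i + 1]))
    if d >= 0.0 or rng.random() < math.exp(d):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]

rng = random.Random(3)
betas = [0.2, 0.6, 1.2, 2.0]          # hot ... cold temperature ladder
xs = [1.0, 1.0, 1.0, 1.0]
cold = []
for _ in range(50000):
    xs = [metropolis(x, b, rng) for x, b in zip(xs, betas)]
    random_exchange(xs, betas, rng)
    cold.append(xs[-1])               # track the coldest chain (beta = 2.0)
```

    The swap move preserves each chain's target distribution while letting configurations discovered at high temperature reach the cold chain, which is the mechanism behind the reported speedup over plain MCMC in difficult cases.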

  9. Reducing quasi-ergodicity in a double well potential by Tsallis Monte Carlo simulation

    OpenAIRE

    Iwamatsu, Masao; Okabe, Yutaka

    2000-01-01

    A new Monte Carlo scheme based on the system of Tsallis's generalized statistical mechanics is applied to a simple double-well potential to calculate the canonical thermal average of the potential energy. Although we observed serious quasi-ergodicity when using the standard Metropolis Monte Carlo algorithm, this problem is largely reduced by the use of the new Monte Carlo algorithm. Therefore the ergodicity is guaranteed even for short Monte Carlo steps if we use this new canonical Monte Carlo scheme.

  10. Finding organic vapors - a Monte Carlo approach

    Science.gov (United States)

    Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku

    2010-05-01

    drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light on the problem by applying an ad hoc Monte Carlo algorithm to a well-established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.

  11. Coherent Scattering Imaging Monte Carlo Simulation

    Science.gov (United States)

    Hassan, Laila Abdulgalil Rafik

    Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal to noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing the grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source to sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5x0.5x0.5 cm3 in size was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons.
Further study is needed of the effects of breast density and breast thickness.

  12. Implementation of a Markov Chain Monte Carlo method to inorganic aerosol modeling of observations from the MCMA-2003 campaign – Part I: Model description and application to the La Merced site

    Directory of Open Access Journals (Sweden)

    F. M. San Martini

    2006-01-01

    Full Text Available The equilibrium inorganic aerosol model ISORROPIA was embedded in a Markov Chain Monte Carlo algorithm to develop a powerful tool to analyze aerosol data and predict gas-phase concentrations where these are unavailable. The method directly incorporates measurement uncertainty and prior knowledge, and provides a formal framework to combine measurements of different quality. The method was applied to particle- and gas-phase precursor observations taken at La Merced during the Mexico City Metropolitan Area (MCMA) 2003 Field Campaign and served to discriminate between diverging gas-phase observations of ammonia and to predict gas-phase concentrations of hydrochloric acid. The model reproduced observations of particle-phase ammonium, nitrate, and sulfate well. The most likely concentrations of ammonia were found to vary between 4 and 26 ppbv, while the range for nitric acid was 0.1 to 55 ppbv. During periods where the aerosol chloride observations were consistently above the detection limit, the model was able to reproduce the aerosol chloride observations well and predicted that the most likely gas-phase hydrochloric acid concentration varied between 0.4 and 5 ppbv. Despite the high ammonia concentrations observed and predicted by the model, when the aerosols were assumed to be on the efflorescence branch they were predicted to be acidic (pH ≈ 3).

  13. Application of Bayesian population physiologically based pharmacokinetic (PBPK) modeling and Markov chain Monte Carlo simulations to pesticide kinetics studies in protected marine mammals: DDT, DDE, and DDD in harbor porpoises.

    Science.gov (United States)

    Weijs, Liesbeth; Yang, Raymond S H; Das, Krishna; Covaci, Adrian; Blust, Ronny

    2013-05-01

    Physiologically based pharmacokinetic (PBPK) modeling in marine mammals is a challenge because of the lack of parameter information and the ban on exposure experiments. To minimize uncertainty and variability, parameter estimation methods are required for the development of reliable PBPK models. The present study is the first to develop PBPK models for the lifetime bioaccumulation of p,p'-DDT, p,p'-DDE, and p,p'-DDD in harbor porpoises. In addition, this study is also the first to apply the Bayesian approach executed with Markov chain Monte Carlo simulations using two data sets of harbor porpoises from the Black and North Seas. Parameters from the literature were used as priors for the first "model update" using the Black Sea data set, the resulting posterior parameters were then used as priors for the second "model update" using the North Sea data set. As such, PBPK models with parameters specific for harbor porpoises could be strengthened with more robust probability distributions. As the science and biomonitoring effort progress in this area, more data sets will become available to further strengthen and update the parameters in the PBPK models for harbor porpoises as a species anywhere in the world. Further, such an approach could very well be extended to other protected marine mammals.
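
    The posterior-as-prior chaining used for the two data sets can be illustrated with a deliberately simple one-parameter model (a Gaussian mean with known variance; nothing here reflects the actual PBPK model structure): sample the posterior for the first data set under a flat prior, fit a Gaussian to those samples, and use it as the prior for the second data set.

```python
import math
import random
import statistics

def metropolis_posterior(data, log_prior, n_iter, rng, sigma=1.0, step=0.5):
    """Metropolis sampling of p(theta | data) for a Gaussian likelihood
    with known sigma and the supplied log-prior."""
    def log_post(t):
        return log_prior(t) - sum((d - t) ** 2 for d in data) / (2 * sigma ** 2)
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)
        dlp = log_post(prop) - log_post(theta)
        if dlp >= 0.0 or rng.random() < math.exp(dlp):
            theta = prop
        chain.append(theta)
    return chain[n_iter // 5:]                   # discard burn-in

rng = random.Random(11)
data_a = [rng.gauss(2.0, 1.0) for _ in range(50)]   # first data set
data_b = [rng.gauss(2.0, 1.0) for _ in range(50)]   # second data set

# Stage 1: flat prior. Stage 2: Gaussian prior fitted to the stage-1 posterior.
post_a = metropolis_posterior(data_a, lambda t: 0.0, 20000, rng)
m_a, s_a = statistics.mean(post_a), statistics.stdev(post_a)
post_b = metropolis_posterior(
    data_b, lambda t: -0.5 * ((t - m_a) / s_a) ** 2, 20000, rng)
```

    In this conjugate toy the two-stage posterior agrees with the posterior from pooling both data sets, which is what makes the sequential "model update" with successive data sets legitimate, and the stage-2 posterior is visibly tighter than stage 1.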

  14. Meaningful timescales from Monte Carlo simulations of particle systems with hard-core interactions

    Science.gov (United States)

    Costa, Liborio I.

    2016-12-01

    A new Markov Chain Monte Carlo method for simulating the dynamics of particle systems characterized by hard-core interactions is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.

  15. Quasi-Monte Carlo methods for lattice systems. A first look

    Energy Technology Data Exchange (ETDEWEB)

    Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Leovey, H.; Griewank, A. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Nube, A. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Mueller-Preussker, M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik

    2013-02-15

    We investigate the applicability of Quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of Quasi-Monte Carlo methods it is possible to improve this behavior for certain problems up to N^(-1). We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling.
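
    The N^(-1/2) versus N^(-1) contrast can already be seen in one dimension with a van der Corput sequence standing in for the quasi-random point sets used in the paper (the integrand is an arbitrary smooth toy):

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-2 van der Corput low-discrepancy sequence."""
    points = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k:
            denom *= base
            k, digit = divmod(k, base)
            x += digit / denom
        points.append(x)
    return points

f = lambda x: x * x                   # integral of x^2 over [0, 1] is 1/3
n = 4096
qmc = sum(f(x) for x in van_der_corput(n)) / n
rng = random.Random(0)
mc = sum(f(rng.random()) for _ in range(n)) / n
# |qmc - 1/3| shrinks roughly like 1/n, |mc - 1/3| only like 1/sqrt(n)
```

    The hard part addressed by the paper is that lattice observables come from a Markov chain rather than independent draws, so the low-discrepancy points must be injected into the sampling itself rather than used as a drop-in replacement as above.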

  16. An Unbiased Hessian Representation for Monte Carlo PDFs

    CERN Document Server

    Carrazza, Stefano; Kassabov, Zahari; Latorre, Jose Ignacio; Rojo, Juan

    2015-01-01

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then, that if applied to a Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (CMC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together...

  17. An unbiased Hessian representation for Monte Carlo PDFs

    Energy Technology Data Exchange (ETDEWEB)

    Carrazza, Stefano; Forte, Stefano [Universita di Milano, TIF Lab, Dipartimento di Fisica, Milan (Italy); INFN, Sezione di Milano (Italy); Kassabov, Zahari [Universita di Milano, TIF Lab, Dipartimento di Fisica, Milan (Italy); Universita di Torino, Dipartimento di Fisica, Turin (Italy); INFN, Sezione di Torino (Italy); Latorre, Jose Ignacio [Universitat de Barcelona, Departament d' Estructura i Constituents de la Materia, Barcelona (Spain); Rojo, Juan [University of Oxford, Rudolf Peierls Centre for Theoretical Physics, Oxford (United Kingdom)

    2015-08-15

    We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, if applied to a Hessian PDF set (MMHT14) which has been transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available together with (through LHAPDF6) Hessian representations of the NNPDF3.0 set and the MC-H PDF set. (orig.)
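
    The core idea of turning replica fluctuations into symmetric Hessian error members can be sketched with a plain covariance eigendecomposition (a simplification: the paper selects an optimal replica basis with a genetic algorithm, which is not reproduced here; the synthetic "replicas" below are hypothetical stand-ins for PDF values on a grid).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical stand-in for PDF replicas: n_rep samples on an n_x-point grid,
    # built with rank-5 structure so a small basis can capture the fluctuations.
    n_rep, n_x = 100, 20
    replicas = rng.normal(size=(n_rep, 5)) @ rng.normal(size=(5, n_x))

    central = replicas.mean(axis=0)
    cov = np.cov(replicas, rowvar=False)            # replica covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]                # sort directions by variance
    eigval, eigvec = eigval[order], eigvec[:, order]

    # Keep the leading directions; symmetric Hessian-style error members are
    # central +/- sqrt(lambda_k) * v_k, reproducing the covariance to truncation.
    k = 5
    members = [central + np.sqrt(eigval[i]) * eigvec[:, i] for i in range(k)]
    approx_cov = sum(eigval[i] * np.outer(eigvec[:, i], eigvec[:, i]) for i in range(k))
    print(np.allclose(approx_cov, cov, atol=1e-8))
    ```

    Because the synthetic replicas have rank-5 structure, five eigendirections reproduce the replica covariance essentially exactly; for real PDF sets the truncation level controls the information loss.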

  18. Vectorizing and macrotasking Monte Carlo neutral particle algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Heifetz, D.B.

    1987-04-01

    Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines it may be possible to push macrotasking to its limit, with each test flight, and each split test flight, being a separate task.
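
    The "gap" problem described above — finished flights leaving holes in the working arrays — has a modern vectorized analogue: recompacting the active set each step with index arrays. The sketch below is a deliberately simplified forward-only slab model (illustrative geometry and absorption probability, not the paper's plasma code).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    pos = np.zeros(n)                 # depths in a slab 5 mean free paths thick
    alive = np.ones(n, dtype=bool)
    transmitted = 0

    while alive.any():
        idx = np.flatnonzero(alive)           # compacted index list: no gaps from finished flights
        pos[idx] += rng.exponential(1.0, size=idx.size)   # vectorised free-path sampling
        escaped = pos[idx] > 5.0              # these flights leave the slab and are tallied
        transmitted += int(escaped.sum())
        survivors = idx[~escaped]
        absorbed = rng.random(survivors.size) < 0.3       # absorption at each collision
        alive[:] = False
        alive[survivors[~absorbed]] = True

    print(transmitted / n)                    # analytic transmission: exp(-0.3 * 5) ≈ 0.2231
    ```

    Every inner operation acts on dense, gap-free arrays, which is exactly what the vectorization strategy in the record is after; the analytic check exp(-0.3 × 5) follows from Poisson thinning of collision sites.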

  19. VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING

    Directory of Open Access Journals (Sweden)

    Kartik Dwivedi

    2013-12-01

    Full Text Available In this paper, we describe a novel variational Monte Carlo approach for modeling and tracking body parts of articulated objects. An articulated object (human target) is represented as a dynamic Markov network of the different constituent parts. The proposed approach combines local information of individual body parts and other spatial constraints influenced by neighboring parts. The movement of the relative parts of the articulated body is modeled with local information of displacements from the Markov network and global information from other neighboring parts. We explore the effect of certain model parameters (including the number of parts tracked, the number of Monte Carlo cycles, etc.) on system accuracy and show that our variational Monte Carlo approach achieves better efficiency and effectiveness compared to other methods on a number of real-time video datasets containing single targets.

  20. Sequential Monte Carlo on large binary sampling spaces

    CERN Document Server

    Schäfer, Christian

    2011-01-01

    A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for a good performance. In this paper, we present such a parametric family for adaptive sampling on high-dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context. We want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high-dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm is binary parametric families which take correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binar...

  1. Introduction to the variational and diffusion Monte Carlo methods

    CERN Document Server

    Toulouse, Julien; Umrigar, C J

    2015-01-01

    We provide a pedagogical introduction to the two main variants of real-space quantum Monte Carlo methods for electronic-structure calculations: variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC). Assuming no prior knowledge of the subject, we review in depth the Metropolis-Hastings algorithm used in VMC for sampling the square of an approximate wave function, discussing details important for applications to electronic systems. We also review in detail the more sophisticated DMC algorithm within the fixed-node approximation, introduced to avoid the infamous fermionic sign problem, which allows one to sample a more accurate approximation to the ground-state wave function. Throughout this review, we discuss the statistical methods used for evaluating expectation values and statistical uncertainties. In particular, we show how to estimate nonlinear functions of expectation values and their statistical uncertainties.
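
    The VMC workflow sketched above — Metropolis sampling of |psi|^2 and averaging a local energy — fits in a few lines for the textbook 1-D harmonic oscillator (hbar = m = omega = 1). The trial function and parameter value below are the standard pedagogical choice, not taken from the record.

    ```python
    import math, random

    random.seed(0)
    alpha = 0.8          # variational parameter; trial wave function psi(x) = exp(-alpha x^2 / 2)

    def local_energy(x):
        """E_L = -psi''/(2 psi) + x^2/2 for the 1-D harmonic oscillator."""
        return alpha / 2 + 0.5 * (1 - alpha * alpha) * x * x

    x, energies = 0.0, []
    for sweep in range(200_000):
        x_new = x + random.uniform(-1.0, 1.0)
        # Metropolis acceptance on |psi|^2 = exp(-alpha x^2)
        if random.random() < math.exp(-alpha * (x_new * x_new - x * x)):
            x = x_new
        if sweep >= 10_000:                  # discard equilibration sweeps
            energies.append(local_energy(x))

    e_mean = sum(energies) / len(energies)
    print(e_mean)        # >= exact ground-state energy 0.5; equals 0.5 only at alpha = 1
    ```

    For alpha = 0.8 the analytic variational energy is alpha/2 + (1 - alpha^2)/(4 alpha) = 0.5125, so the sample mean sits slightly above the exact 0.5, illustrating the variational bound.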

  2. Efficiency of Monte Carlo sampling in chaotic systems.

    Science.gov (United States)

    Leitão, Jorge C; Lopes, J M Viana Parente; Altmann, Eduardo G

    2014-11-01

    In this paper we investigate how the complexity of chaotic phase spaces affects the efficiency of importance sampling Monte Carlo simulations. We focus on flat-histogram simulations of the distribution of finite-time Lyapunov exponents in a simple chaotic system and obtain analytically that the computational effort (i) scales polynomially with the finite time, a tremendous improvement over the exponential scaling obtained with uniform sampling simulations; and (ii) that the polynomial scaling is suboptimal, a phenomenon known as critical slowing down. We show that critical slowing down appears because of the limited possibilities to issue a local proposal in the Monte Carlo procedure when it is applied to chaotic systems. These results show how generic properties of chaotic systems limit the efficiency of Monte Carlo simulations.

  3. Monte Carlo simulation of laser attenuation characteristics in fog

    Science.gov (United States)

    Wang, Hong-Xia; Sun, Chao; Zhu, You-zhang; Sun, Hong-hui; Li, Pan-shi

    2011-06-01

    Based on Mie scattering theory and the gamma size distribution model, the scattering extinction parameter of a spherical fog drop is calculated. For the transmission attenuation of a laser in fog, a Monte Carlo simulation model is established, and the dependence of the attenuation ratio on visibility and field angle is computed and analysed using a program developed in the MATLAB language. The results of the Monte Carlo method in this paper are compared with the results of the single-scattering method. The results show that the influence of multiple scattering needs to be considered when the visibility is low, since single-scattering calculations then have larger errors. The phenomenon of multiple scattering can be interpreted better when the Monte Carlo method is used to calculate the attenuation ratio of a laser transmitted through fog.

  4. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  5. Properties of Reactive Oxygen Species by Quantum Monte Carlo

    CERN Document Server

    Zen, Andrea; Guidoni, Leonardo

    2014-01-01

    The electronic properties of the oxygen molecule, in its singlet and triplet states, and of many small oxygen-containing radicals and anions have important roles in different fields of Chemistry, Biology and Atmospheric Science. Nevertheless, the electronic structure of such species is a challenge for ab-initio computational approaches because of the difficulty of correctly describing the static and dynamical correlation effects in the presence of one or more unpaired electrons. Only the highest-level quantum chemical approaches can yield reliable characterizations of their molecular properties, such as binding energies, equilibrium structures, molecular vibrations, charge distribution and polarizabilities. In this work we use the variational Monte Carlo (VMC) and the lattice regularized diffusion Monte Carlo (LRDMC) methods to investigate the equilibrium geometries and molecular properties of oxygen and oxygen reactive species. Quantum Monte Carlo methods are used in combination with the Jastrow Antisymmetrized Geminal ...

  6. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    Science.gov (United States)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal.
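
    The two ingredients above — more histories per iteration and a shrinking relaxation factor — can be sketched on a toy fixed-point problem. The "Monte Carlo power estimate" here is a hypothetical stand-in: a deterministic map plus noise that shrinks like 1/sqrt(histories); the 1/i relaxation schedule is the standard stochastic-approximation choice, not necessarily the paper's exact schedule.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_power_map(p, n_hist):
        """Stand-in for a Monte Carlo power-distribution estimate: a deterministic
        fixed-point map plus statistical noise that shrinks like 1/sqrt(n_hist)."""
        target = np.array([0.2, 0.3, 0.5])
        exact = 0.5 * p + 0.5 * target              # fixed point is `target`
        return exact + rng.normal(scale=1.0 / np.sqrt(n_hist), size=3)

    p = np.full(3, 1 / 3)                            # initial flat power guess
    n_hist = 5_000
    for i in range(1, 31):
        estimate = noisy_power_map(p, n_hist)
        p = p + (estimate - p) / i                   # relaxation factor 1/i
        n_hist = int(n_hist * 1.3)                   # grow histories geometrically

    print(np.round(p, 3))                            # approaches [0.2, 0.3, 0.5]
    ```

    The decreasing relaxation factor damps statistical noise while the growing history count keeps each new estimate informative, so the iterate settles onto the fixed point instead of oscillating around it.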

  7. TAKING THE NEXT STEP WITH INTELLIGENT MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    Booth, T.E.; Carlson, J.A. [and others

    2000-10-01

    For many scientific calculations, Monte Carlo is the only practical method available. Unfortunately, standard Monte Carlo methods converge slowly as the square root of the computer time. We have shown, both numerically and theoretically, that the convergence rate can be increased dramatically if the Monte Carlo algorithm is allowed to adapt based on what it has learned from previous samples. As the learning continues, computational efficiency increases, often geometrically fast. The particle transport work achieved geometric convergence for a two-region problem as well as for problems with rapidly changing nuclear data. The statistics work provided theoretical proof of geometric convergence for continuous transport problems and promising initial results for airborne migration of particles. The statistical physics work applied adaptive methods to a variety of physical problems including the three-dimensional Ising glass, quantum scattering, and eigenvalue problems.

  8. Monte Carlo tests of the ELIPGRID-PC algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
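
    The kind of Monte Carlo check described above can be sketched for the simplest case: a circular hot spot with radius at most half the grid spacing, centred uniformly at random in one cell of a square sampling grid (illustrative geometry only; ELIPGRID handles ellipses and other grids).

    ```python
    import math, random

    random.seed(0)

    def hit_probability(radius, grid, n_trials=200_000):
        """Probability that a square sampling grid (spacing `grid`) detects a
        circular hot spot of given radius, centre uniform in one grid cell."""
        hits = 0
        for _ in range(n_trials):
            cx, cy = random.uniform(0, grid), random.uniform(0, grid)
            # For radius <= grid/2 only the nearest node can fall inside the spot.
            nx, ny = round(cx / grid) * grid, round(cy / grid) * grid
            if math.hypot(cx - nx, cy - ny) <= radius:
                hits += 1
        return hits / n_trials

    r, g = 0.25, 1.0
    est = hit_probability(r, g)
    print(est, math.pi * r * r / (g * g))   # analytic value pi r^2 / g^2 ≈ 0.196
    ```

    For this geometry the analytic answer is simply the hot-spot area over the cell area, so the simulation can be validated to sub-percent accuracy, which is the same style of cross-check the record applies to ELIPGRID-PC.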

  9. Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo

    DEFF Research Database (Denmark)

    Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid

    2012-01-01

    This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. The Standard Monte Carlo (SMC) simulations may be used for this reason conceptually as an alternative to the popular Peaks-Over-Threshold (POT) method. However......, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy...... is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated...

  10. Monte Carlo Simulation in Statistical Physics An Introduction

    CERN Document Server

    Binder, Kurt

    2010-01-01

    Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers Classical as well as Quantum Monte Carlo methods. Furthermore a new chapter on the sampling of free-energy landscapes has been added. To help students in their work a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...

  11. Applicability of Quasi-Monte Carlo for lattice systems

    CERN Document Server

    Ammon, Andreas; Jansen, Karl; Leovey, Hernan; Griewank, Andreas; Müller-Preussker, Michael

    2013-01-01

    This project investigates the applicability of quasi-Monte Carlo methods to Euclidean lattice systems in order to improve the asymptotic error scaling of observables for such theories. The error of an observable calculated by averaging over random observations generated from ordinary Monte Carlo simulations scales like $N^{-1/2}$, where $N$ is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this scaling for certain problems to $N^{-1}$, or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling of all investigated observables in both cases.

  12. Implementation of Monte Carlo Simulations for the Gamma Knife System

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, W [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Huang, D [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Lee, L [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Feng, J [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Morris, K [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Calugaru, E [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Burman, C [Memorial Sloan-Kettering Cancer Center/Mercy Medical Center, 1000 N Village Ave., Rockville Centre, NY 11570 (United States); Li, J [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States); Ma, C-M [Fox Chase Cancer Center, 333 Cottman Ave., Philadelphia, PA 17111 (United States)

    2007-06-15

    Currently the Gamma Knife system is accompanied with a treatment planning system, Leksell GammaPlan (LGP) which is a standard, computer-based treatment planning system for Gamma Knife radiosurgery. In LGP, the dose calculation algorithm does not consider the scatter dose contributions and the inhomogeneity effect due to the skull and air cavities. To improve the dose calculation accuracy, Monte Carlo simulations have been implemented for the Gamma Knife planning system. In this work, the 201 Cobalt-60 sources in the Gamma Knife unit are considered to have the same activity. Each Cobalt-60 source is contained in a cylindric stainless steel capsule. The particle phase space information is stored in four beam data files, which are collected in the inner sides of the 4 treatment helmets, after the Cobalt beam passes through the stationary and helmet collimators. Patient geometries are rebuilt from patient CT data. Twenty two Patients are included in the Monte Carlo simulation for this study. The dose is calculated using Monte Carlo in both homogenous and inhomogeneous geometries with identical beam parameters. To investigate the attenuation effect of the skull bone the dose in a 16cm diameter spherical QA phantom is measured with and without a 1.5mm Lead-covering and also simulated using Monte Carlo. The dose ratios with and without the 1.5mm Lead-covering are 89.8% based on measurements and 89.2% according to Monte Carlo for a 18mm-collimator Helmet. For patient geometries, the Monte Carlo results show that although the relative isodose lines remain almost the same with and without inhomogeneity corrections, the difference in the absolute dose is clinically significant. The average inhomogeneity correction is (3.9 {+-} 0.90) % for the 22 patients investigated. These results suggest that the inhomogeneity effect should be considered in the dose calculation for Gamma Knife treatment planning.

  13. A standard Event Class for Monte Carlo Generators

    Institute of Scientific and Technical Information of China (English)

    L.A. Gerren; M. Fischler

    2001-01-01

    StdHepC++ [1] is a CLHEP [2] Monte Carlo event class library which provides a common interface to Monte Carlo event generators. This work is an extensive redesign of the StdHep Fortran interface to use the full power of object-oriented design. A generated event maps naturally onto the Directed Acyclic Graph concept, and we have used the HepMC classes to implement this. The full implementation allows the user to combine events to simulate beam pileup and access them transparently as though they were a single event.

  14. Parallelization of Monte Carlo codes MVP/GMVP

    Energy Technology Data Exchange (ETDEWEB)

    Nagaya, Yasunobu; Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Sasaki, Makoto

    1998-03-01

    General-purpose Monte Carlo codes MVP/GMVP are well-vectorized and thus enable us to perform high-speed Monte Carlo calculations. In order to achieve further speedups, we parallelized the codes on different types of parallel processing platforms. The platforms reported are a distributed-memory vector-parallel computer Fujitsu VPP500, a distributed-memory massively parallel computer Intel Paragon and a distributed-memory scalar-parallel computer Hitachi SR2201. As is generally the case, ideal speedup could be obtained for large-scale problems, but parallelization efficiency worsened as the batch size per processing element (PE) became smaller. (author)

  15. Parton distribution functions in Monte Carlo factorisation scheme

    Science.gov (United States)

    Jadach, S.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.

    2016-12-01

    A next step in development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied with the complete description of parton distribution functions in a dedicated, Monte Carlo factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions.

  16. PEPSI — a Monte Carlo generator for polarized leptoproduction

    Science.gov (United States)

    Mankiewicz, L.; Schäfer, A.; Veltri, M.

    1992-09-01

    We describe PEPSI (Polarized Electron Proton Scattering Interactions), a Monte Carlo program for polarized deep inelastic leptoproduction mediated by electromagnetic interaction, and explain how to use it. The code is a modification of the LEPTO 4.3 Lund Monte Carlo for unpolarized scattering. The hard virtual gamma-parton scattering is generated according to the polarization-dependent QCD cross-section at first order in α_S. PEPSI requires the standard polarization-independent JETSET routines to simulate the fragmentation into final hadrons.

  17. Utilising Monte Carlo Simulation for the Valuation of Mining Concessions

    Directory of Open Access Journals (Sweden)

    Rosli Said

    2005-12-01

    Full Text Available Valuation involves the analyses of various input data to produce an estimated value. Since each input is itself often an estimate, there is an element of uncertainty in the input. This leads to uncertainty in the resultant output value. It is argued that a valuation must also convey information on the uncertainty, so as to be more meaningful and informative to the user. The Monte Carlo simulation technique can generate the information on uncertainty and is therefore potentially useful to valuation. This paper reports on the investigation that has been conducted to apply Monte Carlo simulation technique in mineral valuation, more specifically, in the valuation of a quarry concession.
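
    The argument above — propagating input uncertainty into a distribution of output value — can be sketched with entirely hypothetical quarry inputs (reserves, price, cost and a crude discount factor; none of these figures come from the record).

    ```python
    import random, statistics

    random.seed(42)

    def concession_value():
        """One draw of a hypothetical quarry valuation: every input is uncertain."""
        reserves = random.triangular(0.8e6, 1.4e6, 1.0e6)   # tonnes
        price = random.gauss(12.0, 1.5)                      # $/tonne
        cost = random.gauss(7.0, 0.8)                        # $/tonne
        discount = random.uniform(0.92, 0.97)                # crude one-period discounting
        return reserves * (price - cost) * discount

    values = [concession_value() for _ in range(50_000)]
    mean = statistics.fmean(values)
    cuts = statistics.quantiles(values, n=20)                # 5% steps
    p5, p95 = cuts[0], cuts[-1]
    print(f"mean ${mean/1e6:.1f}M, 90% interval ${p5/1e6:.1f}M..${p95/1e6:.1f}M")
    ```

    Rather than a single point estimate, the output is an interval, which is exactly the extra information on uncertainty the record argues a valuation should convey.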

  18. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf

    2010-01-01

    Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applicat
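
    The multilevel Monte Carlo method mentioned above rests on a telescoping sum over discretisation levels with coupled coarse/fine paths. A minimal sketch for E[S_T] under geometric Brownian motion with Euler stepping follows (illustrative parameters and sample allocations, not taken from the book):

    ```python
    import math, random

    random.seed(0)
    S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0   # geometric Brownian motion parameters

    def level_sample(level):
        """One coupled sample of P_l - P_{l-1} (or P_0 at level 0), where P is the
        Euler estimate of S_T and both paths share the same Brownian increments."""
        n_f = 2 ** level
        dt = T / n_f
        s_f = s_c = S0
        dw_pair = 0.0
        for i in range(n_f):
            dw = random.gauss(0.0, math.sqrt(dt))
            s_f += mu * s_f * dt + sigma * s_f * dw
            dw_pair += dw
            if level > 0 and i % 2 == 1:      # coarse path: one step per two fine steps
                s_c += mu * s_c * (2 * dt) + sigma * s_c * dw_pair
                dw_pair = 0.0
        return s_f - s_c if level > 0 else s_f

    # Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; coarse levels are
    # cheap, fine-level corrections have small variance, so fewer samples suffice.
    estimate = 0.0
    for level, n_samples in [(0, 100_000), (1, 20_000), (2, 5_000), (3, 2_000)]:
        estimate += sum(level_sample(level) for _ in range(n_samples)) / n_samples

    print(estimate, S0 * math.exp(mu * T))    # exact E[S_T] = e^0.05 ≈ 1.0513
    ```

    Sharing the Brownian increments between the fine and coarse paths is what makes the correction variance shrink with the level, so most of the work lands on the cheap coarse grid.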

  19. Accuracy Analysis of Assembly Success Rate with Monte Carlo Simulations

    Institute of Scientific and Technical Information of China (English)

    仲昕; 杨汝清; 周兵

    2003-01-01

    Monte Carlo simulation was applied to Assembly Success Rate (ASR) analyses. The ASR of two peg-in-hole robot assemblies was used as an example, taking component parts' sizes, manufacturing tolerances and robot repeatability into account. A statistical expression was proposed and deduced in this paper, which offers an alternative method of estimating the accuracy of ASR without having to repeat the simulations. This statistical method also helps to choose a suitable sample size if error reduction is desired. Monte Carlo simulation results demonstrated the feasibility of the method.
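
    The idea of bounding the ASR estimate's accuracy without re-running the simulation can be sketched with the standard binomial-proportion expression (the peg-in-hole model, tolerances and clearance below are hypothetical, not the paper's data).

    ```python
    import math, random

    random.seed(7)

    def simulate_assembly():
        """Hypothetical peg-in-hole trial: success if combined positional error
        (part tolerance + robot repeatability) stays within clearance."""
        tolerance = random.gauss(0.0, 0.02)      # mm
        repeatability = random.gauss(0.0, 0.03)  # mm
        return abs(tolerance + repeatability) < 0.05

    n = 20_000
    successes = sum(simulate_assembly() for _ in range(n))
    p = successes / n
    # Normal-approximation standard error of the estimated ASR; the same
    # expression, inverted, picks a sample size for a target accuracy.
    se = math.sqrt(p * (1 - p) / n)
    n_needed = math.ceil(p * (1 - p) * (1.96 / 0.005) ** 2)   # for ±0.5% at 95%
    print(f"ASR = {p:.3f} +/- {1.96 * se:.3f}, need n ≈ {n_needed} for ±0.5%")
    ```

    Inverting p(1-p)/n against a target half-width is the "choose a suitable sample size without repeating the simulations" step the abstract refers to.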

  20. THE APPLICATION OF MONTE CARLO SIMULATION FOR A DECISION PROBLEM

    Directory of Open Access Journals (Sweden)

    Çiğdem ALABAŞ

    2001-01-01

    Full Text Available The ultimate goal of the standard decision tree approach is to calculate the expected value of a selected performance measure. In real-world situations, decision problems become very complex as the uncertainty factors increase. In such cases, decision analysis using the standard decision tree approach is not useful. One way of overcoming this difficulty is Monte Carlo simulation. In this study, a Monte Carlo simulation model is developed for a complex problem and statistical analysis is performed to make the best decision.
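
    Replacing a decision tree's expected-value calculation with simulation can be sketched on a hypothetical two-option problem (the demand distribution, margins and fixed cost below are invented for illustration).

    ```python
    import random, statistics

    random.seed(1)

    def payoff(option):
        """One Monte Carlo draw of profit for a hypothetical two-option decision."""
        demand = random.lognormvariate(6.0, 0.4)      # uncertain units sold
        if option == "expand":
            return 15.0 * demand - 1_500              # higher margin, fixed expansion cost
        return 10.0 * demand                          # status quo: lower margin, no fixed cost

    n = 50_000
    results = {opt: [payoff(opt) for _ in range(n)] for opt in ("expand", "hold")}
    means = {opt: statistics.fmean(vals) for opt, vals in results.items()}
    best = max(means, key=means.get)
    print({opt: round(m) for opt, m in means.items()}, "->", best)
    ```

    Beyond the expected values, the simulated samples also give spreads and quantiles for each option, which is the statistical analysis the record uses to pick the best decision under uncertainty.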

  1. Applications of quantum Monte Carlo methods in condensed systems

    CERN Document Server

    Kolorenc, Jindrich

    2010-01-01

    The quantum Monte Carlo methods represent a powerful and broadly applicable computational tool for finding very accurate solutions of the stationary Schroedinger equation for atoms, molecules, solids and a variety of model systems. The algorithms are intrinsically parallel and are able to take full advantage of the present-day high-performance computing systems. This review article concentrates on the fixed-node/fixed-phase diffusion Monte Carlo method with emphasis on its applications to electronic structure of solids and other extended many-particle systems.

  2. Quasi-Monte Carlo methods for the Heston model

    OpenAIRE

    Jan Baldeaux; Dale Roberts

    2012-01-01

    In this paper, we discuss the application of quasi-Monte Carlo methods to the Heston model. We base our algorithms on the Broadie-Kaya algorithm, an exact simulation scheme for the Heston model. As the joint transition densities are not available in closed-form, the Linear Transformation method due to Imai and Tan, a popular and widely applicable method to improve the effectiveness of quasi-Monte Carlo methods, cannot be employed in the context of path-dependent options when the underlying pr...

  3. Novel Quantum Monte Carlo Approaches for Quantum Liquids

    Science.gov (United States)

    Rubenstein, Brenda M.

    Quantum Monte Carlo methods are a powerful suite of techniques for solving the quantum many-body problem. By using random numbers to stochastically sample quantum properties, QMC methods are capable of studying low-temperature quantum systems well beyond the reach of conventional deterministic techniques. QMC techniques have likewise been indispensable tools for augmenting our current knowledge of superfluidity and superconductivity. In this thesis, I present two new quantum Monte Carlo techniques, the Monte Carlo Power Method and Bose-Fermi Auxiliary-Field Quantum Monte Carlo, and apply previously developed Path Integral Monte Carlo methods to explore two new phases of quantum hard spheres and hydrogen. I lay the foundation for a subsequent description of my research by first reviewing the physics of quantum liquids in Chapter One and the mathematics behind Quantum Monte Carlo algorithms in Chapter Two. I then discuss the Monte Carlo Power Method, a stochastic way of computing the first several extremal eigenvalues of a matrix too memory-intensive to be stored and therefore diagonalized. As an illustration of the technique, I demonstrate how it can be used to determine the second eigenvalues of the transition matrices of several popular Monte Carlo algorithms. This information may be used to quantify how rapidly a Monte Carlo algorithm is converging to the equilibrium probability distribution it is sampling. I next present the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm. This algorithm generalizes the well-known Auxiliary-Field Quantum Monte Carlo algorithm for fermions to bosons and Bose-Fermi mixtures. Despite some shortcomings, the Bose-Fermi Auxiliary-Field Quantum Monte Carlo algorithm represents the first exact technique capable of studying Bose-Fermi mixtures of any size in any dimension. In Chapter Six, I describe a new Constant Stress Path Integral Monte Carlo algorithm for the study of quantum mechanical systems under high pressures.
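
    The deterministic backbone of the Monte Carlo Power Method described above is ordinary power iteration with deflation for the second eigenvalue; the stochastic version replaces the dense matrix-vector products with sampled estimates so the matrix never needs to be stored. A deterministic sketch on a small symmetric transition-like matrix (an invented example, not from the thesis):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def power_method(A, n_iter=500):
        """Power iteration: returns the Rayleigh quotient and dominant eigenvector."""
        v = rng.normal(size=A.shape[0])
        for _ in range(n_iter):
            v = A @ v
            v /= np.linalg.norm(v)
        return v @ A @ v, v

    # Small symmetric doubly-stochastic matrix; the Monte Carlo Power Method
    # would replace the A @ v products with stochastic estimates.
    A = np.array([[0.6, 0.3, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.1, 0.2, 0.7]])
    lam1, v1 = power_method(A)
    # Deflation: subtract the dominant component, then iterate again for lambda_2.
    A2 = A - lam1 * np.outer(v1, v1)
    lam2, _ = power_method(A2)
    print(round(lam1, 4), round(lam2, 4))
    ```

    For a Monte Carlo transition matrix the dominant eigenvalue is 1 and the second eigenvalue controls the geometric rate of convergence to equilibrium, which is exactly the diagnostic use case described in the abstract.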

  4. Monte Carlo simulation of electron slowing down in indium

    Energy Technology Data Exchange (ETDEWEB)

    Rouabah, Z.; Hannachi, M. [Materials and Electronic Systems Laboratory (LMSE), University of Bordj Bou Arreridj, Bordj Bou Arreridj (Algeria); Champion, C. [Université de Bordeaux 1, CNRS/IN2P3, Centre d'Etudes Nucléaires de Bordeaux-Gradignan (CENBG), Gradignan (France); Bouarissa, N., E-mail: n_bouarissa@yahoo.fr [Laboratory of Materials Physics and its Applications, University of M'sila, 28000 M'sila (Algeria)

    2015-07-15

    Highlights: • Electron scattering in indium targets. • Modeling of elastic cross-sections. • Monte Carlo simulation of low energy electrons. - Abstract: In the current study, we aim to simulate, via a detailed Monte Carlo code, electron penetration in a semi-infinite indium medium for incident energies ranging from 0.5 to 5 keV. Electron ranges, backscattering coefficients, mean penetration depths as well as stopping profiles are then reported. The results may be seen as the first predictions for low-energy electron penetration in an indium target.

  5. Kinetic Monte Carlo method applied to nucleic acid hairpin folding.

    Science.gov (United States)

    Sauerwine, Ben; Widom, Michael

    2011-12-01

    Kinetic Monte Carlo on coarse-grained systems, such as nucleic acid secondary structure, is advantageous for being able to access behavior at long time scales, even minutes or hours. Transition rates between coarse-grained states depend upon intermediate barriers, which are not directly simulated. We propose an Arrhenius rate model and an intermediate energy model that incorporates the effects of the barrier between simulated states without enlarging the state space itself. Applying our Arrhenius rate model to DNA hairpin folding, we demonstrate improved agreement with experiment compared to the usual kinetic Monte Carlo model. Further improvement results from including rigidity of single-stranded stacking.
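
    The Arrhenius-rate idea above — barriers entering the rates between coarse-grained states without enlarging the state space — can be sketched for a two-state hairpin (the energies, barrier height and attempt frequency below are illustrative, not the paper's parameters).

    ```python
    import math, random

    random.seed(3)
    kT = 0.6                                   # thermal energy (arbitrary units)

    # Coarse-grained two-state hairpin: open <-> folded, with an intermediate
    # barrier entering only through the Arrhenius rates.
    states = {"open": 0.0, "folded": -2.0}     # free energies
    E_barrier = 1.0                            # barrier height above the higher state

    def rate(s_from, s_to):
        # Arrhenius: attempt frequency * exp(-barrier/kT), barrier seen from s_from
        barrier = max(states[s_to], states[s_from]) + E_barrier - states[s_from]
        return 1e3 * math.exp(-barrier / kT)

    state, t, folded_time, t_end = "open", 0.0, 0.0, 200.0
    while t < t_end:
        other = "folded" if state == "open" else "open"
        k = rate(state, other)
        dt = random.expovariate(k)             # exponential waiting time (KMC step)
        if state == "folded":
            folded_time += min(dt, t_end - t)
        t += dt
        if t < t_end:
            state = other

    print(folded_time / t_end)                 # fraction of time folded
    ```

    Because the rates satisfy detailed balance, the long-time folded fraction approaches the Boltzmann value exp(2/kT)/(1 + exp(2/kT)) ≈ 0.966, so the barrier affects kinetics but not the equilibrium populations.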

  6. Green's function monte carlo and the many-fermion problem

    Science.gov (United States)

    Kalos, M. H.

    The application of Green's function Monte Carlo to many body problems is outlined. For boson problems, the method is well developed and practical. An "efficiency principle",importance sampling, can be used to reduce variance. Fermion problems are more difficult because spatially antisymmetric functions must be represented as a difference of two density functions. Naively treated, this leads to a rapid growth of Monte Carlo error. Methods for overcoming the difficulty are discussed. Satisfactory algorithms exist for few-body problems; for many-body problems more work is needed, but it is likely that adequate methods will soon be available.

  7. Monte Carlo simulation of electrons in dense gases

    Science.gov (United States)

    Tattersall, Wade; Boyle, Greg; Cocks, Daniel; Buckman, Stephen; White, Ron

    2014-10-01

    We implement a Monte-Carlo simulation modelling the transport of electrons and positrons in dense gases and liquids, by using a dynamic structure factor that allows us to construct structure-modified effective cross sections. These account for the coherent effects caused by interactions with the relatively dense medium. The dynamic structure factor also allows us to model thermal gases in the same manner, without needing to directly sample the velocities of the neutral particles. We present the results of a series of Monte Carlo simulations that verify and apply this new technique, and make comparisons with macroscopic predictions and Boltzmann equation solutions. Financial support of the Australian Research Council.

  8. A new full waveform analysis approach using simulated tempering Markov chain Monte Carlo method

    Institute of Scientific and Technical Information of China (English)

    尹文也; 何伟基; 顾国华; 陈钱

    2014-01-01

    To reconstruct the shape distribution of distant targets, full waveform analysis is used to extract and analyze parameters such as the number of peaks and the positions of the peak maxima. A novel fast full waveform analysis algorithm, the simulated tempering Markov chain Monte Carlo (STMCMC) algorithm, is proposed, which is able to process waveform data automatically. Different strategies are applied to the different types of parameters: the peak positions and amplitudes are solved with a simulated tempering strategy, in which an active-intervention tempering step probes the parameter update process so as to meet the demands of speed or convergence, while the non-vector parameters, i.e. the number of peaks and the noise, are updated with the Metropolis strategy to reduce the amount of computation. Both strategies are based on the Markov chain algorithm and preserve the convergence of the Markov chain, which makes the STMCMC algorithm robust.
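As an illustration of the tempering idea this record relies on, the following minimal Python sketch runs a Metropolis chain on a toy bimodal density together with Metropolis moves along a temperature ladder. It is a generic simulated-tempering demonstration, not the record's waveform model; the per-temperature pseudo-priors that production implementations use to balance occupancy are omitted for brevity.

```python
import math
import random

def log_target(x):
    # toy bimodal density: equal mixture of unit-variance Gaussians at -3 and +3
    return math.log(math.exp(-0.5 * (x - 3.0) ** 2) + math.exp(-0.5 * (x + 3.0) ** 2))

def simulated_tempering(n_steps, betas, rng):
    """Metropolis in x at the current inverse temperature, plus Metropolis moves on
    the temperature ladder; samples are kept only at the target temperature k = 0."""
    x, k = 0.0, 0
    samples = []
    for _ in range(n_steps):
        # Metropolis move in x at inverse temperature betas[k]
        prop = x + rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < betas[k] * (log_target(prop) - log_target(x)):
            x = prop
        # propose a neighbouring temperature and accept with the tempered ratio
        j = min(max(k + rng.choice((-1, 1)), 0), len(betas) - 1)
        if math.log(rng.random()) < (betas[j] - betas[k]) * log_target(x):
            k = j
        if k == 0:
            samples.append(x)
    return samples

rng = random.Random(1)
samples = simulated_tempering(20000, betas=[1.0, 0.5, 0.25], rng=rng)
```

The hot levels flatten the density so the chain can cross between modes that a plain Metropolis sampler at the target temperature would rarely leave.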

  9. Probabilistic back analysis of unsaturated soil seepage parameters based on the Markov chain Monte Carlo method

    Institute of Scientific and Technical Information of China (English)

    左自波; 张璐璐; 程演; 王建华; 何晔

    2013-01-01

    Based on Bayesian theory, a probabilistic back analysis method using time-varying measurement data is established. The posterior distributions are sampled using the Markov chain Monte Carlo method (MCMC) with the differential evolution adaptive Metropolis (DREAM) algorithm. A case study of a well-instrumented natural terrain in Tung Chung, Hong Kong, is presented: field measurements of pore-water pressure under time-varying rainfall are used to calibrate the unsaturated parameters of a one-dimensional analytical seepage model. Statistical properties of the posterior distributions are presented and discussed; the posterior standard deviations of the six parameters are all greatly reduced relative to the priors. The predicted and measured pore-water pressures during the calibration and validation periods are compared. The coverage of the 95% total uncertainty bounds is estimated to be 0.964 for the calibration period, and 0.52, 0.79 and 0.79 for the second to fourth stages of the validation period, indicating that the model predictions agree well with the measurements.

  10. Hamiltonian Markov Chain Monte Carlo Method for Abrupt Motion Tracking

    Institute of Scientific and Technical Information of China (English)

    王法胜; 李绪成; 肖智博; 鲁明羽

    2014-01-01

    Tracking of abrupt motion is a challenging task in computer vision due to the large motion uncertainty induced by camera switching, sudden dynamic changes, and rapid motion. This paper proposes an ordered over-relaxation Hamiltonian Markov chain Monte Carlo (MCMC) based tracking scheme for abrupt motion tracking within the Bayesian filtering framework. In this scheme, the object state is augmented by introducing a momentum item, and Hamiltonian dynamics (HD) is integrated into the traditional MCMC based tracking method. At the proposal step, the ordered over-relaxation method is adopted to draw the momentum item in order to suppress the random walk behavior induced by Gibbs sampling. In addition, the paper provides an adaptive step-size scheme for simulating the Hamiltonian dynamics in order to reduce the simulation error. The proposed tracking algorithm can avoid being trapped in local maxima with no additional computational burden, a problem suffered by conventional MCMC based tracking algorithms. Experimental results show that the presented approach is efficient and effective in dealing with various types of abrupt motion compared with several alternatives.

  11. A Monte Carlo investigation of the statistical properties of long polymethylene chains

    Institute of Scientific and Technical Information of China (English)

    华柳青; 马定洋; 王禹

    2007-01-01

    The Monte Carlo method is used to study the statistical properties of long polymethylene chains, such as the average distribution probability P(r), the average energy per bond, and the mean-square end-to-end distance ⟨R²⟩. These results are then compared with those obtained by the complete statistical method; the two are similar, while the Monte Carlo method has the advantage of a smaller computational cost.

  12. Structural Reliability Sensitivity Analysis Method Based on Markov Chain Monte Carlo Subset Simulation

    Institute of Scientific and Technical Information of China (English)

    宋述芳; 吕震宙

    2009-01-01

    Building on the subset simulation method for reliability analysis with small failure probabilities, a reliability sensitivity analysis method based on Markov chain Monte Carlo (MCMC) subset simulation is proposed. In subset simulation, suitable intermediate failure events are introduced to divide the probability space into a sequence of subsets, so that a small failure probability is expressed as a product of larger conditional failure probabilities that are easy to estimate by simulation; the conditional failure probabilities are then estimated from conditional samples drawn by MCMC. For the sensitivity analysis, the partial derivatives of the failure probability with respect to the distribution parameters of the basic random variables are transformed into partial derivatives of the conditional failure probabilities with respect to those parameters. The principle and steps for estimating these derivatives from the MCMC conditional samples are given, and the formulas for the reliability sensitivity analysis are derived. Simple numerical and engineering examples verify the proposed method; the results show that it achieves high efficiency and accuracy and adapts well to problems with highly nonlinear limit state functions.
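The subset simulation idea used in this record — writing a small failure probability as a product of larger conditional probabilities estimated from MCMC samples — can be sketched for the toy limit state g(x) = x with a standard normal variable. All tuning values (sample size, conditional level p0, proposal width) below are illustrative choices, not the record's settings.

```python
import math
import random

def subset_simulation(threshold, n=1000, p0=0.1, rng=None):
    """Estimate P(X > threshold) for X ~ N(0, 1) by subset simulation: raise
    intermediate levels that each keep the top p0 fraction of samples, and
    regenerate conditional samples with Metropolis chains started at the seeds."""
    rng = rng or random.Random(0)
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    nc = int(p0 * n)
    while True:
        samples.sort(reverse=True)
        level = samples[nc - 1]  # intermediate threshold keeping the top p0 fraction
        if level >= threshold:
            # final level reached: count samples beyond the true threshold
            return prob * sum(1 for x in samples if x > threshold) / n
        prob *= p0
        seeds = samples[:nc]
        new = []
        for s in seeds:
            x = s
            for _ in range(n // nc):
                prop = x + rng.gauss(0.0, 1.0)
                # Metropolis ratio of the standard normal density, rejecting
                # proposals that leave the conditional region {x > level}
                if prop > level and math.log(rng.random()) < 0.5 * (x * x - prop * prop):
                    x = prop
                new.append(x)
        samples = new

p_est = subset_simulation(3.0)  # exact value is about 1.35e-3
```

The same loop structure carries over to structural problems by replacing the normal density and g(x) = x with the model's input distribution and limit state function.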

  13. Markov chain Monte Carlo (MCMC) based image reconstruction algorithm for electrical capacitance tomography

    Institute of Scientific and Technical Information of China (English)

    叶佳敏; 彭黎辉

    2012-01-01

    An image reconstruction algorithm based on a statistical model is proposed for electrical capacitance tomography (ECT). The prior probability of the permittivity distribution is given by a Markov random field, and the likelihood function is obtained from the linear ECT model. The posterior distribution of the permittivity is sampled by the Markov chain Monte Carlo (MCMC) method, with the transition kernel of the Markov chain constructed by the Metropolis-Hastings method, and nested iteration is introduced to improve computational efficiency. Simulation results show that nested-iteration MCMC can significantly speed up the computation and provide reconstructed images of higher quality when a proper regularization parameter is used. The MCMC based algorithm provides a new approach to the ECT image reconstruction problem.

  14. Bayesian Inference on Parameters for Mixtures of α-Stable Distributions Based on Markov Chain Monte Carlo

    Institute of Scientific and Technical Information of China (English)

    陈亚军; 刘丁; 梁军利

    2012-01-01

    To address the difficulty of describing non-Gaussian signals, a method of Bayesian inference on the parameters of mixtures of α-stable distributions based on Markov chain Monte Carlo is proposed. A hierarchical Bayesian graphical model is constructed. Gibbs sampling is used to estimate the mixing weights and the allocation parameter z, and the four parameters of each distribution component are estimated with the Metropolis algorithm. Simulation results show that the method accurately estimates the parameters of a mixture of α-stable distributions, with good robustness and flexibility, so it can be used to model non-Gaussian signals or data.
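The Gibbs step for the mixing weight and allocations z can be sketched on a simplified model. The toy below replaces the α-stable components with fixed unit-variance Gaussians at -2 and +2 (so the Metropolis step for the component parameters is not needed) and samples only the weight and allocations; all names and settings are illustrative, not the paper's.

```python
import math
import random

def gibbs_mixture(data, n_iter=200, rng=None):
    """Gibbs sampler for the weight and allocations of a two-component mixture.
    Components are fixed unit-variance Gaussians at -2 and +2, a toy stand-in for
    alpha-stable components (whose parameters would need a Metropolis step)."""
    rng = rng or random.Random(0)
    w = 0.5  # weight of the component centred at -2
    for _ in range(n_iter):
        # sample the allocations z_i given the current weight
        n1 = 0
        for x in data:
            p1 = w * math.exp(-0.5 * (x + 2.0) ** 2)
            p2 = (1.0 - w) * math.exp(-0.5 * (x - 2.0) ** 2)
            if rng.random() < p1 / (p1 + p2):
                n1 += 1
        # sample the weight given the allocations: Beta(n1+1, n2+1) via two Gammas
        a = rng.gammavariate(n1 + 1, 1.0)
        b = rng.gammavariate(len(data) - n1 + 1, 1.0)
        w = a / (a + b)
    return w

data_rng = random.Random(2)
# synthetic data: 70% from the component at -2, 30% from the one at +2
data = [data_rng.gauss(-2.0, 1.0) if data_rng.random() < 0.7 else data_rng.gauss(2.0, 1.0)
        for _ in range(300)]
w_est = gibbs_mixture(data, rng=random.Random(3))
```

With the components well separated, the sampled weight concentrates near the true mixing proportion of 0.7.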

  15. Stochastic simulation and Monte-Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)

    2011-07-01

    This book presents numerical probabilistic simulation methods together with their convergence rates. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After some recalls on the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. They then develop a chapter on non-asymptotic estimates of Monte-Carlo method errors. This chapter recalls the central limit theorem and makes its convergence rate precise; it introduces the Log-Sobolev and concentration inequalities, whose study has greatly developed during the last years, and ends with some variance reduction techniques. In order to demonstrate rigorously the simulation results for stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essentials of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence rates of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)

  16. Pseudopotentials for quantum Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Burkatzki, Mark Thomas

    2008-07-01

    The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give superior accuracy compared with other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)

  17. Time management for Monte-Carlo tree search in Go

    NARCIS (Netherlands)

    Baier, Hendrik; Winands, Mark H M

    2012-01-01

    The dominant approach for programs playing the game of Go is nowadays Monte-Carlo Tree Search (MCTS). While MCTS allows for fine-grained time control, little has been published on time management for MCTS programs under tournament conditions. This paper investigates the effects that various time-man

  18. Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation

    Science.gov (United States)

    Silver, N. Clayton; Hittner, James B.; May, Kim

    2004-01-01

    The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…

  19. A Variational Monte Carlo Approach to Atomic Structure

    Science.gov (United States)

    Davis, Stephen L.

    2007-01-01

    The practicality and usefulness of variational Monte Carlo calculations of atomic structure are demonstrated. The method is found to succeed in quantitatively illustrating electron shielding, effective nuclear charge, the l-dependence of the orbital energies, and singlet-triplet energy splitting and ionization energy trends in atomic structure theory.
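A minimal variational Monte Carlo calculation of the kind this article advocates can be written in a few lines for the hydrogen atom, where the trial wavefunction exp(-αr) gives a local energy with a known exact optimum at α = 1 (energy -1/2 hartree). The sketch below is illustrative, not the article's code; step size and sample counts are arbitrary choices.

```python
import math
import random

def local_energy(r, alpha):
    # local energy of hydrogen for the trial wavefunction exp(-alpha*r), in hartree
    return -0.5 * alpha * alpha + (alpha - 1.0) / r

def vmc_energy(alpha, n_steps=20000, rng=None):
    """Metropolis sampling of |psi|^2 = exp(-2*alpha*r); returns the mean local energy."""
    rng = rng or random.Random(0)
    pos = [0.5, 0.5, 0.5]
    r = math.sqrt(sum(c * c for c in pos))
    esum = 0.0
    for _ in range(n_steps):
        prop = [c + rng.uniform(-0.5, 0.5) for c in pos]
        r_new = math.sqrt(sum(c * c for c in prop))
        # acceptance ratio |psi(new)|^2 / |psi(old)|^2
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = prop, r_new
        esum += local_energy(r, alpha)
    return esum / n_steps

e = vmc_energy(alpha=1.0)  # at the exact optimum the local energy is -0.5 everywhere
```

At α = 1 the local energy is constant, so the variance vanishes — the zero-variance property that makes variational Monte Carlo instructive in a classroom setting; at other α the estimate lies above -1/2, illustrating the variational bound.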

  20. Nanoporous gold formation by dealloying : A Metropolis Monte Carlo study

    NARCIS (Netherlands)

    Zinchenko, O.; De Raedt, H. A.; Detsi, E.; Onck, P. R.; De Hosson, J. T. M.

    2013-01-01

    A Metropolis Monte Carlo study of the dealloying mechanism leading to the formation of nanoporous gold is presented. A simple lattice-gas model for gold, silver and acid particles, vacancies and products of chemical reactions is adopted. The influence of temperature, concentration and lattice defect

  1. Auxiliary-field quantum Monte Carlo methods in nuclei

    CERN Document Server

    Alhassid, Y

    2016-01-01

    Auxiliary-field quantum Monte Carlo methods enable the calculation of thermal and ground state properties of correlated quantum many-body systems in model spaces that are many orders of magnitude larger than those that can be treated by conventional diagonalization methods. We review recent developments and applications of these methods in nuclei using the framework of the configuration-interaction shell model.

  2. Bayesian Monte Carlo Method for Nuclear Data Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Koning, A.J., E-mail: koning@nrg.eu

    2015-01-15

    A Bayesian Monte Carlo method is outlined which allows a systematic evaluation of nuclear reactions using TALYS. The result will be either an EXFOR-weighted covariance matrix or a collection of random files, each accompanied by an experiment based weight.

  3. Data libraries as a collaborative tool across Monte Carlo codes

    CERN Document Server

    Augelli, Mauro; Han, Mincheol; Hauf, Steffen; Kim, Chan-Hyeung; Kuster, Markus; Pia, Maria Grazia; Quintieri, Lina; Saracco, Paolo; Seo, Hee; Sudhakar, Manju; Eidenspointner, Georg; Zoglauer, Andreas

    2010-01-01

    The role of data libraries in Monte Carlo simulation is discussed. A number of data libraries currently in preparation are reviewed; their data are critically examined with respect to the state-of-the-art in the respective fields. Extensive tests with respect to experimental data have been performed for the validation of their content.

  4. Effective quantum Monte Carlo algorithm for modeling strongly correlated systems

    NARCIS (Netherlands)

    Kashurnikov, V. A.; Krasavin, A. V.

    2007-01-01

    A new effective Monte Carlo algorithm based on principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems which admit no analytic description

  5. A Monte Carlo Evaluation of Maximum Likelihood Multidimensional Scaling Methods

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    1996-01-01

    We compare three alternative Maximum Likelihood Multidimensional Scaling methods for pairwise dissimilarity ratings, namely MULTISCALE, MAXSCAL, and PROSCAL, in a Monte Carlo study. The three MLMDS methods recover the true configurations very well. The recovery of the true dimensionality depends on the

  6. Monte Carlo Simulation on Glueball Search at BESⅢ

    Institute of Scientific and Technical Information of China (English)

    QIN Hu; SHEN Xiao-Yan

    2007-01-01

    The J/ψ radiative decays are suggested as promising modes for glueball search. A full Monte Carlo simulation of J/ψ→γηη and γηη', based on the design of BESⅢ detector, is performed to study the sensitivity of searching for a possible tensor glueball at BESⅢ.

  7. Quantum Monte Carlo simulation of topological phase transitions

    Science.gov (United States)

    Yamamoto, Arata; Kimura, Taro

    2016-12-01

    We study the electron-electron interaction effects on topological phase transitions by the ab initio quantum Monte Carlo simulation. We analyze two-dimensional class A topological insulators and three-dimensional Weyl semimetals with the long-range Coulomb interaction. The direct computation of the Chern number shows the electron-electron interaction modifies or extinguishes topological phase transitions.

  8. Monte Carlo Simulation Optimizing Design of Grid Ionization Chamber

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Yu-lai; WANG; Qiang; YANG; Lu

    2013-01-01

    The grid ionization chamber detector is often used for measuring charged particles. Based on the Monte Carlo simulation method, the energy loss distributions and electron-ion pairs of alpha particles with different energies have been calculated to determine a suitable filling gas in the ionization chamber filled with

  9. Monte Carlo method for magnetic impurities in metals

    Science.gov (United States)

    Hirsch, J. E.; Fye, R. M.

    1986-01-01

    The paper discusses a Monte Carlo algorithm to study properties of dilute magnetic alloys; the method can treat a small number of magnetic impurities interacting with the conduction electrons in a metal. Results for the susceptibility of a single Anderson impurity in the symmetric case show the expected universal behavior at low temperatures. Some results for two Anderson impurities are also discussed.

  10. Improved Monte Carlo model for multiple scattering calculations

    Institute of Scientific and Technical Information of China (English)

    Weiwei Cai; Lin Ma

    2012-01-01

    The coupling between the Monte Carlo (MC) method and geometrical optics to improve accuracy is investigated. The results obtained show improved agreement with previous experimental data, demonstrating that the MC method, when coupled with simple geometrical optics, can simulate multiple scattering with enhanced fidelity.

  11. Simulating Strongly Correlated Electron Systems with Hybrid Monte Carlo

    Institute of Scientific and Technical Information of China (English)

    LIU Chuan

    2000-01-01

    Using the path integral representation, the Hubbard and the periodic Anderson model on D-dimensional cubic lattice are transformed into field theories of fermions in D + 1 dimensions. These theories at half-filling possess a positive definite real symmetry fermion matrix and can be simulated using the hybrid Monte Carlo method.

  12. Research of Monte Carlo Simulation in Commercial Bank Risk Management

    Institute of Scientific and Technical Information of China (English)

    Beiming Xiao

    2004-01-01

    Simulation is an important tool in financial risk management. It can simulate financial or economic variables and deal with non-linear or non-normal issues. This paper analyzes the use of the Monte Carlo approach in commercial bank risk management.

  13. Observations on variational and projector Monte Carlo methods.

    Science.gov (United States)

    Umrigar, C J

    2015-10-28

    Variational Monte Carlo and various projector Monte Carlo (PMC) methods are presented in a unified manner. Similarities and differences between the methods and choices made in designing the methods are discussed. Both methods where the Monte Carlo walk is performed in a discrete space and methods where it is performed in a continuous space are considered. It is pointed out that the usual prescription for importance sampling may not be advantageous depending on the particular quantum Monte Carlo method used and the observables of interest, so alternate prescriptions are presented. The nature of the sign problem is discussed for various versions of PMC methods. A prescription for an exact PMC method in real space, i.e., a method that does not make a fixed-node or similar approximation and does not have a finite basis error, is presented. This method is likely to be practical for systems with a small number of electrons. Approximate PMC methods that are applicable to larger systems and go beyond the fixed-node approximation are also discussed.

  14. Monte-carlo calculations for some problems of quantum mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Novoselov, A. A., E-mail: novoselov@goa.bog.msu.ru; Pavlovsky, O. V.; Ulybyshev, M. V. [Moscow State University (Russian Federation)

    2012-09-15

    The Monte-Carlo technique for the calculation of functional integrals was applied to two one-dimensional quantum-mechanical problems. The energies of the bound states in some potential wells were obtained using this method. Some peculiarities in the calculation of the kinetic energy in the ground state were also studied.

  15. Play It Again: Teaching Statistics with Monte Carlo Simulation

    Science.gov (United States)

    Sigal, Matthew J.; Chalmers, R. Philip

    2016-01-01

    Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…

  16. An Overview of the Monte Carlo Methods, Codes, & Applications Group

    Energy Technology Data Exchange (ETDEWEB)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-30

    This report sketches the work of the Group to deliver first-principle Monte Carlo methods, production quality codes, and radiation transport-based computational and experimental assessments using the codes MCNP and MCATK for such applications as criticality safety, non-proliferation, nuclear energy, nuclear threat reduction and response, radiation detection and measurement, radiation health protection, and stockpile stewardship.

  17. Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm

    Science.gov (United States)

    Gubernatis, James

    2014-03-01

    A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little known theorem says that the solution of any set of o.d.e.'s is exactly solved by the expectation value over a set of arbitrary Poisson processes of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that as the theorem does not depend on the source of the o.d.e.'s coming from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.

  18. Quantum Monte Carlo simulation of topological phase transitions

    CERN Document Server

    Yamamoto, Arata

    2016-01-01

    We study the electron-electron interaction effects on topological phase transitions by the ab-initio quantum Monte Carlo simulation. We analyze two-dimensional class A topological insulators and three-dimensional Weyl semimetals with the long-range Coulomb interaction. The direct computation of the Chern number shows the electron-electron interaction modifies or extinguishes topological phase transitions.

  19. The Metropolis Monte Carlo Method in Statistical Physics

    Science.gov (United States)

    Landau, David P.

    2003-11-01

    A brief overview is given of some of the advances in statistical physics that have been made using the Metropolis Monte Carlo method. By complementing theory and experiment, these have increased our understanding of phase transitions and other phenomena in condensed matter systems. A brief description of a new method, commonly known as "Wang-Landau sampling," will also be presented.
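The Metropolis method this overview surveys is easy to state concretely for its classic application, the two-dimensional Ising model. The following illustrative Python implementation flips single spins with the standard acceptance rule exp(-ΔE/T); lattice size, sweep count, and temperatures are arbitrary choices, and Wang-Landau sampling is a different (density-of-states) scheme not shown here.

```python
import math
import random

def metropolis_ising(L=16, T=2.0, sweeps=200, rng=None):
    """Metropolis sampling of the 2D Ising model (J = 1, periodic boundaries);
    returns the magnetization per spin of the final configuration."""
    rng = rng or random.Random(0)
    s = [[1] * L for _ in range(L)]  # start from the all-up configuration
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = s[(i + 1) % L][j] + s[(i - 1) % L][j] \
               + s[i][(j + 1) % L] + s[i][(j - 1) % L]
            dE = 2.0 * s[i][j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]
    return sum(sum(row) for row in s) / (L * L)

m_cold = metropolis_ising(T=1.0)                       # ordered phase: |m| near 1
m_hot = metropolis_ising(T=5.0, rng=random.Random(1))  # disordered phase: m near 0
```

Running the chain below and above the critical temperature (about 2.27 in these units) already displays the phase transition that made the method a workhorse of statistical physics.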

  20. SPANDY: a Monte Carlo program for gas target scattering geometry

    Energy Technology Data Exchange (ETDEWEB)

    Jarmie, N.; Jett, J.H.; Niethammer, A.C.

    1977-02-01

    A Monte Carlo computer program is presented that simulates a two-slit gas target scattering geometry. The program is useful in estimating effects due to finite geometry and multiple scattering in the target foil. Details of the program are presented and experience with a specific example is discussed.

  1. Quantum Monte Carlo diagonalization method as a variational calculation

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1997-05-01

    A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and the diagonalization method. This method overcomes the limitations of conventional shell model diagonalization and can greatly widen the feasibility of shell model calculations with realistic interactions for spectroscopic studies of nuclear structure. (author)

  2. Criticality benchmarks validation of the Monte Carlo code TRIPOLI-2

    Energy Technology Data Exchange (ETDEWEB)

    Maubert, L. (Commissariat a l' Energie Atomique, Inst. de Protection et de Surete Nucleaire, Service d' Etudes de Criticite, 92 - Fontenay-aux-Roses (France)); Nouri, A. (Commissariat a l' Energie Atomique, Inst. de Protection et de Surete Nucleaire, Service d' Etudes de Criticite, 92 - Fontenay-aux-Roses (France)); Vergnaud, T. (Commissariat a l' Energie Atomique, Direction des Reacteurs Nucleaires, Service d' Etudes des Reacteurs et de Mathematique Appliquees, 91 - Gif-sur-Yvette (France))

    1993-04-01

    The validation of the three-dimensional energy-pointwise Monte-Carlo code TRIPOLI-2 includes criticality benchmarks on metallic spheres of uranium and plutonium, plutonium nitrate solutions, and square- and triangular-pitch assemblies of uranium oxide. Results show good agreement between experiments and calculations, and provide a part of the validation of the code and its ENDF-B4 library. (orig./DG)

  3. Determining MTF of digital detector system with Monte Carlo simulation

    Science.gov (United States)

    Jeong, Eun Seon; Lee, Hyung Won; Nam, Sang Hee

    2005-04-01

    We have designed a detector based on a-Se (amorphous selenium) and simulated it with the Monte Carlo method. We apply the cascaded linear system theory to determine the MTF of the whole detector system. For direct comparison with experiment, we have simulated a 139 um pixel pitch and used a simulated X-ray tube spectrum.

  4. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which...

  5. Direct Monte Carlo simulation of nanoscale mixed gas bearings

    Directory of Open Access Journals (Sweden)

    Kyaw Sett Myo

    2015-06-01

    The concept of sealed hard drives filled with a helium gas mixture has recently been suggested as an alternative to current hard drives for achieving higher reliability and smaller position error. It is therefore important to understand the effects of different helium gas mixtures on the slider bearing characteristics in the head–disk interface. In this article, helium/air and helium/argon gas mixtures are applied as the working fluids and their effects on the bearing characteristics are studied using the direct simulation Monte Carlo (DSMC) method. From the DSMC simulations, physical properties of these gas mixtures such as the mean free path and dynamic viscosity are obtained and compared with those from theoretical models; both results are comparable. Using these gas mixture properties, the bearing pressure distributions are calculated under different fractions of helium with conventional molecular gas lubrication models. The outcomes reveal that the molecular gas lubrication results agree relatively well with the DSMC simulations, especially for pure air, helium, or argon; for gas mixtures, the bearing pressures predicted by the molecular gas lubrication model are slightly larger than those from the DSMC simulation.

  6. CMS Monte Carlo production operations in a distributed computing environment

    CERN Document Server

    Mohapatra, A; Khomich, A; Lazaridis, C; Hernández, J M; Caballero, J; Hof, C; Kalinin, S; Flossdorf, A; Abbrescia, M; De Filippis, N; Donvito, G; Maggi, G; My, S; Pompili, A; Sarkar, S; Maes, J; Van Mulders, P; Villella, I; De Weirdt, S; Hammad, G; Wakefield, S; Guan, W; Lajas, J A S; Elmer, P; Evans, D; Fanfani, A; Bacchi, W; Codispoti, G; Van Lingen, F; Kavka, C; Eulisse, G

    2008-01-01

    Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.

  7. Multi-microcomputer system for Monte-Carlo calculations

    CERN Document Server

    Berg, B; Krasemann, H

    1981-01-01

    The authors propose a microcomputer system that allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high energy physics experiments and many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 6800 microprocessor. One attraction of this processor is that it allows up to 16 M Byte random access memory.

  8. Monte Carlo methods for multidimensional integration for European option pricing

    Science.gov (United States)

    Todorov, V.; Dimov, I. T.

    2016-10-01

In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated as the expectation of some random variable; the average of independent samples of this random variable is then used to estimate the value of the option. First we obtain an integral representation of the option value using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule against one of the best low-discrepancy sequences, the Sobol sequence, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
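As a minimal illustration of the crude Monte Carlo step described above (not the authors' lattice-rule or Sobol implementations), the sketch below prices a European call by averaging discounted payoffs under the risk-neutral measure and checks the estimate against the Black-Scholes closed form; all parameter values are hypothetical:

```python
import math
import random

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes closed form for a European call (reference value)."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

def mc_call(S0, K, r, sigma, T, n=200_000, seed=1):
    """Crude Monte Carlo: average the discounted payoff over n GBM samples."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)                       # risk-neutral terminal price
        total += max(S0 * math.exp(drift + vol * z) - K, 0.0)
    return math.exp(-r * T) * total / n

est = mc_call(100, 100, 0.05, 0.2, 1.0)
ref = bs_call(100, 100, 0.05, 0.2, 1.0)
```

The Monte Carlo error shrinks as O(n^{-1/2}); the quasi-Monte Carlo and lattice rules compared in the paper aim to beat that rate on the equivalent unit-hypercube integral.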

  9. Monte Carlo simulation of magnetic nanostructured thin films

    Institute of Scientific and Technical Information of China (English)

    Guan Zhi-Qiang; Yutaka Abe; Jiang Dong-Hua; Lin Hai; Yoshitake Yamazakia; Wu Chen-Xu

    2004-01-01

Using Monte Carlo simulation, we have compared the magnetic properties of nanostructured thin films with those of two-dimensional crystalline solids. The dependence of the nanostructured properties on the interaction between the particles that constitute the thin films is also studied. The results show that the parameters of the interaction potential have an important effect on the properties of nanostructured thin films at the transition temperatures.
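Spin-lattice Monte Carlo of the kind referred to above can be illustrated with a standard Metropolis sweep of a 2-D Ising model; this toy is a generic stand-in, not the authors' thin-film interaction potential:

```python
import math
import random

def ising_metropolis(L=16, T=2.0, sweeps=200, seed=11):
    """Metropolis Monte Carlo for a 2-D Ising model (J = 1) on an L x L
    lattice with periodic boundaries; returns the final |magnetization|
    per site."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]               # ordered start
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2 * spin[i][j] * nn                 # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
    return abs(sum(sum(row) for row in spin)) / (L * L)

m_low = ising_metropolis(T=1.5)    # below Tc ~ 2.27: ordered phase
m_high = ising_metropolis(T=4.0)   # above Tc: disordered phase
```

The qualitative point matches the abstract: the behaviour near the transition temperature is controlled by the interaction parameters (here the single coupling J, fixed to 1).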

  10. A separable shadow Hamiltonian hybrid Monte Carlo method.

    Science.gov (United States)

    Sweet, Christopher R; Hampton, Scott S; Skeel, Robert D; Izaguirre, Jesús A

    2009-11-07

Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth-order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
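The HMC baseline that S2HMC improves on can be sketched in a few lines: a leapfrog trajectory proposes a global move, which a Metropolis test accepts or rejects on the total energy H = U + p²/2. This is plain HMC on a 1-D toy target, not the shadow-Hamiltonian variant:

```python
import math
import random

def hmc(grad_U, U, q0=0.0, n_samples=2000, eps=0.2, n_leap=10, seed=0):
    """Minimal 1-D hybrid Monte Carlo with a leapfrog integrator."""
    rng = random.Random(seed)
    q, samples = q0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                 # resample momentum
        qn, pn = q, p
        pn -= 0.5 * eps * grad_U(qn)            # leapfrog: initial half step
        for i in range(n_leap):
            qn += eps * pn                      # full position step
            if i < n_leap - 1:
                pn -= eps * grad_U(qn)          # full momentum step
        pn -= 0.5 * eps * grad_U(qn)            # final half step
        # Metropolis test on the total-energy change
        dH = (U(qn) + 0.5 * pn * pn) - (U(q) + 0.5 * p * p)
        if math.log(rng.random() + 1e-300) < -dH:
            q = qn
        samples.append(q)
    return samples

# Target: standard normal, U(q) = q^2 / 2, grad U(q) = q
s = hmc(lambda q: q, lambda q: 0.5 * q * q)
mean = sum(s) / len(s)
var = sum((x - mean) ** 2 for x in s) / len(s)
```

Because leapfrog conserves a *shadow* Hamiltonian more accurately than the true one, sampling from that shadow (as SHMC/S2HMC do) raises the acceptance rate; the separable formulation removes the awkward momentum-generation step.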

  11. Variational Monte Carlo calculations of few-body nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Wiringa, R.B.

    1986-01-01

The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the ³H, ³He, and ⁴He ground states, and for the energies of the low-lying scattering states in ⁴He, are presented. 25 refs., 3 figs.

  12. Monte Carlo studies of nuclei and quantum liquid drops

    Energy Technology Data Exchange (ETDEWEB)

    Pandharipande, V.R.; Pieper, S.C.

    1989-01-01

    The progress in application of variational and Green's function Monte Carlo methods to nuclei is reviewed. The nature of single-particle orbitals in correlated quantum liquid drops is discussed, and it is suggested that the difference between quasi-particle and mean-field orbitals may be of importance in nuclear structure physics. 27 refs., 7 figs., 2 tabs.

  13. Monte Carlo: in the beginning and some great expectations

    Energy Technology Data Exchange (ETDEWEB)

    Metropolis, N.

    1985-01-01

The central theme will be the historical setting and origins of the Monte Carlo method. The scene was the post-war Los Alamos Scientific Laboratory. There was an inevitability about the Monte Carlo event: the ENIAC had recently enjoyed its meteoric rise (on a classified Los Alamos problem); Stan Ulam had returned to Los Alamos; John von Neumann was a frequent visitor. Techniques, algorithms, and applications developed rapidly at Los Alamos. Soon, the fascination of the method reached wider horizons. The first paper was submitted for publication in the spring of 1949. In the summer of 1949, the first open conference was held at the University of California at Los Angeles. Of some interest, perhaps, is an account of Fermi's earlier, independent application in neutron moderation studies while at the University of Rome. The quantum leap expected with the advent of massively parallel processors will provide stimuli for very ambitious applications of the Monte Carlo method in disciplines ranging from field theories to cosmology, including more realistic models in the neurosciences. A structure of multi-instruction sets for parallel processing is ideally suited to the Monte Carlo approach. One may even hope for a modest hardening of the soft sciences.

  14. Monte Carlo simulation of quantum statistical lattice models

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1985-01-01

In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used.

  15. On a full Monte Carlo approach to quantum mechanics

    Science.gov (United States)

    Sellier, J. M.; Dimov, I.

    2016-12-01

The Monte Carlo approach to numerical problems has proved remarkably efficient at very large computational tasks, since it is an embarrassingly parallel technique. Additionally, Monte Carlo methods are well known to retain their performance and accuracy as the dimensionality of a problem increases, a rather counterintuitive peculiarity not shared by any known deterministic method. Motivated by these desirable computational features, in this work we describe a full Monte Carlo approach to simulating single- and many-body quantum systems by means of signed particles. In particular, we introduce a stochastic technique, based on the strategy known as importance sampling, for computing the Wigner kernel, which has so far been the main bottleneck of this method (it is equivalent to calculating a multi-dimensional integral, a problem whose complexity is known to grow exponentially with the number of dimensions). The benefit of this stochastic treatment of the kernel is twofold: first, it reduces the complexity of a quantum many-body simulation from non-linear to linear; second, it makes this very demanding problem embarrassingly parallel. To conclude, we perform concise but indicative numerical experiments which clearly illustrate that a full Monte Carlo approach to many-body quantum systems is not only possible but also advantageous. This paves the way towards practical time-dependent, first-principles simulations of relatively large quantum systems with affordable computational resources.
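Importance sampling, the strategy the authors apply to the Wigner kernel, can be illustrated on a much simpler integral: estimating the Gaussian tail probability P(Z > 3) with a shifted-exponential proposal that concentrates samples where the integrand actually lives. This is a generic sketch, unrelated to the signed-particle formalism:

```python
import math
import random

def tail_prob_is(n=50_000, a=3.0, lam=3.0, seed=42):
    """Importance sampling for P(Z > a), Z ~ N(0, 1): draw X = a + Exp(lam)
    and reweight each sample by phi(x) / g(x)."""
    rng = random.Random(seed)
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    total = 0.0
    for _ in range(n):
        x = a + rng.expovariate(lam)
        g = lam * math.exp(-lam * (x - a))   # proposal density at x
        total += phi(x) / g                  # importance weight
    return total / n

est = tail_prob_is()
exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))   # true P(Z > 3), ~1.35e-3
```

Naive sampling would waste almost every draw (only ~0.13% of standard normals exceed 3); the reweighted proposal makes every sample count, which is exactly the variance-reduction effect the paper exploits for the high-dimensional kernel integral.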

  16. A Markov Chain Monte Carlo Method for Simulation of Wind and Solar Power Time Series

    Institute of Scientific and Technical Information of China (English)

    罗钢; 石东源; 陈金富; 吴小珊

    2014-01-01

Constructing an accurate and reasonable model of the generated power of intermittent power sources is of great significance for power system simulation and analysis. A single- and multi-variable Markov chain Monte Carlo (MCMC) method for simulating time series of wind and photovoltaic (PV) power is proposed. For various types of intermittent power source, such as wind farms and PV stations, the model constructs a Markov chain for the generated-power time series and uses Gibbs sampling to realize single- or multi-variable time series simulation. The features and influencing factors of the Markov processes for different types of intermittent power source are analyzed comprehensively, and the MCMC method accounts for the interrelations among multiple variables, so the model can handle the case where several intermittent power sources are mutually correlated. Wind farms and PV power plants located in the control areas of two German power companies are simulated, and a comparative analysis of statistical feature parameters verifies the validity of the proposed model.
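The core idea of simulating a power time series from a fitted Markov chain can be sketched as follows; the three-state "wind power" series and all numbers are invented for illustration, and a single discrete chain stands in for the paper's single-/multi-variable Gibbs sampler:

```python
import random

def fit_transition_matrix(series, n_states):
    """Estimate a first-order Markov transition matrix from a state series."""
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(series, series[1:]):
        counts[a][b] += 1
    return [[c / sum(row) if sum(row) else 1.0 / n_states for c in row]
            for row in counts]

def sample_state(row, r):
    """Inverse-CDF draw of the next state from one transition-matrix row."""
    acc = 0.0
    for j, pj in enumerate(row):
        acc += pj
        if r < acc:
            return j
    return len(row) - 1                      # guard against float round-off

def simulate(P, start, steps, seed=7):
    """Sample a synthetic state series from the fitted chain."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        out.append(sample_state(P[out[-1]], rng.random()))
    return out

# Toy observed series quantized to 3 power states (low/mid/high), illustrative only
observed = [0, 0, 1, 1, 2, 2, 1, 0, 0, 1, 2, 2, 2, 1, 1, 0, 0, 0, 1, 2] * 50
P = fit_transition_matrix(observed, 3)
synthetic = simulate(P, start=0, steps=5000)
freq = [synthetic.count(s) / len(synthetic) for s in range(3)]
```

A synthetic series generated this way reproduces the observed state frequencies and short-range correlation structure, which is the property the paper validates via statistical feature parameters.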

  17. Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL

    2015-01-01

This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly with the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.
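Of the diagnostics listed, the Geweke Z-score is easy to sketch: compare the mean of an early segment of the chain with that of a late segment, scaled by their standard errors. The version below uses naive iid variance estimates for brevity, whereas production implementations use spectral-density estimates to account for autocorrelation:

```python
import math
import random

def geweke_z(chain, first=0.1, last=0.5):
    """Geweke Z-score: (mean of first 10%) minus (mean of last 50%),
    divided by the combined standard error. |z| >> 2 flags non-convergence."""
    n = len(chain)
    a = chain[:int(first * n)]
    b = chain[int((1.0 - last) * n):]
    def mean_sevar(x):
        m = sum(x) / len(x)
        v = sum((xi - m) ** 2 for xi in x) / (len(x) - 1)
        return m, v / len(x)                 # mean and squared standard error
    ma, va = mean_sevar(a)
    mb, vb = mean_sevar(b)
    return (ma - mb) / math.sqrt(va + vb)

rng = random.Random(0)
flat = [rng.gauss(0.0, 1.0) for _ in range(5000)]             # stationary chain
drift = [i / 5000 + rng.gauss(0.0, 1.0) for i in range(5000)]  # drifting chain
z_flat, z_drift = geweke_z(flat), geweke_z(drift)
```

A stationary chain yields |z| near zero, while a drifting (unconverged) chain produces a large |z|, which is the behaviour the study exploits to flag undersampled tallies.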

  18. An efficient approach to ab initio Monte Carlo simulation.

    Science.gov (United States)

    Leiding, Jeff; Coe, Joshua D

    2014-01-21

    We present a Nested Markov chain Monte Carlo (NMC) scheme for building equilibrium averages based on accurate potentials such as density functional theory. Metropolis sampling of a reference system, defined by an inexpensive but approximate potential, was used to substantially decorrelate configurations at which the potential of interest was evaluated, thereby dramatically reducing the number needed to build ensemble averages at a given level of precision. The efficiency of this procedure was maximized on-the-fly through variation of the reference system thermodynamic state (characterized here by its inverse temperature β(0)), which was otherwise unconstrained. Local density approximation results are presented for shocked states of argon at pressures from 4 to 60 GPa, where-depending on the quality of the reference system potential-acceptance probabilities were enhanced by factors of 1.2-28 relative to unoptimized NMC. The optimization procedure compensated strongly for reference potential shortcomings, as evidenced by significantly higher speedups when using a reference potential of lower quality. The efficiency of optimized NMC is shown to be competitive with that of standard ab initio molecular dynamics in the canonical ensemble.
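The nested scheme can be sketched on a 1-D toy problem: an inner Metropolis sub-chain on a cheap surrogate potential proposes a decorrelated configuration, and the expensive potential is evaluated only once per outer step to correct the acceptance. The quadratic potentials below are invented stand-ins for the reference and density-functional potentials:

```python
import math
import random

def nested_mcmc(U_exact, U_cheap, x0=0.0, beta=1.0, n_outer=4000,
                n_inner=10, step=0.8, seed=3):
    """Nested Markov chain Monte Carlo sketch: because the inner chain is in
    detailed balance with exp(-beta*U_cheap), accepting its endpoint with
    probability min(1, exp(-beta*[dU_exact - dU_cheap])) leaves
    exp(-beta*U_exact) invariant while evaluating U_exact once per outer step."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_outer):
        y = x
        for _ in range(n_inner):                       # inner chain: surrogate only
            y_new = y + rng.uniform(-step, step)
            if math.log(rng.random() + 1e-300) < -beta * (U_cheap(y_new) - U_cheap(y)):
                y = y_new
        dA = (U_exact(y) - U_exact(x)) - (U_cheap(y) - U_cheap(x))
        if math.log(rng.random() + 1e-300) < -beta * dA:   # outer correction
            x = y
        samples.append(x)
    return samples

# "Exact" harmonic potential vs a deliberately mismatched surrogate
s = nested_mcmc(lambda x: 0.5 * x * x, lambda x: 0.6 * x * x)
mean = sum(s) / len(s)
var = sum((v - mean) ** 2 for v in s) / len(s)
```

The sampled variance matches the exact target (var = 1) rather than the surrogate's (var ≈ 0.83), illustrating why a cheap reference potential can drive the sampling without biasing the ensemble averages.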

  19. MCMCpack: Markov Chain Monte Carlo in R

    Directory of Open Access Journals (Sweden)

    Andrew D. Martin

    2011-08-01

We introduce MCMCpack, an R package that contains functions to perform Bayesian inference using posterior simulation for a number of statistical models. In addition to code that can be used to fit commonly used models, MCMCpack also contains some useful utility functions, including additional density functions and pseudo-random number generators for statistical distributions, a general-purpose Metropolis sampling algorithm, and tools for visualization.

  20. Development of ray tracing visualization program by Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Otani, Takayuki [Japan Atomic Energy Research Inst., Tokyo (Japan); Hasegawa, Yukihiro

    1997-09-01

The ray tracing algorithm is a powerful method for synthesizing three-dimensional computer graphics. In conventional ray tracing algorithms, a view point is the starting point of the trace: rays are tracked from it, through the center of each pixel on the view screen, back to the light sources to calculate the pixel intensities. This, however, makes it difficult to define the configuration of the light source and to strictly simulate the reflections of the rays. To resolve these problems, we have developed a new ray tracing program that traces rays from the light source rather than from the view point, using the Monte Carlo method widely applied in nuclear fields. Moreover, we apply variance reduction techniques and use the specialized machine (Monte-4) for particle transport Monte Carlo, successfully reducing the computational time. (author)
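Forward ray tracing from the light source reduces, in its simplest form, to sampling emission directions and counting screen hits. The toy below estimates the fraction of isotropically emitted rays that hit a disk and checks it against the exact solid-angle fraction; it is a geometric sketch, not the JAERI program:

```python
import math
import random

def light_source_hits(n=200_000, d=1.0, R=1.0, seed=5):
    """Forward Monte Carlo ray casting: emit rays isotropically from a point
    light at the origin and count those hitting a disk of radius R centred
    on the z-axis at distance d (a toy 'view screen')."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # uniform direction on the unit sphere (z = cos(theta) is uniform)
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        dx, dy, dz = s * math.cos(phi), s * math.sin(phi), z
        if dz <= 0.0:
            continue                     # ray travels away from the screen
        t = d / dz                       # parameter where the ray crosses z = d
        x, y = dx * t, dy * t
        if x * x + y * y <= R * R:
            hits += 1
    return hits / n

frac = light_source_hits()
exact = 0.5 * (1.0 - 1.0 / math.sqrt(2.0))   # solid-angle fraction for d = R = 1
```

Most emitted rays miss the screen, which is precisely why the abstract's variance reduction techniques matter in a practical light-source-based tracer.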