WorldWideScience

Sample records for models monte carlo

  1. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
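
    As a reminder of the linearization step described above, one common single-operator form of the Hubbard-Stratonovich identity (not necessarily the exact convention used in the report) reads, with Δβ the imaginary-time step, λ the coupling and Ô a one-body operator,

      e^{-\frac{\Delta\beta}{2}\lambda\hat{O}^{2}}
        = \sqrt{\frac{\Delta\beta\,|\lambda|}{2\pi}}
          \int_{-\infty}^{\infty} d\sigma\;
          e^{-\frac{\Delta\beta}{2}\,|\lambda|\,\sigma^{2}\;-\;\Delta\beta\,s\,\lambda\,\sigma\,\hat{O}},
        \qquad s = 1 \ (\lambda<0), \quad s = i \ (\lambda>0),

    so that the two-body term is traded for a one-body operator coupled to the auxiliary field σ, which is then integrated over by Monte Carlo sampling.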

  2. Monte Carlo exploration of warped Higgsless models

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, JoAnne L.; Lillie, Benjamin; Rizzo, Thomas Gerard [Stanford Linear Accelerator Center, 2575 Sand Hill Rd., Menlo Park, CA, 94025 (United States)]. E-mail: rizzo@slac.stanford.edu

    2004-10-01

    We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the SU(2)_L x SU(2)_R x U(1)_{B-L} gauge group in an AdS_5 bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, ≈ 10 TeV, in W_L^+ W_L^- elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned. (author)

  3. Monte Carlo Exploration of Warped Higgsless Models

    CERN Document Server

    Hewett, J L; Rizzo, T G

    2004-01-01

    We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the $SU(2)_L \times SU(2)_R \times U(1)_{B-L}$ gauge group in an AdS$_5$ bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, $\simeq 10$ TeV, in $W_L^+W_L^-$ elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned.

  4. Validation of Compton Scattering Monte Carlo Simulation Models

    CERN Document Server

    Weidenspointner, Georg; Hauf, Steffen; Hoff, Gabriela; Kuster, Markus; Pia, Maria Grazia; Saracco, Paolo

    2014-01-01

    Several models for the Monte Carlo simulation of Compton scattering on electrons are quantitatively evaluated with respect to a large collection of experimental data retrieved from the literature. Some of these models are currently implemented in general purpose Monte Carlo systems; some have been implemented and evaluated for possible use in Monte Carlo particle transport for the first time in this study. Here we present first and preliminary results concerning total and differential Compton scattering cross sections.
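
    As a point of reference for what such simulation models compute, the sketch below samples the Compton polar angle from the free-electron Klein-Nishina cross section by simple rejection. It is a minimal illustration only, ignoring the electron binding, Doppler broadening and polarization effects that distinguish the evaluated models; all names and parameters are our own assumptions.

      import numpy as np

      def klein_nishina_mu(e_mev, n, seed=0):
          """Sample n values of mu = cos(theta) for Compton scattering of a photon
          of energy e_mev (MeV) on a free electron at rest (Klein-Nishina only)."""
          rng = np.random.default_rng(seed)
          eps = e_mev / 0.511              # photon energy in electron rest-mass units
          out = np.empty(n)
          filled = 0
          while filled < n:
              mu = rng.uniform(-1.0, 1.0, size=2 * (n - filled))
              r = 1.0 / (1.0 + eps * (1.0 - mu))            # E'/E
              f = r * r * (1.0 / r + r - (1.0 - mu * mu))   # unnormalised dsigma/dmu
              keep = mu[rng.uniform(0.0, 2.0, size=mu.size) < f]  # bound: f <= 2
              take = min(keep.size, n - filled)
              out[filled:filled + take] = keep[:take]
              filled += take
          return out

      mu = klein_nishina_mu(0.5, 100_000)
      print("mean cos(theta) at 0.5 MeV:", round(mu.mean(), 3))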

  5. Monte Carlo modelling of TRIGA research reactor

    Science.gov (United States)

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.

  6. Reporting Monte Carlo Studies in Structural Equation Modeling

    NARCIS (Netherlands)

    Boomsma, Anne

    2013-01-01

    In structural equation modeling, Monte Carlo simulations have been used increasingly over the last two decades, as an inventory from the journal Structural Equation Modeling illustrates. Reaching out to a broad audience, this article provides guidelines for reporting Monte Carlo studies in that field.

  7. Monte Carlo Simulation of River Meander Modelling

    Science.gov (United States)

    Posner, A. J.; Duan, J. G.

    2010-12-01

    This study first compares the first-order analytical solutions for the flow field by Ikeda et al. (1981) and Johanesson and Parker (1989b). Ikeda et al.'s (1981) linear bank erosion model was implemented to predict the rate of bank erosion, in which the bank erosion coefficient is treated as a stochastic variable that varies with physical properties of the bank (e.g. cohesiveness, stratigraphy, vegetation density). The developed model was used to predict the evolution of meandering planforms, and the modeling results were analyzed and compared to the observed data. Since the migration of a meandering channel consists of downstream translation, lateral expansion, and downstream or upstream rotation, several measures are formulated to determine which of the resulting planforms is closest to the experimentally measured one. Results from the deterministic model depend strongly on the calibrated erosion coefficient. Since field measurements are always limited, the stochastic model yielded more realistic predictions of meandering planform evolution. Due to the random nature of the bank erosion coefficient, the meandering planform evolution is a stochastic process that can only be accurately predicted by a stochastic model. The approach combines a quasi-2D Ikeda (1989) flow solution with Monte Carlo simulation of the bank erosion coefficient.
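
    A minimal sketch of the stochastic ingredient described above: the bank erosion coefficient is drawn from an assumed lognormal distribution and propagated through a linear migration law M = E * u_b to give an ensemble of migration distances. The distribution, parameter values and variable names are illustrative assumptions, not the calibrated model of the study.

      import numpy as np

      rng = np.random.default_rng(1)

      # Illustrative (assumed) inputs: near-bank excess velocity u_b and a lognormal
      # bank-erosion coefficient E, reflecting variability in cohesion, stratigraphy
      # and vegetation.  In Ikeda-type models the bank migration rate is M = E * u_b.
      u_b = 0.3                                                    # m/s, assumed
      E = rng.lognormal(mean=np.log(1e-7), sigma=0.5, size=10000)  # dimensionless

      years = 50
      seconds = years * 365.25 * 24 * 3600
      migration = E * u_b * seconds   # metres of lateral migration after `years`

      print(f"median migration after {years} y: {np.median(migration):.1f} m")
      print(f"5th-95th percentile range: {np.percentile(migration, 5):.1f}"
            f" - {np.percentile(migration, 95):.1f} m")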

  8. Monte Carlo models of dust coagulation

    CERN Document Server

    Zsom, Andras

    2010-01-01

    The thesis deals with the first stage of planet formation, namely dust coagulation from micron to millimeter sizes in circumstellar disks. For the first time, we collect and compile the recent laboratory experiments on dust aggregates into a collision model that can be implemented into dust coagulation models. We put this model into a Monte Carlo code that uses representative particles to simulate dust evolution. Simulations are performed using three different disk models in a local box (0D) located at 1 AU distance from the central star. We find that the dust evolution does not follow the previously assumed growth-fragmentation cycle, but growth is halted by bouncing before the fragmentation regime is reached. We call this the bouncing barrier which is an additional obstacle during the already complex formation process of planetesimals. The absence of the growth-fragmentation cycle and the halted growth has two important consequences for planet formation. 1) It is observed that disk atmospheres are dusty thr...

  9. Quantum Monte Carlo methods algorithms for lattice models

    CERN Document Server

    Gubernatis, James; Werner, Philipp

    2016-01-01

    Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...

  10. MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD

    Science.gov (United States)

    A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
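
    A toy Monte Carlo of the same flavour (not the screening model itself): inactivation rate, depth and percolation velocity are drawn from assumed distributions, first-order attenuation during percolation is computed for each draw, and the result is summarized with a kernel density. All distributions and parameter values are illustrative; NumPy and SciPy are assumed available.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(42)
      n = 20000

      # Assumed input distributions (illustrative only, not the model's database):
      mu = rng.lognormal(np.log(0.1), 0.7, n)        # inactivation rate, 1/day
      depth = rng.uniform(1.0, 10.0, n)              # depth to water table, m
      velocity = rng.lognormal(np.log(0.2), 0.5, n)  # percolation velocity, m/day

      travel_time = depth / velocity                       # days
      log10_attenuation = mu * travel_time / np.log(10.0)  # first-order decay

      kde = gaussian_kde(log10_attenuation)
      print("median log10 attenuation:", round(np.median(log10_attenuation), 2))
      print("KDE density at 4 logs of removal:", round(kde(np.array([4.0]))[0], 4))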

  11. Event-chain Monte Carlo for classical continuous spin models

    Science.gov (United States)

    Michel, Manon; Mayer, Johannes; Krauth, Werner

    2015-10-01

    We apply the event-chain Monte Carlo algorithm to classical continuum spin models on a lattice and clarify the condition for its validity. In the two-dimensional XY model, it outperforms the local Monte Carlo algorithm by two orders of magnitude, although it remains slower than the Wolff cluster algorithm. In the three-dimensional XY spin glass model at low temperature, the event-chain algorithm is far superior to the other algorithms.
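
    For context, the baseline against which the event-chain algorithm is compared is the local Metropolis update; a minimal sketch for the two-dimensional XY model is given below. Lattice size, temperature and sweep counts are arbitrary choices, and this is not the event-chain algorithm itself.

      import numpy as np

      def metropolis_xy(L=16, T=0.9, sweeps=500, seed=0):
          """Local Metropolis updates for the 2D XY model (J = 1, periodic L x L
          lattice); returns the mean magnetisation modulus over the second half."""
          rng = np.random.default_rng(seed)
          theta = rng.uniform(0, 2 * np.pi, (L, L))
          mags = []
          for sweep in range(sweeps):
              for _ in range(L * L):
                  i, j = rng.integers(L), rng.integers(L)
                  new = theta[i, j] + rng.uniform(-1.0, 1.0)
                  nb = [theta[(i + 1) % L, j], theta[(i - 1) % L, j],
                        theta[i, (j + 1) % L], theta[i, (j - 1) % L]]
                  dE = -sum(np.cos(new - t) - np.cos(theta[i, j] - t) for t in nb)
                  if dE <= 0 or rng.random() < np.exp(-dE / T):
                      theta[i, j] = new
              if sweep > sweeps // 2:  # crude equilibration cut
                  mags.append(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
          return np.mean(mags)

      print("mean |m| at T = 0.9:", round(metropolis_xy(), 3))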

  12. Monte-Carlo simulation-based statistical modeling

    CERN Document Server

    Chen, John

    2017-01-01

    This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.

  13. Gas discharges modeling by Monte Carlo technique

    Directory of Open Access Journals (Sweden)

    Savić Marija

    2010-01-01

    The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory that was suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas phase ionizations by fast neutrals. In this paper we tried to build up a Monte Carlo code that can be used to calculate secondary electron yields for different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sourc. Sci. Technol. 8 (1999) R1].

  14. Modeling neutron guides using Monte Carlo simulations

    CERN Document Server

    Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R

    2002-01-01

    Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guides' geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increases while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.
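
    A toy two-dimensional ray-trace in the same spirit: neutrons from a divergent source propagate down a straight guide, reflect specularly below an assumed constant critical angle and are lost otherwise. The real simulations are three-dimensional with wavelength-dependent supermirror reflectivity, so this sketch is schematic only and all parameter values are made up.

      import numpy as np

      def straight_guide_transmission(n=20000, length=10.0, width=0.05,
                                      theta_c=0.01, source_div=0.05, seed=3):
          """Toy 2D ray-trace of a straight guide (metres, radians): rays start
          uniformly across the entrance with angles in +-source_div, reflect
          specularly off the side walls, and are absorbed if the grazing angle
          exceeds theta_c."""
          rng = np.random.default_rng(seed)
          transmitted = 0
          for _ in range(n):
              y = rng.uniform(-width / 2, width / 2)
              ang = rng.uniform(-source_div, source_div)
              x = 0.0
              while True:
                  if abs(ang) < 1e-12:            # travels parallel to the walls
                      transmitted += 1
                      break
                  wall = width / 2 if ang > 0 else -width / 2
                  dx = (wall - y) / np.tan(ang)   # distance to the next wall hit
                  if x + dx >= length:            # exits before hitting the wall
                      transmitted += 1
                      break
                  if abs(ang) > theta_c:          # too steep: absorbed, ray lost
                      break
                  x, y, ang = x + dx, wall, -ang  # specular reflection
          return transmitted / n

      print("toy transmission:", straight_guide_transmission())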

  15. Monte Carlo studies of model Langmuir monolayers.

    Science.gov (United States)

    Opps, S B; Yang, B; Gray, C G; Sullivan, D E

    2001-04-01

    This paper examines some of the basic properties of a model Langmuir monolayer, consisting of surfactant molecules deposited onto a water subphase. The surfactants are modeled as rigid rods composed of a head and tail segment of diameters sigma(hh) and sigma(tt), respectively. The tails consist of n(t) approximately 4-7 effective monomers representing methylene groups. These rigid rods interact via site-site Lennard-Jones potentials with different interaction parameters for the tail-tail, head-tail, and head-head interactions. In a previous paper, we studied the ground-state properties of this system using a Landau approach. In the present paper, Monte Carlo simulations were performed in the canonical ensemble to elucidate the finite-temperature behavior of this system. Simulation techniques, incorporating a system of dynamic filters, allow us to decrease CPU time with negligible statistical error. This paper focuses on several of the key parameters, such as density, head-tail diameter mismatch, and chain length, responsible for driving transitions from uniformly tilted to untilted phases and between different tilt-ordered phases. Upon varying the density of the system, with sigma(hh)=sigma(tt), we observe a transition from a tilted (NNN)-condensed phase to an untilted-liquid phase and, upon comparison with recent experiments with fatty acid-alcohol and fatty acid-ester mixtures [M. C. Shih, M. K. Durbin, A. Malik, P. Zschack, and P. Dutta, J. Chem. Phys. 101, 9132 (1994); E. Teer, C. M. Knobler, C. Lautz, S. Wurlitzer, J. Kildae, and T. M. Fischer, J. Chem. Phys. 106, 1913 (1997)], we identify this as the L'(2)/Ov-L1 phase boundary. By varying the head-tail diameter ratio, we observe a decrease in T(c) with increasing mismatch. However, as the chain length was increased we observed that the transition temperatures increased and differences in T(c) due to head-tail diameter mismatch were diminished. In most of the present research, the water was treated as a hard

  16. Quasi-Monte Carlo methods for the Heston model

    OpenAIRE

    Jan Baldeaux; Dale Roberts

    2012-01-01

    In this paper, we discuss the application of quasi-Monte Carlo methods to the Heston model. We base our algorithms on the Broadie-Kaya algorithm, an exact simulation scheme for the Heston model. As the joint transition densities are not available in closed-form, the Linear Transformation method due to Imai and Tan, a popular and widely applicable method to improve the effectiveness of quasi-Monte Carlo methods, cannot be employed in the context of path-dependent options when the underlying pr...

  17. Modelling hadronic interactions in cosmic ray Monte Carlo generators

    Directory of Open Access Journals (Sweden)

    Pierog Tanguy

    2015-01-01

    Currently the uncertainty in the prediction of shower observables for different primary particles and energies is dominated by differences between hadronic interaction models. The LHC data on minimum bias measurements can be used to test Monte Carlo generators and these new constraints will help to reduce the uncertainties in air shower predictions. In this article, after a short introduction on air showers and Monte Carlo generators, we will show the results of the comparison between the updated versions of the high energy hadronic interaction models EPOS LHC and QGSJETII-04 and LHC data. Results for air shower simulations and their consequences on comparisons with air shower data will be discussed.

  18. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf

    2010-01-01

    Offering a unique balance between applications and calculations, this book incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The book enables readers to find the right algorithm for a desired application and illustrates complicated methods and algorithms with simple applicat
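
    As an illustration of one of the methods mentioned above, the sketch below implements a two-level version of the multilevel Monte Carlo idea for a European call under geometric Brownian motion with Euler time stepping. The model, payoff and parameters are our own illustrative choices, not examples taken from the book.

      import numpy as np

      rng = np.random.default_rng(7)
      S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

      def euler_payoffs(n_paths, n_steps):
          """Euler paths of GBM and the coupled coarse level (half the steps)."""
          dt = T / n_steps
          dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
          S_f = np.full(n_paths, S0)
          S_c = np.full(n_paths, S0)
          for k in range(n_steps):
              S_f = S_f * (1 + r * dt + sigma * dW[:, k])
              if k % 2 == 1:  # coarse path uses the summed Brownian increments
                  S_c = S_c * (1 + r * 2 * dt + sigma * (dW[:, k - 1] + dW[:, k]))
          disc = np.exp(-r * T)
          return disc * np.maximum(S_f - K, 0.0), disc * np.maximum(S_c - K, 0.0)

      # Two-level telescoping estimator: E[P_1] = E[P_0] + E[P_1 - P_0]
      p_f0, _ = euler_payoffs(200000, 2)       # cheap coarse-only level
      p_f1, p_c1 = euler_payoffs(20000, 4)     # correction level, fewer paths
      estimate = p_f0.mean() + (p_f1 - p_c1).mean()
      print("two-level MC price estimate:", round(estimate, 3))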

  19. Calibration and Monte Carlo modelling of neutron long counters

    CERN Document Server

    Tagziria, H

    2000-01-01

    The Monte Carlo technique has become a very powerful tool in radiation transport as full advantage is taken of enhanced cross-section data, more powerful computers and statistical techniques, together with better characterisation of neutron and photon source spectra. At the National Physical Laboratory, calculations using the Monte Carlo radiation transport code MCNP-4B have been combined with accurate measurements to characterise two long counters routinely used to standardise monoenergetic neutron fields. New and more accurate response function curves have been produced for both long counters. A novel approach using Monte Carlo methods has been developed, validated and used to model the response function of the counters and determine more accurately their effective centres, which have always been difficult to establish experimentally. Calculations and measurements agree well, especially for the De Pangher long counter for which details of the design and constructional material are well known. The sensitivit...

  20. Strain in the mesoscale kinetic Monte Carlo model for sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    2014-01-01

    Shrinkage strains measured from microstructural simulations using the mesoscale kinetic Monte Carlo (kMC) model for solid state sintering are discussed. This model represents the microstructure using digitized discrete sites that are either grain or pore sites. The algorithm used to simulate...

  1. A novel Monte Carlo approach to hybrid local volatility models

    NARCIS (Netherlands)

    A.W. van der Stoep (Anton); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    We present, in a Monte Carlo simulation framework, a novel approach for the evaluation of hybrid local volatility models [Risk, 1994, 7, 18–20; Int. J. Theor. Appl. Finance, 1998, 1, 61–110]. In particular, we consider the stochastic local volatility model—see e.g. Lipton et al. [Quant.

  2. JEWEL - a Monte Carlo Model for Jet Quenching

    CERN Document Server

    Zapp, Korinna; Wiedemann, Urs Achim

    2009-01-01

    The Monte Carlo model JEWEL 1.0 (Jet Evolution With Energy Loss) simulates parton shower evolution in the presence of a dense QCD medium. In its current form medium interactions are modelled as elastic scattering based on perturbative matrix elements and a simple prescription for medium induced gluon radiation. The parton shower is interfaced with a hadronisation model. In the absence of medium effects JEWEL is shown to reproduce jet measurements at LEP. The collisional energy loss is consistent with analytic calculations, but with JEWEL we can go a step further and characterise also jet-induced modifications of the medium. Elastic and inelastic medium interactions are shown to lead to distinctive modifications of the jet fragmentation pattern, which should allow one to experimentally distinguish between collisional and radiative energy loss mechanisms. In these proceedings the main JEWEL results are summarised and a Monte Carlo algorithm is outlined that allows one to include the Landau-Pomerantschuk-Migdal effect i...

  3. Monte Carlo Euler approximations of HJM term structure financial models

    KAUST Repository

    Björk, Tomas

    2012-11-22

    We present Monte Carlo-Euler methods for a weak approximation problem related to the Heath-Jarrow-Morton (HJM) term structure model, based on Itô stochastic differential equations in infinite dimensional spaces, and prove strong and weak error convergence estimates. The weak error estimates are based on stochastic flows and discrete dual backward problems, and they can be used to identify different error contributions arising from time and maturity discretization as well as the classical statistical error due to finite sampling. Explicit formulas for efficient computation of sharp error approximation are included. Due to the structure of the HJM models considered here, the computational effort devoted to the error estimates is low compared to the work to compute Monte Carlo solutions to the HJM model. Numerical examples with known exact solution are included in order to show the behavior of the estimates. © 2012 Springer Science+Business Media Dordrecht.
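
    A simplified illustration of the weak-approximation setting (not the authors' error estimators): Monte Carlo-Euler estimates of E[g(X_T)] for a scalar geometric Brownian motion, for which the exact value is known, showing the time-discretization bias and the statistical error separately. The SDE, function g and parameters are assumed for illustration and stand in for the infinite-dimensional HJM dynamics.

      import numpy as np

      rng = np.random.default_rng(11)
      x0, a, b, T = 1.0, 0.05, 0.2, 1.0        # dX = a X dt + b X dW  (GBM)
      g = lambda x: x                          # weak approximation of E[g(X_T)]
      exact = x0 * np.exp(a * T)               # E[X_T] known in closed form

      def euler_weak(n_paths, n_steps):
          dt = T / n_steps
          x = np.full(n_paths, x0)
          for _ in range(n_steps):
              x += a * x * dt + b * x * rng.normal(0.0, np.sqrt(dt), n_paths)
          gx = g(x)
          stat_err = 1.96 * gx.std(ddof=1) / np.sqrt(n_paths)  # statistical error
          return gx.mean(), stat_err

      for steps in (2, 8, 32):
          m, s = euler_weak(400000, steps)
          print(f"steps={steps:3d}  estimate={m:.5f}  bias~{m - exact:+.5f}  "
                f"95% stat err={s:.5f}")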

  4. A semianalytic Monte Carlo code for modelling LIDAR measurements

    Science.gov (United States)

    Palazzi, Elisa; Kostadinov, Ivan; Petritoli, Andrea; Ravegnani, Fabrizio; Bortoli, Daniele; Masieri, Samuele; Premuda, Margherita; Giovanelli, Giorgio

    2007-10-01

    LIDAR (LIght Detection and Ranging) is an active optical remote sensing technology with many applications in atmospheric physics. Modelling of LIDAR measurements is a useful approach for evaluating the effects of various environmental variables and scenarios, as well as of different measurement geometries and instrumental characteristics. In this regard a Monte Carlo simulation model can provide a reliable answer to these requirements. A semianalytic Monte Carlo code for modelling LIDAR measurements has been developed at ISAC-CNR. The backscattered laser signal detected by the LIDAR system is calculated in the code taking into account the contributions due to the main atmospheric molecular constituents and aerosol particles through processes of single and multiple scattering. Contributions from molecular absorption and from ground and cloud reflection are evaluated too. The code can perform simulations of both monostatic and bistatic LIDAR systems. To enhance the efficiency of the Monte Carlo simulation, analytical estimates and expected value calculations are performed. Variance-reduction devices (such as forced collision, local forced collision, splitting and Russian roulette) are also provided by the code, enabling the user to drastically reduce the variance of the calculation.

  5. Monte Carlo modelling of positron transport in real world applications

    Science.gov (United States)

    Marjanović, S.; Banković, A.; Šuvakov, M.; Petrović, Z. Lj

    2014-05-01

    Due to the unstable nature of positrons and their short lifetime, it is difficult to obtain high positron particle densities. This is why the Monte Carlo simulation technique, as a swarm method, is very suitable for modelling most of the current positron applications involving gaseous and liquid media. The ongoing work on the measurements of cross-sections for positron interactions with atoms and molecules and swarm calculations for positrons in gases led to the establishment of good cross-section sets for positron interaction with gases commonly used in real-world applications. Using the standard Monte Carlo technique and codes that can follow both low- (down to thermal energy) and high- (up to keV) energy particles, we are able to model different systems directly applicable to existing experimental setups and techniques. This paper reviews the results on modelling Surko-type positron buffer gas traps, application of the rotating wall technique and simulation of positron tracks in water vapor as a substitute for human tissue, and pinpoints the challenges in and advantages of applying Monte Carlo simulations to these systems.

  6. A Monte Carlo Model of Light Propagation in Nontransparent Tissue

    Institute of Scientific and Technical Information of China (English)

    姚建铨; 朱水泉; 胡海峰; 王瑞康

    2004-01-01

    To sharpen the imaging of structures, it is vital to develop a convenient and efficient quantitative algorithm for optical coherence tomography (OCT) sampling. In this paper a new Monte Carlo model is set up and light propagation in bio-tissue is analyzed by means of mathematical and physical equations. The study examines how the intensities of Class 1 and Class 2 light at different wavelengths change with permeation depth, how the Class 1 (signal) light intensity changes with the probing depth, and how the angularly resolved diffuse reflectance and diffuse transmittance change with the exit angle. The results show that the Monte Carlo simulation results are consistent with theoretical data.
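
    A generic photon-packet Monte Carlo sketch in the same spirit: exponential free paths, Henyey-Greenstein scattering and a slab geometry, tallying diffuse reflectance and transmittance. It does not reproduce the paper's Class 1/Class 2 OCT decomposition; optical properties and geometry are assumed for illustration.

      import numpy as np

      def mc_photons(n=5000, mu_a=0.1, mu_s=10.0, g=0.9, thickness=1.0, seed=5):
          """Photon-packet Monte Carlo in a slab (lengths in mm, coefficients in
          1/mm, g != 0): exponential free paths, Henyey-Greenstein scattering, and
          absorption treated by terminating packets with probability mu_a/mu_t."""
          rng = np.random.default_rng(seed)
          mu_t = mu_a + mu_s
          refl = trans = 0
          for _ in range(n):
              z, uz = 0.0, 1.0                 # start at the surface, heading down
              while True:
                  z += uz * rng.exponential(1.0 / mu_t)
                  if z < 0.0:
                      refl += 1                # escaped back through the top
                      break
                  if z > thickness:
                      trans += 1               # escaped through the bottom
                      break
                  if rng.random() < mu_a / mu_t:
                      break                    # packet absorbed
                  # Henyey-Greenstein polar angle, uniform azimuth; only the
                  # z-direction cosine is tracked (enough for slab tallies)
                  xi = rng.random()
                  tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
                  cos_t = (1.0 + g * g - tmp * tmp) / (2.0 * g)
                  sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
                  phi = 2.0 * np.pi * rng.random()
                  if abs(uz) > 0.99999:
                      uz = np.sign(uz) * cos_t
                  else:
                      uz = uz * cos_t - np.sqrt(1.0 - uz * uz) * sin_t * np.cos(phi)
          return refl / n, trans / n

      R, T = mc_photons()
      print(f"diffuse reflectance ~ {R:.3f}, diffuse transmittance ~ {T:.3f}")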

  7. A generalized hard-sphere model for Monte Carlo simulation

    Science.gov (United States)

    Hassan, H. A.; Hash, David B.

    1993-01-01

    A new molecular model, called the generalized hard-sphere, or GHS model, is introduced. This model contains, as a special case, the variable hard-sphere model of Bird (1981) and is capable of reproducing all of the analytic viscosity coefficients available in the literature that are derived for a variety of interaction potentials incorporating attraction and repulsion. In addition, a new procedure for determining interaction potentials in a gas mixture is outlined. Expressions needed for implementing the new model in the direct simulation Monte Carlo methods are derived. This development makes it possible to employ interaction models that have the same level of complexity as used in Navier-Stokes calculations.

  8. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li

    2012-06-01

    Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials and nuclear sources, etc., are pre-defined and the transportation and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interaction). Then the deposited energies of particles entering the detectors are recorded and tallied and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models.

  9. Extended Ensemble Monte Carlo

    OpenAIRE

    Iba, Yukito

    2000-01-01

    "Extended Ensemble Monte Carlo" is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...

  10. Monte Carlo simulation of classical spin models with chaotic billiards.

    Science.gov (United States)

    Suzuki, Hideyuki

    2013-11-01

    It has recently been shown that the computing abilities of Boltzmann machines, or Ising spin-glass models, can be implemented by chaotic billiard dynamics without any use of random numbers. In this paper, we further numerically investigate the capabilities of the chaotic billiard dynamics as a deterministic alternative to random Monte Carlo methods by applying it to classical spin models in statistical physics. First, we verify that the billiard dynamics can yield samples that converge to the true distribution of the Ising model on a small lattice, and we show that it appears to have the same convergence rate as random Monte Carlo sampling. Second, we apply the billiard dynamics to finite-size scaling analysis of the critical behavior of the Ising model and show that the phase-transition point and the critical exponents are correctly obtained. Third, we extend the billiard dynamics to spins that take more than two states and show that it can be applied successfully to the Potts model. We also discuss the possibility of extensions to continuous-valued models such as the XY model.

  11. Dynamical Monte Carlo method for stochastic epidemic models

    CERN Document Server

    Aiello, O E

    2002-01-01

    A new approach to Dynamical Monte Carlo methods is introduced to simulate Markovian processes. We apply this approach to formulate and study an epidemic Generalized SIRS model. The results are in excellent agreement with the fourth-order Runge-Kutta method in a region of deterministic solution. Introducing local stochastic interactions, the Runge-Kutta method is not applicable, and we solve and check it self-consistently with a stochastic version of the Euler method. The results are also analyzed under the herd-immunity concept.
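
    A minimal kinetic (Gillespie-type) Monte Carlo for a well-mixed SIRS model is sketched below; its output can be compared against the deterministic rate equations, in the spirit of the comparison described above. The rates, population size and variable names are illustrative assumptions, and the paper's local-interaction version is not reproduced.

      import numpy as np

      def gillespie_sirs(N=1000, I0=10, beta=0.3, gamma=0.1, xi=0.05,
                         t_max=200.0, seed=0):
          """Stochastic SIRS: S+I -> 2I (rate beta*S*I/N), I -> R (gamma*I),
          R -> S (xi*R).  Returns event times and infected counts."""
          rng = np.random.default_rng(seed)
          S, I, R = N - I0, I0, 0
          t, times, infected = 0.0, [0.0], [I0]
          while t < t_max and I > 0:
              rates = np.array([beta * S * I / N, gamma * I, xi * R])
              total = rates.sum()
              t += rng.exponential(1.0 / total)      # time to the next event
              event = rng.choice(3, p=rates / total)  # which reaction fires
              if event == 0:
                  S, I = S - 1, I + 1
              elif event == 1:
                  I, R = I - 1, R + 1
              else:
                  R, S = R - 1, S + 1
              times.append(t)
              infected.append(I)
          return np.array(times), np.array(infected)

      t, I = gillespie_sirs()
      print("peak infected:", I.max(), "at t =", round(t[I.argmax()], 1))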

  12. Monte Carlo Shell Model for ab initio nuclear structure

    Directory of Open Access Journals (Sweden)

    Abe T.

    2014-03-01

    We report on our recent application of the Monte Carlo Shell Model to no-core calculations. At the initial stage of the application, we have performed benchmark calculations in the p-shell region. Results are compared with those in the Full Configuration Interaction and No-Core Full Configuration methods. These are found to be consistent with each other within quoted uncertainties when they could be quantified. The preliminary results in Nshell = 5 reveal the onset of a systematic convergence pattern.

  13. Novel Extrapolation Method in the Monte Carlo Shell Model

    CERN Document Server

    Shimizu, Noritaka; Mizusaki, Takahiro; Otsuka, Takaharu; Abe, Takashi; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model in order to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full $pf$-shell calculation of $^{56}$Ni, and the applicability of the method to a system beyond current limit of exact diagonalization is shown for the $pf$+$g_{9/2}$-shell calculation of $^{64}$Ge.

  14. Monte Carlo Simulation of Kinesin Movement with a Lattice Model

    Institute of Scientific and Technical Information of China (English)

    WANG Hong; DOU Shuo-Xing; WANG Peng-Ye

    2005-01-01

    Kinesin is a processive double-headed molecular motor that moves along a microtubule by taking about 8 nm steps. It generally hydrolyzes one ATP molecule for each forward step. The processive movement of the kinesin molecular motors is numerically simulated with a lattice model. The motors are considered as Brownian particles and the ATPase processes of both heads are taken into account. The Monte Carlo simulation results agree well with recent experimental observations, especially on the relation of velocity versus ATP and ADP concentrations.

  15. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    Science.gov (United States)

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented where both daylight activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further strengthens the usage of daylight as a potential light source for PDT where effective treatment depths of about 2 mm can be achieved.

  16. Gauge Potts model with generalized action: A Monte Carlo analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fanchiotti, H.; Canal, C.A.G.; Sciutto, S.J.

    1985-08-15

    Results of a Monte Carlo calculation on the q-state gauge Potts model in d dimensions with a generalized action involving planar 1 × 1 (plaquette) and 2 × 1 (fenêtre) loop interactions are reported. For d = 3 and q = 2, first- and second-order phase transitions are detected. The phase diagram for q = 3 presents only first-order phase transitions. For d = 2, a comparison with analytical results is made. Here also, the behavior of the numerical simulation in the vicinity of a second-order transition is analyzed.

  17. Evolutionary Sequential Monte Carlo Samplers for Change-Point Models

    Directory of Open Access Journals (Sweden)

    Arnaud Dufays

    2016-03-01

    Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the SMC scope encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but they additionally provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.

  18. Monte Carlo model for electron degradation in methane

    CERN Document Server

    Bhardwaj, Anil

    2015-01-01

    We present a Monte Carlo model for the degradation of 1-10,000 eV electrons in an atmosphere of methane. The electron impact cross sections for CH4 are compiled and analytical representations of these cross sections are used as input to the model. Yield spectra, which provide information about the number of inelastic events that have taken place in each energy bin, are used to calculate the yield (or population) of various inelastic processes. The numerical yield spectra, obtained from the Monte Carlo simulations, are represented analytically, thus generating the Analytical Yield Spectra (AYS). The AYS are employed to obtain the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for neutral CH4 is found to be 26 (27.8) eV at 10 (0.1) keV. Efficiency calculations showed that ionization is the dominant process at energies >50 eV, for which more than 50% of the incident electron energy is used. Above 25 eV, dissociation has an efficiency of 27%. Below 10 eV, vibrational e...

  19. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  20. Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models

    Science.gov (United States)

    Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti

    2016-10-01

    A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using some Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We shall conduct a simulation study using MCMC methods to investigate the performance of this model.

  1. Monte Carlo modelling of Schottky diode for rectenna simulation

    Science.gov (United States)

    Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.

    2017-09-01

    Before designing a detector circuit, the extraction of the electrical parameters of the Schottky diode is a critical step. This article is based on a Monte-Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact, such as the image force effect or tunneling. The weight of the tunneling and thermionic currents is quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the saturation current on bias. Harmonic Balance (HB) simulation of a rectifier circuit within the Advanced Design System (ADS) software shows that considering a non-linear ideality factor and saturation current in the electrical model of the Schottky diode does not seem essential. Indeed, bias-independent values extracted in the forward regime of the I-V curve are sufficient. However, the non-linear series resistance extracted from a small-signal analysis (SSA) strongly influences the conversion efficiency at low input powers.

  2. Monte Carlo fundamentals

    Energy Technology Data Exchange (ETDEWEB)

    Brown, F.B.; Sutton, T.M.

    1996-02-01

    This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
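
    A textbook-style fragment of the kind of material such notes cover (not taken from the report): an analog Monte Carlo estimate of uncollided transmission through a purely absorbing slab, illustrating random sampling, a simple tally and its statistical uncertainty against the exact answer. The cross section and thickness are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(2024)

      # Analog Monte Carlo estimate of uncollided transmission through a purely
      # absorbing slab: sample free paths from an exponential and tally the
      # fraction exceeding the thickness.  Exact answer: exp(-sigma_t * thickness).
      sigma_t = 1.5      # macroscopic cross section, 1/cm (assumed)
      thickness = 2.0    # slab thickness, cm (assumed)
      n = 1_000_000

      paths = rng.exponential(1.0 / sigma_t, size=n)
      tally = (paths > thickness).astype(float)

      mean = tally.mean()
      std_err = tally.std(ddof=1) / np.sqrt(n)    # one-sigma statistical error
      print(f"MC transmission = {mean:.5f} +/- {std_err:.5f}")
      print(f"exact exp(-sigma_t * x) = {np.exp(-sigma_t * thickness):.5f}")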

  3. Monte Carlo autofluorescence modeling of cervical intraepithelial neoplasm progression

    Science.gov (United States)

    Chu, S. C.; Chiang, H. K.; Wu, C. E.; He, S. Y.; Wang, D. Y.

    2006-02-01

    A Monte Carlo fluorescence model has been developed to estimate the autofluorescence spectra associated with the progression of exo-cervical intraepithelial neoplasia (CIN). We used a double integrating sphere system and a tunable light source system, 380 to 600 nm, to measure the reflection and transmission spectra of a 50 μm thick tissue, and used the Inverse Adding-Doubling (IAD) method to estimate the absorption (μa) and scattering (μs) coefficients. Human cervical tissue samples were sliced vertically (longitudinally) by the frozen section method. The results show that the absorption and scattering coefficients of cervical neoplasia are 2~3 times higher than those of normal tissues. We applied the Monte Carlo method to estimate photon distribution and fluorescence emission in the tissue. By combining the intrinsic fluorescence information (collagen, NADH, and FAD), the anatomical information of the epithelium, CIN, and stroma layers, and the fluorescence escape function, the autofluorescence spectra of CIN at different development stages were obtained. We have observed that the progression of the CIN results in a gradual decrease of the collagen peak intensity in the autofluorescence. In addition, the CIN layer formed a barrier that blocks the autofluorescence escaping from the stroma layer, due to the strong extinction (scattering and absorption) of the CIN layer. To our knowledge, this is the first study measuring the CIN optical properties in the visible range; it also successfully demonstrates the fluorescence model for estimating autofluorescence spectra of cervical tissue associated with the progression of the CIN tissue; this model is very important in assisting CIN diagnosis and treatment in clinical medicine.

  4. Monte Carlo methods

    OpenAIRE

    Bardenet, R.

    2012-01-01

    ISBN: 978-2-7598-1032-1. Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
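
    Toy versions of two of the reviewed algorithms, importance sampling and random-walk Metropolis MCMC, applied to the same one-dimensional Gaussian target; purely illustrative, with all parameter choices our own.

      import numpy as np

      rng = np.random.default_rng(8)
      log_post = lambda x: -0.5 * (x - 2.0) ** 2 / 0.5**2  # unnormalised N(2, 0.5^2)
      target_mean = 2.0                                     # known for this toy case

      # Importance sampling with proposal q = N(0, 2^2); self-normalised estimate
      xs = rng.normal(0.0, 2.0, 100000)
      log_q = -0.5 * xs**2 / 2.0**2
      w = np.exp(log_post(xs) - log_q)
      print("importance-sampling mean:", round(np.sum(w * xs) / np.sum(w), 3))

      # Random-walk Metropolis MCMC targeting the same density
      x, chain = 0.0, []
      for _ in range(50000):
          prop = x + rng.normal(0.0, 1.0)
          if np.log(rng.random()) < log_post(prop) - log_post(x):
              x = prop
          chain.append(x)
      print("MCMC mean (after burn-in):", round(float(np.mean(chain[5000:])), 3))
      print("true mean:", target_mean)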

  5. Modelling a gamma irradiation process using the Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Soares, Gabriela A.; Pereira, Marcio T., E-mail: gas@cdtn.br, E-mail: mtp@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2011-07-01

    In gamma irradiation services, the evaluation of the absorbed dose is of great importance in order to guarantee service quality. When the physical structure and human resources for performing dosimetry on each irradiated product are not available, the application of mathematical models may be a solution. Through this, the prediction of the dose delivered to a specific product, irradiated in a specific position and during a certain period of time, becomes possible, provided the model is validated with dosimetry tests. At the gamma irradiation facility of CDTN, equipped with a Cobalt-60 source, the Monte Carlo method was applied to perform simulations of product irradiations and the results were compared with Fricke dosimeters irradiated under the same conditions as the simulations. The first results obtained showed the applicability of this method, with a linear relation between simulation and experimental results. (author)

  6. A Monte Carlo Simulation Framework for Testing Cosmological Models

    Directory of Open Access Journals (Sweden)

    Heymann Y.

    2014-10-01

    We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.

  7. Monte Carlo modeling of recrystallization processes in α-uranium

    Science.gov (United States)

    Steiner, M. A.; McCabe, R. J.; Garlea, E.; Agnew, S. R.

    2017-08-01

    Starting with electron backscattered diffraction (EBSD) data obtained from a warm clock-rolled α-uranium deformation microstructure, a Potts Monte Carlo model was used to simulate static site-saturated recrystallization and test which recrystallization nucleation conditions within the microstructure are best validated by experimental observations. The simulations support prior observations that recrystallized nuclei within α-uranium form preferentially on non-twin high-angle grain boundary sites at 450 °C. They also demonstrate, in a new finding, that nucleation along these boundaries occurs only at a highly constrained subset of sites possessing the largest degrees of local deformation. Deformation in the EBSD data can be identified by the Kernel Average Misorientation (KAM), which may be considered as a proxy for the local geometrically necessary dislocation (GND) density.

  8. Monte Carlo Modeling of Crystal Channeling at High Energies

    CERN Document Server

    Schoofs, Philippe; Cerutti, Francesco

    Charged particles entering a crystal close to some preferred direction can be trapped in the electromagnetic potential well existing between consecutive planes or strings of atoms. This channeling effect can be used to extract beam particles if the crystal is bent beforehand. Crystal channeling is becoming a reliable and efficient technique for collimating beams and removing halo particles. At CERN, the installation of silicon crystals in the LHC is under scrutiny by the UA9 collaboration with the goal of investigating if they are a viable option for the collimation system upgrade. This thesis describes a new Monte Carlo model of planar channeling which has been developed from scratch in order to be implemented in the FLUKA code simulating particle transport and interactions. Crystal channels are described through the concept of continuous potential taking into account thermal motion of the lattice atoms and using Moliere screening function. The energy of the particle transverse motion determines whether or n...

  9. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
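
    Since the abstract mentions the famous Buffon's needle problem, a minimal simulation of it (with needle length equal to the line spacing) is sketched below as an estimate of pi; the setup is the standard textbook one, not material taken from the book.

      import numpy as np

      rng = np.random.default_rng(1777)

      # Buffon's needle with needle length L equal to the line spacing d = 1:
      # P(cross) = 2L/(pi*d) = 2/pi, so pi ~ 2 / P_hat.
      n = 1_000_000
      centre = rng.uniform(0.0, 0.5, n)        # distance of centre to nearest line
      angle = rng.uniform(0.0, np.pi / 2, n)   # acute angle with the lines
      crosses = centre <= 0.5 * np.sin(angle)  # needle half-length = 0.5

      p_hat = crosses.mean()
      print("estimated pi:", round(2.0 / p_hat, 4))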

  10. Effective quantum Monte Carlo algorithm for modeling strongly correlated systems

    NARCIS (Netherlands)

    Kashurnikov, V. A.; Krasavin, A. V.

    2007-01-01

    A new effective Monte Carlo algorithm based on principles of continuous time is presented. It allows calculating, in an arbitrary discrete basis, thermodynamic quantities and linear response of mixed boson-fermion, spin-boson, and other strongly correlated systems which admit no analytic description

  11. Monte Carlo simulation of quantum statistical lattice models

    NARCIS (Netherlands)

    Raedt, Hans De; Lagendijk, Ad

    1985-01-01

    In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used t

  12. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, Wies M.W.

    1994-01-01

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning estimates have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Monte Carlo Markov Chains. As an example, the method is applied to

  13. Monte Carlo estimation of the conditional Rasch model

    NARCIS (Netherlands)

    Akkermans, W.

    1998-01-01

    In order to obtain conditional maximum likelihood estimates, the conditioning constants are needed. Geyer and Thompson (1992) proposed a Markov chain Monte Carlo method that can be used to approximate these constants when they are difficult to calculate exactly. In the present paper, their method is

  14. Improved Monte Carlo model for multiple scattering calculations

    Institute of Scientific and Technical Information of China (English)

    Weiwei Cai; Lin Ma

    2012-01-01

    The coupling between the Monte Carlo (MC) method and geometrical optics to improve accuracy is investigated. The results obtained show improved agreement with previous experimental data, demonstrating that the MC method, when coupled with simple geometrical optics, can simulate multiple scattering with enhanced fidelity.

  15. Household water use and conservation models using Monte Carlo techniques

    Science.gov (United States)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

    The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
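
    A minimal sketch of the mechanistic sampling idea: end-use parameters are drawn from assumed probability distributions and aggregated to a household indoor demand. The distributions and values below are invented for illustration and are not the calibrated San Ramon inputs.

      import numpy as np

      rng = np.random.default_rng(12)
      n_households = 10000

      # Assumed end-use parameter distributions (illustrative only):
      toilet_flushes = rng.poisson(10, n_households)                   # flushes/day
      toilet_vol = rng.normal(6.0, 1.5, n_households).clip(3, 13)      # L/flush
      shower_min = rng.gamma(2.0, 4.0, n_households)                   # minutes/day
      shower_flow = rng.normal(7.5, 1.5, n_households).clip(4, 12)     # L/min
      washer_loads = rng.poisson(0.9, n_households)                    # loads/day
      washer_vol = rng.normal(100.0, 25.0, n_households).clip(40, 170) # L/load

      daily_litres = (toilet_flushes * toilet_vol + shower_min * shower_flow
                      + washer_loads * washer_vol)
      print(f"median indoor use: {np.median(daily_litres):.0f} L/household/day")
      print(f"90th percentile:   {np.percentile(daily_litres, 90):.0f} L/household/day")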

  16. Monte Carlo grain growth modeling with local temperature gradients

    Science.gov (United States)

    Tan, Y.; Maniatty, A. M.; Zheng, C.; Wen, J. T.

    2017-09-01

    This work investigated the development of a Monte Carlo (MC) simulation approach to modeling grain growth in the presence of a non-uniform temperature field that may vary with time. We first scale the MC model to physical growth processes by fitting experimental data. Based on the scaling relationship, we derive a grid site selection probability (SSP) function to account for the effect of a spatially varying temperature field. The SSP function is based on the differential MC step, which allows it to naturally accommodate time-varying temperature fields as well. We verify the model and compare the predictions to other existing formulations (Godfrey and Martin 1995 Phil. Mag. A 72 737-49; Radhakrishnan and Zacharia 1995 Metall. Mater. Trans. A 26 2123-30) in simple two-dimensional cases with only spatially varying temperature fields, where the predicted grain growth in regions of constant temperature is expected to be the same as in the isothermal case. We also test the model in a more realistic three-dimensional case with a temperature field varying in both space and time, modeling grain growth in the heat-affected zone of a weld. We believe the newly proposed approach is promising for modeling grain growth in material manufacturing processes that involve a time-dependent local temperature gradient.

  17. A Monte Carlo-based model of gold nanoparticle radiosensitization

    Science.gov (United States)

    Lechtman, Eli Solomon

    The goal of radiotherapy is to operate within the therapeutic window - delivering doses of ionizing radiation to achieve locoregional tumour control, while minimizing normal tissue toxicity. A greater therapeutic ratio can be achieved by utilizing radiosensitizing agents designed to enhance the effects of radiation at the tumour. Gold nanoparticles (AuNP) represent a novel radiosensitizer with unique and attractive properties. AuNPs enhance local photon interactions, thereby converting photons into localized damaging electrons. Experimental reports of AuNP radiosensitization reveal this enhancement effect to be highly sensitive to irradiation source energy, cell line, and AuNP size, concentration and intracellular localization. This thesis explored the physics and some of the underlying mechanisms behind AuNP radiosensitization. A Monte Carlo simulation approach was developed to investigate the enhanced photoelectric absorption within AuNPs, and to characterize the escaping energy and range of the photoelectric products. Simulations revealed a 10^3-fold increase in the rate of photoelectric absorption using low-energy brachytherapy sources compared to megavolt sources. For low-energy sources, AuNPs released electrons with ranges of only a few microns in the surrounding tissue. For higher energy sources, longer ranged photoelectric products travelled orders of magnitude farther. A novel radiobiological model called the AuNP radiosensitization predictive (ARP) model was developed based on the unique nanoscale energy deposition pattern around AuNPs. The ARP model incorporated detailed Monte Carlo simulations with experimentally determined parameters to predict AuNP radiosensitization. This model compared well to in vitro experiments involving two cancer cell lines (PC-3 and SK-BR-3), two AuNP sizes (5 and 30 nm) and two source energies (100 and 300 kVp). The ARP model was then used to explore the effects of AuNP intracellular localization using 1.9 and 100 nm Au

  18. Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations

    Science.gov (United States)

    Hanson, Andrea; Reed, Erik; Cavanagh, Peter

    2011-01-01

    Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.

  19. SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

    CERN Document Server

    Baes, Maarten

    2015-01-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
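    For instance, generating random positions from a 3D density by rejection sampling, one of the simplest options for such a building block, can be sketched as below. The Plummer-like profile, the bounding box and the envelope value f_max are illustrative assumptions, not components of SKIRT itself.

      import numpy as np

      rng = np.random.default_rng(1)

      def plummer_like_density(x, y, z, a=1.0):
          # Illustrative, spherically symmetric (unnormalised) density profile.
          r2 = x * x + y * y + z * z
          return (1.0 + r2 / a**2) ** -2.5

      def sample_positions(density, n, box=5.0, f_max=1.0):
          """Draw n random positions from a 3D density by rejection sampling."""
          out = []
          while len(out) < n:
              x, y, z = rng.uniform(-box, box, size=3)
              # Accept the trial point with probability density / f_max.
              if rng.uniform(0.0, f_max) < density(x, y, z):
                  out.append((x, y, z))
          return np.array(out)

      positions = sample_positions(plummer_like_density, 10_000)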

  20. Monte Carlo model for electron degradation in xenon gas

    CERN Document Server

    Mukundan, Vrinda

    2016-01-01

    We have developed a Monte Carlo model for studying the local degradation of electrons in the energy range 9-10000 eV in xenon gas. Analytically fitted forms of the electron impact cross sections for elastic and various inelastic processes are fed as input data to the model. A two-dimensional numerical yield spectrum, which gives information on the number of energy loss events occurring in a particular energy interval, is obtained as the output of the model. The numerical yield spectrum is fitted analytically, thus obtaining an analytical yield spectrum. The analytical yield spectrum can be used to calculate electron fluxes, which can be further employed for the calculation of volume production rates. Using the yield spectrum, the mean energy per ion pair and the efficiencies of inelastic processes are calculated. The value of the mean energy per ion pair for Xe is 22 eV at 10 keV. Ionization dominates for incident energies greater than 50 eV and is found to have an efficiency of 65% at 10 keV. The efficiency for the excitation process is 30%...

  1. Hopping electron model with geometrical frustration: kinetic Monte Carlo simulations

    Science.gov (United States)

    Terao, Takamichi

    2016-09-01

    The hopping electron model on the Kagome lattice was investigated by kinetic Monte Carlo simulations, and the non-equilibrium nature of the system was studied. We have numerically confirmed that aging phenomena are present in the autocorrelation function C(t, t_W) of the electron system on the Kagome lattice, which is a geometrically frustrated lattice without any disorder. The waiting-time distribution p(τ) of hopping electrons in the system on the Kagome lattice has also been studied. It is confirmed that the profile of p(τ) obtained at lower temperatures obeys a power-law behavior, which is a characteristic feature of the continuous-time random walk of electrons. These features were also compared with the characteristics of the Coulomb glass model, used as a model of disordered thin films and doped semiconductors. This work represents an advance in the understanding of the dynamics of geometrically frustrated systems and will serve as a basis for further studies of these physical systems.
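    A minimal kinetic Monte Carlo loop that generates hop events and records the waiting time between successive hops of the same electron is sketched below; a histogram of these times is what p(τ) summarizes. The Arrhenius rates and the random barrier distribution are illustrative stand-ins for the actual Kagome-lattice dynamics.

      import numpy as np

      rng = np.random.default_rng(2)
      beta = 5.0                                   # inverse temperature (illustrative)
      barriers = rng.uniform(0.0, 2.0, size=100)   # random hop barriers for 100 electrons
      rates = np.exp(-beta * barriers)             # Arrhenius hop rates
      last_hop = np.zeros(barriers.size)           # time of each electron's previous hop

      waiting_times = []
      t = 0.0
      for _ in range(10_000):
          total = rates.sum()
          t += rng.exponential(1.0 / total)              # time to the next hop anywhere
          k = rng.choice(rates.size, p=rates / total)    # which electron hops
          waiting_times.append(t - last_hop[k])
          last_hop[k] = t
          barriers[k] = rng.uniform(0.0, 2.0)            # new local environment after the hop
          rates[k] = np.exp(-beta * barriers[k])

      # A log-log histogram of waiting_times shows whether p(tau) develops the
      # power-law tail characteristic of a continuous-time random walk.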

  2. Accelerating Monte Carlo Markov chains with proxy and error models

    Science.gov (United States)

    Josset, Laureline; Demyanov, Vasily; Elsheikh, Ahmed H.; Lunati, Ivan

    2015-12-01

    In groundwater modeling, Monte Carlo Markov Chain (MCMC) simulations are often used to calibrate aquifer parameters and propagate the uncertainty to the quantity of interest (e.g., pollutant concentration). However, this approach requires a large number of flow simulations and incurs high computational cost, which prevents a systematic evaluation of the uncertainty in the presence of complex physical processes. To avoid this computational bottleneck, we propose to use an approximate model (proxy) to predict the response of the exact model. Here, we use a proxy that entails a very simplified description of the physics with respect to the detailed physics described by the "exact" model. The error model accounts for the simplification of the physical process; and it is trained on a learning set of realizations, for which both the proxy and exact responses are computed. First, the key features of the set of curves are extracted using functional principal component analysis; then, a regression model is built to characterize the relationship between the curves. The performance of the proposed approach is evaluated on the Imperial College Fault model. We show that the joint use of the proxy and the error model to infer the model parameters in a two-stage MCMC set-up allows longer chains at a comparable computational cost. Unnecessary evaluations of the exact responses are avoided through a preliminary evaluation of the proposal made on the basis of the corrected proxy response. The error model trained on the learning set is crucial to provide a sufficiently accurate prediction of the exact response and guide the chains to the low misfit regions. The proposed methodology can be extended to multiple-chain algorithms or other Bayesian inference methods. Moreover, FPCA is not limited to the specific presented application and offers a general framework to build error models.
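    The two-stage idea can be sketched with a delayed-acceptance Metropolis step: the cheap proxy screens each proposal, and the exact model is evaluated only for proposals that survive the first stage. The toy log-likelihoods below (a narrow Gaussian for the exact model and a deliberately biased one standing in for the proxy plus error model) are illustrative stand-ins for flow simulations.

      import numpy as np

      rng = np.random.default_rng(3)

      def exact_loglike(theta):      # expensive model (stand-in: a narrow Gaussian)
          return -0.5 * ((theta - 1.0) / 0.1) ** 2

      def proxy_loglike(theta):      # cheap proxy corrected by an error model (stand-in)
          return -0.5 * ((theta - 1.2) / 0.15) ** 2

      theta = 0.0
      ll_exact, ll_proxy = exact_loglike(theta), proxy_loglike(theta)
      chain = []
      for _ in range(5000):
          prop = theta + rng.normal(0.0, 0.2)
          lp = proxy_loglike(prop)
          # Stage 1: screen the proposal using the proxy only.
          if np.log(rng.random()) < lp - ll_proxy:
              # Stage 2: evaluate the exact model only for surviving proposals.
              le = exact_loglike(prop)
              if np.log(rng.random()) < (le - ll_exact) - (lp - ll_proxy):
                  theta, ll_exact, ll_proxy = prop, le, lp
          chain.append(theta)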

  3. Monte Carlo Modeling Electronuclear Processes in Cascade Subcritical Reactor

    CERN Document Server

    Bznuni, S A; Zhamkochyan, V M; Polyanskii, A A; Sosnin, A N; Khudaverdian, A G

    2000-01-01

    An accelerator-driven subcritical cascade reactor composed of a main thermal-neutron reactor, constructed analogously to the core of the VVER-1000 reactor, and a booster reactor, constructed similarly to the core of the BN-350 fast breeder reactor, is taken as a model example. It is shown by means of Monte Carlo calculations that such a system is a safe energy source (k_{eff}=0.94-0.98) and is capable of transmuting produced radioactive wastes (the neutron flux density in the thermal zone is PHI^{max}(r,z)=10^{14} n cm^{-2} s^{-1}, while the neutron flux in the fast zone is PHI^{max}(r,z)=2.25 × 10^{15} n cm^{-2} s^{-1} for k_{eff}=0.98 and a proton accelerator beam current of I=5.3 mA). The suggested configuration of the "cascade" reactor system essentially reduces the requirements on the proton accelerator current.

  4. Monte Carlo Glauber wounded nucleon model with meson cloud

    CERN Document Server

    Zakharov, B G

    2016-01-01

    We study the effect of the nucleon meson cloud on predictions of the Monte Carlo Glauber wounded nucleon model for $AA$, $pA$, and $pp$ collisions. From the analysis of the data on the charged multiplicity density in $AA$ collisions we find that the meson-baryon Fock component reduces the required fraction of binary collisions by a factor of $\sim 2$ for Au+Au collisions at $\sqrt{s}=0.2$ TeV and $\sim 1.5$ for Pb+Pb collisions at $\sqrt{s}=2.76$ TeV. For central $AA$ collisions the meson cloud can increase the multiplicity density by $\sim 16-18$\%. We give predictions for the midrapidity charged multiplicity density in Pb+Pb collisions at $\sqrt{s}=5.02$ TeV for the future LHC run 2. We find that the meson cloud has a weak effect on the centrality dependence of the ellipticity $\epsilon_2$ in $AA$ collisions. For collisions of the deformed uranium nuclei at $\sqrt{s}=0.2$ TeV we find that the meson cloud may somewhat improve agreement with the data on the dependence of the elliptic flow on the charged multi...

  5. MORSE Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  6. Monte Carlo modeling and optimization of buffer gas positron traps

    Science.gov (United States)

    Marjanović, Srđan; Petrović, Zoran Lj

    2017-02-01

    Buffer gas positron traps have been used for over two decades as the prime source of slow positrons, enabling a wide range of experiments. While their performance has been well understood through empirical studies, no theoretical attempt has been made to quantitatively describe their operation. In this paper we apply standard models, as developed for the physics of low-temperature collision-dominated plasmas or the physics of swarms, to model the basic performance and principles of operation of gas-filled positron traps. The Monte Carlo model is equipped with the best available set of cross sections, which were mostly derived experimentally by using the same type of traps that are being studied. Our model represents, in realistic geometry and fields, the development of the positron ensemble from the initial beam provided by the solid neon moderator, through the voltage drops between the stages of the trap and through the different pressures of the buffer gas. The first two stages employ excitation of N2 with acceleration of the order of 10 eV, so that the trap operates under conditions where excitation of the nitrogen reduces the energy of the initial beam enough to trap the positrons without giving them a chance to be annihilated following positronium formation. The energy distribution function develops from the assumed distribution leaving the moderator; it is accelerated by the voltage drops and forms beams at several distinct energies. In the final stages, low-energy-loss collisions (vibrational excitation of CF4 and rotational excitation of N2) control the approach of the distribution function to a Maxwellian at room temperature, but multiple non-Maxwellian groups persist throughout most of the thermalization. Optimization of the efficiency of the trap may be achieved by changing the pressures and voltage drops, and also by choosing to operate in a two-stage mode. The model allows quantitative comparisons and tests of optimization, as well as the development of other properties.

  7. A Monte Carlo reflectance model for soil surfaces with three-dimensional structure

    Science.gov (United States)

    Cooper, K. D.; Smith, J. A.

    1985-01-01

    A Monte Carlo soil reflectance model has been developed to study the effect of macroscopic surface irregularities larger than the wavelength of the incident flux. The model treats incoherent multiple scattering from Lambertian facets distributed on a periodic surface. The resulting bidirectional reflectance distribution functions are non-Lambertian and compare well with experimental trends reported in the literature. Examples showing the coupling of the Monte Carlo soil model to an adding bidirectional canopy reflectance model are also given.

  8. Quantum Monte Carlo simulation

    OpenAIRE

    Wang, Yazhen

    2011-01-01

    Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...

  9. Monte Carlo transition probabilities

    OpenAIRE

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  10. Optical Monte Carlo modeling of a true portwine stain anatomy

    Science.gov (United States)

    Barton, Jennifer K.; Pfefer, T. Joshua; Welch, Ashley J.; Smithies, Derek J.; Nelson, Jerry; van Gemert, Martin J.

    1998-04-01

    A unique Monte Carlo program capable of accommodating an arbitrarily complex geometry was used to determine the energy deposition in a true port wine stain anatomy. Serial histologic sections taken from a biopsy of a dark red, laser therapy resistant stain were digitized and used to create the program input for simulation at wavelengths of 532 and 585 nm. At both wavelengths, the greatest energy deposition occurred in the superficial blood vessels, and subsequently decreased with depth as the laser beam was attenuated. However, more energy was deposited in the epidermis and superficial blood vessels at 532 nm than at 585 nm.

  11. Monte Carlo simulations of the HP model (the "Ising model" of protein folding)

    Science.gov (United States)

    Li, Ying Wai; Wüst, Thomas; Landau, David P.

    2011-09-01

    Using Wang-Landau sampling with suitable Monte Carlo trial moves (pull moves and bond-rebridging moves combined) we have determined the density of states and thermodynamic properties for a short sequence of the HP protein model. For free chains these proteins are known to first undergo a collapse "transition" to a globule state followed by a second "transition" into a native state. When placed in the proximity of an attractive surface, there is a competition between surface adsorption and folding that leads to an intriguing sequence of "transitions". These transitions depend upon the relative interaction strengths and are largely inaccessible to "standard" Monte Carlo methods.
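    The accept-and-update structure of Wang-Landau sampling can be illustrated on a much simpler system than the HP model; the sketch below applies it to a small 2D Ising lattice with single-spin flips instead of pull and bond-rebridging moves, and uses a crude fixed schedule for reducing the modification factor rather than a proper histogram-flatness check.

      import numpy as np

      rng = np.random.default_rng(4)
      L = 8
      spins = rng.choice([-1, 1], size=(L, L))

      def energy(s):
          return -int(np.sum(s * np.roll(s, 1, 0) + s * np.roll(s, 1, 1)))

      log_g = {}     # running estimate of ln(density of states), filled on the fly
      hist = {}
      f = 1.0        # ln of the modification factor
      E = energy(spins)
      for sweep in range(200_000):
          i, j = rng.integers(L), rng.integers(L)
          dE = 2 * spins[i, j] * (spins[(i - 1) % L, j] + spins[(i + 1) % L, j]
                                  + spins[i, (j - 1) % L] + spins[i, (j + 1) % L])
          E_new = E + dE
          # Accept with probability min(1, g(E)/g(E_new)).
          if log_g.get(E, 0.0) - log_g.get(E_new, 0.0) > np.log(rng.random()):
              spins[i, j] *= -1
              E = E_new
          log_g[E] = log_g.get(E, 0.0) + f      # update ln g(E) and the visit histogram
          hist[E] = hist.get(E, 0) + 1
          if sweep % 20_000 == 19_999:          # crude stand-in for a flatness check
              f *= 0.5
              hist.clear()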

  12. Monte Carlo simulation of magnetization switching in a Heisenberg model for small ferromagnetic particles

    OpenAIRE

    Hinzke, Denise; Nowak, Ulrich

    1999-01-01

    Using Monte Carlo methods we investigate the thermally activated magnetization switching of small ferromagnetic particles driven by an external magnetic field. For low uniaxial anisotropy one expects that the spins rotate coherently while for sufficiently large anisotropy the reversal should be due to nucleation. The latter case has been investigated extensively by Monte Carlo simulation of corresponding Ising models. In order to study the crossover from coherent rotation to nucleation we use...

  13. Microscopic imaging through turbid media Monte Carlo modeling and applications

    CERN Document Server

    Gu, Min; Deng, Xiaoyuan

    2015-01-01

    This book provides a systematic introduction to the principles of microscopic imaging through tissue-like turbid media in terms of Monte-Carlo simulation. It describes various gating mechanisms based on the physical differences between the unscattered and scattered photons and method for microscopic image reconstruction, using the concept of the effective point spread function. Imaging an object embedded in a turbid medium is a challenging problem in physics as well as in biophotonics. A turbid medium surrounding an object under inspection causes multiple scattering, which degrades the contrast, resolution and signal-to-noise ratio. Biological tissues are typically turbid media. Microscopic imaging through a tissue-like turbid medium can provide higher resolution than transillumination imaging in which no objective is used. This book serves as a valuable reference for engineers and scientists working on microscopy of tissue turbid media.

  14. Kinetic Monte Carlo modelling of neutron irradiation damage in iron

    Energy Technology Data Exchange (ETDEWEB)

    Gamez, L. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Departamento de Fisica Aplicada, ETSII, UPM, Madrid (Spain)], E-mail: linarejos.gamez@upm.es; Martinez, E. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Lawrence Livermore National Laboratory, LLNL, CA 94550 (United States); Perlado, J.M.; Cepas, P. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Caturla, M.J. [Departamento de Fisica Aplicada, Universidad de Alicante, Alicante (Spain); Victoria, M. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Marian, J. [Lawrence Livermore National Laboratory, LLNL, CA 94550 (United States); Arevalo, C. [Instituto de Fusion Nuclear, UPM, Madrid (Spain); Hernandez, M.; Gomez, D. [CIEMAT, Madrid (Spain)

    2007-10-15

    Ferritic steels (FeCr-based alloys) are key materials needed to fulfill the requirements expected in future nuclear fusion facilities, both for magnetic and inertial confinement, as well as in advanced fission reactors (GIV) and transmutation systems. Research in this field is currently a critical aspect of the European research program and abroad. Experimental and multiscale simulation methodologies are advancing hand in hand in increasing the knowledge of materials performance. At DENIM, progress is being made on specific parts of this linked simulation methodology, both for defect energetics and diffusion and for dislocation dynamics. In this study, results obtained from kinetic Monte Carlo simulations of neutron-irradiated Fe under different conditions are presented, using modified ad hoc parameters. Significant agreement with experimental measurements has been found for some of the parameterizations and mechanisms considered. The results of these simulations are discussed and compared with previous calculations.

  15. Parametric links among Monte Carlo, phase-field, and sharp-interface models of interfacial motion.

    Science.gov (United States)

    Liu, Pu; Lusk, Mark T

    2002-12-01

    Parametric links are made among three mesoscale simulation paradigms: phase-field, sharp-interface, and Monte Carlo. A two-dimensional, square-lattice, spin-1/2 Ising model is considered for the Monte Carlo method, for which an exact solution for the interfacial free energy is known. The Monte Carlo mobility is calibrated as a function of temperature using Glauber kinetics. A standard asymptotic analysis relates the phase-field and sharp-interface parameters, and this allows the phase-field and Monte Carlo parameters to be linked. The result is derived without bulk effects but is then applied to a set of simulations with the bulk driving force included. An error analysis identifies the domain over which the parametric relationships are accurate.
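    The Glauber kinetics used for the mobility calibration amount to single-spin flips accepted with probability 1/(1 + exp(dE/T)); a minimal sweep for a spin-1/2 Ising lattice with an initially flat interface is sketched below. The lattice size, temperature and interface geometry are illustrative choices, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      L, T = 64, 1.5                       # lattice size and temperature (J = k_B = 1)
      spins = np.ones((L, L), dtype=int)
      spins[:, L // 2:] = -1               # flat interface whose motion probes the mobility

      def glauber_sweep(s):
          for _ in range(s.size):
              i, j = rng.integers(L), rng.integers(L)
              h = (s[(i - 1) % L, j] + s[(i + 1) % L, j]
                   + s[i, (j - 1) % L] + s[i, (j + 1) % L])
              dE = 2 * s[i, j] * h
              # Glauber flip probability 1 / (1 + exp(dE / T))
              if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):
                  s[i, j] *= -1

      for _ in range(100):
          glauber_sweep(spins)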

  16. Monte Carlo modeling of ultrasound probes for image guided radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Bazalova-Carter, Magdalena, E-mail: bazalova@uvic.ca [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 2Y2 (Canada); Schlosser, Jeffrey [SoniTrack Systems, Inc., Palo Alto, California 94304 (United States); Chen, Josephine [Department of Radiation Oncology, UCSF, San Francisco, California 94143 (United States); Hristov, Dimitre [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2015-10-15

    Purpose: To build Monte Carlo (MC) models of two ultrasound (US) probes and to quantify the effect of beam attenuation due to the US probes for radiation therapy delivered under real-time US image guidance. Methods: MC models of two Philips US probes, an X6-1 matrix-array transducer and a C5-2 curved-array transducer, were built based on their megavoltage (MV) CT images acquired in a Tomotherapy machine with a 3.5 MV beam in the EGSnrc, BEAMnrc, and DOSXYZnrc codes. Mass densities in the probes were assigned based on an electron density calibration phantom consisting of cylinders with mass densities between 0.2 and 8.0 g/cm^3. Beam attenuation due to the US probes in horizontal (for both probes) and vertical (for the X6-1 probe) orientation was measured in a solid water phantom for 6 and 15 MV (15 × 15) cm^2 beams with a 2D ionization chamber array and radiographic films at 5 cm depth. The MC models of the US probes were validated by comparison of the measured dose distributions and dose distributions predicted by MC. Attenuation of depth dose in the (15 × 15) cm^2 beams and small circular beams due to the presence of the probes was assessed by means of MC simulations. Results: The 3.5 MV CT number to mass density calibration curve was found to be linear with R^2 > 0.99. The maximum mass densities in the X6-1 and C5-2 probes were found to be 4.8 and 5.2 g/cm^3, respectively. Dose profile differences between MC simulations and measurements of less than 3% for US probes in horizontal orientation were found, with the exception of the penumbra region. The largest 6% dose difference was observed in dose profiles of the X6-1 probe placed in vertical orientation, which was attributed to inadequate modeling of the probe cable. Gamma analysis of the simulated and measured doses showed that over 96% of measurement points passed the 3%/3 mm criteria for both probes placed in horizontal orientation and for the X6-1 probe in vertical orientation. The

  17. The Virtual Monte Carlo

    CERN Document Server

    Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas

    2003-01-01

    The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.

  18. A new Monte Carlo simulation model for laser transmission in smokescreen based on MATLAB

    Science.gov (United States)

    Lee, Heming; Wang, Qianqian; Shan, Bin; Li, Xiaoyang; Gong, Yong; Zhao, Jing; Peng, Zhong

    2016-11-01

    A new Monte Carlo simulation model of laser transmission in a smokescreen is proposed in this paper. In the traditional Monte Carlo simulation model, the radius of the particles is set to a single value and the initial direction cosine of the photons is also fixed, which yields only an approximate result. The new model is implemented in MATLAB and can simulate laser transmittance in a smokescreen with particles of different sizes, and its output is closer to real scenarios. In order to alleviate the influence of laser divergence while traveling in the air, we changed the initial direction cosine of the photons with respect to the traditional Monte Carlo model. The simulation results for smoke with mixed particle radii agree with the transmittance measured under the same experimental conditions, with a 5.42% error rate.
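    A stripped-down version of such a photon-transport loop with a mixed particle-size distribution is sketched below; the smoke thickness, number density, albedo, log-normal radius distribution, geometric extinction cross-section and isotropic scattering are all illustrative simplifications rather than the parameters of the MATLAB model.

      import numpy as np

      rng = np.random.default_rng(6)

      thickness = 2.0            # smokescreen thickness, m (illustrative)
      number_density = 1e11      # particles per m^3 (illustrative)
      albedo = 0.7               # single-scattering albedo (illustrative)
      radii_m = rng.lognormal(mean=np.log(2e-6), sigma=0.5, size=100_000)   # mixed radii

      def transmittance(n_photons=20_000):
          transmitted = 0
          for _ in range(n_photons):
              z, mu = 0.0, 1.0                        # photon enters along +z
              while True:
                  r = rng.choice(radii_m)             # particle size met on this step
                  mfp = 1.0 / (number_density * np.pi * r * r)   # geometric cross-section
                  z += mu * rng.exponential(mfp)
                  if z >= thickness:
                      transmitted += 1
                      break
                  if z < 0.0 or rng.random() > albedo:
                      break                           # escapes backwards or is absorbed
                  mu = rng.uniform(-1.0, 1.0)         # isotropic scattering (simplification)
          return transmitted / n_photons

      print("simulated transmittance:", transmittance())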

  19. Monte Carlo Hamiltonian: Linear Potentials

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER

    2002-01-01

    We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2; and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.

  20. Proton Upset Monte Carlo Simulation

    Science.gov (United States)

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  1. Z_3 Polyakov Loop Models and Inverse Monte-Carlo Methods

    CERN Document Server

    Wozar, Christian; Uhlmann, Sebastian; Wipf, Andreas; Heinzl, Thomas

    2007-01-01

    We study effective Polyakov loop models for SU(3) Yang-Mills theory at finite temperature. A comprehensive mean field analysis of the phase diagram is carried out and compared to the results obtained from Monte-Carlo simulations. We find a rich phase structure including ferromagnetic and antiferromagnetic phases. Due to the presence of a tricritical point the mean field approximation agrees very well with the numerical data. Critical exponents associated with second-order transitions coincide with those of the Z_3 Potts model. Finally, we employ inverse Monte-Carlo methods to determine the effective couplings in order to match the effective models to Yang-Mills theory.

  2. NRMC - A GPU code for N-Reverse Monte Carlo modeling of fluids in confined media

    Science.gov (United States)

    Sánchez-Gil, Vicente; Noya, Eva G.; Lomba, Enrique

    2017-08-01

    NRMC is a parallel code for performing N-Reverse Monte Carlo modeling of fluids in confined media [V. Sánchez-Gil, E.G. Noya, E. Lomba, J. Chem. Phys. 140 (2014) 024504]. This method is an extension of the usual Reverse Monte Carlo method to obtain structural models of confined fluids compatible with experimental diffraction patterns, specifically designed to overcome the problem of slow diffusion that can appear under conditions of tight confinement. Most of the computational time in N-Reverse Monte Carlo modeling is spent in the evaluation of the structure factor for each trial configuration, a calculation that can be easily parallelized. Implementation of the structure factor evaluation in NVIDIA® CUDA so that the code can be run on GPUs leads to a speed up of up to two orders of magnitude.

  3. Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation

    NARCIS (Netherlands)

    Machguth, H.; Purves, R.S.; Oerlemans, J.; Hoelzle, M.; Paul, F.

    2008-01-01

    By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was tun

  5. Monte Carlo Option Pricing

    Directory of Open Access Journals (Sweden)

    Cecilia Maya

    2004-12-01

    Full Text Available The Monte Carlo method is applied to several cases of financial option valuation. The method yields a good approximation when its accuracy is compared with that of other numerical methods. The estimate produced by the crude Monte Carlo version can be made even more precise by resorting to variance reduction techniques, among which the antithetic variate and the control variate are suggested. However, these techniques require greater computational effort, so they must be evaluated not only in terms of their precision but also of their efficiency.
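    A crude Monte Carlo estimate of a European call under Black-Scholes dynamics, together with the antithetic-variate version mentioned in the abstract, can be sketched as follows; the contract parameters are illustrative and the control-variate variant is omitted for brevity.

      import numpy as np

      rng = np.random.default_rng(7)
      s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative contract
      n = 100_000

      z = rng.standard_normal(n)
      drift = (r - 0.5 * sigma**2) * t
      disc = np.exp(-r * t)

      # Crude Monte Carlo estimate of the call price
      st = s0 * np.exp(drift + sigma * np.sqrt(t) * z)
      crude = disc * np.maximum(st - k, 0.0).mean()

      # Antithetic variates: reuse each draw with its sign flipped and average
      # the paired payoffs, which reduces the variance of the estimator.
      st_anti = s0 * np.exp(drift - sigma * np.sqrt(t) * z)
      paired = 0.5 * (np.maximum(st - k, 0.0) + np.maximum(st_anti - k, 0.0))
      antithetic = disc * paired.mean()

      print(crude, antithetic)   # both approach the Black-Scholes value (about 10.45 here)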

  6. Monte Carlo and nonlinearities

    CERN Document Server

    Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian

    2016-01-01

    The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...

  7. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
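    The π example mentioned in the outline fits in a few lines: sample points uniformly in the unit square, count the fraction that falls inside the quarter circle, and use the Central Limit Theorem for the error bar. The sample size below is arbitrary.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 1_000_000
      x, y = rng.random(n), rng.random(n)
      inside = (x * x + y * y) <= 1.0           # hits inside the quarter circle
      pi_hat = 4.0 * inside.mean()
      err = 4.0 * inside.std() / np.sqrt(n)     # Central Limit Theorem error estimate
      print(f"pi ~ {pi_hat:.4f} +/- {err:.4f}")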

  8. Markov chain Monte Carlo methods for state-space models with point process observations.

    Science.gov (United States)

    Yuan, Ke; Girolami, Mark; Niranjan, Mahesan

    2012-06-01

    This letter considers how a number of modern Markov chain Monte Carlo (MCMC) methods can be applied for parameter estimation and inference in state-space models with point process observations. We quantified the efficiencies of these MCMC methods on synthetic data, and our results suggest that the Riemannian manifold Hamiltonian Monte Carlo method offers the best performance. We further compared such a method with a previously tested variational Bayes method on two experimental data sets. Results indicate similar performance on the large data sets and superior performance on small ones. The work offers an extensive suite of MCMC algorithms evaluated on an important class of models for physiological signal analysis.

  9. FREYA-a new Monte Carlo code for improved modeling of fission chains

    Energy Technology Data Exchange (ETDEWEB)

    Hagmann, C A; Randrup, J; Vogt, R L

    2012-06-12

    A new simulation capability for modeling individual fission events and chains and the transport of fission products in materials is presented. FREYA (Fission Reaction Event Yield Algorithm) is a Monte Carlo code for generating fission events that provides correlated kinematic information for prompt neutrons, gammas, and fragments. As a standalone code, FREYA calculates quantities such as multiplicity-energy, angular, and gamma-neutron energy sharing correlations. To study materials with multiplication, shielding effects, and detectors, we have integrated FREYA into the general-purpose Monte Carlo code MCNP. This new tool will allow more accurate modeling of detector responses, including correlations, and the development of SNM detectors with increased sensitivity.

  10. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  11. Model unspecific search in CMS. Treatment of insufficient Monte Carlo statistics

    Energy Technology Data Exchange (ETDEWEB)

    Lieb, Jonas; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In 2015, the CMS detector recorded proton-proton collisions at an unprecedented center of mass energy of √(s)=13 TeV. The Model Unspecific Search in CMS (MUSiC) offers an analysis approach of these data which is complementary to dedicated analyses: By taking all produced final states into consideration, MUSiC is sensitive to indicators of new physics appearing in final states that are usually not investigated. In a two step process, MUSiC first classifies events according to their physics content and then searches kinematic distributions for the most significant deviations between Monte Carlo simulations and observed data. Such a general approach introduces its own set of challenges. One of them is the treatment of situations with insufficient Monte Carlo statistics. Complementing introductory presentations on the MUSiC event selection and classification, this talk will present a method of dealing with the issue of low Monte Carlo statistics.

  12. Benchmark calculation of no-core Monte Carlo shell model in light nuclei

    CERN Document Server

    Abe, T; Otsuka, T; Shimizu, N; Utsuno, Y; Vary, J P; 10.1063/1.3584062

    2011-01-01

    The Monte Carlo shell model is applied for the first time to no-core shell-model calculations in light nuclei. The results are compared with those of the full configuration interaction. The agreement between them is within a few percent at most.

  13. A tutorial introduction to Bayesian inference for stochastic epidemic models using Markov chain Monte Carlo methods.

    Science.gov (United States)

    O'Neill, Philip D

    2002-01-01

    Recent Bayesian methods for the analysis of infectious disease outbreak data using stochastic epidemic models are reviewed. These methods rely on Markov chain Monte Carlo methods. Both temporal and non-temporal data are considered. The methods are illustrated with a number of examples featuring different models and datasets.

  14. Universality of the Ising and the S=1 model on Archimedean lattices: A Monte Carlo determination

    Science.gov (United States)

    Malakis, A.; Gulpinar, G.; Karaaslan, Y.; Papakonstantinou, T.; Aslan, G.

    2012-03-01

    The S=1/2 and S=1 Ising models are studied by efficient Monte Carlo schemes on the (3,4,6,4) and the (3,3,3,3,6) Archimedean lattices. The algorithms used, a hybrid Metropolis-Wolff algorithm and a parallel tempering protocol, are briefly described and compared with the simple Metropolis algorithm. Accurate Monte Carlo data are produced at the exact critical temperatures of the Ising model for these lattices. Their finite-size analysis provides, with high accuracy, all critical exponents, which, as expected, are the same as the well-known exact values of the 2D Ising model. A detailed finite-size scaling analysis of our Monte Carlo data for the S=1 model on the same lattices provides very clear evidence that this model also obeys the 2D Ising model critical exponents very well. As a result, we find that recent Monte Carlo simulations and attempts to define an effective dimensionality for the S=1 model on these lattices are misleading. Accurate estimates are obtained for the critical amplitudes of the logarithmic expansions of the specific heat for both models on the two Archimedean lattices.
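    The Wolff part of the hybrid scheme grows and flips a single cluster per update; a standard single-cluster move for the spin-1/2 case on a square lattice is sketched below (the Archimedean-lattice connectivity, the Metropolis steps and the parallel tempering layer of the paper are not reproduced).

      import numpy as np
      from collections import deque

      rng = np.random.default_rng(9)
      L, T = 32, 2.269                       # near the exact square-lattice critical point
      spins = rng.choice([-1, 1], size=(L, L))
      p_add = 1.0 - np.exp(-2.0 / T)         # Wolff bond probability for J = 1

      def wolff_update(s):
          i, j = rng.integers(L), rng.integers(L)
          seed = s[i, j]
          cluster = {(i, j)}
          frontier = deque([(i, j)])
          while frontier:
              x, y = frontier.popleft()
              for nx, ny in ((x - 1) % L, y), ((x + 1) % L, y), (x, (y - 1) % L), (x, (y + 1) % L):
                  if (nx, ny) not in cluster and s[nx, ny] == seed and rng.random() < p_add:
                      cluster.add((nx, ny))
                      frontier.append((nx, ny))
          for x, y in cluster:               # flip the whole cluster at once
              s[x, y] = -seed

      for _ in range(1000):
          wolff_update(spins)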

  15. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  16. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In house MATLAB scripts translating the treatment plan parameters to Monte Carlo...

  17. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled

  18. Preliminary Monte Carlo Results for the Three-Dimensional Holstein Model

    Institute of Scientific and Technical Information of China (English)

    吴焰立; 刘川; 罗强

    2003-01-01

    Monte Carlo simulations are used to study the three-dimensional Holstein model. The relationship between the band filling and the chemical potential is obtained for various phonon frequencies and temperatures. The energy of a single electron or a hole is also calculated as a function of the lattice momenta.

  19. An NCME Instructional Module on Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods

    Science.gov (United States)

    Kim, Jee-Seon; Bolt, Daniel M.

    2007-01-01

    The purpose of this ITEMS module is to provide an introduction to Markov chain Monte Carlo (MCMC) estimation for item response models. A brief description of Bayesian inference is followed by an overview of the various facets of MCMC algorithms, including discussion of prior specification, sampling procedures, and methods for evaluating chain…

  20. Treatment of input uncertainty in hydrologic modeling: Doing hydrology backward with Markov chain Monte Carlo simulation

    NARCIS (Netherlands)

    Vrugt, J.A.; Braak, ter C.J.F.; Clark, M.P.; Hyman, J.M.; Robinson, B.A.

    2008-01-01

    There is increasing consensus in the hydrologic literature that an appropriate framework for streamflow forecasting and simulation should include explicit recognition of forcing and parameter and model structural error. This paper presents a novel Markov chain Monte Carlo (MCMC) sampler, entitled di

  1. Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation

    NARCIS (Netherlands)

    Minasny, B.; Vrugt, J.A.; McBratney, A.B.

    2011-01-01

    This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior distributi

  3. A study of the XY model by the Monte Carlo method

    Science.gov (United States)

    Suranyi, Peter; Harten, Paul

    1987-01-01

    The massively parallel processor is used to perform Monte Carlo simulations for the two dimensional XY model on lattices of sizes up to 128 x 128. A parallel random number generator was constructed, finite size effects were studied, and run times were compared with those on a CRAY X-MP supercomputer.

  4. Generic Form of Bayesian Monte Carlo For Models With Partial Monotonicity

    NARCIS (Netherlands)

    Rajabalinejad, M.

    2012-01-01

    This paper presents a generic method for the safety assessments of models with partial monotonicity. For this purpose, a Bayesian interpolation method is developed and implemented in the Monte Carlo process. The integrated approach is a generalization of the recently developed techniques used in safet

  5. An Evaluation of a Markov Chain Monte Carlo Method for the Rasch Model.

    Science.gov (United States)

    Kim, Seock-Ho

    2001-01-01

    Examined the accuracy of the Gibbs sampling Markov chain Monte Carlo procedure for estimating item and person (theta) parameters in the one-parameter logistic model. Analyzed four empirical datasets using the Gibbs sampling, conditional maximum likelihood, marginal maximum likelihood, and joint maximum likelihood methods. Discusses the conditions…

  6. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...

  7. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    Science.gov (United States)

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  8. Generic form of Bayesian Monte Carlo for models with partial monotonicity

    NARCIS (Netherlands)

    Rajabalinejad, M.; Spitas, C.

    2012-01-01

    This paper presents a generic method for the safety assessments of models with partial monotonicity. For this purpose, a Bayesian interpolation method is developed and implemented in the Monte Carlo process. The integrated approach is a generalization of the recently developed techniques used in safet

  9. LASER-DOPPLER VELOCIMETRY AND MONTE-CARLO SIMULATIONS ON MODELS FOR BLOOD PERFUSION IN TISSUE

    NARCIS (Netherlands)

    DEMUL, FFM; KOELINK, MH; KOK, ML; HARMSMA, PJ; GREVE, J; GRAAFF, R; AARNOUDSE, JG

    1995-01-01

    Laser Doppler flow measurements and Monte Carlo simulations on small blood perfusion flow models at 780 nm are presented and compared. The dimensions of the optical sample volume are investigated as functions of the distance of the laser to the detector and as functions of the angle of penetration o

  10. Surprising convergence of the Monte Carlo renormalization group for the three-dimensional Ising model.

    Science.gov (United States)

    Ron, Dorit; Brandt, Achi; Swendsen, Robert H

    2017-05-01

    We present a surprisingly simple approach to high-accuracy calculations of the critical properties of the three-dimensional Ising model. The method uses a modified block-spin transformation with a tunable parameter to improve convergence in the Monte Carlo renormalization group. The block-spin parameter must be tuned differently for different exponents to produce optimal convergence.
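    A plain majority-rule block-spin transformation, the unmodified baseline that the paper's tunable variant generalizes, can be sketched in 2D as follows; the 2x2 blocks and the probabilistic tie-break parameter are illustrative (the paper works with 3D blocks and a specifically tuned rule).

      import numpy as np

      rng = np.random.default_rng(10)

      def block_spin(s, p_tie=0.5):
          """Majority-rule 2x2 block-spin transformation of a 2D Ising configuration.
          Ties are broken up-spin with probability p_tie (illustrative tie-break)."""
          half = s.shape[0] // 2
          blocks = s.reshape(half, 2, half, 2).sum(axis=(1, 3))   # sum over each 2x2 block
          out = np.sign(blocks)
          ties = out == 0
          out[ties] = np.where(rng.random(ties.sum()) < p_tie, 1, -1)
          return out.astype(int)

      spins = rng.choice([-1, 1], size=(64, 64))
      coarse = block_spin(spins)      # 32 x 32 blocked configuration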

  11. LMC: Logarithmantic Monte Carlo

    Science.gov (United States)

    Mantz, Adam B.

    2017-06-01

    LMC is a Markov Chain Monte Carlo engine in Python that implements adaptive Metropolis-Hastings and slice sampling, as well as the affine-invariant method of Goodman & Weare, in a flexible framework. It can be used for simple problems, but the main use case is problems where expensive likelihood evaluations are provided by less flexible third-party software, which benefit from parallelization across many nodes at the sampling level. The parallel/adaptive methods use communication through MPI, or alternatively by writing/reading files, and mostly follow the approaches pioneered by CosmoMC (ascl:1106.025).

  12. Monte Carlo modeling of a Novalis Tx Varian 6 MV with HD-120 multileaf collimator.

    Science.gov (United States)

    Vazquez-Quino, Luis Alberto; Massingill, Brian; Shi, Chengyu; Gutierrez, Alonso; Esquivel, Carlos; Eng, Tony; Papanikolaou, Nikos; Stathakis, Sotirios

    2012-09-06

    A Monte Carlo model of the Novalis Tx linear accelerator equipped with high-definition multileaf collimator (HD-120 HD-MLC) was commissioned using ionization chamber measurements in water. All measurements in water were performed using a liquid filled ionization chamber. Film measurements were made using EDR2 film in solid water. Open rectangular fields defined by the jaws or the HD-MLC were used for comparison against measurements. Furthermore, inter- and intraleaf leakage calculated by the Monte Carlo model was compared against film measurements. The statistical uncertainty of the Monte Carlo calculations was less than 1% for all simulations. Results for all regular field sizes show an excellent agreement with commissioning data (percent depth-dose curves and profiles), well within 1% of difference in the relative dose and 1 mm distance to agreement. The computed leakage through HD-MLCs shows good agreement with film measurements. The Monte Carlo model developed in this study accurately represents the new Novalis Tx Varian linac with HD-MLC and can be used for reliable patient dose calculations.

  13. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg–Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  14. McSCIA: application of the equivalence theorem in a Monte Carlo radiative transfer model for spherical shell

    NARCIS (Netherlands)

    Spada, F.M.; Krol, M.C.|info:eu-repo/dai/nl/078760410; Stammes, P.

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth’s radius, and can

  15. McSCIA: application of the equivalence theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    NARCIS (Netherlands)

    Spada, F.; Krol, M.C.; Stammes, P.

    2006-01-01

    A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAmachy) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can

  16. Single-cluster-update Monte Carlo method for the random anisotropy model

    Science.gov (United States)

    Rößler, U. K.

    1999-06-01

    A Wolff-type cluster Monte Carlo algorithm for random magnetic models is presented. The algorithm is demonstrated to significantly reduce the critical slowing down for planar random anisotropy models with weak anisotropy strength. Dynamic exponents z of the cluster algorithm are estimated for models with a ratio of anisotropy to exchange constant of D/J=1.0 on cubic lattices in three dimensions. For these models, critical exponents are derived from a finite-size scaling analysis.

  17. Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave

    Science.gov (United States)

    Yasuda, Shugo

    2017-02-01

    A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of the bacteria, is coupled with a finite volume method that calculates the macroscopic transport of the chemical cues in the environment. The simulation method successfully reproduces the traveling population wave of bacteria that was observed experimentally and reveals the microscopic dynamics of the bacteria coupled with the macroscopic transport of the chemical cues and of the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic behavior for small Knudsen numbers is numerically verified.
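    A one-dimensional run-and-tumble sketch of the bacterial part of such a simulation is given below; here the chemoattractant profile is fixed and the tumbling-rate response to the perceived gradient is a simple illustrative bias, whereas the paper couples the bacteria to a finite volume solver for the chemical field.

      import numpy as np

      rng = np.random.default_rng(11)

      def chemoattractant(x):
          return np.exp(-(x - 5.0) ** 2)           # fixed, illustrative 1D chemical profile

      n, dt, speed = 1000, 0.01, 1.0
      x = np.zeros(n)                               # bacteria start at the channel inlet
      direction = rng.choice([-1.0, 1.0], size=n)
      base_rate = 1.0                               # tumbling rate in a uniform environment

      for step in range(20_000):
          grad = (chemoattractant(x + 1e-3) - chemoattractant(x - 1e-3)) / 2e-3
          # Runs up the gradient are extended by lowering the tumbling rate.
          rate = base_rate * np.clip(1.0 - 0.5 * direction * grad, 0.1, 2.0)
          tumble = rng.random(n) < rate * dt        # which bacteria tumble in this step
          direction[tumble] = rng.choice([-1.0, 1.0], size=int(tumble.sum()))
          x += speed * direction * dt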

  18. Monte-Carlo Inversion of Travel-Time Data for the Estimation of Weld Model Parameters

    Science.gov (United States)

    Hunter, A. J.; Drinkwater, B. W.; Wilcox, P. D.

    2011-06-01

    The quality of ultrasonic array imagery is adversely affected by uncompensated variations in the medium properties. A method for estimating the parameters of a general model of an inhomogeneous anisotropic medium is described. The model is comprised of a number of homogeneous sub-regions with unknown anisotropy. Bayesian estimation of the unknown model parameters is performed via a Monte-Carlo Markov chain using the Metropolis-Hastings algorithm. Results are demonstrated using simulated weld data.

  19. Monte Carlo study of single-barrier structure based on exclusion model full counting statistics

    Institute of Scientific and Technical Information of China (English)

    Chen Hua; Du Lei; Qu Cheng-Li; He Liang; Chen Wen-Hao; Sun Peng

    2011-01-01

    In contrast to the usual theoretical work on full counting statistics, which focuses on computing higher-order cumulants from the cumulant generating function in electrical structures, a Monte Carlo simulation of a single-barrier structure is performed to obtain time series for two types of widely applicable exclusion models: the counter-flows model and the tunnel model. With high-order spectrum analysis in Matlab, the validity of the Monte Carlo methods is demonstrated through the first four cumulants extracted from the time series, which are in agreement with those from the cumulant generating function. After comparing the counter-flows model and the tunnel model in a single-barrier structure, it is found that the essential difference between them lies in the strict enforcement of the Pauli principle in the former and in its statistical treatment in the latter.

  20. Development of perturbation Monte Carlo methods for polarized light transport in a discrete particle scattering model.

    Science.gov (United States)

    Nguyen, Jennifer; Hayakawa, Carole K; Mourant, Judith R; Venugopalan, Vasan; Spanier, Jerome

    2016-05-01

    We present a polarization-sensitive, transport-rigorous perturbation Monte Carlo (pMC) method to model the impact of optical property changes on reflectance measurements within a discrete particle scattering model. The model consists of three log-normally distributed populations of Mie scatterers that approximate biologically relevant cervical tissue properties. Our method provides reflectance estimates for perturbations across wavelength and/or scattering model parameters. We test our pMC model performance by perturbing across number densities and mean particle radii, and compare pMC reflectance estimates with those obtained from conventional Monte Carlo simulations. These tests allow us to explore different factors that control pMC performance and to evaluate the gains in computational efficiency that our pMC method provides.

  1. High-resolution and Monte Carlo additions to the SASKTRAN radiative transfer model

    Directory of Open Access Journals (Sweden)

    D. J. Zawada

    2015-06-01

    Full Text Available The Optical Spectrograph and InfraRed Imaging System (OSIRIS instrument on board the Odin spacecraft has been measuring limb-scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high-spatial-resolution mode and a Monte Carlo mode. The high-spatial-resolution mode is a successive-orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2 %. As an example case for both models, Odin–OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high-resolution model. A systematic bias of up to 4 % in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. The bias is largest when the sun is near the horizon and the solar scattering angle is far from 90°. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin–OSIRIS geometries.

  2. High resolution and Monte Carlo additions to the SASKTRAN radiative transfer model

    Directory of Open Access Journals (Sweden)

    D. J. Zawada

    2015-03-01

    Full Text Available The OSIRIS instrument on board the Odin spacecraft has been measuring limb scattered radiance since 2001. The vertical radiance profiles measured as the instrument nods are inverted, with the aid of the SASKTRAN radiative transfer model, to obtain vertical profiles of trace atmospheric constituents. Here we describe two newly developed modes of the SASKTRAN radiative transfer model: a high spatial resolution mode, and a Monte Carlo mode. The high spatial resolution mode is a successive orders model capable of modelling the multiply scattered radiance when the atmosphere is not spherically symmetric; the Monte Carlo mode is intended for use as a highly accurate reference model. It is shown that the two models agree in a wide variety of solar conditions to within 0.2%. As an example case for both models, Odin-OSIRIS scans were simulated with the Monte Carlo model and retrieved using the high resolution model. A systematic bias of up to 4% in retrieved ozone number density between scans where the instrument is scanning up or scanning down was identified. It was found that calculating the multiply scattered diffuse field at five discrete solar zenith angles is sufficient to eliminate the bias for typical Odin-OSIRIS geometries.

  3. TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation

    Energy Technology Data Exchange (ETDEWEB)

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4® on the ITER A-lite model was carried out; the neutron flux, the nuclear heating in the blankets and the tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. Good agreement between MCNP and TRIPOLI-4® is shown; the discrepancies are mostly within the statistical error.

  4. Monte Carlo Simulation of the Potts Model on a Dodecagonal Quasiperiodic Structure

    Institute of Scientific and Technical Information of China (English)

    WEN Zhang-Bin; HOU Zhi-Lin; FU Xiu-Jun

    2011-01-01

    By means of a Monte Carlo simulation, we study the three-state Potts model on a two-dimensional quasiperiodic structure based on a dodecagonal cluster covering pattern. The critical temperature and exponents are obtained from a finite-size scaling analysis. It is shown that the Potts model on the quasiperiodic lattice belongs to the same universality class as those on periodic ones.

  5. Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models

    Science.gov (United States)

    Si, Lisha; Liao, Xiaoyun; Zhou, Nengji

    2016-12-01

    With the developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking the random-field Ising model with a driving field and the driven bond-diluted Ising model as examples. In comparison with the usual Monte Carlo method, the EMC algorithm exhibits greater simulation efficiency. Based on the short-time dynamic scaling form, both the transition field and critical exponents of the depinning transition are determined accurately via large-scale simulations with lattice sizes up to L = 8912, significantly refining the results in earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled with the exponents β = 0.304(5), ν = 1.32(3), z = 1.12(1), and ζ = 0.90(1), quite different from that of the quenched Edwards-Wilkinson equation.

  6. Monte Carlo modeling of atomic oxygen attack of polymers with protective coatings on LDEF

    Science.gov (United States)

    Banks, Bruce A.; Degroh, Kim K.; Auer, Bruce M.; Gebauer, Linda; Edwards, Jonathan L.

    1993-01-01

    Characterization of the behavior of atomic oxygen interaction with materials on the Long Duration Exposure Facility (LDEF) assists in understanding the mechanisms involved. Thus the reliability of predicting in-space durability of materials based on ground laboratory testing should be improved. A computational model which simulates atomic oxygen interaction with protected polymers was developed using Monte Carlo techniques. Through the use of an assumed mechanistic behavior of atomic oxygen interaction, based on in-space atomic oxygen erosion of unprotected polymers and ground laboratory atomic oxygen interaction with protected polymers, prediction of atomic oxygen interaction with protected polymers on LDEF was accomplished. However, the results of these predictions are not consistent with the observed LDEF results at defect sites in protected polymers. Improved agreement between observed LDEF results and Monte Carlo modeling predictions can be achieved by modifying the atomic oxygen interaction assumptions used in the model. LDEF atomic oxygen undercutting results, modeling assumptions, and implications are presented.

  7. Simulation model based on Monte Carlo method for traffic assignment in local area road network

    Institute of Scientific and Technical Information of China (English)

    Yuchuan DU; Yuanjing GENG; Lijun SUN

    2009-01-01

    For a local area road network, the available traffic data are the flow volumes at the key intersections, not the complete OD matrix. Considering the circumstance characteristics and the data availability of a local area road network, a new model for traffic assignment based on Monte Carlo simulation of intersection turning movements is provided in this paper. Because of its good stability in the temporal sequence, the turning ratio is adopted as the important parameter of this model. The formulation for local area road network assignment problems is proposed on the assumption of random turning behavior. The traffic assignment model based on the Monte Carlo method has been used in traffic analysis for an actual urban road network. The results, comparing surveyed traffic flow data with the flows determined by the model, verify the applicability and validity of the proposed methodology.
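
    A minimal sketch of the idea described above, propagating vehicles through a small network by sampling turning movements from fixed turning ratios, is shown below. The toy network, the turning-ratio values, and the sample size are illustrative assumptions.

```python
import random
from collections import Counter

random.seed(3)

# Hypothetical local network: at each node, a vehicle turns onto the next link
# according to fixed turning ratios (probabilities summing to 1 per node).
turning_ratios = {
    "A": {"B": 0.6, "C": 0.4},
    "B": {"C": 0.5, "EXIT": 0.5},
    "C": {"EXIT": 1.0},
}

def assign(origin, n_vehicles=100_000):
    """Monte Carlo assignment: count how many vehicles traverse each link."""
    link_flow = Counter()
    for _ in range(n_vehicles):
        node = origin
        while node != "EXIT":
            nxt = random.choices(list(turning_ratios[node]),
                                 weights=turning_ratios[node].values())[0]
            link_flow[(node, nxt)] += 1
            node = nxt
    return link_flow

for link, flow in sorted(assign("A").items()):
    print(link, flow)
```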

  8. Density matrix quantum Monte Carlo

    CERN Document Server

    Blunt, N S; Spencer, J S; Foulkes, W M C

    2013-01-01

    This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...

  9. Efficient kinetic Monte Carlo simulation

    Science.gov (United States)

    Schulze, Tim P.

    2008-02-01

    This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented—one that combines the use of inverted-list data structures with rejection Monte Carlo and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
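
    A minimal sketch of rejection kinetic Monte Carlo in the spirit described above, without the inverted-list bookkeeping that makes the paper's algorithm strictly constant-time per event, is given below. The toy rate model, the rate bound, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: N sites, each with a hop rate that depends on its local environment.
N = 10_000
rates = rng.uniform(0.1, 1.0, N)      # site rates, assumed bounded by r_max
r_max = 1.0

t = 0.0
n_events = 0
while n_events < 100_000:
    # time advances for every attempt, accepted or not (rejection KMC)
    t += rng.exponential(1.0 / (N * r_max))
    site = rng.integers(N)
    if rng.random() < rates[site] / r_max:      # accept with probability r_i / r_max
        n_events += 1
        # an accepted event would modify the local environment; here we simply
        # re-draw the rate of the chosen site as a stand-in for that update
        rates[site] = rng.uniform(0.1, 1.0)

print(f"simulated time after {n_events} events: {t:.2f}")
print("mean rate implied by the trajectory:", n_events / (t * N))
```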

  10. MCMini: Monte Carlo on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  11. Cosmological constraints on generalized Chaplygin gas model: Markov Chain Monte Carlo approach

    OpenAIRE

    Xu, Lixin; Lu, Jianbo

    2010-01-01

    We use the Markov Chain Monte Carlo method to investigate global constraints on the generalized Chaplygin gas (GCG) model as the unification of dark matter and dark energy from the latest observational data: the Constitution dataset of type Ia supernovae (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a non-flat universe, the constraint results for the GCG model are, $\Ome...

  12. Direct Monte Carlo Measurement of the Surface Tension in Ising Models

    CERN Document Server

    Hasenbusch, M

    1992-01-01

    I present a cluster Monte Carlo algorithm that gives direct access to the interface free energy of Ising models. The basic idea is to simulate an ensemble that consists of both configurations with periodic and with antiperiodic boundary conditions. A cluster algorithm is provided that efficiently updates this joint ensemble. The interface tension is obtained from the ratio of configurations with periodic and antiperiodic boundary conditions, respectively. The method is tested for the 3-dimensional Ising model.
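
    A minimal sketch of the boundary-flip idea described above is shown below for a small 3D Ising lattice: ordinary Metropolis spin updates are combined with an occasional global move that proposes switching between periodic and antiperiodic boundary conditions in one direction, and the interface free energy is estimated from the ratio of the two visit counts. The lattice size, temperature, and update schedule are illustrative assumptions, and this simple global flip is far less efficient than the cluster scheme of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

L, beta, J = 6, 0.25, 1.0         # small lattice, ordered phase (illustrative)
spins = rng.choice([-1, 1], size=(L, L, L))
b = 1                             # +1: periodic, -1: antiperiodic in z
counts = {1: 0, -1: 0}

def local_field(s, i, j, k, b):
    """Sum of neighbouring spins; bonds crossing the z-seam carry the sign b."""
    h = (s[(i + 1) % L, j, k] + s[(i - 1) % L, j, k]
         + s[i, (j + 1) % L, k] + s[i, (j - 1) % L, k])
    h += s[i, j, (k + 1) % L] * (b if k == L - 1 else 1)
    h += s[i, j, (k - 1) % L] * (b if k == 0 else 1)
    return h

for sweep in range(5000):
    # one Metropolis sweep of single-spin flips
    for _ in range(L ** 3):
        i, j, k = rng.integers(L, size=3)
        dE = 2.0 * J * spins[i, j, k] * local_field(spins, i, j, k, b)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j, k] *= -1
    # global move: propose switching periodic <-> antiperiodic in z
    seam = np.sum(spins[:, :, L - 1] * spins[:, :, 0])
    dE = 2.0 * J * b * seam
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        b = -b
    counts[b] += 1

if counts[1] and counts[-1]:
    F_s = -np.log(counts[-1] / counts[1])    # interface free energy in units of T
    print("F_s / T =", F_s, "  reduced interface tension:", F_s / L ** 2)
```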

  13. Coupled Simulations of Mechanical Deformation and Microstructural Evolution Using Polycrystal Plasticity and Monte Carlo Potts Models

    Energy Technology Data Exchange (ETDEWEB)

    Battaile, C.C.; Buchheit, T.E.; Holm, E.A.; Neilsen, M.K.; Wellman, G.W.

    1999-01-12

    The microstructural evolution of heavily deformed polycrystalline Cu is simulated by coupling a constitutive model for polycrystal plasticity with the Monte Carlo Potts model for grain growth. The effects of deformation on boundary topology and grain growth kinetics are presented. Heavy deformation leads to dramatic strain-induced boundary migration and subsequent grain fragmentation. Grain growth is accelerated in heavily deformed microstructures. The implications of these results for the thermomechanical fatigue failure of eutectic solder joints are discussed.

  14. The MC21 Monte Carlo Transport Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H

    2007-01-09

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.

  15. Numerical Study of Light Transport in Apple Models Based on Monte Carlo Simulations

    Directory of Open Access Journals (Sweden)

    Mohamed Lamine Askoura

    2015-12-01

    Full Text Available This paper reports on the quantification of light transport in apple models using Monte Carlo simulations. To this end, the apple was modeled as a two-layer spherical model including skin and flesh bulk tissues. The optical properties of both tissue types used to generate the Monte Carlo data were collected from the literature, and selected to cover a range of values related to three apple varieties. Two different imaging-tissue setups were simulated in order to show the role of the skin on steady-state backscattering images, spatially-resolved reflectance profiles, and the assessment of flesh optical properties using an inverse nonlinear least squares fitting algorithm. Simulation results suggest that the apple skin cannot be ignored when a Visible/Near-Infrared (Vis/NIR) steady-state imaging setup is used for investigating quality attributes of apples. They also help to improve optical inspection techniques for horticultural products.

  16. Monte Carlo methods for electromagnetics

    CERN Document Server

    Sadiku, Matthew NO

    2009-01-01

    Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...

  17. Forward and adjoint radiance Monte Carlo models for quantitative photoacoustic imaging

    Science.gov (United States)

    Hochuli, Roman; Powell, Samuel; Arridge, Simon; Cox, Ben

    2015-03-01

    In quantitative photoacoustic imaging, the aim is to recover physiologically relevant tissue parameters such as chromophore concentrations or oxygen saturation. Obtaining accurate estimates is challenging due to the non-linear relationship between the concentrations and the photoacoustic images. Nonlinear least squares inversions designed to tackle this problem require a model of light transport, the most accurate of which is the radiative transfer equation. This paper presents a highly scalable Monte Carlo model of light transport that computes the radiance in 2D using a Fourier basis to discretise in angle. The model was validated against a 2D finite element model of the radiative transfer equation, and was used to compute gradients of an error functional with respect to the absorption and scattering coefficient. It was found that adjoint-based gradient calculations were much more robust to inherent Monte Carlo noise than a finite difference approach. Furthermore, the Fourier angular discretisation allowed very efficient gradient calculations as sums of Fourier coefficients. These advantages, along with the high parallelisability of Monte Carlo models, makes this approach an attractive candidate as a light model for quantitative inversion in photoacoustic imaging.

  18. Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors

    Science.gov (United States)

    Kalyvas, N.; Liaparinos, P.

    2014-03-01

    Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors, amongst the most important being the interaction of light. Light attenuation (absorption and scattering) can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and to investigate the dependence of the analytical model optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with particle size (in the region 1-12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.

  19. Markov chain Monte Carlo methods in directed graphical models

    DEFF Research Database (Denmark)

    Højbjerre, Malene

    Directed graphical models represent data possessing a complex dependence structure, and MCMC methods are computer-intensive simulation techniques to approximate high-dimensional intractable integrals, which emerge in such models with incomplete data. MCMC computations in directed graphical models...

  20. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a hierarchy of universes, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations, with 216, 245, or 280 fuel rods respectively, have been studied. The numerical simulations show that the agreement between the SERPENT and MCNP results is within a few tens of pcm.

  1. Monte Carlo analysis of uncertainty propagation in a stratospheric model. 2: Uncertainties due to reaction rates

    Science.gov (United States)

    Stolarski, R. S.; Butler, D. M.; Rundel, R. D.

    1977-01-01

    A concise stratospheric model was used in a Monte Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.

  2. Monte Carlo modeling of spatially complex wrist tissue for the optimization of optical pulse oximeters

    Science.gov (United States)

    Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.

    2017-02-01

    Monte Carlo modeling of photon propagation has been used in the examination of particular areas of the body to further enhance the understanding of light propagation through tissue. This work seeks to improve upon the established simulation methods through more accurate representations of the simulated tissues in the wrist as well as of the characteristics of the light source. The Monte Carlo simulation program was developed using Matlab. Generation of the different tissue domains, such as muscle, vasculature, and bone, was performed in Solidworks, where each domain was saved as a separate .stl file that was read into the program. The light source was altered to account for both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that the use of these more accurate models generates results that more closely match those seen in vivo, and can be used to better guide the design of optical wrist-worn measurement devices.

  3. Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling

    NARCIS (Netherlands)

    Vrugt, J.A.; Diks, C.G.H.; Clark, M.

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In t

  4. Monte Carlo tests of the Rasch model based on scalability coefficients

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Kreiner, Svend

    2010-01-01

    that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence...... and unequal item discrimination, are discussed. The methods are illustrated and motivated using a simulation study and a real data example....

  5. Monte Carlo Tests of Nucleation Concepts in the Lattice Gas Model

    OpenAIRE

    Schmitz, Fabian; Virnau, Peter; Binder, Kurt

    2013-01-01

    The conventional theory of homogeneous and heterogeneous nucleation in a supersaturated vapor is tested by Monte Carlo simulations of the lattice gas (Ising) model with nearest-neighbor attractive interactions on the simple cubic lattice. The theory considers the nucleation process as a slow (quasi-static) cluster (droplet) growth over a free energy barrier $\\Delta F^*$, constructed in terms of a balance of surface and bulk term of a "critical droplet" of radius $R^*$, implying that the rates...

  6. Critical Exponents of the Classical 3D Heisenberg Model A Single-Cluster Monte Carlo Study

    CERN Document Server

    Holm, C; Holm, Christian; Janke, Wolfhard

    1993-01-01

    We have simulated the three-dimensional Heisenberg model on simple cubic lattices, using the single-cluster Monte Carlo update algorithm. The expected pronounced reduction of critical slowing down at the phase transition is verified. This allows simulations on significantly larger lattices than in previous studies and consequently a better control over systematic errors. In one set of simulations we employ the usual finite-size scaling methods to compute the critical exponents $\

  7. Monte Carlo Study of the XY-Model on Sierpiński Carpet

    Science.gov (United States)

    Mitrović, Božidar; Przedborski, Michelle A.

    2014-09-01

    We have performed a Monte Carlo (MC) study of the classical XY-model on a Sierpiński carpet, which is a planar fractal structure with infinite order of ramification and fractal dimension 1.8928. We employed the Wolff cluster algorithm in our simulations, and our results, in particular those for the susceptibility and the helicity modulus, indicate the absence of a finite-temperature Berezinskii-Kosterlitz-Thouless (BKT) transition in this system.

  8. Quantum Monte Carlo simulation of a two-dimensional Majorana lattice model

    Science.gov (United States)

    Hayata, Tomoya; Yamamoto, Arata

    2017-07-01

    We study interacting Majorana fermions in two dimensions as a low-energy effective model of a vortex lattice in two-dimensional time-reversal-invariant topological superconductors. For that purpose, we implement ab initio quantum Monte Carlo simulation to the Majorana fermion system in which the path-integral measure is given by a semipositive Pfaffian. We discuss spontaneous breaking of time-reversal symmetry at finite temperatures.

  9. Metropolis Methods for Quantum Monte Carlo Simulations

    OpenAIRE

    Ceperley, D. M.

    2003-01-01

    Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
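
    Of the flavours listed above, variational Monte Carlo is the simplest to illustrate. Below is a minimal sketch for a 1D harmonic oscillator with the Gaussian trial wavefunction psi_alpha(x) = exp(-alpha x^2), whose local energy is E_L(x) = alpha + x^2 (1/2 - 2 alpha^2); the system, trial function, and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def local_energy(x, alpha):
    """E_L(x) for H = -0.5 d^2/dx^2 + 0.5 x^2 with trial psi = exp(-alpha x^2)."""
    return alpha + x * x * (0.5 - 2.0 * alpha ** 2)

def vmc(alpha, n_steps=200_000, step=1.0):
    """Metropolis sampling of |psi_alpha|^2 and the variational energy estimate."""
    x, e_sum, n_acc = 0.0, 0.0, 0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio |psi(x_new) / psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new ** 2 - x ** 2)):
            x, n_acc = x_new, n_acc + 1
        e_sum += local_energy(x, alpha)
    return e_sum / n_steps, n_acc / n_steps

# the variational minimum is alpha = 0.5, where <E> equals the exact 0.5
for alpha in (0.3, 0.5, 0.7):
    energy, acc = vmc(alpha)
    print(f"alpha={alpha:.1f}  <E>={energy:.4f}  acceptance={acc:.2f}")
```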

  10. Improvements of the Analytical Model of Monte Carlo

    Institute of Scientific and Technical Information of China (English)

    HE Qing-Fang; XU Zheng; TENG Feng; LIU De-Ang; XU Xu-Rong

    2006-01-01

    By extending the conduction band structure, we set up a new analytical model in ZnS. Comparing the results with both the old analytical model and the full band model, it is found that they are possibly in reasonable agreement with the full band method and that the calculation precision can be improved. Another important part of the work is the reduction of the programme computation time by fitting the scattering rate curves.

  11. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. This method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in that it provides a probabilistic model of the geometry for a great number of randomly distributed spherical fuels. With future speed-up by vector or parallel computation, it is expected to be widely used in the calculation of nuclear reactor cores, especially HTGR cores. (author).

  12. Monte Carlo method of radiative transfer applied to a turbulent flame modeling with LES

    Science.gov (United States)

    Zhang, Jin; Gicquel, Olivier; Veynante, Denis; Taine, Jean

    2009-06-01

    Radiative transfer plays an important role in the numerical simulation of turbulent combustion. However, because combustion and radiation are characterized by different time scales and different spatial and chemical treatments, the radiation effect is often neglected or roughly modelled. The coupling of a large eddy simulation combustion solver and a radiation solver through a dedicated language, CORBA, is investigated. Two formulations of the Monte Carlo method (the Forward Method and the Emission Reciprocity Method) employed to solve the radiative transfer equation (RTE) have been compared in a one-dimensional flame test case, using three-dimensional calculation grids with absorbing and emitting media, in order to validate the Monte Carlo radiative solver and to choose the most efficient model for the coupling. Then the results obtained using two different RTE solvers (the Reciprocity Monte Carlo method and the Discrete Ordinate Method), applied to a three-dimensional flame-holder set-up with a correlated-k distribution model describing the spectral radiative properties of the real gas medium, are compared not only in terms of the physical behavior of the flame, but also in computational performance (storage requirement, CPU time and parallelization efficiency). To cite this article: J. Zhang et al., C. R. Mecanique 337 (2009).

  13. A Monte Carlo Solution of the Human Ballistic Mortality Model

    Science.gov (United States)

    1978-08-01

    ...to obtain a damage D for the total wound. This addition law is averaged over the total soldier. W.B. Beverly, "A Human Ballistic Mortality Model," to be ..., January 1970. C.A. Stanley and K. Brown, "A Computer Man Anatomical Model," Ballistic Research Laboratory Report No. 02080, May 1978.

  14. Reservoir Modeling Combining Geostatistics with Markov Chain Monte Carlo Inversion

    DEFF Research Database (Denmark)

    Zunino, Andrea; Lange, Katrine; Melnikova, Yulia;

    2014-01-01

    , multi-step forward model (rock physics and seismology) and to provide realistic estimates of uncertainties. To generate realistic models which represent samples of the prior distribution, and to overcome the high computational demand, we reduce the search space utilizing an algorithm drawn from...

  15. ANALYSIS OF INNOVATIVE ACTIVITY OF METALLURGICAL COMPANIES USING MONTE-CARLO MATHEMATICAL MODELING METHOD

    Directory of Open Access Journals (Sweden)

    Shchekoturova S. D.

    2015-04-01

    Full Text Available The article presents an analysis of the innovative activity of four Russian metallurgical enterprises, "Ruspolimet", JSC "Ural Smithy", JSC "Stupino Metallurgical Company" and JSC "VSMPO", via mathematical modeling using the Monte Carlo method. The innovative activity of the Russian metallurgical companies was assessed over a five-year period. The current innovative activity was assessed by calculating an integral index of innovative activity. The calculation was based on six indicators: the proportion of staff employed in R&D; the level of development of new technology; the degree of development of new products; the share of material resources for R&D; the degree of protection of enterprise intellectual property; and the share of investment in innovative projects. These indicators were analyzed from 2007 to 2011. On the basis of these data, the integral indicator of the innovative activity of the metallurgical companies was calculated by the well-known method of weighting coefficients. The comparative analysis of the integral indicators of the innovative activity of the considered companies made it possible to rank their levels of innovative activity and to characterize the current state of their business. Based on the Monte Carlo method, a variation interval of the integral indicator was obtained, and detailed recommendations for choosing an innovative development strategy for the metallurgical enterprises were given as well.

  16. A Monte Carlo simulation for kinetic chemotaxis models: an application to the traveling population wave

    CERN Document Server

    Yasuda, Shugo

    2015-01-01

    A Monte Carlo simulation of chemotactic bacteria is developed on the basis of kinetic modeling, i.e., the Boltzmann transport equation, and applied to a one-dimensional traveling population wave in a microchannel. In this method, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to solve the macroscopic transport of the chemical cues in the field. The simulation method can successfully reproduce the traveling population wave of bacteria which was observed experimentally. The microscopic dynamics of the bacteria, e.g., the velocity autocorrelation function and the velocity distribution function, are also investigated. It is found that the bacteria which form the traveling population wave create quasi-periodic motions as well as a migratory movement along with the traveling population wave. Simulations are also performed with varying sensitivity and modulation parameters in the response function of the bacteria. It is found th...

  17. Monte Carlo Based Toy Model for Fission Process

    CERN Document Server

    Kurniadi, R; Viridi, S

    2014-01-01

    Fission yield has commonly been calculated by two approaches, the macroscopic approach and the microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model. The toy model of fission yield is a preliminary method that uses random numbers as the backbone of the calculation. Because the nucleus is treated as a toy model, the fission process does not completely represent the real fission process in nature. A fission event is modeled by one random number. The number is taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. The toy model is formed by a Gaussian distribution of random numbers that randomizes the distance between a particle and a central point. The scission process is started by splitting the compound-nucleus central point into two parts, the left central and right central points. These three points have different Gaussian distribution parameters such as the means (μ_CN, μ_L, μ_R), and standard d...

  18. Essays on Quantitative Marketing Models and Monte Carlo Integration Methods

    NARCIS (Netherlands)

    R.D. van Oest (Rutger)

    2005-01-01

    textabstractThe last few decades have led to an enormous increase in the availability of large detailed data sets and in the computing power needed to analyze such data. Furthermore, new models and new computing techniques have been developed to exploit both sources. All of this has allowed for addr

  19. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    Science.gov (United States)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 46% 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation
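
    A minimal sketch of the Latin Hypercube Sampling step mentioned above, generating stratified input-parameter sets and propagating them through a toy surrogate model to obtain an output spread, follows. The surrogate model, the plus-or-minus 30% parameter ranges, and the assumed sensitivities are illustrative; only the 419-member ensemble size echoes the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

def latin_hypercube(n_samples, n_dims, rng):
    """Unit-cube Latin Hypercube sample: one point per stratum in each dimension."""
    strata = np.tile(np.arange(n_samples, dtype=float)[:, None], (1, n_dims))
    for d in range(n_dims):                    # independent stratum order per dim
        rng.shuffle(strata[:, d])
    return (strata + rng.uniform(size=(n_samples, n_dims))) / n_samples

# Toy surrogate for "trend prediction given uncertain inputs": a weighted sum of
# multiplicative perturbations, standing in for a full 2D model run (assumed).
n_samples, n_params = 419, 20
u = latin_hypercube(n_samples, n_params, rng)
perturbations = 0.7 + 0.6 * u                      # each parameter varied by +/-30%
weights = rng.uniform(0.0, 1.0, n_params)          # assumed sensitivities
outputs = perturbations @ weights / weights.sum()  # one "trend" per parameter set

print(f"ensemble mean = {outputs.mean():.3f}, 1-sigma spread = {outputs.std():.3f}")
```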

  20. Modeling Replenishment of Ultrathin Liquid Perfluoropolyether Z Films on Solid Surfaces Using Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    M. S. Mayeed

    2014-01-01

    Full Text Available Applying the reptation algorithm to a simplified perfluoropolyether Z off-lattice polymer model, an NVT Monte Carlo simulation has been performed. Bulk conditions were simulated first to compare the average radius of gyration with the bulk experimental results. The model was then tested for its ability to describe dynamics. After this, it was applied to observe the replenishment of nanoscale ultrathin liquid films on solid flat carbon surfaces. The replenishment rate for trenches of different widths (8, 12, and 16 nm) for several molecular weights, between two films of perfluoropolyether Z, from the Monte Carlo simulation is compared to that obtained by solving the diffusion equation using the experimental diffusion coefficients of Ma et al. (1999), at room conditions in both cases. Replenishment per Monte Carlo cycle seems to be a constant multiple of replenishment per second, at least up to a 2 nm replenished film thickness of the trenches over the carbon surface. Considerably good agreement has been achieved between the experimental results and the dynamics of molecules using reptation moves in ultrathin liquid films on solid surfaces.

  1. Modeling weight variability in a pan coating process using Monte Carlo simulations.

    Science.gov (United States)

    Pandey, Preetanshu; Katakdaunde, Manoj; Turton, Richard

    2006-10-06

    The primary objective of the current study was to investigate process variables affecting weight-gain mass coating variability (CV(m)) in pan coating devices using novel video-imaging techniques and Monte Carlo simulations. Experimental information such as the tablet location, circulation time distribution, velocity distribution, projected surface area, and spray dynamics was the main input to the simulations. The data on the dynamics of tablet movement were obtained using novel video-imaging methods. The effects of pan speed, pan loading, tablet size, coating time, spray flux distribution, and spray area and shape were investigated. CV(m) was found to be inversely proportional to the square root of coating time. The spray shape was not found to affect the CV(m) of the process significantly, but an increase in the spray area led to lower CV(m) values. Coating experiments were conducted to verify the predictions from the Monte Carlo simulations, and the trends predicted from the model were in good agreement. It was observed that the Monte Carlo simulations underpredicted CV(m) in comparison to the experiments. The model developed can provide a basis for adjustments in process parameters required during scale-up operations and can be useful in predicting the process changes that are needed to achieve the same CV(m) when a variable is altered.
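
    A minimal sketch of the kind of simulation described above, with tablets accumulating random per-pass coating amounts and the coefficient of variation tracked against coating time, is given below. The pass-rate and per-pass deposition distributions are illustrative assumptions, chosen only to reproduce the inverse-square-root dependence on coating time noted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(8)

n_tablets = 5000
pass_rate = 2.0               # mean spray-zone passes per tablet per minute (assumed)
mean_dep, cv_dep = 1.0, 0.5   # per-pass deposited mass: mean and spread (assumed)

def cv_m(coating_minutes):
    """Coefficient of variation of coated mass after a given coating time."""
    passes = rng.poisson(pass_rate * coating_minutes, n_tablets)
    # total mass per tablet: sum of 'passes' gamma-distributed per-pass depositions
    shape = 1.0 / cv_dep ** 2
    mass = rng.gamma(shape * passes, mean_dep / shape)
    return mass.std() / mass.mean()

for t in (15, 30, 60, 120, 240):
    print(f"{t:4d} min  CV_m = {cv_m(t):.3f}")
# CV_m should fall roughly as 1 / sqrt(coating time)
```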

  2. Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    CERN Document Server

    Navarro, C A; Deng, Youjin

    2015-01-01

    The study of disordered spin systems through Monte Carlo simulations has proven to be a hard task due to the adverse energy landscape present in the low-temperature regime, which makes it difficult for the simulation to escape from a local minimum. Replica-based algorithms such as Exchange Monte Carlo (also known as parallel tempering) are effective at overcoming this problem, reaching equilibrium on disordered spin systems such as the spin glass or random field models by exchanging information between replicas at neighboring temperatures. In this work we present a multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Model. The implementation is based on a two-level parallelization scheme that allows the method to scale its performance in the presence of faster GPUs as well as multiple GPUs. In addition, we modified the original algorithm by adapting the set of temperatures according to the exchange rate observed from short trial runs, leading to an increased exchange rate...
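
    A minimal single-CPU sketch of the exchange (parallel tempering) move described above is shown below, applied to a small 1D random-field Ising chain rather than the paper's 3D model. The chain length, temperature ladder, and field strength are illustrative assumptions, and the adaptive temperature-set and multi-GPU aspects are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(9)

N, n_replicas = 64, 8
betas = np.linspace(0.2, 1.2, n_replicas)     # temperature ladder (assumed)
h = rng.normal(size=N)                        # quenched random fields, strength 1
spins = rng.choice([-1, 1], size=(n_replicas, N))

def energy(s):
    """1D random-field Ising chain with periodic boundaries and J = 1."""
    return -np.sum(s * np.roll(s, 1)) - np.sum(h * s)

def metropolis_sweep(s, beta):
    for _ in range(N):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (s[(i + 1) % N] + s[(i - 1) % N] + h[i])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] *= -1

attempts = accepts = 0
for sweep in range(5000):
    for r in range(n_replicas):
        metropolis_sweep(spins[r], betas[r])
    # exchange move between two neighbouring temperatures
    r = rng.integers(n_replicas - 1)
    delta = (betas[r] - betas[r + 1]) * (energy(spins[r]) - energy(spins[r + 1]))
    attempts += 1
    if delta >= 0 or rng.random() < np.exp(delta):   # min(1, e^delta) acceptance
        spins[[r, r + 1]] = spins[[r + 1, r]]        # swap the configurations
        accepts += 1

print("exchange acceptance rate:", accepts / attempts)
```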

  3. Monte Carlo simulation based toy model for fission process

    Science.gov (United States)

    Kurniadi, Rizal; Waris, Abdul; Viridi, Sparisoma

    2016-09-01

    Nuclear fission has commonly been modeled using two approaches, macroscopic and microscopic. This work proposes another approach, in which the nucleus is treated as a toy model. The aim is to see the usefulness of the particle distribution in fission yield calculations. Inasmuch as the nucleus is a toy, the Fission Toy Model (FTM) does not completely represent the real process in nature. A fission event in the FTM is represented by one random number. The number is taken as the width of the probability distribution of nucleon positions in the compound nucleus when the fission process starts. By adopting the nucleon density approximation, a Gaussian distribution is chosen as the particle distribution. This distribution function generates random numbers that randomize the distance between particles and a central point. The scission process is started by splitting the compound-nucleus central point into two parts, the left central and right central points. The yield is determined from the portion of the nucleon distribution, which is proportional to the portion of mass numbers. By using the modified FTM, the characteristics of the particle distribution in each fission event can be formed before the fission process. These characteristics can be used to make predictions about real nucleon interactions in the fission process. The results of the FTM calculation suggest that the γ value behaves like an energy.
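
    A heavily simplified sketch in the spirit of the FTM described above, with nucleon positions drawn from a Gaussian whose width is set by one random number per event and fragments defined by a randomly shifted scission point, is given below. The distribution widths, the scission rule, and the mass number are illustrative assumptions and do not reproduce the authors' actual prescription.

```python
import numpy as np

rng = np.random.default_rng(10)

A = 236                    # nucleon number of the compound nucleus (assumed)
n_events = 20_000
yields = np.zeros(A + 1)

for _ in range(n_events):
    # one random number per event sets the spread of nucleon positions
    width = abs(rng.normal(1.0, 0.3))
    positions = rng.normal(0.0, width, A)     # nucleon coordinates along the axis
    # scission: split about a randomly displaced central point (assumed rule)
    split = rng.normal(0.0, 0.2 * width)
    a_left = int(np.sum(positions < split))
    yields[a_left] += 1
    yields[A - a_left] += 1                   # the complementary fragment

yields *= 200.0 / yields.sum()                # normalise to a 200% total yield
peak = int(np.argmax(yields))
print(f"most probable fragment mass: A = {peak}, yield = {yields[peak]:.2f}%")
```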

  4. Modeling of hysteresis loops by Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    Z. Nehme

    2015-12-01

    Full Text Available Recent advances in MC simulations of magnetic properties are rather devoted to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. For any assembly of magnetic moments, we propose MC simulations on the basis of a three-dimensional classical Heisenberg model applied to an isolated magnetic slab involving first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm and a mixed one consisting of adding some global rotations. We focus our study particularly on the impact of varying the anisotropy constant parameter on the coercive field for different temperatures and algorithms. A study of the angular acceptance distribution of the moves allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy, provided that the algorithm and the numerical conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm that mimics the physics of collective behavior. Consequently, this study leads to better quantified coercive field measurements resulting from the physical phenomena of complex magnetic (nano)architectures with different anisotropy contributions.

  5. Modeling of hysteresis loops by Monte Carlo simulation

    Science.gov (United States)

    Nehme, Z.; Labaye, Y.; Sayed Hassan, R.; Yaacoub, N.; Greneche, J. M.

    2015-12-01

    Recent advances in MC simulations of magnetic properties are rather devoted to non-interacting systems or ultrafast phenomena, while the modeling of quasi-static hysteresis loops of an assembly of spins with strong internal exchange interactions remains limited to specific cases. For any assembly of magnetic moments, we propose MC simulations on the basis of a three-dimensional classical Heisenberg model applied to an isolated magnetic slab involving first-nearest-neighbor exchange interactions and uniaxial anisotropy. Three different algorithms were successively implemented in order to simulate hysteresis loops: the classical free algorithm, the cone algorithm and a mixed one consisting of adding some global rotations. We focus our study particularly on the impact of varying the anisotropy constant parameter on the coercive field for different temperatures and algorithms. A study of the angular acceptance distribution of the moves allows the dynamics of our simulations to be characterized. The results reveal that the coercive field is linearly related to the anisotropy, provided that the algorithm and the numerical conditions are carefully chosen. As a general tendency, it is found that the efficiency of the simulation can be greatly enhanced by using the mixed algorithm that mimics the physics of collective behavior. Consequently, this study leads to better quantified coercive field measurements resulting from the physical phenomena of complex magnetic (nano)architectures with different anisotropy contributions.
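
    A minimal sketch of the kind of simulation described in the two records above, with classical Heisenberg spins subject to exchange, uniaxial anisotropy, and a swept applied field, updated by small cone-like Metropolis moves, follows. The lattice size, field ramp, and move amplitude are illustrative assumptions, and no attempt is made to reproduce the paper's mixed algorithm with global rotations.

```python
import numpy as np

rng = np.random.default_rng(11)

L, J, K, T = 6, 1.0, 0.5, 0.5   # lattice size, exchange, uniaxial anisotropy, temperature
cone = 0.4                      # amplitude of the random trial perturbation (assumed)
spins = np.zeros((L, L, L, 3))
spins[..., 2] = 1.0             # start saturated along +z

def site_dE(s, i, j, k, trial, H):
    """Exact energy change for replacing spin (i,j,k) by 'trial' in field H along z."""
    old = s[i, j, k]
    nn = (s[(i + 1) % L, j, k] + s[(i - 1) % L, j, k]
          + s[i, (j + 1) % L, k] + s[i, (j - 1) % L, k]
          + s[i, j, (k + 1) % L] + s[i, j, (k - 1) % L])
    d = trial - old
    return (-J * np.dot(d, nn)                      # exchange
            - K * (trial[2] ** 2 - old[2] ** 2)     # uniaxial anisotropy
            - H * d[2])                             # Zeeman term

def sweep(H):
    for _ in range(L ** 3):
        i, j, k = rng.integers(L, size=3)
        trial = spins[i, j, k] + cone * rng.normal(size=3)  # small cone-like move
        trial /= np.linalg.norm(trial)
        dE = site_dE(spins, i, j, k, trial, H)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j, k] = trial

fields = np.concatenate([np.linspace(2.0, -2.0, 41), np.linspace(-2.0, 2.0, 41)])
for n, H in enumerate(fields):
    for _ in range(10):
        sweep(H)
    if n % 10 == 0:
        print(f"H = {H:+.2f}   <m_z> = {spins[..., 2].mean():+.3f}")
```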

  6. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati

  7. Converting Boundary Representation Solid Models to Half-Space Representation Models for Monte Carlo Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Davis JE, Eddy MJ, Sutton TM, Altomari TJ

    2007-03-01

    Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.

  8. Monte Carlo modeling of ion beam induced secondary electrons

    Energy Technology Data Exchange (ETDEWEB)

    Huh, U., E-mail: uhuh@vols.utk.edu [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Cho, W. [Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996-2100 (United States); Joy, D.C. [Biochemistry & Cellular & Molecular Biology, University of Tennessee, Knoxville, TN 37996-0840 (United States); Center for Nanophase Materials Science, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2016-09-15

    Ion induced secondary electrons (iSE) can produce high-resolution images ranging from a few eV to 100 keV over a wide range of materials. The interpretation of such images requires knowledge of the secondary electron yields (iSE δ) for each of the elements and materials present, as a function of the incident beam energy. Experimental data for helium ions are currently limited to 40 elements and six compounds, while other ions are not well represented. To overcome this limitation, we propose a simple procedure based on the comprehensive work of Berger et al. Here we show that within the energy range of 10–100 keV the Berger et al. data for elements and compounds can be accurately represented by a single universal curve. The agreement between the limited experimental data that is available and the predictive model is good, and the model has been found to provide reliable yield data for a wide range of elements and compounds. - Highlights: • The Universal ASTAR Yield Curve was derived from data recently published by NIST. • IONiSE incorporated with the Curve will predict iSE yield for elements and compounds. • This approach can also handle other ion beams by changing the basic scattering profile.

  9. Development of a Monte Carlo model for the Brainlab microMLC.

    Science.gov (United States)

    Belec, Jason; Patrocinio, Horacio; Verhaegen, Frank

    2005-03-07

    Stereotactic radiosurgery with several static conformal beams shaped by a micro multileaf collimator (microMLC) is used to treat small irregularly shaped brain lesions. Our goal is to perform Monte Carlo calculations of dose distributions for certain treatment plans as a verification tool. A dedicated microMLC component module for the BEAMnrc code was developed as part of this project and was incorporated in a model of the Varian CL2300 linear accelerator 6 MV photon beam. As an initial validation of the code, the leaf geometry was visualized by tracing particles through the component module and recording their position each time a leaf boundary was crossed. The leaf dimensions were measured, and the leaf material density and interleaf air gap were chosen to match the simulated leaf leakage profiles with film measurements in a solid water phantom. A comparison between Monte Carlo calculations and measurements (diode, radiographic film) was performed for square and irregularly shaped fields incident on flat and homogeneous water phantoms. Results show that Monte Carlo calculations agree with measured dose distributions to within 2% and/or 1 mm, except for field sizes smaller than 1.2 cm in diameter, where agreement is within 5% due to uncertainties in measured output factors.

  10. Monte Carlo path sampling approach to modeling aeolian sediment transport

    Science.gov (United States)

    Hardin, E. J.; Mitasova, H.; Mitas, L.

    2011-12-01

    Coastal communities and vital infrastructure are subject to coastal hazards including storm surge and hurricanes. Coastal dunes offer protection by acting as natural barriers from waves and storm surge. During storms, these landforms and their protective function can erode; however, they can also erode even in the absence of storms due to daily wind and waves. Costly and often controversial beach nourishment and coastal construction projects are common erosion mitigation practices. With a more complete understanding of coastal morphology, the efficacy and consequences of anthropogenic activities could be better predicted. Currently, the research on coastal landscape evolution is focused on waves and storm surge, while only limited effort is devoted to understanding aeolian forces. Aeolian transport occurs when the wind supplies a shear stress that exceeds a critical value, consequently ejecting sand grains into the air. If the grains are too heavy to be suspended, they fall back to the grain bed where the collision ejects more grains. This is called saltation and is the salient process by which sand mass is transported. The shear stress required to dislodge grains is related to turbulent air speed. Subsequently, as sand mass is injected into the air, the wind loses speed along with its ability to eject more grains. In this way, the flux of saltating grains is itself influenced by the flux of saltating grains and aeolian transport becomes nonlinear. Aeolian sediment transport is difficult to study experimentally for reasons arising from the orders of magnitude difference between grain size and dune size. It is difficult to study theoretically because aeolian transport is highly nonlinear especially over complex landscapes. Current computational approaches have limitations as well; single grain models are mathematically simple but are computationally intractable even with modern computing power whereas cellular automata-based approaches are computationally efficient

  11. Fast quantum Monte Carlo on a GPU

    CERN Document Server

    Lutsyshyn, Y

    2013-01-01

    We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent acceleration. Comparing with single core execution, GPU-accelerated code runs over x100 faster. The CUDA code is provided along with the package that is necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the latest Kepler architecture K20 GPU. Kepler-specific optimization is discussed.

  12. Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations

    CERN Document Server

    Dias Astros, Maria Isabel

    2017-01-01

    In the context of the Lorentz invariance as an emergent phenomenon at low energy scales to study quantum gravity a system composed by two 3D interacting Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes and a Binder cumulant introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.

  13. Experimental Validation of Monte Carlo Simulations Based on a Virtual Source Model for TomoTherapy in a RANDO Phantom.

    Science.gov (United States)

    Yuan, Jiankui; Zheng, Yiran; Wessels, Barry; Lo, Simon S; Ellis, Rodney; Machtay, Mitchell; Yao, Min

    2016-12-01

    A virtual source model for Monte Carlo simulations of helical TomoTherapy has been developed previously by the authors. The purpose of this work is to perform experiments in an anthropomorphic (RANDO) phantom with the same order of complexity as in clinical treatments to validate the virtual source model to be used for quality assurance secondary check on TomoTherapy patient planning dose. Helical TomoTherapy involves complex delivery pattern with irregular beam apertures and couch movement during irradiation. Monte Carlo simulation, as the most accurate dose algorithm, is desirable in radiation dosimetry. Current Monte Carlo simulations for helical TomoTherapy adopt the full Monte Carlo model, which includes detailed modeling of individual machine component, and thus, large phase space files are required at different scoring planes. As an alternative approach, we developed a virtual source model without using the large phase space files for the patient dose calculations previously. In this work, we apply the simulation system to recompute the patient doses, which were generated by the treatment planning system in an anthropomorphic phantom to mimic the real patient treatments. We performed thermoluminescence dosimeter point dose and film measurements to compare with Monte Carlo results. Thermoluminescence dosimeter measurements show that the relative difference in both Monte Carlo and treatment planning system is within 3%, with the largest difference less than 5% for both the test plans. The film measurements demonstrated 85.7% and 98.4% passing rate using the 3 mm/3% acceptance criterion for the head and neck and lung cases, respectively. Over 95% passing rate is achieved if 4 mm/4% criterion is applied. For the dose-volume histograms, very good agreement is obtained between the Monte Carlo and treatment planning system method for both cases. The experimental results demonstrate that the virtual source model Monte Carlo system can be a viable option for the

  14. Monte Carlo radiation transport in external beam radiotherapy

    OpenAIRE

    Çeçen, Yiğit

    2013-01-01

    The use of Monte Carlo in radiation transport is an effective way to predict absorbed dose distributions. Monte Carlo modeling has contributed to a better understanding of photon and electron transport by radiotherapy physicists. The aim of this review is to introduce Monte Carlo as a powerful radiation transport tool. In this review, photon and electron transport algorithms for Monte Carlo techniques are investigated and a clinical linear accelerator model is studied for external beam radiot...

  15. Monte Carlo integration on GPU

    OpenAIRE

    Kanzaki, J.

    2010-01-01

    We use a graphics processing unit (GPU) for fast computations of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on GPU. By using $W^{+}$ plus multi-gluon production processes at LHC, we test integrated cross sections and execution time for programs in FORTRAN and C on CPU and those on GPU. Integrated results agree with each other within statistical errors. Execution time of programs on GPU run about 50 times faster than those in C...

  16. Critical behavior of the random-bond Ashkin-Teller model: A Monte Carlo study

    Science.gov (United States)

    Wiseman, Shai; Domany, Eytan

    1995-04-01

    The critical behavior of a bond-disordered Ashkin-Teller model on a square lattice is investigated by intensive Monte Carlo simulations. A duality transformation is used to locate a critical plane of the disordered model. This critical plane corresponds to the line of critical points of the pure model, along which critical exponents vary continuously. Along this line the scaling exponent corresponding to randomness φ=(α/ν) varies continuously and is positive so that the randomness is relevant, and different critical behavior is expected for the disordered model. We use a cluster algorithm for the Monte Carlo simulations based on the Wolff embedding idea, and perform a finite size scaling study of several critical models, extrapolating between the critical bond-disordered Ising and bond-disordered four-state Potts models. The critical behavior of the disordered model is compared with the critical behavior of an anisotropic Ashkin-Teller model, which is used as a reference pure model. We find no essential change in the order parameters' critical exponents with respect to those of the pure model. The divergence of the specific heat C is changed dramatically. Our results favor a logarithmic type divergence at Tc, C~lnL for the random-bond Ashkin-Teller and four-state Potts models and C~ln lnL for the random-bond Ising model.

  17. Monte Carlo tools for Beyond the Standard Model Physics , April 14-16

    DEFF Research Database (Denmark)

    Badger...[], Simon; Christensen, Christian Holm; Dalsgaard, Hans Hjersing;

    2011-01-01

    This workshop aims to gather together theorists and experimentalists interested in developing and using Monte Carlo tools for Beyond the Standard Model Physics in an attempt to be prepared for the analysis of data focusing on the Large Hadron Collider. Since a large number of excellent tools....... To identify promising models (or processes) for which the tools have not yet been constructed and start filling up these gaps. To propose ways to streamline the process of going from models to events, i.e. to make the process more user-friendly so that more people can get involved and perform serious collider...

  18. Development of advanced geometric models and acceleration techniques for Monte Carlo simulation in Medical Physics

    OpenAIRE

    Badal Soler, Andreu

    2008-01-01

    Els programes de simulació Monte Carlo de caràcter general s'utilitzen actualment en una gran varietat d'aplicacions.Tot i això, els models geomètrics implementats en la majoria de programes imposen certes limitacions a la forma dels objectes que es poden definir. Aquests models no són adequats per descriure les superfícies arbitràries que es troben en estructures anatòmiques o en certs aparells mèdics i, conseqüentment, algunes aplicacions que requereixen l'ús de models geomètrics molt detal...

  19. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized)Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981, Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber- 205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the

  20. Self-consistent kinetic lattice Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Horsfield, A.; Dunham, S.; Fujitani, Hideaki

    1999-07-01

    The authors present a brief description of a formalism for modeling point defect diffusion in crystalline systems using a Monte Carlo technique. The main approximations required to construct a practical scheme are briefly discussed, with special emphasis on the proper treatment of charged dopants and defects. This is followed by tight binding calculations of the diffusion barrier heights for charged vacancies. Finally, an application of the kinetic lattice Monte Carlo method to vacancy diffusion is presented.

  1. Monte Carlo renormalization-group investigation of the two-dimensional O(4) sigma model

    Science.gov (United States)

    Heller, Urs M.

    1988-01-01

    An improved Monte Carlo renormalization-group method is used to determine the beta function of the two-dimensional O(4) sigma model. While for (inverse) couplings beta = greater than about 2.2 agreement is obtained with asymptotic scaling according to asymptotic freedom, deviations from it are obtained at smaller couplings. They are, however, consistent with the behavior of the correlation length, indicating 'scaling' according to the full beta function. These results contradict recent claims that the model has a critical point at finite coupling.

  2. Combined constraints on modified Chaplygin gas model from cosmological observed data: Markov Chain Monte Carlo approach

    OpenAIRE

    Lu, Jianbo; Xu, Lixin; Wu, Yabo; Liu, Molin

    2011-01-01

    We use the Markov Chain Monte Carlo method to investigate a global constraints on the modified Chaplygin gas (MCG) model as the unification of dark matter and dark energy from the latest observational data: the Union2 dataset of type supernovae Ia (SNIa), the observational Hubble data (OHD), the cluster X-ray gas mass fraction, the baryon acoustic oscillation (BAO), and the cosmic microwave background (CMB) data. In a flat universe, the constraint results for MCG model are, $\\Omega_{b}h^{2}=0...

  3. Modeling and Monte Carlo simulation of nucleation and growth of UV/low-temperature-induced nanostructures

    Science.gov (United States)

    Flicstein, Jean; Pata, S.; Chun, L. S. H. K.; Palmier, Jean F.; Courant, J. L.

    1998-05-01

    A model for ultraviolet induced chemical vapor deposition (UV CVD) for a-SiN:H is described. In the simulation of UV CVD process, activate charged centers creation, species incorporation, surface diffusion, and desorption are considered as elementary steps for the photonucleation and photodeposition mechanisms. The process is characterized by two surface sticking coefficients. Surface diffusion of species is modeled with a gaussian distribution. A real time Monte Carlo method is used to determine photonucleation and photodeposition rates in nanostructures. Comparison of experimental versus simulation results for a-SiN:H is shown to predict the morphology temporal evolution under operating conditions down to atomistic resolution.

  4. Nuclear Level Density of ${}^{161}$Dy in the Shell Model Monte Carlo Method

    CERN Document Server

    Özen, Cem; Nakada, Hitoshi

    2012-01-01

    We extend the shell-model Monte Carlo applications to the rare-earth region to include the odd-even nucleus ${}^{161}$Dy. The projection on an odd number of particles leads to a sign problem at low temperatures making it impractical to extract the ground-state energy in direct calculations. We use level counting data at low energies and neutron resonance data to extract the shell model ground-state energy to good precision. We then calculate the level density of ${}^{161}$Dy and find it in very good agreement with the level density extracted from experimental data.

  5. Monte Carlo renormalization-group investigation of the two-dimensional O(4) sigma model

    Science.gov (United States)

    Heller, Urs M.

    1988-01-01

    An improved Monte Carlo renormalization-group method is used to determine the beta function of the two-dimensional O(4) sigma model. While for (inverse) couplings beta = greater than about 2.2 agreement is obtained with asymptotic scaling according to asymptotic freedom, deviations from it are obtained at smaller couplings. They are, however, consistent with the behavior of the correlation length, indicating 'scaling' according to the full beta function. These results contradict recent claims that the model has a critical point at finite coupling.

  6. Molecular mobility with respect to accessible volume in Monte Carlo lattice model for polymers

    Science.gov (United States)

    Diani, J.; Gilormini, P.

    2017-02-01

    A three-dimensional cubic Monte Carlo lattice model is considered to test the impact of volume on the molecular mobility of amorphous polymers. Assuming classic polymer chain dynamics, the concept of locked volume limiting the accessible volume around the polymer chains is introduced. The polymer mobility is assessed by its ability to explore the entire lattice thanks to reptation motions. When recording the polymer mobility with respect to the lattice accessible volume, a sharp mobility transition is observed as witnessed during glass transition. The model ability to reproduce known actual trends in terms of glass transition with respect to material parameters, is also tested.

  7. Open-source direct simulation Monte Carlo chemistry modeling for hypersonic flows

    OpenAIRE

    Scanlon, Thomas J.; White, Craig; Borg, Matthew K.; Palharini, Rodrigo C.; Farbar, Erin; Boyd, Iain D.; Reese, Jason M.; Brown, Richard E

    2015-01-01

    An open source implementation of chemistry modelling for the direct simulationMonte Carlo (DSMC) method is presented. Following the recent work of Bird [1] an approach known as the quantum kinetic (Q-K) method has been adopted to describe chemical reactions in a 5-species air model using DSMC procedures based on microscopic gas information. The Q-K technique has been implemented within the framework of the dsmcFoam code, a derivative of the open source CFD code OpenFOAM. Results for vibration...

  8. Monte Carlo Studies of Phase Separation in Compressible 2-dim Ising Models

    Science.gov (United States)

    Mitchell, S. J.; Landau, D. P.

    2006-03-01

    Using high resolution Monte Carlo simulations, we study time-dependent domain growth in compressible 2-dim ferromagnetic (s=1/2) Ising models with continuous spin positions and spin-exchange moves [1]. Spins interact with slightly modified Lennard-Jones potentials, and we consider a model with no lattice mismatch and one with 4% mismatch. For comparison, we repeat calculations for the rigid Ising model [2]. For all models, large systems (512^2) and long times (10^ 6 MCS) are examined over multiple runs, and the growth exponent is measured in the asymptotic scaling regime. For the rigid model and the compressible model with no lattice mismatch, the growth exponent is consistent with the theoretically expected value of 1/3 [1] for Model B type growth. However, we find that non-zero lattice mismatch has a significant and unexpected effect on the growth behavior.Supported by the NSF.[1] D.P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, second ed. (Cambridge University Press, New York, 2005).[2] J. Amar, F. Sullivan, and R.D. Mountain, Phys. Rev. B 37, 196 (1988).

  9. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-04-01

    Full Text Available Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response time of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP is implemented for the sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in the multi-core computing environment via open message passing interface (MPI. We compare performance results of particle filters in terms of model efficiency, predictive QQ plots and particle diversity. The improvement of model efficiency and the preservation of particle diversity are found in the lagged regularized particle filter.

  10. Monte Carlo simulation of Prussian blue analogs described by Heisenberg ternary alloy model

    Science.gov (United States)

    Yüksel, Yusuf

    2015-11-01

    Within the framework of Monte Carlo simulation technique, we simulate magnetic behavior of Prussian blue analogs based on Heisenberg ternary alloy model. We present phase diagrams in various parameter spaces, and we compare some of our results with those based on Ising counterparts. We clarify the variations of transition temperature and compensation phenomenon with mixing ratio of magnetic ions, exchange interactions, and exchange anisotropy in the present ferro-ferrimagnetic Heisenberg system. According to our results, thermal variation of the total magnetization curves may exhibit N, L, P, Q, R type behaviors based on the Néel classification scheme.

  11. Rejection-free Monte Carlo algorithms for models with continuous degrees of freedom.

    Science.gov (United States)

    Muñoz, J D; Novotny, M A; Mitchell, S J

    2003-02-01

    We construct a rejection-free Monte Carlo algorithm for a system with continuous degrees of freedom. We illustrate the algorithm by applying it to the classical three-dimensional Heisenberg model with canonical Metropolis dynamics. We obtain the lifetime of the metastable state following a reversal of the external magnetic field. Our rejection-free algorithm obtains results in agreement with a direct implementation of the Metropolis dynamic and requires orders of magnitude less computational time at low temperatures. The treatment is general and can be extended to other dynamics and other systems with continuous degrees of freedom.

  12. Uncertainty assessment of integrated distributed hydrological models using GLUE with Markov chain Monte Carlo sampling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2008-01-01

    uncertainty estimation (GLUE) procedure based on Markov chain Monte Carlo sampling is applied in order to improve the performance of the methodology in estimating parameters and posterior output distributions. The description of the spatial variations of the hydrological processes is accounted for by defining...... the identifiability of the parameters and results in satisfactory multi-variable simulations and uncertainty estimates. However, the parameter uncertainty alone cannot explain the total uncertainty at all the sites, due to limitations in the distributed data included in the model calibration. The study also indicates...

  13. Simulation of low Schottky barrier MOSFETs using an improved Multi-subband Monte Carlo model

    Science.gov (United States)

    Gudmundsson, Valur; Palestri, Pierpaolo; Hellström, Per-Erik; Selmi, Luca; Östling, Mikael

    2013-01-01

    We present a simple and efficient approach to implement Schottky barrier contacts in a Multi-subband Monte Carlo simulator by using the subband smoothening technique to mimic tunneling at the Schottky junction. In the absence of scattering, simulation results for Schottky barrier MOSFETs are in agreement with ballistic Non-Equilibrium Green's Functions calculations. We then include the most relevant scattering mechanisms, and apply the model to the study of double gate Schottky barrier MOSFETs representative of the ITRS 2015 high performance device. Results show that a Schottky barrier height of less than approximately 0.15 eV is required to outperform the doped source/drain structure.

  14. Chemical Potential of Benzene Fluid from Monte Carlo Simulation with Anisotropic United Atom Model

    Directory of Open Access Journals (Sweden)

    Mahfuzh Huda

    2013-07-01

    Full Text Available The profile of chemical potential of benzene fluid has been investigated using Anisotropic United Atom (AUA model. A Monte Carlo simulation in canonical ensemble was done to obtain the isotherm of benzene fluid, from which the excess part of chemical potential was calculated. A surge of potential energy is observed during the simulation at high temperature which is related to the gas-liquid phase transition. The isotherm profile indicates the tendency of benzene to condensate due to the strong attractive interaction. The results show that the chemical potential of benzene rapidly deviates from its ideal gas counterpart even at low density.

  15. A threaded Java concurrent implementation of the Monte-Carlo Metropolis Ising model.

    Science.gov (United States)

    Castañeda-Marroquín, Carlos; de la Puente, Alfonso Ortega; Alfonseca, Manuel; Glazier, James A; Swat, Maciej

    2009-06-01

    This paper describes a concurrent Java implementation of the Metropolis Monte-Carlo algorithm that is used in 2D Ising model simulations. The presented method uses threads, monitors, shared variables and high level concurrent constructs that hide the low level details. In our algorithm we assign one thread to handle one spin flip attempt at a time. We use special lattice site selection algorithm to avoid two or more threads working concurently in the region of the lattice that "belongs" to two or more different spins undergoing spin-flip transformation. Our approach does not depend on the current platform and maximizes concurrent use of the available resources.

  16. Variational Monte Carlo study of magnetic states in the periodic Anderson model

    Science.gov (United States)

    Kubo, Katsunori

    2015-03-01

    We study the magnetic states of the periodic Anderson model with a finite Coulomb interaction between f electrons on a square lattice by applying variational Monte Carlo method. We consider Gutzwiller wavefunctions for the paramagnetic, antiferromagnetic, ferromagnetic, and charge density wave states. We find an antiferromagnetic phase around half-filling. There is a phase transition accompanying change in the Fermi-surface topology in this antiferromagnetic phase. We also study a case away from half-filling, and find a ferromagnetic state as the ground state there.

  17. Studies on top-quark Monte Carlo modelling for Top2016

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    This note summarises recent studies on Monte Carlo simulation setups of top-quark pair production used by the ATLAS experiment and presents a new method to deal with interference effects for the $Wt$ single-top-quark production which is compared against previous techniques. The main focus for the top-quark pair production is on the improvement of the modelling of the Powheg generator interfaced to the Pythia8 and Herwig7 shower generators. The studies are done using unfolded data at centre-of-mass energies of 7, 8, and 13 TeV.

  18. Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport

    Science.gov (United States)

    Dureau, David; Poëtte, Gaël

    2014-06-01

    This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, thanks to both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several Parallelism models are presented and analyzed in this context, among them shared memory and distributed memory ones such as Domain Replication and Domain Decomposition, together with Hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousands of cores thanks to the petaflopic supercomputer Tera100.

  19. A study of potential energy curves from the model space quantum Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Ohtsuka, Yuhki; Ten-no, Seiichiro, E-mail: tenno@cs.kobe-u.ac.jp [Department of Computational Sciences, Graduate School of System Informatics, Kobe University, Nada-ku, Kobe 657-8501 (Japan)

    2015-12-07

    We report on the first application of the model space quantum Monte Carlo (MSQMC) to potential energy curves (PECs) for the excited states of C{sub 2}, N{sub 2}, and O{sub 2} to validate the applicability of the method. A parallel MSQMC code is implemented with the initiator approximation to enable efficient sampling. The PECs of MSQMC for various excited and ionized states are compared with those from the Rydberg-Klein-Rees and full configuration interaction methods. The results indicate the usefulness of MSQMC for precise PECs in a wide range obviating problems concerning quasi-degeneracy.

  20. Hierarchical Acceleration of Multilevel Monte Carlo Methods for Computationally Expensive Simulations in Reservoir Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Webster, C.

    2014-12-01

    The rational management of oil and gas reservoir requires an understanding of its response to existing and planned schemes of exploitation and operation. Such understanding requires analyzing and quantifying the influence of the subsurface uncertainties on predictions of oil and gas production. As the subsurface properties are typically heterogeneous causing a large number of model parameters, the dimension independent Monte Carlo (MC) method is usually used for uncertainty quantification (UQ). Recently, multilevel Monte Carlo (MLMC) methods were proposed, as a variance reduction technique, in order to improve computational efficiency of MC methods in UQ. In this effort, we propose a new acceleration approach for MLMC method to further reduce the total computational cost by exploiting model hierarchies. Specifically, for each model simulation on a new added level of MLMC, we take advantage of the approximation of the model outputs constructed based on simulations on previous levels to provide better initial states of new simulations, which will help improve efficiency by, e.g. reducing the number of iterations in linear system solving or the number of needed time-steps. This is achieved by using mesh-free interpolation methods, such as Shepard interpolation and radial basis approximation. Our approach is applied to a highly heterogeneous reservoir model from the tenth SPE project. The results indicate that the accelerated MLMC can achieve the same accuracy as standard MLMC with a significantly reduced cost.

  1. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A., E-mail: ksteph@illinois.edu [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States)

    2016-02-15

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  2. A stochastic model updating strategy-based improved response surface model and advanced Monte Carlo simulation

    Science.gov (United States)

    Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun

    2017-01-01

    To improve the accuracy and efficiency of computation model for complex structures, the stochastic model updating (SMU) strategy was proposed by combining the improved response surface model (IRSM) and the advanced Monte Carlo (MC) method based on experimental static test, prior information and uncertainties. Firstly, the IRSM and its mathematical model were developed with the emphasis on moving least-square method, and the advanced MC simulation method is studied based on Latin hypercube sampling method as well. And then the SMU procedure was presented with experimental static test for complex structure. The SMUs of simply-supported beam and aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy hold high computational precision and efficiency for the SMUs of complex structural system; (2) the IRSM is demonstrated to be an effective model due to its SMU time is far less than that of traditional response surface method, which is promising to improve the computational speed and accuracy of SMU; (3) the advanced MC method observably decrease the samples from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structure and enrich the theory of model updating.

  3. Recommended direct simulation Monte Carlo collision model parameters for modeling ionized air transport processes

    Science.gov (United States)

    Swaminathan-Gopalan, Krishnan; Stephani, Kelly A.

    2016-02-01

    A systematic approach for calibrating the direct simulation Monte Carlo (DSMC) collision model parameters to achieve consistency in the transport processes is presented. The DSMC collision cross section model parameters are calibrated for high temperature atmospheric conditions by matching the collision integrals from DSMC against ab initio based collision integrals that are currently employed in the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and Data Parallel Line Relaxation (DPLR) high temperature computational fluid dynamics solvers. The DSMC parameter values are computed for the widely used Variable Hard Sphere (VHS) and the Variable Soft Sphere (VSS) models using the collision-specific pairing approach. The recommended best-fit VHS/VSS parameter values are provided over a temperature range of 1000-20 000 K for a thirteen-species ionized air mixture. Use of the VSS model is necessary to achieve consistency in transport processes of ionized gases. The agreement of the VSS model transport properties with the transport properties as determined by the ab initio collision integral fits was found to be within 6% in the entire temperature range, regardless of the composition of the mixture. The recommended model parameter values can be readily applied to any gas mixture involving binary collisional interactions between the chemical species presented for the specified temperature range.

  4. An Analytic Linear Accelerator Source Model for Monte Carlo Dose Calculations. I. Model Representation and Construction

    CERN Document Server

    Tian, Zhen; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun

    2015-01-01

    Monte Carlo (MC) simulation is considered as the most accurate method for radiation dose calculations. Accuracy of a source model for a linear accelerator is critical for the overall dose calculation accuracy. In this paper, we presented an analytical source model that we recently developed for GPU-based MC dose calculations. A key concept called phase-space-ring (PSR) was proposed. It contained a group of particles that are of the same type and close in energy and radial distance to the center of the phase-space plane. The model parameterized probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. For a primary photon PSRs, the particle direction is assumed to be from the beam spot. A finite spot size is modeled with a 2D Gaussian distribution. For a scattered photon PSR, multiple Gaussian components were used to model the particle direction. The direction distribution of an electron PSRs was also modeled as a 2D Gaussian distributi...

  5. Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model

    Science.gov (United States)

    Morin, Mario A.; Ficarazzo, Francesco

    2006-04-01

    Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact and joints rock properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a blast quarry. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on the rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, its corresponding costs and the overall economics of open pit mines and rock quarries.

  6. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP, is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF and the sequential importance resampling (SIR particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidential intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.

  7. A Monte Carlo model for 3D grain evolution during welding

    Science.gov (United States)

    Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena

    2017-09-01

    Welding is one of the most wide-spread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affect zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aide of a closest point projection algorithm. The model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed-power.

  8. Phenomenology of Large Extra Dimensions Models at Hadrons Colliders using Monte Carlo Techniques (Spin-2 Graviton)

    CERN Document Server

    Bakhet, Nady; Hussein, Tarek

    2015-01-01

    Large Extra Dimensions Models have been proposed to remove the hierarchy problem and give an explanation why the gravity is so much weaker than the other three forces. In this work, we present an analysis of Monte Carlo data events for new physics signatures of spin-2 Graviton in context of ADD model with total dimensions $D=4+\\delta,$ $\\delta = 1,2,3,4,5,6 $ where $ \\delta $ is the extra special dimension, this model involves missing momentum $P_{T}^{miss}$ in association with jet in the final state via the process $pp(\\bar{p}) \\rightarrow G+jet$, Also, we present an analysis in context of the RS model with 5-dimensions via the process $pp(\\bar{p}) \\rightarrow G+jet$, $G \\rightarrow e^{+}e^{-}$ with final state $e^{+}e^{-}+jet$. We used Monte Carlo event generator Pythia8 to produce efficient signal selection rules at the Large Hadron Collider with $\\sqrt{s}$=14TeV and at the Tevatron $\\sqrt{s}$=1.96TeV .

  9. Contribution of Monte-Carlo modeling for understanding long-term behavior of nuclear glasses

    Energy Technology Data Exchange (ETDEWEB)

    Minet, Y.; Ledieu, A.; Devreux, F.; Barboux, P.; Frugier, P.; Gin, S

    2004-07-01

    Monte-Carlo methods have been developed at CEA and Ecole Polytechnique to improve our understanding of the basic mechanisms that control glass dissolution kinetics. The models, based on dissolution and recondensation rates of the atoms, can reproduce the observed alteration rates and the evolutions of the alteration layers on simplified borosilicate glasses (based on SiO{sub 2}-B{sub 2}O{sub 3}-Na{sub 2}O) over a large range of compositions and alteration conditions. The basic models are presented, as well as their current evolutions to describe more complex glasses (introduction of Al, Zr, Ca oxides) and to take into account phenomena which may be predominant in the long run (such as diffusion in the alteration layer or secondary phase precipitation). The predictions are compared with the observations performed by techniques giving structural or textural information on the alteration layer (e.g. NMR, Small Angle X-ray Scattering). The paper concludes with proposals for further evolutions of Monte-Carlo models towards integration into a predictive modeling framework. (authors)

  10. Fission yield calculation using toy model based on Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)

    2015-09-30

    Toy model is a new approximation in predicting fission yield distribution. Toy model assumes nucleus as an elastic toy consist of marbles. The number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the real nucleus properties. In this research, the toy nucleons are only influenced by central force. A heavy toy nucleus induced by a toy nucleon will be split into two fragments. These two fission fragments are called fission yield. In this research, energy entanglement is neglected. Fission process in toy model is illustrated by two Gaussian curves intersecting each other. There are five Gaussian parameters used in this research. They are scission point of the two curves (R{sub c}), mean of left curve (μ{sub L}) and mean of right curve (μ{sub R}), deviation of left curve (σ{sub L}) and deviation of right curve (σ{sub R}). The fission yields distribution is analyses based on Monte Carlo simulation. The result shows that variation in σ or µ can significanly move the average frequency of asymmetry fission yields. This also varies the range of fission yields distribution probability. In addition, variation in iteration coefficient only change the frequency of fission yields. Monte Carlo simulation for fission yield calculation using toy model successfully indicates the same tendency with experiment results, where average of light fission yield is in the range of 90

  11. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods which depend on the step-size level . hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels . ∞>h0>h1⋯>hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level . hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  12. Corner wetting in the two-dimensional Ising model: Monte Carlo results

    Science.gov (United States)

    Albano, E. V.; DeVirgiliis, A.; Müller, M.; Binder, K.

    2003-01-01

    Square L × L (L = 24-128) Ising lattices with nearest neighbour ferromagnetic exchange are considered using free boundary conditions at which boundary magnetic fields ± h are applied, i.e., at the two boundary rows ending at the lower left corner a field +h acts, while at the two boundary rows ending at the upper right corner a field -h acts. For temperatures T less than the critical temperature Tc of the bulk, this boundary condition leads to the formation of two domains with opposite orientations of the magnetization direction, separated by an interface which for T larger than the filling transition temperature Tf (h) runs from the upper left corner to the lower right corner, while for T interface is localized either close to the lower left corner or close to the upper right corner. Numerous theoretical predictions for the critical behaviour of this 'corner wetting' or 'wedge filling' transition are tested by Monte Carlo simulations. In particular, it is shown that for T = Tf (h) the magnetization profile m(z) in the z-direction normal to the interface is simply linear and the interfacial width scales as w propto L, while for T > Tf (h) it scales as w proptosurd L. The distribution P (ell) of the interface position ell (measured along the z-direction from the corners) decays exponentially for T Tf (h). Furthermore, the Monte Carlo data are compatible with langleellrangle propto (Tf (h) - T)-1 and a finite size scaling of the total magnetization according to M(L, T) = tilde M {(1 - T/Tf (h))nubot L} with nubot = 1. Unlike the findings for critical wetting in the thin film geometry of the Ising model, the Monte Carlo results for corner wetting are in very good agreement with the theoretical predictions.

  13. Bayesian phylogenetic model selection using reversible jump Markov chain Monte Carlo.

    Science.gov (United States)

    Huelsenbeck, John P; Larget, Bret; Alfaro, Michael E

    2004-06-01

    A common problem in molecular phylogenetics is choosing a model of DNA substitution that does a good job of explaining the DNA sequence alignment without introducing superfluous parameters. A number of methods have been used to choose among a small set of candidate substitution models, such as the likelihood ratio test, the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and Bayes factors. Current implementations of any of these criteria suffer from the limitation that only a small set of models are examined, or that the test does not allow easy comparison of non-nested models. In this article, we expand the pool of candidate substitution models to include all possible time-reversible models. This set includes seven models that have already been described. We show how Bayes factors can be calculated for these models using reversible jump Markov chain Monte Carlo, and apply the method to 16 DNA sequence alignments. For each data set, we compare the model with the best Bayes factor to the best models chosen using AIC and BIC. We find that the best model under any of these criteria is not necessarily the most complicated one; models with an intermediate number of substitution types typically do best. Moreover, almost all of the models that are chosen as best do not constrain a transition rate to be the same as a transversion rate, suggesting that it is the transition/transversion rate bias that plays the largest role in determining which models are selected. Importantly, the reversible jump Markov chain Monte Carlo algorithm described here allows estimation of phylogeny (and other phylogenetic model parameters) to be performed while accounting for uncertainty in the model of DNA substitution.

  14. Monte-Carlo Simulation of Ising Model%Ising 模型的Monte-Carlo模拟

    Institute of Scientific and Technical Information of China (English)

    吴国军; 胡经国

    2000-01-01

    在平面四角点阵上,以Ising模型为框架,在IBM-PC机上用Mont e-Carlo方法模拟了螺旋边界、半自由边界及自由边界条件下铁磁系统的相图,并与周期性边界条件下的结果作了比较.

  15. Core-scale solute transport model selection using Monte Carlo analysis

    CERN Document Server

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...

  16. Evaluation of angular scattering models for electron-neutral collisions in Monte Carlo simulations

    Science.gov (United States)

    Janssen, J. F. J.; Pitchford, L. C.; Hagelaar, G. J. M.; van Dijk, J.

    2016-10-01

    In Monte Carlo simulations of electron transport through a neutral background gas, simplifying assumptions related to the shape of the angular distribution of electron-neutral scattering cross sections are usually made. This is mainly because full sets of differential scattering cross sections are rarely available. In this work simple models for angular scattering are compared to results from the recent quantum calculations of Zatsarinny and Bartschat for differential scattering cross sections (DCS’s) from zero to 200 eV in argon. These simple models represent in various ways an approach to forward scattering with increasing electron energy. The simple models are then used in Monte Carlo simulations of range, straggling, and backscatter of electrons emitted from a surface into a volume filled with a neutral gas. It is shown that the assumptions of isotropic elastic scattering and of forward scattering for the inelastic collision process yield results within a few percent of those calculated using the DCS’s of Zatsarinny and Bartschat. The quantities which were held constant in these comparisons are the elastic momentum transfer and total inelastic cross sections.

  17. Derivation of a Monte Carlo method for modeling heterodyne detection in optical coherence tomography systems

    DEFF Research Database (Denmark)

    Tycho, Andreas; Jørgensen, Thomas Martini; Andersen, Peter E.

    2002-01-01

    A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity emb edded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach to this opti......A Monte Carlo (MC) method for modeling optical coherence tomography (OCT) measurements of a diffusely reflecting discontinuity emb edded in a scattering medium is presented. For the first time to the authors' knowledge it is shown analytically that the applicability of an MC approach...... to this optical geometry is firmly justified, because, as we show, in the conjugate image plane the field reflected from the sample is delta-correlated from which it follows that the heterodyne signal is calculated from the intensity distribution only. This is not a trivial result because, in general, the light...... focused beam, and it is shown that in free space the full three-dimensional intensity distribution of a Gaussian beam is obtained. The OCT signal and the intensity distribution in a scattering medium have been obtained for several geometries with the suggested MC method; when this model and a recently...

  18. The First 24 Years of Reverse Monte Carlo Modelling, Budapest, Hungary, 20-22 September 2012

    Science.gov (United States)

    Keen, David A.; Pusztai, László

    2013-11-01

    This special issue contains a collection of papers reflecting the content of the fifth workshop on reverse Monte Carlo (RMC) methods, held in a hotel on the banks of the Danube in the Budapest suburbs in the autumn of 2012. Over fifty participants gathered to hear talks and discuss a broad range of science based on the RMC technique in very convivial surroundings. Reverse Monte Carlo modelling is a method for producing three-dimensional disordered structural models in quantitative agreement with experimental data. The method was developed in the late 1980s and has since achieved wide acceptance within the scientific community [1], producing an average of over 90 papers and 1200 citations per year over the last five years. It is particularly suitable for the study of the structures of liquid and amorphous materials, as well as the structural analysis of disordered crystalline systems. The principal experimental data that are modelled are obtained from total x-ray or neutron scattering experiments, using the reciprocal space structure factor and/or the real space pair distribution function (PDF). Additional data might be included from extended x-ray absorption fine structure spectroscopy (EXAFS), Bragg peak intensities or indeed any measured data that can be calculated from a three-dimensional atomistic model. It is this use of total scattering (diffuse and Bragg), rather than just the Bragg peak intensities more commonly used for crystalline structure analysis, which enables RMC modelling to probe the often important deviations from the average crystal structure, to probe the structures of poorly crystalline or nanocrystalline materials, and the local structures of non-crystalline materials where only diffuse scattering is observed. This flexibility across various condensed matter structure-types has made the RMC method very attractive in a wide range of disciplines, as borne out in the contents of this special issue. It is however important to point out that since

  19. Equilibrium Statistics: Monte Carlo Methods

    Science.gov (United States)

    Kröger, Martin

    Monte Carlo methods use random numbers, or ‘random’ sequences, to sample from a known shape of a distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo ‘moves’, required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
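
    As a minimal illustration of the kind of sampling recipe summarized in such a chapter, the sketch below (an illustrative addition, not part of the original record) draws equilibrated samples from a toy one-dimensional Boltzmann-like distribution with the Metropolis rule; the double-well potential, inverse temperature and step size are all arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)

    def energy(x):
        # toy double-well potential; stands in for a physical energy
        return (x**2 - 1.0)**2

    def metropolis(n_samples, beta=2.0, step=0.5, x0=0.0):
        """Sample exp(-beta*E(x)) with a random-walk Metropolis rule."""
        x, samples = x0, []
        for _ in range(n_samples):
            x_new = x + rng.uniform(-step, step)
            if rng.random() < np.exp(-beta * (energy(x_new) - energy(x))):
                x = x_new                      # accept the move
            samples.append(x)                  # keep current state either way
        return np.array(samples)

    xs = metropolis(50_000)
    print("<x^2> estimate:", np.mean(xs[5_000:]**2))   # discard burn-in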

  20. Microscopic nuclear level densities by the shell model Monte Carlo method

    CERN Document Server

    Alhassid, Y; Gilbreth, C N; Nakada, H; Özen, C

    2016-01-01

    The configuration-interaction shell model approach provides an attractive framework for the calculation of nuclear level densities in the presence of correlations, but the large dimensionality of the model space has hindered its application in mid-mass and heavy nuclei. The shell model Monte Carlo (SMMC) method permits calculations in model spaces that are many orders of magnitude larger than spaces that can be treated by conventional diagonalization methods. We discuss recent progress in the SMMC approach to level densities, and in particular the calculation of level densities in heavy nuclei. We calculate the distribution of the axial quadrupole operator in the laboratory frame at finite temperature and demonstrate that it is a model-independent signature of deformation in the rotational invariant framework of the shell model. We propose a method to use these distributions for calculating level densities as a function of intrinsic deformation.

  1. Kinetic Monte-Carlo modeling of hydrogen retention and re-emission from Tore Supra deposits

    Energy Technology Data Exchange (ETDEWEB)

    Rai, A. [Max-Planck-Institut fuer Plasmaphysik, D-17491 Greifswald (Germany)], E-mail: Abha.Rai@ipp.mpg.de; Schneider, R. [Max-Planck-Institut fuer Plasmaphysik, D-17491 Greifswald (Germany); Warrier, M. [Computational Analysis Division, BARC, Trombay, Mumbai 400085 (India); Roubin, P.; Martin, C.; Richou, M. [PIIM, Universite de Provence, Centre Saint-Jerome, (service 242) F-13397 Marseille cedex 20 (France)

    2009-04-30

    A multi-scale model has been developed to study the reactive-diffusive transport of hydrogen in porous graphite [A. Rai, R. Schneider, M. Warrier, J. Nucl. Mater. (submitted for publication). http://dx.doi.org/10.1016/j.jnucmat.2007.08.013.]. The deposits found on the leading edge of the neutralizer of Tore Supra are multi-scale in nature, consisting of micropores with a typical size below 2 nm ({approx}11%), mesopores ({approx}5%) and macropores with a typical size above 50 nm [C. Martin, M. Richou, W. Sakaily, B. Pegourie, C. Brosset, P. Roubin, J. Nucl. Mater. 363-365 (2007) 1251]. Kinetic Monte-Carlo (KMC) has been used to study the hydrogen transport at meso-scales. The recombination rate and the diffusion coefficient calculated at the meso-scale were used as input to scale up and analyze the hydrogen transport at the macro-scale, where a combination of KMC and MCD (Monte-Carlo diffusion) methods was used. The flux dependence of hydrogen recycling has been studied. The retention and re-emission analysis of the model has been extended to study the chemical erosion process based on the Kueppers-Hopf cycle [M. Wittmann, J. Kueppers, J. Nucl. Mater. 227 (1996) 186].

  2. A Monte-Carlo based model of the AX-PET demonstrator and its experimental validation.

    Science.gov (United States)

    Solevi, P; Oliver, J F; Gillam, J E; Bolle, E; Casella, C; Chesi, E; De Leo, R; Dissertori, G; Fanti, V; Heller, M; Lai, M; Lustermann, W; Nappi, E; Pauss, F; Rudge, A; Ruotsalainen, U; Schinzel, D; Schneider, T; Séguinot, J; Stapnes, S; Weilhammer, P; Tuna, U; Joram, C; Rafecas, M

    2013-08-21

    AX-PET is a novel PET detector based on axially oriented crystals and orthogonal wavelength shifter (WLS) strips, both individually read out by silicon photo-multipliers. Its design decouples sensitivity and spatial resolution, by reducing the parallax error due to the layered arrangement of the crystals. Additionally the granularity of AX-PET enhances the capability to track photons within the detector yielding a large fraction of inter-crystal scatter events. These events, if properly processed, can be included in the reconstruction stage further increasing the sensitivity. Its unique features require dedicated Monte-Carlo simulations, enabling the development of the device, interpreting data and allowing the development of reconstruction codes. At the same time the non-conventional design of AX-PET poses several challenges to the simulation and modeling tasks, mostly related to the light transport and distribution within the crystals and WLS strips, as well as the electronics readout. In this work we present a hybrid simulation tool based on an analytical model and a Monte-Carlo based description of the AX-PET demonstrator. It was extensively validated against experimental data, providing excellent agreement.

  3. Optical model for port-wine stain skin and its Monte Carlo simulation

    Science.gov (United States)

    Xu, Lanqing; Xiao, Zhengying; Chen, Rong; Wang, Ying

    2008-12-01

    Laser irradiation is currently the most widely accepted therapy for PWS patients. Its efficacy is highly dependent on the rules governing energy deposition in skin. To achieve optimal PWS treatment parameters, a better understanding of light propagation in PWS skin is indispensable. Traditional Monte Carlo simulations using simple geometries, such as planar layered tissue models, cannot provide the energy deposition in skin containing enlarged blood vessels. In this paper the structure of normal skin and the pathological characteristics of PWS skin are analyzed in detail, and the true structure is simplified into a hybrid layered mathematical model that captures the two most important aspects of PWS skin: the layered structure and the overabundant dermal vessels. The basic laser-tissue interaction mechanisms in skin are investigated, together with the optical parameters of PWS skin tissue at the therapeutic wavelength. Monte Carlo (MC) based techniques were chosen to calculate the energy deposition in the skin. The results can be used in choosing the optical dosage, and further simulations can be used to predict optimal laser parameters to achieve high-efficacy laser treatment of PWS.

  4. New Generation of the Monte Carlo Shell Model for the K Computer Era

    CERN Document Server

    Shimizu, Noritaka; Tsunoda, Yusuke; Utsuno, Yutaka; Yoshida, Tooru; Mizusaki, Takahiro; Honma, Michio; Otsuka, Takaharu

    2012-01-01

    We present a newly enhanced version of the Monte Carlo Shell Model method by incorporating the conjugate gradient method and energy-variance extrapolation. This new method enables us to perform large-scale shell-model calculations that the direct diagonalization method cannot reach. This new generation framework of the MCSM provides us with a powerful tool to perform most-advanced large-scale shell-model calculations on current massively parallel computers such as the K computer. We discuss the validity of this method in ab initio calculations of light nuclei, and propose a new method to describe the intrinsic wave function in terms of the shell-model picture. We also apply this new MCSM to the study of neutron-rich Cr and Ni isotopes using the conventional shell-model calculations with an inert 40Ca core and discuss how the magicity of N = 28, 40, 50 remains or is broken.

  5. Quantitative photoacoustic tomography using forward and adjoint Monte Carlo models of radiance

    CERN Document Server

    Hochuli, Roman; Arridge, Simon; Cox, Ben

    2016-01-01

    Forward and adjoint Monte Carlo (MC) models of radiance are proposed for use in model-based quantitative photoacoustic tomography. A 2D radiance MC model using a harmonic angular basis is introduced and validated against analytic solutions for the radiance in heterogeneous media. A gradient-based optimisation scheme is then used to recover 2D absorption and scattering coefficients distributions from simulated photoacoustic measurements. It is shown that the functional gradients, which are a challenge to compute efficiently using MC models, can be calculated directly from the coefficients of the harmonic angular basis used in the forward and adjoint models. This work establishes a framework for transport-based quantitative photoacoustic tomography that can fully exploit emerging highly parallel computing architectures.

  6. Modeling of radiation-induced bystander effect using Monte Carlo methods

    Science.gov (United States)

    Xia, Junchao; Liu, Liteng; Xue, Jianming; Wang, Yugang; Wu, Lijun

    2009-03-01

    Experiments have shown that the radiation-induced bystander effect exists in cells, tissues, and even whole biological organisms when irradiated with energetic ions or X-rays. In this paper, a Monte Carlo model is developed to study the mechanisms of the bystander effect under conditions in which cells are sparsely populated. This model, based on our previous experiment in which cells were sparsely located in a round dish, focuses mainly on the spatial characteristics. The simulation results are in good agreement with the experimental data. Moreover, another bystander-effect experiment is also computed with this model, and the model succeeds in predicting its results. The comparison of simulations with the experimental results indicates the feasibility of the model and the validity of some of the vital mechanisms assumed.

  7. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
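
    As a rough sketch of power estimation by simulation (under assumed effect sizes and sample size, and not using the bmem package interface mentioned in the record), the snippet below repeatedly simulates data from a simple X -> M -> Y mediation model, forms a percentile-bootstrap confidence interval for the indirect effect a*b, and reports the fraction of replications in which the interval excludes zero.

    import numpy as np

    rng = np.random.default_rng(1)

    def slope(u, v):
        """Ordinary least-squares slope of v on u."""
        uc = u - u.mean()
        return uc @ (v - v.mean()) / (uc @ uc)

    def simulate_power(n=100, a=0.3, b=0.3, n_rep=300, n_boot=500, alpha=0.05):
        """Monte Carlo estimate of the power of the percentile-bootstrap test for a*b."""
        hits = 0
        for _ in range(n_rep):
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)            # mediator equation
            y = b * m + rng.normal(size=n)            # outcome equation (no direct effect assumed)
            ab = np.empty(n_boot)
            for k in range(n_boot):
                idx = rng.integers(0, n, n)           # resample cases with replacement
                ab[k] = slope(x[idx], m[idx]) * slope(m[idx], y[idx])
            lo, hi = np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
            hits += (lo > 0) or (hi < 0)              # CI excluding zero counts as detection
        return hits / n_rep

    print("estimated power:", simulate_power())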

  8. Study of dispersion forces with quantum Monte Carlo: toward a continuum model for solvation.

    Science.gov (United States)

    Amovilli, Claudio; Floris, Franca Maria

    2015-05-28

    We present a general method to compute dispersion interaction energy that, starting from London's interpretation, is based on the measure of the electronic electric field fluctuations, evaluated on electronic sampled configurations generated by quantum Monte Carlo. A damped electric field was considered in order to avoid divergence in the variance. Dispersion atom-atom C6 van der Waals coefficients were computed by coupling electric field fluctuations with static dipole polarizabilities. The dipole polarizability was evaluated at the diffusion Monte Carlo level by studying the response of the system to a constant external electric field. We extended the method to the calculation of the dispersion contribution to the free energy of solvation in the framework of the polarizable continuum model. We performed test calculations on pairs of some atomic systems. We considered He in ground and low lying excited states and Ne in the ground state and obtained a good agreement with literature data. We also made calculations on He, Ne, and F(-) in water as the solvent. Resulting dispersion contribution to the free energy of solvation shows the reliability of the method illustrated here.

  9. Spreaders and sponges define metastasis in lung cancer: a Markov chain Monte Carlo mathematical model.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Norton, Larry; Kuhn, Peter

    2013-05-01

    The classic view of metastatic cancer progression is that it is a unidirectional process initiated at the primary tumor site, progressing to variably distant metastatic sites in a fairly predictable, although not perfectly understood, fashion. A Markov chain Monte Carlo mathematical approach can determine a pathway diagram that classifies metastatic tumors as "spreaders" or "sponges" and orders the timescales of progression from site to site. In light of recent experimental evidence highlighting the potential significance of self-seeding of primary tumors, we use a Markov chain Monte Carlo (MCMC) approach, based on large autopsy data sets, to quantify the stochastic, systemic, and often multidirectional aspects of cancer progression. We quantify three types of multidirectional mechanisms of progression: (i) self-seeding of the primary tumor, (ii) reseeding of the primary tumor from a metastatic site (primary reseeding), and (iii) reseeding of metastatic tumors (metastasis reseeding). The model shows that the combined characteristics of the primary and the first metastatic site to which it spreads largely determine the future pathways and timescales of systemic disease.
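
    To make the Markov-chain picture concrete, the following sketch simulates progression pathways over a small set of sites using an entirely hypothetical transition matrix (not the autopsy-derived probabilities used by the authors) and tallies how often each site is visited.

    import numpy as np

    rng = np.random.default_rng(2)

    sites = ["lung (primary)", "lymph nodes", "liver", "bone", "adrenal"]
    # hypothetical row-stochastic transition matrix (each row sums to 1)
    P = np.array([
        [0.50, 0.20, 0.15, 0.10, 0.05],
        [0.10, 0.40, 0.25, 0.15, 0.10],
        [0.05, 0.15, 0.55, 0.15, 0.10],
        [0.05, 0.10, 0.15, 0.60, 0.10],
        [0.05, 0.15, 0.20, 0.15, 0.45],
    ])

    def simulate_path(steps=6, start=0):
        """One Monte Carlo realization of a metastatic pathway."""
        state, path = start, [start]
        for _ in range(steps):
            state = rng.choice(len(sites), p=P[state])
            path.append(state)
        return path

    visits = np.zeros(len(sites))
    for _ in range(10_000):
        for s in simulate_path():
            visits[s] += 1
    for name, v in zip(sites, visits / visits.sum()):
        print(f"{name:15s} {v:.3f}")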

  10. A virtual source method for Monte Carlo simulation of Gamma Knife Model C

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    2016-05-15

    The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines particle paths and dosimetry using random numbers. Recently, owing to the fast processing ability of modern computers, it has become possible to treat a patient more precisely, but long simulation times are still required to reduce the statistical uncertainty. When generating particles from the cobalt source in a simulation, many particles are cut off, so accurate simulations are time consuming. To improve the efficiency, we generated a virtual source with the phase-space distribution acquired from a single Gamma Knife channel. We performed simulations using virtual sources on all 201 channels and compared measurements with simulations using virtual and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than with the original source description, and there was no statistically significant difference in the simulated results.

  11. Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations

    Science.gov (United States)

    Reims, N.; Sukowski, F.; Uhlmann, N.

    2011-01-01

    Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort a simulation model can be developed which matches the real detector with regard to signal transfer. The second model allows a more detailed insight into the system. It is based on the well established cascade theory, i.e. it describes the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time consuming, both models have in common that only a relatively small number of manufacturer parameters is needed. The results of both models were in good agreement with the measured parameters of the real system.

  12. Collectivity in Heavy Nuclei in the Shell Model Monte Carlo Approach

    CERN Document Server

    Özen, C; Nakada, H

    2013-01-01

    The microscopic description of collectivity in heavy nuclei in the framework of the configuration-interaction shell model has been a major challenge. The size of the model space required for the description of heavy nuclei prohibits the use of conventional diagonalization methods. We have overcome this difficulty by using the shell model Monte Carlo (SMMC) method, which can treat model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We identify a thermal observable that can distinguish between vibrational and rotational collectivity and use it to describe the crossover from vibrational to rotational collectivity in families of even-even rare-earth isotopes. We calculate the state densities in these nuclei and find them to be in close agreement with experimental data. We also calculate the collective enhancement factors of the corresponding level densities and find that their decay with excitation energy is correlated with the pairing and shape phase tran...

  13. Monte Carlo evaluation of biological variation: Random generation of correlated non-Gaussian model parameters

    Science.gov (United States)

    Hertog, Maarten L. A. T. M.; Scheerlinck, Nico; Nicolaï, Bart M.

    2009-01-01

    When modelling the behaviour of horticultural products that exhibit large sources of biological variation, we often run into the issue of non-Gaussian distributed model parameters. This work presents an algorithm to reproduce such correlated non-Gaussian model parameters for use in Monte Carlo simulations. The algorithm works around the problem of non-Gaussian distributions by transforming the observed non-Gaussian probability distributions with a proposed SKN-distribution function before applying the covariance decomposition algorithm to generate Gaussian random co-varying parameter sets. The proposed SKN-distribution function is based on the standard Gaussian distribution function and can exhibit different degrees of both skewness and kurtosis. The technique is demonstrated in a case study on modelling the ripening of tomato fruit, evaluating the propagation of biological variation with time.
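
    A minimal sketch of the same general idea, using a Gaussian copula with skew-normal marginals as a stand-in for the SKN transform described in the record (the marginals, parameter names and target correlation below are assumptions chosen for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # assumed target correlation between two model parameters
    corr = np.array([[1.0, 0.7],
                     [0.7, 1.0]])
    L = np.linalg.cholesky(corr)

    # 1) correlated standard-normal draws via covariance (Cholesky) decomposition
    z = rng.standard_normal((10_000, 2)) @ L.T

    # 2) map each margin through a non-Gaussian (here skew-normal) quantile function
    u = stats.norm.cdf(z)                                        # uniform marginals, correlation kept
    p1 = stats.skewnorm.ppf(u[:, 0], a=4, loc=1.0, scale=0.2)    # e.g. a rate constant (hypothetical)
    p2 = stats.skewnorm.ppf(u[:, 1], a=-3, loc=20.0, scale=2.0)  # e.g. initial firmness (hypothetical)

    print("sample correlation:", np.corrcoef(p1, p2)[0, 1])
    print("skewness of p1:", stats.skew(p1))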

  14. Monte Carlo Simulation of a Novel Classical Spin Model with a Tricritical Point

    Science.gov (United States)

    Cary, Tyler; Scalettar, Richard; Singh, Rajiv

    Recent experimental findings along with motivation from the well known Blume-Capel model has led to the development of a novel two-dimensional classical spin model defined on a square lattice. This model consists of two Ising spin species per site with each species interacting with its own kind as perpendicular one dimensional Ising chains along with complex and frustrating interactions between species. Probing this model with Mean Field Theory, Metropolis Monte Carlo, and Wang Landau sampling has revealed a rich phase diagram which includes a tricritical point separating a first order magnetic phase transition from a continuous one, along with three ordered phases. Away from the tricritical point, the expected 2D Ising critical exponents have been recovered. Ongoing work focuses on finding the tricritical exponents and their connection to a supersymmetric critical point.

  15. Iterative optimisation of Monte Carlo detector models using measurements and simulations

    Energy Technology Data Exchange (ETDEWEB)

    Marzocchi, O., E-mail: olaf@marzocchi.net [European Patent Office, Rijswijk (Netherlands); Leone, D., E-mail: debora.leone@kit.edu [Institute for Nuclear Waste Disposal, Karlsruhe Institute of Technology, Karlsruhe (Germany)

    2015-04-11

    This work proposes a new technique to optimise the Monte Carlo models of radiation detectors, offering the advantage of significantly lower user effort and therefore improved work efficiency compared with prior techniques. The method consists of four steps, two of which are iterative and suitable for automation using scripting languages: the acquisition of laboratory measurement data to be used as reference; the modification of a previously available detector model; the simulation of a tentative model of the detector to obtain the coefficients of a set of linear equations; and the solution of the system of equations and the update of the detector model. Steps three and four can be repeated for more accurate results. This method avoids the “try and fail” approach typical of the prior techniques.

  16. Monte Carlo modeling of proton therapy installations: a global experimental method to validate secondary neutron dose calculations.

    Science.gov (United States)

    Farah, J; Martinetti, F; Sayah, R; Lacoste, V; Donadille, L; Trompier, F; Nauraye, C; De Marzi, L; Vabre, I; Delacroix, S; Hérault, J; Clairand, I

    2014-06-07

    Monte Carlo calculations are increasingly used to assess stray radiation dose to healthy organs of proton therapy patients and estimate the risk of secondary cancer. Among the secondary particles, neutrons are of primary concern due to their high relative biological effectiveness. The validation of Monte Carlo simulations for out-of-field neutron doses remains however a major challenge to the community. Therefore this work focused on developing a global experimental approach to test the reliability of the MCNPX models of two proton therapy installations operating at 75 and 178 MeV for ocular and intracranial tumor treatments, respectively. The method consists of comparing Monte Carlo calculations against experimental measurements of: (a) neutron spectrometry inside the treatment room, (b) neutron ambient dose equivalent at several points within the treatment room, (c) secondary organ-specific neutron doses inside the Rando-Alderson anthropomorphic phantom. Results have proven that Monte Carlo models correctly reproduce secondary neutrons within the two proton therapy treatment rooms. Sensitive differences between experimental measurements and simulations were nonetheless observed especially with the highest beam energy. The study demonstrated the need for improved measurement tools, especially at the high neutron energy range, and more accurate physical models and cross sections within the Monte Carlo code to correctly assess secondary neutron doses in proton therapy applications.

  18. Adaptive Markov chain Monte Carlo forward projection for statistical analysis in epidemic modelling of human papillomavirus.

    Science.gov (United States)

    Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G

    2013-05-20

    A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a very multi-typed and common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, types 6 and 11, are causing about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
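
    As a toy illustration of the adaptive random-walk Metropolis idea underlying such calibrations (the target below is a simple two-parameter Gaussian posterior, not the HPV ODE model), the proposal covariance is periodically re-estimated from the chain history:

    import numpy as np

    rng = np.random.default_rng(4)

    def log_post(theta):
        # stand-in log-posterior: correlated 2-D Gaussian
        mu = np.array([1.0, -0.5])
        cov = np.array([[1.0, 0.8], [0.8, 1.0]])
        d = theta - mu
        return -0.5 * d @ np.linalg.solve(cov, d)

    def adaptive_metropolis(n_iter=20_000, adapt_every=500):
        theta = np.zeros(2)
        prop_cov = 0.1 * np.eye(2)            # initial proposal covariance
        chain = np.empty((n_iter, 2))
        lp = log_post(theta)
        for i in range(n_iter):
            cand = rng.multivariate_normal(theta, prop_cov)
            lp_cand = log_post(cand)
            if np.log(rng.random()) < lp_cand - lp:
                theta, lp = cand, lp_cand     # accept candidate
            chain[i] = theta
            # adapt: rescale the empirical covariance of the chain so far (Haario-style)
            if i >= 1_000 and i % adapt_every == 0:
                prop_cov = 2.38**2 / 2 * np.cov(chain[:i].T) + 1e-6 * np.eye(2)
        return chain

    chain = adaptive_metropolis()
    print("posterior mean estimate:", chain[5_000:].mean(axis=0))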

  19. Clinical management and burden of prostate cancer: a Markov Monte Carlo model.

    Directory of Open Access Journals (Sweden)

    Chiranjeev Sanyal

    Full Text Available BACKGROUND: Prostate cancer (PCa) is the most common non-skin cancer among men in developed countries. Several novel treatments have been adopted by healthcare systems to manage PCa. Most of the observational studies and randomized trials on PCa have concurrently evaluated fewer treatments over short follow-up. Further, preceding decision analytic models on PCa management have not evaluated various contemporary management options. Therefore, a contemporary decision analytic model was necessary to address limitations to the literature by synthesizing the evidence on novel treatments thereby forecasting short and long-term clinical outcomes. OBJECTIVES: To develop and validate a Markov Monte Carlo model for the contemporary clinical management of PCa, and to assess the clinical burden of the disease from diagnosis to end-of-life. METHODS: A Markov Monte Carlo model was developed to simulate the management of PCa in men 65 years and older from diagnosis to end-of-life. Health states modeled were: risk at diagnosis, active surveillance, active treatment, PCa recurrence, PCa recurrence free, metastatic castrate resistant prostate cancer, overall and PCa death. Treatment trajectories were based on state transition probabilities derived from the literature. Validation and sensitivity analyses assessed the accuracy and robustness of model predicted outcomes. RESULTS: Validation indicated model predicted rates were comparable to observed rates in the published literature. The simulated distribution of clinical outcomes for the base case was consistent with sensitivity analyses. Predicted rate of clinical outcomes and mortality varied across risk groups. Life expectancy and health adjusted life expectancy predicted for the simulated cohort were 20.9 years (95% CI 20.5-21.3) and 18.2 years (95% CI 17.9-18.5), respectively. CONCLUSION: Study findings indicated contemporary management strategies improved survival and quality of life in patients with PCa. This

  20. Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code

    Science.gov (United States)

    Merheb, C.; Petegnief, Y.; Talbot, J. N.

    2007-02-01

    Positron emission tomography (PET) systems dedicated to animal imaging are now widely used for biological studies. The scanner performance strongly depends on the design and the characteristics of the system. Many parameters must be optimized like the dimensions and type of crystals, geometry and field-of-view (FOV), sampling, electronics, lightguide, shielding, etc. Monte Carlo modelling is a powerful tool to study the effect of each of these parameters on the basis of realistic simulated data. Performance assessment in terms of spatial resolution, count rates, scatter fraction and sensitivity is an important prerequisite before the model can be used instead of real data for a reliable description of the system response function or for optimization of reconstruction algorithms. The aim of this study is to model the performance of the Philips Mosaic™ animal PET system using a comprehensive PET simulation code in order to understand and describe the origin of important factors that influence image quality. We use GATE, a Monte Carlo simulation toolkit for a realistic description of the ring PET model, the detectors, shielding, cap, electronic processing and dead times. We incorporate new features to adjust signal processing to the Anger logic underlying the Mosaic™ system. Special attention was paid to dead time and energy spectra descriptions. Sorting of simulated events in a list mode format similar to the system outputs was developed to compare experimental and simulated sensitivity and scatter fractions for different energy thresholds using various models of phantoms describing rat and mouse geometries. Count rates were compared for both cylindrical homogeneous phantoms. Simulated spatial resolution was fitted to experimental data for 18F point sources at different locations within the FOV with an analytical blurring function for electronic processing effects. Simulated and measured sensitivities differed by less than 3%, while scatter fractions agreed

  1. Dynamic Value at Risk: A Comparative Study Between Heteroscedastic Models and Monte Carlo Simulation

    Directory of Open Access Journals (Sweden)

    José Lamartine Távora Junior

    2006-12-01

    Full Text Available The objective of this paper was to analyze the risk management of a portfolio composed of Petrobras PN, Telemar PN and Vale do Rio Doce PNA stocks. It was verified whether the modeling of Value-at-Risk (VaR) through Monte Carlo simulation with volatility from the GARCH family is supported by the efficient-market hypothesis. The results show that the static evaluation is inferior to the dynamic one, evidencing that the dynamic analysis supports the efficient-market hypothesis for the Brazilian stock market, in opposition to some empirical evidence. It was also verified that GARCH volatility models are sufficient to accommodate the variations of the Brazilian stock market, since the model is capable of accommodating its highly dynamic behaviour.
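
    A compact sketch of the dynamic-VaR idea, with made-up GARCH(1,1) parameters and simulated returns rather than the Petrobras, Telemar and Vale do Rio Doce data:

    import numpy as np

    rng = np.random.default_rng(5)

    # assumed GARCH(1,1) parameters: sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2
    omega, alpha, beta = 1e-6, 0.08, 0.90

    def one_day_var(r_last, sig2_last, n_paths=50_000, level=0.99, horizon=1):
        """Monte Carlo VaR of a unit portfolio under GARCH(1,1) volatility."""
        sig2 = omega + alpha * r_last**2 + beta * sig2_last
        losses = np.empty(n_paths)
        for i in range(n_paths):
            s2, pnl = sig2, 0.0
            for _ in range(horizon):
                r = np.sqrt(s2) * rng.standard_normal()   # one simulated daily return
                pnl += r
                s2 = omega + alpha * r**2 + beta * s2     # update conditional variance
            losses[i] = -pnl
        return np.quantile(losses, level)

    print("99% 1-day VaR (fraction of portfolio):",
          one_day_var(r_last=-0.02, sig2_last=2.5e-4))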

  2. An Efficient Monte Carlo Method for Modeling Radiative Transfer in Protoplanetary Disks

    Science.gov (United States)

    Kim, Stacy

    2011-01-01

    Monte Carlo methods have been shown to be effective and versatile in modeling radiative transfer processes to calculate model temperature profiles for protoplanetary disks. Temperature profiles are important for connecting physical structure to observation and for understanding the conditions for planet formation and migration. However, certain areas of the disk, such as the optically thick disk interior, are under-sampled, or are of particular interest, such as the snow line (where water vapor condenses into ice) and the area surrounding a protoplanet. To improve the sampling, photon packets can be preferentially scattered and reemitted toward the preferred locations at the cost of weighting packet energies to conserve the average energy flux. Here I report on the weighting schemes developed, how they can be applied to various models, and how they affect simulation mechanics and results. We find that improvements in sampling do not always imply similar improvements in temperature accuracies and calculation speeds.

  3. Monte Carlo simulations of a supersymmetric matrix model of dynamical compactification in non perturbative string theory

    CERN Document Server

    Anagnostopoulos, Konstantinos N; Nishimura, Jun

    2012-01-01

    The IKKT or IIB matrix model has been postulated to be a non-perturbative definition of superstring theory. It has the attractive feature that spacetime is dynamically generated, which makes possible the scenario of dynamical compactification of extra dimensions; in the Euclidean model this manifests itself through the spontaneous breaking of the SO(10) rotational invariance (SSB). In this work we study, using Monte Carlo simulations, the 6-dimensional version of the Euclidean IIB matrix model. Simulations are found to be plagued by a strong complex action problem, and the factorization method is used for effective sampling and computing expectation values of the extent of spacetime in various dimensions. Our results are consistent with calculations using the Gaussian Expansion method, which predict SSB to SO(3) symmetric vacua, a finite universal extent of the compactified dimensions and a finite spacetime volume.

  4. Monte Carlo Particle Lists: MCPL

    CERN Document Server

    Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi

    2016-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.

  5. The effect of a number of selective points in modeling of polymerization reacting Monte Carlo method: studying the initiation reaction

    CERN Document Server

    Sadi, M; Dabir, B

    2003-01-01

    The Monte Carlo method is one of the most powerful techniques for modelling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. The number of algorithm repetitions (the selected volume of reactor for modelling, which determines the number of initial molecules) is very important in this method. In the Monte Carlo method, calculations are based on random number generation and reaction probability determinations, so the number of algorithm repetitions is very important. In this paper, the initiation reaction is considered alone and the influence of the number of initiator molecules on the results is studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not big enough, because in that case the selected volume would not be representative of the whole system.
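
    The dependence on the number of molecules is easy to reproduce with a toy sketch (the rate constant, time and counts below are arbitrary): first-order initiator decomposition is simulated with N molecules and the scatter of the Monte Carlo estimate of the decomposed fraction is compared against the analytical value 1 - exp(-kt).

    import numpy as np

    rng = np.random.default_rng(6)
    k, t = 1e-4, 3600.0               # assumed decomposition rate constant (1/s) and time (s)
    p = 1.0 - np.exp(-k * t)          # analytical probability that one initiator has decomposed

    def mc_fraction(n_molecules):
        """Monte Carlo estimate of the decomposed fraction with n_molecules initiators."""
        decomposed = rng.random(n_molecules) < p
        return decomposed.mean()

    for n in (100, 10_000, 1_000_000):
        estimates = [mc_fraction(n) for _ in range(20)]
        print(f"N={n:>9}: mean={np.mean(estimates):.4f}  spread={np.std(estimates):.4f}  exact={p:.4f}")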

  6. A Monte Carlo simulation model for stationary non-Gaussian processes

    DEFF Research Database (Denmark)

    Grigoriu, M.; Ditlevsen, Ove Dalager; Arwade, S. R.

    2003-01-01

    A class of stationary non-Gaussian processes, referred to as the class of mixtures of translation processes, is defined by their finite dimensional distributions consisting of mixtures of finite dimensional distributions of translation processes. The class of mixtures of translation processes includes translation processes and is useful for both Monte Carlo simulation and analytical studies. As for translation processes, the mixture of translation processes can have a wide range of marginal distributions and correlation functions. Moreover, these processes can match a broader range of second ... the proposed Monte Carlo algorithm and compare features of translation processes and mixtures of translation processes. Keywords: Monte Carlo simulation, non-Gaussian processes, sampling theorem, stochastic processes, translation processes

  7. Bayesian Modelling, Monte Carlo Sampling and Capital Allocation of Insurance Risks

    Directory of Open Access Journals (Sweden)

    Gareth W. Peters

    2017-09-01

    Full Text Available The main objective of this work is to develop a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods to solve practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method to calculate capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used is based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches for dealing with parameter uncertainty are discussed and simulation algorithms based on (pseudo-marginal) Sequential Monte Carlo algorithms are described and their efficiency is analysed.

  8. Recent Advances in the Microscopic Calculations of Level Densities by the Shell Model Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Alhassid Y.

    2014-04-01

    Full Text Available The shell model Monte Carlo (SMMC) method enables calculations in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods, and is particularly suitable for the calculation of level densities in the presence of correlations. We review recent advances and applications of SMMC for the microscopic calculation of level densities. Recent developments include (i) a method to calculate accurately the ground-state energy of an odd-mass nucleus, circumventing a sign problem that originates in the projection on an odd number of particles, and (ii) a method to calculate directly level densities, which, unlike state densities, do not include the spin degeneracy of the levels. We calculated the level densities of a family of nickel isotopes 59−64Ni and of a heavy deformed rare-earth nucleus 162Dy and found them to be in close agreement with various experimental data sets.

  10. Business Scenario Evaluation Method Using Monte Carlo Simulation on Qualitative and Quantitative Hybrid Model

    Science.gov (United States)

    Samejima, Masaki; Akiyoshi, Masanori; Mitsukuni, Koshichiro; Komoda, Norihisa

    We propose a business scenario evaluation method using a qualitative and quantitative hybrid model. In order to evaluate business factors with qualitative causal relations, we introduce statistical values based on the propagation and combination of the effects of business factors by Monte Carlo simulation. In propagating an effect, we divide the range of each factor by landmarks and decide the effect on a destination node based on the divided ranges. In combining effects, we decide the effect of each arc using its contribution degree and sum all effects. Through application to practical models, it is confirmed that there are no differences between results obtained by quantitative relations and results obtained by the proposed method at the risk rate of 5%.

  11. Density-based Monte Carlo filter and its applications in nonlinear stochastic differential equation models.

    Science.gov (United States)

    Huang, Guanghui; Wan, Jianping; Chen, Hui

    2013-02-01

    Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by the maximum likelihood estimator. However, EKF is inadequate for nonlinear PK/PD models, and MLE is known to be biased downwards. A density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M estimator is proposed to estimate the unknown parameters in this paper, where a genetic algorithm is designed to search for the optimal values of the pharmacokinetic parameters. The performances of EKF and DMF are compared through simulations for discrete-time and continuous-time systems, respectively, and it is found that the results based on DMF are more accurate than those given by EKF with respect to mean absolute error.
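
    As an illustrative stand-in for the filtering step (a generic bootstrap particle filter on a toy one-dimensional stochastic differential equation, not the authors' density-based filter or a PK/PD model):

    import numpy as np

    rng = np.random.default_rng(7)

    # toy model: dX = -0.5*X dt + 0.3 dW, observed as Y = X + noise
    dt, sigma_x, sigma_y, n_steps, n_part = 0.1, 0.3, 0.2, 100, 2_000

    # simulate a "true" trajectory and noisy observations
    x_true = np.zeros(n_steps)
    for t in range(1, n_steps):
        x_true[t] = x_true[t-1] - 0.5 * x_true[t-1] * dt + sigma_x * np.sqrt(dt) * rng.standard_normal()
    y_obs = x_true + sigma_y * rng.standard_normal(n_steps)

    # bootstrap particle filter
    particles = rng.standard_normal(n_part)
    estimates = np.zeros(n_steps)
    for t in range(n_steps):
        # propagate particles with the Euler-Maruyama discretisation of the SDE
        particles += -0.5 * particles * dt + sigma_x * np.sqrt(dt) * rng.standard_normal(n_part)
        # weight by the observation likelihood and resample
        w = np.exp(-0.5 * ((y_obs[t] - particles) / sigma_y) ** 2)
        w /= w.sum()
        particles = particles[rng.choice(n_part, n_part, p=w)]
        estimates[t] = particles.mean()

    print("RMSE of filtered state:", np.sqrt(np.mean((estimates - x_true) ** 2)))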

  12. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    Energy Technology Data Exchange (ETDEWEB)

    Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first principles based, reduced model for the CO oxidation on the RuO{sub 2}(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  13. A Monte Carlo method for critical systems in infinite volume: the planar Ising model

    CERN Document Server

    Herdeiro, Victor

    2016-01-01

    In this paper we propose a Monte Carlo method for generating finite-domain marginals of critical distributions of statistical models in infinite volume. The algorithm corrects the problem of the long-range effects of boundaries associated to generating critical distributions on finite lattices. It uses the advantage of scale invariance combined with ideas of the renormalization group in order to construct a type of "holographic" boundary condition that encodes the presence of an infinite volume beyond it. We check the quality of the distribution obtained in the case of the planar Ising model by comparing various observables with their infinite-plane prediction. We accurately reproduce planar two-, three- and four-point functions of spin and energy operators. We also define a lattice stress-energy tensor, and numerically obtain the associated conformal Ward identities and the Ising central charge.
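
    Independently of the boundary-condition construction described above, a bare-bones Metropolis sampler for the planar Ising model on a small periodic lattice (a standard textbook sketch, not the authors' holographic scheme) looks like this:

    import numpy as np

    rng = np.random.default_rng(8)

    L, beta = 32, 0.4406868   # lattice size and (approximately) the critical inverse temperature
    spins = rng.choice([-1, 1], size=(L, L))

    def sweep(spins):
        """One Metropolis sweep over the periodic LxL lattice."""
        for _ in range(L * L):
            i, j = rng.integers(0, L, 2)
            nn = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
            dE = 2 * spins[i, j] * nn          # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1

    for _ in range(200):                        # thermalisation sweeps
        sweep(spins)
    mags = []
    for _ in range(500):                        # measurement sweeps
        sweep(spins)
        mags.append(abs(spins.mean()))
    print("<|m|> at beta_c:", np.mean(mags))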

  15. Development of numerical models for Monte Carlo simulations of Th-Pb fuel assembly

    Directory of Open Access Journals (Sweden)

    Oettingen Mikołaj

    2017-01-01

    Full Text Available The thorium-uranium fuel cycle is a promising alternative to the uranium-plutonium fuel cycle, but it demands extensive research before its industrial application in commercial nuclear reactors can begin. The paper presents the development of numerical models of the thorium-lead (Th-Pb) fuel assembly for integral irradiation experiments. The Th-Pb assembly consists of a hexagonal array of ThO2 fuel rods and metallic Pb rods. The design of the assembly allows different combinations of rods for various types of irradiations and experimental measurements. The numerical model of the Th-Pb assembly was designed for numerical simulations with the continuous energy Monte Carlo Burnup code (MCB) implemented on the supercomputer Prometheus of the Academic Computer Centre Cyfronet AGH.

  16. Monte Carlo study of Lefschetz thimble structure in one-dimensional Thirring model at finite density

    CERN Document Server

    Fujii, Hirotsugu; Kikukawa, Yoshio

    2015-01-01

    We consider the one-dimensional massive Thirring model formulated on the lattice with staggered fermions and an auxiliary compact vector (link) field, which is exactly solvable and shows a phase transition with increasing the chemical potential of fermion number: the crossover at a finite temperature and the first order transition at zero temperature. We complexify its path-integration on Lefschetz thimbles and examine its phase transition by hybrid Monte Carlo simulations on the single dominant thimble. We observe a discrepancy between the numerical and exact results in the crossover region for small inverse coupling $\\beta$ and/or large lattice size $L$, while they are in good agreement at the lower and higher density regions. We also observe that the discrepancy persists in the continuum limit keeping the temperature finite and it becomes more significant toward the low-temperature limit. This numerical result is consistent with our analytical study of the model's thimble structure. And these results imply...

  17. Conformal or Walking? Monte Carlo renormalization group studies of SU(3) gauge models with fundamental fermions

    CERN Document Server

    Hasenfratz, Anna

    2010-01-01

    Strongly coupled gauge systems with many fermions are important in many phenomenological models. I use the 2-lattice matching Monte Carlo renormalization group method to study the fixed point structure and critical indexes of SU(3) gauge models with 8 and 12 flavors of fundamental fermions. With an improved renormalization group block transformation I am able to connect the perturbative and confining regimes of the N_f=8 flavor system, thus verifying its QCD-like nature. With N_f=12 flavors the data favor the existence of an infrared fixed point and conformal phase, though the results are also consistent with very slow walking. I measure the anomalous mass dimension in both systems at several gauge couplings and find that they are barely different from the free field value.

  18. Measurement and Monte Carlo modeling of the spatial response of scintillation screens

    Energy Technology Data Exchange (ETDEWEB)

    Pistrui-Maximean, S.A. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: spistrui@gmail.com; Letang, J.M. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)], E-mail: jean-michel.letang@insa-lyon.fr; Freud, N. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France); Koch, A. [Thales Electron Devices, 38430 Moirans (France); Walenta, A.H. [Detectors and Electronics Department, FB Physik, Siegen University, 57068 Siegen (Germany); Montarou, G. [Corpuscular Physics Laboratory, Blaise Pascal University, 63177 Aubiere (France); Babot, D. [CNDRI (NDT using Ionizing Radiation) Laboratory, INSA-Lyon, 69621 Villeurbanne (France)

    2007-11-01

    In this article, we propose a detailed protocol to carry out measurements of the spatial response of scintillation screens and to assess the agreement with simulated results. The experimental measurements have been carried out using a practical implementation of the slit method. A Monte Carlo simulation model of scintillator screens, implemented with the toolkit Geant4, has been used to study the influence of the acquisition setup parameters and to compare with the experimental results. An algorithm of global stochastic optimization based on a localized random search method has been implemented to adjust the optical parameters (optical scattering and absorption coefficients). The algorithm has been tested for different X-ray tube voltages (40, 70 and 100 kV). A satisfactory convergence between the results simulated with the optimized model and the experimental measurements is obtained.

  19. The OH distribution in cometary atmospheres - A collisional Monte Carlo model for heavy species

    Science.gov (United States)

    Combi, Michael R.; Bos, Brent J.; Smyth, William H.

    1993-01-01

    The study presents an extension of the cometary atmosphere Monte Carlo particle trajectory model formalism which makes it both physically correct for heavy species and yet computationally reasonable. The derivation accounts for the collision path and scattering redirection of a heavy radical traveling through a fluid coma with a given radial distribution in outflow speed and temperature. The revised model verifies that the earlier fast-H atom approximations used in earlier work are valid, and it is applied to a case where the heavy radical formalism is necessary: the OH distribution. It is found that a steeper variation of water production rate with heliocentric distance is required for a water coma which is consistent with the velocity-resolved observations of Comet P/Halley.

  20. Monte Carlo markovian modeling of modal competition in dual-wavelength semiconductor lasers

    Science.gov (United States)

    Chusseau, Laurent; Philippe, Fabrice; Jean-Marie, Alain

    2014-03-01

    Monte Carlo markovian models of a dual-mode semiconductor laser with quantum well (QW) or quantum dot (QD) active regions are proposed. Accounting for carriers and photons as particles that may exchange energy in the course of time allows an ab initio description of laser dynamics such as the mode competition and intrinsic laser noise. We used these models to evaluate the stability of the dual-mode regime when laser characteristics are varied: mode gains and losses, non-radiative recombination rates, intraband relaxation time, capture time in QD, transfer of excitation between QD via the wetting layer, etc. As a major result, a possible steady-state dual-mode regime is predicted for specially designed QD semiconductor lasers, thereby acting as a CW microwave or terahertz-beating source, whereas it does not occur for QW lasers.

  1. Applications of Monte Carlo Methods in Calculus.

    Science.gov (United States)

    Gordon, Sheldon P.; Gordon, Florence S.

    1990-01-01

    Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
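
    For instance, an integral can be estimated by averaging the integrand at uniform random points, in the spirit of the Riemann-sum application mentioned above (the integrand and sample size below are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(9)

    def mc_integral(f, a, b, n=100_000):
        """Estimate the integral of f on [a, b] by averaging f at uniform random points."""
        x = rng.uniform(a, b, n)
        values = f(x)
        estimate = (b - a) * values.mean()
        std_error = (b - a) * values.std(ddof=1) / np.sqrt(n)
        return estimate, std_error

    est, err = mc_integral(np.sin, 0.0, np.pi)     # exact value is 2
    print(f"integral of sin on [0, pi]: {est:.4f} +/- {err:.4f}")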

  2. A brief introduction to Monte Carlo simulation.

    Science.gov (United States)

    Bonate, P L

    2001-01-01

    Simulation affects our life every day through our interactions with the automobile, airline and entertainment industries, just to name a few. The use of simulation in drug development is relatively new, but its use is increasing in relation to the speed at which modern computers run. One well known example of simulation in drug development is molecular modelling. Another use of simulation that is being seen recently in drug development is Monte Carlo simulation of clinical trials. Monte Carlo simulation differs from traditional simulation in that the model parameters are treated as stochastic or random variables, rather than as fixed values. The purpose of this paper is to provide a brief introduction to Monte Carlo simulation methods.
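
    In that spirit, a minimal clinical-trial-style Monte Carlo sketch (with an invented one-compartment pharmacokinetic model and made-up parameter distributions) treats the model parameters as random variables and propagates them to an outcome of interest:

    import numpy as np

    rng = np.random.default_rng(10)

    def simulate_trial(n_subjects=500, dose=100.0, t=12.0):
        """Fraction of subjects whose 12 h concentration stays above a target, with
        clearance and volume treated as log-normally distributed random variables."""
        cl = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n_subjects)   # L/h (assumed)
        v = rng.lognormal(mean=np.log(50.0), sigma=0.2, size=n_subjects)   # L   (assumed)
        k = cl / v                                                         # elimination rate
        conc = dose / v * np.exp(-k * t)                                   # 1-compartment model
        return np.mean(conc > 0.5)                                         # target of 0.5 mg/L (assumed)

    results = [simulate_trial() for _ in range(1_000)]
    print("mean success fraction:", np.mean(results),
          "5th-95th percentile:", np.percentile(results, [5, 95]))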

  3. Hidden zero-temperature bicritical point in the two-dimensional anisotropic Heisenberg model: Monte Carlo simulations and proper finite-size scaling

    OpenAIRE

    Zhou, Chenggang; Landau, D. P.; Schulthess, Thomas C.

    2006-01-01

    By considering the appropriate finite-size effect, we explain the connection between Monte Carlo simulations of two-dimensional anisotropic Heisenberg antiferromagnet in a field and the early renormalization group calculation for the bicritical point in $2+\\epsilon$ dimensions. We found that the long length scale physics of the Monte Carlo simulations is indeed captured by the anisotropic nonlinear $\\sigma$ model. Our Monte Carlo data and analysis confirm that the bicritical point in two dime...

  4. Monte Carlo model of the Studsvik BNCT clinical beam: description and validation.

    Science.gov (United States)

    Giusti, Valerio; Munck af Rosenschöld, Per M; Sköld, Kurt; Montagnini, Bruno; Capala, Jacek

    2003-12-01

    The neutron beam at the Studsvik facility for boron neutron capture therapy (BNCT) and the validation of the related computational model developed for the MCNP-4B Monte Carlo code are presented. Several measurements performed at the epithermal neutron port used for clinical trials have been made in order to validate the Monte Carlo computational model. The good general agreement between the MCNP calculations and the experimental results has provided an adequate check of the calculation procedure. In particular, at the nominal reactor power of 1 MW, the calculated in-air epithermal neutron flux in the energy interval between 0.4 eV and 10 keV is 3.24 x 10(9) n cm(-2) s(-1) (+/- 1.2% 1 std. dev.) while the measured value is 3.30 x 10(9) n cm(-2) s(-1) (+/- 5.0% 1 std. dev.). Furthermore, the calculated in-phantom thermal neutron flux, equal to 6.43 x 10(9) n cm(-2) s(-1) (+/- 1.0% 1 std. dev.), and the corresponding measured value of 6.33 x 10(9) n cm(-2) s(-1) (+/- 5.3% 1 std. dev.) agree within their respective uncertainties. The only statistically significant disagreement is a discrepancy of 39% between the MCNP calculations of the in-air photon kerma and the corresponding experimental value. Despite this, a quite acceptable overall in-phantom beam performance was obtained, with a maximum value of the therapeutic ratio (the ratio between the local tumor dose and the maximum healthy tissue dose) equal to 6.7. The described MCNP model of the Studsvik facility has been deemed adequate to evaluate further improvements in the beam design as well as to plan experimental work.

  5. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  6. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.

  7. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)

    2010-12-15

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT 4 9.2 codes, and a semi-empirical procedure were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from the experimental data decreased from a mean value of 18% to 4% after the parameters were optimized.

  8. Monte Carlo modeling of cavity imaging in pure iron using back-scatter electron scanning microscopy

    Science.gov (United States)

    Yan, Qiang; Gigax, Jonathan; Chen, Di; Garner, F. A.; Shao, Lin

    2016-11-01

    Backscattered electrons (BSE) in a scanning electron microscope (SEM) can produce images of subsurface cavity distributions as a nondestructive characterization technique. Monte Carlo simulations were performed to understand the mechanism of void imaging and to identify key parameters in optimizing void resolution. The modeling explores an iron target of different thicknesses, electron beams of different energies, beam sizes, and scan pitch, evaluated for voids of different sizes and depths below the surface. The results show that the void image contrast is primarily caused by discontinuity of the energy spectra of backscattered electrons, due to increased outward path lengths for those electrons which penetrate voids and are backscattered at deeper depths. The size resolution of voids at specific depths, and the maximum detection depth of specific void sizes, are derived as a function of electron beam energy. The results are important for image optimization and data extraction.

  9. Macroion solutions in the cell model studied by field theory and Monte Carlo simulations.

    Science.gov (United States)

    Lue, Leo; Linse, Per

    2011-12-14

    Aqueous solutions of charged spherical macroions with variable dielectric permittivity and their associated counterions are examined within the cell model using a field theory and Monte Carlo simulations. The field theory is based on separation of fields into short- and long-wavelength terms, which are subjected to different statistical-mechanical treatments. The simulations were performed by using a new, accurate, and fast algorithm for numerical evaluation of the electrostatic polarization interaction. The field theory provides counterion distributions outside a macroion in good agreement with the simulation results over the full range from weak to strong electrostatic coupling. A low-dielectric macroion leads to a displacement of the counterions away from the macroion.

  10. Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene

    CERN Document Server

    Smith, Dominik

    2013-01-01

    In this work, results are presented of Hybrid-Monte-Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We present an investigation of the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant $\\alpha$ of the system crosses above a certain threshold $\\alpha_C$. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made and it is established that $\\alpha_C$ is of a plausible magnitude in our simulations. Also, we discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes of the Euclidean time-like dimension in the Fermion operator.

  11. Of bugs and birds: Markov Chain Monte Carlo for hierarchical modeling in wildlife research

    Science.gov (United States)

    Link, W.A.; Cam, E.; Nichols, J.D.; Cooch, E.G.

    2002-01-01

    Markov chain Monte Carlo (MCMC) is a statistical innovation that allows researchers to fit far more complex models to data than is feasible using conventional methods. Despite its widespread use in a variety of scientific fields, MCMC appears to be underutilized in wildlife applications. This may be due to a misconception that MCMC requires the adoption of a subjective Bayesian analysis, or perhaps simply to its lack of familiarity among wildlife researchers. We introduce the basic ideas of MCMC and software BUGS (Bayesian inference using Gibbs sampling), stressing that a simple and satisfactory intuition for MCMC does not require extraordinary mathematical sophistication. We illustrate the use of MCMC with an analysis of the association between latent factors governing individual heterogeneity in breeding and survival rates of kittiwakes (Rissa tridactyla). We conclude with a discussion of the importance of individual heterogeneity for understanding population dynamics and designing management plans.
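
    As a generic illustration of the MCMC idea discussed here (not the kittiwake analysis and not BUGS itself), the sketch below runs a random-walk Metropolis sampler for the posterior of a binomial survival probability under a flat prior; the data values are invented for the example.

      # Random-walk Metropolis sampler for a binomial survival probability with a
      # uniform prior -- a toy illustration of MCMC, not the kittiwake analysis.
      import math
      import random

      survivors, marked = 37, 60          # illustrative data, not from the paper

      def log_posterior(p):
          if not 0.0 < p < 1.0:
              return -math.inf            # outside the support of the flat prior
          return survivors * math.log(p) + (marked - survivors) * math.log(1.0 - p)

      chain, p = [], 0.5
      for _ in range(20_000):
          proposal = p + random.gauss(0.0, 0.05)                  # symmetric random-walk proposal
          log_alpha = log_posterior(proposal) - log_posterior(p)
          if log_alpha >= 0.0 or random.random() < math.exp(log_alpha):
              p = proposal                                        # accept the proposal
          chain.append(p)

      posterior = chain[5_000:]                                   # discard burn-in
      print("posterior mean of survival rate:", sum(posterior) / len(posterior))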

  12. Efficient Monte Carlo and greedy heuristic for the inference of stochastic block models

    CERN Document Server

    Peixoto, Tiago P

    2014-01-01

    We present an efficient algorithm for the inference of stochastic block models in large networks. The algorithm can be used as an optimized Markov chain Monte Carlo (MCMC) method, with a fast mixing time and a much reduced susceptibility to getting trapped in metastable states, or as a greedy agglomerative heuristic, with an almost linear $O(N\ln^2N)$ complexity, where $N$ is the number of nodes in the network, independent of the number of blocks being inferred. We show that the heuristic is capable of delivering results which are indistinguishable from the more exact and numerically expensive MCMC method in many artificial and empirical networks, despite being much faster. The method is entirely unbiased towards any specific mixing pattern, and in particular it does not favor assortative community structures.

  13. Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector.

    Science.gov (United States)

    Cabal, Fatima Padilla; Lopez-Pino, Neivy; Bernal-Castillo, Jose Luis; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar

    2010-12-01

    A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ((241)Am, (133)Ba, (22)Na, (60)Co, (57)Co, (137)Cs and (152)Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT 4 9.2 codes, and a semi-empirical procedure were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer parameters of the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from the experimental data decreased from a mean value of 18% to 4% after the parameters were optimized.

  14. FPGA Hardware Acceleration of Monte Carlo Simulations for the Ising Model

    CERN Document Server

    Ortega-Zamorano, Francisco; Cannas, Sergio A; Jerez, José M; Franco, Leonardo

    2016-01-01

    A two-dimensional Ising model with nearest-neighbor ferromagnetic interactions is implemented in a Field Programmable Gate Array (FPGA) board. Extensive Monte Carlo simulations were carried out using an efficient hardware representation of individual spins and a combined global-local LFSR random number generator. Consistent results regarding the descriptive properties of magnetic systems, like energy, magnetization and susceptibility, are obtained while a speed-up factor of approximately 6 times is achieved in comparison to previous FPGA-based published works and almost $10^4$ times in comparison to a standard CPU simulation. A detailed description of the logic design used is given together with a careful analysis of the quality of the random number generator used. The obtained results confirm the potential of FPGAs for analyzing the statistical mechanics of magnetic systems.
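
    For reference, a plain single-spin-flip Metropolis simulation of the 2D nearest-neighbour Ising model, i.e. the kind of CPU baseline against which FPGA speed-ups such as those above are usually quoted; the lattice size, temperature and number of sweeps are arbitrary choices.

      # Single-spin-flip Metropolis simulation of the 2D ferromagnetic Ising model,
      # the kind of CPU baseline that FPGA implementations are compared against.
      import math
      import random

      L, T, sweeps = 32, 2.269, 2_000          # lattice size, temperature, number of sweeps
      spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

      def local_field(i, j):
          """Sum of the four nearest-neighbour spins with periodic boundaries."""
          return (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

      for _ in range(sweeps):
          for _ in range(L * L):
              i, j = random.randrange(L), random.randrange(L)
              dE = 2.0 * spins[i][j] * local_field(i, j)   # energy change of flipping spin (i, j)
              if dE <= 0.0 or random.random() < math.exp(-dE / T):
                  spins[i][j] = -spins[i][j]

      m = abs(sum(sum(row) for row in spins)) / (L * L)
      print("absolute magnetization per spin:", m)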

  15. A Thermodynamic Model for Square-well Chain Fluid: Theory and Monte Carlo Simulation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A thermodynamic model for the freely jointed square-well chain fluids was developed based on the thermodynamic perturbation theory of Barker-Henderson, Zhang and Wertheim. In this derivation Zhang's expressions for square-well monomers, improved from the Barker-Henderson compressibility approximation, were adopted as the reference fluid, and Wertheim's polymerization method was used to obtain the free energy term due to the bond connectivity. An analytic expression for the Helmholtz free energy of the square-well chain fluids was obtained. The expression without adjustable parameters leads to thermodynamically consistent predictions of the compressibility factors, residual internal energy and constant-volume heat capacity for dimer, 4-mer, 8-mer and 16-mer square-well fluids. The results are in good agreement with the Monte Carlo simulation. To obtain the MC data of residual internal energy and the constant-volume heat capacity needed, NVT MC simulations were performed for these square-well chain fluids.

  16. World-line quantum Monte Carlo algorithm for a one-dimensional Bose model

    Energy Technology Data Exchange (ETDEWEB)

    Batrouni, G.G. (Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142 (United States)); Scalettar, R.T. (Physics Department, University of California, Davis, California 95616 (United States))

    1992-10-01

    In this paper we provide a detailed description of the ground-state phase diagram of interacting, disordered bosons on a lattice. We describe a quantum Monte Carlo algorithm that incorporates in an efficient manner the required bosonic wave-function symmetry. We consider the ordered case, where we evaluate the compressibility gap and show the lowest three Mott insulating lobes. We obtain the critical ratio of interaction strength to hopping at which the onset of superfluidity occurs for the first lobe, and the critical exponents {nu} and {ital z}. For the disordered model we show the effect of randomness on the phase diagram and the superfluid correlations. We also measure the response of the superfluid density, {rho}{sub {ital s}}, to external perturbations. This provides an unambiguous characterization of the recently observed Bose and Anderson glass phases.

  17. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  18. Corner wetting in the two-dimensional Ising model: Monte Carlo results

    Energy Technology Data Exchange (ETDEWEB)

    Albano, E V [INIFTA, Universidad Nacional de La Plata, CC 16 Suc. 4, 1900 La Plata (Argentina); Virgiliis, A De [INIFTA, Universidad Nacional de La Plata, CC 16 Suc. 4, 1900 La Plata (Argentina); Mueller, M [Institut fuer Physik, Johannes Gutenberg Universitaet, Staudinger Weg 7, D-55099 Mainz (Germany); Binder, K [Institut fuer Physik, Johannes Gutenberg Universitaet, Staudinger Weg 7, D-55099 Mainz (Germany)

    2003-01-29

    Square LxL (L=24-128) Ising lattices with nearest neighbour ferromagnetic exchange are considered using free boundary conditions at which boundary magnetic fields are applied, i.e., at the two boundary rows ending at the lower left corner a field +h acts, while at the two boundary rows ending at the upper right corner a field -h acts. For temperatures T less than the critical temperature T{sub c} of the bulk, this boundary condition leads to the formation of two domains with opposite orientations of the magnetization direction, separated by an interface which for T larger than the filling transition temperature T{sub f} (h) runs from the upper left corner to the lower right corner, while for T<T{sub f} (h) the interface is bound to the region close to one of these corners. This corner filling transition is studied by Monte Carlo simulations. In particular, it is shown that for T=T{sub f} (h) the magnetization profile m(z) in the z-direction normal to the interface is simply linear and the interfacial width scales as w {proportional_to} L, while for T>T{sub f} (h) it scales as w {proportional_to} {radical}L. The distribution P (l) of the interface position l (measured along the z-direction from the corners) decays exponentially for T<T{sub f} (h) and for T>T{sub f} (h). Furthermore, the Monte Carlo data are compatible with <l> {proportional_to} (T{sub f} (h) - T){sup -1} and a finite size scaling of the total magnetization according to M(L, T) {approx_equal} M-tilde ((1 - T/T{sub f} (h)){sup {nu}{sub perp}} L) with {nu}{sub perp} = 1. Unlike the findings for critical wetting in the thin film geometry of the Ising model, the Monte Carlo results for corner wetting are in very good agreement with the theoretical predictions.

  19. Self-learning Monte Carlo method

    Science.gov (United States)

    Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang

    2017-01-01

    Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large size systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.

  20. Parallel Markov chain Monte Carlo simulations.

    Science.gov (United States)

    Ren, Ruichao; Orkoulas, G

    2007-06-07

    With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.

  1. Monte Carlo Hamiltonian:Linear Potentials

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; Helmut KROEGER; et al.

    2002-01-01

    We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x)=|x|/2, and an asymmetric one, V(x)=∞ for x<0 and V(x)=x/2 for x≥0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.

  2. Monte Carlo simulations of organic photovoltaics.

    Science.gov (United States)

    Groves, Chris; Greenham, Neil C

    2014-01-01

    Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.

  3. Core-scale solute transport model selection using Monte Carlo analysis

    Science.gov (United States)

    Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.

    2013-06-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (3H) and sodium-22 (22Na ), and the retarding solute uranium-232 (232U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.

  4. Lattice gas models and kinetic Monte Carlo simulations of epitaxial growth

    NARCIS (Netherlands)

    Biehl, Michael; Voigt, A

    2005-01-01

    A brief introduction is given to Kinetic Monte Carlo (KMC) simulations of epitaxial crystal growth. Molecular Beam Epitaxy (MBE) serves as the prototype example for growth far from equilibrium. However, many of the aspects discussed here would carry over to other techniques as well. A variety of app

  5. Monte Carlo Estimation of the Conditional Rasch Model. Research Report 94-09.

    Science.gov (United States)

    Akkermans, Wies M. W.

    In order to obtain conditional maximum likelihood estimates, the so-called conditioning estimates have to be calculated. In this paper a method is examined that does not calculate these constants exactly, but approximates them using Monte Carlo Markov Chains. As an example, the method is applied to the conditional estimation of both item and…

  6. Lattice gas models and kinetic Monte Carlo simulations of epitaxial growth

    NARCIS (Netherlands)

    Biehl, Michael; Voigt, A

    2005-01-01

    A brief introduction is given to Kinetic Monte Carlo (KMC) simulations of epitaxial crystal growth. Molecular Beam Epitaxy (MBE) serves as the prototype example for growth far from equilibrium. However, many of the aspects discussed here would carry over to other techniques as well. A variety of app

  7. Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide 5005, South Australia (Australia) and Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide 5000, South Australia (Australia)

    2012-06-15

    Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes and composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB code was able to grow semi-realistic cell distributions ({approx}2 x 10{sup 8} cells in 1 cm{sup 3}) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.

  8. Monte Carlo based verification of a beam model used in a treatment planning system

    Science.gov (United States)

    Wieslander, E.; Knöös, T.

    2008-02-01

    Modern treatment planning systems (TPSs) usually separate the dose modelling into a beam modelling phase, describing the beam exiting the accelerator, followed by a subsequent dose calculation in the patient. The aim of this work is to use the Monte Carlo code system EGSnrc to study the modelling of head scatter as well as the transmission through multi-leaf collimator (MLC) and diaphragms in the beam model used in a commercial TPS (MasterPlan, Nucletron B.V.). An Elekta Precise linear accelerator equipped with an MLC has been modelled in BEAMnrc, based on available information from the vendor regarding the material and geometry of the treatment head. The collimation in the MLC direction consists of leafs which are complemented with a backup diaphragm. The characteristics of the electron beam, i.e., energy and spot size, impinging on the target have been tuned to match measured data. Phase spaces from simulations of the treatment head are used to extract the scatter from, e.g., the flattening filter and the collimating structures. Similar data for the source models used in the TPS are extracted from the treatment planning system, thus a comprehensive analysis is possible. Simulations in a water phantom, with DOSXYZnrc, are also used to study the modelling of the MLC and the diaphragms by the TPS. The results from this study will be helpful to understand the limitations of the model in the TPS and provide knowledge for further improvements of the TPS source modelling.

  9. Improved hybrid Monte Carlo-fluid model for the electrical characteristics in an analytical radio-frequency glow discharge in argon

    NARCIS (Netherlands)

    Bogaerts, A.; Gijbels, R.; W. Goedheer,

    2001-01-01

    An improved hybrid Monte Carlo-fluid model for electrons, argon ions and fast argon atoms, is presented for the rf Grimm-type glow discharge. In this new approach, all electrons, including the large slow electron group in the bulk plasma, are treated with the Monte Carlo model. The calculation

  10. Transfer-Matrix Monte Carlo Estimates of Critical Points in the Simple Cubic Ising, Planar and Heisenberg Models

    NARCIS (Netherlands)

    Nightingale, M.P.; Blöte , H.W.J.

    1996-01-01

    The principle and the efficiency of the Monte Carlo transfer-matrix algorithm are discussed. Enhancements of this algorithm are illustrated by applications to several phase transitions in lattice spin models. We demonstrate how the statistical noise can be reduced considerably by a similarity transf

  11. Quantum Monte Carlo study of the itinerant-localized model of strongly correlated electrons: Spin-spin correlation functions

    OpenAIRE

    Ivantsov, Ilya; Ferraz, Alvaro; Kochetov, Evgenii

    2016-01-01

    We perform quantum Monte Carlo simulations of the itinerant-localized periodic Kondo-Heisenberg model for the underdoped cuprates to calculate the associated spin correlation functions. The strong electron correlations are shown to play a key role in the abrupt destruction of the quasi long-range antiferromagnetic order in the lightly doped regime.

  12. Quantum Monte Carlo study of the itinerant-localized model of strongly correlated electrons: Spin-spin correlation functions

    Science.gov (United States)

    Ivantsov, Ilya; Ferraz, Alvaro; Kochetov, Evgenii

    2016-12-01

    We perform quantum Monte Carlo simulations of the itinerant-localized periodic Kondo-Heisenberg model for the underdoped cuprates to calculate the associated spin correlation functions. The strong electron correlations are shown to play a key role in the abrupt destruction of the quasi-long-range antiferromagnetic order in the lightly doped regime.

  13. Quantum Monte Carlo study of the cooperative binding of NO2 to fragment models of carbon nanotubes

    NARCIS (Netherlands)

    Lawson, John W.; Bauschlicher Jr., Charles W.; Toulouse, Julien; Filippi, Claudia; Umrigar, C.J.

    2008-01-01

    Previous calculations on model systems for the cooperative binding of two NO2 molecules to carbon nanotubes using density functional theory and second order Moller–Plesset perturbation theory gave results differing by 30 kcal/mol. Quantum Monte Carlo calculations are performed to study the role of e

  14. Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model

    Science.gov (United States)

    Owens, Corina M.

    2011-01-01

    Numerous ways to meta-analyze single-case data have been proposed in the literature, however, consensus on the most appropriate method has not been reached. One method that has been proposed involves multilevel modeling. This study used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw data multilevel…

  15. A Monte-Carlo study for the critical exponents of the three-dimensional O(6) model

    Science.gov (United States)

    Loison, D.

    1999-09-01

    Using Wolff's single-cluster Monte-Carlo update algorithm, the three-dimensional O(6)-Heisenberg model on a simple cubic lattice is simulated. With the help of finite size scaling we compute the critical exponents ν, β, γ and η. Our results agree with the field-theory predictions but not so well with the prediction of the series expansions.
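
    The single-cluster idea is easiest to see in the Ising case; the sketch below implements a Wolff single-cluster update for the 2D Ising model as a simplified stand-in for the O(6) embedding update used in the study (lattice size and temperature are arbitrary).

      # Wolff single-cluster update for the 2D Ising model -- a simpler illustration
      # of the single-cluster idea used (for O(6) spins) in the study above.
      import math
      import random

      L, T = 32, 2.269
      spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
      p_add = 1.0 - math.exp(-2.0 / T)        # bond-activation probability for J = 1

      def wolff_update():
          i, j = random.randrange(L), random.randrange(L)
          seed = spins[i][j]
          cluster = {(i, j)}
          stack = [(i, j)]
          while stack:
              x, y = stack.pop()
              for nx, ny in ((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L):
                  if (nx, ny) not in cluster and spins[nx][ny] == seed and random.random() < p_add:
                      cluster.add((nx, ny))
                      stack.append((nx, ny))
          for x, y in cluster:                # flip the whole cluster at once
              spins[x][y] = -seed

      for _ in range(5_000):
          wolff_update()
      print("magnetization per spin:", sum(sum(row) for row in spins) / (L * L))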

  16. Transfer-Matrix Monte Carlo Estimates of Critical Points in the Simple Cubic Ising, Planar and Heisenberg Models

    NARCIS (Netherlands)

    Nightingale, M.P.; Blöte , H.W.J.

    1996-01-01

    The principle and the efficiency of the Monte Carlo transfer-matrix algorithm are discussed. Enhancements of this algorithm are illustrated by applications to several phase transitions in lattice spin models. We demonstrate how the statistical noise can be reduced considerably by a similarity

  17. A Monte Carlo study of time-aggregation in continuous-time and discrete-time parametric hazard models.

    NARCIS (Netherlands)

    Hofstede, ter F.; Wedel, M.

    1998-01-01

    This study investigates the effects of time aggregation in discrete and continuous-time hazard models. A Monte Carlo study is conducted in which data are generated according to various continuous and discrete-time processes, and aggregated into daily, weekly and monthly intervals. These data are

  18. Automatic generation of a JET 3D neutronics model from CAD geometry data for Monte Carlo calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tsige-Tamirat, H. [Association FZK-Euratom, Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany)]. E-mail: tsige@irs.fzk.de; Fischer, U. [Association FZK-Euratom, Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany); Carman, P.P. [Euratom/UKAEA Fusion Association, Culham Science Center, Abingdon, Oxfordshire OX14 3DB (United Kingdom); Loughlin, M. [Euratom/UKAEA Fusion Association, Culham Science Center, Abingdon, Oxfordshire OX14 3DB (United Kingdom)

    2005-11-15

    The paper describes the automatic generation of a JET 3D neutronics model from data of computer aided design (CAD) system for Monte Carlo (MC) calculations. The applied method converts suitable CAD data into a representation appropriate for MC codes. The converted geometry is fully equivalent to the CAD geometry.

  19. Monte Carlo method based QSAR modeling of maleimide derivatives as glycogen synthase kinase-3β inhibitors.

    Science.gov (United States)

    Živković, Jelena V; Trutić, Nataša V; Veselinović, Jovana B; Nikolić, Goran M; Veselinović, Aleksandar M

    2015-09-01

    The Monte Carlo method was used for QSAR modeling of maleimide derivatives as glycogen synthase kinase-3β inhibitors. The first QSAR model was developed for a series of 74 3-anilino-4-arylmaleimide derivatives. The second QSAR model was developed for a series of 177 maleimide derivatives. QSAR models were calculated with the representation of the molecular structure by the simplified molecular input-line entry system. Two splits have been examined: one split into the training and test set for the first QSAR model, and one split into the training, test and validation set for the second. The statistical quality of the developed models is very good. The calculated model for 3-anilino-4-arylmaleimide derivatives had the following statistical parameters: r(2)=0.8617 for the training set; r(2)=0.8659, and r(m)(2)=0.7361 for the test set. The calculated model for maleimide derivatives had the following statistical parameters: r(2)=0.9435, for the training, r(2)=0.9262 and r(m)(2)=0.8199 for the test and r(2)=0.8418, r(av)(m)(2)=0.7469 and ∆r(m)(2)=0.1476 for the validation set. Structural indicators considered as molecular fragments responsible for the increase and decrease in the inhibition activity have been defined. The computer-aided design of new potential glycogen synthase kinase-3β inhibitors has been presented by using defined structural alerts.

  20. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    Science.gov (United States)

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
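
    A minimal sketch of a purely stochastic (KMC) deposition-dissolution process of the kind used above as a validation case, implemented with a Gillespie-style event loop for a single lumped surface population; the rate constants are illustrative and no continuum diffusion coupling is included.

      # Gillespie-style kinetic Monte Carlo for a single-species deposition /
      # dissolution surface process (no continuum coupling); rates are illustrative.
      import random

      k_dep, k_dis = 2.0, 0.5     # deposition rate (per unit time), dissolution rate per adsorbed particle
      n, t, t_end = 0, 0.0, 50.0  # adsorbed-particle count, current time, stop time
      events = 0

      while t < t_end:
          rates = [k_dep, k_dis * n]                  # propensities of the two possible events
          total = sum(rates)
          t += random.expovariate(total)              # exponentially distributed waiting time
          if random.random() * total < rates[0]:
              n += 1                                  # deposition event
          else:
              n -= 1                                  # dissolution event
          events += 1

      print("events simulated:", events, "  final count:", n,
            "  steady-state estimate k_dep/k_dis =", k_dep / k_dis)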

  1. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Michael Giles. Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^(-3)) for a single level version of the adaptive algorithm to O((TOL^(-1) log(TOL))^2).
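
    For orientation, a minimal non-adaptive multilevel Monte Carlo estimator with uniform forward Euler time steps (i.e. the Giles-style baseline that the adaptive method above generalizes); the geometric Brownian motion, the payoff g and the per-level sample counts are illustrative assumptions.

      # Plain (non-adaptive) multilevel Monte Carlo with forward Euler for E[g(X_T)]
      # of an Ito SDE; geometric Brownian motion and the sample sizes are a demo only.
      import math
      import random

      T, x0, mu, sigma = 1.0, 1.0, 0.05, 0.2
      g = lambda x: max(x - 1.0, 0.0)                 # illustrative payoff

      def euler_pair(level):
          """One coupled fine/coarse Euler path sharing the same Brownian increments."""
          n_fine = 2 ** level
          h = T / n_fine
          xf = xc = x0
          dw_coarse = 0.0
          for step in range(n_fine):
              dw = random.gauss(0.0, math.sqrt(h))
              xf += mu * xf * h + sigma * xf * dw
              dw_coarse += dw
              if level > 0 and step % 2 == 1:         # coarse step uses two fine increments
                  xc += mu * xc * 2.0 * h + sigma * xc * dw_coarse
                  dw_coarse = 0.0
          return g(xf), (g(xc) if level > 0 else 0.0)

      def mlmc(max_level=5, samples_per_level=20_000):
          estimate = 0.0
          for level in range(max_level + 1):
              diffs = [f - c for f, c in (euler_pair(level) for _ in range(samples_per_level))]
              estimate += sum(diffs) / len(diffs)     # telescoping sum of E[g_l - g_{l-1}]
          return estimate

      print("MLMC estimate of E[g(X_T)]:", mlmc())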

  2. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
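
    Two of the classic variance reduction techniques covered by this kind of chapter, antithetic variates and a control variate, shown on the toy problem of estimating E[exp(U)] for U uniform on (0, 1); the example and the chosen control coefficient are illustrative only.

      # Two classic variance reduction techniques on the toy problem of estimating
      # E[exp(U)] for U ~ Uniform(0, 1) (exact value e - 1); purely illustrative.
      import math
      import random
      import statistics

      N = 50_000

      # Crude Monte Carlo estimator
      crude = [math.exp(random.random()) for _ in range(N)]

      # Antithetic variates: pair each U with 1 - U and average the pair
      antithetic = []
      for _ in range(N // 2):
          u = random.random()
          antithetic.append(0.5 * (math.exp(u) + math.exp(1.0 - u)))

      # Control variate: use U itself (known mean 1/2) to correct each sample;
      # the coefficient e - 1 is close to the variance-minimizing choice here
      control = []
      for _ in range(N):
          u = random.random()
          control.append(math.exp(u) - (math.e - 1.0) * (u - 0.5))

      for name, xs in (("crude", crude), ("antithetic", antithetic), ("control variate", control)):
          print(f"{name:>15}: mean = {statistics.mean(xs):.4f}, variance = {statistics.variance(xs):.5f}")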

  3. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr

  4. A Coarse-Grained DNA Model Parameterized from Atomistic Simulations by Inverse Monte Carlo

    Directory of Open Access Journals (Sweden)

    Nikolay Korolev

    2014-05-01

    Full Text Available Computer modeling of very large biomolecular systems, such as long DNA polyelectrolytes or protein-DNA complexes like chromatin, cannot reach all-atom resolution in the foreseeable future, and this necessitates the development of coarse-grained (CG) approximations. DNA is both a highly charged and a mechanically rigid semi-flexible polymer, and adequate DNA modeling requires a correct description of both its structural stiffness and salt-dependent electrostatic forces. Here, we present a novel CG model of DNA that approximates the DNA polymer as a chain of 5-bead units. Each unit represents two DNA base pairs with one central bead for bases and pentose moieties and four others for phosphate groups. Charges, intra- and inter-molecular force field potentials for the CG DNA model were calculated using the inverse Monte Carlo method from all-atom molecular dynamics (MD) simulations of 22 bp DNA oligonucleotides. The CG model was tested by performing dielectric continuum Langevin MD simulations of a 200 bp double helix DNA in solutions of monovalent salt with explicit ions. Excellent agreement with experimental data was obtained for the dependence of the DNA persistence length on salt concentration in the range 0.1–100 mM. The new CG DNA model is suitable for modeling various biomolecular systems with adequate description of electrostatic and mechanical properties.

  5. A Monte Carlo model of hot electron trapping and detrapping in SiO2

    Science.gov (United States)

    Kamocsai, R. L.; Porod, W.

    1991-02-01

    High-field stressing and oxide degradation of SiO2 are studied using a microscopic model of electron heating and charge trapping and detrapping. Hot electrons lead to a charge buildup in the oxide according to the dynamic trapping-detrapping model by Nissan-Cohen and co-workers [Y. Nissan-Cohen, J. Shappir, D. Frohman-Bentchkowsky, J. Appl. Phys. 58, 2252 (1985)]. Detrapping events are modeled as trap-to-band impact ionization processes initiated by high energy conduction electrons. The detailed electronic distribution function obtained from Monte Carlo transport simulations is utilized for the determination of the detrapping rates. We apply our microscopic model to the calculation of the flat-band voltage shift in silicon dioxide as a function of the electric field, and we show that our model is able to reproduce the experimental results. We also compare these results to the predictions of the empirical trapping-detrapping model which assumes a heuristic detrapping cross section. Our microscopic theory accounts for the nonlocal nature of impact ionization which leads to a dark space close to the injecting cathode, which is unaccounted for in the empirical model.

  6. Backbone exponents of the two-dimensional q-state Potts model: a Monte Carlo investigation.

    Science.gov (United States)

    Deng, Youjin; Blöte, Henk W J; Nienhuis, Bernard

    2004-02-01

    We determine the backbone exponent X(b) of several critical and tricritical q-state Potts models in two dimensions. The critical systems include the bond percolation, the Ising, the q=2-sqrt[3], 3, and 4 state Potts, and the Baxter-Wu model, and the tricritical ones include the q=1 Potts model and the Blume-Capel model. For this purpose, we formulate several efficient Monte Carlo methods and sample the probability P2 of a pair of points connected via at least two independent paths. Finite-size-scaling analysis of P2 yields X(b) as 0.3566(2), 0.2696(3), 0.2105(3), and 0.127(4) for the critical q=2-sqrt[3], 1,2, 3, and 4 state Potts model, respectively. At tricriticality, we obtain X(b)=0.0520(3) and 0.0753(6) for the q=1 and 2 Potts model, respectively. For the critical q-->0 Potts model it is derived that X(b)=3/4. From a scaling argument, we find that, at tricriticality, X(b) reduces to the magnetic exponent, as confirmed by the numerical results.

  7. Development of a Monte Carlo multiple source model for inclusion in a dose calculation auditing tool.

    Science.gov (United States)

    Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S

    2017-09-01

    The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm(2) field size and dose profiles for a 40 × 40 cm(2) field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm(2) to 30 × 30 cm(2) . The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for 6 MV and 10 MV models respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for 6 MV and 10 MV models respectively. Phantom plan comparisons were evaluated using ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for 6 MV and 10 MV models respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.

  8. A software tool to assess uncertainty in transient-storage model parameters using Monte Carlo simulations

    Science.gov (United States)

    Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.

    2017-01-01

    Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.

  9. Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    H. Machguth

    2008-12-01

    Full Text Available By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was tuned to observed mass balance for the investigated time period and its robustness was tested by comparing observed and modelled mass balance over 11 years, yielding very small deviations. Both systematic and random uncertainties are assigned to twelve input parameters and their respective values estimated from the literature or from available meteorological data sets. The calculated overall uncertainty in the model output is dominated by systematic errors and amounts to 0.7 m w.e. or approximately 10% of total melt over the investigated time span. In order to provide a first order estimate on variability in uncertainty depending on the quality of input data, we conducted a further experiment, calculating overall uncertainty for different levels of uncertainty in measured global radiation and air temperature. Our results show that the output of a well calibrated model is subject to considerable uncertainties, in particular when applied for extrapolation in time and space where systematic errors are likely to be an important issue.
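
    A generic sketch of the kind of Monte Carlo error propagation described here: systematic offsets are drawn once per realization, random noise is drawn per time step, and the spread of the cumulative output is read off as the overall uncertainty. The toy degree-day melt model and all error magnitudes below are illustrative assumptions, not the energy balance model or the values used in the study.

      # Monte Carlo propagation of systematic and random input uncertainties through
      # a toy degree-day melt model (NOT the energy balance model of the study).
      import math
      import random
      import statistics

      days = 400
      temperature = [4.0 + 8.0 * math.sin(2.0 * math.pi * d / 365.0) for d in range(days)]  # degC, synthetic
      ddf = 6.0e-3            # degree-day factor, m w.e. per degC per day (illustrative)

      def cumulative_melt(temp_bias, ddf_bias, temp_noise_sd):
          melt = 0.0
          for t in temperature:
              t_eff = t + temp_bias + random.gauss(0.0, temp_noise_sd)   # systematic + random error
              melt += max(t_eff, 0.0) * (ddf + ddf_bias)
          return melt

      runs = []
      for _ in range(2_000):
          temp_bias = random.gauss(0.0, 0.5)          # systematic temperature error, drawn once per run
          ddf_bias = random.gauss(0.0, 0.5e-3)        # systematic error in the degree-day factor
          runs.append(cumulative_melt(temp_bias, ddf_bias, temp_noise_sd=1.0))

      print("mean cumulative melt [m w.e.]:", round(statistics.mean(runs), 2))
      print("1-sigma uncertainty  [m w.e.]:", round(statistics.stdev(runs), 2))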

  10. Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    H. Machguth

    2008-06-01

    Full Text Available By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was tuned to observed mass balance for the investigated time period and its robustness was tested by comparing observed and modelled mass balance over 11 years, yielding very small deviations. Both systematic and random uncertainties are assigned to twelve input parameters and their respective values estimated from the literature or from available meteorological data sets. The calculated overall uncertainty in the model output is dominated by systematic errors and amounts to 0.7 m w.e. or approximately 10% of total melt over the investigated time span. In order to provide a first order estimate on variability in uncertainty depending on the quality of input data, we conducted a further experiment, calculating overall uncertainty for different levels of uncertainty in measured global radiation and air temperature. Our results show that the output of a well calibrated model is subject to considerable uncertainties, in particular when applied for extrapolation in time and space where systematic errors are likely to be an important issue.

  11. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    Energy Technology Data Exchange (ETDEWEB)

    Procassini, R.J. [Lawrence Livermore National lab., CA (United States)

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.

  12. Study of Monte Carlo Simulation Method for Methane Phase Diagram Prediction using Two Different Potential Models

    KAUST Repository

    Kadoura, Ahmad

    2011-06-06

    Lennard‐Jones (L‐J) and Buckingham exponential‐6 (exp‐6) potential models were used to produce isotherms for methane at temperatures below and above the critical one. A molecular simulation approach, particularly Monte Carlo simulation, was employed to create these isotherms, working with both canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures at a range of temperatures above the methane critical temperature. Results were collected and compared to experimental data existing in the literature; both models showed an elegant agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L‐J model only. Upon comparing results with experimental ones, a good fit was obtained with small deviations. The work was further developed by adding some statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be, hence further applications on more complicated systems are considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid all kinds of problems caused by its dissolution in gas production and transportation processes. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
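
    A minimal canonical (NVT) Metropolis Monte Carlo sketch for a small Lennard-Jones fluid in reduced units, with a virial pressure estimate of the kind used to build isotherms; the system size, state point, move size and lack of equilibration or averaging are simplifications, and this is not the authors' code.

      # Canonical (NVT) Metropolis Monte Carlo of a small Lennard-Jones fluid in
      # reduced units, with a virial pressure estimate (illustrative sketch only).
      import math
      import random

      N, rho, T = 64, 0.5, 2.0                 # particles, reduced density, reduced temperature
      L = (N / rho) ** (1.0 / 3.0)             # cubic box length
      pos = [[random.random() * L for _ in range(3)] for _ in range(N)]

      def pair_terms(i):
          """Total LJ energy and virial of particle i with all others (minimum image, no cutoff)."""
          e = w = 0.0
          for j in range(N):
              if j == i:
                  continue
              r2 = 0.0
              for k in range(3):
                  d = pos[i][k] - pos[j][k]
                  d -= L * round(d / L)        # minimum-image convention
                  r2 += d * d
              inv6 = 1.0 / r2 ** 3
              e += 4.0 * (inv6 * inv6 - inv6)
              w += 24.0 * (2.0 * inv6 * inv6 - inv6)
          return e, w

      accepted = 0
      for step in range(20_000):
          i = random.randrange(N)
          old = pos[i][:]
          e_old, _ = pair_terms(i)
          pos[i] = [(c + random.uniform(-0.1, 0.1)) % L for c in old]   # trial displacement
          e_new, _ = pair_terms(i)
          if random.random() < math.exp(min(0.0, -(e_new - e_old) / T)):
              accepted += 1                    # accept the move
          else:
              pos[i] = old                     # reject and restore

      virial = sum(pair_terms(i)[1] for i in range(N)) / 2.0   # each pair counted once
      pressure = rho * T + virial / (3.0 * L ** 3)             # reduced virial pressure
      print("acceptance ratio:", accepted / 20_000, "  reduced pressure estimate:", round(pressure, 3))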

  13. Worm Monte Carlo study of the honeycomb-lattice loop model

    Energy Technology Data Exchange (ETDEWEB)

    Liu Qingquan, E-mail: liuqq@mail.ustc.edu.c [Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027 (China); Deng Youjin, E-mail: yjdeng@ustc.edu.c [Hefei National Laboratory for Physical Sciences at Microscale, Department of Modern Physics, University of Science and Technology of China, Hefei, 230027 (China); Garoni, Timothy M., E-mail: t.garoni@ms.unimelb.edu.a [ARC Centre of Excellence for Mathematics and Statistics of Complex Systems, Department of Mathematics and Statistics, University of Melbourne, Victoria 3010 (Australia)

    2011-05-11

    We present a Markov-chain Monte Carlo algorithm of worm type that correctly simulates the O(n) loop model on any (finite and connected) bipartite cubic graph, for any real n>0, and any edge weight, including the fully-packed limit of infinite edge weight. Furthermore, we prove rigorously that the algorithm is ergodic and has the correct stationary distribution. We emphasize that by using known exact mappings when n=2, this algorithm can be used to simulate a number of zero-temperature Potts antiferromagnets for which the Wang-Swendsen-Kotecky cluster algorithm is non-ergodic, including the 3-state model on the kagome lattice and the 4-state model on the triangular lattice. We then use this worm algorithm to perform a systematic study of the honeycomb-lattice loop model as a function of n{<=}2, on the critical line and in the densely-packed and fully-packed phases. By comparing our numerical results with Coulomb gas theory, we identify a set of exact expressions for scaling exponents governing some fundamental geometric and dynamic observables. In particular, we show that for all n{<=}2, the scaling of a certain return time in the worm dynamics is governed by the magnetic dimension of the loop model, thus providing a concrete dynamical interpretation of this exponent. The case n>2 is also considered, and we confirm the existence of a phase transition in the 3-state Potts universality class that was recently observed via numerical transfer matrix calculations.

  14. Voxel2MCNP: software for handling voxel models for Monte Carlo radiation transport calculations.

    Science.gov (United States)

    Hegenbart, Lars; Pölz, Stefan; Benzler, Andreas; Urban, Manfred

    2012-02-01

    Voxel2MCNP is a program that sets up radiation protection scenarios with voxel models and generates corresponding input files for the Monte Carlo code MCNPX. Its technology is based on object-oriented programming, and the development is platform-independent. It has a user-friendly graphical interface including a two- and three-dimensional viewer. A row of equipment models is implemented in the program. Various voxel model file formats are supported. Applications include calculation of counting efficiency of in vivo measurement scenarios and calculation of dose coefficients for internal and external radiation scenarios. Moreover, anthropometric parameters of voxel models, for instance chest wall thickness, can be determined. Voxel2MCNP offers several methods for voxel model manipulations including image registration techniques. The authors demonstrate the validity of the program results and provide references for previous successful implementations. The authors illustrate the reliability of calculated dose conversion factors and specific absorbed fractions. Voxel2MCNP is used on a regular basis to generate virtual radiation protection scenarios at Karlsruhe Institute of Technology while further improvements and developments are ongoing.

  15. Unified description of pf-shell nuclei by the Monte Carlo shell model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio

    1998-03-01

    The attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization, which was proposed by the authors, is a more practical method, and it has been shown to solve the problem with sufficiently good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis, and a projection operator is applied to obtain angular momentum eigenstates. The dynamically determined space is treated mainly stochastically, and the many-body energies of the basis states formed in this way are evaluated and the states selectively adopted. The symmetry is discussed, and a method was devised for decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces. The calculation procedure is illustrated with the example of {sup 50}Mn nuclei. The level structure of {sup 48}Cr, for which the exact energies are known, can be calculated with an accuracy of the absolute energy eigenvalues within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28. The results of shell model calculations of the {sup 56}Ni nuclear structure using the interactions of nuclear models are reported. (K.I.)

  16. Modeling of composite latex particle morphology by off-lattice Monte Carlo simulation.

    Science.gov (United States)

    Duda, Yurko; Vázquez, Flavio

    2005-02-01

Composite latex particles have shown a great range of applications such as paint resins, varnishes, waterborne adhesives, impact modifiers, etc. The high-performance properties of this kind of material may be explained in terms of a synergistic combination of two different polymers (usually a rubber and a thermoplastic). A great variety of composite latex particles with very different morphologies may be obtained by two-step emulsion polymerization processes. The formation of a specific particle morphology depends on the chemical and physical nature of the monomers used during the synthesis, the process temperature, the reaction initiator, the surfactants, etc. Only a few models have been proposed to explain the appearance of the composite particle morphologies. These models have been based on the change of the interfacial energies during the synthesis. In this work, we present a new three-component model: a polymer blend (flexible and rigid chain particles) is dispersed in water by forming spherical cavities. Monte Carlo simulations of the model in two dimensions are used to determine the density distribution of chains and water molecules inside the suspended particle. This approach allows us to study the dependence of the morphology of the composite latex particles on the relative hydrophilicity and flexibility of the chain molecules as well as on their density and composition. It has been shown that our simple model is capable of reproducing the main features of the various morphologies observed in synthesis experiments.

  18. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Science.gov (United States)

    2016-01-01

Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O’Brien’s OLS test, Anderson’s permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data

  19. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed but the performance of these methods for phenotypes that are continuous rather than discrete and with multiple nuisance covariates has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O’Brien’s OLS test, Anderson’s permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS

  20. Calibration, characterisation and Monte Carlo modelling of a fast-UNCL

    Energy Technology Data Exchange (ETDEWEB)

    Tagziria, Hamid, E-mail: hamid.tagziria@jrc.ec.europa.eu [European Commission, Joint Research Center, ITU-Nuclear Security Unit, I-21027 Ispra (Italy); Bagi, Janos; Peerani, Paolo [European Commission, Joint Research Center, ITU-Nuclear Security Unit, I-21027 Ispra (Italy); Belian, Antony [Department of Safeguards, SGTS/TAU, IAEA Vienna Austria (Austria)

    2012-09-21

This paper describes the calibration, characterisation and Monte Carlo modelling of a new IAEA Uranium Neutron Collar (UNCL) for LWR fuel, which can be operated in both passive and active modes. It can employ either 35 {sup 3}He tubes (in active configuration) or 44 tubes at 10 atm pressure (in its passive configuration) and thus can be operated in fast mode (with Cd liner), as its efficiency is higher than that of the standard UNCL. Furthermore, it has an adjustable internal cavity which allows the measurement of varying sizes of fuel assemblies such as WWER, PWR and BWR. It is intended to be used with Cd liners in active mode (with an AmLi interrogation source in place) by the inspectorate for the determination of the {sup 235}U content in fresh fuel assemblies, especially in cases where high concentrations of burnable poisons cause problems with accurate assays. A campaign of measurements has been carried out at the JRC Performance Laboratories (PERLA) in Ispra (Italy) using various radionuclide neutron sources ({sup 252}Cf, {sup 241}AmLi and PuGa) and our BWR and PWR reference assemblies, in order to calibrate and characterise the counter as well as assess its performance and determine its optimum operational parameters. Furthermore, the fast-UNCL has been extensively modelled at JRC using the Monte Carlo code, MCNP-PTA, which simulates both the neutron transport and the coincidence electronics. The model has been validated using our measurements, which agreed well with calculations. The WWER1000 fuel assembly, for which there are no representative reference materials for an adequate calibration of the counter, has also been modelled and the response of the counter to this fuel assembly has been simulated. Subsequently, numerical calibration curves have been obtained for the above fuel assemblies in various modes (fast and thermal). The sensitivity of the counter to fuel rod substitution as well as other important aspects and the parameters of the fast

  1. A Monte Carlo Method for Summing Modeled and Background Pollutant Concentrations.

    Science.gov (United States)

    Dhammapala, Ranil; Bowman, Clint; Schulte, Jill

    2017-02-23

Air quality analyses for permitting new pollution sources often involve modeling dispersion of pollutants using models like AERMOD. Representative background pollutant concentrations must be added to modeled concentrations to determine compliance with air quality standards. Summing 98(th) (or 99(th)) percentiles of two independent distributions that are unpaired in time overestimates air quality impacts and could needlessly burden sources with restrictive permit conditions. This problem is exacerbated when emissions and background concentrations peak during different seasons. Existing methods addressing this matter either require much input data, disregard source and background seasonality, or disregard the variability of the background by utilizing a single concentration for each season, month, hour-of-day, day-of-week or wind direction. The availability of representative background concentrations is another limitation. Here we report on work to improve permitting analyses, with the development of (1) daily gridded background concentrations interpolated from 12-km CMAQ forecasts and monitored data, where a two-step interpolation reproduced measured background concentrations to within 6.2%; and (2) a Monte Carlo (MC) method to combine AERMOD output and background concentrations while respecting their seasonality. The MC method randomly combines, with replacement, data from the same months, and calculates 1000 estimates of the 98(th) or 99(th) percentiles. The design concentration of background + new source is the median of these 1000 estimates. We found that the AERMOD design value (DV) + background DV lay at the upper end of the distribution of these thousand 99(th) percentiles, while measured DVs were at the lower end. Our MC method sits between these two metrics and is sufficiently protective of public health in that it somewhat overestimates design concentrations. We also calculated probabilities of exceeding specified thresholds at each receptor, better informing
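    The month-respecting resampling scheme described above lends itself to a compact sketch. The following Python fragment illustrates the idea only and is not the authors' implementation: the function and variable names, the use of pandas Series indexed by date, and the default percentile are assumptions.

```python
# Hedged sketch of the month-respecting Monte Carlo combination described above.
# `modeled` and `background` are assumed to be pandas Series of daily concentrations
# indexed by date; the 1000-draw count follows the abstract, everything else
# (helper names, percentile choice) is an illustrative assumption.
import numpy as np
import pandas as pd

def mc_design_concentration(modeled: pd.Series, background: pd.Series,
                            percentile: float = 99.0, n_draws: int = 1000,
                            seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_draws):
        combined = []
        for month in range(1, 13):
            m = modeled[modeled.index.month == month].to_numpy()
            b = background[background.index.month == month].to_numpy()
            if len(m) == 0 or len(b) == 0:
                continue
            # Resample with replacement within the same calendar month,
            # so source and background seasonality are both respected.
            combined.append(rng.choice(m, size=len(m), replace=True) +
                            rng.choice(b, size=len(m), replace=True))
        estimates.append(np.percentile(np.concatenate(combined), percentile))
    # The design concentration is the median of the Monte Carlo estimates.
    return float(np.median(estimates))
```

    In practice the modeled series would come from AERMOD post-processing and the background series from the gridded, interpolated fields mentioned above.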

  2. Solution of deterministic-stochastic epidemic models by dynamical Monte Carlo method

    Science.gov (United States)

    Aièllo, O. E.; Haas, V. J.; daSilva, M. A. A.; Caliri, A.

    2000-07-01

This work is concerned with the dynamical Monte Carlo (MC) method and its application to models originally formulated in a continuous-deterministic approach. Specifically, a susceptible-infected-removed-susceptible (SIRS) model is used in order to analyze aspects of the dynamical MC algorithm and to demonstrate its application in epidemic contexts. We first examine two known approaches to the dynamical interpretation of the MC method and then apply one of them to the SIRS model. The chosen working method is based on the Poisson process, in which a hierarchy of events, properly calculated waiting times between events, and independence of the simulated events are the basic requirements. To verify the consistency of the method, some preliminary MC results are compared against exact steady-state solutions and other general numerical results (provided by the Runge-Kutta method): good agreement is found. Finally, a space-dependent extension of the SIRS model is introduced and treated by MC. The results are interpreted under and in accordance with aspects of the herd-immunity concept.
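    As a concrete illustration of the Poisson-process scheme described above, a minimal Gillespie-style SIRS sketch is given below. The rate expressions, parameter values and function names are illustrative assumptions rather than the authors' code; the space-dependent extension would additionally require a per-site treatment.

```python
# Minimal sketch of a dynamical Monte Carlo (Gillespie-type) SIRS simulation,
# assuming well-mixed rates: infection beta*S*I/N, removal gamma*I, loss of
# immunity delta*R. Rate names and values are illustrative, not from the paper.
import numpy as np

def sirs_dynamical_mc(S, I, R, beta, gamma, delta, t_max, seed=0):
    rng = np.random.default_rng(seed)
    t, N = 0.0, S + I + R
    history = [(t, S, I, R)]
    while t < t_max and I > 0:
        rates = np.array([beta * S * I / N,   # S -> I
                          gamma * I,          # I -> R
                          delta * R])         # R -> S
        total = rates.sum()
        if total == 0:
            break
        # Exponentially distributed waiting time between events (Poisson process).
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            S, I = S - 1, I + 1
        elif event == 1:
            I, R = I - 1, R + 1
        else:
            R, S = R - 1, S + 1
        history.append((t, S, I, R))
    return history

# Example: trajectories can be averaged and compared with Runge-Kutta solutions.
traj = sirs_dynamical_mc(S=990, I=10, R=0, beta=0.5, gamma=0.2, delta=0.05, t_max=200)
```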

  3. Critical Casimir force and its fluctuations in lattice spin models: exact and Monte Carlo results.

    Science.gov (United States)

    Dantchev, Daniel; Krech, Michael

    2004-04-01

We present general arguments and construct a stress tensor operator for finite lattice spin models. The average value of this operator gives the Casimir force of the system close to the bulk critical temperature T_c. We verify our arguments via exact results for the force in the two-dimensional Ising model, the d-dimensional Gaussian model, and the mean spherical model with 2 < d < 4. Via Monte Carlo simulations for three-dimensional Ising, XY, and Heisenberg models we demonstrate that the standard deviation of the Casimir force F_C in a slab geometry confining a critical substance in-between is k_B T D(T) (A/a^(d-1))^(1/2), where A is the surface area of the plates, a is the lattice spacing, and D(T) is a slowly varying nonuniversal function of the temperature T. The numerical calculations demonstrate that at the critical temperature T_c the force possesses a Gaussian distribution centered at the mean value of the force <F_C> = k_B T_c (d-1)Δ/(L/a)^d, where L is the distance between the plates and Δ is the (universal) Casimir amplitude.

  4. Monte Carlo tests of renormalization-group predictions for critical phenomena in Ising models

    Science.gov (United States)

    Binder, Kurt; Luijten, Erik

    2001-04-01

    A critical review is given of status and perspectives of Monte Carlo simulations that address bulk and interfacial phase transitions of ferromagnetic Ising models. First, some basic methodological aspects of these simulations are briefly summarized (single-spin flip vs. cluster algorithms, finite-size scaling concepts), and then the application of these techniques to the nearest-neighbor Ising model in d=3 and 5 dimensions is described, and a detailed comparison to theoretical predictions is made. In addition, the case of Ising models with a large but finite range of interaction and the crossover scaling from mean-field behavior to the Ising universality class are treated. If one considers instead a long-range interaction described by a power-law decay, new classes of critical behavior depending on the exponent of this power law become accessible, and a stringent test of the ε-expansion becomes possible. As a final type of crossover from mean-field type behavior to two-dimensional Ising behavior, the interface localization-delocalization transition of Ising films confined between “competing” walls is considered. This problem is still hampered by questions regarding the appropriate coarse-grained model for the fluctuating interface near a wall, which is the starting point for both this problem and the theory of critical wetting.

  5. Monte Carlo Modeling of Computed Tomography Ceiling Scatter for Shielding Calculations.

    Science.gov (United States)

    Edwards, Stephen; Schick, Daniel

    2016-04-01

    Radiation protection for clinical staff and members of the public is of paramount importance, particularly in occupied areas adjacent to computed tomography scanner suites. Increased patient workloads and the adoption of multi-slice scanning systems may make unshielded secondary scatter from ceiling surfaces a significant contributor to dose. The present paper expands upon an existing analytical model for calculating ceiling scatter accounting for variable room geometries and provides calibration data for a range of clinical beam qualities. The practical effect of gantry, false ceiling, and wall attenuation in limiting ceiling scatter is also explored and incorporated into the model. Monte Carlo simulations were used to calibrate the model for scatter from both concrete and lead surfaces. Gantry attenuation experimental data showed an effective blocking of scatter directed toward the ceiling at angles up to 20-30° from the vertical for the scanners examined. The contribution of ceiling scatter from computed tomography operation to the effective dose of individuals in areas surrounding the scanner suite could be significant and therefore should be considered in shielding design according to the proposed analytical model.

  6. Modeling of molecular nitrogen collisions and dissociation processes for direct simulation Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, Neal, E-mail: neal.parsons@cd-adapco.com; Levin, Deborah A., E-mail: deblevin@illinois.edu [Department of Aerospace Engineering, The Pennsylvania State University, 233 Hammond Building, University Park, Pennsylvania 16802 (United States); Duin, Adri C. T. van, E-mail: acv13@engr.psu.edu [Department of Mechanical and Nuclear Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States); Zhu, Tong, E-mail: tvz5037@psu.edu [Department of Aerospace Engineering, The Pennsylvania State University, 136 Research East, University Park, Pennsylvania 16802 (United States)

    2014-12-21

    The Direct Simulation Monte Carlo (DSMC) method typically used for simulating hypersonic Earth re-entry flows requires accurate total collision cross sections and reaction probabilities. However, total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, so their reliability is unknown for the high temperatures observed in hypersonic flows. Existing DSMC reaction models accurately reproduce experimental equilibrium reaction rates, but the applicability of these rates to the strong thermal nonequilibrium observed in hypersonic shocks is unknown. For hypersonic flows, these modeling issues are particularly relevant for nitrogen, the dominant species of air. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method is used to accurately compute collision and reaction cross sections for the N{sub 2}({sup 1}Σ{sub g}{sup +})-N{sub 2}({sup 1}Σ{sub g}{sup +}) collision pair for conditions expected in hypersonic shocks using a new potential energy surface developed using a ReaxFF fit to recent advanced ab initio calculations. The MD/QCT-computed reaction probabilities were found to exhibit better physical behavior and predict less dissociation than the baseline total collision energy reaction model for strong nonequilibrium conditions expected in a shock. The MD/QCT reaction model compared well with computed equilibrium reaction rates and shock-tube data. In addition, the MD/QCT-computed total cross sections were found to agree well with established variable hard sphere total cross sections.

  7. A background error covariance model of significant wave height employing Monte Carlo simulation

    Institute of Scientific and Technical Information of China (English)

    GUO Yanyou; HOU Yijun; ZHANG Chunmei; YANG Jie

    2012-01-01

The quality of background error statistics is one of the key components for successful assimilation of observations in a numerical model. The background error covariance (BEC) of ocean waves is generally estimated under an assumption that it is stationary over a period of time and uniform over a domain. However, error statistics are in fact functions of the physical processes governing the meteorological situation and vary with the wave condition. In this paper, we simulated the BEC of the significant wave height (SWH) employing Monte Carlo methods. An interesting result is that the BEC varies consistently with the mean wave direction (MWD). In the model domain, the BEC of the SWH decreases significantly when the MWD changes abruptly. A new BEC model of the SWH based on the correlation between the BEC and MWD was then developed. A case study of regional data assimilation was performed, where the SWH observations of buoy 22001 were used to assess the SWH hindcast. The results show that the new BEC model benefits wave prediction and allows reasonable approximations of anisotropy and inhomogeneous errors.

  8. Monte Carlo modeling of Lead-Cooled Fast Reactor in adiabatic equilibrium state

    Energy Technology Data Exchange (ETDEWEB)

    Stanisz, Przemysław, E-mail: pstanisz@agh.edu.pl; Oettingen, Mikołaj, E-mail: moettin@agh.edu.pl; Cetnar, Jerzy, E-mail: cetnar@mail.ftj.agh.edu.pl

    2016-05-15

    Graphical abstract: - Highlights: • We present the Monte Carlo modeling of the LFR in the adiabatic equilibrium state. • We assess the adiabatic equilibrium fuel composition using the MCB code. • We define the self-adjusting process of breeding gain by the control rod operation. • The designed LFR can work in the adiabatic cycle with zero fuel breeding. - Abstract: Nuclear power would appear to be the only energy source able to satisfy the global energy demand while also achieving a significant reduction of greenhouse gas emissions. Moreover, it can provide a stable and secure source of electricity, and plays an important role in many European countries. However, nuclear power generation from its birth has been doomed by the legacy of radioactive nuclear waste. In addition, the looming decrease in the available resources of fissile U235 may influence the future sustainability of nuclear energy. The integrated solution to both problems is not trivial, and postulates the introduction of a closed-fuel cycle strategy based on breeder reactors. The perfect choice of a novel reactor system fulfilling both requirements is the Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state. In such a state, the reactor converts depleted or natural uranium into plutonium while consuming any self-generated minor actinides and transferring only fission products as waste. We present the preliminary design of a Lead-Cooled Fast Reactor operating in the adiabatic equilibrium state with the Monte Carlo Continuous Energy Burnup Code – MCB. As a reference reactor model we apply the core design developed initially under the framework of the European Lead-cooled SYstem (ELSY) project and refined in the follow-up Lead-cooled European Advanced DEmonstration Reactor (LEADER) project. The major objective of the study is to show to what extent the constraints of the adiabatic cycle are maintained and to indicate the phase space for further improvements. The analysis

  9. Mathematical modelling of scanner-specific bowtie filters for Monte Carlo CT dosimetry

    Science.gov (United States)

    Kramer, R.; Cassola, V. F.; Andrade, M. E. A.; de Araújo, M. W. C.; Brenner, D. J.; Khoury, H. J.

    2017-02-01

    The purpose of bowtie filters in CT scanners is to homogenize the x-ray intensity measured by the detectors in order to improve the image quality and at the same time to reduce the dose to the patient because of the preferential filtering near the periphery of the fan beam. For CT dosimetry, especially for Monte Carlo calculations of organ and tissue absorbed doses to patients, it is important to take the effect of bowtie filters into account. However, material composition and dimensions of these filters are proprietary. Consequently, a method for bowtie filter simulation independent of access to proprietary data and/or to a specific scanner would be of interest to many researchers involved in CT dosimetry. This study presents such a method based on the weighted computer tomography dose index, CTDIw, defined in two cylindrical PMMA phantoms of 16 cm and 32 cm diameter. With an EGSnrc-based Monte Carlo (MC) code, ratios CTDIw/CTDI100,a were calculated for a specific CT scanner using PMMA bowtie filter models based on sigmoid Boltzmann functions combined with a scanner filter factor (SFF) which is modified during calculations until the calculated MC CTDIw/CTDI100,a matches ratios CTDIw/CTDI100,a, determined by measurements or found in publications for that specific scanner. Once the scanner-specific value for an SFF has been found, the bowtie filter algorithm can be used in any MC code to perform CT dosimetry for that specific scanner. The bowtie filter model proposed here was validated for CTDIw/CTDI100,a considering 11 different CT scanners and for CTDI100,c, CTDI100,p and their ratio considering 4 different CT scanners. Additionally, comparisons were made for lateral dose profiles free in air and using computational anthropomorphic phantoms. CTDIw/CTDI100,a determined with this new method agreed on average within 0.89% (max. 3.4%) and 1.64% (max. 4.5%) with corresponding data published by CTDosimetry (www.impactscan.org) for the CTDI HEAD and BODY phantoms
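    The scanner filter factor (SFF) search described above amounts to tuning one scalar until a computed ratio matches a measured one. The sketch below shows one possible way to organize that loop, here with a simple bisection; the bracket, the tolerance and the black-box `mc_ctdi_ratio` function are assumptions for illustration and do not reproduce the authors' EGSnrc-based procedure.

```python
# Illustrative sketch of the scanner-filter-factor (SFF) calibration loop: the SFF
# in the bowtie-filter model is adjusted until the Monte Carlo ratio CTDIw/CTDI100,a
# matches the measured value for a given scanner. `mc_ctdi_ratio` stands in for a
# full MC transport run and is a placeholder assumption, as is the bisection bracket.
def calibrate_sff(mc_ctdi_ratio, measured_ratio, lo=0.1, hi=10.0,
                  tol=1e-3, max_iter=50):
    """mc_ctdi_ratio(sff) -> CTDIw/CTDI100,a computed by the MC code with this SFF.
    Assumes the ratio varies monotonically with the SFF over [lo, hi]."""
    r_lo = mc_ctdi_ratio(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        r_mid = mc_ctdi_ratio(mid)
        if abs(r_mid - measured_ratio) < tol:
            return mid
        if (r_mid - measured_ratio) * (r_lo - measured_ratio) > 0:
            lo, r_lo = mid, r_mid   # matching SFF lies in the upper half of the bracket
        else:
            hi = mid                # matching SFF lies in the lower half
    return 0.5 * (lo + hi)
```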

  10. Monte Carlo modeling of photon transport in buried bone tissue layer for quantitative Raman spectroscopy

    Science.gov (United States)

    Wilson, Robert H.; Dooley, Kathryn A.; Morris, Michael D.; Mycek, Mary-Ann

    2009-02-01

    Light-scattering spectroscopy has the potential to provide information about bone composition via a fiber-optic probe placed on the skin. In order to design efficient probes, one must understand the effect of all tissue layers on photon transport. To quantitatively understand the effect of overlying tissue layers on the detected bone Raman signal, a layered Monte Carlo model was modified for Raman scattering. The model incorporated the absorption and scattering properties of three overlying tissue layers (dermis, subdermis, muscle), as well as the underlying bone tissue. The attenuation of the collected bone Raman signal, predominantly due to elastic light scattering in the overlying tissue layers, affected the carbonate/phosphate (C/P) ratio by increasing the standard deviation of the computational result. Furthermore, the mean C/P ratio varied when the relative thicknesses of the layers were varied and the elastic scattering coefficient at the Raman scattering wavelength of carbonate was modeled to be different from that at the Raman scattering wavelength of phosphate. These results represent the first portion of a computational study designed to predict optimal probe geometry and help to analyze detected signal for Raman scattering experiments involving bone.

  11. Modeling Monte Carlo of multileaf collimators using the code GEANT4

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Alex C.H.; Lima, Fernando R.A., E-mail: oliveira.ach@yahoo.com, E-mail: falima@cnen.gov.br [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil); Lima, Luciano S.; Vieira, Jose W., E-mail: lusoulima@yahoo.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil)

    2014-07-01

Radiotherapy uses various techniques and equipment for local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation is the linear accelerator (Linac). Among the many algorithms developed for the evaluation of dose distributions in radiotherapy planning, algorithms based on Monte Carlo (MC) methods have proven to be very promising in terms of accuracy by providing more realistic results. MC simulations for applications in radiotherapy are divided into two parts. In the first, the simulation of the production of the radiation beam by the Linac is performed and the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the simulation of the transport of particles (sampled from the phase space) in certain configurations of the irradiation field is performed to assess the dose distribution in the patient (or phantom). Accurate modeling of the Linac head is of particular interest in the calculation of dose distributions for intensity modulated radiation therapy (IMRT), where complex intensity distributions are delivered using a multileaf collimator (MLC). The objective of this work is to describe a methodology for MC modeling of MLCs using the Geant4 code. To exemplify this methodology, the Varian Millennium 120-leaf MLC was modeled, whose physical description is available in the BEAMnrc Users Manual (2011). The dosimetric characteristics (i.e., penumbra, leakage, and tongue-and-groove effect) of this MLC were evaluated. The results agreed with data published in the literature concerning the same MLC. (author)

  12. Tests of the modified Sigmund model of ion sputtering using Monte Carlo simulations

    Science.gov (United States)

    Hofsäss, Hans; Bradley, R. Mark

    2015-05-01

    Monte Carlo simulations are used to evaluate the Modified Sigmund Model of Sputtering. Simulations were carried out for a range of ion incidence angles and surface curvatures for different ion species, ion energies, and target materials. Sputter yields, moments of erosive crater functions, and the fraction of backscattered energy were determined. In accordance with the Modified Sigmund Model of Sputtering, we find that for sufficiently large incidence angles θ the curvature dependence of the erosion crater function tends to destabilize the solid surface along the projected direction of the incident ions. For the perpendicular direction, however, the curvature dependence always leads to a stabilizing contribution. The simulation results also show that, for larger values of θ, a significant fraction of the ions is backscattered, carrying off a substantial amount of the incident ion energy. This provides support for the basic idea behind the Modified Sigmund Model of Sputtering: that the incidence angle θ should be replaced by a larger angle Ψ to account for the reduced energy that is deposited in the solid for larger values of θ.

  13. Two electric field Monte Carlo models of coherent backscattering of polarized light.

    Science.gov (United States)

    Doronin, Alexander; Radosevich, Andrew J; Backman, Vadim; Meglinski, Igor

    2014-11-01

    Modeling of coherent polarized light propagation in turbid scattering medium by the Monte Carlo method provides an ultimate understanding of coherent effects of multiple scattering, such as enhancement of coherent backscattering and peculiarities of laser speckle formation in dynamic light scattering (DLS) and optical coherence tomography (OCT) diagnostic modalities. In this report, we consider two major ways of modeling the coherent polarized light propagation in scattering tissue-like turbid media. The first approach is based on tracking transformations of the electric field along the ray propagation. The second one is developed in analogy to the iterative procedure of the solution of the Bethe-Salpeter equation. To achieve a higher accuracy in the results and to speed up the modeling, both codes utilize the implementation of parallel computing on NVIDIA Graphics Processing Units (GPUs) with Compute Unified Device Architecture (CUDA). We compare these two approaches through simulations of the enhancement of coherent backscattering of polarized light and evaluate the accuracy of each technique with the results of a known analytical solution. The advantages and disadvantages of each computational approach and their further developments are discussed. Both codes are available online and are ready for immediate use or download.

  14. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    Science.gov (United States)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.

  15. Monte Carlo simulation of depth dose distribution in several organic models for boron neutron capture therapy

    Science.gov (United States)

    Matsumoto, T.

    2007-09-01

    Monte Carlo simulations are performed to evaluate depth-dose distributions for possible treatment of cancers by boron neutron capture therapy (BNCT). The ICRU computational model of ADAM & EVA was used as a phantom to simulate tumors at a depth of 5 cm in central regions of the lungs, liver and pancreas. Tumors of the prostate and osteosarcoma were also centered at the depth of 4.5 and 2.5 cm in the phantom models. The epithermal neutron beam from a research reactor was the primary neutron source for the MCNP calculation of the depth-dose distributions in those cancer models. For brain tumor irradiations, the whole-body dose was also evaluated. The MCNP simulations suggested that a lethal dose of 50 Gy to the tumors can be achieved without reaching the tolerance dose of 25 Gy to normal tissue. The whole-body phantom calculations also showed that the BNCT could be applied for brain tumors without significant damage to whole-body organs.

  16. Study on quantification method based on Monte Carlo sampling for multiunit probabilistic safety assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)

    2017-06-15

    In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
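    To make the contrast with minimal-cut-set quantification concrete, the toy sketch below estimates a multiunit top-event probability by sampling basic-event states directly, so no rare-event approximation is involved. The two-unit logic, the event names and the probabilities are invented for illustration and are unrelated to the model in the paper.

```python
# Illustrative sketch of direct Monte Carlo quantification of a Boolean risk model,
# as opposed to a minimal-cut-set/rare-event approximation. The two-unit "site
# damage" logic below is a toy example, not the model from the paper.
import numpy as np

def mc_top_event_probability(basic_event_probs, site_logic, n_samples=200_000, seed=0):
    """basic_event_probs: dict name -> failure probability.
    site_logic: function taking a dict of sampled Booleans and returning True if the
    multiunit top event (e.g. core damage at any unit) occurs."""
    rng = np.random.default_rng(seed)
    names = list(basic_event_probs)
    probs = np.array([basic_event_probs[n] for n in names])
    hits = 0
    for _ in range(n_samples):
        states = dict(zip(names, rng.random(len(names)) < probs))
        hits += site_logic(states)       # count samples in which the top event occurs
    return hits / n_samples

# Toy two-unit example: shared seismic failure OR unit-specific failures at both units.
probs = {"seismic": 1e-3, "u1_pump": 0.2, "u2_pump": 0.2}
logic = lambda s: s["seismic"] or (s["u1_pump"] and s["u2_pump"])
print(mc_top_event_probability(probs, logic))
```

    Because every sample evaluates the full logic, events with high failure probabilities are handled exactly, which is the situation the abstract highlights for seismic risk.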

  17. Monte Carlo method based QSAR modelling of natural lipase inhibitors using hybrid optimal descriptors.

    Science.gov (United States)

    Kumar, A; Chauhan, S

    2017-03-08

Obesity is one of the most pressing health burdens in developed countries. One of the strategies to prevent obesity is the inhibition of the pancreatic lipase enzyme. The aim of this study was to build QSAR models for natural lipase inhibitors by using the Monte Carlo method. The molecular structures were represented by the simplified molecular input line entry system (SMILES) notation and molecular graphs. Three sets - training, calibration and test - in three splits were examined and validated. The statistical quality of all the described models was very good. The best QSAR model showed the following statistical parameters: r(2) = 0.864 and Q(2) = 0.836 for the test set and r(2) = 0.824 and Q(2) = 0.819 for the validation set. Structural attributes for increasing and decreasing the activity (expressed as pIC50) were also defined. Using the defined structural attributes, the design of new potential lipase inhibitors is also presented. Additionally, a molecular docking study was performed to determine the binding modes of the designed molecules.

  18. Monte Carlo Uncertainty Quantification Using Quasi-1D SRM Ballistic Model

    Directory of Open Access Journals (Sweden)

    Davide Viganò

    2016-01-01

Compactness, reliability, readiness, and construction simplicity of solid rocket motors make them very appealing for commercial launcher missions and embarked systems. Solid propulsion grants a high thrust-to-weight ratio, a high volumetric specific impulse, and a Technology Readiness Level of 9. However, solid rocket systems lack any throttling capability at run-time, since the pressure-time evolution is defined at the design phase. This lack of mission flexibility makes their missions sensitive to deviations of performance from nominal behavior. For this reason, the reliability of predictions and reproducibility of performances represent a primary goal in this field. This paper presents an analysis of SRM performance uncertainties throughout the implementation of a quasi-1D numerical model of motor internal ballistics based on Shapiro’s equations. The code is coupled with a Monte Carlo algorithm to evaluate statistics and propagation of some peculiar uncertainties from design data to rocket performance parameters. The model has been set up for the reproduction of a small-scale rocket motor, discussing a set of parametric investigations on uncertainty propagation across the ballistic model.
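    The Monte Carlo wrapper around the ballistic model can be sketched as follows: draw the uncertain design inputs, run the model for each draw, and summarize the output statistics. The placeholder model, the choice of inputs and their distributions are assumptions made only to show the structure of the propagation, not data from the paper.

```python
# Generic sketch of Monte Carlo uncertainty propagation through a ballistic model:
# sample uncertain design inputs, run the (here black-box) quasi-1D model, and
# collect statistics of a performance parameter. The parameter names, distributions
# and the placeholder model are assumptions for illustration only.
import numpy as np

def ballistic_model(burn_rate_coeff, throat_diameter, grain_density):
    # Placeholder for the quasi-1D internal-ballistics solver; returns peak pressure [Pa].
    return 7.0e6 * burn_rate_coeff * grain_density / throat_diameter**2

rng = np.random.default_rng(1)
n = 5000
burn_rate_coeff = rng.normal(1.00, 0.02, n)    # +/-2% manufacturing scatter (assumed)
throat_diameter = rng.normal(0.030, 3e-4, n)   # m
grain_density = rng.normal(1750.0, 20.0, n)    # kg/m^3

peak_p = np.array([ballistic_model(a, d, rho)
                   for a, d, rho in zip(burn_rate_coeff, throat_diameter, grain_density)])
print(f"mean = {peak_p.mean():.3e} Pa, std = {peak_p.std():.3e} Pa, "
      f"P95 = {np.percentile(peak_p, 95):.3e} Pa")
```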

  19. Monte Carlo renormalization: the triangular Ising model as a test case.

    Science.gov (United States)

    Guo, Wenan; Blöte, Henk W J; Ren, Zhiming

    2005-04-01

    We test the performance of the Monte Carlo renormalization method in the context of the Ising model on a triangular lattice. We apply a block-spin transformation which allows for an adjustable parameter so that the transformation can be optimized. This optimization purportedly brings the fixed point of the transformation to a location where the corrections to scaling vanish. To this purpose we determine corrections to scaling of the triangular Ising model with nearest- and next-nearest-neighbor interactions by means of transfer-matrix calculations and finite-size scaling. We find that the leading correction to scaling just vanishes for the nearest-neighbor model. However, the fixed point of the commonly used majority-rule block-spin transformation appears to lie well away from the nearest-neighbor critical point. This raises the question whether the majority rule is suitable as a renormalization transformation, because the standard assumptions of real-space renormalization imply that corrections to scaling vanish at the fixed point. We avoid this inconsistency by means of the optimized transformation which shifts the fixed point back to the vicinity of the nearest-neighbor critical Hamiltonian. The results of the optimized transformation in terms of the Ising critical exponents are more accurate than those obtained with the majority rule.

  20. Monte Carlo simulations for a Lotka-type model with reactant surface diffusion and interactions.

    Science.gov (United States)

    Zvejnieks, G; Kuzovkov, V N

    2001-05-01

    The standard Lotka-type model, which was introduced for the first time by Mai et al. [J. Phys. A 30, 4171 (1997)] for a simplified description of autocatalytic surface reactions, is generalized here for a case of mobile and energetically interacting reactants. The mathematical formalism is proposed for determining the dependence of transition rates on the interaction energy (and temperature) for the general mathematical model, and the Lotka-type model, in particular. By means of Monte Carlo computer simulations, we have studied the impact of diffusion (with and without energetic interactions between reactants) on oscillatory properties of the A+B-->2B reaction. The diffusion leads to a desynchronization of oscillations and a subsequent decrease of oscillation amplitude. The energetic interaction between reactants has a dual effect depending on the type of mobile reactants. In the limiting case of mobile reactants B the repulsion results in a decrease of amplitudes. However, these amplitudes increase if reactants A are mobile and repulse each other. A simplified interpretation of the obtained results is given.

  1. Monte carlo method-based QSAR modeling of penicillins binding to human serum proteins.

    Science.gov (United States)

    Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M; Veselinović, Aleksandar M

    2015-01-01

    The binding of penicillins to human serum proteins was modeled with optimal descriptors based on the Simplified Molecular Input-Line Entry System (SMILES). The concentrations of protein-bound drug for 87 penicillins expressed as percentage of the total plasma concentration were used as experimental data. The Monte Carlo method was used as a computational tool to build up the quantitative structure-activity relationship (QSAR) model for penicillins binding to plasma proteins. One random data split into training, test and validation set was examined. The calculated QSAR model had the following statistical parameters: r(2)  = 0.8760, q(2)  = 0.8665, s = 8.94 for the training set and r(2)  = 0.9812, q(2)  = 0.9753, s = 7.31 for the test set. For the validation set, the statistical parameters were r(2)  = 0.727 and s = 12.52, but after removing the three worst outliers, the statistical parameters improved to r(2)  = 0.921 and s = 7.18. SMILES-based molecular fragments (structural indicators) responsible for the increase and decrease of penicillins binding to plasma proteins were identified. The possibility of using these results for the computer-aided design of new penicillins with desired binding properties is presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. QSAR models for HEPT derivates as NNRTI inhibitors based on Monte Carlo method.

    Science.gov (United States)

    Toropova, Alla P; Toropov, Andrey A; Veselinović, Jovana B; Miljković, Filip N; Veselinović, Aleksandar M

    2014-04-22

A series of 107 1-[(2-hydroxyethoxy)-methyl]-6-(phenylthio)thymine (HEPT) derivatives with anti-HIV-1 activity as non-nucleoside reverse transcriptase inhibitors (NNRTIs) has been studied. The Monte Carlo method has been used as a tool to build up quantitative structure-activity relationships (QSAR) for anti-HIV-1 activity. The QSAR models were calculated with the representation of the molecular structure by the simplified molecular input-line entry system and by the molecular graph. Three different splits into training and test sets were examined. The statistical quality of all built models is very good. The best calculated model had the following statistical parameters: r(2) = 0.8818, q(2) = 0.8774 for the training set and r(2) = 0.9360, q(2) = 0.9243 for the test set. Structural indicators (alerts) for increase and decrease of the IC50 are defined. Using the defined structural alerts, computer-aided design of new potential anti-HIV-1 HEPT derivatives is presented. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  3. Mathematical modeling, analysis and Markov Chain Monte Carlo simulation of Ebola epidemics

    Science.gov (United States)

    Tulu, Thomas Wetere; Tian, Boping; Wu, Zunyou

Ebola virus infection is a severe infectious disease with the highest case fatality rate, and it has become a global public health threat. What makes the disease worst of all is that no specific effective treatment is available, and its dynamics are not well researched or understood. In this article a new mathematical model incorporating both vaccination and quarantine to study the dynamics of the Ebola epidemic has been developed and comprehensively analyzed. The existence as well as uniqueness of the solution to the model is also verified and the basic reproduction number is calculated. Besides, stability conditions are also checked, and finally simulation is done using both the Euler method and one of the ten most influential algorithms, the Markov Chain Monte Carlo (MCMC) method. Different rates of vaccination and quarantine are discussed to predict their effect on the infected population over time. The results show that quarantine and vaccination are very effective ways to control the Ebola epidemic. From our study it was also seen that an individual who survived a first infection is less likely to contract the Ebola virus a second time. Last but not least, real data have been fitted to the model, showing that it can be used to predict the dynamics of the Ebola epidemic.
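    The MCMC step of such a study typically amounts to a random-walk Metropolis exploration of the model parameters given incidence data. The sketch below uses a deliberately simple exponential-growth stand-in for the epidemic model and synthetic data, so the likelihood, the parameters and the step sizes are illustrative assumptions rather than the authors' formulation.

```python
# Hedged sketch of a random-walk Metropolis (MCMC) fit of epidemic-model parameters
# to incidence data, assuming a Gaussian log-likelihood around a model prediction.
import numpy as np

def log_likelihood(theta, t, cases):
    i0, r = theta
    if i0 <= 0 or r <= 0:
        return -np.inf                       # flat positivity prior
    pred = i0 * np.exp(r * t)                # stand-in for the epidemic model
    return -0.5 * np.sum((cases - pred) ** 2) / cases.var()

def metropolis(t, cases, theta0, step, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = log_likelihood(theta, t, cases)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, size=theta.size)
        ll_prop = log_likelihood(prop, t, cases)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta.copy())
    return np.array(chain)

# Synthetic data and a short run; posterior summaries come from the chain tail.
t = np.arange(30.0)
cases = 5.0 * np.exp(0.15 * t) + np.random.default_rng(1).normal(0.0, 2.0, t.size)
chain = metropolis(t, cases, theta0=(1.0, 0.1), step=(0.2, 0.01))
print(chain[10000:].mean(axis=0))
```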

  4. LPM-Effect in Monte Carlo Models of Radiative Energy Loss

    CERN Document Server

    Zapp, Korinna C; Wiedemann, Urs Achim

    2009-01-01

    Extending the use of Monte Carlo (MC) event generators to jets in nuclear collisions requires a probabilistic implementation of the non-abelian LPM effect. We demonstrate that a local, probabilistic MC implementation based on the concept of formation times can account fully for the LPM-effect. The main features of the analytically known eikonal and collinear approximation can be reproduced, but we show how going beyond this approximation can lead to qualitatively different results.

  5. LPM-Effect in Monte Carlo Models of Radiative Energy Loss

    Energy Technology Data Exchange (ETDEWEB)

    Zapp, Korinna C. [Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Planckstrasse 1, 64291 Darmstadt (Germany); Stachel, Johanna [Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, D-69120 Heidelberg (Germany); Wiedemann, Urs Achim [Physics Department, Theory Unit, CERN, CH-1211 Geneve 23 (Switzerland)

    2009-11-01

    Extending the use of Monte Carlo (MC) event generators to jets in nuclear collisions requires a probabilistic implementation of the non-abelian LPM effect. We demonstrate that a local, probabilistic MC implementation based on the concept of formation times can account fully for the LPM-effect. The main features of the analytically known eikonal and collinear approximation can be reproduced, but we show how going beyond this approximation can lead to qualitatively different results.

  6. Efficient 3D Kinetic Monte Carlo Method for Modeling of Molecular Structure and Dynamics

    DEFF Research Database (Denmark)

    Panshenskov, Mikhail; Solov'yov, Ilia; Solov'yov, Andrey V.

    2014-01-01

    Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and material sciences. Through understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with...... the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it for studying an exemplary system....

  7. Magnetic properties of a ferrimagnetic core/shell nanocube Ising model: A Monte Carlo simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Zaim, A. [LPMMS, Faculte des Sciences, B.P. 11201, Zitoune, Meknes (Morocco); LPSMS, FST Errachidia, B.P. 509, Boutalamine, Errachidia (Morocco); Kerouad, M. [LPMMS, Faculte des Sciences, B.P. 11201, Zitoune, Meknes (Morocco)], E-mail: kerouad@fs-umi.ac.ma; EL Amraoui, Y. [LPSMS, FST Errachidia, B.P. 509, Boutalamine, Errachidia (Morocco)

    2009-04-15

    Monte Carlo simulation has been used to study the magnetic properties and hysteresis loops of a single nanocube, consisting of a ferromagnetic core of spin-1/2 surrounded by a ferromagnetic shell of spin-1 with antiferromagnetic interface coupling. We find a number of characteristic phenomena. In particular, the effects of the shell coupling and the interface coupling on both the compensation temperature and the magnetization profiles are investigated. The effects of the interface coupling on the hysteresis loops are also examined.

  8. The two-phase issue in the O(n) non-linear $\\sigma$-model: A Monte Carlo study

    OpenAIRE

    Alles, B.; Buonanno, A.; Cella, G.

    1996-01-01

We have performed a high-statistics Monte Carlo simulation to investigate whether the two-dimensional O(n) non-linear sigma models are asymptotically free or they show a Kosterlitz-Thouless-like phase transition. We have calculated the mass gap and the magnetic susceptibility in the O(8) model with standard action and the O(3) model with Symanzik action. Our results for O(8) support the asymptotic freedom scenario.

  9. Water leaching of borosilicate glasses: experiments, modeling and Monte Carlo simulations; Alteration par l'eau des verres borosilicates: experiences, modelisation et simulations Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-15

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion so that the percolation properties of the boron sub-network is no more a sufficient explanation to account for the behavior of these glasses. Meanwhile, we have developed a theoretical model, based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  10. Aqueous corrosion of borosilicate glasses: experiments, modeling and Monte-Carlo simulations; Alteration par l'eau des verres borosilicates: experiences, modelisation et simulations Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Ledieu, A

    2004-10-01

    This work is concerned with the corrosion of borosilicate glasses with variable oxide contents. The originality of this study is the complementary use of experiments and numerical simulations. This study is expected to contribute to a better understanding of the corrosion of nuclear waste confinement glasses. First, the corrosion of glasses containing only silicon, boron and sodium oxides has been studied. The kinetics of leaching show that the rate of leaching and the final degree of corrosion sharply depend on the boron content through a percolation mechanism. For some glass contents and some conditions of leaching, the layer which appears at the glass surface stops the release of soluble species (boron and sodium). This altered layer (also called the gel layer) has been characterized with nuclear magnetic resonance (NMR) and small angle X-ray scattering (SAXS) techniques. Second, additional elements have been included in the glass composition. It appears that calcium, zirconium or aluminum oxides strongly modify the final degree of corrosion so that the percolation properties of the boron sub-network is no more a sufficient explanation to account for the behavior of these glasses. Meanwhile, we have developed a theoretical model, based on the dissolution and the reprecipitation of the silicon. Kinetic Monte Carlo simulations have been used in order to test several concepts such as the boron percolation, the local reactivity of weakly soluble elements and the restructuring of the gel layer. This model has been fully validated by comparison with the results on the three oxide glasses. Then, it has been used as a comprehensive tool to investigate the paradoxical behavior of the aluminum and zirconium glasses: although these elements slow down the corrosion kinetics, they lead to a deeper final degree of corrosion. The main contribution of this work is that the final degree of corrosion of borosilicate glasses results from the competition of two opposite mechanisms

  11. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    Science.gov (United States)

    Pfeiffer, M.; Nizenkov, P.; Mirza, A.; Fasoulas, S.

    2016-02-01

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn's Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum-mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the new proposed multi-mode relaxation. Differences and applications areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  12. Monte Carlo ice flow modeling projects a new stable configuration for Columbia Glacier, Alaska, c. 2020

    Directory of Open Access Journals (Sweden)

    W. Colgan

    2012-11-01

    Due to the abundance of observational datasets collected since the onset of its retreat (c. 1983), Columbia Glacier, Alaska, provides an exciting modeling target. We perform Monte Carlo simulations of the form and flow of Columbia Glacier, using a 1-D (depth-integrated) flowline model, over a wide range of parameter values and forcings. An ensemble filter is imposed following spin-up to ensure that only simulations that accurately reproduce observed pre-retreat glacier geometry are retained; all other simulations are discarded. The selected ensemble of simulations reasonably reproduces numerous highly transient post-retreat observed datasets. The selected ensemble mean projection suggests that Columbia Glacier will achieve a new dynamic equilibrium (i.e. "stable") ice geometry c. 2020, at which time iceberg calving rate will have returned to approximately pre-retreat values. Comparison of the observed 1957 and 2007 glacier geometries with the projected 2100 glacier geometry suggests that Columbia Glacier had already discharged ~82% of its projected 1957–2100 sea level rise contribution by 2007. This case study therefore highlights the difficulties associated with the future extrapolation of observed glacier mass loss rates that are dominated by iceberg calving.
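
    A schematic of the ensemble-filter workflow described above, with the 1-D flowline physics replaced by a deliberately trivial stand-in model (all functions, parameter ranges and numbers are invented; only the filter-then-project structure is the point):

      import numpy as np

      rng = np.random.default_rng(2)

      def spinup_length(creep, calving):
          # equilibrium (pre-retreat) length of a toy flowline model, in km
          return 40.0 + 12.0 * creep - 600.0 * calving

      def projected_length(creep, calving, years):
          # toy transient retreat towards a new, shorter equilibrium
          l_eq = spinup_length(creep, calving) - 15.0
          return l_eq + 15.0 * np.exp(-20.0 * calving * years)

      observed_pre_retreat = 66.0                      # pre-retreat length (km), filter target

      # Monte Carlo draw over the uncertain model parameters
      creep = rng.uniform(0.5, 3.0, 5000)
      calving = rng.uniform(0.001, 0.03, 5000)

      # ensemble filter: retain only members that reproduce the pre-retreat geometry
      keep = np.abs(spinup_length(creep, calving) - observed_pre_retreat) < 1.0
      print(f"retained {keep.sum()} of {keep.size} ensemble members")

      # ensemble-mean projection using only the retained members
      proj = projected_length(creep[keep], calving[keep], years=2020 - 1983)
      print(f"projected c. 2020 length: {proj.mean():.1f} +/- {proj.std():.1f} km")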

  13. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    Science.gov (United States)

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.

  14. Nanostructure evolution of neutron-irradiated reactor pressure vessel steels: Revised Object kinetic Monte Carlo model

    Science.gov (United States)

    Chiapetto, M.; Messina, L.; Becquart, C. S.; Olsson, P.; Malerba, L.

    2017-02-01

    This work presents a revised set of parameters to be used in an Object kinetic Monte Carlo model to simulate the microstructure evolution under neutron irradiation of reactor pressure vessel steels at the operational temperature of light water reactors (∼300 °C). Within a "grey-alloy" approach, a more physical description than in a previous work is used to translate the effect of Mn and Ni solute atoms on the defect cluster diffusivity reduction. The slowing down of self-interstitial clusters, due to the interaction between solutes and crowdions in Fe is now parameterized using binding energies from the latest DFT calculations and the solute concentration in the matrix from atom-probe experiments. The mobility of vacancy clusters in the presence of Mn and Ni solute atoms was also modified on the basis of recent DFT results, thereby removing some previous approximations. The same set of parameters was seen to predict the correct microstructure evolution for two different types of alloys, under very different irradiation conditions: an Fe-C-MnNi model alloy, neutron irradiated at a relatively high flux, and a high-Mn, high-Ni RPV steel from the Swedish Ringhals reactor surveillance program. In both cases, the predicted self-interstitial loop density matches the experimental solute cluster density, further corroborating the surmise that the MnNi-rich nanofeatures form by solute enrichment of immobilized small interstitial loops, which are invisible to the electron microscope.

  15. Monte Carlo model of neutral-particle transport in diverted plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Heifetz, D.; Post, D.; Petravic, M.; Weisheit, J.; Bateman, G.

    1981-11-01

    The transport of neutral atoms and molecules in the edge and divertor regions of fusion experiments has been calculated using Monte-Carlo techniques. The deuterium, tritium, and helium atoms are produced by recombination in the plasma and at the walls. The relevant collision processes of charge exchange, ionization, and dissociation between the neutrals and the flowing plasma electrons and ions are included, along with wall reflection models. General two-dimensional wall and plasma geometries are treated in a flexible manner so that varied configurations can be easily studied. The algorithm uses a pseudo-collision method. Splitting with Russian roulette, suppression of absorption, and efficient scoring techniques are used to reduce the variance. The resulting code is sufficiently fast and compact to be incorporated into iterative treatments of plasma dynamics requiring numerous neutral profiles. The calculation yields the neutral gas densities, pressures, fluxes, ionization rates, momentum transfer rates, energy transfer rates, and wall sputtering rates. Applications have included modeling of proposed INTOR/FED poloidal divertor designs and other experimental devices.
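
    The splitting and Russian-roulette variance-reduction techniques mentioned above can be sketched on a toy 1-D attenuation problem as follows (the cell structure, weight thresholds and absorption probability are illustrative, not taken from the code described in the abstract):

      import numpy as np

      rng = np.random.default_rng(3)

      def transport(n_histories=20000, n_cells=10, absorb=0.3, w_low=0.2):
          # Toy 1-D deep-penetration problem: score the statistical weight that
          # reaches the far surface.  Implicit capture reduces the weight in every
          # cell, geometric splitting (x2 per cell) pushes particles deeper, and
          # Russian roulette removes low-weight particles without biasing the mean.
          score = 0.0
          for _ in range(n_histories):
              stack = [(0, 1.0)]                       # (cell index, statistical weight)
              while stack:
                  cell, w = stack.pop()
                  if cell == n_cells:                  # reached the scoring surface
                      score += w
                      continue
                  w *= (1.0 - absorb)                  # implicit capture (survival biasing)
                  for _ in range(2):                   # split into two lighter copies
                      wc = 0.5 * w
                      if wc < w_low:                   # Russian roulette on light copies
                          if rng.random() < wc / w_low:
                              stack.append((cell + 1, w_low))
                      else:
                          stack.append((cell + 1, wc))
          return score / n_histories

      print("Monte Carlo transmitted fraction:", round(transport(), 4))
      print("analytic value                  :", round((1.0 - 0.3) ** 10, 4))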

  16. A stochastic Markov chain approach for tennis: Monte Carlo simulation and modeling

    Science.gov (United States)

    Aslam, Kamran

    This dissertation describes the computational formulation of probability density functions (pdfs) that facilitate head-to-head match simulations in tennis along with ranking systems developed from their use. A background on the statistical method used to develop the pdfs, the Monte Carlo method, and the resulting rankings are included along with a discussion on ranking methods currently being used both in professional sports and in other applications. Using an analytical theory developed by Newton and Keller in [34] that defines a tennis player's probability of winning a game, set, match and single elimination tournament, a computational simulation has been developed in Matlab that allows further modeling not previously possible with the analytical theory alone. Such experimentation consists of the exploration of non-iid effects, considers the concept of the varying importance of points in a match and allows an unlimited number of matches to be simulated between unlikely opponents. The results of these studies have provided pdfs that accurately model an individual tennis player's ability along with a realistic, fair and mathematically sound platform for ranking them.
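
    A small sketch of the kind of game-level simulation such a framework builds on, assuming independent, identically distributed points with win probability p (the value of p is arbitrary); the Monte Carlo estimate is checked against the standard closed-form game-winning probability:

      import numpy as np

      rng = np.random.default_rng(4)

      def simulate_game(p):
          # play one game with iid point-win probability p; return True if that player wins
          a = b = 0
          while True:
              if rng.random() < p:
                  a += 1
              else:
                  b += 1
              if a >= 4 and a - b >= 2:
                  return True
              if b >= 4 and b - a >= 2:
                  return False

      def game_win_exact(p):
          # closed-form probability of winning a game, including deuce
          q = 1.0 - p
          return p**4 * (1.0 + 4.0*q + 10.0*q**2) + 20.0 * p**3 * q**3 * p**2 / (1.0 - 2.0*p*q)

      p = 0.62                                         # point-win probability (illustrative)
      n = 200000
      wins = sum(simulate_game(p) for _ in range(n))
      print("Monte Carlo estimate:", wins / n, "  closed form:", round(game_win_exact(p), 4))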

  17. Dynamical Models for NGC 6503 using a Markov Chain Monte Carlo Technique

    CERN Document Server

    Puglielli, David; Courteau, Stéphane

    2010-01-01

    We use Bayesian statistics and Markov chain Monte Carlo (MCMC) techniques to construct dynamical models for the spiral galaxy NGC 6503. The constraints include surface brightness profiles which display a Freeman Type II structure; HI and ionized gas rotation curves; the stellar rotation, which is nearly coincident with the ionized gas curve; and the line of sight stellar dispersion, with a sigma-drop at the centre. The galaxy models consist of a Sersic bulge, an exponential disc with an optional inner truncation and a cosmologically motivated dark halo. The Bayesian/MCMC technique yields the joint posterior probability distribution function for the input parameters. We examine several interpretations of the data: the Type II surface brightness profile may be due to dust extinction, to an inner truncated disc or to a ring of bright stars; and we test separate fits to the gas and stellar rotation curves to determine if the gas traces the gravitational potential. We test each of these scenarios for bar stability...

  18. Inverse Monte Carlo in a multilayered tissue model: merging diffuse reflectance spectroscopy and laser Doppler flowmetry

    Science.gov (United States)

    Fredriksson, Ingemar; Burdakov, Oleg; Larsson, Marcus; Strömberg, Tomas

    2013-12-01

    The tissue fraction of red blood cells (RBCs) and their oxygenation and speed-resolved perfusion are estimated in absolute units by combining diffuse reflectance spectroscopy (DRS) and laser Doppler flowmetry (LDF). The DRS spectra (450 to 850 nm) are assessed at two source-detector separations (0.4 and 1.2 mm), allowing for a relative calibration routine, whereas LDF spectra are assessed at 1.2 mm in the same fiber-optic probe. Data are analyzed using nonlinear optimization in an inverse Monte Carlo technique by applying an adaptive multilayered tissue model based on geometrical, scattering, and absorbing properties, as well as RBC flow-speed information. Simulations of 250 tissue-like models including up to 2000 individual blood vessels were used to evaluate the method. The absolute root mean square (RMS) deviation between estimated and true oxygenation was 4.1 percentage units, whereas the relative RMS deviations for the RBC tissue fraction and perfusion were 19% and 23%, respectively. Examples of in vivo measurements on forearm and foot during common provocations are presented. The method offers several advantages such as simultaneous quantification of RBC tissue fraction and oxygenation and perfusion from the same, predictable, sampling volume. The perfusion estimate is speed resolved, absolute (% RBC×mm/s), and more accurate due to the combination with DRS.

  19. Momentum transfer Monte Carlo model for the simulation of laser speckle contrast imaging (Conference Presentation)

    Science.gov (United States)

    Regan, Caitlin; Hayakawa, Carole K.; Choi, Bernard

    2016-03-01

    Laser speckle imaging (LSI) enables measurement of relative blood flow in microvasculature and perfusion in tissues. To determine the impact of tissue optical properties and perfusion dynamics on speckle contrast, we developed a computational simulation of laser speckle contrast imaging. We used a discrete absorption-weighted Monte Carlo simulation to model the transport of light in tissue. We simulated optical excitation of a uniform flat light source and tracked the momentum transfer of photons as they propagated through a simulated tissue geometry. With knowledge of the probability distribution of momentum transfer occurring in various layers of the tissue, we calculated the expected laser speckle contrast arising with coherent excitation using both reflectance and transmission geometries. We simulated light transport in a single homogeneous tissue while independently varying either absorption (0.001-100 mm^-1), reduced scattering (0.1-10 mm^-1), or anisotropy (0.05-0.99) over a range of values relevant to blood and commonly imaged tissues. We observed that contrast decreased by 49% with an increase in optical scattering, and observed a 130% increase with absorption (exposure time = 1 ms). We also explored how speckle contrast was affected by the depth (0-1 mm) and flow speed (0-10 mm/s) of a dynamic vascular inclusion. This model of speckle contrast is important to increase our understanding of how parameters such as perfusion dynamics, vessel depth, and tissue optical properties affect laser speckle imaging.

  20. Single-site Lennard-Jones models via polynomial chaos surrogates of Monte Carlo molecular simulation

    KAUST Repository

    Kadoura, Ahmad Salim

    2016-06-01

    In this work, two Polynomial Chaos (PC) surrogates were generated to reproduce Monte Carlo (MC) molecular simulation results of the canonical (single-phase) and the NVT-Gibbs (two-phase) ensembles for a system of normalized structureless Lennard-Jones (LJ) particles. The main advantage of such surrogates, once generated, is the capability of accurately computing the needed thermodynamic quantities in a few seconds, thus efficiently replacing the computationally expensive MC molecular simulations. Benefiting from the tremendous computational time reduction, the PC surrogates were used to conduct large-scale optimization in order to propose single-site LJ models for several simple molecules. Experimental data, a set of supercritical isotherms, and part of the two-phase envelope, of several pure components were used for tuning the LJ parameters (ε, σ). Based on the conducted optimization, excellent fit was obtained for different noble gases (Ar, Kr, and Xe) and other small molecules (CH4, N2, and CO). On the other hand, due to the simplicity of the LJ model used, dramatic deviations between simulation and experimental data were observed, especially in the two-phase region, for more complex molecules such as CO2 and C2 H6.

  1. Clustering and heterogeneous dynamics in a kinetic Monte Carlo model of self-propelled hard disks.

    Science.gov (United States)

    Levis, Demian; Berthier, Ludovic

    2014-06-01

    We introduce a kinetic Monte Carlo model for self-propelled hard disks to capture with minimal ingredients the interplay between thermal fluctuations, excluded volume, and self-propulsion in large assemblies of active particles. We analyze in detail the resulting (density, self-propulsion) nonequilibrium phase diagram over a broad range of parameters. We find that purely repulsive hard disks spontaneously aggregate into fractal clusters as self-propulsion is increased and rationalize the evolution of the average cluster size by developing a kinetic model of reversible aggregation. As density is increased, the nonequilibrium clusters percolate to form a ramified structure reminiscent of a physical gel. We show that the addition of a finite amount of noise is needed to trigger a nonequilibrium phase separation, showing that demixing in active Brownian particles results from a delicate balance between noise, interparticle interactions, and self-propulsion. We show that self-propulsion has a profound influence on the dynamics of the active fluid. We find that the diffusion constant has a nonmonotonic behavior as self-propulsion is increased at finite density and that activity produces strong deviations from Fickian diffusion that persist over large time scales and length scales, suggesting that systems of active particles generically behave as dynamically heterogeneous systems.

  2. Properties of Carbon-Oxygen White Dwarfs From Monte Carlo Stellar Models

    CERN Document Server

    Fields, C E; Petermann, I; Iliadis, C; Timmes, F X

    2016-01-01

    We investigate properties of carbon-oxygen white dwarfs with respect to the composite uncertainties in the reaction rates using the stellar evolution toolkit, Modules for Experiments in Stellar Astrophysics (MESA) and the probability density functions in the reaction rate library STARLIB. These are the first Monte Carlo stellar evolution studies that use complete stellar models. Focusing on 3 M$_{\\odot}$ models evolved from the pre main-sequence to the first thermal pulse, we survey the remnant core mass, composition, and structure properties as a function of 26 STARLIB reaction rates covering hydrogen and helium burning using a Principal Component Analysis and Spearman Rank-Order Correlation. Relative to the arithmetic mean value, we find the width of the 95\\% confidence interval to be $\\Delta M_{{\\rm 1TP}}$ $\\approx$ 0.019 M$_{\\odot}$ for the core mass at the first thermal pulse, $\\Delta$$t_{\\rm{1TP}}$ $\\approx$ 12.50 Myr for the age, $\\Delta \\log(T_{{\\rm c}}/{\\rm K}) \\approx$ 0.013 for the central temperat...

  3. Modeling the Thermal Conductivity of Nanocomposites Using Monte-Carlo Methods and Realistic Nanotube Configurations

    Science.gov (United States)

    Bui, Khoa; Papavassiliou, Dimitrios

    2012-02-01

    The effective thermal conductivity (Keff) of carbon nanotube (CNT) composites is affected by the thermal boundary resistance (TBR) and by the dispersion pattern and geometry of the CNTs. We have previously modeled CNTs as straight cylinders and found that the TBR between CNTs (TBRCNT-CNT) can suppress Keff at high volume fractions of CNTs [1]. Effective medium theory results assume that the CNTs are in a perfect dispersion state and exclude the TBRCNT-CNT [2]. In this work, we report on the development of an algorithm for generating CNTs with worm-like geometry in 3D, and with different persistence lengths. These worm-like CNTs are then randomly placed in a periodic box representing a realistic state, since the persistence length of a CNT can be obtained from microscopic images. The use of these CNT geometries in conjunction with off-lattice Monte Carlo simulations [1] in order to study the effective thermal properties of nanocomposites will be discussed, as well as the effects of the persistence length on Keff and comparisons to straight cylinder models. References [1] K. Bui, B.P. Grady, D.V. Papavassiliou, Chem. Phys. Let., 508(4-6), 248-251, 2011 [2] C.W. Nan, G. Liu, Y. Lin, M. Li, App. Phys. Let., 85(16), 3549-3551, 2006
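
    A possible way to generate the worm-like CNT geometries mentioned above is a discrete Kratky-Porod chain whose bend angles are drawn so that bond directions decorrelate over a prescribed persistence length; the sketch below (segment length, persistence length and chain length are illustrative, and this is not the authors' algorithm) builds one such backbone:

      import numpy as np

      rng = np.random.default_rng(5)

      def wormlike_chain(n_seg=200, ds=5.0, lp=150.0):
          # Discrete worm-like (Kratky-Porod) chain: successive bond directions have
          # <cos(theta)> = coth(lp/ds) - ds/lp, i.e. they decorrelate over ~lp.
          k = lp / ds
          pts = [np.zeros(3)]
          t = np.array([0.0, 0.0, 1.0])                # initial tangent direction
          for _ in range(n_seg):
              # sample the bend angle from the weight exp(k * cos(theta))
              u = rng.random()
              cos_t = np.log(np.exp(-k) + u * (np.exp(k) - np.exp(-k))) / k
              sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
              phi = 2.0 * np.pi * rng.random()
              # orthonormal frame around the current tangent t
              a = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
              n1 = np.cross(t, a); n1 /= np.linalg.norm(n1)
              n2 = np.cross(t, n1)
              t = cos_t * t + sin_t * (np.cos(phi) * n1 + np.sin(phi) * n2)
              t /= np.linalg.norm(t)
              pts.append(pts[-1] + ds * t)
          return np.array(pts)

      chain = wormlike_chain()
      print("contour length (nm)  :", 200 * 5.0)
      print("end-to-end distance  :", round(np.linalg.norm(chain[-1] - chain[0]), 1), "nm")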

  4. A Monte-Carlo Model of Partially Trapped UV Radiation in a Plasma Display Panel Cell

    Science.gov (United States)

    van der Straaten, Trudy; Kushner, Mark J.

    1999-10-01

    Plasma Display Panels (PDPs) are being developed for large-area high-brightness flat panel displays. Color PDP cells generally use xenon gas mixtures to generate UV photons that are converted to visible light by phosphors. While the UV photons produced by Xe(6s'-5s5p6, 6s-5s5p6) are only in a quasi-optically thick regime due to the small dimensions (hundreds of micrometers) of PDP cells, current models of PDPs do not explicitly address UV radiation transport other than by using radiation trapping factors. In this paper we report on results from a two-dimensional hybrid simulation of a PDP cell which models radiation transport using Monte Carlo (MC) photon transport and frequency redistribution algorithms. We examine the spectrum of UV photons incident on the phosphor and their escape probability. For typical operating conditions (400 Torr, 1-4% Xe mole fraction) there is significant frequency redistribution of resonance radiation due to absorption and subsequent re-emission at a different frequency within the lineshape. Significant line reversal occurs at Xe mole fractions of a few percent, the degree of which depends on PDP cell dimensions. The escape probability generally decreases during the current pulse due to additional quenching by electron impact processes.

  5. A geometrical model for the Monte Carlo simulation of the TrueBeam linac

    CERN Document Server

    Rodriguez, Miguel; Fogliata, Antonella; Cozzi, Luca; Sauerwein, Wolfgang; Brualla, Lorenzo

    2015-01-01

    Monte Carlo (MC) simulation of linacs depends on the accurate geometrical description of the head. The geometry of the Varian TrueBeam linac is not available to researchers. Instead, the company distributes phase-space files (PSFs) of the flattening-filter-free (FFF) beams tallied upstream the jaws. Yet, MC simulations based on third party tallied PSFs are subject to limitations. We present an experimentally-based geometry developed for the simulation of the FFF beams of the TrueBeam linac. The upper part of the TrueBeam linac was modeled modifying the Clinac 2100 geometry. The most important modification is the replacement of the standard flattening filters by {\\it ad hoc} thin filters which were modeled by comparing dose measurements and simulations. The experimental dose profiles for the 6~MV and 10~MV FFF beams were obtained from the Varian Golden Data Set and from in-house measurements for radiation fields ranging from $3\\times3$ to $40\\times40$ cm$^2$. The same comparisons were done for dose profiles ob...

  6. Direct simulation Monte Carlo modeling of relaxation processes in polyatomic gases

    Energy Technology Data Exchange (ETDEWEB)

    Pfeiffer, M., E-mail: mpfeiffer@irs.uni-stuttgart.de; Nizenkov, P., E-mail: nizenkov@irs.uni-stuttgart.de; Mirza, A., E-mail: mirza@irs.uni-stuttgart.de; Fasoulas, S., E-mail: fasoulas@irs.uni-stuttgart.de [Institute of Space Systems, University of Stuttgart, Pfaffenwaldring 29, D-70569 Stuttgart (Germany)

    2016-02-15

    Relaxation processes of polyatomic molecules are modeled and implemented in an in-house Direct Simulation Monte Carlo code in order to enable the simulation of atmospheric entry maneuvers at Mars and Saturn’s Titan. The description of rotational and vibrational relaxation processes is derived from basic quantum mechanics using a rigid rotator and a simple harmonic oscillator, respectively. Strategies regarding the vibrational relaxation process are investigated, where good agreement for the relaxation time according to the Landau-Teller expression is found for both methods, the established prohibiting double relaxation method and the newly proposed multi-mode relaxation. Differences and application areas of these two methods are discussed. Consequently, two numerical methods used for sampling of energy values from multi-dimensional distribution functions are compared. The proposed random-walk Metropolis algorithm enables the efficient treatment of multiple vibrational modes within a time step with reasonable computational effort. The implemented model is verified and validated by means of simple reservoir simulations and the comparison to experimental measurements of a hypersonic, carbon-dioxide flow around a flat-faced cylinder.

  7. Non-Local effective SU(2) Polyakov-loop models from inverse Monte-Carlo methods

    CERN Document Server

    Bahrampour, Bardiya; von Smekal, Lorenz

    2016-01-01

    The strong-coupling expansion of the lattice gauge action leads to Polyakov-loop models that effectively describe gluodynamics at low temperatures, and together with the hopping expansion of the fermion determinant provides insight into the QCD phase diagram at finite density and low temperatures, although for rather heavy quarks. At higher temperatures the strong-coupling expansion breaks down and it is expected that the interactions between Polyakov loops become non-local. Here, we therefore test how well pure SU(2) gluodynamics can be mapped onto different non-local Polyakov models with inverse Monte-Carlo methods. We take into account Polyakov loops in higher representations and gradually add interaction terms at larger distances. We are particularly interested in extrapolating the range of non-local terms in sufficiently large volumes and higher representations. We study the characteristic fall-off in strength of the non-local couplings with the interaction distance, and its dependence on the gauge coupl...

  8. A Markov chain Monte Carlo with Gibbs sampling approach to anisotropic receiver function forward modeling

    Science.gov (United States)

    Wirth, Erin A.; Long, Maureen D.; Moriarty, John C.

    2017-01-01

    Teleseismic receiver functions contain information regarding Earth structure beneath a seismic station. P-to-SV converted phases are often used to characterize crustal and upper-mantle discontinuities and isotropic velocity structures. More recently, P-to-SH converted energy has been used to interrogate the orientation of anisotropy at depth, as well as the geometry of dipping interfaces. Many studies use a trial-and-error forward modeling approach for the interpretation of receiver functions, generating synthetic receiver functions from a user-defined input model of Earth structure and amending this model until it matches major features in the actual data. While often successful, such an approach makes it impossible to explore model space in a systematic and robust manner, which is especially important given that solutions are likely non-unique. Here, we present a Markov chain Monte Carlo algorithm with Gibbs sampling for the interpretation of anisotropic receiver functions. Synthetic examples are used to test the viability of the algorithm, suggesting that it works well for models with a reasonable number of free parameters (< ~20). Additionally, the synthetic tests illustrate that certain parameters are well constrained by receiver function data, while others are subject to severe trade-offs, an important implication for studies that attempt to interpret Earth structure based on receiver function data. Finally, we apply our algorithm to receiver function data from station WCI in the central United States. We find evidence for a change in anisotropic structure at mid-lithospheric depths, consistent with previous work that used a grid search approach to model receiver function data at this station. Forward modeling of receiver functions using model space search algorithms, such as the one presented here, provides a meaningful framework for interrogating Earth structure from receiver function data.
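
    The structure of a Metropolis-within-Gibbs sampler of the sort described above can be sketched with a stand-in forward model (a straight line rather than a receiver-function synthesizer; data, priors and step sizes are invented):

      import numpy as np

      rng = np.random.default_rng(6)

      # synthetic "data" from a stand-in forward model y = a*x + b (not a receiver function)
      x = np.linspace(0.0, 10.0, 50)
      true_a, true_b, sigma = 1.3, -2.0, 0.5
      y_obs = true_a * x + true_b + rng.normal(0.0, sigma, x.size)

      def log_like(a, b):
          return -0.5 * np.sum((a * x + b - y_obs) ** 2) / sigma**2

      def gibbs_metropolis(n_iter=20000, steps=(0.05, 0.2)):
          a, b = 0.0, 0.0
          ll = log_like(a, b)
          chain = np.empty((n_iter, 2))
          for it in range(n_iter):
              # Gibbs-style sweep: update one parameter at a time with a Metropolis step
              a_try = a + rng.normal(0.0, steps[0])
              ll_try = log_like(a_try, b)
              if np.log(rng.random()) < ll_try - ll:
                  a, ll = a_try, ll_try
              b_try = b + rng.normal(0.0, steps[1])
              ll_try = log_like(a, b_try)
              if np.log(rng.random()) < ll_try - ll:
                  b, ll = b_try, ll_try
              chain[it] = a, b
          return chain

      chain = gibbs_metropolis()
      print("posterior means:", chain[5000:].mean(axis=0), "  true values:", (true_a, true_b))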

  9. Monte Carlo approach to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik

    2009-11-15

    The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)

  10. Assessment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous-energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using the simplified high temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.

  11. A hybrid Monte Carlo model for the energy response functions of X-ray photon counting detectors

    Science.gov (United States)

    Wu, Dufan; Xu, Xiaofei; Zhang, Li; Wang, Sen

    2016-09-01

    In photon counting computed tomography (CT), it is vital to know the energy response functions of the detector for noise estimation and system optimization. Empirical methods lack flexibility and Monte Carlo simulations require too much knowledge of the detector. In this paper, we propose a hybrid Monte Carlo model for the energy response functions of photon counting detectors in X-ray medical applications. GEANT4 was used to model the energy deposition of X-rays in the detector. Then numerical models were used to describe the process of charge sharing, anti-charge sharing and spectral broadening, which were too complicated to be included in the Monte Carlo model. Several free parameters were introduced in the numerical models, and they could be calibrated from experimental measurements such as X-ray fluorescence from metal elements. The method was used to model the energy response function of an XCounter Flite X1 photon counting detector. The parameters of the model were calibrated with fluorescence measurements. The model was further tested against measured spectra of a VJ X-ray source to validate its feasibility and accuracy.

  12. A hybrid Monte Carlo model for the energy response functions of X-ray photon counting detectors

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Dufan; Xu, Xiaofei [Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Zhang, Li, E-mail: zli@mail.tsinghua.edu.cn [Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Wang, Sen [Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Ministry of Education (China); Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)

    2016-09-11

    In photon counting computed tomography (CT), it is vital to know the energy response functions of the detector for noise estimation and system optimization. Empirical methods lack flexibility and Monte Carlo simulations require too much knowledge of the detector. In this paper, we propose a hybrid Monte Carlo model for the energy response functions of photon counting detectors in X-ray medical applications. GEANT4 was used to model the energy deposition of X-rays in the detector. Then numerical models were used to describe the process of charge sharing, anti-charge sharing and spectral broadening, which were too complicated to be included in the Monte Carlo model. Several free parameters were introduced in the numerical models, and they could be calibrated from experimental measurements such as X-ray fluorescence from metal elements. The method was used to model the energy response function of an XCounter Flite X1 photon counting detector. The parameters of the model were calibrated with fluorescence measurements. The model was further tested against measured spectra of a VJ X-ray source to validate its feasibility and accuracy.

  13. A Monte Carlo model of the Varian IGRT couch top for RapidArc QA.

    Science.gov (United States)

    Teke, T; Gill, B; Duzenli, C; Popescu, I A

    2011-12-21

    The objectives of this study are to evaluate the effect of couch attenuation on quality assurance (QA) results and to present a couch top model for Monte Carlo (MC) dose calculation for RapidArc treatments. The IGRT couch top is modelled in Eclipse as a thin skin of higher density material with a homogeneous fill of foam of lower density and attenuation. The IGRT couch structure consists of two longitudinal sections referred to as thick and thin. The Hounsfield Unit (HU) characterization of the couch structure was determined using a cylindrical phantom by comparing ion chamber measurements with the dose predicted by the treatment planning system (TPS). The optimal set of HU for the inside of the couch and the surface shell was found to be respectively -960 and -700 HU in agreement with Vanetti et al (2009 Phys. Med. Biol. 54 N157-66). For each plan, the final dose calculation was performed with the thin, thick and without the couch top. Dose differences up to 2.6% were observed with TPS calculated doses not including the couch and up to 3.4% with MC not including the couch and were found to be treatment specific. A MC couch top model was created based on the TPS geometrical model. The carbon fibre couch top skin was modelled using carbon graphite; the density was adjusted until good agreement with experimental data was observed, while the density of the foam inside was kept constant. The accuracy of the couch top model was evaluated by comparison with ion chamber measurements and TPS calculated dose combined with a 3D gamma analysis. Similar to the TPS case, a single graphite density can be used for both the thin and thick MC couch top models. Results showed good agreement with ion chamber measurements (within 1.2%) and with TPS (within 1%). For each plan, over 95% of the points passed the 3D gamma test.

  14. Fullrmc, a rigid body Reverse Monte Carlo modeling package enabled with machine learning and artificial intelligence.

    Science.gov (United States)

    Aoun, Bachir

    2016-05-01

    A new Reverse Monte Carlo (RMC) package "fullrmc" for atomic or rigid body and molecular, amorphous, or crystalline materials is presented. fullrmc's main purpose is to provide fully modular, fast and flexible software that is thoroughly documented, handles complex molecules, is written in a modern programming language (python, cython, C and C++ when performance is needed) and complies with modern programming practices. fullrmc's approach to solving an atomic or molecular structure is different from existing RMC algorithms and software. In a nutshell, traditional RMC methods and software randomly adjust atom positions until the whole system has the greatest consistency with a set of experimental data. In contrast, fullrmc applies smart moves endorsed with reinforcement machine learning to groups of atoms. While fullrmc allows running traditional RMC modeling, the uniqueness of this approach resides in its ability to customize the grouping of atoms in any convenient way with no additional programming effort and to apply smart and more physically meaningful moves to the defined groups of atoms. In addition, fullrmc provides a unique way, at almost no additional computational cost, to recur a group's selection, allowing the system to escape local minima by refining a group's position or by exploring the unrestricted three-dimensional space around a group, through and beyond disallowed positions and energy barriers.
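
    For orientation, a bare-bones Reverse Monte Carlo loop of the traditional kind that fullrmc generalizes might look as follows (the target histogram, move size and acceptance temperature are invented; fullrmc's grouped, machine-learning-guided moves are far richer than this single-atom version):

      import numpy as np

      rng = np.random.default_rng(7)

      N, BOX, NBINS = 50, 10.0, 40                     # atoms, box edge, histogram bins

      def pair_histogram(pos):
          # histogram of pair distances with minimum-image periodic boundaries
          d = pos[:, None, :] - pos[None, :, :]
          d -= BOX * np.round(d / BOX)
          r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(N, 1)]
          return np.histogram(r, bins=NBINS, range=(0.0, BOX / 2.0))[0].astype(float)

      # "experimental" target: histogram of a hidden reference configuration
      target = pair_histogram(rng.uniform(0.0, BOX, (N, 3)))

      pos = rng.uniform(0.0, BOX, (N, 3))              # starting model
      chi2 = ((pair_histogram(pos) - target) ** 2).sum()
      for step in range(10000):
          i = rng.integers(N)
          old = pos[i].copy()
          pos[i] = (pos[i] + rng.normal(0.0, 0.3, 3)) % BOX     # single-atom trial move
          chi2_new = ((pair_histogram(pos) - target) ** 2).sum()
          # RMC acceptance: keep improvements, occasionally keep worse fits
          if chi2_new <= chi2 or rng.random() < np.exp((chi2 - chi2_new) / 2.0):
              chi2 = chi2_new
          else:
              pos[i] = old
      print("final chi^2 against the target histogram:", round(chi2, 1))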

  15. Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.

    Science.gov (United States)

    Xin, Cao; Chongshi, Gu

    2016-01-01

    Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. A stability failure risk ratio described jointly by probability and possibility is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event, considering both the fuzziness and the randomness of the failure criterion, design parameters and measured data. A credibility distribution function is introduced as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combined with Monte Carlo simulation, a corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to the risk calculation of both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained can reflect the influence of both sorts of uncertainty, and is suitable as an index value.

  16. Monte Carlo Study of Topological Defects in the 3D Heisenberg Model

    CERN Document Server

    Holm, C; Holm, Christian; Janke, Wolfhard

    1994-01-01

    We use single-cluster Monte Carlo simulations to study the role of topological defects in the three-dimensional classical Heisenberg model on simple cubic lattices of size up to $80^3$. By applying reweighting techniques to time series generated in the vicinity of the approximate infinite volume transition point $K_c$, we obtain clear evidence that the temperature derivative of the average defect density $d\langle n \rangle/dT$ behaves qualitatively like the specific heat, i.e., both observables are finite in the infinite volume limit. This is in contrast to results by Lau and Dasgupta [Phys. Rev. B39 (1989) 7212] who extrapolated a divergent behavior of $d\langle n \rangle/dT$ at $K_c$ from simulations on lattices of size up to $16^3$. We obtain weak evidence that $d\langle n \rangle/dT$ scales with the same critical exponent as the specific heat. As a byproduct of our simulations, we obtain a very accurate estimate for the ratio $\alpha/\

  17. Monte Carlo Modeling of Minor Actinide Burning in Fissile Spallation Targets

    Science.gov (United States)

    Malyshkin, Yury; Pshenichnov, Igor; Mishustin, Igor; Greiner, Walter

    2014-06-01

    Minor actinides (MA) present a harmful part of spent nuclear fuel due to their long half-lives and high radio-toxicity. Neutrons produced in spallation targets of Accelerator Driven Systems (ADS) can be used to transmute and burn MA. Non-fissile targets are commonly considered in ADS design. However, additional neutrons from fission reactions can be used in targets made of fissile materials. We developed a Geant4-based code MCADS (Monte Carlo model for Accelerator Driven Systems) for simulating neutron production and transport in different spallation targets. MCADS is suitable for calculating spatial distributions of neutron flux and energy deposition, neutron multiplication factors and other characteristics of produced neutrons and residual nuclei. Several modifications of the Geant4 source code described in this work were made in order to simulate targets containing MA. Results of MCADS simulations are reported for several cylindrical targets made of U+Am, Am or Am2O3 including more complicated design options with a neutron booster and a reflector. Estimations of Am burning rates are given for the considered cases.

  18. Modeling uncertainty in risk assessment: an integrated approach with fuzzy set theory and Monte Carlo simulation.

    Science.gov (United States)

    Arunraj, N S; Mandal, Saptarshi; Maiti, J

    2013-06-01

    Modeling uncertainty during risk assessment is a vital component for effective decision making. Unfortunately, most risk assessment studies suffer from inadequate uncertainty analysis. The development of tools and techniques for capturing uncertainty in risk assessment is ongoing and there has been substantial growth in this respect in health risk assessment. In this study, the cross-disciplinary approaches for uncertainty analyses are identified and a modified approach suitable for industrial safety risk assessment is proposed using fuzzy set theory and Monte Carlo simulation. The proposed method is applied to a benzene extraction unit (BEU) of a chemical plant. The case study results show that the proposed method provides a better measure of uncertainty than the existing methods: unlike traditional risk analysis methods, this approach incorporates both the variability and the uncertainty of information into the risk calculation, and, instead of a single risk value, it provides an interval of risk values for a given percentile of risk. The implications of these results in terms of risk control and regulatory compliance are also discussed.
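
    One common way to combine fuzzy (epistemic) parameters with Monte Carlo sampling of aleatory variables is to propagate alpha-cut intervals through the sampled model, which yields an interval of risk values per percentile; the sketch below uses an invented failure model, membership function and numbers purely for illustration:

      import numpy as np

      rng = np.random.default_rng(8)

      def alpha_cut(tri, alpha):
          # interval of a triangular fuzzy number (low, mode, high) at membership level alpha
          low, mode, high = tri
          return low + alpha * (mode - low), high - alpha * (high - mode)

      # epistemic (fuzzy) parameter: failure-frequency multiplier (illustrative)
      freq_tri = (0.5, 1.0, 2.0)
      # aleatory variable: consequence severity, lognormally distributed (illustrative)
      severity = rng.lognormal(mean=0.0, sigma=0.8, size=20000)

      for alpha in (0.0, 0.5, 1.0):
          lo, hi = alpha_cut(freq_tri, alpha)
          # the model risk = frequency * severity is monotone in the fuzzy parameter,
          # so propagating the interval endpoints bounds the risk percentile
          risk_lo = np.percentile(lo * severity, 95)
          risk_hi = np.percentile(hi * severity, 95)
          print(f"alpha = {alpha:.1f}: 95th-percentile risk in [{risk_lo:.2f}, {risk_hi:.2f}]")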

  19. Modeling the Biophysical Effects in a Carbon Beam Delivery Line using Monte Carlo Simulation

    CERN Document Server

    Cho, Ilsung; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun

    2016-01-01

    Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion beam therapy. In this study the biological effectiveness of a carbon ion beam delivery system was investigated using Monte Carlo simulation. A carbon ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation tool kit was used to simulate carbon-beam transport into media. An incident carbon-ion beam with energy in the range between 220 MeV/u and 290 MeV/u was chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE of 10% survival in human salivary gland (HSG) cells. The RBE weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam direction. A biologically photon-equivalent Spread Out Bragg Peak (SOBP) was designed using the RBE weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the water phantom depth.

  20. Monte Carlo study of the double and super-exchange model with lattice distortion

    Energy Technology Data Exchange (ETDEWEB)

    Suarez, J R; Vallejo, E; Navarro, O [Instituto de Investigaciones en Materiales, Universidad Nacional Autonoma de Mexico, Apartado Postal 70-360, 04510 Mexico D. F. (Mexico); Avignon, M, E-mail: jrsuarez@iim.unam.m [Institut Neel, Centre National de la Recherche Scientifique (CNRS) and Universite Joseph Fourier, BP 166, 38042 Grenoble Cedex 9 (France)

    2009-05-01

    In this work a magneto-elastic phase transition was obtained in a linear chain due to the interplay between magnetism and lattice distortion in a double- and super-exchange model. A linear chain consisting of localized classical spins interacting with itinerant electrons is considered. Due to the double-exchange interaction, localized spins tend to align ferromagnetically. This ferromagnetic tendency is expected to be frustrated by anti-ferromagnetic super-exchange interactions between neighboring localized spins. Additionally, the lattice parameter is allowed to undergo small changes, which contribute harmonically to the energy of the system. The phase diagram is obtained as a function of the electron density and the super-exchange interaction using a Monte Carlo minimization. At low super-exchange interaction energy, a phase transition occurs between an electron-full ferromagnetic distorted phase and an electron-empty anti-ferromagnetic undistorted phase. In this case all electrons and lattice distortions were found within the ferromagnetic domain. For high super-exchange interaction energy, a phase transition was found between a two-site distorted periodic arrangement of independent magnetic polarons ordered anti-ferromagnetically and the electron-empty anti-ferromagnetic undistorted phase. For this high interaction energy, Wigner crystallization, lattice distortion and charge distribution inside two-site polarons were obtained.
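
    A drastically simplified stand-in for such a Monte Carlo minimization is sketched below: the itinerant electrons are replaced by an effective per-bond double-exchange energy of the de Gennes form, -t|cos(Δθ/2)|, competing with a super-exchange term J cos(Δθ), and the chain is annealed with single-spin Metropolis moves (lattice distortion and the electron-density dependence of the abstract are omitted; all parameters are illustrative):

      import numpy as np

      rng = np.random.default_rng(9)

      N, t, J = 64, 1.0, 0.5        # sites, double-exchange amplitude, super-exchange coupling

      def energy(theta):
          d = np.diff(theta)
          # per bond: double exchange favours parallel spins, super-exchange antiparallel ones
          return np.sum(-t * np.abs(np.cos(d / 2.0)) + J * np.cos(d))

      def anneal(sweeps=2000):
          theta = rng.uniform(0.0, 2.0 * np.pi, N)
          e = energy(theta)
          for T in np.geomspace(1.0, 0.01, sweeps):    # simulated-annealing schedule
              for _ in range(N):
                  i = rng.integers(N)
                  old = theta[i]
                  theta[i] = old + rng.normal(0.0, 0.5)
                  e_new = energy(theta)
                  if e_new < e or rng.random() < np.exp((e - e_new) / T):
                      e = e_new
                  else:
                      theta[i] = old
          return theta

      theta = anneal()
      d = np.diff(theta)
      canting = np.degrees(np.arccos(np.clip(np.cos(d), -1.0, 1.0)))   # angle between neighbours
      print(f"mean canting angle {canting.mean():.0f} deg,"
            f" de Gennes prediction {np.degrees(2.0 * np.arccos(min(1.0, t / (4.0 * J)))):.0f} deg")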

  1. Modeling the biophysical effects in a carbon beam delivery line by using Monte Carlo simulations

    Science.gov (United States)

    Cho, Ilsung; Yoo, SeungHoon; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun

    2016-09-01

    The Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion-beam therapy. In this study, the biological effectiveness of a carbon-ion beam delivery system was investigated using Monte Carlo simulations. A carbon-ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation tool kit was used to simulate carbon-ion beam transport into media. An incident energy carbon-ion beam with energy in the range between 220 MeV/u and 290 MeV/u was chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE of 10% survival in human salivary-gland (HSG) cells. The RBE weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam's direction. A biologically photon-equivalent Spread Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the depth in the water phantom.

  2. Monte-Carlo event generation for a two-Higgs-doublet model with maximal CP symmetry

    CERN Document Server

    Brehmer, Johann

    2012-01-01

    Recently a two-Higgs-doublet model with maximal symmetry under generalised CP transformations, the MCPM, has been proposed. The theory features a unique fermion mass spectrum which, although not describing nature precisely, provides a good approximation. It also predicts the existence of five Higgs bosons with a particular signature. In this thesis I implemented the MCPM into the Monte-Carlo event generation package MadGraph, allowing the simulation of any MCPM tree-level process. The generated events are in a standardised format and can be used for further analysis with tools such as PYTHIA or GEANT, eventually leading to the comparison with experimental data and the exclusion or discovery of the theory. The implementation was successfully validated in different ways. It was then used for a first comparison of the MCPM signal events with the SM background and previous searches for new physics, hinting that the data expected at the LHC in the next years might provide exclusion limits or show signatures of thi...

  3. Monte Carlo study of half-magnetization plateau and magnetic phase diagram in pyrochlore antiferromagnetic Heisenberg model

    OpenAIRE

    Motome, Yukitoshi; Penc, Karlo; Shannon, Nic

    2005-01-01

    The antiferromagnetic Heisenberg model on a pyrochlore lattice under external magnetic field is studied by classical Monte Carlo simulation. The model includes bilinear and biquadratic interactions; the latter effectively describes the coupling to lattice distortions. The magnetization process shows a half-magnetization plateau at low temperatures, accompanied with strong suppression of the magnetic susceptibility. Temperature dependence of the plateau behavior is clarified. Finite-temperatur...

  4. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Science.gov (United States)

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D. Jr (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.
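
    The strategy of Monte Carlo sampling followed by a response-surface fit can be sketched as follows, with the dynamic growth model replaced by an invented algebraic stand-in (parameter names, ranges and coefficients are illustrative only):

      import numpy as np

      rng = np.random.default_rng(10)

      def plant_model(p):
          # stand-in for the dynamic growth model: final biomass from three parameters
          photo, resp, alloc = p
          return 120.0 * photo * alloc - 40.0 * resp + 15.0 * photo * resp

      # Monte Carlo sample of the parameters within +/- 20% of nominal values
      nominal = np.array([1.0, 0.5, 0.7])
      X = nominal * rng.uniform(0.8, 1.2, size=(500, 3))
      y = np.apply_along_axis(plant_model, 1, X)

      def quadratic_features(X):
          # design matrix for a full quadratic response surface: 1, x_i, x_i*x_j
          cols = [np.ones(len(X))]
          cols += [X[:, i] for i in range(3)]
          cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
          return np.column_stack(cols)

      beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
      print("response-surface coefficients:", np.round(beta, 1))

      # crude screening of parameter sensitivities from the same Monte Carlo sample
      for name, i in zip(("photosynthesis", "respiration", "allocation"), range(3)):
          print(f"{name:15s} corr(parameter, output) = {np.corrcoef(X[:, i], y)[0, 1]:+.2f}")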

  5. A nucleation and growth model of silicon nanoparticles produced by pulsed laser deposition via Monte Carlo simulation

    Science.gov (United States)

    Wang, Yinglong; Qin, Aili; Chu, Lizhi; Deng, Zechao; Ding, Xuecheng; Guan, Li

    2017-02-01

    We simulated the nucleation and growth of Si nanoparticles produced by pulsed laser deposition using the Monte Carlo method at the molecular (microscopic) level. In the model, the mechanism and thermodynamic conditions of nucleation and growth of Si nanoparticles were described. For a target-substrate configuration at real physical scale, the model was used to analyze the average size distribution of Si nanoparticles in ambient argon gas, and the calculated results are in agreement with the experimental results.

  6. Monte-Carlo modeling of the central carbon metabolism of Lactococcus lactis: insights into metabolic regulation.

    Science.gov (United States)

    Murabito, Ettore; Verma, Malkhey; Bekker, Martijn; Bellomo, Domenico; Westerhoff, Hans V; Teusink, Bas; Steuer, Ralf

    2014-01-01

    Metabolic pathways are complex dynamic systems whose response to perturbations and environmental challenges are governed by multiple interdependencies between enzyme properties, reactions rates, and substrate levels. Understanding the dynamics arising from such a network can be greatly enhanced by the construction of a computational model that embodies the properties of the respective system. Such models aim to incorporate mechanistic details of cellular interactions to mimic the temporal behavior of the biochemical reaction system and usually require substantial knowledge of kinetic parameters to allow meaningful conclusions. Several approaches have been suggested to overcome the severe data requirements of kinetic modeling, including the use of approximative kinetics and Monte-Carlo sampling of reaction parameters. In this work, we employ a probabilistic approach to study the response of a complex metabolic system, the central metabolism of the lactic acid bacterium Lactococcus lactis, subject to perturbations and brief periods of starvation. Supplementing existing methodologies, we show that it is possible to acquire a detailed understanding of the control properties of a corresponding metabolic pathway model that is directly based on experimental observations. In particular, we delineate the role of enzymatic regulation to maintain metabolic stability and metabolic recovery after periods of starvation. It is shown that the feedforward activation of the pyruvate kinase by fructose-1,6-bisphosphate qualitatively alters the bifurcation structure of the corresponding pathway model, indicating a crucial role of enzymatic regulation to prevent metabolic collapse for low external concentrations of glucose. We argue that similar probabilistic methodologies will help our understanding of dynamic properties of small-, medium- and large-scale metabolic networks models.

  7. Approaching Chemical Accuracy with Quantum Monte Carlo

    OpenAIRE

    Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.

    2012-01-01

    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreem...

  8. Equilibrium Degree Determination Model Based on the Monte Carlo Method

    Institute of Scientific and Technical Information of China (English)

    朱颖; 程纪品

    2012-01-01

    The Monte Carlo method, also known as the statistical simulation method, is a very important class of numerical methods guided by the theory of probability and statistics; it solves a wide range of computational problems by using random numbers (or, more commonly, pseudo-random numbers). This paper attempts to build an equilibrium-degree model for police service platforms and to solve it with the Monte Carlo method; the experimental results can satisfy general application requirements.

  9. Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy

    Science.gov (United States)

    Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James

    2012-03-01

    Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). The skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis), as well as laterally asymmetric features (e.g. melanocytic invasion), were modeled in an inhomogeneous Monte Carlo model.

  10. Accelerated Monte Carlo by Embedded Cluster Dynamics

    Science.gov (United States)

    Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.

    1991-07-01

    We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and Wolff are summarized and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations are presented for d=2, d=3 and mean field theory lattices.
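
    The single-cluster (Wolff) update that underlies such embedded-cluster schemes can be sketched for the plain 2D Ising model as follows (lattice size and temperature are illustrative; the embedding of Ising variables into O(N) spins discussed above is not shown):

      import numpy as np

      rng = np.random.default_rng(11)

      L, beta = 32, 0.44                   # lattice size, inverse temperature (near T_c for J = 1)
      spin = rng.choice([-1, 1], size=(L, L))
      p_add = 1.0 - np.exp(-2.0 * beta)    # Wolff bond-activation probability

      def wolff_step(spin):
          # grow a single cluster of aligned spins from a random seed and flip it
          i, j = rng.integers(L, size=2)
          seed = spin[i, j]
          cluster = {(i, j)}
          frontier = [(i, j)]
          while frontier:
              x, y = frontier.pop()
              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nx, ny = (x + dx) % L, (y + dy) % L
                  if (nx, ny) not in cluster and spin[nx, ny] == seed and rng.random() < p_add:
                      cluster.add((nx, ny))
                      frontier.append((nx, ny))
          for x, y in cluster:
              spin[x, y] = -seed
          return len(cluster)

      sizes = [wolff_step(spin) for _ in range(2000)]
      print("mean cluster size:", round(np.mean(sizes), 1),
            "  |magnetisation|:", round(abs(spin.mean()), 3))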

  11. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    Science.gov (United States)

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells at the scale of 0.4×0.4×2.0 mm^3 per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by utilizing the lattice function repeatedly. To verify the performance of the calculation with the modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation.

  12. Hybrid Monte Carlo and continuum modeling of electrolytes with concentration-induced dielectric variations

    Science.gov (United States)

    Guan, Xiaofei; Ma, Manman; Gan, Zecheng; Xu, Zhenli; Li, Bo

    2016-11-01

    The distribution of ions near a charged surface is an important quantity in many biological and material processes, and has therefore been investigated intensively. However, few theoretical and simulation approaches have included the influence of concentration-induced variations in the local dielectric permittivity of an underlying electrolyte solution. Such local variations have long been observed and known to affect the properties of ionic solution in the bulk and around the charged surface. We propose a hybrid computational model that combines Monte Carlo simulations with continuum electrostatic modeling to investigate such properties. A key component in our hybrid model is a semianalytical formula for the ion-ion interaction energy in a dielectrically inhomogeneous environment. This formula is obtained by solving Poisson's equation for the Green's function with an ionic-concentration-dependent dielectric permittivity, using a harmonic interpolation method and spherical harmonic series. We also construct a self-consistent continuum model of electrostatics to describe the effect of ionic-concentration-dependent dielectric permittivity and the resulting self-energy contribution. With extensive numerical simulations, we verify the convergence of our hybrid simulation scheme, show the qualitatively different structures of ionic distribution due to the concentration-induced dielectric variations, and compare our simulation results with the self-consistent continuum model. In particular, we study the differences between weakly and strongly charged surfaces and multivalencies of counterions. Our hybrid simulations confirm in particular the depletion of ionic concentrations near a charged surface and also capture charge inversion. We discuss several issues and possible further improvement of our approach for simulations of large charged systems.

  13. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
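
    As a rough illustration of the Monte Carlo step described above, the sketch below draws correlated systematic errors and independent random errors for two hypothetical quantities of interest, estimates the covariance matrix of the comparison error, and checks an approximate 95% constant-probability contour. All standard deviations and the two-quantity setup are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Monte Carlo estimate of the covariance matrix of the
# comparison error E = (simulation) - (experiment) for two quantities of
# interest, mixing systematic (fully correlated) and random uncertainties.
# The standard deviations below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

sys_sd = np.array([0.8, 0.5])    # systematic (bias-like) uncertainties
rnd_sd = np.array([0.3, 0.4])    # random (repeatability) uncertainties

# One systematic draw per trial is shared by both quantities (full correlation),
# while the random errors are drawn independently.
systematic = rng.normal(0.0, 1.0, size=(n_trials, 1)) * sys_sd
random_err = rng.normal(0.0, 1.0, size=(n_trials, 2)) * rnd_sd
comparison_error = systematic + random_err

cov = np.cov(comparison_error, rowvar=False)   # MC estimate of the covariance matrix
print("covariance matrix:\n", cov)

# Approximate 95% constant-probability contour: the ellipse
# e^T cov^{-1} e <= chi2_{2, 0.95} ~= 5.991 should contain ~95% of the samples.
chi2_95 = 5.991
inside = np.einsum('ij,jk,ik->i', comparison_error,
                   np.linalg.inv(cov), comparison_error) <= chi2_95
print("fraction of samples inside 95% contour:", inside.mean())
```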

  14. Modelling heterotachy in phylogenetic inference by reversible-jump Markov chain Monte Carlo.

    Science.gov (United States)

    Pagel, Mark; Meade, Andrew

    2008-12-27

    The rate at which a given site in a gene sequence alignment evolves over time may vary. This phenomenon--known as heterotachy--can bias or distort phylogenetic trees inferred from models of sequence evolution that assume rates of evolution are constant. Here, we describe a phylogenetic mixture model designed to accommodate heterotachy. The method sums the likelihood of the data at each site over more than one set of branch lengths on the same tree topology. A branch-length set that is best for one site may differ from the branch-length set that is best for some other site, thereby allowing different sites to have different rates of change throughout the tree. Because rate variation may not be present in all branches, we use a reversible-jump Markov chain Monte Carlo algorithm to identify those branches in which reliable amounts of heterotachy occur. We implement the method in combination with our 'pattern-heterogeneity' mixture model, applying it to simulated data and five published datasets. We find that complex evolutionary signals of heterotachy are routinely present over and above variation in the rate or pattern of evolution across sites, that the reversible-jump method requires far fewer parameters than conventional mixture models to describe it, and serves to identify the regions of the tree in which heterotachy is most pronounced. The reversible-jump procedure also removes the need for a posteriori tests of 'significance' such as the Akaike or Bayesian information criterion tests, or Bayes factors. Heterotachy has important consequences for the correct reconstruction of phylogenies as well as for tests of hypotheses that rely on accurate branch-length information. These include molecular clocks, analyses of tempo and mode of evolution, comparative studies and ancestral state reconstruction. The model is available from the authors' website, and can be used for the analysis of both nucleotide and morphological data.

  15. Modeling of continuous free-radical butadiene-styrene copolymerization process by the Monte Carlo method

    Directory of Open Access Journals (Sweden)

    T. A. Mikhailova

    2016-01-01

    Full Text Available In this paper, an algorithm for modeling the continuous low-temperature free-radical butadiene-styrene copolymerization process in emulsion, based on the Monte Carlo method, is proposed. This process is the basis of the industrial production of butadiene-styrene synthetic rubber, the most widespread general-purpose large-capacity rubber. The algorithm is built around simulating the growth of each macromolecule of the forming copolymer and tracking the processes it undergoes. The modeling takes into account the residence-time distribution of particles in the system, which makes it possible to study the process as it proceeds in a battery of series-connected polymerization reactors, each treated as a continuous stirred-tank reactor. Since the process is continuous, the continuous addition of fresh portions of the reaction mixture to the first reactor of the battery is considered. The constructed model allows one to study the molecular-weight and viscosity characteristics of the copolymerization product, to predict the mass content of butadiene and styrene in the copolymer, and to calculate the molecular-weight distribution of the product at any moment of the process. Computational experiments were used to analyze how the operating mode of the process, including the regulator introduced during operation, influences the characteristics of the formed butadiene-styrene copolymer. Because the process involves monomers of two types, the model also allows one to study the compositional heterogeneity of the product, i.e., to calculate the composition distribution and the distribution of macromolecules by size and structure. A software tool based on the proposed algorithm was created that tracks changes in the characteristics of the resulting product over time.
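
    A minimal sketch of the chain-growth idea described above is given below: individual copolymer chains are grown unit by unit with terminal-model (Mayo-Lewis) addition probabilities, and the composition and average chain length are tallied. The reactivity ratios, feed fractions, and chain-length distribution are illustrative assumptions and do not reproduce the emulsion kinetics or the reactor-battery residence-time treatment of the paper.

```python
# Hedged sketch: Monte Carlo growth of individual copolymer chains using a
# terminal (Mayo-Lewis) model. Reactivity ratios, monomer feed fractions and
# the chain-length distribution are illustrative placeholders, not the
# parameters of the butadiene-styrene process in the paper.
import random

r1, r2 = 1.4, 0.5          # assumed reactivity ratios (monomer 1 = butadiene)
f1 = 0.7                   # mole fraction of monomer 1 in the feed
f2 = 1.0 - f1

def grow_chain(mean_length=500):
    """Grow one chain; return its sequence of monomer units (1s and 2s)."""
    length = max(2, int(random.expovariate(1.0 / mean_length)))
    chain = [1 if random.random() < f1 else 2]
    for _ in range(length - 1):
        last = chain[-1]
        # Terminal-model probability that the next added unit is monomer 1.
        if last == 1:
            p1 = r1 * f1 / (r1 * f1 + f2)
        else:
            p1 = f1 / (f1 + r2 * f2)
        chain.append(1 if random.random() < p1 else 2)
    return chain

chains = [grow_chain() for _ in range(2000)]
n1 = sum(c.count(1) for c in chains)
n_total = sum(len(c) for c in chains)
print("average fraction of monomer 1 in copolymer:", n1 / n_total)
print("number-average chain length:", n_total / len(chains))
```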

  16. Monte Carlo simulation model for economic evaluation of rubble mound breakwater protection in Harbors

    Institute of Scientific and Technical Information of China (English)

    Richard M. Males; Jeffrey A. Melby

    2011-01-01

    The US Army Corps of Engineers has a mission to conduct a wide array of programs in the arenas of water resources, including coastal protection. Coastal projects must be evaluated according to sound economic principles, and considerations of risk assessment and sea level change must be included in the analysis. Breakwaters are typically nearshore structures designed to reduce wave action in the lee of the structure, resulting in calmer waters within the protected area, with attendant benefits in terms of usability by navigation interests, shoreline protection, reduction of wave runup and onshore flooding, and protection of navigation channels from sedimentation and wave action. A common method of breakwater construction is the rubble mound breakwater, constructed in a trapezoidal cross section with gradually increasing stone sizes from the core out. Rubble mound breakwaters are subject to degradation from storms, particularly for antiquated designs with under-sized stones insufficient to protect against intense wave energy. Storm waves dislodge the stones, resulting in lowering of crest height and associated protective capability for wave reduction. This behavior happens over a long period of time, so a lifecycle model (that can analyze the damage progression over a period of years) is appropriate. Because storms are highly variable, a model that can support risk analysis is also needed. Economic impacts are determined by the nature of the wave climate in the protected area, and by the nature of the protected assets. Monte Carlo simulation (MCS) modeling that incorporates engineering and economic impacts is a worthwhile method for handling the many complexities involved in real world problems. The Corps has developed and utilized a number of MCS models to compare project alternatives in terms of their costs and benefits. This paper describes one such model, Coastal Structure simulation (CSsim), that has been developed specifically for planning level analysis of breakwaters.
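
    The sketch below illustrates the life-cycle Monte Carlo idea in a highly simplified form: for each realization, yearly storm counts are drawn, each storm removes some crest height, and damages accumulate as protection is lost. The distributions, damage rule, and costs are illustrative assumptions and are not taken from CSsim.

```python
# Hedged sketch of a life-cycle Monte Carlo simulation of rubble-mound
# breakwater degradation: each year a random number of storms removes armour,
# the crest elevation drops, and economic damages grow as protection is lost.
# All distributions, rates and costs are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_realizations, years = 5000, 50
initial_crest, min_crest = 6.0, 2.0          # metres

def annual_crest_loss(n_storms, rng):
    # Each storm removes a lognormally distributed amount of crest height.
    return rng.lognormal(mean=-2.5, sigma=0.8, size=n_storms).sum()

total_damage = np.zeros(n_realizations)
for i in range(n_realizations):
    crest = initial_crest
    for _ in range(years):
        storms = rng.poisson(1.5)            # storms per year
        crest = max(min_crest, crest - annual_crest_loss(storms, rng))
        # Damage cost rises as the crest (and its wave attenuation) is lost.
        exposure = (initial_crest - crest) / (initial_crest - min_crest)
        total_damage[i] += 1.0e5 * exposure  # illustrative cost per year

print("expected life-cycle damage: %.3g" % total_damage.mean())
print("90th percentile:            %.3g" % np.percentile(total_damage, 90))
```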

  17. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    Energy Technology Data Exchange (ETDEWEB)

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  18. An enhanced Monte Carlo outlier detection method.

    Science.gov (United States)

    Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi

    2015-09-30

    Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method by establishing cross-prediction models based on determinate normal samples and analyzing the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that this method outperformed Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the value of validation by Kovats retention indices and the root mean square error of prediction decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
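
    A minimal sketch of Monte Carlo cross-prediction outlier detection on synthetic data is given below: many models are fitted on random training subsets, out-of-sample prediction errors are collected per sample, and samples with unusually large mean errors are flagged. The linear model, data, and flagging threshold are illustrative assumptions, not the enhanced procedure of the paper.

```python
# Hedged sketch of Monte Carlo cross-prediction outlier detection: fit many
# linear models on random subsets, record each sample's out-of-sample
# prediction errors, and flag samples whose error distribution is unusual.
# The synthetic data and thresholds are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 3
X = rng.normal(size=(n, p))
beta = np.array([1.5, -2.0, 0.5])
y = X @ beta + rng.normal(scale=0.2, size=n)
y[:3] += 4.0                                   # plant three outliers

n_models, train_frac = 500, 0.7
errors = [[] for _ in range(n)]
for _ in range(n_models):
    train = rng.choice(n, size=int(train_frac * n), replace=False)
    test = np.setdiff1d(np.arange(n), train)
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    for i in test:
        errors[i].append(y[i] - X[i] @ coef)

mean_err = np.array([np.mean(np.abs(e)) for e in errors])
# Robust threshold: median plus three median absolute deviations.
mad = np.median(np.abs(mean_err - np.median(mean_err)))
threshold = np.median(mean_err) + 3 * mad
print("flagged outliers:", np.where(mean_err > threshold)[0])
```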

  19. Direct Monte Carlo and multifluid modeling of the circumnuclear dust coma. Spherical grain dynamics revisited

    Science.gov (United States)

    Crifo, J.-F.; Loukianov, G. A.; Rodionov, A. V.; Zakharov, V. V.

    2005-07-01

    This paper describes the first computations of dust distributions in the vicinity of an active cometary nucleus, using a multidimensional Direct Simulation Monte Carlo Method (DSMC). The physical model is simplistic: spherical grains of a broad range of sizes are liberated by H2O sublimation from a selection of nonrotating sunlit spherical nuclei, and subjected to the nucleus gravity, the gas drag, and the solar radiation pressure. The results are compared to those obtained by the previously described Dust Multi-Fluid Method (DMF) and demonstrate an excellent agreement in the regions where the DMF is usable. Most importantly, the DSMC allows the discovery of hitherto unsuspected dust coma properties in those cases which cannot be treated by the DMF. This leads to a thorough reconsideration of the properties of the near-nucleus dust dynamics. In particular, the results show that (1) none of the three forces considered here can be neglected a priori, in particular not the radiation pressure; (2) hitherto unsuspected new families of grain trajectories exist, for instance trajectories leading from the nightside surface to the dayside coma; (3) a wealth of ballistic-like trajectories leading from one point of the surface to another point exists; on the dayside, such trajectories lead to the formation of "mini-volcanoes." The present model and results are discussed carefully. It is shown that (1) the neglected forces (inertia associated with nucleus rotation, solar tidal force) are, in general, not negligible everywhere, and (2) when allowing for these additional forces, a time-dependent model will, in general, have to be used. The future steps of development of the model are outlined.

  20. Simulating Photon Scattering Effects in Structurally Detailed Ventricular Models Using a Monte Carlo Approach

    Directory of Open Access Journals (Sweden)

    Martin J Bishop

    2014-09-01

    Full Text Available Light scattering during optical imaging of electrical activation within the heart is known to significantly distort the optically-recorded action potential (AP) upstroke, as well as affecting the magnitude of the measured response of ventricular tissue to strong electric shocks. Modelling approaches based on the photon diffusion equation have recently been instrumental in quantifying and helping to understand the origin of the resulting distortion. However, they are unable to faithfully represent regions of non-scattering media, such as small cavities within the myocardium which are filled with perfusate during experiments. Stochastic Monte Carlo (MC) approaches allow simulation and tracking of individual photon 'packets' as they propagate through tissue with differing scattering properties. Here, we present a novel application of the MC method of photon scattering simulation, applied for the first time to the simulation of cardiac optical mapping signals within unstructured, tetrahedral, finite element computational ventricular models. The method faithfully allows simulation of optical signals over highly-detailed, anatomically-complex MR-based models, including representations of fine-scale anatomy and intramural cavities. We show that the optical action potential upstroke is more prolonged close to large subepicardial vessels than further away from vessels, at times having a distinct 'humped' morphology. Furthermore, we uncover a novel mechanism by which photon scattering effects around vessels and cavities interact with 'virtual-electrode' regions of strongly de-/hyper-polarised tissue surrounding cavities during shocks, significantly reducing the apparent optically-measured epicardial polarisation. We therefore demonstrate the importance of this novel optical mapping simulation approach along with highly anatomically-detailed models to fully investigate electrophysiological phenomena driven by fine-scale structural heterogeneity.
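
    The sketch below shows the core of a photon-packet Monte Carlo in a drastically simplified setting (a homogeneous slab with isotropic scattering and a one-dimensional direction cosine), rather than the tetrahedral finite element meshes of the paper; the optical coefficients and slab thickness are illustrative assumptions.

```python
# Hedged sketch of Monte Carlo photon-packet transport in a homogeneous,
# isotropically scattering slab: packets take exponentially distributed steps,
# lose weight to absorption, and are tallied when they escape the top surface.
# Optical properties and slab thickness are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(7)
mu_a, mu_s = 0.5, 10.0          # absorption / scattering coefficients (1/mm)
mu_t = mu_a + mu_s
thickness = 5.0                  # slab thickness, mm
n_packets = 5000

reflected_weight = 0.0
for _ in range(n_packets):
    z, w = 0.0, 1.0
    cos_t = 1.0                  # launched straight down into the tissue
    while w > 1e-3:
        step = -np.log(rng.random()) / mu_t          # free path length
        z += cos_t * step
        if z < 0.0:              # escaped back through the illuminated surface
            reflected_weight += w
            break
        if z > thickness:        # transmitted out of the slab
            break
        w *= mu_s / mu_t         # survival weighting for absorption
        cos_t = 2.0 * rng.random() - 1.0             # isotropic scattering

print("diffuse reflectance estimate:", reflected_weight / n_packets)
```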

  1. Inverse Modeling Using Markov Chain Monte Carlo Aided by Adaptive Stochastic Collocation Method with Transformation

    Science.gov (United States)

    Zhang, D.; Liao, Q.

    2016-12-01

    The Bayesian inference provides a convenient framework to solve statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since the MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the different importance of the parameters, under the condition of high random dimensions in the stochastic space. Furthermore, in the case of low regularity, such as a discontinuous or unsmooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate. The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of
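
    The sketch below illustrates the surrogate-accelerated MCMC idea in one dimension: a cheap polynomial surrogate is fitted to a handful of forward-model evaluations and then used inside a random-walk Metropolis sampler. The forward model, prior, noise level, and the use of a least-squares polynomial in place of sparse-grid stochastic collocation are all illustrative assumptions.

```python
# Hedged sketch: replace an expensive forward model with a polynomial
# surrogate, then run Metropolis-Hastings on the surrogate likelihood.
# The forward model, prior, and noise level are illustrative placeholders
# (a polynomial least-squares fit stands in for sparse-grid collocation).
import numpy as np

rng = np.random.default_rng(3)

def forward(m):                       # stand-in for an expensive simulator
    return np.sin(2.0 * m) + 0.5 * m

# Build the surrogate from a small number of "expensive" evaluations.
nodes = np.linspace(-2, 2, 15)
coeffs = np.polyfit(nodes, forward(nodes), deg=6)
surrogate = lambda m: np.polyval(coeffs, m)

d_obs, sigma = forward(0.8) + 0.05, 0.1   # synthetic observation and noise

def log_post(m, model):
    return -0.5 * ((d_obs - model(m)) / sigma) ** 2 - 0.5 * (m / 2.0) ** 2

# Random-walk Metropolis using the cheap surrogate for every likelihood call.
m, samples = 0.0, []
for _ in range(20_000):
    prop = m + 0.3 * rng.normal()
    if np.log(rng.random()) < log_post(prop, surrogate) - log_post(m, surrogate):
        m = prop
    samples.append(m)
print("posterior mean (surrogate MCMC):", np.mean(samples[2000:]))
```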

  2. Monte Carlo calculation model for heat radiation of inclined cylindrical flames and its application

    Science.gov (United States)

    Chang, Zhangyu; Ji, Jingwei; Huang, Yuankai; Wang, Zhiyi; Li, Qingjie

    2017-02-01

    Based on the Monte Carlo method, a calculation model and its C++ calculation program for radiant heat transfer from an inclined cylindrical flame are proposed. In this model, the total radiation energy of the inclined cylindrical flame is distributed equally among a certain number of energy beams, which are emitted randomly from the flame surface. The incident heat flux on a surface is calculated by counting the number of energy beams which reach that surface. The paper mainly studies the geometrical criterion for deciding whether an energy beam emitted by the inclined cylindrical flame is validly received by another surface. Compared to Mudan's formula results for a straight cylinder or a cylinder with a 30° tilt angle, the calculated view factors range from 0.0043 to 0.2742 and agree well with Mudan's results. The changing trend and values of the incident heat fluxes computed by the model are consistent with the experimental data measured by Rangwala et al. As a case study, the incident heat fluxes on both the side and the top surface of a gasoline tank are calculated by the model. The heat radiation comes from an inclined cylindrical flame generated by another 1000 m³ gasoline tank 4.6 m away from it. The cone angle of the flame to the adjacent oil tank is 45° and the polar angle is 0°. The top surface and the side surface of the tank are divided into 960 and 5760 grids during the calculation, respectively. The maximum incident heat flux is 39.64 kW/m² on the side surface and 51.31 kW/m² on the top surface. Distributions of the incident heat flux on the surface of the oil tank and on the ground around the fire tank are obtained as well.
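
    The sketch below reproduces only the counting idea behind the model: many cosine-weighted energy beams are emitted from a small patch and the view factor is estimated as the fraction that hits a receiving surface. The emitter/receiver geometry is a deliberately simple parallel-plate arrangement, not the inclined flame cylinder and tank of the paper.

```python
# Hedged sketch of the Monte Carlo view-factor idea: emit many random,
# cosine-weighted "energy beams" from a small emitting patch and count the
# fraction that hit a receiving surface. Here the emitter is a differential
# patch facing +z and the receiver is a parallel unit square one height unit
# away; the flame-cylinder and tank geometry of the paper is not reproduced.
import numpy as np

rng = np.random.default_rng(11)
n_beams = 200_000
z_target, half_size = 1.0, 0.5        # receiver plane height and half-width

# Cosine-weighted (Lambertian) emission directions from the patch at the origin.
u, phi = rng.random(n_beams), 2.0 * np.pi * rng.random(n_beams)
cos_t = np.sqrt(u)
sin_t = np.sqrt(1.0 - u)
dx, dy, dz = sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t

# Intersect each beam with the receiver plane z = z_target and test the hit.
t = z_target / dz
hit_x, hit_y = dx * t, dy * t
hits = (np.abs(hit_x) <= half_size) & (np.abs(hit_y) <= half_size)

print("Monte Carlo view factor estimate:", hits.mean())
```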

  3. QUANTUM MONTE-CARLO SIMULATIONS - ALGORITHMS, LIMITATIONS AND APPLICATIONS

    NARCIS (Netherlands)

    DERAEDT, H

    1992-01-01

    A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown

  4. Quantum Monte Carlo Simulations : Algorithms, Limitations and Applications

    NARCIS (Netherlands)

    Raedt, H. De

    1992-01-01

    A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown

  5. Further experience in Bayesian analysis using Monte Carlo Integration

    NARCIS (Netherlands)

    H.K. van Dijk (Herman); T. Kloek (Teun)

    1980-01-01

    An earlier paper [Kloek and Van Dijk (1978)] is extended in three ways. First, Monte Carlo integration is performed in a nine-dimensional parameter space of Klein's model I [Klein (1950)]. Second, Monte Carlo is used as a tool for the elicitation of a uniform prior on a finite region by
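
    A minimal sketch of Monte Carlo integration for Bayesian analysis is given below: posterior moments are computed as importance-weighted averages of draws from a convenient importance density. The one-parameter toy likelihood and prior are illustrative assumptions and have nothing to do with Klein's model I.

```python
# Hedged sketch of Monte Carlo integration for Bayesian analysis: posterior
# moments of a parameter are estimated as importance-weighted averages of
# draws from a convenient importance density. The toy likelihood and prior
# are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(loc=1.2, scale=1.0, size=20)      # synthetic observations

def unnormalized_posterior(theta):
    loglik = -0.5 * np.sum((data[:, None] - theta) ** 2, axis=0)
    logprior = -0.5 * (theta / 5.0) ** 2             # loose normal prior
    return np.exp(loglik + logprior)

# Importance density: a normal centred roughly on the sample mean.
n_draws = 100_000
mu_q, sd_q = data.mean(), 0.5
theta = rng.normal(mu_q, sd_q, size=n_draws)
q = np.exp(-0.5 * ((theta - mu_q) / sd_q) ** 2) / (sd_q * np.sqrt(2 * np.pi))

w = unnormalized_posterior(theta) / q                # importance weights
post_mean = np.sum(w * theta) / np.sum(w)
post_var = np.sum(w * (theta - post_mean) ** 2) / np.sum(w)
print("posterior mean ~ %.3f, posterior sd ~ %.3f" % (post_mean, np.sqrt(post_var)))
```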

  6. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    OpenAIRE

    Kleiss, R. H. P.; Lazopoulos, A.

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
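
    The sketch below contrasts plain Monte Carlo with quasi-Monte Carlo on a toy two-dimensional integral whose exact value is known, using a hand-rolled Halton sequence as the low-discrepancy point set. The integrand and sample size are illustrative assumptions; the error-estimation construction advocated in the paper is not reproduced.

```python
# Hedged sketch contrasting plain Monte Carlo with quasi-Monte Carlo on a
# simple 2-D integral; the Halton sequence is generated directly so no extra
# libraries are required. The integrand is an illustrative placeholder with
# known value (the integral of x*y over the unit square is 0.25).
import numpy as np

def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

n = 4096
f = lambda x, y: x * y                       # exact integral: 0.25

rng = np.random.default_rng(9)
xr, yr = rng.random(n), rng.random(n)
mc_estimate = f(xr, yr).mean()

xq = np.array([halton(i + 1, 2) for i in range(n)])
yq = np.array([halton(i + 1, 3) for i in range(n)])
qmc_estimate = f(xq, yq).mean()

print("plain MC error :", abs(mc_estimate - 0.25))
print("quasi-MC error :", abs(qmc_estimate - 0.25))
```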

  7. Monte Carlo Technique Used to Model the Degradation of Internal Spacecraft Surfaces by Atomic Oxygen

    Science.gov (United States)

    Banks, Bruce A.; Miller, Sharon K.

    2004-01-01

    Atomic oxygen is one of the predominant constituents of Earth's upper atmosphere. It is created by the photodissociation of molecular oxygen (O2) into single O atoms by ultraviolet radiation. It is chemically very reactive because a single O atom readily combines with another O atom or with other atoms or molecules that can form a stable oxide. The effects of atomic oxygen on the external surfaces of spacecraft in low Earth orbit can have dire consequences for spacecraft life, and this is a well-known and much-studied problem. Much less is known about the effects of atomic oxygen on the internal surfaces of spacecraft. This degradation can occur when openings in components of the spacecraft exterior allow the entry of atomic oxygen into regions that are not exposed to direct atomic oxygen attack but rather to scattered attack. Openings can exist because of spacecraft venting, microwave cavities, and apertures for Earth viewing, Sun sensors, or star trackers. The effects of atomic oxygen erosion of polymers interior to an aperture on a spacecraft were simulated at the NASA Glenn Research Center by using Monte Carlo computational techniques. A two-dimensional model was used to provide quantitative indications of the attenuation of atomic oxygen flux as a function of the distance into a parallel-walled cavity. The model allows the atomic oxygen arrival direction, the Maxwell-Boltzmann temperature, and the ram energy to be varied, along with the interaction parameters: the degree of recombination upon impact with polymer or nonreactive surfaces, the initial reaction probability, the dependence of the reaction probability upon energy and angle of attack, the degree of specularity of scattering from reactive and nonreactive surfaces, and the degree of thermal accommodation upon impact with reactive and nonreactive surfaces. Varying these parameters allows the model to produce atomic oxygen erosion geometries that replicate actual experimental results from space. The degree of
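
    A minimal sketch of the two-dimensional cavity idea is given below: atoms enter the opening of a parallel-walled cavity, bounce diffusely between the walls, and on each impact react with a fixed probability, so the depth distribution of reactions indicates how the flux attenuates into the cavity. The reaction probability and diffuse re-emission rule are illustrative assumptions; recombination, energy and angle dependence, and thermal accommodation are omitted.

```python
# Hedged sketch of 2-D Monte Carlo transport of atomic oxygen into a
# parallel-walled cavity: atoms enter the opening, bounce diffusely between
# the two walls, and on each wall impact react (erode) with a fixed
# probability. Reaction probability and cavity width are illustrative
# placeholders; recombination and energy/angle dependence are omitted.
import math
import random

random.seed(0)
width, p_react, n_atoms = 1.0, 0.15, 50_000
reacted_depths = []

for _ in range(n_atoms):
    x = random.uniform(0.0, width)
    y = 0.0                                            # depth below the opening
    theta = random.uniform(-0.5 * math.pi, 0.5 * math.pi)  # downward hemisphere
    dx, dy = math.sin(theta), math.cos(theta)
    while True:
        if abs(dx) < 1e-12:
            break                       # travels straight down, never hits a wall
        # Distance to whichever wall (x = 0 or x = width) lies in the direction of travel.
        t = (width - x) / dx if dx > 0 else -x / dx
        x, y = (width if dx > 0 else 0.0), y + dy * t
        if y < 0.0:
            break                       # crossed the opening plane: escaped
        if random.random() < p_react:
            reacted_depths.append(y)    # eroded the wall at this depth
            break
        # Diffuse re-emission from the wall, back into the cavity.
        theta = random.uniform(-0.5 * math.pi, 0.5 * math.pi)
        dy = math.sin(theta)
        dx = math.cos(theta) * (1.0 if x == 0.0 else -1.0)

mean_depth = sum(reacted_depths) / len(reacted_depths)
print("fraction reacting inside cavity:", len(reacted_depths) / n_atoms)
print("mean erosion depth (cavity widths):", mean_depth / width)
```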

  8. Hybrid method for fast Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with tumor-like heterogeneities.

    Science.gov (United States)

    Zhu, Caigang; Liu, Quan

    2012-01-01

    We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.

  9. Study of the validity of a combined potential model using the Hybrid Reverse Monte Carlo method in Fluoride glass system

    Directory of Open Access Journals (Sweden)

    M. Kotbi

    2013-03-01

    Full Text Available The choice of appropriate interaction models is among the major difficulties of conventional methods such as Molecular Dynamics (MD) and Monte Carlo (MC) simulations. On the other hand, the so-called Reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are, however, accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm which introduces an energy penalty term into the acceptance criterion. This method is referred to as the Hybrid Reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show a good agreement between experimental and calculated characteristics, as well as a meaningful improvement in the partial pair distribution functions (PDFs). We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or similar systems. We also suggest that HRMC could be useful as a tool for testing interaction potential models, as well as for conventional applications.
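
    The sketch below illustrates the HRMC acceptance idea: a random particle move is accepted or rejected with a Metropolis criterion applied to a cost that sums a chi-square misfit against an 'experimental' pair distribution histogram and a weighted energy penalty (here a plain Lennard-Jones term stands in for the combined Coulomb/Lennard-Jones potential). The system size, weights, and target histogram are illustrative assumptions.

```python
# Hedged sketch of the Hybrid Reverse Monte Carlo acceptance rule: a particle
# move is accepted with a Metropolis criterion applied to the sum of the
# chi-square misfit against an "experimental" pair distribution histogram and
# a weighted energy penalty (here a pure Lennard-Jones term). System size,
# weights and the target histogram are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)
n, box = 64, 8.0
pos = rng.random((n, 3)) * box
bins = np.linspace(0.5, box / 2, 30)

def pair_distances(p):
    d = p[:, None, :] - p[None, :, :]
    d -= box * np.round(d / box)                      # minimum-image convention
    r = np.sqrt((d ** 2).sum(-1))
    return r[np.triu_indices(n, 1)]

def histogram(p):
    h, _ = np.histogram(pair_distances(p), bins=bins)
    return h / h.sum()

def lj_energy(p):
    r = np.clip(pair_distances(p), 0.8, None)         # avoid overflow at tiny r
    return np.sum(4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6))

g_exp = histogram(rng.random((n, 3)) * box)           # stand-in "experimental" data
w_energy, sigma2 = 0.05, 1e-4

def cost(p):
    chi2 = np.sum((histogram(p) - g_exp) ** 2) / sigma2
    return chi2 + w_energy * lj_energy(p)             # misfit + energy penalty

current, accepted = cost(pos), 0
for step in range(2000):
    i = rng.integers(n)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.normal(scale=0.2, size=3)) % box
    new = cost(trial)
    if new < current or rng.random() < np.exp(current - new):
        pos, current, accepted = trial, new, accepted + 1
print("acceptance ratio:", accepted / 2000, " final cost:", current)
```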

  10. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3% /1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3% /1 mm.
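
    A minimal sketch of the inverse-transform sampling step used by such a virtual source model is given below: a tabulated probability distribution is turned into a cumulative distribution and sampled with uniform random numbers. The tabulated 'spectrum' is an illustrative assumption, not one derived from a Synergy phase space file.

```python
# Hedged sketch of the inverse-transform sampling step used when a virtual
# source model draws particle properties from probability distribution
# functions derived from a phase space file. The tabulated "energy spectrum"
# below is an illustrative placeholder, not an Elekta Synergy 6 MV spectrum.
import numpy as np

rng = np.random.default_rng(2)

# Tabulated PDF: energy bin centres (MeV) and un-normalised frequencies.
energies = np.linspace(0.25, 6.0, 24)
frequency = np.exp(-energies / 1.5) * energies        # placeholder spectral shape

pdf = frequency / frequency.sum()
cdf = np.cumsum(pdf)

def sample_energy(n):
    """Inverse-transform sampling from the tabulated CDF."""
    u = rng.random(n)
    idx = np.searchsorted(cdf, u)
    return energies[idx]

samples = sample_energy(1_000_000)
print("mean sampled energy (MeV):", samples.mean())
print("mean of tabulated PDF    :", (pdf * energies).sum())
```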

  11. Properties of Carbon-Oxygen White Dwarfs From Monte Carlo Stellar Models

    Science.gov (United States)

    Fields, C. E.; Farmer, R.; Petermann, I.; Iliadis, C.; Timmes, F. X.

    2016-05-01

    We investigate properties of carbon-oxygen white dwarfs with respect to the composite uncertainties in the reaction rates using the stellar evolution toolkit, Modules for Experiments in Stellar Astrophysics (MESA) and the probability density functions in the reaction rate library STARLIB. These are the first Monte Carlo stellar evolution studies that use complete stellar models. Focusing on 3 M⊙ models evolved from the pre main-sequence to the first thermal pulse, we survey the remnant core mass, composition, and structure properties as a function of 26 STARLIB reaction rates covering hydrogen and helium burning using a Principal Component Analysis and Spearman Rank-Order Correlation. Relative to the arithmetic mean value, we find the width of the 95% confidence interval to be ΔM_1TP ≈ 0.019 M⊙ for the core mass at the first thermal pulse, Δt_1TP ≈ 12.50 Myr for the age, Δlog(T_c/K) ≈ 0.013 for the central temperature, Δlog(ρ_c/g cm⁻³) ≈ 0.060 for the central density, ΔY_e,c ≈ 2.6 × 10⁻⁵ for the central electron fraction, ΔX_c(²²Ne) ≈ 5.8 × 10⁻⁴, ΔX_c(¹²C) ≈ 0.392, and ΔX_c(¹⁶O) ≈ 0.392. Uncertainties in the experimental ¹²C(α,γ)¹⁶O, triple-α, and ¹⁴N(p,γ)¹⁵O reaction rates dominate these variations. We also consider a grid of 1-6 M⊙ models evolved from the pre main-sequence to the final white dwarf to probe the sensitivity of the initial-final mass relation to experimental uncertainties in the hydrogen and helium reaction rates.

  12. Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model

    Science.gov (United States)

    Prakash, Shashi; Kumar, Nitish; Kumar, Subrata

    2016-09-01

    CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly-methyl-methacrylate). PMMA directly vaporizes when subjected to a high intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. A number of variants of transparent PMMA are available on the market with different values of the thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties must be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
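
    The sketch below illustrates Monte Carlo uncertainty propagation for the channel depth: thermophysical properties are drawn from assumed distributions and pushed through a simple energy-balance depth expression. Both the expression and every numerical value are illustrative assumptions, not the analytical model or PMMA data used in the paper.

```python
# Hedged sketch of Monte Carlo uncertainty propagation for the maximum
# microchannel depth: thermophysical properties of PMMA are drawn from
# assumed distributions and pushed through a simple energy-balance depth
# expression d ~ P / (pi * w * v * rho * (c * dT + L)). Both the expression
# and all parameter values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(8)
n_samples = 100_000

P = 20.0            # laser power, W
v = 0.2             # scanning speed, m/s
w = 150e-6          # beam radius, m

rho = rng.normal(1180.0, 30.0, n_samples)      # density, kg/m^3
c   = rng.normal(1470.0, 80.0, n_samples)      # specific heat, J/(kg K)
L   = rng.normal(1.0e6, 1.0e5, n_samples)      # effective vaporisation heat, J/kg
dT  = rng.normal(360.0, 20.0, n_samples)       # temperature rise to decomposition, K

depth = P / (np.pi * w * v * rho * (c * dT + L))   # metres

print("mean depth   : %.1f um" % (1e6 * depth.mean()))
print("std (1 sigma): %.1f um" % (1e6 * depth.std()))
print("95%% interval: [%.1f, %.1f] um"
      % tuple(1e6 * np.percentile(depth, [2.5, 97.5])))
```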

  13. High precision single-cluster Monte Carlo measurement of the critical exponents of the classical 3D Heisenberg model

    CERN Document Server

    Holm, C

    1992-01-01

    We report measurements of the critical exponents of the classical three-dimensional Heisenberg model on simple cubic lattices of size $L^3$ with $L$ = 12, 16, 20, 24, 32, 40, and 48. The data was obtained from a few long single-cluster Monte Carlo simulations near the phase transition. We compute high precision estimates of the critical coupling $K_c$, Binder's parameter $U^*$ and the critical exponents

  14. Monte Carlo modeling in CT-based geometries: dosimetry for biological modeling experiments with particle beam radiation.

    Science.gov (United States)

    Diffenderfer, Eric S; Dolney, Derek; Schaettler, Maximilian; Sanzari, Jenine K; McDonough, James; Cengel, Keith A

    2014-03-01

    The space radiation environment imposes increased dangers of exposure to ionizing radiation, particularly during a solar particle event (SPE). These events consist primarily of low energy protons that produce a highly inhomogeneous dose distribution. Due to this inherent dose heterogeneity, experiments designed to investigate the radiobiological effects of SPE radiation present difficulties in evaluating and interpreting dose to sensitive organs. To address this challenge, we used the Geant4 Monte Carlo simulation framework to develop dosimetry software that uses computed tomography (CT) images and provides radiation transport simulations incorporating all relevant physical interaction processes. We found that this simulation accurately predicts measured data in phantoms and can be applied to model dose in radiobiological experiments with animal models exposed to charged particle (electron and proton) beams. This study clearly demonstrates the value of Monte Carlo radiation transport methods for two critically interrelated uses: (i) determining the overall dose distribution and dose levels to specific organ systems for animal experiments with SPE-like radiation, and (ii) interpreting the effect of random and systematic variations in experimental variables (e.g. animal movement during long exposures) on the dose distributions and consequent biological effects from SPE-like radiation exposure. The software developed and validated in this study represents a critically important new tool that allows integration of computational and biological modeling for evaluating the biological outcomes of exposures to inhomogeneous SPE-like radiation dose distributions, and has potential applications for other environmental and therapeutic exposure simulations.

  15. Dynamic Critical Behavior of Multi-Grid Monte Carlo for Two-Dimensional Nonlinear $\\sigma$-Models

    OpenAIRE

    Mana, Gustavo; Mendes, Tereza; Pelissetto, Andrea; Sokal, Alan D.

    1995-01-01

    We introduce a new and very convenient approach to multi-grid Monte Carlo (MGMC) algorithms for general nonlinear $\\sigma$-models: it is based on embedding an $XY$ model into the given $\\sigma$-model, and then updating the induced $XY$ model using a standard $XY$-model MGMC code. We study the dynamic critical behavior of this algorithm for the two-dimensional $O(N)$ $\\sigma$-models with $N = 3,4,8$ and for the $SU(3)$ principal chiral model. We find that the dynamic critical exponent $z$ vari...

  16. Monte-Carlo simulations of methane/carbon dioxide and ethane/carbon dioxide mixture adsorption in zeolites and comparison with matrix treatment of statistical mechanical lattice model

    Science.gov (United States)

    Dunne, Lawrence J.; Furgani, Akrem; Jalili, Sayed; Manos, George

    2009-05-01

    Adsorption isotherms have been computed by Monte-Carlo simulation for methane/carbon dioxide and ethane/carbon dioxide mixtures adsorbed in the zeolite silicalite. These isotherms show remarkable differences with the ethane/carbon dioxide mixtures displaying strong adsorption preference reversal at high coverage. To explain the differences in the Monte-Carlo mixture isotherms an exact matrix calculation of the statistical mechanics of a lattice model of mixture adsorption in zeolites has been made. The lattice model reproduces the essential features of the Monte-Carlo isotherms, enabling us to understand the differing adsorption behaviour of methane/carbon dioxide and ethane/carbon dioxide mixtures in zeolites.

  17. Validation of the Monte Carlo model developed to assess the activity generated in control rods of a BWR

    Science.gov (United States)

    Ródenas, José; Abarca, Agustín; Gallardo, Sergio; Sollet, Eduardo

    2010-07-01

    Control rods are activated by neutron reactions inside the reactor. The activation is produced mainly in the stainless steel and its impurities. The dose produced by this activity is not important inside the reactor, but it has to be taken into account when the rod is withdrawn from it. The neutron activation has been modeled with the MCNP5 code, based on the Monte Carlo method. The number of reactions obtained with the code can be converted into activity. In this work, a detailed model of the control rod has been developed considering all its components: handle, tubes, gain, and central core. In addition, the rod has been divided into 5 zones in order to account for the different axial exposure to the neutron flux inside the reactor. The results of the Monte Carlo simulation of the neutron activation constitute a gamma source in the control rod. With this source, applying the Monte Carlo method again, doses at a certain distance from the rod have been calculated. Comparison of the calculated doses with experimental measurements leads to the validation of the developed model.

  18. The 3-Attractor Water Model: Monte-Carlo Simulations with a New, Effective 2-Body Potential (BMW)

    Directory of Open Access Journals (Sweden)

    Francis Muguet

    2003-02-01

    Full Text Available According to the precepts of the 3-attractor (3-A) water model, effective 2-body water potentials should feature as local minima the bifurcated and inverted water dimers in addition to the well-known linear water dimer global minimum. In order to test the 3-A model, a new pairwise effective intermolecular rigid water potential has been designed. The new potential is part of a new class of potentials called BMW (Bushuev-Muguet-Water), which is built by modifying existing empirical potentials. This version (BMW v. 0.1) has been designed by modifying the SPC/E empirical water potential. It is a preliminary version well suited for exploratory Monte-Carlo simulations. The shape of the potential energy surface (PES) around each local minimum has been approximated with the help of Gaussian functions. Classical Monte Carlo simulations have been carried out for liquid water in the NPT ensemble for a very wide range of state parameters up to the supercritical water regime. Thermodynamic properties are reported. The radial distribution functions (RDFs) have been computed and are compared with the RDFs obtained from neutron scattering experimental data. Our preliminary Monte-Carlo simulations show that the seemingly unconventional hypotheses of the 3-A model are most plausible. The simulation has also uncovered a totally new role for 2-fold H-bonds.

  19. Stochastic geometrical model and Monte Carlo optimization methods for building reconstruction from InSAR data

    Science.gov (United States)

    Zhang, Yue; Sun, Xian; Thiele, Antje; Hinz, Stefan

    2015-10-01

    Synthetic aperture radar (SAR) systems, such as TanDEM-X, TerraSAR-X and Cosmo-SkyMed, acquire imagery with high spatial resolution (HR), making it possible to observe objects in urban areas with high detail. In this paper, we propose a new top-down framework for three-dimensional (3D) building reconstruction from HR interferometric SAR (InSAR) data. Unlike most methods proposed before, we adopt a generative model and carry out the reconstruction by maximum a posteriori (MAP) estimation through Monte Carlo methods. The reason for this strategy is that the noisiness of SAR images calls for a thorough prior model to better cope with the inherent amplitude and phase fluctuations. In the reconstruction process, according to the radar configuration and the building geometry, a 3D building hypothesis is mapped to the SAR image plane and decomposed into feature regions such as layover, corner line, and shadow. Then, the statistical properties of the intensity, interferometric phase and coherence of each region are explored respectively and are included as region terms. Roofs are not considered directly, as in most cases they are mixed with the walls into the layover area. When estimating the similarity between the building hypothesis and the real data, the prior and the region terms, together with the edge term related to the contours of layover and corner line, are taken into consideration. In the optimization step, in order to achieve convergent reconstruction outputs and avoid local extrema, special transition kernels are designed. The proposed framework is evaluated on the TanDEM-X dataset and performs well for building reconstruction.

  20. Comprehensive modeling of solid phase epitaxial growth using Lattice Kinetic Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Bragado, Ignacio, E-mail: ignacio.martin@imdea.org [IMDEA Materials Institute, C/ Eric Kandel 2, Parque Científico y Tecnológico de Getafe 28906 Madrid, Getafe (Spain)

    2013-05-15

    Damage evolution of irradiated silicon is, and has been, a topic of interest for the last decades for its applications to the semiconductor industry. In particular, sometimes, the damage is heavy enough to collapse the lattice and to locally amorphize the silicon, while in other cases amorphization is introduced explicitly to improve other implanted profiles. Subsequent annealing of the implanted samples heals the amorphized regions through Solid Phase Epitaxial Regrowth (SPER). SPER is a complicated process. It is anisotropic, it generates defects in the recrystallized silicon, it has a different amorphous/crystalline (A/C) roughness for each orientation, leaving pits in Si(1 1 0), and in Si(1 1 1) it produces two modes of recrystallization with different rates. The recently developed code MMonCa has been used to introduce a physically-based comprehensive model using Lattice Kinetic Monte Carlo that explains all the above singularities of silicon SPER. The model operates by having, as building blocks, the silicon lattice microconfigurations and their four twins. It detects the local configurations, assigns microscopical growth rates, and reconstructs the positions of the lattice locally with one of those building blocks. The overall results reproduce the (a) anisotropy as a result of the different growth rates, (b) localization of SPER induced defects, (c) roughness trends of the A/C interface, (d) pits on Si(1 1 0) regrown surfaces, and (e) bimodal Si(1 1 1) growth. It also provides physical insights of the nature and shape of deposited defects and how they assist in the occurrence of all the above effects.

  1. Range verification methods in particle therapy: underlying physics and Monte Carlo modelling

    Directory of Open Access Journals (Sweden)

    Aafke Christine Kraan

    2015-07-01

    Full Text Available Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in-vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including beta+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modelling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modelling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.

  2. Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling.

    Science.gov (United States)

    Kraan, Aafke Christine

    2015-01-01

    Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β (+) emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects.

  3. Vectorized Monte Carlo methods for reactor lattice analysis

    Science.gov (United States)

    Brown, F. B.

    1984-01-01

    Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.

  4. Three dimensional Monte-Carlo modeling of laser-tissue interaction

    Energy Technology Data Exchange (ETDEWEB)

    Gentile, N A; Kim, B M; London, R A; Trauner, K B

    1999-03-12

    A full three-dimensional Monte-Carlo program was developed for the analysis of laser-tissue interactions. This project was performed as a part of the LATIS3D (3-D Laser-Tissue interaction) project. The accuracy was verified against results from a public domain two-dimensional axisymmetric program. The code was used for the simulation of light transport in a simplified human knee geometry. Using real human knee meshes, which will be extracted from MRI images in the near future, a full analysis of dosimetry and surgical strategies for photodynamic therapy of rheumatoid arthritis will follow.

  5. The Monte Carlo approach to transport modeling in deca-nanometer MOSFETs

    Science.gov (United States)

    Sangiorgi, Enrico; Palestri, Pierpaolo; Esseni, David; Fiegna, Claudio; Selmi, Luca

    2008-09-01

    In this paper, we review recent developments of the Monte Carlo approach to the simulation of semi-classical carrier transport in nano-MOSFETs, with particular focus on the inclusion of quantum-mechanical effects in the simulation (using either the multi-subband approach or quantum corrections to the electrostatic potential) and on the numerical stability issues related to the coupling of the transport with the Poisson equation. Selected applications are presented, including the analysis of quasi-ballistic transport, the determination of the RF characteristics of deca-nanometric MOSFETs, and the study of non-conventional device structures and channel materials.

  6. Comment on "Monte Carlo simulations for a Lotka-type model with reactant diffusion and interactions".

    Science.gov (United States)

    Zhdanov, Vladimir P

    2002-03-01

    Discussing the effect of adsorbate-adsorbate lateral interactions on the kinetics of heterogeneous catalytic reactions, Zvejnieks and Kuzovkov [Phys. Rev. E 63, 051104 (2001)] conclude that in the case of adsorbed particles the Metropolis Monte Carlo dynamics is meaningless and propose to use their own dynamics, which is equivalent to the Glauber dynamics. In this Comment, I show that these and other conclusions and prescriptions by Zvejnieks and Kuzovkov are not in line with the general principles of simulations of rate processes in adsorbed overlayers.
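
    For reference, the sketch below simply evaluates the two acceptance rules at issue for an elementary move with energy change ΔE: the Metropolis rule min(1, exp(-ΔE/kT)) and the Glauber rule 1/(1 + exp(ΔE/kT)). Both satisfy detailed balance with respect to the same equilibrium distribution; the choice affects only the dynamics, which is the point under discussion.

```python
# Hedged sketch comparing the Metropolis and Glauber acceptance probabilities
# for an elementary move with energy change dE at temperature kT. Both rules
# satisfy detailed balance for the Boltzmann distribution but define
# different dynamics.
import math

def metropolis(dE, kT):
    return min(1.0, math.exp(-dE / kT))

def glauber(dE, kT):
    return 1.0 / (1.0 + math.exp(dE / kT))

kT = 1.0
for dE in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print("dE=%+.1f  Metropolis=%.3f  Glauber=%.3f"
          % (dE, metropolis(dE, kT), glauber(dE, kT)))
```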

  7. Co-combustion of peanut hull and coal blends: Artificial neural networks modeling, particle swarm optimization and Monte Carlo simulation.

    Science.gov (United States)

    Buyukada, Musa

    2016-09-01

    Co-combustion of coal and peanut hull (PH) was investigated using artificial neural networks (ANN), particle swarm optimization, and Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The best prediction was reached by the ANN61 multi-layer perceptron model with an R² of 0.99994. A blend ratio of 90 to 10 (PH to coal, wt%), a temperature of 305°C, and a heating rate of 49°C min⁻¹ were determined as the optimum input values, and a yield of 87.4% was obtained under the PSO-optimized conditions. The validation experiments resulted in yields of 87.5% ± 0.2% after three replications. Monte Carlo simulations were used for the probabilistic assessment of the stochastic variability and uncertainty associated with the explanatory variables of the co-combustion process.

  8. Monte Carlo Modeling for in vivo MRS: Generating and quantifying simulations via the Windows, Linux and Android platform

    NARCIS (Netherlands)

    De Beer, R.; Van Ormondt, D.

    2014-01-01

    Work in context of European Union TRANSACT project. We have developed a Java/JNI/C/Fortran based software application, called MonteCarlo, with which the users can carry out Monte Carlo studies in the field of in vivo MRS. The application is supposed to be used as a tool for supporting the

  9. Monte Carlo Modeling for in vivo MRS: Generating and quantifying simulations via the Windows, Linux and Android platform

    NARCIS (Netherlands)

    De Beer, R.; Van Ormondt, D.

    2014-01-01

    Work in context of European Union TRANSACT project. We have developed a Java/JNI/C/Fortran based software application, called MonteCarlo, with which the users can carry out Monte Carlo studies in the field of in vivo MRS. The application is supposed to be used as a tool for supporting the

  10. Monte Carlo study of phase transitions and magnetic properties of LaMnO3: Heisenberg model

    Science.gov (United States)

    Naji, S.; Benyoussef, A.; El Kenz, A.; Ez-Zahraouy, H.; Loulidi, M.

    2012-08-01

    On the basis of ab initio calculations (FPLO) and Monte Carlo Simulations (MCS) the phase diagrams and magnetic properties of the bulk perovskite LaMnO3 have been studied, using the Heisenberg model. It is shown, using ab initio calculations in the scalar relativistic scheme, that the stable phase is the antiferromagnetic A-type, which corresponds to ferromagnetic order of the manganese ions in the basal planes (a,b) and antiferromagnetic order of these ions between these planes along the c axis. Using the full four-component relativistic scheme, in order to calculate the magnetic anisotropy energy and constants, it is found that the favorable magnetic direction is the (010) b axis. The transition temperatures and the critical exponents are obtained in the framework of Monte Carlo simulations. The magnetic anisotropy and the exchange couplings of the Heisenberg model are deduced from ab initio calculations. They lead, by using Monte Carlo simulations, to a quantitative agreement with the experimental transition temperatures.

  11. Monte Carlo simulations of phase transitions and lattice dynamics in an atom-phonon model for spin transition compounds

    Energy Technology Data Exchange (ETDEWEB)

    Apetrei, Alin Marian, E-mail: alin.apetrei@uaic.r [Department of Physics, Alexandru Ioan Cuza University of Iasi, 11 Blvd. Carol I, Iasi 700506 (Romania); Enachescu, Cristian; Tanasa, Radu; Stoleriu, Laurentiu; Stancu, Alexandru [Department of Physics, Alexandru Ioan Cuza University of Iasi, 11 Blvd. Carol I, Iasi 700506 (Romania)

    2010-09-01

    We apply here the Monte Carlo Metropolis method to a known atom-phonon coupling model for 1D spin transition compounds (STC). These inorganic molecular systems can switch under thermal or optical excitation, between two states in thermodynamical competition, i.e. high spin (HS) and low spin (LS). In the model, the ST units (molecules) are linked by springs, whose elastic constants depend on the spin states of the neighboring atoms, and can only have three possible values. Several previous analytical papers considered a unique average value for the elastic constants (mean-field approximation) and obtained phase diagrams and thermal hysteresis loops. Recently, Monte Carlo simulation papers, taking into account all three values of the elastic constants, obtained thermal hysteresis loops, but no phase diagrams. Employing Monte Carlo simulation, in this work we obtain the phase diagram at T=0 K, which is fully consistent with earlier analytical work; however it is more complex. The main difference is the existence of two supplementary critical curves that mark a hysteresis zone in the phase diagram. This explains the pressure hysteresis curves at low temperature observed experimentally and predicts a 'chemical' hysteresis in STC at very low temperatures. The formation and the dynamics of the domains are also discussed.

  12. Monte Carlo Simulation for Particle Detectors

    CERN Document Server

    Pia, Maria Grazia

    2012-01-01

    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  13. Composite biasing in Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-01-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...

  14. Monte Carlo simulations on SIMD computer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Burmester, C.P.; Gronsky, R. [Lawrence Berkeley Lab., CA (United States); Wille, L.T. [Florida Atlantic Univ., Boca Raton, FL (United States). Dept. of Physics

    1992-03-01

    Algorithmic considerations regarding the implementation of various materials science applications of the Monte Carlo technique to single instruction multiple data (SIMD) computer architectures are presented. In particular, implementation of the Ising model with nearest-neighbor, next-nearest-neighbor, and long-range screened Coulomb interactions on the SIMD architecture MasPar MP-1 (DEC mpp-12000) series of massively parallel computers is demonstrated. Methods of code development which optimize processor array use and minimize inter-processor communication are presented, including lattice partitioning and the use of processor array spanning tree structures for data reduction. Both geometric and algorithmic parallel approaches are utilized. Benchmarks in terms of Monte Carlo updates per second for the MasPar architecture are presented and compared to values reported in the literature from comparable studies on other architectures.
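
    The geometric (lattice-partitioning) parallelism mentioned here relies on the fact that sites of one checkerboard color do not interact with each other and can therefore be updated simultaneously. The sketch below mimics that idea with vectorized NumPy operations on a CPU; lattice size and coupling are illustrative and this is not the MasPar code.

```python
import numpy as np

rng = np.random.default_rng(2)
L, beta, J = 128, 0.44, 1.0                      # lattice size and nearest-neighbour coupling (illustrative)
spins = rng.choice([-1, 1], size=(L, L))

# Checkerboard masks: sites of one color have no mutual interactions, so all of
# them can be updated "in parallel" (here: as one vectorized numpy operation).
ii, jj = np.indices((L, L))
masks = [(ii + jj) % 2 == c for c in (0, 1)]

def sweep(s):
    for mask in masks:
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1))      # periodic nearest-neighbour sum
        dE = 2.0 * J * s * nb                            # energy change of flipping each site
        flip = mask & (rng.random((L, L)) < np.exp(-beta * np.clip(dE, 0, None)))
        s[flip] *= -1                                     # Metropolis acceptance, applied sitewise
    return s

for _ in range(200):
    sweep(spins)
print("magnetization per site:", spins.mean())
```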

  15. Development of CT scanner models for patient organ dose calculations using Monte Carlo methods

    Science.gov (United States)

    Gu, Jianwei

    There is a serious and growing concern about the CT dose delivered by diagnostic CT examinations or image-guided radiation therapy imaging procedures. To better understand and accurately quantify radiation dose due to CT imaging, Monte Carlo based CT scanner models are needed. This dissertation describes the development, validation, and application of detailed CT scanner models including a GE LightSpeed 16 MDCT scanner and two image-guided radiation therapy (IGRT) cone beam CT (CBCT) scanners, kV CBCT and MV CBCT. The modeling process considered the energy spectrum, beam geometry and movement, and bowtie filter (BTF). The methodology for validating the scanner models using reported CTDI values was also developed and implemented. Finally, the organ doses to different patients undergoing CT scans were obtained by integrating the CT scanner models with anatomically realistic patient phantoms. The tube current modulation (TCM) technique was also investigated for dose reduction. It was found that for RPI-AM, the thyroid, kidneys and thymus received the largest doses of 13.05, 11.41 and 11.56 mGy/100 mAs from the chest, abdomen-pelvis and CAP scans, respectively, using 120 kVp protocols. For RPI-AF, the thymus, small intestine and kidneys received the largest doses of 10.28, 12.08 and 11.35 mGy/100 mAs from the chest, abdomen-pelvis and CAP scans, respectively, using 120 kVp protocols. The dose to the fetus of the 3-month pregnant patient phantom was 0.13 mGy/100 mAs and 0.57 mGy/100 mAs from the chest and kidney scans, respectively. For the chest scan of the 6-month and 9-month pregnant patient phantoms, the fetal doses were 0.21 mGy/100 mAs and 0.26 mGy/100 mAs, respectively. For MDCT with TCM schemas, the fetal dose can be reduced by 14%-25%. To demonstrate the applicability of the method proposed in this dissertation for modeling CT scanners, an additional MDCT scanner was modeled and validated using measured CTDI values. These results demonstrated that the

  16. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy.

    Science.gov (United States)

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-01

    Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75-2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
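
    The core of particle navigation in a region bounded by quadratic surfaces is the distance-to-surface computation: substituting the ray x(t) = o + t d into the implicit quadric equation gives a quadratic in t whose smallest positive root is the step length. The sketch below shows that generic calculation (not the authors' GPU code) for a quadric written as f(x) = x^T A x + b^T x + c.

```python
import numpy as np

def distance_to_quadric(origin, direction, A, b, c, eps=1e-9):
    """Smallest positive t with f(o + t d) = 0, where f(x) = x^T A x + b^T x + c.
    Returns np.inf if the ray does not hit the surface."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    qa = d @ A @ d
    qb = 2.0 * o @ A @ d + b @ d
    qc = o @ A @ o + b @ o + c
    if abs(qa) < eps:                       # nearly linear in t: qb*t + qc = 0
        return -qc / qb if qb * qc < 0 else np.inf
    disc = qb * qb - 4.0 * qa * qc
    if disc < 0:
        return np.inf
    sq = np.sqrt(disc)
    roots = [(-qb - sq) / (2 * qa), (-qb + sq) / (2 * qa)]
    pos = [t for t in roots if t > eps]
    return min(pos) if pos else np.inf

# Example: unit sphere x^2 + y^2 + z^2 - 1 = 0; a ray from the origin along +x hits it at t = 1.
A = np.eye(3); b = np.zeros(3); c = -1.0
print(distance_to_quadric([0, 0, 0], [1, 0, 0], A, b, c))   # -> 1.0
```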

  17. Comprehensive modeling of special nuclear materials detection using three-dimensional deterministic and Monte Carlo methods

    Science.gov (United States)

    Ghita, Gabriel M.

    Our study aims to design a useful neutron signature characterization device based on 3He detectors, a standard neutron detection methodology used in homeland security applications. The research work involved simulation of the generation, transport, and detection of the leakage radiation from Special Nuclear Materials (SNM). To accomplish the research goals, we use a new methodology to fully characterize a standard "1-Ci" Plutonium-Beryllium (Pu-Be) neutron source based on 3-D computational radiation transport methods, employing both deterministic SN and Monte Carlo methodologies. Computational model findings were subsequently validated through experimental measurements. The achieved results allowed us to design, build, and laboratory-test a Nickel composite alloy shield that enables the neutron leakage spectrum from a standard Pu-Be source to be transformed, through neutron scattering interactions in the shield, into a very close approximation of the neutron spectrum leaking from a large, subcritical mass of Weapons Grade Plutonium (WGPu) metal. This source will make possible testing with a nearly exact reproduction of the neutron spectrum from a 6.67 kg WGPu mass equivalent, but without the expense or risk of testing detector components with real materials. Moreover, over thirty moderator materials were studied in order to characterize their neutron energy filtering potential. Specific focus was placed on establishing the limits of He-3 spectroscopy using ideal filter materials. To demonstrate our methodology, we present the optimally detected spectral differences between SNM materials (Plutonium and Uranium), metal and oxide, using ideal filter materials. Finally, using knowledge gained from previous studies, the design of a He-3 spectroscopy system neutron detector, simulated entirely via computational methods, is proposed to resolve the spectra from SNM neutron sources of high interest. This was accomplished by replacing ideal filters with real materials, and comparing reaction

  18. Modeling of multi-band drift in nanowires using a full band Monte Carlo simulation

    Science.gov (United States)

    Hathwar, Raghuraj; Saraniti, Marco; Goodnick, Stephen M.

    2016-07-01

    We report a new numerical approach for multi-band drift within the context of full band Monte Carlo (FBMC) simulation and apply it to Si and InAs nanowires. The approach is based on the solution of the Krieger and Iafrate (KI) equations [J. B. Krieger and G. J. Iafrate, Phys. Rev. B 33, 5494 (1986)], which give the probability of carriers undergoing interband transitions subject to an applied electric field. The KI equations are based on the solution of the time-dependent Schrödinger equation, and previous solutions of these equations have used Runge-Kutta (RK) methods to solve them numerically. This made the solution of the KI equations numerically expensive, and it was therefore only applied to a small part of the Brillouin zone (BZ). Here we discuss an alternative approach to the solution of the KI equations using the Magnus expansion (also known as "exponential perturbation theory"). This method is more accurate than the RK method because the solution lies on the exponential map and shares important qualitative properties with the exact solution, such as the preservation of the unitary character of the time-evolution operator. The solution of the KI equations is then incorporated through a modified FBMC free-flight drift routine and applied throughout the nanowire BZ. The importance of the multi-band drift model is then demonstrated for Si and InAs nanowires by performing uniform-field FBMC simulations and analyzing the average carrier energies and carrier populations under high electric fields. The numerical simulations show that the average energy of the carriers under high electric field is significantly higher when multi-band drift is taken into consideration, due to the interband transitions allowing carriers to reach higher energies.

  19. Modeling parameterized geometry in GPU-based Monte Carlo particle transport simulation for radiotherapy

    Science.gov (United States)

    Chi, Yujie; Tian, Zhen; Jia, Xun

    2016-08-01

    Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits their application scope. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry; the average dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data were stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time that was ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75-2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0

  20. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    Science.gov (United States)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales well in a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our simulations to sizes of L = 32, 64 on a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
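
    The replica-level logic described here, i.e. exchange attempts between neighbouring temperatures and mid-point insertion where the exchange rate is too low, is sketched below on a toy energy model. The acceptance rule is the standard parallel-tempering one; the energy model, replica count, and threshold are illustrative assumptions, not the CUDA implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def exchange_prob(E1, E2, b1, b2):
    # Standard parallel-tempering acceptance: min(1, exp[(b1 - b2)(E1 - E2)])
    return min(1.0, np.exp((b1 - b2) * (E1 - E2)))

# Toy replicas: pretend each replica's energy fluctuates around -N*tanh(beta) (illustrative only).
N = 1024
betas = np.linspace(0.2, 0.6, 8).tolist()
def sample_energy(beta):
    return -N * np.tanh(beta) + rng.normal(scale=np.sqrt(N))

def measured_rates(betas, attempts=2000):
    # Measure the exchange acceptance rate of each neighbouring temperature pair.
    rates = []
    for b1, b2 in zip(betas[:-1], betas[1:]):
        acc = sum(rng.random() < exchange_prob(sample_energy(b1), sample_energy(b2), b1, b2)
                  for _ in range(attempts))
        rates.append(acc / attempts)
    return rates

# Adaptive step: insert a mid-point temperature in every gap whose rate falls below a threshold.
threshold = 0.2
rates = measured_rates(betas)
new_betas = [betas[0]]
for b2, r in zip(betas[1:], rates):
    if r < threshold:
        new_betas.append(0.5 * (new_betas[-1] + b2))   # mid-point insertion
    new_betas.append(b2)
print("rates:", [round(r, 2) for r in rates])
print("temperature set grew from", len(betas), "to", len(new_betas), "replicas")
```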

  1. Langevin Monte Carlo filtering for target tracking

    NARCIS (Netherlands)

    Iglesias Garcia, Fernando; Bocquel, Melanie; Driessen, Hans

    2015-01-01

    This paper introduces the Langevin Monte Carlo Filter (LMCF), a particle filter with a Markov chain Monte Carlo algorithm which draws proposals by simulating Hamiltonian dynamics. This approach is well suited to non-linear filtering problems in high dimensional state spaces where the bootstrap filte

  2. An introduction to Monte Carlo methods

    NARCIS (Netherlands)

    Walter, J. -C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo sim

  4. An introduction to Monte Carlo methods

    Science.gov (United States)

    Walter, J.-C.; Barkema, G. T.

    2015-01-01

    Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called worm algorithm. We conclude with a discussion of dynamical effects such as thermalization and correlation time.
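
    Since this record contrasts local Metropolis/Glauber updates with cluster algorithms near criticality, a compact sketch of the Wolff cluster update for the 2D Ising model may be useful (bond-activation probability p = 1 - exp(-2*beta*J)); the lattice size and temperature below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
L, beta, J = 64, 0.4407, 1.0                 # close to the 2D critical point beta_c ~ 0.4407
spins = rng.choice([-1, 1], size=(L, L))
p_add = 1.0 - np.exp(-2.0 * beta * J)        # Wolff bond-activation probability

def wolff_update(s):
    seed = (rng.integers(L), rng.integers(L))
    cluster, frontier = {seed}, [seed]
    s0 = s[seed]
    while frontier:
        i, j = frontier.pop()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            ni, nj = ni % L, nj % L          # periodic boundaries
            if (ni, nj) not in cluster and s[ni, nj] == s0 and rng.random() < p_add:
                cluster.add((ni, nj))
                frontier.append((ni, nj))
    for i, j in cluster:                     # flip the whole cluster with probability 1
        s[i, j] = -s0
    return len(cluster)

sizes = [wolff_update(spins) for _ in range(500)]
print("mean cluster size:", np.mean(sizes), " |m| =", abs(spins.mean()))
```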

  5. Challenges of Monte Carlo Transport

    Energy Technology Data Exchange (ETDEWEB)

    Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-10

    These are slides from a presentation for the Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Parallel computational physics and parallel Monte Carlo are then discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. OpenSHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.

  6. GPU Accelerated Monte Carlo Algorithm of Ising Model on Triangular Lattice

    Institute of Scientific and Technical Information of China (English)

    陆星; 蔡静; 张伟

    2012-01-01

    In statistical models, the efficiency of most Monte Carlo algorithms drops quickly near the critical point. Building on an analysis of traditional local algorithms, a GPU-based parallel simulation algorithm for the triangular-lattice Ising model is proposed, which greatly improves the efficiency of the Monte Carlo simulation. For a lattice of size 1024 × 1024, a speedup of 69 is achieved. In addition, the critical behavior is analyzed, and a high-precision critical point (βc = 0.27466(1)) and critical exponents (yt = 1.01(2), yh = 1.8756(3)) of the triangular-lattice Ising model are obtained, demonstrating the effectiveness of the GPU algorithm.

  7. Monte Carlo modeling of UBI-QEP coverage-dependent atomic chemisorption

    Science.gov (United States)

    Zeigarnik, Andrew V.; Abramova, Ludmila A.; Baranov, Sergey P.; Shustorovich, Evgeny

    2003-09-01

    The unity bond index-quadratic exponential potential (UBI-QEP) method is used to determine the equilibrium coverage-dependent atomic binding energies and overlayer structures for fcc(1 1 1) and (1 0 0) surfaces by Monte Carlo simulations. We modify the current UBI-QEP formalism to include changes in the total energy of the overlayer under adsorption and desorption of atoms, which gives a more accurate description of the reaction enthalpy and desorption activation barrier. Preferred coordination modes are found to be coverage-dependent and typically change with coverage as: (a) only hollow site occupancy; (b) hollow and bridge site occupancy with metal coordination remaining equal to unity; (c) bridge site occupancy with mono- and di-coordinated metal atoms. Although hops to the atop sites are allowed, their population is very rarely seen in the equilibrium state at any coverage. The results of Monte Carlo simulations are compared with the earlier projections made for various local coverage situations. Within the scope of comparability, the agreement is good. Some ad hoc modifications are mentioned, and possible applications of the results are discussed.

  8. Analysis and modeling of localized heat generation by tumor-targeted nanoparticles (Monte Carlo methods)

    Science.gov (United States)

    Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan

    2016-04-01

    We investigated tumor-targeted nanoparticles that influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (mean cosine of the scattering angle), the heat generated in the tumor area reaches a critical level that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, will be remarkably high. The light-matter interactions and the trajectories of incident photons in the targeted tissue are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical approach by which we can accurately trace photon trajectories through the simulation area.
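
    A generic Monte Carlo photon random walk in a homogeneous turbid medium, with exponential free paths, absorption handled by weight attenuation, and Henyey-Greenstein scattering, is sketched below. The optical coefficients are placeholders, not the tissue/nanoparticle values of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative optical properties (per mm); placeholders, not values from the record.
mu_a, mu_s, g = 0.02, 10.0, 0.9
mu_t = mu_a + mu_s

def hg_cosine(g):
    # Sample the cosine of the scattering angle from the Henyey-Greenstein phase function.
    if g == 0.0:
        return 2.0 * rng.random() - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def rotate(d, cos_t):
    # Rotate unit vector d by polar angle acos(cos_t) and a uniform azimuthal angle.
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
    phi = 2.0 * np.pi * rng.random()
    a = np.array([1.0, 0.0, 0.0]) if abs(d[2]) > 0.9 else np.array([0.0, 0.0, 1.0])
    u = np.cross(d, a); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return cos_t * d + sin_t * (np.cos(phi) * u + np.sin(phi) * v)

def propagate(max_events=10_000, w_min=1e-4):
    pos, d, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
    for _ in range(max_events):
        pos = pos + d * (-np.log(1.0 - rng.random()) / mu_t)   # exponential free path
        w *= mu_s / mu_t                                       # deposit the absorbed fraction of the weight
        if w < w_min:                                          # terminate low-weight photons (Russian roulette omitted)
            break
        d = rotate(d, hg_cosine(g))
    return pos[2]

depths = [propagate() for _ in range(2000)]
print("mean final depth (mm):", round(float(np.mean(depths)), 2))
```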

  9. Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network

    Directory of Open Access Journals (Sweden)

    Klipp Edda

    2009-08-01

    Full Text Available Abstract Background Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. the grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimation through Monte Carlo simulations, to measure the completeness grade of GRNs. Results We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion The method described in this paper enables an evaluation of network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide deliver candidate nodes in the network that are likely to be erroneous or to miss unknown connections, which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can
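
    The core idea, randomly sampling kinetic parameters, simulating a perturbation, and counting effects whose sign does not depend on the parameters, is sketched below on a toy two-gene activation cascade. The equations, gene names, and thresholds are assumptions for illustration, not the sea urchin endomesoderm model.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(6)

def cascade(t, y, k, knockout_a=False):
    # Toy GRN: gene A activates gene B via Hill kinetics (not the endomesoderm network itself).
    a, b = y
    prod_a = 0.0 if knockout_a else k[0]
    da = prod_a - k[1] * a
    db = k[2] * a ** 2 / (k[3] ** 2 + a ** 2) - k[4] * b
    return [da, db]

def steady_b(k, knockout_a):
    sol = solve_ivp(cascade, (0, 200), [0.0, 0.0], args=(k, knockout_a), rtol=1e-6)
    return sol.y[1, -1]

n_sets, effects = 200, []
for _ in range(n_sets):
    k = 10.0 ** rng.uniform(-1, 1, size=5)          # log-uniform random parameter set
    delta = steady_b(k, knockout_a=True) - steady_b(k, knockout_a=False)
    effects.append(np.sign(np.round(delta, 6)))

effects = np.array(effects)
invariant = (effects == effects[0]).mean()
print(f"knockout of A lowers B in {100 * (effects < 0).mean():.0f}% of parameter sets "
      f"({100 * invariant:.0f}% parameter-invariant)")
```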

  10. Mean-field and Monte Carlo studies of the magnetization-reversal transition in the Ising model

    Energy Technology Data Exchange (ETDEWEB)

    Misra, Arkajyoti [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India)]. E-mail: arko@cmp.saha.ernet.in; Chakrabarti, Bikas K. [Saha Institute of Nuclear Physics, Bidhannagar, Calcutta (India)]. E-mail: bikas@cmp.saha.ernet.in

    2000-06-16

    Detailed mean-field and Monte Carlo studies of the dynamic magnetization-reversal transition in the Ising model in its ordered phase under a competing external magnetic field of finite duration are presented here. An approximate analytical treatment of the mean-field equations of motion shows the existence of diverging length and time scales across this dynamic transition phase boundary. These are also supported by numerical solutions of the complete mean-field equations of motion and by the Monte Carlo study of the system evolving under Glauber dynamics in both two and three dimensions. Classical nucleation theory predicts different mechanisms of domain growth in two regimes marked by the strength of the external field, and the nature of the Monte Carlo phase boundary can be comprehended satisfactorily using the theory. The order of the transition changes from continuous to discontinuous as one crosses over from the coalescence regime (stronger field) to the nucleation regime (weaker field). Finite-size scaling theory can be applied in the coalescence regime, where the best-fit estimates of the critical exponents are obtained for two and three dimensions. (author)

  11. The structure of molten CuCl: Reverse Monte Carlo modeling with high-energy X-ray diffraction data and molecular dynamics of a polarizable ion model

    Science.gov (United States)

    Alcaraz, Olga; Trullàs, Joaquim; Tahara, Shuta; Kawakita, Yukinobu; Takeda, Shin'ichi

    2016-09-01

    The results of the structural properties of molten copper chloride are reported from high-energy X-ray diffraction measurements, reverse Monte Carlo modeling method, and molecular dynamics simulations using a polarizable ion model. The simulated X-ray structure factor reproduces all trends observed experimentally, in particular the shoulder at around 1 Å-1 related to intermediate range ordering, as well as the partial copper-copper correlations from the reverse Monte Carlo modeling, which cannot be reproduced by using a simple rigid ion model. It is shown that the shoulder comes from intermediate range copper-copper correlations caused by the polarized chlorides.

  12. Comparing kinetic Monte Carlo and thin-film modeling of transversal instabilities of ridges on patterned substrates

    Science.gov (United States)

    Tewes, Walter; Buller, Oleg; Heuer, Andreas; Thiele, Uwe; Gurevich, Svetlana V.

    2017-03-01

    We employ kinetic Monte Carlo (KMC) simulations and a thin-film continuum model to comparatively study the transversal (i.e., Plateau-Rayleigh) instability of ridges formed by molecules on pre-patterned substrates. It is demonstrated that the evolution of the occurring instability qualitatively agrees between the two models for a single ridge as well as for two weakly interacting ridges. In particular, it is shown for both models that the instability occurs on well defined length and time scales which are, for the KMC model, significantly larger than the intrinsic scales of thermodynamic fluctuations. This is further evidenced by the similarity of dispersion relations characterizing the linear instability modes.

  13. Monte Carlo homogenized limit analysis model for randomly assembled blocks in-plane loaded

    Science.gov (United States)

    Milani, Gabriele; Lourenço, Paulo B.

    2010-11-01

    A simple rigid-plastic homogenization model is proposed for the limit analysis of in-plane loaded masonry walls constituted by the random assemblage of blocks with variable dimensions. In the model, the blocks constituting a masonry wall are assumed to be infinitely resistant, with a Gaussian distribution of height and length, whereas the joints are reduced to interfaces with frictional behavior and limited tensile and compressive strength. Block by block, a representative element of volume (REV) is considered, constituted by a central block interconnected with its neighbors by means of rigid-plastic interfaces. The model is characterized by a few material parameters, is numerically inexpensive and is very stable. A sub-class of elementary deformation modes is chosen a priori in the REV, mimicking typical failures due to joint cracking and crushing. Masonry strength domains are obtained by equating the power dissipated in the heterogeneous model with the power dissipated by a fictitious homogeneous macroscopic plate. Owing to the inexpensiveness of the proposed approach, Monte Carlo simulations can be repeated on the REV in order to obtain a stochastic estimate of the in-plane masonry strength at different orientations of the bed joints with respect to the external loads, accounting for the geometrical statistical variability of the block dimensions. Two cases are discussed: the former consists of fully stochastic REV assemblages (obtained considering random variability of both block height and length), and the latter assumes the presence of a horizontal alignment along the bed joints, i.e. allowing block height variability only row by row. The case of deterministic block height (quasi-periodic texture) can be obtained as a subclass of this latter case. Masonry homogenized failure surfaces are finally implemented in an upper bound FE limit analysis code for the analysis at collapse of entire in-plane loaded walls. Two cases of engineering practice, consisting of the prediction of the failure

  14. Monte-Carlo modelling of nano-material photocatalysis: bridging photocatalytic activity and microscopic charge kinetics.

    Science.gov (United States)

    Liu, Baoshun

    2016-04-28

    In photocatalysis, it is known that light intensity, organic concentration, and temperature affect the photocatalytic activity by changing the microscopic kinetics of holes and electrons. However, how the microscopic kinetics of holes and electrons relates to the photocatalytic activity was not well understood. In the present research, we developed a Monte Carlo random-walk model that involves all of the charge kinetics, including the photo-generation, the recombination, the transport, and the interfacial transfer of holes and electrons, to simulate the overall photocatalytic reaction, which we call a "computer experiment" of photocatalysis. Using this model, we simulated the effect of light intensity, temperature, and organic surface coverage on the photocatalytic activity and on the density of the free electrons that accumulate in the simulated system. It was seen that an increase of light intensity increases the electron density and its mobility, which increases the probability for a hole/electron to find an electron/hole for recombination, and consequently leads to the apparent kinetics in which the quantum yield (QY) decreases with increasing light intensity. It was also seen that an increase of organic surface coverage can increase the rate of hole interfacial transfer and reduce the probability for an electron to recombine with a hole. Moreover, the increase of organic coverage on the nano-material surface can also increase the accumulation of electrons, which enhances the mobility of electrons undergoing interfacial transfer and finally leads to an increase of photocatalytic activity. The simulation showed that temperature has a more complicated effect, as it can simultaneously change the activation of electrons, the interfacial transfer of holes, and the interfacial transfer of electrons. It was shown that the interfacial transfer of holes might play the main role at low temperature, with the temperature-dependence of QY
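
    A much-simplified, zero-dimensional sketch of this kind of event-based charge-kinetics simulation is given below: carriers are generated at a rate proportional to the light intensity and compete between recombination and interfacial transfer. The rate constants are placeholders and the spatial random walk of the original model is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(intensity, coverage, t_end=200.0):
    # Zero-dimensional event-based (Gillespie-type) sketch; rate constants are illustrative.
    k_gen, k_rec, k_e, k_h0 = intensity, 1e-3, 0.05, 0.5
    k_h = k_h0 * coverage                    # hole transfer requires an adsorbed organic molecule
    n_e = n_h = 0
    t, transferred_h = 0.0, 0
    while t < t_end:
        rates = np.array([k_gen,             # photo-generation of an e-/h+ pair
                          k_rec * n_e * n_h, # bulk recombination
                          k_e * n_e,         # electron interfacial transfer
                          k_h * n_h])        # hole interfacial transfer (oxidizes the organic)
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            n_e += 1; n_h += 1
        elif event == 1:
            n_e -= 1; n_h -= 1
        elif event == 2:
            n_e -= 1
        else:
            n_h -= 1; transferred_h += 1
    return transferred_h / t, n_e            # photocatalytic rate and accumulated electrons

for I in (0.5, 2.0, 8.0):
    rate, n_e = simulate(intensity=I, coverage=0.5)
    print(f"intensity {I:4.1f}: rate ≈ {rate:.2f}, quantum yield ≈ {rate / I:.2f}, electrons left {n_e}")
```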

  15. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    Science.gov (United States)

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of

  16. The interface free energy: Comparison of accurate Monte Carlo results for the 3D Ising model with effective interface models

    CERN Document Server

    Caselle, Michele; Panero, Marco

    2007-01-01

    We provide accurate Monte Carlo results for the free energy of interfaces with periodic boundary conditions in the 3D Ising model. We study a large range of inverse temperatures, allowing us to control corrections to scaling. In addition to square interfaces, we study rectangular interfaces for a large range of aspect ratios u = L_1/L_2. Our numerical results are compared with predictions of effective interface models. This comparison clearly verifies the effective Nambu-Goto model up to two-loop order. Our data also allow us to obtain the estimates T_c sigma^-1/2 = 1.235(2), m_0++ sigma^-1/2 = 3.037(16) and R_+ = f_+ sigma_0^2 = 0.387(2), which are more precise than previous ones.

  17. Treatment plan evaluation for interstitial photodynamic therapy in a mouse model by Monte Carlo simulation with FullMonte

    Science.gov (United States)

    Cassidy, Jeffrey; Betz, Vaughn; Lilge, Lothar

    2015-02-01

    Monte Carlo (MC) simulation is recognized as the “gold standard” for biophotonic simulation, capturing all relevant physics and material properties at the perceived cost of high computing demands. Tetrahedral-mesh-based MC simulations are particularly attractive due to the ability to refine the mesh at will to conform to complicated geometries or user-defined resolution requirements. Since no approximations of material or light-source properties are required, MC methods are applicable to the broadest set of biophotonic simulation problems. MC methods also have other attractive implementation features, including inherent parallelism, and permit a continuously variable quality-runtime tradeoff. We demonstrate here a complete MC-based prospective fluence dose evaluation system for interstitial PDT that generates dose-volume histograms on a tetrahedral-mesh geometry description. To our knowledge, this is the first such system for general interstitial photodynamic therapy employing MC methods and is therefore applicable to a very broad cross-section of anatomies and material properties. We demonstrate that evaluation of dose-volume histograms is an effective variance-reduction scheme in its own right, which greatly reduces the number of packets and hence the runtime required to achieve acceptable result confidence. We conclude that MC methods are feasible for general PDT treatment evaluation and planning, and considerably less costly than widely believed.
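
    A dose-volume histogram is a volume-weighted cumulative statistic over mesh elements; the minimal sketch below computes one from per-tetrahedron dose values and volumes. The data are synthetic placeholders, not FullMonte output.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic per-tetrahedron data standing in for a Monte Carlo fluence/dose map.
n_tet = 50_000
volumes = rng.gamma(shape=2.0, scale=0.5, size=n_tet)        # element volumes (arbitrary units)
doses = rng.lognormal(mean=1.0, sigma=0.6, size=n_tet)       # element doses (arbitrary units)

def dose_volume_histogram(doses, volumes, d_grid):
    """Cumulative DVH: fraction of the total volume receiving at least dose d, for each d in d_grid."""
    total = volumes.sum()
    order = np.argsort(doses)
    d_sorted, v_sorted = doses[order], volumes[order]
    cum_below = np.concatenate(([0.0], np.cumsum(v_sorted)))
    # Volume with dose >= d equals the total minus the volume of elements with dose < d.
    idx = np.searchsorted(d_sorted, d_grid, side="left")
    return (total - cum_below[idx]) / total

d_grid = np.linspace(0, doses.max(), 200)
dvh = dose_volume_histogram(doses, volumes, d_grid)
d95 = d_grid[np.searchsorted(-dvh, -0.95)]     # approximate dose still covering 95% of the volume
print(f"V(d >= {d_grid[50]:.2f}) = {dvh[50]:.3f},  D95 ≈ {d95:.2f}")
```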

  18. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    Energy Technology Data Exchange (ETDEWEB)

    Lagerlöf, Jakob H., E-mail: Jakob@radfys.gu.se [Department of Radiation Physics, Göteborg University, Göteborg 41345 (Sweden); Kindblom, Jon [Department of Oncology, Sahlgrenska University Hospital, Göteborg 41345 (Sweden); Bernhardt, Peter [Department of Radiation Physics, Göteborg University, Göteborg 41345, Sweden and Department of Nuclear Medicine, Sahlgrenska University Hospital, Göteborg 41345 (Sweden)

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to the tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO{sub 2})]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. A Green's function approach was used for the diffusion calculations, and Michaelis-Menten kinetics was used to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO{sub 2}), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO{sub 2} were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO{sub 2} distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became
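
    The step "randomly sampled with trilinear interpolation in the dataset" can be sketched with a regular-grid interpolator over the three variables (blood velocity, vessel proximity, inflowing pO2), each at three settings. The grid values and parameter ranges below are synthetic placeholders, not the simulated renal-carcinoma data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(9)

# Three variables at three settings each -> 27 simulated conditions (values here are synthetic).
velocity = np.array([0.5, 1.0, 2.0])        # mm/s
proximity = np.array([50.0, 100.0, 200.0])  # micrometres between vessels
p_in = np.array([40.0, 60.0, 80.0])         # inflowing pO2 (mmHg)

# Placeholder "simulated" pO2 at a reference depth for each of the 27 combinations.
grid_pO2 = (p_in[None, None, :] * velocity[:, None, None] /
            (velocity[:, None, None] + 0.01 * proximity[None, :, None]))

interp = RegularGridInterpolator((velocity, proximity, p_in), grid_pO2)   # trilinear by default

def sample_pO2(n):
    # Draw parameter triples (independent here; correlated draws would replace these lines),
    # then interpolate trilinearly in the precomputed dataset.
    v = rng.uniform(velocity[0], velocity[-1], n)
    d = rng.uniform(proximity[0], proximity[-1], n)
    p = rng.uniform(p_in[0], p_in[-1], n)
    return interp(np.column_stack([v, d, p]))

samples = sample_pO2(100_000)
print(f"median pO2 = {np.median(samples):.1f} mmHg, hypoxic fraction (<10 mmHg) = {(samples < 10).mean():.3f}")
```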

  19. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    Science.gov (United States)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting the rainfall input requirements of urban hydrology - including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records - rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial-temporal resolution). Moreover, rainfall error models have mostly been developed for, and tested at, large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge-only records through urban drainage models and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models - originally developed for large scales - have been tested at urban scales [2] and have been shown to fail to capture small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data

  20. Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods

    OpenAIRE

    NeuroData; Paninski, L

    2015-01-01

    Vogelstein JT, Paninski L. Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods. Statistical and Applied Mathematical Sciences Institute (SAMSI) Program on Sequential Monte Carlo Methods, 2008

  1. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA.

    Science.gov (United States)

    O'Hagan, Anthony; Stevenson, Matt; Madan, Jason

    2007-10-01

    Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
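
    The essence of the ANOVA approach, separating parameter (between-run) uncertainty from patient-level (within-run) sampling noise, can be illustrated with the law of total variance on a toy cost model; the paper's formulae for optimal sample sizes are not reproduced here, and the cost model and sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

def patient_cost(theta, n_patients):
    # Toy patient-level simulation: individual costs scatter widely around a mean set by theta.
    return rng.gamma(shape=2.0, scale=theta / 2.0, size=n_patients)

n_outer, n_inner = 200, 500          # parameter draws (PSA samples) and simulated patients per draw
means, vars_within = [], []
for _ in range(n_outer):
    theta = rng.lognormal(mean=np.log(1000.0), sigma=0.2)     # one sampled model input
    costs = patient_cost(theta, n_inner)
    means.append(costs.mean())
    vars_within.append(costs.var(ddof=1) / n_inner)           # Monte Carlo noise of this inner mean

means, vars_within = np.array(means), np.array(vars_within)
raw_between = means.var(ddof=1)
# ANOVA-style correction (law of total variance): subtract the average within-run noise
# to estimate the variance attributable to parameter uncertainty alone.
psa_variance = raw_between - vars_within.mean()
print(f"mean cost      : {means.mean():8.1f}")
print(f"naive variance : {raw_between:8.1f}")
print(f"PSA variance   : {psa_variance:8.1f}  (after removing patient-level noise)")
```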

  2. Modeling and simulation of radiation from hypersonic flows with Monte Carlo methods

    Science.gov (United States)

    Sohn, Ilyoup

    approximately 1 % was achieved with an efficiency about three times faster than the NEQAIR code. To perform accurate and efficient analyses of chemically reacting flowfield - radiation interactions, the direct simulation Monte Carlo (DSMC) and the photon Monte Carlo (PMC) radiative transport methods are used to simulate flowfield - radiation coupling from transitional to peak heating freestream conditions. The non-catalytic and fully catalytic surface conditions were modeled and good agreement of the stagnation-point convective heating between DSMC and continuum fluid dynamics (CFD) calculation under the assumption of fully catalytic surface was achieved. Stagnation-point radiative heating, however, was found to be very different. To simulate three-dimensional radiative transport, the finite-volume based PMC (FV-PMC) method was employed. DSMC - FV-PMC simulations with the goal of understanding the effect of radiation on the flow structure for different degrees of hypersonic non-equilibrium are presented. It is found that except for the highest altitudes, the coupling of radiation influences the flowfield, leading to a decrease in both heavy particle translational and internal temperatures and a decrease in the convective heat flux to the vehicle body. The DSMC - FV-PMC coupled simulations are compared with the previous coupled simulations and correlations obtained using continuum flow modeling and one-dimensional radiative transport. The modeling of radiative transport is further complicated by radiative transitions occurring during the excitation process of the same radiating gas species. This interaction affects the distribution of electronic state populations and, in turn, the radiative transport. The radiative transition rate in the excitation/de-excitation processes and the radiative transport equation (RTE) must be coupled simultaneously to account for non-local effects. The QSS model is presented to predict the electronic state populations of radiating gas species taking

  3. Monte Carlo study of the mixed Blume-Capel model with four-spin interactions

    Science.gov (United States)

    Jabar, A.; Tahiri, N.; Jetto, K.; Bahmad, L.

    2017-04-01

    Using Monte Carlo simulations, we study the magnetic properties of a ferrimagnetic mixed-spin (3/2, 2) system on a three-dimensional lattice with four-spin interactions. On the one hand, we derive analytically the ground-state phase diagrams in different planes, and find that all 4 × 5 = 20 configurations are stable. On the other hand, for non-zero temperatures, the magnetic properties and phase diagrams are deduced. The total and partial magnetizations/susceptibilities are also presented and discussed for different values of the reduced exchange interactions. The critical temperature is displaced toward lower temperatures. To complete this study, we examine the corresponding hysteresis loop behaviors of the studied system for different values of the physical parameters.

  4. Validation of GEANT4 Monte Carlo models with a highly granular scintillator-steel hadron calorimeter

    Science.gov (United States)

    Adloff, C.; Blaha, J.; Blaising, J.-J.; Drancourt, C.; Espargilière, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Schlereth, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S. T.; Sosebee, M.; White, A. P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N. K.; Mavromanolakis, G.; Thomson, M. A.; Ward, D. R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Apostolakis, J.; Dotti, A.; Folger, G.; Ivantchenko, V.; Uzhinskiy, V.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G. C.; Dyshkant, A.; Lima, J. G. R.; Zutshi, V.; Hostachy, J.-Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Göttlicher, P.; Günter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.-I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.-Ch; Shen, W.; Stamen, R.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G. W.; Kawagoe, K.; Dauncey, P. D.; Magnan, A.-M.; Bartsch, V.; Wing, M.; Salvatore, F.; Calvo Alamillo, E.; Fouz, M.-C.; Puerta-Pelayo, J.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Popov, V.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Tikhomirov, V.; Kiesling, C.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Amjad, M. S.; Bonis, J.; Callier, S.; Conforti di Lorenzo, S.; Cornebise, P.; Doublet, Ph; Dulucq, F.; Fleury, J.; Frisson, T.; van der Kolk, N.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch; Pöschl, R.; Raux, L.; Rouëné, J.; Seguin-Moreau, N.; Anduze, M.; Boudry, V.; Brient, J.-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Götze, M.; Hartbrich, O.; Sauer, J.; Weber, S.; Zeitnitz, C.

    2013-07-01

    Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.

  5. Simulation on Mechanical Properties of Tungsten Carbide Thin Films Using Monte Carlo Model

    Directory of Open Access Journals (Sweden)

    Liliam C. Agudelo-Morimitsu

    2012-12-01

    Full Text Available The aim of this paper is to study the mechanical behavior of a substrate-coating system using simulation methods. The contact stresses and the elastic deformation were analyzed by applying a normal load to the surface of a system consisting of a tungsten carbide (WC) thin film, which is used as a wear-resistant material, on a stainless steel substrate. The analysis is based on Monte Carlo simulations using the Metropolis algorithm. The phenomenon was simulated for a face-centered cubic (fcc) crystalline structure, for both the coating and the substrate, assuming that the uniaxial strain is taken along the z-axis. Results were obtained for different values of the normal load applied to the surface of the coating, yielding stress-strain curves. From these curves, a Young's modulus of 600 GPa was obtained, similar to reported values.

  6. Experiments and Monte Carlo modeling of a higher resolution Cadmium Zinc Telluride detector for safeguards applications

    Science.gov (United States)

    Borella, Alessandro

    2016-09-01

    The Belgian Nuclear Research Centre is engaged in R&D activity in the field of Non Destructive Analysis of nuclear materials, with a focus on spent fuel characterization. A 500 mm3 Cadmium Zinc Telluride (CZT) detector with enhanced resolution was recently purchased. With a full width at half maximum of 1.3% at 662 keV, the detector is very promising in view of its use for applications such as determination of uranium enrichment and plutonium isotopic composition, as well as measurements on spent fuel. In this paper, I report on the characterization of this detector. The detector energy calibration, peak shape and efficiency were determined from experimental data. The data included measurements with calibrated sources, both in a bare and in a shielded environment. In addition, Monte Carlo calculations with the MCNPX code were carried out and benchmarked against the experiments.

  7. Validation of GEANT4 Monte Carlo Models with a Highly Granular Scintillator-Steel Hadron Calorimeter

    CERN Document Server

    Adloff, C; Blaising, J J; Drancourt, C; Espargiliere, A; Gaglione, R; Geffroy, N; Karyotakis, Y; Prast, J; Vouters, G; Francis, K; Repond, J; Schlereth, J; Smith, J; Xia, L; Baldolemar, E; Li, J; Park, S T; Sosebee, M; White, A P; Yu, J; Buanes, T; Eigen, G; Mikami, Y; Watson, N K; Mavromanolakis, G; Thomson, M A; Ward, D R; Yan, W; Benchekroun, D; Hoummada, A; Khoulaki, Y; Apostolakis, J; Dotti, A; Folger, G; Ivantchenko, V; Uzhinskiy, V; Benyamna, M; Cârloganu, C; Fehr, F; Gay, P; Manen, S; Royer, L; Blazey, G C; Dyshkant, A; Lima, J G R; Zutshi, V; Hostachy, J Y; Morin, L; Cornett, U; David, D; Falley, G; Gadow, K; Gottlicher, P; Gunter, C; Hermberg, B; Karstensen, S; Krivan, F; Lucaci-Timoce, A I; Lu, S; Lutz, B; Morozov, S; Morgunov, V; Reinecke, M; Sefkow, F; Smirnov, P; Terwort, M; Vargas-Trevino, A; Feege, N; Garutti, E; Marchesini, I; Ramilli, M; Eckert, P; Harion, T; Kaplan, A; Schultz-Coulon, H Ch; Shen, W; Stamen, R; Bilki, B; Norbeck, E; Onel, Y; Wilson, G W; Kawagoe, K; Dauncey, P D; Magnan, A M; Bartsch, V; Wing, M; Salvatore, F; Alamillo, E Calvo; Fouz, M C; Puerta-Pelayo, J; Bobchenko, B; Chadeeva, M; Danilov, M; Epifantsev, A; Markin, O; Mizuk, R; Novikov, E; Popov, V; Rusinov, V; Tarkovsky, E; Kirikova, N; Kozlov, V; Smirnov, P; Soloviev, Y; Buzhan, P; Ilyin, A; Kantserov, V; Kaplin, V; Karakash, A; Popova, E; Tikhomirov, V; Kiesling, C; Seidel, K; Simon, F; Soldner, C; Szalay, M; Tesar, M; Weuste, L; Amjad, M S; Bonis, J; Callier, S; Conforti di Lorenzo, S; Cornebise, P; Doublet, Ph; Dulucq, F; Fleury, J; Frisson, T; van der Kolk, N; Li, H; Martin-Chassard, G; Richard, F; de la Taille, Ch; Poschl, R; Raux, L; Rouene, J; Seguin-Moreau, N; Anduze, M; Boudry, V; Brient, J-C; Jeans, D; Mora de Freitas, P; Musat, G; Reinhard, M; Ruan, M; Videau, H; Bulanek, B; Zacek, J; Cvach, J; Gallus, P; Havranek, M; Janata, M; Kvasnicka, J; Lednicky, D; Marcisovsky, M; Polak, I; Popule, J; Tomasek, L; Tomasek, M; Ruzicka, P; Sicho, P; Smolik, J; Vrba, V; Zalesak, J; Belhorma, B; Ghazlane, H; Takeshita, T; Uozumi, S; Gotze, M; Hartbrich, O; Sauer, J; Weber, S; Zeitnitz, C

    2013-01-01

    Calorimeters with a high granularity are a fundamental requirement of the Particle Flow paradigm. This paper focuses on the prototype of a hadron calorimeter with analog readout, consisting of thirty-eight scintillator layers alternating with steel absorber planes. The scintillator plates are finely segmented into tiles individually read out via Silicon Photomultipliers. The presented results are based on data collected with pion beams in the energy range from 8 GeV to 100 GeV. The fine segmentation of the sensitive layers and the high sampling frequency allow for an excellent reconstruction of the spatial development of hadronic showers. A comparison between data and Monte Carlo simulations is presented, concerning both the longitudinal and lateral development of hadronic showers and the global response of the calorimeter. The performance of several GEANT4 physics lists with respect to these observables is evaluated.

  8. Monte Carlo modeling and optimization of contrast-enhanced radiotherapy of brain tumors

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Lopez, C E; Garnica-Garza, H M, E-mail: hgarnica@cinvestav.mx [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional Unidad Monterrey, Via del Conocimiento 201 Parque de Investigacion e Innovacion Tecnologica, Apodaca NL CP 66600 (Mexico)

    2011-07-07

    Contrast-enhanced radiotherapy involves the use of a kilovoltage x-ray beam to impart a tumoricidal dose to a target into which a radiological contrast agent has previously been loaded in order to increase the x-ray absorption efficiency. In this treatment modality the selection of the proper x-ray spectrum is important, since at the energy range of interest the penetration ability of the x-ray beam is limited. For the treatment of brain tumors, the situation is further complicated by the presence of the skull, which also absorbs kilovoltage x-rays very efficiently. In this work, using Monte Carlo simulation, a realistic patient model and the Cimmino algorithm, several irradiation techniques and x-ray spectra are evaluated for two possible clinical scenarios with respect to the location of the target: a tumor located at the center of the head and one at a position close to the surface of the head. It will be shown that x-ray spectra such as those produced by a conventional x-ray generator are capable of producing absorbed dose distributions with excellent uniformity in the target, as well as a dose differential of at least 20% of the prescribed tumor dose between the target and the surrounding brain tissue, when the tumor is located at the center of the head. However, for tumors with a lateral displacement from the center and close to the skull, while the absorbed dose distribution in the target is also quite uniform and the dose to the surrounding brain tissue is within an acceptable range, hot spots arise in the skull which are above what is considered a safe limit. A comparison with previously reported results using mono-energetic x-ray beams, such as those produced by a radiation synchrotron, is also presented, and it is shown that the absorbed dose distributions rendered by this type of beam are very similar to those obtained with a conventional x-ray beam.
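
    The Cimmino algorithm referenced here is a simultaneous-projection method for systems of linear inequalities (in treatment planning, beam-weight constraints of the form dose_min ≤ D·w ≤ dose_max). The sketch below shows the generic method on toy data; the dose matrix, bounds, and the added non-negativity clipping of the weights are assumptions, not the authors' planning code.

```python
import numpy as np

rng = np.random.default_rng(11)

def cimmino(A, b, x0, weights=None, lam=1.0, n_iter=500):
    """Find x approximately satisfying A x <= b by averaging projections onto the
    violated half-spaces (Cimmino's simultaneous projection method)."""
    m, n = A.shape
    w = np.full(m, 1.0 / m) if weights is None else weights / weights.sum()
    norms2 = (A ** 2).sum(axis=1)
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        residual = A @ x - b
        violation = np.clip(residual, 0.0, None)          # only violated constraints project
        x -= lam * (A.T @ (w * violation / norms2))
        x = np.clip(x, 0.0, None)                         # assumption: beam weights stay non-negative
    return x, np.clip(A @ x - b, 0.0, None).max()

# Toy problem: 3 beamlets, dose to 5 voxels should lie between prescribed bounds.
D = rng.uniform(0.1, 1.0, size=(5, 3))                    # dose-deposition matrix (arbitrary)
d_min, d_max = 0.8, 1.2
A = np.vstack([D, -D])                                    # D w <= d_max  and  -D w <= -d_min
b = np.concatenate([np.full(5, d_max), np.full(5, -d_min)])
w_opt, worst = cimmino(A, b, x0=np.zeros(3))
print("beam weights:", np.round(w_opt, 3), " worst residual violation:", round(worst, 4))
```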

  9. Particle-gamma and particle-particle correlations in nuclear reactions using Monte Carlo Hauser-Feshback model

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Watanabe, Takehito [Los Alamos National Laboratory; Chadwick, Mark [Los Alamos National Laboratory

    2010-01-01

    Monte Carlo simulations for particle and {gamma}-ray emissions from an excited nucleus based on the Hauser-Feshbach statistical theory are performed to obtain correlated information between emitted particles and {gamma}-rays. We calculate neutron-induced reactions on {sup 51}V to demonstrate unique advantages of the Monte Carlo method, namely the correlated {gamma}-rays in the neutron radiative capture reaction, the neutron and {gamma}-ray correlation, and the particle-particle correlations at higher energies. It is shown that properties of nuclear reactions that are difficult to study with a deterministic method can be obtained with Monte Carlo simulations.
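
    The event-by-event character of the Monte Carlo Hauser-Feshbach approach can be illustrated with a toy cascade sampler. The level scheme and branching ratios below are invented for illustration; a real calculation would derive them from Hauser-Feshbach transmission coefficients and level densities.

```python
import numpy as np

rng = np.random.default_rng(17)

# Invented toy level scheme (energies in MeV) and gamma branching ratios:
# level index -> excitation energy, and level -> list of (final level, probability).
levels = {3: 4.8, 2: 3.1, 1: 1.2, 0: 0.0}
branchings = {
    3: [(2, 0.5), (1, 0.3), (0, 0.2)],
    2: [(1, 0.6), (0, 0.4)],
    1: [(0, 1.0)],
}

def sample_cascade(start=3):
    """Sample one de-excitation cascade; returns the list of gamma-ray energies."""
    gammas, current = [], start
    while current != 0:
        finals, probs = zip(*branchings[current])
        nxt = int(rng.choice(finals, p=probs))
        gammas.append(levels[current] - levels[nxt])
        current = nxt
    return gammas

events = [sample_cascade() for _ in range(100_000)]
mult = np.array([len(g) for g in events])
first = np.array([g[0] for g in events])

# Event-by-event correlation between gamma multiplicity and the first gamma energy,
# information that an average (deterministic) calculation does not provide directly.
for m in (1, 2, 3):
    sel = mult == m
    print(f"multiplicity {m}: P = {sel.mean():.3f}, "
          f"mean first-gamma energy = {first[sel].mean():.2f} MeV")
```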

  10. Monte Carlo Error Analysis Applied to Core Formation: The Single-stage Model Revived

    Science.gov (United States)

    Cottrell, E.; Walter, M. J.

    2009-12-01

    The last decade has witnessed an explosion of studies that scrutinize whether or not the siderophile element budget of the modern mantle can plausibly be explained by metal-silicate equilibration in a deep magma ocean during core formation. The single-stage equilibrium scenario is seductive because experiments that equilibrate metal and silicate can then serve as a proxy for the early Earth, and the physical and chemical conditions of core formation can be identified. Recently, models have become more complex as they try to accommodate the proliferation of element partitioning data sets, each of which sets its own limits on the pressure, temperature, and chemistry of equilibration. The ability of single-stage models to explain mantle chemistry has subsequently been challenged, resulting in the development of complex multi-stage core formation models. Here we show that the extent to which extant partitioning data are consistent with single-stage core formation depends heavily upon (1) the assumptions made when regressing experimental partitioning data, (2) the certainty with which regression coefficients are known, and (3) the certainty with which the core/mantle concentration ratios of the siderophile elements are known. We introduce a Monte Carlo algorithm coded in MATLAB that samples parameter space in pressure and oxygen fugacity for a given mantle composition (nbo/t) and liquidus, and returns the number of equilibrium single-stage liquidus “solutions” that are permissible, taking into account the uncertainty in regression parameters and the range of acceptable core/mantle ratios. Here we explore the consequences of regression parameter uncertainty and the impact of regression construction on model outcomes. We find that the form of the partition coefficient (Kd with enforced valence state, or D) and the handling of the temperature effect (based on 1-atm free energy data or high P-T experimental observations) critically affects model outcomes. We consider the most
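
    The sampling strategy described above can be sketched as follows (in Python rather than the authors' MATLAB). The partitioning regression, its coefficient uncertainties, the liquidus parametrization and the accepted core/mantle ratio are all placeholders; only the structure of the calculation is being illustrated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder regression for log D of a siderophile element:
#   log D = a + b/T + c*P/T + d*(log fO2 relative to the IW buffer)
# Coefficients and their 1-sigma uncertainties are purely illustrative.
coef_mean = np.array([1.5, 2500.0, -40.0, -0.5])
coef_sd   = np.array([0.3,  400.0,  10.0,  0.1])

# Accepted core/mantle concentration ratio (illustrative bounds)
D_target_lo, D_target_hi = 20.0, 40.0

def liquidus_T(P_gpa):
    """Placeholder peridotite liquidus, K."""
    return 2000.0 + 30.0 * P_gpa

n_solutions = 0
n_trials = 100_000
for _ in range(n_trials):
    a, b, c, d = rng.normal(coef_mean, coef_sd)   # propagate regression uncertainty
    P = rng.uniform(5.0, 60.0)                    # pressure, GPa
    dIW = rng.uniform(-4.0, -1.0)                 # oxygen fugacity relative to IW
    T = liquidus_T(P)
    logD = a + b / T + c * P / T + d * dIW
    if D_target_lo <= 10**logD <= D_target_hi:    # condition counts as a "solution"
        n_solutions += 1

print(f"fraction of permissible single-stage solutions: {n_solutions / n_trials:.3f}")
```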

  11. Modelling of neutron and photon transport in iron and concrete radiation shieldings by the Monte Carlo method - Version 2

    CERN Document Server

    Žukauskaite, A; Plukiene, R; Plukis, A

    2007-01-01

    Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials commonly used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows answers to be obtained by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 – γ-ray beams (1-10 MeV), and HIMAC and ISIS-800 – high-energy neutron (20-800 MeV) transport in iron and concrete. The results were then compared with experimental data.

  12. McSCIA: application of the Equivalence Theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    Directory of Open Access Journals (Sweden)

    F. Spada

    2006-02-01

    Full Text Available A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAMACHY) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can thus perform simulations for both plane-parallel and spherical atmospheres. The latter geometry is essential for the interpretation of limb satellite measurements, as performed by SCIAMACHY on board ESA's Envisat. The model can simulate UV-vis-NIR radiation.

    First the ray-tracing algorithm is presented in detail, and then successfully validated against literature references, both in plane-parallel and in spherical geometry. A simple 1-D model is used to explain two different ways of treating absorption. One method uses the single scattering albedo while the other uses the equivalence theorem. The equivalence theorem is based on a separation of absorption and scattering. It is shown that both methods give, in a statistical way, identical results for a wide variety of scenarios. Both absorption methods are included in McSCIA, and it is shown that also for a 3-D case both formulations give identical results. McSCIA limb profiles for atmospheres with and without absorption compare well with those of the state-of-the-art Monte Carlo radiative transfer model MCC++.

    A simplification of the photon statistics may lead to very fast calculations of absorption features in the atmosphere. However, these simplifications potentially introduce biases in the results. McSCIA does not use simplifications and is therefore a relatively slow implementation of the equivalence theorem. For the first time, however, the validity of the equivalence theorem is demonstrated in a spherical 3-D radiative transfer model.
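
    The two absorption treatments compared in this record can be demonstrated in a toy 1-D slab: one estimator applies the single-scattering albedo at every collision, the other runs a scattering-only random walk and applies absorption afterwards as exp(-sigma_a × path length), which is the essence of the equivalence theorem. The geometry, cross sections and isotropic redirection below are simplifying assumptions, not McSCIA itself; both estimators should agree within Monte Carlo noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmittance(sigma_s, sigma_a, thickness, n_photons, use_equivalence):
    """Toy 1-D slab transmittance with isotropic scattering.

    use_equivalence=False: classic single-scattering-albedo weighting at collisions.
    use_equivalence=True : scattering-only random walk; absorption applied afterwards
                           as exp(-sigma_a * accumulated path length).
    """
    sigma_ext = sigma_s if use_equivalence else sigma_s + sigma_a
    albedo = sigma_s / (sigma_s + sigma_a)
    total = 0.0
    for _ in range(n_photons):
        x, mu, w, path = 0.0, 1.0, 1.0, 0.0
        while True:
            s = -np.log(rng.random()) / sigma_ext      # free path to next collision
            x_new = x + mu * s
            if x_new >= thickness:                     # transmitted through the slab
                path += (thickness - x) / abs(mu)
                if use_equivalence:
                    w = np.exp(-sigma_a * path)        # absorption applied a posteriori
                total += w
                break
            if x_new <= 0.0:                           # reflected back out: no tally
                break
            path += s
            x = x_new
            if not use_equivalence:
                w *= albedo                            # absorb a fraction at each collision
            mu = rng.uniform(-1.0, 1.0)                # isotropic re-direction
    return total / n_photons

for flag in (False, True):
    print("equivalence theorem:", flag,
          transmittance(sigma_s=2.0, sigma_a=0.5, thickness=1.0,
                        n_photons=20000, use_equivalence=flag))
```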

  13. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  14. Monte Carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  15. Lattice gauge theories and Monte Carlo simulations

    CERN Document Server

    Rebbi, Claudio

    1983-01-01

    This volume is the most up-to-date review on Lattice Gauge Theories and Monte Carlo Simulations. It consists of two parts. Part one is an introductory lecture on the lattice gauge theories in general, Monte Carlo techniques and on the results to date. Part two consists of important original papers in this field. These selected reprints involve the following: Lattice Gauge Theories, General Formalism and Expansion Techniques, Monte Carlo Simulations. Phase Structures, Observables in Pure Gauge Theories, Systems with Bosonic Matter Fields, Simulation of Systems with Fermions.

  16. Quantum Monte Carlo for minimum energy structures

    CERN Document Server

    Wagner, Lucas K

    2010-01-01

    We present an efficient method to find minimum energy structures using energy estimates from accurate quantum Monte Carlo calculations. This method involves a stochastic process formed from the stochastic energy estimates from Monte Carlo that can be averaged to find precise structural minima while using inexpensive calculations with moderate statistical uncertainty. We demonstrate the applicability of the algorithm by minimizing the energy of the H2O-OH- complex and showing that the structural minima from quantum Monte Carlo calculations affect the qualitative behavior of the potential energy surface substantially.
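
    As a rough illustration of locating a structural minimum when every energy evaluation carries statistical noise (not the authors' algorithm), the sketch below uses simultaneous-perturbation stochastic approximation with iterate averaging; the quadratic "energy surface" and the noise level are stand-ins for quantum Monte Carlo estimates.

```python
import numpy as np

rng = np.random.default_rng(11)

def noisy_energy(x, sigma=0.05):
    """Stand-in for a QMC energy evaluation: smooth surface plus statistical noise."""
    true = (x[0] - 1.0)**2 + 0.5 * (x[1] + 0.5)**2
    return true + rng.normal(0.0, sigma)

def spsa_minimize(x0, n_iter=2000, a=0.05, c=0.1):
    """SPSA with iterate averaging: each step needs only two noisy energy
    evaluations, and averaging the trajectory gives a structural estimate far
    more precise than any single noisy evaluation."""
    x = np.array(x0, dtype=float)
    trajectory = []
    for k in range(1, n_iter + 1):
        ck = c / k**0.101                              # perturbation size schedule
        ak = a / k**0.602                              # step size schedule
        delta = rng.choice([-1.0, 1.0], size=x.size)   # random perturbation direction
        g = (noisy_energy(x + ck * delta) - noisy_energy(x - ck * delta)) \
            / (2.0 * ck) * (1.0 / delta)               # stochastic gradient estimate
        x = x - ak * g
        trajectory.append(x.copy())
    return np.mean(trajectory[n_iter // 2:], axis=0)   # average the second half

print(spsa_minimize([0.0, 0.0]))   # should approach the true minimum near (1.0, -0.5)
```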

  17. MCNP-X Monte Carlo Code Application for Mass Attenuation Coefficients of Concrete at Different Energies by Modeling 3 × 3 Inch NaI(Tl) Detector and Comparison with XCOM and Monte Carlo Data

    Directory of Open Access Journals (Sweden)

    Huseyin Ozan Tekin

    2016-01-01

    Full Text Available Gamma-ray measurements in various research fields require efficient detectors. One of these research fields is the determination of mass attenuation coefficients of different materials. Apart from experimental studies, the Monte Carlo (MC) method has become one of the most popular tools in detector studies. An NaI(Tl) detector has been modeled and, as a validation of the model, the absolute efficiency of a 3 × 3 inch cylindrical NaI(Tl) detector has been calculated using the general-purpose Monte Carlo code MCNP-X (version 2.4.0) and compared with previous studies in the literature in the range of 661–2620 keV. In the present work, the applicability of the MCNP-X Monte Carlo code to the mass attenuation of a concrete sample, as a building material, at photon energies of 59.5 keV, 80 keV, 356 keV, 661.6 keV, 1173.2 keV, and 1332.5 keV has been tested by using the validated NaI(Tl) detector. The mass attenuation coefficients of the concrete sample have been calculated. The calculated results agree well with experimental and other theoretical results. The results indicate that this procedure can be followed to determine gamma-ray attenuation data at other required energies in other or newly developed complex materials. It can be concluded that the Monte Carlo method is a strong tool not only for efficiency studies but also for mass attenuation coefficient calculations.
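
    The mass attenuation coefficient itself follows from the Beer-Lambert law, mu/rho = ln(I0/I)/(rho t). A minimal sketch is given below; the photopeak counts, concrete density and slab thickness are placeholder values, whereas in the paper the intensities come from the validated NaI(Tl) detector model.

```python
import numpy as np

def mass_attenuation(I0, I, density_g_cm3, thickness_cm):
    """Beer-Lambert: mu/rho = ln(I0/I) / (rho * t), returned in cm^2/g."""
    return np.log(I0 / I) / (density_g_cm3 * thickness_cm)

# Placeholder photopeak counts without (I0) and with (I) a 5 cm concrete slab
energies_keV = [59.5, 661.6, 1332.5]
I0 = np.array([1.0e6, 1.0e6, 1.0e6])
I  = np.array([5.2e4, 3.6e5, 4.6e5])

mu_rho = mass_attenuation(I0, I, density_g_cm3=2.3, thickness_cm=5.0)
for E, m in zip(energies_keV, mu_rho):
    print(f"{E:7.1f} keV   mu/rho = {m:.4f} cm^2/g")
```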

  18. Cluster hybrid Monte Carlo simulation algorithms

    Science.gov (United States)

    Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.

    2002-06-01

    We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
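
    A minimal sketch of such a hybrid update scheme for the spin-1/2 Ising model on a square lattice is given below; the alternation of one Metropolis sweep with one Wolff cluster flip, the lattice size and the temperature are illustrative choices, not the schedule used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis_sweep(spins, beta):
    """One sweep of single-spin-flip Metropolis updates (J = 1, periodic boundaries)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def wolff_update(spins, beta):
    """One Wolff update: grow a cluster of aligned spins with p = 1 - exp(-2*beta), flip it."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    i, j = rng.integers(L, size=2)
    seed_spin = spins[i, j]
    stack, cluster = [(i, j)], {(i, j)}
    while stack:
        x, y = stack.pop()
        for nx, ny in ((x+1) % L, y), ((x-1) % L, y), (x, (y+1) % L), (x, (y-1) % L):
            if (nx, ny) not in cluster and spins[nx, ny] == seed_spin and rng.random() < p_add:
                cluster.add((nx, ny))
                stack.append((nx, ny))
    for x, y in cluster:
        spins[x, y] *= -1

# Hybrid schedule (illustrative): one Metropolis sweep followed by one Wolff flip
L, beta = 32, 0.44                       # beta near the 2D critical value ~0.4407
spins = rng.choice([-1, 1], size=(L, L))
mags = []
for step in range(1000):
    metropolis_sweep(spins, beta)
    wolff_update(spins, beta)
    if step > 200:
        mags.append(abs(spins.mean()))
print("mean |m| =", np.mean(mags))
```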

  19. McSCIA: application of the Equivalence Theorem in a Monte Carlo radiative transfer model for spherical shell atmospheres

    Directory of Open Access Journals (Sweden)

    F. Spada

    2006-01-01

    Full Text Available A new multiple-scattering Monte Carlo 3-D radiative transfer model named McSCIA (Monte Carlo for SCIAMACHY) is presented. The backward technique is used to efficiently simulate narrow field of view instruments. The McSCIA algorithm has been formulated as a function of the Earth's radius, and can thus perform simulations for both plane-parallel and spherical atmospheres. The latter geometry is essential for the interpretation of limb satellite measurements, as performed by SCIAMACHY on board ESA's Envisat. The model can simulate UV-vis-NIR radiation. First the ray-tracing algorithm is presented in detail, and then successfully validated against literature references, both in plane-parallel and in spherical geometry. A simple 1-D model is used to explain two different ways of treating absorption. One method uses the single scattering albedo while the other uses the equivalence theorem. The equivalence theorem is based on a separation of absorption and scattering. It is shown that both methods give, in a statistical way, identical results for a wide variety of scenarios. Both absorption methods are included in McSCIA, and it is shown that also for a 3-D case both formulations give identical results. McSCIA limb profiles for atmospheres with and without absorption compare well with those of the state-of-the-art Monte Carlo radiative transfer model MCC++. A simplification of the photon statistics may lead to very fast calculations of absorption features in the atmosphere. However, these simplifications potentially introduce biases in the results. McSCIA does not use simplifications and is therefore a relatively slow implementation of the equivalence theorem.

  20. Development and validation of a measurement-based source model for kilovoltage cone-beam CT Monte Carlo dosimetry simulations

    Science.gov (United States)

    McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan

    2013-01-01

    Purpose: The purpose of this study is to adapt an equivalent source model originally developed for conventional CT Monte Carlo dose quantification to the radiation oncology context and validate its application for evaluating concomitant dose incurred by a kilovoltage (kV) cone-beam CT (CBCT) system integrated into a linear accelerator. Methods: In order to properly characterize beams from the integrated kV CBCT system, the authors have adapted a previously developed equivalent source model consisting of an equivalent spectrum module that takes into account intrinsic filtration and an equivalent filter module characterizing the added bowtie filtration. An equivalent spectrum was generated for an 80, 100, and 125 kVp beam with beam energy characterized by half-value layer measurements. An equivalent filter description was generated from bowtie profile measurements for both the full- and half-bowtie. Equivalent source models for each combination of equivalent spectrum and filter were incorporated into the Monte Carlo software package MCNPX. Monte Carlo simulations were then validated against in-phantom measurements for both the radiographic and CBCT mode of operation of the kV CBCT system. Radiographic and CBCT imaging dose was measured for a variety of protocols at various locations within a body (32 cm in diameter) and head (16 cm in diameter) CTDI phantom. The in-phantom radiographic and CBCT dose was simulated at all measurement locations and converted to absolute dose using normalization factors calculated from air scan measurements and corresponding simulations. The simulated results were compared with the physical measurements and their discrepancies were assessed quantitatively. Results: Strong agreement was observed between in-phantom simulations and measurements. For the radiographic protocols, simulations uniformly underestimated measurements by 0.54%–5.14% (mean difference = −3.07%, SD = 1.60%). For the CBCT protocols, simulations uniformly

  1. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  2. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
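
    The nesting structure of the proposed test can be illustrated with a much simpler example than a spatial point process: testing whether counts are Poisson with an estimated mean. Each outer replicate, simulated from the fitted model, receives its own inner reference distribution, which removes the plug-in bias of the naive test. The statistic, sample sizes and numbers of replicates below are illustrative assumptions, not the paper's procedure for point patterns.

```python
import numpy as np

rng = np.random.default_rng(3)

def statistic(x):
    """Dispersion-style GOF statistic for 'is this sample Poisson?'."""
    return x.var(ddof=1) / x.mean()

def plug_in_p_value(x, n_sim):
    """Naive Monte Carlo GOF p-value with the estimated mean plugged in."""
    lam_hat, t_obs, n = x.mean(), statistic(x), len(x)
    t_sim = np.array([statistic(rng.poisson(lam_hat, n)) for _ in range(n_sim)])
    return (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)

def nested_p_value(x, n_outer=199, n_inner=99):
    """Nested (double) Monte Carlo test: each outer replicate gets its own
    inner reference distribution, which corrects the plug-in bias."""
    lam_hat, n = x.mean(), len(x)
    p_obs = plug_in_p_value(x, n_inner)
    p_outer = np.empty(n_outer)
    for j in range(n_outer):
        y = rng.poisson(lam_hat, n)            # outer replicate under the fitted model
        p_outer[j] = plug_in_p_value(y, n_inner)
    return (1 + np.sum(p_outer <= p_obs)) / (n_outer + 1)

data = rng.poisson(4.0, size=60)               # toy data actually drawn from the null
print("naive p =", plug_in_p_value(data, 199), " nested p =", nested_p_value(data))
```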

  3. A Monte Carlo model for out-of-field dose calculation from high-energy photon therapy.

    Science.gov (United States)

    Kry, Stephen F; Titt, Uwe; Followill, David; Pönisch, Falk; Vassiliev, Oleg N; White, R Allen; Stovall, Marilyn; Salehpour, Mohammad

    2007-09-01

    As cancer therapy becomes more efficacious and patients survive longer, the potential for late effects increases, including effects induced by radiation dose delivered away from the treatment site. This out-of-field radiation is of particular concern with high-energy radiotherapy, as neutrons are produced in the accelerator head. We recently developed an accurate Monte Carlo model of a Varian 2100 accelerator using MCNPX for calculating the dose away from the treatment field resulting from low-energy therapy. In this study, we expanded and validated our Monte Carlo model for high-energy (18 MV) photon therapy, including both photons and neutrons. Simulated out-of-field photon doses were compared with measurements made with thermoluminescent dosimeters in an acrylic phantom up to 55 cm from the central axis. Simulated neutron fluences and energy spectra were compared with measurements using moderated gold foil activation in moderators and data from the literature. The average local difference between the calculated and measured photon dose was 17%, including doses as low as 0.01% of the central axis dose. The out-of-field photon dose varied substantially with field size and distance from the edge of the field but varied little with depth in the phantom, except at depths shallower than 3 cm, where the dose sharply increased. On average, the difference between the simulated and measured neutron fluences was 19% and good agreement was observed with the neutron spectra. The neutron dose equivalent varied little with field size or distance from the central axis but decreased with depth in the phantom. Neutrons were the dominant component of the out-of-field dose equivalent for shallow depths and large distances from the edge of the treatment field. This Monte Carlo model is useful to both physicists and clinicians when evaluating out-of-field doses and associated potential risks.

  4. Voxel2MCNP: a framework for modeling, simulation and evaluation of radiation transport scenarios for Monte Carlo codes.

    Science.gov (United States)

    Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian

    2013-08-21

    The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX's MCTAL for simulation results have been added. The framework is presented and discussed in this paper and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application.
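
    The generic, code-independent data model described here can be pictured as a handful of linked entities. The Python dataclasses below are an illustrative sketch only; the names and fields are assumptions and do not reproduce Voxel2MCNP's actual schema or XML format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Material:
    name: str
    density_g_cm3: float
    composition: dict               # element symbol -> mass fraction

@dataclass
class Geometry:
    kind: str                       # e.g. "voxel_lattice" or "solid"
    material: Material
    parameters: dict                # voxel dimensions, solid-geometry description, ...

@dataclass
class Source:
    nuclide: str                    # decay data could be taken from ENSDF
    activity_bq: float
    position_cm: tuple

@dataclass
class Tally:
    quantity: str                   # e.g. "pulse_height" or "absorbed_dose"
    region: Geometry

@dataclass
class Scenario:
    """Code-independent scenario that a writer module could translate into MCNPX input."""
    phantom: Geometry
    detectors: List[Geometry]
    sources: List[Source]
    tallies: List[Tally] = field(default_factory=list)
```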

  5. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  6. Periodic ordering of clusters and stripes in a two-dimensional lattice model. II. Results of Monte Carlo simulation.

    Science.gov (United States)

    Almarza, N G; Pȩkalski, J; Ciach, A

    2014-04-28

    The triangular lattice model with nearest-neighbor attraction and third-neighbor repulsion, introduced by Pȩkalski, Ciach, and Almarza [J. Chem. Phys. 140, 114701 (2014)], is studied by Monte Carlo simulation. Introduction of appropriate order parameters allowed us to construct a phase diagram, where different phases with patterns made of clusters, bubbles or stripes are thermodynamically stable. We observe, in particular, two distinct lamellar phases: the less ordered one with global orientational order and the more ordered one with both orientational and translational order. Our results concern spontaneous pattern formation on solid surfaces, fluid interfaces or membranes that is driven by competing interactions between adsorbing particles or molecules.

  7. Cloud-based Monte Carlo modelling of BSSRDF for the rendering of human skin appearance (Conference Presentation)

    Science.gov (United States)

    Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.

    2016-03-01

    We present a new Monte Carlo based approach to the modelling of the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account for different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. The results of simulating human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.

  8. Calibrated multi-subband Monte Carlo modeling of tunnel-FETs in silicon and III-V channel materials

    Science.gov (United States)

    Revelant, A.; Palestri, P.; Osgnach, P.; Selmi, L.

    2013-10-01

    We present a semiclassical model for Tunnel-FET (TFET) devices capable of describing band-to-band tunneling (BtBT) as well as far-from-equilibrium transport of the generated carriers. BtBT generation is implemented as an add-on to an existing multi-subband Monte Carlo (MSMC) transport simulator that also accounts for effects typical of alternative channel materials and high-κ dielectrics. A simple but accurate correction to the calculation of the BtBT generation rate, accounting for carrier confinement in the subbands, is proposed and verified by comparison with a full 2D quantum calculation.

  9. Variational Monte Carlo study of a chiral spin liquid in the extended Heisenberg model on the kagome lattice

    Science.gov (United States)

    Hu, Wen-Jun; Zhu, Wei; Zhang, Yi; Gong, Shoushu; Becca, Federico; Sheng, D. N.

    2015-01-01

    We investigate the extended Heisenberg model on the kagome lattice by using Gutzwiller projected fermionic states and the variational Monte Carlo technique. In particular, when both second- and third-neighbor superexchanges are considered, we find that a gapped spin liquid described by nontrivial magnetic fluxes and long-range chiral-chiral correlations is energetically favored compared to the gapless U(1) Dirac state. Furthermore, the topological Chern number, obtained by integrating the Berry curvature, and the degeneracy of the ground state, obtained by constructing linearly independent states, lead us to identify this flux state as the chiral spin liquid with a C = 1/2 fractionalized Chern number.

  10. Monte Carlo semi-empirical model for Si(Li) x-ray detector: Differences between nominal and fitted parameters

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Pino, N.; Padilla-Cabal, F.; Garcia-Alvarez, J. A.; Vazquez, L.; D' Alessandro, K.; Correa-Alfonso, C. M. [Departamento de Fisica Nuclear, Instituto Superior de Tecnologia y Ciencias Aplicadas (InSTEC) Ave. Salvador Allende y Luaces. Quinta de los Molinos. Habana 10600. A.P. 6163, La Habana (Cuba); Godoy, W.; Maidana, N. L.; Vanin, V. R. [Laboratorio do Acelerador Linear, Instituto de Fisica - Universidade de Sao Paulo Rua do Matao, Travessa R, 187, 05508-900, SP (Brazil)

    2013-05-06

    A detailed characterization of a X-ray Si(Li) detector was performed to obtain the energy dependence of efficiency in the photon energy range of 6.4 - 59.5 keV, which was measured and reproduced by Monte Carlo (MC) simulations. Significant discrepancies between MC and experimental values were found when the manufacturer parameters of the detector were used in the simulation. A complete Computerized Tomography (CT) detector scan allowed to find the correct crystal dimensions and position inside the capsule. The computed efficiencies with the resulting detector model differed with the measured values no more than 10% in most of the energy range.

  11. Extended Monte Carlo Localization Algorithm for Mobile Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A real-world localization system for wireless sensor networks that adapts to mobility and an irregular radio propagation model is considered. Traditional range-based techniques and recent range-free localization schemes are not well suited to localization in mobile sensor networks, while the probabilistic approach of Bayesian filtering with particle-based density representations provides a comprehensive solution to the localization problem. Monte Carlo localization is a Bayesian filtering method that approximates the mobile node's location by a set of weighted particles. In this paper, an enhanced Monte Carlo localization algorithm, Extended Monte Carlo Localization (Ext-MCL), is proposed that is suitable for practical wireless network environments where the radio propagation model is irregular. Simulation results show that the proposal achieves better localization accuracy and a higher number of localizable nodes than previously proposed Monte Carlo localization schemes, not only for the ideal radio model but also for irregular ones.
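
    The basic Monte Carlo localization cycle (predict under a motion model, filter against the observed anchor set, resample) can be sketched as follows. For brevity the sketch uses the ideal disk radio model that the paper argues is unrealistic; the anchor layout, radio range and speed limit are illustrative assumptions, and Ext-MCL's handling of irregular propagation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative setup: anchors with known positions, radio range r, node speed limit v_max
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
r, v_max, n_particles = 6.0, 1.0, 500

def heard_set(pos):
    """Indices of anchors within radio range of a position (ideal disk model)."""
    return set(np.flatnonzero(np.linalg.norm(anchors - pos, axis=1) <= r))

def mcl_step(particles, observed):
    """One MCL step: predict under the motion model, then keep particles
    consistent with the observed anchor set, then resample."""
    step = rng.uniform(-v_max, v_max, size=particles.shape)   # bounded motion since last step
    proposed = np.clip(particles + step, 0.0, 10.0)
    keep = np.array([heard_set(p) == observed for p in proposed])
    survivors = proposed[keep]
    if len(survivors) == 0:                                   # degenerate case: re-seed uniformly
        survivors = rng.uniform(0.0, 10.0, size=(n_particles, 2))
    idx = rng.integers(len(survivors), size=n_particles)      # resample to a fixed count
    return survivors[idx]

true_pos = np.array([3.0, 4.0])
particles = rng.uniform(0.0, 10.0, size=(n_particles, 2))
for _ in range(20):
    true_pos = np.clip(true_pos + rng.uniform(-v_max, v_max, 2), 0.0, 10.0)
    particles = mcl_step(particles, heard_set(true_pos))
print("estimate:", particles.mean(axis=0), " truth:", true_pos)
```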

  12. Short-Term Variability of X-rays from Accreting Neutron Star Vela X-1: II. Monte-Carlo Modeling

    CERN Document Server

    Odaka, Hirokazu; Tanaka, Yasuyuki T; Watanabe, Shin; Takahashi, Tadayuki; Makishima, Kazuo

    2013-01-01

    We develop a Monte Carlo Comptonization model for the X-ray spectrum of accretion-powered pulsars. Simple, spherical, thermal Comptonization models give harder spectra for higher optical depth, while the observational data from Vela X-1 show that the spectra are harder at higher luminosity. This suggests a physical interpretation where the optical depth of the accreting plasma increases with mass accretion rate. We develop a detailed Monte Carlo model of the accretion flow, including the effects of the strong magnetic field ($\sim 10^{12}$ G) both in geometrically constraining the flow into an accretion column, and in reducing the cross section. We treat bulk-motion Comptonization of the infalling material as well as thermal Comptonization. These model spectra can match the observed broad-band {\it Suzaku} data from Vela X-1 over a wide range of mass accretion rates. The model can also explain the so-called "low state", in which the luminosity decreases by an order of magnitude. Here, thermal Comptonization sh...

  13. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition over more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  14. Monte Carlo simulations for plasma physics

    Energy Technology Data Exchange (ETDEWEB)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2000-07-01

    Plasma behaviours are very complicated and the analyses are generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Furthermore, it has been applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which is beyond the scope of conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  15. Quantum Monte Carlo Calculations of Light Nuclei

    CERN Document Server

    Pieper, Steven C

    2007-01-01

    During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.

  16. Improved Monte Carlo Renormalization Group Method

    Science.gov (United States)

    Gupta, R.; Wilson, K. G.; Umrigar, C.

    1985-01-01

    An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.

  17. Monte Carlo methods for particle transport

    CERN Document Server

    Haghighat, Alireza

    2015-01-01

    The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...

  18. Smart detectors for Monte Carlo radiative transfer

    CERN Document Server

    Baes, Maarten

    2008-01-01

    Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...

  19. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It provides a clear overview of variational wave functions and features a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which is developed into a discussion of Monte Carlo methods. The variational technique is described, from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  20. A replica exchange Monte Carlo algorithm for protein folding in the HP model

    Directory of Open Access Journals (Sweden)

    Shmygelska Alena

    2007-09-01

    Full Text Available Abstract Background The ab initio protein folding problem consists of predicting protein tertiary structure from a given amino acid sequence by minimizing an energy function; it is one of the most important and challenging problems in biochemistry, molecular biology and biophysics. The ab initio protein folding problem is computationally challenging and has been shown to be NP-hard even when conformations are restricted to a lattice. In this work, we implement and evaluate the replica exchange Monte Carlo (REMC) method, which has already been applied very successfully to more complex protein models and other optimization problems with complex energy landscapes, in combination with the highly effective pull move neighbourhood, in two widely studied Hydrophobic Polar (HP) lattice models. Results We demonstrate that REMC is highly effective for solving instances of the square (2D) and cubic (3D) HP protein folding problem. When using the pull move neighbourhood, REMC outperforms current state-of-the-art algorithms for most benchmark instances. Additionally, we show that this new algorithm provides a larger ensemble of ground-state structures than the existing state-of-the-art methods. Furthermore, it scales well with sequence length, and it finds significantly better conformations on long biological sequences and sequences with a provably unique ground-state structure, which is believed to be a characteristic of real proteins. We also present evidence that our REMC algorithm can fold sequences which exhibit significant interaction between termini in the hydrophobic core relatively easily. Conclusion We demonstrate that REMC utilizing the pull move
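
    The replica-exchange machinery described in this record (parallel chains at different temperatures with Metropolis-style swaps of configurations between neighbouring temperatures) can be sketched generically as follows. The HP contact energy and pull-move neighbourhood are replaced by a toy one-dimensional landscape and Gaussian local moves so that the sketch stays self-contained; only the exchange logic is the point being illustrated.

```python
import numpy as np

rng = np.random.default_rng(13)

def energy(x):
    """Toy rugged 1-D landscape standing in for the HP-model contact energy."""
    return 0.1 * x**2 + np.sin(3.0 * x) + np.cos(5.0 * x)

def metropolis_move(x, beta, step=0.5):
    """Local move within one replica (an HP pull move would play this role)."""
    x_new = x + rng.normal(0.0, step)
    dE = energy(x_new) - energy(x)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        return x_new
    return x

def remc(betas, n_sweeps=5000):
    """Replica exchange Monte Carlo: independent chains at different temperatures,
    with periodic attempts to swap configurations between neighbouring temperatures."""
    replicas = rng.uniform(-5.0, 5.0, size=len(betas))
    best = (energy(replicas[0]), replicas[0])
    for _ in range(n_sweeps):
        for i, beta in enumerate(betas):
            replicas[i] = metropolis_move(replicas[i], beta)
            e = energy(replicas[i])
            if e < best[0]:
                best = (e, replicas[i])
        # Swap acceptance: min(1, exp((beta_i - beta_j) * (E_i - E_j)))
        for i in range(len(betas) - 1):
            d = (betas[i] - betas[i + 1]) * (energy(replicas[i]) - energy(replicas[i + 1]))
            if rng.random() < np.exp(min(0.0, d)):
                replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
    return best

betas = np.geomspace(0.2, 5.0, 8)   # from high temperature (small beta) to low temperature
print("lowest energy found and its configuration:", remc(betas))
```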